# The Boltzmann/Shannon entropy as a measure of correlation
## Abstract
It is demonstrated that the entropy of statistical mechanics and of information theory, $S(\mathbf{p})=-\sum_i p_i\log p_i$, may be viewed as a measure of correlation. Given a probability distribution on two discrete variables, $p_{ij}$, we define the correlation-destroying transformation $C:p_{ij}\to\pi_{ij}$, which creates a new distribution on those same variables in which no correlation exists between the variables, i.e. $\pi_{ij}=P_iQ_j$. It is then shown that the entropy obeys the relation $S(\mathbf{p})\le S(\pi)=S(\mathbf{P})+S(\mathbf{Q})$, i.e. the entropy is non-decreasing under these correlation-destroying transformations.
The concept of correlation has underlain statistical mechanics from its inception. Maxwell derived his velocity distribution law (Maxwell, J.C., Phil. Soc., 1860) by asserting that such a distribution $\Phi(\vec{v})$ for an ideal gas should obey two properties: (1) the velocity distribution along each axis should be uncorrelated, i.e.
$$\Phi(\vec{v})\,d^3\vec{v}=\bigl(\varphi(v_x)\,dv_x\bigr)\bigl(\varphi(v_y)\,dv_y\bigr)\bigl(\varphi(v_z)\,dv_z\bigr)$$
and (2) the velocity distribution should show no preferred orientation, $\Phi(\vec{v})\,d^3\vec{v}=f(v)\,d^3\vec{v}$, where $v$ is the norm of $\vec{v}$. He showed that these two assumptions lead to the velocity distribution $\Phi(\vec{v})\,d^3\vec{v}=\exp(-\alpha v^2)\,d^3\vec{v}$, where $\alpha$ is a positive constant (later shown by Boltzmann to be equal to $\frac{m}{2kT}$). (This reduces to the more familiar form by writing the expression in polar coordinates, $\Phi(v_r,v_\theta,v_\varphi)\,d^3\vec{v}=\exp(-\alpha v_r^2)\,v_r^2\,dv_r\,dv_\theta\,dv_\varphi$.)
Inspired by the observations of Ochs (Ochs, W., Rep. Math. Phys. 9, 135 (1976)), we would like to show that the Boltzmann/Shannon formula for entropy may be viewed as a measure of correlation, by showing that for a class of transformations which destroy correlations between variables in a probability distribution, the Boltzmann/Shannon formula for the entropy is non-decreasing.
Suppose that we have a set of states, $\{X_{ij}\}$, indexed by their values along two distinct, discrete variables A and B, where $i$ runs over the set of $n$ discrete states of A, and $j$ runs over the set of $m$ discrete states of B. Consider a probability distribution over these states, $p_{ij}$. If A and B are uncorrelated, there exist some $P_i$ and $Q_j$ such that $p_{ij}=P_iQ_j\;\forall\,i,j$. In this case, it is apparent that the entropy obeys the property $S(\mathbf{p})=S(\mathbf{P})+S(\mathbf{Q})$.
However, in general, this will not be true. But, given an arbitrary $`p_{ij}`$, we can define the following transformation, which in effect destroys the correlations between its dependence on A and on B:
$$C:\;p_{ij}\to\pi_{ij}\qquad(1)$$
$$\pi_{ij}=P_iQ_j\qquad(2)$$
$$P_i=\sum_{j=1}^{m}p_{ij}\qquad(3)$$
$$Q_j=\sum_{i=1}^{n}p_{ij}\qquad(4)$$
We assert that the entropy is non-decreasing under such transformations, i.e.
$$S(\mathbf{p})\le S(\pi)\qquad(5)$$
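Before turning to the formal argument, the transformation and the inequality are easy to check numerically. The following is a minimal sketch (not part of the original derivation); the random $3\times 4$ joint distribution and the helper names are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Boltzmann/Shannon entropy S(p) = -sum p log p, with 0 log 0 := 0."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A random joint distribution p_ij on two discrete variables A (n=3) and B (m=4).
p = rng.random((3, 4))
p /= p.sum()

# Correlation-destroying transformation C of Eqs. (1)-(4): pi_ij = P_i Q_j.
P = p.sum(axis=1)          # marginal over B
Q = p.sum(axis=0)          # marginal over A
pi = np.outer(P, Q)

assert entropy(p) <= entropy(pi) + 1e-12                  # Eq. (5)
assert np.isclose(entropy(pi), entropy(P) + entropy(Q))   # additivity of the product form
print(entropy(p), entropy(pi))
```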
To demonstrate this assertion, we need first to demonstrate a fundamental property of the Boltzmann/Shannon entropy formula, the averaging property. Given a set of states $\{Y_k\}_{k=1}^N$, and two probability distributions defined over these states, $G=\{g_k\}_{k=1}^N$ and $H=\{h_k\}_{k=1}^N$, one may construct a third distribution, $U=\{u_k\}$, $u_k=\frac{1}{\alpha+\beta}(\alpha g_k+\beta h_k)$, the weighted average of G and H, where $\alpha$ and $\beta$ are arbitrary non-negative real values. We assert that
$$S(U)\ge\frac{1}{\alpha+\beta}\bigl(\alpha S(G)+\beta S(H)\bigr)\qquad(6)$$
This assertion can be demonstrated by observing that a similar inequality holds term-by-term in the sum. Defining $\sigma(x)=-x\ln x$, the averaging property will hold if
$$\sigma(u_k)\ge\frac{1}{\alpha+\beta}\bigl(\alpha\sigma(g_k)+\beta\sigma(h_k)\bigr)\qquad(7)$$
This property of $\sigma(x)$ follows from its being concave everywhere over the domain of interest, $x\in[0,1]$, i.e. $\sigma''(x)<0$.
Note, too, that a consequence of the averaging property of entropy is that, given a set of different distributions over $\{Y_k\}_{k=1}^N$, $Z^1,Z^2,Z^3,\ldots,Z^K$, and a set of weights $w_k$ with $\sum_k w_k=1$, the entropy of $\overline{Z}=\sum_k w_kZ^k$, the weighted average of all these distributions, obeys the following property:
$$S(\overline{Z})\ge\sum_k w_kS(Z^k)\qquad(8)$$
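As a quick numerical illustration of Eq. (8) (again a sketch rather than part of the argument; the numbers of states and distributions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# K = 5 random distributions Z^k over N = 10 states, and random weights w_k.
Z = rng.random((5, 10))
Z /= Z.sum(axis=1, keepdims=True)
w = rng.random(5)
w /= w.sum()

Zbar = w @ Z   # weighted average distribution
assert entropy(Zbar) + 1e-12 >= np.dot(w, [entropy(z) for z in Z])   # Eq. (8)
```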
Returning to the fundamental assertion, equation 5, this may be demonstrated by recalling a fundamental property of the Boltzmann/Shannon entropy formula, one which Shannon (C. Shannon and W. Weaver, The Mathematical Theory of Communication, Urbana: Univ. of Ill. Press, 1949) took not as a derived property but rather as an axiomatic property that an entropy functional must have: if we decompose the distribution $p_{ij}$ into two stages, where initially we distribute among the states of A by the distribution $P_i$, and next we distribute among the states of B by the distribution $\zeta_j^{(i)}$, such that $p_{ij}=P_i\zeta_j^{(i)}$, the entropy obeys the formula
$$S(\mathbf{p})=S(\mathbf{P})+\sum_i P_iS(\zeta^{(i)})\qquad(9)$$
In the general case, each of the distributions $\zeta^{(i)}$ will be different, i.e. the variables A and B are correlated. The distribution $Q_j$ above represents a weighted average of the $\zeta^{(i)}$'s, weighted by $P_i$, i.e.
$$Q_j=\sum_k P_k\zeta_j^{(k)}\qquad(10)$$
Hence, the Shannon axiom with the averaging property of the entropy leads to the desired assertion:
$$S(\mathbf{p})=S(\mathbf{P})+\sum_k P_kS(\zeta^{(k)})\qquad(11)$$
$$S(\mathbf{p})\le S(\mathbf{P})+S(\mathbf{Q})\qquad(12)$$
$$S(\mathbf{p})\le S(\pi)\qquad(13)$$
Jaynes (Jaynes, Phys. Rev. 106, 620 (1957)) showed how Shannon's theory could be merged with statistical mechanics, leading to the conceptualization of the thermodynamic principle of maximum entropy as a principle expressing that the distribution of energy among the microstates should be that distribution which is least-biased, given the constraint of a specified temperature.
Viewing entropy $-k\sum_i p_i\ln p_i$ as a measure of (lack of) correlation provides a new twist to Jaynes' perspective. One may say that the equilibrium distribution is that distribution which is least-correlated given the constraint(s). The Second Law of Thermodynamics may be rephrased to state that correlations are highly unlikely to arise spontaneously, and that the natural course of evolution of a system is one in which correlations diminish.
Thinking about entropy as a measure of correlation exposes a key implicit assumption in both Boltzmann's theory and Shannon's theory: the individual (micro)states are assumed to be uncorrelated. This harks back to Laplace's balls-in-urns, where the probability of finding a ball in a given urn is generally uncorrelated with the probability of finding a ball in a different urn. If the probabilities of occupation of the states are intrinsically correlated, the maximum entropy distribution cannot be viewed as the correct, least-biased distribution. In communication theory and statistical mechanics this assumption may in certain circumstances be valid, but it breaks down severely when we attempt to take the limit to a continuous set of states. If one constructs a discrete set of bins from an intrinsically continuous variable, the degree of bin-bin correlation grows as these bins become steadily finer. This leads to the question 'what is the correct measure of entropy for a continuous distribution?'.
Dill (Dill, K.A., J. Biol. Chem. 272, 701 (1997)) points out that these issues of correlation and additivity pervade our thinking about the fundamental aspects of chemical and biological phenomena. He highlights some of the pitfalls which one may encounter in settings for which an assumption of non-correlation may not be valid.
# Stabilizing Textures in 3+1 Dimensions with Semilocality
## I Introduction
Texture-like topological defects where the topological charge emerges by integrating over the whole physical space (not just the boundary) have played an important role in both particle physics and cosmology. Typical examples are the skyrmion which offers a useful effective model for the description of the nucleon and the global texture where an instability towards collapse of the scalar field configuration has been used to construct an appealing mechanism for structure formation in the universe.
A typical feature of this class of scalar field configurations are instabilities towards field rescalings which usually lead to collapse and subsequent decay to the vacuum via a localized highly energetic event in space-time. The property of collapse is a general feature of global field configurations in 3+1 dimensions and was first described by Derrick. This feature is particularly useful in a cosmological setup because it provides a natural decay mechanism which can prevent the dominance of the energy density of the universe by texture-like defects. At the same time, this decay mechanism leads to a high energy event in space-time that can provide the primordial fluctuations for structure formation.
In the particle physics context where a topological defect predicted by a theory can only be observed in accelerator experiments if it is at least metastable, the above instability is an unwanted feature. A usual approach to remedy this feature has been to consider effective models where non-renormalizable higher powers of scalar field derivatives are put by hand. This has been the case in QCD where chiral symmetry breaking is often described using the low energy ’pion dynamics’ model. Texture-like configurations occur here and as Skyrme first pointed out they may be identified with the nucleons (Skyrmions). Here textures are stabilized by non-renormalizable higher derivative terms in the quantum effective action. However no one has ever found such higher derivative terms with the right sign to stabilize the Skyrmion.
An alternative approach to stabilize texture-like configurations is the introduction of gauge fields which can be shown to induce pressure terms in the scalar field Lagrangian thus balancing the effects of Derrick-type collapse. In the case of complete gauging of the vacuum manifold however, it is possible for the texture configuration to relax to the vacuum manifold by a continuous gauge transformation that can remove all the gradient energy (the only source of field energy for textures) from the non-singular texture-like configuration. This mechanism of decay via gauge fields is not realized in singular defects where the topological charge emerges from the boundaries. In these defects, singularities, where the scalar field is 0, can not be removed by continuous gauge transformations.
Recent progress in semilocal defects has indicated that physically interesting models can emerge by a partial gauging of the vacuum manifold of field theories. This partial gauging (semilocality) can lead to new classes of stable defect solutions that can persist as metastable configurations in more realistic models where the gauging of the vacuum is complete but remains non-uniform. A typical example is the semilocal string whose embedding in the standard electroweak model has led to the discovery of a class of metastable 2+1 dimensional field configurations in this model.
In the context of texture-like configurations, the concept of semilocality can lead to an interesting mechanism for stabilization. In fact the semilocal gauge fields are unable to lead to relaxation of the global field gradient energy because they cannot act on the whole target space. They do however induce pressure terms in the Lagrangian that tend to resist the collapse induced by the scalar sector. Therefore they have the features required for the construction of stable texture-like configurations in renormalizable models without the ad hoc use of higher powers of derivatives.
The goal of this paper is to demonstrate the stabilization induced by semilocal gauge fields in the context of global textures that form during the symmetry breaking $O(4)\to O(3)$. In the semilocal case discussed below the $O(3)$ subgroup of the global $O(4)$ symmetry is gauged.
## II Virial Theorem
Consider first a texture field configuration in 3+1 dimensions emerging in the context of a field theory describing a global symmetry breaking $O(4)\to O(3)$. This is described by a four component scalar field $\vec{Q}=(Q_1,Q_2,Q_3,Q_4)$ whose dynamics is described by the potential
$$V(\vec{Q})=\frac{\lambda}{4}\left(\vec{Q}^2-F^2\right)^2\qquad(1)$$
The initial condition ansatz
$$\vec{Q}=(\sin\chi\sin\theta\sin\phi,\;\sin\chi\sin\theta\cos\phi,\;\sin\chi\cos\theta,\;\cos\chi)\qquad(2)$$
with $\chi(r)$ varying between 0 and $\pi$ as $r$ goes from 0 to infinity and $\theta$, $\phi$ spherical polar coordinates, describes a configuration that winds once around the vacuum $M_0=S^3$ as the physical space is covered. Since $\pi_3(M_0)\neq 1$ this is a nontrivial configuration which is topologically distinct from the vacuum. The energy of this configuration is of the form
$$E=\int_{-\infty}^{+\infty}\left[\frac{1}{2}(\vec{\nabla}\vec{Q})^2+V(\vec{Q})\right]\equiv T+V\qquad(3)$$
where we have allowed for possible small potential energy excitations during time evolution. A rescaling of the spatial coordinate $r\to\alpha r$ of the field $\vec{Q}(r)$ leads to $E_\alpha=\alpha^{-1}T+\alpha^{-3}V$, which is monotonic in $\alpha$ and leads to collapse, highly localized energy and eventual unwinding of the configuration. These highly energetic and localized events in spacetime have provided a physically motivated mechanism for the generation of primordial fluctuations that gave rise to structure in the universe.
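The monotonicity is elementary to verify symbolically. The sketch below (an illustration only, treating $T$ and $V$ as arbitrary positive constants) confirms that $E_\alpha$ has no stationary point at finite $\alpha$:

```python
import sympy as sp

alpha, T, V = sp.symbols('alpha T V', positive=True)
E = T/alpha + V/alpha**3      # E_alpha = alpha^-1 T + alpha^-3 V

dE = sp.diff(E, alpha)        # -T/alpha^2 - 3 V/alpha^4, strictly negative
print(sp.simplify(dE))
print(sp.solve(sp.Eq(dE, 0), alpha))   # [] : no extremum, Derrick collapse
```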
The possible stabilization of these collapsing configurations could lead to a cosmological overabundance and a cosmological problem similar to that of monopoles, requiring inflation to be resolved. At the same time however it could lead to observational effects in particle physics laboratories. There are at least two ways to stabilize a collapsing texture in 3+1 dimensions. The first is well known and includes the introduction of higher powers of derivative terms in the energy functional. These terms scale like $\alpha^p$ ($p>0$) under a rescaling and can make the energy minimization possible, thus leading to stable skyrmions. Stable Hopfions (solitons with non-zero Hopf topological charge) have also been constructed recently by the same method.
The second method of stabilization is less known (but see ref for applications in 2+1 dimensions) and can be achieved by introducing gauge fields that partially cover the vacuum manifold. In particular, the simplest Lagrangian that accepts semilocal texture configurations in 3+1 dimensions can be written as follows:
$$\mathcal{L}=-\frac{1}{4}G_{ij}^aG_{ij}^a-\frac{1}{2}D_iQ_aD_iQ_a-\frac{1}{2}D_iQ_4D_iQ_4-V\qquad(4)$$
where ($`i,j=1,2,3`$),
$$V=-\frac{1}{2}\mu^2(Q_a^2+Q_4^2)+\frac{1}{8}\lambda(Q_a^2+Q_4^2)^2\qquad(5)$$
and $\vec{Q}\equiv(Q_a,Q_4)$ is a four component scalar field ($a=1,2,3$) with vacuum $\vec{Q}^2=F^2$, where $F^2\equiv\frac{2\mu^2}{\lambda}$. Here we are using the notation of Ref. , i.e.
$$G_{ij}^a\equiv\partial_iW_j^a-\partial_jW_i^a+e\epsilon_{abc}W_i^bW_j^c\qquad(6)$$
and
$$D_iQ_a=\partial_iQ_a+e\epsilon_{abc}W_i^bQ_c\qquad(7)$$
The field ansatz that describes a semilocal texture may be written as
$$Q_a=r_aQ(r),\qquad Q_4=P(r),\qquad W_i^a=\epsilon_{iab}r_bW(r)\qquad(8)$$
The asymptotic behavior of the field functions $Q(r)$, $W(r)$ and $P(r)$ can be obtained by demanding finite energy, single-valuedness of the fields and non-trivial topology. Thus we obtain the following asymptotics:
$$rQ(r)\to 0,\qquad er^2W(r)\to 0,\qquad P(r)\to -F\qquad(9)$$
for $r\to\infty$, and
$$rQ(r)\to 0,\qquad er^2W(r)\to 0,\qquad P(r)\to F\qquad(10)$$
for $r\to 0$. We now rescale the coordinate $r$ as $r\to r/eF$ and define the rescaled fields
$$q(r)\equiv\frac{rQ(r)}{F},\qquad p(r)\equiv\frac{P(r)}{F},\qquad w(r)\equiv er^2W(r)\qquad(11)$$
Using these rescalings, the energy of the semilocal texture ansatz (8) may be written as
$$E=4\pi e^{-1}F\int_0^{\infty}dr\,\mathcal{E}\qquad(12)$$
where the energy density $\mathcal{E}$ may be expressed in terms of the rescaled fields as
$$\mathcal{E}=w'^2+\frac{2w^2}{r^2}\left(1-w+\frac{w^2}{4}\right)+\frac{1}{2}r^2\left(q'^2+p'^2\right)+q^2(1-w)^2+\frac{1}{8}r^2\beta\left(q^2+p^2-1\right)^2\qquad(13)$$
where $\beta\equiv\frac{\lambda}{e^2}$. By varying the energy density with respect to the field functions $w$, $q$ and $p$ it is straightforward to obtain the field equations for these functions. These may be written as
$$w''-\frac{2w}{r^2}+\frac{3w^2}{r^2}-\frac{w^3}{r^2}+q^2(1-w)=0$$
$$(r^2q')'-2q(1-w)^2-\frac{\beta}{2}r^2q(p^2+q^2-1)=0$$
$$(r^2p')'-\frac{\beta}{2}r^2p(p^2+q^2-1)=0\qquad(14)$$
with boundary conditions
$$r\to 0:\quad w\to 0,\;q\to 0,\;p\to 1$$
$$r\to\infty:\quad w\to 0,\;q\to 0,\;p\to -1\qquad(15)$$
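Whether a static solution of (14) with the boundary conditions (15) exists is addressed below; a minimal numerical sketch of how one might search for it with a standard boundary-value solver is given here (the cutoff radii, grid, initial guess and the choice $\beta=1$ are illustrative assumptions, not values from the text):

```python
import numpy as np
from scipy.integrate import solve_bvp

beta = 1.0                # beta = lambda/e^2, illustrative value
r0, rmax = 1e-3, 25.0     # cutoffs standing in for r -> 0 and r -> infinity

def rhs(r, y):
    # y = (w, w', q, q', p, p'); second derivatives read off from Eqs. (14)
    w, dw, q, dq, p, dp = y
    d2w = 2*w/r**2 - 3*w**2/r**2 + w**3/r**2 - q**2*(1 - w)
    d2q = (2*q*(1 - w)**2 + 0.5*beta*r**2*q*(p**2 + q**2 - 1) - 2*r*dq) / r**2
    d2p = (0.5*beta*r**2*p*(p**2 + q**2 - 1) - 2*r*dp) / r**2
    return np.vstack([dw, d2w, dq, d2q, dp, d2p])

def bc(ya, yb):
    # Eqs. (15): w, q -> 0 at both ends; p -> 1 at r=0 and p -> -1 at infinity
    return np.array([ya[0], ya[2], ya[4] - 1.0,
                     yb[0], yb[2], yb[4] + 1.0])

r = np.linspace(r0, rmax, 2000)
y0 = np.zeros((6, r.size))
y0[4] = np.cos(np.pi * r / rmax)        # smooth guess interpolating p: 1 -> -1
y0[2] = 0.5 * np.sin(np.pi * r / rmax)  # small q bump vanishing at both ends

sol = solve_bvp(rhs, bc, r, y0, max_nodes=100000, tol=1e-6)
print(sol.status, sol.message)
```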
The stability towards collapse of the semilocal texture that emerges as a solution of the system (14) with the boundary conditions (15) can be studied by examining the behavior of the energy after a rescaling of the spatial coordinate $r$ to $\alpha r$. For global textures, this rescaling leads to a monotonic increase of the energy with $\alpha$ (Derrick's theorem) indicating instability towards collapse. It will be shown that this instability is not present in the field configuration of the semilocal texture due to the outward pressure induced by the gauge field, which has the same effect as the higher powers of derivatives present in the skyrmion.
The rescaling $r\to\alpha r$ modifies the total energy as follows
$$E\to\alpha E_1+\alpha^{-1}E_2+\alpha^{-3}E_3\qquad(16)$$
where
$$E_1=A\int_0^{\infty}dr\left[w'^2+\frac{2w^2}{r^2}\left(1-w+\frac{w^2}{4}\right)\right]$$
$$E_2=A\int_0^{\infty}dr\left[\frac{1}{2}r^2\left(q'^2+p'^2\right)+q^2(1-w)^2\right]$$
$$E_3=A\int_0^{\infty}dr\left[\frac{1}{8}r^2\beta\left(q^2+p^2-1\right)^2\right]\qquad(17)$$
and $A\equiv 4\pi e^{-1}F$. The expression (16) has an extremum with respect to $\alpha$ which is found by demanding $\frac{\delta E}{\delta\alpha}\big|_{\alpha=1}=0$. This leads to a virial theorem connecting the energy terms $E_1$, $E_2$ and $E_3$ as
$$E_1=E_2+3E_3$$
(18)
By considering the second variation of the energy with respect to the rescaling parameter $`\alpha `$ it is easy to see that the extremum of the energy corresponding to the semilocal texture solution is indeed a minimum. Indeed
$$\frac{\delta^2E}{\delta\alpha^2}\Big|_{\alpha=1}=2E_2+12E_3>0\qquad(19)$$
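Both the virial relation (18) and the positivity (19) follow from elementary calculus on Eq. (16); a short symbolic check (purely illustrative):

```python
import sympy as sp

a, E2, E3 = sp.symbols('alpha E2 E3', positive=True)
E1 = E2 + 3*E3                    # impose the virial theorem, Eq. (18)
E = a*E1 + E2/a + E3/a**3         # rescaled energy, Eq. (16)

assert sp.diff(E, a).subs(a, 1) == 0        # extremum at alpha = 1
print(sp.diff(E, a, 2).subs(a, 1))          # 2*E2 + 12*E3 > 0, Eq. (19)
```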
## III Conclusion
We therefore conclude that the semilocal texture field configuration is a local minimum of the energy functional with respect to coordinate rescalings, in contrast to its global counterpart. In addition it is impossible to unwind this configuration to the vacuum by a gauge transformation because only the $O(3)$ subgroup of the full symmetry $O(4)$ is gauged, and therefore it is impossible to 'rotate' all four components of the scalar field $\vec{Q}$ by a gauge transformation. Therefore an initial field configuration with non-trivial $\pi_3(S^3)$ topology can neither unwind continuously to the vacuum (due to non-trivial topology and insufficient gauge freedom) nor collapse (due to the derived virial theorem). Thus the energy of the configuration will remain trapped and localized either in the form of a static configuration (if a solution to the static system (14) exists) or in the form of a localized time-dependent breather-type configuration. In both cases there can be interesting consequences for both particle physics and cosmology.
Interesting extensions of this brief report include the study of the cosmological effects of semilocal textures and the possibility of embedding these objects in realistic extensions of the standard model. It is also important to perform a detailed numerical construction of these objects. A detailed study including these and other issues is currently in progress.
## IV Acknowledgements
I thank T. Tomaras for useful discussions.
# Cost of Generalised HMC Algorithms for Free Field Theory
## 1 GENERALISED HMC
The work reported here extends results presented previously , to which the reader is referred for details. We begin by recalling that the generalised HMC algorithm is constructed from two kinds of update for a set of fields $`\varphi `$ and their conjugate momenta $`\pi `$.
Molecular Dynamics Monte Carlo: This consists of three parts: (1) *MD:* an approximate integration of Hamilton's equations on phase space which is exactly area-preserving and reversible; $U(\tau):(\varphi,\pi)\to(\varphi',\pi')$ where $\det U=1$ and $U(-\tau)=U(\tau)^{-1}$. (2) A momentum flip $F:\pi\to-\pi$. (3) *MC:* a Metropolis accept/reject test.
Partial Momentum Refreshment: This mixes the Gaussian-distributed momenta $`\pi `$ with Gaussian noise $`\xi `$:
$$\left(\begin{array}{c}\pi'\\ \xi'\end{array}\right)=\left(\begin{array}{cc}\cos\vartheta&\sin\vartheta\\ -\sin\vartheta&\cos\vartheta\end{array}\right)F\left(\begin{array}{c}\pi\\ \xi\end{array}\right)$$
The HMC algorithm is the special case where $`\vartheta =\frac{\pi }{2}`$. The L2MC/Kramers algorithm corresponds to choosing arbitrary $`\vartheta `$ but MDMC trajectories of a single leapfrog integration step.
### 1.1 Tunable Parameters
The GHMC algorithm has three free parameters, the trajectory length $`\tau `$, the momentum mixing angle $`\vartheta `$, and the integration step size $`\delta \tau `$. These may be chosen arbitrarily without affecting the validity of the method, except for some special values for which the algorithm ceases to be ergodic. We may adjust these parameters to minimise the cost of a Monte Carlo computation, and the main goal of this work is to carry out this optimisation procedure for free field theory.
Horowitz pointed out that the L2MC algorithm has the advantage of a higher acceptance rate than HMC for a given step size, but he did not take into account that it also requires a higher acceptance rate to obtain the same autocorrelations, because the trajectory is reversed at each MC rejection. It is not obvious a priori which of these effects dominates.
The parameters $`\tau `$ and $`\vartheta `$ may be chosen independently from some distributions $`P_R(\tau )`$ and $`P_M(\vartheta )`$ for each trajectory. In the following we shall consider various choices for the momentum refreshment distribution $`P_M`$, but we shall always take a fixed value for $`\vartheta `$.
## 2 FREE FIELD THEORY
Consider a system of harmonic oscillators $\{\varphi_p\}$ labelled by the momenta $p$ of a lattice of volume $V$. The Hamiltonian on phase space is $H=\frac{1}{2}\sum_p\left(\pi_p^2+\omega_p^2\varphi_p^2\right)$. This describes free field theory in momentum space if the frequencies $\omega_p$ are chosen as
$$\omega_p^2=m^2+4\sum_{\mu=1}^{d}\sin^2\frac{\pi p_\mu}{L}.\qquad(1)$$
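A compact sketch of one GHMC update for this diagonal system is shown below (illustrative only: the lattice size, mass, step size, trajectory length and mixing angle are arbitrary choices, and $d=1$ is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

L, m = 32, 0.5
omega2 = m**2 + 4*np.sin(np.pi*np.arange(L)/L)**2   # Eq. (1) with d = 1

def hamiltonian(phi, pi):
    return 0.5*np.sum(pi**2 + omega2*phi**2)

def leapfrog(phi, pi, dtau, nsteps):
    """Area-preserving, reversible approximate integration U(tau)."""
    pi = pi - 0.5*dtau*omega2*phi
    for _ in range(nsteps):
        phi = phi + dtau*pi
        pi = pi - dtau*omega2*phi
    return phi, pi + 0.5*dtau*omega2*phi   # turn last full kick into a half kick

def ghmc_update(phi, pi, dtau, nsteps, theta):
    # MDMC: MD trajectory, momentum flip F, Metropolis accept/reject
    phi_new, pi_new = leapfrog(phi, pi, dtau, nsteps)
    pi_new = -pi_new                                    # flip F
    dH = hamiltonian(phi_new, pi_new) - hamiltonian(phi, pi)
    if rng.random() < min(1.0, np.exp(-dH)):
        phi, pi = phi_new, pi_new
    # partial momentum refreshment: the rotation above acting on (F pi, xi)
    xi = rng.standard_normal(L)
    pi = -np.cos(theta)*pi + np.sin(theta)*xi
    return phi, pi

phi, pi = np.zeros(L), rng.standard_normal(L)
for _ in range(1000):
    phi, pi = ghmc_update(phi, pi, dtau=0.1, nsteps=10, theta=np.pi/2)
```

With $\vartheta=\pi/2$ the update reduces to standard HMC, while a small angle combined with a single leapfrog step reproduces the L2MC/Kramers behaviour discussed below.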
## 3 AUTOCORRELATION FUNCTIONS
### 3.1 Simple Markov Processes
Let $(\varphi_1,\varphi_2,\ldots,\varphi_N)$ be a sequence of field configurations generated by an equilibrated ergodic Markov process, and let $\langle\Omega(\varphi)\rangle$ denote the expectation value of some connected operator $\Omega$. We may define an *unbiased estimator* $\overline{\Omega}$ over the finite sequence of configurations by $\overline{\Omega}\equiv\frac{1}{N}\sum_{t=1}^N\Omega(\varphi_t)$. As usual, we define $C_\Omega(\ell)\equiv\frac{\langle\Omega(\varphi_{t+\ell})\Omega(\varphi_t)\rangle}{\langle\Omega(\varphi)^2\rangle}$ as the *autocorrelation function* for $\Omega$. The variance of the estimator $\overline{\Omega}$ is
$$\left\langle\overline{\Omega}^2\right\rangle=\{1+2A_\Omega\}\frac{\langle\Omega(\varphi)^2\rangle}{N}\left[1+O\left(\frac{N_{\text{exp}}}{N}\right)\right],$$
where $A_\Omega\equiv\sum_{\ell=1}^{\infty}C_\Omega(\ell)$ is the *integrated autocorrelation function* for the operator $\Omega$ and $N_{\text{exp}}$ is the *exponential autocorrelation time*. This result tells us that on average $1+2A_\Omega$ correlated measurements are needed to reduce the variance by the same amount as a single truly independent measurement.
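In practice $C_\Omega(\ell)$ and $A_\Omega$ are estimated from the measured time series; a simple windowed estimator might look as follows (a sketch only; the truncation heuristic is a common choice, not one prescribed here):

```python
import numpy as np

def integrated_autocorrelation(x, wmax=None):
    """Estimate A = sum_{l>=1} C(l) from a time series x of measurements."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                  # connected part
    n = len(x)
    var = np.dot(x, x) / n
    wmax = wmax or n // 10            # truncation window (heuristic)
    C = np.array([np.dot(x[:n - l], x[l:]) / ((n - l) * var)
                  for l in range(1, wmax)])
    neg = np.where(C < 0)[0]          # stop summing once the estimate turns noisy
    stop = neg[0] if neg.size else C.size
    return C[:stop].sum()
```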
### 3.2 Autocorrelation Functions for Quadratic Operators
In order to carry out these calculations we make the simplifying assumption that the acceptance probability $\min(1,e^{-\delta H})$ for each trajectory may be replaced by its value averaged over phase space, $P_{\text{acc}}\equiv\left\langle\min(1,e^{-\delta H})\right\rangle$; we neglect correlations between successive trajectories. Including such correlations leads to seemingly intractable complications. It is not obvious that our assumption corresponds to any systematic approximation except, of course, that it is valid when $\delta H=0$.
Details of the calculation of $`P_{\text{acc}}`$ for leapfrog integration are published elsewhere .
## 4 COMPARISON OF COSTS
If we make the reasonable assumption that the cost of the computation is proportional to the total fictitious (MD) time for which we have to integrate Hamilton's equations, then the cost $\mathcal{C}$ per independent configuration is proportional to $(1+2A_\Omega)\overline{\tau}/\delta\tau$, with $\overline{\tau}$ denoting the average length of a trajectory. The optimal trajectory length is obtained by minimising the cost, that is by choosing $\overline{\tau}$ so as to satisfy $d\mathcal{C}/d\overline{\tau}=d\mathcal{C}/d\vartheta=0$.
We wish to compare the performance of the HMC, L2MC and GHMC algorithms for one dimensional free field theory. To do this we compare the cost of generating a statistically independent measurement of the magnetic susceptibility $`M_c^2`$, choosing the optimal values for the angle $`\vartheta `$ and the average trajectory length $`\overline{\tau }`$. We can minimise the cost with respect to $`\vartheta `$ without having to specify the form of the refresh distribution.
The next step is to minimise the cost with respect to the average trajectory length $`\xi =\omega \overline{\tau }`$. Strictly speaking we should note that the acceptance probability $`\overline{P}_{\text{acc}}`$ is a function of $`\overline{\tau }`$, but to a good approximation we may assume that $`P_{\text{acc}}`$ depends only upon the integration step size $`\delta \tau `$ *except* in the case of very short trajectories.
### 4.1 Exponentially Distributed Trajectory Lengths
To proceed further we need to choose a specific form for the momentum refresh distribution. In this section we will present results for the case of exponentially distributed trajectory lengths, $P_R(\tau)=re^{-r\tau}$, where the parameter $r$ is just the inverse average trajectory length, $r=1/\overline{\tau}$.
The cost at the point $(c_{\text{opt}}\equiv\cos\vartheta_{\text{opt}},\,\xi_{\text{opt}})$ is

$$\mathcal{C}_{M_c^2}^{\text{exp}}(\delta\tau,\vartheta_{\text{opt}},\xi_{\text{opt}})=\frac{\left(\begin{array}{c}(7\overline{P}_{\text{acc}}-3\overline{P}_{\text{acc}}^2-4)\,\xi_{\text{opt}}^2c_{\text{opt}}^3+1-c_{\text{opt}}^2\\ +(2\overline{P}_{\text{acc}}+1)\,c_{\text{opt}}+(2\overline{P}_{\text{acc}}-1)\,c_{\text{opt}}^3\\ +(\overline{P}_{\text{acc}}^2-5\overline{P}_{\text{acc}}+4)\,c_{\text{opt}}\xi_{\text{opt}}^2\\ +(4-\overline{P}_{\text{acc}})\,\xi_{\text{opt}}^2+(4+3\overline{P}_{\text{acc}})\,c_{\text{opt}}^2\xi_{\text{opt}}^2\end{array}\right)}{\overline{P}_{\text{acc}}\,\delta\tau\,\omega\,\xi_{\text{opt}}\left[(\overline{P}_{\text{acc}}-1)\,c_{\text{opt}}^3+1-c_{\text{opt}}^2-(1+\overline{P}_{\text{acc}})\,c_{\text{opt}}\right]}.$$
This solution is a function of $`\delta \tau `$ and $`\overline{P}_{\text{acc}}`$ which are not independent variables, and using the results for $`\overline{P}_{\text{acc}}(\tau ,\delta \tau )`$ we can compute the cost as a function of $`\overline{P}_{\text{acc}}`$ as shown in Figure 1.
#### 4.1.1 HMC
The Hybrid Monte Carlo algorithm corresponds to setting $\vartheta=\pi/2$, and we find that the optimal trajectory length in this case is $\xi_{\text{opt}}=1/\sqrt{4-\overline{P}_{\text{acc}}}$, corresponding to a cost
$$\mathcal{C}_{\text{opt}}=\frac{2\sqrt{4-\overline{P}_{\text{acc}}}}{\overline{P}_{\text{acc}}\,\delta\tau\,\omega}.$$
This is also shown in Figure 1.
### 4.2 Fixed Length Trajectories
For fixed length trajectories we shall only analyse the case of L2MC for which the trajectory length $`\xi =\omega \delta \tau `$. In this case the value of $`\vartheta _{\text{opt}}`$ and the corresponding cost are also plotted in Figure 1. From this figure it is clear that the minimum cost occurs for $`\overline{P}_{\text{acc}}`$ very close to unity, where the scaling variable $`x=V\delta \tau ^6`$ is very small. We may then express $`c_{\text{opt}}`$ and $`\overline{P}_{\text{acc}}`$ as power series in $`x`$, keeping only the first few terms. From these relations we find that the minimum cost for L2MC is
$$\mathcal{C}_{\text{opt}}=\left(\frac{10}{\pi}\right)^{1/4}V^{5/4}m^{-3/2}.$$
This result tells us that not only does the tuned L2MC algorithm have a dynamical critical exponent $z=3/2$, but also it has a volume dependence of exactly the same form as HMC . We may understand why this behaviour occurs, rather than the naive $V^{7/6}m^{-1}$, by the following simple argument.
If $\overline{P}_{\text{acc}}<1$ then the system will carry out a random walk backwards and forwards along a trajectory, because the momentum, and thus the direction of travel, must be reversed upon a Metropolis rejection. A simple-minded analysis indicates that the average time between rejections must be $O(1/m)$ in order to achieve $z=1$. This time is approximately
$$\sum_{n=0}^{\infty}\overline{P}_{\text{acc}}^n(1-\overline{P}_{\text{acc}})\,n\,\delta\tau=\frac{\overline{P}_{\text{acc}}\,\delta\tau}{1-\overline{P}_{\text{acc}}}=\frac{1}{m}.$$
For small $\delta\tau$ we have $1-\overline{P}_{\text{acc}}=\mathrm{erf}\sqrt{kV\delta\tau^6}\propto\sqrt{V\delta\tau^6}$, where $k$ is a constant, and hence we must scale $\delta\tau$ so as to keep $V\delta\tau^4/m^2$ fixed. Since the L2MC algorithm has a naive dynamical critical exponent $z=1$, this means that the cost should vary as $V(V\delta\tau^4m^{-2})^{-1/4}/m\delta\tau=V^{5/4}m^{-3/2}$.
# Description of local multiplicity fluctuations and genuine multiparticle correlations
## 1 Introduction
The study of the multiplicity distribution of the produced hadrons, along with the analysis of the correlations among them, stands at the frontier of investigations in the area of multiparticle dynamics. The multiplicity distribution plays a fundamental role in extracting first information on the underlying particle production mechanism, while the correlations give details of the dynamics. Whereas the full-multiplicity distribution is a global characteristic and is influenced by conservation laws, the multiplicity distributions in restricted phase space domains contributing to the correlations are local characteristics and have the advantage of being much less affected by global conservation laws.
In the last decade, the study of multiplicity distributions in limited regions (bins) has attracted high interest in view of the search for local dynamical fluctuations of an underlying self-similar (fractal) structure, the so-called intermittency phenomenon. For review, see Refs. . This phenomenon is seen in various reactions; however, many questions about intermittency and, in particular, its origin are still open and further investigations are needed .
Studying local fluctuations, one must remember that the fluctuation of a given number of particles, $q$, receives contributions from genuine lower-order, $p<q$, correlations. To extract signals of these $p$-order correlations, one needs to use advanced statistical techniques such as normalized factorial cumulant moments (cumulants) . However, this method requires measurements of high statistics (and of high precision), lack of which leads to a smearing out of high-order correlations .
A real opportunity to extract genuine multiparticle correlations came with the vast amount of multihadronic events collected now at LEP1. The statistics available allows one to decrease significantly the measurement errors and to reveal relatively small effects. The first reports representing the studies of genuine correlations in e<sup>+</sup>e<sup>-</sup> annihilation have just appeared [footnote 2: Earlier, using correlation (strip) integrals to reduce statistical errors, multiparticle genuine correlations have been searched for in hadron-hadron and lepton-hadron interactions. It is worth noting that, apart from problems arising due to the various possibilities in defining a proper topology of particles and a distance between them (see e.g. ), a rather important difficulty in interpreting results could come from a translation invariance breaking of the many-particle distributions . This will lead to different results on moments/cumulants according to the variable used, because different variables are sensitive to different hadroproduction mechanisms . For example, in the high-order genuine correlations obtained in Ref. the study is performed in the four-momentum difference squared, $Q^2$, dependent on Bose-Einstein correlations, whereas in the pseudorapidity analysis no correlations higher than two-particle ones were found, since (pseudo)rapidity seems to be a more 'natural' variable to search for jet formation than, e.g., for Bose-Einstein correlations.].
DELPHI has analysed correlations in one- and two-dimensional angular bins of jet cones, while OPAL has performed its study in three dimensional phase space using conventional kinematic variables such as rapidity, transverse momentum and azimuthal angle. DELPHI has shown an existence of the correlations up to third order, and OPAL with increased statistics has, for the first time in e<sup>+</sup>e<sup>-</sup> annihilation, calculated multidimensional cumulants and has established sensitive genuine correlations even at fifth order. Note that multidimensional analysis carried out in hadronic interactions shows that the cumulants are consistent with zero at $`q>3`$ there, while in heavy-ion collisions non-zero cumulants have been observed at second order only .
The genuine correlations measured in e<sup>+</sup>e<sup>-</sup> annihilations are found to exceed considerably the QCD analytic description and to indicate significant deviations from Monte Carlo (MC) predictions. A smallness of genuine correlations predicted by perturbative QCD is seen also in cumulant-to-factorial moment studies, even when higher orders of the analytical approximation are used . These findings, along with the deviations obtained by L3 in its recent detailed MC analysis of local angular fluctuations , show particle bunching in small bins to be a sensitive tool to find differences between parton distributions treated by QCD vs. hadron ones detected in experiments. The study of bunchings seems to be critical for the choice of a more convenient basis for a suitable approach to multiple hadroproduction.
In this paper we compare the intermittency and correlation results from OPAL with predictions of various parametrizations, or regularities. We consider the negative binomial distribution, its modified and generalized versions, the log-normal distribution, and the pure-birth stochastic production mechanism. For the first time we examine these parametrizations with high-order genuine correlations. The incorporation of the multiplicity distribution in the study of correlations provides more advanced information by using various approximations and models. In particular, this study gives more understanding about the structure of multiparticle correlations, e.g. their relation to two-particle correlations.
All the above-listed parametrizations have been well known in multiparticle high-energy physics for a long time and are used to describe the shape of the multiplicity distribution, either in full phase space or in its bins. Essentially all these parametrizations are sorts of branching models and show an intermittent behavior. This feature becomes particularly important in hadroproduction studies in e<sup>+</sup>e<sup>-</sup> annihilations, where parton showers play a significant role and the hypothesis of local parton-hadron duality is applied to the hadronization process .
## 2 Normalized factorial moments and cumulants <br>from various parametrizations
There is a variety of models which describe particle production as a branching process, see e.g. . The main prediction of these models is a suitable parametrization for the multiplicity distribution. Further, more details of the underlying dynamics come from investigations of the factorial moments and cumulants of the multiplicity distribution given .
One of the most popular parametrization used to describe the data for full and limited phase-space (basically, rapidity) bins is the negative binomial (NB) distribution, see Refs. for review on the subject and its historical development.
The NB regularity depends on two parameters: the average multiplicity $\langle n\rangle$ and the so-called aggregation coefficient $1/k$, which influences the shape of the distribution. For the NB distribution, the normalized factorial moments and cumulants are governed by the parameter $1/k$ and can be derived from the following formulae,
$$F_q=F_{q-1}\left(1+\frac{q-1}{k}\right)\qquad(1)$$
and
$$K_q=(q-1)!\,k^{1-q},\qquad(2)$$
respectively.
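For illustration, Eqs. (1) and (2) are easy to cross-check against a sampled NB distribution; the sketch below is illustrative only (the parameter values and sample size are arbitrary):

```python
import numpy as np
from scipy.stats import nbinom

k, nbar = 2.0, 10.0          # NB parameters; 1/k is the aggregation coefficient
prob = k / (k + nbar)        # scipy convention: mean = k(1-p)/p = nbar
n = nbinom.rvs(k, prob, size=1_000_000, random_state=0)

def fact_moment(n, q):
    """Normalized factorial moment F_q = <n(n-1)...(n-q+1)> / <n>^q."""
    prod = np.ones_like(n, dtype=float)
    for i in range(q):
        prod *= (n - i)
    return prod.mean() / n.mean()**q

for q in range(2, 6):
    exact = np.prod([1 + i/k for i in range(1, q)])   # iterate Eq. (1) from F_1 = 1
    print(q, round(fact_moment(n, q), 3), round(float(exact), 3))
```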
Fractal properties of the NB distribution have been studied elsewhere . To demonstrate the fractality of the NB distribution, it has been proposed to use its generalization or to study individual sources obtained by consideration of different topologies, e.g. number of jets . A good fit by the NB parametrization should yield $k$ independent of the order $q$, but limited statistics may also stabilize the $q$-dependence and hide the (unknown) real distribution . The bin-size dependence of the $k$ parameter, expected from the bin-size dependence of the factorial moments and cumulants, Eqs. (1) and (2), is a consequence of the unstable nature of the NB distribution, i.e. the convolution of the distributions in two neighbouring bins does not give the same type of distribution . In other words, the bin-size dependence of $k$ reflects the fact that the distribution in one bin depends on that in the other bin .
In e<sup>+</sup>e<sup>-</sup> annihilations at the $\mathrm{Z}^0$ peak and lower energies , the NB parametrization was found to fail consistently in describing the multiplicity distribution, either in rapidity bins or for the full phase space. Using e<sup>+</sup>e<sup>-</sup> results on intermittency , it was also shown that this parametrization does not reproduce the large fluctuation patterns, while it is appropriate for phase space bins in which the fluctuations are sufficiently small. Such an effect has been observed in (pseudo)rapidity studies in other types of collisions too [footnote 3: Comparing the formation of dense groups of particles in e<sup>+</sup>e<sup>-</sup> and in hadronic (or nuclear) interactions, one finds a noticeable difference in the structure of the fluctuations, which is isotropic (self-similar) in the former case and anisotropic (self-affine) in the latter one . This is expected to reflect different dynamics of the hadroproduction process in these two types of collisions.]. The likely reason is the above-noted instability of the NB distribution. Indeed, the NB law provides an acceptable description in the central regions of the multiplicity distribution, away from the tails . In this region, the distribution measured is mostly flat and, therefore, less sensitive to instability effects.
Another reason is that the NB regularity underestimates the high-multiplicity tail, which gives the main contribution to the fluctuations and which is influenced by instabilities. For high multiplicities the NB distribution transforms to the stable $\Gamma$-distribution . This type of distribution was found to be the most adequate to describe the multiplicity distribution of the OPAL data. This is in contradiction to the results obtained in nuclear collisions , where the $\Gamma$-distribution was found to be significantly inconsistent with the measurements: it underestimates the low-multiplicity parts of the experimental multiplicity distributions in different rapidity bins, while it overestimates the high-multiplicity tails. Again in contrast to e<sup>+</sup>e<sup>-</sup> data, the NB regularity is found to be the best one to describe small fluctuations in the multiplicity distribution in nuclear data, and large fluctuations are well reproduced by two-particle correlations [footnote 4: See the preceding footnote.].
Another popular choice for the parametrization of the multiplicity distribution is the log-normal (LN) or Gaussian distribution . This type of regularity of final state distribution can be obtained by assuming a scale invariant stochastic branching process to be the basis of the multiparticle production mechanism. The LN distribution is defined by two parameters, the average and dispersion. To describe the data a third parameter has been introduced to take into account an asymmetry in the shape of the full phase space distribution measured.
In e<sup>+</sup>e<sup>-</sup> annihilation this model has been found to describe successfully the data for the full rapidity window . For restricted bins the best agreement between the LN parametrization and the data has been obtained for very small bins in the central rapidity region, while for the intermediate size bins the deviation observed is assumed to arise from perturbative (multi-jet) effects . Compared to the NB, the LN regularity is found to give much better description which leads to understanding the multiparticle production process as a scale invariant stochastic branching process.
The fluctuations have been studied in a model of this type in Ref. , within the so-called “$`\alpha `$-model” and it was found that in this particular case the normalized factorial moments obey the recurrence relation,
$$\ln F_q=\frac{q(q-1)}{2}\ln F_2.\qquad(3)$$
Strictly speaking, this formula connects the standard normalized LN moments $C_q=\langle n^q\rangle/\langle n\rangle^q$ rather than the factorial moments $F_q$. The difference between these two types of moments becomes negligible at large multiplicities, which is not the case for small bins. Thus, results based on Eq. (3) with $F_2$ defined by data can deviate from the true LN regularity predictions, particularly at small bins .
Recently, the modified negative binomial (MNB) regularity has been introduced to correct deviation between the NB parametrization predictions and the e<sup>+</sup>e<sup>-</sup> and p$`\overline{\mathrm{p}}`$ data . One finds for the normalized factorial cumulants of the MNB distribution,
$$K_q^{-}=(q-1)!\,k^{1-q}\,\frac{r^q-\Delta^q}{(r-\Delta)^q}.\qquad(4)$$
Here, $r=\Delta+\langle n\rangle/k$ and the superscript minus indicates that this law is applied to negatively charged particles. The MNB regularity reduces to the NB one if $\Delta=0$, cf. Eq. (2).
The MNB parametrization has been found to give an accurate description of the full phase-space multiplicity distributions in e<sup>+</sup>e<sup>-</sup> annihilation measured from a few GeV up to LEP2 energies and in lepton-nucleon scattering data in the wide energy range . A similar energy dependence of the parameter $`k`$ has been obtained in these two types of collisions. Recently, a simple extension of the MNB law has been found to describe the charged particle multiplicity distributions in symmetric (pseudo)rapidity bins as well .
In contrast to the NB, the MNB parametrization is shown to reproduce fairly well the factorial moments and the cumulants of the full like-sign, e.g. negatively charged, particle phase space from e<sup>+</sup>e<sup>-</sup> data at energies ranging from 14 to 91.2 GeV . The fits of the multiplicity distributions, the moments and cumulants give rise to $k>0$, $\Delta<0$ [footnote 5: Negative values of the parameter $\Delta$ are interpreted as the probability $-\Delta$ of an intermediate neutral cluster to decay into charged or neutral hadron pairs . Positive $\Delta$ values are also acceptable , but in this case $k$ is the maximum number of sources at some initial stage of the cascading, changing from one event to another, in contrast with the fixed $k$ value corresponding to the case of $\Delta<0$ .] and $0<r\le|\Delta|<1$ [footnote 6: Nevertheless, at the LEP1.5 energy, $\sqrt{s}\approx 133$ GeV, an inverse inequality $|\Delta|<r\approx 0.91$ can be found. An increase of the parameter $r$ with increasing energy is allowed but this complicates the particle-production scenario leading to the MNB parametrization .], which, according to Eq. (4), leads to the cumulants being negative at even values of the order $q$ and positive at odd ones. This fact has been utilized to explain the oscillations of the ratio of the cumulants to factorial moments as a function of $q$ .
To obtain the quantities for all charged particles one uses the fact that the number of charged particles produced in e<sup>+</sup>e<sup>-</sup> collisions is twice that of negative ones. Then, the normalized factorial moments and cumulants are given by
$$K_2=K_2^{-}+\frac{1}{2k(r-\Delta)},\qquad K_3=K_3^{-}+\frac{3K_2^{-}}{2k(r-\Delta)},\qquad\text{etc.}\qquad(5)$$
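A small numerical sketch of Eqs. (4) and (5) (the values $r=0.91$, $\Delta=-0.71$ are those quoted in the comparison below; $k=2$ is an arbitrary illustration) makes the sign structure of the like-sign cumulants explicit:

```python
from math import factorial

def K_minus(q, k, r, Delta):
    """MNB like-sign cumulants, Eq. (4)."""
    return factorial(q - 1) * k**(1 - q) * (r**q - Delta**q) / (r - Delta)**q

k, r, Delta = 2.0, 0.91, -0.71
Km = {q: K_minus(q, k, r, Delta) for q in range(2, 7)}
print(Km)   # with r > |Delta| all orders stay positive;
            # for |Delta| > r the even orders turn negative

# all-charged-particle cumulants from Eq. (5)
K2 = Km[2] + 1 / (2*k*(r - Delta))
K3 = Km[3] + 3*Km[2] / (2*k*(r - Delta))
print(K2, K3)
```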
The stochastic nature of the NB and MNB laws allows one to generalize them by introducing a stochastic equation of the pure birth (PB) process with immigration . Then, the NB and the MNB distributions can be derived from this birth process under the appropriate initial conditions; namely, the birth process with no particles in the initial stage leads to the NB law, while the MNB distribution results from the birth process with an initial binomial distribution .
For the PB stochastic process one finds
$$F_q=\Gamma(q)\,x^{1-q}L_{q-1}^{(1)}(-x)=1+\frac{q(q-1)}{x}+\frac{q(q-1)^2(q-2)}{2!\,x^2}+\frac{q(q-1)^2(q-2)^2(q-3)}{3!\,x^3}+\cdots=1+\frac{q(q-1)}{x}+q\sum_{i=1}^{q-2}\frac{q-i-1}{(i+1)!\,x^{i+1}}\prod_{j=1}^{i}(q-j)^2\qquad(6)$$
for the normalized factorial moments , and
$$K_q=q!\,x^{1-q}\qquad(7)$$
for the normalized cumulants. Here, $`L_q^{(1)}(a)`$ is the associated Laguerre polynomial .
The PB stochastic model has been applied to describe the multiplicity distribution and its moments in the entire (pseudo)rapidity range and in its bins in p$\overline{\mathrm{p}}$ collisions at c.m. energies ranging from 11.5 to 900 GeV. Considering high-order moments, it was shown that the results of this approach are close to the NB predictions, revealing the stochastic nature of particle production and, in particular, of the NB model. Further analysis , which includes an intermittency study extending from e<sup>+</sup>e<sup>-</sup> to nuclear collisions [footnote 7: See footnote 2.], has shown that the data are well reproduced by this model. Note that the predictions of this model are systematically below the NB calculations.
The last form of the multiplicity distribution considered in this paper is the recently introduced generalized negative binomial distribution, called the HNB distribution due to the type of special H-function used to derive it . This distribution represents an extension of the NB regularity to the Poisson-transformed generalized $\Gamma$-distribution by incorporating some perturbative QCD characteristics. Varying the shape parameter $k>0$, the scale parameter $\lambda>0$ and the scaling exponent $\mu\neq 0$, one obtains special and limiting cases of the HNB distribution. The Poisson and the LN distributions are special cases of the HNB distribution in the limit of $k\to\infty$ with $\mu=1$ and $\mu\to 0$, respectively. The HNB regularity converges to the NB distribution at $\mu=1$.
Applied to high-energy data , the HNB distribution has been found to agree with the data, depending on the parameter $\mu$ values being positive or negative [footnote 8: For $\mu\lesssim 0$ the following reparametrization of the HNB density has been suggested : $(k,\lambda,\mu)\to(p,\sigma,\alpha)$ with $p=1/\sqrt{k}$, $\sigma=-p/\mu$ and $\alpha=\ln\lambda$. With these parameters, e.g., one gets the LN distribution when $p=0$.] ($|\mu|>1$) or approaching zero, for different types of reactions. The HNB regularity with $\mu>1$ and $k=1$ (the Weibull law) has been found to describe the data successfully in inelastic pp and p$\overline{\mathrm{p}}$ reactions up to ISR energies and in deep-inelastic e<sup>+</sup>p scattering at HERA energies, in the entire rapidity phase space as well as in its restricted bins . The multiplicity distribution from UA5 data of non-diffractive p$\overline{\mathrm{p}}$ collisions at $\sqrt{s}=900$ GeV has been shown to be fitted reasonably well by the LN ($\mu\to 0$, $k\to\infty$) HNB limit in the full pseudorapidity window and in its symmetric bins .
In e<sup>+</sup>e<sup>-</sup> annihilations, it was found that the HNB describes the data below the top PETRA energies for $\mu<-1$ and $k=1$ , whereas at high energies the HNB description favours $\mu\to 0$ and large $k$ . The latter, as well as the above-mentioned success of the LN-limit HNB distribution in reproducing the non-diffractive UA5 data, is ascribed to the LN regularity of the multiplicity distribution [footnote 9: It is worth mentioning the remarkably different shapes of the multiplicity distribution in these two cases, namely, a heavy-tailed form with a narrow peak in p$\overline{\mathrm{p}}$ collisions vs. a bell-like shape in e<sup>+</sup>e<sup>-</sup> annihilation. Nevertheless, recently it was found that the multiplicity distribution in both types of interactions shows the same behaviour expected from the log-KNO scaling .] obtained earlier and recently explained by a renormalization group approach . For the LEP1 data, $\mu$-values lying between $-1.2$ and $-0.6$ and $k\approx 20\div 130$ have been obtained. Note that while the error range of the parameter $\mu$ is found to be small, so that $\mu$ is above $-2.2$ and below zero, the errors for $k$ allow it to vary between $O(10)$ and $O(10^4)$ . The situation does not change when the energy increases above the $\mathrm{Z}^0$ peak or when the multiplicity distribution in rapidity bins instead of that in full phase space is considered . It is interesting that for central rapidity bins, $\mu$ is found to be positive, $0<\mu<1$, increasing with enlarging bin size, while $k$ decreases and is of $O(10)$. A recent analysis of the full phase space of the uds-quark jet of the OPAL data has shown the error bars for the $k$-parameter to lie between 20 and 420, with the central value at 54. The parameter $\mu$ has been found to be about $-0.5$.
One gets for the HNB-defined normalized factorial moments,
$$F_q=\frac{\Gamma(k+q/\mu)}{\Gamma(k)}\left[\frac{\Gamma(k)}{\Gamma(k+1/\mu)}\right]^q,\qquad(8)$$
approaching asymptotically, for large $q$, the $\Gamma$-function of the rescaled rank $q/\mu$.
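A short sketch evaluating Eq. (8), using gammaln for numerical stability (the values $k=50$, $\mu=-0.5$ are of the order quoted above for LEP1 and are chosen purely for illustration):

```python
import numpy as np
from scipy.special import gammaln

def F_hnb(q, k, mu):
    """Normalized factorial moments of the HNB distribution, Eq. (8)."""
    return np.exp(gammaln(k + q/mu) - gammaln(k)
                  + q*(gammaln(k) - gammaln(k + 1/mu)))

k, mu = 50.0, -0.5
print([round(float(F_hnb(q, k, mu)), 4) for q in range(2, 6)])
```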
## 3 Comparison with OPAL measurements and discussion
In Figs. 1 and 2 we show, respectively, the normalized factorial moments and the normalized factorial cumulants, measured by OPAL in e<sup>+</sup>e<sup>-</sup> annihilation and compared to a few parametrizations (lines) and to the MC (shaded areas). The moments are represented in one-, two- and three dimensions of the phase space of rapidity, transverse momentum and azimuthal angle calculated with respect to the sphericity axis.
For the NB law we used the second-order factorial moments or cumulants to compute $k$ and then the factorial moments and cumulants of order $q\ge 3$, according to Eqs. (1) and (2). The resulting quantities are shown by the dashed lines.
From Fig. 1 one can see that in general the NB regularity underestimates the measured factorial moments. The deviation is more pronounced in one and two dimensions for number of bins $M>5$ (in one projection) and $q>3$. Note that these conclusions coincide with those from the analogous investigations of earlier LEP1 results on intermittency . Better agreement between the parametrization and the data is found for low-order ($q=3$) moments or for those in three dimensions, the cases in which the NB predictions are within the MC results. Nevertheless, even in three dimensions, the NB values are below the data points at large $M$ (small bin size) and high orders.
The situation becomes clearer when one turns to the cumulants, namely the genuine correlations contributing to the fluctuations, Fig. 2. From the one- and two-dimensional cumulants one can see that not only for high $q$'s, but even in the case of $q=3$, the correlations given by the NB model are weaker than those from the data. The same is observed for the $p_T$, $y\times p_T$ and $\Phi\times p_T$ projections (not shown). The NB cumulants lie far away from the measured ones in comparison with the MC predictions, which are much nearer to the data. Moreover, it is seen that the discrepancy between the data and the NB results does not begin at some intermediate $M$, as in the case of the fluctuations, but is visible even at smaller $M$-values (larger bin sizes). This observation agrees with the inadequacy of the NB regularity to fit the full phase-space multiplicity distribution, i.e. at $M=1$. The NB parametrization describes the data reasonably well in three dimensions, although some deviations are seen for $M>125$.
Using Eq. (2) we have estimated the parameter $`k`$ as a function of $`M`$ at different $`q`$. The parameter is found to decrease with increasing $`M`$ and to depend weakly on $`q`$. The values of $`k`$ lie between $`0.2÷2.8`$ and $`5÷8`$ at two extreme $`M`$ values and vary at fixed $`M`$ with a change in $`q`$ and with the dimensionality of subspaces. The higher is the dimension of the subspace, the smaller is the lower bound of the $`k`$-range. These lower bound values are almost independent of $`q`$. Conversely, the values of upper bound on $`k`$ show their $`q`$-dependence. They are about 8 at $`q=2`$ and about 5 at $`q=3`$ and 4 regardless of the subspace dimension. According to the expectations , the observed values of $`k`$ and its $`q`$-dependence do not seem to be related to the truncation effect but rather to small cascades. On the other hand, taking into account a long enough cascade at the e<sup>+</sup>e<sup>-</sup> collision energies considered here, this conclusion confirms that NB encounters difficulties in a reasonable and consistent description of the measurements. It is interesting to note that the values of $`k`$ obtained are close to those found in the MNB-type analysis of the multiplicity distributions in restricted rapidity intervals in e<sup>+</sup>e<sup>-</sup> annihilations at the $`\mathrm{Z}^0`$ peak .
Contrary to the NB, the LN regularity overestimates the data regardless of the dimensionality or the type of variable used. The dotted lines in Figs. 1 and 2 show how the LN predictions compare with the measurements; the cumulants are calculated from the factorial moments using their interrelations . The smaller the bin size, the larger the difference. The LN parametrization describes the data quite well for order $`q=3`$ only.
These findings are in agreement with earlier studies of LN fits to factorial moments in e<sup>+</sup>e<sup>-</sup> annihilation . The LN distribution overestimates high multiplicities<sup>10</sup><sup>10</sup>10 See, however, ., which leads to an overestimate of the fluctuations and genuine correlations, as shown here. In contrast to the studies of the full-multiplicity parametrization , the deviations are found for all bin sizes and not only for intermediate ones. This is in contrast with the above-mentioned multi-jet (perturbative) effects <sup>11</sup><sup>11</sup>11A better description of the multiplicity distribution in the case of a smaller number of jets in e<sup>+</sup>e<sup>-</sup> annihilation, namely for two-jet events, compared to the inclusive sample, has also been obtained by OPAL using the NB regularity . and indicates a significant contribution of the non-perturbative stage dynamics, i.e. soft hadronization, to the formation of fluctuations and correlations. This agrees with recent theoretical studies . Violations of the Gaussian law of Eq. (3) have already been observed in nuclear and hadronic interactions.
The predictions of another parametrization, the MNB regularity, are shown in Figs. 1 and 2 by the solid lines. The cumulants are calculated using Eqs. (4) and (5), while the factorial moments are derived from their relationships with the cumulants . The parameters $`r`$ and $`\mathrm{\Delta }`$ have been fixed at the values $`r=0.91`$ and $`\mathrm{\Delta }=0.71`$, the values found to best describe at least the third-order cumulants. The only parameter depending on the bin size is $`k`$, extracted from $`K_2`$. The value of the $`\mathrm{\Delta }`$ parameter is close to that found in multiplicity studies , while the parameter $`r`$ satisfies $`r>|\mathrm{\Delta }|`$, in contrast to the $`r\leq |\mathrm{\Delta }|`$ inequality obtained from the analysis of the full multiplicity distributions. From the Figures one can conclude that, in general, with the parameters obtained, the MNB regularity describes the data well, although it underestimates the data in small bins and the parameters $`|\mathrm{\Delta }|`$ and $`r`$ have an inverse hierarchy.
That the value of $`r`$ exceeds the $`\mathrm{\Delta }`$-value is expected if one applies the MNB regularity not to negative-particle distributions but to all-charged-particle ones<sup>12</sup><sup>12</sup>12See, however, footnote 5., see e.g. the DELPHI publication . In this case two effects contribute: the number of sources increases with the width of the phase-space bin, while a neutral-cluster decay gives zero, one or two particles hitting the given bin <sup>13</sup><sup>13</sup>13It is interesting to note that, being limited to like-charged particles, a study of particle bunching is less dependent on correlations induced by charge conservation (and partly by resonance production), in addition to the above-mentioned advantage of such a study being less affected by the energy-momentum constraints . . The latter effect is taken into account by implementing an additional parameter . This is not the case when the corrected formulae (5) are used; therefore the change of the inequality between $`r`$ and $`|\mathrm{\Delta }|`$ has a physical meaning to be investigated.
The factorial moments and cumulants of the multiplicity regularity given by the PB process are represented in Figs. 1 and 2 by the dashed-dotted lines. The parameter $`x`$ is extracted from the second-order cumulants $`K_2`$ and defines all the higher-order moments and cumulants, given by Eqs. (2) and (7), respectively. For both quantities the PB predictions are seen to lie below those of the parametrizations considered above. In all cases the PB calculations underestimate the data. This difference is in contrast to an earlier lower-energy e<sup>+</sup>e<sup>-</sup> parametrization of the factorial moments in the rapidity subspace , where the PB process was found to explain the data apart from underestimating the higher-order moments.
No curves predicted by the HNB regularity are shown in the Figures, since we would like to estimate the regions of the parameters $`\mu `$ and $`k`$ of this approach. A combined analysis of the factorial moments and cumulants, the latter being calculated as combinations of the factorial moments , shows that, assuming $`k`$ to be positive, the $`\mu `$-parameter can be either positive or negative. The factorial moments and cumulants are found to be sign-changing functions of $`k`$ for negative $`\mu `$, in particular at $`-1<\mu <0`$, so that one can find more than one $`k`$-region which satisfies the measurements<sup>14</sup><sup>14</sup>14Our observation contradicts the property of the factorial moments and cumulants shown to oscillate around zero at $`|\mu |>1`$ and not, e.g., at $`\mu >1`$. This disagreement can be assigned to the different regions of the parameter $`k`$, found to be large ($`k\to \infty `$) in the case of the full-multiplicity distribution studies while having finite values in our investigation (vide infra). . In this case we take the largest $`k`$-value, above which no sign-changing behaviour is seen and the calculation fits most of the data. For $`\mu >0`$ the factorial moments and cumulants decrease with increasing $`k`$.
At fixed $`k`$, the absolute value of the parameter $`\mu `$ decreases with the number $`M`$ of bins. The parameter is found to be limited to the interval $`0.3\lesssim |\mu |\lesssim 1.8`$ when one considers the quantities under study at order $`q=2,3`$. For $`q>3`$, one faces problems reaching the highest measured values of the moments and cumulants and, in the case of the highest, $`q=5`$, cumulants, fitting their lowest (negative) values. This narrows the interval of $`\mu `$ down to $`0.6<|\mu |<1.6`$. The values of the shape parameter $`k`$ depend on $`\mu `$, but are found to be $`\lesssim 30`$. The smallest $`k`$ is 1.2 for $`\mu <0`$ and 0.1 for $`\mu >0`$. The larger the value of $`|\mu |`$, the smaller the interval of $`k`$: at $`|\mu |>1.3`$ the values of $`k`$ are only a few units, $`2`$–$`5`$, while for $`0.5<|\mu |<1.0`$ they lie between 2 and 20. The parameter $`k`$ depends slightly on the order $`q`$, increasing with $`q`$ for negative $`\mu `$ and decreasing for positive $`\mu `$. It also depends on the number of bins, being a decreasing function of $`M`$. Note that these two properties are similar to those observed here for the NB regularity.
Comparing the results obtained to the HNB studies of the LEP data and, particularly, to that of the uds-quark jets (see also Sect. 2), one can see similarities as well as important differences. As in those studies, we find that the parameter $`\mu `$ can be positive as well as negative and does not exceed 2 in absolute value. Moreover, this parameter shows the same behaviour with bin size, and $`k`$ tends to take large values . The main difference from the HNB studies is that the region $`\mu \to 0`$ is excluded by our investigation, so that the LN law is not a suitable one. This conclusion agrees with the OPAL observation and the above discrepancy between the data and the LN predictions. It is also worth noticing that (i) negative $`\mu `$’s can also be used to describe the multiplicity distributions in restricted bins, in addition to the positive ones found recently , and (ii) the values of the $`k`$-parameter are less than 30 and do not tend to infinity. All this shows that, to describe the hadroproduction process correctly, one needs a more complicated scenario than those leading to the regularities discussed here, even generalized to the HNB case. One could not, e.g., confine $`\mu `$ to the interval $`0<\mu <1`$, as would be expected from our findings that the LN regularity ($`\mu \to 0`$) overestimates the measured fluctuations and correlations while the NB law ($`\mu =1`$) underestimates them.
## 4 Summary and Conclusions
To summarize, various regularities of the multiplicity distribution of charged particles are studied in restricted phase-space bins of e<sup>+</sup>e<sup>-</sup> annihilation into hadrons at the $`\mathrm{Z}^0`$ peak. The study is based on recent high-statistics results on multidimensional local fluctuations and genuine correlations, obtained with the multihadronic OPAL sample by means of normalized factorial moments and cumulants. Such parametrizations as the log-normal and negative binomial distributions, modified and generalized versions of the negative binomial law, and the generalized stochastic birth process with immigration are considered. For the first time these parametrizations, among the most common in the field of multiparticle production, are confronted with genuine high-order correlations.
All the parametrizations are found to give a reasonably good description of the low-order fluctuations, while they show deviations for the high-order fluctuations and correlations, especially at small resolutions. Some discrepancy can arise from a bin-size dependence of the measured multiplicity distribution and from its truncation. However, in our treatment the influence of these effects is minimized, since the models are based on the measured second-order factorial moments and cumulants, which carry most of the information given by the multiplicity distribution. Moreover, the effects mentioned arise mostly from low statistics, which is not the case for the data considered here, even at higher orders. Note also that even with low-statistics data, a simultaneous analysis of multiplicity distributions in different bin sizes and the corresponding factorial moments, carried out in e<sup>+</sup>e<sup>-</sup> annihilation and in nuclear reactions , does not show any significant influence of the finite statistics on the results.
From the study presented, one concludes that genuine high-order ($`q>3`$) correlations have to be taken into account when the hadroproduction process is modelled, in particular by a multiplicative law for particle distributions. Indeed, since all the parametrizations used are essentially based on the average multiplicity and the two-particle correlations, the discrepancies between the predictions and the measurements indicate the multiparticle character of the bunching of hadrons. This could be considered the reason why all the regularities give a good description of fluctuations and correlations at order $`q=3`$. Our conclusion confirms the OPAL result of Ref. , on which the present study is based, which shows the important contribution of many-particle correlations to the dynamical fluctuations via the decomposition of the factorial moments into lower-order cumulants. This finding is also in agreement with the DELPHI observation that the measured angular cumulants are not reproduced by the small genuine correlations given by perturbative QCD.
The self-similar nature of multihadron production is another issue raised by the above study. All the regularities derive from particle-production processes of a stochastic nature. Their capability to show the intermittent behaviour seen in the data can therefore be attributed to their branching, self-similar nature. The better agreement between the regularities and the measurements found in three dimensions, where QCD cascading is expected to be fully developed, stresses the essential self-similarity of the particle-production mechanism. However, the discrepancies obtained show that a suitable hadroproduction model has to be more sophisticated than those underlying the parametrizations considered in the present paper.
## Acknowledgements
I would like to thank my colleagues from the OPAL Collaboration, and particularly those from the Tel Aviv team, for fruitful discussions and help. Comments on the manuscript and/or relevant communications from G. Alexander, G. Bella, M. Biyajima, S. Chekanov, I. Dremin, J. Grunhaus, S. Hegyi, W. Kittel, G. Lafferty, S. Manjavidze, O. Smirnova and O. Tchikilev are gratefully acknowledged. Thanks go to J. Zhou for providing me with his Ph.D. Thesis and to M. Groys for his assistance.
# Current Development of the Photo-Ionization Code Cloudy
## 1. Introduction
This poster centers on the development of Cloudy, a large-scale code designed to compute the spectrum of gas in photo-ionization or collisional balance. Such plasma is far from equilibrium, and its conditions are set by the balance of a host of micro-physical processes. The development of Cloudy is a three-pronged effort requiring advances in the underlying atomic data base, the numerical and computational methods used in the simulation, and culminating in the application to astronomical problems. These three steps are strongly interwoven.
In recent years, Cloudy has undergone a major upgrade of its atomic data. A large part of this effort was concentrated on improving the photo-ionization cross sections. Since then the code has been transformed from Fortran 77 to ANSI C, and work has begun to completely recode the model atoms used to calculate line emissivities. They will be organized in such a way that each iso-electronic sequence has a common code appropriate for all members of that sequence. This makes validating the code easier and also facilitates future upgrades of the atomic data. Work on the hydrogen sequence is complete and work on the helium sequence is underway. A more detailed account of the current status of the code can be found in Section 2. The grain model in Cloudy is also currently undergoing a major upgrade, which is described in more detail in Section 3.
## 2. Improvements in Cloudy 94
Cloudy has undergone a major transformation in the past year relative to the last release, Cloudy 90. The most important differences are listed below. A first release of Cloudy 94 is currently available from the Cloudy web-site (for details see Section 4).
* The code is now strict ANSI/ISO 89 C. As a result Cloudy is now exceptionally GNU gcc and Linux friendly. After this version is released, the development version will move to C++ as gcc evolves toward the ANSI/ISO C++ standard and the C++ standard template library matures.
* All hydrogenic species H i through Zn xxx, and their respective ions are treated with a common model atom that uses a single code base. This atom reproduces accurate hydrogenic emissivity to within the uncertainties in the atomic data. The 30 hydrogenic ions can have up to 400 levels.
* The continuum now extends down to 10<sup>-8</sup> Ryd ($`\lambda \approx 10`$ m; since 1 Ryd corresponds to 912 Å, $`10^{-8}`$ Ryd corresponds to a wavelength of about 9 m). This is needed because the continuum must extend to the energy of the 400 – 399 transition of hydrogen.
* Previous versions used simple approximations for the hydrogenic ionization and level balance at temperatures too low to compute NLTE departure coefficients. The new version determines level populations directly at low temperatures rather than departure coefficients, so low-temperature predictions are as valid as the high-temperature results.
* Two additional sets of NLTE stellar atmospheres are available – the CoStar grid of wind-blanketed O-stars, and the Rauch low-metallicity grid of white dwarf atmospheres.
* The optimization algorithm PHYMIR is included. This allows optimization runs to be executed much more efficiently on parallel UNIX computers by calculating individual models simultaneously on different processors (see the sketch after this list). This can drastically reduce the amount of wallclock time needed, depending on the number of free parameters.
* The ionization/thermal kernel has been totally rewritten to incorporate all the lessons learned from known convergence problems. As a result C94 is more stable than C90, with far better convergence properties.
* All previous versions only considered ionization stages that could be produced by the incident continuum. This limit has been lifted, so collisional or coronal equilibrium models can be computed with very soft incident continua.
* Much of the code is now double precision. As a result the code will work for a broader range of densities than before. Densities well below 10<sup>-5</sup> cm<sup>-3</sup> or above 10<sup>17</sup> cm<sup>-3</sup> can be computed without underflow or overflow.
* The assert command has been introduced. This tells the code to verify that its predictions agree (within a stated uncertainty) with a known result. The test cases make extensive use of this feature, which provides an automatic way to validate the code. As of now, the entire test suite of standard models is recomputed and verified every single night.
* All large storage items are dynamically allocated at run time, taking only the needed memory. As a result, in its default state, C94 actually takes less memory than C90. It also executes slightly faster than C90.
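As an aside on the PHYMIR item above: the parallelization idea is simple to sketch. The fragment below is purely illustrative Python (the real PHYMIR lives inside the C code, and every name here is hypothetical):

```python
from concurrent.futures import ProcessPoolExecutor

def run_model(params):
    """Stand-in for one full photo-ionization model returning a chi^2;
    in reality each call would be a complete Cloudy run."""
    a, b = params
    return (a - 1.0) ** 2 + 10.0 * (b + 0.5) ** 2  # toy objective

def evaluate_generation(candidates, workers=4):
    # The PHYMIR-style gain: the candidate models of one optimization step
    # are independent, so they can run on several processors at once.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_model, candidates))

if __name__ == "__main__":
    candidates = [(1.0 + da, -0.5 + db)
                  for da in (-0.1, 0.0, 0.1) for db in (-0.1, 0.0, 0.1)]
    scores = evaluate_generation(candidates)
    print(candidates[scores.index(min(scores))])
```

The wallclock saving follows directly: with $`N`$ free processors, the $`N`$ models of one optimization step finish in roughly the time of a single model.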
## 3. The Grain Model
The current grain model was introduced to Cloudy in 1990 to facilitate more accurate modeling of the Orion nebula (for a detailed description see Baldwin et al., 1991, ApJ 374, 580). The model has since undergone some minor revisions but remained largely the same. Recently, our knowledge of grains has been greatly advanced by the results from the ISO mission. In view of these rapid developments we have undertaken a major upgrade of the grain model in Cloudy. The two main aims are to make the code more flexible and to make the modeling results more realistic.
In particular, in the current model the grain opacities for a handful of grain species are hard-wired into the code. Furthermore, only a single equilibrium temperature is calculated for each species. This is inappropriate for a grain size distribution, since small grains will be hotter than large grains: grain opacities depend quite strongly on grain size, as shown in Figure 1. To improve the model, the following changes are being implemented:
* We will include a spherical Mie code in Cloudy. The optical constants needed to run the code will be read from a separate file. This allows greater freedom in the choice of grain species. Files with optical constants for a range of materials will be included in the Cloudy distribution; however, users can also supply their own optical constants for a completely new grain type.
* We will introduce mixing laws to the code. This will allow the user to define grains which are mixtures of different materials. Cloudy will then calculate the appropriate opacities by combining the optical constants of these grain types. This will allow the user to simulate aggregate or ‘fluffy’ grains.
* The absorption and scattering opacities will be calculated by Cloudy for arbitrary grain size distributions. This will give the user considerably more freedom. Currently only a standard ISM and a truncated Orion size distribution are included in the code.
* The size distribution will be split into many small bins, and an equilibrium temperature will be calculated for each bin separately. This allows non-equilibrium heating to be treated and more realistic grain emission spectra to be calculated. First tests show that under realistic conditions this can make large differences in the flux (at least a factor of two) in the Wien tail of the grain emission, while the total flux emitted by the grains remains virtually unchanged.
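The last item amounts to solving one radiative-equilibrium balance per size bin. A minimal sketch of that balance, assuming the bin-averaged absorption efficiency $`Q_{abs}`$ is already known (e.g. from the Mie code); this is illustrative Python, not the actual Cloudy routine:

```python
import numpy as np

def planck(nu, T):
    # Planck function B_nu in SI units
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * nu / (k * T)
    with np.errstate(over="ignore"):
        return 2.0 * h * nu**3 / c**2 / np.expm1(x)

def equilibrium_temperature(nu, J_nu, Q_abs):
    """Solve int Q_abs J_nu dnu = int Q_abs B_nu(T) dnu for T by bisection;
    one such call per grain-size bin gives the size-dependent temperatures."""
    heating = np.trapz(Q_abs * J_nu, nu)
    lo, hi = 1.0, 3000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.trapz(Q_abs * planck(nu, mid), nu) < heating:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with made-up inputs: a strongly diluted 30,000 K continuum
nu = np.logspace(12, 16, 400)              # Hz
J_nu = 1e-14 * planck(nu, 3.0e4)           # assumed dilute radiation field
Q_abs = np.minimum(1.0, nu / 3.0e15)       # toy size-dependent efficiency
print(equilibrium_temperature(nu, J_nu, Q_abs))
```

Because $`Q_{abs}`$ differs from bin to bin, the small-grain bins come out hotter, which is precisely the origin of the Wien-tail differences mentioned above.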
## 4. Cloudy on the Web
The source for Cloudy can be obtained from the web at the following URL:
http://www.pa.uky.edu/~gary/cloudy
This site contains the current major release (Cloudy 90.05) as well as the beta2 release of Cloudy 94 (actually called Cloudy 93.03).
You can add your name to the Cloudy mailing list by logging on to the following URL:
http://nimbus.pa.uky.edu/cloudy/versions.htm
Preprint MPI-PhT/2000-02

# Effects of CP-Violation in Neutralino Scattering and Annihilation
We show that in some regions of supersymmetric parameter space, CP violating effects that mix the CP-even and CP-odd Higgs bosons can enhance the neutralino annihilation rate, and hence the indirect detection rate of neutralino dark matter, by factors of $`10^6`$. The same CP violating effects can reduce the neutralino scattering rate off nucleons, and hence the direct detection rate of neutralino dark matter, by factors of $`10^7`$. We study the dependence of these effects on the phase of the trilinear coupling $`A`$, and find cases in the region being probed by dark matter searches which are experimentally excluded when CP is conserved but are allowed when CP is violated.
The neutralino elastic scattering cross section (in pb) is plotted in Fig. 1 as a function of neutralino mass (in GeV) for $`10^6`$ values in SUSY parameter space. The upper panel is for the case of CP violation via $`Im(A)\neq 0`$, while the lower panel is for the case of no CP violation. In the upper panel, it is the maximally enhanced cross section (as a function of $`\mathrm{arg}(A)`$) that is plotted. The dark points refer to those values of parameter space which have the maximum value of the cross section for nonzero $`Im(A)`$ and which are experimentally excluded at zero $`Im(A)`$. The grey regions refer to those values of parameter space which are enhanced when CP violation is included and which are allowed also at zero $`Im(A)`$. The light grey empty squares refer to those values of parameter space which show no enhancement when CP violation is included. The solid lines indicate the current experimental bounds placed by DAMA and CDMS; the dashed lines indicate the future reach of the CDMS (Soudan), GENIUS, and CRESST proposals.

Figure 1.
In Fig. 2 we show the enhancement and suppression factors of the elastic scattering cross section for the case of a CP-violating $`\mathrm{arg}(A)`$. The plot shows the ratios $`R_{\mathrm{max}}=\sigma ^{\mathrm{max}}/\mathrm{max}[\sigma (0),\sigma (\pi )]>1`$ and $`R_{\mathrm{min}}=\sigma ^{\mathrm{min}}/\mathrm{min}[\sigma (0),\sigma (\pi )]`$ as functions of the value $`\varphi _A`$ of the phase of $`A`$ at which the maximum/minimum occurs. Here $`\sigma ^{\mathrm{max}}`$ ($`\sigma ^{\mathrm{min}}`$) is the enhanced (suppressed) scattering cross section, and the superscript max (min) indicates the maximal enhancement (suppression) as one goes through the phases of $`A`$. The denominator of the ratio $`R_{\mathrm{max}}`$ ($`R_{\mathrm{min}}`$) takes the larger (smaller) value of the scattering cross section without CP violation, i.e., for phase = 0 or phase = $`\pi `$.
Figure 2.

Fig. 3 shows the enhancement and suppression factors of the neutralino annihilation cross section times relative velocity $`\sigma v`$ (at $`v=0`$). The ratios $`R_{\mathrm{max}}^{\mathrm{ann}}`$ and $`R_{\mathrm{min}}^{\mathrm{ann}}`$ are defined similarly to $`R_{\mathrm{max}}`$ and $`R_{\mathrm{min}}`$, but with $`\sigma v`$ replacing the scattering cross section $`\sigma `$.
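A toy illustration of how such ratios are extracted from a phase scan; the $`\sigma (\varphi _A)`$ curve below is a made-up stand-in, not a SUSY calculation:

```python
import numpy as np

def enhancement_ratios(phases, sigma):
    """R_max and R_min as defined above, from a sampled sigma(phi_A) scan."""
    s0 = sigma[np.argmin(np.abs(phases))]            # CP-conserving phi_A = 0
    spi = sigma[np.argmin(np.abs(phases - np.pi))]   # CP-conserving phi_A = pi
    return sigma.max() / max(s0, spi), sigma.min() / min(s0, spi)

phases = np.linspace(0.0, 2.0 * np.pi, 361)
sigma = 1.0 + 0.8 * np.sin(phases) ** 2              # hypothetical scan
print(enhancement_ratios(phases, sigma))             # (1.8, 1.0)
```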
Finally, in Figs. 4 and 5 we show two examples of the behavior of the scattering and annihilation cross sections with the phase of $`A`$. The four panels, from top to bottom, display the following: the scattering cross section $`\sigma _{\chi p}`$ in pb, the annihilation cross section $`\sigma v`$ in cm<sup>3</sup>/s, the branching ratio $`BR(b\to s\gamma )\times 10^4`$, and the lightest Higgs boson mass $`m_{h_1}`$ in GeV, as functions of the phase $`\varphi _A`$ of $`A`$. CP-conserving phases are $`\varphi _A=0,\pi `$, while all other values are CP violating. In the third and fourth panels we hatch the regions currently ruled out by accelerator experiments. In all four panels we denote the part of the curves that is experimentally allowed by thickened solid lines, and the part that is experimentally ruled out by thinner solid lines. In these figures, the allowed phases are bounded by the limit on the $`b\to s\gamma `$ branching ratio. In the allowed regions, the scattering cross section at CP-violating phases is suppressed, while the annihilation cross section is enhanced. The latter takes its maximum allowed value when the $`b\to s\gamma `$ limit is reached. In the case plotted in Fig. 4, both CP-conserving cases are experimentally excluded while some CP-violating cases are allowed. The scattering cross section is of the order of $`10^{-6}`$ pb, and lies in the region being probed by direct detection experiments. The annihilation cross section peaks at $`\varphi _A=3\pi /4`$; notice that this value is not the point of maximal CP violation, $`\varphi _A=\pi /2`$.
Figure 3.
Figure 4.
Figure 5.
A detailed presentation and references can be found in Gondolo and Freese, hep-ph/9908390.
CERN-TH/2000-035
hep-th/0001207
January 2000
On a possible quantum limit for the stabilization of moduli in brane-world scenarios
Giovanni AMELINO-CAMELIA<sup>1</sup><sup>1</sup>1Marie Curie Fellow of the European Union (address from February 2000: Dipartimento di Fisica, Universitá di Roma “La Sapienza”, Piazzale Moro 2, Roma, Italy)
Theory Division, CERN, CH-1211, Geneva, Switzerland
ABSTRACT
I consider the implications for brane-world scenarios of the rather robust quantum-gravity expectation that there should be a minimum quantum limit on the uncertainty of all physical length scales. In order to illustrate the possible significance of this issue, I observe that, according to a plausible estimate, the quantum limit on the length scales that characterize the bulk geometry could severely affect the phenomenology of a recently-proposed brane-world scenario.
An extensive research effort (see, e.g., Refs. and references therein) has been recently devoted to the possibility that the non-gravitational degrees of freedom be confined to one or more p-branes while gravitational degrees of freedom have access to some extra dimensions. With respect to the development of related formalism an important observation is that in certain string theories it is quite natural to obtain this type of different properties for gravitational and non-gravitational degrees of freedom. Concerning phenomenological implications, since it is the gravitational realm which is most affected by these “brane-world scenarios”, it is not surprising that significant constraints come from the requirement that classical gravity should behave as observed in the regimes we have already explored experimentally. On the quantum-gravity side some constraints also emerge; in particular, interestingly, while more conventional pictures lead to graviton effects that are negligibly small, one finds that certain portions of the parameter space of a given brane-world scenario turn out to be excluded for predicting graviton effects that are inconsistent with data obtained at existing particle colliders. Larger portions of these parameter spaces will be probed at planned colliders, such as LHC at CERN.
In this brief note I observe that, in addition to graviton contributions to processes studied at particle colliders, there is another class of quantum-gravity effects which could have important implications for brane-world scenarios. These effects are associated with the rather robust quantum-gravity expectation that physical length scales should not be definable with perfect accuracy: there should be a minimum length uncertainty, and there should be quantum fluctuations of lengths. This is conventionally (and somewhat generically) expressed with formulas of the type $`\mathrm{\Delta }R\geq L_{min}`$, intended to be valid for any physical length scale. I shall argue that, if such quantum limitations on the stabilization of length scales apply to the length scales that characterize the bulk geometry, there might be implications also for observables on the brane where the Standard Model fields reside.
In conventional quantum-gravity scenarios $`L_{min}`$ is expected to coincide with $`L_{QG}`$, the length scale that characterizes the strength of gravitational interactions ($`L_{QG}`$ would be given by the Planck length $`L_p\approx 10^{-35}\mathrm{m}`$ in the conventional picture with only 3+1 space-time dimensions, but in the bulk of a brane-world scenario one can have $`L_{QG}\gg L_p`$).
In quantum-gravity scenarios based on string theory there has traditionally been the expectation that the measurability bound should be even more stringent: $`L_{min}\sim L_s>L_{QG}`$, where $`L_s`$ is the string length ($`L_s>L_{QG}`$ in the perturbative regime). More recently the analysis of certain stringy scenarios with several length scales has suggested that in presence of appropriate hierarchies of scales it may be possible to have $`L_{min}<L_{QG}`$. For example, in the scenario considered in Ref. it appears that $`L_{min}\sim (M_{D0}L_s)^{-1/12}L_{QG}<L_{QG}`$, where $`M_{D0}`$ is the mass of D-particles.
For brane-world scenarios in which quantum gravity (possibly in the guise of a string theory) behaves in the bulk in such a way that $`L_{min}\sim L_{QG}`$, one would find that every given length scale $`R_{bulk}`$ characterizing the bulk geometry (e.g., a curvature radius or an overall length of a finite extra dimension) would be affected by a quantum limitation: $`\mathrm{\Delta }R_{bulk}\geq L_{min}\sim L_{QG}`$. In the ordinary case, in which $`L_{QG}\sim L_p`$, such quantum limits are very weak for all lengths $`R`$ that we can access experimentally (extremely small relative uncertainty $`\mathrm{\Delta }R/R\geq L_{QG}/R`$), but in the bulk of a brane-world scenario they can be significant, because $`L_{QG}\gg L_p`$ and some of the length scales $`R_{bulk}`$ are not much larger than $`L_{QG}`$.
In the mentioned stringy scenarios with several length scales and an appropriate hierarchy of scales it might be possible to have $`\mathrm{\Delta }R_{bulk}\sim L_{min}<L_{QG}`$, but values of $`\mathrm{\Delta }R_{bulk}`$ that are significantly smaller than $`L_{QG}`$ may require a strong hierarchy of scales. For example, in the scenario considered in Ref. even just the availability of $`\mathrm{\Delta }R_{bulk}`$ of order, say, $`L_{QG}/1000`$, would already require a very strong hierarchy between the mass of D-particles and the string scale: $`M_{D0}\sim 10^{36}/L_s`$. The fact that the availability of $`\mathrm{\Delta }R_{bulk}`$ significantly smaller than $`L_{QG}`$ may require such strong hierarchies can be quite significant, since most brane-world scenarios intend to solve the ordinary “hierarchy problem” and may therefore lose most of their motivation if they require for other reasons (see below) that some new hierarchy problems arise.
Let me now discuss the possible implications of this set of ideas in the significant illustrative example provided by the model proposed by Randall and Sundrum in Ref. , which in particular assumes that all length scales characterizing the bulk geometry are not too far from the fundamental bulk-gravitational length scale $`L_{QG}`$ (which in the model is taken to be close to the TeV scale). In this model the mass $`m`$ of an ordinary Standard-Model field and the mass scale $`M_p=1/L_p`$ setting the strength (weakness) of gravity on the brane where the Standard Model fields reside can be related through an exponential of the ratio between two of the length scales characterizing the bulk geometry: $`m=M_pexp(-\pi R_{bulk,1}/R_{bulk,2})`$ (in the notation of Ref. $`R_{bulk,1}=r_c`$ and $`R_{bulk,2}=1/k`$). Values of $`exp(-\pi R_{bulk,1}/R_{bulk,2})`$ close to $`10^{-15}`$ are of interest for a solution of the ordinary hierarchy problem , but if $`R_{bulk,1}`$ (and/or $`R_{bulk,2}`$) is not much bigger than $`L_{min}`$ one would then predict a rather significant limitation<sup>2</sup><sup>2</sup>2For example, the relation $`m=M_pexp(-\pi R_{bulk,1}/R_{bulk,2})`$ for $`exp(-\pi R_{bulk,1}/R_{bulk,2})=10^{-15}`$ and $`R_{bulk,1}\sim 100L_{min}\sim 100\mathrm{\Delta }R_{bulk,1}`$ leads to $`\mathrm{\Delta }(m/M_p)`$ of order $`m/M_p`$. on the accuracy of the ratio $`m/M_p`$, while instead we measure with great accuracy both the masses of Standard Model particles and the strength of ordinary gravitational interactions.
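A rough check of the estimate in the footnote (our own leading-order error propagation, not a result quoted from Ref. ): varying only $`R_{bulk,1}`$,

$`\mathrm{\Delta }(m/M_p)/(m/M_p)=\pi \mathrm{\Delta }R_{bulk,1}/R_{bulk,2}=(\pi R_{bulk,1}/R_{bulk,2})(\mathrm{\Delta }R_{bulk,1}/R_{bulk,1})\simeq 15\mathrm{ln}10\times 10^{-2}\approx 0.3,`$

so the exponential amplifies a 1% uncertainty in $`R_{bulk,1}`$ into an uncertainty of several tens of percent in $`m/M_p`$, i.e. $`\mathrm{\Delta }(m/M_p)`$ of order $`m/M_p`$ itself.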
If the quantum gravity (or string theory) appropriate for the model of Ref. behaves in the bulk in such a way that $`L_{min}\sim L_{QG}`$, the length scales $`R_{bulk,1}`$ and $`R_{bulk,2}`$ should indeed not be much bigger than $`L_{min}`$, since, as mentioned, in the model of Ref. all length scales characterizing the bulk geometry are not too far from $`L_{QG}`$. In this case the alarming prediction of significant limitations on the accuracy of the ratio $`m/M_p`$ appears to be inevitable.
If the model of Ref. could be embedded in a stringy quantum-gravity scenario of the type considered in Ref. , with several length scales and a hierarchy of scales appropriate for having $`L_{min}\ll L_{QG}`$, one might be able to escape the prediction of significant limitations on the accuracy of the ratio $`m/M_p`$, but then, as mentioned, one would easily end up having a new hierarchy problem associated with the requirement $`L_{min}\ll L_{QG}`$.
In summary, at least at the heuristic level of the present discussion, it would seem that the quantum-gravity expectation that there should be a limit $`\mathrm{\Delta }R_{bulk}\geq L_{min}`$ on the measurability of any length scale $`R_{bulk}`$ might affect non-trivially the analysis of the scenario proposed in Ref. . Of course, a definite statement must await more rigorous and quantitative analyses of the quantum properties of the bulk geometry in models based on the scenario proposed in Ref. . The analyses should tell us whether $`L_{min}<L_{QG}`$ (actually, even though the evidence for an $`L_{min}>0`$ is quite robust , one cannot exclude that $`L_{min}=0`$ might be found in some quantum-gravity or stringy-quantum-gravity pictures) or $`L_{min}\sim L_{QG}`$, and if $`L_{min}<L_{QG}`$ an estimate should be given of the amount of tuning required to eliminate the ordinary hierarchy problem.
The example of the model proposed in Ref. might also indicate that in general any claim of consistency of a brane-world scenario must await the results of at least the level of analysis of the quantum properties of the bulk geometry necessary to address rigorously the issues I considered heuristically here. It is important that such analyses be performed in very physical terms, always resorting to operative definitions of gravitational observables. It is in fact well known that formal estimates of quantum uncertainties in geometric observables can be misleading. For example, one finds that it is not sufficient to identify formally one of the objects in the formalism as a distance observable; it is instead necessary to analyze an operative definition of distance and consider all the possible limitations which might be caused by each of the elements of the measurement procedure. It is perhaps worth emphasizing that the operative definition of gravitational observables, which is already a delicate task in more conventional physical scenarios , might be a formidable task in the case of those observables of a brane-world scenario that concern the bulk geometry; in fact, the measurement procedures that have been discussed in the conventional quantum-gravity literature all rely on several non-gravitational elements, while the bulk is not accessible to non-gravitational degrees of freedom. It might be nontrivial even to establish what is genuinely observable in the bulk, and what type of measurement procedures, particularly with respect to the probes to be exchanged, would be appropriate.
Besides the analysis of possible quantum limits for the stabilization of geometric observables in the bulk, it might also be necessary to consider quantum limits for the stabilization of geometric observables of the brane where the Standard Model fields reside; in fact, on some of these observables we are starting to have significant experimental constraints .
It is a pleasure to thank Yaron Oz, for conversations on the results reported in Ref. , and Gabriele Veneziano, for conversations on the results reported in Ref. .
# Theory of the Interaction of Planetary Nebulae with the Interstellar Medium
## 1. Introduction
The interaction of planetary nebulae (PNs) with the interstellar medium (ISM) was first discussed by Gurzadyan (1969). Smith (1976) used the thin-shell model to analyze the evolution of a spherically symmetric expanding PN shell moving through the ISM (see also Isaacman 1979). On the observational side, only a few PNs showed interaction with the ISM (hereafter referred to as interacting PNs) until the last decade. With better imaging instruments (CCD cameras), the field of interacting PNs became more popular (e.g., Borkowski, Tsvetanov, & Harrington 1993; Jacoby & Van de Steene 1995; Tweedy, Martos & Noriega-Crespo 1995; Hollis et al. 1996; Zucker & Soker 1997). Borkowski, Sarazin & Soker (1990; hereafter BSS) discuss the potential of using interacting PNs to probe the ISM, and list many interacting and possibly interacting PNs. This list was extended in the review article by Tweedy (1995) and in the atlas of interacting PNs (Tweedy & Kwitter 1996). Tweedy, with several collaborators, has been leading the observational research in the field (e.g., Tweedy & Kwitter 1994a,b; Tweedy & Napiwotzki 1994). Other studies are in progress, and preliminary results were presented at the conference (e.g. Kerber et al. 2000; Dopita et al. 2000).
On the theoretical side, Soker, Borkowski & Sarazin (1991; hereafter SBS) performed numerical simulations, both in the isothermal and adiabatic limits. Their results validate the thin-shell approximation of Smith (1976). In the adiabatic case they found that the Rayleigh-Taylor (RT) instability plays a crucial role in the evolution. Recent numerical calculations reported at this conference (Dopita et al. 2000; Villaver et al. 2000) confirmed these results. In their observational analysis of the interacting PN IC 4593, Zucker & Soker (1993) suggested that the interaction process oscillates between the adiabatic and isothermal cases. An analytical study of this process, the so-called radiative shock overstability, was performed by Dgani & Soker (1994) for finite-sized objects (e.g., spherical nebulae). The radiative overstability was found to occur only in large PNs, with radii $`R>10^{19}n_0^{-1}\mathrm{cm}`$, where $`n_0`$ is the total number density of the ISM (in $`\mathrm{cm}^{-3}`$). The instability occurs only for large PNs because the transverse flow, here the flow around the nebula, stabilizes the radiative overstability (Dgani & Soker 1994; see also Stevens, Blondin & Pollock 1992, for a numerical study of a different situation). For an earlier review of the theory see Dgani (1995). Several recent theoretical studies have been conducted since that review. Soker & Dgani (1997) studied analytically the interaction of PNs with a magnetized ISM. Dgani & Soker (1998) and Dgani (1998) discussed the role of instabilities in the interaction. New numerical studies are underway (Villaver et al. 2000; Dopita et al. 2000); preliminary results presented at the conference show that instabilities play an important role in the interaction.
## 2. Preliminary Considerations
### 2.1. Why is the interaction important?
The interaction of PNs with the ISM causes deviations from axisymmetry (or point symmetry). Several other processes, involving binaries, that cause deviations from axisymmetry were discussed by Soker and collaborators (e.g. Soker 1994; Soker 1996; Soker et al. 1998).
It is important to understand as accurately as possible the ISM effects on the morphology and to distinguish them from other processes.
Interacting PNs are also a tool for studying the ISM. This role was emphasized in previous reviews (Tweedy 1995; Dgani 1995).
### 2.2. Detecting the interaction
The interaction can be divided into at least three phases: free expansion, deceleration, and stripping of the nebula off the central star. Initially, PNs expand freely because their densities are orders of magnitude higher than the density of their surrounding medium. Eventually, though, every nebula will reach a stage in which the pressure inside the nebula is of the same order as the pressure outside it. A strong enough pressure wave will then propagate into the nebula and decelerate it. If the nebula moves supersonically, the dominant ISM pressure is the ram pressure. The critical density $`n_{crit}`$, at which the ISM and the nebula reach the same pressure, is (BSS, Eq. 1):
$`n_{crit}=\left({\displaystyle \frac{v_{*}+v_e}{c}}\right)^2n_0,`$ (1)
where $`c`$ is the isothermal sound speed in the nebula, $`c\approx 10`$ km s<sup>-1</sup> in most cases, $`v_{*}`$ is the velocity of the central star, $`v_e`$ is the expansion velocity of the nebula, and $`n_0`$ is the ISM density. The most difficult parameter to observe directly in the above formula is the ISM density, $`n_0`$. The nebula at this early stage of the interaction is not distorted, but its leading edge is brighter. Measuring the value of the density at the leading edge, $`n_{crit}`$, and using the above formula yields an estimate for $`n_0`$.
If we take the average values $`v_e=`$ 20 km s<sup>-1</sup> (Weinberger 1989) and $`v_{*}=`$ 60 km s<sup>-1</sup> (Pottasch 1984; BSS), then $`n_{crit}=60n_0`$. As the ISM ram pressure depends on the velocity squared, the interaction is much stronger for fast-moving planetaries. For example, if $`v_{*}=`$ 150 km s<sup>-1</sup> then the critical density is about $`300`$ times that of the ISM.
The electron number density of extended planetaries is about 100 cm<sup>-3</sup> (Kaler 1983) for the low surface brightness PNs from the NGC catalog, and 10 cm<sup>-3</sup> for the fainter PNs from the Abell (1966) catalog. The very large nebula S-216 has $`n_e=`$ 5 cm<sup>-3</sup>. In principle, therefore, fast-moving planetary nebulae can be detected in very low density environments, $`n_0\approx 0.01`$ cm<sup>-3</sup>.
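Inverting Eq. (1) turns a measured leading-edge density into an estimate of the ISM density; a minimal sketch (illustrative Python, using the fiducial velocities quoted above):

```python
def ism_density(n_crit, v_star, v_e=20.0, c=10.0):
    """Invert Eq. (1): n_0 = n_crit * (c / (v_star + v_e))**2.
    Velocities in km/s, densities in cm^-3."""
    return n_crit * (c / (v_star + v_e)) ** 2

# A leading-edge density of 100 cm^-3 for an average nebula (v_* = 60 km/s)
print(ism_density(100.0, 60.0))  # ~1.6 cm^-3
```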
## 3. Numerical Models
The first realistic models of moving PNs were calculated numerically by SBS. They performed hydrodynamic simulations of the PN-ISM interaction with the particle-in-cell (PIC) method in cylindrical symmetry. They calculated two types of models of thick nebular shells moving through the ISM: an adiabatic model, which applies to fast nebulae moving in the galactic halo, and an isothermal model, relevant to average nebulae moving in the galactic plane.
### 3.1. Cooling in the ISM shock
A nebula moving supersonically in the ISM creates a strong bow shock ahead of it. The cooling time of the shocked ISM gas depends on the ionization state of the gas before the shock. The following analytical formula for the cooling time is based on numerical calculations of a steady shock with an ionization precursor; it is accurate to within a factor of 2 for $`v_s>`$ 60 km s<sup>-1</sup> (McKee 1987):
$`t_{cool}=100\left({\displaystyle \frac{v_s}{50km/s}}\right)^3n_0^{-1}yr.`$ (2)
In steady-state shocks with velocities of about 50 km s<sup>-1</sup>, the cooling efficiency drops by a factor of 10 when the material is preionized (Raymond 1979). However, the cooling time is still shorter than the flow time for an average nebula moving in the galactic plane. Only when the relative velocity is high and the density is low will an adiabatic flow appear.
### 3.2. The adiabatic model
SBS chose the parameters of the adiabatic model so that the cooling time is larger than the flow time. An instability appears near the axis while the shell is decelerating, which is interpreted as a Rayleigh-Taylor instability in the decelerated shell. When the shocked ISM cannot cool, the deceleration of the dense nebular shell by the dilute shocked ISM is Rayleigh-Taylor unstable. The main morphological features of the adiabatic flow are the ISM shock front, the decelerated PN shell, and the RT instability. The last manifests itself as a ”bump” and a hole in the nebula along the upstream symmetry axis. However, the numerical scheme used by SBS, being 2D cylindrically symmetric and of limited spatial resolution, forces the fastest growing mode to be of large wavelength and near the symmetry axis. Similar instability modes are obtained in numerical simulations of other types of dense bodies moving through ambient media (Brighenti & D’Ercole 1995 for Wolf-Rayet ring nebulae, and Jones, Kang & Tregillis 1994 for dense gas clouds). Brighenti & D’Ercole (1995) present a test run where the numerical grid is Cartesian rather than cylindrical. They find similar fragmentation of the flow, but not along the symmetry axis.
### 3.3. The isothermal model
In the isothermal model, efficient cooling is assumed. The cooling is implemented in the following way: in every cell in which the temperature exceeds 10<sup>4</sup> K, the temperature is set equal to 10<sup>4</sup> K. The model parameters are typical of an average nebula moving in the galactic plane.
In this case the shocked region is thin, and ”arms” of denser ISM material trail the nebula. The Rayleigh-Taylor instability does not appear in this case, because the shocked ISM has cooled and its density is of the same order as the nebular density. The model agrees with the thin-shell approximation of Smith.
### 3.4. Recent numerical simulations.
Numerical simulations show that several types of instabilities which develop on the interface of spherically expanding shells (or winds) moving with respect to the ISM can fragment the shells (e.g. Brighenti & D’Ercole 1995). The detailed 2D numerical simulations of Brighenti & D’Ercole (1995) nicely show how RT and KH instabilities on the interface of the wind and the ISM develop, and allow the ISM to penetrate into the wind-bubble.
New numerical studies of interacting PNs are underway (Villaver et al. 2000; Dopita et al. 2000). Preliminary results presented at the conference show that RT instabilities can fragment the nebular shell and, in some cases, allow the ISM to penetrate to the inner parts of the nebulae.
## 4. The Role of the ISM Magnetic Field in Interacting PNs
The important role of the ISM magnetic field in shaping interacting PNs was first mentioned in the observational paper of Tweedy et al. (1995) on the nebula Sh 216. The morphology of the nebula is dominated by a bow shock followed by several parallel elongated filaments. Tweedy et al. (1995) noted that the magnetic pressure of the ISM is negligible compared to its ram pressure for the assumed central star velocity ($`v_{*}\approx 20`$ km s<sup>-1</sup>). They argued that the central star velocity must be very low, i.e. $`v_{*}\lesssim 5`$ km s<sup>-1</sup>, in order to explain the magnetic shaping of the nebula. Dgani & Soker (1998; table 1) noted that too many interacting PNs (a dozen) show signs of magnetic shaping, i.e. parallel elongated filaments. It is very unlikely that all of them have such an abnormally low central star velocity.
### 4.1. General features
Soker & Dgani (1997) conducted a theoretical study of the processes involved when the ISM magnetic field is important in the interaction. In the case where the ISM is fully ionized, we define four characteristic velocities of the interaction process: the adiabatic sound speed $`v_s=(\gamma kT/\mu m_H)^{1/2}`$ and the Alfven velocity $`v_A=B_0/(4\pi \rho _0)^{1/2}`$ of the ISM, the expansion velocity of the nebula $`v_e`$, and the relative velocity of the PN central star and the ISM, $`v_{*}`$. The interesting cases, with the magnetic field lines at a large angle to the direction of the relative velocity, are:
1. $`v_{*}>v_A\sim v_s>v_e`$, and rapid cooling behind the shock wave. Both the thermal and magnetic pressure increase substantially behind a strong shock. If radiative cooling is rapid, however, the magnetic pressure will eventually substantially exceed the thermal pressure, leading to several strong MHD instabilities around the nebula, and probably to magnetic field reconnection behind the nebula.
2. $`v_{*}>v_A\sim v_s>v_e`$, and negligible cooling behind the shock. The thermal pressure, which grows more than the magnetic pressure in a strong shock, will dominate behind the shock. Magnetic field reconnection is not likely to occur behind the nebula. This domain characterizes the interaction of the solar wind with the atmospheres of Venus and Mars (e.g., Phillips & McComas 1991).
In the ISM it is likely that $`v_s\sim v_A`$, and $`v_e\approx 10\mathrm{km}\mathrm{s}^{-1}`$ will in most cases not exceed the ISM sound speed. The central star velocity satisfies $`v_{*}>v_s`$ in most cases. Case (1) is the magnetic parallel of the isothermal model (Section 3.3), which is relevant for planetary nebulae moving in the galactic plane. Case (2) is relevant for galactic halo PNs.
### 4.2. The magnetic Rayliegh-Taylor instability for galactic plane PNs
The isothermal case (case 1) is the more typical one for the majority of PNs, which move moderately fast ($`v_{*}\approx 60`$ km s<sup>-1</sup>) close to the galactic plane, where the ISM density is relatively high. Soker et al. (1991) found in their numerical simulations that the isothermal model shows no sign of the RT instability. Soker & Dgani (1997) showed that the ISM magnetic field, which was not incorporated in the numerical simulations of Soker et al. (1991), can lead to an interesting manifestation of the RT instability in the isothermal case. We assume that there is a magnetic field of intensity $`B_s`$ in the shocked ISM, but not in the nebula, and that its field lines are parallel to the boundary separating the ISM and nebular material. The acceleration $`g`$ (a deceleration in the frame of the central star) of the nebular front (pointing downstream) is in the opposite direction to the density gradient. In the linear regime the growth rate of the RT instability, $`\sigma `$, with a wavenumber $`k=2\pi /\lambda `$, is given by (e.g., Priest 1984; §7.5.2)
$`\sigma ^2=gk{\displaystyle \frac{\rho _n-\rho _I}{\rho _n+\rho _I}}+{\displaystyle \frac{B_s^2}{4\pi (\rho _n+\rho _I)}}k^2\mathrm{cos}^2\theta ,`$ (3)
where $`\rho _I`$ and $`\rho _n`$ are the ISM and nebular density, respectively, $`\theta `$ is the angle between the magnetic field lines and the wave-vector, and RT instability occurs when $`\sigma ^2<0`$. The deceleration of the leading shell by the ISM ram pressure is given by
$`g\approx \pi R^2\rho _0v_{*}^2/M_F,`$ (4)
where R is the nebular radius, and $`M_F`$ is the mass being decelerated. Taking $`R=0.5\mathrm{pc}`$, $`v_{*}=60`$ km s<sup>-1</sup>, $`\rho _0=10^{-25}`$ g cm<sup>-3</sup> and $`M_F=0.05M_{\odot }`$ we obtain $`g\approx 3\times 10^{-7}`$ cm s<sup>-2</sup>.
We assume that after being shocked the nebular material reaches $`10^4`$ K, and that its thermal pressure, $`\rho _nv_{sn}^2`$, is approximately equal to the ram pressure of the ISM, $`\rho _0v_{*}^2`$. Here $`\rho _n`$ is the shocked nebular density, $`v_{sn}\approx 10\mathrm{km}\mathrm{s}^{-1}`$ is the isothermal sound speed in the shocked, cooled nebula, and we use the assumption $`v_{*}\gg v_e`$. The post-shock nebular density is therefore
$`\rho _n=\rho _0v_{*}^2/v_{sn}^2.`$ (5)
RT instability occurs when the density of the shocked, cooled ISM, $`\rho _s`$, is smaller than the post-shock nebular density. This can happen for a magnetic shock, when the magnetic pressure provides the support of the shocked ISM region. For a fast enough growth of the RT instability we require $`\rho _s/\rho _n<1/3`$. Assuming equipartition in the undisturbed ISM, we find the condition for a strong enough RT instability: for a typical PN velocity of $`v_{*}=60`$ km s<sup>-1</sup> through the ISM, the pre-shock ISM magnetic field lines should be within $`45^{\circ }`$ of the shock front direction. Therefore, in most typical cases, i.e., equipartition and $`v_{*}\lesssim 60\mathrm{km}\mathrm{s}^{-1}`$, fast-growing RT instability modes with long wavelengths will develop.
The magnetic field has a destabilizing effect only for modes having wave numbers close to being perpendicular to the field lines (and in the plane separating the ISM and the nebula). Instability modes having wave numbers along the field lines are suppressed by the magnetic tension. This effect may create elongated structures in the direction of the magnetic field lines.
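To make the role of the tension term concrete, Eq. (3) can be evaluated directly. The sketch below is illustrative Python in CGS units; the input values are assumptions of the same order as the estimates above, with the sign of $`g`$ chosen so that the dense shell rides on the lighter fluid (unstable modes have $`\sigma ^2<0`$):

```python
import numpy as np

def sigma2(g, k_wave, rho_n, rho_I, B_s, theta):
    # Eq. (3); RT-unstable modes have sigma^2 < 0 in the text's convention
    buoyancy = g * k_wave * (rho_n - rho_I) / (rho_n + rho_I)
    tension = B_s**2 * k_wave**2 * np.cos(theta)**2 / (4.0 * np.pi * (rho_n + rho_I))
    return buoyancy + tension

g, k_wave = -3e-7, 2.0 * np.pi / 3e17          # cm s^-2; 0.1 pc wavelength
rho_n, rho_I, B_s = 3.6e-24, 1.0e-24, 3e-6     # g cm^-3; few-microgauss field
for theta in (0.0, np.pi / 2.0):
    print(theta, sigma2(g, k_wave, rho_n, rho_I, B_s, theta))
# theta = 0 (wavenumber along B): sigma^2 > 0, stabilized by tension;
# theta = pi/2 (wavenumber perpendicular to B): sigma^2 < 0, unstable,
# producing structures elongated along the field lines.
```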
### 4.3. The morphology of interacting PNs and their galactic latitude
Dgani & Soker (1998) explored the observational consequences of the development of the RT instability in galactic plane PNs moving in a magnetized ISM. In order to separate the galactic plane and galactic halo populations we use the expression for the cooling time behind the ISM shock (Eq. 2). The cooling will be effective when the flow time of the shocked ISM around the nebula, $`t_{flow}\approx R/v_{*}`$, is longer than the cooling time. Calling the ratio of the two times $`\eta `$, we obtain:
$`\eta =t_{cool}/t_{flow}\approx 0.1\left({\displaystyle \frac{v_{*}}{50\mathrm{km}\mathrm{s}^{-1}}}\right)^4\left({\displaystyle \frac{R}{10^{18}\mathrm{cm}}}\right)^{-1}\left({\displaystyle \frac{n_0}{0.1\mathrm{cm}^{-3}}}\right)^{-1}`$ (6)
Halo PNs are expected to have speeds of $`>100\mathrm{km}\mathrm{s}^{-1}`$ through the ISM, while those in the plane move much more slowly. In addition, close to the galactic plane the density is higher than in the halo. We therefore expect that halo PNs will have $`\eta >1`$, so that the flow will be adiabatic, while for disk PNs $`\eta <1`$ and the flow will be isothermal.
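Eqs. (2) and (6) make this halo/disk split easy to quantify; a minimal sketch (illustrative Python):

```python
def t_cool_yr(v_s, n0):
    # Eq. (2): cooling time of the shocked ISM; v_s in km/s, n0 in cm^-3
    return 100.0 * (v_s / 50.0) ** 3 / n0

def eta(v_star, R_cm, n0):
    # Eq. (6): cooling time over flow time, with t_flow ~ R / v_star
    t_flow_yr = (R_cm / (v_star * 1e5)) / 3.156e7
    return t_cool_yr(v_star, n0) / t_flow_yr

print(eta(50.0, 1e18, 0.1))    # ~0.16: disk PN, isothermal flow
print(eta(150.0, 1e18, 0.01))  # ~130: halo PN, adiabatic flow
```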
### 4.4. Measuring the ISM magnetic field with interacting PNs
The group of 12 nebulae that show stripes is not homogeneous; some have thick, loose stripes, some have thin, densely packed stripes. In Dgani (1998) the properties of the striped nebulae and their application to the study of the ISM are explored. In particular, a simple model for the shocked ISM region is used to derive a relation between the distance between adjacent stripes and the strength of the magnetic field of the ISM. If $`\mathrm{\Delta }z`$ is the average spacing of the stripes in units of the radius of the nebula, then (see Dgani 1998, Eq. 4):
$`\mathrm{\Delta }z\approx v_{A0}\mathrm{cos}\alpha _0/v_{*},`$ (7)
where $`v_{A0}`$ is the Alfven speed in the ISM, $`\alpha _0`$ is the angle between the pre-shock magnetic field and the shock front, and $`v_{*}`$ is the velocity of the central star. According to this formula, nebulae with densely packed stripes either move faster or move in a weaker magnetic field than the ones with thicker, looser stripes. Applying this simple formula to several striped nebulae, Dgani showed that information about the strength of the ISM magnetic field in the local neighborhood of these nebulae can be extracted from accurate observations of the members of this group.
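Inverting Eq. (7) gives the field estimate directly; a minimal sketch (illustrative Python, with equipartition-scale inputs that are assumptions):

```python
import math

def ism_field_gauss(dz, v_star_kms, n0, cos_alpha0=1.0, mu=1.3):
    """Invert Eq. (7): B_0 = dz * v_star * sqrt(4 pi rho_0) / cos(alpha_0),
    where dz is the stripe spacing in units of the nebular radius and the
    ISM mass density is rho_0 = mu * m_H * n0 (mu = 1.3 assumed)."""
    rho0 = mu * 1.67e-24 * n0
    return dz * v_star_kms * 1e5 * math.sqrt(4.0 * math.pi * rho0) / cos_alpha0

# Stripes spaced at 10% of the nebular radius, v_* = 60 km/s, n0 = 1 cm^-3
print(ism_field_gauss(0.1, 60.0, 1.0))  # a few microgauss
```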
Acknowledgement: It is a pleasure to thank Noam Soker for introducing me to the fascinating subject of interacting nebulae and for a very pleasant collaboration.
## References
Abell, G. O. 1966, ApJ, 144, 259.
Borkowski, K. J., Tsvetanov, Z., & Harrington, J. P. 1993, ApJ, 402, L57.
Borkowski, K. J., Sarazin, C. L., & Soker, N. 1990, ApJ, 360, 173 (BSS).
Brighenti, F. & D’Ercole, A. 1995, MNRAS, 277, 53.
Dgani, R. 1995, Ann. of the Israel Physical Society, Vol. 11: Asymmetrical Planetary Nebulae, eds. A. Harpaz and N. Soker (Haifa, Israel), p. 219.
Dgani, R. 1998, RevMexAC, 7, 149.
Dgani, R., & Soker, N. 1994, ApJ, 434, 262.
Dgani, R., & Soker, N. 1998, ApJ, 495, 337.
Dopita, M., et al. 2000, these proceedings.
Gurzadyan, G. A. 1969, Planetary Nebulae (New York : Gordon & Breach), p. 235.
Hollis, J. M., Van Buren, D., Vogel, S. N., Feibelman, W. A., Jacoby, G. H., & Pedelty, J. A. 1996, ApJ, 456, 644.
Isaacman, R. 1979, A&A, 77, 327.
Jacoby, G. H., & Van de Steene, G. 1995, AJ, 110, 1285.
Jones, T. W., Kang, H., & Tregillis, I. L. 1994, ApJ, 432, 194.
Kaler J. B. 1983, ApJ, 271, 188.
Kerber et. al. 2000, these proceedings.
McKee, C. F. 1987 in Spectroscopy of Astrophysical Plasmas, ed. A. Dalgarno
Pottash, S. R. 1984, Planetary Nebulae (Dordrecht : Reidel).
Priest, E. R. 1984, Solar Magnetohydrodynamics, Reidel Pub. Com., (Dordrecht, holland).
Raymond, J. C. 1979, ApJS, 39, 1.
Smith, H. 1976, MNRAS, 175, 419.
Soker, N.., 1994, MNRAS, 270, 774.
Soker, N.., 1996, ApJ, 496, 734. .
Soker, N.. & Dgani, R., 1997, ApJ, 484, 277.
Soker, N., Borkowski, K. J., & Sarazin, C. L. 1991, AJ, 102, 1381 (SBS).
Soker, N., Rappaport, S., & Harpaz, A., 1998, ApJ. 496, 842.
Stevens, I. R., Blondin, J. M., & Pollock, A. M, 1992, ApJ, 386, 265.
Tweedy, R. W. 1995, Ann. of the Israel Physical Society, Vol. 11: Asymmetrical Planetary Nebulae, eds. A. Harpaz and N. Soker (Haifa, Israel), p. 209.
Tweedy, R. W., & Kwitter, K. B. 1994a, AJ 108, 188.
Tweedy, R. W., & Kwitter, K. B. 1994b, ApJ 433, L93.
Tweedy, R. W., & Kwitter, K. B. 1996, ApJS 107, 255.
Tweedy, R. W., Martos, M. A., & Noriega-Crespo, A. 1995, ApJ, 447, 257.
Tweedy, R. W., & Napiwotzki, R. 1994, AJ 108, 978.
Villaver, E. et. al. these proceedings.
Weinberger, R. 1989, A&A Supp., 78, 301.
Zucker, D. B. & Soker, N. 1993, ApJ, 408, 579.
Zucker, D. B. & Soker, N. 1997, MNRAS, 289,665. |
## 1 Introduction
CNO abundances in metal-poor stars can tell us about nucleosynthesis and mixing processes in stars, and about several fundamental parameters of Galactic chemical evolution (initial mass function, star formation rate, etc.). Unevolved stars play a key role in this respect because the original surface abundances of CNO in evolved giant stars can be altered by internal nucleosynthesis and mixing between the core and outer layers. Mixing is expected to occur when a star becomes a red giant following the exhaustion of hydrogen in the core (the so-called “first dredge-up”). Important results have been obtained from the analysis of CNO molecular bands and atomic lines in metal-poor stars (see Wheeler et al. 1989 and references therein). However, such analyses have been based either on high resolution spectra of small samples (less than 10 stars) or on low/intermediate resolution spectra of large samples ($`\sim `$100) of stars. We are studying CNO abundances from high resolution spectra of more than 40 halo dwarfs. In this paper we report preliminary results based on 24 halo dwarfs discussed by Israelian, García López and Rebolo (1998) and briefly discuss the current understanding of the galactic evolution of these elements.
## 2 Observations and Analysis
The observations were carried out in different runs using the UES ($`R=\lambda /\mathrm{\Delta }\lambda \simeq 50000`$) of the 4.2-m WHT at the Observatorio del Roque de los Muchachos (La Palma), and the UCLES ($`R\simeq 60000`$) of the 3.9-m AAT. The final signal-to-noise ratio (S/N) varies for the different echelle orders, being in the range 30-100 for most of the stars.
A grid of LTE, plane-parallel, constant flux, blanketed model atmospheres provided by Kurucz (1992), computed with atlas9 without overshooting and interpolated for given values of $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$, and \[Fe/H\], was used. Details on the analysis of the OH lines have been presented in Israelian et al. (1998). We have derived carbon abundances from the CH band at 3145 Å using the line list of Kurucz (1992). Nitrogen abundances have been derived from the NH band at 3360 Å using the atomic and molecular data from Norris (1999, private communication). Synthetic spectra were computed with the WITA3 code by Pavlenko (1991). Effective temperatures ($`T_{\mathrm{eff}}`$) for our stars were estimated using the Alonso et al. (1996) calibrations versus $`V-K`$ and $`b-y`$ colors, which were derived applying the infrared flux method, and cover a wide range of spectral types and metal content. Metallicities were adopted from literature values obtained from high resolution spectra. Gravities were derived using the accurate parallaxes measured by Hipparcos (ESA 1997). These gravity values are larger by 0.28 dex on average than the values adopted in our previous analysis of OH lines (Israelian et al. 1998). This implies a small mean reduction of 0.09 dex in the oxygen abundances inferred in the latter paper, which does not affect the previously observed linear relationship between \[O/Fe\] and \[Fe/H\].
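Parallax-based gravities of this kind follow from the standard relation between mass, effective temperature and luminosity; a minimal sketch is given below, where all numerical inputs (mass, $`T_{\mathrm{eff}}`$, magnitude, bolometric correction, parallax) are illustrative assumptions rather than values adopted in this paper.

```python
import math

# log g = log g_sun + log(M/M_sun) + 4 log(Teff/Teff_sun) - log(L/L_sun),
# with L/L_sun obtained from V, a bolometric correction and the parallax.
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.74

def log_g(mass_msun, teff, v_mag, bc, parallax_mas):
    """Surface gravity from mass, Teff, apparent V magnitude,
    bolometric correction and a Hipparcos-style parallax [mas]."""
    d_pc = 1000.0 / parallax_mas
    m_bol = v_mag + bc - 5.0 * math.log10(d_pc / 10.0)
    log_l = (MBOL_SUN - m_bol) / 2.5
    return (LOGG_SUN + math.log10(mass_msun)
            + 4.0 * math.log10(teff / TEFF_SUN) - log_l)

# A hypothetical metal-poor turnoff star:
print(f"log g = {log_g(0.8, 6200.0, 8.5, -0.15, 20.0):.2f}")  # ~4.5
```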
## 3 Abundances of CNO in metal poor dwarfs
### 3.1 Oxygen
Type II SNe are expected to produce significant amounts of oxygen. Iron is produced in both Type II and Type I SNe. Since the latter come from longer-lifetime progenitors, it has long been argued that oxygen must be overabundant in very old stars. Evidence for high \[O/Fe\] ratios in many metal-poor stars has been reported during the last decades. The so-called “traditional” view is based on the study of \[O i\] lines at 6300 and 6363 Å in giants (though the second line at 6363 Å is not visible in very metal-poor stars and the analysis is then based on only one line) by Barbuy (1988), Gratton & Ortolani (1992), Sneden et al. (1991) and Kraft et al. (1992). These authors found that \[O/Fe\] $`=0.3`$-$`0.4`$ dex at \[Fe/H\] $`<-1`$ and is constant with decreasing metallicity. In contrast, oxygen abundances derived in dwarfs using the O i IR triplet at 7774 Å by Abia & Rebolo (1989), Tomkin et al. (1992), King & Boesgaard (1995), and Cavallo, Pilachowski, & Rebolo (1997) point towards increasing \[O/Fe\] values with decreasing \[Fe/H\], reaching a ratio $`\simeq 1`$ for stars with \[Fe/H\] $`\simeq -3`$, suggesting a higher production of oxygen during the early Galaxy.
New oxygen abundances derived from near-UV OH lines (which form in the same layers of the atmosphere as \[O i\]) for 24 metal-poor stars have been presented by Israelian, García López, & Rebolo (1998). It is shown how the \[O/Fe\] ratio of metal-poor stars increases from 0.6 to 1 between \[Fe/H\] $`=-1.5`$ and $`-3`$ with a slope of $`-0.31\pm 0.11`$ (Fig. 1). Contrary to the previously accepted picture (Bessell, Sutherland, & Ruan 1991, who used older model atmospheres with a coarser treatment of the opacities in the UV), these new oxygen abundances derived from low-excitation OH lines agree well with those derived from high-excitation lines of the O i IR triplet at 7774 Å. The comparison with oxygen abundances derived using O i data from Tomkin et al. (1992) showed a mean difference of $`0.00\pm 0.11`$ dex for the stars in common. On the other hand, Boesgaard et al. (1999) have obtained high quality Keck spectra of many metal-poor stars in the near UV, and recently concluded their analysis of a different set of OH lines. They find a very good agreement with the results obtained by Israelian et al. (1998), and basically the same dependence of \[O/Fe\] versus metallicity. The mean difference in oxygen abundance for ten stars in common is $`0.00\pm 0.06`$ dex when the differences in stellar parameters are taken into account. In Fig. 1, upper panel, we plot these oxygen abundances based on OH lines as a function of metallicity.
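The quoted slope is consistent with the endpoints of the trend; a one-line check:

```python
# [O/Fe] rising from ~0.6 to ~1.0 between [Fe/H] = -1.5 and -3.0
slope = (1.0 - 0.6) / (-3.0 - (-1.5))
print(f"d[O/Fe]/d[Fe/H] ~ {slope:.2f}")   # ~ -0.27, within -0.31 +/- 0.11
```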
Balachandran & Bell (1998) have pointed out that the continuous opacity of the Sun is not fully accounted for in the spectral syntheses performed in the near UV region. Although this is still a matter of debate, it would have a minor effect on the recent OH results since most of the stars in the samples of Israelian et al. and Boesgaard et al. are hotter than the Sun (and very metal-poor), and the corrections to oxygen abundances for individual stars due to this effect would be lower than 0.15 dex. This would not affect significantly the \[O/Fe\] vs. \[Fe/H\] trend.
Recently, Mishenina et al. (2000) performed a non-LTE analysis of the O i IR triplet to re-derive oxygen abundances for a sample of 38 metal-poor stars from the literature. They confirmed earlier results (Abia & Rebolo 1989; Tomkin et al. 1992; Kiselman 1993) indicating that the mean value of the non-LTE correction in unevolved metal-poor stars is typically 0.1 dex and never exceeds 0.2 dex. Mishenina et al. found the same linear trend as Israelian et al. (1998) and Boesgaard et al. (1999) from the OH lines, and confirmed that oxygen abundances do not show any trend with $`T_{\mathrm{eff}}`$ or $`\mathrm{log}g`$ (Boesgaard et al. 1999). Furthermore, Asplund et al. (1999) showed that the O i IR triplet is not affected by 3D effects, convection and small-scale inhomogeneities in the stellar atmosphere. In addition, oxygen abundances derived from this triplet are not significantly affected by chromospheric activity either, and we can conclude that the O i IR triplet provides reliable oxygen abundances in metal-poor dwarfs. This conclusion should also apply to the oxygen abundances derived from the UV OH lines, given the good agreement shown between both indicators. In Fig. 1, mid-panel, we plot oxygen abundances based on the oxygen triplet. The larger scatter observed, as compared with the measurements based on OH lines, can be associated with the different scales of stellar parameters ($`T_{\mathrm{eff}}`$, gravities, and metallicities) adopted by the authors of each set of stars, and with the fact that some measurements have not been corrected for NLTE effects.
Israelian et al. (1998) found four dwarfs (HD 22879, HD 76932, HD 103095 and HD 134169) in their sample for which oxygen abundances had been previously derived using \[O i\]. They synthesized the forbidden oxygen line for these stars adopting the same set of stellar parameters as for the OH analysis, the $`gf`$ value given by Lambert (1978), and the equivalent widths provided in the literature for the $`\lambda `$ 6300 Å line. The estimated abundances were in reasonable agreement with those derived from OH but still slightly lower. The abundances found in that work using Hipparcos gravities when analyzing both indicators are in better agreement, which strongly suggests that a reliable gravity scale may indeed be key to explaining the discrepancies in oxygen abundances from forbidden and permitted lines in unevolved metal-poor stars.
In the lower panel of Fig. 1 we compile oxygen measurements for unevolved stars based on the \[O i\] $`\lambda `$ 6300 Å line. The presence of a linear trend of \[O/Fe\] versus metallicity strongly depends on the only two measurements available at \[Fe/H\] $`\simeq -2`$. These two measurements have been recently reported by Fulbright & Kraft (1999) for the stars BD $`+37`$ 1458 and BD $`+23`$ 3130, which were also considered by Israelian et al. (1998) and Boesgaard et al. (1999; only BD $`+37`$ 1458 in this case). There is an apparent discrepancy between the results obtained from the forbidden and the OH lines. However, we argue here that this discrepancy cannot be sustained when a critical analysis of the uncertainties involved in the determination from the forbidden line is performed. The analysis carried out by Fulbright & Kraft is based on gravities derived from the LTE iron ionization balance of these subgiants, where it is well known that NLTE effects are strong. In a recent paper, Allende Prieto et al. (1999) have shown that gravities derived using this technique in metal-poor stars do not agree with the gravities inferred from accurate Hipparcos parallaxes, which casts a shadow upon oxygen abundance analyses of very metal-poor stars based on gravities derived from the ionization balance. They find that gravities are systematically underestimated when derived from ionization balances and that upward corrections of 0.5 dex or even higher can be required at metallicities similar to those of our stars. We remark here that any underestimation of the gravities will also strongly underestimate the abundances inferred from the forbidden line. For the two stars under discussion our Hipparcos-based gravities are 0.45 and 1.05 dex (for BD $`+37`$ 1458 and BD $`+23`$ 3130, respectively) higher than those derived by Fulbright & Kraft, and would imply the corrections in the oxygen abundances indicated in Fig. 1 (the details of the analysis are beyond the scope of these proceedings and will be presented in a forthcoming paper). Our conclusion is that the uncertainties in the gravities of these subgiants allow the abundances inferred from the forbidden line to be consistent with those estimated from the OH lines or the triplet. Actually, consistency with the other oxygen indicators is achieved for the high gravities inferred from Hipparcos, and this could be taken as an indication that the high gravities are indeed the correct ones.
Chemical evolution models of the early Galaxy where stellar lifetimes are taken into account, and assuming that Type Ia SNe appear at a Galactic age of 30 million years, can also explain the evolution of oxygen delineated in Fig. 1 (Chiappini et al. 1999). The evolution of oxygen proposed in this paper also helps to understand the evolution of $`{}^{6}`$Li versus \[Fe/H\] and the $`{}^{6}`$Li/Be ratio at low metallicities in the framework of standard Galactic Cosmic Ray Nucleosynthesis (Fields & Olive 1999). In addition, Ramaty et al. (1999) have proposed that a delay between the effective deposition times into the ISM of Fe and O (only a fraction of which condensed in oxide grains) can explain a linear trend of \[O/Fe\].
### 3.2 Carbon
Until very recently, it has been commonly accepted that intermediate mass stars ($`M_{\odot }<M<10M_{\odot }`$) are the main source of Galactic C (Wheeler et al. 1989). It has been shown (Laird 1985, Carbon et al. 1987, Tomkin et al. 1992) that \[C/Fe\] is approximately zero independently of \[Fe/H\]. Note that all these studies were based on the 4300 Å feature of CH. Tomkin et al. (1992) have demonstrated that C i at $`\sim `$9100 Å provides an average \[C/Fe\] $`=+0.3\pm 0.2`$, whereas the CH band provides \[C/Fe\] $`=-0.1\pm 0.2`$. Recently, Gustafsson et al. (1999) performed an abundance analysis of carbon in a sample of 80 unevolved disk stars. They found that \[C/Fe\] increases with decreasing \[Fe/H\] with a slope of $`-0.17\pm 0.03`$. This result was explained by carbon enrichment from superwinds of metal-rich massive stars. Our preliminary analysis for more metal-poor stars confirms earlier results that \[C/Fe\] $`\simeq 0`$ over a wide range of \[Fe/H\] (Fig. 2). Studies of very low metallicity halo stars (\[Fe/H\] $`<-3`$) have revealed a significant number of stars with very high overabundances of carbon, up to 1-2 dex (Beers et al. 1992). Given these new results, it appears necessary to review our knowledge about carbon production sites in the Galaxy.
### 3.3 Nitrogen
The isotope $`{}^{14}\mathrm{N}`$ is synthesized from $`{}^{12}\mathrm{C}`$ and $`{}^{16}\mathrm{O}`$ through the CNO cycles in the H-burning layer. Observations of the NH band at 3360 Å have made it possible to delineate the Galactic evolution of N down to \[Fe/H\] $`\simeq -2.8`$. Tomkin & Lambert (1984) used high resolution spectra of 8 disk and 6 halo stars ($`-0.3>`$ \[Fe/H\] $`>-2.3`$) and found \[N/Fe\] $`\simeq 0.25`$. Laird (1985) and Carbon et al. (1987) obtained \[N/Fe\] $`=0.67\pm 0.14`$ (intermediate resolution spectra of 116 stars) and \[N/Fe\] $`=0.11\pm 0.06`$ (low resolution spectra of 76 stars), respectively. However, we should stress that this band is blended with Ti and Sc lines, and it is preferable to use high resolution spectra in order to avoid any systematics due to the overabundance of Ti ($`\alpha `$-element) and Sc in metal-poor stars. Our preliminary analysis confirms previous results that \[N/Fe\] is constant over a wide range of \[Fe/H\]. It would be extremely interesting to check how \[N/O\] behaves in ultra metal-poor stars with \[Fe/H\] $`<-3`$. The trend of \[N/Fe\] at very low metallicities has not yet been investigated, but the existence of N-rich halo subdwarfs has already been demonstrated. It is very important to check the \[N/O\] trend at \[Fe/H\] $`<-3`$, as this may help to understand the possible sites for the production of nitrogen as a primary element. Recently, Maeder & Meynet (2000) have shown that the average rotation of massive stars can possibly be faster at lower metallicities. They have also found a primary N production in rapidly rotating stars in the mass range 10-15 $`M_{\odot }`$. To our knowledge these are the only models showing a primary nitrogen production in normal stars (see also Maeder, A., this conference).
## 4 Concluding remarks
We have derived carbon, nitrogen and oxygen abundances for a sample of metal-poor unevolved stars. \[C/Fe\] and \[N/Fe\] ratios appear to be constant, independently of metallicity, in the range $`-0.3>`$ \[Fe/H\] $`>-3.0`$, while the \[O/Fe\] ratio increases from approximately 0 to 1, with consistent oxygen abundances derived from different indicators. Carbon and nitrogen abundances show larger scatter than oxygen at a given metallicity, which could reflect the larger variety of stellar production sites for these elements. Work is in progress to derive CNO abundances for a larger sample of unevolved metal-poor stars using other features in the optical and IR spectral regions, which should allow a fully reliable estimate of the Galactic evolution of these elements. Consistent abundances from different spectral features of C, N and O will be used in order to delineate the \[N/O\], \[C/O\] and \[(C+N+O)/Fe\] behaviour as a function of \[O/H\] and \[Fe/H\].
CPHT S758.0100

# Dispersion Relation Analyses of Pion Form Factor, Chiral Perturbation Theory and Unitarized Calculations
(February 2000)
The vector pion form factor below 1 GeV is analyzed using experimental data on its modulus, the P-wave pion-pion phase shifts and dispersion relations. It is found that causality is satisfied. Using the dispersion relation, the terms proportional to $`s^2`$ and $`s^3`$, where $`s`$ is the momentum transfer, are calculated from the experimental data. They are much larger than the one-loop and two-loop Chiral Perturbation Theory calculations. Unitarized model calculations agree very well with the dispersion relation results.
Chiral Perturbation Theory (ChPT) is a well-defined perturbative procedure allowing one to calculate systematically low-energy phenomena involving soft pions. It is now widely used to analyze low-energy pion physics even in the presence of a resonance, as long as the energy region of interest is sufficiently far from the resonance. In this scheme, the unitarity relation is satisfied perturbatively order by order.
The standard procedure of testing the ChPT calculation of the pion form factor, which claims to support the perturbative scheme, is shown here to be unsatisfactory. This is so because the calculable terms are extremely small, less than 1.5% of the uncalculable terms at an energy of 0.5 GeV or lower, whereas the experimental errors are of the order of 10-15%. The main purpose of this note is to show how this situation can be dealt with without asking for a new measurement of the pion form factor with a precision much better than 1.5%.
Although dispersion relations (or causality) have been tested to great accuracy in forward pion-nucleon and nucleon-nucleon or nucleon-antinucleon scattering at low and high energy, there is no such test for the form factors. This problem is easy to understand. In the former case, using the unitarity of the S-matrix, one rigorously obtains the optical theorem relating the imaginary part of the forward elastic amplitude to the total cross section, which is a measurable quantity. This result, together with dispersion relations, establishes a general relation between the real and imaginary parts of the forward amplitude.
There is no such rigorous relation, valid at all energies, for the form factor. In the low energy region, the unitarity of the S-matrix in the elastic region gives a relation between the phase of the form factor and the P-wave pion-pion phase shift, namely that they are the same. Strictly speaking, this region extends from the two-pion threshold to $`16m_\pi ^2`$, where the inelastic effect is rigorously absent. In practice, the region of validity of the phase theorem can be extended to 1.1-1.3 GeV because the inelastic effect is negligible. Hence, using the measurements of the modulus of the form factor and the P-wave phase shifts, both the real and imaginary parts of the form factor are known. Beyond this energy, the imaginary part is not known. Fortunately, for the present purpose of testing locality (dispersion relation) and the validity of the perturbation theory at low energy, thanks to the use of subtracted dispersion relations, the knowledge of the imaginary part of the form factor beyond 1.3 GeV is unimportant.
Because the vector pion form factor $`V(s)`$ is an analytic function with a cut from $`4m_\pi ^2`$ to $`\infty `$, the $`n`$-times subtracted dispersion relation for $`V(s)`$ reads:
$$V(s)=a_0+a_1s+\cdots +a_{n-1}s^{n-1}+\frac{s^n}{\pi }\int _{4m_\pi ^2}^{\infty }\frac{ImV(z)\,dz}{z^n(z-s-i\epsilon )}$$
(1)
where $`n\geq 0`$ and, for our purpose, the series around the origin is considered. Because of the real analytic property of $`V(s)`$, it is real below $`4m_\pi ^2`$. By taking the real part of this equation, $`ReV(s)`$ is related to the principal part of the dispersion integral involving $`ImV(s)`$, apart from the subtraction constants $`a_n`$.
The polynomial on the R.H.S. of Eq. (1) will be referred to in the following as the subtraction constants, and the last term on the R.H.S. as the dispersion integral (DI). The evaluation of the DI as a function of $`s`$ will be done later. Notice that $`a_n=V^{(n)}(0)/n!`$ is the coefficient of the Taylor series expansion of $`V(s)`$, where $`V^{(n)}(0)`$ is the nth derivative of $`V(s)`$ evaluated at the origin. The condition for Eq. (1) to be valid is that, on the real positive $`s`$ axis, $`s^{-n}V(s)\to 0`$ as $`s\to \infty `$. By the Phragmén-Lindelöf theorem, this limit then also holds in any direction in the complex $`s`$-plane, and hence it is straightforward to prove Eq. (1). The coefficient $`a_{n+m}`$ of the Taylor series is given by:
$$a_{n+m}=\frac{1}{\pi }\int _{4m_\pi ^2}^{\infty }\frac{ImV(z)\,dz}{z^{n+m+1}}$$
(2)
where $`m\geq 0`$. The meaning of this equation is clear: under the above stated assumption, not only the coefficient $`a_n`$ can be calculated, but all other coefficients $`a_{n+m}`$ can also be calculated. The larger the value of $`m`$, the more sensitive is the value of $`a_{n+m}`$ to the low energy values of $`ImV(s)`$. In theoretical work, such as in the ChPT approach to be discussed later, the number of subtractions is chosen so as to make the DI converge.
The elastic unitarity relation for the pion form factor is $`ImV(s)=V(s)e^{-i\delta (s)}\mathrm{sin}\delta (s)`$, where $`\delta (s)`$ is the elastic P-wave pion-pion phase shift. Below the inelastic threshold of $`16m_\pi ^2`$, where $`m_\pi `$ is the pion mass, $`V(s)`$ must have the phase $`\delta (s)`$. It is an experimental fact that below $`1.3\mathrm{GeV}`$ the inelastic effect is very small; hence, to a good approximation, the phase of $`V(s)`$ is $`\delta `$ below this energy scale. Therefore
$$ImV(z)=|V(z)|\mathrm{sin}\delta (z)$$
(3)
and
$$ReV(z)=|V(z)|\mathrm{cos}\delta (z)$$
(4)
where $`\delta `$ is the strong elastic P-wave $`\pi \pi `$ phase shift. Because the real and imaginary parts are related by the dispersion relation, it is important to know $`ImV(z)`$ accurately over a large energy region. Below 1.3 GeV, $`ImV(z)`$ can be determined accurately because the modulus of the vector form factor and the corresponding P-wave $`\pi \pi `$ phase shifts are well measured, except at very low energy.
It is possible to estimate the high energy contribution to the dispersion integral by fitting the asymptotic behavior of the form factor with the expression $`V(s)=(0.25/s)\mathrm{ln}(s/s_\rho )`$, where $`s_\rho `$ is the $`\rho `$ mass squared.
Using Eq. (3) and Eq. (4), $`ImV(z)`$ and $`ReV(s)`$ are determined directly from experimental data and are shown, respectively, in Fig.1 and Fig.2.
In the following, for definiteness, one assumes $`s^{-1}V(s)\to 0`$ as $`s\to \infty `$ on the cut, i.e. $`V(s)`$ does not grow as fast as a linear function of $`s`$. This assumption is a very mild one because theoretical models assume that the form factor vanishes at infinite energy as $`s^{-1}`$. In this case, one can write a once-subtracted dispersion relation for $`V(s)`$, i.e. one sets $`a_0=1`$ and $`n=1`$ in Eq. (1).
From this assumption on the asymptotic behavior of the form factor, the derivatives of the form factor at $`s=0`$ are given by Eq. (2) with $`n=1`$ and $`m\geq 0`$. In particular, one has:
$$\langle r_V^2\rangle =\frac{6}{\pi }\int _{4m_\pi ^2}^{\infty }\frac{ImV(z)\,dz}{z^2}$$
(5)
where the standard definition $`V(s)=1+\frac{1}{6}\langle r_V^2\rangle s+cs^2+ds^3+\cdots `$ is used. Eq. (5) is a sum rule relating the pion rms radius to the magnitude of the time-like pion form factor and the P-wave $`\pi \pi `$ phase shift measurements. Using these data, the derivatives of the form factor are evaluated at the origin:
$$\langle r_V^2\rangle =0.45\pm 0.015\;\mathrm{fm}^2;\qquad c=3.90\pm 0.20\;\mathrm{GeV}^{-4};\qquad d=9.70\pm 0.70\;\mathrm{GeV}^{-6}$$
(6)
where the upper limit of the integration is taken to be $`1.7\mathrm{GeV}^2`$. By fitting $`ImV(s)`$ with the above-mentioned asymptotic expression, the contribution beyond this upper limit is found to be completely negligible. From the two-pion threshold to $`0.56\mathrm{GeV}`$ the experimental data on the phase shifts are either poor or unavailable, so an extrapolation procedure based on some model calculations, to be discussed later, has to be used. Because of the threshold behavior of the P-wave phase shift, $`ImV(s)`$ obtained by this extrapolation procedure is small. This region contributes, respectively, 5%, 15% and 30% to the $`a_1,a_2`$ and $`a_3`$ sum rules. The results of Eq. (6) change little if the $`\pi \pi `$ phase shifts below $`0.56\mathrm{GeV}`$ are extrapolated using an effective range expansion and the modulus of the form factor using a pole or Breit-Wigner formula.
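In practice, the sum rules are evaluated by numerical quadrature over the tabulated $`ImV(z)=|V(z)|\mathrm{sin}\delta (z)`$. The sketch below shows the procedure; the $`\rho `$-dominance input used here is only a stand-in for the experimental tables of $`|V|`$ and $`\delta `$, so its output should not be read as the quoted results.

```python
import numpy as np

M_PI, M_RHO, G_RHO = 0.13957, 0.7755, 0.1494   # GeV; standard values
HBARC2 = 0.19733 ** 2                          # converts GeV^-2 to fm^2

# Toy rho-dominance stand-in for the measured |V(s)| and delta(s);
# replace these two arrays with the experimental tables.
s = np.linspace(4.0 * M_PI**2 + 1e-4, 1.7, 2000)    # [GeV^2]
mod_v = M_RHO**2 / np.hypot(M_RHO**2 - s, M_RHO * G_RHO)
delta = np.arctan2(M_RHO * G_RHO, M_RHO**2 - s)
im_v = mod_v * np.sin(delta)                        # eq. (3)

def moment(n):
    """a_n = (1/pi) * integral of ImV(z)/z^(n+1) dz, cf. eq. (2)."""
    return np.trapz(im_v / s ** (n + 1), s) / np.pi

print("<r_V^2> ~", 6.0 * moment(1) * HBARC2, "fm^2")   # eq. (5)
print("c ~", moment(2), "GeV^-4   d ~", moment(3), "GeV^-6")
```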
The only experimental information on the derivatives of the form factor at zero momentum transfer is the rms radius of the pion, $`r_V^2=0.439\pm 0.008\mathrm{fm}^2`$. This value agrees very well with that determined from the sum rules. In fact, the sum rule for the rms radius receives its overwhelming contribution from the $`\rho `$ resonance, as can be seen from Fig. 1. The success of the calculation of the r.m.s. radius is a first indication that causality is respected, and also that the extrapolation procedures to low energy for the P-wave $`\pi \pi `$ phase shifts and for the modulus of the form factor are legitimate.
The dispersion relation for the pion form factor is now shown to be well verified by the data over a wide energy region. Using $`ImV(z)`$ as given by Eq. (3), together with the once-subtracted dispersion relation, one can calculate the real part of the form factor $`ReV(s)`$ in the time-like region and also $`V(s)`$ in the space-like region. Because the space-like behavior of the form factor is not sensitive to the calculation schemes, it will not be considered here. The result of this calculation is given in Fig. 2. As can be seen, the dispersion relation is well satisfied by the data.
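The time-like comparison in Fig. 2 amounts to a principal-value integral once $`ImV`$ is tabulated; a schematic implementation of the once-subtracted relation, with the pole handled by subtraction, is:

```python
import numpy as np

def re_v(s0, s, im_v):
    """Re V(s0) = 1 + (s0/pi) PV int ImV(z) dz / (z (z - s0)),
    for s[0] < s0 < s[-1]; the principal value is computed by
    subtracting the pole term and adding it back analytically."""
    g = im_v / s
    g0 = np.interp(s0, s, g)
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = (g - g0) / (s - s0)
    i0 = np.argmin(np.abs(s - s0))
    integrand[i0] = np.gradient(g, s)[i0]   # finite limit at z = s0
    pv = np.trapz(integrand, s) + g0 * np.log((s[-1] - s0) / (s0 - s[0]))
    return 1.0 + (s0 / np.pi) * pv

# e.g., with the toy grid of the previous sketch:
# print(re_v(0.5, s, im_v))
```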
The i-loop ChPT result can be put into the following form, similar to Eq. (1):
$$V^{pert(i)}(s)=1+a_1s+a_2s^2+\cdots +a_is^i+DI^{pert(i)}(s)$$
(7)
where $`i+1`$ subtraction constants are needed to make the last integral on the RHS of this equation converge, and
$$DI^{pert(i)}(s)=\frac{s^{1+i}}{\pi }\int _{4m_\pi ^2}^{\infty }\frac{ImV^{pert(i)}(z)\,dz}{z^{1+i}(z-s-i\epsilon )}$$
(8)
with $`ImV^{pert(i)}(z)`$ calculated in the $`i`$-loop perturbation scheme.
Similarly to these equations, the corresponding experimental vector form factor $`V^{exp(i)}(s)`$ and $`DI^{exp(i)}(s)`$ can be constructed using the same subtraction constants as in Eq. (7) but with the imaginary part replaced by $`ImV^{exp(i)}(s)`$, calculated using Eq. (3).
The one-loop ChPT calculation requires 2 subtraction constants. The first one is given by the Ward Identity, the second one is proportional to the r.m.s. radius of the pion. In Fig. 1, the imaginary part of the one-loop ChPT calculation for the vector pion form factor is compared with the result of the imaginary part obtained from the experimental data. It is seen that they differ very much from each other. One expects therefore that the corresponding real parts calculated by dispersion relation should be quite different from each other.
In Fig. 2 the full real part of the one-loop amplitude is compared with that obtained from experiment. At very low energy one cannot distinguish the perturbative result from the experimental one, due to the dominance of the subtraction constants. At an energy around $`0.56\mathrm{GeV}`$ there is a definite difference between the perturbative result and the experimental data. This difference becomes much clearer in Fig. 3, where only the real part of the perturbative DI, $`ReDI^{pert(1)}(s)`$, is compared with the corresponding experimental quantity, $`ReDI^{exp(1)}(s)`$. It is seen that even at 0.5 GeV the discrepancy is clear. Supporters of ChPT would argue that ChPT would not be expected to work at this energy. One would then have to go to a lower energy, where the data become very inaccurate.
This argument is false, as can be seen by comparing the ratio $`ReDI^{pert(1)}/ReDI^{exp(1)}`$. It is seen in Fig. 4 that *everywhere* below 0.6 GeV this ratio differs from unity by a factor of 6-7 due to the presence of non-perturbative effects.
Similarly to the one-loop calculation, the two-loop results are plotted in Figs. 1-4. Although the two-loop result is better than the one-loop calculation, because more parameters are introduced, calculating higher-loop effects will not explain the data.
It is seen that perturbation theory is inadequate for the vector pion form factor even at very low momentum transfer. This fact is due to the very large value of the pion r.m.s. radius, or equivalently to the very low value of the $`\rho `$ mass squared $`s_\rho `$ (see below). For the perturbation theory to be valid, the term calculated by ChPT should be much larger than the non-perturbative effect. At one loop, requiring that the perturbative calculation dominate over the non-perturbative effects at low energy gives $`s_\rho \gg \sqrt{960}\pi f_\pi m_\pi =1.3\mathrm{GeV}^2`$, which is far from being satisfied by the physical value of the $`\rho `$ mass.
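The figure quoted for the bound is straightforward to reproduce with the standard values $`f_\pi \simeq 0.092`$ GeV and $`m_\pi \simeq 0.140`$ GeV:

```python
import math
# sqrt(960) * pi * f_pi * m_pi with f_pi = 0.0924 GeV, m_pi = 0.1396 GeV
print(math.sqrt(960.0) * math.pi * 0.0924 * 0.1396)   # ~1.26 GeV^2
```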
The unitarized models are now examined. It was shown a long time ago that, to take the unitarity relation into account, it is better to use the inverse amplitude $`1/V(s)`$ or the Padé approximant method.
The first model is obtained by introducing a zero into the previously calculated form factor in order to obtain agreement with the experimental r.m.s. radius. The pion form factor is then multiplied by $`1+\alpha s/s_\rho `$, where $`s_\rho `$ is the $`\rho `$ mass squared.
The experimental data can be fitted with a $`\rho `$ mass equal to $`0.773\mathrm{GeV}`$ and $`\alpha =0.14`$. These results are in excellent agreement with the data.
The second model, which is more complete at the expense of introducing more parameters, is based on the two-loop ChPT calculation with unitarity taken into account. It has the singularity associated with the two-loop graphs. Using the same inverse amplitude method as was applied to the one-loop amplitude, but generalizing it to the two-loop calculation, Hannah has recently obtained a remarkable fit to the pion form factor in the time-like and space-like regions. His result is equivalent to the (0,2) Padé approximant applied to the two-loop ChPT calculation. Both models contain ghosts, which can be shown to be unimportant.
As can be seen from Figs. 1, 2 and 3, the imaginary and real parts of these two models are very much in agreement with the data. A small deviation of $`ImV(s)`$ above $`0.9\mathrm{GeV}`$ is due to a small deviation of the phases of $`V(s)`$ in these two models from the P-wave $`\pi \pi `$ phase shift data.
In conclusion, higher-loop perturbative calculations do not solve the unitarity problem. The perturbative scheme has to be supplemented by well-known unitarization schemes such as the inverse amplitude, N/D and Padé approximant methods.
The author would like to thank Torben Hannah for a detailed explanation of his calculation of the two-loop vector pion form factor and also for a discussion of the experimental situation on the pion form factor data. Useful conversations with T. N. Pham are acknowledged.
Figure Captions
Fig. 1: The imaginary part of the vector pion form factor $`ImV`$, given by Eq. (3), as a function of energy in GeV units. The solid curve shows the experimental results with experimental errors; the long-dashed curve is the two-loop ChPT calculation, the medium long-dashed curve is the one-loop ChPT calculation, the short-dashed curve is from the modified unitarized one-loop ChPT calculation fitted to the $`\rho `$ mass and the experimental r.m.s. radius, and the dotted curve is the unitarized two-loop calculation of Hannah.
Fig. 2: The real part of the pion form factor $`ReV`$, given by Eq. (4), as a function of energy. The curves are as in Fig. 1. The real part of the form factor calculated by the once-subtracted dispersion relation using the experimental imaginary part is also given by the solid line.
Fig. 3: The real part of the dispersion integral ReDI as a function of energy. The curves are as in Fig. 1.
Fig. 4: The ratio of the one-loop ChPT result to the corresponding experimental quantity, $`ReDI^{pert(1)}/ReDI^{exp(1)}`$, defined by Eq. (8), as a function of energy, is given by the solid line; the corresponding ratio for the two-loop result is given by the dashed line. The ratio of the unitarized models to the experimental results is unity (not shown). The experimental errors are estimated to be less than 10%.
# Energy spectra and photoluminescence of charged magneto-excitons
## Acknowledgment.
A.W. and J.J.Q. acknowledge partial support by the Materials Research Program of Basic Energy Sciences, US Dept. of Energy. |
# Nature of eclipsing pulsars
## 1 Introduction
Eclipsing millisecond pulsars were expected to be a missing link in the evolutionary connection between millisecond pulsars and low-mass $`X`$-ray binaries (Alpar et al., 1982). To the best of our knowledge, the following eclipsing millisecond pulsars have been observed for today: PSR B$`1957+20`$ (Fruchter et al., 1988), PSR B$`174424`$A (Lyne et al., 1990), PSR B1259-63 (Johnston et al., 1992), PSR J$`20510827`$ (Stappers et al., 1996a), 47 Tuc I, J (Manchester et al., 1991), O, and R (Camilo et al., 1999). Up to now, PSRs B$`1957+20`$ and J$`20510827`$ are the best studied of them. These two pulsars exhibit stable, anomalously long and frequency-dependent radio eclipses, that is, eclipses are long at low observed frequencies, and become shorter at higher frequencies.
In this paper we propose a physical mechanism for the eclipsing binary pulsars and test it for PSRs B$`1957+20`$ and J$`20510827`$. Our model assumes that the companion stars are magnetic white dwarfs. Relativistic particles from the pulsar wind are trapped by the companion’s magnetic field. This leads to formation of the extended magnetosphere of the companion star. We demonstrate that the pulsar radio emission undergoes strong cyclotron damping while passing through the companion’s magnetospheric plasma. The model accounts for the most peculiarities of the observed eclipse pattern in both aforementioned eclipsing systems. To begin with, let us shortly review the main observational features of these interesting objects.
### 1.1 PSR B$`1957+20`$
One of the two fastest pulsars known (with the period $`P\simeq 1.6`$ ms and the spin-down rate $`\dot{P}\simeq 1.7\times 10^{-20}`$), the “black widow” pulsar B$`1957+20`$ shows regular and entirely periodic eclipses at meter wavelengths, which occupy about 10% of its $`T_{\mathrm{orb}}\simeq 9.2`$ hr orbital period. The eclipses are quite stable in length at any given observing frequency, although their length depends strongly on the frequency: at 318 MHz it averages about 55 minutes, but decreases to about 33 minutes at 1.4 GHz. This frequency dependence appears to fit a $`\mathrm{\Delta }t_e\propto \nu ^{-0.4\pm 0.1}`$ power law, where $`\mathrm{\Delta }t_e`$ denotes the eclipse duration at a frequency $`\nu `$. The orbit of the binary system is rather circular, with the separation of the companions $`a\simeq 1.7\times 10^{11}\mathrm{cm}\simeq 2.5R_{\odot }`$ (Brookshaw & Tavani, 1995), where $`R_{\odot }=6.96\times 10^{10}`$ cm is the solar radius.
At the lowest observing frequencies eclipse ingress is quite rapid, whereas eclipse egress is slower and somewhat turbulent. At 1.4 GHz, however, this asymmetry is not observed. The excess electron density on either side of the eclipse is found to vary by a factor of two from eclipse to eclipse. A delay between the left and right circularly polarized pulse arrival times was also detected (Thorsett et al., 1989; Fruchter et al., 1990; Ryba & Taylor, 1991).
HST optical images of PSR B$`1957+20`$ made at different orbital phases display a dramatically variable system which is brightest when the side of the companion heated by the pulsar wind faces the Earth, and darkens by several magnitudes when the opposite cool side comes into view (Fruchter, 1995). The companion's radius, $`R_{\mathrm{opt}}\simeq 0.12R_{\odot }`$, was derived from the optical images using the estimated distance to this object, $`d=0.8`$ kpc. Therefore $`R_{\mathrm{opt}}`$ is less than one-half the Roche lobe radius $`R_L\simeq 0.29R_{\odot }`$, and the enigmatic companion of PSR B$`1957+20`$ does not even fill its Roche lobe (Djorgovski & Evans, 1988).
Soft $`X`$-rays from this system were detected using the ROSAT observatory (Fruchter & Goss, 1992; Kulkarni et al., 1992). An $`X`$-ray luminosity of about $`10^{31}\mathrm{erg}\mathrm{s}^{-1}`$ (which is $`\sim 10^{-4}`$ of the pulsar spin-down luminosity) was found, although no variability has been detected. Below we discuss possible sources of this high-energy and optical radiation.
### 1.2 PSR J$`2051-0827`$
The discovery of this eclipsing millisecond binary system was reported by Stappers et al. (1996a), who carried out timing observations at several frequencies between 408 MHz and 2.0 GHz. The orbital period $`T_{\mathrm{orb}}\simeq 2.38`$ hr of this system makes it the third shortest pulsar orbital period known, behind 47 Tuc R (Camilo et al., 1999) and PSR B1744-24A (Lyne et al., 1990). Thus, the system is extremely compact, with a separation of the binary components $`a\simeq 1.0R_{\odot }`$. High-precision timing observations of PSR J$`2051-0827`$ (the dynamical parameters of this pulsar are $`P=4.5`$ ms and $`\dot{P}=1.3\times 10^{-20}`$) indicate that the orbital period is decreasing at a rate of $`\dot{T}_{\mathrm{orb}}=\left(11\pm 1\right)\times 10^{-12}`$ (Stappers et al., 1998). Such an orbital period derivative implies a decay time for the orbit of only $`25`$ Myr, which is much shorter than the expected timescale for the ablation of the companion. This, in combination with a very slow transverse velocity of the pulsar itself, questions the formation of the system in the standard manner (Stappers et al., 1998).
The duration of the pulsar eclipse at 436 MHz is $`10\%`$ of the orbital period, which implies a radius of the eclipse region surrounding the companion $`R_e\simeq 2\times 10^{10}\mathrm{cm}\simeq 0.3R_{\odot }`$. The variation in eclipse duration between 436 and 660 MHz seems to be frequency dependent, with $`\mathrm{\Delta }t_e\propto \nu ^{-0.15}`$. On the other hand, at 1.4 GHz the pulsar emission was detected throughout the low-frequency eclipse region in approximately half of the observing sessions (Stappers et al., 1996a). These results, along with the observations at 2.0 GHz, suggest that no eclipse occurs at high radio frequencies.
Stappers et al. (1996b) also detected an optical counterpart of the companion of PSR J$`2051-0827`$. The amplitude of its light curve is at least 1.2 mag. Thus, PSR J$`2051-0827`$ is the second pulsar binary for which direct heating of the companion has been observed. The radius of the companion's optical counterpart (derived in the blackbody approximation) falls in the range $`R_{\mathrm{opt}}\simeq 0.067`$-$`0.18R_{\odot }`$ (Stappers et al., 1996b). Thus, the companion star is likely to fill its Roche lobe ($`R_L\simeq 0.13R_{\odot }`$). The best-fit models, based on the new photometry of the secondary star (Stappers et al., 1999), require that more than 30% of the incident energy is absorbed by the companion and reradiated as optical emission. The unilluminated side of the companion (i.e., observed at an orbital phase corresponding to the pulsar radio eclipse) was found to be cool (Stappers et al., 1999), with a best-fit temperature of 3000 K, similar to that obtained for the companion to PSR 1957+20 (Fruchter et al., 1995).
## 2 Nature of the companion stars
Let us begin the discussion with a few comments on the nature of the companion stars. The minimum masses of the companions in both cases were calculated from the mass function $`f(m_p,m_c)`$ using the observed orbital parameters and assuming the pulsar mass $`m_p\simeq 1.4M_{\odot }`$ and the orbital inclination $`i\simeq 60^{\circ }`$, the latter necessary for the existence of eclipses. For the companions of PSRs J$`2051-0827`$ and B$`1957+20`$ this yields $`m_c\simeq 0.027M_{\odot }`$ (Stappers et al., 1996a) and $`m_c\simeq 0.022M_{\odot }`$ (Fruchter et al., 1988), respectively (i.e., very low companion masses). The actual masses of the companions are within a factor of 1.2 of the above minimum values (because $`i\gtrsim 60^{\circ }`$). Fruchter (1995) argues that the companion of PSR 1957+20 is obviously well below the hydrogen burning limit, and one would expect this star to be a degenerate dwarf. We suggest that the same is valid for the companion of PSR J$`2051-0827`$. The radius of a low-density white dwarf (assuming a nonrelativistic electron Fermi gas in its interior) is given by the mass-radius relation (Shapiro & Teukolsky, 1983)
$$\frac{R_c}{R_{\odot }}\simeq 4.05\times 10^{-2}\left(\frac{m_c}{M_{\odot }}\right)^{-1/3}\mu _e^{-5/3},$$
(1)
where $`\mu _e=A/Z`$ is the mean molecular weight per electron (for brevity, hereafter subscripts “1” and “2” refer to quantities regarding the companions of PSRs B$`1957+20`$ and J$`2051-0827`$, respectively). Substituting the masses of each companion star in equation (1) we can calculate their radii for the cases of pure hydrogen ($`\mu _e=1`$) and pure helium ($`\mu _e=2`$) white dwarfs, which yield $`R_{c1}^\mathrm{H}=0.145R_{\odot }`$, $`R_{c1}^{\mathrm{He}}=0.046R_{\odot }`$, $`R_{c2}^\mathrm{H}=0.135R_{\odot }`$, and $`R_{c2}^{\mathrm{He}}=0.043R_{\odot }`$, respectively. For a mixture of 25% He and 75% H by mass $`(X=0.25)`$, we obtain $`R_{c1}\simeq R_{c2}\simeq 0.1R_{\odot }`$. We see that these values of the companion radii are comparable to those estimated from the optical observations (see Section 1).
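Equation (1) reproduces the radii quoted above directly; a short evaluation:

```python
def wd_radius(m_c, mu_e):
    """Low-density white dwarf radius in solar radii, eq. (1);
    m_c in solar masses, mu_e the mean molecular weight per electron."""
    return 4.05e-2 * m_c ** (-1.0 / 3.0) * mu_e ** (-5.0 / 3.0)

for name, m_c in (("PSR B1957+20", 0.022), ("PSR J2051-0827", 0.027)):
    print(name,
          f"H: {wd_radius(m_c, 1.0):.3f} R_sun,",
          f"He: {wd_radius(m_c, 2.0):.3f} R_sun")
# -> 0.145/0.046 R_sun and 0.135/0.043 R_sun, as quoted in the text
```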
Because magnetic flux is conserved in gravitational collapse, stellar magnetic fields can be amplified to more than a million Gauss when a star contracts to the white dwarf stage. Observations show that the magnetic fields at white dwarf surfaces vary from a few tens of Gauss to millions of Gauss (Lang, 1992). We assume that both white dwarfs discussed above possess significant magnetic fields at their surfaces, and estimate these magnetic fields below.
## 3 Eclipse mechanism
### 3.1 Review
Thompson et al. (1994) and Thompson (1995) analyzed a variety of physical mechanisms which can cause pulsar eclipses. They considered eclipses by a wind from the stellar companion, by a stellar magnetosphere, or by material entrained in the pulsar wind. The refractive eclipse proposed by Phinney (1988) was ruled out by Thompson et al. (1994) due to the inability of this mechanism to explain the sensitive dependence of the eclipse duration on frequency, as well as the measured time delays. The other mechanisms discussed by Thompson et al. (1994) and Thompson (1995) are as follows: free-free absorption, scattering by plasma turbulence, induced Compton scattering, stimulated Raman scattering parametric instability, and synchro-cyclotron absorption. Thompson et al. (1994) and Thompson (1995) are inclined to the opinion that the eclipse is due to cyclotron absorption by plasma of temperature $`T_e\simeq (1`$-$`4)\times 10^8`$ K and density $`n_e\simeq 4\times 10^4`$ $`\mathrm{cm}^{-3}`$ in a field of strength $`B_e\simeq 15`$-$`20`$ G. Another possibility is synchrotron absorption by relativistic plasma electrons (Eichler, 1991).
A detailed discussion of eclipse mechanisms was also presented by Eichler & Gedalin (1995). They focus on three-wave processes, in particular Raman scattering (the decay $`photon\to photon+plasmon`$ stimulated by an ambient plasmon field) and induced plasmon emission (i.e. the same process stimulated by photons already in the final state). These authors claim that both mechanisms are able to produce pulsar eclipses via large-angle scattering of the initially beamed pulsar radiation, and neither of them requires high plasma densities. Brookshaw & Tavani (1995) considered different scenarios of mass outflow in binaries, such as mass outflow intrinsic to the companion star, pulsar irradiation-driven outflows, and Roche lobe overflow, and discarded the latter.
A mechanism based on three-wave interactions involving low-frequency acoustic turbulence was proposed to be responsible for pulsar eclipses by Luo & Melrose (1995) and Luo & Chian (1997). The authors claim that the eclipsing material consists of plasma from the companion wind, heated by the pulsar wind. This may result in a hot corona-like plasma which is possibly non-isothermal. They considered both the case of electron acoustic waves, when the ion acoustic temperature $`T_i`$ is higher than the electron temperature $`T_e`$, and that of ion acoustic waves, when $`T_e>T_i`$. It was found by Luo & Chian (1997) that induced scattering off electron acoustic waves can be important and can cause pulsar eclipses. At the same time, it was demonstrated that induced Brillouin scattering involving low-frequency ion acoustic waves cannot be the main cause of the eclipse, because of the relatively small growth rates of these waves.
### 3.2 Geometry of the model
As we have seen in Section 1, the ratio of the eclipse duration (at low frequencies) to the orbital period is about the same in both PSR B$`1957+20`$ and PSR J$`2051-0827`$, i.e., $`\mathrm{\Delta }t_e/T_{\mathrm{orb}}\simeq 0.1`$. As the eclipse nears, the pulse decreases in amplitude but does not change in shape, showing that the eclipse mechanism removes photons from the line of sight but does not significantly scatter the pulse. Thus, only absorption or large-angle scattering may be responsible for it (Stappers et al., 1996a). On the other hand, the eclipse length indicates that the eclipsing medium extends to distances much larger than the dimensions of a companion of any conceivable composition (Fruchter, 1995). Indeed, for PSR B$`1957+20`$ the radius of the eclipsing region is $`R_e(318\mathrm{MHz})\simeq 0.76R_{\odot }`$ (so that the “eclipsing spot” spans at least $`1.5R_{\odot }`$) and $`R_e(1.4\mathrm{GHz})\simeq 0.46R_{\odot }`$, whereas for PSR J$`2051-0827`$ $`R_e(436\mathrm{MHz})\simeq 0.31R_{\odot }`$. We see that in both cases (at all radio frequencies) the eclipse region lies well beyond the corresponding companions' Roche lobes ($`R_{L1}\simeq 0.29R_{\odot }`$ and $`R_{L2}\simeq 0.13R_{\odot }`$, respectively).
Thompson (1995) points out that the large size of the eclipses (compared to the companion's Roche lobe) could be explained either by diffusion of plasma into the pulsar wind, or by a large companion magnetosphere. We adopt this statement and claim that the companions of both pulsars are degenerate dwarfs with significant magnetic fields at their surfaces. At the same time, we suggest that the magnetospheres of these magnetic white dwarfs are infused with relativistic particles supplied permanently by the pulsar relativistic wind. Figure 3.2 schematically represents the geometry of such a binary system.
Let us estimate the density of a wind plasma at the distances from the pulsar surface corresponding to the binary separation $`a`$. The number density of pair plasma inside the pulsar light cylinder (with the radius $`R_{_{\mathrm{LC}}}=Pc/2\pi `$) is
$$n_p=2.2\times 10^{18}\frac{\kappa }{\mathrm{sin}\alpha }\left(\frac{\dot{P}}{P}\right)^{0.5}\left(\frac{R_0}{R}\right)^3\;[\mathrm{cm}^{-3}],$$
(2)
where $`\kappa `$ is the Sturrock multiplication factor (Sturrock, 1971) and $`\alpha `$ is the inclination angle between the pulsar magnetic and rotation axes. From equation (2) we can calculate the number density of particles at a distance of one light cylinder radius, $`n_{pL}`$. Assuming further that the density falls according to the inverse square law beyond the pulsar light cylinder, $`n_p(R)=n_{pL}(R_{\mathrm{LC}}/R)^2`$, we get
$$n_p(R)=3\times 10^3\frac{\kappa }{\mathrm{sin}\alpha }\left(\frac{\dot{P}_{15}}{P^3}\right)^{0.5}\left(\frac{R_{\odot }}{R}\right)^2\;[\mathrm{cm}^{-3}].$$
(3)
Substituting in equation (3) the dynamic parameters of each of the pulsars discussed in this paper, taking $`\alpha \simeq 90^{\circ }`$ for PSR B$`1957+20`$ (note that this pulsar has an interpulse, indicating that it is an almost orthogonal rotator), and assuming that the inclination angle is rather large for PSR J$`2051-0827`$ too, we find that at the distances corresponding to the binary separations in each of the systems ($`a_1\simeq 2.5R_{\odot }`$ and $`a_2\simeq 1.0R_{\odot }`$, respectively), the plasma densities in the pulsar winds are $`n_p(a_1)\simeq 3.1\times 10^2\kappa `$ $`\mathrm{cm}^{-3}`$ and $`n_p(a_2)\simeq 3.6\times 10^2\kappa `$ $`\mathrm{cm}^{-3}`$, respectively. We see that the densities are of about the same order of magnitude in these two cases. Note that the actual value of the plasma density in the companions' magnetospheres may be much higher than inferred from equation (3). The point is that the slowest particles of the pulsar wind trapped by the companion's magnetic field should keep bouncing between its magnetic poles without relative density loss on an extended timescale. Such an accumulation of plasma particles may appreciably increase their number density.
### 3.3 The model
We share the opinion expressed by Thompson et al. (1994) that the most promising model of eclipsing pulsar binaries is cyclotron absorption. However, we believe that the proper consideration of wave damping in the relativistic magnetized plasma should be performed on the basis of a kinetic treatment of the cyclotron damping. Based on this statement, as well as on the physical picture of an eclipsing binary system presented in Section 3.2 (see also Fig. 3.2), we presented a model for the radio eclipse of PSR B$`1957+20`$ (Khechinashvili & Melikidze, 1997). Here we develop and further extend this model by applying it to PSR J$`2051-0827`$.
Let us check the possibility of wave damping in the plasma of the companion's magnetosphere. Radio waves emitted by the pulsar (with the vacuum spectrum $`\omega =kc`$) should split into the eigenmodes of the relativistic pair plasma as they enter the companion's magnetosphere. It was shown (e.g., Lominadze et al., 1986; Arons & Barnard, 1986) that in a magnetized relativistic pair plasma there exist three wavemodes with frequencies much less than $`\omega _B/\gamma _p`$, where $`\omega _B=eB/mc`$ is the gyrofrequency and $`\gamma _p`$ is the mean Lorentz factor of the plasma particles. In the general case of oblique propagation with respect to the local magnetic field, these eigenmodes are: i) the purely electromagnetic extraordinary (X) mode; ii) the subluminous Alfvén (A) mode; and iii) the superluminous ordinary (O) mode. The last two modes are of mixed electrostatic-electromagnetic nature. The electric field vectors $`𝐄^\mathrm{O}`$ and $`𝐄^\mathrm{A}`$ of the O and A-modes lie in the $`(𝐤,𝐁)`$ plane, while the electric field of the X-mode, $`𝐄^\mathrm{X}`$, is directed perpendicularly to this plane.
All three wavemodes may be strongly damped in the companion's magnetosphere at the cyclotron resonance
$$\omega -k_{\parallel }v-\frac{\omega _B}{\gamma _{\mathrm{res}}}=0$$
(4)
on the particles of the ambient plasma with the mean Lorentz factor $`\gamma _{\mathrm{res}}=\gamma _p`$. Note that in equation (4) $`k_{\parallel }`$ denotes the component of the wavevector along the local magnetic field, and we neglected the curvature drift of the bulk plasma particles across the plane of the magnetic field line curvature.
Let us calculate the characteristic frequency of damping of the plasma eigenmodes at the cyclotron resonance. We assume that all the modes are vacuum-like in the domain of their spectrum where the cyclotron damping occurs, i.e. their spectrum can be represented as
$$\omega =kc(1-\delta ),\qquad \mathrm{where}\;\delta \ll 1.$$
(5)
Using the dispersion relation in this form we can rewrite the resonance condition (4) as follows
$$kc(1-\delta )-kc\,\mathrm{cos}\theta \left(1-\frac{1}{2\gamma _p^2}\right)=\frac{\omega _B}{\gamma _p},$$
(6)
where $`\theta `$ is the angle between the wavevector and the magnetic field in the resonance region, and we used $`v\simeq 1-1/2\gamma _p^2`$. From equation (6) we obtain two approximations for the frequency of the damped waves:
$$\omega _d\simeq 2\gamma _p\omega _B,\qquad \mathrm{if}\;\theta \ll \frac{1}{\gamma _p},$$
(7)
and
$$\omega _d\simeq \frac{\omega _B}{\gamma _p\left(1-\mathrm{cos}\theta \right)},\qquad \mathrm{if}\;\theta \gg \frac{1}{\gamma _p}.$$
(8)
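Both limits follow at once if eq. (6) is rewritten, neglecting $`\delta `$ and setting $`\mathrm{cos}\theta /2\gamma _p^2\simeq 1/2\gamma _p^2`$, as

$$kc\left[(1-\mathrm{cos}\theta )+\frac{1}{2\gamma _p^2}\right]\simeq \frac{\omega _B}{\gamma _p}:$$

the $`1/2\gamma _p^2`$ term dominates for $`\theta \ll 1/\gamma _p`$, which gives eq. (7), while the $`(1-\mathrm{cos}\theta )`$ term dominates for $`\theta \gg 1/\gamma _p`$, which gives eq. (8).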
The decrements of cyclotron damping for the plasma eigenmodes are obtained by solving the corresponding dispersion equations (see, e.g., Lominadze et al., 1986; Khechinashvili, 1999) for the imaginary part of the complex frequency ($`\omega \to \omega +i\mathrm{\Gamma }`$). The decrements of the X-mode and of the O and A-modes thus read
$$\mathrm{\Gamma }_\mathrm{X}=-\frac{\pi }{2}\frac{\omega _p^2}{\gamma _T\omega _d}$$
(9)
and
$$\mathrm{\Gamma }_\mathrm{O}\simeq \mathrm{\Gamma }_\mathrm{A}=-\frac{\pi }{2}\frac{\omega _p^2}{\gamma _T\omega _d}\,g\left(\theta \right),$$
(10)
respectively, where
$$g\left(\theta \right)=\mathrm{cos}^2\theta +\frac{2\,\mathrm{sin}^2\theta \,\mathrm{cos}\theta }{1-\mathrm{cos}\theta +\left(1/2\gamma _p^2\right)},$$
(11)
and $`\gamma _T`$ is the thermal spread of the particle energies. In equations (9) and (10) we used the mean value theorem and the normalization of the distribution function, namely that $`f\left(\gamma _p\right)\gamma _T=\int f\left(\gamma \right)d\gamma =1`$. Note that for small angles (or small Lorentz factors of the ambient plasma particles) $`g\left(\theta \right)\simeq 1`$, and the decrement of the O and A-modes (eq. 10) transforms into that of the X-waves (eq. 9),
$$\mathrm{\Gamma }_{(\mathrm{O},\mathrm{A})}\simeq -\frac{\pi }{2}\frac{\omega _p^2}{\gamma _T\omega _d},\qquad \mathrm{if}\;\theta \ll 1/\gamma _p.$$
(12)
In the opposite limit, i.e. when the angles (or the plasma mean Lorentz factors) are relatively large, $`g(\theta )`$ is plotted in Figure 3.3 for different values of $`\gamma _p`$. It is seen that if $`1/\gamma _p\ll \theta <1`$ the value of the decrement of the O and A-modes is a few times larger than that of the X-mode (provided that the other parameters remain the same in both cases). For example, for $`\theta =30^{\circ }`$ and $`\gamma _p=100`$ (a dashed line in Fig. 3.3) we have $`\mathrm{\Gamma }_{(\mathrm{O},\mathrm{A})}/\mathrm{\Gamma }_\mathrm{X}\simeq 4`$. The two decrements become equal to each other $`\left(g(\theta )\simeq 1\right)`$ at the angle $`\theta \simeq 70^{\circ }`$. On top of that, O and A-modes propagating nearly transversely to the local magnetic field undergo no cyclotron damping, since $`g\left(\theta \right)\to 0`$ as $`\theta \to 90^{\circ }`$. Indeed, the A-mode vanishes at exactly transverse propagation to the local magnetic field (its group velocity tends to zero), whereas the O-mode transforms into a mode whose electric field is parallel to the external magnetic field and $`𝐤\perp 𝐄`$. Such a wave is not subject to cyclotron damping.
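The quoted behaviour of $`g(\theta )`$ is easy to verify numerically:

```python
import math

def g(theta, gamma_p):
    """Angular factor of eq. (11)."""
    c = math.cos(theta)
    return c**2 + 2.0 * math.sin(theta)**2 * c / (1.0 - c + 0.5 / gamma_p**2)

# Gamma_(O,A)/Gamma_X = g(theta); for gamma_p = 100:
for deg in (30, 70, 89):
    print(deg, round(g(math.radians(deg), 100.0), 3))
# -> ~4 at 30 deg, ~1 at 70 deg, and g -> 0 as theta -> 90 deg
```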
It is obvious that the angles between the wavevectors and the local magnetic field cannot be small; hence $`\theta \gg 1/\gamma _p`$. Therefore, we can calculate the frequency of the damped waves from equation (8), which it is now convenient to represent in the following form
$$\frac{\nu _d}{[\mathrm{GHz}]}\simeq 2.8\times 10^{-3}\frac{B_c}{\gamma _p\left(1-\mathrm{cos}\theta \right)}\left(\frac{R_c}{r}\right)^3.$$
(13)
Here $`r`$ is a distance from the companion’s center, $`R_c`$ is the radius of the companion star, and $`B_c`$ is its surface magnetic field measured in Gauss. Note that we assume the companion’s magnetic field to be dipolar, i.e. $`B\left(r\right)=B_c(R_c/r)^3`$.
It is seen from equation (13) that the frequency of the damped waves is inversely proportional to the cube of the distance from the companion's surface (obviously, this dependence results from $`\nu _d\propto \omega _{Be}\propto B_e`$, the latter being the magnetic field value in the eclipse region). In other words, distinct frequencies are damped at distinct heights above the stellar surface (corresponding to appropriate values of the magnetic field), while higher frequencies are damped closer to the companion star. The waves with frequencies somewhat higher and lower than the one given by equation (13) propagate almost freely in the plasma at this distance from the star. However, they can also reach a region at a different altitude or with a different direction of the local magnetic field (hence, a different angle $`\theta `$) and be damped there. Figure 3.3 represents schematically a top view of the eclipse region. It follows from this picture that the size of the “eclipsing spot” is frequency-dependent, being larger at lower frequencies. Indeed, radio waves with a low frequency $`\nu ^{\prime }`$ (from the entire range of the pulsar radio spectrum) are damped at some large distance $`r^{\prime }`$ from the white dwarf surface. At the same time, waves with a higher frequency $`\nu ^{\prime \prime }>\nu ^{\prime }`$ propagate almost freely through this outer region, although they are damped as they reach the altitude $`r^{\prime \prime }<r^{\prime }`$ from the stellar surface, with a higher value of the magnetic field.
Let us find the values of the physical parameters matching the observational data on PSR B1957+20 to our eclipse model. Substituting first $`\nu _d=0.318`$ GHz and then $`\nu _d=1.4`$ GHz in equation (13), and using the corresponding values of the eclipse region radii at these frequencies, $`r=R_e(\nu _d)`$, we find that this range of radio frequencies is damped at distances of $`0.46-0.76R_{\odot }`$ from the companion’s center, if
$$\frac{B_cR_c^3}{\gamma _p\left(1-\mathrm{cos}\theta \right)}\simeq 50R_{\odot }^3.$$
(14)
First assume that the companion star is a hydrogen white dwarf with $`R_c=0.145R_{\odot }`$. Speculating further that the mean Lorentz factor of the particles of the companion’s magnetospheric plasma is moderate, $`\gamma _p\simeq 10-100`$, and that the average angle is rather large, $`\theta =60^{\circ }-90^{\circ }`$, we obtain a magnetic field strength at the white dwarf’s surface of $`B_c\simeq 2\times 10^5-3\times 10^6`$ G. Alternatively, if the companion is composed of pure helium (so that $`R_c=0.046R_{\odot }`$), equation (14) yields $`B_c\simeq 5\times 10^6-10^8`$ G for the same set of parameters.
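Inverting equation (14) for the surface field reproduces these ranges; a short sketch (ours, with names chosen for illustration):

```python
import math

def surface_field_gauss(const_rsun3, gamma_p, theta_deg, rc_rsun):
    """B_c = const * gamma_p * (1 - cos theta) / R_c^3, with the
    constant (50 or 5) expressed in units of R_sun^3 as in eqs. (14)-(15)."""
    return const_rsun3 * gamma_p * (1.0 - math.cos(math.radians(theta_deg))) \
           / rc_rsun**3

# hydrogen white dwarf, R_c = 0.145 R_sun, PSR B1957+20 (const = 50):
print(surface_field_gauss(50.0, 10.0, 90.0, 0.145))    # ~1.6e5 G
print(surface_field_gauss(50.0, 100.0, 90.0, 0.145))   # ~1.6e6 G
```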
Applying the above procedure to the PSR J$`2051-0827`$ system, we obtain that the band of radio frequencies up to about 2 GHz is damped in the companion’s magnetosphere if the following condition is satisfied,
$$\frac{B_cR_c^3}{\gamma _p\left(1-\mathrm{cos}\theta \right)}\simeq 5R_{\odot }^3.$$
(15)
Here it was taken into account that the waves with $`\nu _d=436`$ MHz are damped at the distance $`R_e=0.31R_{\odot }`$. Assuming, as in the previous case, that $`\gamma _p\simeq 10-100`$ and $`\theta =60^{\circ }-90^{\circ }`$, we can estimate that $`B_c\simeq 2\times 10^4-4\times 10^5`$ G for a hydrogen white dwarf ($`R_c=0.135R_{\odot }`$), and $`B_c\simeq 5\times 10^5-10^7`$ G for a helium white dwarf ($`R_c=0.043R_{\odot }`$).
For all the intermediate cases of white dwarfs made of various mixtures of He and H by mass (that is, various values of $`X`$, see Section 2), the surface magnetic field falls in the range between the above values of $`B_c`$. Detailed spectral observations of both companion stars would determine the spectral class (hence, the chemical composition) of each white dwarf, as well as constrain the values of their surface magnetic fields. In the framework of our model (eqs. ) this would provide an indirect measurement of the average angles $`\theta `$ and the mean Lorentz factors of the companion’s magnetospheric plasma particles. The latter, in turn, may specify the energy of the slowest (and most numerous) particles of the pulsar wind. All in all, we see that both companion stars possess quite strong surface magnetic fields; hence we deal with so-called magnetic white dwarfs, probably with mega-Gauss magnetic fields. Let us note that such objects are quite common in the Galaxy (Lang, 1992).
The necessary condition for the development of the cyclotron resonance is that the characteristic timescale of the instability, $`\tau _d\simeq 1/|\mathrm{\Gamma }_d|`$, should be much less than the time the waves take to escape the resonance region, $`\tau _e\simeq 2R_e/c`$. The decrement $`\mathrm{\Gamma }_d`$ is given by equation (9) for the X-mode, and by equation (10) for the O and A-modes. Assuming a relatively broad distribution function of the companion’s magnetospheric plasma (so that $`\gamma _p\sim \gamma _T`$), the former can be rewritten as follows
$$\mathrm{\Gamma }_d^\mathrm{X}=0.8\frac{n_p(a)}{\gamma _p}\left(\frac{\nu _d}{[\mathrm{GHz}]}\right)^{-1},$$
(16)
while the latter differs from equation (16) by the angle-dependent multiplier $`g\left(\theta \right)`$ (eq. , see also Fig. 3.3). Thus, for the pulsar to be invisible at some frequency $`\nu _d(R_e)`$, the following two conditions should be satisfied at the corresponding distance $`R_e`$ from the companion’s surface:
$$\frac{\tau _e}{\tau _d}=\frac{2\left|\mathrm{\Gamma }_d\right|R_e}{c}\gg 1,$$
(17)
$$\frac{2\left|\mathrm{\Gamma }_d\right|R_e}{c}g\left(\theta \right)\gg 1.$$
(18)
Combining equations (16) and (3), and expressing $`R_e`$ from equation (13) with the use of first equation (14) and then equation (15), we can rewrite the condition for X-mode damping (eq. ) as
$$6\times 10^{-2}\frac{\kappa }{\gamma _p}\left(\frac{\nu _d}{[\mathrm{GHz}]}\right)^{-4/3}\gg 1$$
(19)
in the magnetosphere of the companion to PSR B$`1957+20`$, and as
$$3\times 10^{-2}\frac{\kappa }{\gamma _p}\left(\frac{\nu _d}{[\mathrm{GHz}]}\right)^{-4/3}\gg 1$$
(20)
in that of the companion to PSR J$`2051-0827`$, respectively. According to equation (18), the corresponding conditions for the O and A modes in each case differ from equations (19) and (20) by the angle-dependent multiplier $`g(\theta )`$ (eq. ) on the left-hand side of these inequalities.
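A quick numerical check (ours) of conditions (19) and (20), using the $`\tau _e/\tau _d\simeq 3`$ visibility threshold discussed just below:

```python
def damping_lhs(coeff, kappa, gamma_p, nu_ghz, g_theta=1.0):
    """LHS of eqs. (19)/(20), multiplied by g(theta) for the O and A modes
    as prescribed by eq. (18); coeff is 6e-2 or 3e-2 from the text."""
    return coeff * (kappa / gamma_p) * nu_ghz ** (-4.0 / 3.0) * g_theta

# PSR B1957+20 (coeff = 6e-2), X-mode, gamma_p = 100, kappa = 1e4:
for nu in (0.318, 1.4, 3.0):
    print(nu, damping_lhs(6e-2, 1e4, 100.0, nu))
# low frequencies give values well above ~3 (strong damping), while
# above a few GHz the X-mode condition drops below the threshold
```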
Figs. 3.3 and 3.3 display the damping conditions versus frequency in both cases. The solid lines correspond to damping of the X-mode at $`\gamma _p=50`$ and $`\gamma _p=100`$, and the dashed lines represent the damping condition of the coupled modes for the same values of the mean Lorentz factor and the angle values $`\theta =60^{\circ }`$ and $`\theta =80^{\circ }`$. It is clear that all the intermediate cases fall between the curves presented in these figures. The dashed horizontal line corresponds to the characteristic value $`\tau _e/\tau _d=3`$, at which the intensity of the waves drops by about $`e^{2\tau _e/\tau _d}\simeq 400`$ times.
It is seen in Figure 3.3 that in the case of PSR B$`1957+20`$ the damping conditions are well satisfied for all the plasma wavemodes if $`\gamma _p=50`$ (hence, $`\kappa =2\times 10^4`$). The damping is then very strong, and the pulsar radio emission certainly cannot reach an observer for this set of plasma parameters. However, the calculations corresponding to $`\gamma _p=100`$ (hence, $`\kappa =10^4`$) indicate that for such parameters the pulsar emission (at least one of the modes) may be visible at relatively high frequencies during a low-frequency eclipse. However, the present lack of high-frequency eclipse observations of this pulsar does not allow us to strictly constrain the parameters of the magnetospheric plasma of its companion.
Figure 3.3 shows that the set of parameters $`\gamma _p=100`$, $`\kappa =10^4`$ and $`\theta =60^{\circ }-80^{\circ }`$ is certainly ruled out in the case of PSR J$`2051-0827`$, because the predictions are not consistent with the observational data from this binary system. However, it is seen that for $`\gamma _p=50`$ (hence, $`\kappa =2\times 10^4`$) some critical frequency ($`\nu _d\simeq 1.4`$ GHz) may exist at which the O and A-modes propagating at a large angle $`\theta \simeq 80^{\circ }`$ to the magnetic field escape from the eclipse region without significant loss, while the X-mode is still strongly damped. This means that the pulsar remains visible at this frequency, although only the waves with one polarization can be detected. If such a situation occurs at the edge of the eclipse region (where the angle $`\theta `$ varies over quite a small range), one can observe significant linear polarization at eclipse ingress and egress over quite a large frequency range (up to about 3 GHz, according to Fig. 3.3). On the other hand, nonstationary processes taking place in the companion’s magnetospheric plasma (such as, e.g., conal or cyclotron instabilities) may induce significant variations of the plasma density there. This may affect the fulfillment of the damping conditions, resulting in the eventual cyclotron damping of all the wavemodes at the critical frequency $`\nu _d\simeq 1.4`$ GHz. Thus, one would observe an alternating eclipse at this frequency. Indeed, as was mentioned in Section 1.2, PSR J$`2051-0827`$ was detected at 1.4 GHz throughout the low-frequency eclipse region in about half of the observing sessions (Stappers et al., 1996a). Another reason for the absence of an eclipse at frequencies $`\nu \gtrsim 1.4`$ GHz could be a misalignment of the binary orbital plane with the observer’s line of sight. Indeed, even at a moderate inclination angle it is possible that the observer completely misses the narrow high-frequency “eclipsing spot” (with the radius $`R_e(1.4\mathrm{GHz})\simeq 0.22R_{\odot }`$), while at lower frequencies the pulsar still remains eclipsed at the same orbital phases (see Figs. 3.2 and 3.3).
## 4 Discussion
We acknowledge that the model presented above is to some extent idealized. This concerns, first of all, the energy distribution of the plasma particles, which we assumed to be stable and constant with altitude. In fact, a more realistic approach would imply that more energetic particles accumulate at larger distances from the stellar surface, whereas the mean Lorentz factors of the plasma particles decrease closer to the star, due to substantial synchrotron losses in the stronger magnetic field there. It is clear that this should affect the calculation of both the damping frequency (eq. ) and the decrement (eq. ). Indeed, according to our model, the eclipse length scales with frequency as $`\mathrm{\Delta }t_e\propto \nu ^{-0.33}`$. On the other hand, observations give $`\mathrm{\Delta }t_e\propto \nu ^{-0.4\pm 0.1}`$ for PSR B$`1957+20`$ and $`\mathrm{\Delta }t_e\propto \nu ^{-0.15}`$ for PSR J$`2051-0827`$, respectively. Such a deviation of the eclipse pattern from the theoretical prediction in both cases could result from the aforementioned variation of the plasma distribution function with altitude above the stellar surfaces.
Another factor which can have a strong impact on the eclipse process described above is the orientation and configuration of the magnetic field. One can speculate about the orientation of the companion’s magnetic field with respect to the star itself: its magnetic axis could be inclined to the rotation axis at an arbitrary yet constant angle (see Fig. 3.2). On the other hand, the long history of tidal interaction with the neutron star in such close binary systems as PSRs B$`1957+20`$ and J$`2051-0827`$ would align the angular momenta of the white dwarf and the neutron star with the binary orbital angular momentum. The same process would also lead to a situation in which the companion always faces the pulsar with one side (like the Moon facing the Earth), a fact apparently confirmed by the optical observations of both systems (see the discussion below). This implies that the companion’s magnetosphere in both cases is strictly oriented with respect to the pulsar wind, as well as to the pulsar radio emission cone (the latter being beamed around the pulsar magnetic axis $`\mu `$). This, according to our model, would provide relative stability of the eclipse pattern, excluding eclipse variations associated with any eventual precession of the companion’s magnetic axis around its rotation axis.
Regarding the configuration of the companion’s magnetic field, we assumed the latter to be purely dipolar, although one can hypothesize that the global magnetic field of the white dwarf is modified by either a substantial contribution of local magnetic fields close to the stellar surface or, e.g., a quadrupole component. Naturally, such a change of the magnetic field value and direction should affect wave damping in the companion’s magnetosphere, especially at high radio frequencies (which are damped closer to the stellar surface).
It is worth estimating how the pulsar wind can influence the companion’s magnetic field structure in the eclipse region. For that it is necessary to calculate the pulsar wind magnetic pressure
$$w_p\simeq \frac{B_p^2}{8\pi }=\frac{\pi }{c}\frac{I}{R^2}\frac{\dot{P}}{P^3}$$
(21)
at the distance of the companion, $`a`$ (here $`I\simeq 10^{45}`$ g cm<sup>2</sup> is the neutron star moment of inertia and $`R`$ is the distance from the pulsar), and to compare it with the companion’s magnetic pressure
$$w_c\simeq \frac{B_c^2}{8\pi }=4\times 10^{-2}B_c^2\left(\frac{R_c}{r}\right)^6,$$
(22)
in the eclipse region. For the parameters of each of the pulsars (see Section 1), equation (21) yields $`w_p(a_1)\simeq 14\mathrm{erg}\mathrm{cm}^{-3}`$ and $`w_p(a_2)\simeq 3\mathrm{erg}\mathrm{cm}^{-3}`$, respectively. On the other hand, at the distances from the white dwarf’s surface corresponding to the low-frequency eclipse region in each case ($`R_e(318\mathrm{MHz})=0.76R_{\odot }`$ and $`R_e(436\mathrm{MHz})=0.31R_{\odot }`$, respectively), we find that for the typical values given by equations (14) and (15) the magnetic pressures are $`w_{c1}=1.3\times 10^6\mathrm{erg}\mathrm{cm}^{-3}`$ and $`w_{c2}=2.8\times 10^6\mathrm{erg}\mathrm{cm}^{-3}`$, respectively. We see that $`w_p\ll w_c`$ in the eclipse region in both cases, so it can be said that the eclipse region of the white dwarf’s magnetosphere does not “feel” the pulsar wind at all, and the only (yet very important) role of the latter is to supply this magnetosphere with relativistic charged particles.
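These pressure estimates are easy to reproduce. In the following sketch (ours), the pulsar parameters $`P`$, $`\dot{P}`$ and the orbital separation $`a\simeq 2.5R_{\odot }`$ for PSR B1957+20, as well as the illustrative field value, are assumptions taken from outside this section:

```python
import math

I_NS = 1e45        # neutron star moment of inertia, g cm^2
C = 3e10           # speed of light, cm/s
R_SUN = 6.96e10    # cm

def wind_pressure(p_s, pdot, r_cm):
    """Pulsar wind magnetic pressure of eq. (21), in erg cm^-3."""
    return (math.pi / C) * (I_NS / r_cm**2) * pdot / p_s**3

def companion_pressure(b_c_gauss, rc_over_r):
    """Companion magnetic pressure of eq. (22), in erg cm^-3."""
    return b_c_gauss**2 / (8.0 * math.pi) * rc_over_r**6

print(wind_pressure(1.607e-3, 1.7e-20, 2.5 * R_SUN))   # ~14 erg cm^-3
print(companion_pressure(8e5, 0.145 / 0.76))           # ~1.2e6 erg cm^-3
```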
Moreover, one can estimate that the companion’s magnetosphere dominates the pulsar wind magnetic field over a vast expanse of both binary systems. For PSR B$`1957+20`$ the “inner front” of the companion’s magnetosphere, that is, the point between the white dwarf and the neutron star where $`w_p\simeq w_c`$, is located at the distance $`r_{\mathrm{inner}}\simeq 2.3R_{\odot }`$ from the companion, i.e. only about $`0.2R_{\odot }`$ from the pulsar surface (note that this distance is still equivalent to about $`1800`$ light cylinder radii of this pulsar). The latter means that the pulsar wind particles soon fall under the control of the companion’s magnetic field. At the same time, the companion’s magnetotail is stretched up to $`r_{\mathrm{outer}}\simeq 8.3R_{\odot }`$ away from the white dwarf. In the transverse direction the companion’s magnetosphere dominates the pulsar wind out to $`r_{\perp }\simeq 7.5R_{\odot }`$. The corresponding dimensions of the PSR J$`2051-0827`$ companion’s magnetosphere are $`r_{\mathrm{inner}}\simeq 0.97R_{\odot }`$, $`r_{\mathrm{outer}}\simeq 5.8R_{\odot }`$ and $`r_{\perp }\simeq 5.4R_{\odot }`$. We see that in this binary system the pulsar wind particles start moving in the companion’s field after they escape from the pulsar magnetosphere, at distances of more than about 10 of its light cylinder radii.
Let us finish the discussion with a few remarks concerning possible physical mechanisms of the observed high-energy emission from these remarkable astrophysical systems. As mentioned in Section 1, highly variable optical counterparts of the companion stars were observed in both cases, and $`X`$-rays were detected from PSR B$`1957+20`$. It is beyond question that the pulsar is the ultimate source of the $`X`$-ray energy, but there are still many uncertainties in the explanation of the actual physical mechanism. There are no data so far on the $`X`$-ray spectrum, and it is still not even clear whether these $`X`$-rays are emitted by the millisecond pulsar itself, or whether they originate in physical processes taking place somewhere between the pulsar and the white dwarf. Thompson (1995) points to the fact that the large spin-down luminosity of PSR B$`1957+20`$, taking into account the radius of the companion, gives an energy flux at the companion’s orbit which is seven times larger than that at the surface of our Sun. In the model of Arons & Tavani (1993) the pulsar wind, shocked by its interaction with the wind off the companion, emits $`X`$-rays and $`\gamma `$-rays with a soft $`X`$-ray luminosity comparable to that observed. A large fraction of this radiation is then absorbed by the companion’s surface facing the pulsar, due to the combination of relativistic beaming and the proximity of the companion to the shock. They show that the observed luminosity of $`X`$-rays can also easily power the companion’s optical emission.
To the best of our knowledge, there are so far no spectroscopic data on the optical counterparts of the companion stars in the two binaries. Let us first assume that the optical emission will be found to be nonthermal. The fastest “beam” particles of the pulsar wind, with $`\gamma _b\simeq 10^{5-6}`$, can reach the strong magnetic field at low altitudes above the stellar surface of the white dwarf, retaining their ultrarelativistic energies. Estimates show that such particles, gyrating around the magnetic field lines, can emit synchrotron radiation falling into the optical band. This might occur very close to the stellar surface, at distances from the companion’s center comparable with the companion’s optical sizes calculated from the observations (see Section 1).
Another mechanism can be suggested to explain the companion’s high-energy radiation if the latter proves to be of thermal nature. Relativistic electrons and positrons trapped by the magnetic field of the white dwarf should “bounce” between higher-field points near the North and South poles (in the same manner as in an ordinary magnetic trap or in the Earth’s magnetosphere) and slowly precess around the star due to $`\mathrm{}B`$ and curvature drifts. The most energetic particles, with the ratio $`v_{\parallel }/v_{\perp }`$ at the white dwarf’s equatorial plane satisfying
$$\frac{v_{\parallel }}{v_{\perp }}>\left(\frac{B_{\mathrm{max}}}{B_{\mathrm{min}}}-1\right)^{1/2}$$
(23)
are in the so-called “loss cone”, and should be “poured out” onto the magnetic poles. It can be demonstrated that broad polar regions of the companion star will be heated up to temperatures high enough to power the thermal $`X`$-ray emission. If this happens, the thermal optical emission can be just the long-wavelength tail of the Planck $`X`$-ray spectrum.
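A short sketch (ours; the dipole estimate of $`B_{\mathrm{max}}/B_{\mathrm{min}}`$ is our rough assumption) evaluates the loss-cone boundary of equation (23):

```python
import math

def loss_cone_ratio(b_max_over_b_min):
    """Critical v_par/v_perp of eq. (23): particles above this ratio would
    mirror below the surface and are instead poured onto the poles."""
    return math.sqrt(b_max_over_b_min - 1.0)

# rough dipole estimate: B_max/B_min ~ 2 (r/R_c)^3 between the equatorial
# plane at radius r and the polar footpoint, e.g. r = 3 R_c:
print(loss_cone_ratio(2.0 * 3.0**3))   # v_par/v_perp > ~7.3
```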
We would like to note that the same physical mechanism was found to operate in the case of the enigmatic pulsar Geminga (Gil et al., 1998). Namely, it was found that the cyclotron damping of radio waves in the magnetosphere of this pulsar, combined with its almost aligned geometry, leads to the apparent absence of its emission at radio frequencies higher than about 100 MHz. Moreover, we found that the cyclotron resonance has a strong impact on the general problem of the escape of radio waves from pulsar magnetospheres (Khechinashvili, 1999). We plan to develop the eclipse model in forthcoming papers, where we expect to achieve better agreement with the observations and also to include other eclipsing binary systems in the framework of our model.
We thank D. Lorimer for critical reading of the manuscript and many useful comments. This work is supported in part by the KBN Grants 2 P03D 015 12 and 2 P03D 003 15 of the Polish State Committee for Scientific Research. G. M. and D. K. also acknowledge support from the INTAS Grant 96-0154.
# Measurement of the $`B^0`$ and $`B^+`$ meson masses from $`B^0\to \psi ^{(\prime )}K_S^0`$ and $`B^+\to \psi ^{(\prime )}K^+`$ decays
## Abstract
Using a sample of $`9.6\times 10^6`$ $`B\overline{B}`$ meson pairs collected with the CLEO detector, we have fully reconstructed 135 $`B^0\to \psi ^{(\prime )}K_S^0`$ and 526 $`B^+\to \psi ^{(\prime )}K^+`$ candidates with very low background. We fitted the $`\psi ^{(\prime )}K`$ invariant mass distributions of these $`B`$ meson candidates and measured the masses of the neutral and charged $`B`$ mesons to be $`M(B^0)=5279.1\pm 0.7[\mathrm{stat}]\pm 0.3[\mathrm{syst}]`$ MeV/$`c^2`$ and $`M(B^+)=5279.1\pm 0.4[\mathrm{stat}]\pm 0.4[\mathrm{syst}]`$ MeV/$`c^2`$. The precision is a significant improvement over previous measurements.
preprint: CLNS 99/1658 CLEO 99-22
S. E. Csorna,<sup>1</sup> I. Danko,<sup>1</sup> K. W. McLean,<sup>1</sup> Sz. Márka,<sup>1</sup> Z. Xu,<sup>1</sup> R. Godang,<sup>2</sup> K. Kinoshita,<sup>2,</sup><sup>*</sup><sup>*</sup>*Permanent address: University of Cincinnati, Cincinnati OH 45221 I. C. Lai,<sup>2</sup> S. Schrenk,<sup>2</sup> G. Bonvicini,<sup>3</sup> D. Cinabro,<sup>3</sup> L. P. Perera,<sup>3</sup> G. J. Zhou,<sup>3</sup> G. Eigen,<sup>4</sup> E. Lipeles,<sup>4</sup> M. Schmidtler,<sup>4</sup> A. Shapiro,<sup>4</sup> W. M. Sun,<sup>4</sup> A. J. Weinstein,<sup>4</sup> F. Würthwein,<sup>4,</sup>Permanent address: Massachusetts Institute of Technology, Cambridge, MA 02139. D. E. Jaffe,<sup>5</sup> G. Masek,<sup>5</sup> H. P. Paar,<sup>5</sup> E. M. Potter,<sup>5</sup> S. Prell,<sup>5</sup> V. Sharma,<sup>5</sup> D. M. Asner,<sup>6</sup> A. Eppich,<sup>6</sup> T. S. Hill,<sup>6</sup> R. Kutschke,<sup>6</sup> D. J. Lange,<sup>6</sup> R. J. Morrison,<sup>6</sup> A. Ryd,<sup>6</sup> R. A. Briere,<sup>7</sup> B. H. Behrens,<sup>8</sup> W. T. Ford,<sup>8</sup> A. Gritsan,<sup>8</sup> J. Roy,<sup>8</sup> J. G. Smith,<sup>8</sup> J. P. Alexander,<sup>9</sup> R. Baker,<sup>9</sup> C. Bebek,<sup>9</sup> B. E. Berger,<sup>9</sup> K. Berkelman,<sup>9</sup> F. Blanc,<sup>9</sup> V. Boisvert,<sup>9</sup> D. G. Cassel,<sup>9</sup> M. Dickson,<sup>9</sup> P. S. Drell,<sup>9</sup> K. M. Ecklund,<sup>9</sup> R. Ehrlich,<sup>9</sup> A. D. Foland,<sup>9</sup> P. Gaidarev,<sup>9</sup> L. Gibbons,<sup>9</sup> B. Gittelman,<sup>9</sup> S. W. Gray,<sup>9</sup> D. L. Hartill,<sup>9</sup> B. K. Heltsley,<sup>9</sup> P. I. Hopman,<sup>9</sup> C. D. Jones,<sup>9</sup> D. L. Kreinick,<sup>9</sup> M. Lohner,<sup>9</sup> A. Magerkurth,<sup>9</sup> T. O. Meyer,<sup>9</sup> N. B. Mistry,<sup>9</sup> E. Nordberg,<sup>9</sup> J. R. Patterson,<sup>9</sup> D. Peterson,<sup>9</sup> D. Riley,<sup>9</sup> J. G. Thayer,<sup>9</sup> P. G. Thies,<sup>9</sup> B. Valant-Spaight,<sup>9</sup> A. Warburton,<sup>9</sup> P. Avery,<sup>10</sup> C. Prescott,<sup>10</sup> A. I. Rubiera,<sup>10</sup> J. Yelton,<sup>10</sup> J. Zheng,<sup>10</sup> G. Brandenburg,<sup>11</sup> A. Ershov,<sup>11</sup> Y. S. Gao,<sup>11</sup> D. Y.-J. Kim,<sup>11</sup> R. Wilson,<sup>11</sup> T. E. Browder,<sup>12</sup> Y. Li,<sup>12</sup> J. L. Rodriguez,<sup>12</sup> H. Yamamoto,<sup>12</sup> T. Bergfeld,<sup>13</sup> B. I. Eisenstein,<sup>13</sup> J. Ernst,<sup>13</sup> G. E. Gladding,<sup>13</sup> G. D. Gollin,<sup>13</sup> R. M. Hans,<sup>13</sup> E. Johnson,<sup>13</sup> I. Karliner,<sup>13</sup> M. A. Marsh,<sup>13</sup> M. Palmer,<sup>13</sup> C. Plager,<sup>13</sup> C. Sedlack,<sup>13</sup> M. Selen,<sup>13</sup> J. J. Thaler,<sup>13</sup> J. Williams,<sup>13</sup> K. W. Edwards,<sup>14</sup> R. Janicek,<sup>15</sup> P. M. Patel,<sup>15</sup> A. J. Sadoff,<sup>16</sup> R. Ammar,<sup>17</sup> A. Bean,<sup>17</sup> D. Besson,<sup>17</sup> R. Davis,<sup>17</sup> N. Kwak,<sup>17</sup> X. Zhao,<sup>17</sup> S. Anderson,<sup>18</sup> V. V. Frolov,<sup>18</sup> Y. Kubota,<sup>18</sup> S. J. Lee,<sup>18</sup> R. Mahapatra,<sup>18</sup> J. J. O’Neill,<sup>18</sup> R. Poling,<sup>18</sup> T. Riehle,<sup>18</sup> A. Smith,<sup>18</sup> J. Urheim,<sup>18</sup> S. Ahmed,<sup>19</sup> M. S. Alam,<sup>19</sup> S. B. Athar,<sup>19</sup> L. Jian,<sup>19</sup> L. Ling,<sup>19</sup> A. H. Mahmood,<sup>19,</sup>Permanent address: University of Texas - Pan American, Edinburg TX 78539. M. Saleem,<sup>19</sup> S. Timm,<sup>19</sup> F. Wappler,<sup>19</sup> A. Anastassov,<sup>20</sup> J. E. 
Duboscq,<sup>20</sup> K. K. Gan,<sup>20</sup> C. Gwon,<sup>20</sup> T. Hart,<sup>20</sup> K. Honscheid,<sup>20</sup> D. Hufnagel,<sup>20</sup> H. Kagan,<sup>20</sup> R. Kass,<sup>20</sup> T. K. Pedlar,<sup>20</sup> H. Schwarthoff,<sup>20</sup> J. B. Thayer,<sup>20</sup> E. von Toerne,<sup>20</sup> M. M. Zoeller,<sup>20</sup> S. J. Richichi,<sup>21</sup> H. Severini,<sup>21</sup> P. Skubic,<sup>21</sup> A. Undrus,<sup>21</sup> S. Chen,<sup>22</sup> J. Fast,<sup>22</sup> J. W. Hinson,<sup>22</sup> J. Lee,<sup>22</sup> N. Menon,<sup>22</sup> D. H. Miller,<sup>22</sup> E. I. Shibata,<sup>22</sup> I. P. J. Shipsey,<sup>22</sup> V. Pavlunin,<sup>22</sup> D. Cronin-Hennessy,<sup>23</sup> Y. Kwon,<sup>23,</sup><sup>§</sup><sup>§</sup>§Permanent address: Yonsei University, Seoul 120-749, Korea. A.L. Lyon,<sup>23</sup> E. H. Thorndike,<sup>23</sup> C. P. Jessop,<sup>24</sup> H. Marsiske,<sup>24</sup> M. L. Perl,<sup>24</sup> V. Savinov,<sup>24</sup> D. Ugolini,<sup>24</sup> X. Zhou,<sup>24</sup> T. E. Coan,<sup>25</sup> V. Fadeyev,<sup>25</sup> Y. Maravin,<sup>25</sup> I. Narsky,<sup>25</sup> R. Stroynowski,<sup>25</sup> J. Ye,<sup>25</sup> T. Wlodek,<sup>25</sup> M. Artuso,<sup>26</sup> R. Ayad,<sup>26</sup> C. Boulahouache,<sup>26</sup> K. Bukin,<sup>26</sup> E. Dambasuren,<sup>26</sup> S. Karamov,<sup>26</sup> S. Kopp,<sup>26</sup> G. Majumder,<sup>26</sup> G. C. Moneti,<sup>26</sup> R. Mountain,<sup>26</sup> S. Schuh,<sup>26</sup> T. Skwarnicki,<sup>26</sup> S. Stone,<sup>26</sup> G. Viehhauser,<sup>26</sup> J.C. Wang,<sup>26</sup> A. Wolf,<sup>26</sup> and J. Wu<sup>26</sup>
<sup>1</sup>Vanderbilt University, Nashville, Tennessee 37235
<sup>2</sup>Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061
<sup>3</sup>Wayne State University, Detroit, Michigan 48202
<sup>4</sup>California Institute of Technology, Pasadena, California 91125
<sup>5</sup>University of California, San Diego, La Jolla, California 92093
<sup>6</sup>University of California, Santa Barbara, California 93106
<sup>7</sup>Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
<sup>8</sup>University of Colorado, Boulder, Colorado 80309-0390
<sup>9</sup>Cornell University, Ithaca, New York 14853
<sup>10</sup>University of Florida, Gainesville, Florida 32611
<sup>11</sup>Harvard University, Cambridge, Massachusetts 02138
<sup>12</sup>University of Hawaii at Manoa, Honolulu, Hawaii 96822
<sup>13</sup>University of Illinois, Urbana-Champaign, Illinois 61801
<sup>14</sup>Carleton University, Ottawa, Ontario, Canada K1S 5B6
and the Institute of Particle Physics, Canada
<sup>15</sup>McGill University, Montréal, Québec, Canada H3A 2T8
and the Institute of Particle Physics, Canada
<sup>16</sup>Ithaca College, Ithaca, New York 14850
<sup>17</sup>University of Kansas, Lawrence, Kansas 66045
<sup>18</sup>University of Minnesota, Minneapolis, Minnesota 55455
<sup>19</sup>State University of New York at Albany, Albany, New York 12222
<sup>20</sup>Ohio State University, Columbus, Ohio 43210
<sup>21</sup>University of Oklahoma, Norman, Oklahoma 73019
<sup>22</sup>Purdue University, West Lafayette, Indiana 47907
<sup>23</sup>University of Rochester, Rochester, New York 14627
<sup>24</sup>Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309
<sup>25</sup>Southern Methodist University, Dallas, Texas 75275
<sup>26</sup>Syracuse University, Syracuse, New York 13244
The previous measurements of the $`B`$ meson masses at $`e^+e^-`$ colliders operating at the $`\mathrm{\Upsilon }(4S)`$ energy were obtained from fits to the distributions of the beam-constrained $`B`$ mass, defined as $`M_{\mathrm{bc}}\equiv \sqrt{E_{\mathrm{beam}}^2-p^2(B)}`$, where $`p(B)`$ is the absolute value of the $`B`$ candidate momentum. Substitution of the beam energy for the measured energy of the $`B`$ meson candidate results in a significant improvement of the mass resolution; therefore, the beam-constrained mass method is the technique of choice for the $`M(B^0)-M(B^+)`$ mass difference measurement. However, the precision of the measurement of the absolute $`B^0`$ and $`B^+`$ meson masses is limited by the systematic uncertainties in the absolute beam energy scale and in the correction for initial state radiation. For this measurement we selected $`B^0\to \psi ^{(\prime )}K_S^0`$ and $`B^+\to \psi ^{(\prime )}K^+`$ candidates, reconstructing $`\psi ^{(\prime )}\to \ell ^+\ell ^-`$ and $`K_S^0\to \pi ^+\pi ^-`$ decays. We used both $`e^+e^-`$ and $`\mu ^+\mu ^-`$ modes for the $`\psi ^{(\prime )}`$ reconstruction. We then determined the $`B^0`$ and $`B^+`$ meson masses by fitting the $`\psi ^{(\prime )}K_S^0`$ and $`\psi ^{(\prime )}K^+`$ invariant mass distributions. The main reasons to use the $`\psi ^{(\prime )}K`$ rather than the more copious $`D^{(*)}n\pi `$ final states are, first, that the background is very low; second, that the $`J/\psi `$ and $`\psi (2S)`$ mesons are heavy, and their masses are very well measured. As discussed below, constraining the reconstructed $`J/\psi `$ and $`\psi (2S)`$ masses to their world average values makes our $`B`$ mass measurement insensitive to imperfections in the lepton momentum reconstruction. By comparing the beam-constrained $`B`$ mass to the $`B^0`$ and $`B^+`$ mass values obtained in our measurement, one could set the absolute beam energy scale at the $`e^+e^-`$ colliders operating in the $`\mathrm{\Upsilon }(4S)`$ energy region.
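For illustration, the beam-constrained mass is simple to compute; the following sketch (ours, with illustrative input values) evaluates it for a typical $`\mathrm{\Upsilon }(4S)`$ event:

```python
import math

def beam_constrained_mass(e_beam, px, py, pz):
    """M_bc = sqrt(E_beam^2 - |p(B)|^2), with E_beam and the
    B-candidate momentum components in GeV."""
    p2 = px**2 + py**2 + pz**2
    return math.sqrt(e_beam**2 - p2)

# At the Upsilon(4S), E_beam ~ 5.29 GeV and |p(B)| ~ 0.33 GeV/c, so the
# substitution of E_beam for the measured B energy makes the resolution
# sensitive mainly to the beam energy spread:
print(beam_constrained_mass(5.290, 0.33, 0.0, 0.0))   # ~5.280 GeV/c^2
```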
The data were collected at the Cornell Electron Storage Ring (CESR) with two configurations of the CLEO detector, called CLEO II and CLEO II.V. The components of the CLEO detector most relevant to this analysis are the charged particle tracking system, the CsI electromagnetic calorimeter, and the muon chambers. In CLEO II, the momenta of charged particles are measured in a tracking system consisting of a 6-layer straw tube chamber, 10-layer precision drift chamber, and 51-layer main drift chamber, all operating inside a 1.5 T solenoidal magnet. The main drift chamber also provides a measurement of the specific ionization, $`dE/dx`$, used for particle identification. For CLEO II.V, the straw tube chamber was replaced with a 3-layer silicon vertex detector. The muon chambers consist of proportional counters placed at various depths in the steel absorber. Track fitting was performed using a Kalman filtering technique, first applied to track fitting by P. Billoir. The track fit sequentially adds the measurements provided by the tracking system to correctly take into account multiple scattering and energy loss of a particle in the detector material. For each physical track, separate fits are performed using different particle hypotheses.
For this measurement we used 9.1 $`\mathrm{fb}^{-1}`$ of $`e^+e^-`$ data taken at the $`\mathrm{\Upsilon }(4S)`$ energy and 4.4 $`\mathrm{fb}^{-1}`$ recorded 60 MeV below the $`\mathrm{\Upsilon }(4S)`$ energy. Two thirds of the data used were collected with the CLEO II.V detector. All of the simulated event samples used in this analysis were generated with a GEANT-based simulation of the CLEO detector response and were processed in a similar manner to the data.
Electron candidates were identified based on the ratio of the track momentum to the associated shower energy in the CsI calorimeter and specific ionization in the drift chamber. We recovered some of the bremsstrahlung photons by selecting the photon shower with the smallest opening angle with respect to the direction of the $`e^\pm `$ track evaluated at the interaction point, and then requiring this opening angle to be smaller than $`5^{\circ }`$. For the $`\psi ^{(\prime )}\to \mu ^+\mu ^-`$ reconstruction one of the muon candidates was required to penetrate the steel absorber to a depth greater than 3 nuclear interaction lengths. We relaxed the absorber penetration requirement for the second muon candidate if it pointed to the end-cap muon chambers and its energy was too low to reach a counter. For these muon candidates we required the ionization signature in the CsI calorimeter to be consistent with that of a muon.
We extensively used normalized variables, taking advantage of well-understood track and photon-shower four-momentum covariance matrices to calculate the expected resolution for each combination. The use of normalized variables allows uniform candidate selection criteria to be applied to the data collected with the CLEO II and CLEO II.V detector configurations. The $`e^+(\gamma )e^-(\gamma )`$ and $`\mu ^+\mu ^-`$ invariant mass distributions for the $`\psi ^{(\prime )}\to \ell ^+\ell ^-`$ candidates in data are shown in Fig. 1. We selected the $`\psi ^{(\prime )}\to \ell ^+\ell ^-`$ signal candidates by requiring the absolute value of the normalized invariant mass to be less than $`3`$. The average $`\ell ^+\ell ^-`$ invariant mass resolution is approximately 12 MeV$`/c^2`$. For each $`\psi ^{(\prime )}`$ candidate we performed a fit constraining its mass to the world average value. This mass-constraint fit improves the $`\psi ^{(\prime )}`$ energy resolution by almost a factor of 4 and the absolute momentum resolution by 30%.
The $`K_S^0`$ candidates were selected from pairs of tracks forming well-measured displaced vertices. The daughter pion tracks were re-fitted taking into account the position of the displaced vertex and were constrained to originate from the same spatial point. The resolution in $`\pi ^+\pi ^-`$ invariant mass is approximately 4 MeV$`/c^2`$. After requiring the absolute value of the normalized $`\pi ^+\pi ^-`$ invariant mass to be less than 3, we performed a fit constraining the mass of each $`K_S^0`$ candidate to the world average value.
The $`B\to \psi ^{(\prime )}K`$ candidates were selected by means of two observables. The first observable is the beam-constrained $`B`$ mass $`M_{\mathrm{bc}}\equiv \sqrt{E_{\mathrm{beam}}^2-p^2(B)}`$. The resolution in $`M_{\mathrm{bc}}`$ for the $`B\to \psi ^{(\prime )}K`$ candidates is approximately 2.7 MeV/$`c^2`$ and is dominated by the beam energy spread. We required $`|M_{\mathrm{bc}}-5280\mathrm{MeV}/c^2|/\sigma (M_{\mathrm{bc}})<3`$. The requirement on $`M_{\mathrm{bc}}`$, equivalent to a requirement on the absolute value of the $`B`$ candidate momentum, is used only for background suppression, and does not bias the $`B`$ mass measurement. The second observable is the invariant mass of the $`\psi ^{(\prime )}K`$ system. The average resolutions in $`M(\psi ^{(\prime )}K_S^0)`$ and $`M(\psi ^{(\prime )}K^+)`$ are respectively 8 MeV/$`c^2`$ and 11 MeV/$`c^2`$. The $`M(\psi ^{(\prime )}K)`$ distributions for the candidates passing the beam-constrained $`B`$ mass requirement are shown in Fig. 2. To select signal candidates, we required $`|M(\psi ^{(\prime )}K_S^0)-5280\mathrm{MeV}/c^2|/\sigma (M)<4`$ and $`|M(\psi ^{(\prime )}K^+)-5280\mathrm{MeV}/c^2|/\sigma (M)<3`$; the allowed invariant mass intervals are sufficiently wide not to introduce bias in the $`B`$ mass measurement. These selections yielded 135 $`B^0\to \psi ^{(\prime )}K_S^0`$ candidates: 125 in the $`B^0\to J/\psi K_S^0`$ mode and 10 in the $`B^0\to \psi (2S)K_S^0`$ mode. We estimated the background to be $`0.13_{-0.05}^{+0.09}`$ events. The selections yielded 526 $`B^+\to \psi ^{(\prime )}K^+`$ candidates: 468 in the $`B^+\to J/\psi K^+`$ mode and 58 in the $`B^+\to \psi (2S)K^+`$ mode. The background from $`B^+\to \psi ^{(\prime )}\pi ^+`$ decays was estimated to be $`0.9\pm 0.3`$ events, whereas all other background sources were estimated to contribute $`2.3_{-0.5}^{+1.0}`$ events. The backgrounds were evaluated with simulated events and the data recorded at the energy below the $`B\overline{B}`$ production threshold. We discuss the systematics associated with background below, together with other systematic uncertainties.
The $`B`$ meson masses were extracted from the $`\psi ^{()}K_S^0`$ and $`\psi ^{()}K^+`$ invariant mass distributions with an unbinned likelihood fit. The likelihood function is
$`\mathcal{L}(M(B),S)={\displaystyle \prod _i}{\displaystyle \frac{G(M_i-M(B)|S\sigma _i)}{\int G(M-M(B)|S\sigma _i)𝑑M}},`$ (1)
where $`M_i`$ is the invariant mass of a $`\psi ^{(\prime )}K`$ combination, $`\sigma _i`$ is the calculated invariant mass uncertainty for that $`\psi ^{(\prime )}K`$ combination, and $`G(x|\sigma )\equiv 1/(\sqrt{2\pi }\sigma )\mathrm{exp}(-x^2/2\sigma ^2)`$. The product is over the $`B^0`$ or $`B^+`$ meson candidates. The parameters of the fit are the $`B`$ meson mass $`M(B)`$ and a global scale factor $`S`$ that modifies the calculated invariant-mass uncertainties $`\sigma _i`$. The integration limits of the normalization integral in the denominator correspond to the signal regions defined for the $`\psi ^{(\prime )}K`$ invariant mass distributions: \[5280 MeV/$`c^2\pm 4\sigma (M)`$\] for $`B^0`$ and \[5280 MeV/$`c^2\pm 3\sigma (M)`$\] for $`B^+`$ candidates. From the fits to the $`\psi ^{(\prime )}K_S^0`$ invariant-mass distribution, we obtained $`M(B^0)=5278.97\pm 0.67`$ MeV/$`c^2`$, $`S=1.24\pm 0.08`$, and the correlation coefficient $`\rho (M(B^0),S)=0.013`$. For $`\psi ^{(\prime )}K^+`$ we obtained $`M(B^+)=5279.50\pm 0.41`$ MeV/$`c^2`$, $`S=1.09\pm 0.04`$, and $`\rho (M(B^+),S)=0.015`$. The values of the scale factor $`S`$ and uncertainty in $`M(B)`$ returned by the fits are in good agreement with the values obtained from simulated events.
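As an illustration of the fit defined in equation (1), a minimal unbinned maximum-likelihood implementation on toy data might look as follows (a sketch of ours, not the analysis code; the common signal window is a simplification of the per-candidate windows used above):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def nll(params, m, sig, lo, hi):
    """-log of Eq. (1): per-event Gaussians of width S*sigma_i,
    each normalized over the signal window [lo, hi]."""
    mb, s = params
    w = s * sig
    num = norm.pdf(m, loc=mb, scale=w)
    den = norm.cdf(hi, loc=mb, scale=w) - norm.cdf(lo, loc=mb, scale=w)
    return -np.sum(np.log(num / den))

# toy candidates with per-event mass uncertainties (GeV/c^2)
rng = np.random.default_rng(1)
sig = rng.uniform(0.008, 0.012, 500)
m = rng.normal(5.2791, 1.1 * sig)          # true M(B), true S = 1.1
lo, hi = 5.280 - 0.033, 5.280 + 0.033      # common window (simplification)
keep = (m > lo) & (m < hi)
res = minimize(nll, x0=[5.28, 1.0], args=(m[keep], sig[keep], lo, hi))
print(res.x)                                # fitted [M(B), S]
```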
Table I lists the bias corrections together with associated systematic uncertainties, which we will discuss below.
(i) Measuring $`B`$ masses using simulated events. — Applying the same procedure as in the data analysis, we measured the $`B^0`$ and $`B^+`$ masses using 30286 $`B^0\to \psi ^{(\prime )}K_S^0`$ and 34519 $`B^+\to \psi ^{(\prime )}K^+`$ candidates reconstructed from a sample of simulated events. We obtained $`M(B^0)-M_{B^0}^{\mathrm{input}}=-0.08\pm 0.04`$ MeV/$`c^2`$ and $`M(B^+)-M_{B^+}^{\mathrm{input}}=+0.13\pm 0.05`$ MeV/$`c^2`$. We applied $`+0.08`$ MeV/$`c^2`$ and $`-0.13`$ MeV/$`c^2`$ corrections to the $`B^0`$ and $`B^+`$ mass values and assigned 100% of those corrections as systematic uncertainties.
(ii) Background. — The estimated mean background for the $`B^0`$ candidates is $`0.13_{-0.05}^{+0.09}`$ events; we therefore conservatively assumed the probability of finding a background event in our sample to be 22%. The $`B^0`$ background candidates are expected to be uniformly distributed across the $`B^0`$ mass signal region. We performed the $`B^0`$ mass fits excluding the one candidate with the highest or the lowest normalized $`\psi ^{(\prime )}K_S^0`$ invariant mass; the largest observed $`B^0`$ mass shift was 0.25 MeV/$`c^2`$. We multiplied this 0.25 MeV/$`c^2`$ shift by the 22% probability of having a background event in our sample and assigned 0.05 MeV/$`c^2`$ as the systematic uncertainty in the $`B^0`$ mass due to background. For the $`B^+`$ signal, the background from $`B^+\to \psi ^{(\prime )}\pi ^+`$ decays was estimated to be $`0.9\pm 0.3`$ events; all other background sources were estimated to contribute $`2.3_{-0.5}^{+1.0}`$ events. The $`B^+`$ background candidates, with the exception of $`B^+\to \psi ^{(\prime )}\pi ^+`$ events, are expected to be uniformly distributed across the $`B`$ mass signal region. The $`B^+\to \psi ^{(\prime )}\pi ^+`$ events reconstructed as $`B^+\to \psi ^{(\prime )}K^+`$ produce a high $`\psi ^{(\prime )}K^+`$ invariant mass. We performed the $`B^+`$ mass fits excluding 4 candidates with the highest or the lowest normalized $`\psi ^{(\prime )}K^+`$ invariant mass and assigned the largest shift of the measured $`B^+`$ mass (0.17 MeV/$`c^2`$) as the systematic uncertainty in the $`B^+`$ mass due to background.
(iii) $`B`$ mass likelihood fit. — We studied the systematics associated with the unbinned likelihood fit procedure by changing the fit function from a Gaussian to a sum of two Gaussians. We also allowed the fit to determine different scale factors $`S`$ for the candidates coming from CLEO II and CLEO II.V data, or for the candidates with $`\psi ^{(\prime )}\to e^+e^-`$ and $`\psi ^{(\prime )}\to \mu ^+\mu ^-`$. We assigned the largest shift of the measured $`B`$ mass as the systematic uncertainty.
(iv) $`\psi ^{(\prime )}`$ four-momentum measurement. — Even if the $`B`$ mass measurements using the simulated events show negligible bias, a bias in the measurement is still in principle possible because of the uncertainty in the absolute magnetic field scale, an imperfect description of the detector material used by the Billoir fitter, or detector misalignment. For the $`\psi ^{(\prime )}`$ four-momentum measurement, these systematic effects along with the systematics associated with bremsstrahlung are rendered negligible by the $`\psi ^{(\prime )}`$ mass-constraint fit. The measured position of the $`J/\psi `$ mass peak allows a reliable evaluation of the possible bias in the lepton momentum measurement. We measured the positions of the $`J/\psi \to \mu ^+\mu ^-`$ and $`J/\psi \to e^+e^-`$ peaks by fitting the inclusive $`\mu ^+\mu ^-`$ and $`e^+(\gamma )e^-(\gamma )`$ invariant mass distributions. In these fits we used the signal shapes derived from a high-statistics sample of simulated $`J/\psi \to \ell ^+\ell ^-`$ events generated with the $`J/\psi `$ mass of 3096.88 MeV/$`c^2`$. In simulated $`J/\psi \to \mu ^+\mu ^-`$ events, the reconstruction procedure introduces a bias of less than 0.03 MeV/$`c^2`$ in the measured $`J/\psi `$ mass. We found that the $`J/\psi \to \mu ^+\mu ^-`$ peak was shifted by $`+0.5\pm 0.2`$ MeV/$`c^2`$ in data compared to simulated events; the corresponding value of the $`J/\psi \to e^+e^-`$ peak shift was $`+0.7\pm 0.2`$ MeV/$`c^2`$. A $`+0.5`$ MeV/$`c^2`$ shift corresponds to overestimation of the lepton absolute momenta by approximately 0.02%. A variation of the lepton absolute momenta by 0.1% produced a shift of less than 0.02 MeV/$`c^2`$ in the measured $`B`$ mass. We therefore neglected the systematic uncertainty associated with the lepton momentum measurement. In addition, we varied the world average $`J/\psi `$ and $`\psi (2S)`$ mass values used in the mass-constraint fits by one standard deviation; the resulting 0.05 MeV/$`c^2`$ and 0.06 MeV/$`c^2`$ shifts of the measured $`B^0`$ and $`B^+`$ masses were assigned as systematic uncertainties.
(v) $`K_S^0`$ four-momentum measurement. — The systematic uncertainty of our $`B`$ mass measurement is dominated by a possible bias in the kaon four-momentum measurement. The measured position of the $`K_S^0`$ mass peak allows a reliable evaluation of the possible bias in the $`K_S^0`$ four-momentum measurement. We selected inclusive $`K_S^0`$ candidates satisfying the same $`K_S^0`$ selection criteria as in the $`B^0\to \psi ^{(\prime )}K_S^0`$ analysis; the momenta of the selected inclusive $`K_S^0`$ candidates were further restricted to be from 1.55 to 1.85 GeV/$`c`$, which corresponds to the momentum range of the $`K_S^0`$ mesons from $`B^0\to J/\psi K_S^0`$ decays. Using this sample, we measured the mean reconstructed $`K^0`$ mass to be within 10 keV/$`c^2`$ of the world average value of $`497.672\pm 0.031`$ MeV/$`c^2`$. However, we also observed a $`\pm 40`$ keV/$`c^2`$ variation of the measured mean $`K^0`$ mass depending on the radial position of the $`K_S^0`$ decay vertex. To assign the systematic uncertainty, we conservatively took the $`K^0`$ mass shift to be 40 keV/$`c^2`$ and added in quadrature the 30 keV/$`c^2`$ uncertainty in the world average $`K^0`$ mass to obtain a total shift of 50 keV/$`c^2`$. This 50 keV/$`c^2`$ variation in the measured $`K_S^0`$ mass could be obtained by varying each daughter pion’s momentum by 0.018%; the resulting variation of the measured $`B^0`$ mass was 0.26 MeV/$`c^2`$, which we assigned as a systematic uncertainty due to the $`K_S^0`$ four-momentum measurement. This uncertainty in $`M(B^0)`$ has a contribution from the uncertainty in the world average value of the $`K^0`$ mass, which partially limited the precision of our $`K_S^0`$ mass peak position measurement. In addition, we varied by one standard deviation the world average $`K^0`$ mass value used for the $`K_S^0`$ mass-constraint fit; the resulting 0.04 MeV/$`c^2`$ variation of the measured $`B^0`$ mass was added to the systematic uncertainty.
(vi) $`K^+`$ momentum measurement. — Comparing the momentum spectra of the muons from inclusive $`J/\psi `$ decays and the kaons from $`B^+\to \psi ^{(\prime )}K^+`$ decays, we concluded that $`J/\psi \to \mu ^+\mu ^-`$ decays provide an excellent calibration sample for the study of the systematic uncertainty associated with the $`K^+`$ momentum measurement. As discussed above, the observed $`+0.5\pm 0.2`$ MeV/$`c^2`$ shift of the $`J/\psi \to \mu ^+\mu ^-`$ mass peak corresponds to a systematic overestimation of the muon momenta by 0.02%. We decreased the measured $`K^+`$ momenta by 0.02%, which resulted in a $`-0.32`$ MeV/$`c^2`$ shift of the measured $`B^+`$ mass. We applied a $`-0.32`$ MeV/$`c^2`$ correction to our final result and assigned $`100\%`$ of the correction value as the systematic uncertainty. The ionization energy loss for muons from inclusive $`J/\psi `$’s differs slightly from that for kaons from $`B^+\to \psi ^{(\prime )}K^+`$ decays. To account for the systematic uncertainty due to this difference, we measured the $`B^+`$ mass using the pion Billoir fit hypothesis for kaon tracks. The resulting shift (0.08 MeV/$`c^2`$) was added in quadrature to the systematic uncertainty. Because of the acceptance of the muon chambers, the muons pointing to the end-cap region of the detector are under-represented in comparison with the kaons from the $`B^+\to \psi ^{(\prime )}K^+`$ decays. The tracks with low transverse momentum are more likely to be affected by the magnetic field inhomogeneity, thus providing an additional source of systematic bias, which will not be taken into account by studying $`J/\psi \to \mu ^+\mu ^-`$ decays. However, if a $`K^+`$ track has low transverse momentum, then its track parameters are poorly measured, and the mass fit naturally assigns a low weight to this $`B^+`$ candidate. We studied the possible systematic bias both by varying the measured $`K^+`$ momentum by 0.1% for the $`K^+`$ tracks with $`|\mathrm{cos}\theta |>0.8`$, where $`\theta `$ is the angle between a track and the beam direction, and by excluding these low angle tracks altogether. The largest shift (0.08 MeV/$`c^2`$) was added in quadrature to the systematic uncertainty.
(vii) Detector misalignment. — The detector misalignment effects were studied with high-momentum muon tracks from $`e^+e^{}\mu ^+\mu ^{}`$ events. We measured the mean of the transverse momentum difference between the $`\mu ^+`$ and $`\mu ^{}`$ tracks. We also studied the dependence of the sum of the $`\mu ^+`$ and $`\mu ^{}`$ momenta on azimuthal angle $`\varphi `$ and polar angle $`\theta `$ of the $`\mu ^+`$ track. We parametrized our findings in terms of an average as well as $`\varphi `$\- and $`\theta `$-dependent false curvature. We varied the measured curvature of the signal candidate tracks according to these parametrizations and found the detector misalignment effects to be negligible for our $`B`$ mass measurements.
In conclusion, we have determined the masses of the neutral and charged $`B`$ mesons with significantly better precision than any previously published result. We obtained $`M(B^0)=5279.1\pm 0.7[\mathrm{stat}]\pm 0.3[\mathrm{syst}]`$ MeV/$`c^2`$ and $`M(B^+)=5279.1\pm 0.4[\mathrm{stat}]\pm 0.4[\mathrm{syst}]`$ MeV/$`c^2`$. The systematic uncertainties for the $`M(B^0)`$ and $`M(B^+)`$ measurements are independent except for the small common uncertainties due to the imperfect knowledge of the $`J/\psi `$ and $`\psi (2S)`$ masses (item (iv) in Table I). Combining our $`M(B^0)`$ and $`M(B^+)`$ measurements with the world average value of the mass difference $`M(B^0)-M(B^+)=0.34\pm 0.32`$ MeV/$`c^2`$, we obtained $`M(B^0)=5279.2\pm 0.5`$ MeV/$`c^2`$ and $`M(B^+)=5278.9\pm 0.5`$ MeV/$`c^2`$. Although these $`M(B^0)`$ and $`M(B^+)`$ values are more precise than the results given above, obviously they are strongly correlated: the correlation coefficient is $`\rho (M(B^0),M(B^+))=0.81`$.
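The combination with the world average mass difference can be reproduced with a small least-squares calculation; the following sketch (ours; the input uncertainties are the quadrature sums of the statistical and systematic errors quoted above) illustrates how the strong correlation arises:

```python
import numpy as np

# independent mass measurements and the mass-difference constraint (MeV/c^2)
m0, e0 = 5279.1, np.hypot(0.7, 0.3)   # M(B0)
mp, ep = 5279.1, np.hypot(0.4, 0.4)   # M(B+)
dm, ed = 0.34, 0.32                   # world average M(B0) - M(B+)

# least squares over x = (M(B0), M(B+)) with measurements y = A x
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
Vinv = np.diag(1.0 / np.array([e0**2, ep**2, ed**2]))
C = np.linalg.inv(A.T @ Vinv @ A)     # covariance of the combined values
x = C @ A.T @ Vinv @ np.array([m0, mp, dm])
rho = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
print(x, np.sqrt(np.diag(C)), rho)    # ~ (5279.2, 5278.9), ~0.5 each, rho ~ 0.8
```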
We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation, the U.S. Department of Energy, the Research Corporation, the Natural Sciences and Engineering Research Council of Canada, the A.P. Sloan Foundation, the Swiss National Science Foundation, and the Alexander von Humboldt Stiftung. |
## Introduction
In this letter, we investigate the possibility that the Higgs boson(s) $`h`$ associated with electroweak symmetry breaking may be found in the $`h\to \gamma \gamma `$ decay channel at the Fermilab Tevatron. Our intention is to augment the many important studies preceding and associated with the Run II workshop on Supersymmetry and Higgs physics at the Tevatron. In these studies, Standard Model Higgs boson detectability has been studied vigorously in its most promising production and decay channels. The classic channel of $`p\overline{p}\to Wh\to \ell \nu b\overline{b}`$ remains the most important channel in the search for the SM Higgs boson, yet other modes can contribute to the total signal significance and perhaps yield evidence for the Higgs boson if sufficient luminosity is attained.
We wish to study in detail the Higgs boson decays to two photons for many reasons. First, in our estimation this decay mode has not received adequate attention in previous studies. The capabilities of Higgs boson discovery in this mode should be carefully documented in order to better understand the Tevatron’s full potential for Higgs boson detection. Second, there are many interesting and motivated theories that predict an enhanced decay rate into the $`\gamma \gamma `$ channel, and a simultaneous suppression of the $`h\to b\overline{b}`$ channel. Therefore, in these cases, non-standard search strategies must be employed to either find this Higgs boson or rule out its existence in the kinematically accessible mass range. And finally, we feel that studies such as these contribute to a more knowledgeable discussion regarding the worth of a higher luminosity Tevatron (e.g., run III).
Since there is no renormalizable and gauge invariant operator in the Standard Model that leads to $`h\to \gamma \gamma `$ decays, it must be induced by electroweak symmetry breaking effects. The decay proceeds mainly through loop diagrams containing $`W^\pm `$ bosons and the $`t`$ quark. The $`W^\pm `$ boson loop is dominant. The branching fraction for this decay in the light Higgs boson mass range $`100\lesssim m_h\lesssim 150\text{ GeV}`$ is never much larger than $`10^{-3}`$, since the $`\gamma \gamma `$ partial width must compete with the larger partial widths associated with $`b\overline{b}`$, $`\tau ^+\tau ^-`$, $`c\overline{c}`$, $`gg`$, and $`WW^{*}`$ decays. The branching fractions for the Standard Model Higgs boson have been reliably calculated in Ref. . The maximum of the Standard Model branching fraction is 0.22% and is reached at $`m_h=125\text{ GeV}`$. For $`m_h>125\text{ GeV}`$, the branching fraction falls somewhat rapidly due to the increased importance of $`WW^{*}`$ decays.
## Beyond the Standard Model
It is widely recognized that the Standard Model is an unsatisfactory explanation of electroweak symmetry breaking. In this section, we review several well-motivated alternatives to the Standard Model Higgs sector. The first of these is low–scale supersymmetry, where the symmetry breaking tasks are shared by two fields: $`H_u`$ and $`H_d`$. $`H_u`$ receives a vacuum expectation value and gives mass to the up-type quarks, while $`H_d`$ receives a vacuum expectation value and gives mass to the down-type quarks. Both $`H_u`$ and $`H_d`$ vacuum expectation values contribute to the $`W^\pm `$ and $`Z^0`$ masses. In general, the sharing of the electroweak symmetry breaking task between two or more fields will disrupt expectations of Higgs boson phenomenology based solely on the analysis of the Standard Model Higgs boson. It is important to identify regions of parameter space where our naive expectations fail, and where a more expansive search strategy must be engaged to find evidence of a Higgs boson.
The mass matrix for the CP-even neutral Higgs bosons of supersymmetry in the $`\{H_d^0,H_u^0\}`$ interaction basis is
$$\mathcal{M}^2=\left(\begin{array}{cc}m_A^2\mathrm{sin}^2\beta +m_Z^2\mathrm{cos}^2\beta & -\mathrm{sin}\beta \mathrm{cos}\beta (m_A^2+m_Z^2)\\ -\mathrm{sin}\beta \mathrm{cos}\beta (m_A^2+m_Z^2)& m_A^2\mathrm{cos}^2\beta +m_Z^2\mathrm{sin}^2\beta \end{array}\right)+\left(\begin{array}{cc}\mathrm{\Delta }_{dd}& \mathrm{\Delta }_{ud}\\ \mathrm{\Delta }_{ud}& \mathrm{\Delta }_{uu}\end{array}\right),$$
(1)
where $`m_A`$ is the pseudoscalar mass, whose value is set by supersymmetry breaking, and $`\mathrm{\Delta }_{ij}`$ are quantum corrections whose form can be extracted from Ref. .
In the limit $`m_A\gg m_Z`$ the mass eigenstates of the above mass matrix are
$`h_{\mathrm{light}}^0`$ $`=`$ $`\mathrm{cos}\beta H_d^0+\mathrm{sin}\beta H_u^0`$ (2)
$`h_{\mathrm{heavy}}^0`$ $`=`$ $`-\mathrm{sin}\beta H_d^0+\mathrm{cos}\beta H_u^0.`$ (3)
One can immediately see that $`\langle h_{\mathrm{light}}^0\rangle =v`$ and $`\langle h_{\mathrm{heavy}}^0\rangle =0`$, and it is also true that all interactions of $`h_{\mathrm{light}}^0`$ are equivalent to those of the SM Higgs boson. It is instructive to rotate the Higgs mass matrix to the $`\{h_{\mathrm{light}}^0,h_{\mathrm{heavy}}^0\}`$ basis:
$$\mathcal{M}^{\prime 2}=\left(\begin{array}{cc}m_Z^2\mathrm{cos}^22\beta & -m_Z^2\mathrm{sin}2\beta \mathrm{cos}2\beta \\ -m_Z^2\mathrm{sin}2\beta \mathrm{cos}2\beta & m_A^2+m_Z^2\mathrm{sin}^22\beta \end{array}\right)+\left(\begin{array}{cc}\mathrm{\Delta }_{11}^{\prime }& \mathrm{\Delta }_{12}^{\prime }\\ \mathrm{\Delta }_{12}^{\prime }& \mathrm{\Delta }_{22}^{\prime }\end{array}\right),$$
(4)
where the $`\mathrm{\Delta }_{ij}^{\prime }`$ can be expressed in terms of the more commonly given corrections $`\mathrm{\Delta }_{ij}`$,
$`\mathrm{\Delta }_{11}^{\prime }`$ $`=`$ $`\mathrm{\Delta }_{dd}\mathrm{cos}^2\beta +2\mathrm{\Delta }_{ud}\mathrm{cos}\beta \mathrm{sin}\beta +\mathrm{\Delta }_{uu}\mathrm{sin}^2\beta `$
$`\mathrm{\Delta }_{12}^{\prime }`$ $`=`$ $`-\mathrm{\Delta }_{dd}\mathrm{cos}\beta \mathrm{sin}\beta +\mathrm{\Delta }_{ud}\mathrm{cos}2\beta +\mathrm{\Delta }_{uu}\mathrm{cos}\beta \mathrm{sin}\beta `$
$`\mathrm{\Delta }_{22}^{\prime }`$ $`=`$ $`\mathrm{\Delta }_{dd}\mathrm{sin}^2\beta -2\mathrm{\Delta }_{ud}\mathrm{cos}\beta \mathrm{sin}\beta +\mathrm{\Delta }_{uu}\mathrm{cos}^2\beta .`$
“Higgs decoupling” in supersymmetry means that one Higgs boson stays light and couples just like the SM Higgs boson as the supersymmetry breaking mass scales become very high. This property of the supersymmetric Higgs sector can be immediately understood as a complete $`SU(2)`$ Higgs doublet becoming very heavy ($`h_{\mathrm{heavy}}^0,A^0,H^\pm `$), while another doublet stays light ($`h_{\mathrm{light}}^0,Z_L^0,W_L^\pm `$). In the expressions above, this is equivalent to noting that $`m_A^2`$ occurs only in the $`\mathcal{M}_{22}^{\prime 2}`$ element of the $`h_{\mathrm{light}}^0h_{\mathrm{heavy}}^0`$ mass matrix.
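A tree-level numerical sketch (ours; the radiative corrections $`\mathrm{\Delta }_{ij}`$ are omitted) illustrates this decoupling behavior:

```python
import numpy as np

def cp_even_mass_matrix(m_a, m_z, tan_beta):
    """Tree-level CP-even mass-squared matrix of Eq. (1)
    in the {H_d^0, H_u^0} basis (Delta_ij corrections omitted)."""
    b = np.arctan(tan_beta)
    s, c = np.sin(b), np.cos(b)
    return np.array([
        [m_a**2 * s**2 + m_z**2 * c**2, -(m_a**2 + m_z**2) * s * c],
        [-(m_a**2 + m_z**2) * s * c,    m_a**2 * c**2 + m_z**2 * s**2]])

m_z = 91.19
for m_a in (100.0, 300.0, 1000.0):
    vals, vecs = np.linalg.eigh(cp_even_mass_matrix(m_a, m_z, tan_beta=10.0))
    print(m_a, np.sqrt(vals))   # light and heavy CP-even masses (GeV)
# As m_A grows, the light eigenvector approaches (cos beta, sin beta),
# i.e. h_light of Eq. (2), and its couplings become SM-like.
```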
In supersymmetry model building, the supersymmetry breaking scale is a free parameter and is cycled over a very large range. This gives the false impression that over the vast majority of the parameter space $`m_A`$ is sufficiently larger than $`m_Z`$ to be in the “decoupling region” described in the previous two paragraphs, and that the lightest Higgs boson is well approximated by $`h_{\mathrm{light}}^0`$. However, a natural electroweak potential — meaning a potential that has no large cancellations to produce the $`Z`$ boson mass — prefers supersymmetry breaking near the weak scale. If we take naturalness and fine tuning arguments seriously, we expect $`m_A\sim m_Z`$, which leads to potentially significant deviations of the light Higgs boson couplings to the SM particles.
An interesting departure from SM Higgs phenomenology occurs when the light Higgs boson mass eigenstate of supersymmetry is the weak eigenstate $`h_u^0`$. This scenario, or close approximations to it, can naturally occur in theories with large $`\mathrm{tan}\beta =\langle H_u\rangle /\langle H_d\rangle `$, which are motivated by supersymmetric $`SO(10)`$ unification, and by minimal gauge-mediated supersymmetry theories that solve the soft CP-violating phase problem. The $`h_u^0`$ eigenstate has no tree-level coupling to $`b\overline{b}`$ or $`\tau ^+\tau ^-`$, and the total width of this light Higgs boson is greatly reduced. Loop corrections can modify these arguments. For example, supersymmetry breaking can induce couplings such as $`\lambda _b^{\prime }H_u^{*}b\overline{b}`$ (and $`\lambda _\tau ^{\prime }H_u^{*}\tau ^+\tau ^-`$) in addition to the usual $`\lambda _bH_db\overline{b}`$. The most important of these corrections often comes from gluino-squark loops (which do not contribute to $`\lambda _\tau ^{\prime }`$).
If a significant $`\lambda _b^{\prime }`$ coupling is induced, the condition for shutting off the $`b\overline{b}`$ coupling is to shift the Higgs rotation angle to $`\mathrm{tan}\alpha =ϵ/\mathrm{tan}\beta `$ (or $`\mathrm{tan}\alpha =-ϵ/\mathrm{tan}\beta `$ for the case when the heavier CP-even Higgs boson is Standard Model–like), where $`ϵ\equiv \mathrm{\Delta }m_b/(m_b-\mathrm{\Delta }m_b)`$ and $`\mathrm{\Delta }m_b\equiv \lambda _b^{\prime }\langle H_u\rangle `$. The $`\tau \tau `$ branching ratio is not zero now, but is modified by a factor of $`ϵ^2`$ compared to the Standard Model (in the limit that $`\lambda _\tau ^{\prime }`$ is small). In contrast to the suppressed down-type fermion couplings, the partial width to two photons is equal to that of the Standard Model, since no down-type quarks or leptons contribute significantly to the loop diagrams in either case. For these reasons, the branching ratio for the two photon final state can be greatly enhanced. Furthermore, the production rates through $`gg\to h_u^0`$ and $`q\overline{q}^{\prime }\to Wh_u^0`$ are the same as in the Standard Model, since neither of these relies on the down-type fermion couplings.
Other interesting theories also imply enhanced branching fractions to two photons. The bosonic Higgs $`h_{\mathrm{bh}}^0`$ that gives all, or rather nearly all, the mass to the vector bosons and has no couplings to fermions is a good example of a Higgs boson with enhanced branching fractions to two photons. However, the production cross section of $`gg\to h_{\mathrm{bh}}^0`$ is negligible in this model, since the top quark does not couple to this Higgs boson. One must rely completely on electroweak boson couplings for production of the $`h_{\mathrm{bh}}^0`$, such as in $`q\overline{q}^{\prime }\to Wh_{\mathrm{bh}}^0`$ or $`WW\to h_{\mathrm{bh}}^0`$.
Another example that has suppressed couplings to the fermions is an electroweak Higgs boson $`h_{\mathrm{ew}}^0`$ added to top-quark condensate models. In this approach, the top and bottom quarks are assumed to get their masses through a strongly coupled group that condenses top quark pairs, and all the remaining fermions and vector bosons get mass mainly through $`h_{\mathrm{ew}}^0`$. A good approximation in studying the phenomenology of a light $`h_{\mathrm{ew}}^0`$ is to assume that it couples like the Standard Model Higgs boson to all particles except the top quark and bottom quark, to which it has zero couplings.
In Fig. 1 we plot the branching fraction into two photons for the four Higgs bosons mentioned above: $`h_{\mathrm{sm}}^0`$, $`h_u^0`$, $`h_{\mathrm{bh}}^0`$, and $`h_{\mathrm{ew}}^0`$.
In each non-SM case considered, the branching fraction is larger than that of $`h_{\mathrm{sm}}^0`$. Models that we have not discussed here may have even higher, or perhaps lower, branching fractions. It should be kept in mind that any model of physics beyond the simple Standard Model will likely have a different branching fraction into two photons. Since the two photon partial width arises at one loop, it is also sensitive to new particles in the loop diagrams. Hence, even greater variability is possible than what we have shown here. For example, supersymmetric partners in the loops may increase or decrease the overall partial width of $`h\to \gamma \gamma `$. In general, we should be prepared to discover and study a Higgs boson with any branching fraction to two photons, since that is perhaps the branching fraction most likely to be altered significantly by new physics.
There are several sizable sources of Higgs boson production within the Standard Model. At the Tevatron, they are $`gg\to h`$, which is the largest, followed by $`q\overline{q}^{}\to Wh`$ and $`q\overline{q}\to Zh`$. For a heavy enough Higgs boson, the vector boson fusion processes $`WW,ZZ\to h`$ are also competitive. Although the decay branching fraction of $`h\to \gamma \gamma `$ is of order $`\frac{g^2}{16\pi ^2}`$, there is hope that the narrow $`M_{\gamma \gamma }`$ peak of the signal can be used to reduce the two photon backgrounds to sufficiently low levels that a signal can be detected. In the following two sections, we discuss search strategies based on inclusive and exclusive final states. A description of our calculational methods is provided in the Appendix.
## Inclusive $`\gamma \gamma +X`$ production
First, we consider the total inclusive production of Higgs bosons, followed by their prompt decay to $`\gamma \gamma `$, where all Higgs boson production mechanisms can contribute. To study inclusive production, we apply cuts only on the properties of the individual photons or the photon pair, without studying the rest of the event in great detail. Before applying any cuts, the photon energy $`E^\gamma `$ is smeared by a resolution function typical of Run I conditions:
$$\frac{\mathrm{\Delta }E^\gamma }{E^\gamma }=\frac{0.15}{\sqrt{E^\gamma (\text{ GeV})}}\oplus 0.03.$$
(5)
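As an illustration, this smearing is straightforward to apply in a toy Monte Carlo. The sketch below assumes Gaussian fluctuations, with the two terms of Eq. (5) added in quadrature (the $`\oplus `$); the function name and interface are ours:

```python
import numpy as np

def smear_photon_energy(E, rng, a=0.15, b=0.03):
    """Smear a photon energy E (GeV) by Delta E/E = a/sqrt(E) (+) b,
    with the two terms combined in quadrature and Gaussian fluctuations."""
    sigma_rel = np.hypot(a / np.sqrt(E), b)   # relative resolution
    return E * (1.0 + sigma_rel * rng.standard_normal())

rng = np.random.default_rng(0)
print(smear_photon_energy(50.0, rng))  # one smeared 50 GeV photon
```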
To optimize the acceptance of signal events, while reducing the “irreducible” backgrounds and those from jets fragmenting to photons, we apply the cuts,
$`p_T^\gamma >20\mathrm{GeV},|\eta ^\gamma |<2`$ $`(\mathrm{triggering}\mathrm{and}\mathrm{acceptance})`$
$`\mathrm{\Delta }R^{\gamma \gamma }\equiv \sqrt{(\eta _1^\gamma -\eta _2^\gamma )^2+(\varphi _1^\gamma -\varphi _2^\gamma )^2}>0.7`$ $`(\mathrm{separation})`$
$`{\displaystyle \underset{(i),R<0.4}{\sum }}E_T^{(i)}-p_T^\gamma <2\mathrm{GeV}`$ $`(\mathrm{isolation}).`$ (6)
The high $`p_T`$, central photons constitute a suitable trigger. We did not find a more efficient $`p_T^\gamma `$ cut than the one listed. With these cuts, the dominant source of background is the $`q\overline{q}\to \gamma \gamma `$ process. For each event, we treat the diphoton pair as a Higgs boson candidate with mass $`M_{\gamma \gamma }`$.
A further cut on the angular distribution of the photons in the rest frame of the Higgs boson candidate increases $`S/B`$:
$`|\mathrm{cos}\theta ^{}|<0.7.`$ (7)
The angle $`\theta ^{}`$ is defined as the angle the photon makes with the boost direction in the $`\gamma \gamma `$ rest frame. The signal is rather flat in $`\mathrm{cos}\theta ^{}`$, whereas the raw background peaks at $`|\mathrm{cos}\theta ^{}|=1`$. This cut is somewhat redundant with the other acceptance cuts, but it further suppresses fake backgrounds.
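For concreteness, a minimal sketch of these photon-level cuts (Eqs. (6)-(7)) is given below. The helper name is ours; the isolation requirement is omitted since it needs the surrounding calorimeter energies; and $`\mathrm{cos}\theta ^{}`$ is approximated by $`\mathrm{tanh}(\mathrm{\Delta }\eta /2)`$, which is exact when the pair is boosted only along the beam axis:

```python
import numpy as np

def passes_diphoton_cuts(pt1, eta1, phi1, pt2, eta2, phi2):
    """Acceptance, separation and angular cuts on the photon pair."""
    if min(pt1, pt2) < 20.0 or max(abs(eta1), abs(eta2)) > 2.0:
        return False                                # triggering and acceptance
    dphi = abs(phi1 - phi2)
    dphi = min(dphi, 2.0 * np.pi - dphi)            # wrap azimuth into [0, pi]
    if np.hypot(eta1 - eta2, dphi) < 0.7:
        return False                                # Delta R separation
    return abs(np.tanh(0.5 * (eta1 - eta2))) < 0.7  # |cos(theta*)| cut
```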
With the above cuts it would require well over $`100\text{ fb}^{-1}`$ of integrated luminosity to even rule out a SM Higgs boson (95% C.L.) at any mass (the details are given later). Therefore, new physics that provides a significant enhancement of the $`\gamma \gamma +X`$ total rate is required for this kind of signal process to be a relevant search. Large enhancements can occur either in the production cross-sections or in the decay branching fraction to photons. In the Standard Model, the $`gg\to h`$ process constitutes roughly two-thirds of the total production rate up to about $`m_h=160`$ GeV, while the rest of the rate is mainly $`q\overline{q}\to W/Z+h`$ production. One does not expect the $`q\overline{q}\to W/Z+h`$ production cross-sections to ever greatly exceed their SM values, given the nature of Higgs boson couplings to electroweak vector bosons. One does expect, however, that the $`gg\to h`$ rate could be greatly enhanced by an increased coupling of the top quark to the Higgs boson, by many virtual states contributing to the one-loop effective $`ggh`$ coupling, or by higher dimensional operators induced in theories with large extra dimensions. In fact, the effects that increase the rate of $`gg\to h`$ production will usually also alter the $`h\to \gamma \gamma `$ branching fraction. Therefore, we focus on the possibility of large enhancements of the ratio
$$R_{gg}=\frac{\sigma (gg\to h)B(h\to \gamma \gamma )}{\sigma (gg\to h)_{\mathrm{sm}}B(h\to \gamma \gamma )_{\mathrm{sm}}}.$$
(8)
In the following we will investigate the inclusive $`\gamma \gamma +X`$ rate from $`gg\to h`$ signal production alone. Although this underestimates the total cross-section by neglecting the $`W/Z+h`$ and $`WW,ZZ\to h`$ contributions, it lends itself to easy generalization to $`R_{gg}\gg 1`$, where there is hope of finding a signal at reasonable luminosity and where the other contributions are very small in comparison. Later on, we will see that the $`q\overline{q}\to W/Z+h`$ signature alone lends itself to a useful, complementary analysis based on exclusive final states. In Table 1 we list the total SM signal cross-section ($`gg\to h`$ only) and the differential background rate after all cuts have been applied.
The Higgs boson width in the Standard Model is less than $`20\text{ MeV}`$ for $`m_h<150\text{ GeV}`$. Therefore, the invariant mass measurement of the two photons will have a spread entirely due to the photon energy resolution of the detector, which we call $`\mathrm{\Delta }M_{\gamma \gamma }`$. In Table 2 we show $`\mathrm{\Delta }M_{\gamma \gamma }`$ for various Higgs boson masses, obtained by folding the photon energy resolution function of Eq. (5) with the photon kinematics.
Based on Tables 1 and 2, we are now able to determine the significance of the signal with respect to background after all cuts. We use the formula,
$$N_S=\frac{S}{\sqrt{B}}=\frac{0.96\sigma _{\mathrm{sig}}\sqrt{{\cal L}}}{\sqrt{\widehat{\sigma }_{\mathrm{bkgd}}}},$$
(9)
where
$$\widehat{\sigma }_{\mathrm{bkgd}}=4\mathrm{\Delta }M_{\gamma \gamma }\frac{d\sigma _{\mathrm{bkgd}}}{dM_{\gamma \gamma }},$$
(10)
and $`{\cal L}`$ is the integrated luminosity. This formula counts the significance of signal to background within a mass window $`M_{\gamma \gamma }\pm 2\mathrm{\Delta }M_{\gamma \gamma }`$. This is a conservative and simple choice. When both the signal and background can be described adequately using Gaussian statistics, the signal itself has a Gaussian shape, and the background is constant, the optimal mass window is $`M_{\gamma \gamma }\pm \sqrt{2}\mathrm{\Delta }M_{\gamma \gamma }`$. In our case the background is not constant, but its differential distribution is well approximated by a straight line with a negative slope. Therefore, an asymmetric mass window (with respect to the peak) would most likely yield the best significance. We also require everywhere in our analysis that at least 5 signal events be present in this $`\pm 2\mathrm{\Delta }M_{\gamma \gamma }`$ mass window before any limit or discovery capability is claimed. On the graphs shown below, this is a limitation mainly for the $`2\text{ fb}^{-1}`$ integrated luminosity curve. From Eqs. (9) and (10), it is worth noting that an increase in integrated luminosity is equivalent to an improvement in energy resolution.
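For reference, Eqs. (9) and (10), together with the 5-event requirement, reduce to a few lines of arithmetic. The sketch below is ours; cross sections are taken in fb and luminosity in fb$`^{-1}`$, with inputs as tabulated in Tables 1 and 2:

```python
import numpy as np

def significance(sigma_sig_fb, dsig_dm_fb_per_gev, delta_m_gev, lumi_fb):
    """N_S = S/sqrt(B) in a +/- 2*DeltaM window; 0.96 is the signal
    efficiency factor of Eq. (9), and the background is integrated
    over the 4*DeltaM width of the window as in Eq. (10)."""
    S = 0.96 * sigma_sig_fb * lumi_fb                     # signal events
    B = 4.0 * delta_m_gev * dsig_dm_fb_per_gev * lumi_fb  # background events
    return S / np.sqrt(B) if S >= 5.0 else 0.0  # demand >= 5 signal events
```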
In Fig. 2 we plot the $`95\%`$ C.L. ($`N_S=1.96`$) exclusion curves for a given luminosity in the $`R_{gg}`$-$`m_h`$ plane. The SM Higgs boson corresponds to $`R_{gg}=1`$ across the plot. We have placed a line on the graph corresponding to $`h_u^0`$ to give a non-SM reference example of $`R_{gg}`$. Other theories, such as those discussed above, can have $`R_{gg}`$ much greater than that of $`h_{\mathrm{sm}}^0`$ or $`h_u^0`$. The plot is intended to be useful for comparing any theory to Tevatron capabilities.
For a given integrated luminosity, the region above the corresponding curve can be ruled out at the $`95\%`$ confidence level. Therefore, with $`30\text{ fb}^{-1}`$ one could exclude a $`h_u^0`$ up to $`120\text{ GeV}`$. The solid lines never cross $`R_{gg}=1`$, which indicates that the SM Higgs boson could not be excluded in the $`\gamma \gamma `$ mode by the Tevatron even with over $`100\text{ fb}^{-1}`$ of data. One interesting limit to consider is a Higgs boson with only one-loop decays to $`gg`$ and $`\gamma \gamma `$ final states. In this case, the production cross section $`\times `$ branching ratio is proportional to $`\mathrm{\Gamma }(h\to gg)BR(h\to \gamma \gamma )\propto \mathrm{\Gamma }(h\to \gamma \gamma )`$, and $`R_{gg}=\frac{\mathrm{\Gamma }(h\to \gamma \gamma )}{\mathrm{\Gamma }(h_{SM}\to gg)BR(h_{SM}\to \gamma \gamma )}\simeq 10^3\frac{\mathrm{\Gamma }(h\to \gamma \gamma )}{\mathrm{\Gamma }(h_{SM}\to gg)}`$. Therefore, large values of $`R_{gg}`$ are not unreasonable.
Discovery of Higgs bosons with enhanced $`\gamma \gamma +X`$ production rates requires higher significance. For $`N_S=5`$ we plot in Fig. 3 the enhancement $`R_{gg}`$ necessary to see a signal at this level at the Tevatron. With less than $`30\text{ fb}^{-1}`$, discovery is not likely for a $`h_u^0`$ Higgs boson with mass greater than $`100\text{ GeV}`$. Therefore, Tevatron detection sensitivity in this channel is not as good as the Higgs boson search capability of LEP2, which should exceed $`105\text{ GeV}`$ for both $`h_{\mathrm{sm}}^0`$ and $`h_u^0`$. Nevertheless, other theories with larger enhancements of $`\sigma (gg\to h)B(h\to \gamma \gamma )`$ may be discovered in the $`\gamma \gamma +X`$ mode first.
## Exclusive $`W/Z+h\to W/Z+\gamma \gamma `$ signal of Higgs bosons
We now attempt to improve the significance of signal over background by employing additional cuts. It is well known that the kinematics of resonance production at hadron colliders can be significantly affected by multiple soft gluon emission. Because of the different color factors associated with the $`q\overline{q}\to \gamma \gamma `$ and $`gg\to h`$ processes, the $`p_T^{\gamma \gamma }`$ spectrum of the Higgs boson signal is harder than that of the background. One strategy of LHC searches is to exploit this difference to establish a Higgs signal. However, the process $`W/Z+h\to W/Z+\gamma \gamma `$ typically has large $`p_T^{\gamma \gamma }`$ even before including these QCD effects. At the Tevatron collider, the $`W/Z+h`$ production process is relatively much more important than at the LHC, and it quickly becomes the dominant process at even moderate values of $`p_T^{\gamma \gamma }`$ with respect to $`M_{\gamma \gamma }`$. For this reason, and because many of the extensions of the SM considered here have no $`gg\to h`$ component or only one of SM strength, we concentrate only on the $`W/Z+h`$ signal in the following. For reasons discussed later, the $`WW/ZZ\to h`$ signal is not as relevant for our analysis.
We varied the $`p_T^{\gamma \gamma }`$ cut to maximize the total signal significance. We find that we optimally retain a significant portion of the total Higgs boson signal, while reducing the backgrounds, with the requirement,
$$p_T^{\gamma \gamma }>M_{\gamma \gamma }/2.$$
(11)
Also, the two photons from the Higgs boson decay tend to be balanced in $`p_T`$, so we demand
$$p_T^\gamma >M_{\gamma \gamma }/3$$
(12)
as a further aid to reduce backgrounds and increase $`S/B`$.
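These two requirements are trivial to encode (a sketch; the function name is ours):

```python
def passes_exclusive_pt_cuts(pt1, pt2, pt_diphoton, m_diphoton):
    """Photon-pair kinematic requirements of Eqs. (11)-(12)."""
    return (pt_diphoton > 0.5 * m_diphoton and
            min(pt1, pt2) > m_diphoton / 3.0)
```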
After demanding such a significant cut on $`p_T^{\gamma \gamma }`$, the dominant background becomes $`\gamma \gamma `$+1 jet. However, the signal will most likely not have this topology. Rather, the decay of the $`W`$ and $`Z`$ bosons can lead to: (1) two hard jets with $`M_{jj}\simeq M_W,M_Z`$, (2) one or more high $`p_T`$ leptons from $`W\to e,\mu `$ and $`Z\to ee,\mu \mu `$, or (3) missing transverse energy from $`Z\to \nu \nu `$, $`W\to \tau \to \mathrm{soft}\mathrm{jet}`$, and $`W\to e,\mu \to \mathrm{soft}\mathrm{or}\mathrm{very}\mathrm{forward}\mathrm{leptons}`$. Therefore, it is useful to consider $`\gamma \gamma `$ signals that have one or two leptons, or missing energy, or two leading jets with $`M_{j_1j_2}\simeq m_W,m_Z`$. To this end we require at least one of the following “vector boson acceptance” criteria to be satisfied:
* $`p_T^{e,\mu }>10\text{ GeV}`$ and $`|\eta ^{e,\mu }|<2.0`$.
* $`\text{ / }E_T>20\text{ GeV}`$.
* 2 or more jets with $`50\text{ GeV}<M_{j_1j_2}<100\text{ GeV}`$.
To perform this analysis, we constructed jets ($`E_T^j>15\text{ GeV}`$, $`|\eta ^j|<2.5`$ and $`R=0.5`$) using the toy calorimeter simulation in PYTHIA, with an energy resolution of $`80\%/\sqrt{E^j(\text{ GeV})}`$. $`\text{ / }E_T`$ was calculated by summing all calorimeter cells out to $`|\eta |=4`$.
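A minimal encoding of the “vector boson acceptance” OR-logic above might look as follows (a sketch; the interface, in particular passing the leading-dijet mass directly, is our simplification):

```python
def passes_vector_boson_acceptance(leptons, met, m_jj=None):
    """OR of the three criteria above: leptons is a list of (pT, eta)
    for e/mu candidates, met is the missing E_T in GeV, and m_jj is
    the invariant mass of the two leading jets (None if fewer than 2)."""
    if any(pt > 10.0 and abs(eta) < 2.0 for pt, eta in leptons):
        return True                                  # criterion (a)
    if met > 20.0:
        return True                                  # criterion (b)
    return m_jj is not None and 50.0 < m_jj < 100.0  # criterion (c)
```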
The vector boson acceptance cuts eliminate a fair portion of the $`\gamma \gamma `$ plus jet background, as well as a potential contribution from vector boson fusion. The total rate of the vector boson fusion process (without cuts) is comparable to that of $`W/Z+h`$ only for $`M_h>160\text{ GeV}`$, where large values of $`B(h\to \gamma \gamma )`$ are not well motivated (see, e.g., Fig. 1). Nonetheless, we examined the effect of replacing cuts (a)-(c) by the requirement $`M_{j_1j_2}>100\text{ GeV}`$ to accept the jets associated with vector boson fusion: $`q\overline{q}\to q^{}\overline{q}^{}h`$. The results were not as promising as those based on cuts (a)-(c), and so we did not include a $`M_{j_1j_2}>100\text{ GeV}`$ acceptance cut in our analysis.
In Table 3 we show the signal and differential cross-section rates after all cuts, including the $`p_T^{\gamma \gamma }>M_{\gamma \gamma }/2`$ and “vector boson acceptance” requirements.
Table 3 can then be used to determine the detectability of a Higgs boson given its mass and $`R_V`$:
$$R_V=\frac{\sigma (W/Z+h)B(h\to \gamma \gamma )}{\sigma (W/Z+h)_{\mathrm{sm}}B(h\to \gamma \gamma )_{\mathrm{sm}}}.$$
(13)
The parameter $`R_V`$ is useful if we make the reasonable assumption that increases in $`\sigma (W+h)`$ and $`\sigma (Z+h)`$ scale equivalently.
In Fig. 4 we plot the $`95\%`$ C.L. ($`N_S=1.96`$) exclusion curves for a given luminosity in the $`R_V`$-$`m_h`$ plane. On the plot we have drawn lines for $`h_u^0`$ and the purely gauge-coupled Higgs boson $`h_{\mathrm{bh}}^0`$. The SM Higgs boson corresponds to $`R_V=1`$ across the plot. For a given integrated luminosity, the region above the corresponding curve can be ruled out at the $`95\%`$ confidence level. The luminosity curves never cross $`R_V=1`$, which indicates that the SM Higgs boson could not be excluded in the $`\gamma \gamma `$ mode by the Tevatron even with $`100\text{ fb}^{-1}`$ of data. However, with $`30\text{ fb}^{-1}`$ one could exclude $`h_{\mathrm{bh}}^0`$ up to $`137\text{ GeV}`$ and $`h_u^0`$ up to $`129\text{ GeV}`$ in the $`\gamma \gamma `$ channel alone.
For $`N_S=5`$ discovery we plot in Fig. 5 the enhancement of $`B(h\to \gamma \gamma )`$ necessary to see a signal at this level at the Tevatron. Discovery is possible up to $`126\text{ GeV}`$ for the bosonic Higgs boson as long as at least $`30\text{ fb}^{-1}`$ is obtained, and $`h_u^0`$ can be discovered up to approximately $`114\text{ GeV}`$. Both discovery reaches are beyond the expected reach of LEP2.
## Discussion and conclusion
We have analyzed the capability of the Tevatron to find a Higgs boson decaying into two photons. We have found that the SM Higgs boson cannot be probed beyond LEP2 capabilities if the Tevatron accrues less than $`100\text{ fb}^{-1}`$. However, Higgs bosons in theories beyond the Standard Model may be probed (discovered or excluded) effectively with significantly less luminosity. For example, a Higgs boson that couples only to the vector bosons, with no couplings to the fermions, can be probed up to $`127\text{ GeV}`$ with less than $`10\text{ fb}^{-1}`$ of integrated luminosity. In the MSSM, when $`h\simeq h_u^0`$, so that $`h\to b\overline{b}`$ is suppressed and the $`W/Z+h(\to b\overline{b})`$ signal vanishes, our analysis shows discovery coverage up to $`m_h=114`$ GeV with $`30\text{ fb}^{-1}`$, and exclusion capability up to $`m_h=129\text{ GeV}`$.
In an attempt to be as model independent as possible, we have presented graphs (Figs. 2-5) of exclusion and detectability as integrated luminosity contours in the plane of Higgs mass and $`R_i`$ ($`R_{gg}`$ and $`R_V`$), where $`R_i`$ parameterizes the enhancement of the $`\gamma \gamma `$ signal cross-section over the Standard Model. Therefore, any theories beyond the SM that have predictions for production cross-sections and decay widths of Higgs bosons can be compared with these graphs to attain an estimate of the Tevatron’s capability. Of course, these figures are not applicable to a Higgs boson that has an intrinsic width greater than the detector resolution. The theories discussed above are far from this case.
Finally, we comment on previous studies of $`\gamma \gamma `$ invariant mass signals at the Tevatron. Much of the earlier work emphasized detectability, at lower luminosities, of $`h_{\mathrm{bh}}^0`$ with $`m_{h_{\mathrm{bh}}^0}<100\text{ GeV}`$, where the branching fraction to two photons is $`𝒪(1)`$. For example, this was the Higgs boson and the mass region covered in Ref. . Those studies also performed their simulations at the parton level and applied much looser cuts than our analysis, and much looser than those typically used in the Run I experimental analyses of DØ and CDF.
Another important analysis was completed very recently in Ref. , with results similar to ours, although the analysis differs from ours in several ways (e.g., no $`p_T^{\gamma \gamma }`$ cut). That study suggests that a signal of two photons combined with a single jet or two jets is an effective way to search for a Higgs boson in the two photon decay mode, but it did not utilize cuts as stringent as ours. A careful comparison among all the observables in the various studies, under precisely the same assumptions, is needed to ascertain which observables are the most effective. And, of course, a combination of all useful observables should be employed to maximize the sensitivity to Higgs bosons. The two photon observables outlined in this paper appear to be useful additions to the list of Higgs boson search observables.
Acknowledgements: J.W. thanks Lawrence Berkeley National Laboratory for partial support in the participating guest program. SM thanks G.L. Kane for useful conversations.
## Appendix: estimation of signal and backgrounds
### Signal
As mentioned in the text, we are mainly concerned with the processes $`gg\to h`$ and $`q\overline{q}\to Vh`$ (with $`V=W`$ or $`Z`$). The $`gg\to h\to \gamma \gamma `$ process is calculated based on $`b`$-space resummation (see, e.g., Ref. ), performed to NLO accuracy. The total event rates (without cuts) agree with other fixed order calculations. Since the multiple soft gluon emissions are integrated out, the effect of isolation cuts must be determined by some other means. We use a constant isolation efficiency per photon $`ϵ_{\mathrm{iso}}=0.95`$ for these inclusive studies. Our results can easily be scaled if necessary to account for a different efficiency.
The $`q\overline{q}\to Vh`$ process is calculated using PYTHIA, but multiplied by a constant $`K`$-factor based on the resummed calculation of Ref. . For completeness, the contribution of the vector boson fusion processes was also calculated using PYTHIA, without any effective-$`W`$ approximation and with no $`K`$ factor. This process was never relevant for our analysis, for reasons discussed in the main text.
### Background
The background estimate for the inclusive production of $`\gamma \gamma `$ pairs – where kinematic cuts are applied only on the properties of the individual photons or the diphoton pair – uses the next-to-leading order, resummed calculation of Ref. applied to the 2.0 TeV collider energy. Since the resummed calculation integrates out the history of the soft gluon emission, the photon isolation efficiency must be estimated by other means, such as a showering Monte Carlo or $`Z`$ boson data. We use a constant isolation efficiency per photon $`ϵ_{\mathrm{iso}}=0.95`$, as for the signal. No backgrounds from fragmentation photons (e.g. $`\pi ^0,\eta \to \gamma \gamma `$) are included in our numbers. The results of Ref. show good agreement with Run I data, and contain only a small component of fragmentation photons; for simplicity, we have ignored this contribution entirely. Of course, the actual contribution from fragmentation photons depends critically on the isolation criteria and on the minimum $`p_T^\gamma `$. Note that the resummed calculation does include the final state bremsstrahlung processes, e.g. $`qg\to q\gamma \gamma `$. We find that our calculational method yields good agreement with the invariant mass distribution presented in Ref. without a large fragmentation component.
To estimate the backgrounds to $`W`$ or $`Z`$ + $`\gamma \gamma `$, where the gauge bosons decay leptonically or hadronically, we need to determine the properties of the individual quarks and gluons emitted in the standard $`\gamma \gamma `$ production processes. This is not straightforward, since parton showering is accurate at describing event shapes but not event rates, whereas the hard NLO corrections to the $`\gamma \gamma `$ production rate are known to be important. For moderate values of $`p_T^{\gamma \gamma }`$ relative to $`M_{\gamma \gamma }`$, a fixed order (in $`\alpha _s`$) calculation is as accurate in describing the kinematics of the photon pair as a resummed one (the transition between the two perturbative schemes is handled naturally in the resummation formalism, but the gluon emissions are integrated out). Therefore, we use the partonic subprocesses $`q\overline{q}\to \gamma \gamma g`$, $`qg\to \gamma \gamma q`$, etc., to set the event rate, plus the parton showering method to reconstruct the full history of possibly multiple gluon emissions. For the $`gg\to \gamma \gamma `$+jets background, we use parton showering with the $`gg\to \gamma \gamma `$ process, but with the improvements of Ref. to approximate the NLO corrections (the effect of using the exact pentagon diagram for the $`gg\to \gamma \gamma g`$ process is not important). The hard scale is set to the photon pair invariant mass. In all cases, we calculate the isolation efficiency explicitly.
## I Introduction
Supersymmetric models with R-parity invariance generally predict the existence of dark matter relics from the Big Bang. Experimental bounds on exotic isotopes strongly imply that the lightest supersymmetric particle (LSP), which is absolutely stable by R-parity invariance, must be electrically neutral and weakly interacting. The minimal supersymmetric model (MSSM) then has two possible candidates, the lightest neutralino ($`\stackrel{~}{\chi }_1^0`$) and the sneutrino ($`\stackrel{~}{\nu }`$). In gravity mediated supergravity (SUGRA) grand unified models (GUTs), the allowed region of the supersymmetry (SUSY) parameter space where the $`\stackrel{~}{\nu }`$ is the LSP is generally small. The absence of the decay $`Z\to \stackrel{~}{\nu }+\stackrel{~}{\nu }`$ at LEP implies $`m_{\stackrel{~}{\nu }}\ge 45`$ GeV, and the LEP bounds on the light Higgs are sufficient to eliminate the $`\stackrel{~}{\nu }`$ as the LSP for the minimal SUGRA model (mSUGRA). If one further includes cosmological constraints, the sneutrino is also excluded for the general MSSM. Thus for these models, the $`\stackrel{~}{\chi }_1^0`$ is the unique candidate for cold dark matter (CDM). It is an appealing feature of SUGRA models that the predicted relic density of neutralino CDM is consistent with what is observed astronomically for a significant part of the SUSY parameter space.
Since the initial observation that the $`\stackrel{~}{\chi }_1^0`$ represents a possible CDM particle and the subsequent suggestion that local $`\stackrel{~}{\chi }_1^0`$ in the Milky Way might be observed by terrestrial detectors, there has been a great deal of theoretical analysis and experimental activity concerning the detection of local CDM particles. Recent theoretical calculations in Refs. \[6-25\] have made use of a number of different SUSY models. Thus Refs. \[6-10\] assume the MSSM, and the calculations in Refs. \[11-19\] are performed using mSUGRA GUT models (with universal soft breaking at the GUT scale $`M_G\simeq 2\times 10^{16}`$ GeV). Refs. allow for nonuniversal soft breaking in the Higgs sector, and Refs. include nonuniversal effects in the third generation as well. In addition, different authors limit the parameter space differently.
The neutralino-nucleus scattering amplitude contains spin independent and spin dependent parts. For detectors with heavy nuclei, however, the spin independent part dominates. In this case, the neutron and proton scattering amplitudes are approximately equal, which allows one to extract from the data the spin independent neutralino-proton cross section $`\sigma _{\stackrel{~}{\chi }_1^0p}`$. The sensitivity of current experiments (e.g., DAMA, CDMS) is approximately $`(1-10)\times 10^{-6}`$ pb for $`\sigma _{\stackrel{~}{\chi }_1^0p}`$, and perhaps a factor of 10 improvement may be expected in the near future. It is the purpose of this paper to examine what part of the SUSY parameter space can be tested with such a sensitivity. We do this by examining the maximum theoretical cross section that can lie in the domain
$$0.1\times 10^{-6}\mathrm{pb}\le \sigma _{\stackrel{~}{\chi }_1^0\mathrm{p}}\le 10\times 10^{-6}\mathrm{pb}$$
(1)
as one varies the SUSY parameters (e.g., $`\mathrm{tan}\beta `$, $`m_{\stackrel{~}{\chi }_1^0}`$). Our calculations are done within the framework of SUGRA GUT models with nonuniversal soft breaking allowed in both the Higgs and third generation squark and slepton sectors. (As discussed in Ref. and as will be seen below, it is necessary to include both Higgs and third generation nonuniversalities, as the two can interfere constructively.) We also update earlier analyses by including the latest LEP bounds on the light chargino ($`\stackrel{~}{\chi }_1^\pm `$) and light Higgs ($`h`$) masses ($`m_{\stackrel{~}{\chi }_1^\pm }>94`$ GeV, $`m_h>95`$ GeV) and include the $`b\to s+\gamma `$ and Tevatron constraints.
In calculating $`\sigma _{\stackrel{~}{\chi }_1^0p}`$, we restrict the SUSY parameter space to be consistent with the astronomical estimates of the amount of relic CDM. This is conventionally measured by the quantity $`\mathrm{\Omega }_{\mathrm{CDM}}=\rho _{\mathrm{CDM}}/\rho _c`$, where $`\rho _{\mathrm{CDM}}`$ is the mean CDM mass density and $`\rho _c=3H_0^2/8\pi G_N`$ ($`H_0`$ = Hubble constant, parameterized by $`H_0=(100\text{ km s}^{-1}\text{Mpc}^{-1})h`$; $`G_N`$ = Newton constant). Recent measurements of $`\mathrm{\Omega }_m=\rho _m/\rho _c`$ ($`\rho _m`$ is the matter density), $`H_0`$, the Cosmic Microwave Background (CMB), supernovae data, etc., indicate that $`\mathrm{\Omega }_{\mathrm{CDM}}`$ is smaller than previously thought. We will see below that both the upper and lower bounds on $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ strongly affect the predicted values of $`\sigma _{\stackrel{~}{\chi }_1^0p}`$.
In our analysis below we use the one loop renormalization group equations (RGE) from $`M_G`$ to the $`t`$-quark mass $`m_t=175`$ GeV and impose the radiative breaking constraint at the electroweak scale. We start with a set of parameters at $`M_G`$, integrate out the heavy particles at each threshold, and iterate until a consistent spectrum is obtained. One loop corrections are included in diagonalizing the Higgs mass matrix, and L-R mixing is included in the sfermion mass matrices so that large $`\mathrm{tan}\beta `$ may be treated. Naturalness constraints, namely that the gluino mass obey $`m_{\stackrel{~}{g}}\le 1`$ TeV, the scalar mass $`m_0\le 1`$ TeV and $`|A_0/m_0|\le 5`$, are imposed. Gaugino masses are assumed universal at $`M_G`$, and possible CP violating phases are set to zero. Thus we do not treat D-brane models here (they will be discussed in a subsequent paper). The SUSY mass spectrum is also constrained so that coannihilation effects are negligible. (We find, in fact, that this is a significant constraint with nonuniversal soft breaking even for low $`\mathrm{tan}\beta `$.) We examine $`\mathrm{tan}\beta `$ in the range $`2\le \mathrm{tan}\beta \le 50`$, include leading order (LO) corrections to the $`b\to s+\gamma `$ decay, and correct approximately for NLO effects. We require that the theoretical branching ratio lie in the range $`1.9\times 10^{-4}\le \text{BR}(B\to X_s\gamma )\le 4.5\times 10^{-4}`$, and use one loop corrections to the $`b`$-quark mass so that $`m_b`$ takes on its experimental value $`m_b(m_b)=(4.1-4.5)`$ GeV. (See the Appendix. The loop correction is significant for large $`\mathrm{tan}\beta `$ and stems from the part of the Lagrangian given by $`\mu ^{*}\lambda _b\stackrel{~}{b}_L\stackrel{~}{b}_R^{*}H_2^{0*}+h.c.`$.) We do not assume any particular GUT group constraints and do not impose $`b`$–$`\tau `$ Yukawa unification (since the latter is sensitive to possible unknown GUT physics).
In Sec. 2 we discuss the range of the astrophysical parameters that enter the relic density analysis, as well as the uncertainties in the quark content of the proton, which affect our calculation of $`\sigma _{\stackrel{~}{\chi }_1^0p}`$. In Sec. 3 we examine the mSUGRA model, where it is seen that $`\sigma _{\stackrel{~}{\chi }_1^0p}>1\times 10^{-6}`$ pb (the current experimental sensitivity) requires $`\mathrm{tan}\beta `$ to be quite large, though this is somewhat relaxed in the domain $`0.1\times 10^{-6}\mathrm{pb}\le \sigma _{\stackrel{~}{\chi }_1^0\mathrm{p}}\le 1\times 10^{-6}\mathrm{pb}`$. In Sec. 4 we discuss the nonuniversal models, and see that here Eq. (1) can be satisfied for relatively small $`\mathrm{tan}\beta `$ and large $`m_0`$, and also that $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ remains sizable for large $`m_{\stackrel{~}{\chi }_1^0}`$. The SUSY mass spectrum expected for our domain of cross sections is also examined. Conclusions are summarized in Sec. 5. A brief qualitative discussion is also given there of the implications of these results for proton decay, since the above nonuniversal results appear to relieve some of the tension previously noted between $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ in the range of Eq. (1) and current Super Kamiokande proton lifetime bounds.
## II Astronomical and Quark Parameters
The basic experimental quantity that controls the SUSY analysis of the relic density is $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$. Recent measurements with the Hubble Space Telescope, using a number of different techniques, have led to a combined average of
$$H_0=(71\pm 3\pm 7)\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1.$$
(2)
There is now sufficient data on the CMB anisotropies to show that $`\mathrm{\Omega }_{tot}\simeq 1`$ with small $`\mathrm{\Omega }_m`$ is strongly favored. Measurements on clusters of galaxies yield $`\mathrm{\Omega }_m<0.32\pm 0.05`$, and these results are consistent with the supernovae data. An analysis of combined data (excluding microlensing) yields $`\mathrm{\Omega }_m=0.23\pm 0.08`$. In view of possible systematic errors, we assume here $`\mathrm{\Omega }_m=0.3\pm 0.1`$, and since the baryonic content is $`\mathrm{\Omega }_B\simeq 0.05`$, we take
$$\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}=0.25\pm 0.10.$$
(3)
Combining errors in quadrature then yields $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2=0.126\pm 0.052`$. In the following we will restrict the range of $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ to approximately 2 standard deviations around the mean:
$$0.02\le \mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2<0.25.$$
(4)
(As pointed out in Ref. , the lower bound corresponds to the minimum amount of DM needed to account for the rotation curves of spiral galaxies.) Future measurements by the MAP and Planck satellites will greatly reduce these errors.
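The quoted numbers follow from standard error propagation. The sketch below reproduces $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2=0.126\pm 0.052`$ when only the first (statistical) error on $`H_0`$ in Eq. (2) is propagated; this reading of the error budget is our assumption:

```python
import numpy as np

# central values and 1-sigma errors quoted in the text
omega, d_omega = 0.25, 0.10  # Omega_chi from Eq. (3)
h, d_h = 0.71, 0.03          # h and its statistical error from Eq. (2)

oh2 = omega * h**2                                     # -> 0.126
d_oh2 = np.hypot(h**2 * d_omega, 2 * omega * h * d_h)  # -> 0.052
print(f"Omega h^2 = {oh2:.3f} +/- {d_oh2:.3f}")
```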
The fundamental SUSY Lagrangian allows one to calculate the neutralino-quark scattering amplitude. To obtain the $`\stackrel{~}{\chi }_1^0p`$ cross section one needs to know, in addition, the quark content of the proton. In the notation of Ref. , the two parameters that enter sensitively are
$$\widehat{f}=\frac{\sigma _{\pi N}}{m_p},$$
(5)
and
$$f=\frac{\langle p|m_s\overline{s}s|p\rangle }{m_p},$$
(6)
where $`\sigma _{\pi N}`$ is the ($`\pi N`$) $`\sigma `$-term and is given by
$$\sigma _{\pi N}=\frac{1}{2}(m_u+m_d)\langle p|\overline{u}u+\overline{d}d|p\rangle .$$
(7)
$`f`$ can be written as
$$f=\frac{1}{2}ry\frac{\sigma _{\pi N}}{m_p},$$
(8)
where
$$r=\frac{m_s}{\frac{1}{2}(m_u+m_d)},$$
(9)
and
$$y=\frac{\langle p|\overline{s}s|p\rangle }{\frac{1}{2}\langle p|\overline{u}u+\overline{d}d|p\rangle }\simeq 1-\frac{\sigma _0}{\sigma _{\pi N}}.$$
(10)
The quark mass ratios are fairly well known, and we use in the following $`r=24.4\pm 1.5`$ . Recently, the uncertainties in $`\sigma _{\pi N}`$ and $`\sigma _0`$ have been analyzed in Ref. . They find
$$40\mathrm{MeV}\stackrel{<}{\sim }\sigma _{\pi \mathrm{N}}\stackrel{<}{\sim }65\mathrm{MeV},30\mathrm{MeV}\stackrel{<}{\sim }\sigma _0\stackrel{<}{\sim }40\mathrm{MeV}.$$
(11)
In the following we will consider two possible choices for $`\sigma _{\pi N}`$ and $`\sigma _0`$:
$$\begin{array}{ccc}\text{Set 1: }\hfill & \sigma _{\pi N}=40\mathrm{MeV},\sigma _0=30\mathrm{MeV};\hfill & \widehat{f}=0.0480,f=0.195.\hfill \\ \text{Set 2: }\hfill & \sigma _{\pi N}=65\mathrm{MeV},\sigma _0=30\mathrm{MeV};\hfill & \widehat{f}=0.0693,f=0.455.\hfill \end{array}$$
(12)
Set 1 corresponds approximately to the original analysis of Ref. (and is the most conservative possibility), while Set 2 is similar to Set 2 of Ref. . (Set 3 of Ref. gives considerably larger cross sections.) In the following we will mostly use Set 2 in showing our results, but we will exhibit the difference between Set 1 and Set 2 in one case to illustrate some of the uncertainties that exist.
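As a numerical cross-check of Eqs. (5)-(10), the Set 2 values follow directly from the inputs above (a sketch; the proton mass $`m_p=938.3`$ MeV is our input):

```python
m_p, r = 938.3, 24.4             # proton mass (MeV) and m_s/[(m_u+m_d)/2]
sigma_piN, sigma_0 = 65.0, 30.0  # Set 2 inputs (MeV)

f_hat = sigma_piN / m_p        # Eq. (5):  0.0693
y = 1.0 - sigma_0 / sigma_piN  # Eq. (10): 0.538
f = 0.5 * r * y * f_hat        # Eq. (8):  0.455
print(f"f_hat = {f_hat:.4f}, f = {f:.3f}")
```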
## III The mSUGRA Model
We begin our analysis by examining the minimal SUGRA model, which depends on four parameters and the sign of the Higgs mixing parameter $`\mu `$. A convenient choice of parameters is $`m_0`$ (the universal scalar mass at $`M_G`$), $`m_{1/2}`$ (the universal gaugino mass at $`M_G`$), $`A_0`$ (the cubic soft breaking mass at $`M_G`$) and $`\mathrm{tan}\beta =\langle H_2\rangle /\langle H_1\rangle `$ (where $`H_{1,2}`$ gives rise to (down, up) quark masses). It is sometimes convenient to replace $`m_{1/2}`$ by the gluino mass $`m_{\stackrel{~}{g}}\simeq (\alpha _3/\alpha _G)m_{1/2}`$ ($`\alpha _G\simeq 1/24`$ is the GUT scale gauge coupling constant) or by $`m_{\stackrel{~}{\chi }_1^0}`$, which also scales with $`m_{1/2}`$. Our sign convention for the $`\mu `$ parameter is defined by the quadratic term in the superpotential
$$W^{(2)}=\mu H_1H_2=\mu (H_1^0H_2^0-H_1^{-}H_2^+).$$
(13)
(With this convention, the $`b\to s+\gamma `$ constraint mostly eliminates $`\mu >0`$.)
In calculating $`\sigma _{\stackrel{~}{\chi }_1^0p}`$, one must impose the relic density constraint of Eq. (4). This is governed by the Boltzmann equation describing $`\stackrel{~}{\chi }_1^0`$ annihilation in the early universe :
$$\frac{dn_{\stackrel{~}{\chi }_1^0}}{dt}+3\frac{\dot{R}}{R}n_{\stackrel{~}{\chi }_1^0}=-\langle \sigma _{ann}v_{rel}\rangle (n_{\stackrel{~}{\chi }_1^0}^2-n_{eq}^2)$$
(14)
where $`n_{\stackrel{~}{\chi }_1^0}`$ is the number density of $`\stackrel{~}{\chi }_1^0`$, $`n_{eq}`$ its equilibrium value, $`\sigma _{ann}`$ is the annihilation cross section, $`v_{rel}`$ the relative velocity, and $``$ means thermal average. The diagrams governing $`\sigma _{ann}`$ are shown in Fig. 1. The final relic density is given by
$$\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2=2.48\times 10^{-11}\left(\frac{T_{\stackrel{~}{\chi }_1^0}}{T_\gamma }\right)^3\left(\frac{T_\gamma }{2.73}\right)^3\frac{N_f^{1/2}}{\int _0^{x_f}dx\langle \sigma _{ann}v_{rel}\rangle }$$
(15)
where $`x_f=kT_f/m_{\stackrel{~}{\chi }_1^0}\simeq 1/20`$, $`T_f`$ is the freezeout temperature, $`N_f`$ is the number of degrees of freedom at freezeout, and $`(T_{\stackrel{~}{\chi }_1^0}/T_\gamma )^3`$ is the reheating factor.
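Schematically, Eq. (15) can be evaluated once the thermal average is expanded as $`\langle \sigma _{ann}v_{rel}\rangle \approx a+bx`$ (a sketch; the expansion, the function name, and the illustrative default and example values are our assumptions, with $`a`$ and $`b`$ in GeV$`^{-2}`$):

```python
import numpy as np

def omega_h2(a, b, x_f=1.0/20.0, N_f=81.0, reheat=1.0/20.0, T_gamma=2.73):
    """Eq. (15) with <sigma_ann v_rel> = a + b*x (GeV^-2); 'reheat'
    stands in for (T_chi/T_gamma)^3.  The defaults for N_f and reheat
    are merely illustrative choices, not values taken from the text."""
    J = a * x_f + 0.5 * b * x_f**2  # integral of <sigma v> from 0 to x_f
    return 2.48e-11 * reheat * (T_gamma / 2.73)**3 * np.sqrt(N_f) / J

print(omega_h2(a=1.0e-9, b=1.0e-8))  # hypothetical inputs, O(0.1) output
```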
The relic density decreases with increasing annihilation cross section, and in order to understand some of the results obtained below, we first discuss which parameters control $`\sigma _{ann}`$. From Fig. 1 one expects $`\sigma _{ann}`$ to fall with increasing $`m_{\stackrel{~}{\chi }_1^0}`$ and also with increasing $`m_0`$ (since $`m_{\stackrel{~}{f}}^2`$ increases with $`m_0^2`$). However, if $`2m_{\stackrel{~}{\chi }_1^0}`$ is near $`m_h`$, $`m_H`$ or $`m_A`$ (but lies below), the s-channel pole gives rise to a large amount of annihilation (which can reduce $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ below the allowed minimum), and due to the thermal averaging this effect can be significant when $`2m_{\stackrel{~}{\chi }_1^0}`$ is less than the Higgs mass but within five Higgs widths of it. The LEP data has eliminated most of this effect for the light Higgs. However, we will see that since $`H`$ and $`A`$ become light at large $`\mathrm{tan}\beta `$, effects of this type become significant in that regime. Further, if one of the sleptons or squarks becomes light, i.e. $`\stackrel{<}{\sim }100`$ GeV, the t-channel annihilation will drive $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ down. This effect can again become significant at large $`\mathrm{tan}\beta `$, where large L-R mixing in the sfermion mass matrices reduces $`m_{\stackrel{~}{f}}^2`$.
We turn next to $`\sigma _{\stackrel{~}{\chi }_1^0p}`$, which is governed by the diagrams of Fig. 2. We see here that the cross section can become large for light first generation squarks and light Higgs bosons. These regions of parameter space are just the ones that reduce $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$, and so a bound can be produced on $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ by the requirement that $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ not fall below its minimum.
In order to see the sensitivity of current detectors to mSUGRA, we plot in Fig. 3 (for the Set 2 parameters of Eq. (12)) the maximum value of $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ for $`\mathrm{tan}\beta =`$ 20, 30, 40 and 50 as a function of $`m_{\stackrel{~}{\chi }_1^0}`$ (obtained by allowing all other parameters to vary subject to the constraints listed in Secs. 1 and 2). We see the expected fall off with increasing $`m_{\stackrel{~}{\chi }_1^0}`$. The current DAMA experiment is thus sensitive to mSUGRA for $`\mathrm{tan}\beta \stackrel{>}{\sim }25`$ (i.e. $`\sigma _{\stackrel{~}{\chi }_1^0p}\stackrel{>}{\sim }1.0\times 10^{-6}`$ pb). We note that the fall off is less severe for $`\mathrm{tan}\beta =50`$, since at this high value of $`\mathrm{tan}\beta `$ the $`H`$ and $`A`$ Higgs bosons become relatively light, enhancing the $`\stackrel{~}{\chi }_1^0p`$ cross section. This can be seen in Fig. 4, where we have plotted $`m_H`$ for $`\mathrm{tan}\beta =30`$ and $`\mathrm{tan}\beta =50`$. We note also the importance of including the loop corrections to $`m_b`$ for large $`\mathrm{tan}\beta `$ (e.g. $`\mathrm{tan}\beta =50`$) to obtain correct results here.
Fig. 5 shows the sensitivity of the calculations to the choice of particle physics parameters. Set 1 gives cross sections about a factor of 2 smaller than Set 2. In the following, we will use Set 2 in all our analysis.
Fig. 6 shows $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ vs $`m_{\stackrel{~}{\chi }_1^0}`$ for $`\mathrm{tan}\beta =30`$. We see, as expected, that $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ is an increasing function of $`m_{\stackrel{~}{\chi }_1^0}`$ (since $`\sigma _{ann}`$ is a decreasing function). The upper bound on $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ then implies an upper bound on the neutralino mass of $`m_{\stackrel{~}{\chi }_1^0}\simeq 120`$ GeV (i.e. $`m_{\stackrel{~}{g}}\stackrel{<}{\sim }900`$ GeV), as has been discussed previously \[12-14\]. Note that one can obtain cross sections within the DAMA sensitivity range without going to the edges of the parameter space in $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$. Also, these cross sections all fall below the current experimental sensitivity before the uncertainties in the Milky Way astronomical parameters discussed in Ref. become important.
Figs. 7-9 exhibit the particle spectrum expected for the example of $`\mathrm{tan}\beta =30`$ when $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ takes on its maximum value. Fig. 7 shows that the $`d`$-squark is quite heavy ($`m_0`$ is large). This arises from our constraint $`m_{\stackrel{~}{\tau }_R}-m_{\stackrel{~}{\chi }_1^0}\ge 25`$ GeV, imposed to prevent coannihilation effects from occurring. Thus the large $`\mathrm{tan}\beta `$ being considered here reduces $`m_{\stackrel{~}{\tau }_R}`$ (due to L-R mixing), and $`m_0`$ must be increased to prevent it from becoming degenerate with the $`\stackrel{~}{\chi }_1^0`$. These coannihilation effects are thus different from the ones that can occur at low $`\mathrm{tan}\beta `$, since they occur at low $`m_{\stackrel{~}{\chi }_1^0}`$, where $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ is large enough to fall within the range of Eq. (1). (They will be discussed elsewhere.) Fig. 8 shows the light Higgs mass, which is relatively heavy due to the fact that $`m_0`$ is large. Fig. 9 shows $`m_{\stackrel{~}{\chi }_1^\pm }`$ vs $`m_{\stackrel{~}{\chi }_1^0}`$ for $`\mathrm{tan}\beta =30`$. One sees that scaling is obeyed, i.e. $`m_{\stackrel{~}{\chi }_1^\pm }\simeq 2m_{\stackrel{~}{\chi }_1^0}`$, since $`\mu `$ is relatively large ($`\mu ^2/M_Z^2\gg 1`$).
## IV Nonuniversal Models
Nonuniversal soft breaking can arise in SUGRA models if, in the Kahler potential, the interactions between the fields of the hidden sector (which break supersymmetry) and those of the physical sector are not universal. Nonuniversalities allow for a remarkable increase in the neutralino-proton cross section.
In order to suppress flavor changing neutral currents (FCNC), we will assume that the first two generations of squarks and sleptons are universal at $`M_G`$ (with soft breaking mass $`m_0`$), but allow for nonuniversalities in the Higgs and third generation sectors. Thus we parameterize the soft breaking masses at $`M_G`$ as follows:
$`m_{H_1}^{2}=m_0^2(1+\delta _1);m_{H_2}^{2}=m_0^2(1+\delta _2);`$ (16)
$`m_{q_L}^{2}=m_0^2(1+\delta _3);m_{u_R}^{2}=m_0^2(1+\delta _4);m_{e_R}^{2}=m_0^2(1+\delta _5);`$ (17)
$`m_{d_L}^{2}=m_0^2(1+\delta _6);m_{l_L}^{2}=m_0^2(1+\delta _7);`$ (18)
where $`m_0`$ is the universal mass of the first two generations, $`q_L=(\stackrel{~}{t}_L,\stackrel{~}{b}_L)`$; $`l_L=(\stackrel{~}{\nu }_L,\stackrel{~}{\tau }_L)`$; $`u_R=\stackrel{~}{t}_R`$; $`e_R=\stackrel{~}{\tau }_R`$ etc. We take here the bounds
$$-1\le \delta _i\le 1$$
(19)
An alternate way of satisfying the FCNC constraint is to make the first two generations very heavy, and only the third generation light. This is essentially included in the above parameterization by making $`m_0`$ large, and taking the $`\delta _i`$ sufficiently close to -1.
One of the important parameters affected by the nonuniversal soft breaking masses is $`\mu ^2`$, which is determined by the radiative breaking condition. While the RGE must be solved numerically, an analytic expression can be obtained for low and intermediate $`\mathrm{tan}\beta `$ :
$`\mu ^2`$ $`=`$ $`{\displaystyle \frac{t^2}{t^2-1}}\left[\left\{{\displaystyle \frac{1-3D_0}{2}}+{\displaystyle \frac{1}{t^2}}\right\}+\left\{{\displaystyle \frac{1-D_0}{2}}(\delta _3+\delta _4)-{\displaystyle \frac{1+D_0}{2}}\delta _2+{\displaystyle \frac{\delta _1}{t^2}}\right\}\right]m_0^2`$ (21)
$`+\text{universal parts + loop corrections}`$ (21)
where $`t\equiv \mathrm{tan}\beta `$ and
$$D_0\cong 1-\left(\frac{m_t}{200\mathrm{sin}\beta }\right)^2.$$
(22)
A similar expression holds for large $`\mathrm{tan}\beta `$ in the $`SO(10)`$ limit, so Eq. (21) gives a qualitative picture of the effects of nonuniversalities in general (a result borne out by detailed numerical calculations).
We see first that $`D_0`$ is in general small, i.e. for $`m_t=175`$ GeV one has $`D_0\simeq 0.2`$, and hence the squark nonuniversalities $`\delta _3`$ and $`\delta _4`$ produce effects of comparable size to the Higgs nonuniversalities $`\delta _1`$ and $`\delta _2`$, so that both must be included for a full treatment. Second, one can choose the signs of the $`\delta _i`$ such that $`\mu ^2`$ is either reduced or increased. The significance of this is that, in general, the $`\stackrel{~}{\chi }_1^0`$ is a mixture of Higgsino and gaugino pieces
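The size of these effects is easy to evaluate numerically (a sketch; the function names are ours, and the signs follow our reconstruction of the nonuniversal bracket in Eq. (21)):

```python
import numpy as np

def D0(tan_beta, m_t=175.0):
    """Eq. (22)."""
    sin_beta = tan_beta / np.hypot(1.0, tan_beta)
    return 1.0 - (m_t / (200.0 * sin_beta))**2

def delta_mu2(tan_beta, m0, d1, d2, d3, d4):
    """Nonuniversal contribution to mu^2 in Eq. (21), in the same
    (GeV^2) units as m0^2."""
    t2, D = tan_beta**2, D0(tan_beta)
    bracket = 0.5 * (1 - D) * (d3 + d4) - 0.5 * (1 + D) * d2 + d1 / t2
    return t2 / (t2 - 1.0) * bracket * m0**2

print(D0(7.0))                                      # ~0.22, i.e. D0 ~ 0.2
print(delta_mu2(7.0, 500.0, 0.0, 1.0, -1.0, -1.0))  # negative: mu^2 lowered
```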
$$\stackrel{~}{\chi }_1^0=\alpha \stackrel{~}{W}_3+\beta \stackrel{~}{B}+\gamma \stackrel{~}{H}_1+\delta \stackrel{~}{H}_2$$
(23)
Now the spin independent part of $`\stackrel{~}{\chi }_1^0q`$ scattering depends on interference between the gaugino and Higgsino parts of the $`\stackrel{~}{\chi }_1^0`$ (it would vanish for a pure gaugino or a pure Higgsino), and this interference increases if $`\mu ^2`$ is decreased (increasing $`\sigma _{\stackrel{~}{\chi }_1^0q}`$) and decreases if $`\mu ^2`$ is increased (decreasing $`\sigma _{\stackrel{~}{\chi }_1^0q}`$). Thus there are regions in the parameter space of nonuniversal models where $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ is significantly increased compared to the universal case.
The above effect can be seen in Fig. 10, where the maximum $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ is plotted for $`\mathrm{tan}\beta =7`$ for the nonuniversal and universal cases. We see that nonuniversalities can increase $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ by a factor of $`\sim 10`$. Fig. 11 plots the nonuniversal curves for $`\mathrm{tan}\beta =3,5`$, and 7. One sees here that with nonuniversal soft breaking, the current DAMA sensitivity requires only $`\mathrm{tan}\beta \stackrel{>}{\sim }4`$ (compared to $`\mathrm{tan}\beta \stackrel{>}{\sim }25`$ in the universal case). For larger $`\mathrm{tan}\beta `$ one can get very large nonuniversal cross sections. Fig. 12 shows the maximum $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ for $`\mathrm{tan}\beta =15`$, which already extends into the region excluded by CDMS and DAMA.
For GUT groups containing an $`SU(5)`$ subgroup (such as $`SU(5)`$, $`SO(10)`$, $`SU(6)`$ etc.) with matter in the usual $`10+\overline{5}`$ representations, the $`\delta _i`$ of Eqs. (17,18) obey
$$\delta _3=\delta _4=\delta _5\equiv \delta _{10};\delta _6=\delta _7\equiv \delta _{\overline{5}}$$
(24)
We consider this case in more detail (where it is assumed that the gauge group breaks to the Standard Model at $`M_G`$). Fig. 13 shows $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ when $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ takes on its maximum value, for the characteristic example of $`\mathrm{tan}\beta =7`$. One sees that $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ is generally small, since one has $`\delta _{10}<0`$ to obtain the maximum $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ (reducing $`\mu ^2`$ in Eq. (21) and hence increasing the cross section). This, however, reduces $`m_{\stackrel{~}{\tau }_R}`$ (from Eq. (17)), increasing the annihilation rate as in the discussion of Fig. 1. (If the $`SU(5)`$-type constraint were relaxed and $`\delta _5`$ left arbitrary, $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ could be increased. For example, $`\delta _5=0`$ produces a $`\sim 50\%`$ increase in $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$.) The further fall off of $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ for $`m_{\stackrel{~}{\chi }_1^0}\stackrel{>}{\sim }110`$ GeV arises from the fact that $`m_H\simeq 300`$ GeV, and the nearness of the $`m_H`$ s-channel pole of Fig. 1 increases the early universe annihilation. This can be seen explicitly in Fig. 14, where $`2m_{\stackrel{~}{\chi }_1^0}`$ is close to $`m_H`$ when $`m_{\stackrel{~}{\chi }_1^0}\stackrel{>}{\sim }110`$ GeV. Fig. 15 shows that the light Higgs for this case is quite light, lying just above the LEP2 bounds. Particularly interesting is that the first two generations of squarks are nonetheless relatively heavy. This is shown in Fig. 16 for the $`d`$-squark. The reason for this can be seen from Eq. (21): since $`\delta _3=\delta _4=\delta _{10}<0`$ (to lower $`\mu ^2`$ and hence increase $`\sigma _{\stackrel{~}{\chi }_1^0p}`$), the nonuniversal terms produce a net negative $`m_0^2`$ contribution to $`\mu ^2`$, and the lowering of $`\mu ^2`$ is enhanced the larger $`m_0`$ is. Thus it is possible to get heavy squarks in the first two generations at low $`\mathrm{tan}\beta `$, which may have implications for proton decay, as discussed in the next section.
## V Conclusions
If the dark matter of the Milky Way is indeed mainly neutralinos, then current detectors are sensitive to interesting parts of the SUSY parameter space. Thus either discovery or lack of discovery will determine or eliminate parts of the parameter space, and this analysis is complementary to what one may learn from accelerator experiments.
To examine what parts of the parameter space can be tested with current detectors or in the near future, we have considered $`\sigma _{\stackrel{~}{\chi }_1^0p}`$, the $`\stackrel{~}{\chi }_1^0p`$ cross section, in the range $`0.1\times 10^{-6}\mathrm{pb}\le \sigma _{\stackrel{~}{\chi }_1^0\mathrm{p}}\le 10\times 10^{-6}`$ pb, and have plotted the maximum theoretical cross section for different SUGRA models. There is a major difference between the universal and nonuniversal soft breaking models. Thus the current DAMA experiment (with sensitivity $`\sigma _{\stackrel{~}{\chi }_1^0p}\stackrel{>}{\sim }1\times 10^{-6}`$ pb) is sensitive to $`\mathrm{tan}\beta \stackrel{>}{\sim }25`$ for universal soft breaking (Fig. 3), while it is sensitive to $`\mathrm{tan}\beta \stackrel{>}{\sim }4`$ for the nonuniversal model (Fig. 11). Thus, while dark matter cross sections increase with $`\mathrm{tan}\beta `$ and hence detectors are more sensitive at higher $`\mathrm{tan}\beta `$, it is possible for current detectors to probe part of the low $`\mathrm{tan}\beta `$ parameter space for the nonuniversal models.
For the mSUGRA model, we find that $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ monotonically increases with $`m_{\stackrel{~}{\chi }_1^0}`$ from the minimum to the maximum bounds of Eq. (4) (Fig. 6), leading to the upper bound $`m_{\stackrel{~}{\chi }_1^0}\stackrel{<}{\sim }120`$ GeV ($`m_{\stackrel{~}{g}}\stackrel{<}{\sim }900`$ GeV), which is below where astronomical uncertainties about the Milky Way become significant. In general $`\mu ^2`$ is large (i.e. $`\mu ^2/M_Z^2\gg 1`$), leading to the usual gaugino scaling relations, e.g. Fig. 9, and the Higgs mass is relatively heavy (Fig. 8). At the very largest $`\mathrm{tan}\beta `$, e.g. $`\mathrm{tan}\beta =50`$, the loop corrections to $`\lambda _b`$ at the electroweak scale become very large (see the Appendix), requiring that $`\lambda _b`$, the $`b`$-Yukawa coupling, be adjusted so that one obtains the experimental $`b`$-quark mass.
For the nonuniversal model, significantly increased $`\stackrel{~}{\chi }_1^0p`$ cross sections can be obtained by choosing $`\delta _{3,4}<0`$ and $`\delta _2>0`$ in Eq. (21). This reduces $`\mu ^2`$, increasing the Higgsino content of the $`\stackrel{~}{\chi }_1^0`$ and hence the Higgsino-gaugino interference which enters $`\sigma _{\stackrel{~}{\chi }_1^0p}`$. (In the $`SU(5)`$-like models, this generally leads to a light $`\stackrel{~}{\tau }_R`$ and hence a relatively low $`\mathrm{\Omega }_{\stackrel{~}{\chi }_1^0}h^2`$ (Fig. 13).) In this case the maximum cross sections arise with $`\mu ^2`$ relatively small, so scaling no longer holds accurately, and the light Higgs lies close to the LEP2 bounds (Fig. 15).
While coannihilation effects have not been treated in this analysis, we have noted two regions where such effects can occur. In mSUGRA models, because $`\mathrm{tan}\beta `$ must be large to obtain $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ in the range of Eq. (1), L-R mixing reduces $`m_{\stackrel{~}{\tau }_R}`$, making the $`\stackrel{~}{\tau }_R`$ nearly degenerate with the $`\stackrel{~}{\chi }_1^0`$. In the nonuniversal case, where $`\mathrm{tan}\beta `$ is small or moderate, large $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ is obtained by lowering $`\mu ^2`$, which makes the $`\stackrel{~}{\chi }_1^\pm `$ nearly degenerate with the $`\stackrel{~}{\chi }_1^0`$. Both of these domains of coannihilation differ from those previously treated, and they inhabit regions of parameter space with $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ within the reach of current detectors. We have prevented coannihilation here from becoming significant by imposing the constraints $`m_{\stackrel{~}{\tau }_R}-m_{\stackrel{~}{\chi }_1^0}`$, $`m_{\stackrel{~}{\chi }_1^\pm }-m_{\stackrel{~}{\chi }_1^0}\ge 25`$ GeV. Further study is required to see what occurs when these constraints are removed.
It has been realized for some time that a tension exists in GUT theories that simultaneously allow for dark matter and proton decay. Thus for $`SU(5)`$-type models, minimal SUGRA GUT proton decay proceeds through the $`\stackrel{~}{H}_3`$, the superheavy Higgsino color triplet components of the Higgs $`5`$ and $`\overline{5}`$ representations. The basic diagram is shown in Fig. 17, showing that the decay rate scales approximately as
$$\mathrm{\Gamma }(p\to \overline{\nu }K)\sim \frac{1}{M_3^2}\left(\frac{m_{\stackrel{~}{\chi }_1^\pm }}{m_{\stackrel{~}{q}}^2}\frac{1}{\mathrm{sin}\beta \mathrm{cos}\beta }\right)^2$$
(25)
where $`M_3=O(M_G)`$ is the $`\stackrel{~}{H}_3`$ mass. In mSUGRA models, scaling is generally a good approximation and $`m_{\stackrel{~}{\chi }_1^\pm }\simeq 2m_{\stackrel{~}{\chi }_1^0}`$. Hence proton stability requires small $`m_{\stackrel{~}{\chi }_1^0}`$, large $`m_{\stackrel{~}{q}}`$ and small $`\mathrm{tan}\beta `$. We have seen, however, that if dark matter exists at the sensitivity of the current DAMA experiment, then while moderately heavy squark masses could exist in mSUGRA (Fig. 7), $`\mathrm{tan}\beta `$ would have to be quite large, i.e. $`\mathrm{tan}\beta \stackrel{>}{\sim }25`$, which would be sufficient to violate the current Super Kamiokande bounds on the proton lifetime. However, this tension is relieved for the nonuniversal SUGRA GUT models. Thus we saw that in these cases one could have a $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ in the range of the DAMA experiment for small $`\mathrm{tan}\beta `$, i.e. $`\mathrm{tan}\beta \stackrel{>}{\sim }4`$, and further, such large cross sections also implied large squark masses (Fig. 16). This would be expected to remove any disagreement between a large $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ and a small proton decay rate.
Finally, we mention that in this paper we have plotted the maximum $`\stackrel{~}{\chi }_1^0p`$ cross sections for each $`\mathrm{tan}\beta `$ and $`m_{\stackrel{~}{\chi }_1^0}`$. Of course, nature may not choose SUSY parameters such that $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ takes on its maximum value. However, by looking at the maximum $`\sigma _{\stackrel{~}{\chi }_1^0p}`$ we are able to see, for a given model, whether detection of dark matter at current detector sensitivities is consistent with the predictions of that model.
## VI Acknowledgments
This work was supported in part by National Science Foundation Grant No. PHY-9722090.
## VII Appendix
The $`b`$-quark coupling to the down type Higgs field which gives rise to tree level bottom mass is described by
$$L_{bbH}=\lambda _b\overline{b}_Lb_RH_1^0+h.c..$$
(26)
There also exists a term in the Lagrangian in which the bottom squarks are coupled to the up-type neutral Higgs ($`H_2^0`$); it is given by:
$$L_{\stackrel{~}{b}\stackrel{~}{b}H}=\lambda _b\mu ^{*}\stackrel{~}{b}_L\stackrel{~}{b}_R^{*}H_2^{0*}+h.c..$$
(27)
The above interaction can give rise to a one loop contribution to the tree level bottom mass . We do the analysis in the mass insertion approximation which produces errors of less than 10$`\%`$ in $`m_b`$ for the relevant parts of the parameter space. The loop diagram arising from the above interaction, shown in Fig. 18a, involves gluino, squark fields, $`\alpha _s`$ and $`\mathrm{tan}\beta `$ and hence can be large for large $`\mathrm{tan}\beta `$. There also exists another one loop contribution which involves the stop quarks and the chargino. This loop, shown in Fig. 18b, depends on $`\lambda _t^2`$ and contributes less than the gluino loop. The net $`b`$-quark mass generated from the above contributions is $`m_b+\delta m_b`$, where
$$\delta m_b=\lambda _bv_1K\mathrm{tan}\beta ;v_1=\langle H_1^0\rangle $$
(28)
$`K\equiv {\displaystyle \frac{2\alpha _s}{(3\pi )}}m_{\stackrel{~}{g}}\mu G(m_{\stackrel{~}{b}_L}^2,m_{\stackrel{~}{b}_R}^2,m_{\stackrel{~}{g}}^2)+{\displaystyle \frac{\lambda _t^2}{(4\pi )^2}}A_t\mu G(m_{\stackrel{~}{t}_L}^2,m_{\stackrel{~}{t}_R}^2,\mu ^2)`$ (29)
where
$$G(a,b,c)=\frac{ab\text{ Log}[\frac{a}{b}]+bc\text{ Log}[\frac{b}{c}]+ac\text{ Log}[\frac{c}{a}]}{(ab)(bc)(ac)},$$
(30)
$`m_{\stackrel{~}{g}}`$ is the gluino mass, $`m_{\stackrel{~}{b}_{L,R}}`$ are the left and right handed sbottom masses and $`m_{\stackrel{~}{t}_{L,R}}`$ are the left and right handed stop masses.
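As a numerical illustration of Eqs. (28)–(30), the sketch below (Python; all input values are model dependent and left to the caller, and the signs simply follow the equations as written above) evaluates the loop function $`G`$ and the fractional correction $`\delta m_b/(\lambda _bv_1)=K\mathrm{tan}\beta `$:

```python
import math

def G(a, b, c):
    # Loop function of Eq. (30); arguments are squared masses.
    # Degenerate arguments (a == b, etc.) would require the analytic limit.
    return (a * b * math.log(a / b) + b * c * math.log(b / c)
            + a * c * math.log(c / a)) / ((a - b) * (b - c) * (a - c))

def delta_mb_fraction(alpha_s, lam_t, m_gluino, mu, A_t,
                      m_bL, m_bR, m_tL, m_tR, tan_beta):
    # K * tan(beta) of Eqs. (28)-(29), i.e. delta m_b / (lambda_b v_1).
    # Masses in GeV; G takes squared masses.
    K = (2.0 * alpha_s / (3.0 * math.pi) * m_gluino * mu
         * G(m_bL**2, m_bR**2, m_gluino**2)
         + lam_t**2 / (4.0 * math.pi)**2 * A_t * mu
         * G(m_tL**2, m_tR**2, mu**2))
    return K * tan_beta
```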
The correction $`K`$ is evaluated at the electroweak scale, which we take here to be $`m_t`$ (the endpoint of running the RGE down from $`M_G`$). Using the RGE for $`\lambda _b`$, we then determine $`\lambda _b(m_t)`$ so that the total $`b`$-quark mass, $`m_b=\lambda _bv_1+\delta m_b`$, agrees with the experimental value of $`m_b(m_b)`$ at the $`b`$ scale. This produces a significant change in $`\lambda _b`$ for large $`\mathrm{tan}\beta `$.
## 1 Introduction
The luminous infrared galaxy (LIRG) NGC 6240 (Wright et al., 1984; Thronson et al., 1990) has a remarkably disturbed morphology which is characterised in the visible by loops, branches and arms extending out to $`50\mathrm{kpc}`$ (Fosbury & Wall, 1979). From this large-scale morphology, and the discovery of two nuclei in the central region, Fried & Schulz (1983) concluded that NGC 6240 is an interacting and merging system of two galaxies. This conclusion is supported by spectroscopic observations in the visible (Fosbury & Wall, 1979; Fried & Ulrich, 1985) and near-infrared wavelength range (Herbst et al., 1990; Van der Werf et al., 1993; Sugai et al., 1997). NGC 6240 is the most luminous known source of near-infrared line emission ($`L(\mathrm{H}_2)_{\mathrm{tot}}\simeq 10^9\mathrm{L}_{\odot }`$, Wright et al. (1984)). The $`\mathrm{H}_2`$ emission is most likely excited in shocks triggered by the collision of the two galaxies.
NGC 6240 has a bolometric luminosity $`L_{\mathrm{bol}}\approx L_{\mathrm{IR}}=6\times 10^{11}\mathrm{L}_{\odot }`$ (for $`\mathrm{H}_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{D}=97\mathrm{Mpc}`$). This bolometric luminosity classifies NGC 6240 almost as an ultra luminous infrared galaxy (ULIRG; $`L_{\mathrm{IR}}\geq 10^{12}\mathrm{L}_{\odot }`$). LIRGs and ULIRGs emit more than 90% of their bolometric luminosity in the infrared wavelength range. There is consensus that the infrared emission comes from warm dust, but the primary energy source responsible for heating the dust is still a matter of debate. The two possibilities are a deeply dust-embedded active galactic nucleus (AGN) or a super starburst. Observations with the Infrared Space Observatory (ISO) (Genzel et al., 1998) show that the majority of ULIRGs, including NGC 6240, are starburst dominated. On the other hand, the detection of a highly absorbed hard X-ray source indicates that NGC 6240 also contains a powerful AGN (Vignati et al., 1999). Most ULIRGs are interacting or merging systems with the star formation and/or AGN activity triggered by gas compression and inflows towards the center (Sanders et al., 1988). As such, NGC 6240 is a nearby test case of a luminous merger. It can be used to test predictions of models of interacting and merging galaxies (Barnes & Hernquist, 1996; Mihos & Hernquist, 1996). The effect of the interaction on the star formation can be compared with model results to assess the true nature of ULIRGs. To investigate the stellar content and the properties of the starburst of NGC 6240, to determine the stellar dynamics and to test predictions of interacting galaxy models, we have carried out high resolution near-infrared spectroscopy with the Max-Planck-Institut für extraterrestrische Physik (MPE) integral field spectrometer 3D.
## 2 Observations and Data Reduction
NGC 6240 was observed with the MPE near-infrared imaging spectrograph 3D (Weitzel et al., 1996) in conjunction with the tip-tilt correction adaptive optics system ROGUE (Thatte et al., 1995) in two observing runs. 3D is an integral field spectrograph that simultaneously obtains spectra for each of 256 spatial pixels covering a square field of view with over 95% fill factor. In both observing runs the spectral resolving power ($`\mathrm{R}\equiv \lambda /\mathrm{\Delta }\lambda `$) was 2000, with Nyquist sampling achieved using two settings of a piezo-driven flat mirror.
The first observing run took place in April 1996 at the ESO $`2.2\mathrm{m}`$ telescope on La Silla, Chile. The pixel scale was $`0\stackrel{}{\mathrm{.}}3`$ per pixel, and a wavelength range from $`2.18\mathrm{\mu m}`$ to $`2.45\mathrm{\mu m}`$ was covered. The total on-source integration time was $`4600\mathrm{s}`$ with individual frame integration times of $`300\mathrm{s}`$ or $`400\mathrm{s}`$. The same amount of time was spent off-source 1′ E and W of the nuclear region of NGC 6240 for sky background subtraction. The seeing during the observations was better than $`0\stackrel{}{\mathrm{.}}8`$.
The second observing run took place in March 1998 at the Anglo Australian Telescope in Coonabarabran, Australia. Here the pixel scale was $`0\stackrel{}{\mathrm{.}}4`$ per pixel and a wavelength range from $`2.155\mathrm{\mu m}`$ to $`2.42\mathrm{\mu m}`$ was covered. This wavelength setting includes the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ emission line at a redshifted wavelength of $`2.173\mathrm{\mu m}`$. The total on-source integration time was $`3100\mathrm{s}`$ with individual integrations of $`100\mathrm{s}`$ each. Again, the same amount of time was spent off-source 1′ E of the nuclear region of NGC 6240 to subtract the sky background. The seeing throughout the observations was better than 1″.
To correct for the atmospheric transmission a reference star was observed before and after the science integrations. The data were reduced using the set of 3D data analysis routines written for the GIPSY (Van der Hulst et al., 1992) data reduction package. This included wavelength calibration, spectral and spatial flat fielding, dead and hot pixel correction, and division by the reference stellar spectrum. Data cubes from the individual exposures were recentered and added using the centroid of the broadband emission from the southern nucleus. Absolute flux calibration was established by comparison of the broadband emission with K-band photometry by Thronson et al. (1990). Emission line and absorption line maps as well as the continuum map were extracted by performing a linear fit through line-free regions in the vicinity of the lines in the spectrum of each spatial pixel.
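The line-map extraction described above amounts to a linear continuum fit per spatial pixel. The sketch below (Python/NumPy; the cube layout, function name and wavelength windows are illustrative assumptions, not the actual GIPSY routines) shows the idea:

```python
import numpy as np

def line_map(cube, wave, line_window, cont_windows):
    # cube: (n_wave, ny, nx) spectral data cube; wave: (n_wave,) wavelengths.
    # line_window: (lo, hi) limits of the emission line;
    # cont_windows: list of (lo, hi) line-free intervals near the line.
    cont_mask = np.zeros(wave.size, dtype=bool)
    for lo, hi in cont_windows:
        cont_mask |= (wave >= lo) & (wave <= hi)
    line_mask = (wave >= line_window[0]) & (wave <= line_window[1])

    ny, nx = cube.shape[1:]
    flux = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            spec = cube[:, j, i]
            # linear fit through the line-free regions of this pixel
            slope, offset = np.polyfit(wave[cont_mask], spec[cont_mask], 1)
            cont = slope * wave + offset
            flux[j, i] = np.trapz((spec - cont)[line_mask], wave[line_mask])
    return flux
```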
## 3 Results
### 3.1 Line Maps and Spectra
The spectrum of NGC 6240 in the range from $`2.15\mathrm{\mu m}`$ to $`2.45\mathrm{\mu m}`$ contains a number of molecular and atomic emission lines, as well as stellar absorption features.
Figure 1 shows the line-free continuum map in the wavelength range from $`2.2\mathrm{\mu m}`$ to $`2.45\mathrm{\mu m}`$, a Brackett $`\gamma `$ map and a map of the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead. All three maps are dominated by two compact sources (hereafter referred to as nuclei) separated by $`1\stackrel{}{\mathrm{.}}6`$. The double nuclear structure has been observed previously in the visible (B, V, R) (Keel, 1990) and NIR (J, H, K) bands (Thronson et al., 1990), but the maps are also similar to what is observed at other wavelengths (Condon et al., 1982; Colbert et al., 1994; Keto et al., 1997). In addition to the double nucleus the Brackett $`\gamma `$ map shows emission extended out to $`1\stackrel{}{\mathrm{.}}5`$ northwest of the northern nucleus, in the direction of the third radio emission peak N3 observed by Colbert et al. (1994).
Also shown in Figure 1 is the integrated $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ emission. In contrast to the K-band continuum, the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ emission shows a single emission peak *between* the nuclei with filaments extending from it. The emission peak is at a separation of $`0\stackrel{}{\mathrm{.}}35`$ from the southern nucleus towards the northern nucleus and is extended in the northern direction. This $`\mathrm{H}_2`$ morphology has been previously observed with Fabry-Perot imaging techniques by Herbst et al. (1990), Van der Werf et al. (1993) and Sugai et al. (1997). The $`\mathrm{H}_2`$ morphology is very similar to the $`\mathrm{CO}\mathrm{J}=2–1`$ morphology, indicating that the near-infrared $`\mathrm{H}_2`$ emission lines also trace the bulk of the molecular material in NGC 6240.
The line maps in Figure 1 show three major regions of interest. These are the two nuclei, and the region of the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ emission peak between them. Figure 2 shows the spectra of these regions, each within a 1″ diameter circular aperture. Even though the peak of the $`\mathrm{H}_2`$ emission lies between the nuclei, $`\mathrm{H}_2`$ dominates the line emission in all three regions. In fact, NGC 6240 has the highest ratio of $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ luminosity to bolometric luminosity of any galaxy observed thus far (Wright et al., 1984). The $`\mathrm{H}_2`$ lines are very broad and have a full width at half maximum (FWHM) of $`550\mathrm{km}\mathrm{s}^{-1}`$ with line wings extending over $`1600\mathrm{km}\mathrm{s}^{-1}`$ full width at zero power (FWZP). The emission line profiles are asymmetric with red and/or blue wings whose strength varies over the emission region. Figure 3 shows a comparison of the line-of-sight velocity profiles of the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ and $`\mathrm{CO}\mathrm{J}=2–1`$ lines for the nuclear region of NGC 6240.
A characteristic of NGC 6240 is its low Brackett $`\gamma `$ equivalent width (e.g. Lester, Harvey & Carr, 1988; Van der Werf et al., 1993). In earlier work (see references in Van der Werf et al., 1993) the line was not detected, or only marginally. In the improved-sensitivity 3D data the line is now clearly detected. The Brackett $`\gamma `$ equivalent width of the northern and southern nucleus is $`6.5\pm 1.6\mathrm{\AA }`$ and $`2.9\pm 0.4\mathrm{\AA }`$, respectively. A line map and spectra at different positions can be extracted from the data cube. The FWHM of the Brackett $`\gamma `$ line is $`716\mathrm{km}\mathrm{s}^{-1}`$ and $`689\mathrm{km}\mathrm{s}^{-1}`$ for the northern and southern nucleus, respectively, measured over a 1″ diameter circular aperture. The northern nucleus is redshifted with respect to the southern nucleus by $`137\pm 49\mathrm{km}\mathrm{s}^{-1}`$, a value similar to the velocity difference of $`147\mathrm{km}\mathrm{s}^{-1}`$ measured by Fried & Ulrich (1985) with optical emission and absorption lines.
The 3D spectra also show that NGC 6240 has deep CO absorption bandheads with a large velocity dispersion. From similar data Lester & Gaffney (1994) and Doyon et al. (1994) deduce a velocity dispersion of $`350\mathrm{km}\mathrm{s}^{-1}`$ and $`359\mathrm{km}\mathrm{s}^{-1}`$, respectively, for the region of NGC 6240 containing both nuclei. Using 3D, we are able to spatially resolve the velocity dispersions of each nucleus. A detailed analysis of this is presented in §5. The equivalent width of the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ bandhead is determined using the wavelength intervals given in Origlia, Moorwood & Oliva (1993) and corrected for the velocity broadening using the formula in Oliva et al. (1995). The velocity dispersions are the ones derived in §5, and the resulting equivalent widths are $`13\pm 3\mathrm{\AA }`$ and $`15\pm 1\mathrm{\AA }`$ for the northern and southern nucleus, respectively.
### 3.2 Extinction
From the spectra in Figure 2 it is apparent that the spectrum of the $`\mathrm{H}_2`$ emission peak has a shallower, i.e. redder, slope than those of the two nuclei. Assuming an intrinsically constant continuum slope over the entire nuclear region of NGC 6240, this reddening can be due either to extinction or to a nonstellar contribution to the spectrum at long wavelengths. The spectral energy distribution (SED) of NGC 6240 (Draine & Woods, 1990) shows no nonstellar continuum below $`5\mathrm{\mu m}`$. Above $`5\mathrm{\mu m}`$ thermal emission from dust with a temperature of $`200\mathrm{K}`$ is dominant. Different spectral slopes over the nuclear region of NGC 6240 are thus almost certainly due to extinction variations.
To derive the extinction values, the spectral slopes of all pixels in the data cube are compared with the spectral slope of a K4.5 supergiant, which was observed with 3D during the 1996 La Silla observing run. A direct comparison of stellar spectra with the spectrum of NGC 6240 yields a K4.5 supergiant as the best fitting template (see §4.1). With the interstellar extinction law from Draine (1989)
$$\frac{\mathrm{A}_\lambda }{\mathrm{A}_\mathrm{V}}=0.351\lambda _{\mu \mathrm{m}}^{1.75}$$
and a uniform foreground screen model (UFS) we derive the extinction map shown in Figure 4. The extinction towards the southern and northern nucleus is $`\mathrm{A}_\mathrm{V}^\mathrm{S}=5.8`$ and $`\mathrm{A}_\mathrm{V}^\mathrm{N}=1.6`$, respectively. The peak extinction is $`\mathrm{A}_\mathrm{V}=7.2\pm 0.7`$. For a mixed model, where absorbing dust and stellar emission are completely mixed, the peak value is $`\mathrm{A}_\mathrm{V}=18.4_{-1.9}^{+3.1}`$. The morphology of the extinction map, with a single peak between the continuum nuclei, indicates a dust concentration between the nuclei that is coincident with the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ and $`\mathrm{CO}\mathrm{J}=2–1`$ emission peaks.
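For the uniform foreground screen, the extinction follows directly from the ratio of the observed to the template continuum slope. The following sketch (Python; the wavelength pair and the variable names are illustrative assumptions, not the exact procedure used for Figure 4) inverts the Draine (1989) law quoted above:

```python
import math

def a_lambda_over_av(lam_um):
    # Draine (1989) extinction law from the equation above.
    return 0.351 * lam_um**-1.75

def av_ufs(r_obs, r_template, lam1=2.1, lam2=2.4):
    # Visual extinction for a uniform foreground screen.
    # r_obs:      observed flux ratio F(lam2)/F(lam1) of a galaxy pixel
    # r_template: the same ratio for the unreddened stellar template
    d_ext = a_lambda_over_av(lam2) - a_lambda_over_av(lam1)
    return -2.5 * math.log10(r_obs / r_template) / d_ext
```

A pixel that is redder than the template (r_obs > r_template) yields a positive $`\mathrm{A}_\mathrm{V}`$, as expected.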
## 4 The Infrared Nuclei
### 4.1 The Nature of the K-band Continuum
To determine the stellar type dominating the near-infrared light in starburst galaxies, the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead is an often used spectral absorption feature. For late-type giants and supergiants this absorption feature is rather deep, and even with data of medium spectral resolution the equivalent width can be used to determine the stellar type. There is an ambiguity in the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ equivalent width between KI and MIII stars, however. From the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead equivalent width the spectrum of NGC 6240 can be explained either by a population of red giants or by red supergiants. Because of the high spectral resolution of our data, however, we can resolve the ambiguity by a direct comparison of the spectrum of NGC 6240 with both stellar types. Figure 5 compares the spectrum of the CO absorption bandheads of the southern nucleus of NGC 6240 with four stellar spectra of giants and supergiants also observed with 3D at the same spectral resolution (Schreiber, 1999). For the comparison the spectrum of NGC 6240 was shifted into the restframe of zero redshift, and the stellar spectra were broadened with a Gaussian velocity profile of $`600\mathrm{km}\mathrm{s}^{-1}`$ FWHM. Clearly the K and M giant spectra do not fit the deep absorption features and the continuum slope. On the other hand the absorption bandheads of an M3+ supergiant are too deep to represent the NGC 6240 spectrum. The best matching stellar spectrum is that of a K4.5 supergiant. However, its CO absorption bandheads are still somewhat too shallow, and we conclude that the near-infrared light is due to late K (K5) or early M (M0/M1) supergiants. Our conclusions are consistent with those of Sugai et al. (1997), whose data have lower spectral resolution but cover a wider spectral range. The fact that the K-band light of the nuclei is dominated by red supergiants indicates a starburst as the source of the K-band luminosity. The presence of supergiants implies that the starburst was triggered quite recently.
### 4.2 Starburst Simulations
To further constrain the age and other characteristics of the starburst we used the program STARS (Schreiber, 1999) to simulate several starburst scenarios. In Figure 6 we show model predictions for the variations of the Brackett $`\gamma `$ equivalent width, the K-band luminosity to stellar mass ratio ($`L_\mathrm{K}/\mathrm{M}_{\ast }`$) and the starburst bolometric luminosity to K-band luminosity ratio ($`L_{\mathrm{bol}}^{\ast }/L_\mathrm{K}`$) for four burst durations: $`1`$, $`5`$ and $`20`$ million years as well as continuous star formation. We adopted a Salpeter (1955) initial mass function (IMF) for masses between $`100\mathrm{M}_{\odot }`$ and $`1\mathrm{M}_{\odot }`$. The calculated luminosity output of the starburst is insensitive to the shape of the IMF at the low mass end ($`<1\mathrm{M}_{\odot }`$). The low mass stars do contribute to the total mass of the stars formed in the starburst, however. To take this into account, we integrated a Miller-Scalo IMF (Miller & Scalo, 1979) from $`1\mathrm{M}_{\odot }`$ down to $`0.08\mathrm{M}_{\odot }`$, the lower mass limit for hydrogen burning. Because stars with masses $`>25\mathrm{M}_{\odot }`$ do not evolve into red supergiants, the K-band luminosity does not depend on the choice of the upper mass cutoff, and due to their small number, their contribution to the total mass is negligible.
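The sensitivity of the total stellar mass to the lower mass cutoff can be illustrated with a simple IMF integral. The sketch below (Python) uses a pure Salpeter IMF down to $`0.08\mathrm{M}_{\odot }`$ for simplicity; since the text instead switches to a Miller-Scalo IMF below $`1\mathrm{M}_{\odot }`$, these percentages differ from the $`12`$–$`39\%`$ quoted in §4.4:

```python
def salpeter_mass(m_lo, m_hi, alpha=2.35):
    # Total mass in an IMF dN/dm ~ m^-alpha between m_lo and m_hi
    # (arbitrary normalization; only mass ratios are meaningful).
    p = 2.0 - alpha
    return (m_hi**p - m_lo**p) / p

total = salpeter_mass(0.08, 100.0)
for cutoff in (0.25, 0.5, 1.0):
    lost = 1.0 - salpeter_mass(cutoff, 100.0) / total
    print(f"cutoff {cutoff:4.2f} Msun: total mass smaller by {100*lost:.0f}%")
```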
### 4.3 Starburst Age and Duration
The $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead equivalent width (or better, the presence of late type supergiants) allows us to determine the age of the starbursts in NGC 6240. The late K or early M supergiants that dominate the K-band light of the nuclei have masses from $`10`$ to $`20\mathrm{M}_{\odot }`$ and a typical age of $`15`$ to $`25`$ million years. Since red supergiants dominate the K-band luminosity only during this period, the starburst must be of similar age. This age is shown in Figure 6 as a vertical hatched bar.
The Brackett $`\gamma `$ emission line equivalent width allows us to constrain the duration of the starburst activity. The agreement of the Brackett $`\gamma `$ and K-band continuum morphologies indicates that both the line and continuum emission originate in the same star forming regions. The $`\mathrm{H}_2`$ emission can be explained by excitation in slow C-shocks (Van der Werf et al., 1993; Sugai et al., 1997; Egami, 1998). Since the morphology of the $`\mathrm{H}_2`$ emission differs from that of the Brackett $`\gamma `$ emission, shocks most likely do not contribute to the Brackett $`\gamma `$ emission. In a starburst, only stars with masses $`>20`$–$`30\mathrm{M}_{\odot }`$ ionise their surrounding medium to produce Brackett $`\gamma `$ emission, but due to their high mass their lifetime is very short ($`<10`$ million years). The low Brackett $`\gamma `$ equivalent width in NGC 6240 indicates a low number of hot, young stars, either because the starburst is aging or because stars with masses $`>20\mathrm{M}_{\odot }`$ were never formed. We prefer the former explanation because of the signatures of $`50`$ to $`100\mathrm{M}_{\odot }`$ stars in nearby starburst templates (see Thornley et al., 1999).
The middle panel of Figure 6 shows that the starburst duration is $`<5`$ million years. The short starburst duration can be explained by strong negative feedback effects from the starburst itself. The onset of vigorous star formation produces young, massive stars and, thus, supernovae in a very short time ($`5`$ million years). The winds of these stars deplete the molecular gas and the star formation subsides. The existence of a superwind in NGC 6240 was shown by Heckman et al. (1990) from $`\mathrm{H}\alpha `$ line mapping and spectroscopy.
The duration of the starbursts is much shorter than the age of the starburst but comparable to the dynamical time scale of $`7`$ million years of the two rotating nuclei (see §5.1). The time scale for the interaction and merging of the two galaxies is several hundred million years and hence much larger than both the starburst duration *and* the starburst age. Such short but violent star formation events are predicted by models of interacting and merging galaxies, where they are triggered by the close encounters of the interacting partners (Mihos & Hernquist, 1996).
### 4.4 Stellar Mass in the Starburst
The light-to-mass ratio ($`L_\mathrm{K}/\mathrm{M}_{\ast }`$) of the starburst can be calculated from simulations of the K-band luminosity and the total mass of stars formed in the starburst. The middle panel of Figure 6 shows $`L_\mathrm{K}/\mathrm{M}_{\ast }`$ as a function of starburst age. It peaks at the time when the red supergiants dominate the K-band luminosity. For the starburst age in NGC 6240 the simulation yields $`1\lesssim L_\mathrm{K}/\mathrm{M}_{\ast }\lesssim 3`$. From the dereddened K-band luminosities within 1″ in Table 1, a stellar mass $`\mathrm{M}_{\ast }=0.4`$–$`1.2\times 10^8\mathrm{M}_{\odot }`$ and $`\mathrm{M}_{\ast }=0.8`$–$`2.3\times 10^8\mathrm{M}_{\odot }`$ is derived for the northern and southern nucleus, respectively. Here the lower mass cutoff was assumed to be $`0.08\mathrm{M}_{\odot }`$. If the lower mass cutoff were $`0.25\mathrm{M}_{\odot }`$, $`0.5\mathrm{M}_{\odot }`$ or $`1\mathrm{M}_{\odot }`$, the estimated stellar mass would be smaller by $`12\%`$, $`23\%`$ or $`39\%`$, respectively. This is the mass of young stars formed in the starburst contained within the central $`500\mathrm{pc}`$ of the two nuclei. It shows that the two nuclei are massive objects and, based on the mass of young stars alone, could possibly be the nuclei of two galaxies involved in a collision.
### 4.5 The Luminosity of the Nuclei
The bottom panel in Figure 6 shows the starburst bolometric luminosity to K-band luminosity ratio ($`L_{\mathrm{bol}}^{\ast }/L_\mathrm{K}`$) as a function of starburst age. For a starburst age of $`15`$ to $`25`$ million years this ratio is $`\approx 100`$. From the dereddened K-band luminosity within a 2″ diameter aperture on each nucleus (see Table 1) this ratio yields $`L_{\mathrm{bol}}^{\ast }\approx 0.7\times 10^{11}\mathrm{L}_{\odot }`$ and $`L_{\mathrm{bol}}^{\ast }\approx 1.7\times 10^{11}\mathrm{L}_{\odot }`$ for the northern and southern nucleus, respectively, totaling $`L_{\mathrm{bol}}^{\ast }\approx 2.4\times 10^{11}\mathrm{L}_{\odot }`$. This is ⅓ to ½ of the entire bolometric luminosity $`L_{\mathrm{bol}}\approx 6\times 10^{11}\mathrm{L}_{\odot }`$ of NGC 6240 in the IRAS beam. Within a 5″ diameter aperture the total K-band luminosity $`L_\mathrm{K}=6.1\times 10^9\mathrm{L}_{\odot }`$ is $`\approx \frac{1}{100}`$ of the total bolometric luminosity. In the bottom panel of Figure 6 this ratio is shown as a horizontal, hatched bar. Within the errors of the simulations the entire bolometric luminosity of NGC 6240 can be explained by the starburst.
The claim that the starbursts in the nuclei contribute significantly to the bolometric luminosity of NGC 6240 is further strengthened by radio observations. The radio flux and the far infrared (FIR) luminosity of NGC 6240 follow the radio-to-FIR correlation for starburst galaxies. Lisenfeld, Voelk & Xu (1996) determined empirically the parameter
$$\mathrm{q}_{2.4\mathrm{GHz}}=\mathrm{log}\left(\frac{S_{\mathrm{FIR}}}{3.75\times 10^{12}S_{2.4\mathrm{GHz}}}\right)$$
relating the radio power to the FIR luminosity. For starburst galaxies $`\mathrm{q}_{2.4\mathrm{GHz}}=2.40\pm 0.22`$. Taking the radio power $`S_{2.4\mathrm{GHz}}=1.7\times 10^{23}\mathrm{W}\mathrm{Hz}^{-1}`$ at $`\nu =2.4\mathrm{GHz}`$ for the nuclear region including both nuclei, and comparing it with the FIR luminosity $`S_{\mathrm{FIR}}=1.9`$–$`2.3\times 10^{38}\mathrm{W}`$, yields for NGC 6240 $`\mathrm{q}_{2.4\mathrm{GHz}}=2.47`$–$`2.55`$, within the uncertainties of the radio-to-FIR correlation. If we include the radio emission from the western region (region W in Colbert et al., 1994), which has no counterpart in the visible or infrared, we get $`\mathrm{q}_{2.4\mathrm{GHz}}=2.31`$–$`2.39`$, also within the uncertainties of the q-parameter for starburst galaxies.
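The q-parameter is a one-line computation; the sketch below (Python) reproduces the quoted range from the radio power and the FIR luminosities given in the text:

```python
import math

def q_24(s_fir_watt, s_radio_watt_per_hz):
    # Radio-to-FIR parameter of Lisenfeld, Voelk & Xu (1996).
    return math.log10(s_fir_watt / (3.75e12 * s_radio_watt_per_hz))

for s_fir in (1.9e38, 2.3e38):            # FIR luminosity range in W
    print(f"S_FIR = {s_fir:.1e} W -> q = {q_24(s_fir, 1.7e23):.2f}")
```

This returns q = 2.47 and 2.56, matching the quoted 2.47–2.55 up to rounding.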
X-ray observations and the detection of $`25.9\mathrm{\mu m}`$ \[O IV\] emission indicate the presence of a powerful AGN in NGC 6240 as well (Schulz et al., 1998; Komossa, Schulz & Greiner, 1998; Iwasawa & Comastri, 1998; Genzel et al., 1998; Vignati et al., 1999). In fact, Vignati et al. (1999) conclude from their *Beppo*SAX data that the highly absorbed ($`\mathrm{N}_\mathrm{H}\approx 2\times 10^{24}\mathrm{cm}^{-2}`$) AGN can account for most of the bolometric luminosity of NGC 6240. These two conclusions seem to contradict each other. However, extrapolating from near/mid-infrared spectroscopy and hard X-ray photometry to a bolometric luminosity, and the possibility of a significant correction of the observed luminosity for anisotropy, introduce uncertainties of factors of $`\approx 2`$. We conclude that NGC 6240 is a composite object where both star formation and AGN play a role.
At the position of the K-band nuclei Brackett $`\gamma `$ emission is also observed; however, very little continuum emission is detected at the position of the north-west extension of the Brackett $`\gamma `$ emission. This could be an indication that the Brackett $`\gamma `$ emission arises from a very young starburst that has not yet formed enough stars to contribute to the continuum emission, but whose hot, young stars have already ionised the interstellar medium (ISM). If true, we can estimate the bolometric luminosity of this emission region using the radio-to-FIR correlation and get $`S_{\mathrm{FIR}}=0.4\times 10^{11}\mathrm{L}_{\odot }`$. This is about half the bolometric luminosity of the northern nucleus and $`\approx 15\%`$ of the total bolometric luminosity of NGC 6240, a non-negligible contribution. Colbert et al. (1994) have interpreted their third radio peak N3, based on its steeper radio spectrum, as a clump of electrons driven away from the nucleus by a superwind. Given the Brackett $`\gamma `$ emission we think this explanation is less likely and favour instead that N3 is a young starburst region.
## 5 Stellar Kinematics
In the near-infrared wavelength range from $`2`$ to $`2.5\mathrm{\mu m}`$, stellar kinematics can be determined from absorption features of Ca, Na and CO. By far the strongest absorption feature is the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead at $`2.29\mathrm{\mu m}`$. It has a very sharp blue edge which is very sensitive to stellar motions. Another advantage is that the extinction in the K-band is only $`\approx 10\%`$ of the visual extinction. The $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead is, therefore, very well suited for the determination of the stellar kinematics in gas- and dust-rich infrared galaxies.
We have derived the velocity dispersions and radial velocities of the nuclei of NGC 6240 by applying a modified Fourier correlation quotient (FCQ) method (Bender, 1990; Anders, 1999) and a direct fitting method to the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ absorption bandhead data. In the FCQ method the cross-correlation of the template spectrum with the galaxy spectrum and the auto-correlation of the template spectrum are computed in the Fourier domain. Then the quotient of the cross-correlation and the auto-correlation is calculated. To suppress high-frequency noise a Wiener filter is applied to the correlation quotient. Fourier transformation of the correlation quotient finally yields the line-of-sight velocity distribution in the galaxy spectrum. In the direct fitting method a stellar template spectrum is convolved with a Gaussian broadening function, parameterised by a radial velocity and a velocity dispersion. With a least-$`\chi ^2`$ fitting technique the best fitting set of parameters is computed. To derive the kinematic parameters with the FCQ method the deconvolved velocity profile is fit with a Gaussian profile. For the template spectra used in both methods we chose a K4.5 supergiant spectrum from the 3D stellar library (Schreiber, 1999). While the FCQ method is rather insensitive to template mismatch, the results from the direct fitting method depend strongly on the choice of the stellar template spectrum. We find that the best choice for the stellar template is a late K or an early M supergiant (see also §4.1). The errors of both the FCQ and the direct fitting method were determined by modeling galaxy spectra from stellar spectra with different noise levels and attempting to recover the input parameters. With the direct fitting method a $`\chi _{\mathrm{red}}^2`$-analysis was used as an independent way to determine the fit error.
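A minimal version of the direct fitting method can be sketched as follows (Python/NumPy/SciPy; the array names, the assumption of a logarithmic wavelength grid and the optimizer choice are illustrative, and both spectra are assumed continuum-normalized):

```python
import numpy as np
from scipy.optimize import minimize

C_KMS = 2.998e5  # speed of light in km/s

def broaden(template, wave, v, sigma):
    # Convolve a stellar template with a Gaussian LOSVD of mean velocity v
    # and dispersion sigma (km/s).  On a logarithmic wavelength grid a
    # velocity shift is a constant shift in pixels.
    ln_step = np.log(wave[1] / wave[0])
    pix = np.arange(-50, 51)
    mu, sig = (v / C_KMS) / ln_step, (sigma / C_KMS) / ln_step
    kernel = np.exp(-0.5 * ((pix - mu) / sig) ** 2)
    return np.convolve(template, kernel / kernel.sum(), mode="same")

def fit_kinematics(galaxy, template, wave, noise):
    # Least-chi^2 fit of (v, sigma) in km/s.
    def chi2(p):
        return np.sum(((galaxy - broaden(template, wave, *p)) / noise) ** 2)
    return minimize(chi2, x0=[0.0, 200.0], method="Nelder-Mead").x
```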
### 5.1 Stellar Mean Velocity Field: Rotation and Dynamical Mass
Because of the sharp-edged profile of the CO absorption bandhead we were able to determine the stellar velocity field for the entire central region of NGC 6240 with the direct fitting method. The resulting velocity field is shown in contour form in Figure 7, superposed on the NICMOS K-band HST archival image (PI: N. Scoville, proposal ID: 7219). Velocity gradients across both nuclei are clearly recognizable. The velocity error at the position of the southern nucleus is $`\pm 18\mathrm{km}\mathrm{s}^{-1}`$ and increases to $`\pm 50\mathrm{km}\mathrm{s}^{-1}`$ at a distance of $`0\stackrel{}{\mathrm{.}}6`$; for the northern nucleus the corresponding values are $`\pm 60\mathrm{km}\mathrm{s}^{-1}`$ and $`\pm 85\mathrm{km}\mathrm{s}^{-1}`$ at a distance of $`0\stackrel{}{\mathrm{.}}5`$. Along the edge of the velocity field shown in Figure 7 the uncertainty is $`\pm 150\mathrm{km}\mathrm{s}^{-1}`$.
The southern nucleus shows a velocity gradient at a position angle of $`34\mathrm{°}`$ west of the north-south axis, with the south-east of the nucleus being redshifted. The velocity gradient of the northern nucleus has a position angle of $`41\mathrm{°}`$ east of the north-south axis. The north-east of the northern nucleus is redshifted with respect to the center. The maximum relative velocity shift from the center along the velocity gradient within 1″ is $`\pm 165\mathrm{km}\mathrm{s}^{-1}`$ and $`\pm 170\mathrm{km}\mathrm{s}^{-1}`$ for the southern and northern nucleus, respectively. The agreement of the K-band morphology from NICMOS with the kinematical morphology of the stellar velocity field is remarkable. The two nuclei are elongated along the velocity gradient in the velocity field, indicating individual inclined rotating disks in the nuclei. The northern nucleus is redshifted with respect to the southern nucleus by $`\approx 50\mathrm{km}\mathrm{s}^{-1}`$. This value is smaller than the value measured in the Brackett $`\gamma `$ line and that measured by Fried & Ulrich (1985), but consistent within the uncertainties of our measurement.
The stellar velocity field is surprisingly different from the velocity field of the molecular gas. The gas forms a rotating disk *between* the two nuclei (Tacconi et al., 1999) with an axis of rotation not aligned with either of the stellar disks in the nuclei. This implies a decoupling of the stellar motions from the gas motions, most likely caused by the tidal forces of the interaction.
The dynamical mass of the NGC 6240 nuclei can be determined from the stellar velocity field. Under the assumption that the stars move on circular orbits around the center of each nucleus, the dynamical mass within the radius $`R`$ for an inclination-corrected rotation velocity $`v_{\mathrm{rot}}`$ is given by
$$\mathrm{M}_{\mathrm{dyn}}=2.3\times 10^2\left(\frac{R}{\mathrm{pc}}\right)\left(\frac{v_{\mathrm{rot}}}{\mathrm{km}\mathrm{s}^{-1}}\right)^2\mathrm{M}_{\odot }.$$
(1)
The measured recessional velocities for the two nuclei are shown in Figure 8. We fitted a model rotation curve to the data with a solid body ($`v\propto R`$) part for radii $`R\lesssim 100\mathrm{pc}`$ and a flat part at a rotation velocity $`v_{\mathrm{rot}}`$ for larger radii. The model disk was inclined towards the line of sight by an angle of $`45\mathrm{°}`$ and $`60\mathrm{°}`$ for the northern and southern nucleus, respectively. The inclination angles were determined by fitting ellipses to the isophotes of the NICMOS H-band image. The model disk was convolved with a Gaussian profile of $`0\stackrel{}{\mathrm{.}}8`$ FWHM, corresponding to the average observing conditions. For the assumed inclinations the best fit of the rotation curve is achieved with $`v_{\mathrm{rot}}=270\pm 90\mathrm{km}\mathrm{s}^{-1}`$ for the southern nucleus and $`v_{\mathrm{rot}}=360\pm 195\mathrm{km}\mathrm{s}^{-1}`$ for the northern nucleus. Even for an inclination of $`90\mathrm{°}`$ the rotation velocities are $`180\pm 90\mathrm{km}\mathrm{s}^{-1}`$ and $`200\pm 155\mathrm{km}\mathrm{s}^{-1}`$, respectively. Even assuming this extreme inclination angle, the dynamical mass within the central $`500\mathrm{pc}`$ of the northern nucleus is $`2.4\times 10^9\mathrm{M}_{\odot }`$, and $`1.9\times 10^9\mathrm{M}_{\odot }`$ for the southern nucleus. Such large masses can only be explained if the observed K-band nuclei are indeed the true nuclei of the two merging progenitor galaxies. Given the likely inclinations indicated by the NICMOS images, the true masses are likely even factors of $`2`$ higher. Table 2 lists the derived dynamical masses of both nuclei for an inclination of $`45\mathrm{°}`$ and $`60\mathrm{°}`$ for the northern and southern nucleus, respectively.
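Equation (1) can be applied directly to the edge-on velocities quoted above. The sketch below (Python; it assumes that "within $`500\mathrm{pc}`$" refers to a radius of $`250\mathrm{pc}`$) reproduces the quoted lower limits:

```python
import math

def m_dyn(radius_pc, v_obs_kms, inclination_deg=90.0):
    # Dynamical mass in solar masses from Eq. (1); the observed velocity
    # is deprojected with the disk inclination (90 deg = edge-on).
    v_rot = v_obs_kms / math.sin(math.radians(inclination_deg))
    return 2.3e2 * radius_pc * v_rot**2

print(f"north: {m_dyn(250, 200):.1e} Msun")   # ~2.3e9, close to the quoted 2.4e9
print(f"south: {m_dyn(250, 180):.1e} Msun")   # ~1.9e9, as quoted
```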
We also list in Table 2 the ratio of the dynamical mass to the stellar mass content of the starburst derived from the light-to-mass ratio. The dynamical mass exceeds the visible mass by about an order of magnitude. The missing mass cannot be explained by a contribution of molecular gas at the position of the nuclei. The molecular gas is concentrated between the nuclei, with a relatively small mass at the position of the nuclei (Tacconi et al., 1999). From the millimeter $`\mathrm{CO}\mathrm{J}=2–1`$ line flux within a 1″ diameter aperture on the nuclei an upper limit for the cold molecular gas mass in the nuclei can be determined. For the northern and southern nucleus this upper limit is $`0.2`$ and $`0.25`$, respectively, of the gas mass within the central 2″ of the CO-disk. Taking $`2\times 10^9\mathrm{M}_{\odot }`$ for the mass of the gas concentration (Tacconi et al., 1999), the contribution from the cold molecular gas to the total mass of the nuclei is $`4`$–$`5\times 10^8\mathrm{M}_{\odot }`$, much smaller than the missing mass. The contribution from a dark matter halo to the bulge masses cannot be determined from our data. In normal spiral galaxies this contribution is negligible. The only remaining likely source of the missing mass is the old stellar population of the progenitor galaxies themselves. The large number of faint stars could add up to a significant mass, yet their luminosity would be negligible compared to the red supergiants of the recent starburst. We conclude that the two K-band peaks are indeed the central bulges of the progenitor galaxies and that these bulges are fairly massive, in accordance with the predictions of Mihos & Hernquist (1996) and Barnes & Hernquist (1996).
### 5.2 Stellar Velocity Dispersion: Mass Concentration between the Nuclei
While the sharp edge of the $`\mathrm{CO}\mathrm{\hspace{0.25em}2}–0`$ bandhead allows a redshift determination even for data with a low signal-to-noise ratio, we cannot determine the velocity dispersion from such data. We have, therefore, completed the analysis of the spatial variation of the velocity dispersion with integrated spectra for several apertures centered on the nuclear region of NGC 6240. To determine the true velocity dispersion we first subtracted the stellar rotation component as derived above. Within the selected apertures we corrected the data cube for the stellar rotational velocity field by shifting the spectra of all spatial pixels to the same restframe. Figure 9 shows the distribution of the velocity dispersion over the nuclear region of NGC 6240. We show the calculated velocity dispersion for four apertures: two centered on the nuclei; one centered between the nuclei; and one south of the southern nucleus. The error bars along the declination axis denote the width of the apertures over which the spectrum was integrated. The aperture width in right ascension is 0.8″. Figure 9 shows that the velocity dispersion peaks *between* the two nuclei at $`\sigma =276\pm 51\mathrm{km}\mathrm{s}^{-1}`$, while the velocity dispersion at the northern and southern nucleus is $`\sigma =174\pm 54\mathrm{km}\mathrm{s}^{-1}`$ and $`\sigma =236\pm 24\mathrm{km}\mathrm{s}^{-1}`$, respectively. Because this result is derived from the velocity-field-corrected data cube, the broad velocity profile cannot be due to the overlapping of two velocity profiles from components separated by scales larger than the spatial resolution of our observations, that is $`350`$ to $`400\mathrm{pc}`$.
Replacing $`v_{\mathrm{rot}}`$ with the velocity dispersion $`\sigma `$, equation (1) can also be used to estimate the mass $`\mathrm{M}=2.3\times 10^2R_{\mathrm{pc}}\sigma _{\mathrm{km}\mathrm{s}^{-1}}^2\mathrm{M}_{\odot }`$ within the radius $`R`$. This is a very simplistic form of the Jeans equation (Binney & Tremaine, 1987) where only the isotropic part is considered and numerical factors from the space distribution of the stars and the spatial dependence of rotation and velocity dispersion are neglected (thus likely resulting in an underestimate of the dynamical mass). Following this argument, the peak of the velocity dispersion indicates a mass concentration between the two nuclei. We get masses for the northern and southern nucleus of $`0.8\pm 0.5\times 10^9\mathrm{M}_{\odot }`$ and $`1.8\pm 0.4\times 10^9\mathrm{M}_{\odot }`$, respectively. Within the uncertainties this is consistent with the dynamical masses derived above. The mass between the nuclei is $`2.1\pm 0.8\times 10^9\mathrm{M}_{\odot }`$. The values of the masses determined in this way are somewhat questionable, since the Jeans equation applies to relaxed systems, a requirement that is not fulfilled in NGC 6240. At the peak of the stellar velocity dispersion little stellar continuum is detected. The low continuum flux between the nuclei cannot be explained by extinction which, although it peaks between the nuclei, is too low to explain a hidden concentration of stars. On the other hand, Tacconi et al. (1999) find a concentration of cold molecular gas of $`\approx 2\times 10^9\mathrm{M}_{\odot }`$ centered within $`1\mathrm{kpc}`$ between the nuclei. It is thus very likely that the peak of the stellar velocity dispersion is caused by this molecular gas concentration. Our extinction and $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ maps also indicate a corresponding peak of dust and hot $`\mathrm{H}_2`$.
## 6 Molecular Gas
### 6.1 Shock Excited $`\mathrm{H}_2`$ Emission
From an analysis of the near-infrared $`\mathrm{H}_2`$ line ratios, Van der Werf (1996) and Sugai et al. (1997) conclude that $`\mathrm{H}_2`$ is thermally excited with an excitation temperature of $`\approx 2000\mathrm{K}`$. The four $`\mathrm{H}_2`$ lines we observed are also consistent with this result. With the Infrared Space Observatory (ISO), pure rotational $`\mathrm{H}_2`$ lines were detected for the first time in NGC 6240 (Lutz et al., 1996; Egami, 1998; Rigopoulou et al., 1999). The analysis of their line ratios yields an excitation temperature $`<400\mathrm{K}`$. Van der Werf (1996), Sugai et al. (1997) and Egami (1998) all find a slow continuous shock (C-shock) as the best fitting model for the $`\mathrm{H}_2`$ excitation. Fast shocks would lead to a discontinuous or jump shock (J-shock). Due to the discontinuity, J-shocks ionise the interstellar medium and hence emission from ionised elements is expected. The low Brackett $`\gamma `$ to $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line ratio therefore strongly indicates that the shock is non-dissociative. The agreement of the Brackett $`\gamma `$ and continuum morphology suggests the starburst as the source of the Brackett $`\gamma `$ emission.
In the C-shock models the shock has a velocity of $`\approx 40\mathrm{km}\mathrm{s}^{-1}`$, yet the observed $`\mathrm{H}_2`$ line widths are $`\mathrm{\Delta }v\approx 550\mathrm{km}\mathrm{s}^{-1}`$. The large line widths are most likely a superposition of several narrower $`\mathrm{H}_2`$ lines with different radial velocities along the line of sight. If we consider the distribution of molecular gas in the form of dense clouds within a less dense intercloud medium, we can explain the slow shock speed. The collisions between molecular clouds occur at a variety of angles, and the shock speed therefore reflects only the projected radial velocity difference. But even for a direct collision, the shock speed in the denser cloud is smaller than in the lower density intercloud medium. An originally fast shock propagating in the intercloud medium will propagate with a slower shock speed when it enters a medium of higher density such as a molecular cloud. It is these secondary slow shocks in the dense clouds that we are observing in the near-infrared ro-vibrational $`\mathrm{H}_2`$ lines.
### 6.2 Cold and Hot Molecular Gas
The gas motions in the central region of NGC 6240 are highly turbulent. Tacconi et al. (1999) find $`\mathrm{CO}\mathrm{J}=2–1`$ line widths of up to $`400\mathrm{km}\mathrm{s}^{-1}`$ FWHM (and $`1000\mathrm{km}\mathrm{s}^{-1}`$ FWZP), and their channel maps show filaments extending from a central CO-disk out to $`2\mathrm{kpc}`$.
A comparison of Figure 10 with the channel maps of the $`\mathrm{CO}\mathrm{J}=2–1`$ line (Tacconi et al., 1999, Figure 2) indicates that the cold molecular gas ($`\mathrm{CO}\mathrm{J}=2–1`$) and the hot, shock-heated $`\mathrm{H}_2`$ have a very similar overall morphology. Local differences between the $`\mathrm{CO}\mathrm{J}=2–1`$ and $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ channel maps are apparent, however. The most obvious difference is in the central region between the nuclei. The hot gas is more extended than the cold gas, and a disk structure as seen in the $`\mathrm{CO}\mathrm{J}=2–1`$ line is not visible. In addition, the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line is $`150\mathrm{km}\mathrm{s}^{-1}`$ to $`250\mathrm{km}\mathrm{s}^{-1}`$ broader than the $`\mathrm{CO}\mathrm{J}=2–1`$ line. Both effects can be explained by a different distribution of the cold and hot gas. Roughly half of the cold gas has already settled in the center between the nuclei, while the shock-heated interstellar medium of the two galaxies is still more extended. The $`\mathrm{H}_2`$ emission is excited on the surfaces of molecular clouds, while the $`\mathrm{CO}\mathrm{J}=2–1`$ emission originates more in the volume of the molecular cloud. Because the $`\mathrm{H}_2`$ emission samples a larger volume than the $`\mathrm{CO}\mathrm{J}=2–1`$ emission, it also samples a larger range in radial velocities, and hence we observe larger line widths for the $`\mathrm{H}_2`$ lines. The broader $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line profiles are not symmetric about the peak of the $`\mathrm{CO}\mathrm{J}=2–1`$ lines but are blueshifted by $`150\mathrm{km}\mathrm{s}^{-1}`$.
Outside the nuclear region of NGC 6240, the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ and $`\mathrm{CO}\mathrm{J}=2–1`$ emission is dominated by a filamentary structure. The filaments in the south-east and south-west are the most prominent and show similar morphology and kinematics in the $`\mathrm{CO}\mathrm{J}=2–1`$ and $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ lines. The weaker filaments north of the nuclei also show this similarity. They are most likely gas flows towards the centers of the nuclei.
A comparison of the velocity profiles of the $`\mathrm{CO}\mathrm{J}=2–1`$ and $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ lines is shown in Figure 3. From the line-of-sight velocity profiles in Figure 3 several kinematical components of the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line are visible. They vary over the field, especially at the positions where the filaments extend from the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ emission peak. North-east of the northern nucleus two components are recognizable with a velocity difference of $`250\mathrm{km}\mathrm{s}^{-1}`$. The filament in the south-west also shows two major components, with a velocity difference of $`200\mathrm{km}\mathrm{s}^{-1}`$. The $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line exhibits this multi-component character over the entire emission region of NGC 6240.
The radial velocity difference between the northern and southern nuclei is $`150\mathrm{km}\mathrm{s}^{-1}`$ in the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line and $`100\mathrm{km}\mathrm{s}^{-1}`$ in the $`\mathrm{CO}\mathrm{J}=2–1`$ line. This is similar to the radial velocity difference of the nuclei measured in the Brackett $`\gamma `$ line. Globally, the molecular gas appears to follow the motion of the two nuclei. As with the cold gas, rotation of the hot gas seems the best explanation for the velocity gradient of the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ line. The velocity gradient peaks between the nuclei at the position of the $`\mathrm{CO}\mathrm{J}=2–1`$ emission peak. At this position the velocity dispersion of the $`\mathrm{H}_2v=1–0\mathrm{S}(1)`$ emission also reaches its maximum of $`240\mathrm{km}\mathrm{s}^{-1}`$.
The central disk of cold molecular gas has a rotation axis which is not correlated with the rotation axis of either nucleus. Rather the sense of the rotation corresponds to the relative motions of the nuclei. The CO-disk seems to retain the memory of the orbital history of the interaction and is independent of the gravitational forces exerted by the nuclei.
## 7 Interaction and Merging
We have shown that NGC 6240 is a merging system of two galaxies with massive bulges. In the following sections, results from our observations are combined with published results to draw a more detailed picture of the interaction in NGC 6240.
### 7.1 Prograde Encounter?
The rotation velocities of both nuclei, derived in §5.1, are much larger than the relative velocity difference between the two nuclei. Assuming circular orbits for the nuclei, the small radial velocity difference and projected distance between the nuclei yield a small projected orbital angular momentum. This means that the two galaxies either have lost a major fraction of their original orbital angular momentum in the interaction, or that the true orbital angular momentum is much larger than the projected value. A larger true orbital angular momentum means that the actual distance between the two nuclei is larger than the measured, projected separation. On the other hand, during the interaction of two galaxies their orbits can vary, and the nuclei might be on radial rather than circular orbits. In this case their orbital angular momentum could still be large. The exact trajectories of the galaxies depend on the initial conditions of the interaction.
Because we can only observe projected angular momenta, the exact orientation of the disks with respect to the orbital plane cannot be determined. A schematic view of the collision geometry is shown in Figure 11. The projected spin angular momentum of the northern nucleus is almost parallel to the system's projected orbital angular momentum, while the projected spin angular momentum of the southern nucleus is nearly normal to it. If the projected angular momenta are the true angular momenta, one partner in the collision, now seen as the northern nucleus, was subject to a prograde encounter, while the other partner is inclined with respect to the orbital plane. For the motion of the two nuclei around each other we assume a circular orbit whose position angle and inclination are equal to those of the rotating CO-disk between the nuclei (Tacconi et al., 1999). Under this assumption, and with a radial velocity difference of $`150\mathrm{km}\mathrm{s}^{-1}`$, the true distance between the two nuclei can be calculated. With a position angle of the CO-disk of $`40\mathrm{°}`$ and an inclination of $`75\mathrm{°}`$, the true separation of the two nuclei is $`1.4\mathrm{kpc}`$ and their orbital velocity is $`155\mathrm{km}\mathrm{s}^{-1}`$, with an orbital period of $`27`$ million years. Assuming a simple two-body problem with equal masses of $`2\times 10^9\mathrm{M}_{\odot }`$, this orbit is unstable, the gravitational force being $`\frac{1}{8}`$ of the centrifugal force. This means that the two nuclei will separate in the future and are probably already past the pericenter. Whether the system is bound can be estimated from the escape velocity. We compute an escape velocity of $`110\mathrm{km}\mathrm{s}^{-1}`$, which is of the same order as the radial velocity difference of the two nuclei. From the radial velocity difference alone the system seems to be unbound. However, if the nuclei separate again they will see not only the mass of the other nucleus but also the halo of the other galaxy. Conversion of orbital angular momentum into spin angular momentum in the halo leads to gravitational braking; the galaxies reach their apocenter and fall back together again (Barnes & Hernquist, 1996; Mihos & Hernquist, 1996). It therefore seems likely that NGC 6240 has recently undergone an encounter and is awaiting its next one.
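The two-body consistency checks quoted above can be reproduced from the deprojected numbers. The sketch below (Python; $`G`$ in units of $`\mathrm{pc}\mathrm{M}_{\odot }^{-1}(\mathrm{km}\mathrm{s}^{-1})^2`$, equal point masses on a circular orbit assumed) recovers the force ratio, the orbital period and the escape velocity:

```python
import math

G = 4.301e-3            # gravitational constant in pc Msun^-1 (km/s)^2
KM_PER_PC = 3.086e13
SEC_PER_MYR = 3.156e13

def two_body(separation_pc, v_orb_kms, mass_msun):
    # Equal point masses; each orbits the common center at radius d/2.
    f_grav = G * mass_msun / separation_pc**2           # (km/s)^2 / pc
    f_cent = v_orb_kms**2 / (separation_pc / 2.0)
    period_myr = (2.0 * math.pi * (separation_pc / 2.0) * KM_PER_PC
                  / v_orb_kms / SEC_PER_MYR)
    v_esc = math.sqrt(2.0 * G * mass_msun / separation_pc)
    return f_grav / f_cent, period_myr, v_esc

ratio, period, v_esc = two_body(1400.0, 155.0, 2e9)
print(f"F_grav/F_cent = {ratio:.2f}, period = {period:.0f} Myr, "
      f"v_esc = {v_esc:.0f} km/s")   # ~0.13 (~1/8), ~28 Myr, ~111 km/s
```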
### 7.2 Tidal Tails
The first indication that NGC 6240 might be an interacting galaxy system was its disturbed morphology, with loops, branches and arms. Toomre & Toomre (1972) showed that such tails can be formed by tidal forces in an interaction of two galaxies. Tidal forces during the interaction pull stars from the galaxies into regions up to several tens of kiloparsecs from the centers of the galaxies. The formation of tidal tails is therefore an efficient way to dissipate the orbital angular momentum of the interacting galaxies, thus allowing the galaxies to come closer to each other and finally merge. That the tails in NGC 6240 are dominated by stellar light can be seen from broad and narrow band imaging. The tidal tails visible in the R-band image of Armus, Heckman & Miley (1990) are not as prominent in their narrow band image of $`\mathrm{H}\alpha +`$\[N II\], which are the most dominant emission lines in this wavelength band. The formation of tidal tails is favoured if at least one partner in the collision is subject to a prograde encounter. The spin and orbital angular momenta of the nuclei suggest a prograde encounter for what is now the northern nucleus. The geometry of the collision is not the only parameter important for the formation of tidal tails, however. Springel & White (1998) and Dubinski, Mihos & Hernquist (1999) show that the mass density distribution of the interacting galaxies is an important parameter in the formation of tidal tails. Galaxies with more massive halos tend to form less sharp and shorter tidal tails. The tidal tails in NGC 6240 are not as “crisp” as in NGC 4038/9 (“The Antennae”) or in NGC 4676 (“The Mice”), indicating the collision of two massive spiral galaxies. Because the parameter space of galaxy interaction models is very large, it is impossible to derive a detailed history of NGC 6240 from the morphology of its tidal tails. However, simulations of interacting and merging galaxies show that tidal tails can only be formed *after* the first encounter of the galaxies. Because we see tidal tails in NGC 6240 which are quite long, it must have undergone at least one encounter. The tidal tails of NGC 6240 were most likely created in the first encounter. How many passages NGC 6240 has undergone we cannot determine from the tidal tails. But because the two nuclei are still distinct from each other, it seems that only a few encounters can have happened and that NGC 6240 is in a rather early merger phase.
Doyon et al. (1994) showed from K-band photometry that the overall brightness distribution of NGC 6240 is very similar to that of an elliptical galaxy. They concluded that NGC 6240 is in an advanced merger state and in the process of forming an elliptical galaxy. We believe that this evidence is rather weak and the conclusion should be treated with caution.
1. The near-infrared light of the nuclei comes from supergiants and not from the old stars representing the overall stellar population. Population and $`L/\mathrm{M}`$ changes with radius are rather likely.
2. While an $`r^{1/4}`$-law or a King profile fits the data, such profiles are not unique to elliptical galaxies.
3. The stellar dynamics definitely shows that the two galaxies are still independent entities and have not yet merged.
However, the small separation of the nuclei and the decoupling of the gas from the stars in NGC 6240 strongly suggest that the nuclei will merge. But just as the formation of tidal tails depends on the mass density distribution of the interacting galaxies, the merging time also varies with the mass density distribution. Galaxies with extended massive halos have longer merging times than galaxies with compact low mass halos. Only in the final merging phase does violent relaxation lead to a strong and rapid dynamical evolution, also on global scales (Mihos & Bothun, 1998).
What type of galaxy the merger remnant will resemble we cannot predict from the data. However, NGC 6240 might form an elliptical galaxy, of which it already shows signatures. The stellar velocity dispersion in the nuclei of NGC 6240 is typical for an elliptical galaxy. Outside the nuclear region the K-band light can be fit by an $`r^{1/4}`$-law, which also approximates the surface brightness profile of merger remnants in numerical simulations of galaxy mergers (Barnes, 1992).
### 7.3 Nuclear Starbursts
Another hint at how many passages NGC 6240 has already undergone comes from the age and duration of the nuclear starbursts. In §4.3 we determined the age of the starburst to be $`15`$ to $`25`$ million years. Numerical simulations predict that such nuclear starbursts in merging galaxies are triggered by close passages of the two galaxies. If this is true, the last encounter happened $`20`$ million years ago. Because the tidal tails are quite long and pronounced, the first encounter must be well past and the last passage was probably at least the second encounter. The dynamical time-scale of the orbital motion of the two nuclei is a few $`10`$ million years and is of the same order as the age of the starburst. The extent of the superwind ($`5\mathrm{kpc}`$) and its velocity ($`500\mathrm{km}\mathrm{s}^{-1}`$) yield an age for the superwind of $`10`$ million years, similar to the lifetime of a superwind (Heckman et al., 1990). Taking into account that the superwind sets in roughly $`10`$ million years after the starburst, the superwind and the K-band continuum can be explained by the same starburst. The duration of the starburst is much shorter than its age and can be explained by negative feedback effects of the starburst through supernovae and stellar winds. Numerical simulations by Mihos & Hernquist (1996) predict episodic star formation during the merging of two galaxies. Gas streaming into the central regions gets compressed and triggers star formation. Depending on the existence of massive bulges, one major starburst takes place either after the first encounter or during the final merging of the two galaxies. Because NGC 6240 is clearly not in the stage of final merging, the starburst we see has likely been triggered in an early encounter.
### 7.4 Gas Concentration between the Nuclei
The presence of a self-gravitating cold molecular gas concentration between the nuclei of NGC 6240 is not expected from numerical simulations of galaxy mergers (Barnes & Hernquist, 1996; Mihos & Hernquist, 1996). These simulations predict that the highly dissipative gas follows the stellar distribution. Gas disks form around the individual galaxy nuclei and become more compact as the merger advances. From the point of view of these simulations it would appear that NGC 6240 must be in a rare transient phase.
However, NGC 6240 is not the only galaxy with a gas concentration between two nuclei. VV 114 (Yun, Scoville & Knop, 1994), NGC 6090 (Gao et al., 1998; Bryant & Scoville, 1999) and NGC 4038/9 (Stanford et al., 1990) all have prominent CO-emission peaks between the two nuclei. In all these cases the projected distance between the nuclei ($`6\mathrm{kpc}`$ for VV 114, $`3.5\mathrm{kpc}`$ for NGC 6090 and $`7\mathrm{kpc}`$ for NGC 4038/9; $`\mathrm{H}_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$) is much larger than the separation of $`750\mathrm{pc}`$ in NGC 6240. NGC 6240 is also an exception because very little CO-emission is observed at the positions of the nuclei. In NGC 4038/9 three clearly distinct gas concentrations are detected: two at the positions of the nuclei and a third between the nuclei, which contains roughly half of the total gas mass. NGC 6240, too, has half of the molecular gas in a central concentration between the nuclei, but the other half is in filaments extending from the central gas concentration. In VV 114 the molecular gas is in a bar-like concentration with tails extending from it. NGC 6090 shows a ridge-like structure of molecular gas which also covers both nuclei. VV 114, NGC 6090 and NGC 4038/9 are all, mainly based on the large separations, considered to be early mergers. The small separation of the two nuclei in NGC 6240 could be interpreted to mean that NGC 6240 is in an advanced merger state. However, deriving the evolutionary state of merging galaxies from the nuclear separation alone can be misleading. Due to projection effects and the unknown orbits of the galaxies, the true nuclear separation could be much larger.
## 8 Conclusions
From the high resolution near-infrared integral field spectroscopy of NGC 6240 we find:
* The K-band light of the two nuclei in NGC 6240 is dominated by red supergiants which must have been formed in a starburst $`15`$–$`25`$ million years ago. The duration of the star formation is $`5`$ million years, thus only a small fraction of the starburst age. The total mass of stars formed in the starburst is $`0.4`$–$`2\times 10^8`$ M<sub>⊙</sub>.
* The stars in the two K-band peaks exhibit fast rotation. From the stellar velocity field the dynamical mass of the nuclei is determined to be $`2`$–$`8\times 10^9`$ M<sub>⊙</sub> within the central $`500\mathrm{pc}`$, with similar masses for both nuclei. This exceeds the mass of the most massive star forming regions by more than a factor of $`100`$, implying that the two infrared emission peaks are the massive, rotating bulges of two interacting and merging galaxies.
* After correction for the rotation component, the stellar velocity dispersion peaks between the two nuclei, at the emission peak of the molecular gas. The inferred mass corresponds to a self-gravitating concentration of molecular gas.
* The $`\mathrm{H}_2`$ emission of NGC 6240 peaks between the two nuclei and is thermally excited in a slow, continuous shock triggered by the collision of the two galaxies. Molecular gas streams to the centers of the galaxies, where the turbulent motions also give rise to shock-excited $`\mathrm{H}_2`$ emission.
* From the orbital and spin angular momenta of the interacting galaxies it seems that one galaxy in the system is subject to a prograde encounter. Based on the morphology of the tidal tails, the starburst scenario, the kinematics, and the excitation of the $`\mathrm{H}_2`$ emission, we argue that we observe NGC 6240 shortly after an early encounter which triggered the observed starburst.
The authors are grateful to the staff of the ESO $`2.2\mathrm{m}`$ telescope and the Anglo-Australian Telescope for their support during the observations. We would also like to thank S. Mengel, J. Gallimore, R. Maiolino, A. Krabbe and H. Kroker for their help in setting up and operating the instruments 3D and ROGUE and in collecting the data.
# Protein Folding Simulations in a Deformed Energy Landscape
Ulrich H.E. Hansmann <sup>1</sup><sup>1</sup>1 e-mail: hansmann@mtu.edu
Department of Physics
Michigan Technological University
Houghton, MI 49931-1295
ABSTRACT
A modified version of stochastic tunneling, a recently introduced global optimization technique, is presented as a new generalized-ensemble simulation method and tested for a benchmark peptide, Met-enkephalin. It is demonstrated that the new technique makes it possible to evaluate folding properties, in particular the glass transition temperature $`T_g`$, of this peptide.
Key words: Generalized-Ensemble Simulations, Protein Folding, Stochastic Tunneling.
Numerical simulations of biological molecules can be extremely difficult when the molecule is described by “realistic” energy functions in which interactions between all atoms are taken into account. For a large class of molecules, in particular peptides and proteins, the various competing interactions lead to frustration and a rough energy landscape. At low temperatures canonical simulations will get trapped in one of the multitude of local minima separated by high energy barriers, and physical quantities cannot be calculated accurately. One way to overcome this difficulty in protein simulations is to utilize so-called generalized ensembles, which are based on a non-Boltzmann probability distribution. Multicanonical sampling and simulated tempering are prominent examples of such an approach. Application of these techniques to the protein folding problem was first addressed in Ref. , and their usefulness for the simulation of biological molecules and other complex systems has become increasingly recognized.
However, generalized-ensemble methods are not without problems. In contrast to canonical simulations, the weight factors are not a priori known. Hence, for a computer experiment one needs estimators of the weights, and the problem of finding good estimators often limits the use of generalized-ensemble techniques. Here we describe and test a new generalized ensemble in which, by construction, the determination of the weights is simple and straightforward. Our method is based on a recently introduced global optimization technique, stochastic tunneling.
Canonical simulations of proteins at low temperature are hampered by the roughness of the potential energy surface: local minima are separated by high energy barriers. To enhance sampling we propose to weight conformations not with the Boltzmann factor $`w_B(E)=\mathrm{exp}(-E/k_BT)`$, but with a weight
$$w_f(E)=\mathrm{exp}(f(E)/k_BT).$$
(1)
Here, $`T`$ is a low temperature, $`k_B`$ the Boltzmann constant, and $`f(E)`$ is a non-linear transformation of the potential energy onto the interval $`[0,1]`$ chosen such that the relative location of all minima is preserved. The physical idea behind such an approach is to allow the system at a given low temperature $`T`$ to “tunnel” through energy barriers of arbitrary height, while the low energy region is still well resolved. A transformation with the above characteristics can be realized by
$$f_1(E)=e^{-(E-E_0)/n_F}.$$
(2)
Here, $`E_0`$ is an estimate for the ground-state energy and $`n_F`$ the number of degrees of freedom of the system. Eq. 2 is a special choice of the transformation recently introduced under the name “stochastic tunneling” for the corresponding problem of global minimization in complex potential energy landscapes. One can easily find further examples of transformations with the above stated properties, for instance,
$$f_2(E)=(1+(E-E_0)/n_F)^{-1}.$$
(3)
We will restrict our investigation to these two transformations without claiming that they are an optimal choice.
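For concreteness, a minimal numerical sketch of the weight of Eq. 1 and the two transformations of Eqs. 2 and 3 (the function names, the NumPy implementation, and the packaging are mine, not part of the simulation code used below):

```python
import numpy as np

KB = 1.98720e-3  # Boltzmann constant in kcal/(mol K), matching ECEPP/2 energy units

def f1(E, E0, nF):
    """Eq. 2: maps E >= E0 onto (0, 1], preserving the relative location of minima."""
    return np.exp(-(E - E0) / nF)

def f2(E, E0, nF):
    """Eq. 3: an alternative transformation with the same limiting behavior."""
    return 1.0 / (1.0 + (E - E0) / nF)

def log_weight(E, T, E0, nF, transform=f1):
    """log of the generalized-ensemble weight of Eq. 1."""
    return transform(E, E0, nF) / (KB * T)
```

Near $`E_0`$ the weight of Eq. 1 with $`f_1`$ reduces to a canonical weight at the effective temperature $`n_FT`$, while far above $`E_0`$ it approaches a constant; this flattening of high barriers is what permits the “tunneling”.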
A simulation in the above ensemble, defined by the weight of Eq. 1 with a suitably chosen non-linear transformation $`f(E)`$, will sample a broad range of energies. Hence, application of re-weighting techniques makes it possible to calculate the expectation value of any physical quantity $`𝒪`$ over a large range of temperatures $`T`$ by
$$<𝒪>_T=\frac{{\displaystyle \int 𝑑E𝒪(E)P_f(E)w_f^{-1}(E)e^{-E/k_BT}}}{{\displaystyle \int 𝑑EP_f(E)w_f^{-1}(E)e^{-E/k_BT}}}.$$
(4)
In this respect our method is similar to other generalized-ensemble techniques such as multicanonical sampling; however, it differs from them in that the weights are explicitly given by Eq. 1. One only needs to find an estimator for the ground-state energy $`E_0`$ in the transforming functions $`f_1(E)`$ or $`f_2(E)`$ (see Eqs. 2 and 3), which in earlier work was found to be much easier than the determination of weights for the multicanonical algorithm or simulated tempering.
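A sketch of the re-weighting step of Eq. 4 for a stored time series (a minimal implementation; the log-space bookkeeping is added for numerical stability and is not part of Eq. 4 itself):

```python
import numpy as np

def reweighted_average(obs, E, logw, T, kb=1.98720e-3):
    """Estimate <O>_T from generalized-ensemble samples via Eq. 4.

    obs  : observable O measured at each stored sweep
    E    : potential energy at each stored sweep (kcal/mol)
    logw : log of the simulation weight w_f(E) at each sweep
    """
    logr = -logw - E / (kb * T)   # log of w_f^{-1}(E) exp(-E/k_B T)
    logr -= logr.max()            # subtract the maximum to avoid overflow
    r = np.exp(logr)
    return np.sum(obs * r) / np.sum(r)
```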
The new simulation technique was tested for Met-enkephalin, one of the simplest peptides, which has become an often-used model for examining new algorithms. Met-enkephalin has the amino-acid sequence Tyr-Gly-Gly-Phe-Met. The potential energy function $`E_{tot}`$ that was used is given by the sum of the electrostatic term $`E_{es}`$, the 12-6 Lennard-Jones term $`E_{vdW}`$, and the hydrogen-bond term $`E_{hb}`$ for all pairs of atoms in the peptide, together with the torsion term $`E_{tors}`$ for all torsion angles:
$`E_{tot}`$ $`=`$ $`E_{es}+E_{vdW}+E_{hb}+E_{tors},`$ (5)
$`E_{es}`$ $`=`$ $`{\displaystyle \sum _{(i,j)}}{\displaystyle \frac{332q_iq_j}{ϵr_{ij}}},`$ (6)
$`E_{vdW}`$ $`=`$ $`{\displaystyle \sum _{(i,j)}}\left({\displaystyle \frac{A_{ij}}{r_{ij}^{12}}}-{\displaystyle \frac{B_{ij}}{r_{ij}^6}}\right),`$ (7)
$`E_{hb}`$ $`=`$ $`{\displaystyle \sum _{(i,j)}}\left({\displaystyle \frac{C_{ij}}{r_{ij}^{12}}}-{\displaystyle \frac{D_{ij}}{r_{ij}^{10}}}\right),`$ (8)
$`E_{tors}`$ $`=`$ $`{\displaystyle \sum _l}U_l\left(1\pm \mathrm{cos}(n_l\chi _l)\right),`$ (9)
where $`r_{ij}`$ is the distance between the atoms $`i`$ and $`j`$, and $`\chi _l`$ is the $`l`$-th torsion angle. The parameters ($`q_i,A_{ij},B_{ij},C_{ij},D_{ij},U_l`$ and $`n_l`$) for the energy function were adopted from ECEPP/2.
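A schematic evaluation of Eqs. 5–9 for one conformation could look as follows. This is an illustrative sketch only: the actual ECEPP/2 bookkeeping (e.g. which atom pairs are treated with the 12-10 hydrogen-bond term in place of the 12-6 term) is handled inside the simulation package and is assumed here to be precomputed:

```python
import numpy as np

def ecepp_energy(r, qq, A, B, C, D, hb, chi, U, n, sign, eps=2.0):
    """Total energy of Eqs. 5-9 in kcal/mol (sketch).

    r    : (npair,) interatomic distances r_ij in Angstrom
    qq   : (npair,) charge products q_i * q_j
    A, B : (npair,) 12-6 Lennard-Jones coefficients
    C, D : (npair,) 12-10 hydrogen-bond coefficients
    hb   : (npair,) bool, True where the 12-10 term replaces the 12-6 term
    chi  : (ntors,) torsion angles in radians; U, n, sign as in Eq. 9
    """
    E_es = np.sum(332.0 * qq / (eps * r))
    E_vdW = np.sum((A / r**12 - B / r**6)[~hb])
    E_hb = np.sum((C / r**12 - D / r**10)[hb])
    E_tors = np.sum(U * (1.0 + sign * np.cos(n * chi)))
    return E_es + E_vdW + E_hb + E_tors
```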
The computer code SMC<sup>2</sup><sup>2</sup>2The program SMC was written by Dr. Frank Eisenmenger (eisenmenger@rz.hu-berlin.de) was used. The simulations were started from completely random initial conformations (Hot Start), and one Monte Carlo sweep updates every torsion angle of the peptide once. The peptide bond angles $`\omega `$ were fixed to their common value $`180^{\circ }`$, which left 19 torsion angles ($`\varphi ,\psi `$, and $`\chi `$) as independent degrees of freedom (i.e., $`n_F=19`$). The interaction of the peptide with the solvent was neglected in the simulations and the dielectric constant $`ϵ`$ was set equal to 2. In short preliminary runs it was found that $`T=8`$ K was the optimal temperature for simulations relying on the transformation $`f_1(E)`$ (Eq. 2), and $`T=6`$ K for simulations relying on the second chosen transformation $`f_2(E)`$ (Eq. 3). The free parameter $`E_0`$ was set in Eqs. 2 and 3 to $`E_0=-10.72`$ kcal/mol, the ground-state energy as known from previous work. In addition, simulations were also performed in which $`E_0`$ was dynamically updated in the course of the simulation and set to the lowest energy encountered so far. In these runs the (known) ground state was found in less than 5000 MC sweeps. Hence, determination of the weights is easier than in other generalized-ensemble techniques, since in earlier work it was found that at least 40,000 sweeps were needed to calculate multicanonical weights. We remark that a Monte Carlo sweep takes approximately the same amount of CPU time in both algorithms.
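The update itself is ordinary Metropolis Monte Carlo, only with the deformed weight in the acceptance criterion. A minimal sketch of one sweep (the step size and random-number handling are illustrative guesses, not the SMC defaults):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_sweep(angles, energy_fn, log_weight_fn, step=np.deg2rad(20.0)):
    """One sweep: a Metropolis trial on each of the n_F torsion angles,
    accepted with probability min(1, w_f(E_new) / w_f(E_old))."""
    E = energy_fn(angles)
    lw = log_weight_fn(E)
    for k in range(len(angles)):
        old = angles[k]
        angles[k] = (old + rng.uniform(-step, step)) % (2.0 * np.pi)
        E_new = energy_fn(angles)
        lw_new = log_weight_fn(E_new)
        if np.log(rng.random()) < lw_new - lw:   # accept the trial move
            E, lw = E_new, lw_new
        else:                                    # reject: restore the angle
            angles[k] = old
    return angles, E
```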
All thermodynamic quantities were then calculated from a single production run of 1,000,000 MC sweeps which followed 10,000 sweeps for thermalization. At the end of every sweep we stored the energies of the conformation and the radius of gyration
$$R=\frac{1}{N_{atoms}^2}\sum _{i,j}^{N_{atoms}}(\stackrel{}{r}_i-\stackrel{}{r}_j)^2$$
(10)
for further analyses.
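Eq. 10 translates directly into code (a sketch; note that, as defined, $`R`$ carries units of Å<sup>2</sup>):

```python
import numpy as np

def radius_of_gyration(pos):
    """Eq. 10 for an (N_atoms, 3) coordinate array, in Angstrom^2."""
    diff = pos[:, None, :] - pos[None, :, :]   # all pairwise displacement vectors
    return np.sum(diff**2) / len(pos)**2
```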
In order to demonstrate the dynamical behavior of the algorithm, the “time series” and histograms of the potential energy are shown for both choices of the transforming function, $`f_1(E)`$ (Fig. 1) and $`f_2(E)`$ (Fig. 2). Both choices of the non-linear transformation with which the energy landscape was deformed lead to qualitatively the same picture. In Fig. 1a and Fig. 2a, respectively, one can see that the whole energy range between $`E<-10`$ kcal/mol (the ground-state region) and $`E\sim 20`$ kcal/mol (high-energy, coil states) is sampled. Unlike in the multicanonical algorithm, the energies are not sampled uniformly: low-energy states appear with higher frequency than high-energy states. However, as one can see from the logarithmic scale of Figs. 1b and 2b, where the histograms of these simulations are displayed, high-energy states are suppressed by only three orders of magnitude, and their probability is still large enough to allow crossing of energy barriers. Hence large parts of the configuration space are sampled by our method, and it is justified to calculate thermodynamic quantities from these simulations by means of re-weighting; see Eq. 4.
Here, the average radius of gyration $`<R>`$, which is a measure of the compactness of protein configurations and is defined in Eq. 10, was calculated for various temperatures. In Fig. 3 the results for the new ensemble, using the defining non-linear transformations $`f_1(E)`$ or $`f_2(E)`$, are compared with those of a multicanonical run with an equal number of Monte Carlo sweeps. As one can see, the values of $`<R>(T)`$ agree for all three simulations over the whole temperature range. Hence, simulations in the new ensemble are indeed able to yield thermodynamic averages over a wide temperature range.
Having established the new technique as a possible alternative to other generalized-ensemble methods such as multicanonical sampling or simulated tempering, we further demonstrate its usefulness by calculating the free energy of Met-enkephalin as a function of $`R`$:
$$G(R)=-k_BT\mathrm{log}P(R)$$
(11)
where
$$P(R)=P_f(R)w_f^{-1}(E(R))e^{-E(R)/k_BT}.$$
(12)
Here, the normalization is chosen such that the minimum value of $`G(R)`$ is zero. The chosen temperature was $`T=230`$ K, which was identified in earlier work as the folding temperature $`T_f`$ of Met-enkephalin. The results, which rely on the transformation $`f_1(E)`$ of the energy landscape given by Eq. 2, are displayed in Fig. 4. At this temperature one clearly observes a “funnel” towards low values of $`R`$, which correspond to compact structures. Such a funnel-like landscape was already observed in Ref. for Met-enkephalin, utilizing a different set of order parameters, and is predicted by the landscape theory of folding.
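The free-energy profile of Eqs. 11–12 follows from the same stored time series by histogramming the re-weighted samples in $`R`$ (a minimal sketch; the bin count is arbitrary):

```python
import numpy as np

def free_energy_profile(R, E, logw, T, bins=40, kb=1.98720e-3):
    """G(R) of Eqs. 11-12, normalized so that min G(R) = 0."""
    logr = -logw - E / (kb * T)          # re-weighting factor of Eq. 12
    w = np.exp(logr - logr.max())
    P, edges = np.histogram(R, bins=bins, weights=w)
    G = np.full(P.shape, np.inf)
    G[P > 0] = -kb * T * np.log(P[P > 0])
    G -= G[P > 0].min()
    return 0.5 * (edges[1:] + edges[:-1]), G
```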
The essence of the funnel landscape idea is the competition between the tendency towards the folded state and trapping due to the ruggedness of the landscape. One way to measure this competition is by the ratio:
$$Q=\frac{\overline{E-E_0}}{\sqrt{\overline{E^2}-\overline{E}^2}},$$
(13)
where the bar denotes averaging over compact configurations. The landscape theory asserts that good folding protein sequences are characterized by large values of $`Q`$. Using the results of our simulations and defining a compact structure as one with $`R(i)\le 23\AA `$, we find $`\overline{E-E_0}=13.96(3)`$ kcal/mol and $`\overline{E^2}-\overline{E}^2=0.49(2)`$ (kcal/mol)<sup>2</sup>, from which we estimate for the above ratio $`Q=20.0(5)`$. This value indicates that Met-enkephalin is a good folder and is consistent with earlier work where we evaluated an alternative characterization of folding properties. Thirumalai and collaborators have conjectured that the kinetic accessibility of the native conformation can be classified by the parameter
$$\sigma =\frac{T_\theta -T_f}{T_\theta },$$
(14)
i.e., the smaller $`\sigma `$ is, the more easily a protein can fold. Here $`T_f`$ is the folding temperature and $`T_\theta `$ the collapse temperature. With values of $`T_\theta =295`$ K and $`T_f=230`$ K, as measured in Ref. , one has for Met-enkephalin $`\sigma \approx 0.2`$, indicating again that the peptide has good folding properties.
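Both criteria can be checked directly from the quoted numbers (a simple arithmetic check, not new data):

```python
import numpy as np

dE, varE = 13.96, 0.49                 # averages over compact configurations
print(dE / np.sqrt(varE))              # Eq. 13: Q = 19.9, matching Q = 20.0(5)

T_theta, T_f = 295.0, 230.0            # collapse and folding temperatures in K
print((T_theta - T_f) / T_theta)       # Eq. 14: sigma = 0.22, i.e. sigma ~ 0.2
```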
Yet another characterization of folding properties relies on knowledge of the glass temperature $`T_g`$ and is closely related to Eq. 13. As the number of available states is reduced with decreasing temperature, the possibility of local trapping increases substantially. Glassy behavior appears when the residence time in some local traps becomes of the order of the folding time. Folding dynamics is then non-exponential, since different traps have different escape times. For temperatures above the glass transition temperature $`T_g`$, the folding dynamics is exponential, and a configurational diffusion coefficient averages out the effects of the short-lived traps. It is expected that for a good folder the glass transition temperature $`T_g`$, where glassy behavior sets in, has to be significantly lower than the folding temperature $`T_f`$, i.e., a good folder can be characterized by the relation
$$\frac{T_f}{T_g}>1.$$
(15)
I present here for the first time a numerical estimate of this glass transition temperature for the peptide Met-enkephalin. The calculation of the estimate is based on the approximation
$$T_g=\sqrt{\frac{\overline{E^2}-\overline{E}^2}{2k_BS_0}},$$
(16)
where the bar indicates again averaging over compact structures and $`S_0`$ is the entropy of these states estimated by the relation
$$S_0=\frac{\overline{\mathrm{log}w(i)}}{\overline{w(i)}}-\mathrm{log}\stackrel{~}{z}-C$$
(17)
Here, $`\stackrel{~}{z}=\sum _{compact}w(i)`$, and $`C`$ is chosen such that the entropy of the ground state becomes zero. The results of the simulation in the new ensemble defined by the transformation $`f_1(E)`$ lead to a value of $`S_0=2.3(7)`$. Together with the above quoted value of $`\overline{E^2}-\overline{E}^2=0.49(2)`$ (in (kcal/mol)<sup>2</sup>) one therefore finds as an estimate for the glass transition temperature
$$T_g=160(30)\mathrm{K}.$$
(18)
Since it was found in earlier work that $`T_f=230(30)`$ K, the ratio obeys $`T_f/T_g>1`$, and again one finds that Met-enkephalin has good folding properties. Hence, we see that there is a strong correlation between all three folding criteria.
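The quoted estimate can be reproduced from Eq. 16 if the entropy $`S_0=2.3(7)`$ is read in units of $`k_B`$ (this reading of the units is my assumption; it is what makes the numbers mutually consistent):

```python
import numpy as np

kb = 1.98720e-3                    # kcal/(mol K)
varE, S0 = 0.49, 2.3               # variance in (kcal/mol)^2; entropy in units of k_B (assumed)
print(np.sqrt(varE / (2.0 * kb**2 * S0)))   # ~164 K, consistent with T_g = 160(30) K
```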
Let me summarize my results. I have proposed to utilize a recently introduced global optimization technique, stochastic tunneling, in such a way that it allows the calculation of thermodynamic quantities. The new generalized-ensemble technique was tested for a benchmark peptide, Met-enkephalin. It was demonstrated that the new technique makes it possible to evaluate the folding properties of this peptide, and an estimate for the glass transition temperature $`T_g`$ of this system was presented. Currently I am evaluating the efficiency of the new method in simulations of larger molecules.
Acknowledgements:
This article was written in part while I was a visitor at the Institute of Physics, Academia Sinica, Taipei, Taiwan. I would like to thank the Institute, and especially C.K. Hu, head of the Laboratory for Statistical and Computational Physics, for the kind hospitality extended to me. Financial support from a Research Excellence Fund of the State of Michigan is gratefully acknowledged.
FIGURE CAPTIONS:
1. “Time series” (a) of the potential energy $`E`$ of Met-enkephalin for a simulation in a generalized ensemble defined by the transformation $`f_1(E)`$ of Eq. 2, and the corresponding histogram (b) of the potential energy.
2. “Time series” (a) of the potential energy $`E`$ of Met-enkephalin for a simulation in a generalized ensemble defined by the transformation $`f_2(E)`$ of Eq. 3, and the corresponding histogram (b) of the potential energy.
3. Average radius of gyration $`<R>`$ (in $`\AA ^2`$) as a function of temperature (in $`K`$). The results of a multicanonical simulation of 1,000,000 MC sweeps are compared with simulations of equal statistics in the new ensemble utilizing either the non-linear transformation $`f_1(E)`$ or $`f_2(E)`$.
4. Free energy $`G(R)`$ as a function of the radius of gyration $`R`$ for $`T=230`$ K. The results rely on a generalized-ensemble simulation based on the transformation $`f_1(E)`$ of the energy landscape as defined in Eq. 2.
# Discovery of 9 Ly$`\alpha `$ emitters at redshift $`z\sim 3.1`$ using narrow-band imaging and VLT spectroscopy<sup>1</sup><sup>1</sup>1Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile; ESO programmes 63.N-0530 and 63.I-0007
## 1 Introduction
Ly$`\alpha `$ emitting galaxies at high redshift are of much interest for studies of galaxy formation. While early surveys failed to find such objects (e.g. Thompson et al. 1995), recent narrow-band imaging searches with spectroscopic follow-up identified a number of Ly$`\alpha `$ emitters at high $`z`$, first in fields near high-redshift QSOs (Hu & McMahon 1996) and later in blank fields (Cowie & Hu 1998; Hu et al. 1998; Pascarelle et al. 1998). These Ly$`\alpha `$ emitters typically have very faint continua and high Ly$`\alpha `$ equivalent widths, i.e., they may represent an early phase of galaxy formation when substantial amounts of dust had not yet formed.
Here we report on the discovery of 9 Ly$`\alpha `$ emitters at redshift $`z\sim 3.1`$ during our program of studying the intracluster planetary nebula (PN) population in the Virgo cluster, which uses similar narrow-band imaging techniques to identify candidate PNs. The first indication of the existence of a diffuse intracluster stellar population in Virgo was the discovery by Arnaboldi et al. (1996) that a few PNs in the galaxy NGC 4406 (M 86; redshift $`-230`$ km s<sup>-1</sup>) have redshifts typical of the Virgo cluster (around +1300 km s<sup>-1</sup>). Subsequently, direct evidence for red giant stars belonging to this stellar population was reported by Ferguson et al. (1998). Because PNs offer the chance to measure radial velocities and perhaps even abundances for such a diffuse population, a search for intracluster PNs in different positions across the Virgo cluster started immediately and produced several dozens of PN candidates (Méndez et al. 1997, Feldmeier et al. 1998) in a total surveyed area of $`0.23`$ deg<sup>2</sup>.
The “on-band/off-band” narrow-band filter technique used to discover the PN candidates (see e.g. Jacoby et al. 1992) allows the detection of a single emission line. This is not necessarily the desired \[O III\] $`\lambda `$5007; it might be another emission line at higher $`z`$, redshifted into the on-band filter. For example \[O II\] $`\lambda `$3727 at $`z=0.35`$, or Ly$`\alpha `$ at $`z=3.13`$. In previous work (Méndez et al. 1997) we argued that most of our detections had to be real PNs because (a) the surface density of emission line galaxies derived from earlier studies was not high enough to explain all the detections; (b) the luminosity function of the detected sources is in good agreement with the PN luminosity functions derived in several Virgo galaxies (Jacoby et al. 1990).
We have used the first ESO Very Large Telescope (VLT) unit (UT1) with the Focal Reducer and Spectrograph (FORS) in multi-object spectroscopic mode, and the 4-m Anglo-Australian Telescope (AAT) with the 2-degree-field (2df) fiber spectrograph. Our purpose was to confirm the nature of the Virgo intracluster PN candidates, to measure their radial velocities, and (in the VLT case) to detect the faint diagnostic lines required for abundance determinations.
The present paper reports on the results of the VLT+FORS observations. The AAT+2df observations, which confirm the existence of Virgo intracluster PNs through the detection of both the $`\lambda `$4959 and $`\lambda `$5007 \[O III\] lines (Freeman et al. 1999), will be presented and discussed by Freeman et al. (2000, in preparation).
## 2 Observations and first results
For the purpose of detecting faint nebular diagnostic lines with VLT+FORS (ESO programme 63.N-0530), we selected Field 1 of Feldmeier et al. (1998) as our first priority, because it had the brightest PN candidates, with a luminosity function cutoff about half a magnitude brighter ($`m_{5007}=25.8`$) than elsewhere in the Virgo cluster (see Feldmeier et al. 1998). The total area of Field 1 is 256 arcmin<sup>2</sup>. We also wanted to use FORS (ESO programme 63.I-0007) to verify the PN nature of candidates in the smaller “La Palma Field” (50 arcmin<sup>2</sup>, Méndez et al. 1997), with magnitudes $`m_{5007}`$ between 26.8 and 28.6. The lack of bright PNs in the La Palma Field was understandable in Méndez et al. (1997) as a consequence of the sample size effect: if the total PN sample is small, the chance of finding a bright PN is correspondingly small.
The observations were made with FORS at the VLT UT1 on the nights of 11/12, 15/16, 16/17 and 19/20 April 1999. Since the FORS field is slightly smaller than 7$`\times `$7 arc mins (with the standard collimator), we selected a portion of Field 1 where several of the brightest candidates were located, and took on-band and off-band images of the selected portion of Field 1, and of the La Palma Field, on the night 11/12, using FORS in imaging mode.
The on-band and off-band interference filters we used have, respectively, central wavelengths 5039 and 5300 Å, and FWHMs 52 and 250 Å. Some 50 stellar images in the short on-band and off-band FORS exposures (10 min and 3 min, respectively) were used together with the corresponding stellar images in the Kitt Peak and La Palma discovery images to define coordinate transformations that gave the pixel values of the positions of the PN candidates in the FORS images, given the pixel values of their positions in the discovery images. In this way it was possible to define the FORS slitlet positions with sufficient accuracy (in fact, the brightest PN candidates were visible in the short on-band FORS exposures, allowing us to verify directly the accuracy of the pixel transformation, with typical errors below 0.5 pixels, which is equivalent to 0.1 arc sec).
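In essence this amounts to a least-squares plate solution between the two pixel systems. A minimal sketch (an affine fit in NumPy; the functional form and names are illustrative, not the actual reduction code):

```python
import numpy as np

def fit_affine(xy_discovery, xy_fors):
    """Least-squares affine map (x,y)_discovery -> (x,y)_FORS from matched stars."""
    A = np.hstack([xy_discovery, np.ones((len(xy_discovery), 1))])
    coef, *_ = np.linalg.lstsq(A, xy_fors, rcond=None)   # (3, 2) transformation
    return coef

def apply_affine(coef, xy):
    """Predict FORS pixel positions for, e.g., the PN candidates."""
    return np.hstack([xy, np.ones((len(xy), 1))]) @ coef
```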
On the night of 15/16 April, R.P.K. and R.H.M. started FORS multi-object spectroscopy (MOS) in Field 1, using grism 300V without any order separation filter, for maximum spectral coverage. This grism gives a spectral resolution of about 10 Å, at 5000 Å, for slitlets 1 arc sec wide. Of the 19 available slitlets, 10 were placed at the positions of PN candidates (numbers 1, 4, 5, 6, 16, 17, 21, 26, 27, 42, ordered by brightness as measured by Feldmeier). These candidates had magnitudes $`m_{5007}`$ between 25.8 and 26.4. The remaining 9 slitlets were placed on stars or galaxies in the field, in order to check the slitlet positioning (which was done by taking a short exposure without grism through the slitlets) and to help locate the dispersion lines as a function of position across the field, which is important in the case of spectra consisting of isolated emission lines.
After taking 5 exposures of 40 min each in Field 1, all with the same slitlet configuration, and having made preliminary on-line reductions, it was clear that the \[O III\] emission line candidates were not PNs. Object 1 showed one strong isolated emission line at 5021 Å. It cannot be \[O III\] $`\lambda `$5007 because there is no hint of the companion line $`\lambda `$4959 at the corresponding wavelength. Object 5 was identified as a starburst with $`z=0.35`$; it shows narrow emission lines with a continuum, $`\lambda `$3727 is redshifted into the on-band filter, and H$`\beta `$ and the \[O III\] lines 4959 and 5007 are visible, also redshifted with $`z=0.35`$. Object 17 was not detected. The remaining 7 objects were identified as continuum objects. Many of them are galaxies: they show redshifted emission lines.
In summary, none of the candidates tested in Field 1 with VLT+FORS are PNs. This result in fact solves a problem, because these Field 1 candidates were surprisingly abundant and were somewhat brighter than typical PNs in the Virgo galaxies (see the discussion by Feldmeier et al. 1998). The high percentage of continuum objects (7 out of 9 detections) indicates that the off-band exposure by Feldmeier et al. was not deep enough. A re-examination of the Field 1 images by J. Feldmeier has subsequently confirmed that due to sudden changes in the seeing and transparency the off-band does go to a brighter limiting magnitude than intended. Briefly, the Field 1 exposures consisted of three 3600s on-band exposures and five 600s off-band exposures taken at the Mayall 4-m telescope. The additional off-band exposures were intended to compensate for the transparency, which was decreasing at the time of the exposures. Unfortunately, although these additional exposures do partially compensate for the change in transparency, the seeing increased by 0.3 arcsec, and consequently the mean off-band exposure is not deep enough.
On the other hand, Freeman et al. (1999) did confirm spectroscopically some fainter PN candidates in Field 1. Taken together, these results imply that the surface density of the intracluster stellar population originally estimated for Field 1 has to be reduced (Freeman et al. 2000, in preparation).
In view of the result in Field 1, we immediately pointed the VLT UT1 to the La Palma field. In this field 12 PN candidates had been found, 11 reported in Méndez et al. (1997) and one found afterwards. The distribution of the 12 PN candidates on the sky made it impossible to assign slitlets to all of them in one slitlet configuration. In our first configuration we defined 1 arc sec wide slitlets for 9 PN candidates and 2 objects suspected to be QSOs or starbursts because they were visible (although much weaker) in the off-band discovery image (Méndez et al. 1997). On the April nights 15/16 and 16/17 we completed 5 MOS exposures (40 min each) of the initial slitlet configuration, and on April 19/20 we took 3 additional MOS exposures of the La Palma Field (again 40 min each) with a different slitlet configuration, which allowed us to add one PN candidate not observed before.
Thus spectra for a total of 10 PN candidates were acquired in the La Palma Field, and 7 were detected. They all show an isolated and narrow emission at wavelengths from 5007 to 5042 Å. In all cases this is the only feature visible in the spectrum. The other 3 PN candidates, which are relatively faint, were not detected. Perhaps their slitlets were slightly misplaced, although we would think the errors in the pixel transformation were too small to have any effect on detectability. Perhaps some of these sources, if they are not PNs, have variable brightness.
Of the 2 QSO or starburst candidates, one was confirmed as a QSO at z=3.13, showing a typical broad-lined Ly$`\alpha `$ and C IV $`\lambda `$1550. The other one appears to be a starburst because it shows one strong, isolated and narrow emission line, with faint continuum.
The success rate for emission line detection in the La Palma Field was satisfactorily high. It may be useful to give a few numbers for comparison with other searches, complementing information given in Méndez et al. (1997). The La Palma on-band image had a limiting magnitude $`m_{5007}`$ = 28.7, equivalent to a flux of 10<sup>-17</sup> erg cm<sup>-2</sup> s<sup>-1</sup>. We would collect the same amount of photons through the on-band filter from a star of visual magnitude 25.5. The off-band image had a limiting magnitude 0.2 mag fainter. The search for candidates was done by blinking the on-band versus the off-band image. To compare with the selection criteria used e.g. by Steidel et al. (1999b) for their narrow-band imaging survey, we define on-band and off-band magnitudes so that they coincide for an average star. All our emission-line candidates, being fainter or absent in the off-band image, have positive colors offband $`-`$ onband. We can give only lower limits to the colors of candidates undetected in the off-band image. All tested emission-line candidates with colors above 1.0 mag have been spectroscopically confirmed. The lower limits to the colors of the 3 objects that were not detected with FORS are below 1.0 mag. Our 2 QSO or starburst candidates, detected in both images, have colors of 2.1 and 1.7 mag, respectively.
## 3 Analysis of the La Palma Field spectra
The CCD reductions were made using IRAF<sup>2</sup><sup>2</sup>2IRAF is distributed by the National Optical Astronomical Observatories, operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation of the U.S.A. standard tasks. After bias subtraction, flat field correction and image combination to eliminate cosmic ray events, the object spectra were extracted and the sky background subtracted. Then the He-Ar-Hg comparison spectra were extracted and the object spectra were wavelength calibrated. Spectrograms of the standard stars G138-31 and G24-9 (Oke 1990) were used for the flux calibration.
We have designated the La Palma Field sources with LPF plus a number ordered according to the brightness in the discovery image. LPFnew is the latest PN candidate, found after the discovery paper (Méndez et al. 1997) was published. Object LPFs1 is the QSO, and LPFs2 is the object identified as a starburst from the start, because of its stronger continuum. F1-1 and F1-5 are objects 1 and 5 in Feldmeier’s Field 1. F1-1 turns out to also have a very faint continuum, barely visible in the final processed spectrogram.
Figs. 1 to 3 show, respectively, the spectra of the QSO, the starburst with visible continuum, and one of the La Palma Field PN candidates showing no detectable continuum. All the LPF PN candidates look very similar, with only one emission line detected across the whole spectrum.
Having found no direct evidence of \[O III\] $`\lambda `$4959, which would be expected if the detected emission line were $`\lambda `$5007, we can put an upper limit on the percentage of our emission-line objects that can be PNs. After rejecting objects that show a continuum, which clearly cannot be PNs, we proceed in the following way (a short numerical sketch of the procedure is given after the list):
(1) shift the spectra, so that the wavelengths of the detected emission lines fall at 5007 Å.
(2) normalize the intensities of the emission lines to the same value, e.g. 300.
(3) add all the spectra and measure the intensity of the resulting $`\lambda `$4959. If it is 50, for example, comparing to the expected value of 100 (since $`\lambda `$5007 was defined to be 300) we can argue that 50% of the objects must be PNs.
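A minimal sketch of steps (1)–(3), under the assumption that the reduced spectra live on linear wavelength grids (array names and the output grid are illustrative, not the actual analysis code):

```python
import numpy as np

def stack_for_4959(grids, spectra, line_positions, target=5007.0):
    """Shift each spectrum so its line falls at 5007 A, normalize the
    line peak to 300, and co-add; then inspect 4959 A in the sum."""
    out = np.arange(4900.0, 5100.0, 1.0)                  # common output grid (A)
    total = np.zeros_like(out)
    for wl, sp, peak in zip(grids, spectra, line_positions):
        sp_i = np.interp(out, wl + (target - peak), sp)   # step (1): shift
        total += 300.0 * sp_i / sp_i.max()                # step (2): normalize
    return out, total                                     # step (3): inspect 4959 A
```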
The result of this test is shown in Fig. 4, where we have added the normalized spectra of the 7 PN candidates detected in the La Palma Field. The complete absence of $`\lambda `$4959 indicates that, at most, one of the 7 candidates can be a PN. This conclusion is based on the noise level, not on any marginal detection of $`\lambda `$4959. Besides, there is in Fig. 4 a hint of a weak continuum, not detectable in the individual spectra, which reinforces the rejection of these candidates as PNs.
## 4 Identification of the detected emission line as Ly$`\alpha `$
Having rejected \[O III\]$`\lambda `$5007, because $`\lambda `$4959 is not visible in Fig. 4, we consider the alternatives:
(1) \[O II\] $`\lambda `$3727 at $`z=0.35`$ was confirmed in one case (object 5 in Field 1) but can be rejected in all other cases because we do not see H$`\beta `$, \[O III\] $`\lambda \lambda `$4959, 5007 and H$`\alpha `$ at the corresponding redshifted wavelengths. This is illustrated in Figs. 5 and 6.
(2) Mg II $`\lambda `$2798 at $`z=0.79`$ can also be rejected for a similar reason: in this case we do not see \[O II\] $`\lambda `$3727 at the expected redshifted wavelength, as illustrated in Fig. 7. The same argument can be applied to other lines: assuming C III $`\lambda `$1909, we do not see $`\lambda `$2798; and so on.
We conclude that the isolated emission line must be Ly$`\alpha `$ at $`z=3.1`$. This identification is supported by the strength of the line: since we see at most a very faint continuum, the equivalent width is fairly large, typically 200 Å (observed) and 50 Å (rest frame). We have mentioned in Méndez et al. (1997) that very few starburst galaxies show, for example, \[O II\] $`\lambda `$3727 stronger than 100 Å in equivalent width.
## 5 Implications for the surface density of intracluster PNs in Virgo
A reliable estimate of the surface density of intracluster PNs in Virgo will have to await a survey of a sufficiently large area on the sky, which is currently in progress. Our results to date and some preliminary conclusions are the following.
No PN candidates have been confirmed in the La Palma Field. Only 5 of the 12 candidates have a chance of remaining as PNs: 2 were not tested and 3 were tested but not detected. Since we have identified most of these candidates as Ly$`\alpha `$ emitters, it is clear that the intracluster PN sample size in the La Palma Field and the inferred surface density must be substantially smaller than we estimated.
On the other hand, the AAT+2df multi-object fiber spectroscopy has confirmed the existence of intracluster PNs in Fields 1 and 3 of Feldmeier et al. (1998). This will be reported in detail by Freeman et al. (2000, in preparation). These PNs are brighter than $`m_{5007}=27`$. Freeman et al. will show that the contamination by Ly$`\alpha `$ emitters at these brighter magnitudes is not as important as in the La Palma Field. Thus it appears that the fraction of Ly$`\alpha `$ emitters in \[O III\] narrow-band selected samples is magnitude-dependent, increasing towards fainter values of $`m_{5007}`$.
The lack of bright PNs in the La Palma Field implies a lower surface density than in other Virgo cluster positions, and may indicate some degree of clustering in the distribution of the diffuse intracluster population.
Our VLT+FORS observations have shown that a spectroscopic confirmation of intracluster PN candidates, involving the detection of both the $`\lambda `$4959 and $`\lambda `$5007 \[O III\] emission lines, is necessary.
## 6 Observed properties of the high-redshift Ly$`\alpha `$ emitters
In Table 1 we have collected some basic information about the narrow-lined sources: the measured Ly$`\alpha `$ fluxes, measured Ly$`\alpha `$ wavelengths, redshifts, and Ly$`\alpha `$ equivalent widths (W<sub>λ</sub> in almost all cases lower limits, because the continuum is not detected). The measured wavelengths provide yet another argument favoring the interpretation of all these sources as unrelated to the Virgo cluster: they are randomly distributed across the on-band filter transmission curve, with no concentration at the Virgo cluster redshift (between 5020 and 5030 Å). See Fig. 8.
Notice that object LPF3 is brighter than measured in the discovery image. The reason is that in this case Ly$`\alpha `$ falls near the edge of the on-band filter transmission curve. LPF3 is also remarkable for the very large lower limit to its Ly$`\alpha `$ equivalent width.
The observed fluxes are between $`2\times 10^{-17}`$ and $`2\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. We are looking into a small redshift range, defined by the transmission of the on-band filter used at La Palma, of about $`\mathrm{\Delta }\mathrm{z}=0.04`$. The La Palma Field covers 50 arcmin<sup>2</sup>. The total Ly$`\alpha `$ flux measured within our “discovery box” is $`5\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Adopting $`z=3.13`$, H<sub>0</sub>=70 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub>=0.5 we get a luminosity distance of 1.8 $`\times 10^4`$ Mpc, which implies Ly$`\alpha `$ luminosities for our sources between $`2\times 10^8`$ and $`2\times 10^9`$ L<sub>⊙</sub>. The total Ly$`\alpha `$ luminosity within the sampled volume is $`5\times 10^9`$ L<sub>⊙</sub>. The sampled comoving volume is 1650 Mpc<sup>3</sup>, which gives from 8 sources a comoving space density of $`5\times 10^{-3}`$ Mpc<sup>-3</sup>.
These numbers depend on our assumptions about the cosmological parameters: for a flat universe with $`\mathrm{\Omega }_0`$=0.2, $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.8, and the same $`z`$ and H<sub>0</sub> as above, the luminosity distance becomes 3 $`\times `$ 10<sup>4</sup> Mpc, the sampled comoving volume 10<sup>4</sup> Mpc<sup>3</sup>, and the Ly$`\alpha `$ luminosities become larger by a factor 2.8.
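The quoted numbers can be reproduced with the closed-form Einstein–de Sitter expressions for H<sub>0</sub>=70 km s<sup>-1</sup> Mpc<sup>-1</sup>, q<sub>0</sub>=0.5 (a check of the arithmetic; the formulas are standard, the variable names are mine):

```python
import numpy as np

c, H0, z, dz = 299792.458, 70.0, 3.13, 0.04           # km/s, km/s/Mpc
d_L = 2.0 * c / H0 * (1.0 + z) * (1.0 - 1.0 / np.sqrt(1.0 + z))
omega = (50.0 / 3600.0) * (np.pi / 180.0)**2          # 50 arcmin^2 in steradian
d_C = d_L / (1.0 + z)                                 # comoving distance
V = omega * d_C**2 * (c / H0) * (1.0 + z)**-1.5 * dz  # comoving volume of the slice
print(d_L, V)                                         # ~1.8e4 Mpc and ~1.6e3 Mpc^3
```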
## 7 Starburst models and derived quantities
What produces the narrow Ly$`\alpha `$ emission? Given rest frame equivalent widths below 200 Å, the most probable source of ionization is massive star formation (Charlot and Fall 1993). Active galactic nuclei (AGNs) might be an alternative, although there is no hint of C IV 1550 in our spectra, as shown in Fig. 9. We have taken massive star formation as our working hypothesis.
We have made a simple population synthesis model to explore some basic properties of the stellar population responsible for the ionization of the H II regions. Our main interest in doing this analysis is to estimate star formation rates and densities; we would like to verify if these Ly$`\alpha `$ sources correspond to the formation of massive giant galaxies or rather to the formation of smaller structures.
We adopt a standard initial mass function (IMF), f(M) ∝ M<sup>-2.35</sup>, stellar evolutionary models for low metallicity (Z=0.001; this choice of metallicity will be justified in the next paragraph) from Schaller et al. (1992), and ionizing fluxes from NLTE model atmospheres with winds (Pauldrach et al. 1998), again for a low metallicity, in this case that of the SMC (5 times below solar).
The relation between the number of stellar Lyman continuum photons $`N_{LyC}`$ and the Ly$`\alpha `$ luminosity can be obtained from a simple recombination model:
$$\mathrm{L}(\mathrm{Ly}\alpha )=\mathrm{h}\nu (\mathrm{Ly}\alpha )0.68X_BN_{LyC}$$
(1)
where 0.68 is the fraction of recombinations that yield Ly$`\alpha `$ (Case B, see e.g. Storey and Hummer 1995), and $`X_B`$ is the product of the fraction of $`N_{LyC}`$ actually absorbed times the fraction of Ly$`\alpha `$ photons that actually escape. We necessarily have $`0\le X_B\le 1`$. Note that $`X_B\approx 1`$ requires both an optically thick nebula and a very low dust content, because the large number of Ly$`\alpha `$ scatterings in an optically thick nebula will lead to their absorption by dust grains, if such grains are present in any significant number. This explains our choice of a low metallicity. How low is low? Since we do not know much about dust properties, we have used empirical information provided by Charlot and Fall (1993) in their Fig. 8, which shows Ly$`\alpha `$ equivalent widths as a function of oxygen abundance in nearby star-forming galaxies. In that figure we find that Ly$`\alpha `$ equivalent widths larger than 20 and 50 Å are associated respectively with oxygen abundances below 25% and 10% solar. In future work, to obtain more quantitative constraints, we intend to carry out Ly$`\alpha `$ radiative transfer calculations in the presence of dust and velocity fields as an extension of the work by Hummer and Kunasz (1980), Hummer and Storey (1992) and Neufeld (1990, 1991).
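Inverting Eq. 1 gives the ionizing photon rate implied by a given Ly$`\alpha `$ luminosity. As a worked check (constants are standard; the function name and the illustrative choice $`X_B=0.5`$ are mine):

```python
h_nu = 10.2 * 1.602e-12        # Ly-alpha photon energy in erg (10.2 eV)
L_sun = 3.826e33               # solar luminosity in erg/s

def n_lyc(L_lya_in_Lsun, X_B):
    """Ionizing photon rate N_LyC obtained by solving Eq. 1 for N_LyC."""
    return L_lya_in_Lsun * L_sun / (h_nu * 0.68 * X_B)

# For the total L(Ly-alpha) = 5e9 L_sun quoted in Sec. 6 and X_B = 0.5:
print(n_lyc(5e9, 0.5))         # ~3.4e54 photons/s
```

The same conversion applied in Sec. 8 to the H<sub>0</sub>=50 luminosity of 10<sup>10</sup> L<sub>⊙</sub> doubles this rate, reproducing the $`7\times 10^{54}`$ photons s<sup>-1</sup> quoted there.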
Figs. 10 and 11 show Ly$`\alpha `$ equivalent widths for single stars as a function of $`X_B`$ and of the stellar T<sub>eff</sub>. We need both a large $`X_B`$ and many massive, hot main sequence stars in order to produce the observed Ly$`\alpha `$ equivalent widths.
We have explored two different histories of star formation in a low-metallicity stellar population: (a) a starburst, and (b) continuous star formation. These two alternatives are defined as star formation extending over a time (a) comparable to ($`3\times 10^6`$ years) or (b) much longer than (10<sup>9</sup> years) the main-sequence lifetime of an OB star.
Figs. 12 and 13 show the resulting contour plots of Ly$`\alpha `$ equivalent widths as a function of $`X_B`$ and of the maximum main sequence mass in the population. In these figures, the quantities on the x axis allow different interpretations. In the case of a starburst, smaller values of $`M_{\mathrm{max}}`$ can be interpreted to represent an increasing age of the starburst, or in other words the time elapsed since the starburst happened; as the starburst grows older, the most massive stars are removed from the main sequence. In the case of continuous star formation, the value of $`M_{\mathrm{max}}`$ indicates at which mass the integration of the IMF was stopped. In other words, the left part of the plot shows a case in which very massive main sequence stars were not formed.
We can obtain Ly$`\alpha `$ equivalent widths $`>50`$ Å only for $`X_B>0.5`$ (continuous star formation) or $`X_B>0.3`$ (starburst). This points to almost completely optically thick, extremely dust-poor nebulae. It is easy to understand why a large $`X_B`$ is necessary. If it is small, many stellar ionizing photons are lost. In order to explain the observed Ly$`\alpha `$ fluxes we must add more massive stars, but these stars make a strong contribution to the continuum, and therefore the Ly$`\alpha `$ equivalent width must decrease.
It may seem surprising that we can get large Ly$`\alpha `$ equivalent widths at such low values of $`M_{\mathrm{max}}`$. The reason is that the Schaller et al. main sequence at low metallicity is shifted to rather high T<sub>eff</sub> because of the low stellar opacity. This means that lower mass objects on the main sequence give much more ionizing flux than at solar metallicity.
The source LPF3 can be explained only as a very young starburst with $`X_B\approx 1`$, because of its very large Ly$`\alpha `$ equivalent width.
We set the lower mass limit of the Salpeter IMF at 0.5 M<sub>⊙</sub>. For a maximum stellar mass of 120 M<sub>⊙</sub>, this gives an average mass of 1.65 M<sub>⊙</sub>, which is then the conversion factor between the number of stars and the total stellar mass (a short check of this factor is given below). The average mass decreases only slightly if we decrease $`M_{\mathrm{max}}`$, and only if $`M_{\mathrm{max}}`$ is interpreted as in Fig. 13, because in that case very massive stars are not formed. The average mass does not decrease if $`M_{\mathrm{max}}`$ is low due to the age of the starburst, because the very massive stars are assumed to have formed and evolved away from the main sequence.
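The conversion factor and its sensitivity to the lower mass limit follow from the moments of the Salpeter IMF (a short arithmetic check; closed forms of the integrals, no fitting):

```python
def mass_int(m_lo, m_up=120.0, a=2.35):      # integral of M * M^-a dM
    return (m_lo**(2.0 - a) - m_up**(2.0 - a)) / (a - 2.0)

def num_int(m_lo, m_up=120.0, a=2.35):       # integral of M^-a dM
    return (m_lo**(1.0 - a) - m_up**(1.0 - a)) / (a - 1.0)

print(mass_int(0.5) / num_int(0.5))          # average mass ~1.65 Msun
print(mass_int(0.1) / mass_int(0.5))         # total-mass factor ~1.9 for a 0.1 Msun cut
```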
Figs. 14 and 15 show contour plots of the logarithms of the Ly$`\alpha `$ luminosities in a plane where the x axis is the same $`M_{\mathrm{max}}`$ used in Figs. 12 and 13, and the y axis is the product of $`X_B`$ times the total number of stars produced over the total duration of the star formation ($`3\times 10^6`$ and $`10^9`$ yr, respectively). The number of stars required to produce a given Ly$`\alpha `$ luminosity depends on the value of $`M_{\mathrm{max}}`$. Since we cannot determine $`M_{\mathrm{max}}`$ empirically or derive it from first principles, we have considered the full range of $`M_{\mathrm{max}}`$, from 120 M<sub>⊙</sub> down to as low as permitted by the limits on the Ly$`\alpha `$ equivalent widths, sometimes as low as 20 M<sub>⊙</sub>. This produces rather large uncertainties in the derived masses, particularly in the case of starbursts. A comparison of Figs. 14 and 15 also shows that, as expected, in the case of continuous star formation many more stars are needed to obtain a given Ly$`\alpha `$ luminosity.
The results of these mass estimates are listed in Table 1. The total masses of stars and the star formation rates turn out to be very sensitive to the star formation history. For continuous star formation and young starbursts, i.e. high values of $`M_{\mathrm{max}}`$, our strongest sources have SFRs of the order of 10 M<sub>⊙</sub> yr<sup>-1</sup>. However, these numbers increase dramatically if we consider older starbursts, and we cannot rule out SFRs higher than 200 M<sub>⊙</sub> yr<sup>-1</sup> in a few cases.
Adding up the masses and SFRs in Table 1, and using the average stellar mass of 1.65 M<sub>⊙</sub>, we have built Table 2, which provides the total number and total mass of stars formed, and the resulting star formation rate, needed to explain our total Ly$`\alpha `$ luminosity of $`5\times 10^9`$ L<sub>⊙</sub> in the La Palma Field, for the cases of starbursts and continuous star formation. Here we have not added the numbers for F1-1, which belongs to another field.
How uncertain are these numbers? First of all, the real SFR might be higher than we inferred for our sources because of extinction by dust. However, we consider this to be unlikely because it would be necessary to argue that there is dust in the foreground (to produce extinction) but not inside (it would destroy the Ly$`\alpha `$ emission). We do not argue in terms of low metallicity because there is at least one case of a very metal-deficient blue compact dwarf galaxy, SBS 0335-052, where dust patches are clearly visible (Thuan and Izotov 1999). Note however that this source has Ly$`\alpha `$ in absorption, implying that extinction is accompanied by destruction of Ly$`\alpha `$ emission, as expected. Based on this reasoning we do not include a correction for extinction in our Tables.
The reader may argue that geometry probably plays the dominant role in dictating how many Ly$`\alpha `$ photons escape, making our attempt to estimate star formation rates a futile exercise. However, our sources are a rather special case. Geometry is useful when we need to explain why a star-forming region shows Ly$`\alpha `$ in absorption; see e.g. Kunth et al. (1998). But our sources do show strong Ly$`\alpha `$ emission and, furthermore, at most a very faint continuum. The absence of continuum (i.e. the large equivalent width of Ly$`\alpha `$) is very important because it precludes the existence of large numbers of hot stars. If there were any significant destruction of Ly$`\alpha `$ photons, then, in order to explain the observed Ly$`\alpha `$ fluxes, we would have to add more massive stars. But these additional stars contribute to the continuum, and the Ly$`\alpha `$ equivalent width decreases too much. Thus the absence of continuum acts as a “safety valve” restricting higher values of the star formation rate. As explained above, extinction by dust is not very likely.
On the other hand, the derived masses and SFRs are strongly dependent on the lower mass limit of the IMF. If we selected a lower mass limit of 0.1 instead of 0.5 M<sub>⊙</sub>, there would be no change in the number of stars needed to produce the ionizing photons, but we would be adding a lot of low-mass stars; the total mass estimate for a given total Ly$`\alpha `$ luminosity would increase by a factor of 1.9. The SFRs would have to be corrected by the same factor. The ranges of masses and SFRs in Tables 1 and 2 do not include this source of uncertainty, but we will take it into account in the discussion.
The adopted cosmological parameters also have some influence. We have adopted H<sub>0</sub>=70 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub>=0.5, which implies a sampled comoving volume of 1650 Mpc<sup>3</sup>. If we take, for example, a total SFR of 50 M<sub>⊙</sub> yr<sup>-1</sup>, we get a star formation density in the comoving volume of 0.03 M<sub>⊙</sub> yr<sup>-1</sup> Mpc<sup>-3</sup>. If we adopted a universe with $`\mathrm{\Omega }_0`$=0.2, $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.8, but with the same $`z`$ and H<sub>0</sub>, then our stellar masses and SFRs would be larger by the same factor of 2.8 as the luminosities. However, the star formation density mentioned above would drop from 0.03 to 0.014 M<sub>⊙</sub> yr<sup>-1</sup> Mpc<sup>-3</sup> because of the enlarged sampled comoving volume of 10<sup>4</sup> Mpc<sup>3</sup>.
## 8 Discussion
Recent blank field searches for Ly$`\alpha `$ emitters have been successful at redshifts from 2.4 to 5: Hu 1998, Cowie and Hu 1998, Pascarelle et al. 1998, Hu et al. 1998. The sources that have been verified spectroscopically (Hu 1998, Hu et al. 1998) are like ours: a strong, narrow, isolated emission line identified as Ly$`\alpha `$, with a similar range of equivalent widths.
One may ask why the earlier searches failed to detect such emission-line sources. For example Thompson et al. (1995) reported no detection in a total area of 180 arcmin<sup>2</sup> down to a 1-$`\sigma `$ limiting flux 10 times fainter than the flux of our brightest source, LPFs2. This might be attributed to a very low surface or space density in their directions, but since they looked in many directions this interpretation seems improbable. The surface density in the La Palma Field does not seem to be abnormally high. The 8 sources we detected in the 50 arcmin<sup>2</sup> of the La Palma Field and within our redshift range of 0.04 are equivalent to 14400 emitters deg<sup>-2</sup> per unit $`z`$ with Ly$`\alpha `$ fluxes above $`1.5\times 10^{-17}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. This is a lower limit to the true density of such sources; if some of the three photometric candidates not detected and of the two candidates not tested are Ly$`\alpha `$ emitters with similar fluxes, the true density could be higher by up to 60%. The density inferred from the present spectroscopic sample is similar to the 15000 deg<sup>-2</sup> per unit $`z`$ reported by Hu et al. (1998) from Keck searches in different regions of the sky, with the same limiting flux. Our sources are brighter than those detected with the Keck telescope (Hu 1998, Cowie and Hu 1998, Hu et al. 1998), but our redshift is smaller; see Fig. 16. In summary, the surface density of Ly$`\alpha `$ emitters in the La Palma Field is of the same order of magnitude as for the sources detected by Hu et al. Pascarelle et al. (1998) report an order-of-magnitude range of space densities at $`z=2.4`$, in some cases even higher than ours. More surveys will probably clarify whether or not there is any structure or clustering in the distribution of Ly$`\alpha `$ emitters.
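The quoted surface density is straightforward to verify (pure arithmetic on the numbers in the text):

```python
area_deg2 = 50.0 / 3600.0          # La Palma Field area in square degrees
print(8 / area_deg2 / 0.04)        # 14400 emitters per deg^2 per unit redshift
```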
Concerning masses and star formation rates: if we adopt continuous star formation or young starbursts, in which star formation has not yet (or has only just) stopped, then from Table 1 we read that our strongest sources show an SFR not higher than about 10 M<sub>⊙</sub> yr<sup>-1</sup>, similar to the maximum SFRs reported by Hu et al. (1998), and short of what would be expected from a proto-giant spheroidal galaxy forming more than 10<sup>11</sup> M<sub>⊙</sub> in 10<sup>9</sup> years. We cannot completely rule out the possible existence of an inconspicuous stellar population, already formed, which would be detectable only in the infrared, but we would find it difficult to explain the very low dust content in that case. The possible existence of undetected neutral and molecular H gas would provide additional means to scatter and eventually destroy the Ly$`\alpha `$ photons (e.g. Neufeld 1990), and therefore we consider it unlikely. Thus from the available evidence we would seem to be witnessing the formation of small subgalaxies, some of which are perhaps destined to merge. This conclusion is also supported by the total mass produced, even in the case of continuous star formation, and would be unchanged if we adopted the lowest IMF mass limit of 0.1 M<sub>⊙</sub>. If we also adopted the flat accelerating universe with $`\mathrm{\Omega }_0`$=0.2 and $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.8, which implies larger distances, luminosities and SFRs, the starbursts would still point to small entities, but continuous star formation in the case of LPFs2 would be able to produce several times 10<sup>10</sup> M<sub>⊙</sub>.
On the other hand, continuous star formation implies the previous production of supernovae and metals, which makes it more difficult to explain a very low dust content. For that reason we consider young starbursts more likely than continuous star formation or old starbursts. But in dealing with this subject we prefer to be cautious and leave all conceivable options open.
What would happen if we allowed for older starbursts, in which star formation stopped 10<sup>7</sup> years ago? Then the mass of stars formed would increase substantially, but notice that even in such cases the mass produced is not larger than 10<sup>9</sup> M<sub>⊙</sub>. Therefore even in this extreme case starbursts are not able to make a proto-giant spheroidal galaxy out of any of our sources. This conclusion would not be weakened by assuming the accelerating universe mentioned above.
Now we try to estimate lower and upper limits for the star formation density in the sampled comoving volume that corresponds to the La Palma Field, assuming that we can rule out production of Ly$`\alpha `$ photons through AGN activity. A lower limit for the SFR of 15 M<sub>⊙</sub> yr<sup>-1</sup> comes from the continuous star formation case in Table 2. The maximum SFR in Table 2 (old starbursts) is rather improbable, because it is obtained assuming that all the starbursts ended at the same time, some 10<sup>7</sup> years before the Ly$`\alpha `$ photons we detected were emitted. A more reasonable upper limit is obtained by assuming that one of the starbursts is at the right age to produce the maximum SFR, while all the others make smaller contributions. Let us assume that the source produced by the only allowed old starburst is LPFs2, which would then contribute 140 M<sub>⊙</sub> yr<sup>-1</sup>. Adding the almost negligible contributions from the other sources, we obtain in this way a range of plausible total SFRs in the comoving volume between 15 and 170 M<sub>⊙</sub> yr<sup>-1</sup>. Allowing now for the uncertainty in the lower mass limit of the IMF, we get a range between 15 and 320 M<sub>⊙</sub> yr<sup>-1</sup>, which implies a star formation density between 0.009 and 0.19 M<sub>⊙</sub> yr<sup>-1</sup> Mpc<sup>-3</sup>. For comparison, Hu et al. (1998) obtained, from their own samples, 0.01 in the same units.
In order to compare our star formation density with other available information, we adopt in this paragraph H<sub>0</sub>=50 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub>=0.5, which is what other authors have done, e.g. Steidel et al. (1999a). This gives luminosities twice as large as in our earlier choice, and a sampled comoving volume of 4500 Mpc<sup>3</sup>. Our resultant star formation density drops slightly, to between 0.007 and 0.14 M<sub>⊙</sub> yr<sup>-1</sup> Mpc<sup>-3</sup>. This is plotted in Fig. 17 together with other data collected from several galaxy surveys by Steidel et al. (1999a). Our star formation density in Ly$`\alpha `$ emitters is comparable to that in other star forming sources at that redshift. Our sources are probably the low-metallicity (or low-dust) tail in a distribution of star forming regions at high redshifts. This is underlined by the fact that in our sampled volume we also have a QSO (LPFs1) with apparently higher metal abundances, judging from the strength of the C, N, O lines in its spectrum.
As already remarked by Hu et al. (1998), since we expect lower metallicity at higher redshifts, we should expect the strong Ly$`\alpha `$ emitters to become more frequent at higher redshifts relative to Lyman-break galaxies, which normally have weak or absent Ly$`\alpha `$ emission and are therefore presumably more metal-rich. Note, however, that low metallicity does not necessarily imply emission in Ly$`\alpha `$: the blue compact dwarf galaxy SBS 0335-052 (Thuan and Izotov 1999), with Z = Z<sub>⊙</sub>/41 and Ly$`\alpha `$ in absorption, provides a beautiful cautionary note. Kunth et al. (1998) have argued that the geometry and velocity structure of the interstellar medium play an important role in determining the strength of the Ly$`\alpha `$ emission, and this may complicate the comparison between Ly$`\alpha `$ emitters and Lyman-break galaxies.
Finally, we consider our Ly$`\alpha `$ emitters as possible sources of ionization of the intergalactic medium at high redshift (see e.g. Madau et al. 1998). Keeping the same cosmological parameters as in the last paragraph, we get a total L(Ly$`\alpha `$), in our sampled comoving volume, of $`10^{10}`$ L$`_\odot `$. Converting this into $`N_{LyC}`$ using Eq. (1), on the assumption that $`X_B=0.5`$, we get $`7\times 10^{54}`$ photons s<sup>-1</sup>. Since the sampled comoving volume is 4500 Mpc<sup>3</sup>, this implies an ionizing photon density of 1.5 $`\times 10^{51}`$ photons s<sup>-1</sup> Mpc<sup>-3</sup>. This is the total number of ionizing photons produced by the stars. It is very difficult to estimate what fraction is available for ionization of the intergalactic medium, because it depends on the relative contributions of the two factors that enter into $`X_B`$. In at least one case (LPF3) we have $`X_B\simeq 1`$, which means that this source cannot contribute many ionizing photons. Assuming optimistically that there remain $`10^{51}`$ ionizing photons s<sup>-1</sup> Mpc<sup>-3</sup> available, our Ly$`\alpha `$ emitters seem to contribute about 0.5 times as much as the star-forming galaxies at $`z=3`$ found in Lyman-break galaxy surveys (see e.g. Fig. 2 of Madau et al. 1998). We conclude that the contribution by Ly$`\alpha `$ emitters may be comparable in order of magnitude to what QSOs provide at $`z=3`$.
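Eq. (1) appears in an earlier section and is not reproduced here; if it takes the standard case-B form, in which a fraction of about 0.68 of the Lyman-continuum photons absorbed by the gas is converted into Ly$`\alpha `$ photons of energy 10.2 eV (an assumption of this sketch, not a statement taken from the text), the quoted photon budget is recovered:

```python
L_SUN = 3.826e33              # erg/s
E_LYA = 10.2 * 1.602e-12      # erg per Ly-alpha photon

L_lya = 1e10 * L_SUN          # total Ly-alpha luminosity in the volume
X_B = 0.5                     # fraction of LyC photons absorbed by the gas
f_lya = 0.68                  # assumed case-B Ly-alpha photons per absorbed LyC photon

N_lya = L_lya / E_LYA             # Ly-alpha photon rate, ~2.3e54 /s
N_lyc = N_lya / (f_lya * X_B)     # implied LyC photon rate, ~7e54 /s
n_lyc = N_lyc / 4500.0            # ionizing photon density, ~1.5e51 /s/Mpc^3
print(f"N_LyC = {N_lyc:.1e} /s; density = {n_lyc:.1e} /s/Mpc^3")
```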
## 9 Summary of conclusions and perspectives
We have discovered a population of high-redshift Ly$`\alpha `$ emitters in our sample of Virgo intracluster PN candidates obtained with an “on-band/off-band” filter technique. Our VLT+FORS spectra show that the Ly$`\alpha `$ emitters at $`z=3.13`$ look very similar to those discovered in other fields at other redshifts by Hu et al. (1998). Only narrow, strong Ly$`\alpha `$ emission is visible in their spectra: no other spectral lines, and no continuum (or at most a very weak one). On the assumption that the Ly$`\alpha `$ emission is produced by massive star formation, we have estimated the total mass of stars formed and the star formation rates, and we have estimated the star formation density in our sampled comoving volume. The Ly$`\alpha `$ emitting nebulae must be nearly optically thick and extremely dust-poor, probably indicating a very low metallicity. The total mass formed and the SFRs appear to suggest that we are witnessing the formation of rather small galaxies. This conclusion depends on several assumptions about the IMF, star formation history and cosmological parameters. There is one source (LPFs2) that might qualify as a proto-giant spheroidal if we assumed continuous star formation, a lower mass cutoff of 0.1 M$`_\odot `$ in the IMF, and a flat accelerating universe with $`\mathrm{\Omega }_0`$=0.2 and $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.8. However, assuming continuous star formation implies the previous production of supernovae and metals, making it more difficult to explain the very low dust content required by the observed spectra of our sources. We are more probably dealing with young starbursts.
Taking all sources into account, the implied star formation density in our sampled comoving volume is probably somewhat smaller than, but of the same order of magnitude as, the star formation density at $`z\sim 3`$ derived by other authors from Lyman-break galaxy surveys. This result agrees with the expectation that the Ly$`\alpha `$ emitters are a low-metallicity (or low-dust) tail in a distribution of star forming regions at high redshifts. Finally, the Ly$`\alpha `$ emitters may contribute as many H-ionizing photons as QSOs at $`z\sim 3`$. They are therefore potentially significant for the ionization budget of the early universe.
More extensive surveys at different redshifts will be needed to build a luminosity function for the Ly$`\alpha `$ emitters and to decide if they show any evidence of clustering and evolution as a function of redshift. HST images might be able to resolve these sources; their morphology might offer clues about their star formation processes. High-resolution spectroscopy of Ly$`\alpha `$ would provide important kinematic information about the interstellar medium in these galaxies. We need infrared spectra to try to detect forbidden lines and either determine the metallicity or place upper limits on it. It might also be possible to clarify to what extent the production of Ly$`\alpha `$ photons can be attributed to AGN activity. The firm detection of an infrared continuum would help to constrain the characteristics and total mass of the stellar populations through comparison with population synthesis models.
The implications of our results for the surface density of the diffuse intracluster stellar population in the Virgo cluster are not clear yet. Future searches for intracluster PNs will have to include the spectroscopic confirmation of the candidates, by detection of the two bright \[O III\] emission lines at 4959 and 5007 Å. From the spectroscopic work of our group to date, including the confirmation of 23 intracluster PNs by the detection of the two \[O III\] lines by Freeman et al. (1999), the fraction of high-redshift Ly$`\alpha `$ sources in the on-band/off-band samples appears to be higher at faint magnitudes. In the specific case of the “La Palma Field” we have not found any intracluster PNs, perhaps suggesting some degree of clumpiness in the distribution of the intracluster stellar population. In other Virgo fields with brighter PN candidates, the fraction of high-redshift Ly$`\alpha `$ emitters among the detected sources appears to be about 25% (see Freeman et al. 2000, in preparation, for a more extensive discussion).
RPK would like to thank the director and staff of Steward Observatory, Tucson, for their hospitality and support during an inspiring sabbatical which among other results has led to the ESO proposal 63.N-0530. Thanks also to Stefan Wagner for helpful comments. RPK and RHM express their gratitude to the ESO staff at Cerro Paranal, who contributed with their efforts to make this VLT run an enjoyable experience. |
# Helium Emission in the Type Ic SN 1999cq
## 1 Introduction
The nature of supernovae (SNe) of Type Ib and Ic has been the subject of much speculation; see Harkness & Wheeler (1990) for an early review, and Clocchiatti et al. (1997) for a more recent discussion. Spectra of SNe Ib/c lack the hydrogen lines that distinguish SNe II, yet they are also missing the deep absorption near 6150 Å (thought to be blueshifted Si II $`\lambda `$6355) that characterizes SNe Ia. The defining characteristic of SNe Ib is the presence of strong He I lines near maximum light. This is followed by a nebular-phase spectrum dominated by emission lines of \[O I\], \[Ca II\], and Ca II. SNe Ic are spectroscopically quite similar to SNe Ib, but they do not show the strong He I lines. This results in the occasional designation of SNe Ic as “helium-poor” SNe Ib (Wheeler et al. 1987).
The most widely accepted model for the SN Ib/c explosion mechanism is that it is related to the mechanism of SNe II—core collapse in massive stars—but that SNe Ib/c have lost their hydrogen (and helium, in the case of SNe Ic) envelopes through winds or mass transfer to a companion (e.g., Woosley, Langer, & Weaver 1993; Yamaoka, Shigeyama, & Nomoto 1993; Nomoto et al. 1994; Iwamoto et al. 1994). There is considerable circumstantial evidence that the progenitors of SNe Ib and Ic are massive stars. SNe Ib/c are associated with Population I stars (Wheeler & Levreault 1985; Uomoto & Kirshner 1985; Harkness et al. 1987; Huang 1987; Van Dyk 1992; Bartunov et al. 1994, Van Dyk, Hamuy, & Filippenko 1996), their hosts are virtually always late-type galaxies (Porter & Filippenko 1987), and they exhibit radio emission (e.g., Weiler et al. 1998, and references therein), thought to be the result of interaction with circumstellar material (Chevalier 1982, 1984). Contrasted with this are the favored progenitors of SNe Ia, white dwarfs (e.g., Branch et al. 1995; Livio 1999), although models for SNe Ib/c using white dwarfs as progenitors have been proposed (e.g., Branch & Nomoto 1986; Iben et al. 1987). The late-time spectra of SNe Ib/c are also very similar to those of SNe II, with the obvious exception of the hydrogen lines. Perhaps the best evidence connecting SNe Ib/c with SNe II is the direct transformation of an individual supernova from one type to another. SN 1987K was initially spectroscopically a Type II, but the late-time spectra resembled those of SNe Ib (Filippenko 1988); it thus earned a new label, Type IIb (after Woosley et al. 1987). Unfortunately, the actual transition was not observed. Filippenko (1992) and Jeffery et al. (1991) also suggested that the Type Ic SN 1987M showed evidence of hydrogen in its spectrum.
Like SN 1987K, SN 1993J was spectroscopically a SN II in the first few weeks after explosion, but then began to exhibit the He I lines of a SN Ib, making it a SN IIb (Filippenko & Matheson 1993; Filippenko, Matheson, & Ho 1993; Swartz et al. 1993a; Wheeler & Filippenko 1996). The various theoretical models rapidly converged on the same general concept—a massive star that had lost most, but not all, of its hydrogen envelope (Woosley et al. 1994; Wheeler & Filippenko 1996, and references therein). Gradually the \[O I\], \[Ca II\], and Ca II emission lines emerged, making the spectrum of SN 1993J closely resemble that of an aging SN Ib, although the hydrogen emission never fully disappeared. Indeed, the hydrogen emission became very prominent at late times, but this was almost certainly the result of circumstellar interaction (Filippenko, Matheson, & Barth 1994). Qiu et al. (1999) document thoroughly a similar transition in SN 1996cb.
Although there has been no object showing as definitive a transformation between SNe Ib and Ic as SN 1993J or SN 1996cb showed between SNe II and Ib, there have been several SNe Ic with possible signs of helium in their spectra. These include the detection of He I $`\lambda `$10830 in SN 1990W (Wheeler et al. 1994; based on this line, they propose SN 1990W was misclassified, and should be Type Ib), SN 1994I (Filippenko et al. 1995), and SN 1994ai (Benetti, quoted in Clocchiatti et al. 1997), in addition to the discovery of other He I lines at high velocity in a reanalysis of SN 1994I, SN 1987M, and SN 1988L (Clocchiatti et al. 1996b) and in SN 1997X (Munari et al. 1998). (See, however, Baron et al. and Millard et al. for cautionary arguments.) By analogy to the models of SN 1993J, one could postulate that the progenitors of SNe Ic have not only had their hydrogen envelope removed, but most or all of their helium layer as well (Harkness et al. 1987; Yamaoka, Shigeyama, & Nomoto 1993; Swartz et al. 1993b; Iwamoto et al. 1994; Nomoto et al. 1994), so the weakness or absence of He I lines is the result of small abundance. The SNe Ic that exhibited some helium would then be the analogs of SN 1993J, but with the transition less obvious. An alternative explanation for the cause of the differences between SNe Ib and Ic relies on the need for a large amount of non-thermal excitation to form He I lines (Harkness et al. 1987; Lucy 1991). The source of this excitation is electrons accelerated by the $`\gamma `$-rays emitted by the decay of radioactive <sup>56</sup>Ni synthesized in the explosion. In this scenario, helium could be present in SNe Ic, but the <sup>56</sup>Ni is not mixed into the helium layers, and so cannot produce the helium lines (e.g., Wheeler et al. 1987; Shigeyama et al. 1990; Hachisu et al. 1991). Whatever the cause of the helium features, the characteristics of the previous detections of helium in SNe Ic (absorption at high velocity) suggest that the helium is in the ejecta, not the circumstellar material.
The spectrum of a truly transitional object between SNe Ib and Ic could provide clues as to whether the difference between them is the result of abundances, mixing, or both. In this paper we present optical to near-infrared (near-IR) spectra of SN 1999cq, a SN Ic with unusual intermediate-width helium emission lines, along with an unfiltered light curve. We propose that the helium lines represent the detection of material lost from the progenitor shortly prior to explosion. We also find an excess of blue emission in comparison with other SNe Ic, with implications for iron and iron-group element abundance, mixing, and the reddening of SN 1999cq.
## 2 Observations
SN 1999cq was discovered by the Lick Observatory Supernova Search (LOSS) (Modjaz & Li 1999) on 1999 June 25.4 UT (JD 2,451,354.9; note that all calendar dates used are UT) at an unfiltered magnitude of ~16.0. It was present on previous images at similar magnitudes (June 22.4, ~15.9; June 19.4, ~15.8). An image taken on 1999 June 15.4 showed nothing at the position of the SN to a limit of ~19.0 mag. The supernova was 1.″5 east and 4.″1 south of the nucleus of UGC 11268. A spectrum obtained July 9.4 (JD 2,451,368.9, hereinafter “the July spectrum”) indicated that SN 1999cq was of Type Ib or Ic, and starting to enter the nebular phase, although Na I D did show a P-Cygni profile (Filippenko 1999).
This spectrum was obtained with the Kast double spectrograph (Miller & Stone 1993) at the Cassegrain focus of the Shane 3-m reflector at Lick Observatory with an exposure time of 1800 s. Reticon $`400\times 1200`$ pixel CCDs were used in both cameras. The slit was oriented at a position angle of 160$`\mathrm{°}`$ to include the galaxy nucleus. The optimal parallactic angle (Filippenko 1982) was 81$`\mathrm{°}`$, but the low airmass (1.1) and the large slit width (3″) imply that the differential light losses were negligible. Standard CCD processing and optimal spectral extraction were accomplished with IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.. We used our own routines to flux calibrate the data, using the sdO comparison star BD +28$`\mathrm{°}`$4211 (Stone 1977) in the range 3300–5400 Å and the sdG comparison star BD +17$`\mathrm{°}`$4708 (Oke & Gunn 1983) in the range 5300–9900 Å. Telluric absorption bands were removed through division by the intrinsically featureless spectrum of BD+17$`\mathrm{°}`$4708 (Wade & Horne 1988). The dispersion of the spectrum is ~5 Å per pixel. We obtained a second spectrum on 1999 August 17.3 (JD 2,451,407.8, hereinafter “the August spectrum”) under almost identical conditions. The only differences were a narrower slit width (2″), a slightly higher airmass (1.2), and the use of BD +26$`\mathrm{°}`$2606 (Oke & Gunn 1983) as the comparison star for the red wavelengths.
As SN 1999cq lies in a complex region only 4.″4 from the nucleus of UGC 11268, background sky subtraction and the extraction of the supernova were extremely difficult. There is H II region contamination over a considerable portion of the long slit (~48″ out of the ~125″ spatial extent of our exposures). The least contamination of the object was achieved by choosing sky regions for background subtraction very near the supernova. There is still some galaxy contamination, made more complicated by unusually strong emission lines. In fact, the regions of the supernova spectrum near H$`\alpha `$+\[N II\] $`\lambda \lambda `$6548, 6583 and \[S II\] $`\lambda \lambda `$6716, 6731 still show some unavoidable residual subtraction errors. Incomplete removal of the very strong Na I D night-sky emission line also affects the spectrum.
The unusual nature of SN 1999cq was not immediately recognized, so it was not followed photometrically through standard $`UBVRI`$ filters. However, UGC 11268 is part of the LOSS sample of galaxies and is thus imaged regularly (every three to four days) with the 0.76-m Katzman Automatic Imaging Telescope at Lick Observatory (KAIT; Treffers et al. 1997; Richmond, Treffers, & Filippenko 1993). The detector used by KAIT is a SITe $`512\times 512`$ pixel CCD with a field of view of 6.′8 $`\times `$ 6.′8. The observations are taken without a filter, so the light curve for SN 1999cq (Figure 1) is measured in unfiltered magnitudes. Photometric calibration of KAIT images indicates that the unfiltered response is similar to a standard $`R`$ filter. The images were reduced using a galaxy subtraction technique; SN 1999cq is in a very complex region of the host galaxy, and traditional aperture or point-spread-function (PSF) fitting photometry would not perform well. The template image of UGC 11268 was taken 1998 September 16 with a limiting magnitude of ~20. The template is shifted spatially, scaled to the same intensity level and PSF, and then subtracted from each individual observation. Photometry is then performed on the galaxy-subtracted images using PSF fitting in DAOPHOT with more than 10 stars in the field used to construct the PSF. The magnitude of SN 1999cq was calculated by averaging its magnitude relative to all the stars used to determine the PSF. For all of the observations of SN 1999cq the limiting unfiltered magnitude was ~19.
## 3 Results
Our spectra of SN 1999cq are shown in Figure 2. A low value for the Galactic component of reddening \[$`E(B-V)\approx 0.05`$ mag\] is found using the Galactic maps of Schlegel, Finkbeiner, & Davis (1998). There is no obvious absorption near the location of Na I D at the redshift of the host galaxy (~7900 km s<sup>-1</sup>, determined from the nuclear H$`\alpha `$ emission line; Marzke, Huchra, & Geller found 7890 $`\pm `$ 20 km s<sup>-1</sup> for UGC 11268). To determine a 1$`\sigma `$ upper limit on the equivalent width (EW) of this line, we used the relation EW(1$`\sigma `$) = $`\mathrm{\Delta }\lambda \times \mathrm{\Delta }I`$, where $`\mathrm{\Delta }\lambda `$ is the width of a resolution element (Å) and $`\mathrm{\Delta }I`$ is the 1$`\sigma `$ root-mean-square (rms) fluctuation of the spectrum around a normalized continuum level, based on the derivation of Hobbs (1984). The value we calculated for the Na I D line in the July spectrum of SN 1999cq is EW $`\le `$ 1.8 Å. The Barbon et al. (1990) relation would then give $`E(B-V)\le 0.45`$ mag. For reasons discussed below, however, we feel that the intrinsic reddening is much lower than this upper limit. We note that the spectrum of the nucleus of UGC 11268 has a much higher signal-to-noise ratio than that of the supernova, and it shows a strong Na I D absorption line with an EW of ~1.7 Å, for which the Barbon et al. (1990) relation gives $`E(B-V)\approx 0.44`$ mag.
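For reference, the sketch below implements the Hobbs (1984) detection-limit relation and the Na I D reddening relation as used here; the 0.25 mag Å<sup>-1</sup> slope for the Barbon et al. (1990) relation is inferred from the numbers quoted above and should be treated as approximate.

```python
def ew_1sigma(dlam, rms):
    """1-sigma EW limit (Hobbs 1984): EW = dlam * dI, with dlam the
    width of a resolution element (Angstroms) and rms the fractional
    1-sigma fluctuation about the normalized continuum."""
    return dlam * rms

def ebv_from_naid(ew):
    """Reddening from the Na I D equivalent width (Angstroms); the
    slope is inferred from the values quoted in the text."""
    return 0.25 * ew

print(ebv_from_naid(1.8))  # ~0.45 mag: the 1-sigma upper limit for SN 1999cq
print(ebv_from_naid(1.7))  # ~0.43 mag: the nucleus of UGC 11268
```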
Figure 3 provides a montage of SNe Ib and Ic at similar phases of development for direct comparison with SN 1999cq. While the July spectrum of SN 1999cq in many ways resembles a typical SN Ic during the transition to the nebular phase (e.g., emission lines of \[O I\] $`\lambda \lambda `$6300, 6364, \[Ca II\] $`\lambda \lambda `$7291, 7324, O I $`\lambda `$7774, and the Ca II near-IR triplet), there are two striking differences. The first is the presence in SN 1999cq of several He I lines in emission at an intermediate velocity width compared with the other lines. (Note that SN 1991ar may show similar emission, but it is much less compelling.) The second is the unusually large amount of emission at blue wavelengths in SN 1999cq.
Most of the emission features in the July spectrum have a full width at half-maximum (FWHM) of 7000–9000 km s<sup>-1</sup>. There are three narrow, but resolved (FWHM $`\approx `$ 2000 km s<sup>-1</sup>), lines that we identify as He I $`\lambda `$5876, $`\lambda `$6678, and $`\lambda `$7065. (A typical unresolved night-sky emission line has FWHM $`\approx `$ 750 km s<sup>-1</sup>.) The velocity width and flux are uncertain for the He I $`\lambda `$6678 line because of poor subtraction of nearby \[S II\] $`\lambda \lambda `$6716, 6731 emission (from H II regions) and also because its redshifted position of 6860 Å coincides with the telluric B band. In addition, He I $`\lambda `$5876 is superposed on the P-Cygni emission of Na I D. These issues, along with the noisy nature of the spectrum, make flux determinations problematic. If we assign a flux value of 1.00 to He I $`\lambda `$5876, then He I $`\lambda `$6678 is 0.65 and He I $`\lambda `$7065 is 1.61, with large uncertainties given these caveats.
The August spectrum (Figure 2b) was extremely weak; the supernova had faded considerably and was very difficult to detect. (This was more than 40 days past the last KAIT image in which SN 1999cq was visible, i.e., brighter than the limiting unfiltered magnitude of ~19.) Despite this, we still recover clear signs of the supernova. The Ca II near-IR triplet is present, as is the strong amount of emission at blue wavelengths that characterizes the July spectrum. Most importantly, one of the helium lines (He I $`\lambda `$7065) is still clearly visible with approximately the same velocity width.
The photometric evolution of SN 1999cq is fairly unusual. It is difficult to make direct comparisons with other SNe as our values for SN 1999cq are unfiltered magnitudes. It rose rapidly, climbing 3.2 mag above the limiting value of 19 mag in 4 days. The comparison with the $`R`$-band rise of SN Ic 1994I (Richmond et al. 1996) in Figure 1 is particularly dramatic. SN 1999cq then dropped quickly as well, falling 0.2 mag in the 6 days after the first detection, and then 3.0 mag in the remaining 16 days that it was above the detection limit. During a similar period (25 days) following maximum in the $`R`$ band, SN 1994I fell 2.3 mag (Richmond et al. 1996), while SN 1993J declined by 1.0 mag (Richmond et al. 1994). If there are photometric sub-classes for core-collapse SNe (Clocchiatti et al. 1996a, 1997), then SN 1999cq clearly belongs to the “fast” class. Models for SN 1994I have been able to reproduce a rapidly declining light curve with a core-collapse event in stars whose envelopes have been lost to a companion (Iwamoto et al. 1994; Woosley et al. 1995).
There have been no distance measurements for UGC 11268, although it is a good candidate for IR Tully-Fisher distance determination (Haynes et al. 1999). If the recession velocity (7900 km s<sup>-1</sup>) is chiefly due to the Hubble flow, then the distance of UGC 11268 is 118 Mpc (using $`H_0=67`$ km s<sup>-1</sup> Mpc<sup>-1</sup>), which corresponds to a distance modulus of 35.4 mag. Although our photometric points are unfiltered magnitudes, and the zero-point of our scale is not well determined, our brightest observed magnitude of 15.8 and this distance modulus give us a rough estimate of the absolute magnitude of SN 1999cq near maximum of $`-19.6`$ mag. While this value is quite uncertain, it is still intriguing. An absolute magnitude this bright would make SN 1999cq one of the most luminous SNe Ic ever observed. Clocchiatti et al. (1999) report an absolute $`V`$ magnitude of $`-19.2`$ $`\pm `$ 0.02 or $`-20.2`$ $`\pm `$ 0.02 for the Type Ic SN 1992ar. (The two values are the results of extrapolation of the light curve using two different assumptions for its shape: the fainter value if it followed the slow-type light curve, the brighter one if it were a fast-type decliner.) In addition, the Type Ic SN 1998bw (possibly associated with gamma-ray burst 980425) had an absolute magnitude of $`M_V=-19.35\pm 0.05`$ (Galama et al. 1998), with an extremely rapid rise to maximum (Woosley, Eastman, & Schmidt 1999). As Clocchiatti et al. (1999) discuss in detail, the existence of SNe Ic as bright as SNe Ia complicates the identification of high-redshift SNe used to study cosmological parameters (Riess et al. 1998; Perlmutter et al. 1999). Without spectra of SN 1999cq near maximum light, it is not possible to determine whether or not it could be mistaken for a SN Ia under the difficult conditions presented by the observations of cosmologically distant SNe. Nevertheless, such luminous SNe Ic emphasize the need for careful spectroscopic study of the high-redshift SNe.
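The distance and absolute-magnitude estimate amounts to the following arithmetic (a minimal sketch; the zero-point caveats above still apply):

```python
import math

v = 7900.0     # km/s, recession velocity of UGC 11268
H0 = 67.0      # km/s/Mpc

d = v / H0                              # ~118 Mpc, pure Hubble flow
mu = 5.0 * math.log10(d * 1e6 / 10.0)   # distance modulus, ~35.4 mag
M_peak = 15.8 - mu                      # ~-19.6 mag at the brightest point
print(f"d = {d:.0f} Mpc, mu = {mu:.1f} mag, M = {M_peak:.1f} mag")
```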
## 4 Discussion
### 4.1 Helium Lines
The detection of the intermediate-width helium lines in the spectra of SN 1999cq requires the presence of helium in the circumstellar environment of the progenitor, but not in the ejecta of the exploding star. The width of the lines implies that the helium-emitting region is interacting with the expanding ejecta. The conspicuous absence of Balmer emission suggests a very low abundance of hydrogen. An obvious solution is that SN 1999cq is interacting with material from the progenitor lost through a wind or mass transfer to a companion.
It is also possible that the source of the helium lines is material not originally from the SN progenitor, but rather a companion sufficiently nearby to be affected by the explosion. As mentioned before, mass transfer facilitated by a companion is a common model for SNe Ib/c progenitors, and thus the presence of such a companion must be considered. We note that Chugai (1986) and Marietta, Burrows, & Fryxell (1999) predicted a similar velocity width (~1000 km s<sup>-1</sup>) for the lines of hydrogen from the interaction of a SN Ia with its companion, although those calculations were for material entrained in the ejecta and the resulting emission would not be visible until several hundred days after the explosion. For SN 1999cq, however, such a companion star would probably be hydrogen-rich if it had accreted hydrogen from the progenitor; thus, it is more likely that the helium lines are evidence of helium-rich material that was lost from the progenitor itself prior to core collapse.
The helium lines exhibit unusual line intensity ratios. In a typical hydrogen-rich atmosphere with a temperature of (1–2) $`\times 10^4`$ K and electron density $`n_e\sim 10^2`$–$`10^6`$ cm<sup>-3</sup> one would expect the ratio of He I $`\lambda `$7065 to He I $`\lambda `$5876 to be ~0.1–0.2 (Osterbrock 1989). The ratio observed here (1.61) implies a very different state, most likely related to the non-local-thermodynamic-equilibrium excitation required for the helium emission (Lucy 1991). The work of Almog & Netzer (1989), however, studying the emission-line spectrum of He I over a wide range of physical conditions, indicates that the large 7065/5876 ratio could be the result of a very high density. We cannot directly apply the numbers calculated by Almog & Netzer, as their models were for helium in a hydrogen atmosphere. Nonetheless, their results indicate that for a small range of densities ($`n_e\sim 10^{10}`$ cm<sup>-3</sup>, at $`T=10^4`$ K), the strength of the 7065 line becomes greater than that of 5876. Such a large ratio of 7065/5876 has been observed in other SNe that exhibited circumstellar interaction through helium and hydrogen emission lines, including intermediate-width lines (e.g., SN 1996L; Benetti et al. 1999). Assuming that the trend of increasing 7065/5876 with increasing density applies to an effectively pure helium atmosphere, the intermediate-width helium lines in SN 1999cq originate in an extremely dense environment.
Hydrogen and helium emission lines of similar width were also present in the spectra of SN 1988Z (Filippenko 1991; Stathakis & Sadler 1991; Turatto et al. 1993). Chugai & Danziger (1994) (hereinafter CD94) interpreted this intermediate-width component of the emission as evidence for slow radiative shocks propagating either in dense clumps of wind material or in a dense equatorial belt, with both embedded in a uniform wind. Unfortunately, without multiple epochs and well defined broad lines, it is not possible to apply the details of CD94’s models. Moreover, SN 1999cq has the added complication of an apparently pure helium wind, where the issues of excitation and radiative transfer are not as well known; such a calculation is beyond the scope of this paper. Therefore, we cannot differentiate between the two scenarios of CD94. Nonetheless, the qualitative arguments still apply. Intermediate-width components in the spectra of SNe are likely to be the result of ejecta interacting with dense material lost from the progenitor prior to explosion. The lack of narrow (unresolved) and broad (FWHM $`\stackrel{>}{}`$ 5000 km s<sup>-1</sup>) He I lines in our spectra of SN 1999cq implies that the uniform component of both of CD94’s models is either extremely tenuous and/or that helium excitation is more difficult in such rarefied environments.
As noted above, several models for the progenitors of SNe Ib and Ic postulate that they are massive stars that lose their envelopes prior to core collapse. Both SN 1987K and SN 1993J provided direct evidence for this mechanism by exhibiting hydrogen emission from the low-mass layer of the progenitor’s original envelope that remained. Because the late-time spectra of SNe Ib and Ic are similar (see, e.g., Filippenko 1997), the analogous transformation of a SN Ib to a SN Ic would be much more subtle. In fact, SNe that have broad He absorption lines that fade during the transition to the nebular stage are, by definition, still called SNe Ib. SN 1999cq thus provides evidence for a link in another way. The He I lines we observe indicate an almost pure helium mass loss—exactly what the above models have assumed to produce the stripped-envelope progenitors of SNe Ic.
One question about SN 1999cq remains: Is it legitimately a SN Ic? The late-time spectra of SNe Ib and Ic are similar overall, but the velocity widths of the lines are different. Schlegel & Kirshner (1989) found the widths of \[O I\] $`\lambda \lambda `$6300, 6364 and \[Ca II\] $`\lambda \lambda `$7291, 7324 in late-time spectra of SNe Ib to be FWHM = 4500 $`\pm `$ 600 km s<sup>-1</sup>. In contrast, Filippenko et al. (1995) reported FWHM values for \[Ca II\] of 9200 km s<sup>-1</sup> for SN Ic 1994I and 6200 km s<sup>-1</sup> for SN Ic 1987M, for spectra at similar phases (~4.6 months after maximum). For \[O I\], the FWHM was 7700 km s<sup>-1</sup> for SN 1994I and 7500 km s<sup>-1</sup> for SN 1987M. In SN 1999cq, the FWHM of the \[O I\] line is 7800 km s<sup>-1</sup>, while the \[Ca II\] line is 8000 km s<sup>-1</sup> (given the difficulty of determining a continuum level, the uncertainty of these values is ~15%). The spectrum of SN 1999cq was obtained only a few weeks past maximum, but its rapid evolution (cf. Figure 1) implies that the relative phase is at least comparable. Filippenko et al. (1995) interpreted the larger velocity width of the lines in SN 1994I as indicative of either a higher explosion energy and/or a smaller ejected mass in comparison with SN 1987M. Following this interpretation, SN 1999cq would have an even larger explosion energy and/or smaller ejected mass. This is consistent both with the evidence for mass loss and with the large luminosity of the event, and it makes the classification of SN 1999cq as a Type Ic event more secure.
Another difference between SNe Ib and SNe Ic is the shape of their light curves. The light curves of SNe Ib decline more slowly, typically dropping only ~1.5 mag in the 25 days after maximum ($`B`$ magnitude; Schlegel & Kirshner 1989). As noted before, in that same time SN 1994I dropped 2.3 mag and SN 1999cq fell by 3.2 mag. This implies an even more rapid transition to the nebular phase than that of SN 1994I (~2 months; Filippenko et al. 1995). The first spectrum (July) was obtained only 24 days following the most recent image without the supernova (the temporal position relative to the light curve is marked in Figure 1), and it clearly shows the onset of nebular features. The rapid rise and fall of SN 1999cq also imply that it was of Type Ic.
There is no evidence in the spectra of SN 1999cq for the distinguishing characteristic of SNe Ib—broad helium lines, generally interpreted as P-Cygni profiles. We believe that the P-Cygni profile of Na I D is still evident in Figure 2. The relatively large amount of blue emission in SN 1999cq makes the P-Cygni absorption trough more difficult to discern, but it is there. The blue minimum of the line indicates an expansion velocity of 8300 km s<sup>-1</sup>, well in agreement with the widths of the broad lines. If SN 1999cq did have broad He lines initially, they must have faded very quickly, while the Na I D line remained. Nonetheless, even if SN 1999cq were originally a SN Ib, the intermediate-width He I lines still imply pure helium mass loss.
### 4.2 Blue Emission
As Figure 3 shows, SN 1999cq also exhibits an anomalous “blue bump” of emission in contrast to other SNe Ic. This appears in both of our spectra (July and August). The spectra of the host galaxy nucleus and nearby H II regions extracted from the same long-slit exposures show no evidence for unusual blue emission. Given the extreme overlapping of lines, it is difficult to identify individual features other than perhaps Mg I\] $`\lambda `$4571 and some lines of Fe II.
One possible source for the blue emission is a mixture of overlapping iron lines. Clocchiatti et al. (1997) identified the Fe II lines at $`\lambda `$4924, $`\lambda `$5018, and $`\lambda `$5169 in the Type Ic SN 1983V. They also found Fe II blends of multiplets 27 and 28 (~4100–4400 Å), multiplets 37 and 38 (~4500–4650 Å), and multiplet 74 (near 6200 Å). Other possibilities include multiplets 3, 14, and 29 (~3800–4000 Å), multiplets 43 and 50 (~4700–4750 Å), multiplet 42 (~4900–5200 Å), and multiplets 48 and 49 (~5200–5400 Å) (e.g., Phillips 1976). There is clearly no dearth of potential iron lines in the region of our spectrum that shows this blue emission. In addition, there are several lines from other stable iron-group elements such as Co and Ni (see, e.g., Axelrod 1980).
The presence of such strong iron (and/or iron-group) lines in SN 1999cq compared to other SNe Ic might imply that the mixing of <sup>56</sup>Ni was more extensive in SN 1999cq. This would, however, contradict the standard predictions of the mixing differences between SNe Ib and Ic (Baron 1992), but only if helium were still present in the atmosphere of the progenitor. If the progenitor of SN 1999cq had lost most or all of its helium, leaving a C-O core, then considerable mixing of <sup>56</sup>Ni to the outer layers is possible, resulting in excess iron emission. This could still be consistent with models for mixing in SNe Ic as the <sup>56</sup>Ni is not reaching beyond the C-O core into a helium layer. In addition, the bright absolute magnitude of SN 1999cq may imply the production of more <sup>56</sup>Ni than is produced in a typical SN Ic, thereby providing the excess material to mix into the outer parts of the ejecta.
Given the relatively large cross-sections for the iron-group elements, however, the strong emission lines might not require overly large abundances. A fully quantitative spectral synthesis model would be necessary to determine the actual abundances. A further concern for the effect of mixing on the character of the spectrum is that macroscopic mixing of <sup>56</sup>Ni could occur without necessarily exciting helium, and thus produce the apparent excess of iron-group element emission without creating broad helium lines, as is observed in SN 1999cq. Therefore, SN 1999cq may not provide a discriminant between the mixing models for SNe Ib and Ic. While there are variations in abundances (mainly light elements) and light-curve shapes (which have implications for synthesized <sup>56</sup>Ni) among SNe Ib and SNe Ic (see, e.g., Clocchiatti et al. 1997, and references therein), to our knowledge no other SN Ib or Ic has shown such extreme iron or iron-group emission. Spectra of other luminous SNe Ic (SN 1992ar, Clocchiatti et al. 1999; SN 1998bw, Branch 1999) do not show unusual emission at blue wavelengths.
Another explanation for the “blue bump” is related to the surrounding environment of the supernova. One way to increase the blue emission of an object is to scatter the spectrum off interstellar dust, as in a light echo. This phenomenon has a long history of discussion in association with SNe (e.g., Zwicky 1940; van den Bergh 1975; Chevalier 1986; Schaefer 1987). It has only been definitively observed in two supernovae: SN 1987A (Suntzeff et al. 1988; Crotts 1988; Gouiffes et al. 1988) and SN 1991T (Schmidt et al. 1994; Sparks et al. 1999). With spectra from earlier epochs, one can match late-time echoes. Suntzeff et al. (1988) and Schmidt et al. (1994) each characterized the effects of scattering as a power law ($`F_\lambda \propto \lambda ^{-\alpha }`$), with $`\alpha `$ values of 4.9 $`\pm `$ 0.8 and 2, respectively. (Crotts found $`\alpha =3.5\pm 0.5`$ from broad-band colors.) Not having other spectra of SN 1999cq, we instead compared it with spectra of the SNe Ic 1987M and 1994I that had been corrected by the scattering law. We find that $`\alpha \simeq 1.5`$ gives a good fit. This variation of the exponent in the scattering power law is presumably related to the nature of the scattering dust. The scattering efficiency for particles of a size scale comparable to optical wavelengths decreases with decreasing size (although it will also decrease for particles much larger than optical wavelengths, eventually producing grey extinction), but the shape and composition of the particles have a significant enough effect on scattering to preclude much speculation regarding the nature of the dust (e.g., Yanamandra-Fisher & Hanner 1999).
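The scattering correction itself is a one-line operation; a minimal sketch (the reference wavelength is an arbitrary choice that only sets the overall normalization):

```python
import numpy as np

def apply_scattering(wave, flux, alpha=1.5, wave_ref=5500.0):
    """Multiply a spectrum by the (lambda/lambda_ref)^(-alpha)
    scattering law used to mimic a dust echo; alpha ~ 1.5 is the
    value that best matches SN 1999cq."""
    return flux * (np.asarray(wave) / wave_ref) ** (-alpha)

# Example: the blue end of a flat spectrum is boosted relative to the red
wave = np.array([4000.0, 5500.0, 8000.0])
print(apply_scattering(wave, np.ones(3)))   # ~[1.61, 1.00, 0.57]
```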
As reddening is, in practice, the inverse process to scattering, a more likely solution related to environmental effects is a lack of extinction for SN 1999cq. Both SN 1994I and SN 1987M were considerably reddened. For SN 1994I, Ho & Filippenko (1995) found $`E(B-V)=1.0_{-0.5}^{+1.0}`$ mag (assuming $`R=A_V/E(B-V)=3.0`$) from high-resolution studies of the Na I D lines, but they considered $`E(B-V)\approx 0.47`$ mag a more likely limit; this is also the value Iwamoto et al. (1994) found from light-curve studies. Filippenko, Porter, & Sargent (1990) estimated the reddening of SN 1987M to be $`E(B-V)\approx 0.44`$ mag. The spectrum of SN 1995F in Figure 3 also shows a strong Na I D absorption feature (EW $`\approx 1.0`$ Å) that implies $`E(B-V)\approx 0.25`$ mag from the Barbon et al. (1990) relation. Dereddening these SNe using the extinction correction of Cardelli, Clayton, & Mathis (1989), with the O’Donnell (1994) modifications at blue wavelengths, results in spectra that effectively match the ones produced by correcting with the previously described scattering law (Figure 4).
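For completeness, a sketch of the dereddening step; it uses the Cardelli, Clayton, & Mathis (1989) optical/near-IR polynomials and, for brevity, omits the O’Donnell (1994) modifications adopted in the text.

```python
import numpy as np

def ccm_alav(wave, r_v=3.1):
    """CCM (1989) extinction curve A(lambda)/A(V) for wavelengths in
    Angstroms, valid for 1.1 <= 1/lambda(micron) <= 3.3."""
    y = 1e4 / np.asarray(wave) - 1.82
    a = (1.0 + 0.17699*y - 0.50447*y**2 - 0.02427*y**3 + 0.72085*y**4
         + 0.01979*y**5 - 0.77530*y**6 + 0.32999*y**7)
    b = (1.41338*y + 2.28305*y**2 + 1.07233*y**3 - 5.38434*y**4
         - 0.62251*y**5 + 5.30260*y**6 - 2.09002*y**7)
    return a + b / r_v

def deredden(wave, flux, ebv, r_v=3.1):
    """Remove reddening E(B-V) from an observed spectrum."""
    a_lambda = ccm_alav(wave, r_v) * r_v * ebv   # A(lambda) in mag
    return flux * 10.0 ** (0.4 * a_lambda)
```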
As mentioned above, the intrinsic reddening of SN 1999cq is highly uncertain. The 1$`\sigma `$ upper limit of $`E(B-V)\le 0.45`$ mag is reasonable when compared with other SNe Ic. A more quantitative estimate, using techniques to be described elsewhere (Leonard et al., in preparation), is that the EW is consistent with zero, but with a 1$`\sigma `$ uncertainty of 0.97 Å. Thus a reddening of $`E(B-V)\approx 0.45`$ mag is probably an overestimate, with $`E(B-V)\stackrel{<}{}0.25`$ being more likely. Given how well the dereddened spectra of other SNe Ic match SN 1999cq, it is plausible that it is not significantly reddened. We believe that what appears to be an anomalous level of blue emission might actually represent the true spectrum of an unreddened SN Ic, perhaps with unusually strong Fe II emission since SN 1999cq was such a luminous event. Without a reliable estimate of the extinction, though, this conclusion is highly uncertain.
## 5 Conclusions
Spectra of SN 1999cq reveal significant emission at blue wavelengths in excess of what is observed in other SNe Ic. Any interpretation of this emission is strongly dependent on the intrinsic reddening of SN 1999cq. If we assume SN 1999cq suffers little reddening, then its spectra are indicators of the relative significance of blue emission, presumably iron and iron-group lines, in SNe Ic. The blue emission would contribute almost equally with the standard red nebular lines (\[O I\], \[Ca II\], and Ca II), in contrast to typical SNe Ic. If SN 1999cq is as reddened as SNe Ic usually are, then the alternative solution for the excess blue emission, unusually strong iron and iron-group element emission, would be just as interesting, with potential implications for the mixing of <sup>56</sup>Ni into the outer parts of the progenitor and/or the total amount of <sup>56</sup>Ni produced by the explosion. The detailed nature of the mixing of nickel must be understood to see if mixing models can actually produce the difference between SNe Ib and SNe Ic. If the helium is not present, mixing could still occur, but then the abundance of helium is the critical factor in determining whether or not the core collapse of a stripped star is observed as a SN Ib or a SN Ic.
We have also discovered He I emission lines with an intermediate velocity width in SN 1999cq, a SN Ic. Since such intermediate-width lines likely indicate the interaction of the SN ejecta with dense material lost from the progenitor star, we believe this provides evidence of an almost pure helium wind or mass transfer. If this is the case, then the mechanism that differentiates SNe Ib and Ic is similar to the mass loss or transfer that transforms a Type II core-collapse SN into a Type Ib/c. The He I lines in SN 1999cq have the signature of dense mass loss of a helium envelope, which would not be expected if differing degrees of mixing of <sup>56</sup>Ni into the ejecta were the sole criterion for the difference between SNe Ib and Ic. SN 1999cq does not show a direct transformation such as that of SN 1987K, SN 1993J, or SN 1996cb, but, in the He I lines, we may be seeing a reflection of the last step in the process that creates the progenitors of SNe Ic.
This research was supported by NSF grant AST-9417213. We are grateful to the staff of Lick Observatory (especially K. Baker, W. Earthman, and A. Tullis) for their assistance with the observations. We also thank A. L. Coil, J. R. Graham, and J. C. Shields for helpful discussions. We are grateful to Sun Microsystems Inc. (Academic Equipment Grant Program), Hewlett-Packard Inc., the National Science Foundation, and the Sylvia and Jim Katzman Foundation for donations that made KAIT possible. |
# Gravitational Lensing Effects on High Redshift Type II Supernova Studies with NGST
## 1 Introduction
The star formation activity in the universe very likely started with the formation of the so-called Pop III objects (Couchman & Rees 1986, Ciardi & Ferrara 1997, Haiman et al. 1997, Tegmark et al. 1997, Ferrara 1998) at redshift $`z\sim 30`$. According to standard hierarchical models of structure formation, these small (total mass $`M\sim 10^6M_\odot `$ or baryonic mass $`M_b\sim 10^5M_\odot `$) objects merge together to form larger units. Assembling massive galaxies such as the ones observed today should take a considerable fraction of the lifetime of the Universe. Thus, it is plausible that star formation at high redshift ($`z\stackrel{>}{}5`$) occurred at a rate limited by the relatively small amount of baryonic fuel present in early collapsed structures. As a consequence, these objects are likely to be faint and would probably escape detection by even the largest planned instruments.
However, unless the IMF in Pop III objects is drastically different from the local one, some of the stars formed will end their lives as supernovae (SNe). At peak luminosity, SNe could outshine their host protogalaxy by orders of magnitude and likely become the most distant observable sources, since the QSO redshift distribution shows an apparent cutoff beyond $`z\sim 4`$ (Dunlop & Peacock 1990). Detecting high-$`z`$ SNe would be of primary importance to clarify the role of Pop IIIs in the reionization and reheating of the universe and, in general, to derive the cosmic star formation history and to pose constraints on the IMF and on the chemical enrichment of the universe (Miralda-Escudé & Rees 1997).
The issue becomes even more interesting if we consider the gravitational lensing (GL) effects produced by the intervening cosmological mass distribution on the light emitted by SNe. The flux magnification associated with this process has been investigated in detail in a previous paper (Marri & Ferrara 1998, MF) and found to be substantially dependent on the adopted cosmological model, thus high-$`z`$ SNe seem to be perfect tools to constrain cosmogonies. In this paper we use the method outlined in MF but revised to include realistic lens density profiles. MF considered GL effects produced by point lenses; here we model dark halos/lenses with NFW (Navarro et al. 1997) universal density profiles derived from numerical simulations. Nonetheless, the general numerical methods and principles are the same as in MF and we refer to that paper for details; indeed this more accurate lens modeling improves the predictive power of our calculations to a level comparable to rayshooting methods based on N-body simulations, for which an extension to the high redshift studied here has not yet been possible.
We concentrate mostly on Type II SNe (SNII), although it is straightforward to extend our results to include Type I SNe (SNI). The motivation for this choice is essentially the same as in MF. SNIa are on average brighter than SNII by about 1.5 mag; moreover, SNIa are known to be very good standard candles and, for this reason, they are widely used to determine the geometry of the universe (Riess et al. 1998; Perlmutter 1998). However, it is reasonable to expect that SNI at $`z\stackrel{>}{}5`$ constitute very rare events, since they arise from the explosion of C-O white dwarfs triggered by accretion from a companion; this requires evolutionary timescales comparable to or larger than the Hubble time at those redshifts. A different possibility would be represented by SNIb, which originate from short-lived progenitors: however, they share problems similar to SNII, i.e. they are poorer standard candles and are fainter than SNIa.
Two different cosmological models are considered here: (i) Standard Cold Dark Matter (SCDM), with $`\mathrm{\Omega }_M=1,\mathrm{\Omega }_\mathrm{\Lambda }=0`$, and (ii) Lambda Cold Dark Matter (LCDM): $`\mathrm{\Omega }_M=0.4,\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$. Power spectra are normalized to give the correct cluster abundance at $`z=0`$ ($`\sigma _8=0.57,0.95`$ for SCDM and LCDM, respectively) and we adopt the value $`h=0.65`$ for the dimensionless Hubble constant ($`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). We then derive high-$`z`$ SNII number counts expected when GL flux magnification is included, for typical parameters and planned observational capabilities of the Next Generation Space Telescope (NGST, Stockman et al. 1998). The present results may serve as a guide for future mission operation mode planning.
## 2 Cosmic SNII Rate
To derive the cosmic rate of SNII as a function of redshift we begin by calculating the number density of dark matter halos in the two cosmologies specified above. This can be accomplished by using the Press & Schechter (1974, hereafter PS) formalism; this technique is widely used in semi-analytical models of galaxy formation (White & Frenk 1991, Kauffman 1995, Ciardi & Ferrara 1997, Baugh et al. 1998, Guiderdoni et al. 1998) and it has been shown to be in good agreement with the results from N-body numerical simulations. According to such a prescription we can write the normalized fraction of collapsed objects per unit mass at a given redshift as
$$f(M,z)=\sqrt{\frac{2}{\pi }}\frac{\delta _c(1+z)}{\sigma (M)^2}e^{-\delta _c^2(1+z)^2/2\sigma (M)^2}\left(-\frac{d\sigma (M)}{dM}\right),$$
(1)
where $`\delta _c=1.69`$ is the critical overdensity of perturbations for collapse, and $`\sigma (M)`$ is the gaussian variance of fluctuations on mass scale $`M`$. Next we calculate the star formation rate corresponding to each collapsed halo. Following Ciardi & Ferrara (1997) and Ferrara (1998) we can write the SNII rate per object of total mass $`M`$ as
$$\gamma (z)=\frac{\nu \mathrm{\Omega }_bf_b}{\tau t_{ff}}M\simeq 1.2\times 10^{-7}\mathrm{\Omega }_{b,5}f_{b,8}(1+z)_{30}^{3/2}M_6\mathrm{yr}^{-1},$$
(2)
where $`(1+z)_{30}=(1+z)/30`$, $`M_6=M/10^6M_\odot `$. We assume a Salpeter IMF with a lower cutoff mass equal to $`0.1M_\odot `$, according to which one supernova is produced for each 56 $`M_\odot =\nu ^{-1}`$ of stars formed. The baryon density parameter is $`\mathrm{\Omega }_b=0.05\mathrm{\Omega }_{b,5}`$, of which a fraction $`f_b\simeq 0.08f_{b,8}`$ (Abel et al. 1998) is able to cool and become available to form stars. The halo dark matter density is $`\rho \simeq 200\rho _c=200[1.88\times 10^{-29}h^2(1+z)^3]`$ g cm<sup>-3</sup>; the corresponding free-fall time is $`t_{ff}=(4\pi G\rho )^{-1/2}`$. The star formation efficiency $`\tau ^{-1}=0.6\%`$ is calibrated on the Milky Way.
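In code, the fitted form of eq. (2) reads (a minimal sketch):

```python
def snii_rate(mass, z, omega_b=0.05, f_b=0.08):
    """SNII rate per collapsed object of total mass `mass` (in Msun)
    at redshift z, following the fitted form of eq. (2); returns
    SNe per year."""
    return (1.2e-7 * (omega_b / 0.05) * (f_b / 0.08)
            * ((1.0 + z) / 30.0) ** 1.5 * (mass / 1e6))

print(snii_rate(1e6, 29.0))   # ~1.2e-7 SNe/yr for a 1e6 Msun halo at z = 29
```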
It is well known that the star formation prescription eq. 2 would lead to a very early collapse of the baryons - with consequent star formation - in small halos, contrary to what is currently believed (Madau et al. 1996). It is therefore necessary to introduce some form of feedback, most likely due to supernova energy input into the interstellar medium of the forming galaxy. To this aim we follow the standard approach used in semi-analytical models (Kauffmann, Guiderdoni & White 1994; Baugh, Cole & Frenk 1996, Baugh et al. 1998). The star formation rate (and consequently the SN rate which is proportional to that) is weighted by the feedback function $`ϵ(M)`$, whose expression is
$$ϵ(M)=\frac{1}{1+ϵ_0\left(M_c/M\right)^\alpha }.$$
(3)
The feedback function contains three free parameters: the efficiency, $`ϵ_0`$, the critical mass for feedback, $`M_c`$, and the power, $`\alpha `$, which expresses the dependence of feedback on galactic mass. We choose these parameters in the following way. We first fix the value of $`M_c=10^{10}M_\odot `$. The motivation for this choice comes from the numerical simulations presented by MacLow & Ferrara (1999), who have shown that above that threshold galaxies lose only a negligible mass fraction of gas following moderately powerful starburst episodes. Next, we require the calculated star formation rate to best fit the observed star formation rate in the universe at redshift $`\stackrel{<}{}2`$ (Lilly et al. 1996, Ellis et al. 1996). As considerable uncertainty is present on the behavior of the cosmic star formation curve at higher $`z`$ due to the alleged presence of extinguishing dust (Rowan-Robinson et al. 1997, Smail et al. 1997), such a choice seems to be the most conservative one. The physical interpretation of eq. 3 is that in low mass objects even a relatively small energy input is sufficient to heat (or expel) the gas, thus partially inhibiting further star formation. Although reasonable, the validity of this feedback prescription is uncertain and it should be taken, lacking a better understanding, only as a first approach to the problem. The derived comoving cosmic SNII rate, $`\mathrm{\Gamma }(z)`$, is shown in Fig. 1 for the two cosmological models considered here. SCDM and LCDM models predict similar rates, and at $`z\stackrel{<}{}5`$, for example, our rates are broadly consistent with the SNII rates inferred using the empirical cosmic star formation history deduced from UV/optical data (Sadat et al. 1998, Madau et al. 1998).
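A sketch of the feedback weighting of eq. (3), applied to the rate of eq. (2) (using `snii_rate` from the previous sketch; the best-fit values of $`ϵ_0`$ and $`\alpha `$ are not quoted in the text, so the defaults below are placeholders, not the fitted values):

```python
M_CRIT = 1e10   # Msun, critical mass for feedback (fixed in the text)

def feedback(mass, eps0, alpha):
    """Feedback suppression factor of eq. (3)."""
    return 1.0 / (1.0 + eps0 * (M_CRIT / mass) ** alpha)

def effective_snii_rate(mass, z, eps0=1.0, alpha=1.0):
    """Feedback-weighted SNII rate per halo, eps(M) * gamma(z);
    eps0 and alpha here are placeholder values."""
    return feedback(mass, eps0, alpha) * snii_rate(mass, z)
```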
## 3 Gravitational Lensing Simulations
The peak absolute magnitudes of SNII cover a wide range, overlapping SNIa at the bright end, but are more generally 1.5 mag fainter. In a recent study, Patat et al. (1994) conclude that SNII seem to cluster in at least three groups, which they classify according to their B-band magnitude at maximum as Bright ($`M_B=-18.7`$), Regular ($`M_B=-16.5`$), and Faint ($`M_B=-14`$), respectively. Note that the Faint class is constituted by a single object, i.e. SN 1987A. This classification is based on a limited sample (about 40 SNII), and therefore a statistical bias cannot be ruled out. The results of Patat et al. are also compatible with the empirical distribution law given by van den Bergh & McClure (1994), which we will use in what follows. This local SNII luminosity function (LF) is assumed not to evolve with redshift.
To obtain a statistical description of the magnification bias due to GL, we perform rayshooting simulations as described in MF. First, we fix a solid angle, $`\omega _m=`$($`7.7`$ arcmin)<sup>2</sup>, and thus a cosmic cone in which matter is distributed among halos whose number density as a function of mass and redshift is obtained via a Monte Carlo method applied to the PS mass function. This procedure is repeated for each of the 50 slices in which the redshift range $`z=1`$–$`10`$ is subdivided. We study the propagation of $`N_l=25\times 10^6`$ light rays, uniformly covering at $`z=0`$ a narrower solid angle $`\omega _r=1.5\times 10^{-6}`$ sr, to avoid spurious border effects. The light propagation is studied using the common multiple lens-plane approximation for a thick gravitational lens, in which the 3D matter distribution is projected onto such planes. Finally, light rays are collected within $`N_s=1.4\times 10^5`$ cells on each plane, thus obtaining magnification maps as a function of source position and redshift. The lower and upper limits of the lens mass distribution are equal to $`10^{10}M_\odot `$ and $`10^{15}M_\odot `$, respectively; all other parameters are the same as in MF.
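The Monte Carlo step can be sketched as follows: in each redshift slice the expected number of lenses per mass bin is the PS comoving number density times the comoving volume of the slice inside the cone, and the actual number is drawn from a Poisson distribution (a schematic sketch; the mass binning and the PS densities are inputs computed elsewhere):

```python
import numpy as np

rng = np.random.default_rng(1)

def populate_slice(m_bins, n_ps, slice_volume):
    """Realize the lens population of one redshift slice.
    m_bins: mass-bin centers (Msun); n_ps: PS comoving number density
    per bin (Mpc^-3); slice_volume: comoving volume of the slice inside
    the cone (Mpc^3). Returns the array of halo masses; positions on
    the lens plane would then be drawn uniformly within the cone."""
    counts = rng.poisson(n_ps * slice_volume)   # halos per mass bin
    return np.repeat(m_bins, counts)
```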
The mass distribution inside a lens of total mass $`M`$ at a given redshift is supposed to follow the NFW (Navarro et al. 1997) density profile. The projected surface matter density, as well as the corresponding deflection angle, have been calculated by Bartelmann (1996). Using the appropriate numerical routine (kindly provided by J. Navarro), it is straightforward to calculate the relevant parameters for each lens, once the mass and redshift of the collapsed object are specified.
In contrast to MF, we adopt here a full-beam description of light propagation (Schneider et al. 1992) and, consequently, the average magnification is equal to unity. Such a scheme requires on each plane a negative uniform surface mass density, $`\mathrm{\Sigma }_n^{-}<0`$, the total (negative) mass associated with this density being given by the sum over all lenses in the plane. As discussed in Schneider & Weiss (1988), this approach automatically guarantees flux conservation without using the Dyer-Roeder model, whose correctness is often questioned.
With this prescription for the matter distribution, the ray impact parameter as a function of discretized lens-plane redshift, $`\xi _n`$, can be recursively calculated from the following expression:
$$\xi _{n+1}=-\frac{(1+z_{n-1})D_{n,n+1}}{(1+z_n)D_{n-1,n}}\xi _{n-1}+\frac{D_{n-1,n+1}}{D_{n-1,n}}\xi _n-\frac{4G}{c^2}D_{n,n+1}\sum _{k=1}^{N_n}M_n^kF_n^k(|\xi _n-\xi _n^k|)\frac{\xi _n-\xi _n^k}{|\xi _n-\xi _n^k|^2}-\frac{4\pi G}{c^2}D_{n,n+1}\mathrm{\Sigma }_n^{-}\xi _n,$$
(4)
where symbols are as in MF, but $`D(z)`$ is now the usual Friedmann angular distance and the nondimensional “form factor”, $`F_n^k`$, depends on the model adopted for the lens density profile (provided the profile is axially-symmetric), and it is equal to unity for a point-lens; for a NFW lens the $`F`$-factor can be calculated using the formulae given in Bartelmann (1996). As an example, we show the resulting magnification map for the SCDM model for a source located at redshift 10 in Fig. 2.
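A single step of the recursion of eq. (4) can be sketched as below (a schematic sketch with physical impact parameters and angular distances in Mpc and masses in M$`_\odot `$; the NFW form factors $`F_n^k`$ are assumed to be supplied by the Bartelmann (1996) formulae):

```python
import numpy as np

FOUR_G_C2 = 1.914e-19   # 4*G*Msun/c^2 expressed in Mpc

def rayshoot_step(xi_prev, xi_n, z_prev, z_n, D, lenses, sigma_neg):
    """One step of eq. (4). xi_prev, xi_n: 2D impact parameters on
    planes n-1 and n (Mpc); D: dict of angular distances (Mpc) with
    keys 'n-1,n', 'n,n+1', 'n-1,n+1'; lenses: list of (position,
    mass, form_factor) tuples on plane n; sigma_neg (< 0, Msun/Mpc^2):
    the compensating uniform surface density."""
    xi = (-(1.0 + z_prev) * D['n,n+1'] / ((1.0 + z_n) * D['n-1,n']) * xi_prev
          + D['n-1,n+1'] / D['n-1,n'] * xi_n)
    for pos, mass, form in lenses:
        d = xi_n - pos
        xi -= FOUR_G_C2 * D['n,n+1'] * mass * form * d / np.dot(d, d)
    xi -= np.pi * FOUR_G_C2 * D['n,n+1'] * sigma_neg * xi_n
    return xi
```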
The results of the simulations relevant to the present work are the magnification probabilities at different redshifts for the two cosmological models considered; these are shown in Fig. 3. The cumulative magnification probability, $`P(>\mu )`$, which expresses the probability that a source flux is magnified (or demagnified if $`\mu <1`$) by more than a factor $`\mu `$, is shown for $`z=3,5,10`$. In the lower panel the differential magnification probability is reported for the two redshifts $`z=3,5`$. In agreement with previous works (see e.g. Jaroszyński 1992) we find that LCDM models produce higher magnifications than SCDM models. For comparison, we plot the analogous points from the CDM GL simulation of Wambsganss et al. (1998). Also plotted for the same CDM model are the results of MF: it is clear that the assumption of point lenses in that work overpredicts the magnification with respect to the NFW density profile assumed here. We note that, as the redshift of the source is increased, larger magnifications are allowed: for example, at $`z=10`$ a magnification of 10 times or more has a (cumulative) probability of 0.1% in SCDM and 0.8% in LCDM models, respectively. The position of the peak of the differential magnification probability shifts toward lower $`\mu `$ values as $`z`$ increases, and the distribution becomes flatter in the vicinity of that maximum. De-magnification also becomes more pronounced, and the net effect is an increase in the flux dynamic range. Although our simulations explicitly cover the redshift range $`z=0`$–$`10`$, the behavior of $`P(>\mu )`$ shows that the magnification saturates well before that epoch (see also MF); we can then confidently extrapolate the curve even beyond redshift $`10`$. The following results are obtained assuming an upper redshift limit $`z=15`$, as is also clear from an inspection of Fig. 1.
In order to quantify magnification effects on the observed fluxes, we need to perform the following steps: (i) calculate the total number of SNII by integrating $`\mathrm{\Gamma }(z)`$ in a given redshift interval $`\delta z`$ around $`z_i`$ and in a certain cosmic solid angle $`\omega _{obs}`$; (ii) assign a peak luminosity to each SNII by randomly sampling the luminosity function; (iii) assign an amplification to each SN, first checking for multiple-image events. To this aim it is necessary to identify the light rays coming from the source in the shooting plane. This is done on the fly during the simulations for a large number of cells on every plane. Once a cell is identified as the source (i.e. a SN), we reconstruct the image configuration (number of images, splitting, amplification ratios) by means of an adapted version of the friends-of-friends algorithm, which isolates individual images and determines the corresponding magnifications. The very high angular resolution of NGST ($`0.03`$ arcsec) should be sufficient to resolve all these multiple images, whose minimum separation is typically 3 times larger (corresponding to $`M_{lens}\sim 10^{10}M_{\odot }`$). Each SN is then characterized by a redshift, an intrinsic luminosity and a GL magnification, from which its observed flux can be obtained using the Friedmann luminosity distance. A compressed numerical sketch of this procedure is given below.
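The sketch below condenses steps (i)-(iii) into a Monte Carlo loop; the rate $`\mathrm{\Gamma }(z)`$, the inverse cumulative luminosity function, the magnification distribution and the luminosity distance are placeholder callables, and the full multiple-image reconstruction is replaced by a single magnification draw per supernova.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_sample(z_centers, rate, lum_inv_cdf, mu_inv_cdf, d_lum,
                     omega_obs, dz=0.2):
    """Build a list of (z, L, mu, flux) tuples: counts from Gamma(z)
    (step i), peak luminosities from the luminosity function (step ii)
    and magnifications from the simulated P(mu) at each z (step iii)."""
    sample = []
    for z in z_centers:
        n_sn = rng.poisson(rate(z) * dz * omega_obs)      # step (i)
        L = lum_inv_cdf(rng.random(n_sn))                 # step (ii)
        mu = mu_inv_cdf(rng.random(n_sn), z)              # step (iii), simplified
        flux = mu * L / (4.0 * np.pi * d_lum(z) ** 2)     # observed fluxes
        sample.extend(zip([z] * n_sn, L, mu, flux))
    return sample
```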
As in MF, we adopt $`\delta z=0.2`$ and we simulate the observation of 100 NGST fields, $`4^{\prime }\times 4^{\prime }`$ each, i.e. $`\simeq 0.44`$ deg<sup>2</sup> in total, hence giving a total observed solid angle $`\omega _{obs}\simeq 100\times \omega _r`$.
We suppose that these fields are surveyed for one year; in practice, one possible search strategy could be as follows. Each field is surveyed in two colors and revisited within 1 month to 1 year to find the SN candidates through their variability. The typical exposures are about $`10^3`$ s per color. After selection of good candidates, a third-epoch observation in three colors, with roughly the same exposure time, is necessary to estimate the redshift and constrain the light curve. Hence, a total of about 7000 seconds per field is necessary. For about 100 fields, this observational time corresponds to a few percent of the total NGST observation time. In reality, this overestimates the time to be allocated to the project, as the high-$`z`$ SN search is also a by-product of the deep galaxy and gravitational lensing surveys. A similar method to the one outlined above has been applied to the different individual frames of the HDF-N by Mannucci & Ferrara (1999) to discover a $`z\simeq 1`$ SNIb.
The number of SNII peaks in the range $`5\lesssim z\lesssim 10`$, where about 1400 SNII occur in the cosmic volume and surveillance time considered. At even higher redshift (our computations stop at $`z=15`$) the total number of SNII events decreases to about 300. The two cosmological models (SCDM, LCDM) predict a total of (857, 3656) SNII/yr in 100 NGST fields. The LCDM model offers the best prospects for SNII detection, given the higher magnification probability and larger cosmic volume.
## 4 Results
The previous method allows us to build a synthetic sample of SNIIe, each of them characterized by a given luminosity, distance and gravitational amplification. This sample can be used for several cosmological applications. The most natural one consists in the prediction of the number counts of high-$`z`$ SNIIe as a function of wavelength band, flux and cosmological model. Detecting these sources would enable the study of the early phases ($`z\stackrel{<}{}15`$ for our specific study) of star formation in the universe, as traced by SNII. We will see in the following that the number counts are very weakly affected by the inclusion of GL effects. However, the number counts turn out to be rather sensitive to the cosmological models, mostly because of their different geometry. In addition, GL is shown below to produce considerable effects on the Hubble diagram, commonly used to constrain the geometry of the universe; this implies that the corresponding uncertainty in the determination of cosmological parameters should be carefully treated and removed. We study this effect in the second part of this Section.
### 4.1 SNII Number Counts
In the following we compute the expected SNII number counts for our simulated sample and assess the importance of GL for such a measurement. Given the simulated SNII redshift and luminosity distribution for the two cosmological models, for each SNII we have calculated the apparent AB magnitude in four wavelength bands (JKLM), which should cover the bandpass of NGST (currently $`1-5`$ $`\mu `$m). Assuming a black-body spectrum, $`B_\nu (T)`$, at a temperature $`T=25000`$ K approximately $`15(1+z)`$ days after the explosion (Kirshner 1990, Woosley & Weaver 1986), negative $`K`$-corrections ($`K\stackrel{<}{}-4`$ for $`z\stackrel{>}{}4`$) allow the detection of SNII in the above bands at high-$`z`$. We have taken into account absorption by the IGM (Madau 1995); however, its relevance is very limited as, among the NGST bands, only the $`J`$ band at $`z\stackrel{>}{}9`$ is weakly affected.
Fig. 4 shows the differential SNII counts \[per 0.5 mag per yr per 0.44 deg<sup>2</sup>\] as a function of AB magnitude. The four panels contain the curves for the SCDM and LCDM models and for the $`J`$, $`K`$, $`L`$, and $`M`$ bands, both including and neglecting the effects of GL. For comparison, we plot the NGST magnitude limit $`AB=31.4`$ (vertical line). This is calculated by assuming a constant limiting flux $`\mathcal{F}_{NGST}=10`$ nJy in the wavelength range $`1-5\mu `$m (i.e. $`J-M`$ bands). This can be achieved, for an 8-m (10-m) mirror size and S/N=5, in about $`2.6\times 10^4`$ s ($`1.1\times 10^4`$ s); this result has been obtained using the NGST Exposure Time Calculator, available at http://augusta.stsci.edu. Thus, NGST should be able to reach the peak of the expected SNII count distribution, which is located at $`AB\simeq 30-31`$ for SCDM and $`AB\simeq 31-32`$ for LCDM (depending on the wavelength band). The differences among the various bands are not particularly pronounced, although the $`J`$ and $`K`$ bands present a larger number of luminous ($`AB\stackrel{<}{}27`$) sources, and therefore they might be more suitable for the experiment. Furthermore, we point out that in the $`L`$ and $`M`$ bands NGST, with the current magnitude limit, will not be able to reach the peak of the expected SNII count distribution in the LCDM model.
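As a rough consistency check (assuming a diffraction-limited, background-dominated observation, for which the required exposure time at fixed limiting flux and S/N scales approximately as $`t\propto D^{-4}`$ with mirror diameter $`D`$):

$$\frac{t_{10\mathrm{m}}}{t_{8\mathrm{m}}}\approx \left(\frac{8}{10}\right)^4\approx 0.41,\qquad 2.6\times 10^4\mathrm{s}\times 0.41\approx 1.1\times 10^4\mathrm{s},$$

consistent with the two quoted exposure times.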
The expected number counts are only very mildly affected by the inclusion of lensing effects. As a general rule, the curves for both models become slightly broader and shifted towards fainter magnitudes by approximately 0.5 mag when gravitational lensing is taken into account. These effects are somewhat more important for the LCDM as a result of the larger magnification dynamic range in this cosmology. This behavior derives from the fact that for both models de-magnification is relatively probable, particularly at high redshift, as seen in Fig. 3.
However, a clear difference is seen in the number counts between the two models. At the NGST detection limit in the $`K`$-band, the (lensed) $`N(m)`$ ratio (SCDM:LCDM) is equal to (1:5); moreover, NGST should be able to detect $`\simeq `$ 75% (51%) of the SNII predicted by the SCDM (LCDM) model in the same band. Differences between bands are similar to those outlined above.
### 4.2 Hubble diagram
Although the number counts are weakly affected by GL effects, the latter must be taken into account when deriving cosmological parameters from high-$`z`$ SN experiments. To illustrate this point, we show in Figs. 5-6 the Hubble diagrams for the SCDM and LCDM models derived from our sample. In particular, we have plotted the distance modulus difference $`\delta (m-M)`$ between the SNIIe in our simulated sample (shown as points), which includes GL magnification, and a specific reference model with $`\mathrm{\Omega }_M=0,\mathrm{\Omega }_\mathrm{\Lambda }=0`$. The distance modulus for each supernova is obtained from the formula $`(m-M)_{obs}=5\mathrm{log}[D_{lum}(z)/\sqrt{\mu (z)}]+25`$, with $`\mu (z)`$ as derived from our GL simulations discussed above and $`D_{lum}(z)`$ expressed in Mpc. We assume here that the absolute magnitude $`M`$ can be determined within a small error. Clearly, SNIIe are not as good standard candles as the SNIa commonly used to construct the Hubble diagram at lower redshifts. For the reasons already outlined in the Introduction, though, when exploring the more distant universe only SNe whose progenitors were massive stars might be found, due to the small age of the universe. In addition, our understanding of the physics behind these phenomena is rapidly improving, and various methods have been proposed and successfully tested in the local universe to derive the absolute magnitude of SNII once the light curve and/or their spectrum is known. These methods are reviewed in MF. Even allowing for a persisting error in their absolute magnitudes, this spread should have a statistical nature, as opposed to the systematic one introduced by GL. Therefore, an accurate statistical analysis should be able to disentangle the two effects.
For the sake of simplicity we set $`h=0.65`$ in the following discussion, and we limit the parameter space to open and flat cosmologies with $`\mathrm{\Omega }_k=1-\mathrm{\Omega }_\mathrm{\Lambda }-\mathrm{\Omega }_M<0.7`$, $`0<\mathrm{\Omega }_\mathrm{\Lambda }<1`$ and $`0<\mathrm{\Omega }_M<1`$. The luminosity distance $`D_{lum}(z)`$ is calculated for a Friedmann model with $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ values appropriate for the two cosmological models adopted here, i.e. SCDM and LCDM (Fig. 5 and Fig. 6, respectively). For comparison, we have also plotted the same quantity for the model $`\mathrm{\Omega }_M=0,\mathrm{\Omega }_\mathrm{\Lambda }=1`$. The dotted curves in Figs. 5-6 are thus the distance moduli which would be observed in the absence of any lensing effect in the cosmological models defined by the values of $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ indicated above. Any source of non-systematic error (as for example the uncertainty on the SNII absolute magnitudes) would introduce a quasi-symmetric spread of the data around those curves.
The SNe in the sample show a peculiar spread introduced by magnification/de-magnification effects (we allow for the identification of multiple images, which are counted separately and indicated by circled dots in the Figures; note that high source magnifications are associated with both amplified and de-amplified multiple images). This spread is more pronounced for LCDM models, due to the flatter $`P(>\mu )`$ distribution obtained for this cosmology (Fig. 3). As a result, the simulated data points do not lie exactly on the line corresponding to the LCDM model from which they have been derived; moreover, the points are not symmetrically distributed around it.
Stated differently, if the simulated points corresponded to real data, the determination of the true cosmological parameters might be affected by GL effects. In order to properly estimate the effect we let $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ vary and determine their best-fit values by minimizing the quadratic differences of the fluxes:
$$S_F=\sum _i\left[\frac{F_i}{L_i}-\frac{1}{4\pi D_{lum}^2}\right]^2,$$
(5)
where $`F_i`$ and $`L_i`$ are the i-th source flux and luminosity, respectively. We neglect the intrinsic error on $`L_i`$, assuming that it can be minimized as discussed above. Once the best model is found, we look for the values of $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ whose quadratic deviation is at most $`10\%`$ larger than that of the best-fit model: $`S_F<1.1S_F^{min}`$. Thus, for the SCDM model we find:
$`\mathrm{\Omega }_M=0.97_{-0.18}^{+0.03}`$ (6)
$`\mathrm{\Omega }_\mathrm{\Lambda }=0.01_{-0.01}^{+0.03}`$
and for the LCDM model:
$`\mathrm{\Omega }_M=0.36_{-0.12}^{+0.15}`$ (7)
$`\mathrm{\Omega }_\mathrm{\Lambda }=0.60_{-0.24}^{+0.12}`$
Thus, we conclude that GL has a moderate impact on the determination of cosmological parameters, the largest error being of order 40%. As a caveat, we remark that if the same best-fit method had instead been applied directly to the modulus differences rather than to the fluxes, i.e. minimizing the quantity
$$S_m=\sum _i\left[\delta (m-M)_i-\delta (m-M)_{model}\right]^2,$$
(8)
we would have deduced a much larger, albeit unphysical, discrepancy. For example, for the LCDM model we would have obtained $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })\simeq (0.3,0.5)`$.
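For concreteness, the grid-search fit based on Eq. (5) can be sketched as follows; `d_lum(zs, om, ol)` is a hypothetical, vectorized luminosity-distance routine and unit conversions are left out.

```python
import itertools
import numpy as np

def fit_omegas(fluxes, lums, zs, d_lum):
    """Grid search for (Omega_M, Omega_Lambda) minimizing the flux
    statistic S_F of Eq. (5); d_lum is a placeholder callable."""
    grid = np.arange(0.0, 1.005, 0.01)
    s_of = {}
    for om, ol in itertools.product(grid, grid):
        if 1.0 - ol - om >= 0.7:          # enforce Omega_k < 0.7
            continue
        s_of[(om, ol)] = np.sum(
            (fluxes / lums - 1.0 / (4.0 * np.pi * d_lum(zs, om, ol) ** 2)) ** 2)
    best = min(s_of, key=s_of.get)
    # region with quadratic deviation within 10% of the minimum
    allowed = [p for p, s in s_of.items() if s < 1.1 * s_of[best]]
    return best, allowed
```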
Although relatively small, this error due to gravitational magnification might be relevant for future experiments aiming at determining the geometry of the universe with high-$`z`$ SNe: on the one hand, increasing the SN distance allows for a better discrimination among different models; on the other hand, the blurring of the data due to GL will jeopardize this effort, unless proper account of the effect is taken. Models such as the one presented here could be helpful to improve the fitting procedure, particularly for the most distant SNe, where GL effects are stronger.
## 5 Comments
A great deal of both theoretical and experimental work has recently been devoted to the determination of cosmological parameters using SNIa at moderate redshifts ($`z\stackrel{<}{}1`$). As stated above, SNII are likely to be among the most luminous (and perhaps the only visible) sources in the very distant ($`z\sim 10`$) universe. On the other hand, these early epochs could hold the key to the understanding of the formation of the first objects, reionization and the metal enrichment of the universe. Also, the degeneracy among different cosmological models can be reduced only by extending the current studies to higher redshifts. For these reasons, it seems necessary to find ways to extract as much information as possible from these sources.
Given that techniques like the ones presented here have been applied to low-redshift SNIa to assess the impact of GL, it seems instructive to briefly compare our results with those obtained in those works.
Using a numerical method based on geodesic integration, Holz (1998) calculates the magnification probability and is then able to “correct” the SNIa Hubble diagram for the effects of GL. He argues that lensing systematically skews the apparent brightness distribution of supernovae with respect to the filled-beam value of the luminosity distance. It is worth noting that he adopts two different prescriptions for the dark matter lenses: a constant distribution of halo-like objects and a compact-object distribution. As was recognized also by Metcalf & Silk (1999), the two distributions yield rather different lensing probabilities. In comparison to that work, our matter distribution is self-consistently derived from CDM cosmological models. This appears to be more appropriate for studying the effects arising at redshifts greater than unity where, despite the common normalization to clusters, cosmological models start to differentiate in their predictions on structure formation.
Wambsganss et al. (1997) use a matter distribution derived from a high-resolution N-body numerical simulation of structure formation in a LCDM model. Our results agree qualitatively with theirs as far as the magnification probability is concerned, although we push our GL simulations up to higher redshift. In addition, we have been able to predict the number counts via a semi-analytical model, and therefore to build a synthetic sample enabling us to investigate the effect on an ensemble of sources rather than on a single one. As we comment in MF98, our method to reconstruct the matter distribution, given a cosmological model, is by far faster than using numerical simulations, allowing for an efficient inspection of the parameter space. The drawback is that we miss the information on the spatial distribution of lenses.
We conclude by pointing out some limitations of the model, which can possibly be overcome by future work. Substructure in the halos, such as that caused by gravitational clustering of dark matter within a single halo (Moore et al. 1999) or by baryonic dissipative structures like disks in galaxy halos and, especially, galaxies in clusters, is not taken into account. A compact nature of the dark matter could produce different magnification probabilities and microlensing effects. Recent results have shown that the latter effect is probably negligible, as microlensing-induced variations of SN light curves can produce measurable effects only for lens masses $`\stackrel{<}{}10^{-4}M_{\odot }`$ (see e.g. Kolatt & Bartelmann 1998; Porciani & Madau 1998).
We thank M. Bartelmann, L. King and P. Schneider for useful comments and discussions. Part of this work has been supported by a JILA Visitor Fellowship (AF); LP acknowledges the support of CNAA during the development of this project; SM wishes to thank H. Mathis for help in adapting the friends-of-friends algorithm.
# Neutrinos: Heralds of New Physics
## 1 A Neutrino Story
Over the last seventy years, neutrinos have proved to be of central importance for our understanding of fundamental interactions: from their existence, which led to Fermi's theory of $`\beta `$ decay, to their chiral nature, which held the key to parity violation; and again today it is no surprise that it is their masses that give the first indication of new interactions beyond the standard model.
Once it became apparent that the spectrum of $`\beta `$ electrons was continuous, something drastic had to be done! In December 1930, in a letter that starts with typical panache, “Dear Radioactive Ladies and Gentlemen…”, W. Pauli puts forward a “desperate” way out: there is a companion neutral particle to the $`\beta `$ electron. Thus earthlings became aware of the neutrino, so named in 1933 by Fermi (Pauli's original name, neutron, had been taken over by Chadwick's newly discovered heavy neutral particle), the name implying that there is something small about it, specifically its mass, although nobody at that time thought it was that small.
Fifteen years later, B. Pontecorvo proposes the unthinkable, that neutrinos can be detected: an electron neutrino that hits a $`{}_{}{}^{37}Cl`$ atom will transform it into the inert radioactive gas $`{}_{}{}^{37}Ar`$, which can be stored and then detected through radioactive decay. Pontecorvo did not publish the report, perhaps because of the times, or because Fermi thought the idea ingenious but not immediately relevant.
In 1956, using a scintillation counter experiment they had proposed three years earlier, Cowan and Reines discover electron antineutrinos through the reaction $`\overline{\nu }_e+p\rightarrow e^++n`$. Cowan passed away before 1995, the year Fred Reines was awarded the Nobel Prize for their discovery. There emerge two lessons in neutrino physics: not only is patience required, but also longevity: it took $`26`$ years from birth to detection and then another $`39`$ for the Nobel Committee to recognize the achievement! This should encourage physicists to train their children at the earliest possible age to follow in their footsteps, in order to establish dynasties of neutrino physicists. Perhaps then Nobel prizes will be awarded to scientific families?
In 1956, it was rumored that Davis , following Pontecorvo’s proposal, had found evidence for neutrinos coming from a pile, and Pontecorvo , influenced by the recent work of Gell-Mann and Pais, theorized that an antineutrino produced in the Savannah reactor could oscillate into a neutrino and be detected. The rumor went away, but the idea of neutrino oscillations was born; it has remained with us ever since.
Neutrinos give up their secrets very grudgingly: their helicity was measured in 1958 by M. Goldhaber, but it took 40 more years for experimentalists to produce convincing evidence for their mass. The second neutrino, the muon neutrino, was detected in 1962 (long anticipated by theorists Inouë and Sakata in 1943). This time things went a bit faster, as it took only 19 years from theory (1943) to discovery (1962) and 26 years to Nobel recognition (1988).
That same year, Maki, Nakagawa and Sakata introduce two crucial ideas: neutrino flavors can mix, and their mixing can cause one type of neutrino to oscillate into the other (called today flavor oscillation). This is possible only if the two neutrino flavors have different masses.
In 1964, using Bahcall’s result of an enhanced capture rate of $`{}_{}{}^{8}B`$ neutrinos through an excited state of $`{}_{}{}^{37}Ar`$, Davis proposes to search for $`{}_{}{}^{8}B`$ solar neutrinos using a $`100,000`$ gallon tank of cleaning fluid deep underground. Soon after, R. Davis starts his epochal experiment at the Homestake mine, marking the beginning of the solar neutrino watch which continues to this day. In 1968, Davis et al reported a deficit in the solar neutrino flux, a result that stands to this day as a truly remarkable experimental tour de force. Shortly after, Gribov and Pontecorvo interpreted the deficit as evidence for neutrino oscillations.
In the early 1970's, the idea of quark-lepton symmetries suggests that the proton could be unstable. This brings about the construction of underground detectors, large enough to monitor many protons, and instrumented to detect the Čerenkov light emitted by the proton's decay products. By the middle 1980's, several such detectors are in place. They fail to detect proton decay but, in a remarkable serendipitous turn of events, 150,000 years earlier a supernova had erupted in the Large Magellanic Cloud, and in 1987 its burst of neutrinos was detected in these detectors! All of a sudden, proton decay detectors turn their attention to neutrinos, while still waiting, to this day, for their protons to decay! Today, these detectors have shown great success in measuring solar and atmospheric neutrinos, culminating in SuperKamiokande's discovery of evidence for neutrino masses.
## 2 Standard Model Neutrinos
The standard model of electro-weak and strong interactions contains three left-handed neutrinos. The three neutrinos are represented by two-component Weyl spinors, $`\nu _i`$, $`i=e,\mu ,\tau `$, each describing a left-handed fermion (right-handed antifermion). As the upper components of weak isodoublets $`L_i`$, they have $`I_{3W}=1/2`$, and a unit of the global $`i`$th lepton number.
These standard model neutrinos are strictly massless. The only Lorentz scalar made out of these neutrinos is the Majorana mass, of the form $`\nu _i^t\nu _j`$; it has the quantum numbers of a weak isotriplet, with third component $`I_{3W}=1`$, as well as two units of total lepton number. Thus to generate a Majorana mass term at tree-level, one needs a Higgs isotriplet with two units of lepton number. Since the standard model Higgs is a weak isodoublet Higgs, there are no tree-level neutrino masses.
Quantum corrections, on the other hand, are not limited to renormalizable couplings, and it is easy to make a weak isotriplet out of two isodoublets, yielding the $`SU(2)\times U(1)`$ invariant $`L_i^t\stackrel{}{\tau }L_jH^t\stackrel{}{\tau }H`$, where $`H`$ is the Higgs doublet. As this term is not invariant under lepton number, it is not generated in perturbation theory. Thus the important conclusion: the standard model neutrinos are kept massless by global chiral lepton number symmetry. Simply put, neutrino masses offer proof of physics beyond the standard model.
## 3 Neutrino Masses
Direct experimental limits on neutrino masses are quite impressive: $`m_{\nu _e}<10\mathrm{eV}`$, $`m_{\nu _\mu }<170\mathrm{keV}`$, $`m_{\nu _\tau }<18\mathrm{MeV}`$; neutrinos must be extraordinarily light. Any model that generates neutrino masses must contain a natural mechanism that explains their small value relative to that of their charged counterparts.
We just mention one way to generate neutrino masses without new fermions: add lepton-number-carrying Higgs fields to the standard model which break lepton number explicitly or spontaneously through their interactions.
Perhaps the simplest way to give neutrinos masses is to introduce for each one an electroweak singlet Dirac partner, $`\overline{N}_i`$. These appear naturally in the grand unified group $`SO(10)`$. Neutrino Dirac masses are generated by the couplings $`L_i\overline{N}_jH`$ after electroweak breaking. Unfortunately, these Yukawa couplings yield masses of the same type as the quark and charged lepton masses, carrying $`\mathrm{\Delta }I_w=1/2`$.
Based on recent ideas from string theory, it has been proposed that the world of four dimensions is in fact a “brane” immersed in a higher dimensional space. In this view, all fields with electroweak quantum numbers live on the brane, while standard model singlet fields can live on the “bulk” as well. One such field is the graviton, others could be the right-handed neutrinos. Their couplings to the brane are reduced by geometrical factors, and the smallness of neutrino masses is due to the naturally small coupling between brane and bulk fields.
In the absence of any credible dynamics for the physics of the bulk, and in the belief that “one neutrino on the brane is worth two in the bulk”, we take the more conservative approach in which the bulk does open up, but at much shorter scales. One indication of such a scale is that at which the gauge couplings unify; the other is given by the value of the neutrino masses. The situation is remedied by introducing Majorana mass terms $`\overline{N}_i\overline{N}_j`$ for the right-handed neutrinos. The mass $`M`$ of these new degrees of freedom is arbitrary, since they have no electroweak quantum numbers ($`\mathrm{\Delta }I_w=0`$). If it is much larger than the electroweak scale, the neutrino masses are suppressed relative to those of their charged counterparts by the ratio of the electroweak scale to that new scale: the mass matrix (in $`3\times 3`$ block form) is
$$\left(\begin{array}{cc}0& m\\ m& M\end{array}\right),$$
(1)
leading to one small and one large eigenvalue
$$m_\nu \approx m\frac{m}{M}\left(\mathrm{\Delta }I_w=\frac{1}{2}\right)\left(\frac{\mathrm{\Delta }I_w=\frac{1}{2}}{\mathrm{\Delta }I_w=0}\right).$$
(2)
This seesaw mechanism provides a natural explanation for the smallness of the neutrino masses as long as lepton number is broken at a large scale $`M`$. With $`M`$ around the energy at which the gauge couplings unify, this yields neutrino masses at or below the eV region.
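As a rough numerical illustration of the suppression (taking a Dirac mass of order the top quark mass and $`M`$ at the unification scale, illustrative choices):

$$m_\nu \approx \frac{m^2}{M}\approx \frac{(175\mathrm{GeV})^2}{10^{16}\mathrm{GeV}}\approx 3\times 10^{-12}\mathrm{GeV}=3\times 10^{-3}\mathrm{eV},$$

comfortably below the eV region and of the same order as the mass scales suggested by the oscillation data discussed below.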
The flavor mixing comes from two different parts, the diagonalization of the charged lepton Yukawa couplings, and that of the neutrino masses. From the charged lepton Yukawas, we obtain $`𝒰_e`$, the unitary matrix that rotates the lepton doublets $`L_i`$. From the neutrino Majorana matrix, we obtain $`𝒰_\nu `$, the matrix that diagonalizes the Majorana mass matrix. The $`6\times 6`$ seesaw Majorana matrix can be written in $`3\times 3`$ block form
$$\mathcal{M}=𝒱_\nu ^t𝒟𝒱_\nu ,\qquad 𝒱_\nu \approx \left(\begin{array}{cc}𝒰_\nu & ϵ𝒰_{N\nu }\\ ϵ𝒰_{N\nu }^t& 𝒰_N\end{array}\right),$$
(3)
where $`ϵ`$ is the tiny ratio of the electroweak to lepton-number-violating scales, and $`𝒟=\mathrm{diag}(ϵ^2𝒟_\nu ,𝒟_N)`$ is a diagonal matrix. $`𝒟_\nu `$ contains the three neutrino masses, and $`ϵ^2`$ is the seesaw suppression. The weak charged current is then given by
$$j_\mu ^+=e_i^{\dagger }\sigma _\mu 𝒰_{MNS}^{ij}\nu _j,$$
(4)
where
$$𝒰_{MNS}=𝒰_e𝒰_\nu ^{\dagger },$$
(5)
is the matrix first introduced in ref , the analog of the CKM matrix in the quark sector.
In the seesaw-augmented standard model, this mixing matrix is totally arbitrary. It contains, as does the CKM matrix, three rotation angles and one CP-violating phase, as well as two additional CP-violating phases which cannot be absorbed in a redefinition of the neutrino fields because of their Majorana masses (these extra phases can be measured only in $`\mathrm{\Delta }\mathcal{L}=2`$ processes). All these additional parameters await determination by experiment, although maximal $`\nu _\mu \nu _\tau `$ mixing was anticipated long ago on the basis of grand unified ideas.
## 4 Present Experimental Issues
The best direct limit on the electron neutrino mass comes from Tritium $`\beta `$ decay, but it does not specify its type, Dirac or Majorana-like. An important clue is the absence of neutrinoless double $`\beta `$ decay, which puts a limit on electron lepton number violation.
Much smaller neutrino masses can be detected through neutrino oscillations. One class of experiments observes natural sources of neutrinos: some are reasonably well understood and predictable, such as neutrinos produced by cosmic ray secondaries or in the sun; others, such as neutrinos produced in supernovae close enough to be detected, are much rarer. A second type of experiment monitors neutrinos from reactors, and a third type uses accelerator neutrino beams. Below we give a brief description of some of these experiments.
• Atmospheric Neutrinos
Neutrinos produced in the decay of secondaries from cosmic ray collisions with the atmosphere have a definite flavor signature: there are twice as many muon-like as electron-like neutrinos and antineutrinos, simply because pions decay predominantly into muons. It has been known for some time that this 2:1 ratio differed from observation, hinting at a deficit of muon neutrinos. However, last year SuperK was able to correlate this deficit with the length of travel of these neutrinos, and this correlation is the most persuasive evidence for muon neutrino oscillations: after birth, not all muon neutrinos make it to the detector as muon neutrinos; they oscillate into something else which, in the most conservative view, should be either an electron or a tau neutrino. However, a nuclear reactor experiment, CHOOZ, rules out the electron neutrino as a candidate. Thus there remain two possibilities: the tau neutrino, or another type of neutrino that does not interact weakly, a sterile neutrino. The latter possibility is being increasingly disfavored by careful analyses of matter effects: it seems that muon neutrinos oscillate into tau neutrinos. The oscillation parameters are
$$(m_{\nu _\tau }^2-m_{\nu _\mu }^2)\sim 10^{-3}\mathrm{eV}^2;\qquad \mathrm{sin}^22\theta _{\nu _\mu \nu _\tau }\sim .86.$$
(6)
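For reference, these parameters enter the standard two-flavor vacuum survival probability (a textbook result, with $`L`$ in km, $`E`$ in GeV and $`\mathrm{\Delta }m^2`$ in eV<sup>2</sup>):

$$P(\nu _\mu \to \nu _\mu )=1-\mathrm{sin}^22\theta \mathrm{sin}^2\left(\frac{1.27\mathrm{\Delta }m^2L}{E}\right),$$

so that for $`\mathrm{\Delta }m^2\sim 10^{-3}`$ eV<sup>2</sup> and $`E\sim 1`$ GeV the first oscillation maximum occurs at $`L\sim 10^3`$ km, well within the spread of atmospheric path lengths (from $`\sim 15`$ km overhead to $`\sim 1.3\times 10^4`$ km for upward-going neutrinos), which is why the deficit correlates with zenith angle.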
Although this epochal result stands on its own, it should be confirmed by other experiments. Among these are experiments that monitor muon neutrino beams, both at short and long baselines.

• Solar Neutrinos
Starting with the pioneering Homestake experiment, there is clearly a deficit in the number of electron neutrinos from the Sun. This has now been verified by many experiments, probing different ranges of neutrino energies and emission processes. This neutrino deficit can be parametrized in three ways
* Vacuum oscillations of the electron neutrino into some other species, sterile or active, can fit the present data, with large mixing angle, $`\mathrm{sin}^22\theta _{\nu _e\nu _\mathrm{?}}\sim .7`$, and
$$(m_{\nu _e}^2-m_{\nu _\mathrm{?}}^2)\sim 10^{-10}-10^{-11}\mathrm{eV}^2.$$
(7)
This possibility implies a seasonal variation of the flux, which the present data is so far unable to detect.
* MSW oscillations. In this case, neutrinos produced in the solar core traverse the sun like a beam with an index of refraction. For a large range of parameters, this can result in a level-crossing region inside the sun. There are two distinct cases, according to whether the level crossing is adiabatic or not. These interpretations yield different ranges of fundamental parameters.
The non-adiabatic layer yields the small angle solution, $`\mathrm{sin}^22\theta _{\nu _e\nu _\mathrm{?}}\sim 2\times 10^{-3}`$, and
$$(m_{\nu _e}^2-m_{\nu _\mathrm{?}}^2)\sim 5\times 10^{-6}\mathrm{eV}^2.$$
(8)
The adiabatic layer transitions yield the large angle solution, $`\mathrm{sin}^22\theta _{\nu _e\nu _\mathrm{?}}\sim 0.65`$, with
$$(m_{\nu _e}^2-m_{\nu _\mathrm{?}}^2)\sim 10^{-4}-10^{-5}\mathrm{eV}^2.$$
(9)
This solution implies a detectable day-night asymmetry in the flux.
How do we distinguish between these possibilities? Each of them implies different distortions of the Boron spectrum relative to the laboratory measurements. In addition, the highest energy solar neutrinos may not all come from Boron decay; some are expected to be “hep” neutrinos coming from $`p+{}_{}{}^{3}He\rightarrow {}_{}{}^{4}He+e^++\nu _e`$.
In their measurement of the recoil electron spectrum, SuperK data show an excess of events at the high-energy end, which would tend to favor vacuum oscillations. They also see a mild day-night asymmetry effect which would tend to favor the large angle MSW solution. In short, their present data do not allow for any definitive conclusions, as they are self-contradictory.
A new solar neutrino detector, the Sudbury Neutrino Observatory (SNO), now coming on-line, should be able to distinguish between these scenarios. It contains heavy water, allowing a more precise determination of the electron recoil energy, as the scattering involves the heavier deuterium. Thus we expect a better resolution of the Boron spectrum's distortion. Also, with neutron detectors in place, SNO will be able to detect all active neutrino species through their neutral current interactions. If successful, this will provide a smoking-gun test for neutrino oscillations.
• Accelerator Oscillations
These have been reported by the LSND collaboration , with large angle mixing between muon and electron antineutrinos. This result has been partially challenged by the KARMEN experiment which sees no such evidence, although they cannot rule out the LSND result. This controversy will be resolved by an upcoming experiment at FermiLab, called MiniBoone. This is a very important issue because, assuming that all experiments are correct, the LSND result requires a sterile neutrino to explain the other experiments, that is both light and mixed with the normal neutrinos. This would require a profound rethinking of our ideas about the low energy content of the standard model.
At the end of this Century, there remain several burning issues in neutrino physics that are likely to be settled soon by experiments:
* Origin of the Solar Neutrino Deficit
This is being addressed by SuperK, in their measurement of the shape of the $`{}_{}{}^{8}B`$ spectrum, of day-night asymmetry and of the seasonal variation of the neutrino flux. Their reach will soon be improved by lowering their threshold energy.
SNO is joining the hunt, and is expected to provide a more accurate measurement of the Boron flux. Its raison d’être, however, is the ability to measure neutral current interactions. If there are no sterile neutrinos, we might have a flavor independent measurement of the solar neutrino flux, while measuring at the same time the electron neutrino flux!
This experiment will be joined by BOREXINO, designed to measure neutrinos from the $`{}_{}{}^{7}Be`$ capture. These neutrinos are suppressed in the small angle MSW solution, which could explain the results from the $`pp`$ solar neutrino experiments and those that measure the Boron neutrinos.
* Atmospheric Neutrino Deficit
Here, there are several long baseline experiments to monitor muon neutrino beams and corroborate the SuperK results. The first, called K2K, already in progress, sends a beam from KEK to SuperK. Another, called MINOS, will monitor a FermiLab neutrino beam at the Soudan mine, 730 km away. A third experiment under consideration would send a CERN beam towards the Gran Sasso laboratory (also about 730 km away!). Eventually, these experiments hope to detect the appearance of a tau neutrino.
This brief survey of upcoming experiments in neutrino physics was intended to give a flavor of things to come. These measurements will not only determine neutrino parameters (masses and mixing angles), but will help us answer fundamental questions about the nature of neutrinos, especially the possible kinship between leptons and quarks. The future of neutrino physics is bright, and with much more to come: the production of intense neutrino beams in muon storage rings, and even the detection of the cosmological neutrino background!
## 5 Theories
On the theory side, it must be said that theoretical predictions of lepton hierarchies and mixings depend very much on hitherto untested theoretical assumptions. In the quark sector, where the bulk of the experimental data resides, the theoretical origin of quark hierarchies and mixings is a mystery: although there exist many theories, none is so convincing as to offer a definitive answer to the community's satisfaction. It is therefore no surprise that there are more theories of lepton masses and mixings than there are parameters to be measured. Nevertheless, one can formulate the issues in the form of questions:
* Do the right handed neutrinos have quantum numbers beyond the standard model?
* Are quarks and leptons related by grand unified theories?
* Are quarks and leptons related by anomalies?
* Are there family symmetries for quarks and leptons?
The measured numerical value of the neutrino mass difference (barring any fortuitous degeneracies) suggests, through the seesaw mechanism, a mass for the right-handed neutrinos that is consistent with the scale at which the gauge couplings unify. Is this just a numerical coincidence, or should we view it as a hint of grand unification?
Grand unified theories, originally proposed as a way to treat leptons and quarks on the same footing, imply symmetries much larger than the standard model's. Implementation of these ideas necessitates a desert and supersymmetry, but also a carefully designed contingent of Higgs particles to achieve the desired symmetry breaking. That such models can be built is perhaps more of a testimony to the cleverness of theorists than to Nature's. Indeed, with the advent of string theory, we know that the best features of grand unified theories can be preserved, as most of the symmetry breaking is achieved by geometric compactification from higher dimensions.
An alternative point of view is that the vanishing of chiral anomalies is necessary for consistent theories, and their cancellation is most easily achieved by assembling matter in representations of anomaly-free groups. Perhaps anomaly cancellation is more important than group structure.
Below, we present two theoretical frameworks from our own work, in which one deduces the lepton mixing parameters and masses. One is ancient; it uses the standard techniques of grand unification, but it had the virtue of predicting the large $`\nu _\mu \nu _\tau `$ mixing observed by SuperKamiokande. The other is more recent, and uses extra Abelian family symmetries to explain both quark and lepton hierarchies. It also predicted large $`\nu _\mu \nu _\tau `$ mixing, while both schemes predict small $`\nu _e\nu _\mu `$ mixing.
### 5.1 A Grand Unified Model
The seesaw mechanism was born in the context of the grand unified group $`SO(10)`$, which naturally contains electroweak-neutral right-handed neutrinos. Each standard model family appears in two irreducible representations of $`SU(5)`$. However, the predictions of this theory for the Yukawa couplings are not so clear cut, and to reproduce the known quark and charged lepton hierarchies, a special but simple set of Higgs particles had to be included. In the simple scheme proposed by Georgi and Jarlskog, the ratios between the charged lepton and quark masses are reproduced, albeit not naturally, since two Yukawa couplings, not fixed by group theory, had to be set equal. This motivated us to generalize their scheme to $`SO(10)`$, where it is (technically) natural, which meant that we had an automatic window into neutrino masses through the seesaw. The Yukawa couplings were Higgs-heavy, involving $`\mathrm{𝟏𝟐𝟔}`$ representations, but the attitude at the time was “damn the Higgs torpedoes, and see what happens”. A modern treatment would include non-renormalizable operators, but with similar conclusions. The model yielded the mass relations
$$m_d-m_s=3(m_e-m_\mu );\qquad m_dm_s=m_em_\mu ;$$
(10)
as well as
$$m_b=m_\tau ,$$
(11)
and mixing angles
$$V_{us}=\mathrm{tan}\theta _c=\sqrt{\frac{m_d}{m_s}};V_{cb}=\sqrt{\frac{m_c}{m_t}}.$$
(12)
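As a quick numerical check of the first of these mixing relations (using the rough current-quark ratio $`m_d/m_s\simeq 1/20`$, an input value not quoted in the text):

$$V_{us}=\sqrt{\frac{m_d}{m_s}}\approx \sqrt{\frac{1}{20}}\approx 0.22,$$

quite close to the measured Cabibbo angle.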
While reproducing the well-known lepton and quark mass hierarchies, it predicted a long-lived $`b`$ quark, contrary to the lore of the time. It also made predictions in the lepton sector, namely maximal $`\nu _\tau \nu _\mu `$ mixing, small $`\nu _e\nu _\mu `$ mixing of the order of $`(m_e/m_\mu )^{1/2}`$, and no $`\nu _e\nu _\tau `$ mixing.
The neutral lepton masses came out to be hierarchical, but heavily dependent on the masses of the right-handed neutrinos. The electron neutrino mass came out much lighter than those of $`\nu _\mu `$ and $`\nu _\tau `$. Their numerical values depended on the top quark mass, which was then supposed to be in the tens of GeVs!
Given the present knowledge, some of the features are remarkable, such as the long-lived $`b`$ quark and the maximal $`\nu _\tau \nu _\mu `$ mixing. On the other hand, the actual numerical value of the $`b`$ lifetime was off a bit, and the $`\nu _e\nu _\mu `$ mixing was too large to reproduce the small angle MSW solution of the solar neutrino problem.
The lesson should be that the simplest $`SO(10)`$ model that fits the observed quark and charged lepton hierarchies reproduces, at least qualitatively, the maximal mixing found by SuperK, and predicts small mixing with the electron neutrino.
### 5.2 A Grand Ununified Model
There is another way to generate hierarchies, based on adding extra family symmetries to the standard model, without invoking grand unification. These types of models address only the Cabibbo suppression of the Yukawa couplings, and are not as predictive as specific grand unified models. Still, they predict no Cabibbo suppression between the muon and tau neutrinos. Below, we present a pre-SuperK model with those features.
The Cabibbo suppression is assumed to be an indication of extra family symmetries in the standard model. The idea is that any standard model-invariant operator, such as $`𝐐_i\overline{𝐝}_jH_d`$, cannot be present at tree-level if there are additional symmetries under which the operator is not invariant. The simplest possibility is to assume an Abelian symmetry, with an electroweak singlet field $`\theta `$ as its order parameter. Then the interaction
$$𝐐_i\overline{𝐝}_jH_d\left(\frac{\theta }{M}\right)^{n_{ij}}$$
(13)
can appear in the potential as long as the family charges balance under the new symmetry. As $`\theta `$ acquires a $`vev`$, this leads to a suppression of the Yukawa couplings of the order of $`\lambda ^{n_{ij}}`$ for each matrix element, with $`\lambda =\theta /M`$ identified with the Cabibbo angle, and $`M`$ is the natural cut-off of the effective low energy theory. As a consequence of the charge balance equation
$$X_{ij}^{[d]}+n_{ij}X_\theta =0,$$
(14)
the exponents of the suppression are related to the charge of the standard model-invariant operator, that is, the sum of the charges of the fields that make up the invariant.
This simple Ansatz, together with the seesaw mechanism, implies that the family structure of the neutrino mass matrix is determined by the charges of the left-handed lepton doublet fields.
Each charged lepton Yukawa coupling $`L_i\overline{N}_jH_u`$ has an extra charge $`X_{L_i}+X_{N_j}+X_H`$, which gives the Cabibbo suppression of the $`ij`$ matrix element. Hence, the orders of magnitude of these couplings can be expressed as
$$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{Y}\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right),$$
(15)
where $`\widehat{Y}`$ is a Yukawa matrix with no Cabibbo suppressions, $`l_i=X_{L_i}/X_\theta `$ are the charges of the left-handed doublets, and $`p_i=X_{N_i}/X_\theta `$, those of the singlets. The first matrix forms half of the MNS matrix. Similarly, the mass matrix for the right-handed neutrinos, $`\overline{N}_i\overline{N}_j`$ will be written in the form
$$\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right)\widehat{\mathcal{M}}\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right).$$
(16)
The diagonalization of the seesaw matrix is of the form
$$L_iH_u\overline{N}_j\left(\frac{1}{\overline{N}\overline{N}}\right)_{jk}\overline{N}_kH_uL_l,$$
(17)
from which the Cabibbo suppression matrix from the $`\overline{N}_i`$ fields cancels, leaving us with
$$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{\mathcal{M}}\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right),$$
(18)
where $`\widehat{\mathcal{M}}`$ is a matrix with no Cabibbo suppressions. The Cabibbo structure of the seesaw neutrino matrix is determined solely by the charges of the lepton doublets! As a result, the Cabibbo structure of the MNS mixing matrix is also due entirely to the charges of the three lepton doublets. This general conclusion depends on the existence of at least one Abelian family symmetry, which we argue is implied by the observed structure in the quark sector.
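This cancellation is easy to verify numerically. The sketch below dresses random order-one matrices with the suppressions of Eqs. (15)-(16) and applies the seesaw of Eq. (17); the charge values are illustrative (chosen so that the resulting texture matches Eq. (24)), not the model's actual assignments.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.22                                 # Cabibbo-sized expansion parameter
l = np.array([3, 0, 0])                    # illustrative doublet charges
p = np.array([2, 1, 0])                    # illustrative singlet charges (must drop out)

def o1(shape):                             # random matrix with order-one entries
    return rng.uniform(0.5, 1.5, size=shape) * rng.choice([-1.0, 1.0], size=shape)

Ld, Pd = np.diag(lam ** l), np.diag(lam ** p)
m_dirac = Ld @ o1((3, 3)) @ Pd             # Dirac couplings, cf. Eq. (15)
A = o1((3, 3))
A = (A + A.T) / 2.0                        # symmetric order-one core
M_rh = Pd @ A @ Pd                         # right-handed Majorana matrix, cf. Eq. (16)
m_nu = m_dirac @ np.linalg.inv(M_rh) @ m_dirac.T   # seesaw, cf. Eq. (17)

# the p-dependence cancels: m_nu ~ diag(lam**l) O(1) diag(lam**l), cf. Eq. (18);
# the printed exponents scatter around [[6, 3, 3], [3, 0, 0], [3, 0, 0]]
print(np.round(np.log(np.abs(m_nu)) / np.log(lam), 1))
```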
The Wolfenstein parametrization of the CKM matrix , and the Cabibbo structure of the quark mass ratios
$$\frac{m_u}{m_t}\sim \lambda ^8,\quad \frac{m_c}{m_t}\sim \lambda ^4;\qquad \frac{m_d}{m_b}\sim \lambda ^4,\quad \frac{m_s}{m_b}\sim \lambda ^2,$$
(19)
can be reproduced by a simple family-traceless charge assignment for the three quark families, namely
$$X_{𝐐,\overline{𝐮},\overline{𝐝}}=\mathcal{B}(2,-1,-1)+\eta _{𝐐,\overline{𝐮},\overline{𝐝}}(1,0,-1),$$
(20)
where $`\mathcal{B}`$ is baryon number, $`\eta _{\overline{𝐝}}=0`$, and $`\eta _𝐐=\eta _{\overline{𝐮}}=2`$. Two striking facts are evident:
* the charges of the down quarks, $`\overline{𝐝}`$, associated with the second and third families are the same,
* $`𝐐`$ and $`\overline{𝐮}`$ have the same value for $`\eta `$.
To relate these quark charge assignments to those of the leptons, we need to inject some more theoretical prejudices. Assume these family-traceless charges are gauged, and not anomalous. Then to cancel anomalies, the leptons must themselves have family charges.
Anomaly cancellation generically implies group structure. In $`SO(10)`$, baryon number generalizes to $`\mathcal{B}-\mathcal{L}`$, where $`\mathcal{L}`$ is total lepton number, and in $`SU(5)`$ the fermion assignment is $`\overline{\mathrm{𝟓}}=\overline{𝐝}+L`$, and $`\mathrm{𝟏𝟎}=𝐐+\overline{𝐮}+\overline{e}`$. Thus anomaly cancellation is easily achieved by assigning $`\eta =0`$ to the lepton doublet $`L_i`$, and $`\eta =2`$ to the electron singlet $`\overline{e}_i`$, and by generalizing baryon number to $`\mathcal{B}-\mathcal{L}`$, leading to the charges of the three chiral families
$$X=(\mathcal{B}-\mathcal{L})(2,-1,-1)+\eta (1,0,-1),$$
(21)
where now $`\eta _{\overline{𝐝}}=\eta _L=0`$, and $`\eta _𝐐=\eta _{\overline{𝐮}}=\eta _{\overline{e}}=2`$.
The charges of the lepton doublets are simply $`X_{L_i}=(-2,1,1)`$. We have just argued that these charges determine the Cabibbo structure of the MNS lepton mixing matrix to be
$$𝒰_{MNS}\sim \left(\begin{array}{ccc}1& \lambda ^3& \lambda ^3\\ \lambda ^3& 1& 1\\ \lambda ^3& 1& 1\end{array}\right),$$
(22)
implying no Cabibbo suppression in the mixing between $`\nu _\mu `$ and $`\nu _\tau `$. This is consistent with the SuperK discovery and with the small angle MSW solution to the solar neutrino deficit. Notice that these predictions are subtly different from those of grand unification, as they yield $`\nu _e\nu _\tau `$ mixing. The model also implies a much lighter electron neutrino, and Cabibbo-comparable masses for the muon and tau neutrinos.
On the other hand, the overall scale of the neutrino masses depends on the family trace of the family charge(s). Here we simply quote the results of our model. The masses of the right-handed neutrinos are found to be of the following orders of magnitude:
$$m_{\overline{N}_e}\sim M\lambda ^{13};\qquad m_{\overline{N}_\mu }\sim m_{\overline{N}_\tau }\sim M\lambda ^7,$$
(23)
where $`M`$ is the scale of the right-handed neutrino mass terms, assumed to be the cut-off. The seesaw mass matrix for the three light neutrinos comes out to be
$$m_0\left(\begin{array}{ccc}a\lambda ^6& b\lambda ^3& c\lambda ^3\\ b\lambda ^3& d& e\\ c\lambda ^3& e& f\end{array}\right),$$
(24)
where we have added for future reference the prefactors $`a,b,c,d,e,f`$, all of order one, and
$$m_0=\frac{v_u^2}{M\lambda ^3},$$
(25)
where $`v_u`$ is the $`vev`$ of the Higgs doublet. This matrix has one light eigenvalue
$$m_{\nu _e}\sim m_0\lambda ^6.$$
(26)
Without a detailed analysis of the prefactors, the masses of the other two neutrinos come out to be both of order $`m_0`$. The mass difference announced by SuperK cannot be reproduced without going beyond the model, by taking the prefactors into account. The two heavier mass eigenstates and their mixing angle are written in terms of
$$x=\frac{df-e^2}{(d+f)^2},\qquad y=\frac{d-f}{d+f},$$
(27)
as
$$\frac{m_{\nu _2}}{m_{\nu _3}}=\frac{1-\sqrt{1-4x}}{1+\sqrt{1-4x}},\qquad \mathrm{sin}^22\theta _{\mu \tau }=1-\frac{y^2}{1-4x}.$$
(28)
If $`4x\simeq 1`$, the two heaviest neutrinos are nearly degenerate. If $`4x\ll 1`$, a condition easy to achieve if $`d`$ and $`f`$ have the same sign, we can obtain an adequate split between the two mass eigenstates. For illustrative purposes, when $`0.03<x<0.15`$, we find
$$4.4\times 10^{-6}\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\nu _e\nu _\mu }^2\lesssim 10^{-5}\mathrm{eV}^2,$$
(29)
which yields the correct non-adiabatic MSW effect, and
$$5\times 10^{-4}\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\nu _\mu \nu _\tau }^2\lesssim 5\times 10^{-3}\mathrm{eV}^2,$$
(30)
for the atmospheric neutrino effect. These were calculated with a cut-off, $`10^{16}\mathrm{GeV}<M<4\times 10^{17}\mathrm{GeV}`$, and a mixing angle, $`0.9<\mathrm{sin}^22\theta _{\mu \tau }<1`$. This value of the cut-off is compatible not only with the data but also with the gauge coupling unification scale, a necessary condition for the consistency of our model, and more generally for the basic ideas of grand unification.
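The algebra of Eqs. (27)-(28) is easily checked against a direct diagonalization for sample prefactors (illustrative numbers, not a fit to the data):

```python
import numpy as np

d, e, f = 1.0, 0.8, 0.9                    # illustrative O(1) prefactors
x = (d * f - e ** 2) / (d + f) ** 2        # Eq. (27): x ~ 0.072, inside 0.03 < x < 0.15
y = (d - f) / (d + f)
ratio = (1 - np.sqrt(1 - 4 * x)) / (1 + np.sqrt(1 - 4 * x))   # m_nu2/m_nu3, Eq. (28)
s2 = 1 - y ** 2 / (1 - 4 * x)              # sin^2(2 theta_mu_tau) ~ 0.996

ev = np.linalg.eigvalsh([[d, e], [e, f]])  # direct diagonalization cross-check
print(x, ratio, ev[0] / ev[1], s2)         # ratio and ev[0]/ev[1] agree (~0.085)
```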
## 6 Outlook
Presently, neutrino physics is being driven by many experimental findings that challenge theoretical expectations. Although all can be explained in terms of neutrino oscillations, it is unlikely that all of them are correct in their conclusions: one must remember that evidence for neutrino oscillations has often been reported, only to be either withdrawn or contradicted by other experiments.
The reported anomalies associated with solar neutrinos, neutrinos produced in cosmic ray cascades , and also in low energy reactions , cannot all be correct without introducing a new type of neutrino which does not couple to the $`Z`$ boson, a sterile neutrino .
Small neutrino masses are naturally generated by the seesaw mechanism, which works because of the weak interactions of the neutrinos. A similar mass suppression for sterile neutrinos involves new hitherto unknown interactions, resulting in substantial additions to the standard model, for which there is no independent evidence. Also, the case for a heavier cosmological neutrino in helping structure formation may not be as pressing, in view of the measurements of a small cosmological constant.
Neutrino physics is extremely exciting as it provides the best opportunities for finding and understanding physics beyond the standard model.
## 7 Acknowledgments
I wish to thank Professors Froissart and Vignaud for their kind invitation to speak in such a unique setting. This research was supported in part by the department of energy under grant DE-FG02-97ER41029. |
## 1 Introduction
In a bottom-up scenario of structure formation massive galaxy clusters form from rare, extreme overdensities in the primordial density fluctuation field. At what redshift clusters of a given mass collapse and virialize depends sensitively on the chosen structure formation theory. The comoving number density of clusters as a function of redshift (and, ideally, also of mass) is thus an important statistic that is well suited to constrain cosmological and physical parameters of structure formation models. The tightest constraints can be obtained from observations of very distant, very massive clusters which are the rarest in all cosmological models.
Since clusters are bright X-ray sources, wide-angle X-ray surveys are an excellent way of compiling sizeable cluster samples out to cosmological redshifts ($`z\sim 1`$). Several such samples have been compiled (and/or published) in the past decade; an overview of the solid angles and flux limits of these surveys is presented in Figure 1. Two kinds of surveys can be distinguished: serendipitous cluster surveys (Bright SHARC, CfA 160 deg<sup>2</sup> survey, EMSS, RDCS, SHARC-S, WARPS) and contiguous area surveys (BCS, BCS-E, NEP, RASS-BS, REFLEX). The former surveys use data from pointed X-ray observations, whereas the latter are all based on the ROSAT All-Sky Survey (RASS). With the exception of the NEP survey, all contiguous cluster surveys cover close to, or more than, 10,000 square degrees but are limited to the X-ray brightest clusters. This fundamental difference in depth and sky coverage has important consequences. As shown in Fig. 1, the NEP survey as well as all serendipitous cluster surveys (with the possible exception of the EMSS) cover too small a solid angle to detect a significant number of X-ray luminous clusters. The RASS large-area surveys, on the other hand, are capable of finding these rarest systems, but are too shallow to detect them in large numbers at $`z>0.3`$.
We are thus in the unfortunate situation that the cosmologically most important systems, the massive, distant clusters, are poorly sampled by all existing X-ray cluster surveys. At low to moderate X-ray luminosities, where the number statistics have greatly improved, most of the above surveys find little, if any, evolution out to redshifts of $`z\sim 0.8`$. At higher luminosities, however, the EMSS and CfA cluster surveys find evidence of strong negative evolution already at $`z>0.3`$. It is worth bearing in mind though that the latter results are based on very small samples or, in fact, non-detections.
## 2 MACS: a new cluster survey
MACS (MAssive Cluster Survey) was designed to find the population of (possibly) strongly evolving clusters, i.e., the most X-ray luminous systems at $`z>0.3`$. By doing so, MACS will re-measure the rate of evolution and test the results obtained by the EMSS and CfA cluster surveys. Unless negative evolution is very rapid indeed, MACS will find a sizeable number of these systems and thus provide us with targets for in-depth studies of the physical mechanisms driving cluster evolution and structure formation.
As indicated in Fig. 1, MACS aims to achieve these goals by combining the largest solid angle of any RASS cluster survey with the lowest possible X-ray flux limit. Drawing from the list of 18,000 X-ray sources listed in the RASS Bright Source Catalog (BSC), MACS applies the following selection criteria: $`\left|b\right|\geq 20^{\circ }`$, $`-40^{\circ }\leq \delta \leq 80^{\circ }`$ (to ensure observability from Mauna Kea; the resulting solid angle is 22,735 deg<sup>2</sup>), an X-ray hardness ratio greater than a limiting ($`n_\mathrm{H}`$-dependent) value derived from the BCS sample, and $`f_{\mathrm{X},12}\geq 1`$, where $`f_{\mathrm{X},12}`$ is the detect cell flux in the 0.1–2.4 keV band in units of $`10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
Cluster candidates are tentatively identified from Digitized Sky Survey images, and are confirmed or discarded by R band imaging observations with the University of Hawai‘i’s 2.2m telescope. This process has, so far, resulted in the identification of more than 700 clusters of galaxies at all redshifts; Fig. 2 shows the redshift distribution of the 602 systems with spectroscopic redshifts. As a by-product, MACS has thus already delivered by far the largest X-ray selected cluster catalogue to emerge from the RASS to date. Our spectroscopic follow-up observations focus exclusively on the most distant of these clusters. Redshifts are estimated from the imaging data, and spectroscopic observations with the UH2.2m and Keck 10m telescopes are performed of all systems with $`z_{\mathrm{est}}>0.2`$.
A prediction for the size of the final MACS sample can be obtained from the MACS selection function and the local cluster X-ray luminosity function. In a no-evolution scenario we expect to find 57 clusters at $`z>0.3`$ and $`f_{\mathrm{X},12}\geq 2`$, and 151 at $`z>0.4`$ and $`f_{\mathrm{X},12}\geq 1`$. More than 20 MACS clusters are predicted to lie at $`z>0.6`$. This constitutes an improvement of a factor of about 50 over existing samples in the same redshift and luminosity range.
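Schematically, the no-evolution expectation is obtained by integrating the local X-ray luminosity function above the flux-dependent luminosity threshold over the surveyed volume; in the sketch below `phi`, `dv_dz` and `d_lum` are placeholder callables and K-corrections are ignored.

```python
import numpy as np
from scipy import integrate

def n_expected(flux_lim, z_lo, z_hi, omega_sr, phi, dv_dz, d_lum):
    """No-evolution cluster count: integrate the local X-ray luminosity
    function phi(L) [Mpc^-3 per unit L] above the luminosity threshold
    set by the flux limit, over the surveyed comoving volume."""
    def integrand(z):
        L_min = 4.0 * np.pi * d_lum(z) ** 2 * flux_lim   # erg/s threshold at z
        n_of_z, _ = integrate.quad(phi, L_min, 1e47)     # density above L_min
        return omega_sr * dv_dz(z) * n_of_z
    N, _ = integrate.quad(integrand, z_lo, z_hi)
    return N
```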
## 3 Status and first results
The MACS cluster sample currently comprises 41 systems with spectroscopic redshifts in the range $`0.3\leq z\leq 0.56`$ (Fig. 2). An additional 85 MACS sources with photometric redshifts $`z\geq 0.3`$ are scheduled for spectroscopic observation.
The completeness of this preliminary sample cannot be easily quantified. However, the bright part of the MACS/BSC source list ($`f_{\mathrm{X},12}\geq 2`$) has been mostly identified, and the completeness of the sample of (presently) 25 clusters found above this higher flux limit can be estimated. We thus use this subsample to attempt to constrain cluster evolution at the very highest X-ray luminosities ($`L_\mathrm{X}>1\times 10^{45}`$ erg s<sup>-1</sup>). Even if we knew nothing about the completeness of this subsample, the observed 25 clusters (where 57 are expected) already limit any negative evolution to about a factor of two. If the estimated incompleteness of this subsample is taken into account (R.A. range not yet fully covered, softer X-ray sources not yet screened, etc.), we expect to eventually find 43 of the predicted 57 clusters. When the statistical and systematic errors of measurement and prediction are taken into account, the difference of about 15 clusters is statistically not significant. Our tentative conclusion is that there is no significant evolution of the X-ray cluster luminosity function out to $`z\sim 0.4`$ at any luminosity.
## I Introduction
One of the most puzzling aspects of quantum mechanics is the quantum measurement problem which lies at the heart of all its interpretations. Without a measuring device that functions classically, there are no ‘events’ in quantum mechanics which postulates that the wave function contains complete information of the system concerned and evolves linearly and unitarily in accordance with the Schrödinger equation. The system cannot be said to ‘possess’ physical properties like position and momentum irrespective of the context in which such properties are measured. The language of quantum mechanics is not that of realism.
According to Bohr the classicality of a measuring device is fundamental and cannot be derived from quantum theory. In other words, the process of measurement cannot be analyzed within quantum theory itself. A similar conclusion also follows from von Neumann’s approach. In both these approaches the border line between what is to be regarded as quantum or classical is, however, arbitrary and mobile. This makes the theory intrinsically ill defined.
Some recent approaches have attempted to derive the classical world from a quantum substratum by regarding quantum systems as open. Their interaction with their ‘environment’ can be shown to lead to effective decoherence and the emergence of quasi-classical behaviour. However, the very concepts of a ‘system’ and its ‘environment’ already presuppose a clear cut division between them which, as we have remarked, is mobile and ambiguous in quantum mechanics. Moreover, the reduced density matrix of the ‘system’ evolves to a diagonal form only in the pointer basis and not in the other possible bases one could have chosen. This shows that this approach does not lead to a real solution of the measurement problem, as claimed by Zurek, though it is an important development that sheds new light on the emergence of quasi-classical behaviour from a quantum substratum.
The de Broglie-Bohm approach, on the other hand, does not accept the wave function description as complete. Completeness is achieved by introducing the position of the particle as an additional variable (the so-called ‘hidden variable’) with an ontological status. The wave function at a point is no longer just the probability amplitude that a particle will be found there if a measurement were to be made, but the probability amplitude that a particle is there even if no measurement is made. It is a realistic description, and measurements are reduced to ordinary interactions and lose their mystique. Also, the classical limit is much better defined in this approach through the ‘quantum potential’ than in the conventional approach. As a result, however, a new problem is unearthed, namely, it becomes quite clear that classical theory admits ensembles of a more general kind than can be reached from standard quantum ensembles. The two theories are really disparate while having a common domain of application.
Thus, although it is tacitly assumed by most physicists that classical physics is a limiting case of quantum theory, it is by no means so. Most physicists would, of course, scoff at the suggestion that the situation may really be the other way round, namely, that quantum mechanics is contained in a certain sense in classical theory. This seems impossible because quantum mechanics includes totally new elements like $`\mathrm{}`$ and the uncertainty relations and the host of new results that follow from them. Yet, a little reflection shows that if true classical behaviour of a system were really to result from a quantum substratum through some process analogous to ‘decoherence’, its quantum behaviour ought also to emerge on isolating it sufficiently well from its environment, i.e., by a process which is the ‘reverse of decoherence’. In practice, of course, it would be impossible to reverse decoherence once it occurs for a system. Nevertheless, it should still be possible to prepare a system sufficiently well isolated from its environment so that its quantum behaviour can be observed. If this were not possible, it would have been impossible ever to observe the quantum features of any system.
So, let us examine what the opposite point of view implies, namely that classical theory is more fundamental than quantum theory (in a sense to be defined more precisely). This would, in fact, be consistent with Bohr’s position that the classicality of measuring devices is fundamental (nonderivable), leading to his preferred solution to the quantum measurement problem. At the same time, the approach of de Broglie and Bohm coupled with the notion of decoherence as an environmental effect that can be switched on would fall into place, but the non-realist Copenhagen interpretation would have to be abandoned.
## II The Hamilton-Jacobi Theory
Our starting point is the non-relativistic Hamilton-Jacobi equation
$$\frac{\partial S_{cl}}{\partial t}+\frac{(\nabla S_{cl})^2}{2m}+V(x)=0$$
(1)
for the action $`S_{cl}`$ of a classical particle in an external potential $`V`$, together with the definition of the momentum
$$𝐩=m\frac{d𝐱}{dt}=\nabla S_{cl}$$
(2)
and the continuity equation
$$\frac{\partial \rho _{cl}(𝐱,t)}{\partial t}+\nabla \cdot \left(\rho _{cl}\frac{\nabla S_{cl}}{m}\right)=0$$
(3)
for the position distribution function $`\rho _{cl}(𝐱,t)`$ of the ensemble of trajectories generated by solutions of equation (1) with different initial conditions (position or momentum). Suppose we introduce a complex wave function
$$\psi _{cl}(𝐱,t)=R_{cl}(𝐱,t)\exp \left(\frac{i}{\hbar }S_{cl}\right)$$
(4)
into the formalism by means of the equation
$$\rho _{cl}(𝐱,t)=\psi _{cl}^{*}\psi _{cl}=R_{cl}^2.$$
(5)
What is the equation that this wave function must satisfy such that the fundamental equations (1) and (3) remain unmodified? The answer turns out to be the modified Schrödinger equation
$$i\hbar \frac{\partial \psi _{cl}}{\partial t}=\left(-\frac{\hbar ^2}{2m}\nabla ^2+V(x)\right)\psi _{cl}-Q_{cl}\psi _{cl}$$
(6)
where
$$Q_{cl}=-\frac{\hbar ^2}{2m}\frac{\nabla ^2R_{cl}}{R_{cl}}$$
(7)
Thus, a system can behave classically in spite of it having an associated wave function that satisfies this modified Schrödinger equation.
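To verify this, substitute (4) into (6) and separate real and imaginary parts (a short check of the claim above). The real part gives

$$\frac{\partial S_{cl}}{\partial t}+\frac{(\nabla S_{cl})^2}{2m}+V(x)-\frac{\hbar ^2}{2m}\frac{\nabla ^2R_{cl}}{R_{cl}}-Q_{cl}=0,$$

in which the last two terms cancel identically by (7), leaving precisely (1). The imaginary part gives

$$\frac{\partial R_{cl}^2}{\partial t}+\nabla \cdot \left(R_{cl}^2\frac{\nabla S_{cl}}{m}\right)=0,$$

which is the continuity equation (3) with $`\rho _{cl}=R_{cl}^2`$.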
Notice that the last term in this equation is nonlinear in $`|\psi _{cl}|`$, and is uniquely determined by the requirement that all quantum mechanical effects such as superposition, entanglement and nonlocality be eliminated. It is therefore to be sharply distinguished from certain other types of nonlinear terms that have been considered in constructing nonlinear versions of quantum mechanics. An unacceptable consequence of such nonlinear terms (which are, unlike $`Q_{cl}`$, bilinear in the wave function) is that superluminal signalling using quantum entanglement becomes possible in such theories. Since $`Q_{cl}`$ eliminates quantum superposition and entanglement, it cannot imply any such possibility. Usual action-at-a-distance is, of course, implicit in non-relativistic mechanics, and can be eliminated in a Lorentz invariant version of the theory, as we will see later.
Deterministic nonlinear terms with arbitrary parameters have also been introduced in the Schrödinger equation to bring about collapse of quantum correlations for isolated macroscopic systems. Such terms also imply superluminal signals via quantum entanglement. The term $`Q_{cl}`$ is different from such terms as well in that it has no arbitrary parameters in it and eliminates quantum correlations for all systems deterministically, irrespective of their size.
Most importantly, it is clear from the above analysis that none of the other types of nonlinearity can guarantee strictly classical behaviour described by equations (1) and (3).
Let us now consider the classical version of the density matrix which must be of the form
$`\rho _{cl}(x,x^{\prime },t)`$ $`=`$ $`R_{cl}(x,t)\exp \left({\displaystyle \frac{i}{\hbar }}S_{cl}(x,t)\right)R_{cl}(x^{\prime },t)\exp \left(-{\displaystyle \frac{i}{\hbar }}S_{cl}(x^{\prime },t)\right)`$ (8)
$`=`$ $`R^2(x,t)\,\delta ^3(x-x^{\prime })`$ (9)
in order to satisfy the Pauli master equation. The absence of off-diagonal terms is a consequence of the absence of quantum correlations between spatially separated points. This implies that the classical wave function can be written as
$$\psi _{cl}(x,t)=\frac{1}{\sqrt{\pi ^3}}\underset{\epsilon \to 0}{lim}\sqrt{\frac{\epsilon }{(x-x(t))^2+\epsilon ^2}}\exp \left(\frac{i}{\hbar }S_{cl}\right).$$
(10)
Such a function has only point support on the particle trajectory $`x=x(t)`$ determined by equation (2). It can also be written as a linear superposition of the delta function and its derivatives. All this ensures a classical phase space.
The wave function $`\psi _{cl}`$ is therefore entirely dispensable and “sterile” as long as we consider strictly classical systems. Conceptually, however, it acquires a special significance in considering the transition between quantum and classical mechanics, as we will see.
The wave function $`\psi `$ of a quantum mechanical system, on the other hand, must of course satisfy the Schrödinger equation
$$i\hbar \frac{\partial \psi }{\partial t}=-\frac{\hbar ^2}{2m}\nabla ^2\psi +V\psi .$$
(11)
Using a polar representation similar to (4) for $`\psi `$ in this equation and separating the real and imaginary parts, one can now derive the modified Hamilton-Jacobi equation
$$\frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m}+Q+V=0$$
(12)
for the phase $`S`$ of the wave function, where $`Q`$ is given by
$$Q=-\frac{\hbar ^2}{2m}\frac{\nabla ^2R}{R},$$
(13)
and the continuity equation
$$\frac{\partial \rho (𝐱,t)}{\partial t}+\nabla \cdot \left(\rho \frac{\nabla S}{m}\right)=0$$
(14)
These differential equations ((12) and (14)) now become coupled differential equations which determine $`S`$ and $`\rho =R^2`$. Note that the phase $`S`$ of a quantum mechanical system satisfies a modified Hamilton-Jacobi equation with an additional potential $`Q`$ called the “quantum potential”. Its properties are therefore different from those of the classical action $`S_{cl}`$ which satisfies equation (1). Applying the operator $`\nabla `$ on equation (12) and using the definition of the momentum (2), one obtains the equation of motion
$$\frac{d𝐩}{dt}=m\frac{d^2𝐱}{dt^2}=-\nabla (V+Q)$$
(15)
for the quantum particle. Integrating this equation or, equivalently, equation (2), one obtains the Bohmian trajectories $`x(t)`$ of the particle corresponding to different initial positions. The departure from the classical Newtonian equation due to the presence of the “quantum potential” $`Q`$ gives rise to all the quantum mechanical phenomena such as the existence of discrete stationary states, interference phenomena, nonlocality and so on. This agreement with quantum mechanics is achieved by requiring that the initial distribution $`P`$ of the particle is given by $`R^2(x(t),0)`$. The continuity equation (14) then guarantees that it will agree with $`R^2`$ at all future times. This guarantees that the averages of all dynamical variables of the particle taken over a Gibbs ensemble of its trajectories will always agree with the expectation values of the corresponding hermitian operators in standard quantum mechanics. This is essentially the de Broglie-Bohm quantum theory of motion. For further details about this theory and its relationship with standard quantum mechanics, the reader is referred to the comprehensive book by Holland and the one by Bohm and Hiley.
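As a concrete illustration (not taken from the original), the guidance equation (2) is easily integrated numerically. For a free Gaussian packet of initial width $`\sigma _0`$, the phase gradient gives the velocity field $`v(x,t)=xt/(\tau ^2+t^2)`$ with $`\tau =2m\sigma _0^2/\hbar `$, and the Bohmian trajectories are known analytically to be $`x(t)=x(0)\sqrt{1+(t/\tau )^2}`$. A minimal sketch in Python, assuming units $`\hbar =m=1`$:

```python
# Minimal sketch: Bohmian trajectories for a free 1-D Gaussian packet
# (hbar = m = 1; sigma0 is the initial packet width).
import numpy as np

sigma0 = 1.0
tau = 2.0 * sigma0**2            # spreading time scale, 2 m sigma0^2 / hbar

def v(x, t):
    """Velocity field dx/dt = (1/m) dS/dx for the free Gaussian packet."""
    return x * t / (tau**2 + t**2)

def trajectory(x0, t_max=10.0, n=2000):
    """Integrate dx/dt = v(x,t) with RK4 from the initial position x0."""
    dt = t_max / n
    x, t = x0, 0.0
    for _ in range(n):
        k1 = v(x, t)
        k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = v(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = v(x + dt * k3, t + dt)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return x

for x0 in (0.5, 1.0, 2.0):
    exact = x0 * np.sqrt(1.0 + (10.0 / tau)**2)  # analytic trajectory
    print(f"x0={x0}: numerical x(10)={trajectory(x0):.4f}, analytic {exact:.4f}")
```

The trajectories fan out with the spreading packet, a simple example of how the ensemble distribution $`R^2`$ is preserved by the guidance condition.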
Now, let us for the time being assume that quantum mechanics is the more fundamental theory from which classical mechanics follows in some limit. Consider a quantum mechanical system interacting with its environment. It evolves according to the Schrödinger equation
$$i\hbar \frac{\partial \psi }{\partial t}=\left(-\frac{\hbar ^2}{2m}\nabla ^2+V(x)+W\right)\psi $$
(16)
where $`W`$ is the potential due to the environment experienced by the system. For a complex enough environment such as a heat bath, the density matrix of the system in the position representation quickly evolves to a diagonal form. In a special model in which a particle interacts only with the thermal excitations of a scalar field in the high temperature limit, the density matrix evolves according to the master equation
$$\frac{d\rho }{dt}=-\gamma (x-x^{\prime })(\partial _x-\partial _{x^{\prime }})\rho -\frac{2m\gamma k_BT}{\hbar ^2}(x-x^{\prime })^2\rho $$
(17)
where $`\gamma `$ is the relaxation rate, $`k_B`$ is the Boltzmann constant and $`T`$ the temperature of the field. It follows from this equation that quantum coherence falls off at large separations as the square of $`\mathrm{\Delta }x=(x-x^{\prime })`$. The decoherence time scale is given by
$$\tau _D\simeq \tau _R\frac{\hbar ^2}{2mk_BT(\mathrm{\Delta }x)^2}=\gamma ^{-1}\left(\frac{\lambda _T}{\mathrm{\Delta }x}\right)^2$$
(18)
where $`\lambda _T=\hbar /\sqrt{2mk_BT}`$ is the thermal de Broglie wavelength and $`\tau _R=\gamma ^{-1}`$. For a macroscopic object of mass $`m=1`$ g at room temperature ($`T=300`$ K) and separation $`\mathrm{\Delta }x=1`$ cm, the ratio $`\tau _D/\tau _R=10^{-40}`$ ! Thus, even if the relaxation time was of the order of the age of the universe, $`\tau _R\sim 10^{17}`$ sec, quantum coherence would be destroyed in $`\tau _D\sim 10^{-23}`$ sec. For an electron, however, $`\tau _D`$ can be much more than $`\tau _R`$ on atomic and larger scales.
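These orders of magnitude are easily reproduced; the following quick numerical check (not part of the original argument) evaluates the ratio for the quoted values and returns $`\sim 10^{-41}`$, consistent at the order-of-magnitude level with the figure above:

```python
# Order-of-magnitude check of tau_D/tau_R for m = 1 g, T = 300 K, dx = 1 cm.
import math

hbar = 1.055e-34   # J s
kB   = 1.381e-23   # J/K
m    = 1e-3        # kg
T    = 300.0       # K
dx   = 1e-2        # m

lam_T = hbar / math.sqrt(2 * m * kB * T)   # thermal de Broglie wavelength [m]
ratio = (lam_T / dx)**2                    # tau_D / tau_R

print(f"lambda_T    = {lam_T:.2e} m")      # ~ 4e-23 m
print(f"tau_D/tau_R = {ratio:.1e}")        # ~ 1e-41, i.e. of order 1e-40
```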
However, the density matrix, though diagonal in the position representation, does not become diagonal in, for example, the momentum representation, showing that coherence has not really been destroyed. The FAPP diagonal density matrix does not therefore represent a proper mixture of mutually exclusive alternatives, the classical limit is not really achieved and the measurement problem remains.
This is not hard to understand once one realizes that a true classical system must be governed by a Schrödinger equation that is modified by the addition of a unique term that is nonlinear in $`|\psi |`$ (equation (6)), and that such a nonlinear term cannot arise from unitary Schrödinger evolution. On the contrary, it is not unnatural to expect a linear equation of the Schrödinger type to be the limiting case of a nonlinear equation like equation (6). It is therefore tempting to interpret the last term in equation (6) as an ‘effective’ potential that represents the coupling of the classical system to its environment. It is important to bear in mind that in such an interpretation, the potential $`Q_{cl}`$ must obviously be regarded as fundamentally given and not derivable from a quantum mechanical substratum, being uniquely and solely determined by the requirement of classicality, as shown above.
Let us now consider a quantum system which is inserted into a thermal bath at time $`t=0`$. If it is to evolve into a genuinely classical system after a sufficient lapse of time $`\mathrm{\Delta }t`$, its wave function $`\psi `$ must satisfy the equation of motion
$`i\hbar {\displaystyle \frac{\partial \psi }{\partial t}}=\left(-{\displaystyle \frac{\hbar ^2}{2m}}\nabla ^2+V(x)-\lambda (t)Q_{cl}\right)\psi `$ (19)
where $`\lambda (0)=0`$ in the purely quantum limit and $`\lambda (\mathrm{\Delta }t)=1`$ in the purely classical limit. (Here $`\mathrm{\Delta }t\gg \tau _D`$ where $`\tau _D`$ is typically given by $`\gamma ^{-1}(\lambda _T/\mathrm{\Delta }x)^2`$ (18).) Thus, for example, if $`\lambda (t)=1-\exp (-t/\tau _D)`$, a macroscopic system would very rapidly behave like a true classical system at sufficiently high temperatures, whereas a mesoscopic system would behave neither fully like a classical system nor fully like a quantum mechanical system at appropriate temperatures for a much longer time. What happens is that the reduced density operator of the system evolves according to the equation
$`\rho (x,x^{\prime },\mathrm{\Delta }t)`$ $`=`$ $`\exp \left(i{\displaystyle \int _0^{\mathrm{\Delta }t}}\lambda Q_{cl}\,dt/\hbar \right)\rho (x,x^{\prime },0)\exp \left(-i{\displaystyle \int _0^{\mathrm{\Delta }t}}\lambda Q_{cl}\,dt/\hbar \right)`$ (20)
$`=`$ $`R^2(x,\mathrm{\Delta }t)\,\delta ^3(x-x^{\prime })`$ (21)
over the time interval $`\mathrm{\Delta }t`$, during which the nonlinear interaction $`\lambda Q_{cl}`$ completely destroys all superpositions, so that at the end of this interval the system is fully classical and the equation for the density operator reduces to the Pauli master equation for a classical system.
A variety of functions $`\lambda (t)`$ would satisfy the requirements $`\lambda (0)=0`$ and $`\lambda (\mathrm{\Delta }t)=1`$. This is not surprising and is probably a reflection of the diverse ways in which different systems decohere in different environments.
It is clear that a system must be extremely well isolated ($`\lambda =0`$) for it to behave quantum mechanically. Such a system, however, would inherit only a de Broglie-Bohm ontological and causal interpretation, not an interpretation of the Copenhagen type. The practical difficulty is that once a quantum system and its environment get coupled, it becomes FAPP impossible to decouple them in finite time because of the extremely large number of degrees of freedom of the environment. However, we know from experience that it is possible to create quantum states in the laboratory that are very well isolated from their environment. Microscopic quantum systems are, of course, routinely created in the laboratory (such as single atoms, single electrons, single photons, etc.) and considerable effort is being made to create isolated macroscopic systems that would show quantum coherence, and there is already some evidence of the existence of mesoscopic ‘cat states’ which decohere when appropriate radiation is introduced into the cavity.
Equation (19) is a totally new equation that correctly bridges the gap between the quantum and the classical worlds. It should form a sound starting point for studying systems, parametrized by $`\lambda (t)`$, that lie anywhere in the continuous spectrum stretching between the quantum and classical limits.
Notice that if one defines the momentum by the relation $`\pi =\nabla S+\int \nabla Q\,dt`$, the equation of motion can be written in the classical form
$$\frac{d\pi }{dt}=-\nabla V.$$
(22)
This shows that it is $`\pi `$ which is conserved in the absence of any external potential and not the particle momentum $`p`$. This is obviously due to the existence of the quantum potential.
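Indeed (a one-line check, with the sign conventions used here), along a trajectory $`d\left(\int \nabla Q\,dt\right)/dt=\nabla Q`$, so that (15) gives

$$\frac{d\pi }{dt}=\frac{d𝐩}{dt}+\nabla Q=-\nabla (V+Q)+\nabla Q=-\nabla V.$$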
A look at the modified Hamilton-Jacobi equation (12) also shows that the quantity conserved by it is not the classical energy but this energy plus the quantum potential. Also notice that the equation of motion (15) implies that a quantum mechanical particle is not free even in the absence of an external potential. It is obvious therefore that the interaction of the corresponding classical system with its environment must serve to cancel this purely quantum force and restore the classical laws of motion. Once the form of the classical Hamilton-Jacobi equation is restored, conservation of energy is mathematically inevitable.
Notice that the additional interaction of a classical system with its environment in the form of the effective potential $`Q_{cl}`$ becomes manifest only when the Hamilton-Jacobi equation is recast in terms of the classical wave function (equations (6) and (7)). This is why the Hamilton-Jacobi equation can be written without ever knowing about this interaction. The wave function approach reveals what lies hidden and sterile in the traditional classical approach. This is a significant new insight offered by the wave function approach.
It is important to point out a fundamental difference between the two potentials $`V(x)`$ and $`Q`$ in (12). $`V(x)`$ is a given external potential whereas $`Q`$ is not so—it depends on the modulus of the wave function of the system, and is therefore nonlocal in character.
This leads to a fundamental difference of the approach advocated in this paper from the conventional de Broglie-Bohm theory in which quantum mechanics rather than classical mechanics is regarded as being more fundamental. In the de Broglie-Bohm theory the quantum potential must necessarily vanish in the classical limit, and the quantum system appears to behave classically. On the other hand, in the present approach there is no need for the quantum potential to vanish in the classical limit—only its effects must be completely cancelled by nonlinear environment-induced decoherence of a very special type. Furthermore, besides the wave function, de Broglie and Bohm must also introduce the position of the particle as an additional variable to complete the description of the system. If classical mechanics happens to be more fundamental than quantum mechanics, there is no need to do this as the position and trajectory are already present in the fundamental description. It is, in fact, the wave function that acquires a subsidiary role in this approach. It is interesting that some circumstantial evidence already seems to exist indicating that the position of a quantum system plays a more fundamental role than its wave function.
There is therefore a fairly strong case in favour of the possibility that quantum theory might be the limiting case of classical mechanics in which the interaction of the system with its environment (nonlinear in $`|\psi |`$) is completely switched off. It is difficult to see how such a situation can be accommodated within the standard Copenhagen philosophy. The wave function also acquires a new significance—it is sterile and dispensable in the classical limit but becomes potent and indispensable in the quantum limit.
## III The Klein-Gordon Equation
Let the Hamilton-Jacobi equation for free relativistic classical particles be
$$\frac{\partial S_{cl}}{\partial t}+\sqrt{(\partial _iS_{cl})^2c^2+m_0^2c^4}=0.$$
(23)
Then, using the relation $`p_\mu =\partial _\mu S_{cl}=m_0u_\mu `$ where $`u_\mu =dx_\mu /d\tau `$ with $`\tau =\gamma ^{-1}t`$, $`\gamma ^{-1}=\sqrt{1-v^2/c^2},v_i=dx_i/dt`$, the particle equation of motion is postulated to be
$$m_0\frac{du_\mu }{d\tau }=0=\frac{dp_\mu }{d\tau }.$$
(24)
It is quite easy to show that the classical equations (23) and (3) continue to hold if one describes the system in terms of a complex wave function $`\psi _{cl}=R_{cl}\exp (\frac{i}{\hbar }S_{cl})`$ that satisfies the modified Klein-Gordon equation
$$\left(\Box +\frac{m_0^2c^2}{\hbar ^2}-\frac{Q_{cl}}{\hbar ^2}\right)\psi _{cl}=0$$
(25)
with
$$Q_{cl}=\hbar ^2\frac{\Box R_{cl}}{R_{cl}}.$$
(26)
As in the non-relativistic case, $`Q_{cl}`$ may be interpreted as an effective potential in which the system finds itself when described in terms of the wave function $`\psi _{cl}`$. If this potential goes to zero in some limit, one obtains the free Klein-Gordon equation which is the quantum limit.
On the other hand, using $`\psi =R\exp (\frac{i}{\hbar }S)`$ in the Klein-Gordon equation and separating the real and imaginary parts, one obtains respectively the equation
$$\frac{1}{c^2}\left(\frac{\partial S}{\partial t}\right)^2-\left(\partial _iS\right)^2-m_0^2c^2-Q=0$$
(27)
which is equivalent to the modified Hamilton-Jacobi equation
$$\left(\frac{\partial S}{\partial t}\right)+\sqrt{\left(\partial _iS\right)^2c^2+m_0^2c^4+c^2Q}=0$$
(28)
and the continuity equation
$$\partial ^\mu (R^2\partial _\mu S)=0.$$
(29)
One can then identify the four-current as $`j_\mu =R^2\partial _\mu S`$ so that $`\rho =j_0=R^2E/c`$ which is not positive definite because $`E`$ can be either positive or negative, and therefore, as is well known, it is not possible to interpret it as a probability density.
Nevertheless, let us note in passing that, if use is made of the definition $`p_\mu =\partial _\mu S`$ of the particle four-momentum, (27) implies
$$p_\mu p^\mu =m_0^2c^2+Q$$
(30)
and $`p_\mu =M_0u_\mu `$ where $`M_0=m_0\sqrt{1+Q/m_0^2c^2}`$. Thus, the quantum potential $`Q`$ acts on the particles and contributes to their energy-momentum so that they are off their mass-shell. <sup>*</sup><sup>*</sup>*The author is grateful to E. C. G. Sudarshan for drawing his attention to this important point. Applying the operator $`\partial _\mu `$ on equation (27), we get the equation of motion
$$\frac{dp_\mu }{d\tau }=\frac{\partial _\mu Q}{2M_0}$$
(31)
which has the correct non-relativistic limit. The equation for the acceleration of the particle is therefore given by
$$\frac{du_\mu }{d\tau }=\frac{1}{2}(c^2g_{\mu \nu }-u_\mu u_\nu )\partial ^\nu \mathrm{log}\left(1+\frac{Q}{m_0^2c^2}\right).$$
(32)
If, on the other hand, one uses the modified Klein-Gordon equation (25) and the corresponding Hamilton-Jacobi equation (23), the particles are on their mass-shell and the free particle classical equation (24) is satisfied.
## IV Relativistic spin 1/2 particles
Let us now examine the Dirac equation for relativistic spin $`1/2`$ particles,
$$(i\hbar \gamma _\mu \partial ^\mu +m_0c)\psi =0.$$
(33)
Let us write the components of the wave function $`\psi `$ as $`\psi ^a=R\theta ^a\exp (\frac{i}{\hbar }S^a)`$, $`\theta ^a`$ being a spinor component. It is not straightforward here to separate the real and imaginary parts as in the previous cases. One must therefore follow a different method for relativistic fermions.
It is well known that every component $`\psi ^a`$ of the Dirac wave function satisfies the Klein-Gordon equation. It follows therefore, by putting $`\psi ^a=R\theta ^a\exp (iS^a/\hbar )`$, that $`S^a`$ must satisfy the modified Hamilton-Jacobi equation
$$\partial _\mu S^a\partial ^\mu S^a-m_0^2c^2-Q^a=0.$$
(34)
where $`Q^a=\hbar ^2\,\Box (R\theta ^a)/R\theta ^a`$. Summing over $`a`$, we get
$$\sum _a\partial _\mu S^a\partial ^\mu S^a-4m_0^2c^2-\sum _aQ^a=0.$$
(35)
Defining
$`\partial _\mu S\,\partial ^\mu S`$ $`=`$ $`{\displaystyle \frac{1}{4}}\sum _a\partial _\mu S^a\partial ^\mu S^a`$ (36)
$`Q`$ $`=`$ $`{\displaystyle \frac{1}{4}}\sum _aQ^a,`$ (37)
we have
$$\partial _\mu S\partial ^\mu S-m_0^2c^2-Q=0.$$
(38)
Then, defining the particle four-momentum by $`p_\mu =\partial _\mu S`$, one has $`p_\mu p^\mu =m_0^2c^2+Q`$. Therefore, one has the equation of motion
$$\frac{dp_\mu }{d\tau }=\frac{\partial _\mu Q}{2M_0}.$$
(39)
The Bohmian 3-velocity of these particles is defined by the relation
$$v_i=\gamma ^{-1}u_i=c\frac{u_i}{u_0}=c\frac{j_i}{j_0}=c\frac{\psi ^{\dagger }\alpha _i\psi }{\psi ^{\dagger }\psi }.$$
(40)
Then, it follows that
$$u_\mu =\gamma v_\mu =\gamma c\frac{j_\mu }{\rho }$$
(41)
where $`\rho =\psi ^{\dagger }\psi `$. This relation is satisfied because $`j_\mu j^\mu =\rho ^2/\gamma ^2`$ if (40) holds.
As we have seen, for a classical theory of spinless particles, the correct equation for the associated wave function is the modified Klein-Gordon equation (25). Let the corresponding modified wave equation for classical spin $`1/2`$ particles be of the form
$$\left(i\hbar \gamma _\mu D^\mu +m_0c\right)\psi _{cl}=0$$
(42)
where $`D^\mu =\partial ^\mu +(i/\hbar )Q^\mu `$. Then we have
$$(D_\mu D^\mu +\frac{m_0^2c^2}{\hbar ^2})\psi _{cl}^a=0.$$
(43)
Writing $`\psi _{cl}^a=R_{cl}\theta ^a\exp (\frac{i}{\hbar }S_{cl}^a)`$, one obtains
$$\partial _\mu S_{cl}^a\partial ^\mu S_{cl}^a-m_0^2c^2-Q_{cl}^a+Q_\mu Q^\mu -2Q_\mu \partial ^\mu S_{cl}^a=0$$
(44)
where
$$Q_{cl}^a=\frac{\hbar ^2\,\Box (R_{cl}\theta ^a)}{R_{cl}\theta ^a}.$$
(45)
Define a diagonal matrix $`B_\mu ^{ab}\equiv \partial _\mu S_{cl}^a\delta ^{ab}`$ such that
$$\frac{1}{2}\mathrm{Tr}B_\mu =\frac{1}{2}\sum _a\partial _\mu S_{cl}^a\equiv \partial _\mu S_{cl}.$$
(46)
Then
$`\partial _\mu S_{cl}\partial ^\mu S_{cl}`$ $`=`$ $`{\displaystyle \frac{1}{4}}\mathrm{Tr}B_\mu \mathrm{Tr}B^\mu ={\displaystyle \frac{1}{4}}\mathrm{Tr}(B_\mu B^\mu )`$ (47)
$`=`$ $`{\displaystyle \frac{1}{4}}\sum _a\partial _\mu S_{cl}^a\partial ^\mu S_{cl}^a.`$ (48)
Therefore, taking equation (44) and summing over $`a`$, we have
$$\partial _\mu S_{cl}\partial ^\mu S_{cl}-m_0^2c^2-Q_{cl}+Q_\mu Q^\mu -Q_\mu \partial ^\mu S_{cl}=0$$
(49)
where
$$Q_{cl}=\frac{1}{4}\sum _aQ_{cl}^a.$$
(50)
In order that the classical free particle equation is satisfied, the effects of the quantum potential must be cancelled by this additional interaction, and one must have
$$Q_\mu (Q^\mu -\partial ^\mu S_{cl})=Q_{cl}.$$
(51)
A solution is given by
$`p_\mu `$ $`=`$ $`\partial _\mu S_{cl}=m_0u_\mu ,`$ (52)
$`Q_\mu `$ $`=`$ $`\alpha m_0u_\mu `$ (53)
with
$$\alpha =\frac{1}{2}\pm \frac{1}{2}\sqrt{1+4Q_{cl}/m_0^2c^2}.$$
(54)
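That (54) solves (51) is immediate (a quick check): inserting (52) and (53) into (51) and using $`u_\mu u^\mu =c^2`$ gives

$$Q_\mu (Q^\mu -\partial ^\mu S_{cl})=\alpha (\alpha -1)m_0^2c^2=Q_{cl},$$

i.e., the quadratic $`\alpha ^2-\alpha -Q_{cl}/m_0^2c^2=0`$, whose two roots are (54).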
## V Relativistic spin 0 and spin 1 particles
It has been shown that a consistent relativistic quantum mechanics of spin 0 and spin 1 bosons can be developed using the Kemmer equation
$$(i\hbar \beta _\mu \partial ^\mu +m_0c)\psi =0$$
(55)
where the matrices $`\beta `$ satisfy the algebra
$$\beta _\mu \beta _\nu \beta _\lambda +\beta _\lambda \beta _\nu \beta _\mu =\beta _\mu g_{\nu \lambda }+\beta _\lambda g_{\nu \mu }.$$
(56)
The $`5\times 5`$ dimensional representation of these matrices describes spin 0 bosons and the $`10\times 10`$ dimensional representation describes spin 1 bosons. Multiplying (55) by $`\beta _0`$, one obtains the Schrödinger form of the equation
$$i\hbar \frac{\partial \psi }{\partial t}=[i\hbar c\tilde{\beta }_i\partial _i-m_0c^2\beta _0]\psi $$
(57)
where $`\tilde{\beta }_i\equiv \beta _0\beta _i-\beta _i\beta _0`$. Multiplying (55) by $`1-\beta _0^2`$, one obtains the first class constraint
$$i\hbar \beta _i\beta _0^2\partial _i\psi =m_0c(1-\beta _0^2)\psi .$$
(58)
The reader is referred to Ref. for further discussions regarding the significance of this constraint.
If one multiplies equation (57) by $`\psi ^{\dagger }`$ from the left, its hermitian conjugate by $`\psi `$ from the right and adds the resultant equations, one obtains the continuity equation
$$\frac{\partial (\psi ^{\dagger }\psi )}{\partial t}+\partial _i(\psi ^{\dagger }\tilde{\beta }_i\psi )=0.$$
(59)
This can be written in the form
$$\partial ^\mu \mathrm{\Theta }_{\mu 0}=0$$
(60)
where $`\mathrm{\Theta }_{\mu \nu }`$ is the symmetric energy-momentum tensor with $`\mathrm{\Theta }_{00}=m_0c^2\psi ^{\dagger }\psi >0`$. Thus, one can define a wavefunction $`\varphi =\sqrt{m_0c^2/E}\psi `$ (with $`E=\int \mathrm{\Theta }_{00}\,dV`$) such that $`\varphi ^{\dagger }\varphi `$ is non-negative and normalized and can be interpreted as a probability density. The conserved probability current density is $`s_\mu =\mathrm{\Theta }_{\mu 0}/E=(\varphi ^{\dagger }\varphi ,\varphi ^{\dagger }\tilde{\beta }_i\varphi )`$.
Notice that according to the equation of motion (57), the velocity operator for massive bosons is $`c\stackrel{~}{\beta }_i`$, so that the Bohmian 3-velocity can be defined by
$$v_i=\gamma ^{-1}u_i=c\frac{u_i}{u_0}=c\frac{s_i}{s_0}=c\frac{\psi ^{\dagger }\tilde{\beta }_i\psi }{\psi ^{\dagger }\psi }.$$
(61)
Exactly the same procedure can be followed for massive bosons as for massive fermions to determine the quantum potential and the Bohmian trajectories, except that the sum over $`a`$ has to be carried out only over the independent degrees of freedom (six for $`\psi `$ and six for $`\overline{\psi }`$ for spin-1 bosons). The constraint (58) implies the four conditions $`\vec{A}=\vec{\nabla }\times \vec{B}`$ and $`\vec{\nabla }\cdot \vec{E}=0`$.
The theory of massless spin 0 and spin 1 bosons cannot be obtained simply by taking the limit $`m_0`$ going to zero. One has to start with the equation
$$i\hbar \beta _\mu \partial ^\mu \psi +m_0c\mathrm{\Gamma }\psi =0$$
(62)
where $`\mathrm{\Gamma }`$ is a matrix that satisfies the following conditions:
$`\mathrm{\Gamma }^2`$ $`=`$ $`\mathrm{\Gamma }`$ (63)
$`\mathrm{\Gamma }\beta _\mu +\beta _\mu \mathrm{\Gamma }`$ $`=`$ $`\beta _\mu .`$ (64)
Multiplying (62) from the left by $`1-\mathrm{\Gamma }`$, one obtains
$$\beta _\mu \partial ^\mu (\mathrm{\Gamma }\psi )=0.$$
(65)
Multiplying (62) from the left by $`\partial _\lambda \beta ^\lambda \beta ^\nu `$, one also obtains
$$\partial ^\lambda \beta _\lambda \beta _\nu (\mathrm{\Gamma }\psi )=\partial _\nu (\mathrm{\Gamma }\psi ).$$
(66)
It follows from (65) and (66) that
$$\Box (\mathrm{\Gamma }\psi )=0$$
(67)
which shows that $`\mathrm{\Gamma }\psi `$ describes massless bosons. The Schrödinger form of the equation
$$i\hbar \frac{\partial (\mathrm{\Gamma }\psi )}{\partial t}=i\hbar c\tilde{\beta }_i\partial _i(\mathrm{\Gamma }\psi )$$
(68)
and the associated first class constraint
$$i\hbar \beta _i\beta _0^2\partial _i\psi +m_0c(1-\beta _0^2)\mathrm{\Gamma }\psi =0$$
(69)
follow by multiplying (62) by $`\beta _0`$ and $`1-\beta _0^2`$ respectively. The rest of the arguments are analogous to the massive case. For example, the Bohmian 3-velocity $`v_i`$ for massless bosons can be defined by equation (61).
Neutral massless spin-1 bosons have a special significance in physics. Their wavefunction is real, and so their charge current $`j_\mu =\varphi ^T\beta _\mu \varphi `$ vanishes. However, their probability current density $`s_\mu `$ does not vanish. Furthermore, $`s_i`$ turns out to be proportional to the Poynting vector, as it should.
Modifications to these equations can be introduced as in the massive case to obtain a classical theory of massless bosons.
## VI The Gravitational Field
Exactly the same procedure can also be applied to the gravitational field described by Einstein’s equations
$$R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R=0$$
(70)
for the vacuum, where $`R_{\mu \nu }`$ is the Ricci tensor and $`R`$ the curvature scalar. In this section, following , we will use the signature $`-+++`$ and the absolute system of units $`\hbar =c=16\pi G=1`$. The decomposition of the metric is given by
$`ds^2`$ $`=`$ $`g_{\mu \nu }dx^\mu dx^\nu `$ (71)
$`=`$ $`(N_iN^i-N^2)dt^2+2N_idx^idt+g_{ij}dx^idx^j`$ (72)
with $`g_{ij}(𝐱)`$, the 3-metric of a 3-surface embedded in space-time, evolving dynamically in superspace, the space of all 3-geometries.
By quantizing the Hamiltonian constraint, one obtains in the standard fashion the Wheeler-DeWitt equation
$$\left[G_{ijkl}\frac{\delta ^2}{\delta g_{ij}\delta g_{kl}}+\sqrt{g}\,^3R\right]\mathrm{\Psi }=0$$
(73)
where $`g=`$ det $`g_{ij}`$, $`{}_{}{}^{3}R`$ is the intrinsic curvature, $`G_{ijkl}`$ is the supermetric, and $`\mathrm{\Psi }[g_{ij}(x)]`$ is a wave functional in superspace. Substituting $`\mathrm{\Psi }=Aexp(iS)`$, one obtains as usual a conservation law
$$G_{ijkl}\frac{\delta }{\delta g_{ij}}\left(A^2\frac{\delta S}{\delta g_{kl}}\right)=0$$
(74)
and a modified Einstein-Hamilton-Jacobi equation
$$G_{ijkl}\frac{\delta S}{\delta g_{ij}}\frac{\delta S}{\delta g_{kl}}-\sqrt{g}\,^3R+Q=0$$
(75)
where
$$Q=-A^{-1}G_{ijkl}\,\delta ^2A/\delta g_{ij}\delta g_{kl}$$
(76)
is the quantum potential. It is invariant under 3-space diffeomorphisms. The causal interpretation of this field theory (as distinct from particle mechanics considered earlier) assumes that the universe whose quantum state is governed by equation (73) has a definite 3-geometry at each instant, described by the 3-metric $`g_{ij}(𝐱,t)`$ which evolves according to the classical Hamilton-Jacobi equation
$$\frac{\partial g_{ij}(𝐱,t)}{\partial t}=\nabla _iN_j+\nabla _jN_i+2NG_{ijkl}\frac{\delta S}{\delta g_{kl}}\Big |_{g_{ij}(x)=g_{ij}(𝐱,t)}$$
(77)
but with the action $`S`$ as a phase of the quantum wave functional. This equation can be solved if the initial data $`g_{ij}(𝐱,0)`$ are specified. The metric in this field theory clearly corresponds to the position in particle mechanics, equation (77) being its guidance condition.
It is now clear that one can modify the Wheeler-DeWitt equation (73) to the form
$$\left[G_{ijkl}\frac{\delta ^2}{\delta g_{ij}\delta g_{kl}}+\sqrt{g}\,^3R-Q_{cl}\right]\mathrm{\Psi }_{cl}=0$$
(78)
where $`Q_{cl}`$ is defined by an expression analogous to (76) with $`A`$ and $`S`$ replaced by the classical variables $`A_{cl}`$ and $`S_{cl}`$. This leads to the classical Einstein-Hamilton-Jacobi equation
$$G_{ijkl}\frac{\delta S_{cl}}{\delta g_{ij}}\frac{\delta S_{cl}}{\delta g_{kl}}-\sqrt{g}\,^3R=0.$$
(79)
The term $`Q_{cl}`$ can then be interpreted, as before, as a potential arising due to the coupling of gravitation with other forms of energy. If this coupling could be switched off, quantum gravity effects would become important. The question arises as to whether this can at all be done for gravitation.
## VII Concluding Remarks
It is usually assumed that a classical system is in some sense a limiting case of a more fundamental quantum substratum, but no general demonstration for ensembles of systems has yet been given. That a quantum system may, on the other hand, be a part of a classical system in which its typical quantum features lie dormant is, however, clear from the above discussions. The part therefore naturally shares the ontology of the total classical system, and the measurement problem does not even arise. The nonlocal quantum potential that is responsible for self-organization and the creation of varied stable and metastable quantum structures becomes active only when the coupling of the part to the whole is switched off. This is a clearly defined physical process that links the classical and quantum domains.
According to this view, therefore, every quantum system is a closed system and every classical system is an open system. The first Newtonian law of motion therefore acquires a new interpretation—the law of inertia holds for a system not when it is isolated from everything else but when it interacts with its environment to an extent that all its quantum aspects are quenched. Various attempts to show that the classical limit of quantum systems is obtained in certain limits, like large quantum numbers and/or large numbers of constituents, have so far failed. The reason is clear—a linear equation like the Schrödinger equation can never describe a classical system which is described by a modified Schrödinger equation with a nonlinear term. This nonlinear term must be generated through some mechanism like the coupling of the system to its environment. There are, of course, other purely formal limits too (like $`\hbar `$ going to zero, for example) in which a closed quantum system reduces to a classical system, as widely discussed in the literature.
It is clear from the usual ‘decoherence’ approach that the interaction of a quantum system with its environment in the form of some kind of heat bath is necessary to obtain a quasi-classical limit of quantum mechanics. This is usually considered to be a major advance in recent years. Such decoherence effects have already been measured in cavity QED experiments. Decoherence effects are very important to take into account in other critical experiments too, like the use of SQUIDs to demonstrate the existence of Schrödinger cat states. The failure to observe cat states so far in such experiments shows how real these effects are and how difficult it is to eliminate them even for mesoscopic systems. I have taken these advances in our knowledge seriously in a phenomenological sense and tried to incorporate them into a conceptually consistent scheme.
The usual decoherence approach however suffers from the following difficulty: it neither solves the measurement problem nor leads to a truly classical phase space. The two problems seem to be intimately related. The density matrix becomes diagonal only in the coordinate representation. In other words, it does not represent a proper ‘statistical mixture’. The use of the linear Schrödinger equation then automatically implies that the momentum space representation is necessarily non-diagonal. This does not happen in the approach advocated in this paper because of equation (19) which guarantees the emergence of classical phase space and a proper ‘statistical mixture’. A clear empirical difference must therefore exist between the predictions of the usual decoherence approach and the approach advocated in this paper in the classical limit. It should be possible to test this by suitable experiments which are under consideration. The proposed conceptual framework is therefore falsifiable.
## VIII Acknowledgements
The basic idea that quantum mechanics may be contained within classical mechanics occurred during a discussion with C. S. Unnikrishnan in the context of an experiment carried out by R. K. Varma and his colleagues and Varma’s theoretical explanation of it in terms of the Liouville equation written as a series of Schrödinger-like equations. Unnikrishnan should not, however, be held responsible for any defects of my particular formulation of the idea.
The first version of the non-relativistic part of this paper was presented at the “Werner Heisenberg Colloquium: The Quest for Unity: Perspectives in Physics and Philosophy” organized by the Max Muller Bhavan, New Delhi, the National Institute of Science, Technology and Development Studies, CSIR, Government of India and the Indian Institute of Advanced Study, Shimla from 4th to 7th August, 1997 at Shimla.
The author is grateful to the Department of Science and Technology, Government of India, for a research grant that enabled this work to be completed. |
# The Pulse Scale Conjecture and the Case of BATSE Trigger 2193
## 1 Introduction
The physical mechanisms that create gamma ray bursts (GRBs) and their constituent pulses remain unknown even 30 years after their discovery. Furthermore, GRBs may make powerful tools since they are uniquely visible out to the early universe, particularly if it is possible to accurately estimate their distances. Recent redshift determinations of the optical counterparts to GRBs have placed them at cosmological distances (Djorgovski et al., 1997; Metzger et al., 1997), but remain scarce, currently numbering of order ten. Norris, Marani, & Bonnell (2000) have suggested that GRB pulses might calibrate GRB intrinsic luminosity, allowing perhaps a few hundred of the brightest GRBs currently detected by the Burst and Transient Source Experiment (BATSE) on board the Compton Gamma-Ray Observatory (CGRO) to act as standard candles. Specifically, Norris, Marani, & Bonnell (2000) have proposed that the time-lag between BATSE DISCSC energy channels 3 and 1, as measured by a cross-correlation function, might calibrate this standard candle. More recently, Fenimore (2000) has proposed that general variability across the whole GRB may also act to calibrate these explosions as standard candles. The data behind these claims remain sparse, however, and the physical mechanisms proposed to explain them remain unproven.
Desai (1981) originally noted that GRBs have characteristic “separated times” in events, an allusion to structures that are today known as pulses. Norris et al. (1996) recently fit and analyzed the pulses in 41 BATSE GRBs with a minimal functional form. Katz (1994) has suggested that the shape of a GRB pulse originates from time delays inherent in the geometry of its spherically expanding emission front. Liang et al. (1997) have provided arguments that saturated Compton up-scattering of softer photons may be the dominant physical mechanism that creates the shape of GRB pulses. Sumner & Fenimore (1998); Ramirez-Ruiz & Fenimore (1999); Fenimore (2000) claim that the rise time of GRB Fast Rise Exponential Decay (FRED)-like structures is related to the sound speed of the pulse medium, while the decay time is related to a time-delay inherent in the geometry of an expanding, spherical GRB wave front undergoing rapid synchrotron cooling.
How the spectrum of GRBs changes with time also has a long history (Wheaton, 1973; Vedrenne, 1981; Norris, 1983; Cline, 1984; Norris et al., 1986). Recent notable work includes Band (1997), who used auto- and cross-correlation statistics to track hardness variability in 209 bright BATSE bursts, and Crider et al. (1999), who tracked the peak of photon frequency times flux of 41 pulses in 26 bursts, finding that this quantity decays linearly with pulse energy fluence.
In this paper a bright GRB dominated by a single smooth pulse is analyzed in an effort to better understand GRB pulses in general, and more specifically to help calibrate energy dependent pulse attributes that are being used as standard candles for GRBs. These attributes may also help distinguish or eliminate some physical processes proposed to create GRB pulses. In §2 data from BATSE trigger 2193 will be introduced and discussed. In §3 the Pulse Start Conjecture will be proposed. In §4, the Pulse Scale Conjecture will be introduced, discussed, and tested on data from 2193. In §5 the next four most fluent GRBs will also be used to test the Pulse Start and Pulse Scale Conjectures. In §6, the paper will conclude with estimates of the theoretical implications following from the potential truths of these conjectures.
## 2 A Case Study: GRB 930214c – BATSE Trigger 2193
This project started with the search for a single, isolated, fluent, long GRB pulse to study in detail. The single pulse that appears to compose GRB 930214c (BATSE trigger 2193) was chosen because of its isolated nature, high fluence, and long duration. A list of GRBs with durations greater than two seconds that are dominated by a single pulse has been published by Norris, Bonnell, & Watanabe (1999) as Table 1. BATSE trigger 2193 has the highest fluence on this table, as computed by multiplying the Full Width Half Maximum of the duration (column 4) by peak counts $`A_{max}`$ (column 6). The isolated nature of the main pulse in 2193 minimizes confusion by simultaneously arriving pulses. The relatively high fluence of this pulse makes for relatively good statistics. The relatively long duration of this pulse makes the time lag between BATSE energy channels proportionally longer. The combined fluence and duration of GRB 930214c allow BATSE CONTINUOUS (CONT) data to be useful, as interesting time and energy scales are resolved. CONT data is divided into 16 energy channels (as opposed to only 4 energy channels for the more frequently analyzed DISCLA and DISCSC data types) but only time-resolved into 2.048-second-long time bins. Important data from before BATSE triggered is also available with the CONT data type. Additionally, CONT data is available in background subtracted form from the Compton Gamma Ray Observatory Science Support Center web pages. Specifically, background subtracted CONT data for 2193 created by G. Marani is available from http://cossc.gsfc.nasa.gov/cossc/batse/.
Light curves for BATSE trigger 2193 are shown for the four main broad-band energy channels of PREB, DISCSC, and DISCLA data in Figure 1, and for all 16-energy channels of CONT in Figure 2. The four energy channels of DISCSC data, as displayed in Figure 1, are, approximately: channel 1: 25-60 KeV, channel 2: 60-110 KeV, channel 3: 110-325 KeV, and channel 4: 325 KeV - 1 MeV. These data have relatively course energy resolution but with 64-ms time bins, relatively high time resolution. The sixteen CONT energy channels in Figure 2 are: channel 1: 13-25 KeV, channel 2: 25-33 KeV, channel 3: 33-41 KeV, channel 4: 41-54 KeV, channel 5: 54-72 KeV, channel 6: 72-96 KeV, channel 7: 96-122 KeV, channel 8: 122-162 KeV, channel 9: 162-230 KeV, channel 10: 230-315 KeV, channel 11: 315-424 KeV, channel 12: 424-589 KeV, channel 13: 589-745 KeV, channel 14: 745-1107 KeV, channel 15: 1107-1843 KeV, and channel 16: 1843-100,000 KeV. These energy boundaries are only approximate, as each of BATSE’s eight detectors has slightly different energy boundaries.
Inspection of Figures 1 and 2 shows that the counts are highest in the middle range energies detectable by BATSE. The highest fluence is in CONT energy discriminator channel 9. It is clear from both plots that this pulse has the classic Fast Rise Exponential Decay (FRED) shape that is common to many pulses in GRBs.
## 3 The Pulse Start Conjecture
Inspection of Figure 2 shows that the pulse begins at approximately the same time in all 16 CONT energy channels, with an approximate error of one 2.048-second time bin. The simultaneous start time is in marked contrast to the time of peak flux in each energy channel, which clearly changes as a function of energy. It is here hypothesized that this behavior is common to all GRB pulses, and it will be referred to here as the “Pulse Start Conjecture.”
The Pulse Start Conjecture is something that may have been obvious to some previously but not discussed explicitly. Its truth, however, may be important to accurate pulse modeling paradigms. The Pulse Start Conjecture is also important to proper application of the following Pulse Scale Conjecture, since it identifies the zero point that is held fixed when scaling the time axis. Note that this start time is generally not the same time as the BATSE trigger time, which is an instrument dependent phenomenon. In general, if a pulse triggers BATSE, its start time will usually occur before the BATSE trigger. If the pulse did not trigger BATSE, its start time will usually occur after the BATSE trigger.
## 4 The Pulse Scale Conjecture
It appears from an informal inspection of Figures 1 and 2 that pulse shapes in each energy channel are similar. In this section it is tested whether the shape in each discriminator energy channel is actually statistically similar to the shape in all other channels, given single scale factors in time and amplitude. In other words, can any light-curve strip in Figure 2 at any energy be stretched along both the x-axis (time) and the y-axis (amplitude) to precisely overlay any other light-curve strip at any other energy? When stretching in the time direction, the zero point (which is fixed) is the energy-independent start time of the pulse. When stretching in the counts direction (amplitude), the zero point is the background level of each burst.
To test the Pulse Scale Conjecture, the light curves of CONT energy channels 4 through 14 of GRB 930214c were stretched to their best-fit match to CONT energy channel 9, the channel with the highest peak count rate. Twenty-six 2.048-sec CONT time bins were isolated for each CONT energy channel, noting the fit background counts and GRB counts for each time bin. The time series was constructed so that the first bin was the start of each pulse, regardless of energy, in accord with the Pulse Start Conjecture. This zero point was held fixed during all computational time-contractions. Channel 9 was isolated, and compared to other candidate energy channels. A candidate temporal scale factor was iterated between channel 9 and this energy channel. The time bins available for statistical comparison were identified. Lower energy channels were then artificially time-contracted and re-binned to the candidate scale of channel 9, and normalized by the available fluence. A $`\chi ^2`$ statistic per degree of freedom was then computed and recorded. All energy channels were iterated, as were 100 candidate scale factors. Higher energy channels were compared similarly with the exception that channel 9 was artificially time-contracted to the higher-energy scale.
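Schematically, the procedure just described can be summarized in code (an illustrative sketch only, not the analysis software actually used; the linear interpolation and the fluence normalization shown here are assumptions):

```python
# Sketch of the stretch-and-compare test between two energy channels.
# c_ref, c_test: background-subtracted counts per 2.048-s bin, with bin 0
# at the (energy-independent) pulse start time; v_ref, v_test: variances.
import numpy as np

def chi2_per_dof(c_ref, v_ref, c_test, v_test, s, dt=2.048):
    """Chi^2 per degree of freedom between a reference channel and a test
    channel whose time axis is contracted by the candidate scale factor s
    about the common pulse start time at t = 0."""
    t_ref = np.arange(len(c_ref)) * dt
    t_test = np.arange(len(c_test)) * dt
    # Contract the test light curve in time (dividing its bin times by s
    # compresses it when s > 1) and rebin onto the reference grid.
    c_rs = np.interp(t_ref, t_test / s, c_test)
    v_rs = np.interp(t_ref, t_test / s, v_test)
    # Best-fit amplitude scale factor: normalize to equal fluence.
    a = c_ref.sum() / c_rs.sum()
    chi2 = np.sum((c_ref - a * c_rs) ** 2 / (v_ref + a**2 * v_rs))
    return chi2 / (len(c_ref) - 2)  # two fitted parameters: s and a

# e.g., scan 100 candidate temporal scale factors against channel 9:
# best_s = min(np.linspace(0.3, 3.0, 100),
#              key=lambda s: chi2_per_dof(c9, v9, cE, vE, s))
```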
The results are shown in Figure 3. Data points from each energy channel are plotted with the channel number, while a solid line depicts channel 9. When the best-fit dilation factors in time and amplitude are applied, the pulse shapes match that of channel 9 with the following $`\chi ^2`$ per degree of freedom: (0.82, 2.86, 8.60, 1.80, 1.26, 1.71, 0.73, 1.02, 1, 1.08, 2.43, 0.85, 1.63, 1.17, 4.46, N/A). The last energy channel, 16, had so few counts that accurate results could not be obtained. Channels 2, 3, and 15 also had relatively few counts (as visible in Figure 2), which could create an underestimated variance which in turn could create an overestimated $`\chi ^2`$ when compared to channel 9. Most of the energy channels, however, give a relatively good fit, indicating that the pulse shape in most energy channels really is statistically similar to the pulse shape in channel 9. Most energy channels therefore appear to adhere to the Pulse Scale Conjecture.
Figure 4 shows a plot of the scale factor in time needed to best bring a pulse in a given energy channel into alignment with discriminator channel 9, plotted as a function of the geometric mean energy of the energy channel. The error bars in the y-direction indicate the range of scale factors below a $`\chi ^2`$ per degree of freedom of 2.5. The error bars in the x-direction denote the CONT energy channel boundaries.
Inspection of Figure 4 yields several interesting features. First, in general, the higher energy channels appear time-contracted relative to lower energy channels. That pulses have shorter durations at higher energies has been noted previously (e.g. Norris et al. (1986)). Such scaling behavior possibly indicates a relativistic origin for the relative apparent time contraction of the higher energy channels.
Next, inspection of the mid-energy channels in Figure 4 reveals a nearly linear relation between the log of the scale factor in time and the log of the mean energy of counts. This power law dependence appears to hold from at least channel 7 through channel 14, from about 100 KeV to about 1000 KeV. This unexpected relationship may point to a relatively simple progenitor relation between photon energy and relativistic time dilation factor.
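Taking this log-log linearity at face value, the temporal scale factor relative to channel 9 may be parameterized as a power law (a parameterization suggested by Figure 4; the index $`\delta `$ is left unspecified here rather than fitted):

$$s(E)\simeq \left(\frac{E}{E_9}\right)^{-\delta },\qquad \delta >0,$$

where $`E_9`$ is the geometric mean energy of channel 9, so that $`s(E_9)=1`$ by construction and higher-energy channels ($`E>E_9`$) are time-contracted ($`s<1`$).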
Figure 5 shows a plot of the scale factor in amplitude needed to best bring a pulse in a given energy channel into alignment with discriminator channel 9, plotted as a function of the geometric mean energy of the energy channel. We note that the pulse appears to have a lesser amplitude in both higher and lower energy channels. This is not surprising, as channel 9 was initially chosen just because it had the highest peak flux (and hence best counting statistics). Figure 5 is essentially a normalized 16-channel spectrum of the counts received from this GRB. No easily recognizable relation between the scale factors in time and in amplitude has been found.
## 5 The Next Four Most Fluent Single-Pulse GRBs
Possibly BATSE trigger 2193 is an unusual burst. Perhaps the main pulse in 2193 is the only one to approximately fit the Pulse Start and Pulse Scale Conjectures. To test uniqueness, the next four most fluent pulses in single-pulse GRBs as listed in Norris, Bonnell, & Watanabe (1999) were analyzed. These were, in order of decreasing fluence, GRB 930612a, GRB 940529b, GRB 941031b, and GRB 970825, corresponding to BATSE trigger numbers 2387, 3003, 3267, and 6346 respectively. Because these GRBs have significantly fewer counts, there was concern that passing the sixteen-channel test would be significantly less demanding, owing to the inherently larger variances. It was therefore decided to use four energy channel data, also downloadable from the Compton Gamma Ray Observatory Science Support Center.
Each GRB was prepared in a manner similar to that described above for BATSE trigger 2193. Here, however, it was tested whether the pulse shape in energy channels 1 and 2 could be scaled in time and fluence to match the pulse shape found in energy channel 3. The number of counts was so small in energy channel 4 for each burst that any test involving data from this channel was essentially meaningless. A $`\chi ^2`$ per degree of freedom ($`d`$) statistic was computed between the two lower energy channels and channel 3. For BATSE trigger 2387, the $`\chi _{13}^2`$ per degree of freedom between channels 1 and 3 was 5.06, while between channels 2 and 3 $`\chi _{23}^2/d=2.20`$. For BATSE trigger 3003, $`\chi _{13}^2/d=1.25`$ and $`\chi _{23}^2/d=1.07`$. For BATSE trigger 3267, $`\chi _{13}^2/d=0.98`$ and $`\chi _{23}^2/d=1.09`$. Lastly, for BATSE trigger 6346, $`\chi _{13}^2/d=0.72`$ and $`\chi _{23}^2/d=0.74`$. These tests were all carried out using 64-ms time-binned data.
Plots where the data is binned to 1.024-second bins are shown in Figures 6-9. Trends in the data are easier for the human eye to discern on this large time-bin size. The best-found scale factors between the first two energy channels and channel 3 have been applied. Channel 3 data is plotted with a continuous line, while channel 1 and channel 2 data are plotted with the symbols “1” and “2” respectively.
From these results, it appears that the combined Pulse Start and Pulse Scale Conjectures hold in all cases with the possible exception of BATSE trigger 2387. Even for BATSE trigger 2387, the Pulse Start and Pulse Scale Conjectures were marginally consistent between energy channels 2 and 3. It is speculated that the conjecture tests might have been compromised between channels 1 and 3 because of a second dim soft pulse that occurred well after the peak of the main pulse.
## 6 Summary, Theoretical Implications, and Conclusions
From preliminary inspection of the most fluent single-pulse GRB on the Norris, Bonnell, & Watanabe (1999) list, BATSE trigger 2193, two conjectures have been suggested. They are: 1. The Pulse Start Conjecture: GRB pulses each have a unique starting time that is independent of energy. 2. The Pulse Scale Conjecture: GRB pulses have a unique shape that is independent of energy. The relation between the shape of a GRB pulse at any energy and the shape of the same GRB pulse at any other energy are simple scale factors in time and amplitude.
These two conjectures were then tested on BATSE trigger 2193 and found statistically valid in significantly fluent energy channels. The two conjectures were also indicated as true for three of the next four most fluent single-pulse GRBs, with the discrepant case possibly being affected by the presence of a small secondary pulse. When the Pulse Scale conjecture holds, a unique time-scaling factor is revealed between GRB energy bands. For BATSE trigger 2193, this scale factor was found to be a power law increasing monotonically from about 100 keV to 1000 keV.
One might be surprised that the Pulse Scale Conjecture can be tested at all with current BATSE data, since the energy channels available all have finite energy width. In its purest form, the Pulse Scale Conjecture predicts a scaling relation between pulse shapes at monochromatic energies only. In this sense, it is fortunate that the shape of GRB pulses allows pulses in different energy channels to be added together and nearly retain their initial shape. Were this not true, the broad width of BATSE energy channels would have made such a behavior indiscernible. Mathematically, however, the superposition does not work exactly, which may mean that the conjectures work even more precisely than indicated here.
Much research has been done isolating GRB pulses and tracking them across energy channels (Norris et al., 1996; Scargle, 1998). The truth of the Pulse Scale conjecture in other GRB pulses might provide useful constraints on GRB pulse identification and energy tracking.
Several important works involving GRB pulses have been published which involve a cross correlation between pulses in different energy bands (Norris, Marani, & Bonnell, 2000; Fenimore, 2000). The accuracy of these works might be augmented were they to factor in a best-fit scale-factor before computing a cross-correlation. In the case of Norris, Marani, & Bonnell (2000), this might result in a more accurate standard candle for GRBs.
The Pulse Scale Conjecture identifies a potential invariant of each GRB pulse: the ratio of the fluence occurring before the peak to the fluence occurring after the peak. It has long been known that GRBs are time-asymmetric (Nemiroff et al., 1994), but the energy-invariant degree of asymmetry of GRB pulses might now be used as a tool. Since neither pulse scale factors in time nor in amplitude change this quantity, each independent GRB pulse should have its own invariant pre-/post- peak fluence ratio. If different physical processes determine the duration of the rise and decay separately, then one would not expect that these processes would scale the same at different energies. Therefore, the constancy of this rise/decay ratio over different energy bands indicates either that the same physical process creates the rise and decay, or that both processes are affected by a common dilation.
Beyond this, the Pulse Scale Conjecture may be interpreted as a relative time-dilation factor between energies. One might speculate that this behavior results from a relative projected speed or a relative beaming angle, but these speculations cannot yet be definitively tested.
The Pulse Scale Conjecture also gives insight into why some pulses appear to have hardness decrease monotonically with time (Wheaton, 1973; Norris, 1983; Laros, 1985; Norris et al., 1986; Crider et al., 1999), while some pulses appear to have hardness track intensity (Vedrenne, 1981; Cline, 1984; Golenetskii, 1983; Norris et al., 1986; Crider et al., 1999). In the former case, a relatively small temporal scale-factor might exist between the high and low energy channels. A small temporal scale-factor is nearly equal to a constant shift to an earlier time of the higher energy band relative to the lower energy band. For a FRED shaped pulse, such a shift will appear as monotonically decreasing hardness in coarsely binned timing data.
Pulses where hardness appears to track intensity, in contrast, should exhibit a relatively large temporal scale-factor between the energies being compared in the hardness ratio. A large scale-factor would allow the pulse at the higher energy to peak while the same pulse at lower energy is still rising slowly. To an approximation, the pulse intensity at the lower energy is flat, and so the ratio of the high to low energy channels is nearly proportional to the intensity of the high-energy channel.
Future work might try to discern the truth of the Pulse Start and Pulse Scale conjectures over a wider variety of pulses. No other GRB pulses were tested in detail and then dismissed because they failed the Pulse Start and Pulse Scale Conjectures stated above. It is true, however, that most GRB pulses occur simultaneously with other pulses, and so would be much more difficult to test. Informal inspection of isolated pulses in several other GRBs does indicate that the conjectures proposed here may be applicable to many other – perhaps all other – GRB pulses.
This research was supported by grants from NASA and the NSF. I would like to thank Jay Norris, Jerry Bonnell, and Christ Ftaclas for many helpful discussions.
## Acknowledgements
I am indebted to Ram Brustein, Michele Maggiore and Gabriele Veneziano for tutoring me on some aspects of the physics of stochastic gravity-wave backgrounds. Still on the theory side I am also grateful to several colleagues who provided encouragement and stimulating feed-back, particularly Dharam Ahluwalia, Abhay Ashtekar, John Ellis, Nick Mavromatos, Jorge Pullin, Carlo Rovelli, Subir Sarkar, Lee Smolin and John Stachel. On the experiment side I would like to thank Jérôme Faist, Peter Fritschel, Luca Gammaitoni, Lorenzo Marrucci, Soumya Mohanty, and Michele Punturo, for conversations on various aspects of interferometry.
# A Natural Formalism for Microlensing
## 1 Introduction
The geometry of point-lens microlensing (Einstein 1936; Refsdal 1964; Paczyński 1986) is so simple that students can derive all the basic results in a few hours. Nonetheless, this geometry has never been boiled down to its essence: the relationship between the underlying physical quantities and the observables. In particular, the “Einstein ring radius” $`r_\mathrm{E}`$, a central concept in the usual formulation, is not directly observable and has not been observationally determined for even one of the $`\sim 500`$ microlensing events observed to date. There appear to be three reasons that the natural geometric formulation has not been developed. First, the standard geometry is already so trivial that further simplification has not seemed worthwhile. Second, the theory of microlensing was already quite developed before it was realized what the observables were, and until very recently, the prospects were poor for measuring these observables except in a handful of events. Third, the original impulse to microlensing searches was to probe the dark matter. This focused attention on the optical depth (a statistical statement about the ensemble of events) and secondarily on the Einstein timescale $`t_\mathrm{E}`$, which of the three observables is the one that has the most convoluted relation to the underlying physical parameters.
However, with the prospect of astrometric microlensing it is now possible that a second observable, the angular Einstein radius $`\theta _\mathrm{E}`$, will be routinely measured (Boden, Shao, & Van Buren 1998; Paczyński 1998). Moreover, if these astrometric measurements are carried out by the Space Interferometry Mission (SIM) in solar orbit, then comparison of photometry from SIM and the ground will yield a third observable, the projected Einstein radius $`\stackrel{~}{r}_\mathrm{E}`$ (Refsdal 1966; Gould 1995; Gould & Salim 1999). Hence, it is now appropriate to reformulate the microlensing problem in terms of these observables.
## 2 Geometry
The upper panel of Figure 1 shows the standard presentation of microlensing geometry (e.g. Fig. 3 from Gould 1996). The observer (O), lens (L) of mass $`M`$, and source (S) are aligned. The light is deflected by an angle $`\alpha `$ given by the Einstein (1936) formula
$$\alpha =\frac{4GM}{r_\mathrm{E}c^2},$$
(1)
where $`r_\mathrm{E}`$ is the Einstein radius. It arrives at the observer displaced by an angle $`\theta _\mathrm{E}`$ from the true position of the source. In this case, the source is therefore imaged into a ring. The size of this ring projected onto the source plane is $`\widehat{r}_\mathrm{E}`$. More generally, the alignment will not be perfect, and the axial symmetry will be broken. Hence, there will be two images rather than a ring. However, even in this more general case the Einstein ring provides a natural scale to the problem.
The lower panel of Figure 1 basically inverts the geometry of the upper panel and thereby focuses attention on the observer rather than the source. This seems like a trivial change but it has two advantages. First, the quantities shown at the right, $`\theta _\mathrm{E}`$ and $`\stackrel{~}{r}_\mathrm{E}`$, are the observables. To date, $`\theta _\mathrm{E}`$ has been measured for only 4 events (Alcock et al. 1997; Albrow et al. 1999,2000; Afonso et al. 2000), all by using the source as an “angular ruler” (Gould 1994a; Nemiroff & Wickramasinghe 1994; Witt & Mao 1994). Similarly, $`\stackrel{~}{r}_\mathrm{E}`$ has been determined for only about a half dozen events (Alcock et al. 1995; Bennett et al. 1997; Mao 1999). For all of these, $`\stackrel{~}{r}_\mathrm{E}`$ was found by measuring the deviation of the light curve induced by the Earth’s motion (Gould 1992). The amplitude of this deviation is proportional to $`\pi _\mathrm{E}\equiv \mathrm{AU}/\stackrel{~}{r}_\mathrm{E}`$. The measurements of both $`\theta _\mathrm{E}`$ and $`\stackrel{~}{r}_\mathrm{E}`$ have required special conditions (a caustic crossing for $`\theta _\mathrm{E}`$ and an event lasting a large fraction of a year for $`\stackrel{~}{r}_\mathrm{E}`$), which is why so few of these “observables” have actually been observed. However, as mentioned above, both $`\theta _\mathrm{E}`$ and $`\stackrel{~}{r}_\mathrm{E}`$ could be measured routinely in the future.
The second reason for inverting the standard geometry is that doing so makes transparent the relation between the observables and the underlying physical variables: the product of $`\theta _\mathrm{E}`$ and $`\stackrel{~}{r}_\mathrm{E}`$ is essentially the Schwarzschild radius of the lens, and their ratio is essentially the lens-source relative parallax. Using the small angle approximation, one sees immediately from the lower panel of Figure 1 that $`\alpha /\stackrel{~}{r}_\mathrm{E}=\theta _\mathrm{E}/r_\mathrm{E}`$, or
$$\theta _\mathrm{E}\stackrel{~}{r}_\mathrm{E}=\alpha r_\mathrm{E}=\frac{4GM}{c^2}.$$
(2)
Next, from the exterior-angle theorem
$$\theta _\mathrm{E}=\alpha -\psi =\frac{\stackrel{~}{r}_\mathrm{E}}{d_l}-\frac{\stackrel{~}{r}_\mathrm{E}}{d_s}=\frac{\stackrel{~}{r}_\mathrm{E}}{d_{\mathrm{rel}}},$$
(3)
where $`d_l`$ and $`d_s`$ are the distances to the lens and source, and $`d_{\mathrm{rel}}^{-1}\equiv d_l^{-1}-d_s^{-1}`$. Note that equation (3) can be written more suggestively as
$$\pi _\mathrm{E}\theta _\mathrm{E}=\pi _{\mathrm{rel}},\pi _\mathrm{E}\equiv \frac{\mathrm{AU}}{\stackrel{~}{r}_\mathrm{E}},$$
(4)
where $`\pi _{\mathrm{rel}}=\mathrm{AU}/d_{\mathrm{rel}}`$ is the lens-source relative parallax.
Just as in astrometric parallax determinations where $`\pi `$ is a more natural way to represent the measured quantity than its inverse (distance), so in microlensing “parallax” determinations, $`\pi _\mathrm{E}`$ is more natural than its inverse $`(\stackrel{~}{r}_\mathrm{E})`$. The reason is the same: the observable effect is inversely proportional to $`\stackrel{~}{r}_\mathrm{E}`$ but directly proportional to $`\pi _\mathrm{E}`$, so the measurement errors, when expressed in terms of $`\pi _\mathrm{E}`$, exhibit more regular behavior. As in the case of astrometric parallax, this feature becomes especially important for measurements that are consistent with zero at the few $`\sigma `$ level. Indeed, in contrast to astrometric parallaxes, microlensing parallaxes are inherently two-dimensional (Gould 1995). That is, one measures not only the amplitude of $`\stackrel{~}{r}_\mathrm{E}`$ (or $`\pi _\mathrm{E}`$) but also the direction of the lens-source relative motion. Hence one can generalize $`\pi _\mathrm{E}`$ to a two-dimensional vector $`𝛑_\mathrm{E}`$ whose direction is that of the lens relative to the source. The measurement errors in $`𝛑_\mathrm{E}`$ are then easily expressed as a covariance matrix. By contrast, there is no natural way to generalize $`\stackrel{~}{r}_\mathrm{E}`$: it can be made into a vector with the same direction $`\stackrel{~}{𝐫}_\mathrm{E}`$, but when $`𝛑_\mathrm{E}`$ is consistent with zero, such a vector is very poorly behaved. Moreover, in some cases one component of $`𝛑_\mathrm{E}`$ can be very well determined while the other is highly degenerate (Refsdal 1966; Gould 1994b,1995), a situation that is easily represented using $`𝛑_\mathrm{E}`$ but unwieldy using $`\stackrel{~}{𝐫}_\mathrm{E}`$. (Note that while no one has ever previously introduced $`\stackrel{~}{𝐫}_\mathrm{E}`$, I have often discussed the closely related projected velocity, $`\stackrel{~}{𝐯}=\stackrel{~}{𝐫}_\mathrm{E}/t_\mathrm{E}`$.)
The Einstein crossing time $`t_\mathrm{E}`$ is the only observable that at present is routinely observed. While I find no fault with $`t_\mathrm{E}`$, considerations of symmetry with the substitution $`\stackrel{~}{r}_\mathrm{E}\to 𝛑_\mathrm{E}`$ lead me to substitute $`t_\mathrm{E}\to 𝛍_\mathrm{E}`$, where
$$\mu _\mathrm{E}\equiv \frac{1}{t_\mathrm{E}},$$
(5)
and where the direction of $`𝛍_\mathrm{E}`$ is that of the lens motion relative to the source. With this definition, the relative lens-source proper motion is given by $`𝛍_{\mathrm{rel}}=𝛍_\mathrm{E}\theta _\mathrm{E}`$.
## 3 Relations Between Observables and Physical Quantities
From equations (2)–(4), one immediately derives
$$\stackrel{~}{r}_\mathrm{E}=\sqrt{\frac{4GMd_{\mathrm{rel}}}{c^2}},\pi _\mathrm{E}=\sqrt{\frac{\pi _{\mathrm{rel}}}{\kappa M}},$$
(6)
and
$$\theta _\mathrm{E}=\sqrt{\frac{4GM}{d_{\mathrm{rel}}c^2}}=\sqrt{\kappa M\pi _{\mathrm{rel}}},$$
(7)
where
$$\kappa \equiv \frac{4G}{c^2\mathrm{AU}}=\frac{4v_{\oplus }^2}{M_{\odot }c^2}\approx 8.144\frac{\mathrm{mas}}{M_{\odot }},$$
(8)
and $`v_{\oplus }\approx 30\mathrm{km}\mathrm{s}^{-1}`$ is the speed of the Earth.
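As a hedged numerical illustration of equations (6)-(8), the following sketch evaluates the Einstein-ring quantities for a hypothetical lens; the mass and distances are made-up example values, not numbers taken from the text.

```python
kappa = 8.144                       # mas per solar mass, Eq. (8)
M = 0.3                             # lens mass in solar masses
d_l, d_s = 6.0, 8.0                 # lens and source distances in kpc
pi_rel = 1.0 / d_l - 1.0 / d_s      # relative parallax in mas (1 AU / 1 kpc = 1 mas)

theta_E = (kappa * M * pi_rel) ** 0.5    # Eq. (7): angular Einstein radius, mas
pi_E = (pi_rel / (kappa * M)) ** 0.5     # Eq. (6): microlensing parallax
r_E_proj = 1.0 / pi_E                    # projected Einstein radius, AU

print(theta_E, pi_E, r_E_proj)     # ~0.32 mas, ~0.13, ~7.7 AU
```

With these inputs, Eq. (9) below would return the assumed mass, $`M=\theta _\mathrm{E}/(\kappa \pi _\mathrm{E})=0.3M_{\odot }`$, as a consistency check.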
How well is the coefficient ($`8.144`$) in $`\kappa `$ known? It suffers from two sources of uncertainty. First, the factor “4” in equations (8) and (1) is a prediction of General Relativity (GR). Its accuracy (often parameterized by $`\gamma `$) has been verified experimentally by Hipparcos, but only to 0.3% (Froeschle, Mignard, & Arenou 1997). However, if GR is assumed to be exact, then this coefficient can be determined as accurately as $`(v_{\oplus }/c)^2`$, which should be known from pulsar timing and solar-system radar ranging to at least nine significant digits.
In astrometric microlensing measurements, one automatically recovers the parallax and proper-motion of the source, $`\pi _s`$ and $`𝛍_s`$ (Boden et al. 1998; Gould & Salim 1999). Hence, the observables are $`𝛍_\mathrm{E}`$, $`𝛑_\mathrm{E}`$, $`\theta _\mathrm{E}`$, $`\pi _s`$ and $`𝛍_s`$. When expressed in this natural form, they have a particularly simple relation to the physical properties of the lens:
$$M=\frac{\theta _\mathrm{E}}{\kappa \pi _\mathrm{E}},$$
(9)
$$\pi _l=\pi _\mathrm{E}\theta _\mathrm{E}+\pi _s,$$
(10)
$$𝛍_l=𝛍_\mathrm{E}\theta _\mathrm{E}+𝛍_s,$$
(11)
and
$$𝐯_{\perp ,l}=\frac{𝛍_\mathrm{E}\theta _\mathrm{E}+𝛍_s}{\pi _\mathrm{E}\theta _\mathrm{E}+\pi _s},$$
(12)
where $`\pi _l`$, $`𝛍_l`$, and $`𝐯_{\perp ,l}`$ are the parallax, proper motion, and transverse velocity of the lens.
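A minimal sketch of equations (9)-(12) follows. The function name and the unit-conversion factor for the transverse velocity (4.74 km s⁻¹ per AU yr⁻¹, left implicit in Eq. (12)) are my additions, and the vectors are taken as (east, north) components from a hypothetical astrometric fit.

```python
import numpy as np

KAPPA = 8.144                                   # mas per solar mass

def lens_parameters(mu_E, pi_E, theta_E, pi_s, mu_s):
    """Eqs. (9)-(12): physical lens parameters from the observables.

    mu_E : 2-vector, magnitude 1/t_E in 1/yr;  pi_E : 2-vector, dimensionless;
    theta_E, pi_s : scalars in mas;  mu_s : 2-vector in mas/yr.
    """
    mu_E, pi_E, mu_s = map(np.asarray, (mu_E, pi_E, mu_s))
    M = theta_E / (KAPPA * np.linalg.norm(pi_E))     # Eq. (9), solar masses
    pi_l = np.linalg.norm(pi_E) * theta_E + pi_s     # Eq. (10), mas
    mu_l = mu_E * theta_E + mu_s                     # Eq. (11), mas/yr
    v_perp_l = 4.74 * mu_l / pi_l                    # Eq. (12), km/s
    return M, pi_l, mu_l, v_perp_l
```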
Acknowledgements: This work was supported by grant AST 97-27520 from the NSF.
## Appendix A The Need for Uniform Notation
Microlensing suffers from a plethora of mutually inconsistent notational conventions. While this poses no real problem for veterans, it presents significant obstacles to newcomers entering the field. I take the opportunity of this paper (which, more than most, concerns itself with notational issues) to try to forge a consensus. In formulating my proposed conventions, I am influenced primarily by prevalence of current usage, and secondarily by the need for internal consistency.
I am abandoning some of my own prized notations, and I hope others are willing to do the same in the interest of achieving a uniform system. I will post this manuscript on astro-ph and circulate it privately to a wide audience thereby allowing an informal “vote” on my proposal and corrections to it if they seem required. In the final published version of the paper, I will replace this paragraph with the results of that “vote”.
First, all quantities associated with the size of the Einstein ring (in units of length, angle, time, etc.) should be subscripted with an upper-case roman “E” in conformity with ApJ conventions. All physical Einstein radii should be denoted $`r`$. Hence, $`\stackrel{~}{r}_\mathrm{E}`$, $`r_\mathrm{E}`$, and $`\widehat{r}_\mathrm{E}`$ for the Einstein rings in the planes of the observer, lens, and source. The other quantities are $`\theta _\mathrm{E}`$ for the angular Einstein radius, $`t_\mathrm{E}`$ for the Einstein crossing time, $`\mu _\mathrm{E}\equiv t_\mathrm{E}^{-1}`$, $`\pi _\mathrm{E}\equiv \mathrm{AU}/\stackrel{~}{r}_\mathrm{E}`$, and the direction of $`𝛑_\mathrm{E}`$ and $`𝛍_\mathrm{E}`$ defined by the direction of the proper motion of the lens relative to the source.
Second, all quantities associated with position in the Einstein ring should be denoted by $`u`$, or possibly by $`𝐮`$ if a vector position is indicated. When $`𝐮`$ is a vector, it must be specified whether it is the source position relative to the lens or vice versa. Common usage seems to conform to the former, and hence I adopt that. However, keep in mind that this means that $`d𝐮/dt=-𝛍_{\mathrm{rel}}/\theta _\mathrm{E}`$.
Third, all quantities associated with the time of closest approach to the center of the Einstein ring should be denoted by a subscript “0”. Thus, $`t_0`$ for the time of closest approach and $`u_0`$ for the projected separation of the lens and source in units of $`\theta _\mathrm{E}`$ at time $`t_0`$.
Fourth, all quantities associated with the source should be denoted by a subscript “$`\ast `$”. Thus, $`\theta _{\ast }`$ for the angular radius of the source and $`r_{\ast }`$ for its physical radius.
Fifth, time normalized to the Einstein crossing time should be denoted $`\tau =(t-t_0)/t_\mathrm{E}`$. Hence the vector position in the Einstein ring is $`𝐮=(\tau ,u_0)`$.
Sixth, event parameters as measured from locations other than the Earth should be subscripted, e.g., “$`t_{0,s}`$” for the time of closest approach as seen from a satellite. The subscript “$`\odot `$” should be reserved for event parameters as seen from the Sun (not in the Sun frame but from another location). The “$`\oplus `$” subscript should be used only when needed to avoid confusion.
Finally, the reader will note that I have described different parameters that contain the same information, e.g., $`(\stackrel{~}{r}_\mathrm{E},\pi _\mathrm{E})`$ and $`(t_\mathrm{E},\mu _\mathrm{E})`$. I expect that $`𝛑_\mathrm{E}`$ and $`𝛍_\mathrm{E}`$ will come into use mainly in technical applications, and that the general reader of microlensing articles will continue to find $`\stackrel{~}{r}_\mathrm{E}`$ and $`t_\mathrm{E}`$ to be more intuitive. In particular, in cases where there is only a microlensing parallax measurement, the projected velocity $`\stackrel{~}{𝐯}=𝛍_\mathrm{E}/\pi _\mathrm{E}`$ is often a substantially more useful representation of the measurement than $`𝛍_\mathrm{E}`$ and $`\pi _\mathrm{E}`$ reported separately. Note that in contrast to $`𝐯_{l,\perp }=𝛍_l/\pi _l`$, which represents two components of an intrinsically three-dimensional vector, $`\stackrel{~}{𝐯}`$ is intrinsically two-dimensional and so should not be subscripted with a “$`\perp `$”.
# 𝜒Variable-Speed-of-Light Cosmologies
## 1 INTRODUCTION
Variable-Speed-of-Light (VSL) cosmologies have recently generated considerable interest as alternatives to the inflationary framework. They serve both to sharpen our ideas concerning falsifiability of the standard inflationary paradigm, and also to provide a contrasting scenario that is itself amenable to observational test. In this presentation we wish to assess the internal consistency of the VSL framework, and ask to what extent it is compatible with geometrodynamics (Einstein gravity). This will lead us to propose a particular class of VSL models that implement this idea in such a way as to inflict minimal “violence” on GR, and which at the same time are “natural” in the context of one-loop QED. For a detailed discussion of all these issues we refer to the paper which inspired the present talk.
The question of the intrinsic compatibility of VSL models with GR is not a trivial one: Ordinary Einstein gravity has the constancy of the speed of light built into it at a deep and fundamental level; $`c`$ is the “conversion constant” that relates time to space. Even at the level of coordinates we need to use $`c`$ to relate the zeroth coordinate to Newtonian time: $`dx^0=cdt`$. Thus, simply replacing the constant $`c`$ by a position-dependent variable $`c(t,\stackrel{}{x})`$, and writing $`dx^0=c(t,\stackrel{}{x})dt`$ is a highly suspect proposition.
If this substitution is performed at the level of the metric, it is difficult to distinguish VSL from a mere coordinate change (under such circumstances VSL has no physical impact). Apparently more attractive (because it at least has observable consequences) is the possibility of replacing $`c\to c(t)`$ directly in the Einstein tensor itself. \[This is the route chosen by Barrow–Magueijo, by Albrecht–Magueijo, and by Avelino–Martins.\] If one does so, the modified “Einstein tensor” is not covariantly conserved (it does not satisfy the contracted Bianchi identities), and this modified “Einstein tensor” is not obtainable from the curvature tensor of any spacetime metric. Indeed, if we define a timelike vector $`V^\mu =(\partial /\partial t)^\mu =(1,0,0,0)`$ then a brief computation yields
$$\nabla _\mu G_{\mathrm{modified}}^{\mu \nu }\propto \dot{c}(t)V^\nu .$$
(1)
Thus violations of the Bianchi identities for this modified “Einstein tensor” are part and parcel of this particular way of trying to make the speed of light variable. If one now couples this modified “Einstein tensor” to the stress-energy via the Einstein equations, then the stress-energy tensor (divided by $`c^4`$) cannot be covariantly conserved either, and so it cannot be variationally obtained from any action. To our minds, if one really wants to say that it is the speed of light that is varying, then one should seek a theory that contains two natural speed parameters, call them $`c_{\mathrm{p}hoton}`$ and $`c_{\mathrm{g}ravity}`$, and then ask that the ratio of these two speeds is a time-dependent (and possibly position-dependent) quantity. To implement this idea, it is simplest to take $`c_{\mathrm{g}ravity}`$ to be fixed and position-independent. So doing, $`c_{\mathrm{g}ravity}`$ can be safely used in the usual way to set up all the mathematical structures of differential geometry needed in implementing Einstein gravity.
## 2 TWO–METRIC VSL COSMOLOGIES
Based on the preceding discussion, we feel that the first step towards making a VSL cosmology “geometrically sensible” is to write a two-metric theory in a form where the photons couple to a second electromagnetic metric, distinct from the spacetime metric that describes the gravitational field. \[Somewhat related two-metric implementations of VSL cosmology are discussed by Clayton and Moffat and by Drummond. See those papers for details on how their implementations differ from our own.\] This permits a precise physical meaning for VSL: If the two null-cones (defined by $`g`$ and $`g_{\mathrm{em}}`$, respectively) do not coincide at some time, one has a VSL cosmology.
We want to stress that the basic idea of a quantum-physics-induced effective metric, differing from the spacetime metric (gravity metric), and affecting only photons is actually far from being a radical point of view. This concept has gained in the last decade a central role in the discussion of the propagation of photons in non-linear electrodynamics. In particular we stress that “anomalous” (larger than $`c_{\mathrm{gravity}}`$) photon speeds have been calculated in relation with the propagation of light in the Casimir vacuum, as well as in gravitational fields.
Within our own framework, alternative approaches can be (I) to couple just the photons to $`g_{\mathrm{em}}`$ while all the other matter and gravity couple to $`g`$, or (II) to couple all the gauge bosons to $`g_{\mathrm{em}}`$, but couple everything else to $`g`$, or (III) to couple all the matter fields to $`g_{\mathrm{em}}`$, keeping gravity as the only field coupled to $`g`$. A particularly simple EM metric is
$$[g_{\mathrm{em}}]_{\alpha \beta }=g_{\alpha \beta }-(AM^{-4})\partial _\alpha \chi \partial _\beta \chi .$$
(2)
Here we have introduced a dimensionless coupling $`A`$, and taken $`\hbar =c_{\mathrm{gravity}}=1`$, in order to give the scalar field $`\chi `$ its canonical dimensions of mass-energy. The normalization energy scale, $`M`$, is defined in terms of $`\hbar `$, $`G_{\mathrm{Newton}}`$, and $`c_{\mathrm{gravity}}`$. Provided $`M`$ satisfies $`M_{\mathrm{Electroweak}}<M<M_{\mathrm{Pl}}`$, the EM lightcones can be much wider than the standard (gravity) lightcones without inducing a large backreaction on the spacetime geometry due to the scalar field $`\chi `$. The presence of this dimensionful constant implies that $`\chi `$VSL models will automatically be non-renormalizable. $`M`$ is then the energy at which the non-renormalizability of the $`\chi `$ field becomes important. So these models should be viewed as “effective field theories” valid for sub-$`M`$ energies. In this regard $`\chi `$VSL implementations are certainly no worse behaved than many of the models of cosmological inflation and/or particle physics currently extant.
The evolution of the scalar field $`\chi `$ will be assumed to be governed by some VSL action $`S_{\mathrm{V}SL}`$. Then the complete action for the first of the models proposed above is
$$S_I=\int d^4x\sqrt{-g}\left[R(g)+\mathcal{L}_{\mathrm{matter}}\right]+\int d^4x\sqrt{-g_{\mathrm{em}}}g_{\mathrm{em}}^{\alpha \beta }F_{\beta \gamma }g_{\mathrm{em}}^{\gamma \delta }F_{\delta \alpha }+\int d^4x\sqrt{-g}\mathcal{L}_{\mathrm{VSL}}(\chi ).$$
(3)
Let us suppose the potential in this VSL action has a global minimum, but that the $`\chi `$ field is displaced from this minimum in the early universe: either trapped in a meta-stable state by high-temperature effects or displaced due to chaotic initial conditions. The transition to the global minimum may be either of first or second order and during it $`\partial _\alpha \chi \ne 0`$, so that $`g_{\mathrm{em}}\ne g`$. Once the true global minimum is achieved, $`g_{\mathrm{em}}=g`$ again. Since one can arrange $`\chi `$ today to have settled to the true global minimum, current laboratory experiments would automatically give $`g_{\mathrm{em}}=g`$.
Since $`V(\chi )\ne 0`$ in the early universe, $`\chi `$ could drive an inflationary phase. While this is true we stress instead the more interesting possibility that, by coupling an independent inflaton field to $`g_{\mathrm{em}}`$, $`\chi `$VSL models can be used to improve the inflationary framework by enhancing its ability to solve the cosmological puzzles.
## 3 COSMOLOGICAL PUZZLES
The general covariance of General Relativity means that the set of models consistent with the existence of the apparently universal class of preferred rest frames defined by the Cosmic Microwave Background (CMB) is very small and non-generic. Inflation seeks to alleviate this problem by making the flat Friedmann–Lemaitre–Robertson–Walker (FLRW) model an attractor within the set of almost–FLRW models, at the cost of violating the strong energy condition (SEC) during the inflationary epoch. VSL cosmologies by contrast typically sacrifice Lorentz invariance, again thereby making the flat FLRW model an attractor.
Our own approach, while able to solve the “kinematic” puzzles as well as inflation does, cannot solve the flatness problem, since in its purest formulation (no inflation driven or enhanced by $`\chi `$) violations of the SEC do not occur, and because our models do not lead to an explicit “hard” breaking of the Lorentz invariance like other VSL models do. \[Our class of VSL models exhibits a “soft” breaking of Lorentz invariance, which is qualitatively similar to the notion of spontaneous symmetry breaking in particle physics.\]
We will now consider some of the major cosmological puzzles, directing the reader to the full paper for an extended discussion.
### 3.1 Isotropy
One of the major puzzles of the standard cosmological model is that the isotropy of the CMB seems in conflict with estimates of the region of causal contact at last scattering. The basic mechanism by which VSL models solve this cosmological puzzle relies on the fact that the (coordinate) size of the horizon at the time of last scattering $`t_{\ast }`$ is modified by the time dependence of the photon speed $`R_{\mathrm{hz}}=\int _0^{t_{\ast }}c_{\mathrm{photon}}𝑑t/a(t)`$. It is this quantity that sets the distance scale over which photons can transport energy and thermalize the primordial fireball. On the other hand, the coordinate distance out to the surface of last scattering is $`R_{\mathrm{ls}}=\int _{t_{\ast }}^{t_0}c_{\mathrm{photon}}𝑑t/a(t)`$. Observationally, the large-scale homogeneity of the CMB implies $`R_{\mathrm{hz}}\gg R_{\mathrm{ls}}`$. Although this is a paradox in the standard cosmological framework (without inflation), it can be achieved by having $`c_{\mathrm{photon}}\gg c_{\mathrm{gravity}}`$ early in the expansion and keeping $`c_{\mathrm{photon}}\simeq c_{\mathrm{gravity}}`$ between last scattering and the present epoch (as it should be for VSL models to be compatible with observations at low-redshift).
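As a toy numerical check of this argument (with an assumed radiation-like scale factor and a step-function photon speed; all values are illustrative, not from the text), one can verify that a large enough early boost in $`c_{\mathrm{photon}}`$ makes $`R_{\mathrm{hz}}`$ exceed $`R_{\mathrm{ls}}`$:

```python
import numpy as np
from scipy.integrate import quad

a = lambda t: np.sqrt(t)          # radiation-like expansion (arbitrary units)
c0, t_c = 1e3, 1e-3               # early photon-speed boost and transition time
t_i, t_ls, t_0 = 1e-8, 1e-2, 1.0  # initial, last-scattering, present times

# Split the horizon integral at the transition so c_photon is piecewise constant.
R_hz = (c0 * quad(lambda t: 1 / a(t), t_i, t_c)[0]
        + quad(lambda t: 1 / a(t), t_c, t_ls)[0])
R_ls = quad(lambda t: 1 / a(t), t_ls, t_0)[0]   # c_photon ~ c_gravity = 1 now
print(R_hz, R_ls, R_hz > R_ls)                  # ~63, 1.8, True
```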
### 3.2 Flatness
The flatness paradox arises from the fact that the flat FLRW universe, although plausible from observation, appears as an unstable solution of GR. From the Friedmann equation, it is a simple matter of definition that
$$ϵ\equiv \mathrm{\Omega }-1=\frac{Kc^2}{H^2a^2}=\frac{Kc^2}{\dot{a}^2},$$
(4)
where $`K=0,\pm 1`$. Again we have to deal with the basic point of our VSL cosmologies: Which $`c`$ are we dealing with? We cannot simply replace $`c\to c_{\mathrm{photon}}`$ in the above (as done in other VSL implementations). As we have pointed out, the $`c`$ appearing here must be the fixed $`c_{\mathrm{gravity}}`$, otherwise the Bianchi identities are violated and Einstein gravity loses its geometrical interpretation in terms of spacetime curvature. Thus we have to take $`ϵ=Kc_{\mathrm{gravity}}^2/\dot{a}^2`$. Differentiating this equation, we see that purely on kinematic grounds
$$\dot{ϵ}=-2Kc_{\mathrm{gravity}}^2\left(\ddot{a}/\dot{a}^3\right)=-2ϵ\left(\ddot{a}/\dot{a}\right).$$
(5)
Given the way we have implemented VSL cosmology in terms of a two-metric model, this equation is independent of the details in the photon sector. In particular, if we want to solve the flatness problem by making $`ϵ=0`$ a stable fixed point of this evolution equation (at least for some significant portion of the history of the universe), then we must have $`\ddot{a}>0`$, which is equivalent to SEC violation in FLRW. VSL effects by themselves are not sufficient. \[Superficially similar VSL models are claimed to solve the flatness puzzle. See the full paper for a discussion of this apparent discrepancy.\]
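The sign argument can be made concrete with a two-line check, assuming for illustration a power-law scale factor $`a=t^p`$ (my choice, not the paper's):

```python
import numpy as np

def eps(t, p, K=1, c=1.0):
    """Eq. (4) for a = t^p: eps = K c^2 / adot^2, so eps ~ t^(2 - 2p)."""
    adot = p * t ** (p - 1)
    return K * c**2 / adot**2

t = np.array([1.0, 10.0, 100.0])
print(eps(t, p=0.5))   # grows ~ t: decelerating (SEC-satisfying), eps=0 unstable
print(eps(t, p=2.0))   # decays ~ 1/t^2: accelerating (SEC-violating), stable
```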
### 3.3 Monopoles and Relics
The Kibble mechanism predicts topological defect densities that are inversely proportional to powers of the correlation length of the Higgs fields. These are generally upper bounded by the Hubble distance $`c/H`$. Inflation solves this problem by diluting the density of defects to an acceptable degree. We deal with the issue by varying $`c`$ in such a way as to have a large Hubble distance during defect formation. Thus we need the transition in the speed of light to happen after the SSB that leads to monopole production. \[We also want good thermal coupling between the photons and the Higgs field, to justify using the photon horizon scale in the Kibble freeze-out argument.\]
## 4 PRIMORDIAL FLUCTUATIONS
The inflationary framework owes its popularity not only to its ability to strongly mitigate the main cosmological puzzles, but also to its providing a plausible micro-physics explanation for the causal creation of primordial perturbations.
In the case of $`\chi `$VSL, the creation of primordial fluctuations is also generic. The basic mechanism can be easily understood by modelling the change in the speed of light as due to the effect of a changing “effective refractive index of the EM vacuum”: $`n_{EM}=c_{\mathrm{gravity}}/c_{\mathrm{photon}}=\left(\sqrt{1+(AM^{-4})(\partial _t\chi )^2}\right)^{-1}`$. Particle creation from a time-varying refractive index is a well-known effect, and many features of it are similar to those derivable for its inflationary and VSL counterparts (*e.g.,* the particles are still produced in squeezed pairs). We must stress the fact that these mechanisms are not entirely identical. In $`\chi `$VSL cosmologies a thermal distribution of the excited modes (with a temperature approximately constant in time) is no longer generic, and likewise the Harrison-Zel’dovich (HZ) spectrum is not guaranteed. Nevertheless, approximate thermality, at fixed temperature, over a wide frequency range can be proved for suitable regimes, implying an approximate HZ spectrum of primordial perturbations only on a finite range of frequencies. We hope that this “weak” prediction of a HZ spectrum will be among the possible observational tests of these implementations of the VSL framework.
## 5 CONCLUSIONS
Implementing VSL cosmologies in a geometrically clean way seems to lead almost inevitably to some version of a two-metric cosmology. We have indicated that there are several different ways of building two-metric VSL cosmologies and have discussed some of their generic cosmological features.
# HOW NEUTRINOS GET MASS AND WHAT OTHER THINGS MAY HAPPEN BESIDES OSCILLATIONS
## 1 Introduction
In the minimal standard model, under the gauge group $`SU(3)_C\times SU(2)_L\times U(1)_Y`$, the leptons transform as:
$$\left(\begin{array}{c}\nu _i\\ l_i\end{array}\right)_L\sim (1,2,-1/2),l_{iR}\sim (1,1,-1),$$
(1)
and the one Higgs doublet transforms as:
$$\mathrm{\Phi }=\left(\begin{array}{c}\varphi ^+\\ \varphi ^0\end{array}\right)\sim (1,2,1/2).$$
(2)
Without additional particles at or below the electroweak energy scale, i.e. $`\sim 10^2`$ GeV, $`m_\nu `$ must come from the following effective dimension-5 operator,
$$\frac{1}{\mathrm{\Lambda }}(\nu _i\varphi ^0-l_i\varphi ^+)(\nu _j\varphi ^0-l_j\varphi ^+).$$
(3)
All theoretical models of neutrino mass differ only in the specific realization of this operator.
## 2 Canonical, Minimal, and Next-to-Minimal Seesaw
Add 3 heavy singlet right-handed neutrinos to the minimal standard model: 1 $`\nu _R`$ for each $`\nu _L`$. Then the operator of Eq. (3) is realized because each heavy $`\nu _R`$ is linked to $`\nu _L\varphi ^0`$ with a Yukawa coupling $`f`$; and since $`\nu _R`$ is allowed to have a large Majorana mass $`M_R`$, the famous seesaw relationship $`m_\nu =m_D^2/M_R`$ is obtained, where $`m_D=f\langle \varphi ^0\rangle `$. This mechanism dominates the literature and is usually implied when a particular pattern of neutrino mass and mixing is proposed.
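For orientation, a back-of-envelope evaluation of the seesaw scale (the input values below are illustrative assumptions, not numbers quoted in the text):

```python
m_D = 174e9            # eV: Dirac mass taken at the electroweak scale
m_nu = 0.05            # eV: roughly the atmospheric-oscillation mass scale
M_R = m_D**2 / m_nu    # seesaw relation m_nu = m_D^2 / M_R
print(M_R / 1e9)       # ~6e14 GeV, close to the unification scale
```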
Actually, it is not necessary to have 3 $`\nu _R`$’s to get 3 nonzero neutrino masses. Add just 1 $`\nu _R`$. Then only 1 linear combination of $`\nu _e,\nu _\mu ,\nu _\tau `$ gets a seesaw mass. The other 2 neutrino masses are zero at tree level, but since there is in general no more symmetry to protect their masslessness, they must become massive through radiative corrections. As it turns out, this happens in two loops through double $`W`$ exchange and the result is doubly suppressed by the charged-lepton masses. Hence it is not a realistic representation of the present data for neutrino oscillations.
Add 1 $`\nu _R`$ and 1 extra Higgs doublet. Then 1 neutrino gets a seesaw mass. Another gets a one-loop mass through its coupling to $`\varphi _2^0`$, where $`\langle \varphi _2^0\rangle =0`$. This second mass is proportional to the coupling of the term $`(\overline{\varphi }_2^0\varphi _1^0)^2`$ times $`\langle \varphi _1^0\rangle ^2`$ divided by $`M_R`$. The third neutrino gets a two-loop mass as in the minimal case. This scheme is able to fit the present data.
## 3 Heavy Higgs Triplet
Add 1 heavy Higgs triplet $`(\xi ^{++},\xi ^+,\xi ^0)`$. Then the dimension-4 term
$$\nu _i\nu _j\xi ^0-\left(\frac{\nu _il_j+l_i\nu _j}{\sqrt{2}}\right)\xi ^++l_il_j\xi ^{++}$$
(4)
is present, and $`m_\nu \propto \langle \xi ^0\rangle `$. If $`m_\xi \sim 10^2`$ GeV, this would require extreme fine tuning to make $`\langle \xi ^0\rangle `$ small. But if $`m_\xi \gg 10^2`$ GeV, the dimension-4 term should be integrated out, and again only the dimension-5 term
$$(\nu _i\varphi ^0-l_i\varphi ^+)(\nu _j\varphi ^0-l_j\varphi ^+)=\nu _i\nu _j(\varphi ^0\varphi ^0)-(\nu _il_j+l_i\nu _j)(\varphi ^0\varphi ^+)+l_il_j(\varphi ^+\varphi ^+),$$
(5)
remains, so that
$$m_\nu =\frac{2f\mu \langle \varphi ^0\rangle ^2}{m_\xi ^2},$$
(6)
where $`f`$ and $`\mu `$ are the couplings of the terms $`\nu _i\nu _j\xi ^0`$ and $`\varphi ^0\varphi ^0\overline{\xi }^0`$ respectively. This shows the interesting result that $`\xi `$ has a very small vacuum expectation value inversely proportional to the square of its mass,
$$\langle \xi ^0\rangle =\frac{\mu \langle \varphi ^0\rangle ^2}{m_\xi ^2}\ll m_\xi .$$
(7)
The $`SU(2)_L\times SU(2)_R\times U(1)_{B-L}`$ version of this relationship is $`v_L\sim \langle \varphi ^0\rangle ^2/v_R`$.
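An order-of-magnitude version of Eqs. (6) and (7), with illustrative inputs of my own choosing ($`f\sim 1`$ and $`\mu \sim m_\xi `$ at a heavy triplet scale):

```python
f = 1.0
mu = m_xi = 1e13                       # GeV: assumed heavy-triplet scale
v = 174.0                              # GeV: <phi^0>
xi_vev = mu * v**2 / m_xi**2           # Eq. (7): induced triplet vev
m_nu = 2 * f * mu * v**2 / m_xi**2     # Eq. (6)
print(xi_vev, m_nu * 1e9)              # vev ~ 3e-9 GeV, m_nu ~ 6 eV
```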
## 4 Some Generic Consequences
Once neutrinos have mass and mix with one another, the radiative decay $`\nu _2\to \nu _1\gamma `$ happens in all models, but is usually harmless as long as $`m_\nu <`$ few eV, in which case it will have an extremely long lifetime, many many orders of magnitude greater than the age of the Universe. The present astrophysical limit is $`10^{14}`$ years.
The analogous radiative decay $`\mu \to e\gamma `$ also happens in all models, but is only a constraint for some models where $`m_\nu `$ is radiative in origin. The present experimental limit on this branching fraction is $`1.2\times 10^{-11}`$.
Neutrinoless double $`\beta `$ decay occurs, but is sensitive only to the $`\nu _e\nu _e`$ entry of $`\mathcal{M}_\nu `$, which may be assumed to be zero in many models. The present experimental limit is 0.2 eV.
## 5 Leptogenesis in the 2 Simplest Models of Neutrino Mass
Leptogenesis is possible in either the canonical seesaw or Higgs triplet models of neutrino mass. In the canonical seesaw scenario, $`\nu _R`$ may decay into both $`l^-\varphi ^+`$ and $`l^+\varphi ^-`$. In the Higgs triplet scenario, $`\xi ^{++}`$ may decay into both $`l^+l^+`$ and $`\varphi ^+\varphi ^+`$. The lepton asymmetry thus generated may be converted into the present observed baryon asymmetry of the Universe through the electroweak sphalerons.
The decay amplitude of $`\nu _R`$ into $`l^-\varphi ^+`$ is the sum of tree-level and one-loop contributions, where the intermediate state $`l^+\varphi ^-`$ may appear as a vertex correction through $`\nu _R^{\prime }`$ exchange. The interference between them allows a decay asymmetry between $`l^-\varphi ^+`$ and $`l^+\varphi ^-`$ to be produced, provided that $`CP`$ is violated. This requires $`\nu _R^{\prime }\ne \nu _R`$ and is analogous to having direct $`CP`$ violation in $`K`$ decay, i.e. $`ϵ^{\prime }\ne 0`$.
There is also $`CP`$ violation in the self-energy correction to the mass matrix spanning $`\nu _R`$ and $`\nu _R^{\prime }`$, which is analogous to having indirect $`CP`$ violation in the $`K^0\overline{K}^0`$ system, i.e. $`ϵ\ne 0`$. This effect has a $`(m-m^{\prime })^{-1}`$ enhancement, but the limit $`m^{\prime }=m`$ is not singular.
Similarly, the decay amplitude of $`\xi ^{++}`$ into $`l^+l^+`$ has a self-energy (but no vertex) correction involving the intermediate state $`\varphi ^+\varphi ^+`$. This generates a decay asymmetry given by
$$\delta _i\simeq \frac{Im[\mu _1\mu _2^{\ast }\underset{k,l}{\sum }f_{1kl}f_{2kl}^{\ast }]}{8\pi ^2(M_1^2-M_2^2)}\left(\frac{M_i}{\mathrm{\Gamma }_i}\right).$$
(8)
Again, $`CP`$ violation requires 2 different $`\xi `$’s.
## 6 Radiative Neutrino Mass
The generic expression of a Majorana neutrino mass is given by
$$m_\nu \sim f^2\langle \varphi ^0\rangle ^2/\mathrm{\Lambda },$$
(9)
hence
$$\mathrm{\Lambda }>10^{13}\mathrm{GeV}(1\mathrm{eV}/m_\nu )f^2,$$
(10)
i.e. the scale of lepton number violation is very large (and directly unobservable) unless $`f<10^{-5}`$ or so.
If $`m_\nu `$ is radiative in origin, $`f`$ is suppressed first by the loop factor of $`(4\pi )^{-1}`$, then by other naturally occurring factors such as $`m_l/M_W`$ or $`m_q/M_W`$. In that case, $`\mathrm{\Lambda }`$ may be small enough to be observable directly (or indirectly through lepton flavor violating processes).
Take for example the Zee model, which adds to the minimal standard model 1 extra Higgs doublet $`\mathrm{\Phi }_2`$ and 1 charged singlet $`\chi ^+`$. Then the coexistence of the terms $`g_{ij}(\nu _il_j-\nu _jl_i)\chi ^+`$ and $`\mu (\varphi _1^+\varphi _2^0-\varphi _2^+\varphi _1^0)\chi ^-`$ allows the following radiative mass matrix to be obtained:
$$\mathcal{M}_\nu =\left[\begin{array}{ccc}0& f_{\mu e}(m_\mu ^2-m_e^2)& f_{\tau e}(m_\tau ^2-m_e^2)\\ f_{\mu e}(m_\mu ^2-m_e^2)& 0& f_{\tau \mu }(m_\tau ^2-m_\mu ^2)\\ f_{\tau e}(m_\tau ^2-m_e^2)& f_{\tau \mu }(m_\tau ^2-m_\mu ^2)& 0\end{array}\right],$$
(11)
where
$$f_{ij}\simeq \frac{g_{ij}}{16\pi ^2}\frac{\mu \langle \varphi _2^0\rangle }{\langle \varphi _1^0\rangle m_\chi ^2}.$$
(12)
This model has been revived in recent years and may be used to fit the neutrino-oscillation data.
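To see what this texture implies, one can diagonalize an illustrative version of Eq. (11) numerically; the $`f_{ij}`$ values below are arbitrary placeholders of my own (in units of a common scale), chosen only to show the machinery:

```python
import numpy as np

m_e, m_mu, m_tau = 0.511e-3, 0.1057, 1.777        # charged-lepton masses, GeV
f_mue, f_taue, f_taumu = 1.0, 1.0, 1.0            # hypothetical couplings

a = f_mue * (m_mu**2 - m_e**2)
b = f_taue * (m_tau**2 - m_e**2)
c = f_taumu * (m_tau**2 - m_mu**2)
M = np.array([[0, a, b], [a, 0, c], [b, c, 0]])   # Zee texture, Eq. (11)

vals, vecs = np.linalg.eigh(M)    # eigenvalues sum to zero (traceless matrix)
print(vals)
```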
In the above, the mass of the charged scalar $`\chi `$ may be light enough to allow observable contributions to $`\mathrm{\Gamma }(\mu \to e\nu \overline{\nu })`$ at tree level, and to $`\mathrm{\Gamma }(\mu \to eee)`$ in one loop. Hence lepton flavor violating processes may reveal the presence of such a new particle.
## 7 R Parity Nonconserving Supersymmetry
In the minimal supersymmetric standard model, $`R\equiv (-1)^{3B+L+2J}`$ is assumed conserved so that the superpotential is given by
$$W=\mu H_1H_2+f_{ij}^eH_1L_ie_j^c+f_{ij}^dH_1Q_id_j^c+f_{ij}^uH_2Q_iu_j^c,$$
(13)
where $`L_i`$ and $`Q_i`$ are the usual lepton and quark doublets, and
$$H_1=(h_1^0,h_1^-),H_2=(h_2^+,h_2^0)$$
(14)
are the 2 Higgs doublets. If only $`B`$ is assumed to be conserved but not $`L`$, then the superpotential also contains the terms
$$\mu _iL_iH_2+\lambda _{ijk}L_iL_je_k^c+\lambda _{ijk}^{\prime }L_iQ_jd_k^c,$$
(15)
and violates $`R`$. As a result, a radiative neutrino mass $`m_\nu \sim \lambda ^{\prime 2}Am_b^2/(16\pi ^2m_{\stackrel{~}{b}}^2)`$ may be obtained. Furthermore, from the mixing of $`\nu _i`$ with the neutralino mass matrix through the bilinear term $`L_iH_2`$ and the induced vacuum expectation value of $`\stackrel{~}{\nu }_i`$, a tree-level mass $`m_\nu \sim (\mu _i/\mu -\langle \stackrel{~}{\nu }_i\rangle /\langle h_1^0\rangle )^2m_{eff}`$ is also obtained.
## 8 Leptogenesis from R Parity Nonconservation
Whereas lepton-number violating trilinear couplings are able to generate neutrino masses radiatively, they also wash out any preexisting $`B`$ or $`L`$ asymmetry during the electroweak phase transition. On the other hand, successful leptogenesis may still be possible as shown recently.
Assume the lightest and 2nd lightest supersymmetric particles to be
$$\stackrel{~}{W}_3^{\prime }=\stackrel{~}{W}_3-ϵ\stackrel{~}{B},\stackrel{~}{B}^{\prime }=\stackrel{~}{B}+ϵ\stackrel{~}{W}_3,$$
(16)
respectively, where $`\stackrel{~}{W}_3`$ and $`\stackrel{~}{B}`$ are the $`SU(2)`$ and $`U(1)`$ neutral gauginos, and $`ϵ`$ is a very small number. Note that $`\stackrel{~}{B}`$ couples to $`\overline{\tau }_L^c\stackrel{~}{\tau }_L^c`$ but $`\stackrel{~}{W}_3`$ does not, because $`\tau _L^c`$ is trivial under $`SU(2)`$. Assume $`\stackrel{~}{\tau }_Lh^-`$ mixing to be negligible but $`\stackrel{~}{\tau }_L^ch^+`$ mixing to be significant and denoted by $`\xi `$. Obviously, $`\stackrel{~}{\tau }`$ may be replaced by $`\stackrel{~}{\mu }`$ or $`\stackrel{~}{e}`$ in this discussion.
Given the above assumptions, $`\stackrel{~}{B}^{\prime }`$ decays into $`\tau ^{\mp }h^{\pm }`$ through $`\xi `$, whereas $`\stackrel{~}{W}_3^{\prime }`$ decays (also into $`\tau ^{\mp }h^{\pm }`$) are further suppressed by $`ϵ`$. This allows $`\stackrel{~}{W}_3^{\prime }`$ decay to be slow enough to be out of equilibrium with the expansion of the Universe at a temperature $`\sim `$ 2 TeV, and yet have a large enough asymmetry between $`\tau ^-h^+`$ and $`\tau ^+h^-`$ in its decay to obtain $`n_B/n_\gamma \sim 10^{-10}`$. See Figure 1.
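The out-of-equilibrium requirement can be checked against the standard radiation-era expansion rate $`H=1.66\sqrt{g_{\ast }}T^2/M_{Pl}`$; this cross-check and the sample numbers below are my own, not taken from the text:

```python
g_star = 100.0            # assumed relativistic degrees of freedom
M_Pl = 1.22e19            # GeV
T = 2000.0                # GeV: temperature scale quoted in the text

H = 1.66 * g_star**0.5 * T**2 / M_Pl   # Hubble rate at temperature T, in GeV
print(H)   # ~5e-12 GeV: the W3' decay width must lie below this for delay
```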
This unique scenario requires $`\stackrel{~}{W}_3^{\prime }`$ to be lighter than $`\stackrel{~}{B}^{\prime }`$ and that both be a few TeV in mass so that the electroweak sphalerons are still very effective in converting the $`L`$ asymmetry into a $`B`$ asymmetry. It also requires very small mixing between $`\stackrel{~}{\tau }_L`$ with $`h^-`$, which is consistent with the smallness of the neutrino mass required in the phenomenology of neutrino oscillations. On the other hand, the mixing of $`\stackrel{~}{\tau }_L^c`$ with $`h^+`$, i.e. $`\xi `$, should be of order $`10^{-3}`$ which is too large to be consistent with the usual terms of soft supersymmetry breaking. For successful leptogenesis, the nonholomorphic term $`H_2^{\dagger }H_1\stackrel{~}{\tau }_L^c`$ is required.
## 9 Conclusion and Outlook
Models of neutrino mass and mixing invariably lead to other possible physical consequences which are important for our overall understanding of the Universe, as well as other possible experimentally verifiable predictions.
## Acknowledgments
I thank Rahul Basu and the other organizers of WHEPP-6 for their great hospitality and a stimulating meeting. This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837.
# Standard Model Vacua in Heterotic M–Theory
## 1 Introduction
In fundamental work, it was shown by Hořava and Witten that if M–theory is compactified on the orbifold $`S^1/Z_2`$, a chiral $`N=1`$, $`E_8`$ gauge supermultiplet must exist in the twisted sector of each of the two ten-dimensional orbifold fixed planes. It is important to note that, in this theory, the chiral gauge matter is confined solely to the orbifold planes, while pure supergravity inhabits the bulk space between these planes. Thus, Hořava-Witten theory is a concrete and fundamental representation of the idea of a “brane world”.
Witten then showed that, if further compactified to four dimensions on a Calabi–Yau threefold, the $`N=1`$ supersymmetric low–energy theory exhibits realistic gauge unification and gravitational coupling strength provided the Calabi–Yau radius, $`R`$, is of the order of $`10^{16}`$ GeV and that the orbifold radius, $`\rho `$, is larger than $`R`$. Thus, Hořava–Witten theory has a “large” internal bulk dimension, although it is of order the intermediate scale and not the TeV size bulk dimensions, or larger, discussed recently.
When compactifying the Hořava–Witten theory, it is possible that all or, more likely, a subset of the $`E_8`$ gauge fields do not vanish classically in the Calabi–Yau threefold directions. Since these gauge fields “live” on the Calabi–Yau manifold, $`3+1`$-dimensional Lorentz invariance is left unbroken. Furthermore, by demanding that the associated field strengths satisfy the constraints $`F_{ab}=F_{\overline{a}\overline{b}}=g^{a\overline{b}}F_{a\overline{b}}=0`$, $`N=1`$ supersymmetry is preserved. However, these gauge field vacua do spontaneously break the $`E_8`$ gauge group as follows. Suppose that the non-vanishing gauge fields are associated with the generators of a subgroup $`G`$, where $`G\times H\subset E_8`$. Then the $`E_8`$ gauge group is spontaneously broken to $`H`$, which is the commutant subgroup of $`G`$ in $`E_8`$. This mechanism of gauge group breaking allows one, in principle, to reduce the $`E_8`$ gauge group to smaller and phenomenologically more interesting gauge groups such as unification groups $`E_6`$, $`SO(10)`$ and $`SU(5)`$ as well as the standard model gauge group $`SU(3)_C\times SU(2)_L\times U(1)_Y`$. The spontaneous breaking of $`E_8`$ to $`E_6`$ by taking $`G=SU(3)`$ and identifying it with the spin connection of the Calabi–Yau threefold, the so-called “standard embedding”, was discussed in the early literature. A general discussion of non-standard embeddings in this context and their low energy implications was presented subsequently. We will refer to Hořava–Witten theory compactified to lower dimensions with arbitrary gauge vacua as heterotic M–theory.
It is, therefore, of fundamental interest to know, given a Calabi–Yau threefold $`X`$, what non-Abelian gauge field vacuum configurations associated with a subgroup $`G\subset E_8`$ can be defined on it. One approach to this problem is to simply attempt to solve the six-dimensional Yang–Mills equations with the appropriate boundary conditions subject to the above constraints on the field strengths. However, given the complexity of Calabi–Yau threefolds, this approach becomes very difficult at best and is probably untenable. One, therefore, must look for an alternative construction of these Yang-Mills connections. Such an alternative was presented by Donaldson and Uhlenbeck and Yau, who recast the problem in terms of holomorphic vector bundles. These authors prove that for every semi-stable holomorphic vector bundle with structure group $`G`$ over $`X`$, there exists a solution to the six-dimensional Yang–Mills equations satisfying the above constraints on the field strengths, and conversely. Thus, the problem of determining the allowed gauge vacua on a Calabi–Yau threefold is replaced by the problem of constructing semi–stable holomorphic vector bundles over the same threefold.
It was shown in recent publications, relying heavily on work on holomorphic vector bundles by several authors, that a wide class of semi-stable holomorphic vector bundles with structure groups $`SU(n)\subset E_8`$ can be explicitly constructed over elliptically fibered Calabi–Yau threefolds. The restriction to $`SU(n)`$ subgroups was for simplicity, other structure subgroups being possible as well. Thus, using holomorphic vector bundles and the Donaldson, Uhlenbeck, Yau theorem, it has been possible to classify and give the properties of a large class of $`SU(n)`$ gauge vacua even though the associated solutions of the Yang–Mills equations are unknown.
As presented in those publications, three–family vacua with phenomenologically interesting unification groups such as $`E_6`$, $`SO(10)`$ and $`SU(5)`$ could be obtained, corresponding to vector bundle structure groups $`SU(3)`$, $`SU(4)`$ and $`SU(5)`$ respectively. However, it was not possible to break $`E_8`$ directly to the standard gauge group $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ in this manner. A natural solution to this problem is to use non-trivial Wilson lines to break the GUT group down to the standard gauge group. This requires that the fundamental group of the Calabi–Yau threefold be non-trivial. Unfortunately, one can show that all elliptically fibered Calabi–Yau threefolds are simply connected, with the exception of such threefolds over an Enriques base which, however, is not consistent with the requirement of three families of quarks and leptons.
With this in mind, recall that an elliptic fibration is simply a torus fibration that admits a zero section. We were able to show that it is the requirement of a zero section that severely restricts the fundamental group of the threefold to be, modulo the one exception mentioned above, trivial. Hence, if one lifts the zero section requirement, and considers holomorphic vector bundles over torus-fibered Calabi–Yau threefolds without section, then one expects to find non-trivial first homotopy groups and Wilson lines in vacua that are consistent with the three-family requirement. In previous work we gave the relevant mathematical properties of a specific class of torus-fibered Calabi–Yau threefolds without section and constructed holomorphic vector bundles over such threefolds. We then used these results to explicitly construct a number of three-family vacua with unification group $`SU(5)`$ which is spontaneously broken to the standard gauge group $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ by Wilson lines.
These results represent $`N=1`$ “standard” models of particle physics derived directly from M–theory. Each of these vacua has three families of chiral quarks and leptons coupled to the standard $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ gauge group. As discussed above, this “observable sector” lives on a $`3+1`$ dimensional “brane world”. It was shown there that this $`3+1`$ dimensional space is the worldvolume of a BPS three–brane. It is separated from a “hidden sector” three–brane by a bulk space with an intermediate scale “large” extra dimension. The requirement of three families, coupled to the fundamental condition of anomaly freedom and supersymmetry, constrains the theory to admit an effective class describing the wrapping of additional five-branes around holomorphic curves in the Calabi–Yau threefold. These five-branes “live” in the bulk space and represent new, non-perturbative aspects of particle physics vacua.
In this talk, we present the rules for building phenomenological particle physics “standard” models in heterotic M-theory on torus-fibered Calabi–Yau threefolds without section realized as quotient manifolds $`Z=X/\tau _X`$. These quotient threefolds have a non-trivial first homotopy group $`\pi _1(Z)=\mathbb{Z}_2`$. Specifically, we construct three-family particle physics vacua with GUT group $`SU(5)`$. Since $`\pi _1(Z)=\mathbb{Z}_2`$, these vacua have Wilson lines that break $`SU(5)`$ to the standard $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ gauge group. We then present several explicit examples of these “standard” model vacua for the base surface $`B=F_2`$ of the torus fibration. We refer the reader to the full paper for the mathematical details and a wider set of examples, including the base $`B=dP_3`$.
## 2 Rules for Realistic Particle Physics Vacua
In this section, we give the rules required to construct realistic particle physics vacua, restricting our results to vector bundles with structure group $`SU(n)`$ for $`n`$ odd. The rules presented here lead to $`N=1`$ supersymmetric theories with three families of quarks and leptons with the standard model gauge group $`SU(3)_C\times SU(2)_L\times U(1)_Y`$.
The first set of rules deals with the selection of the elliptically fibered Calabi–Yau threefold $`X`$ with two sections, the choice of the involution and constraints on the vector bundles, such that the bundles descend to vector bundles on $`Z=X/\tau _X`$. If one were using this construction to construct vector bundles for each of the two $`E_8`$ groups in Hořava-Witten theory, then this first set of constraints is applicable to each bundle individually. The rules are
* Two Section Condition: Choose an elliptically fibered Calabi–Yau threefold $`X`$ which admits two sections $`\sigma `$ and $`\xi `$. This is done by selecting the base manifold $`B`$ of $`X`$ to be a 1) del Pezzo, 2) Hirzebruch, 3) blown-up Hirzebruch or 4) an Enriques surface. The threefold $`X`$ with two sections is then specified by its Weierstrass model with an explicit choice of
$$g_2=4(a^2-b),g_3=4ab.$$
(2.1)
The discriminant is then given by
$$\mathrm{\Delta }=\mathrm{\Delta }_1\mathrm{\Delta }_2^2,$$
(2.2)
where
$$\mathrm{\Delta }_1=a^2-4b,\mathrm{\Delta }_2=4(2a^2+b).$$
(2.3)
* Choice of Involution: Using the properties of the base, explicitly specify an involution $`\tau _B`$ on $`B`$. Now choose sections $`a`$ and $`b`$ to be invariant under $`\tau _B`$. This allows one to construct an involution $`\tau _X`$ on $`X`$. Find the set of fixed points $`\mathcal{F}_{\tau _B}`$ under $`\tau _B`$ and show that
$$\mathcal{F}_{\tau _B}\cap \{\mathrm{\Delta }=0\}=\emptyset .$$
(2.4)
* Bundle Constraint: Consider semi-stable holomorphic vector bundles $`V`$ over $`X`$. To construct any such vector bundle one must specify a divisor class $`\eta `$ in the base $`B`$ as well as coefficients $`\lambda `$ and $`\kappa _i`$. These coefficients satisfy
$$\lambda \in \mathbb{Z}+\frac{1}{2},\kappa _i=\frac{1}{2}m,$$
(2.5)
with $`m`$ an integer. Furthermore, we must have that
$$\eta \text{ is effective}$$
(2.6)
as a class on $`B`$.
* Bundle Involution Condition: In order for $`V`$ to descend to a vector bundle $`V_Z`$ over $`Z`$, the class $`\eta `$ in $`B`$ and the coefficients $`\kappa _i`$ must satisfy the constraints
$$\tau _B(\eta )=\eta ,\qquad \sum _i\kappa _i=\eta \cdot c_1.$$
(2.7)
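As a quick cross-check of the Weierstrass data above, the short sympy sketch below verifies that, with the standard discriminant convention $\Delta =g_2^3-27g_3^2$ (an assumption of this sketch), the choice (2.1) factors exactly as in (2.2)-(2.3):

```python
# Symbolic check (sympy) that g2^3 - 27 g3^2 factors as Delta_1 * Delta_2^2
# for the two-section choice (2.1); the discriminant convention
# Delta = g2^3 - 27 g3^2 is an assumption of this sketch.
import sympy as sp

a, b = sp.symbols('a b')
g2, g3 = 4*(a**2 - b), 4*a*b
Delta1, Delta2 = a**2 - 4*b, 4*(2*a**2 + b)

# The difference expands to zero identically.
print(sp.expand(g2**3 - 27*g3**2 - Delta1*Delta2**2))   # -> 0
```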
The second set of rules is directly particle physics related. The first of these is the requirement that the theory have three families of quarks and leptons. The number of generations associated with the vector bundle $`V_Z`$ over $`Z`$ is given by
$$N_{\text{gen}}=\frac{1}{2}c_3(V_Z).$$
(2.8)
Requiring $`N_{\text{gen}}=3`$ leads to the following rule for the associated vector bundle $`V`$ over $`X`$.
* Three-Family Condition: To have three families we must require
$$6=\lambda \eta (\eta -nc_1).$$
(2.9)
The second such rule is associated with the anomaly cancellation requirement that
$$[W_Z]=c_2(TZ)-c_2(V_{Z1})-c_2(V_{Z2}),$$
(2.10)
where $`[W_Z]`$ is the class associated with non-perturbative five-branes in the bulk space of the Hořava-Witten theory. Vector bundles $`V_{Z1}`$ and $`V_{Z2}`$ are located on the “observable” and “hidden” orbifold planes respectively. In this talk, for simplicity, we will always take $`V_{Z2}`$ to be the trivial bundle. Hence, gauge group $`E_8`$ remains unbroken on the “hidden” sector, $`c_2(V_{Z2})`$ vanishes and condition (2.10) simplifies accordingly. Using the definition
$$[W_Z]=\frac{1}{2}q_{*}[W],$$
(2.11)
condition (2.10) can be pulled-back onto $`X`$ to give
$$[W]=c_2(TX)-c_2(V).$$
(2.12)
It follows that
$$[W]=\sigma _{*}W_B+c(F-N)+dN$$
(2.13)
where
$$W_B=12c_1-\eta $$
(2.14)
and
$$c=c_2+\left(\frac{1}{24}(n^3-n)+11\right)c_1^2-\frac{1}{2}\left(\lambda ^2-\frac{1}{4}\right)n\eta \left(\eta -nc_1\right)-\sum _i\kappa _i^2,$$
(2.15)
$$d=c_2+\left(\frac{1}{24}(n^3-n)-1\right)c_1^2-\frac{1}{2}\left(\lambda ^2-\frac{1}{4}\right)n\eta \left(\eta -nc_1\right)-\sum _i\kappa _i^2+\sum _i\kappa _i.$$
(2.16)
The class $`[W_Z]`$ must represent an actual physical holomorphic curve in the Calabi–Yau threefold $`Z`$ since physical five-branes are required to wrap around it. Hence, $`[W_Z]`$ must be an effective class and, hence, its pull-back $`[W]`$ is an effective class in the covering threefold $`X`$. This leads to the following rule.
* Effectiveness Condition: For $`[W]`$ to be an effective class, we require
$$W_B\text{ is effective in }B,\qquad c\geq 0,\qquad d\geq 0.$$
(2.17)
Finally, consider subgroups of $`E_8`$ of the form
$$G\times H\subset E_8.$$
(2.18)
If $`G`$ is chosen to be the structure group of the vector bundle, then, naively, one would expect the commutant subgroup $`H`$ to be the subgroup preserved by the bundle. However, Rajesh, Berglund and Mayr have shown that this will be the case if and only if the vector bundle satisfies a further constraint. If this constraint is not satisfied, then the actual preserved subgroup of $`E_8`$ will be larger than $`H`$. Although not strictly necessary, we find it convenient in model building to demand that this constraint hold.
* Stability Constraint: Let $G\times H\subset E_8$ and $G$ be the structure group of the vector bundle. Then $H$ will be the largest subgroup preserved by the bundle if and only if
$$\eta >nc_1.$$
(2.19)
If one follows the above rules, then the vacua will correspond to a grand unified theory with unification group $H$ and three families of quarks and leptons. In this talk, we will only consider the maximal subgroup $SU(5)\times SU(5)\subset E_8$. We then choose
$$G=SU(5).$$
(2.20)
Therefore, the unification group will be
$$H=SU(5).$$
(2.21)
However, these vacua correspond to vector bundles over the quotient torus-fibered Calabi–Yau threefold $`Z`$ which has non-trivial homotopy group
$$\pi _1(Z)=\mathbb{Z}_2.$$
(2.22)
It follows that the GUT group will be spontaneously broken to the standard model gauge group
$$SU(5)SU(3)_C\times SU(2)_L\times U(1)_Y,$$
(2.23)
if we adopt the following rule.
* Standard Gauge Group Condition: Assume that the bundle contains a non-vanishing Wilson line with generator
$$\mathcal{G}=\left(\begin{array}{cc}\mathbf{1}_3& \\ & -\mathbf{1}_2\end{array}\right).$$
(2.24)
Armed with the above rules, we now turn to the explicit construction of phenomenologically relevant non-perturbative vacua.
## 3 Three Family Models
We begin by choosing the base of the Calabi–Yau threefold to be the Hirzebruch surface
$$B=F_2.$$
(3.1)
As discussed in the Appendix of , the Hirzebruch surfaces are $\mathbb{P}^1$ fibrations over $\mathbb{P}^1$. There are two independent classes on $F_2$, the class of the base $\mathcal{S}$ and of the fiber $\mathcal{E}$. Their intersection numbers are
$$\mathcal{S}\cdot \mathcal{S}=-2,\qquad \mathcal{S}\cdot \mathcal{E}=1,\qquad \mathcal{E}\cdot \mathcal{E}=0.$$
(3.2)
The first and second Chern classes of $`F_2`$ are given by
$$c_1(F_2)=2\mathcal{S}+4\mathcal{E},$$
(3.3)
and
$$c_2(F_2)=4.$$
(3.4)
We now need to specify the involution $\tau _B$ on the base and how it acts on the classes on $B$. We recall that there is a single type of involution on $\mathbb{P}^1$. If $(u,v)$ are homogeneous coordinates on $\mathbb{P}^1$, the involution can be written as $(u,v)\rightarrow (-u,v)$. This clearly has two fixed points, namely the origin $(0,1)$ and the point at infinity $(1,0)$ in the $u$-plane. To construct the involution $\tau _B$, we combine an involution on the base $\mathbb{P}^1$ with one on the fiber $\mathbb{P}^1$. Thus $\mathcal{F}_{\tau _B}$ contains four fixed points.
To ensure that we can construct a freely acting involution $`\tau _X`$ from $`\tau _B`$, we need to show that the discriminant curve can be chosen so as not to intersect these fixed points. We recall that the two components of the discriminant curve are given by
$$\mathrm{\Delta }_1=a^2-4b,\qquad \mathrm{\Delta }_2=4\left(2a^2+b\right),$$
(3.5)
and that the parameters $a$ and $b$ are sections of $K_B^{-2}$ and $K_B^{-4}$ respectively, where $K_B$ is the canonical bundle of the base. In order to lift $\tau _B$ to an involution of $X$, we require that
$$\tau _B(a)=a,\tau _B(b)=b.$$
(3.6)
This restricts the allowed sections $`a`$ and $`b`$ and, consequently, the form of $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$. One can show that, for a generic choice of $`a`$ and $`b`$ satisfying conditions (3.6), there is enough freedom so that the discriminant curves do not intersect any of the fixed points.
We now want to consider curves $\eta $ in $F_2$ that are invariant under the involution $\tau _B$. This can be done by first determining how this involution acts on the effective classes. We find that the involution preserves both $\mathcal{S}$ and $\mathcal{E}$ separately, so that
$$\tau _B(\mathcal{S})=\mathcal{S},\qquad \tau _B(\mathcal{E})=\mathcal{E}.$$
(3.7)
Since any class $\eta $ is a linear combination of $\mathcal{S}$ and $\mathcal{E}$, we see that an arbitrary $\eta $ satisfies $\tau _B(\eta )=\eta $.
We can now search for $`\eta `$, $`\lambda `$ and $`\kappa _i`$ satisfying the three family, effectiveness and stability conditions given above. We find that there are two classes of solutions
solution 1: $\eta =14\mathcal{S}+22\mathcal{E},\quad \lambda =\frac{3}{2},$ (3.8)
$\sum _i\kappa _i=\eta \cdot c_1=44,\quad \sum _i\kappa _i^2\leq 60,$
solution 2: $\eta =24\mathcal{S}+30\mathcal{E},\quad \lambda =-\frac{1}{2},$
$\sum _i\kappa _i=\eta \cdot c_1=60,\quad \sum _i\kappa _i^2\leq 76.$
First note that the coefficients $\lambda $ satisfy the bundle constraint (2.5). Furthermore, one can find many examples of $\kappa _i$ with $i=1,\ldots ,4\eta \cdot c_1$, satisfying the bundle constraint (2.5), the given conditions on $\sum _i\kappa _i^2$ and the invariance condition $\sum _i\kappa _i=\eta \cdot c_1$.
Using $`n=5`$, (3.3), (3.8) and the intersection relations (3.2), one can easily verify that both solutions satisfy the three-family condition (2.9).
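For readers who wish to reproduce this, here is a short numerical sketch of that verification, encoding the intersection form (3.2) as a matrix on the basis $\{\mathcal{S},\mathcal{E}\}$:

```python
# Verify the three-family condition (2.9) and the kappa sum rule for both
# solutions (3.8), using the F_2 intersection numbers (3.2).
import numpy as np

Q = np.array([[-2, 1], [1, 0]])                  # S.S=-2, S.E=1, E.E=0
dot = lambda x, y: np.array(x) @ Q @ np.array(y)

c1, n = np.array([2, 4]), 5                      # c_1(F_2) = 2S + 4E
for eta, lam in [(np.array([14, 22]), 1.5), (np.array([24, 30]), -0.5)]:
    print(lam * dot(eta, eta - n*c1))            # -> 6.0 for both solutions
    print(dot(eta, c1))                          # -> 44 and 60, as in (3.8)
```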
Next, from (2.13), (2.14), (2.15) and (2.16), as well as $`n=5`$, (3.3), (3.4), (3.8) and the intersection relations (3.2), we can calculate the five-brane curves $`W`$ associated with each of the solutions. We find that
solution 1: $[W]=\sigma _{*}\left(10\mathcal{S}+26\mathcal{E}\right)+\left(112-k\right)\left(F-N\right)+\left(60-k\right)N,$ (3.9)
solution 2: $[W]=\sigma _{*}\left(18\mathcal{E}\right)+\left(132-k\right)\left(F-N\right)+\left(76-k\right)N,$
where
$$k=\sum _i\kappa _i^2$$
(3.10)
It follows that the base components for $`[W]`$ are given by
solution 1: $W_B=10\mathcal{S}+26\mathcal{E},$ (3.11)
solution 2: $W_B=18\mathcal{E},$
which are both effective. Furthermore, we note that for each five-brane curve the $c$ and $d$ coefficients of the classes $F-N$ and $N$ respectively are non-negative integers (given the constraints on $k$). Hence, the effectiveness condition (2.17) is satisfied.
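The base components and the $(F-N)$ coefficients can be spot-checked the same way; the sketch below reproduces $W_B=12c_1-\eta $ and the values $112$ and $132$ in (3.9), with the $-\sum _i\kappa _i^2$ piece kept symbolic as $-k$:

```python
# Reproduce W_B and the c coefficient (2.15) for both solutions, using
# c_2(F_2) = 4 and n = 5; the -k term is omitted and understood.
import numpy as np

Q = np.array([[-2, 1], [1, 0]])
dot = lambda x, y: np.array(x) @ Q @ np.array(y)
c1, c2, n = np.array([2, 4]), 4, 5

for eta, lam in [(np.array([14, 22]), 1.5), (np.array([24, 30]), -0.5)]:
    WB = 12*c1 - eta
    c_coeff = (c2 + ((n**3 - n)//24 + 11)*dot(c1, c1)
               - 0.5*(lam**2 - 0.25)*n*dot(eta, eta - n*c1))
    print(WB, c_coeff)      # -> [10 26] 112.0, then [ 0 18] 132.0
```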
Finally, note that for $`n=5`$ the stability condition becomes $`\eta >5c_1`$. In both of the above solutions
$$\eta >5c_1=10\mathcal{S}+20\mathcal{E}$$
(3.12)
so that the stability condition is satisfied. Note that this condition is consistent with the somewhat stronger condition used in since $`\eta `$ and $`c_1`$ have integer coefficients.
We conclude that, over a Hirzebruch base $B=F_2$, one can construct torus-fibered Calabi–Yau threefolds, $Z$, without section with non-trivial first homotopy group $\pi _1(Z)=\mathbb{Z}_2$. Assuming a trivial gauge vacuum on the hidden brane, we have shown that these threefolds admit precisely two classes of semi-stable holomorphic vector bundles $V_Z$, given in (3.8), associated with an $N=1$ supersymmetric theory with three families of chiral quarks and leptons and GUT group $H=SU(5)$ on the observable brane world. Since $\pi _1(Z)=\mathbb{Z}_2$, Wilson lines break this GUT group as
$$SU(5)SU(3)_C\times SU(2)_L\times U(1)_Y,$$
(3.13)
to the standard model gauge group. Anomaly cancellation and supersymmetry require the existence of non-perturbative five-branes in the extra dimension of the bulk space. These five-branes are wrapped on holomorphic curves in $`Z`$ whose homology classes, (3.9), are exactly calculable.
### Acknowledgments
R. Donagi is supported in part by an NSF grant DMS-9802456 as well as a University of Pennsylvania Research Foundation Grant. B.A. Ovrut is supported in part by a Senior Alexander von Humboldt Award, by the DOE under contract No. DE-AC02-76-ER-03071 and by a University of Pennsylvania Research Foundation Grant. T. Pantev is supported in part by an NSF grant DMS-9800790 and by an Alfred P. Sloan Research Fellowship. |
# From time inversion to nonlinear QED

(Published in Foundations of Physics, Vol. 30, No. 11, 2000.)
## 1. INTRODUCTION
Decades ago, Wigner first pointed out the significance of time inversion and later made detailed discussions on antiunitary time inversion (Wigner ). His theory about time inversion is based on a classical motion picture (Wigner ): “Time inversion … replaces every velocity by the opposite velocity, so that the position of particles at $+t$ becomes the same as it was, without time inversion at $-t$. …”
Also well known is Einstein’s relativity theory (Einstein ) that completely changes our perception of space and time, as best manifested by Minkowski’s “world-postulate” (see Minkowski’s paper in ). Minkowski unifies space and time by introducing time as an independent coordinate in addition to three space coordinates. Usually “Minkowski space-time” refers to four-dimensional flat space-time in special relativity, in which relativistic quantum mechanics and quantum electrodynamics (QED) are established.
Throughout the development of quantum mechanics and relativity, there have always been heated debates on the physical meaning of the theories. Dirac, however, stayed out of such debates. Instead he had been looking for mathematical possibilities to reconcile quantum mechanics and relativity. As a consequence, he founded relativistic quantum mechanics with a relativistic wave equation called the “Dirac equation” (Dirac ). However, the Dirac equation was also not perfect. In the following year, Klein found a paradox in the Dirac equation. Now, seven decades later, the “Klein paradox” has still not been resolved mathematically, though it can be explained away by physical reasoning.
This paper is organized as follows. Given that a concept that is correct in Newtonian classical mechanics may not necessarily be correct in Einsteinian relativistic mechanics, I first show that the above mentioned classical motion picture is just one of those concepts, correct classically but not relativistically. Then I proceed to clarify what it means to make a time inversion in special relativity. Based on the general principles of quantum mechanics and special relativity, it can be shown that time inversion turns out to be a unitary transformation, with energy being a time vector that changes sign under time inversion in Minkowski space-time.
With unitary time inversion understood, the next step is to study the transformation of the Dirac equation. For clarity, let us name the Dirac equation in the case of electromagnetic interaction the Dirac EM-equation, to distinguish it from the Dirac equation in other cases. What happens is: the Dirac EM-equation is not invariant under unitary time inversion. As a direct consequence of this non-invariance, the Dirac EM-equation has a non-symmetric positive-negative energy spectrum with a mathematical singularity that causes a “Klein paradox”. Here in this paper, I do not intend to argue too much about whether or not the “Klein paradox” has physical meaning. Rather, I hold the point of view that we would be better off from the outset without mathematical singularities.
To implement unitary time inversion, I come up with a revised equation that looks similar to the Dirac EM-equation on the one hand, while differing from the Dirac EM-equation in several aspects on the other. The main difference is: the revised equation preserves the invariance under unitary time inversion in addition to the invariance under the other Lorentz transformations. Consequently, it gives a positive-negative symmetric energy spectrum without a singular crossing point. In the case of Coulomb interaction, the energy bound states for Hydrogen-like ions are solved without singularity when the nuclear number $Z>137$, and the order of some energy levels is found to be opposite to the conventional one, which calls for further high-precision tests in experiments such as the Zeeman effect.
Another difference is: the Dirac EM-equation, involving the external electromagnetic potential with minimal coupling, is linear in the fermion field, while the revised wave equation, involving the electro-dynamical interaction potential, is nonlinear in the fermion field. In the non-relativistic limit, the Dirac EM-equation reduces to the linear Schrödinger equation, while the revised nonlinear wave equation reduces to the nonlinear Schrödinger equation with soliton-like solutions well known in nonlinear physics.
Typically, when the finite size of the fermion is considered, the interaction potential can be generalized to a convolution between four-current and four-potential, with an integral over the finite four-dimensional size of the fermion. Using such a nonsingular convolution can avoid the singularity and infinity troubles in calculating the self-energy of the electron and the divergent integrals of Feynman diagrams.
The gauge invariance is to be discussed at the end. To preserve Lorentz invariance including the invariance under unitary time inversion, it can be shown that the gauge function is of a particular form, not arbitrary, and the only gauge condition is the Lorentz gauge. It is also straightforward to write a nonlinear Lagrangian from which the Maxwell electromagnetic field equation and the revised nonlinear fermion field equation are derived. With the basic principles of quantum field theory, we may then establish a nonlinear QED that would show many interesting applications in experiment. What I am actually after is to make necessary improvements on QED in its own framework, namely in the language of space-time, without further drastic changes as in superstring theory. With this said, let me now turn to more detailed discussions.
## 2. UNITARY TIME INVERSION
Usually under a coordinate transformation $`x^{}=ax`$, the transformation of wave function in quantum mechanics is defined by
$$\mathrm{A}(a)\mathrm{\Psi }(x)=\mathrm{\Psi }(ax).$$
$`(2.1)`$
But antiunitary time inversion is defined by (see Bjorken and Drell )
$$\mathrm{T}_a\Psi (t,\mathbf{x})=\Psi ^{*}(-t,\mathbf{x}).$$
$`(2.2)`$
where the extra complex conjugation $*$ on the wave function is used to comply with the assumption that under time inversion $(t,\mathbf{x})\rightarrow (-t,\mathbf{x})$, energy and momentum transform as $(E,\mathbf{p})\rightarrow (E,-\mathbf{p})$. In this case, the phase of a plane wave $\varphi =\mathbf{p}\cdot \mathbf{x}-Et$ (natural units are used in this paper), is not invariant under antiunitary time inversion. This also leads to another result that the imaginary unit $i$ has to be supposed to change to $-i$ under antiunitary time inversion, while the constant $i$ has nothing to do with time.
In quantum mechanics, the group velocity of a wave packet is expressed by $`d𝐱/dt=dE(𝐩)/d𝐩`$. By making a positive non-relativistic energy assumption $`E(𝐩)=𝐩^2/2m`$, one can then get a classical correspondence $`d𝐱/dt=𝐩/m=𝐯`$, from which the classical motion picture comes. The point is: in the derivation of classical mechanics velocity from quantum mechanics group velocity, the positive energy assumption has been taken, which may not be correct under time inversion in special relativity. It would be logically odd to discuss symmetry under time inversion based on a truncated classical motion picture that has already broken the symmetry.
Now that quantum mechanics is more general than classical mechanics, it is better not to impose any properties on time inversion classically until we obtain certain results from quantum mechanics. As we understand, time inversion is nothing but a kind of coordinate transformation in space-time, and it is natural to adopt the general definition (2.1) for time inversion rather than to follow the truncated classical motion picture. Therefore, putting the general principles of quantum mechanics in the first place, we define time inversion as:
$$\mathrm{T}\Psi (t,\mathbf{x})=\Psi (-t,\mathbf{x}).$$
$`(2.3)`$
For a plane wave $\Psi (t,\mathbf{x})=C(\mathbf{p},E)\mathrm{exp}[i(\mathbf{p}\cdot \mathbf{x}-Et)]$, it is clear that the expectation value of the space momentum operator $\hat{\mathbf{p}}=-i\partial _{\mathbf{x}}$ remains the same and that of the energy operator $\hat{E}=i\partial _t$ changes sign under time inversion. Any wave function can be Fourier-transformed into a combination of plane waves. So it is easy to check these results hold for any kind of matter waves.
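This can be made concrete with a one-line symbolic computation; the sketch below (sympy) applies $\hat{E}=i\partial _t$ and $\hat{\mathbf{p}}=-i\partial _{\mathbf{x}}$ to a one-dimensional plane wave before and after the substitution $t\rightarrow -t$:

```python
# Acting with E_hat = i d/dt and p_hat = -i d/dx on a plane wave before and
# after the unitary time inversion (2.3): the energy eigenvalue flips sign,
# the momentum eigenvalue does not.
import sympy as sp

t, x, E, p = sp.symbols('t x E p', real=True)
psi = sp.exp(sp.I*(p*x - E*t))
psiT = psi.subs(t, -t)                           # T psi(t,x) = psi(-t,x)

print(sp.simplify(sp.I*sp.diff(psi, t)/psi))     # ->  E
print(sp.simplify(sp.I*sp.diff(psiT, t)/psiT))   # -> -E
print(sp.simplify(-sp.I*sp.diff(psiT, x)/psiT))  # ->  p
```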
In 4-d space-time, the space-time interval squared $\Delta X_\mu \Delta X^\mu =g^{\mu \nu }\Delta X_\mu \Delta X_\nu $ is Lorentz invariant, where $g^{00}=-g^{ii}=1\ (i=1,2,3)$ and $g^{\mu \nu }=0\ (\mu \neq \nu )$. Mathematically, if the contravariant four-coordinate transforms as $X^{\prime \mu }=a_\nu ^\mu X^\nu $, then the covariant four-coordinate transforms in the way $X_\mu ^{\prime }=a_\mu ^\nu X_\nu $. Since $\Delta X_\mu ^{\prime }\Delta X^{\prime \mu }=\Delta X_\mu \Delta X^\mu $, we get $a^{-1}=ga^{\mathrm{T}}g$ where $a^{\mathrm{T}}$ is the transpose of $a$, leading to $det(a^{-1})=det(a)=\pm 1$ for proper and improper Lorentz transformations respectively. On the other hand, the phase of a plane wave $\varphi =P_\mu X^\mu $, characterizing the physical correlation of space-time world-points, is also invariant under homogeneous Lorentz transformations, therefore four-momentum must transform in a covariant way $P_\mu ^{\prime }=a_\mu ^\nu P_\nu $. In the case of time inversion $(t,\mathbf{x})\rightarrow (-t,\mathbf{x})$, one naturally finds the energy-momentum transformation
$$(E,\mathbf{p})\rightarrow (-E,\mathbf{p}).$$
$`(2.4)`$
Moreover, if a plane wave with a constant four-momentum $P_\mu =(E,\mathbf{p})$ travels from space-time point 1 to 2, the phase difference between them, $\Delta \varphi =\varphi _2-\varphi _1$, describing the causality of this process, is an invariant under all inhomogeneous Lorentz transformations $X^{\prime \mu }=a_\nu ^\mu X^\nu +b^\mu $. The phase differences between points 1 and 2 before and after space-time inversions can be written down as: $\Delta \varphi =\mathbf{p}\cdot (\mathbf{x_2}-\mathbf{x_1})-E(t_2-t_1)$ and $\Delta \varphi ^{\prime }=\mathbf{p}^{\prime }\cdot (\mathbf{x_2}^{\prime }-\mathbf{x_1}^{\prime })-E^{\prime }(t_2^{\prime }-t_1^{\prime })$. Since $\Delta \varphi ^{\prime }=\Delta \varphi $, we obtain $\mathbf{p}^{\prime }=-\mathbf{p}$ under space inversion $\mathbf{x_2}^{\prime }-\mathbf{x_1}^{\prime }=-(\mathbf{x_2}-\mathbf{x_1})$, and $E^{\prime }=-E$ under time inversion $t_2^{\prime }-t_1^{\prime }=-(t_2-t_1)$. It is such a Lorentz invariant physical phase space that determines the unique energy-momentum transformation $(E,\mathbf{p})\rightarrow (-E,\mathbf{p})$ under time inversion.
Actually the four-vector momentum $P_\mu =(E,\mathbf{p})$ consists of the time component $E$ and the space component $\mathbf{p}$. While momentum $\mathbf{p}$ is usually known as a space vector changing sign under space inversion, energy $E$ may be similarly regarded as a time vector changing sign under time inversion. The idea of the uncertainty principles in quantum mechanics, $\Delta \mathbf{p}\Delta \mathbf{x}\sim 1$ and $\Delta E\Delta t\sim 1$, is that the strict measurement of momentum and position as well as that of energy and time can not be taken simultaneously since they do not commute, but they do have a close relationship with each other. The determination of momentum is tightly related to that of position, not to that of time. Similarly the measurement of energy is directly connected to that of time, not to that of position. This one-to-one correspondence reveals the following fact that 4-d space-time and energy-momentum, combined into a Lorentz invariant phase space, are reciprocal systems relative to each other
$$P_\mu =i\partial _\mu ,$$
$`(2.5)`$
with a commutator relation $`[X^\mu ,P_\mu ]=i`$.
The Fourier transformation used in quantum mechanics also shows a good example of this one-to-one correspondence. According to this correspondence, we prefer to call the 4-d energy-momentum system the “reciprocal world” of space-time. Generally speaking, this is only a question of using different representations or interchangeable languages. There is nothing special, in the sense of relativistic covariance, in that producing an inversion in the space-time world is equivalent to making a corresponding inversion in its energy-momentum reciprocal world, by the hint of (2.5). This idea now enables us to understand why momentum switches sign under space inversion while energy reverses sign under time inversion.
The physical picture becomes clear if in a “local framework”, we do not impose absolute future and past concepts by setting a standard clock A but rather reverse time axis by setting another clock B running counterclockwise. If a plane wave is propagating toward the future with a positive (or negative) frequency by the standard clock A, then it can also be equivalently looked upon as propagating toward the past with a negative (or positive) frequency by the other clock B. On the principle of special relativity, it is arbitrary in setting clocks in locally flat space-time. Therefore this plane wave may have either positive or negative energy depending on which clock we use.
By the correspondence between space-time and energy-momentum worlds, if we indeed want to introduce a time inversion concept in Lorentz group, then it is natural to introduce a corresponding “energy inversion” concept to map these two worlds completely. Once again we would like to emphasize, based on what we try to clarify here from the intrinsic structures of space-time world and energy-momentum reciprocal world, and from the general principles of special relativity and quantum mechanics, that in Minkowski flat space-time, energy is not a scalar any more, it is a “time vector” changing sign under time inversion.
Similar to space inversion, by definition (2.3) it is easy to prove: (a) T is linear; (b) $\mathrm{T}^2=1$; (c) $\mathrm{T}^{\dagger }=\mathrm{T}$; (d) $\mathrm{T}^{*}=\mathrm{T}$. So the time inversion operator is unitary
$$\mathrm{T}^{-1}=\mathrm{T}=\mathrm{T}^{\dagger }.$$
$`(2.6)`$
To find its eigenvalues let T$`\mathrm{\Psi }=\lambda \mathrm{\Psi }`$. From relation (b) we get $`\lambda =\pm 1`$ representing even and odd “time parities” respectively.
For a free relativistic particle with an energy-momentum relation $E^2-\mathbf{p}^2=m^2$, the common eigenfunctions of the momentum operator $\hat{\mathbf{p}}=-i\partial _{\mathbf{x}}$ and energy operator $\hat{E}=i\partial _t$ are expressed by $\Phi _\pm (t,\mathbf{x})=C(\mathbf{p},E)\varphi _{\mathbf{p}}(\mathbf{x})\mathrm{exp}(\pm iEt)$. Obviously they are not eigenfunctions of the time inversion operator T since T does not commute with $\hat{E}$: $[\mathrm{T},\hat{E}]\neq 0$. Now make linear combinations: $\Psi _+=(\Phi _++\Phi _-)/2$ and $\Psi _-=(\Phi _+-\Phi _-)/2i$, which are the common eigenstates of $\hat{\mathbf{p}}$ and T but not those of $\hat{E}$ any more. Generally any state wave function can be divided by $\Psi =\Psi _++\Psi _-$, where the even and odd time parity eigenstates are $\Psi _\pm =(1\pm \mathrm{T})\Psi /2$ respectively. And any operator can be expressed by $\hat{W}=\hat{W}_++\hat{W}_-$, where $\hat{W}_\pm =(\hat{W}\pm \mathrm{T}\hat{W}\mathrm{T})/2$ and $\mathrm{T}\hat{W}_\pm \mathrm{T}=\pm \hat{W}_\pm $. Here $\hat{W}_+$ represents an even time parity operator such as momentum $\hat{\mathbf{p}}=-i\partial _{\mathbf{x}}$ and $\hat{W}_-$ represents an odd time parity operator such as energy $\hat{E}=i\partial _t$.
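As an illustration of the time-parity construction above, the following sympy sketch confirms that $\Psi _\pm $ are eigenstates of T with eigenvalues $\pm 1$; the spatial factor $\varphi _{\mathbf{p}}$ is kept as an abstract symbol since T does not act on it:

```python
# Confirm T Psi_+ = +Psi_+ and T Psi_- = -Psi_- for the combinations built
# from Phi_{+-} = phi_p exp(+-iEt); phi_p is an abstract spatial factor.
import sympy as sp

t, E = sp.symbols('t E', real=True)
phi = sp.Symbol('phi_p')
Phi_plus, Phi_minus = phi*sp.exp(sp.I*E*t), phi*sp.exp(-sp.I*E*t)
Psi_plus = (Phi_plus + Phi_minus)/2
Psi_minus = (Phi_plus - Phi_minus)/(2*sp.I)

T = lambda f: f.subs(t, -t)                      # T swaps Phi_+ and Phi_-
print(sp.simplify(T(Psi_plus) - Psi_plus))       # -> 0 (even time parity)
print(sp.simplify(T(Psi_minus) + Psi_minus))     # -> 0 (odd time parity)
```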
## 3. DIRAC EQUATION REVISITED
At the end of the 19th century, Zeeman detected the splitting of spectral lines under the influence of external magnetic fields, revealing the existence of electron spin. To explain the Zeeman effect, a picture of electron structure was put forward: the electron has a magnetic moment and thus an intrinsic spin angular momentum, rather than being a classical point particle (Lorentz ). It has been clarified that electron spin relates to extra degrees of freedom and can be well described by a two component complex variable called a “spinor”. One is then faced with a typical question: how to construct a minimal complete set of variables in the combination of space-time and intrinsic spin. In Dirac’s mind (Dirac ), spinors, like tensors, are geometrical objects, yielding covariant transformations with respect to Minkowski space-time, and specifically under Lorentz transformations, the Dirac equation is to obey the covariance principle of special relativity. Along this line of thought, we may say the whole domain of space-time and intrinsic spin is Lorentz invariant, and name it the “common space” for clarity.
A common variable is now defined by
$$\mathrm{\Omega }=\gamma ^\mu X_\mu ,$$
$`(3.1)`$
where matrices $`\gamma ^\mu (\mu =0,1,2,3)`$, in the representation of Bjorken and Drell , are ascribed to the spin geometrical factors. The common variable constructed in this way is a function of space-time coordinates and also a metrical quantity. The common variable interval squared becomes $`\mathrm{\Delta }\mathrm{\Omega }^2=\mathrm{\Delta }X_\mu \mathrm{\Delta }X^\mu `$, leading to the anticommutation relations of Dirac matrices $`\{\gamma ^\mu ,\gamma ^\nu \}=2g^{\mu \nu }`$. Moreover, the derivative with respect to common variable can be deduced as
$$\partial _\omega =\left(\frac{1}{\partial _\mu \Omega }\right)\partial _\mu =\gamma ^\mu \partial _\mu ,$$
$`(3.2)`$
and a momentum operator in common space is naturally introduced as
$$P_\omega =i\partial _\omega =\gamma ^\mu P_\mu .$$
$`(3.3)`$
By (3.1) and (3.3) it is straightforward to prove a commutator relation
$$[\mathrm{\Omega },P_\omega ]=4i.$$
$`(3.4)`$
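The value $4i$ traces back to the identity $g_{\mu \nu }\gamma ^\mu \gamma ^\nu =4$, which is easy to confirm numerically in the Bjorken-Drell representation:

```python
# With [X_mu, P_nu] = i g_{mu nu}, the commutator [Omega, P_omega] equals
# i g_{mu nu} gamma^mu gamma^nu; check numerically that this sum is 4*I.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
gamma = [g0] + gs                                 # Bjorken-Drell gammas
metric = np.array([1, -1, -1, -1])                # diagonal of g^{mu nu}

total = sum(metric[m]*gamma[m] @ gamma[m] for m in range(4))
print(np.allclose(total, 4*np.eye(4)))            # -> True
```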
If noticing that the common momentum squared of a free particle with mass $m$ is constant, $P_\omega ^2=E^2-\mathbf{p}^2=m^2$, we find $P_\omega $ has two eigenvalues $\pm m$. Choosing the $+m$ eigenvalue, we arrive at the Dirac equation of a free spin one-half particle
$$\gamma ^\mu P_\mu \mathrm{\Psi }(x)=m\mathrm{\Psi }(x),$$
$`(3.5)`$
The Dirac equation, on the principle of special relativity, is required to be invariant under the Lorentz transformation $\mathrm{L}_\omega $ in common space, which is a direct product of a spinor one $\mathrm{L}_s$ and a coordinate one $\mathrm{L}_c$: $\mathrm{L}_\omega =\mathrm{L}_s\mathrm{L}_c$. Applying $\mathrm{L}_\omega $ to both sides of (3.5) and noting that $\mathrm{L}_s$ commutes with space-time vectors and $\mathrm{L}_c$ commutes with spinor vectors, we have
$$\mathrm{L}_s\gamma ^\mu \mathrm{L}_s^{-1}\mathrm{L}_cP_\mu \mathrm{L}_c^{-1}\mathrm{L}_\omega \Psi (x)=m\mathrm{L}_\omega \Psi (x).$$
$`(3.6)`$
Since momentum $`P_\mu `$ is a covariant four-vector in space-time,
$$\mathrm{L}_cP_\mu \mathrm{L}_c^{-1}=a_\mu ^\nu P_\nu ,$$
$`(3.7)`$
matrices $`\gamma ^\mu `$ must be correspondingly contravariant in spinor space,
$$\mathrm{L}_s\gamma ^\mu \mathrm{L}_s^{-1}=a_\nu ^\mu \gamma ^\nu .$$
$`(3.8)`$
The above discussions also hold for two special cases. First, there is a unitary space inversion in common space $`\mathrm{S}_\omega =\mathrm{S}_s\mathrm{S}_c`$, where $`\mathrm{S}_s=\gamma ^0`$ is a unitary space inversion in spinor space:
$$\mathrm{S}_s\gamma ^0\mathrm{S}_s^{-1}=\gamma ^0,\qquad \mathrm{S}_s\boldsymbol{\gamma }\mathrm{S}_s^{-1}=-\boldsymbol{\gamma },$$
$`(3.9)`$
and $`\mathrm{S}_c`$ is a unitary space inversion in space-time, defined as usual in quantum mechanics. Second, there is a unitary time inversion in common space $`\mathrm{T}_\omega =\mathrm{T}_s\mathrm{T}_c`$, where $`\mathrm{T}_s=\gamma ^1\gamma ^2\gamma ^3`$ is a unitary time inversion in spinor space:
$$\mathrm{T}_s\gamma ^0\mathrm{T}_s^{-1}=-\gamma ^0,\qquad \mathrm{T}_s\boldsymbol{\gamma }\mathrm{T}_s^{-1}=\boldsymbol{\gamma },$$
$`(3.10)`$
and $`\mathrm{T}_c`$ is a unitary time inversion in space-time, defined by (2.3).
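Both sets of relations, (3.9) and (3.10), are easy to confirm numerically; a short numpy sketch in the Bjorken-Drell representation:

```python
# Check that S_s = gamma^0 flips the spatial gammas and that
# T_s = gamma^1 gamma^2 gamma^3 flips gamma^0, as in (3.9) and (3.10).
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))

Ss, Ts, inv = g0, g1 @ g2 @ g3, np.linalg.inv
print(all(np.allclose(Ss @ g @ inv(Ss), sgn*g)
          for g, sgn in [(g0, 1), (g1, -1), (g2, -1), (g3, -1)]))  # (3.9)
print(all(np.allclose(Ts @ g @ inv(Ts), sgn*g)
          for g, sgn in [(g0, -1), (g1, 1), (g2, 1), (g3, 1)]))    # (3.10)
```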
When a spin one-half particle is put into an external Maxwell electromagnetic field, the Dirac EM-equation with minimal coupling can be written down in a general form (Dirac ):
$$\gamma ^\mu (P_\mu -eA_\mu )\Psi (x)=m\Psi (x),$$
$`(3.11)`$
where $`A_\mu =(\mathrm{\Phi },𝐀)`$ is the space-time four-potential of electromagnetic field, proposed by Minkowski . Under time inversion, time component $`\mathrm{\Phi }`$ does not change, while space vector $`𝐀`$ is supposed to change sign. The Minkowski four-potential constructed in this way is therefore “time-anticovariant” under time inversion:
$$(\Phi ,\mathbf{A})\rightarrow (\Phi ,-\mathbf{A}).$$
$`(3.12)`$
If four-momentum $P_\mu $ were also “time-anticovariant”, one could make spinor space yield a suitable transformation to keep the equation invariant, as done in the conventional theory. However, from what we have shown in the last section, four-momentum is not “time-anticovariant”, but rather “time-covariant” (2.4), just as it is covariant under all the other Lorentz transformations. Here lies a nontrivial dilemma: the sum of the “time-covariant” four-momentum and the “time-anticovariant” Minkowski four-potential, multiplied by the charge and a negative sign, is neither time-covariant nor time-anticovariant, in no way keeping the Dirac EM-equation invariant under time inversion.
This dilemma gives rise to an unsymmetrical positive-negative energy spectrum when we solve this equation. Consequently, when the electric potential $\Phi $, such as a Coulomb potential, is strong enough, the positive and negative energy spectra will be shifted to overlap, leading to a mathematical singularity difficulty, the “Klein paradox” (Klein ). Particularly, for a Hydrogen-like ion there is no real bound energy solution, but a singularity appears, when the nuclear number of the ion is larger than 137.
At first, Dirac could hardly understand why his equation has negative energy solutions that seem “nonphysical”. To find an explanation, Dirac constructed a hole theory with a vacuum of fully filled negative energy states (Dirac ). The predicted positron was found a few years later (Anderson ). Despite Dirac’s profound prediction of antiparticles, the hole theory has difficulties such as infinite density, infinite negative energy and vacuum fluctuation in the vacuum state. Based on the hole theory, the “Klein paradox” is explained in terms of vacuum polarization accompanying particle-antiparticle pair production and annihilation under strong fields. Along this line of work, one has been trying to explain away the “Klein paradox” rather than to avoid it. Eventually, one is led to such a conclusion: “ ‘Klein paradox’ is not a paradox”.
Just as in classical mechanics, energy in the Dirac hole theory is still viewed as a scalar: namely negative energy is always “lower” than positive energy. If energy in special relativity is a frame-dependent “time vector” rather than just a scalar, as we have shown earlier, then it makes little sense to insert negative energy into vacuum or explain the “Klein paradox” as vacuum polarization. There should exist a more fundamental solution for these problems. In the next section, we would like to provide a down-to-earth approach, to remove the “Klein paradox” by revising the Dirac EM-equation. It sounds a little radical. Indeed this can not be done without the understanding that time inversion is a unitary transformation and energy is a “time vector” changing sign under time inversion in Minkowski space-time.
## 4. NONLINEAR EQUATION RECONSTRUCTED
As shown in §2, energy is a time vector changing sign under time inversion. Therefore the whole energy spectrum we obtain from whatever relativistic wave equation we might construct, must be positive-negative symmetric. This is, on the other hand, a necessary check if the constructed equation is truly invariant under unitary time inversion or not. By looking into the Dirac EM-equation, we find that the non-symmetric energy spectrum is caused by a matrix $`\beta `$ in front of electric potential $`\mathrm{\Phi }`$. If $`\beta `$ is removed there, the equation becomes invariant under unitary time inversion. But it is no longer invariant under the other proper Lorentz transformations, if $`\mathrm{\Phi }`$ and A are still external potentials. After careful checking, we realize that interaction potentials have to be utilized in lieu of external potentials in a special way, to render the invariance under all Lorentz transformations. The following equation is what we get:
$$(\gamma ^\mu P_\mu -e\Phi ^{\mathrm{I}}+e\boldsymbol{\gamma }\cdot \mathbf{A}^{\mathrm{I}})\Psi (x)=m\Psi (x),$$
$`(4.1)`$
where $`\mathrm{\Phi }^\mathrm{I}`$ and $`𝐀^𝐈`$ are the interaction potentials related to the external potentials $`\mathrm{\Phi }`$ and $`𝐀`$. We are going to derive their relations by considering Lorentz transformations, among which time inversion is defined in the unitary way we presented earlier.
As in §3, one can see this modified equation is invariant under unitary space inversion $`\mathrm{S}_\omega =\gamma ^0\mathrm{S}_c`$ and time inversion $`\mathrm{T}_\omega =\gamma ^1\gamma ^2\gamma ^3\mathrm{T}_c`$ where $`\mathrm{T}_c`$ is defined by (2.3), by assuming that scalar potential $`\mathrm{\Phi }^\mathrm{I}`$ does not change sign under space-time inversions while space vector potential $`𝐀^𝐈`$ changes sign under space inversion but not under time inversion. These assumptions can be directly verified after we derive the explicit forms of $`\mathrm{\Phi }^\mathrm{I}`$ and $`𝐀^𝐈`$ in terms of $`\mathrm{\Phi }`$ and $`𝐀`$.
Below we are going to prove that the modified equation (4.1) does give a symmetric positive-negative energy spectrum, to enhance our confidence that this equation is indeed invariant under unitary time inversion. From the equation (4.1), the energy operator is expressed in the way
$$\hat{E}=\rho _1\boldsymbol{\sigma }\cdot (\mathbf{p}-e\mathbf{A}^{\mathrm{I}})+\rho _3(m+e\Phi ^{\mathrm{I}}),$$
$`(4.2)`$
where
$$\rho _1=\left(\begin{array}{cc}0& I\\ I& 0\end{array}\right),\quad \rho _2=\left(\begin{array}{cc}0& -iI\\ iI& 0\end{array}\right),\quad \rho _3=\left(\begin{array}{cc}I& 0\\ 0& -I\end{array}\right),$$
$`(4.3)`$
form a set of $`4\times 4`$ matrices analogous to $`2\times 2`$ Pauli matrices $`\sigma _i(i=1,2,3)`$. Here $`I`$ is a $`2\times 2`$ unit matrix. To diagonalize (4.2), take a look at the square of Ê
$$\hat{E}^2=(\mathbf{p}-e\mathbf{A}^{\mathrm{I}})^2-e\boldsymbol{\sigma }\cdot (\boldsymbol{\nabla }\times \mathbf{A}^{\mathrm{I}}+\mathbf{A}^{\mathrm{I}}\times \boldsymbol{\nabla })+(m+e\Phi ^{\mathrm{I}})^2-e\rho _2\boldsymbol{\sigma }\cdot [\boldsymbol{\nabla },\Phi ^{\mathrm{I}}].$$
$`(4.4)`$
Making a unitary transformation ($\mathrm{U}^{-1}=\mathrm{U}^{\dagger }$)
$$\mathrm{U}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}I& iI\\ iI& I\end{array}\right),$$
$`(4.5)`$
on both sides of (4.4) we can diagonalize it by using
$$\mathrm{U}\rho _2\mathrm{U}^{-1}=-\rho _3,$$
$`(4.6)`$
to the following form
$$\hat{E}^{\prime 2}=\mathrm{U}\hat{E}^2\mathrm{U}^{-1}=\left(\begin{array}{cc}\hat{E}_u^2& 0\\ 0& \hat{E}_l^2\end{array}\right)$$
$$=(\mathbf{p}-e\mathbf{A}^{\mathrm{I}})^2-e\boldsymbol{\sigma }\cdot (\boldsymbol{\nabla }\times \mathbf{A}^{\mathrm{I}}+\mathbf{A}^{\mathrm{I}}\times \boldsymbol{\nabla })+(m+e\Phi ^{\mathrm{I}})^2+e\rho _3\boldsymbol{\sigma }\cdot [\boldsymbol{\nabla },\Phi ^{\mathrm{I}}].$$
$`(4.7)`$
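That U does what is claimed can be verified directly; the sketch below checks that U is unitary and that $\mathrm{U}\rho _2\mathrm{U}^{-1}=-\rho _3$, which is precisely what turns the $-e\rho _2$ term of (4.4) into the $+e\rho _3$ term of (4.7):

```python
# Numerical check that U of (4.5) is unitary and rotates rho_2 into -rho_3.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
rho2 = np.block([[Z2, -1j*I2], [1j*I2, Z2]])
rho3 = np.block([[I2, Z2], [Z2, -I2]])
U = np.block([[I2, 1j*I2], [1j*I2, I2]])/np.sqrt(2)

print(np.allclose(U @ U.conj().T, np.eye(4)))     # U is unitary
print(np.allclose(U @ rho2 @ U.conj().T, -rho3))  # U rho_2 U^{-1} = -rho_3
```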
If supposing the following solutions
$$\text{Ê}_u^2\chi _u=E_u^2\chi _u,\text{Ê}_l^2\chi _l=E_l^2\chi _l,$$
$`(4.8)`$
we can rewrite (4.7) as
$$\hat{E}^{\prime 2}=\sum _\chi (|\chi _u>E_u^2<\chi _u|+|\chi _l>E_l^2<\chi _l|).$$
$`(4.9)`$
Transforming back to
$$\psi _u=\mathrm{U}^{-1}\chi _u,\qquad \psi _l=\mathrm{U}^{-1}\chi _l,$$
$`(4.10)`$
we get
$$\hat{E}^2=\sum _\psi (|\psi _u>E_u^2<\psi _u|+|\psi _l>E_l^2<\psi _l|).$$
$`(4.11)`$
It is straightforward to check that the energy operator turns out to be
$$\hat{E}=\sum _{\psi ,\lambda }(|\psi _u>\lambda E_u<\psi _u|+|\psi _l>\lambda E_l<\psi _l|),$$
$`(4.12)`$
with $`\lambda =\pm 1`$, which indeed gives a symmetric positive-negative energy spectrum.
To keep (4.1) invariant under the proper Lorentz transformations
$$\Psi ^{\prime }=\mathrm{L}_p\Psi ,\qquad \overline{\Psi }^{\prime }=\overline{\Psi }\mathrm{L}_p^{-1},$$
$`(4.13)`$
where $`\overline{\mathrm{\Psi }}=\mathrm{\Psi }^{}\gamma ^0`$, we need to let the following term:
$$\overline{\Psi }(\Phi ^{\mathrm{I}}-\boldsymbol{\gamma }\cdot \mathbf{A}^{\mathrm{I}})\Psi \equiv \overline{\Psi }\Psi J^\mu A_\mu ^{\prime },$$
$`(4.14)`$
be invariant, where
$$J_\mu =\overline{\mathrm{\Psi }}\gamma _\mu \mathrm{\Psi },$$
$`(4.15)`$
and
$$A_\mu ^{\prime }=\left(\frac{\Phi ^{\mathrm{I}}}{\Psi ^{\dagger }\Psi },\frac{\mathbf{A}^{\mathrm{I}}}{\overline{\Psi }\Psi }\right).$$
$`(4.16)`$
Since $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$ is a proper Lorentz scalar and $`J_\mu `$ is a proper Lorentz four-current, $`A_\mu ^{}`$ must be a proper Lorentz four-vector as well. If we define the following correspondence between the interaction and external potentials:
$$\Phi ^{\mathrm{I}}=(\Psi ^{\dagger }\Psi )\Phi ,$$
$`(4.17a)`$
$$𝐀^𝐈=(\overline{\mathrm{\Psi }}\mathrm{\Psi })𝐀,$$
$`(4.17b)`$
then we see $`A_\mu ^{}`$ becomes the Minkowski four-potential $`A_\mu =(\mathrm{\Phi },𝐀)`$.
Considering a four-current $`eJ_\mu `$ as a source of four-potential $`A_\mu `$, we have a Poisson equation
$$\partial _\rho \partial ^\rho A_\mu =eJ_\mu ,$$
$`(4.18)`$
which is the Maxwell equation under a Lorentz gauge
$$\partial ^\mu A_\mu =0,$$
$`(4.19)`$
with a current continuity equation derivable from (4.1)
$$\partial ^\mu J_\mu =0.$$
$`(4.20)`$
This whole set of the Maxwell equations is invariant under the proper Lorentz transformations as well as unitary space-time inversions, although the Minkowski four-potential and four-current are “both” time-anticovariant.
Take a look at an example: if an electron feels the external force driven by a proton, from the correspondence (4.17) and the Maxwell equation (4.18) we get the following potential expressions
$$\Phi ^{\mathrm{I}}(x)=\Psi _e^{\dagger }(x)\Psi _e(x)\int d^4x^{\prime }\,[e_p\Psi _p^{\dagger }(x^{\prime })\Psi _p(x^{\prime })]\,G(x,x^{\prime }),$$
$`(4.21a)`$
$$\mathbf{A}^{\mathrm{I}}(x)=\overline{\Psi }_e(x)\Psi _e(x)\int d^4x^{\prime }\,[e_p\overline{\Psi }_p(x^{\prime })\boldsymbol{\gamma }\Psi _p(x^{\prime })]\,G(x,x^{\prime }),$$
$`(4.21b)`$
where Green’s function $`G(x,x^{})`$ satisfies
$$\partial _\rho \partial ^\rho G(x,x^{\prime })=\delta ^{(4)}(x-x^{\prime }).$$
$`(4.22)`$
Hence $\Phi ^{\mathrm{I}}$ and $\mathbf{A}^{\mathrm{I}}$, which depend on not only the wave function of the driving source $\Psi _p(x^{\prime })$ but also that of the testing body $\Psi _e(x)$, are the interaction potentials between the driving source and testing body, gaining different meaning from the background potentials $\Phi $ and $\mathbf{A}$ driven only by external sources. Though the Minkowski four-potential has been successfully used in the Maxwell equation (4.18), the situation in the wave equation (4.1) is different: the interaction potentials $\Phi ^{\mathrm{I}}$ and $\mathbf{A}^{\mathrm{I}}$ between the spin one-half particles and the external electromagnetic fields should be taken into account, even if $\Phi ^{\mathrm{I}}$ and $\mathbf{A}^{\mathrm{I}}$ do not yet form a covariant four-vector. A time-covariant interaction four-potential will be derived in §8. Particularly, in the linear approximation, equation (4.21a) gives a point-like static Coulomb potential $e_e\Phi ^{\mathrm{I}}=-\alpha /r$ as we expect.
From the relations (4.14) through (4.17), the equation (4.1) can also be alternatively written down as
$$\gamma ^\mu P_\mu \mathrm{\Psi }(x)=(m+eJ^\mu A_\mu )\mathrm{\Psi }(x).$$
$`(4.23)`$
Here $`eJ^\mu A_\mu `$ happens to be the well-known electro-dynamical interaction potential. So unlike the Dirac EM-equation, the constructed equation is a nonlinear wave equation, which, in most cases, does not have exact explicit analytical solutions. This result is actually a consequence of the Maxwell equation, in which four-potential transforms in the same way as four-current under the Lorentz transformations, leading to the appearance of a Lorentz invariant electro-dynamical interaction potential in (4.23).
Originally the Dirac EM-equation was set up to deal with the interaction between a point-like electron and an external electromagnetic field, while the internal size of electron was ignored. In contrast, the nonlinear equation (4.23) may become valid for extended electron, when the interaction potential is expressed by a convolution between four-current and four-potential
$$V(x)=eJ^\mu \ast A_\mu =e\int d^4x^{\prime }\,J^\mu (x^{\prime })A_\mu (x-x^{\prime }),$$
$`(4.24)`$
with an integral over the finite 4-d size of the extended electron. Using such a nonsingular convolution can avoid the infinity troubles like the infinite self-energy of the electron and divergent integrals in Feynman diagrams, caused by the interaction of singular point particles.
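As a toy illustration of how such a convolution removes the singularity, consider smearing a point charge with a Gaussian density of width $a$ (a choice of ours, not taken from the text): the resulting potential behaves like $\mathrm{erf}(r/a)/r$, which is finite at the origin and matches $1/r$ at large distance:

```python
# Toy example: a Gaussian-smeared Coulomb source gives erf(r/a)/r, which
# stays finite at r -> 0; the Gaussian profile is our own illustrative choice.
import numpy as np
from scipy.special import erf

a = 0.1                                     # smearing width, arbitrary units
r = np.array([1e-6, 0.05, 0.1, 0.5, 1.0])
V_point = 1.0/r                             # singular point-source potential
V_smeared = erf(r/a)/r                      # convolved, nonsingular potential

print(V_smeared[0], 2/(np.sqrt(np.pi)*a))   # finite limit at r -> 0
print(V_point[-1], V_smeared[-1])           # both ~1.0 at large r
```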
If a fermion also participates in other types of interactions, more Lorentz invariant interaction terms should be added into $`V(x)`$. We may just as well generalize (4.23) by the following form:
$$i\gamma ^\mu \partial _\mu \Psi (x)=[m+V(x)]\Psi (x),$$
$`(4.25)`$
where $V(x)$, representing the sum of interaction terms, must be invariant under all the Lorentz transformations. This equation is basically a nonlinear equation since the interaction $V(x)$ depends on the fermion fields. To completely solve the problem, one needs to establish more equations for the intermediate bosons by gauge theory. Of course, this is only true in principle. When more complicated interactions are involved, solving a complete set of nonlinear equations is very impractical and often unnecessary, and further realizable techniques are required.
## 5. ENERGY ORDER OF HYDROGEN-LIKE IONS
Though (4.1) is a nonlinear equation without exact analytical solutions, we can still make a fair approximation that the interaction potentials $`\mathrm{\Phi }^\mathrm{I}`$ and $`𝐀^𝐈`$ are slowly-changing functions of particle wavefunctions on account of the small scale of particle itself compared with the long-range electromagnetic interaction. Using this approximation and noticing the differences between upper larger components and lower smaller components of wave function, we obtain the non-relativistic Hamiltonian of four major terms after taking $`𝐀^𝐈=𝐁^𝐈\times 𝐫`$/2 which is valid in the atomic range where $`𝐁^𝐈`$ is approximately constant:
$$H_{NR}=\frac{\mathbf{p}^2}{2m}+e\Phi ^{\mathrm{I}}-\frac{e}{2m}\mathbf{k}\cdot (\boldsymbol{\nabla }\times \mathbf{A}^{\mathrm{I}})-\frac{e}{2m^2r^2}(\mathbf{r}\cdot \boldsymbol{\nabla }\Phi ^{\mathrm{I}})(\mathbf{s}\cdot \mathbf{l}),$$
$`(5.1)`$
here $\mathbf{k}=\mathbf{l}+\boldsymbol{\sigma }$, while $\mathbf{l}=\mathbf{r}\times \mathbf{p}$ is the orbital angular momentum, $\mathbf{s}=\boldsymbol{\sigma }/2$ is the spin, and the total angular momentum is $\mathbf{j}=\mathbf{l}+\mathbf{s}$. In (5.1), the first term is the kinetic energy; the second is the electric potential; the third shows the magnetic Zeeman effect; the fourth represents the spin-orbit coupling. As we can see, the first three terms are formally the same as the conventional results although $\Phi ^{\mathrm{I}}$ and $\mathbf{A}^{\mathrm{I}}$ refer to the interaction potentials, not the pure external potentials. The fourth, spin-orbit coupling, term has a different sign from convention, showing a reversed order of splitting energy levels.
With the non-relativistic Hamiltonian (5.1), we can write a wave equation $i\partial _t\psi =H_{NR}\psi $, which is a kind of “nonlinear Schrödinger equation” since the interaction potentials $\Phi ^{\mathrm{I}}$ and $\mathbf{A}^{\mathrm{I}}$ depend on the wave functions. The nonlinear Schrödinger equation has certain soliton-like solutions, and has been applied in many aspects of nonlinear physics. However, in the derivation of the above non-relativistic Hamiltonian, the positive-energy assumption has been taken. This implies: a wave equation with a non-relativistic Hamiltonian is not invariant under unitary time inversion. In other words, the results obtained by applying time inversion in a truncated non-relativistic wave equation like the Schrödinger equation may not be correct.
As an example, we can solve the nonlinear relativistic wave equation (4.1) for the single-electron model of Hydrogen-like ions under the linear approximation that there is a point-like Coulomb potential $e\Phi ^{\mathrm{I}}=-Z\alpha /r$, but no vector magnetic potential $e\mathbf{A}^{\mathrm{I}}=0$. In this model, (4.1) reduces to
$$(\gamma ^\mu P_\mu +\frac{Z\alpha }{r})\mathrm{\Psi }(x)=m\mathrm{\Psi }(x),$$
$`(5.2)`$
which differs from the Dirac equation by a matrix $`\beta `$ on the Coulomb term. A more general form than (5.2) for scalar central potential ($`C/r`$) has been investigated in detail (Greiner ). Following the standard procedures of solving this type of equations, we arrive at the bound state energy solutions of (5.2):
$$E_{nj}=\pm m\left\{1-\frac{(Z\alpha )^2}{[n-(j+1/2)+\sqrt{(j+1/2)^2+(Z\alpha )^2}]^2}\right\}^{1/2},$$
$`(5.3)`$
which gives a symmetric positive-negative energy spectrum and does not have any difficulty when $`Z\alpha >1`$.
If $Z\alpha \ll 1$, we can expand the positive $E_{nj}$ in powers of $Z\alpha $
$$E_{nj}=m\left[1-\frac{(Z\alpha )^2}{2n^2}+\frac{(Z\alpha )^4}{2n^3}\left(\frac{1}{j+1/2}-\frac{1}{4n}\right)\right]+O[(Z\alpha )^6].$$
$`(5.4)`$
where the first term is the rest mass of the electron, the second term is the classical quantum mechanics result, and the third, relativistic, term is different from the conventional one by a crucial sign change. Although the fine structure splitting energy happens to be the same as usual
$$\mathrm{\Delta }E_n(j_1,j_2)=\frac{m(Z\alpha )^4}{2n^3}\left|\frac{1}{j_1+1/2}-\frac{1}{j_2+1/2}\right|,$$
$`(5.5)`$
the energy order for quantum number $j$ with fixed $n$ is changed. Our conclusion is: the smaller the total angular momentum $j$, the larger the positive energy $E_{nj}$ in a certain level $n$ of Hydrogen-like ions. For example, the energy level $2P_{1/2}$ is higher than the energy level $2P_{3/2}$ in the doublet $2P_{1/2}$-$2P_{3/2}$ of Hydrogen-like ions, contrary to the conventional order. The common practice in experiment is: first detect a spectrum of energy levels, then arrange these energy levels in the order derived from the Dirac equation. Up till now, the energy order of the doublet of Hydrogen-like ions has not been precisely verified in experiment as far as we know. Let us examine if it is possible to double check it by experiment.
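The sign of the $(Z\alpha )^4$ term in (5.4), which drives this reversed order, can be confirmed by expanding the exact levels (5.3) symbolically; a sympy sketch over a few $(n,j)$ values:

```python
# Expand (5.3) in powers of x = Z*alpha (with m = 1) and compare with (5.4).
import sympy as sp

x = sp.symbols('x', positive=True)
for n, j in [(2, sp.Rational(1, 2)), (2, sp.Rational(3, 2)), (3, sp.Rational(5, 2))]:
    k = j + sp.Rational(1, 2)
    D = n - k + sp.sqrt(k**2 + x**2)
    E = sp.sqrt(1 - x**2/D**2)
    approx = 1 - x**2/(2*n**2) + x**4/(2*n**3)*(1/k - 1/(4*n))
    print(sp.simplify(sp.series(E, x, 0, 6).removeO() - approx))  # -> 0
```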
The fine structures of the lines $\mathrm{H}_\alpha \ (\lambda 6563,\ n=3\rightarrow 2)$ and $\mathrm{H}_\beta \ (\lambda 4861,\ n=4\rightarrow 2)$ in the Balmer series of Hydrogen have been carefully investigated (Lewis and Spedding ; Spedding et al. ), of which no more than two strong components can be resolved within experimental accuracy. This type of experiment, however, can only show the energy difference between two levels, i.e., $2P_{1/2}$-$2P_{3/2}$ of the Hydrogen atom. It can not, by itself, indicate the energy order: which line corresponds to which transition. The fine structure of the line $\lambda 4686\ (n=4\rightarrow 3)$ of singly ionized Helium $\mathrm{He}^+$ has also been studied in detail (Paschen ). However, if the new energy order is assigned to those energy levels, the fit to the experimental data is not worse in consideration of experimental accuracies. On the other hand, the Paschen-Back effect by applying strong magnetic fields shows a symmetric triplet optical spectrum, and does not make any difference if the order of energy levels is reversed.
On the other hand, the Zeeman effects of multi-electron atoms and ions have been vastly investigated (White ; Kuhn ), which show both “normal order”, the bigger $j$ the higher the energy level, and “abnormal order”, the smaller $j$ the higher the energy level. When a number of electrons are involved in multi-electron atoms or ions, more complicated effects must be taken into account, for example, electrostatic screening, orbit penetration and the Fermi-Dirac statistics properties of multi-electron systems, and more complete equations like the self-consistent Hartree-Fock equations need to be set up. These effects are beyond the scope of what the single-electron equation can describe.
In principle, only the Zeeman effects for Hydrogen-like ions under weak magnetic fields can directly and clearly give us the answer of the energy order of Hydrogen-like ions, since the asymmetric spectral lines for different transitions may be presented (White ; Kuhn ). The principal fine structure doublet splitting ($2P_{1/2}$-$2P_{3/2}$) of the Hydrogen atom is less than half a wave number, hence it is very difficult to detect more splitting levels under weak magnetic fields in such a narrow band with good resolution. For this reason, the Zeeman effect of the Hydrogen atom has not been observed (White ), and the energy order of the doublet $2P_{1/2}$-$2P_{3/2}$ in Hydrogen is still an unsolved puzzle in experiment. If high resolution (say, one tenth of a wave number) in analyzing spectral lines is realized, the detection of the Hydrogen Zeeman effect will be easier. For the other Hydrogen-like ions such as $\mathrm{He}^+,\mathrm{Li}^{++},\mathrm{Be}^{+++}$, etc., having bigger fine structure splitting energies ($\sim Z^4$) as seen in (5.5), the Zeeman effects under weak magnetic fields seem detectable, even if most of the simple transitions ($n=2\rightarrow 1$, or $3\rightarrow 2$) drop into the ultraviolet or X-ray range, not seen in the visible light spectrum. By modern techniques in analyzing laser optical spectra, these experiments are believed realizable.
From (5.3), the positive ground state energy for $`n=1`$ and $`j=1/2`$ is
$$E_0=\frac{m}{\sqrt{1+(Z\alpha )^2}},$$
$`(5.6)`$
which approaches zero when $Z\rightarrow \infty $. While the conventional ground state energy of a Hydrogen-like ion has a singularity at $Z\alpha \rightarrow 1$:
$$E_{con}=m\sqrt{1-(Z\alpha )^2},$$
$`(5.7)`$
which has no real meaningful solution when $`Z\alpha >1`$. The percentage difference between (5.6) and (5.7) is
$$1-\frac{E_{con}}{E_0}=1-\sqrt{1-(Z\alpha )^4},$$
$`(5.8)`$
which is 1% when $Z\approx 50$, 5% when $Z\approx 75$ and 15% when $Z\approx 100$. One can see that the errors are significant only for high $Z$ Hydrogen-like ions, which on the other hand are unstable. This causes real difficulty in doing experiments. An alternative way is to measure the ground states of high $Z$ atoms instead of ions, by assuming the outer shell electrons have little screening effect on the inner ground state electrons, which may be approximately described by the single-electron model. So a possible experiment is to use a laser beam to pump out the electrons of high $Z$ atoms. One may find out the largest ionization energies and compare them with the theoretical results (5.6) and (5.7).
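These percentages follow directly from (5.8); for instance:

```python
# Numerical evaluation of (5.8) with alpha = 1/137.
alpha = 1/137.0
for Z in (50, 75, 100):
    print(Z, 1 - (1 - (Z*alpha)**4)**0.5)   # -> ~0.009, ~0.046, ~0.154
```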
In summary, the energy solutions (5.3) differ from conventional results in a subtle way. The differences may not be noticeable if no special attention is paid to them in experiment. In fact, the fine structure splitting is exactly the same as usual. Only the energy order in a certain level $n$ of Hydrogen-like ions is reversed. This is why we emphasize the importance of a thorough experimental investigation of the Zeeman effects of Hydrogen-like ions. Ironically, no such experiment can be found so far.
## 6. PARTICLE CONJUGATION
After a simple check, we find that charge conjugation, defined by $\mathrm{C}\Psi =i\gamma ^2\Psi ^{*}$, does not turn (4.23) into the other one with opposite charge. So we need to try something else. Now making a complex conjugation $*$ on both sides of (4.23), we get
$$\gamma ^{\mu *}P_\mu \Psi ^{*}(x)=-(m+eJ^\mu A_\mu )\Psi ^{*}(x).$$
$`(6.1)`$
Considering $\gamma ^{2*}=-\gamma ^2$ and $\gamma ^{i*}=\gamma ^i\ (i=0,1,3)$, we may pick a unitary transformation
$$\mathrm{O}_s=i\gamma ^0\gamma ^1\gamma ^3,$$
$`(6.2)`$
which is a complex conjugation operator in spinor space:
$$\mathrm{O}_s\gamma ^{\mu *}\mathrm{O}_s^{-1}=\gamma ^\mu .$$
$`(6.3)`$
Making transformation $`\mathrm{O}_s`$ on both sides of (6.1), we get
$$\gamma ^\mu P_\mu (\mathrm{O}_\omega \Psi (x))=(-m-eJ^\mu A_\mu )(\mathrm{O}_\omega \Psi (x)),$$
$`(6.4)`$
where $`\mathrm{O}_\omega `$ is a complex conjugation in common space, defined by
$$\mathrm{O}_\omega \Psi (x)=i\gamma ^0\gamma ^1\gamma ^3\Psi ^{*}(x).$$
$`(6.5)`$
Comparing (6.4) with (4.23), we find that $\mathrm{O}_\omega $ changes a particle with charge $e$ and mass $m$ into another one with opposite charge $-e$ and opposite mass $-m$. So we like to call $\mathrm{O}_\omega $ “particle conjugation”.
In §3, we have mentioned that common momentum has two eigenvalues $`\pm m`$. So we may just as well write another equation:
$$\gamma ^\mu P_\mu \Psi _{-m}(x)=-m\Psi _{-m}(x),$$
$`(6.6)`$
which gives the same type of solutions with the same energy spectrum as the Dirac equation (3.5). Adding an interaction term, we get an equation similar to (4.23) but with “opposite” mass:
$$\gamma ^\mu P_\mu \Psi _{-m}(x)=(-m+eJ^\mu A_\mu )\Psi _{-m}(x),$$
$`(6.7)`$
which can be transformed under “particle conjugation” into
$$\gamma ^\mu P_\mu (\mathrm{O}_\omega \Psi _{-m}(x))=(m-eJ^\mu A_\mu )(\mathrm{O}_\omega \Psi _{-m}(x)).$$
$`(6.8)`$
Clearly (6.8) and (4.23) show opposite charge but same mass.
Negative charge has been found in experiment long ago, but no negative mass. In present-day cosmology, the existence of negative mass is highly controversial. Somehow it involves the precise definition of mass. It would be naive to draw any definitive conclusion just from this single-electron model. Hence we consider this “particle conjugation” as a speculative thought, though we do feel equations (4.23), (6.4), (6.7) and (6.8) are all legitimate forms in certain sense.
Due to our different definitions of unitary time inversion and “particle conjugation”, the conventional CP or CPT theorem does not hold in our theory. It is possible to find more conventional theorems that may not be derivable in our nonlinear theory. We may either derive similar ones as replacements, or find completely new theorems.
## 7. PERTURBATION TREATMENT
As long as the interaction potential $`V(x)`$ is weak enough in (4.25), we may follow Feynman’s perturbation treatment (Feynman ), to derive the lowest-order differential cross section:
$$\frac{d\sigma }{d\mathrm{\Omega }}=\frac{|𝐩_𝐟|^3}{|𝐩_𝐢|}\frac{m^2}{E_fE_i}|\overline{u}(p_f,s_f^0)u(p_i,s_i^0)|^2|V(q)|^2,$$
$`(7.1)`$
where subscripts $`i`$ and $`f`$ represent the incoming and outgoing waves respectively, $`\overline{u}`$ and $`u`$ are the spinors, $`q=p_fp_i`$ is the four-momentum transfer and
$$V(q)=\int d^4x\,V(x)e^{iq\cdot x},$$
$`(7.2)`$
is the Fourier amplitude of the interaction potential $`V(x)`$.
In the case of the Coulomb interaction between the incident electron beam and a nuclear target of nuclear number $Z$: $V(x)=Z\alpha /|\mathbf{x}|$, energy is conserved: $E_f=E_i$. We have $V(q)=Z\alpha /\mathbf{q}^2$ where $\mathbf{q}^2=(\mathbf{p_f}-\mathbf{p_i})^2=4\mathbf{p}^2(\mathrm{sin}\frac{\theta }{2})^2$. For any unpolarized incident electron beam, the cross section is a sum over final spin states and an average over initial spin states. So we get
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}=\frac{Z^2\alpha ^2m^2}{𝐪^4}\mathrm{\Sigma }_1.$$
$`(7.3)`$
where the spin sum
$$\Sigma _1=\frac{1}{2}\sum _{s_f^0,s_i^0}|\overline{u}(p_f,s_f^0)u(p_i,s_i^0)|^2=\frac{1}{2}\left(1+\frac{p_i\cdot p_f}{m^2}\right),$$
$`(7.4)`$
is different from the conventional result (Bjorken and Drell ) since our Coulomb term as in (5.2) does not have a matrix $`\beta `$. Ignoring a constant factor 1/16, finally we have
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}=\frac{Z^2\alpha ^2m^2}{𝐩^4(\mathrm{sin}\frac{\theta }{2})^4}[1+\frac{𝐩^2}{m^2}(\mathrm{sin}\frac{\theta }{2})^2].$$
$`(7.5)`$
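The spin sum (7.4) feeding this result can be checked numerically from the projector $\sum _su\overline{u}=(\gamma ^\mu p_\mu +m)/2m$; the momenta below are arbitrary on-shell choices of ours:

```python
# Check Sigma_1 = (1/2)(1 + p_i.p_f/m^2) against the trace of the spinor
# projectors; momenta are arbitrary on-shell choices.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
gamma, metric = [g0, g1, g2, g3], np.array([1.0, -1.0, -1.0, -1.0])

m = 1.0
def onshell(p3):                    # p^mu = (E, p) with E^2 = p^2 + m^2
    p3 = np.asarray(p3, float)
    return np.concatenate([[np.sqrt(p3 @ p3 + m**2)], p3])

def slash(p):                       # gamma^mu p_mu with diagonal metric
    return sum(metric[mu]*p[mu]*gamma[mu] for mu in range(4))

p_i, p_f = onshell([0.3, 0.0, 0.0]), onshell([0.0, 0.2, 0.1])
lhs = 0.5*np.trace((slash(p_f) + m*np.eye(4)) @
                   (slash(p_i) + m*np.eye(4))).real/(4*m**2)
rhs = 0.5*(1 + (metric*p_i) @ p_f/m**2)
print(np.isclose(lhs, rhs))         # -> True
```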
In the non-relativistic case $`|𝐩|\ll m`$, it reduces to the Rutherford formula:
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}=\frac{Z^2\alpha ^2m^2}{𝐩^4(\mathrm{sin}\frac{\theta }{2})^4}.$$
$`(7.6)`$
And in the relativistic limit $`|𝐩|\gg m`$, it turns out to be
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}=\frac{Z^2\alpha ^2}{𝐩^2(\mathrm{sin}\frac{\theta }{2})^2},$$
$`(7.7)`$
which differs from the Mott formula by a factor $`[\mathrm{cot}(\theta /2)]^2`$.
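As a consistency check, this limit can be read off directly from (7.5): for $`|𝐩|\gg m`$ the second term in the bracket dominates, so
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}\approx \frac{Z^2\alpha ^2m^2}{𝐩^4(\mathrm{sin}\frac{\theta }{2})^4}\cdot \frac{𝐩^2}{m^2}(\mathrm{sin}\frac{\theta }{2})^2=\frac{Z^2\alpha ^2}{𝐩^2(\mathrm{sin}\frac{\theta }{2})^2},$$
reproducing (7.7).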
Usually the Coulomb interaction is applicable only in low energy scattering, not in high energy scattering, due to the recoil of the nuclear target. In the electron-proton elastic scattering experiments (McAllister and Hofstadter ), the incident electron energy of 188 MeV is of the same order as the proton mass of 938 MeV. Considering the recoil effect of hydrogen, it is better to utilize a current-potential interaction with a nonsingular convolution (4.24):
$$V(x)=eJ_e^\mu (x)A_\mu ^p(x).$$
$`(7.8)`$
Its Fourier amplitude becomes
$$V(q)=e\int d^4x^{\prime }J_e^\mu (x^{\prime })e^{iqx^{\prime }}\int d^4xA_\mu ^p(x-x^{\prime })e^{iq(x-x^{\prime })}=eJ_e^\mu (q)A_\mu ^p(q).$$
$`(7.9)`$
From the Maxwell equation (4.18), we may express the four-potential driven by proton as follows:
$$A_\mu ^p(x)=G(x)\ast e_pJ_\mu ^p(x)=\int d^4yG(x-y)e_pJ_\mu ^p(y),$$
$`(7.10)`$
where $`G(x)`$ is Green’s function defined by (4.22). In analogy to (7.9), we get
$$A_\mu ^p(q)=e_pG(q)J_\mu ^p(q),$$
$`(7.11)`$
where $`G(q)=(q^2+iϵ)^{-1}`$. Finally we write down
$$V(q)=e^2J_e^\mu (q)G(q)J_\mu ^p(q)$$
$$=\alpha \sqrt{\frac{m^2}{E_fE_i}}\sqrt{\frac{M^2}{E_f^pE_i^p}}\overline{u}(p_f,s_f)\gamma ^\mu u(p_i,s_i)\frac{1}{q^2+iϵ}\overline{u}(P_f,S_f)\gamma _\mu u(P_i,S_i).$$
$`(7.12)`$
In the relativistic elastic scattering as shown in Fig. 1, the four-momentum transfer squared is $`q^2=(p^{\prime }-p)^2=-4E_fE_i(\mathrm{sin}\frac{\theta }{2})^2`$, and the finite mass recoil factor is $`E_f/E_i=[1+(2E/M)(\mathrm{sin}\frac{\theta }{2})^2]^{-1}`$ where $`E=E_i`$. For any unpolarized electron beam, the electron spin is random over space-time. So when we calculate the interaction between electron and proton, their spins are chosen randomly. The incoming electron spin $`s_i`$ in calculating the Feynman diagrams is not related to the incident electron spin $`s_i^0`$ in the past. The same is true for the outgoing electrons. To calculate the total differential cross section per solid angle, we sum over all spin states independently:
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}=\frac{\alpha ^2m^4E_f}{q^4E_i^3}\mathrm{\Sigma }_1\mathrm{\Sigma }_2,$$
$`(7.13)`$
where $`\mathrm{\Sigma }_1`$, an extra term that the conventional theory does not have, is given by (7.4): $`\mathrm{\Sigma }_1=(E_fE_i/m^2)(\mathrm{sin}\frac{\theta }{2})^2`$ when $`E\gg m`$, and $`\mathrm{\Sigma }_2`$ is the same as in the conventional theory (Bjorken and Drell ):
$$\mathrm{\Sigma }_2=\frac{1}{4}\underset{s_f,s_i}{\sum }\underset{S_f,S_i}{\sum }|\overline{u}(p_f,s_f)\gamma ^\mu u(p_i,s_i)\overline{u}(P_f,S_f)\gamma _\mu u(P_i,S_i)|^2$$
$$=\frac{E_fE_i}{m^2}(\mathrm{cos}\frac{\theta }{2})^2[1-\frac{q^2}{2M^2}(\mathrm{tan}\frac{\theta }{2})^2].$$
$`(7.14)`$
Combining all these results we obtain
$$\frac{d\overline{\sigma }}{d\mathrm{\Omega }}=\frac{\alpha ^2}{E^2}(\mathrm{cot}\frac{\theta }{2})^2[\frac{1-\frac{q^2}{2M^2}(\mathrm{tan}\frac{\theta }{2})^2}{1+\frac{2E}{M}(\mathrm{sin}\frac{\theta }{2})^2}],$$
$`(7.15)`$
which differs from the conventional result by a factor $`[\mathrm{sin}(\theta /2)]^2`$.
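As a simple numerical illustration of the recoil factor defined above (using the beam energy quoted earlier), at $`E=188\mathrm{MeV}`$, $`M=938\mathrm{MeV}`$ and $`\theta =\pi /2`$ one finds
$$E_f/E_i=[1+(2\times 188/938)\times \frac{1}{2}]^{-1}\approx 0.83,$$
so the recoil correction is already a sizeable (about 17%) effect at these energies.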
In comparison with the experimental data (McAllister and Hofstadter ) as shown in Fig. 2, it is hard to tell which one is better. More advanced research needs to be done. Generally two form factors need to be considered due to the finite size and anomalous magnetic moment of the proton (Cahn and Goldhaber ). The modification due to the internal structure of the proton is significant in high energy scattering processes: the higher the incident electron energy, the larger the modification. In the extreme relativistic limit $`|𝐩|\gg M`$, the result (7.15) is not valid, and in the range $`|𝐩|\sim M`$ it is not accurate, but in the low relativistic limit $`m\ll |𝐩|\ll M`$ it may be good enough. The lowest-order scattering is quite preliminary. So we do not draw any definitive conclusions as yet.
## 8. GAUGE INVARIANCE
From the correspondence (4.17) and the following relation
$$\overline{\mathrm{\Psi }}(\mathrm{\Phi }^\mathrm{I}-𝜸\cdot 𝐀^𝐈)\mathrm{\Psi }=\overline{\mathrm{\Psi }}J^\mu A_\mu \mathrm{\Psi }=\overline{\mathrm{\Psi }}\gamma ^\mu (\overline{\mathrm{\Psi }}\mathrm{\Psi })A_\mu \mathrm{\Psi },$$
$`(8.1)`$
we can also write (4.1) or (4.23) as follows:
$$\gamma ^\mu (P_\mu -eA_\mu ^\mathrm{T})\mathrm{\Psi }=m\mathrm{\Psi },$$
$`(8.2)`$
by introducing an interaction four-potential
$$A_\mu ^\mathrm{T}=(\overline{\mathrm{\Psi }}\mathrm{\Psi })A_\mu .$$
$`(8.3)`$
Here $`A_\mu ^\mathrm{T}`$ is “time-covariant”, since $`A_\mu `$ is “time-anticovariant” and $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$ changes sign under unitary time inversion. The equation (8.2) implies a formal similarity to the Dirac equation (3.5) if we define an “effective” time-covariant four-momentum
$$P_\mu ^{eff}=P_\mu -eA_\mu ^\mathrm{T},$$
$`(8.4)`$
or an “effective” time-covariant four-derivative
$$\partial _\mu ^{eff}=\partial _\mu +ieA_\mu ^\mathrm{T}.$$
$`(8.5)`$
In classical electrodynamics, observable physical quantities can be expressed in terms of the electromagnetic field strengths $`𝐄=-\mathbf{\nabla }\mathrm{\Phi }-\partial _t𝐀`$ and $`𝐁=\mathbf{\nabla }\times 𝐀`$, which seem invariant under the gauge transformation, for an arbitrary scalar function $`f(x)`$ (Bjorken and Drell ; Aitchison and Hey ):
$$A_\mu \to A_\mu +\partial _\mu f.$$
$`(8.6)`$
However there are several restrictions that need to be taken into account seriously, if we want to preserve both gauge invariance and Lorentz invariance of the Maxwell field equation and relativistic fermion field equation.
First of all, as we have pointed out, Minkowski four-potential $`A_\mu `$ is time-anticovariant (3.12) but four-derivative $`_\mu `$ is apparently time-covariant under time inversion:
$$(\partial _t,\partial _𝐱)\to (-\partial _t,\partial _𝐱).$$
$`(8.7)`$
So the gauge function $`f(x)`$ must change sign under time inversion to keep the transformed four-potential $`A_\mu +\partial _\mu f`$ time-anticovariant, the same as $`A_\mu `$. Second, the gauge function $`f(x)`$ needs to be a proper Lorentz invariant to ensure the transformed four-potential $`A_\mu +\partial _\mu f`$ is covariant under the proper Lorentz transformations. Third, by making a local phase transformation on the fermion field
$$\mathrm{\Psi }\to \mathrm{\Psi }\mathrm{exp}[-i\alpha (x)],$$
$`(8.8)`$
and a local gauge transformation (8.6) on Minkowski field simultaneously in (8.2), we arrive at another constraint
$$\partial _\mu \alpha -e(\overline{\mathrm{\Psi }}\mathrm{\Psi })\partial _\mu f=0.$$
$`(8.9)`$
The general solutions satisfying the above three conditions are
$$f(x)=C_n(\overline{\mathrm{\Psi }}\mathrm{\Psi })^n,(n=\pm 1,\pm 3,\pm 5,\mathrm{\dots }),$$
$`(8.10a)`$
$$\alpha (x)=e(\frac{n}{n+1}C_n(\overline{\mathrm{\Psi }}\mathrm{\Psi })^{n+1}+C_{-1}\mathrm{log}|\overline{\mathrm{\Psi }}\mathrm{\Psi }|),(n=1,\pm 3,\pm 5,\mathrm{\dots }),$$
$`(8.10b)`$
where $`C_n`$ are arbitrary real constants to cover the whole phase space. Note: $`\overline{\mathrm{\Psi }}\mathrm{\Psi }\ne 0`$; otherwise there is no interaction term in the equation (8.2).
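It is straightforward to verify that the solutions (8.10) satisfy the constraint (8.9): for $`f=C_n(\overline{\mathrm{\Psi }}\mathrm{\Psi })^n`$ with $`n\ne -1`$,
$$\partial _\mu \alpha =enC_n(\overline{\mathrm{\Psi }}\mathrm{\Psi })^n\partial _\mu (\overline{\mathrm{\Psi }}\mathrm{\Psi })=e(\overline{\mathrm{\Psi }}\mathrm{\Psi })\partial _\mu f,$$
while the logarithmic term in (8.10b) covers the case $`n=-1`$ in the same way.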
Furthermore, the Lorentz gauge (4.19) leads to another constraint on the choice of gauge function $`f(x)`$:
$$\partial _\mu \partial ^\mu f(x)=0.$$
$`(8.11)`$
Using (4.23) or more generally (4.25) with (4.24), we can prove step by step that this is true if the gauge function takes the form of (8.10a). First we have an expansion
$$\partial _\mu \partial ^\mu (\overline{\mathrm{\Psi }}\mathrm{\Psi })=\overline{\mathrm{\Psi }}(\partial _\mu \partial ^\mu \mathrm{\Psi })+(\partial _\mu \partial ^\mu \overline{\mathrm{\Psi }})\mathrm{\Psi }+2\partial _\mu \overline{\mathrm{\Psi }}\partial ^\mu \mathrm{\Psi }.$$
$`(8.12)`$
From (4.25) it comes out
$$\gamma ^\mu \partial _\mu \mathrm{\Psi }=-i(m+V)\mathrm{\Psi }.$$
$`(8.13)`$
Making a hermitian conjugation on both sides and then multiplying it by $`\gamma ^0`$ from the right, we get by the relations $`\gamma ^{\mu \dagger }\gamma ^0=\gamma ^0\gamma ^\mu `$ and $`\overline{\mathrm{\Psi }}=\mathrm{\Psi }^{\dagger }\gamma ^0`$
$$\partial _\mu \overline{\mathrm{\Psi }}\gamma ^\mu =i(m+V)\overline{\mathrm{\Psi }}.$$
$`(8.14)`$
From (8.13) it is derived
$$\partial _\mu \partial ^\mu \mathrm{\Psi }=\gamma ^\mu \partial _\mu \gamma ^\nu \partial _\nu \mathrm{\Psi }=-i\gamma ^\mu (\partial _\mu V)\mathrm{\Psi }-(m+V)^2\mathrm{\Psi }.$$
$`(8.15)`$
Repeating the procedures in deriving (8.14) leads to
$$\partial _\mu \partial ^\mu \overline{\mathrm{\Psi }}=i\overline{\mathrm{\Psi }}\gamma ^\mu \partial _\mu V-(m+V)^2\overline{\mathrm{\Psi }}.$$
$`(8.16)`$
Furthermore we have
$$2\partial _\mu \overline{\mathrm{\Psi }}\partial ^\mu \mathrm{\Psi }=2\partial _\mu \overline{\mathrm{\Psi }}g^{\mu \nu }\partial _\nu \mathrm{\Psi }=\partial _\mu \overline{\mathrm{\Psi }}(\gamma ^\mu \gamma ^\nu +\gamma ^\nu \gamma ^\mu )\partial _\nu \mathrm{\Psi }.$$
$`(8.17)`$
Combining (8.13) and (8.14), we get
$$\partial _\mu \overline{\mathrm{\Psi }}\gamma ^\mu \gamma ^\nu \partial _\nu \mathrm{\Psi }=(m+V)^2\overline{\mathrm{\Psi }}\mathrm{\Psi }.$$
$`(8.18)`$
Then we need to find the second term in (8.17). Now let us make a transformation on all fermion fields (note $`\gamma ^0\gamma ^{\mu \dagger }\gamma ^0=\gamma ^\mu `$)
$$\mathrm{\Psi }^{\prime }=\gamma ^\mu \mathrm{\Psi },\overline{\mathrm{\Psi }}^{\prime }=\overline{\mathrm{\Psi }}\gamma ^\mu ,$$
$`(8.19)`$
then (8.13) becomes
$$\gamma ^\nu \partial _\nu (\gamma ^\mu \mathrm{\Psi })=-i(m+V^{\prime })\gamma ^\mu \mathrm{\Psi }.$$
$`(8.20)`$
Here $`V^{\prime }`$ is the interaction after the transformation, which for example can be expressed by its Fourier amplitude (7.13) in the electron-proton scattering
$$V^{\prime }(x)=\alpha \sqrt{\frac{m^2}{E_fE_i}}\sqrt{\frac{M^2}{E_f^pE_i^p}}\int \frac{d^4q}{(2\pi )^4}\frac{1}{q^2+iϵ}e^{-iqx}$$
$$\times \overline{u}(p_f,s_f)\gamma ^\mu \gamma ^\rho \gamma ^\mu u(p_i,s_i)\overline{u}(P_f,S_f)\gamma ^\mu \gamma _\rho \gamma ^\mu u(P_i,S_i).$$
$`(8.21)`$
Given $`\gamma ^\mu \gamma ^\mu =\pm 1`$ (no summation implied), it is easy to see that (8.21) does not change when $`\gamma ^\mu `$ and $`\gamma ^\rho `$ switch twice in both the electron and proton (or any fermion) currents. So the interaction does not change after the transformation (8.19):
$$V^{}(x)=V(x).$$
$`(8.22)`$
Combining (8.20), (8.22) and (8.14), we have
$$\partial _\mu \overline{\mathrm{\Psi }}\gamma ^\nu \gamma ^\mu \partial _\nu \mathrm{\Psi }=-i(m+V)\partial _\mu \overline{\mathrm{\Psi }}\gamma ^\mu \mathrm{\Psi }=(m+V)^2\overline{\mathrm{\Psi }}\mathrm{\Psi }.$$
$`(8.23)`$
With (8.18) and (8.23), then (8.17) becomes
$$2\partial _\mu \overline{\mathrm{\Psi }}\partial ^\mu \mathrm{\Psi }=2(m+V)^2\overline{\mathrm{\Psi }}\mathrm{\Psi }.$$
$`(8.24)`$
Finally inserting (8.15), (8.16) and (8.24) into (8.12) we arrive at
$$\partial _\mu \partial ^\mu (\overline{\mathrm{\Psi }}\mathrm{\Psi })=0,$$
$`(8.25)`$
leading to the constraint (8.11) if the gauge function is given by (8.10a). From the above proof, we can see that the gauge condition is quite stringent. It appears that the Lorentz gauge is inherent in our theory. Note: the Coulomb gauge, as a special case of the Lorentz gauge for static fields, is included.
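Spelled out, the cancellation that produces (8.25) is explicit: inserting (8.15), (8.16) and (8.24) into (8.12) gives
$$\partial _\mu \partial ^\mu (\overline{\mathrm{\Psi }}\mathrm{\Psi })=-i\overline{\mathrm{\Psi }}\gamma ^\mu (\partial _\mu V)\mathrm{\Psi }+i\overline{\mathrm{\Psi }}\gamma ^\mu (\partial _\mu V)\mathrm{\Psi }-2(m+V)^2\overline{\mathrm{\Psi }}\mathrm{\Psi }+2(m+V)^2\overline{\mathrm{\Psi }}\mathrm{\Psi }=0.$$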
In summary, with the explicit solutions (8.10) for $`f(x)`$ and $`\alpha (x)`$, the local gauge transformations in our nonlinear QED look like:
$$\mathrm{\Psi }\to \mathrm{\Psi }\mathrm{exp}[-i\alpha (x)],$$
$`(8.26a)`$
$$A_\mu \to A_\mu +\partial _\mu f(x),$$
$`(8.26b)`$
$$\mathrm{or}A_\mu ^\mathrm{T}\to A_\mu ^\mathrm{T}+\frac{1}{e}\partial _\mu \alpha (x),$$
$`(8.26c)`$
which ensure both gauge invariance and Lorentz invariance of the nonlinear equation (8.2) and the Maxwell equation with the Lorentz gauge. Note: our nonlinear QED, involving higher-order fermion fields, is different from the one involving higher-order electromagnetic fields, and is also different from the one involving a self-interaction potential in the Dirac EM-equation, mentioned in the literature (see reference ).
We may also construct a nonlinear QED Lagrangian density
$$L(x)=\overline{\mathrm{\Psi }}\gamma ^\mu (i\partial _\mu -\frac{1}{2}eA_\mu ^\mathrm{T})\mathrm{\Psi }-m\overline{\mathrm{\Psi }}\mathrm{\Psi }-\frac{1}{8}F_{\mu \nu }F^{\mu \nu }\overline{\mathrm{\Psi }}\mathrm{\Psi },$$
$`(8.27)`$
here $`F_{\mu \nu }=\partial _\nu A_\mu -\partial _\mu A_\nu `$ is the second-rank antisymmetric electromagnetic field tensor. The Hamiltonian density changes sign under unitary time inversion, so does the Lagrangian density. All Euler-Lagrange field equations can be obtained by Hamilton’s principle. Taking an infinitesimal arbitrary variation on the electromagnetic field $`\delta A_\mu `$, we get
$$\partial ^\nu [F_{\mu \nu }(\overline{\mathrm{\Psi }}\mathrm{\Psi })]=e(\overline{\mathrm{\Psi }}\gamma _\mu \mathrm{\Psi })(\overline{\mathrm{\Psi }}\mathrm{\Psi }),$$
$`(8.28)`$
then keeping the fermion field unchanged, $`\partial ^\mu (\overline{\mathrm{\Psi }}\mathrm{\Psi })=0`$, we eliminate $`F_{\mu \nu }\partial ^\nu (\overline{\mathrm{\Psi }}\mathrm{\Psi })`$ from (8.28) and cancel $`\overline{\mathrm{\Psi }}\mathrm{\Psi }`$ on both sides to obtain the Maxwell equation $`\partial ^\nu F_{\mu \nu }=eJ_\mu `$ that reduces to (4.18) under the Lorentz gauge (4.19). Similarly, by taking an infinitesimal change on the fermion field $`\delta \overline{\mathrm{\Psi }}`$, we get
$$i\gamma ^\mu \partial _\mu \mathrm{\Psi }-\frac{1}{2}e\gamma ^\mu A_\mu ^\mathrm{T}\mathrm{\Psi }-\frac{1}{2}e(\overline{\mathrm{\Psi }}\gamma ^\mu \mathrm{\Psi })A_\mu \mathrm{\Psi }-m\mathrm{\Psi }-\frac{1}{8}F_{\mu \nu }F^{\mu \nu }\mathrm{\Psi }=0,$$
$`(8.29)`$
then keeping the electromagnetic field unchanged, $`\partial _\nu A_\mu =0`$, we eliminate the last term in (8.29) to obtain the nonlinear equation (8.2) by the relation (8.1).
Though the gauge transformation (8.26) does not keep the Lagrangian density (8.27) invariant, it does keep invariant the integral of the Lagrangian density since the extra term in the integral makes no contribution: $`\int d^4xJ^\mu \partial _\mu \alpha =\int d^4x\partial _\mu (J^\mu \alpha )=0`$. With the definition (8.3), the Maxwell equation (4.18) and the Lorentz gauge (4.19) or (8.25), we get
$$\partial _\rho \partial ^\rho A_\mu ^\mathrm{T}=(\overline{\mathrm{\Psi }}\mathrm{\Psi })J_\mu ^{ext}+2\partial _\rho (\overline{\mathrm{\Psi }}\mathrm{\Psi })\partial ^\rho A_\mu .$$
$`(8.30)`$
Suppose in the free field case ($`J_\mu ^{ext}=0`$), the fluctuations of fermion field and photon field are small, namely
$$\partial _\rho (\overline{\mathrm{\Psi }}\mathrm{\Psi })=ϵ_\rho (x)(\overline{\mathrm{\Psi }}\mathrm{\Psi }),$$
$`(8.31a)`$
$$\partial ^\rho A_\mu =\kappa ^\rho (x)A_\mu ,$$
$`(8.31b)`$
where $`ϵ_\rho (x)`$ and $`\kappa ^\rho (x)`$ are small fluctuation functions. Then we end up with
$$\partial _\rho \partial ^\rho A_\mu ^\mathrm{T}+\mu ^2(x)A_\mu ^\mathrm{T}=0,$$
$`(8.32a)`$
$$\mu ^2(x)=-2ϵ_\rho (x)\kappa ^\rho (x).$$
$`(8.32b)`$
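Explicitly, inserting (8.31) into (8.30) with $`J_\mu ^{ext}=0`$ gives
$$\partial _\rho \partial ^\rho A_\mu ^\mathrm{T}=2ϵ_\rho (x)\kappa ^\rho (x)(\overline{\mathrm{\Psi }}\mathrm{\Psi })A_\mu =2ϵ_\rho (x)\kappa ^\rho (x)A_\mu ^\mathrm{T},$$
which is (8.32a) with the identification (8.32b).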
If $`\mu ^2(x)>0`$, it is a Klein-Gordon equation for a vector boson with a “fluctuation mass” $`\mu (x)`$, i.e., a harmonic oscillator with a fluctuation energy-momentum. If $`\mu ^2(x)<0`$, it shows the decay and oscillation of the interaction field. Both cases are complicated: the time-covariant intermediate vector boson field $`A_\mu ^\mathrm{T}`$, namely the coupled interaction field between the Dirac field $`\mathrm{\Psi }`$ and the Minkowski field $`A_\mu `$, fluctuates in space-time.
By (8.27) and (8.1), the interaction Hamiltonian density can be written as
$$H^\mathrm{I}(x)=\frac{1}{2}e\overline{\mathrm{\Psi }}\gamma ^\mu A_\mu ^\mathrm{T}\mathrm{\Psi }=\frac{1}{2}e\overline{\mathrm{\Psi }}J^\mu A_\mu \mathrm{\Psi }.$$
$`(8.33)`$
If the fluctuation is small, we may simply take the zeroth-order approximation by setting $`\mu (x)=0`$. In this limit, we may treat $`A_\mu ^\mathrm{T}`$ as a time-covariant massless photon and linearize our nonlinear QED to fit experimental data just as accurately as conventional linear QED. Furthermore, we may investigate the nonlinear aspects of the theory, by considering $`A_\mu ^\mathrm{T}`$ as a massive intermediate boson, or considering $`J^\mu A_\mu `$ as a nonsingular convolution shown in (4.24) to avoid singularities, and rewrite (8.27) as:
$$L(x)=\overline{\mathrm{\Psi }}(i\gamma ^\mu \partial _\mu -\frac{1}{2}eJ^\mu A_\mu )\mathrm{\Psi }-m\overline{\mathrm{\Psi }}\mathrm{\Psi }-\frac{1}{8}F_{\mu \nu }F^{\mu \nu }\overline{\mathrm{\Psi }}\mathrm{\Psi }.$$
$`(8.34)`$
With the basic field quantization techniques, we can then deal with various problems in nonlinear QED starting from this Lagrangian density.
Orthodox QED, on the basis of the Dirac EM-equation, is just a linear theory. In this linear QED, the electron is point-like and the self-energy of the electron blows up when its size reduces to zero. Also at each vertex in Feynman diagrams, there is a singular term that causes ultraviolet divergences and requires renormalization. In contrast, a nonlinear equation of the type (4.25) with (4.24) gives certain “soliton-like” solutions, not “point-like” solutions. So the self-energy of the electron would not go to infinity in our nonlinear QED. It appears at first sight that this theory is not renormalizable, for there is a fourth-power term of the fermion field in the Lagrangian. However, since the interaction potential $`eJ^\mu A_\mu `$ in (8.34) is considered as a nonsingular convolution, each vertex in Feynman diagrams is smeared, not singular any more, as pictured in Fig. 1. Consequently, only two powers are left in the perturbation expansion, with the other two being integrated out at each vertex. Hence this theory is free of ultraviolet divergences and is also renormalizable. These are just some general observations. Given the limited scope of this paper, I leave these fundamental issues open for future research.
Much has been discussed; it is, however, not my intention to “solve” this nonlinear QED in a single paper. Rather my emphasis is on its “derivation” by the implementation of unitary time inversion. A complete understanding of nonlinear QED entails much more study, both theoretical and experimental. It has been recognized for a long time that nonlinearity may result in fundamentally new phenomena (see reference for a historical overview). There is a typical problem posed for nonlinear theories: since the linear superposition principle of quantum mechanics is not valid in nonlinear field equations, it becomes quite problematic to make linear perturbation expansions. A great many efforts have been made in the past several decades to seek non-perturbative techniques for solving nonlinear quantum field theories. Many interesting and important issues are still far from being settled.
## 9. REMARKS
The theory established in this paper is strictly limited to the Minkowski flat space-time, where the Lorentz group is the fundamental symmetry group. Naturally we would ponder on the possibility of extending our theory into curved space-time. In a globally curved space-time, the Lorentz group is only locally preserved. A conventional treatment of the Dirac fields in such a space-time is to use a covariant derivative with spin affine connection in the Dirac equation, as outlined in my dissertation (Jin ). As far as the electromagnetic interaction is concerned, a logical step is to include an electro-dynamical interaction potential rather than an external potential in the Dirac equation, similar to what I have shown here in this paper. After all, a curved space-time with Lorentzian signature reduces locally to Minkowski space-time as a limit. Specifically, in a static space-time with a time-independent metric, the time component of the spin affine connection vanishes, so we can separate time from space and anticipate a positive-negative symmetric energy spectrum (Jin ). However, in a general space-time with a time-dependent metric, time and space are not separable, time inversion is not applicable, and it is no longer possible to obtain a positive-negative symmetric energy spectrum.
## ACKNOWLEDGMENTS
The author would like to thank Professor Jonathan Dimock for helpful discussions. The author is also grateful to many colleagues for their comments.
## REFERENCES
1. E.P. Wigner, Nachr. Akad. Wiss. Gottingen, Math.-Physik, 546 (1932)
2. E.P. Wigner, Group Theory (Academic Press, New York, 1959)
3. E.P. Wigner, Symmetries and Reflections (Indiana University Press, Bloomington, 1967)
4. A. Einstein, The Principle of Relativity (Dover, New York, 1952)
5. P.A.M. Dirac, Proc. Roy. Soc. A117, 610 (1928); A118, 351 (1928)
6. O. Klein, Z. Phys. 53, 157 (1929)
7. J.D. Bjorken and S.D. Drell, Relativistic Quantum Mechanics (McGraw-Hill, New York, 1964)
8. P. Zeeman, Phil. Mag. (5) 43, 226 (1897); (5) 44, 55, 255 (1897)
9. H.A. Lorentz, The Theory of Electrons (Dover, New York, 1952)
10. P.A.M. Dirac, Spinors in Hilbert Space (Plenum Press, New York, 1974)
11. P.A.M. Dirac, The Principles of Quantum Mechanics (Oxford University Press, London, 1958)
12. H. Minkowski, Ann. Phys. Lpz. 47, 927 (1915)
13. P.A.M. Dirac, Proc. Roy. Soc. A126, 360 (1930)
14. C.D. Anderson, Science 76, 238 (1932); Phys. Rev. 43, 491 (1933)
15. W. Greiner, Relativistic Quantum Mechanics (Springer-Verlag, Berlin, 1990)
16. W. Greiner, B. Muller, and J. Rafelski, Quantum Electrodynamics of Strong Fields (Springer-Verlag, Berlin, 1985)
17. G.N. Lewis and F.H. Spedding, Phys. Rev. 43, 964 (1933)
18. F.H. Spedding, C.D. Shane, and N.S. Grace, Phys. Rev. 44, 58 (1933)
19. F. Paschen, Ann. d. Phys. 82, 692 (1926)
20. F. Paschen and E. Back, Ann. d. Phys. 39, 897 (1912); 40, 960 (1913)
21. H.E. White, Introduction to Atomic Spectra (McGraw-Hill, New York, 1934)
22. H.G. Kuhn, Atomic Spectra (Academic Press, New York, 1962)
23. R.P. Feynman, Phys. Rev. 76, 749 (1949); 76, 769 (1949)
24. N.F. Mott, Proc. Roy. Soc. A124, 426 (1929); A135, 429 (1932)
25. R.W. McAllister and R. Hofstadter, Phys. Rev. 102, 851 (1956)
26. R.N. Cahn and G. Goldhaber, The Experimental Foundations of Particle Physics (Cambridge University Press, Cambridge, 1989)
27. J.D. Bjorken and S.D. Drell, Relativistic Quantum Fields (McGraw-Hill, New York, 1965)
28. I.J.R. Aitchison and A.J.G. Hey, Gauge Theories in Particle Physics (Adam Hilger LTD, Bristol, 1982)
29. Asim O. Barut, Alwyn van der Merwe, and Jean-Pierre Vigier, eds., Quantum, Space and Time — The Quest Continues: Studies and Essays in Honour of Louis de Broglie, Paul Dirac and Eugene Wigner (Cambridge University Press, Cambridge, 1984)
30. W.M. Jin, Dirac Quantum Fields in Curved Space-Time (Ph.D. Dissertation, 1999)
31. W.M. Jin, Class. Quantum Grav. 15 3163 (1998) gr-qc/0009009
32. W.M. Jin, Class. Quantum Grav. 17 2949 (2000) gr-qc/0009010 |
The Excess Wing in the Dielectric Loss of Glass-Formers: A Johari-Goldstein 𝛽-Relaxation?
## Abstract
Dielectric loss spectra of glass-forming propylene carbonate and glycerol at temperatures above and below $`T_g`$ are presented. By performing aging experiments lasting up to five weeks, equilibrium spectra below $`T_g`$ have been obtained. During aging, the excess wing, showing up as a second power law at high frequencies, develops into a shoulder. The results strongly suggest that the excess wing, observed in a variety of glass formers, is the high-frequency flank of a $`\beta `$-relaxation.
In the frequency-dependent dielectric loss, $`\epsilon \mathrm{"}(\nu )`$, of many glass-formers, an excess contribution to the high-frequency power law of the $`\alpha `$-peak, $`\epsilon \mathrm{"}\sim \nu ^{-\beta }`$, shows up about 2-3 decades above the $`\alpha `$-peak frequency $`\nu _p`$. This excess wing can be reasonably well described by a second power law, $`\epsilon \mathrm{"}\sim \nu ^{-b}`$, with $`b<\beta `$ . An excess contribution at high frequencies was already noted in the pioneering work of Davidson and Cole and it is a common feature in glass-forming liquids without a well-resolved $`\beta `$-relaxation (see below). The physical origin of the excess wing is commonly considered as one of the great mysteries of glass physics. Despite the fact that some theoretical approaches describing the excess wing have been proposed , up to now there is no commonly accepted explanation of its microscopic origin. In addition, some phenomenological descriptions have been proposed . Most successful is the so-called Nagel scaling , which leads to a collapse of the $`\epsilon \mathrm{"}(\nu )`$ curves (including the excess wing) for different temperatures and various materials onto one master curve.
In various glass-formers additional relaxation processes, usually termed $`\beta `$-processes, are clearly seen in the loss spectra, showing up as a shoulder or even a second peak at $`\nu >\nu _p`$ . Such relaxations in supercooled liquids without intramolecular degrees of freedom were systematically investigated already three decades ago by Johari and Goldstein and basically were ascribed to intrinsic relaxation phenomena of the glass-forming process. We recall that these Johari-Goldstein $`\beta `$-relaxations are fundamentally different from $`\beta `$-relaxations due to intramolecular degrees of freedom. Commonly it is assumed that the excess wing and $`\beta `$-relaxations are different phenomena and even the existence of two classes of glass-formers was proposed - ”type A” with an excess wing and ”type B” with a $`\beta `$-process . Based on experimental observations it seems natural to explain Johari-Goldstein $`\beta `$-processes and excess wing phenomena on the same footing. Indeed such a possibility was already considered earlier and from an experimental point of view, it cannot be excluded that the excess wing is simply the high-frequency flank of a $`\beta `$-peak, hidden under the dominating $`\alpha `$-peak.
However, only the presence of a shoulder with a downward curvature can provide an unequivocal proof for an underlying $`\beta `$-peak. In materials with a well pronounced $`\beta `$-peak, the $`\alpha `$\- and $`\beta `$-relaxation timescales increasingly separate with decreasing temperature. Therefore, within this picture it may be expected that at low temperatures the excess wing transforms into a shoulder, due to less interference with the $`\alpha `$-peak. In marked contrast to this prediction, for various glass-forming liquids far below the glass temperature $`T_g`$, $`\epsilon \mathrm{"}(\nu )`$ follows a well-developed power law, without any indication of a shoulder . However, measurements of glass-forming materials at low temperatures near or below $`T_g`$ suffer from the fact that thermodynamic equilibrium is not reached in measurements using usual cooling rates. In addition, the loss becomes very low, yielding increasingly high experimental uncertainties. Clearly, high-precision equilibrium measurements are necessary to investigate the ”true” behavior of the excess wing at low temperatures.
Therefore we performed high-precision dielectric measurements keeping the sample at a constant temperature, some $`\mathrm{K}`$ below $`T_g`$, for up to five weeks, which ensured that thermodynamic equilibrium was indeed reached. The experiments were performed on two prototypical glass-formers known to exhibit well pronounced excess wings, propylene carbonate (PC, $`T_g\approx 159\mathrm{K}`$) and glycerol ($`T_g\approx 185\mathrm{K}`$). For the measurements, parallel plane capacitors having an empty capacitance up to $`100\mathrm{pF}`$ were used. Due to the approach of the fictive towards the real temperature during aging, a thermal contraction of the sample may occur. Its magnitude in one dimension can be estimated as $`\mathrm{\Delta }l/l\approx 3\times 10^{-3}`$, which has a negligible effect on the results. High-precision measurements of the dielectric permittivity in the frequency range $`10^{-4}\mathrm{Hz}\le \nu \le 10^6\mathrm{Hz}`$ were performed using a Novocontrol alpha-analyser. It allows for the detection of values of $`\mathrm{tan}\delta `$ as low as $`10^{-4}`$. At selected temperatures and aging times, additional frequency sweeps at $`20\mathrm{Hz}\le \nu \le 1\mathrm{MHz}`$ were performed with the autobalance bridges Hewlett-Packard HP4284 and HP4285. To keep the samples at a fixed temperature for up to five weeks, a closed-cycle refrigerator system was used. The
sample was cooled from a temperature $`20\mathrm{K}`$ above $`T_g`$ with the maximum possible cooling rate of about $`3\mathrm{K}/\mathrm{min}`$. The final temperature was reached without any temperature undershoot. As zero point of the aging times reported below, we took the time when the desired temperature was reached, about $`200\mathrm{s}`$ after passing $`T_g`$. The temperature was kept stable within $`0.02\mathrm{K}`$ during the five weeks measurement time.
Figure 1 shows $`\epsilon \mathrm{"}(\nu )`$ of PC and glycerol at temperatures some $`\mathrm{K}`$ below $`T_g`$ during aging. A strong dependence of the spectra on aging time $`t`$ is observed. The spectra measured after the maximum time of $`t=10^{6.5}\mathrm{s}`$ (squares) are identical to those obtained after $`t=10^6\mathrm{s}`$ (stars), clearly demonstrating that equilibrium was reached. In Fig. 2 these equilibrium spectra are shown, together with equilibrium curves taken at higher temperatures, partly published earlier . Obviously, in the frequency window of Fig. 1 mainly the excess wing region is covered; the $`\alpha `$-peak is located at much lower frequencies, leading to a somewhat steeper increase towards low frequencies only. In Fig. 1 for both materials indeed the typical power law, characteristic of an excess wing, is observed for short $`t`$. However, with increasing $`t`$ the excess wing successively develops into a shoulder! This behaviour strongly suggests that a relaxation is the origin of the excess wing observed at shorter times or
higher temperatures (Fig. 2). Admittedly, for glycerol the curvature in $`\mathrm{log}_{10}\epsilon \mathrm{"}(\mathrm{log}_{10}\nu )`$ is quite subtle (Fig. 1b). However, further evidence is provided by Fig. 3 where the derivatives of the lowest curves in Figs. 1 and 2 are shown. A maximum is observed, clearly indicative of a $`\beta `$-process . In the region of the shoulder, the values of $`\mathrm{tan}\delta `$ ranging between $`5\times 10^{-3}`$ and $`2\times 10^{-2}`$ are well above the resolution limits of the measurement. In addition, for both materials measurements using the autobalance bridges (not shown) agree well with those obtained with the alpha-analyser. Finally, the response of the empty capacitor was measured at the temperatures of Fig. 1. The resulting loss of $`\epsilon \mathrm{"}<1.5\times 10^{-4}`$ in the region of $`10\mathrm{Hz}\le \nu \le 100\mathrm{kHz}`$ excludes a non-intrinsic origin of the observed shoulder.
The results of Fig. 1 are interpreted as follows: During aging the fictive temperature successively approaches the ”real” temperature, leading to a strong shift of the $`\alpha `$-peak towards lower frequencies. The relaxation times of $`\beta `$-processes usually exhibit a weaker temperature dependence than those of the $`\alpha `$-process. Therefore the $`\beta `$-process causing the excess wing can be expected to be less affected by the aging. The different aging dependences of the $`\alpha `$\- and the $`\beta `$-process can also easily be deduced from the fact that the spectra at different aging times cannot be scaled onto each other by a horizontal shift in Fig. 1. In addition, aging experiments reported by Johari and
coworkers revealed a negligible change of the $`\beta `$-peak frequency with aging time . Overall, with increasing aging time, i.e. decreasing effective temperature, the time scales of the $`\alpha `$\- and $`\beta `$-process become successively separated and finally the presence of a $`\beta `$-relaxation is revealed by the appearance of a shoulder.
A close inspection of dielectric measurements at low temperatures, reported in the literature, reveals some further evidence of a $`\beta `$-relaxation being the origin of the excess wing: Already in the spectrum obtained by Leheny and Nagel after aging of glycerol for $`2.5\times 10^6\mathrm{s}`$ at $`177.6\mathrm{K}`$, a vague indication of a shoulder is present. However, possibly due to doubt of its significance, these authors paid no attention to this feature. In addition, in various other published results, near or just below $`T_g`$ some indications for a shoulder show up , again being ignored in most of these publications. With one exception, the thermal history was not reported in detail, and it can be doubted whether thermal equilibrium was reached. In the study of Wagner and Richert , in highly quenched Salol a strong secondary peak appeared below $`T_g`$ which vanished after heating to $`T_g`$. In addition, in isochronal measurements of various quenched glass-formers non-equilibrium secondary relaxations were detected. From these results the question arises whether the subtle indications for a $`\beta `$-relaxation seen around $`T_g`$ are pure non-equilibrium effects and whether this relaxation may completely vanish in equilibrium. Only the present aging experiments could prove that the excess wing is due to a $`\beta `$-relaxation that is an equilibrium phenomenon. In Fig. 1 the amplitude of the $`\beta `$-peak (only seen as an excess wing at short times) may be suspected to diminish with aging. Interestingly, a similar behavior was observed in systems with well resolved $`\beta `$-relaxations . Finally it should be mentioned that evidence for a $`\beta `$-relaxation in various systems without a clearly resolvable second relaxation peak (including PC) was found by using a ”difference isochrone” method.
In addition, in glycerol a high-frequency shoulder was detected by performing high-pressure dielectric experiments . Further evidence for $`\beta `$-relaxations in typical excess-wing glass-formers has also been obtained from other experimental methods, e.g. calorimetry or nuclear magnetic resonance . A strong corroboration of the present results is also provided by our recent results in ethanol, which prove that the well-pronounced excess wing in this glass-former is also due to a $`\beta `$-relaxation .
What further information on the $`\beta `$-relaxation causing the excess wing can be gained from the present data? The $`\alpha `$-peaks in PC and glycerol can well be fitted using the empirical Cole-Davidson (CD) function . In materials with well pronounced $`\beta `$-peaks, their spectral shape can often be described by the Cole-Cole (CC) function . In Fig. 4 we demonstrate that the equilibrium curves of Fig. 1 can well be fitted using a sum of a CD and a CC function. Similar analyses were performed in . In Fig. 3 the corresponding derivatives are shown – again a reasonable agreement can be stated. In the same way also the present spectra at higher temperatures (Fig. 2) can be described .
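To make the two-component description concrete, the following minimal numerical sketch evaluates such a CD-plus-CC loss curve (all function names and parameter values here are ours and purely illustrative; they are not the fit parameters of this work):

```python
import numpy as np

def eps_cd(nu, deps, tau, beta):
    # Cole-Davidson form: eps*(nu) = deps / (1 + i*2*pi*nu*tau)**beta
    return deps / (1.0 + 2j * np.pi * nu * tau) ** beta

def eps_cc(nu, deps, tau, alpha):
    # Cole-Cole form: eps*(nu) = deps / (1 + (i*2*pi*nu*tau)**alpha)
    return deps / (1.0 + (2j * np.pi * nu * tau) ** alpha)

def total_loss(nu, d1, t1, b, d2, t2, a):
    # total dielectric loss eps''(nu): CD alpha-peak plus CC beta-peak
    return -np.imag(eps_cd(nu, d1, t1, b) + eps_cc(nu, d2, t2, a))

nu = np.logspace(-4, 6, 400)  # frequency grid in Hz
eps2 = total_loss(nu, 60.0, 1e2, 0.5, 2.0, 1e-3, 0.3)  # illustrative values
```

A least-squares fit of the logarithm of such a model to the measured spectra would then yield the relaxation strengths, times and width parameters discussed below.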
In this context it is of interest that recently for ”type B” glass-formers a correlation of $`\tau _\beta `$ and the Kohlrausch-exponent $`\beta _{KWW}`$ (describing the width of the $`\alpha `$-peak), both at $`T_g`$, was found : log$`{}_{10}{}^{}\tau _{\beta }^{}(T_g)`$ increases monotonically with $`\beta _{KWW}(T_g)`$. It was noted that those glass-formers that show no well-resolved $`\beta `$-relaxation (including PC and glycerol) have relatively large values of $`\beta _{KWW}(T_g)`$. As the mentioned correlation implies that the $`\alpha `$\- and $`\beta `$-timescales approach each other with increasing $`\beta _{KWW}(T_g)`$, the unobservability of a shoulder in those materials is easily rationalized. In an explanation of this behavior within the coupling model (CM) was proposed. Recently it was shown that the Nagel-scaling can be explained within this framework and even the deviation of $`\tau _\beta (T)`$ from thermally activated behavior seems to be consistent with the CM.
In summary, by performing high-precision measurements below $`T_g`$ in thermodynamic equilibrium we were able to resolve the presence of $`\beta `$-relaxations in two typical excess wing glass-formers. Our results strongly suggest that the so-far mysterious excess wing, showing up at higher temperatures, is simply the high-frequency flank of the loss-peaks caused by these $`\beta `$-relaxations. Of course, the mere identification of the excess wing with a $`\beta `$-relaxation provides no explanation, e.g. for the Nagel scaling or the question why glass-formers seem to fall into two classes - those with a well-separated $`\beta `$-relaxation and those without . But the knowledge of the true nature of the excess wing certainly is a prerequisite for any microscopic explanation of these phenomena, (e.g. ) and hopefully will enhance our understanding of the glass transition in general.
We thank K.L. Ngai for stimulating discussions. This work was supported by the DFG, Grant-No. LO264/8-1 and partly by the BMBF, contract-No. EKM 13N6917.
Gauge Interactions in the Dual Standard Model
\[
## Abstract
We present a geometric argument for the transformation properties of $`SU(5)\to S(U(3)\times U(2))`$ monopoles under the residual gauge symmetry. This strongly supports the proposal that monopoles of the dual standard model interact via a gauge theory of the standard model symmetry group, with the monopoles having the same spectrum as the standard model fermions.
\]
The dual standard model has been proposed as a way of unifying both matter and interaction . Monopoles from the Georgi-Glashow
$`SU(5)`$ $`\to `$ $`S(U(3)\times U(2))`$ (1)
$`=SU(3)_\mathrm{C}\times SU(2)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6`$ (2)
grand unification have precisely the same spectrum as the observed fermions in the standard model; it is therefore natural to associate these standard model fermions with such monopoles. In consequence of this we have calculated the gauge couplings at monopole unification
$$g_\mathrm{C}/g_\mathrm{I}=3,g_\mathrm{Y}/g_\mathrm{I}=2/\sqrt{15}.$$
(3)
Both values are satisfied by the standard model gauge couplings at a scale of a few GeV.
In this letter we examine the transformation properties of these monopoles under the residual $`S(U(3)\times U(2))`$ symmetry. As such we show that gauge transformations of the fundamental monopoles are entirely consistent with the fundamental representation of the standard model symmetry group. This gives strong support to the proposal that the long range interaction of these monopoles is via a gauge interaction of $`SU(3)_\mathrm{C}`$, $`SU(2)_\mathrm{I}`$ and $`U(1)_\mathrm{Y}`$ symmetry groups.
We shall consider firstly the fundamental monopoles. These are embedded $`SU(2)\to U(1)`$ monopoles,
$`SU(5)`$ $`\to `$ $`S(U(3)\times U(2))`$ (4)
$`\cup `$ $`\cup `$ (5)
$`SU(2)_Q`$ $`\to `$ $`U(1)_Q.`$ (6)
It is clear that these fundamental monopoles have a degeneracy of embeddings. The purpose of this letter is to quantify this degeneracy.
To quantify the space of embeddings we shall label the embedding of the fundamental monopoles. For this it will prove useful to split the $`su(2)`$ algebra into components
$$su(2)_Q=u(1)_Q\oplus m_Q.$$
(7)
Here $`u(1)_Q`$ is the Lie algebra of $`U(1)_Q`$, and $`m_Q`$ is its associated orthogonal component. The direct sum is with respect to the standard inner product on $`su(5)`$, given by $`\langle X,Y\rangle =-\mathrm{tr}XY`$.
One useful label for the fundamental monopoles is their magnetic charge $`Q`$ (we will see later that there is another more useful label). The magnetic charge defines the asymptotic magnetic field of a monopole,
$$B^k\sim \frac{\widehat{r}^k}{r^2}Q.$$
(8)
and is associated with the embedding
$$U(1)_Q=\mathrm{exp}(𝐑Q)\subset S(U(3)\times U(2)),$$
(9)
normalised by
$$\mathrm{exp}(2\pi g_uQ)=1,$$
(10)
with $`g_u`$ the unified $`SU(5)`$ gauge coupling. Additionally the embedding in Eq. (9) is associated with the topology of $`SU(5)/S(U(3)\times U(2))`$, being a non-trivial element of
$$\pi _1[S(U(3)\times U(2))]=\pi _2[SU(5)/S(U(3)\times U(2))].$$
(11)
Following , we decompose the magnetic charge into colour, weak isospin and weak hypercharge components
$$Q=\frac{1}{g_u}\left(T_\mathrm{C}+\frac{1}{2}T_\mathrm{I}+\frac{1}{3}T_\mathrm{Y}\right),$$
(12)
where $`T_\mathrm{C}\in su(3)_\mathrm{C}`$ may be either
$`T_\mathrm{C}^r`$ $`=`$ $`i\mathrm{diag}(+\frac{2}{3},-\frac{1}{3},-\frac{1}{3},0,0),`$ (13)
$`T_\mathrm{C}^g`$ $`=`$ $`i\mathrm{diag}(-\frac{1}{3},+\frac{2}{3},-\frac{1}{3},0,0),`$ (14)
$`T_\mathrm{C}^b`$ $`=`$ $`i\mathrm{diag}(-\frac{1}{3},-\frac{1}{3},+\frac{2}{3},0,0),`$ (15)
and $`T_\mathrm{I}\in su(2)_\mathrm{I}`$ may be either
$$T_\mathrm{I}^\pm =\pm i\mathrm{diag}(0,0,0,1,-1),$$
(16)
whilst $`T_\mathrm{Y}\in u(1)_\mathrm{Y}`$ may only be
$$T_\mathrm{Y}=i\mathrm{diag}(1,1,1,-\frac{3}{2},-\frac{3}{2})$$
(17)
The above degeneracies indicate that the fundamental monopoles form representations of $`SU(3)_\mathrm{C}`$, $`SU(2)_\mathrm{I}`$ and $`U(1)_\mathrm{Y}`$ with the corresponding dimension, namely the fundamental representations.
The purpose of this letter is to investigate these degeneracies. We interpret the degeneracies as being due to gauge freedom of the monopole embedding. In this light we show that the gauge degeneracy of the fundamental monopoles is consistent with the fundamental representations of the residual symmetry group $`S(U(3)\times U(2))`$.
On the issue of duality, we shall show that the dual of the residual symmetry group $`SU(3)\times SU(2)\times U(1)`$ is also consistent with the gauge degeneracy of the monopoles.
A rigid (or global) gauge transformation of the fundamental monopole is defined by an element $`hS(U(3)\times U(2))`$ and transforms the magnetic field as
$$B^k\to \mathrm{Ad}(h)B^k=hB^kh^{-1}.$$
(18)
Correspondingly the $`su(2)`$ embedding transforms under
$$su(2)_Q\to \mathrm{Ad}(h)su(2)_Q,$$
(19)
so that $`Q`$ transforms appropriately. Hence the components of Eq. (7) transform as
$`u(1)_Q`$ $`\to `$ $`\mathrm{Ad}(h)u(1)_Q,`$ (20)
$`m_Q`$ $`\to `$ $`\mathrm{Ad}(h)m_Q.`$ (21)
One may see that $`Q`$ is not a good quantity for examining the action of $`S(U(3)\times U(2))`$ on the monopole by considering the action of elements $`h\in U(1)_Q`$. These take $`u(1)_Q\to u(1)_Q`$ identically, whilst acting non-trivially on elements of $`m_Q`$, taking them to another element of $`m_Q`$.
Thus to obtain all of the possible monopole embeddings we must examine the action of $`S(U(3)\times U(2))`$ on $`m_Q`$. This may be achieved by considering the action on any non-trivial element of $`m_Q`$. Then the manifold of all equivalent fundamental monopoles under a rigid gauge transformation is
$$M(m_Q)\cong \frac{S(U(3)\times U(2))}{C(m_Q)},$$
(22)
with the centraliser
$$C(m_Q)=\{h\in S(U(3)\times U(2)):\mathrm{Ad}(h)m_Q=m_Q\}$$
(23)
representing those transformations that leave $`m_Q`$ invariant.
We shall calculate $`C(m_Q)`$ by considering its action on a monopole embedding. In particular consider a magnetic charge
$$Q^{r+}=\frac{1}{g_u}\left(T_\mathrm{C}^r+\frac{1}{2}T_\mathrm{I}^++\frac{1}{3}T_\mathrm{Y}\right),$$
(24)
having explicit components $`g_uQ_{jk}^{r+}=i(\delta _{j1}\delta _{k1}-\delta _{j5}\delta _{k5})`$. The $`su(2)`$ algebra associated with this is generated by $`\{g_uQ^{r+},X^{r+},Y^{r+}\}`$, where the explicit components are
$`X_{jk}^{r+}`$ $`=`$ $`\delta _{j5}\delta _{k1}-\delta _{j1}\delta _{k5},`$ (25)
$`Y_{jk}^{r+}`$ $`=`$ $`i(\delta _{j5}\delta _{k1}+\delta _{j1}\delta _{k5}).`$ (26)
Then $`[X^{r+},Y^{r+}]=-2g_uQ^{r+}`$.
To exhibit the group structure we require the generators $`T_\mathrm{C}`$, $`T_\mathrm{Y}`$ and $`T_\mathrm{I}`$ expressed in a basis normalised to the topology of $`SU(5)/S(U(3)\times U(2))`$. To this end we define
$`C=\frac{3}{2}T_\mathrm{C}^r,`$ (27)
$`I=T_\mathrm{I}^+,`$ (28)
$`Y=\frac{2}{5}T_\mathrm{Y},`$ (29)
such that
$$\mathrm{Ad}(e^{2\pi C})|_{m_Q}=\mathrm{Ad}(e^{2\pi I})|_{m_Q}=\mathrm{Ad}(e^{2\pi Y})|_{m_Q}=1.$$
(30)
In particular
$$\mathrm{Ad}(e^{\theta _CC})\mathrm{Ad}(e^{\theta _II})\mathrm{Ad}(e^{\theta _YY})m_Q=e^{i(\theta _C+\theta _I+\theta _Y)}m_Q$$
(31)
From this we obtain
$$C(m_Q)=SU(2)_C\times U(1)_{\mathrm{Y}-\mathrm{I}}\times U(1)_{\mathrm{I}+\mathrm{Y}-2\mathrm{C}}/𝐙_2,$$
(32)
where $`𝐙_2`$ represents an intersection between $`SU(2)_C`$ and $`U(1)_{\mathrm{I}+\mathrm{Y}-2\mathrm{C}}`$. Thus, in conclusion the manifold of rigidly gauge equivalent fundamental monopoles is
$$M(m_Q)=\frac{S(U(3)\times U(2))}{SU(2)_C\times U(1)_{\mathrm{Y}-\mathrm{I}}\times U(1)_{\mathrm{I}+\mathrm{Y}-2\mathrm{C}}/𝐙_2}.$$
(33)
This is the first main result of this letter.
We should comment that it is possible to show all of the fundamental monopoles lie within the same equivalence class. This is by associating the different monopole embeddings with the spectrum of roots corresponding to the roots of $`SU(5)`$ that are not roots of $`S(U(3)\times U(2))`$. The action of $`S(U(3)\times U(2))`$ upon the associated root spaces takes one monopole embedding to another. We shall discuss this fully in another publication .
Now we shall consider the corresponding action of $`S(U(3)\times U(2))`$ upon a fermion in the fundamental representations of colour, weak isospin and weak hypercharge. In the standard model this corresponds to the $`(u,d)_L`$ quark doublet. For $`f_{(u,d)_L}\in 𝐂^{3\times 2}`$ the action is
$$f_{(u,d)_L}\to h_\mathrm{Y}h_\mathrm{C}f_{(u,d)_L}h_\mathrm{I},$$
(34)
with $`h_\mathrm{Y}`$ interpreted as a complex phase and $`h_\mathrm{C}`$ and $`h_\mathrm{I}`$ elements of $`SU(3)_C`$ and $`SU(2)_I`$ respectively.
Consequently we may form a manifold of gauge equivalent fermion states from the actions of $`S(U(3)\times U(2))`$ on this fermion $`f_{(u,d)_L}`$. The manifold is of the form
$$M(f_{(u,d)_L})\cong \frac{S(U(3)\times U(2))}{C(f_{(u,d)_L})},$$
(35)
with the stability group
$`C(f)=\{h\in S(U(3)\times U(2)):hf=f\}`$ (36)
representing those transformations that leave $`f`$ invariant.
Without loss of generality we shall consider acting on the specific element $`f_{jk}=\delta _{j1}\delta _{k1}`$. Again the generators used are normalised to the topology of $`S(U(3)\times U(2))`$,
$`C`$ $`=`$ $`i\mathrm{diag}(1,1,-2),`$ (37)
$`I_3`$ $`=`$ $`i\mathrm{diag}(1,-1),`$ (38)
$`Y`$ $`=`$ $`i\mathrm{diag}(1,1),`$ (39)
such that $`\mathrm{exp}(2\pi Y)f=\mathrm{exp}(2\pi I_3)f=\mathrm{exp}(2\pi C)f=f`$. In particular
$$\mathrm{exp}(\theta _1Y)\mathrm{exp}(\theta _3C)f\mathrm{exp}(\theta _2I_3)=e^{i(\theta _1+\theta _2+\theta _3)}f.$$
(40)
From this we obtain
$$C(f_{(u,d)_L})=SU(2)_C\times U(1)_{\mathrm{Y}-\mathrm{I}}\times U(1)_{\mathrm{Y}-2\mathrm{C}}/𝐙_2$$
(41)
and thus we conclude that the manifold of rigidly gauge equivalent fundamental fermions is
$$M(f_{(u,d)_L})=\frac{S(U(3)\times U(2))}{SU(2)_C\times U(1)_{\mathrm{Y}-\mathrm{I}}\times U(1)_{\mathrm{Y}-2\mathrm{C}}/𝐙_2}.$$
(42)
By comparing the above manifolds we see that both are precisely the same
$$M(m_Q)=M(f_{(u,d)_L}).$$
(43)
This is our main result. It shows an equivalence between the transformation properties of fundamental monopoles and $`(u,d)_L`$ fermions. This supports that fundamental monopoles transform under the same representation as the $`(u,d)_L`$ fermion. Namely the fundamental representation of $`S(U(3)\times U(2))`$.
We now consider the action of the dual group $`S(U(3)\times U(2))^v=SU(3)\times SU(2)\times U(1)`$ on the fermion $`f`$. Then the associated gauge orbit is $`S(U(3)\times U(2))^v/C^v(f)`$. However it is clear that $`C^v(f)=C(f)\times 𝐙_6`$. Thus fermion gauge orbit in Eq. (42) under $`S(U(3)\times U(2))`$ is the same as the fermion gauge orbit under the dual group $`S(U(3)\times U(2))^v`$. In other words the gauge orbits of monopoles are consistent with both the residual symmetry group and the dual residual symmetry group.
It is an interesting feature of the above arguments that they imply an association between the long range interactions of these monopoles and the gauge interactions of a particle transforming under the fundamental representation of $`SU(3)_\mathrm{C}`$, $`SU(2)_\mathrm{I}`$ and $`U(1)_\mathrm{Y}`$ gauge fields. In particular note the transformations of Eq. (18) are local. Then the monopole moves around its gauge orbit under transformations of a local symmetry. This feature should be viewed as the background to our work on unification in the dual standard model . There the starting assumption is that the monopoles interact via a gauge interaction, and as a consequence we derive relations between the gauge couplings at monopole unification.
Also of note is that the techniques used here relate purely to the symmetry properties of the model. Thus our derivation should be very general, and we expect that techniques used here will be applicable to other situations of interest. Other examples that should be amenable to this approach include monopoles from various symmetry breakings, and the long range interactions of vortices.
We now move on to a discussion of the gauge equivalence classes for the other monopoles. These are formed from stable bound states of fundamental monopoles .
Writing the magnetic charges of these other stable monopoles as
$$Q_{q_\mathrm{Y}}=\frac{1}{g_u}\left(q_\mathrm{C}T_\mathrm{C}+q_\mathrm{I}T_\mathrm{I}+q_\mathrm{Y}T_\mathrm{Y}\right),$$
(44)
where a particular state is labelled by its hypercharge, the following spectrum of stable monopoles is obtained:
| | $`q_\mathrm{C}`$ | $`q_\mathrm{I}`$ | $`q_\mathrm{Y}`$ | $`d_\mathrm{C}`$ | $`d_\mathrm{I}`$ | $`d_\mathrm{Y}`$ | f |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`(e^{2i\pi /3},1)`$ | 1 | 1/2 | 1/3 | 3 | 2 | 1 | $`(u,d)_L`$ |
| $`(e^{2i\pi /3},1)`$ | -1 | 0 | 2/3 | 3 | 0 | 1 | $`\overline{d}_L`$ |
| $`(1,1)`$ | 0 | 1/2 | 1 | 0 | 2 | 1 | $`(\overline{\nu },\overline{e})_R`$ |
| $`(e^{2i\pi /3},1)`$ | 1 | 0 | 4/3 | 3 | 0 | 1 | $`u_R`$ |
| $`(e^{2i\pi /3},1)`$ | - | - | - | - | - | - | - |
| $`(1,1)`$ | 0 | 0 | 2 | 0 | 0 | 1 | $`\overline{e}_L`$ |
The degeneracies of each bound state have also been included, relating to the degeneracy in Eqs. (13,14,15) and Eq. (16). We have also included the standard model fermions that have the same charges as the monopoles.
We shall consider firstly the gauge equivalence classes of the fermions in the standard model. As before these are of the form
$$M(f)=\frac{S(U(3)\times U(2))}{C(f)}$$
(45)
with $`C(f)`$ the centraliser of the fermion’s gauge transformations, namely
$$C(f)=\{h\in S(U(3)\times U(2)):hf=f\}.$$
(46)
A calculation analogous to that carried out in the first part of this letter gives:
$`C(f_{(u,d)_L})`$ $`=`$ $`SU(2)_\mathrm{C}\times U(1)_{\mathrm{Y}-\mathrm{I}}\times U(1)_{\mathrm{Y}-2\mathrm{C}}/𝐙_2`$ (47)
$`C(f_{\overline{d}_L})`$ $`=`$ $`SU(2)_\mathrm{C}\times SU(2)_\mathrm{I}\times U(1)_{\mathrm{Y}-2\mathrm{C}}/𝐙_2`$ (48)
$`C(f_{(\overline{\nu },\overline{e})_L})`$ $`=`$ $`SU(3)_\mathrm{C}\times U(1)_{\mathrm{Y}-\mathrm{I}}`$ (49)
$`C(f_{u_R})`$ $`=`$ $`SU(2)_\mathrm{C}\times SU(2)_\mathrm{I}\times U(1)_{\mathrm{Y}-2\mathrm{C}}/𝐙_2`$ (50)
$`C(f_{\overline{e}_L})`$ $`=`$ $`SU(3)_\mathrm{C}\times SU(2)_\mathrm{I}.`$ (51)
We shall now turn to the problem of determining the gauge equivalence classes of the monopoles. However, our analysis is complicated by the fact that higher charged stable monopoles are not embedded monopoles. This point was crucial for our analysis in the first part of this letter, where we associated a $`m_Q`$ with the monopole embedding and described the group actions upon this.
Instead we shall deal only with the magnetic charge of the monopoles. Observe that for the $`(u,d)_L`$ fundamental monopole the subgroup of $`S(U(3)\times U(2))`$ that leaves the magnetic charge invariant is
$$C(Q_{1/3})=\{h\in S(U(3)\times U(2)):\mathrm{Ad}(h)Q_{1/3}=Q_{1/3}\}$$
(52)
for which explicit calculation yields
$$C(Q_{1/3})=SU(2)_\mathrm{C}\times U(1)_\mathrm{C}\times U(1)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6.$$
(53)
It is clear that this is related to $`C(m_Q)`$ by
$$C(Q_{1/3})=C(m_Q)\times U(1)_Q/𝐙_6.$$
(54)
Physically this represents $`U(1)_Q`$ acting trivially upon $`Q`$, whilst acting non-trivially upon the monopole. Whilst $`U(1)_Q`$ does not appear in the action of $`S(U(3)\times U(2))`$ on the magnetic charge, it is still important in its action upon the monopole embedding.
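A simple dimension count is consistent with this: Eq. (53) gives $`\mathrm{dim}C(Q_{1/3})=3+1+1+1=6`$, while $`C(m_Q)`$ in Eq. (32) has dimension $`3+1+1=5`$, so that $`C(m_Q)\times U(1)_Q`$ is also six-dimensional.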
Now we verified in the first part of this letter that $`C(m_Q)=C(f_{(u,d)_L})`$. In fact this was all that was needed to prove that the monopole and fermion gauge equivalence classes were the same. By analogy with this we shall show that
$$C(f_{(u,d)_L})\times U(1)_Q/𝐙_6=C(Q),$$
(55)
for each of the respective higher charge monopoles and their associated fermions.
The magnetic charges of the higher charge monopoles are given by the table above. From this we calculate
$`Q_{1/3}^{r+}`$ $`=`$ $`i\mathrm{diag}(1,0,0,0,-1),`$ (56)
$`Q_{2/3}^r`$ $`=`$ $`i\mathrm{diag}(0,1,1,-1,-1),`$ (57)
$`Q_1^+`$ $`=`$ $`i\mathrm{diag}(1,1,1,-1,-2),`$ (58)
$`Q_{4/3}^r`$ $`=`$ $`i\mathrm{diag}(2,1,1,-2,-2),`$ (59)
$`Q_2`$ $`=`$ $`i\mathrm{diag}(2,2,2,-3,-3).`$ (60)
which yields their respective stability groups
$`C(Q_{1/3})`$ $`=`$ $`U(2)_\mathrm{C}\times U(1)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6`$ (61)
$`C(Q_{2/3})`$ $`=`$ $`U(2)_\mathrm{C}\times SU(2)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6`$ (62)
$`C(Q_1)`$ $`=`$ $`SU(3)_\mathrm{C}\times U(1)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6`$ (63)
$`C(Q_{4/3})`$ $`=`$ $`U(2)_\mathrm{C}\times SU(2)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6`$ (64)
$`C(Q_2)`$ $`=`$ $`SU(3)_\mathrm{C}\times SU(2)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_6.`$ (65)
From this it is a simple matter to see that Eq. (55) holds.
However, the above does not rigorously prove equivalence of their gauge equivalence classes; to do that one must examine the specific form of the monopoles, as in the first part of this letter. Nevertheless the verification that Eq. (55) holds for each of the monopoles and their respective fermion counterparts constitutes a strong indication that the gauge equivalence classes are the same.
We conclude this letter with a last remark about the structure of the higher charge monopole equivalence classes. Consideration of the above equations reveals that the $`(\nu ,e)_L`$ monopole does not transform under colour symmetry. Thus it is naturally associated with fundamental monopoles arising from the symmetry breaking
$$SU(3)\to U(2)=SU(2)_\mathrm{I}\times U(1)_\mathrm{Y}/𝐙_2.$$
(66)
These monopoles are again given by embedding an $`SU(2)`$ monopole. Then the gauge equivalence class of such fundamental monopoles is determined by methods analogous to those in the first part of this letter
$$M(m_{Q_1})\cong \frac{U(2)}{U(1)_{Y-I}}.$$
(67)
This is the same manifold as the gauge equivalence class of $`(\nu ,e)_L`$ fermions.
Likewise the monopoles associated with $`u_R`$ and $`d_R`$ do not transform under isospin symmetry and it is natural to associate them with monopoles arising from
$$SU(4)\to U(3)=SU(3)_\mathrm{C}\times U(1)_Y/𝐙_3.$$
(68)
Here the $`d_R`$ monopole is given by embedding an $`SU(2)`$ monopole, whilst the $`u_R`$ is interpreted as a bound state of two of these. Their gauge equivalence class is
$$M(m_{Q_{2/3}})\cong \frac{U(3)}{SU(2)_C\times U(1)_{Y-2C}/𝐙_2},$$
(69)
the same as for the $`u_R`$ fermion.
Finally the monopole associated with $`e_R`$ is associated with monopoles arising from
$$SU(2)\to U(1)_Y.$$
(70)
Again, trivially, this is an embedded monopole. This time the gauge equivalence class is
$$M(m_{Q_2})\cong U(1),$$
(71)
the same as for $`e_R`$.
Thus we remark that the higher charge monopoles are associated with fundamental monopoles in other symmetry breakings. Furthermore their gauge equivalence classes are calculable by similar methods to the first part of this letter. Such calculations yield the same equivalence classes as the corresponding fermions.
I acknowledge King’s College, Cambridge for a junior research fellowship and P. Saffin for discussions. |
# Hydrodynamic Collimation of GRB Fireballs
Analytic solutions are presented for the hydrodynamic collimation of a relativistic fireball by a surrounding baryonic wind emanating from a torus. The opening angle is shown to be the ratio of the power output of the inner fireball to that of the exterior baryonic wind. The gamma ray burst 990123 might thus be interpreted as a baryon-pure jet with an energy output of order $`10^{50}`$ erg or less, collimated by a baryonic wind from a torus with an energy output of order $`10^{52}`$ erg.
Recent observations of GRB’s and their afterglows appear to indicate that GRB ejecta are collimated. In particular, i) the high fluences exhibited by several bursts, notably GRB 990123, for which the fluence indicated an isotropic equivalent energy in excess of $`M_{}c^2`$, require large focusing factors if the central engine is indeed associated with a stellar mass object as widely believed, and ii) the achromatic break seen in the light curve of afterglow emission in GRB 990510 can be interpreted (e.g., refs ) as due to a transition whereby the blast wave decelerates from $`\mathrm{\Gamma }>1/\psi `$ to $`\mathrm{\Gamma }<1/\psi `$, where $`\psi `$ is the opening angle of the outflow. A blast wave model yielded an opening angle of $`\psi =0.08`$ and a corresponding beaming factor of 300 for this source.
If GRB’s are indeed collimated, as they appear to be, this would not be surprising. Collimation is a generic feature of outflows in astrophysics, and appears in outflows from protostars, AGN, and accretion disks around neutron stars (e.g. SS433). There is as yet no universally accepted explanation for such collimation. One possibility is collimation by magnetic hoop stress that arises when the field is wound up by rotation. Such a field configuration, however, would be kink unstable, and in any case is very slowly collimated. Another possibility is simply that the inertial forces in the outer part of the flow collimate material that impacts upon it from within. This hypothesis merely makes the plausible assumption that the flow is unsteady or multicomponent, so that there are glancing collisions between fluid elements on different streamlines. Even a steady, one component flow can display a collimated jet if it originates from a ring, with some of it directed towards the ring’s axis of cylindrical symmetry. If that part of the fluid that converges at the axis makes a dissipative ”sticky” collision, then it remains well collimated as seen by the external observer.
In GRB’s, there is an a priori reason to suspect multicomponent flows: If the central engine is a black hole, then there needs to be an accretion disk feeding it, and the luminosity from the accretion disk would be so super-Eddington that a baryon outflow would be dragged out with the photons and electron-positron pairs. This would obscure the gamma ray burst, so we probably have to say that the gamma ray burst is seen as such only when we look down the axis of the baryon poor jet (BPJ) that emerges along field lines connected to the event horizon, which are protected by the horizon from baryon contamination. The energy that powers the BPJ is either extracted from the rotation of the black hole, or deposited there by neutral particles that have crossed over from field lines (such as neutrinos that annihilate on the horizon-threading field lines).
The structure of colliding outflows. We consider an accelerating, baryon poor outflow (BPJ) confined by the pressure and inertia of a baryon rich wind. We envision that the baryonic wind is expelled (e.g., from an extended disk or torus) over a range of scales much larger than the characteristic size of the central engine ejecting the BPJ, so that not too far out its streamlines may diverge more slowly than the BPJ streamlines, thereby giving rise to a collision of the two outflows. In general, the collision of the two fluids will lead to the formation of a contact discontinuity across which the total pressure is continuous, and two oblique shocks, one in each fluid, across which the streamlines of the colliding (unshocked) fluids are deflected. The details of this structure will depend, quite generally, on the parameters of the two outflows and on the boundary conditions.
We seek to determine the structure of the shocked layers assuming that the parameters of the unshocked fluids are given. The problem is then characterized by 11 independent variables: the number density, energy density, pressure, and velocity of each of the two shocked layers, and the cross-sectional radii of the two shocks and the contact discontinuity surface. In what follows subscripts $`j`$ and $`b`$ refer to quantities in the BPJ and in the baryonic wind, respectively, and subscript $`s`$ denotes quantities in the shocked layers. To simplify the analysis we approximate the shocked layers as cylindrically symmetric, coaxial, one dimensional flows along the channels, and denote by $`a_c`$, $`a_j`$, and $`a_b`$, respectively, the cross sectional radii of the contact discontinuity surface, the inner shock surface (in the BPJ), and the outer shock surface. The energy flux incident into the shocked BPJ layer through the shock is given by
$$T_j^{0k}n_k=(p_j+\rho _jc^2)\mathrm{\Gamma }_jU_j\mathrm{sin}\delta _j,$$
(1)
and the momentum flux in the axial direction incident through the shock is
$$T_j^{zl}n_l=p_j+(p_j+\rho _jc^2)U_j^2\mathrm{cos}\theta _j\mathrm{sin}\delta _j.$$
(2)
Here $`n_k`$ are the components of a unit vector normal to the shock surface, $`\theta _j`$ is the angle between the velocity of the fluid just upstream the shock and the BPJ axis, $`\mathrm{\Gamma }_j`$ and $`U_j`$ are the Lorentz factor and 4-velocity of the unshocked BPJ, and $`\delta _j`$ is the deflection angle (i.e., the angle between the velocity of upstream fluid and the shock surface), which is related to $`\theta _j`$ through: $`\mathrm{sin}\delta _j=(\mathrm{sin}\theta _j+\mathrm{cos}\theta _jda_j/dz)/\sqrt{1+(da_j/dz)^2}`$. With the above results the energy and momentum equations of the shocked fluid can be written as
$$\frac{d}{dz}[T_{sj}^{0z}\pi (a_c^2a_j^2)]=T_j^{0k}n_kdS/dz=(p_j+\rho _jc^2)\mathrm{\Gamma }_jU_j\pi a_j(\mathrm{sin}\theta _j+\mathrm{cos}\theta _jda_j/dz),$$
(3)
and
$$\frac{d}{dz}[T_{sj}^{zz}\pi (a_c^2a_j^2)]=T_j^{zl}n_ldS/dz=\left[p_j+(p_j+\rho _jc^2)U_j^2\mathrm{cos}\theta _j\right]\pi a_j(\mathrm{sin}\theta _j+\mathrm{cos}\theta _jda_j/dz),$$
(4)
where $`T_{sj}^{0z}`$, $`T_{sj}^{zz}`$ are the energy and momentum fluxes in the axial (z) direction of the shocked BPJ fluid, and $`dS=\pi a_j\sqrt{1+(da_j/dz)^2}dz`$ is an element of the shock surface between $`z`$ and $`z+dz`$. The projection of the momentum equation on the direction normal to the shock surface yields
$$p_{sj}=T_j^{kl}n_kn_l=p_j+(p_j+\rho _jc^2)U_j^2\mathrm{sin}^2\delta _j.$$
(5)
The last equation holds provided the transverse momentum flux incident through the shock exceeds the free expansion pressure of the shocked fluid. The above equations must be supplemented by an equation of state, $`p_{sj}=p_{sj}(\rho _{sj})`$, and a continuity equation that governs the density profile of the shocked fluid. Likewise, the equations describing the structure of the shocked baryonic wind can be written in the same form with the subscript $`j`$ replaced by $`b`$. Finally, the requirement that the net momentum flux across the contact discontinuity surface vanishes implies that $`p_{sb}=p_{sj}`$.
A complete treatment requires numerical integration of the coupled set of 11 equations derived above, subject to appropriate boundary conditions, and is left for future investigation. Below, we consider for illustration a simple situation where the BPJ is assumed to be pressure dominated, and the assumption is then justified a posteriori. This corresponds to the limit $`a_j=\mathrm{sin}\delta _j=0`$. We further suppose that the shocked baryonic layer is very thin, so that to a good approximation we can set $`a_b=a_c`$ in the above equations. Adopting a relativistic equation of state for the BPJ, $`p=\rho c^2/3`$, mass, energy and momentum conservation imply:
$`p_{sj}^{3/4}\mathrm{\Gamma }_{sj}\pi a_c^2=C,`$ (6)
$`(p_{sj}+\rho _{sj}c^2)\mathrm{\Gamma }_{sj}^2\pi a_c^2c=4p_{sj}\mathrm{\Gamma }_{sj}^2\pi a_c^2c=L_j,`$ (7)
$`p_{sj}=p_b+(p_b+\rho _bc^2)U_b^2\mathrm{sin}^2\delta _b,`$ (8)
where C and $`L_j`$, the corresponding BPJ power, are constants. Equations (6), (7) can be combined to yield a relation between the Lorentz factor, the pressure, and the cross sectional radius:
$$(\mathrm{\Gamma }_{sj}/\mathrm{\Gamma }_{sjo})=(a_c/a_{co})=(p_{sj}/p_{sjo})^{1/4},$$
(9)
where $`\mathrm{\Gamma }_{sjo}`$, $`a_{co}`$, and $`p_{sjo}`$ are the corresponding values at $`z=z_o`$. On substituting eq. (9) into eq. (8), one obtains a differential equation for $`a_c(z)`$:
$$(a_c/a_{co})^{4}=p_b/p_{sjo}+\frac{(p_b+\rho _bc^2)U_b^2}{p_{sjo}}\frac{(\mathrm{sin}\theta _b+\mathrm{cos}\theta _bda_c/dz)^2}{[1+(da_c/dz)^2]}.$$
(10)
Pressure confinement. We consider first the possibility that the BPJ is collimated by the pressure of the baryonic wind. Such collimation will occur naturally in the region located within the acceleration zone of the BPJ and above the acceleration zone of the baryonic wind (roughly above its critical point), even if the two outflows have the same equation of state. The reason is that in this region the density profile of the BPJ declines with radius as $`r^{3}`$ (for a conical BPJ) while that of the wind declines as $`r^{2}`$. Taking $`\delta _b=0`$ in equation (8) and using eq. (9) one finds,
$$a_c(z)=a_{co}[p_b(z)/p_{bo}]^{1/4}.$$
(11)
Adopting an equation of state of the form $`p_b\propto n_b^\alpha `$ for the baryonic fluid yields $`a_c(z)=a_{co}[n_b(z)/n_{ob}]^{\alpha /4}`$. In the case of a conical (or spherical) wind, $`n_b(z)\propto z^2`$, and the cross sectional radius scales as $`a_c(z)\propto z^{\alpha /2}`$. Convergence occurs for $`\alpha <2`$, which is the case for both a relativistic gas ($`\alpha =4/3`$) and a non-relativistic gas ($`\alpha =5/3`$).
The above analysis does not take into account effects associated with the formation of a rarefaction wave in the baryonic outflow due to the divergence of its stream lines near the interface separating the two fluids. This will result in a steeper decline of the pressure supporting the BPJ at radii where the wind is highly supersonic, and the consequent alteration of the profile of the contact discontinuity surface. We naively expect collimation to take place predominantly over a range of radii at which the Mach number of the baryonic wind is mild, unless some fraction of the wind power is dissipated, e.g., due to formation of shocks, far enough out. This suggests that only a modest beaming factor may be attainable in this case. The determination of the level of collimation in this scenario is left for future work.
BPJ confinement by a wind emanating from a torus. As a second example we consider a baryonic wind emanating from a thin torus of radius $`R`$ located at $`z=0`$, where z is the axial coordinate, and centered around the central engine ejecting the BPJ (see Fig. 1). We suppose that on scales of interest the baryonic wind is highly supersonic, so that momentum transfer from the wind into the BPJ is dominated by ram pressure. We can therefore neglect the kinetic pressure, $`p_b`$, in eq. (10). Now the ram pressure of the baryonic outflow just upstream of the oblique shock at some position $`z`$ is related to the total wind power, $`L_b`$, through:
$$(p_b+\rho _bc^2)U_b^2=\beta _bL_b/(4\pi ^2ca_cr),$$
(12)
where $`r=[z^2+(Ra_c)^2]^{1/2}`$ is the distance between a point on the torus and the nearest point to it at the shock (see figure 1), and $`\beta _b`$ is the (terminal) wind velocity. The angle between the wind velocity and the BPJ axis is given by $`\mathrm{tan}\theta _b=(Ra_c)/z`$. Substituting eq. (12) into eq. (10), and using eq. (7) yields,
$$\frac{(Ra_c+da_c/d\mathrm{ln}z)^2}{1+(da_c/dz)^2}=a_{co}^2\chi ^{1}\frac{[z^2+(Ra_c)^2]^{3/2}}{a_c^3}$$
(13)
with $`\chi =\pi ^{1}\beta _b\mathrm{\Gamma }_{sjo}^2(L_b/L_j)`$ being approximately the ratio of baryonic wind and BPJ powers. At large enough axial distances from the plane of the torus, $`zR`$, an approximate solution to eq. (13) can be obtained by expanding the various terms in powers of $`R/z`$, and becomes more exact at large z. To second order the BPJ in this regime is conical, viz., $`a_c(z)=b+\alpha z`$, where $`\alpha `$, the opening angle, and $`b`$ are constants, and satisfy the relation:
$$\chi (R/a_{co}b/a_{co})^2=\frac{(1+\alpha ^2)^{5/2}}{\alpha ^3}.$$
(14)
The parameter $`b/a_{co}`$ depends on the properties of the solution near the central engine (at $`z<R`$) and must be determined numerically. The requirement that the BPJ is confined by the ram pressure of the baryonic outflow at $`z=0`$ yields,
$$(R/a_{co}1)=\chi ,$$
(15)
where it has been assumed that $`da_c(z=0)/dz=0`$. Combining the last two equations, one finds
$$\frac{(1+\alpha ^2)^{5/2}}{\alpha ^3}=\chi (\chi +1b/a_{co})^2.$$
(16)
For $`\chi 1`$ the latter equation implies that
$$\alpha \chi ^{1}+O(\chi ^{2}).$$
(17)
Numerical integration of eq. (13) confirmed that the BPJ is indeed conical with an opening angle given by eq. (16), except for a small range ($`z<R`$) near the central engine. BPJ profiles computed numerically for different values of the parameter $`\chi `$ are exhibited in Fig. 2, and the corresponding dependence of $`\alpha `$ on $`\chi `$ is displayed in Fig. 3.
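For reference, Eq. (16) is straightforward to solve numerically. The following sketch (an illustration under stated assumptions, not the integration code used for the figures; it sets the numerically determined constant $`b/a_{co}`$ to zero for simplicity) recovers the asymptotic behaviour of Eq. (17):

```python
from scipy.optimize import brentq

def opening_angle(chi, b_over_aco=0.0):
    """Solve (1+a^2)^{5/2}/a^3 = chi*(chi+1-b/a_co)^2 for the opening angle."""
    rhs = chi * (chi + 1.0 - b_over_aco) ** 2
    f = lambda a: (1.0 + a * a) ** 2.5 / a ** 3 - rhs
    # The left-hand side decreases monotonically for a^2 < 3/2, so the
    # collimated root can be bracketed between a tiny angle and a ~ 1.
    return brentq(f, 1e-8, 1.0)

for chi in (3.0, 10.0, 30.0, 100.0):
    alpha = opening_angle(chi)
    print("chi = %6.1f   alpha = %.5f   chi*alpha = %.3f" % (chi, alpha, chi * alpha))
```

The product $`\chi \alpha `$ tends to unity as $`\chi `$ grows, as expected from Eq. (17).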
Once the BPJ Lorentz factor exceeds $`\alpha ^{1}\chi `$, it will remain conical with the same opening angle regardless of the external conditions. This occurs at a distance $`z=R\chi ^2/(1+\chi )`$ from the central engine. Thus the baryonic wind must extend to at least this scale.
The apparent luminosity, $`L_{app}`$, exhibited by a beamed source having a power $`L_j`$ is roughly $`L_{app}f^{1}L_j`$, where $`f=\mathrm{\Delta }\mathrm{\Omega }/2\pi `$ is the corresponding beaming factor. For the above model we find
$$f=\pi \alpha ^2/2\pi =1/(2\chi ^2)=(\pi ^2/2)(L_j/L_b)^2,$$
(18)
and
$$L_{app}(2/\pi ^2)(L_b/L_j)L_b.$$
(19)
In conclusion, we have shown \[equation (19)\] that an inner relativistic jet collimated by an outer modestly relativistic jet can appear brighter to the observer within the beam by a factor that is inversely proportional to the intrinsic power of the inner jet. When all else is the same, a low luminosity inner jet actually appears brighter than a high luminosity one to the observer within its beam. The cost, of course, is that fewer observers are within the beam. Ironically, most of the energy of the GRB is in the unseen outer jet.
Now consider the remarkable GRB 990123, which had an isotropic equivalent energy output of about $`3\times 10^{54}`$ ergs. Applying equations (18) through (19), we propose that the actual energy in the gamma ray fireball was only about $`\eta ^210^{50}`$ ergs (it may have even been smaller than average), with an outer baryonic wind output of order $`\eta 10^{52.5}`$ erg. The corresponding beaming factor is $`f=10^{5}\pi ^2\eta ^2/2`$. Here $`\eta `$ is assumed to be of order unity or less, otherwise the energetics strain theoretical models.
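As a quick check of these numbers (an illustrative sketch; the inputs are just the order-of-magnitude estimates quoted above):

```python
import math

eta = 1.0                       # order-unity efficiency parameter
L_j = eta**2 * 1e50             # intrinsic fireball (BPJ) output [erg]
L_b = eta * 10**52.5            # baryonic wind output [erg]

f = (math.pi**2 / 2.0) * (L_j / L_b) ** 2          # beaming factor, Eq. (18)
L_app = (2.0 / math.pi**2) * (L_b / L_j) * L_b     # apparent output, Eq. (19)

print("f     = %.1e" % f)          # ~ 5e-5, i.e. 10^{-5} pi^2 eta^2 / 2
print("L_app = %.1e erg" % L_app)  # ~ 2e54 erg, cf. the observed ~3e54 erg
```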
A rough observational lower limit on $`\eta `$ can in principle be derived from the consideration that GRB 990123 could not have a collimation angle that is implausibly smaller than for average GRB’s. Although it was an unusual burst, the a priori chance of seeing it is unlikely to have been less than $`10^{3}`$, and, considering that it is a BeppoSAX-located GRB and that there has been at least one other burst of comparable (if somewhat smaller) intrinsic magnitude, probably considerably more than $`10^{3}`$. The largest uncertainty is the beaming factor for the average GRB. If we take that as being of order $`10^{3}`$, corresponding to an opening angle of about 0.06 radians, this suggests a value for $`\eta `$ of at least of order 0.3. This means that the minimum energy requirement for GRB 990123 is still of order $`10^{52}`$ ergs or more. On the other hand, the fact that it can come out as a baryonic wind relaxes the demands on theoretical models. For example, a magnetocentrifugal wind could plausibly release several percent of a solar rest energy over 30 seconds, whereas it would be hard to get this much from $`\nu \overline{\nu }`$ annihilation.
Another example is GRB 990510, for which the achromatic steepening of the optical and radio light curves observed after about 1 day can be modeled as due to the evolution of a jet with an opening angle of $`0.08`$ rad, and a corresponding beaming factor of 300. The prompt GRB emission should be at least as beamed as the afterglow emission, but could in fact be more beamed. The isotropic gamma-ray emission inferred for this source is $`3\times 10^{53}`$ ergs. Employing equations (18) and (19) yields a total energy of $`<10^{51}`$ ergs for the BPJ and $`3\times 10^{52}`$ ergs for the confining wind.
We acknowledge support from the Israel Science Foundation.
## 1 Introduction
While the metals observed in intracluster gas clearly originate from stars, how the metals got from stars into the intracluster gas remains controversial. The two global metal enrichment mechanisms most commonly considered are supernovae-driven protogalactic winds from early-type galaxies and ram-pressure stripping of gas from cluster galaxies. At the centers of cD clusters, accumulated stellar mass loss from the cDs may also contribute to the observed metal distribution. In principle, the imprints of these enrichment mechanisms can be distinguished by the chemical mix and spatial distribution of heavy elements in intracluster gas. Protogalactic winds, powered by Type II supernovae from early generations of short-lived, massive stars, may distribute metals throughout clusters. Ram-pressure stripping would be most effective near cluster centers and should deposit gas from galaxy atmospheres with considerable supplemental enrichment from Type Ia supernovae, since stripping is a more secular, ongoing process; SN Ia have longer-lived progenitors (accreting white dwarfs) and different elemental yields than SN II. Accumulated stellar mass loss in central cDs should have somewhat higher abundances than gas stripped from other early-type galaxies (but similar abundance ratios) and may have a different spatial profile than stripped gas.
Unfortunately, residual uncertainty in the theoretical elemental yields from SN II has allowed different interpretations of recent ASCA spectroscopy of intracluster gas. The yield models adopted by some investigators imply that global intracluster metal abundances are consistent with SN II ejecta, supporting the protogalactic wind enrichment scenario. However, we and others, using different theoretical yield models for SN II, find that as much as 50% of intracluster iron comes from SN Ia. Somewhat more than half of the global iron comes from SN II, which can be readily attributed to protogalactic wind enrichment. However, the presence of large quantities of iron from SN Ia throughout clusters is problematic: is ram pressure stripping so effective that it contaminates the outer parts of clusters as much as the central regions?
Since the detailed spatial distribution of elements in intracluster gas may offer clues about the dominant metal enrichment mechanism(s), we analyzed ASCA observations of Abell 496: analysis of previous Ginga and Einstein X-ray satellite data indicated that it has centrally enhanced abundances.
## 2 Analysis of Abell 496
We jointly fitted isothermal emissivity models to spatially resolved spectra of Abell 496 from all four ASCA instruments. Tying individual elemental abundances together in these fits, we find the metal abundance increases from 0.36 solar in the outer 3′–12′ region to 0.51 solar within 3′ (see Fig. 1). Allowing the abundances of individual elements to vary independently, we find that iron, nickel and sulfur abundances are centrally enhanced. Our results are the same when we include a cooling flow spectral component for the emission from the central region.
We also found significant gradients in several abundance $`ratios`$: Si/S, Si/Ni and S/Fe. Having gradients in abundance ratios implies that the proportion of SN Ia/II ejecta is changing spatially and that the dominant metal enrichment mechanism(s) near the cluster center must be different than in the outer parts.
We compared an ensemble of observed abundance ratios to theoretical expectations of yields from SN Ia and SN II in order to estimate the relative proportion of SN Ia/II ejecta in Abell 496. An ensemble of ratios is required, since there are large theoretical uncertainties in the yields of individual elements.
Fig. 2 illustrates various estimates of the iron mass fraction due to SN Ia in inner (0′–2′; filled circles) and outer (3′–12′; empty circles) regions of the cluster. The ensemble of the five best-determined abundance ratios collectively and individually indicate that the proportion of SN Ia ejecta increases toward the center. The average of the five individual estimates of the iron mass fraction from SN Ia is also indicated in Fig. 2: the proportion of ejecta from SN Ia is ∼46% in the outer parts, rising to ∼72% within 2′.
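The underlying estimate is simple: if a fraction $`f`$ of the iron comes from SN Ia, any X/Fe abundance ratio of the composite gas is the iron-weighted mean of the pure SN Ia and pure SN II ratios, which is linear in $`f`$ and therefore trivially inverted. A schematic version (illustrative code only; the yield ratios below are placeholders, not the theoretical values adopted in our fits):

```python
def sn1a_iron_fraction(ratio_obs, ratio_1a, ratio_2):
    """Invert  ratio_obs = f*ratio_1a + (1-f)*ratio_2  for the iron mass
    fraction f contributed by SN Ia, clipped to the physical range."""
    f = (ratio_obs - ratio_2) / (ratio_1a - ratio_2)
    return min(max(f, 0.0), 1.0)

# Hypothetical Si/Fe ratios in solar units, for illustration only.
si_fe_pure_2, si_fe_pure_1a = 2.7, 0.4
for region, si_fe_obs in (("outer", 1.6), ("inner", 1.0)):
    f = sn1a_iron_fraction(si_fe_obs, si_fe_pure_1a, si_fe_pure_2)
    print("%s region: iron mass fraction from SN Ia = %.2f" % (region, f))
```

Averaging such estimates over the five best-determined ratios yields the fractions quoted above.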
## 3 Enrichment Mechanisms
The central metal abundance enhancements in Abell 496 are not likely to be caused by ram pressure stripping of gas from cluster galaxies. The gaseous abundances measured in most early-type galaxies by ASCA and ROSAT are 0.2-0.4 solar, significantly less than the 0.5-0.6 solar abundance observed at the cluster center. Only the most luminous ellipticals, which also tend to be at the centers of galaxy clusters or groups, are observed to have gaseous abundances of 0.5-1 solar. If ram pressure stripping is effective in the cluster, it would act to $`dilute`$ the central abundance enhancement. If ram pressure stripping is not the primary source of metals near the cluster center, where it should be most effective, it is an even less likely source of metals in the outer parts of the cluster.
The central abundance enhancement is also unlikely to be due to the secular accumulation of stellar mass loss in the cD. If it were, then other giant ellipticals which have not been stripped should have comparable ratios of iron mass to optical luminosity. NGC 4636 is among the most X-ray luminous ellipticals and may be at the center of its own small group. However, NGC 4636 has a ∼10–20 times smaller iron mass to light ratio in its vicinity compared to the cD of Abell 496.
We propose instead that the bulk of intracluster gas is contaminated by two phases of winds from early-type galaxies: an initial SN II-driven protogalactic wind phase, followed by a secondary, less vigorous SN Ia-driven wind phase, contaminating the bulk of intracluster gas with comparable masses of iron. The secondary SN Ia-driven winds would be ∼10 times less energetic than the initial SN II-driven protogalactic winds, since SN Ia inject ∼10 times less energy per unit iron mass than SN II and the observations indicate that comparable amounts of iron came from SN Ia and SN II. Less vigorous secondary SN Ia-driven winds would allow SN Ia-enriched material to escape most galaxies, but not clusters. However, the secondary SN Ia-driven wind from a central dominant galaxy may be smothered, due to the galaxy’s location at the bottom of the cluster’s gravitational potential and in the midst of the highest ambient intracluster gas density. Such a smothered wind could generate the metal abundance enhancement seen at the center of Abell 496, which has the chemical signature of SN Ia ejecta.
Acknowledgements. This work was supported by NASA grant NAG 5-2574 and a National Research Council Senior Research Associateship at NASA GSFC. |
# PHASE ORDERING IN CHAOTIC MAP LATTICES WITH ADDITIVE NOISE
## I Introduction
Non trivial collective behavior (NTCB) is an interesting feature peculiar to extensively chaotic dynamical systems, where the temporal evolution of spatially averaged quantities reveals the onset of long-range order in spite of the local disorder. A simple way to observe NTCB is to study models of spatially extended chaotic systems such as coupled map lattices (CMLs) that consist of chaotic maps locally coupled diffusively with some coupling strength $`g`$. In these systems one observes multistability that is the remainder, for small couplings, of the completely uncoupled case. For large enough couplings NTCB is observed, corresponding to a macroscopic attractor, well-defined in the infinite-size limit and reached for almost every initial condition.
In a recent paper phase separation mechanisms have been investigated in a coupled map lattice model where the one-body probability distribution functions of local (continuous) variables have two disjoint supports. By introducing Ising spin variables, the phase ordering process following uncorrelated initial conditions was numerically studied and complete phase ordering was observed for large coupling values. Both the persistence probability $`p(t)`$ (i.e. the proportion of spins that has not changed sign up to time $`t`$) and the characteristic length of domains $`R(t)`$ (evaluated as the width at midheight of the two-point correlation function) showed scaling behavior, and the scaling exponents $`z`$ and $`\theta `$ (defined by the scaling laws $`R(t)\propto t^z`$ and $`p(t)\propto t^{\theta }`$) were found to vary continuously with parameters, at odds with traditional models. The study of the phase ordering properties also allowed one to determine the limit value $`g_c`$ beyond which multistability disappears and NTCB is observed. Indeed the following relations were found to hold: $`\theta \propto (gg_c)^w`$ and $`z\propto (gg_c)^w`$, and were used to determine $`g_c`$. The ratio $`\theta /z`$ was found to be close to $`0.40`$, the ratio known for the time dependent Ginzburg-Landau equation.
Subsequently, dynamical scaling was studied in a lattice model of chaotic maps where the corresponding Ising spin model conserves the order parameter. This model is equivalent to a conserved Ising model with couplings that fluctuate over the same time scale as spin moves, in contact with a thermal bath at temperature $`T`$. The scaling exponents $`\theta `$ and $`z`$ were found to vary with temperature. In particular the growth exponent $`z`$ was observed to increase with temperature; it follows that thermal noise speeds up the phase ordering process in this class of models. At high temperatures $`z`$ assumes the value $`1/3`$, corresponding to the universality class of a Langevin equation known as model $`B`$, which describes the standard conserved Ising model (when bulk diffusion dominates over surface diffusion).
The role of noise as an ordering agent has been broadly studied in recent years in the context of both temporal and spatiotemporal dynamics. In the temporal case, which was considered first, external fluctuations were found to produce and control transitions (known as noise-induced transitions) from monostable to bistable stationary distributions in a large variety of physical, chemical and biological systems. As far as spatiotemporal systems are concerned, the combined effects of the spatial coupling and noise may produce an ergodicity breaking of a bistable state, leading to phase transitions between spatially homogeneous and heterogeneous phases. Results obtained in this field include critical-point shifts in standard models of phase transitions, pure noise-induced phase transitions, stabilization of propagating fronts, and noise-driven structures in pattern-formation processes. In all these cases, the qualitative (and somewhat counterintuitive) effect of noise is to enlarge the domain of existence of the ordered phase in the parameter space.
It is the purpose of this paper to analyse the role of additive noise in the phase separation of multiphase coupled map lattices. It will be shown that external noise can induce complete phase ordering for coupling values not leading to phase separation in the absence of the noise term. Furthermore this dynamical transition is reentrant: phase separation appears at a critical value of the noise intensity but disappears again at a higher value of the noise strength.
The paper is organized as follows. In the next section the coupled map lattice model here considered is introduced. In section 3 we present our numerical results. Section 4 summarizes our conclusions.
## II The model
Let us consider a two-dimensional square lattice of coupled identical maps $`f`$ acting on real variables $`x_i`$, whose evolution is governed by the difference equation:
$$x_i(t+1)=(14g)f[x_i(t)]+g\underset{ij}{}f[x_j(t)]+\xi _i(t).$$
(1)
Here the sum is over the nearest neighbors of site $`i`$, $`\xi _i`$ is a random number uniformly distributed in $`[\sigma /2,\sigma /2]`$, $`g`$ is the coupling strength and periodic boundary conditions are assumed. We have chosen the following map:
$$f(x)=\{\begin{array}{cc}\frac{\mu }{3}\mathrm{exp}[\alpha (x+\frac{1}{3})]\hfill & \mathrm{if}x(\mathrm{},\frac{1}{3}]\hfill \\ \mu x\hfill & \mathrm{if}x[\frac{1}{3},\frac{1}{3}]\hfill \\ \frac{\mu }{3}\mathrm{exp}[\alpha (\frac{1}{3}x)]\hfill & \mathrm{if}x[\frac{1}{3},+\mathrm{})\hfill \end{array}$$
(2)
that is defined for every $`x`$ on the real axis (see Fig. 1). The map here considered is a modified version of the map used in ; the modification is motivated by the fact that, due to the noise term $`\xi _i`$, the variables $`x_i(t)`$ are not constrained to take values in $`[1,1]`$. Choosing $`\mu =1.9`$ and $`\alpha =6`$, $`f`$ has two symmetric chaotic attractors, one with $`x>0`$ and the other with $`x<0`$. In Fig. 2 we show the invariant distribution of the attractor with positive $`x`$’s: it is composed of smooth pieces. The Lyapunov exponent of the map was evaluated to be $`0.558`$.
To study the phase ordering process, uncorrelated initial conditions were generated as follows: one half of the sites were chosen at random and the corresponding values of $`x`$ were assigned according to the invariant distribution of the chaotic attractor with $`x>0`$, while the other sites were similarly assigned values with $`x<0`$. We associated an Ising spin configuration $`s_i(t)=\mathrm{sgn}[x_i(t)]`$ with each configuration of the $`x`$ variable. Large lattices (up to $`1000\times 1000`$) with periodic boundary conditions were used; the persistence $`p(t)`$ was measured as the proportion of sites whose spin $`s_i`$ has not changed from its initial value. The average domain size $`R(t)`$ was measured by the relation $`C[R(t),t]=1/2`$, where $`C(r,t)=s_{i+r}(t)s_i(t)`$ is the two point correlation function of the spin variables. Both $`p(t)`$ and $`R(t)`$ were averaged over many (up to thirty) different samples of initial conditions.
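For concreteness, a minimal simulation sketch of Eqs. (1)–(2) and of the persistence measurement (illustrative code, not the production code used for the results below; the lattice size, iteration count and initial-condition sampling are placeholders):

```python
import numpy as np

mu, alpha = 1.9, 6.0

def f(x):
    """The map of Eq. (2): linear core with exponential tails."""
    y = mu * x
    lo, hi = x < -1.0 / 3.0, x > 1.0 / 3.0
    y[lo] = -(mu / 3.0) * np.exp(alpha * (x[lo] + 1.0 / 3.0))
    y[hi] = (mu / 3.0) * np.exp(alpha * (1.0 / 3.0 - x[hi]))
    return y

def persistence(L=128, g=0.17, sigma=0.1, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Rough stand-in for sampling the two attractors: random signs, with |x|
    # drawn inside the support of the invariant distribution.
    x = rng.uniform(0.1, 0.6, (L, L)) * rng.choice([-1.0, 1.0], (L, L))
    s0 = np.sign(x)
    alive = np.ones((L, L), dtype=bool)
    for _ in range(steps):
        fx = f(x)
        neigh = (np.roll(fx, 1, 0) + np.roll(fx, -1, 0) +
                 np.roll(fx, 1, 1) + np.roll(fx, -1, 1))  # periodic boundaries
        x = (1.0 - 4.0 * g) * fx + g * neigh \
            + rng.uniform(-sigma / 2.0, sigma / 2.0, (L, L))
        alive &= (np.sign(x) == s0)
    return alive.mean()

print("p(t=1000) =", persistence())
```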
## III Results
Fixing $`\sigma =0`$, that is, considering the noise-free case, we performed the analysis suggested in . For various values of $`g`$ we measured the characteristic length $`R`$ and the persistence $`p`$ as functions of time; both these quantities saturate for small couplings and show scaling behaviour for large $`g`$ values. The associated exponents, respectively $`z`$ and $`\theta `$, were continuous functions of $`g`$ well described by the fitting ansatz:
$$z\propto (gg_c)^w,\qquad \theta \propto (gg_c)^w.$$
(3)
The estimated values of $`g_c`$ and $`w`$ were $`g_c=0.1652`$ and $`w=0.2260`$ while fitting the exponent $`z`$, and $`g_c=0.1654`$, $`w=0.2105`$ for the exponent $`\theta `$. The ratio $`\theta /z`$ was approximately independent of $`g`$ and equal to $`0.3767`$. Furthermore, we observed that the same fitting ansatz can be used to fit our data for nonvanishing and small noise strength $`\sigma `$. For example, in Fig. 3(a) and 3(b) we show respectively the fit of $`z`$ and $`\theta `$ versus $`g`$, while keeping $`\sigma `$ fixed and equal to $`0.1`$. As one can see, the data are well fitted by the scaling forms (3), and the estimated values are $`g_c=0.1628`$, $`w=0.2197`$ for the $`z`$ exponent, and $`g_c=0.1632`$, $`w=0.2024`$ for the $`\theta `$ exponent. The ratio $`\theta /z`$ was estimated at $`0.3838`$. We remark that our estimate of the critical coupling $`g_c`$, when non-vanishing and small noise is present, is smaller than the noise-free critical value. This fact clearly shows that a proper amount of noise favours the phase separation process of the system.
Let us now consider the region $`g<g_c(\sigma =0)=0.165`$. Here in the noise-free case the system evolves towards blocked configurations and no phase separation takes place. We checked, however, that this asymptotic regime was attained only after very long evolution times: the system spent a lot of time in metastable states, so that the evolution curves for $`R`$ and $`p`$ displayed a typical staircase structure. This structure (the times marking the steps of the curve) was very robust, in the sense that:
* it was robust against changes of the initial conditions (chosen following the particular prescription of section II),
* it did not depend on the lattice size,
* a little noise (low $`\sigma `$) did not destroy it.
Nevertheless, as the amount of noise was increased, the lifetime of these metastable states became shorter and shorter, until they disappeared altogether for $`\sigma `$ greater than a critical value $`\sigma _c(g)`$. For $`\sigma >\sigma _c(g)`$ we got again power laws for $`R(t)`$ and $`p(t)`$, showing that the system separates for large times. This behaviour is shown in Fig. 4.
We estimated the critical value $`\sigma _c`$ by fitting our data with the ansatz $`z\propto (\sigma \sigma _c)^w`$. In Fig. 5 we show our data corresponding to $`g=0.16`$: we evaluated $`\sigma _c=0.1094`$ and $`w=0.3152`$. As in the case of ansatz (3), we have no theoretical argument to support the choice of the fitting ansatz, beyond the fact that it works over a large interval of $`g`$, allowing us to give a precise measurement of $`\sigma _c`$. We were able to measure $`\sigma _c`$ in such a way for $`g`$ greater than $`0.025`$; at smaller values of $`g`$ the dynamics became very slow and we were not able to numerically extract the exponent $`z`$.
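The fit itself is a standard nonlinear regression; schematically (illustrative code with placeholder data points standing in for the measured exponents):

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder (sigma, z) pairs; the real input is the set of measured exponents.
sigma = np.array([0.12, 0.14, 0.17, 0.20, 0.24])
z = np.array([0.24, 0.33, 0.41, 0.47, 0.53])

ansatz = lambda s, A, s_c, w: A * (s - s_c) ** w
popt, _ = curve_fit(ansatz, sigma, z, p0=(1.0, 0.11, 0.3),
                    bounds=([0.0, 0.0, 0.05], [10.0, 0.119, 1.0]))
print("sigma_c = %.4f, w = %.3f" % (popt[1], popt[2]))
```

The upper bound on $`\sigma _c`$ simply keeps $`\sigma \sigma _c`$ positive over the data range.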
As $`\sigma `$ was increased, we found a transition at another critical value of the noise strength, showing that the system does not separate beyond this critical $`\sigma `$. As an example, in Fig. 6 we show the exponent $`z`$ versus $`\sigma `$ for $`g`$ fixed and equal to $`0.17`$. The transition seems to be discontinuous.
We repeated this analysis for several values of $`g`$. Interpolating the data described above for the critical noise strength, we built the phase diagram for the model shown in Fig. 7. The system separates in the shaded area, that is, it tends asymptotically to complete phase ordering. Points in the white area correspond to an asymptotic regime of the system where clusters of the two phases are dynamical but their mean size remains constant; only for $`\sigma =0`$ does one have blocked configurations with clusters fixed in time. Our data concern $`g`$ greater than 0.025; however, we extrapolated the two critical curves towards $`g=0`$. We observe, interestingly, that the extrapolations of the two curves seem to meet at $`g=0`$; further investigation is needed to clarify the behavior of the noisy system close to $`g=0`$.
## IV Conclusions
The phase ordering properties of multiphase chaotic map lattices have recently attracted interest since they differ from those of traditional models. In this paper we have shown that additive noise acts as an ordering agent in this class of systems, i.e. for a suitable amount of noise the system may order even for values of the coupling strength for which no separation is observed in the absence of the noise term. A simple explanation for this behavior is as follows. Small values of the spatial coupling lead, in the noise-free case, to spatially blocked configurations where the interfaces between clusters of each phase are strictly pinned. A proper amount of noise makes the system cross these barriers, thus leading to complete ordering. We have numerically constructed a phase diagram describing this behavior. As we said, a similar effect was observed in chaotic map lattices evolving with conserved dynamics, where we found that the growth exponent increases with temperature; in the present case the additive noise plays the role of the thermal noise.
Figure Captions
Figure 1: The map $`f(x)`$ defined in (2).
Figure 2: Invariant probability distribution for the positive attractor of $`f(x)`$.
Figure 3: The estimated scaling exponents at fixed noise $`\sigma =0.1`$: a) the dependence of the growth exponent $`z`$ on $`g`$, in linear and log-log plots; b) the dependence of the persistence exponent $`\theta `$ on $`g`$, in linear and log-log scale. Solid lines are best fits leading to the determination of $`g_c`$ and $`w`$ through the use of (3).
Figure 4: The effect of additive noise on the time evolution of the domain size $`R(t)`$ at $`g=0.05`$. The three curves correspond to $`\sigma =0`$, $`\sigma =0.06`$ and $`\sigma =0.24`$.
Figure 5: The estimated growth exponent $`z`$ versus $`\sigma `$ at fixed coupling $`g=0.16`$, in linear and log-log scale. Also shown is the best fit with the function $`z\propto (\sigma \sigma _c)^w`$.
Figure 6: The estimated growth exponent $`z`$ versus $`\sigma `$ at fixed coupling $`g=0.17`$. $`z`$ goes abruptly to zero at $`\sigma =1.2`$ showing that the system does not separate beyond this threshold.
Figure 7: The phase diagram in the ($`\sigma `$, $`g`$) plane. The shaded area represents the parameter region in which the system separates asymptotically.
# The infrared side of galaxy formation. I. The local universe in the semi-analytical framework.
## 1 Introduction
In recent years, our understanding of galaxy formation and evolution has advanced very rapidly, as a result of both observations and theory. On the observational side, new instruments have allowed the direct study of galaxy populations at different wavelengths out to $`z5`$. By combining observations in the UV, optical, IR and sub-mm, we can now start to reconstruct the history of star formation in galaxies over the epochs when the bulk of the stars were formed (e.g. Madau et al., 1996; Steidel et al., 1999; Hughes et al., 1998). On the theoretical side, models based on the paradigm of structure formation through hierarchical clustering (which has successfully confronted a wide range of observations on large scale structure and microwave background anisotropies) have now been developed to the point where they can make definite predictions for the observable properties of galaxies (luminosities, colours, sizes, morphologies etc) at all redshifts, starting from an assumed initial spectrum of density fluctuations. The key technique for making these predictions has been that of semi-analytical modelling (White & Frenk, 1991; Lacey & Silk, 1991; Kauffmann et al., 1993; Cole et al., 1994; Somerville & Primack, 1999). In this technique, one applies simplified analytical descriptions of the main physical processes of gas cooling and collapse, star formation, feedback effects from supernovae, galaxy merging etc, with the backbone being a Monte Carlo description of the process of formation and merging of dark matter halos through hierarchical clustering. The predicted star formation histories are then combined with detailed stellar population models to calculate galaxy luminosities at different wavelengths. Conversely, direct numerical simulations have been enormously successful in studying the evolution of structure in the dark matter on a huge range of scales (e.g. Jenkins et al., 1998), but currently do not have sufficient spatial resolution to simultaneously follow all the processes involved in galaxy formation.
The semi-analytical models have been successful in predicting and/or explaining a large range of galaxy properties, both at low and high redshift, for instance, luminosity functions and colours in different optical and near-IR bands (Lacey et al., 1993; Kauffmann et al., 1993; Cole et al., 1994), the mixture of galaxy morphologies and the evolution of elliptical galaxies (Kauffmann et al., 1993; Baugh et al., 1996; Kauffmann, 1996), the properties of Lyman-break galaxies at high redshift (Baugh et al., 1998; Governato et al., 1998), the sizes and circular velocities of galaxies (Cole et al., 2000), and galaxy clustering evolution and the nature of the clustering bias (Kauffmann et al., 1997; Baugh et al., 1999; Diaferio et al., 1999; Benson et al., 2000). However, with very few exceptions, these semi-analytical models have ignored both extinction and emission by interstellar dust, and calculated only the direct stellar emission in the UV, optical and near-IR. This has been partly because the importance of dust was generally under-appreciated, especially for high redshift galaxies, but also because of the lack of physically realistic models for predicting dust effects.
This situation has now begun to change. On the one hand, there have been several observational discoveries demonstrating the importance of dust effects for building a complete picture of galaxy formation. (1) The discovery of a cosmic far-IR/sub-mm background by the COBE satellite (Puget et al., 1996; Guiderdoni, et al., 1997; Dwek, et al., 1998; Fixsen et al., 1998; Hauser et al., 1998; Schlegel et al., 1998), whose energy density indicates that, as suggested already by Wang (1991) and Franceschini et al. (1994), a large fraction of the energy radiated by stars over the history of the universe has been reprocessed by dust. (2) The discovery that the population of star forming galaxies at $`z2`$–4 that have been detected through their strong Lyman-break features are substantially extincted in the rest-frame UV (Pettini et al., 1998; Steidel et al., 1999). (3) The discovery of a population of sub-mm sources at high redshift ($`z1`$) using SCUBA, whose luminosities, if they are powered by star formation in dust-enshrouded galaxies, imply very large star formation rates ($`10^2M_{}\mathrm{yr}^{1}`$), and a total star formation density comparable to what is inferred from the UV luminosities of the Lyman-break galaxies (Smail et al., 1997; Hughes et al., 1998; Lilly et al., 1999). (4) The ISO detection of a population of strong IR sources; 15 $`\mu `$m ISOCAM (e.g. Oliver, et al., 1997; Elbaz, et al., 1999) and 175 $`\mu `$m ISOPHOT surveys (e.g. Kawara, et al., 1998; Puget, et al., 1999) indicate a population of actively star forming galaxies at $`0.4z1.3`$, which boosts the cosmic star formation density by a factor $`3`$ with respect to that estimated in the optical from the CFRS (Flores et al., 1999). For (1) and (3), there is the caveat that the contribution from dust-enshrouded AGNs to the sub-mm counts and background is currently uncertain, but probably the AGNs do not dominate (Granato et al., 1997; Almaini et al., 1999; Madau, 1999). These discoveries demonstrate that in order to understand the history of star formation in the universe from observational data, one must have a unified picture that covers all wavelengths from the far-UV to the sub-mm. The UV and the far-IR are especially important, since young stellar populations emit most of their radiation in the rest-frame UV, but a significant fraction of this is dust reprocessed into the rest-frame far-IR.
On the theoretical side, it is now possible for the first time to construct true ab initio models in which the galaxy formation itself and stellar emission and dust absorption and emission are calculated from first principles, based on physical models, and avoiding observational parameterizations for various key ingredients (e.g. shape of the luminosity function, dependence of dust temperature on galaxy properties). These new models, which provide a unified treatment of emission from stars and dust, and predict the evolution of galaxy luminosities from the far-UV to the mm, are the subject of this paper.
The effects of dust on galaxy luminosities at different wavelengths have been included in some previous galaxy evolution models, at various levels of sophistication, but mostly in the context of backwards evolution models, where one tries to evolve back from observed galaxy properties at the present day, in contrast to the semi-analytical models, where one evolves forward from cosmological initial conditions. In backwards evolution models, one starts from the observed luminosity functions of different types of galaxy at the present day, assumes a different star formation history for each type, and calculates the luminosity evolution for each type, to predict what the galaxy population would have looked like in the past. Guiderdoni & Rocca-Volmerange (1987) were the first to include dust absorption in a model of this type, based on a 1D slab model for the star and dust distribution, and calculating the dust content self-consistently from a chemical evolution model. The same treatment of dust was later used in the semi-analytical galaxy formation models of Lacey et al. (1993). In both cases, the models were used to calculate galaxy luminosities and number counts in the UV and optical. Mazzei et al. (1992) were the first to try to model the evolution of stellar emission and dust emission together in a consistent framework based on stellar population synthesis models and a physical calculation of dust absorption. This model was then used by Franceschini et al. (1994) to calculate galaxy evolution and number counts in bands from the optical through to the far-IR, based on the backwards evolution approach. However, these models still made a number of simplifying assumptions (e.g. slab geometry for disks), and set a number of present-day properties of galaxies from observations (e.g. the optical depth of galactic disks, and the intensity of the radiation field heating the dust), rather than predicting them. Simpler backwards evolution models, where the luminosity evolution is parameterized as a simple function of redshift, have been considered by, e.g. Pearson & Rowan-Robinson (1996).
Recently, dust absorption has been included in several different semi-analytical models (Kauffmann et al., 1999; Somerville & Primack, 1999; Cole et al., 2000). The first two calculate dust effects only for present-day galaxies, using a 1D slab model, and taking the dust optical depth from observational measurements. On the other hand, Cole et al. (2000) predict the dust optical depth and how it evolves, based on chemical evolution and a prediction of disk sizes, and use the 3D disk+bulge radiative transfer models of Ferrara et al. (1999) to calculation the dust attenuation. The only previous semi-analytical model to calculate dust emission as well as absorption is that of Guiderdoni et al. (1998). However, that model also has several limitations: the galaxy formation model does not include merging of either dark halos or visible galaxies, and the fraction of star formation occuring in bursts is simply an arbitrary function; dust absorption is again modelled assuming a 1D slab geometry; and the dust temperature distribution is not predicted. Instead, the dust emission spectrum is modelled as the sum of several components, whose temperatures and relative strengths are chosen so as to reproduce the observed correlations of IR colours with IR luminosity found by IRAS.
The present paper represents a major advance over this earlier work in terms of scope, physical self-consistency and predictiveness. We combine the semi-analytical galaxy formation model of Cole et al. (2000) with the stellar population + dust model of Silva et al. (1998). The galaxy formation model includes formation of dark halos through merging, cooling and collapse of gas in halos to form disks, star formation in disks regulated by energy input from supernovae, merging of disk galaxies to form elliptical galaxies and bulges, bursts of star formation triggered by these mergers, predictions of the radii of disks and spheroids, and chemical enrichment of the stars and gas. The stellar population + dust model includes a realistic 3D geometry, with a disk and bulge, two phase dust in clouds and in the diffuse ISM, star formation in the clouds, radiative transfer of starlight through the dust distribution, a realistic dust grain model including PAHs and quantum heating of small grains, and a direct prediction of the dust temperature distribution at each point in the galaxy based on a calculation of dust heating and cooling. The output is the luminosity and spectrum of the stellar populations attenuated by dust, and of the dust emission from grains at a range of temperatures. From this, we can calculate the distribution of galaxy properties at any redshift, including the complete spectrum of each galaxy in the model from the far-UV to the sub-mm.
In this paper we compare the predicted properties for local galaxies with a wide range of observational data. A future paper will be devoted to high-z galaxies (Lacey et al., 2000). In Sections 2 and 3 we describe, respectively, the galaxy formation model and the spectrophotometric model. Section 4 describes how we generate model galaxy catalogues for both normal and starburst galaxies. The comparison with observations (SEDs, extinction properties, colors, etc.) is presented in Section 5 for spiral galaxies, and in Section 6 for starbursts. The model luminosity functions at different wavelengths are compared with observations in Section 7. Section 8 uses the models to predict the relationship between the star formation rate and the luminosities in various UV and IR bands, and to assess the accuracies of these as star formation indicators. Section 9 presents a summary and conclusions.
## 2 Semi-analytical galaxy formation model
We calculate the formation histories and global properties of galaxies using the semi-analytical galaxy formation model (GALFORM) of Cole et al. (2000), a development of that described in Cole et al. (1994) and Baugh et al. (1998). The principle of the model is to calculate the formation and evolution of dark matter halos starting from an assumed cosmology and initial spectrum of density fluctuations, and then to calculate the evolution of the baryons (gas and stars) within these evolving halos using a set of simple, physically-motivated rules to model gas cooling, star formation, supernova feedback and galaxy mergers. We describe here only the main features of the model, and refer the reader to Cole et al. (2000) for more details and for a discussion of the effects of varying parameters with respect to standard values given in Table 1.
#### (a) Cosmology:
The cosmology is specified by the present-day density parameter $`\mathrm{\Omega }_0`$, cosmological constant $`\mathrm{\Lambda }_0`$, baryon fraction $`\mathrm{\Omega }_b`$ (all in units of the critical density) and the Hubble constant $`H_0=100h\mathrm{km}\mathrm{s}^{1}\mathrm{Mpc}^{1}`$. We assume a cold dark matter (CDM) model, with the initial spectrum of density fluctuations having shape parameter $`\mathrm{\Gamma }`$ and amplitude $`\sigma _8`$ (the r.m.s. density fluctuation in a sphere of radius $`8h^{1}\mathrm{Mpc}`$).
#### (b) Halo evolution:
Dark matter halos form through a process of hierarchical clustering, building up through merging from smaller objects. At any cosmic epoch, we calculate the number density of halos as a function of mass from the Press-Schechter (1974) formula. We then calculate halo merger histories, describing how a halo has formed, for a set of halos of different masses, using a Monte-Carlo algorithm based on the extended Press-Schechter formalism. We generate many different realizations of the merger history for each halo mass. We then follow the process of galaxy formation separately for each of these realizations.
#### (c) Cooling and collapse of gas in halos:
Diffuse gas is assumed to be shock-heated to the virial temperature of the halo when it collapses, and to then cool radiatively out to a radius determined by the density profile of the gas and the halo lifetime. The gas which cools collapses to form a rotationally supported disk, for which the half-mass radius $`r_{\mathrm{disk}}`$ is calculated assuming that dark matter and associated gas are spun up by tidal torques, and that the angular momentum of the gas is conserved during the collapse. The gas supply by cooling is assumed to be continuous over the lifetime of the halo.
#### (d) Star formation in disks:
Stars form from the cold gas in the disk, at a rate
$$\psi =M_{\mathrm{cold}}/\tau _{\star \mathrm{disk}},$$
(1)
where the star formation timescale is assumed to be
$$\tau _{\star \mathrm{disk}}=ϵ_{\star \mathrm{disk}}^{1}\tau _{\mathrm{disk}}\left(V_{\mathrm{disk}}/200\mathrm{km}\mathrm{s}^{1}\right)^{\alpha _{\star }}$$
(2)
where $`V_{\mathrm{disk}}`$ is the circular velocity at the half-mass radius of the disk, and $`\tau _{\mathrm{disk}}=r_{\mathrm{disk}}/V_{\mathrm{disk}}`$ is the dynamical time. $`ϵ_{\star \mathrm{disk}}`$ is the fraction of gas converted into stars in one dynamical time, for a galaxy with circular velocity $`V_{\mathrm{disk}}=200\mathrm{km}\mathrm{s}^{1}`$. The scaling of the star formation timescale with dynamical time is motivated by observations of star formation in nearby galaxies (Kennicutt, 1998), but modified to reproduce the observed dependence of gas fraction on luminosity.
#### (e) Supernova feedback in disks:
The energy input from supernovae is assumed to reheat gas in the disk and eject it into the halo at a rate
$$\dot{M}_{\mathrm{eject}}=\beta _{\mathrm{disk}}\psi ,$$
(3)
where for $`\beta _{\mathrm{disk}}`$ we assume
$$\beta _{\mathrm{disk}}=\left(V_{\mathrm{disk}}/V_{\mathrm{hot}}\right)^{\alpha _{\mathrm{hot}}}$$
(4)
Gas which has been ejected is assumed to be unavailable for cooling until the halo has doubled in mass through merging. The motivation for this parameterization is that the rate of gas ejection should be proportional to the rate of supernovae, and also depend on the escape velocity from the disk, which in turn is related to the circular velocity. Our standard case $`\alpha _{\mathrm{hot}}=2`$ is equivalent to the assumption that a constant fraction of the Type II supernova energy goes into ejecting gas from the disk, if the escape velocity is proportional to $`V_{\mathrm{disk}}`$.
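In practice, Eqs. (1)–(4) amount to a few lines of arithmetic per galaxy. A sketch (illustrative code; apart from $`\alpha _{\mathrm{hot}}=2`$, quoted above, the parameter values are placeholders rather than the Table 1 values):

```python
def disk_rates(M_cold, V_disk, r_disk,
               eps_star=0.01, alpha_star=-1.5, V_hot=200.0, alpha_hot=2.0):
    """Quiescent star formation and SN-driven ejection rates, Eqs. (1)-(4).
    M_cold in Msun, V_disk in km/s, r_disk in kpc."""
    KM_PER_KPC, SEC_PER_YR = 3.086e16, 3.156e7
    tau_dyn = r_disk * KM_PER_KPC / V_disk / SEC_PER_YR   # dynamical time [yr]
    tau_star = tau_dyn / eps_star * (V_disk / 200.0) ** alpha_star
    psi = M_cold / tau_star                   # star formation rate [Msun/yr]
    beta = (V_disk / V_hot) ** (-alpha_hot)   # feedback factor, Eq. (4)
    return psi, beta * psi                    # SFR and mass ejection rate

psi, eject = disk_rates(M_cold=5e9, V_disk=220.0, r_disk=10.0)
print("SFR = %.2f Msun/yr, ejection rate = %.2f Msun/yr" % (psi, eject))
```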
#### (f) Galaxy mergers and morphology:
The galaxy morphology (i.e. whether it is a spiral or elliptical) is determined by merging. Following the merger of two halos, the largest pre-existing galaxy is assumed to become the central galaxy in the new halo, while the other galaxies become satellite galaxies. The central galaxy can continue to grow a disk by cooling of gas from the halo. The satellite galaxies merge with the central galaxy on a timescale equal to that for dynamical friction to make the orbits decay. The merger is classed as a major merger if the mass ratio of the satellite to central galaxy exceeds a value $`f_{\mathrm{ellip}}`$, and as a minor merger otherwise. In a major merger, any pre-existing stellar disks are destroyed, producing a stellar spheroid (elliptical galaxy or bulge), and any remaining cold gas is consumed in a burst of star formation. The star formation timescale in the burst is related to the dynamical time of the bulge as described below. The spheroid can grow a new disk by cooling of halo gas. In a minor merger, the stars from the satellite galaxy add to the bulge of the central galaxy, while the cold gas adds to the disk, but no burst is triggered. In either case, the half-mass radius $`r_{\mathrm{bulge}}`$ of the spheroid produced in a merger is calculated using an energy conservation argument. Galaxies are classified into different morphological types based on their bulge-to-disk luminosity ratios.
#### (g) Star formation and feedback during bursts:
As already mentioned, star formation bursts are assumed to be triggered by major mergers of galaxies. In Cole et al. (2000), these bursts were modelled in a very simple way, with the conversion of gas into stars being assumed to be instantaneous, since the galaxy properties examined there were not sensitive to the detailed time dependence. In this paper, we model the bursts in more detail. We assume that star formation during bursts follows a law analogous to that for star formation in disks:
$$\psi =M_{\mathrm{cold}}/\tau _{\mathrm{burst}},$$
(5)
with star formation timescale
$$\tau _{\mathrm{burst}}=ϵ_{\mathrm{burst}}^{-1}\tau _{\mathrm{bulge}}$$
(6)
where $`\tau _{\mathrm{bulge}}=r_{\mathrm{bulge}}/V_{\mathrm{bulge}}`$ is the dynamical time of the spheroid formed in the merger, $`V_{\mathrm{bulge}}`$ being the circular velocity at $`r_{\mathrm{bulge}}`$. As in Cole et al. (2000), feedback is modelled as in disks, except with $`V_{\mathrm{bulge}}`$ replacing $`V_{\mathrm{disk}}`$ in eqn.(4), assuming the same values for $`V_{\mathrm{hot}}`$ and $`\alpha _{\mathrm{hot}}`$, giving a feedback factor $`\beta _{\mathrm{burst}}`$. Since we assume that no new gas is supplied by cooling during the burst, the star formation rate and cold gas mass decay during the burst as $`\mathrm{exp}(-t/\tau _e)`$, where
$$\tau _e=\tau _{\mathrm{burst}}/(1-R+\beta _{\mathrm{burst}}),$$
(7)
and $`R`$ is the recycled fraction, discussed below. The burst is assumed to occur in a region of half-mass radius $`r_{\mathrm{burst}}`$, where
$$r_{\mathrm{burst}}=\eta r_{\mathrm{bulge}}$$
(8)
More details on the geometry assumed for starbursts are given in § 3. For simplicity, the metallicities of the gas in the burst and of the stars formed during the burst are taken to be constant, and equal to the mean metallicity of the stars formed in the burst as calculated by the GALFORM model. Star formation in a burst is truncated at a time $`5\tau _e`$ after the burst began, i.e. after 99% of the gas in the burst has either been converted into stars or blown out of the galaxy by supernova feedback. At this time, the remaining gas and dust in the burst region are assumed to be dispersed. Star formation then starts again in a normal galactic disk surrounding the bulge, if one has formed by cooling of halo gas since the major merger that triggered the burst.
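As an illustration of eqns (5)-(7), here is a minimal sketch of a burst history, assuming the eqn (4) feedback law evaluated with $`V_{\mathrm{bulge}}`$ and leaving units to the caller:

```python
import numpy as np

def burst_history(M_cold0, r_bulge, V_bulge, eps_burst, R, V_hot, alpha_hot):
    """Times, SFR and cold gas mass for a merger-triggered burst,
    followed until its truncation at t = 5*tau_e."""
    tau_bulge = r_bulge / V_bulge                    # dynamical time of the spheroid
    tau_burst = tau_bulge / eps_burst                # eqn (6)
    beta_burst = (V_bulge / V_hot) ** (-alpha_hot)   # feedback, as eqn (4)
    tau_e = tau_burst / (1.0 - R + beta_burst)       # eqn (7)
    t = np.linspace(0.0, 5.0 * tau_e, 200)           # burst truncated at 5 tau_e
    M_cold = M_cold0 * np.exp(-t / tau_e)            # no gas supply during the burst
    psi = M_cold / tau_burst                         # eqn (5)
    return t, psi, M_cold
```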
#### (h) Chemical evolution:
We assume that a fraction $`1/\Upsilon `$ of the mass formed into stars goes into visible stars ($`0.1<m<125M_{\odot }`$), while the remainder goes into brown dwarfs ($`m<0.1M_{\odot }`$). For visible stars we adopt a universal IMF, similar to that in the solar neighbourhood. In Cole et al. (2000) and in this paper, we use the form proposed by Kennicutt (1983), which is consistent with the “best estimate” of Scalo (1998):
$`dN/d\mathrm{ln}m`$ $`\propto `$ $`m^{-0.4}(m<1M_{\odot })`$ (9)
$`\propto `$ $`m^{-1.5}(m>1M_{\odot })`$
We use the instantaneous recycling approximation to calculate the evolution of the abundance of heavy elements in the cold gas ($`Z_{\mathrm{cold}}`$) and stars ($`Z_{*}`$) in each galaxy, together with that of the hot gas in the halo ($`Z_{\mathrm{hot}}`$), including the effects of inflows and outflows between the galaxy and halo. The chemical evolution depends on the recycled fraction $`R`$ and the yield of heavy elements $`p`$.
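As a worked example, the IMF of eqn (9) can be normalized numerically; since $`m\,dN/dm=dN/d\mathrm{ln}m`$, integrating $`dN/d\mathrm{ln}m`$ over $`m`$ gives the total mass. This is only an illustration of the normalization step, not a value used by the model:

```python
from scipy.integrate import quad

def imf_dn_dlnm(m):
    """Kennicutt (1983) IMF of eqn (9), dN/dln m, arbitrary normalization."""
    return m ** -0.4 if m < 1.0 else m ** -1.5

# Mass in visible stars per unit IMF normalization:
# since m dN/dm = dN/dln m, the mass integral is int (dN/dln m) dm.
mass_visible, _ = quad(imf_dn_dlnm, 0.1, 125.0, points=[1.0])
print(mass_visible)  # divide the IMF by this to form 1 Msun of visible stars
```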
#### (i) Stellar population synthesis and dust extinction:
In Cole et al. (2000), we calculated the luminosity evolution of each galaxy at different wavelengths using the stellar population synthesis models of Bruzual & Charlot (2000). The effects of dust extinction were calculated in a simple way using the dust models of Ferrara et al. (1999), which assume a smooth (unclumped) distribution for both the dust (in a disk) and stars (in a disk and a bulge). In the present paper, we use instead the combined stellar population and dust model GRASIL (Silva et al., 1998) to calculate the galaxy luminosities and spectra including both extinction and emission by dust. The stellar population part of GRASIL is similar to the Bruzual & Charlot model, as both are based on similar stellar evolution tracks and stellar spectra. The dust part of GRASIL is however considerably more sophisticated than the Ferrara et al. models, in that GRASIL allows for clumping of both dust and stars, and calculates the grain heating and emission as well as the extinction.
####
The parameters we have chosen for the GALFORM model are the same as those of the standard $`\mathrm{\Lambda }CDM`$ model of Cole et al. (2000), apart from $`ϵ_{\mathrm{burst}}`$ and $`\eta `$, which describe the timescale and radius of bursts and were not considered in Cole et al. These parameter values are given in Table 1, and were obtained by comparing the model to observations of nearby galaxies, without any consideration of the far-IR or UV properties. We refer the reader to Cole et al. for a complete discussion of the effects of varying the ‘old’ GALFORM parameters, and for a systematic presentation of the influence of these parameters on the optical-NIR properties of galaxies (LFs, Tully-Fisher relation, disk sizes, morphology, gas content, metallicity, M/L ratios and colours). We only recall here the main observational constraints used to fix each of these parameters: $`ϵ_{\mathrm{disk}}`$ - gas fraction of $`L_{*}`$ galaxies; $`\alpha _{*}`$ - variation of gas fraction with luminosity; $`\alpha _{\mathrm{hot}}`$ - faint end of LF and Tully-Fisher relation; $`V_{\mathrm{hot}}`$ - faint end of LF and sizes of low-$`L`$ spirals; IMF - observations of solar neighbourhood; $`\Upsilon `$ - $`L_{*}`$ in LF; $`p`$ - metallicity of $`L_{*}`$ ellipticals; $`f_{\mathrm{ellip}}`$ - morphological mix of $`L_{*}`$ galaxies.
Values for $`ϵ_{\mathrm{burst}}`$ and $`\eta `$ are instead obtained later in this paper by detailed comparison of the results of the combined GALFORM+GRASIL models with observed properties of bursting galaxies.
## 3 The Stellar Population and Dust Model
Far-UV to mm SEDs of model galaxies are calculated using the GRASIL code (Silva et al., 1998), which follows both the evolution of stellar populations and absorption and emission by dust. GRASIL calculates the following: (i) emission from stellar populations; (ii) radiative transfer of starlight through the dust distribution; (iii) heating and thermal equilibrium of dust grains (or thermal fluctuations for small ones); and (iv) emission by dust grains.
### 3.1 Stellar Population Model
The single stellar population (SSP) libraries included in GRASIL are based on the Padova stellar models and cover a large range in age and metallicity. They include the effects of dusty envelopes around AGB stars (Bressan et al., 1998). The age and metallicity distribution of a composite stellar population is specified by the birthrate function $`\mathrm{\Psi }(t,Z)`$, where $`\mathrm{\Psi }(t,Z)dtdZ`$ gives the mass of stars that were formed in the time interval $`(t,t+dt)`$ with metallicities in the range $`(Z,Z+dZ)`$. The SED for the composite stellar population at time $`t`$ is then obtained using
$$L_\lambda (t)=\int _0^tdt^{}\int _0^1dZ\,l_\lambda (t-t^{},Z)\mathrm{\Psi }(t^{},Z)$$
(10)
where $`l_\lambda (\tau ,Z)`$ is the SED of an SSP of age $`\tau `$ and metallicity $`Z`$ for the assumed IMF.
For our semi-analytical galaxy formation model, $`\mathrm{\Psi }(t,Z)`$ is calculated for each galaxy by summing over all the progenitor galaxies which have merged to produce that galaxy, separately for the disk and bulge components. The progenitor galaxies each had their own star formation and chemical history, so that the composite $`\mathrm{\Psi }(t,Z)`$ obtained in general has a broad distribution of metallicity at each age, i.e. there is no unique age-metallicity relation $`Z(t)`$.
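In discretized form, eqn (10) is a mass-weighted sum of SSP SEDs over age and metallicity bins. A sketch, assuming the SSP grid has already been evaluated at the ages $`t-t^{}`$ corresponding to each formation-time bin:

```python
import numpy as np

def composite_sed(l_ssp, mass_formed):
    """Discretized eqn (10).

    l_ssp       : array (n_age, n_Z, n_lambda), SSP SED per unit stellar mass,
                  evaluated at the age t - t' of each formation-time bin.
    mass_formed : array (n_age, n_Z), mass formed per bin, i.e. Psi(t',Z) dt' dZ.
    """
    # sum over the age and metallicity axes, leaving the wavelength axis
    return np.tensordot(mass_formed, l_ssp, axes=([0, 1], [0, 1]))
```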
### 3.2 Dust Model
GRASIL computes the radiative transfer of starlight, the heating of dust grains, and the emission from these grains with a self-consistent calculation of the distribution of grain temperatures, for an assumed geometrical distribution of the stars and dust, and a specific grain model. The dust is divided into two components, dense molecular clouds and diffuse cirrus in a disk. Stars form inside clouds and progressively leak out.
The details are given in Silva et al. (1998), but for convenience we summarize the main features here, focusing on the modifications introduced for the purposes of this application. Those GRASIL parameters which are not provided by GALFORM, and are in this sense additional adjustable parameters of the combined GALFORM+GRASIL semi-analytic modelling, are listed in Table 2, together with the adopted values for our standard case. See Fig. 1 for a sketch of the geometry of our model.
#### (a) Geometry of stars:
The stars are in two components (Silva et al. (1998) considered only pure disk and pure bulge systems): (i) a spherical bulge with an analytic King model profile, $`\rho \propto (r^2+r_c^2)^{-3/2}`$ for $`r<r_t`$, with concentration parameter $`\mathrm{log}(r_t/r_c)=2.2`$; (ii) a disk with a radially and vertically exponential profile, with scalelength $`h_R`$ and scaleheight $`h_z`$. As described in § 2, the disk and bulge masses, $`M_{\mathrm{disk}}`$ and $`M_{\mathrm{bulge}}`$, and half-mass radii, $`r_{\mathrm{disk}}`$ and $`r_{\mathrm{bulge}}`$, for any galaxy are predicted by the galaxy formation model. The bulge core radius is related to the bulge half-mass radius by $`r_c=r_{\mathrm{bulge}}/14.6`$, while the disk scalelength $`h_R`$ is related to the disk half-mass radius by $`h_R=r_{\mathrm{disk}}/1.68`$. The star formation histories are also calculated separately for the disk and bulge by GALFORM. However, the disk axial ratio $`h_z/h_R`$ is a free parameter of the GRASIL model.
As partially anticipated in § 2, in galaxies undergoing bursts, the burst star formation, as well as the gas and dust, are assumed to be in an exponential disk, but with half-mass radius $`r_{\mathrm{burst}}=\eta r_{\mathrm{bulge}}`$ rather than $`r_{\mathrm{disk}}`$. The axial ratio $`h_z/h_R`$ of the burst region is allowed to be different from that for disks in non-bursting galaxies. The stars which were formerly in the disks of the galaxies before the galaxy merger which triggered the burst are assumed to become part of the bulge following the merger.
#### (b) Geometry of gas and dust:
The gas and dust are in an exponential disk, with the same radial scalelength as the disk stars (either $`r_{\mathrm{disk}}`$ for normal galaxies or $`r_{\mathrm{burst}}=\eta r_{\mathrm{bulge}}`$ for starbursts), but in general with a different scaleheight, so that $`h_z(dust)/h_z(stars)`$ is a free parameter. The gas and dust are in two components within the disk, molecular clouds and the diffuse ISM. The latter corresponds to the cirrus dust. The total gas mass $`M_{\mathrm{cold}}`$ and its metallicity $`Z_{\mathrm{cold}}`$ are calculated by the galaxy formation model, but the fraction of the gas in clouds, $`f_{mc}`$, and the cloud mass $`M_c`$ and radius $`r_c`$ are free parameters of GRASIL, though the results actually depend only on their combination $`M_c/r_c^2`$, which determines, together with the dust/gas ratio (see point (d) below), the optical depth of the clouds (Silva et al., 1998).
#### (c) Young stars and molecular clouds:
Stars are assumed to form inside the molecular clouds, and then to escape on a timescale $`t_{\mathrm{esc}}`$. Specifically, the fraction of stars still inside clouds at time $`t`$ after they formed is assumed to be given by
$`F(t)`$ $`=`$ $`1(t<t_{\mathrm{esc}})`$ (11)
$`=`$ $`2-t/t_{\mathrm{esc}}(t_{\mathrm{esc}}<t<2t_{\mathrm{esc}})`$
$`=`$ $`0(t>2t_{\mathrm{esc}})`$
We allow $`t_{\mathrm{esc}}`$ to take different values in normal disks and in bursts, in keeping with the results of Silva et al. (1998). Indeed, given the small size scale and the intensity of the star formation activity in bursts, it is conceivable that the star-forming environment is quite different from that in normal spiral galaxies (see also § 3.3).
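For reference, eqn (11) as a function (a direct transcription; time units are those of $`t_{\mathrm{esc}}`$):

```python
def fraction_in_clouds(t, t_esc):
    """Fraction of stars of age t still inside their parent molecular cloud."""
    if t < t_esc:
        return 1.0
    if t < 2.0 * t_esc:
        return 2.0 - t / t_esc   # linear escape between t_esc and 2 t_esc
    return 0.0
```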
#### (d) Dust grain model:
The dust is assumed to consist of a mixture of graphite and silicate grains and polycyclic aromatic hydrocarbon molecules (PAHs), each with a distribution of grain sizes. Absorption and emission properties are calculated for each grain composition and size. The grain mix and size distribution were chosen by Silva et al. (1998) to match the extinction and emissivity properties of the local ISM, and are not varied here. The dust/gas ratio $`\delta `$ in the clouds and diffuse ISM is assumed to be proportional to the gas metallicity, with a value $`\delta =1/110`$ for $`Z=Z_{\odot }=0.02`$. Thus, the total dust mass in a galaxy scales as $`M_{\mathrm{dust}}\propto Z_{\mathrm{cold}}M_{\mathrm{cold}}`$.
#### (e) Radiative transfer, dust heating and re-emission:
The luminosities of the different stellar components (bulge stars, disk stars, and young stars still in clouds) are calculated using the population synthesis model described above. The GRASIL code then calculates the radiative transfer of the starlight through the dust distribution. Whilst in molecular clouds a full radiative transfer calculation is performed, the effects of scattering by diffuse dust are included only approximately, by assuming that the effective optical depth for absorption is related to the true absorption and scattering optical depths $`\tau _{abs}`$ and $`\tau _{scat}`$ by $`\tau _{abs,eff}=\sqrt{\tau _{abs}(\tau _{abs}+\tau _{scat})}`$. Thus the dust-attenuated stellar radiation field can be calculated at any point inside or outside the galaxy. Then GRASIL calculates for each point in the galaxy the absorption of radiation, thermal balance and re-emission for each grain composition and size. Thus, the distribution of grain temperatures is calculated self-consistently for the assumed geometry of the stars and dust, including the effects of temperature fluctuations for small grains. The final galaxy SED $`L_\lambda `$ is obtained by adding the contributions from the starlight (attenuated by dust) and from the dust re-emission, and depends on the inclination angle at which the galaxy is viewed. Emission from dust in the envelopes of AGB stars is included in the SSPs.
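The approximate treatment of scattering amounts to a one-line substitution for the diffuse dust; as a sketch:

```python
import numpy as np

def tau_abs_eff(tau_abs, tau_scat):
    """Effective absorption optical depth for the diffuse dust,
    tau_eff = sqrt(tau_abs * (tau_abs + tau_scat))."""
    return np.sqrt(tau_abs * (tau_abs + tau_scat))
```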
Our computations allow us to calculate the amount of energy emitted in the PAH bands, but theoretical predictions of the detailed shapes of the emission features are rather uncertain. Therefore we use the Lorentzian analytical fits to the observed PAH profiles for the Ophiuchus molecular cloud from Boulanger et al. (1998).
### 3.3 Choice of GRASIL adjustable parameters and new GALFORM parameters
The values of the GRASIL parameters (Table 2) not provided by GALFORM have been based on a variety of observational data for galaxies in the local universe. For some of them, the choices were made by trying to match model predictions to the observational data, as discussed in more detail in the relevant sections of this paper. We now summarize the reasons for these choices, and for those of the two GALFORM parameters ($`ϵ_{\mathrm{burst}}`$ and $`\eta `$) not considered in Cole et al. (2000).
#### (a) $`ϵ_{\mathrm{burst}}`$:
this is chosen mainly so as to reproduce the bright end of the IR luminosity function, which is dominated by bursts triggered by galaxy mergers (§7.4). A secondary (weak) constraint is to reproduce the relation between $`L_{IR}/L_{UV}`$ and total luminosity or UV slope $`\beta `$ observed for starburst nuclei (§6.2). The value controls both the luminosity and lifetime (and thus number density) of starbursts.
#### (b) $`\eta =r_{\mathrm{burst}}/r_{\mathrm{bulge}}`$:
the choice of this is mainly based on the observational fact that starburst activity is usually confined to a nuclear region with a size much smaller than the galaxy as a whole, by about one order of magnitude (e.g. Sanders & Mirabel, 1996, and references therein). For instance, in Arp 220 most of the molecular gas is found in the central $`\sim 300\mathrm{pc}`$ (Scoville et al., 1997), and the mid-IR light is dominated by more or less the same region (Keto et al., 1992), while the half-light radius for the old stellar population is $`\sim 3\mathrm{kpc}`$ (Wright et al., 1990). The value of $`\eta `$ controls the amount of extinction of starlight from bursts by the diffuse ISM, which however is usually overwhelmed (in bursts) by extinction in molecular clouds (see §6.4). Therefore our results are not very sensitive to the precise choice of this parameter, nor to the value of $`h_z/h_R`$ in starbursts (discussed below).
#### (c) $`h_z/h_R`$:
for normal disks, we choose a value of 0.1, consistent with observations of the stellar light distributions in edge-on spiral galaxies (e.g. Xilouris et al., 1999). It is also the typical value used by Silva et al. (1998) to fit the SEDs of spiral galaxies. This parameter also matters for matching the observed difference in extinction between spiral galaxies seen edge-on and face-on (§5.2), and the adopted value turns out to be suitable for this. Apart from this test, most predicted properties are not very sensitive to $`h_z/h_R`$. The choice of $`h_z/h_R=0.5`$ for starbursts is based on general observational indications that they are only moderately flattened.
#### (d) $`h_z(dust)/h_z(stars)`$:
this parameter has a significant effect on how much starlight is absorbed in the diffuse medium. From observations of our own galaxy it is known that the scaleheight of stars increases with the age of the stellar population, so that there is no unique value for $`h_z(dust)/h_z(stars)`$. The scaleheight of the gas is comparable to that of the youngest stars. Since we are particularly interested in having a realistic estimate of the extinction in the UV, both because it is strongest there and because this is an important source for dust heating, we choose $`h_z(dust)/h_z(stars)=1`$ to match what is seen for the young stars.
#### (e) $`f_{mc}`$:
this can be estimated observationally from the ratio of molecular to atomic hydrogen in galaxies, since in normal spiral galaxies most of the hydrogen in molecular clouds is in $`H_2`$, while most of the intercloud medium is atomic $`HI`$. Our adopted $`f_{mc}`$ implies a ratio $`H_2/HI`$ similar to the typical one for $`L_{*}`$ spirals found by Sage (1993). Larger values reduce the extinction in the diffuse ISM and produce somewhat colder molecular cloud emission, but our results are in general not significantly affected as long as we keep $`f_{mc}`$ in the range 0.2–0.8.
#### (f) $`M_c`$, $`r_c`$:
as already remarked (§ 3.2), the predicted SEDs depend on the ratio $`M_c/r_c^2`$ rather than on $`M_c`$ and $`r_c`$ separately. Thus $`M_c`$ has been chosen to match typical giant molecular clouds in our own and nearby galaxies, while $`r_c`$ is chosen based on the results of Silva et al. (1998), who tuned $`M_c/r_c^2`$ to fit the SEDs of starburst galaxies in particular. The resulting value for $`r_c`$ is consistent with direct measurements of cloud radii.
#### (g) $`t_{\mathrm{esc}}`$:
this is a very important parameter in the model, since it is this that mainly controls how much of the radiation from young stellar populations is absorbed by dust. Silva et al. (1998) found values of 2.5, 3 and 8 Myr from detailed fits to 3 nearby spirals. For normal spirals, we favor a value of 2 Myr, close to the lower limit of this range, rather than the average 5 Myr. Although the latter provides an equally good overall fit to the LFs (somewhat better for the IRAS colours and LFs, § 7.4, but somewhat worse for the UV LF), the former is more consistent with the massive star census in our own and nearby galaxies, which suggests that the brightest stars (above say $`30M_{\odot }`$, with lifetimes of around 6 Myr) are obscured by dust for about 20% of their total lifetime. For starbursts, the value we choose is based mainly on the comparison with the properties of UV-bright starbursts in §6.2. This leads us to a value closer to that of normal spirals than the values $`t_{\mathrm{esc}}=20`$–$`60\mathrm{Myr}`$ found by Silva et al. (1998) from fitting 3 nearby starbursts, and suggests that the starburst galaxies used by Silva et al. may not be representative of the whole population. The difference could also be due in part to the more complex geometry adopted in this paper for starburst galaxies.
## 4 Generation of model galaxy catalogues
The GALFORM model is run for a set of dark matter halos covering a large range in mass, and generates a catalogue of model galaxies, including information about the following properties for each galaxy at the chosen epoch: the stellar masses $`M_{\mathrm{disk}}`$ and $`M_{\mathrm{bulge}}`$ and half-mass radii $`r_{\mathrm{disk}}`$ and $`r_{\mathrm{bulge}}`$ of the disk and bulge, the mass $`M_{\mathrm{cold}}`$ and metallicity $`Z_{\mathrm{cold}}`$ of gas in the disk, and the star formation histories $`\mathrm{\Psi }(t,Z)`$ of the disk and bulge separately, including both star formation in disks and during bursts, and specifying the metallicity distribution of the stars of each age. In addition, each galaxy has a weight or number density $`n`$, such that that galaxy should appear $`N=nV`$ times in an average volume of the universe $`V`$.
The GALFORM code outputs all the galaxies for each different halo that is calculated, down to a minimum mass controlled by the mass resolution of the merger tree. In practice, this means that the model catalogue contains many more low mass galaxies than high mass galaxies. Running the GRASIL code on every galaxy in the original catalogue is neither feasible (because of computer time) nor necessary. We therefore select a subset of galaxies from the catalogue chosen to sample galaxies more evenly in mass, and redistribute the weights to give the same total number density in each mass range. The GRASIL code is then run on each galaxy in this reduced catalogue to give the SED $`L_\lambda `$ including both stellar emission and dust absorption and emission, and statistical properties (e.g. luminosity functions) are then calculated making use of these weights. In fact, we calculate 2 samples of galaxies, a “normal” sample and a “burst” sample, as follows:
(a) Normal galaxies: By “normal” galaxies, we here simply mean galaxies not selected to have had a recent burst. From the parent GALFORM catalogue, we select a sample with equal numbers of galaxies in equal bins in $`\mathrm{log}M_{}`$, $`M_{}`$ being the total stellar mass of the galaxy. Within each mass bin, galaxies are randomly selected (allowing for multiple selection of the same galaxy) with probability proportional to their weight $`n`$. The selected galaxies are then assigned new weights $`n_i`$, such that each galaxy within the same bin has the same weight (multiply selected galaxies being counted as separate objects), and that the sum of the weights (i.e. number densities) within a bin is the same as in the parent catalogue. We have used bins with $`\mathrm{\Delta }\mathrm{log}M_{}=0.3`$ and about 40 galaxies per bin.
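A sketch of the within-bin sampling step just described (the loop over mass bins and the bookkeeping for multiply selected galaxies are omitted; the function names are ours, not part of the GALFORM code):

```python
import numpy as np

def resample_bin(weights, n_pick, rng):
    """Draw n_pick galaxies from one log-mass bin with probability
    proportional to weight (with replacement), and return their indices
    plus the equal new weight that preserves the bin's number density."""
    weights = np.asarray(weights, dtype=float)
    idx = rng.choice(len(weights), size=n_pick, p=weights / weights.sum())
    return idx, weights.sum() / n_pick

# usage: idx, w_new = resample_bin(n_parent, 40, np.random.default_rng(1))
```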
(b) Burst galaxies: By “burst” galaxies we mean galaxies which have had a burst in the recent past, at whatever redshift we are looking. Bursts have short durations compared to the age of the universe, so the fraction of galaxies undergoing a burst at any one time is very small, but they can be very luminous, and so may dominate the galaxy luminosity function at the highest luminosities. In practice, our “normal galaxy” catalogue contains too few galaxies in total to provide a representative sample of galaxies seen during their burst phase. Rather than use a greatly enlarged “normal galaxy” sample, it is more efficient to calculate a separate sample of “burst” galaxies, as follows: for a redshift $`z`$, we choose a subsample of galaxies which have had bursts during the time interval $`t(z)>t>t(z)-T`$, where $`t(z)`$ is the age of the universe at redshift $`z`$, with equal numbers of galaxies in equal bins in $`\mathrm{log}M_{\mathrm{burst}}`$, $`M_{\mathrm{burst}}`$ being the mass of stars formed in the most recent burst. The galaxies are assigned new weights $`n_i`$ analogously to the case of normal galaxies, but now conserving the total number density in bins of $`M_{\mathrm{burst}}`$ for the galaxies which have had bursts more recently than $`T`$. For each burst galaxy, we then run GRASIL to calculate the total galaxy luminosity at a set of times after the start of the burst, chosen to sample all phases of the burst evolution, including the highest luminosity phase of short duration. If $`T\ll t(z)`$, then the rate of bursts per unit volume during the time interval $`T`$ can be taken as constant. Then, for the $`i`$th galaxy in the $`j`$th phase in the burst evolution that lasts a time $`\mathrm{\Delta }t_j`$, the number density of galaxies that should be found in this phase is
$$n_{ij}=n_i\left(\frac{\mathrm{\Delta }t_j}{T}\right)$$
(12)
These weights can then be used to calculate statistical properties such as luminosity functions. When combining the “normal” and “burst” galaxy samples, the normal galaxies with bursts more recent than $`T`$ are explicitly excluded, to avoid statistical double-counting. In practice, we chose $`T=t(z)/20`$ at all $`z`$, with bins $`\mathrm{\Delta }\mathrm{log}M_{\mathrm{burst}}=0.3`$, around 10 galaxies per bin, and around 10 output times per galaxy, for $`0<t-t_{\mathrm{burst}}<100\tau _e`$. For many calculations of statistical distributions, we then interpolate between these output times to have more burst phases.
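In code, the weights of eqn (12) are simply:

```python
def burst_phase_weights(n_i, dt_phases, T):
    """Eqn (12): number density for galaxy i seen in burst phase j of
    duration dt_j, assuming a constant burst rate over the interval T."""
    return [n_i * dt_j / T for dt_j in dt_phases]
```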
## 5 Properties of spiral galaxies
In this section, we test the model predictions for disk galaxies against observed emission and absorption properties of nearby spirals.
### 5.1 SEDs of face-on spirals
We compared the predicted near-UV to far-IR SEDs of our model galaxies with the broad-band SEDs of a complete sample of nearby spiral galaxies (de Jong & van der Kruit, 1994), consisting of a diameter-limited sample of 86 nearly face-on, disk-dominated galaxies. de Jong & van der Kruit measured fluxes of these galaxies in the BVRIHK bands. We have supplemented these with U-band magnitudes from the literature and IRAS $`12,25,60,100\mathrm{\mu m}`$ fluxes from Saunders (1997).
We considered only those model galaxies with bulge-to-total light ratio $`B/T\le 0.5`$ in the B-band, corresponding to the range of types in the de Jong sample (e.g. Simien & de Vaucouleurs, 1986). From Figure 2 it is apparent that the models reproduce the observed spectral trends reasonably well. This is particularly impressive since the ratio between the infrared and the optical-UV spans more than one order of magnitude, both in the observed and in the theoretical SEDs. The predicted infrared emission peaks at wavelengths somewhat larger than those sampled by IRAS, in agreement with recent ISO observations (e.g. Alton et al., 1998). The emission in the mid-infrared is dominated by PAH molecular bands.
Figure 3 shows the effects on typical SEDs of factor 2 variations in the molecular cloud fraction $`f_{mc}`$, their mass $`M_c`$ (i.e. their optical depth, having fixed the radius) and the escape timescale $`t_{\mathrm{esc}}`$. The effects are mostly confined to the mid-IR between 8 and 40 $`\mu `$m and to the UV below 0.4 $`\mu `$m. In these spectral regions, the predicted flux may change by up to a factor $`\sim 2`$, while the effects are almost negligible elsewhere. In the mid-IR the most important parameter is the cloud optical depth, while in the UV the effects of $`t_{\mathrm{esc}}`$ dominate.
### 5.2 The global extinction in spiral galaxies
The models predict that the extinction in galaxy disks should increase strongly with galaxy luminosity, as shown in Figure 4. Clearly, in comparing predictions of dust extinction with observations, one must be careful to specify the luminosity of the objects concerned.
There have been many attempts to measure or observationally constrain the total dust extinction in galaxy disks, using a variety of techniques: the inclination dependence of magnitudes or colours (e.g. de Vaucouleurs et al., 1991; Giovanelli et al., 1995), surface brightness distributions in edge-on galaxies (e.g. Kylafis & Bahcall, 1987), colour gradients in face-on disks (e.g. de Jong, 1996c), and the ratio of far-IR to UV luminosities (e.g. Xu & Buat, 1995; Buat & Xu, 1996; Wang & Heckman, 1996). In general, different techniques have given somewhat different answers.
Xilouris et al. (1999) estimate dust extinctions by fitting detailed models of the star and dust distributions to the observed surface brightness distributions of edge-on spiral galaxies. Their dust models include scattering. From their six Sb-Sc spirals with luminosities in the range $`-17.5>M_B-5\mathrm{log}h>-19.0`$, we obtain a median central face-on extinction optical depth $`\tau _{V0}=0.6`$. In comparison, for edge-on galaxies in our model in the same luminosity range (after extinction), and with $`0.1<B/T<0.3`$ in the B-band, we find a median value $`\tau _{V0}=2.2`$, which is significantly larger. There could be several reasons for this difference between the models and the observations: there may be a problem with the Xilouris et al. method for deriving $`\tau _{V0}`$ from the observations, or the Xilouris et al. sample may not be representative, or the problem might be with our assumption that the dust and stars have the same exponential scalelength. The extinction-inclination observational test discussed next implies extinctions for edge-on galaxies in this luminosity range which are at least as large as those predicted by our model.
We considered the dependence of the net extinction on the inclination angle at which a galaxy is viewed. This has been studied in many papers using different methods, most recently by Tully et al. (1998), who also summarize the results from the earlier studies. Tully et al. measure the dependence of $`B-K`$, $`R-K`$ and $`I-K`$ colours on galaxy inclination at a given K-band luminosity, the K-band being chosen to minimize extinction effects. They have a complete sample of spirals covering a large range in luminosity, $`-18.5>M_K-5\mathrm{log}h>-24.5`$. They find a strong luminosity dependence, with a difference in B-band extinction between edge-on and face-on galaxies of about 2 mag for the brightest galaxies, and negligible for the faintest ones.
Tully et al. (1998) follow the usual practice and parameterize the extinction relative to that for the galaxy seen face-on as
$$A_\lambda ^{i-0}\equiv m_\lambda (i)-m_\lambda (0)=\gamma _\lambda \mathrm{log}(a/b)$$
(13)
where $`\gamma _\lambda `$ is a function of the passband. The axial ratio $`a/b`$ is assumed to be related to the inclination angle $`i`$ by
$$\mathrm{cos}i=\sqrt{\frac{(b/a)^2q^2}{1q^2}}$$
(14)
where $`i=0`$ for a face-on system, and $`q`$ is the axial ratio of a galaxy seen edge-on.
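For the comparison below we invert eqn (14) for the axial ratio and apply eqn (13); a sketch (function names are ours):

```python
import numpy as np

def axial_ratio(incl, q=0.1):
    """b/a for inclination incl in radians (incl = 0 is face-on), from eqn (14)."""
    return np.sqrt(np.cos(incl) ** 2 * (1.0 - q ** 2) + q ** 2)

def extinction_rel_faceon(incl, gamma_lam, q=0.1):
    """Eqn (13): A_lambda(i) - A_lambda(0) = gamma_lam * log10(a/b)."""
    return gamma_lam * np.log10(1.0 / axial_ratio(incl, q))
```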
The models are compared with observations in Fig. 5. We use equation (14) to convert from the model inclination angle to the axial ratio, assuming $`q=0.1`$, which is the ratio $`h_z/h_R`$ adopted in our galaxy models. We considered model galaxies corresponding to the morphological types Sa-Scd, and four ranges in K-band luminosity, corresponding to the ranges chosen by Tully et al. (1998), indicated by the different symbols in the figure.
The three lines in the figure correspond to different values of the slope $`\gamma _B`$. The model galaxies approximately follow the linear dependence on $`\mathrm{log}(a/b)`$ (equation (13)), but with slopes $`\gamma _B`$ that are somewhat shallower, at any given luminosity, than those observationally inferred by Tully et al. (1998). For instance, for the luminosity range $`-23.0<M_K-5\mathrm{log}h<-22.0`$, our models follow an average slope $`\gamma _B\sim 0.9`$, while Tully et al. find $`\gamma _B=1.1\pm 0.5`$, after allowing for the K-band extinction. The slope predicted by our models depends on the value chosen for the parameter $`h_z/h_R`$ (see also § 3.3). We have checked that increasing $`h_z/h_R`$ from our adopted value of 0.1 decreases the slope, while reducing it does not increase the slope significantly.
The agreement for the inclination test is nevertheless acceptable. Part of the discrepancy could be due to our simplified treatment of scattering by the diffuse dust (§ 3.2). In our models, the absolute extinction is often dominated by the molecular clouds, but the difference between the face-on and the edge-on extinction is entirely due to the cirrus. Comparisons of our model with that of Ferrara et al. (1999), where the treatment of scattering is more accurate, show that for the brightest objects this effect can account for about 0.1–0.2 mag of the differential extinction in the B band.
## 6 Properties of starburst galaxies
Starburst galaxies are broadly defined as galaxies in which the current star formation rate is much greater than its time-averaged value, and the star formation timescale correspondingly much shorter than the age of the universe. This definition includes objects with a wide range of properties, from bursting dwarf irregular galaxies (e.g. Thuan & Martin, 1981) to the ultra-luminous IR galaxies (ULIGs) found by IRAS (e.g. Sanders & Mirabel, 1996). In practice, a large variety of observational criteria have been applied to select samples of starburst galaxies, ranging from optical morphologies and spectra (e.g. Balzano, 1983) to IR colours and luminosities (e.g. Armus et al., 1990; Lehnert & Heckman, 1995). In our galaxy formation model, bursts are assumed to occur following major mergers of galaxies, producing elliptical galaxies from disk galaxies. For the ultra-luminous IR galaxies, the link between the starburst activity and galaxy mergers is clearly established (e.g. Sanders & Mirabel, 1996), while for low-luminosity starbursts, additional triggering mechanisms probably operate, which are not included in our model. In this section we will compare the properties of starbursts predicted by our model with those of various observational samples.
### 6.1 Properties of starbursts in the model
Figure 6 shows how various properties of the bursts in our model vary with the stellar mass of the galaxy, $`M_{\mathrm{star}}`$, after completion of the burst. The total mass of new stars formed in the burst, $`M_{\mathrm{burst}}`$, is seen to increase with the galaxy mass, with the fraction of stars formed in the burst being typically between $`1\%`$ and $`50\%`$. An exception to this trend is the group of points in the lower right corner of Fig.6a corresponding to small bursts occurring in large galaxies. These small bursts are produced by mergers between gas-poor elliptical galaxies. The main trend in panel (a) is produced by mergers between disk galaxies containing significant fractions of gas, and these dominate the statistics at all burst masses. The star formation rate during the burst is $`(M_{\mathrm{burst}}/\tau _e)\mathrm{exp}(-t/\tau _e)`$, with $`t`$ measured from the start of the burst. The peak star formation rate is thus $`M_{\mathrm{burst}}/\tau _e`$, and occurs at the beginning of the burst. This peak SFR is also seen to increase with the host galaxy mass. The half-mass radius $`r_{\mathrm{burst}}`$ and exponential decay time $`\tau _e`$ of the burst are assumed to scale with the half-mass radius and dynamical timescale of the host galaxy, and also increase with galaxy mass. Large bursts, with $`M_{\mathrm{burst}}\sim 10^{10}h^{-1}M_{\odot }`$, are predicted to occur in galaxies with $`M_{\mathrm{star}}\sim 10^{11}M_{\odot }`$, and to have radii $`r_{\mathrm{burst}}\sim 0.5h^{-1}\mathrm{kpc}`$, star formation timescales $`\tau _e\sim 5\times 10^7\mathrm{yr}`$, and peak star formation rates $`\sim 200h^{-1}M_{\odot }\mathrm{yr}^{-1}`$. These properties are similar to those inferred observationally for the ULIGs (e.g. Sanders & Mirabel, 1996).
### 6.2 Properties of UV-bright starbursts
A large amount of work has been done on samples of UV-bright starbursts selected from the catalogue of UV spectra of star-forming galaxies of Kinney et al. (1993). Various correlations have been found, for instance between the bolometric luminosity, the UV/IR ratio, the slope of the UV continuum and the metallicity (e.g. Meurer et al., 1995; Heckman et al., 1998). In this section, we compare the properties of our model starbursts with some of this observational data.
The observational sample that we use for our comparison is that of Heckman et al. (1998), who selected 45 starburst and star-forming galaxies from the original atlas of Kinney et al.. The criteria for a starburst galaxy to appear in the Kinney et al. catalogue are (a) that it has been previously classified as a starburst based on optical data, usually meaning that it has a compact optical morphology and strong optical emission lines (but no AGN activity) (e.g. Balzano, 1983); and (b) that it has been observed by IUE and has a high enough surface brightness within the IUE aperture to produce a reasonable quality UV spectrum. The catalogue is not in any sense statistically complete. The starburst activity in these galaxies is generally confined to the central regions. (The galaxies have mostly been selected so that the starburst activity fits within the IUE aperture, $`20\mathrm{"}\times 10\mathrm{"}`$, while the optical diameters of the underlying galaxies are typically a few arcminutes.)
For the galaxies in their sample, Heckman et al. (1998) measured a UV luminosity $`L_{UV}\equiv \lambda L_\lambda (1900\AA )`$ and mean continuum slope $`\beta `$ between 1250 and 1850 Å (defined by $`L_\lambda \propto \lambda ^\beta `$) from IUE spectra, and a far-IR luminosity $`L_{FIR}`$ from IRAS measurements. Heckman et al. use the definition of $`L_{FIR}`$ from Helou et al. (1988), which can be expressed in terms of the luminosities in the 60 and 100 $`\mathrm{\mu m}`$ IRAS bands as
$$L_{FIR}=0.65\nu L_\nu (60)+0.42\nu L_\nu (100)$$
(15)
$`L_{FIR}`$ provides an estimate of the 40–120 $`\mathrm{\mu m}`$ luminosity. The quantity $`L_{FIR}+L_{UV}`$ is similar to the bolometric luminosity in the case of starbursts, where most of the radiation is emitted in either the UV or the FIR. The galaxies in the Heckman et al. sample cover the range $`L_{FIR}+L_{UV}\sim 10^8`$–$`10^{11}L_{\odot }`$.
The evolutionary tracks of a selection of model starbursts, with burst masses covering the range $`M_{\mathrm{burst}}\sim 10^7`$–$`10^{10}M_{\odot }`$, in $`L_{FIR}+L_{UV}`$, $`L_{FIR}/L_{UV}`$ and $`\beta `$ are shown in Figure 7, together with observational data for the Heckman et al. sample. We have calculated these quantities from the model SEDs to match the way they are calculated from the observational data. The bolometric luminosities of the model bursts, as measured by $`L_{FIR}+L_{UV}`$, peak soon after the start of the burst, following which they evolve towards smaller values. At the same time, the amount of dust reprocessing of the radiation, as measured by $`L_{FIR}/L_{UV}`$, also decreases. This results from two effects: the escape of young stars from the dense molecular clouds, and the decrease in the optical depth of the diffuse dust component as the gas in the burst is consumed. The UV slope $`\beta `$ initially evolves towards more negative values, i.e. bluer, as the net dust opacity falls. However, as the rate of formation of new stars drops and the dominant stellar population becomes older, the intrinsic unabsorbed stellar spectrum becomes redder, so the evolution in $`\beta `$ reverses, the models becoming redder with time even though the dust attenuation is falling. This happens after 20–30 Myr, controlled mainly by the stellar evolution timescale. As long as the evolution in $`\beta `$ is dominated by the declining dust opacity, the models stay close to the locus of observed points in the $`L_{FIR}+L_{UV}`$ vs $`L_{FIR}/L_{UV}`$ and $`L_{FIR}/L_{UV}`$ vs $`\beta `$ panels, but when the intrinsic stellar spectrum starts to redden with age, the models move away from the observed locus. This is not in itself in contradiction with observations, since there are selection effects in the observational sample, as we discuss below.
The burst evolution involves the interplay between two timescales, the lifetime of massive stars, $`\sim 10^7\mathrm{yr}`$, and the exponential decay time $`\tau _e`$ of the star formation rate and gas mass in the burst. The latter varies with burst mass, being larger than the stellar evolution timescale for large bursts, and comparable for small bursts, as shown in Fig. 6. The model starbursts begin their evolution with a large infrared excess and a flat UV slope (upper and lower panels of Fig. 7). Fainter bursts, which have lower gas column densities and are on average also more metal-poor, quickly exhaust their gas content and evolve toward a low infrared excess and a negative UV slope, along the locus defined by observations (lower panel of Fig. 7). Conversely, brighter bursts, having larger gas column densities and higher metallicities, remain highly enshrouded by dust until, after a few tens of Myr, the dominant stellar population has become intrinsically old. Their UV continuum slopes always remain flat, at the beginning because of reddening and at later times because of age.
In summary, the model bursts lie close to the region occupied by observed bursts in Fig. 7 as long as the stellar population is young, in the sense of the UV light being dominated by very massive stars. The position of bursts along the observational locus is then determined mostly by the net dust opacity in the UV, in agreement with the interpretation of Meurer et al. (1995) and Heckman et al. (1998). This in turn depends both on the initial gas mass, radius and metallicity of the burst, and on its evolutionary stage.
A detailed comparison with the Heckman et al. observations would require us to construct a mock catalogue of model starbursts obeying the same selection criteria as the observed sample. Unfortunately, the observational selection criteria are rather ill-defined. In addition, one of the selection criteria is the presence of strong $`HII`$ region emission lines, and the GRASIL code at present does not calculate these emission line properties. Instead, we simply select starbursts with ages since the start of the burst less than $`t_{max}`$, to account roughly for the fact that as soon as most of the massive stars have evolved away, the galaxy will no longer produce strong emission lines, and so will no longer be classified as a burst in the observational sample. Fig. 8 shows the resulting distribution of points for the choice $`t_{max}=50\mathrm{Myr}`$. The model starbursts are seen to follow similar relations to the observational sample. The results do not depend sensitively on the choice of $`t_{max}`$.
Several parameters may in principle affect the spectral properties of a model starburst galaxy and therefore the location of our models in the above plots, but the most critical are the ratio between the star formation timescale and the dynamical time ($`ϵ_{\mathrm{burst}}^{-1}=\tau _{\mathrm{burst}}/\tau _{\mathrm{bulge}}`$), and the escape time ($`t_{\mathrm{esc}}`$) for newly born stars to escape from their parental clouds. The former affects the bolometric luminosity, which is almost directly proportional to the star formation rate, and thus inversely proportional to the star formation timescale. The latter affects the fraction of light absorbed inside clouds, and so may affect both the slope of the UV spectrum and the ratio between the IR and UV luminosities. The distribution of model points in Fig. 8 can therefore be used to constrain the values of $`ϵ_{\mathrm{burst}}`$ and $`t_{\mathrm{esc}}`$. However, we found that changes in either of these parameters by a factor $`\sim 2`$ either way would only slightly worsen the match with observations.
### 6.3 Infrared colours
We now consider the infra-red and sub-mm colours of starbursts and normal galaxies. Figure 9 shows the dependence of the mean IRAS colours on infra-red luminosity. This plot includes all model galaxies, both normal and starbursts. Their IRAS band luminosities are calculated by convolving the SEDs with the IRAS response functions. In calculating the mean colours, the models are weighted by their number density and by a factor $`L_\nu ^{3/2}(60\mathrm{\mu m})`$ to account approximately for the volume within which a galaxy would be visible in a $`60\mathrm{\mu m}`$ flux-limited sample. $`L_{IR}`$ is the standard estimate of the total 8–1000 $`\mathrm{\mu m}`$ IR luminosity from the luminosities in the four IRAS bandpasses (Sanders & Mirabel, 1996):
$`L_{IR}`$ $`=`$ $`0.97\nu L_\nu (12)+0.77\nu L_\nu (25)`$ (16)
$`+`$ $`0.93\nu L_\nu (60)+0.60\nu L_\nu (100).`$
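Both estimators are linear combinations of the $`\nu L_\nu `$ band luminosities; transcribing eqns (15) and (16):

```python
def L_FIR(L60, L100):
    """Eqn (15): ~40-120 um luminosity from nu*L_nu at 60 and 100 um."""
    return 0.65 * L60 + 0.42 * L100

def L_IR(L12, L25, L60, L100):
    """Eqn (16): ~8-1000 um luminosity from the four IRAS bands."""
    return 0.97 * L12 + 0.77 * L25 + 0.93 * L60 + 0.60 * L100
```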
The model predictions are compared with the observed mean colours calculated by Soifer & Neugebauer (1991) from the IRAS bright galaxy sample (IRAS BGS, Soifer et al. 1989), which is a complete sample flux-limited at $`60\mathrm{\mu m}`$. The models and observations show generally similar trends, in particular in the 12/60 $`\mathrm{\mu m}`$ colour.
Figure 10 shows the quite good agreement between the average IR and sub-mm SED of model starburst galaxies, and the observed SEDs of luminous infra-red galaxies from the sample of Lisenfeld et al. (2000). The limit $`L_{IR}>10^{11}h^{-2}L_{\odot }`$ for the models has been chosen to approximately reproduce the selection for the Lisenfeld et al. sample. Note that the dust opacity in our models decreases as $`\lambda ^{-b}`$ with $`b\simeq 2`$ for $`100<\lambda <1000\mathrm{\mu m}`$, while Lisenfeld et al., by fitting optically thin single temperature models to the data at $`\lambda >60\mathrm{\mu m}`$, derived $`b`$ values in the range 1.5–2. Our models demonstrate that the shallower slopes can instead be explained by the distribution of dust temperatures within each galaxy.
### 6.4 Extinction in starburst galaxies
An important problem in the study of star-forming galaxies is to determine the amount of attenuation of starlight by dust, especially in the UV. This bears directly on the determination of star formation rates in galaxies from their UV luminosities. For our own and a few nearby galaxies, the extinction law of the dust can be measured directly from observations of background stars, where the dust acts as a foreground screen. The differences found between the shapes of the extinction curves of the Galaxy, the Large Magellanic Cloud and the Small Magellanic Cloud below $`\lambda `$2600Å (e.g. Fitzpatrick, 1989) are often ascribed to the different metallicities in these systems, covering the range $`Z\sim 0.1`$–$`1Z_{\odot }`$. Recently, Calzetti et al. (1994) (see also Calzetti, 1997, 1999) have analyzed the dust extinction in starburst galaxies. In this case, the measurement of the extinction is more complicated, since one measures the integrated light of the whole system, where stars and dust are mixed in a complex way. From the optical and UV spectra of a sample of UV-bright starbursts, Calzetti et al. derive an average attenuation law characterized by a shallower far-UV slope than that of the Milky Way extinction law, and by the absence of the 2175 Å feature. This is at first sight quite surprising, because the metallicities of these galaxies are mostly similar to that of the Milky Way, and so they might be expected to have similar dust properties. The question is then to what degree the differences between the starburst attenuation law and the Milky Way extinction law are due to the geometry of the stars and dust, and to what degree they can only be explained by differences in dust properties.
Figure 11 compares the average attenuation curves for galaxies from our model with the empirical “attenuation law” obtained for starbursts by Calzetti et al. (1999). The attenuation $`A_\lambda `$ for the models is defined as the difference in magnitudes of the stellar luminosity $`L_\lambda `$ of a galaxy with and without dust, and is normalized to the colour excess $`E(B-V)=A_B-A_V`$ of the stars to give an attenuation “law” $`k(\lambda )=A_\lambda /E(B-V)`$, equivalent to the definition of Calzetti et al.. As described in §3.2, the dust properties we adopt imply an extinction law characterized by a distinct 2175 Å feature produced by graphite grains, and well matching the average Milky Way extinction curve. The model extinction law (solid line in Fig. 11) is the attenuation law that would be measured if all the dust were in a foreground screen in front of the stars and no scattered light reached the observer. This geometry is clearly not realistic as applied to the integrated light from galaxies. In our models, we have instead a complex and wavelength dependent geometry, where the UV emitting stars are heavily embedded inside molecular clouds, while the older stars, mainly emitting in the optical and near infrared, are well mixed with the diffuse interstellar medium.
Figure 11 shows average attenuation curves for 3 classes of model galaxies: (a) normal galaxies with $`E(B-V)>0.05`$; (b) starbursts with $`5\times 10^8<L_{IR}<5\times 10^{10}h^{-2}L_{\odot }`$; and (c) starbursts with $`L_{IR}>5\times 10^{10}h^{-2}L_{\odot }`$. The starburst models are all chosen to have ages $`<50\mathrm{Myr}`$ since the start of the burst, as discussed in §6.2. Sample (b) corresponds roughly to the galaxies for which Calzetti et al. measured their attenuation law. The model attenuation law depends significantly on the sample, but all 3 classes show a weak or completely absent $`2175\AA `$ feature. In particular, the predicted attenuation curve for the lower luminosity starbursts is remarkably close to the empirical “Calzetti law”. This result is an entirely geometrical effect, and did not require us to assume dust properties for starbursts different from those of the Galaxy. This conclusion is contrary to that of Gordon et al. (1997), who argued that the observed shape is only produced with dust that lacks the $`2175\AA `$ feature in its extinction curve. The reason is presumably that Gordon et al. only considered clumping of dust, not of stars, and assumed a spatial distribution for stars independent of stellar age. Our results follow naturally from the assumption that stars are born inside dense dust clouds and gradually escape.
To further illustrate the importance of geometrical effects in determining the attenuation law, we show in Fig. 12 the attenuation laws of 2 normal and 2 starburst model galaxies. The global attenuation law, and the separate contributions from the molecular clouds and diffuse dust, are shown in each case, normalized to the colour excess $`E(B-V)`$ produced by that dust component. The global ($`g`$), molecular cloud (MC) and diffuse dust ($`d`$) contributions are related by
$$(A_\lambda /E)_g=\frac{(A_\lambda /E)_{MC}E_{MC}+(A_\lambda /E)_dE_d}{E_g}$$
(17)
In the far-UV, including the spectral region around the $`2175\AA `$ feature, the global attenuation in the models receives a strong, or even dominant, contribution from the MCs. The shape of the attenuation curve there has little to do with the optical properties of the grains, because our MCs usually have such large optical depths that the UV light from stars inside the clouds is completely absorbed. The wavelength dependence of the attenuation law of the MC component instead arises from the fact that the fraction of the light produced by very young stars increases with decreasing wavelength, while at the same time the fraction of stars which are inside clouds increases with decreasing age, as given by eqn. (11). The additional attenuation arising in the cirrus component can sometimes imprint a weak $`2175\AA `$ feature, but this is not the case for the starbursts, where the primary UV stellar light is dominated by very young populations.
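Eqn (17) follows from the additivity of the attenuations, and hence of the colour excesses, of the two dust components. As a sketch:

```python
def global_attenuation_law(k_mc, E_mc, k_d, E_d):
    """Combine component attenuation laws k = A_lambda/E(B-V) via eqn (17),
    assuming A_lambda and E(B-V) add over components, so E_g = E_mc + E_d."""
    return (k_mc * E_mc + k_d * E_d) / (E_mc + E_d)
```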
## 7 Galaxy Luminosity Function
### 7.1 Method
The luminosity function of galaxies at different wavelengths is a basic property of the galaxy population which a galaxy formation model should explain. We calculate the galaxy luminosity function at different wavelengths by combining the model SEDs with the weights for the individual galaxies (Section 4). For the normal galaxy sample we have, for the number density of galaxies per $`\mathrm{ln}L`$ at some wavelength $`\lambda `$
$$\frac{dn}{d\mathrm{ln}L_\lambda }=\frac{1}{\mathrm{\Delta }\mathrm{ln}L}\underset{|\mathrm{ln}L_i-\mathrm{ln}L|<\frac{1}{2}\mathrm{\Delta }(\mathrm{ln}L)}{\sum }n_i$$
(18)
where $`n_i`$ is the number density for the $`i`$th galaxy, $`L_i`$ is its luminosity at wavelength $`\lambda `$, the centre of the bin is at $`L`$ and its width is $`\mathrm{\Delta }(\mathrm{ln}L)`$. For the burst galaxy sample, we have to sum over the burst phase $`j`$ also, giving
$$\frac{dn}{d\mathrm{ln}L_\lambda }=\frac{1}{\mathrm{\Delta }\mathrm{ln}L}\underset{|\mathrm{ln}L_{ij}-\mathrm{ln}L|<\frac{1}{2}\mathrm{\Delta }(\mathrm{ln}L)}{\sum }n_{ij}$$
(19)
where $`n_{ij}`$ is the number density for galaxy $`i`$ at evolutionary phase $`j`$, and $`L_{ij}`$ is its luminosity in that phase.
Galaxy luminosity functions are measured in specific bands defined by a filter+instrument response function, e.g. the standard B or K bands, or the IRAS bands. Thus we convolve the model SEDs with the appropriate response function to calculate the luminosity $`L_\nu `$ in that band. We use absolute magnitudes on the AB system, $`M_{AB}=-2.5\mathrm{log}_{10}(L_\nu /4.345\times 10^{20}\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1})`$. The model luminosity functions have statistical uncertainties due to the finite size of the model galaxy catalogue. We estimate these statistical errors by bootstrap resampling of the catalogue.
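A sketch of the estimator of eqns (18)-(19), together with the AB magnitude conversion; burst galaxies contribute one entry per sampled phase, with weights $`n_{ij}`$ (the binning details here are ours):

```python
import numpy as np

def luminosity_function(L, n, dlnL):
    """Weighted estimate of dn/dln L from luminosities L and number
    densities n, following eqns (18)-(19)."""
    lnL = np.log(np.asarray(L))
    edges = np.arange(lnL.min(), lnL.max() + dlnL, dlnL)
    counts, _ = np.histogram(lnL, bins=edges, weights=n)
    return 0.5 * (edges[:-1] + edges[1:]), counts / dlnL

def M_AB(L_nu):
    """Absolute AB magnitude for L_nu in erg/s/Hz."""
    return -2.5 * np.log10(L_nu / 4.345e20)
```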
### 7.2 Optical and Near Infra-Red
At optical and near-IR wavelengths the emission is mostly from older stars and the effects of dust obscuration are generally modest. Figure 13 shows the local luminosity function in the B-band ($`\lambda =0.44\mathrm{\mu m}`$), compared to the observed luminosity function measured from the ESP redshift survey by Zucca et al. (1997). The predicted luminosity function agrees well with the observed one, except at the highest luminosities. Extinction by dust makes galaxies around $`0.6\mathrm{mag}`$ fainter on average, for bright ($`L\sim L_{*}`$) galaxies. Galaxies which have had recent bursts (i.e. in the last $`1/20`$ of the age of the universe, 0.7 Gyr) do not dominate the luminosity function at any luminosity, when the effects of dust are included.
As described in Cole et al. (2000), the B-band luminosity function is used as one of the primary observational constraints for setting the parameters in the GALFORM model, in particular the parameters $`\alpha _{\mathrm{hot}}`$ and $`V_{\mathrm{hot}}`$ controlling feedback, and the parameter $`\Upsilon `$ which sets the fraction of brown dwarfs in the IMF. The good agreement with the observed B-band luminosity function is therefore not a surprise, but it was not guaranteed, since the stellar population and dust models used in Cole et al. (2000) are not identical to those used here. Cole et al. used the stellar population models of Bruzual & Charlot (2000), and calculated the effects of dust using the models of Ferrara et al. (1999). The stellar population model in GRASIL is based on similar stellar evolution tracks and spectra, but the treatment of dust extinction is significantly different. The Ferrara et al. models assume that stars and dust are smoothly distributed, while in GRASIL a fraction of the dust is in clouds, and young stars are confined to these clouds. The B-band luminosity functions, both with and without dust, calculated by GALFORM using the Bruzual & Charlot (2000) and Ferrara et al. (1999) models agree very well with those computed using the GRASIL stellar population+dust model, demonstrating the consistency of the procedure of using the galaxy formation parameters derived in Cole et al. in combination with the GRASIL model. The effects of dust computed using the two models are quite similar in the B-band, in spite of the differences in the star and dust geometry. This is because most of the B-band light is produced by stars which are old enough to have escaped from the clouds in which they formed, so in GRASIL the attenuation is due mostly to the diffuse component of the dust, which is modelled in a similar way to that in the Ferrara et al. models.
Figure 13 also shows the model and observed luminosity functions in the K-band. In this case, the effects of dust are very small, so the comparison is essentially independent of assumptions about dust. Again, the model agrees well with observations over most of the luminosity range, as was also found by Cole et al. (2000). The contribution of galaxies with recent bursts is very small at all luminosities.
### 7.3 Far Ultra-Violet
In Figure 14 we compare the predicted luminosity function in the far-UV ($`\lambda =0.2\mathrm{\mu m}`$) with that measured by Sullivan et al. (2000) from a UV-selected redshift survey, based on FOCA instrument fluxes. This comparison has not previously been made for any semi-analytical galaxy formation model. The effects of dust are much larger than in the optical, as one would expect. In this case, the effects of the more realistic geometry for the stars and dust assumed by GRASIL compared to the Ferrara et al. models (clumpy rather than smooth distributions for the stars and dust) are significant. The stars that produce most of the UV light spend a large fraction of their lifetimes in the molecular clouds where they form, so the mean extinction is larger than in the case of a smoothly distributed dust component with the same total dust mass. Bursting and non-bursting galaxies contribute roughly equally at the highest luminosities. This result is however sensitive to the details of how bursts are modelled, since these determine what small fraction of the UV light escapes from currently or recently bursting galaxies. When we compare our model LF including extinction with the directly observed LF, uncorrected for extinction, we find reasonable agreement at lower luminosities, but at high luminosities the model LF is somewhat lower than the observed one. This might be partly an effect of evolution in the observational sample, which covers a significant redshift range ($`z\lesssim 0.5`$), but it might also be that the UV extinction is over-estimated in the model.
Figure 14 also shows the effect of changing the burst radius $`r_{\mathrm{burst}}`$ and the timescale $`t_{\mathrm{esc}}`$ for stars to escape from clouds. Increasing $`r_{\mathrm{burst}}/r_{\mathrm{bulge}}`$ from 0.1 to 0.5 reduces the optical depth in the diffuse component during bursts, allowing more of the UV light from bursts to escape, and increasing the LF at the highest luminosities. Increasing $`t_{\mathrm{esc}}`$ in bursts from $`10\mathrm{Myr}`$ to $`30\mathrm{Myr}`$ has negligible effect on the total UV LF. Increasing $`t_{\mathrm{esc}}`$ in normal galaxies from $`2\mathrm{Myr}`$ to $`5\mathrm{Myr}`$ slightly lowers the amplitude of the luminosity function at the bright end.
### 7.4 Mid and Far Infra-Red
In the mid- and far-infrared, the luminosity of galaxies is dominated by re-emission from dust. Using the GRASIL code, we can now directly predict the far-IR luminosities of galaxies from our galaxy formation model, and compare with observations. The luminosity functions of galaxies at 12, 25, 60 and 100$`\mathrm{\mu m}`$ have been measured using IRAS data. The best determination is at $`60\mathrm{\mu m}`$, where IRAS was most sensitive. Figure 15 shows that the predicted luminosity function agrees extremely well with that observed by Saunders et al. (1990) and Soifer & Neugebauer (1991), except at very low luminosities, where the measured LF is fairly uncertain. Above $`\nu L_\nu (60)\simeq 3\times 10^{10}h^{-2}L_{\odot }`$, the model LF is dominated by galaxies undergoing bursts triggered by mergers. This is in agreement with observations of ultra-luminous IRAS galaxies, which are all identified as recent mergers based on their optical morphology (e.g. Sanders & Mirabel, 1996).
The right panel of Figure 15 shows the effect on the $`60\mathrm{\mu m}`$ LF of varying the parameter $`ϵ_{\mathrm{burst}}`$, which relates the star formation timescale in bursts to the dynamical time of the bulge (equation 6). Unlike the other parameters in the GALFORM model, Cole et al. (2000) did not try to choose a best-fit value, because the observational data in the optical and near-IR that they compared with were not sensitive to its value. (The Cole et al. results were calculated assuming $`\tau _{\mathrm{burst}}=0`$.) However, the far-IR LF is sensitive to this parameter and thus constrains the burst timescale for the most luminous galaxies. Figure 15 shows predictions for $`ϵ_{\mathrm{burst}}=`$ 1, 0.5, 0.25, corresponding to $`\tau _{\mathrm{burst}}/\tau _{\mathrm{bulge}}=1,2,4`$ respectively. Increasing $`ϵ_{\mathrm{burst}}`$ means bursts are more luminous, but last for a shorter time, and so have a lower number density. This trend is seen at the high-luminosity end of the $`60\mathrm{\mu m}`$ LF, which is dominated by bursting galaxies. A value $`ϵ_{\mathrm{burst}}=2`$ seems to fit somewhat better than higher or lower values, so we adopt this as our standard value. Also shown in the same panel is the somewhat better fit obtained setting $`t_{\mathrm{esc}}=5`$ Myr. However, as explained in § 3.3, our adopted standard value 2 Myr is favored by stellar evolution timescale arguments and by the UV LF. Increasing $`t_{\mathrm{esc}}`$ in bursts from $`10\mathrm{Myr}`$ to $`30\mathrm{Myr}`$ has negligible effect on the LF.
The luminosity functions at 12, 25 and 100 $`\mathrm{\mu m}`$ are compared with the observational data from Soifer & Neugebauer (1991) in Figure 16. The predicted luminosity function agrees well with the measured one in each case.
## 8 Star formation rate indicators
Here we examine the accuracy of several SFR indicators based on continuum UV or IR luminosities (reviewed by e.g. Kennicutt, 1999).
The luminosity $`L_\nu (2800)`$ at $`2800\AA `$ has been extensively used to estimate SFRs of high-redshift galaxies and to investigate the evolution of the cosmic SFR density (e.g. Lilly et al., 1996; Connolly et al., 1997). In the top panel of Fig. 17, we plot the SFR against $`L_\nu (2800)`$ for the model galaxies, including the effects of extinction. Only models corresponding to spiral galaxies (B/T $`\le `$ 0.5) and luminous starburst galaxies ($`L_{IR}\gtrsim 10^{10}h^{-2}L_{\odot }`$) are shown. At higher luminosities and SFRs, $`L_\nu (2800)\gtrsim 3\times 10^{26}\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$, the models in the absence of dust follow a linear relation between SFR and $`L_\nu `$, with a rather small dispersion, as would be expected if the UV luminosity is dominated by young stars and the recent SFR has been approximately constant. This linear relation, $`SFR/(M_{\odot }\mathrm{yr}^{-1})=8.5\times 10^{-29}L_\nu (2800\AA )/(\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1})`$, is indicated by the solid line, and its extrapolation to lower luminosity is shown by the dotted line. Dust extinction shifts points to the left of this line. At lower luminosities, the effects of dust extinction are very small, because of the low gas contents of the galaxies. On the other hand, because the SFRs are so small, the $`2800\AA `$ light has a significant contribution from post-AGB stars and old, metal-poor populations, and this causes the locus of points to bend to the right of the linear relation. The galaxies with very high metallicities ($`Z>0.1`$) have very low gas fractions.
The figure also shows as a dashed line the linear relation between $`L_\nu (2800)`$ and SFR obtained by Kennicutt (1999), using stellar population models for a Salpeter IMF, and assuming a constant SFR for the last $`10^8\mathrm{yr}`$. The $`SFR/L_\nu `$ ratio in our models without dust (solid line) is about 40% lower than Kennicutt’s value, but this difference is entirely due to the different IMF we adopt (equation (9)).
Perhaps the most striking feature of this plot is that the starburst models are offset by more than an order of magnitude from the average relation holding for the normal spirals, because of the large UV extinctions in the starbursts. Furthermore, their dispersion in SFR at a given luminosity is also quite large. Thus, the 2800Å luminosity with no dust correction performs rather poorly as a quantitative SFR indicator, both for very high SFRs (because of extinction) and for very low ones (because of the light from older stars).
The middle panel of Fig. 17 depicts the relation between the star formation rate and $`L_{IR}`$ (eq. 16), the estimated $`8`$–$`1000\mathrm{\mu m}`$ luminosity based on the IRAS fluxes. The solid line represents the relation derived by Kennicutt (1998) for starbursts, $`SFR/(M_{\odot }\mathrm{yr}^{-1})=4.5\times 10^{-44}L_{IR}/(\mathrm{erg}\mathrm{s}^{-1})`$, by assuming that the bolometric output in a continuous burst of age between 10–100 Myr is completely reprocessed by dust, again for a Salpeter IMF. The Kennicutt relation is seen to fit our model galaxies quite well at all luminosities (normal spirals as well as starbursts), even though we assume a different IMF from Kennicutt.
The luminosity in the ISO $`15\mathrm{\mu m}`$ band has also been proposed as an approximate SFR indicator. The bottom panel of Fig. 17 shows the SFR vs $`L_\nu (15\mathrm{\mu m})`$ for our galaxies. The line plotted is our best linear fit to the model points, $`SFR/(M_{\odot }\mathrm{yr}^{-1})=5.6\times 10^{-30}L_\nu (15\mathrm{\mu m})/(\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1})`$. Despite the important contribution from PAH bands, the correlation of the SFR with $`L_\nu (15\mathrm{\mu m})`$ in our models is still fairly good.
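For convenience, the three linear calibrations discussed in this section can be collected in one place. The following is a minimal sketch (ours, not part of the GALFORM/GRASIL codes); the coefficients are taken directly from the text, and the input luminosity value is purely illustrative.

```python
# Sketch: the three linear SFR calibrations quoted in this section.

def sfr_from_uv(L_nu_2800):
    """SFR in Msun/yr from L_nu(2800 A) in erg/s/Hz (dust-free model fit;
    about 40% lower than Kennicutt's value because of the different IMF)."""
    return 8.5e-29 * L_nu_2800

def sfr_from_ir(L_ir):
    """SFR in Msun/yr from L_IR(8-1000 um) in erg/s (Kennicutt 1998)."""
    return 4.5e-44 * L_ir

def sfr_from_15um(L_nu_15):
    """SFR in Msun/yr from L_nu(15 um) in erg/s/Hz (our best fit)."""
    return 5.6e-30 * L_nu_15

# Illustrative value near the bend of the UV relation:
print(sfr_from_uv(3e26))  # ~0.03 Msun/yr
```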
## 9 Summary and Conclusions
We have combined an ab initio model of galaxy formation (GALFORM, §2, Cole et al., 2000), with an ab initio model for stellar emission and dust emission and absorption in galaxies (GRASIL, §3, Silva et al., 1998). Both models are state-of-the-art. We are able to predict, in the context of the cold dark matter cosmology, the luminosities and spectral energy distributions from the UV to the sub-mm for the whole galaxy population, and how these change with cosmic epoch. Here we have focused on a wide range of spectrophotometric properties of present-day galaxies, from the UV to the sub-mm, for a flat low-density cosmology ($`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }_0=0.7`$) with a CDM spectrum of density fluctuations. The model is remarkably successful in explaining the UV, optical and IR spectrophotometric and extinction properties of galaxies in the local universe. Future papers will investigate galaxy evolution in the UV, optical, IR and sub-mm out to high redshift.
Dust plays a dominant role in starburst galaxies, where star formation proceeds in the central regions of a galaxy on a short timescale. We did not previously make any detailed comparison of the properties of the starbursts predicted by semi-analytical galaxy formation models with observational data. Here we have shown that these properties are nicely reproduced.
Our model predicts an average dust attenuation law for starburst galaxies that agrees remarkably well with the empirical law found by Calzetti et al. (1999), although with a significant dispersion around the mean. In particular, the 2175Å bump is absent when the net attenuation of the galaxy light is considered. This is entirely an effect of the geometry of stars and dust in our model, and has nothing to do with the optical properties of dust grains. Indeed, our dust mixture would reproduce the average Milky Way extinction curve (with a strong 2175Å feature), if it were arranged in a foreground screen geometry. The absence of this feature in the attenuation curves of model starbursts is because in that case the dust attenuation is dominated by molecular clouds, with the shape controlled by the gradual escape of young stars from the clouds.
The starburst galaxies are predicted to dominate the bright end of the luminosity function in terms of bolometric luminosity, but because of the large extinctions in these objects, they do not make a dominant contribution to the bright end of the luminosity function in either the UV, optical or near-IR once dust effects are included. At these wavelengths, the luminosity function is dominated by normal spiral and elliptical galaxies. However, the starbursts completely dominate the bright end of the luminosity function in the mid- and far-IR (10–100$`\mathrm{\mu m}`$), at total luminosities $`L_{IR}\gtrsim 10^{11}h^{-2}L_{\odot }`$. Overall, the luminosity function predictions from the far-UV to the far-IR are a remarkable success for the model, since the dust contents and galaxy radii are predicted a priori, and the only significant adjustable parameter in the comparison was the ratio of the burst timescale to the bulge dynamical time, which was chosen to fit the bright end of the luminosity function at 60$`\mathrm{\mu m}`$.
As expected, our models show that the UV continuum is in general a poor star formation indicator, both because of the large variations in the amount of extinction, and also because of the contribution from old stellar populations in the mid-UV ($`\sim 3000\AA `$) in more quiescent galaxies. The infrared luminosity is a much more reliable SFR indicator.
The parameter values adopted here for the GALFORM model are those chosen previously by Cole et al. (2000) to fit the properties of the local galaxy population in the optical and near-infrared, apart from the new ones specifying the timescales and radii of bursts. Cole et al. also discuss the effects of varying the ’old’ GALFORM parameters. The purpose of this paper was to present the effects of including dust in a fixed galaxy formation model. The treatment of dust reprocessing with GRASIL requires some additional parameters to be set, but opens up the possibility of testing semi-analytical models against the wealth of IR and sub-mm observations already available or planned for the near future. The adopted values for these parameters have also been guided by the results of Silva et al. (1998), who used GRASIL to reproduce detailed SEDs of several local normal and starburst galaxies. Some of the effects of variations of these parameters on SEDs, LFs and starburst properties have been discussed in the relevant sections of this paper. A more systematic parameter study will be included in future investigations. Among the newly introduced parameters, probably the most important ones are $`ϵ_{\mathrm{burst}}`$ and $`t_{\mathrm{esc}}`$, describing the timescales of bursts and the time for young stars to escape from their parent molecular clouds. They have significant effects on (and are constrained by) the IR and UV luminosity functions respectively.
In conclusion, this paper is a stepping stone for future work, which will apply the same models to galaxies at high redshift. Now that semi-analytic models can be effectively compared with infrared and sub-millimetre observations, as well as UV and optical data, they can be used to work towards an observationally and theoretically consistent picture for the history of galaxy formation and star formation in the universe.
We thank the anonymous referee for a constructive report. We acknowledge the support by the European Community under TMR grant ERBFMRX-CT96-0086. SMC acknowledges the support of a PPARC Advanced Fellowship, and CSF a PPARC Senior Fellowship and a Leverhulme Research Fellowship. CGL acknowledges the support of the Danish National Research Foundation through its establishment of the TAC, and a PPARC Visiting Fellowship at Durham. This work was partially supported by the PPARC rolling grant for extragalactic astronomy and cosmology at Durham. |
# Vortices in small superconducting disks
## Abstract
We study the Ginzburg-Landau equations in order to describe a two-dimensional superconductor in a bounded domain. Using the properties of a particular integrability point ($`\kappa =1/\sqrt{2}`$) of these nonlinear equations which allows vortex solutions, we obtain a closed expression for the energy of the superconductor. The presence of the boundary provides a selection mechanism for the number of vortices.
A perturbation analysis around $`\kappa =1/\sqrt{2}`$ enables us to include the effects of the vortex interactions and to describe quantitatively the magnetization curves recently measured on small superconducting disks . We also calculate the optimal vortex configuration and obtain an expression for the confining potential away from the London limit.
The dimensionless Ginzburg-Landau energy functional $``$ of a superconductor depends on only one parameter , the ratio $`\kappa `$ between the London penetration depth $`\lambda `$ and the coherence length $`\xi `$
$$ℱ=\int_\mathrm{\Omega }\frac{1}{2}|B|^2+\kappa ^2|1-|\psi |^2|^2+|(\vec{\nabla }-i\vec{A})\psi |^2.$$
(1)
The order parameter $`\psi `$ is dimensionless as well as the magnetic field $`B`$ measured in units of $`\frac{\varphi _0}{4\pi \lambda ^2}`$, with $`\varphi _0=\frac{hc}{2e}`$. Lengths are measured in units of $`\lambda \sqrt{2}`$. The expression (1) assumes that both the order parameter and the vector potential have a slow spatial variation. The integral is over the volume $`\mathrm{\Omega }=\pi R^2d`$ of a thin disk of radius $`R`$ and thickness $`d`$.
Outside the superconducting sample, the order parameter vanishes and the magnetic field is a solution of the Maxwell equation. The boundary condition between a superconductor and an insulator is $`(\vec{\nabla }-i\vec{A})\psi |_{\vec{n}}=0`$ where $`\vec{n}`$ is the unit vector normal to the surface of the disk. The presence of a boundary precludes a complete analytical solution of the 3d Ginzburg-Landau equations for a thin disk. We are then led to make some simplifying assumptions, based upon numerical results .
The thickness $`d`$ of the sample considered in the experiments fulfills $`d\ll \xi `$ and $`d\ll \lambda `$. If the curvature of the magnetic flux lines, given by $`R/\lambda _e^2`$ (where $`\lambda _e(d,R,\lambda )`$ stands for the effective screening length), is smaller than $`1/\lambda _e`$, i.e. if $`R\ll \lambda _e`$, then both $`\psi `$ and the vector potential $`\vec{A}`$ can be considered to be constant across the thickness and the disk is effectively two-dimensional. The expression of the effective screening length $`\lambda _e(d,R,\lambda )`$ is not known, except for the case $`R\to \infty `$ where $`\lambda _e\simeq \lambda ^2/d`$. In the London limit (i.e. $`\kappa \to \infty `$), such a system has been described using Pearl’s solution . Finally, since $`\psi `$ and $`\vec{A}`$ are constant over the thickness, the covariant Neumann boundary condition, stated above, is automatically satisfied on the upper and lower surface of the disk.
The Ginzburg-Landau equations are nonlinear, second order differential equations whose solutions are usually unknown. However, for the special value $`\kappa =\frac{1}{\sqrt{2}}`$ known as the dual point , the equations for $`\psi `$ and $`\vec{A}`$ reduce to first order differential equations and the minimal free energy can be calculated exactly for an infinite plane. This relies on the identity true for two dimensional systems $`|(\vec{\nabla }-i\vec{A})\psi |^2=|𝒟\psi |^2+\vec{\nabla }\times \vec{ȷ}+B|\psi |^2`$ where $`\vec{ȷ}`$ is the current density and the operator $`𝒟`$ is defined as $`𝒟=\partial _x+i\partial _y-i(A_x+iA_y)`$. At the dual point, the expression (1) for $`ℱ`$ is rewritten using this identity as follows
$$ℱ=\int_\mathrm{\Omega }\frac{1}{2}|B-1+|\psi |^2|^2+|𝒟\psi |^2+\oint_{\partial \mathrm{\Omega }}(\vec{ȷ}+\vec{A})\cdot \vec{dl}$$
(2)
where the last integral over the boundary $`\partial \mathrm{\Omega }`$ of the system results from Stokes theorem.
For an infinite plane, we impose that the system is superconducting at large distance, i.e. $`|\psi |\to 1`$ and $`\vec{ȷ}\to 0`$ at infinity so that the boundary term in (2) coincides with the London fluxoid. It is quantized and equal to $`\oint_{\partial \mathrm{\Omega }}\vec{\nabla }\chi \cdot \vec{dl}=2\pi n`$, where $`\chi `$ is the phase of the order parameter. The integer $`n`$ is the winding number of the order parameter $`\psi `$ and as such is a topological characteristic of the system . The extremal values of $`ℱ`$ are $`ℱ=2\pi n`$, and are obtained when the bulk integral in (2) vanishes identically, giving rise to two first order differential equations. These two equations can be decoupled to give for $`|\psi |`$ a second order nonlinear equation which admits families of vortex solutions . However, for the infinite plane, there is no mechanism to select the value of $`n`$, which only plays the role of a classifying parameter.
The extension of these results to finite size systems, namely the existence and stability of vortex solutions and their behaviour as a function of the applied field received a partial numerical answer. Numerical simulations of the Ginzburg-Landau equations show the existence of stationary vortex solutions whose number depends on the applied magnetic field. Moreover, these simulations indicate that the physical picture derived for $`\kappa =\frac{1}{\sqrt{2}}`$ remains qualitatively valid for quite a large range of values of $`\kappa `$, with a small corresponding change of free energy .
We consider finite size systems at the dual point, i.e. for $`\kappa =\frac{1}{\sqrt{2}}`$. There, the edge currents screen the external magnetic field, therefore producing a magnetic moment opposite to the direction of the field, whereas vortices in the bulk of the system produce a magnetic moment along the direction of the applied field. Assuming cylindrical symmetry, the current density $`\vec{ȷ}`$ has only an azimuthal component, with opposite signs in the bulk and on the edge of the system. Thus, there exists a circle $`\mathrm{\Gamma }`$ on which $`\vec{ȷ}`$ vanishes . This allows us to separate the domain $`\mathrm{\Omega }`$ into two concentric subdomains $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$ whose boundary is the circle $`\mathrm{\Gamma }`$. Therefore one can extend to the subdomain $`\mathrm{\Omega }_1`$ the results obtained for the infinite case. The existence of vortices in a finite domain such as $`\mathrm{\Omega }_1`$ was checked numerically . It was shown that $`|\psi |`$ vanishes as a power law at the center of the disk, hence there is a (multi-)vortex in the center whose multiplicity is determined by the exponent of the power law. The magnetic flux $`\mathrm{\Phi }(\mathrm{\Omega }_1)=n`$ is quantized and the free energy in $`\mathrm{\Omega }_1`$ is $`ℱ(\mathrm{\Omega }_1)=2\pi n`$.
The contribution of $`ℱ(\mathrm{\Omega }_2)`$ to the free energy can be written, using the phase and the modulus of the order parameter $`\psi `$, as
$$\int_{\mathrm{\Omega }_2}(\vec{\nabla }|\psi |)^2+|\psi (\vec{\nabla }\chi -\vec{A})|^2+\frac{B^2+(1-|\psi |^2)^2}{2}$$
(3)
We know, from the London equation, that both the magnetic field and the vector potential decrease rapidly away from the boundary $`\partial \mathrm{\Omega }`$ of the system over a distance of order $`\lambda \sqrt{2}`$. Over the same distance, at the dual point, $`|\psi |`$ saturates to unity. One can thus estimate the integral (3) using a saddle-point method. We assume cylindrical symmetry, and we neglect the term $`(\vec{\nabla }|\psi |)^2`$ on the boundary because of the boundary conditions, so that the relation (3) is now given by an integral over the boundary of the system. To go further, we need to implement boundary conditions for the magnetic field $`B(R)`$ and the vector potential $`A(R)`$. The choice $`B(R)=B_e`$, where $`B_e`$ is the external imposed field, corresponds to the geometry of an infinitely long cylinder, where the flux lines are not distorted outside the system. A more suitable choice for a flat thin disk is provided by demanding $`\varphi =\varphi _e.`$ This boundary condition implies that the vector potential is identified by continuity to its external applied value $`\vec{A_e}`$. It should be noticed that the magnetic field $`\vec{B}`$ then has a non-monotonic variation: it is low in the bulk, larger than $`B_e`$ near the edge of the system, because of the distortion of flux lines, and eventually equal to its applied value far outside the system .
Finally, the minimization of the free energy with respect to $`|\psi |`$ gives $`1-|\psi |^2=|\vec{\nabla }\chi -\vec{A}|^2`$, such that, performing the integral over the boundary of the system, we obtain
$$\frac{1}{2\pi }ℱ(\mathrm{\Omega }_2)=\frac{\lambda \sqrt{2}}{R}(n-\varphi _e)^2-\frac{1}{2}\left(\frac{\lambda \sqrt{2}}{R}\right)^3(n-\varphi _e)^4$$
(4)
We have neglected the contribution of the $`B^2`$ term, which is smaller by a factor of the order $`(\lambda /R)^2`$.
The thermodynamic Gibbs potential $`𝒢`$ of the system is then
$$\frac{1}{2\pi }𝒢(n,\varphi _e)=n+a(n-\varphi _e)^2-\frac{a^3}{2}(n-\varphi _e)^4-a^2\varphi _e^2$$
(5)
where we have defined $`a=\frac{\lambda \sqrt{2}}{R}`$. The relation (5) consists of a set of quartic functions indexed by the integer $`n`$. The minimum of the Gibbs potential is the envelope curve defined by the equation $`\frac{\partial 𝒢}{\partial n}|_{\varphi _e}=0`$, i.e. the system chooses its winding number $`n`$ in order to minimize $`𝒢`$. This provides a relation between the number $`n`$ of vortices in the system and the applied magnetic field $`\varphi _e`$.
We consider the limit of large enough $`\frac{R}{\lambda }`$, such that the quartic term is negligible. The Gibbs potential then reduces to a set of parabolas. The vortex number $`n`$ is then given by the integer part
$$n=\left[\varphi _e-\frac{R}{2\sqrt{2}\lambda }+\frac{1}{2}\right]$$
(6)
while the magnetization $`M=-\frac{\partial 𝒢}{\partial \varphi _e}`$ is given by
$$-M=2a(\varphi _e-n)-2a^2\varphi _e$$
(7)
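The envelope construction can also be carried out numerically: for each value of $`\varphi _e`$ one simply selects the winding number $`n`$ minimizing (5). The sketch below (ours; the value of $`a`$ is illustrative) does this by brute force and compares the result with the integer-part formula (6).

```python
import numpy as np

# Envelope of the Gibbs potential (5): for each applied flux phi_e (in
# units of phi_0), pick the winding number n that minimizes G, then
# evaluate the magnetization (7).  a = sqrt(2)*lambda/R is illustrative.
a = 0.1

def gibbs(n, phi_e):
    x = n - phi_e
    return n + a * x**2 - 0.5 * a**3 * x**4 - a**2 * phi_e**2

phi_e = np.linspace(0.0, 30.0, 3000)
ns = np.arange(0, 40)
n_opt = np.array([ns[np.argmin(gibbs(ns, p))] for p in phi_e])

# In the parabolic limit this agrees with the integer-part formula (6),
# since R/(2 sqrt(2) lambda) = 1/(2a):
n_parab = np.clip(np.floor(phi_e - 1.0 / (2 * a) + 0.5), 0, None)

# Magnetization (7): a sawtooth, with a jump of 2a at each vortex entry
minus_M = 2 * a * (phi_e - n_opt) - 2 * a**2 * phi_e
```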
For $`\varphi _e`$ smaller than $`\frac{R}{2\sqrt{2}\lambda }`$, we have $`n=0`$ and $`(-M)`$ increases linearly with the external flux. This corresponds to the London regime. The field $`H_1`$ at which the first vortex enters the disk corresponds to $`𝒢(n=0)=𝒢(n=1)`$, i.e. to
$$H_1=\frac{\varphi _0}{2\pi \sqrt{2}R\lambda }+\frac{\varphi _0}{2\pi R^2}$$
(8)
The subsequent vortices enter one by one for each crossing $`𝒢(n+1)=𝒢(n)`$; this happens periodically in the applied field, with a period equal to $`\mathrm{\Delta }H=\frac{\varphi _0}{\pi R^2}`$ and a discontinuity of the magnetization $`\mathrm{\Delta }M=\frac{2\sqrt{2}\lambda }{R}.`$
There is a qualitative similarity between the results we derived using the properties of the dual point and those obtained from a linearised version of the Ginzburg-Landau functional . However, the two approaches differ in their quantitative predictions due to the importance of the nonlinear term.
Within the previous approximations, the expression (5) captures the main features observed experimentally i.e. the behaviour of the magnetization at low fields (before the first discontinuity), the periodicity and the linear behaviour between the successive jumps. From the experimental parameters namely $`R=1.2\mu m`$ and $`\lambda (T)=84nm`$ at $`T=0.4K`$, we compute from our expressions $`H_1=25G`$ and $`\mathrm{\Delta }H=4.6G.`$ These values agree with the experimental results to within a few percent. We emphasize that $`H_1`$ scales like $`\frac{1}{R}`$, whereas $`\mathrm{\Delta }H`$ scales like $`\frac{1}{R^2}`$ in accordance with the experimental data . We calculate the ratio of the magnetization jumps to the maximum value of $`M`$ to be 0.20 as compared to a measured value of 0.22. The total number of jumps scales like $`R^2`$ and the upper critical field is independent of $`R`$ in our theory in agreement with the experimental data.
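The numbers just quoted follow directly from (8), $`\mathrm{\Delta }H`$ and $`\mathrm{\Delta }M`$; a minimal check (ours, in CGS units, with $`\varphi _0=hc/2e\simeq 2.07\times 10^{-7}\mathrm{G}\mathrm{cm}^2`$):

```python
import numpy as np

phi0 = 2.0678e-7   # flux quantum hc/2e, in G cm^2
R = 1.2e-4         # disk radius, cm
lam = 84e-7        # penetration depth lambda(0.4 K), cm

H1 = phi0 / (2 * np.pi * np.sqrt(2) * R * lam) + phi0 / (2 * np.pi * R**2)
dH = phi0 / (np.pi * R**2)
jump_ratio = 2 * np.sqrt(2) * lam / R  # Delta M over max(-M), to leading order in a

print(H1, dH, jump_ratio)  # ~25 G, ~4.6 G, ~0.20
```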
At the duality point $`\kappa =1/\sqrt{2}`$, the contribution of the vortices to the free energy is topological and depends neither on the precise shape of the vortices nor on the form of their interaction. This property does not hold for other values of $`\kappa `$. For a 2d film with an infinitesimal current sheet, the vortex configuration has been computed by Pearl , using the London equation, and differs qualitatively from the present model. Indeed, away from the dual point, both the shape of the vortices and their interaction modify the free energy and the magnetization. For instance, in the London limit ($`\kappa \to \infty `$), radially symmetric solutions become unstable and different geometrical configurations of the vortices have different energies. This results from two contributions to the energy, arising from the interaction between the vortices themselves and between the vortices and the edge currents. We have performed a perturbative analysis around the dual point, and we obtained that the bulk free energy $`ℱ(\mathrm{\Omega }_1)`$ is given by
$$\frac{1}{2\pi }ℱ(\mathrm{\Omega }_1)=n\left(1+\frac{1}{2}(\kappa \sqrt{2}-1)\right)+\beta (\kappa \sqrt{2}-1)\underset{i<j}{\sum }𝒰(r_{ij}).$$
(9)
The part which is linear in $`n`$ is independent of the position of the vortices. It has been evaluated using a variational ansatz . The two-body interaction potential near the dual point is well approximated by a function $`𝒰(r_{ij})=𝒰(r)`$ where $`𝒰(0)=1`$, $`𝒰(\infty )=0`$ with $`\beta =\frac{1}{4}`$. In particular, for a configuration where all the vortices are close to the center of the disk, the bulk free energy is
$$\frac{1}{2\pi }ℱ(\mathrm{\Omega }_1)=n\left(\frac{5}{8}+\kappa \frac{3\sqrt{2}}{8}\right)+\frac{1}{8}n^2(\kappa \sqrt{2}-1)$$
(10)
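As a consistency check, (10) follows from (9) by setting $`𝒰(r_{ij})=𝒰(0)=1`$ for all $`n(n-1)/2`$ pairs and $`\beta =\frac{1}{4}`$:
$$n\left(1+\frac{1}{2}(\kappa \sqrt{2}-1)\right)+\frac{\kappa \sqrt{2}-1}{4}\frac{n(n-1)}{2}=n\left(\frac{5}{8}+\kappa \frac{3\sqrt{2}}{8}\right)+\frac{1}{8}n^2(\kappa \sqrt{2}-1).$$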
Thus, away from the dual point, the linear term in the bulk free energy is not topological anymore and is modified by the interaction. The attractive or repulsive character of the interaction between vortices depends on the sign of $`(\kappa \sqrt{2}-1)`$.
To obtain the edge contribution to the energy, we consider first the case of a single vortex placed at a distance $`x`$ (in units of $`\lambda \sqrt{2}`$) from the center of the disk. Then, the phase of the order parameter is given by $`\text{tan}\chi =\frac{\text{sin}\theta }{\text{cos}\theta -y}`$, where $`y=ax=x/R`$ (whereas one has $`\chi =n\theta `$ for the symmetrical case). Starting from the expression (3), we obtain
$$\frac{1}{2\pi }ℱ(\mathrm{\Omega }_2,y)=a(n-\varphi _e)^2-\frac{a^3}{4\kappa ^2}(n-\varphi _e)^4+f(a,y,\varphi _e)$$
(11)
where the function $`f(a,y,\varphi _e)`$ can be calculated explicitly and represents a vortex confining potential inside a finite superconductor for $`\kappa \ne 1/\sqrt{2}`$. This corresponds to the well-known Bean-Livingston confining energy barrier in a 3d superconductor which has been obtained in the extreme type II limit using the London equation. It is important to emphasize that around the dual point, vortices are not point-like and therefore the usual expression of the Bean-Livingston energy barrier does not hold.
The possible equilibrium configurations of vortices result from the competition between the bulk and edge contributions to the free energy derived above. It is either a giant vortex at the center of the disk, a situation which preserves the cylindrical symmetry, or a polygonal pattern of small vortices. In order to evaluate the energy of these configurations, we generalize the relation (11) to the case of a polygonal configuration of vortices placed at a distance $`x`$ from the center of the disk. The resulting energy is $`n`$-times the barrier contribution $`f(a,y,\varphi _e)`$ obtained in (11) provided the following substitutions are made: $`a\to na`$, $`y\to y^n`$ and $`\varphi _e\to \frac{\varphi _e}{n}`$.
In conclusion, we have investigated the question of the existence and stability of vortices in small two-dimensional bounded superconducting systems. We have shown that starting from the exact solution of the Ginzburg-Landau equations for an infinite plane and for the special value $`\kappa =1/\sqrt{2}`$, it is possible to derive an analytical expression for the free energy in a bounded system. The resulting expression provides a satisfactory quantitative description of the magnetization measured on small superconducting aluminium disks in the low magnetic field regime. For larger fields, we cannot neglect anymore the interaction effects due to the vortices and the edge currents. Perturbation theory around the value $`\kappa =1/\sqrt{2}`$, has allowed us to derive an expression for both the confining potential barrier of the vortices and the strength of the interaction between vortices. This provides a more refined description of the measured magnetization at larger fields.
Acknowledgment K.M. acknowledges support by the Lady Davies foundation and E.A. the very kind hospitality of the Laboratoire de Physique des Solides and the LPTMS at the university of Paris (Orsay). |
# How much energy do closed timelike curves in 2+1 spacetimes need?
## I Energy conditions and polarized surfaces
There exist different types of causality violations in General Relativity (GR). One of them corresponds to spacetimes such as Gödel’s universe, where there are closed timelike curves (CTC) passing through each point of spacetime. The causality violation set is not a result of the evolution of certain initial data, but rather it exists “since ever”. There is certain evidence, provided by the fact that “we are not being invaded by hordes of tourists coming from the future”, that our universe is not of this kind.
Nevertheless, GR allows for causality violations that “do not exist since ever”, but, instead, are generated through spacetime evolution. In these cases there exists a Cauchy Horizon $`ℋ`$ (we shall always refer to, say, future Cauchy Horizons; the case of past Cauchy Horizons is, of course, identical), that can be compactly generated or not. $`ℋ`$ is said to be compactly generated (CGCH) if its generators, when directed to the past, always enter a compact region and remain there forever. A spacetime with a CGCH is a possible characterization of time machines for the following two reasons:
* If an otherwise causally well behaved spacetime is changed in a compact region such that a Cauchy Horizon $`ℋ`$ appears as a result, then $`ℋ`$ is compactly generated .
* Conversely, a CGCH violates strong causality .
For obvious reasons, it is interesting to know under what conditions $`ℋ`$ can be non-empty. It can be seen that the weak energy condition (WEC) must be violated in an open spacetime with a CGCH . In this sense, the construction of this kind of time machines needs “quantum matter” (or the simultaneous creation of a singularity). There is a large amount of semiclassical work in this direction, which we do not intend to review here.
Spacetimes with non compactly generated Cauchy Horizons are allowed by classical GR (as opposed to compactly generated ones), but it is not clear under which conditions they should or should not exist.
We can make progress along these lines working in symmetric models, such that we can reduce the problem to one in $`2+1`$ gravity. Thus, in what follows we shall restrict ourselves to these low dimensional models, moreover to open ones, i.e. with non compact (and simply connected) spatial sections (typically, $`^2`$). A possible definition of energy momentum (EM) in these spacetimes is via holonomies. In this way, the total EM is timelike, spacelike or null, according to whether parallel transport of vectors around loops that enclose all the matter is defined by a rotation, a boost, or a null rotation, respectively. The following results can be obtained in $`2+1`$ :
* Under quite general conditions, a CGCH not only violates strong causality, but also stable causality, since there exists at least one closed null geodesic (this does not necessarily occur in 3+1, as emphasized in reference ).
* Even if one allows for WEC violations, under certain conditions on the relationship between positive and negative masses, a CGCH cannot exist if the total EM is timelike (except when it is a rotation of $`2\pi `$).
The aim of this paper is to give a result similar to the last one above mentioned, but for positive masses and non compactly generated horizons. Namely: the original calculations of Gott showed that in the spacetime of two particles that gravitationally scatter each other, a certain inequality involving the masses and velocities of the particles is sufficient for the existence of CTCs. That this inequality is also a necessary condition can be seen from Cutler’s analysis of the global structure of these spacetimes . This inequality, in turn, can be reexpressed as spacelike total EM , and, in summary, the spacetime of two particles does not have CTCs if the EM is not spacelike. Kabat has further analyzed systems with more particles, and conjectured that as a general property, CTCs cannot exist in the absence of spacelike EM. Menotti and Seminara have given a proof of this conjecture for systems with rotational symmetry, but, unfortunately, this assumption does not hold in solutions like Gott’s or in other ones with different numbers of particles. Headrick and Gott have also shown a result related to Kabat’s conjecture: if a CTC is deformable to infinity, then its holonomy cannot be timelike, except for a rotation of $`2\pi `$.
Below we give an argument which shows that, quite generally, this conjecture is true. Basically, the argument is the following: if a Cauchy Horizon exists, it can be obtained as a limit of polarized surfaces; on the other hand, these surfaces cannot converge if the total EM is timelike (except when it is a rotation of $`2\pi `$), leading in this way to a contradiction.
The rest of this paper is devoted to a more detailed description of this simple idea, and heavily relies on the works of Cutler and of Carroll et al , to which the reader can refer for further details. Also, some of the tools used here are of the kind used in , but in that reference the exposition is somewhat more detailed. Throughout this work we implicitly use some basic properties of curves, CTCs, and causality that can be seen in, e.g., or . Finally, a comprehensive review of CTCs in $`2+1`$ can be found in .
The notion of polarized surfaces was originally introduced by Kim and Thorne in their analysis of vacuum fluctuations and wormholes , and it is widely used in works that study the stability/instability of Cauchy Horizons under quantum test fields.
The n-th polarized surface $`\mathrm{\Sigma }(n)`$ is defined as the set of points through which passes a self-intersecting null geodesic (SNG), i.e. a null geodesic that returns to the same point of spacetime, but possibly with a different tangent vector, that circles $`n`$ times the system (this is made explicit below). Its utility as a “Cauchy Horizon finder” is a consequence of the following property:
$$\underset{n\to \infty }{lim}\mathrm{\Sigma }(n)=ℋ$$
(1)
Cutler has used this criterion to obtain the global structure of Gott’s spacetime, and it has survived a non-trivial check of self-consistency, since Cutler finds that the region where there are CTCs disappears if the total EM is not spacelike, a fact that is known for other reasons (e.g., a time function can be globally defined).
For simplicity let us start discussing the case of two particles; the generalization will be straightforward. Let us suppose that there are CTCs in this spacetime, restricted to a region delimited by a Cauchy Horizon $`ℋ`$. It is easy to see that the CTCs must circle both particles. Thus, the CTCs can be characterized by the number of times they encircle them, the winding number $`n`$. The same holds for the SNGs, and the $`n`$-th polarized surface $`\mathrm{\Sigma }(n)`$ is, thus, defined as the set of points through which passes a SNG with winding number $`n`$.
We first choose a point $`q`$ and a curve $`\gamma `$ which starts at $`q`$ and ends at some point $`p_1`$, and is completely contained in the region which contains CTCs (except for $`q`$, which is not in the region of CTCs but, rather, in its boundary) . That is, $`\gamma :[0,1]\to ℳ`$, with $`ℳ`$ the spacetime manifold, such that $`\gamma (0)=q`$ and $`\gamma (1)=p_1`$. Since $`p_1`$ is in the region where there are CTCs, there exists a CTC $`𝒞_1`$ that passes through $`p_1`$; this CTC circles, say, $`n`$ times the pair of particles. We now bring $`p_1`$ toward $`q`$ along $`\gamma `$, while smoothly deforming the entire curve $`𝒞_1`$, keeping $`n`$ fixed. At a certain point, this deformation will no longer be possible, and the curve that we were deforming will result in a SNG $`𝒢_n`$ that starts and ends at a point $`q_1\in \mathrm{\Sigma }(n)`$ (it is not clear when this procedure will converge to a closed curve, but if it does, one can see that it must converge to a SNG). We now take $`𝒞_1`$ and we move along it twice, obtaining a curve $`𝒞_2`$ that passes through $`p_2(=p_1)`$. Repeating the whole procedure, we obtain $`𝒢_{2n}`$ and a point $`q_2\in \mathrm{\Sigma }(2n)`$ that is closer to $`ℋ`$, i.e. there exists a neighborhood $`𝒪`$ of $`q`$, such that $`q_2\in 𝒪`$ but $`q_1\notin 𝒪`$. Thus, a point $`q_n\in \mathrm{\Sigma }(n)`$ will be closer (in the topological sense just mentioned) to $`ℋ`$ than another one $`q_m\in \mathrm{\Sigma }(m)`$ with $`m<n`$. Thus, the sequence $`\left\{q_n\right\}`$ converges to $`q`$, and, in this way, one expects that (1) holds.
So we find that as a necessary condition for the existence of CTCs, the polarized surfaces should converge.
In the process $`\mathrm{\Sigma }(n)\to ℋ`$, the initial tangent to the SNG $`𝒢_n`$, $`k_n^{(i)}`$, and the final one, $`k_n^{(f)}`$, must approach the tangent to the horizon, $`k`$ (which is a null vector, since $`ℋ`$ is a null hypersurface). That is, $`k_n^{(f)}\to k`$ and $`k_n^{(i)}\to k`$. Since $`𝒢_n`$ is a geodesic, its tangent is parallel transported, i.e., $`k_n^{(f)}=Ak_n^{(i)}`$, with $`A\in 𝒮𝒪(2,1)`$. The crucial point is that $`A=ℛ^n`$, with $`ℛ`$ the holonomic operator that defines the total EM. Now, $`k`$ must be a fixed null direction, i.e. a null eigenvector of $`ℛ`$. So $`ℛ`$ must have at least one null eigenvector. It is easy to see that if $`ℛ`$ is spacelike or null, it has two and one null eigenvectors, respectively; and if $`ℛ`$ is timelike it has no null eigenvector, except when it is the identity (which must correspond to a rotation of $`2\pi `$, because for a rotation of angle zero the spacetime would be the well behaved vacuum flat metric). Thus, if the total EM is timelike and it is not the identity, there cannot be any fixed null directions and we have reached a contradiction and arrived at our main result.
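The eigenvector counting used above is elementary linear algebra and can be checked directly. The following sketch (ours, purely illustrative) counts the real null eigenvectors of representative timelike and spacelike holonomies; the parabolic (null) case, with its single degenerate fixed null direction, is numerically delicate and is only noted in a comment.

```python
import numpy as np

# Minkowski metric on 2+1 vectors (t, x, y)
eta = np.diag([-1.0, 1.0, 1.0])

def real_null_eigenvectors(A, tol=1e-9):
    """Real eigenvectors of A that are null with respect to eta."""
    vals, vecs = np.linalg.eig(A)
    out = []
    for lam, v in zip(vals, vecs.T):
        if abs(lam.imag) < tol and np.max(np.abs(v.imag)) < tol:
            v = v.real
            if abs(v @ eta @ v) < tol:
                out.append(v)
    return out

th = 0.7  # rotation angle (timelike holonomy, not a multiple of 2*pi)
xi = 0.7  # boost rapidity (spacelike holonomy)
rotation = np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(th), -np.sin(th)],
                     [0.0, np.sin(th), np.cos(th)]])
boost = np.array([[np.cosh(xi), np.sinh(xi), 0.0],
                  [np.sinh(xi), np.cosh(xi), 0.0],
                  [0.0, 0.0, 1.0]])

print(len(real_null_eigenvectors(rotation)))  # 0: no fixed null direction
print(len(real_null_eigenvectors(boost)))     # 2: two fixed null directions
# A null rotation (parabolic element) has exactly one fixed null direction,
# but it is defective, so a naive eigendecomposition is numerically fragile.
```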
For more general situations, e.g., if there are an arbitrary number of particles, one must first recall that every subsystem has timelike EM if the total EM is timelike . With this property in hand, one can then repeat the whole construction and show that the polarized surfaces cannot converge if the total EM is timelike.
The property that the evolution of data with timelike total EM is free of singularities and/or Cauchy horizons seems to be a general feature that is not even restricted to particle-like solutions, but that, instead, also holds for fields coupled to gravity (the simplest case of this statement being Einstein-Rosen waves, or a massless scalar field coupled to gravity, if seen as a $`2+1`$ system). A rigorous proof for vacuum and electrovacuum with a $`G_2`$ group of symmetries is contained in the work of Berger et al (the condition of timelike total EM is not made explicit in , but it follows from the boundary conditions there imposed). It can be seen that if one has a universe with timelike total EM, one needs to add some matter in order to make the total EM spacelike , and thus, this “quite general” property that CTCs need spacelike total EM gives a precise notion of how much energy is needed for causality violation. A similar result in $`3+1`$ would, of course, be of the greatest interest.
## II Some final comments
The argument that we gave as supporting the property of the polarized surfaces as “finders” of Cauchy Horizons is essentially the original one of Kim and Thorne. Though it is widely used and it is usually expected to hold under very general conditions, to our knowledge there is no rigorous proof of it. Some parts of the analysis of the previous section are implicit in Cutler’s work, so we now make contact with it. Cutler takes a point $`p`$ through which a SNG passes and chooses two charts of inertial-like coordinates (one chart for each particle). He then explicitly calculates the map $`k^i\to k^f`$ as a nonlinear map $`g(\varphi )`$ from the circle of null directions at $`p`$ to itself, and uses the fact that $`g`$ has two fixed points to obtain the tangent to the horizon (one fixed point corresponds to the tangent to the future horizon, and the other one to the past horizon) and reconstruct it using some symmetries of the spacetime. That is, his map $`g`$ corresponds, essentially, to our map $`ℛ`$. We have here taken advantage of the fact that $`ℛ`$ is linear, to see under which conditions there are fixed null directions (the fixed points of $`g`$ correspond to the null eigenvectors of $`ℛ`$); and we have noted that $`ℛ`$ defines the total EM, so that the two fixed points that Cutler finds do not depend on the details of the geometry of Gott’s spacetime, but rather on the property that its total EM is spacelike.
## Acknowledgements
The author thanks Sean Carroll for kindly reading the manuscript and for his valuable comments. He also acknowledges A. Ni, C. Valmont; and CONICET for financial support. This work was supported in part by grants from the National University of Córdoba, and from CONICOR, and CONICET (Argentina). |
hep-th/0001206 SLAC-PUB-8337 SU-ITP-00/02 IASSNS-HEP-00/05
# Self-tuning flat domain walls in 5d gravity and string theory
Shamit Kachru, Michael Schulz and Eva Silverstein
Department of Physics and SLAC
Stanford University
Stanford, CA 94305/94309
We present Poincare invariant domain wall (“3-brane”) solutions to some 5-dimensional effective theories which can arise naturally in string theory. In particular, we find theories where Poincare invariant solutions exist for arbitrary values of the brane tension, for certain restricted forms of the bulk interactions. We describe examples in string theory where it would be natural for the quantum corrections to the tension of the brane (arising from quantum fluctuations of modes with support on the brane) to maintain the required form of the action. In such cases, the Poincare invariant solutions persist in the presence of these quantum corrections to the brane tension, so that no 4d cosmological constant is generated by these modes.
January 2000
1. Introduction
Some time ago, it was suggested that the cosmological constant problem may become soluble in models where our world is a topological defect in some higher dimensional spacetime . Recently such models have come under renewed investigation. This has been motivated both by brane world scenarios (see for instance \[2,3,4\]) and by the suggestion of Randall and Sundrum that the four-dimensional graviton might be a bound state of a 5d graviton to a 4d domain wall. At the same time, new ideas relating 4d renormalization group flows to 5d AdS gravity via the AdS/CFT correspondence have inspired related approaches to explaining the near-vanishing of the 4d cosmological term \[7,8\]. These authors suggested (following ) that quantum corrections to the 4d cosmological constant could be cancelled by variations of fields in a five-dimensional bulk gravity solution. The results of this paper might be regarded as a concrete partial realization of this scenario, in the context of 5d dilaton gravity and string theory. A different AdS/CFT motivated approach to this problem appeared in .
In the thin wall approximation, we can represent a domain wall in 5d gravity by a delta function source with some coefficient $`f(\varphi )`$ (where $`\varphi `$ is a bulk scalar field, the dilaton), parametrizing the tension of the wall. Quantum fluctuations of the fields with support on the brane should correct $`f(\varphi )`$. In this paper, we present a concrete example of a 5d dilaton gravity theory where one can find Poincare invariant domain wall solutions for $`\mathrm{𝚐𝚎𝚗𝚎𝚛𝚒𝚌}`$ $`f(\varphi )`$. The constraint of finding a finite 4d Planck scale then restricts the sign of $`f`$ and the value of $`\frac{f^{}}{f}`$ at the wall to lie in a range of order one. Thus fine-tuning is not required in order to avoid having the quantum fluctuations which correct $`f(\varphi )`$ generate a 4d cosmological constant. One of the requirements we must impose is that the 5d cosmological constant $`\mathrm{\Lambda }`$ should vanish.¹ (¹ It is possible that an Einstein frame bulk cosmological term which is independent of $`\varphi `$ will also allow for similar physics . This would be natural in scenarios where the bulk is supersymmetric (though the brane need not be), or where quantum corrections to the bulk are small enough to neglect in a controlled expansion.)
For suitable choices of $`f(\varphi )`$, this example exhibits the precise dilaton couplings which naturally arise in string theory. There are two interesting and distinct contexts in which this happens. One is to consider $`f(\varphi )`$ corresponding to tree-level dilaton coupling ($`Ve^{-2\varphi }`$ in string frame, for some constant $`V`$). This form of the dilaton coupling is not restricted to tree-level $`\mathrm{𝚙𝚎𝚛𝚝𝚞𝚛𝚋𝚊𝚝𝚒𝚟𝚎}`$ string theory – it occurs for example on the worldvolumes of $`NS`$ branes in string theory. There, the dynamics of the worldvolume degrees of freedom does not depend on the dilaton – the relevant coupling constant is dilaton independent. Therefore, quantum corrections to the brane tension due to dynamics of worldvolume fields would be expected to maintain the “tree-level” form of $`f(\varphi )`$, while simply shifting the coefficient $`V`$ of the (string frame) $`e^{-2\varphi }`$. The other form of $`f(\varphi )`$ natural in string theory involves a power series in $`e^\varphi `$. This type of coupling occurs when quantum corrections are controlled by the dilaton in string theory.
In either case, as long as we only consider quantum corrections which modify $`f(\varphi )`$ but maintain the required form of the bulk 5d gravity action, this means that quantum corrections to the brane tension do not destabilize flat space; they do not generate a four-dimensional cosmological constant. We will argue that some of our examples should have a microscopic realization in string theory with this feature, at leading order in a controllable approximation scheme. It is perhaps appropriate to call this “self-tuning” of the cosmological constant because the 5d gravity theory and its matter fields respond in just the right way to shifts in the tension of the brane to maintain 4d Poincare invariance. Note that here, as in , there is a distinction between the brane tension and the 4d cosmological constant.
There are two aspects of the solutions we find which are not under satisfactory control. Firstly, the curvature in the brane solutions of interest has singularities at finite distance from the wall; the proper interpretation of these singularities will likely be crucial to understanding the mechanism of self-tuning from a four-dimensional perspective. We cut off the space at these singularities. The wavefunctions for the four-dimensional gravitons in our solutions vanish there. Secondly, the value of the dilaton $`\varphi `$ diverges at some of the singularities; this implies that the theory is becoming strongly coupled there. However, the curvature and coupling can be kept arbitrarily weak at the core of the wall. Therefore, some aspects of the solutions are under control and we think the self-tuning mechanism can be concretely studied. We present some preliminary ideas about the microscopic nature of the singularities in §3.
A problem common to the system studied here and that of is the possibility of instabilities, hidden in the thin wall sources, that are missed by the effective field theory analysis. Studying thick wall analogues of our solutions would probably shed light on this issue. We do not resolve this question here. But taking advantage of the stringy dilaton couplings possible in our set of self-tuned models, we present a plausibility argument for the existence of stringy realizations, a subject whose details we leave for future work .
Another issue involves solutions where the wall is not Poincare invariant. This could mean it is curved (for example, de Sitter or Anti de Sitter). However it could also mean that there is a nontrivial dilaton profile along the wall (one example being the linear dilaton solution in string theory, which arises when the tree-level cosmological constant is nonvanishing). This latter possibility is a priori as likely as others, given the presence of the massless dilaton in our solutions.
Our purpose in this paper is to argue that starting with a Poincare invariant wall, one can find systems where quantum corrections leave a Poincare invariant wall as a solution. However one could also imagine starting with non Poincare invariant wall solutions of the same 5d equations (and preliminary analysis suggests that such solutions do exist in the generic case, with finite 4d Planck scale). We are in the process of systematically analyzing the fine tuning of initial conditions that considering a classically Poincare invariant wall might entail .
The paper is organized as follows. In §2, we write down the 5d gravity + dilaton theories that we will be investigating. We solve the equations of motion to find Poincare invariant domain walls, both in the cases where the 5d Lagrangian has couplings which provide the self-tuning discussed above, and in more general cases. In §3, we describe several possible embeddings of our results into a more microscopic string theory context. We close with a discussion of promising directions for future thought in §4.
There have been many interesting recent papers which study domain walls in 5d dilaton gravity theories. We particularly found and useful, and further references may be found there.
This research was inspired by very interesting discussions with O. Aharony and T. Banks. While our work on Poincare invariant domain walls and self-tuning was in progress, we learned that very similar work was in progress by Arkani-Hamed, Dimopoulos, Kaloper and Sundrum . In particular, before we had obtained the solutions in §2.3 and §2.4, R. Sundrum told us that they were finding singular solutions to the equations and were hoping the singularities would “explain” a breakdown of 4d effective field theory on the domain wall.
2. Poincare-invariant 4d Domain Wall Solutions
2.1. Basic Setup and Summary of Results
Let us consider the action
$$\begin{array}{cc}\hfill S=& \int d^5x\sqrt{G}\left[R-\frac{4}{3}(\nabla \varphi )^2-\mathrm{\Lambda }e^{a\varphi }\right]\hfill \\ & +\int d^4x\sqrt{g}(-f(\varphi ))\hfill \end{array}$$
describing a scalar field $`\varphi `$ and gravity living in five dimensions coupled to a thin four-dimensional domain wall. Let us set the position of the domain wall at $`x_5=0`$. Here we follow the notation of so that the metric $`g_{\mu \nu }`$ along the four-dimensional slice at $`x_5=0`$ is given in terms of the five-dimensional metric $`G_{MN}`$ by
$$\begin{array}{cc}& g_{\mu \nu }=\delta _\mu ^M\delta _\nu ^NG_{MN}(x_5=0)\hfill \\ & \mu ,\nu =1,\dots ,4\hfill \\ & M,N=1,\dots ,5\hfill \end{array}$$
For concreteness, in much of our discussion we will make the choice
$$f(\varphi )=Ve^{b\varphi }$$
However, most of our considerations will $`\mathrm{𝚗𝚘𝚝}`$ depend on this detailed choice of $`f(\varphi )`$ (for reasons that will become clear). With this choice, (2.1) describes a family of theories parameterized by $`V`$, $`\mathrm{\Lambda }`$, $`a`$, and $`b`$. If $`a=2b=4/3`$, the action (2.1) agrees with tree-level string theory where $`\varphi `$ is identified with the dilaton. (That is, the 5d cosmological constant term and the 4d domain wall tension term both scale like $`e^{-2\varphi }`$ in string frame.) In §3 we will discuss a very natural context in which this type of action arises in string theory, either with the specific form (2.1) or with more general $`f(\varphi )`$.
In the rest of this section we will derive the field equations arising from this action and construct some interesting solutions of these equations. In particular, we will be interested in whether there are Poincare-invariant solutions for the metric of the four-dimensional slice at $`x_5=0`$ for generic values of these parameters (or more generally, for what subspaces of this parameter space there are Poincare-invariant solutions in four dimensions). We will also require that the geometry is such that the four-dimensional Planck scale is finite. Our main results can be summarized in three different cases as follows:
(I) For $`\mathrm{\Lambda }=0`$, $`b\ne \pm \frac{4}{3}`$ but otherwise arbitrary, and arbitrary magnitude of $`V`$ we find a Poincare-invariant domain wall solution of the equations of motion. For $`b=2/3`$, which is the value corresponding to a brane tension of order $`e^{-2\varphi }`$ in string frame, the sign of $`V`$ must be positive in order to correspond to a solution with a finite four-dimensional Planck scale, but it is otherwise unconstrained. This suggests that for fixed scalar field coupling to the domain wall, quantum corrections to its tension $`V`$ do not spoil Poincare invariance of the slice. In §3 we will review examples in string theory of situations where worldvolume degrees of freedom contribute quantum corrections to the $`e^{-2\varphi }`$ term in a brane’s tension. Our result implies that these quantum corrections do not need to be fine-tuned to zero to obtain a flat four-dimensional spacetime.
For a generic choice of $`f(\varphi )`$ in (2.1) (including the type of power series expansion in $`e^\varphi `$ that would arise in perturbative string theory), the same basic results hold true: We are able to find Poincare invariant solutions without fine-tuning $`f`$. Insisting on a finite 4d Planck scale gives a further constraint on $`f^{}/f`$ at the wall, forcing it to lie in a range of order one.
Given a solution with one value of $`V`$ and $`\mathrm{\Lambda }=0`$, a self-tuning mechanism is in fact clear from the Lagrangian (for $`b\ne 0`$). In (2.1) we see that if $`\mathrm{\Lambda }=0`$ (or $`a=0`$), the only non-derivative coupling of the dilaton is to the brane tension term, where it appears in the combination $`(-V)e^{b\varphi }`$. Clearly given a solution for one value of $`V`$, there will be a solution for any value of $`V`$ obtained by absorbing shifts in $`V`$ with shifts in $`\varphi `$. With more general $`f(\varphi )`$, similar remarks hold: the dilaton zero mode appears only in $`f`$, and one can absorb shifts in $`V`$ by shifting $`\varphi `$.
However, in the special case $`b=0`$ (where $`f(\varphi )`$ is just a constant), we will also find flat solutions for generic $`V`$. This implies that the freedom to vary the dilaton zero mode is not the only mechanism that ensures the existence of a flat solution for arbitrary $`V`$.
(II) For $`\mathrm{\Lambda }=0`$, $`b=\pm 4/3`$, we find a different Poincare-invariant solution (obtained by matching together two 5d bulk solutions in a different combination than that used in obtaining the solutions described in the preceding paragraph (I)). A solution is present for any value of $`V`$. This suggests that for fixed scalar field coupling to the domain wall, quantum corrections to its tension $`V`$ do not spoil Poincare-invariance of the slice. Again the sign of $`V`$ must be positive in order to have a finite four-dimensional Planck scale.
(III) We do not find a solution (nor do we show that none exists) for general $`\mathrm{\Lambda }`$, $`V`$, $`a`$, and $`b`$ (in concordance with the counting of parameters in ). However, for each $`\mathrm{\Lambda }`$ and $`V`$ there is a choice of $`a`$ and $`b`$ for which we do find a Poincare invariant solution using a simple ansatz.
For $`a=0`$, and general $`b`$, $`\mathrm{\Lambda }`$, and $`V`$ we are currently investigating the existence of self-tuning solutions. Their existence would be in accord with the fact that in this case, as in the cases with $`\mathrm{\Lambda }=0`$, the dilaton zero mode only appears in the tension of the wall. This means again that shifts in $`V`$ can be absorbed by shifting $`\varphi `$, so if one finds a Poincare invariant solution for any $`V`$, one does not need to fine-tune $`V`$ to solve the equations.
2.2. Equations of Motion
The equations of motion arising for the theory (2.1), with our simple choice for $`f(\varphi )`$ given in (2.1), are as follows. Varying with respect to the dilaton gives:
$$\sqrt{G}\left(\frac{8}{3}\nabla ^2\varphi -a\mathrm{\Lambda }e^{a\varphi }\right)-bV\delta (x_5)e^{b\varphi }\sqrt{g}=0$$
The Einstein equations for this theory are:
$$\begin{array}{cc}& \sqrt{G}\left(R_{MN}-\frac{1}{2}G_{MN}R\right)\hfill \\ & -\frac{4}{3}\sqrt{G}\left[\partial _M\varphi \partial _N\varphi -\frac{1}{2}G_{MN}(\partial \varphi )^2\right]\hfill \\ & +\frac{1}{2}\left[\mathrm{\Lambda }e^{a\varphi }\sqrt{G}G_{MN}+\sqrt{g}Ve^{b\varphi }g_{\mu \nu }\delta _M^\mu \delta _N^\nu \delta (x_5)\right]=0\hfill \end{array}$$
We are interested in whether there are solutions with Poincare-invariant four-dimensional physics. Therefore we look for solutions of (2.1) and (2.1) where the metric takes the form
$$ds^2=e^{2A(x_5)}(dx_1^2+dx_2^2+dx_3^2+dx_4^2)+dx_5^2$$
With this ansatz for the metric, the equations become
$$\frac{8}{3}\varphi ^{\prime \prime }+\frac{32}{3}A^{\prime }\varphi ^{\prime }-a\mathrm{\Lambda }e^{a\varphi }-bV\delta (x_5)e^{b\varphi }=0$$
$$6(A^{\prime })^2-\frac{2}{3}(\varphi ^{\prime })^2+\frac{1}{2}\mathrm{\Lambda }e^{a\varphi }=0$$
$$3A^{\prime \prime }+\frac{4}{3}(\varphi ^{\prime })^2+\frac{1}{2}e^{b\varphi }V\delta (x_5)=0$$
where a prime denotes differentiation with respect to $`x_5`$. The first one (2.1) is the dilaton equation of motion, the second (2.1) is the 55 component of Einstein’s equations, and the last (2.1) comes from a linear combination (the difference) of the $`\mu \nu `$ component of Einstein’s equation and the 55 component.
We will mostly consider the simple ansatz
$$A^{\prime }=\alpha \varphi ^{\prime }.$$
However for the case $`a=0`$, $`\mathrm{\Lambda }\ne 0`$ we will integrate the equations directly.
2.3. $`\mathrm{\Lambda }=0`$ Case
Let us first consider the system with $`\mathrm{\Lambda }=0`$. We will first study the bulk equations of motion (i.e. the equations of motion away from $`x_5=0`$) where the $`\delta `$-function terms in (2.1) and (2.1) do not come in. Note that because the delta function terms do not enter, the bulk equations are independent of our choice of $`f(\varphi )`$ in (2.1). We will then consider the conditions required to match two bulk solutions on either side of the domain wall of tension $`Ve^{b\varphi }`$ at $`x_5=0`$. We will find two qualitatively different ways to do this, corresponding to results (I) and (II) quoted above. We will also find that for fairly generic $`f(\varphi )`$, the same conclusions hold.
Bulk Equations: $`\mathrm{\Lambda }=0`$
Plugging the ansatz (2.1) into (2.1) (with $`\mathrm{\Lambda }=0`$) we find that
$$6\alpha ^2(\varphi ^{\prime })^2=\frac{2}{3}(\varphi ^{\prime })^2$$
which is solved if we take
$$\alpha =\pm \frac{1}{3}$$
Plugging this ansatz into (2.1) we obtain
$$\frac{8}{3}(\varphi ^{\prime \prime }+4(\pm \frac{1}{3})(\varphi ^{\prime })^2)=0$$
Plugging it into (2.1) we obtain
$$3(\pm \frac{1}{3})\varphi ^{\prime \prime }+\frac{4}{3}(\varphi ^{\prime })^2=0$$
With either choice of sign for $`\alpha `$, these two equations become identical in bulk. For $`\alpha =\pm \frac{1}{3}`$, we must solve
$$\varphi ^{\prime \prime }\pm \frac{4}{3}(\varphi ^{\prime })^2=0$$
in bulk. This is solved by
$$\varphi =\pm \frac{3}{4}\mathrm{log}|\frac{4}{3}x_5+c|+d$$
where $`c`$ and $`d`$ are arbitrary integration constants.
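Because dropped signs are easy to reintroduce at this step, here is a quick symbolic check (a sketch of ours, not part of the paper; sympy and the positivity assumptions on $`c`$ and the argument are ours) that this profile solves the bulk equation for both sign choices:

```python
# Verify phi = +-(3/4) log(4 x5/3 + c) + d solves phi'' +- (4/3) phi'^2 = 0.
import sympy as sp

x5, c, d = sp.symbols('x5 c d', positive=True)
for sign in (+1, -1):
    phi = sign * sp.Rational(3, 4) * sp.log(sp.Rational(4, 3) * x5 + c) + d
    bulk_eq = sp.diff(phi, x5, 2) + sign * sp.Rational(4, 3) * sp.diff(phi, x5)**2
    print(sign, sp.simplify(bulk_eq))   # both print 0
```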
Note that there is a singularity in this solution at
$$x_5=-\frac{3}{4}c$$
Our solutions will involve regions of spacetime to one side of this singularity; we will assume that it can be taken to effectively cut off the space. At present we do not have much quantitative to say about the physical implications of this singularity. The results we derive here (summarized above) strongly motivate further exploring the effects of these singularities on the four-dimensional physics of our domain wall solutions.
At $`x_5=0`$ there is localized energy density leading to the $`\delta `$-function terms in (2.1) and (2.1). We can solve these equations by introducing appropriate discontinuities in $`\varphi ^{}`$ at the wall (while insisting that $`\varphi `$ itself is continuous). We will now do this for two illustrative cases (the first being the most physically interesting).
Solution (I):
Let us take the bulk solution with $`\alpha =+\frac{1}{3}`$ for $`x_5<0`$, and the one with $`\alpha =-\frac{1}{3}`$ for $`x_5>0`$. So we have
$$\varphi (x_5)=\varphi _1(x_5)=\frac{3}{4}\mathrm{log}|\frac{4}{3}x_5+c_1|+d_1,x_5<0$$
$$\varphi (x_5)=\varphi _2(x_5)=-\frac{3}{4}\mathrm{log}|\frac{4}{3}x_5+c_2|+d_2,x_5>0$$
where we have allowed for the possibility that the (so far) arbitrary integration constants can be different on the two sides of the domain wall.
Imposing continuity of $`\varphi `$ at $`x_5=0`$ leads to the condition
$$\frac{3}{4}\mathrm{log}|c_1|+d_1=-\frac{3}{4}\mathrm{log}|c_2|+d_2$$
This equation determines the integration constant $`d_2`$ in terms of the others.
To solve (2.1) we then require
$$\frac{8}{3}(\varphi _2^{\prime }(0)-\varphi _1^{\prime }(0))=bVe^{b\varphi (0)}$$
while to solve (2.1) we need
$$3\left(\alpha _2\varphi _2^{\prime }(0)-\alpha _1\varphi _1^{\prime }(0)\right)=-\frac{1}{2}Ve^{b\varphi (0)}$$
(where $`\alpha _1=+\frac{1}{3}`$ and $`\alpha _2=-\frac{1}{3}`$). These two matching conditions become
$$-\frac{8}{3}\left(\frac{1}{c_1}+\frac{1}{c_2}\right)=bVe^{bd_1}|c_1|^{\frac{3}{4}b}$$
and
$$\frac{1}{c_2}-\frac{1}{c_1}=-\frac{1}{2}Ve^{bd_1}|c_1|^{\frac{3}{4}b}$$
Solving for the integration constants $`c_1`$ and $`c_2`$ we find
$$\frac{2}{c_2}=-\left[\frac{3b}{8}+\frac{1}{2}\right]Ve^{bd_1}|c_1|^{\frac{3}{4}b}$$
$$\frac{2}{c_1}=\left[-\frac{3b}{8}+\frac{1}{2}\right]Ve^{bd_1}|c_1|^{\frac{3}{4}b}$$
Note that as long as $`b\ne \pm \frac{4}{3}`$, we here find a solution for the integration constants $`c_1`$ and $`c_2`$ in terms of the parameters $`b`$ and $`V`$ which appear in the Lagrangian and the integration constant $`d_1`$. (As discussed above, the integration constant $`d_2`$ is then also determined.) We will momentarily find a disjoint set of $`\mathrm{\Lambda }=0`$ domain wall solutions for which $`b`$ will be forced to be $`\pm 4/3`$, so altogether there are solutions for any $`b`$. In particular, for scalar coupling given by $`b`$, there is a Poincare-invariant four-dimensional domain wall for any value of the brane tension $`V`$; $`V`$ does not need to be fine-tuned to find a solution. As is clear from the form of the 4d interaction in (2.1), one way to understand this is that the scalar field $`\varphi `$ can absorb a shift in $`V`$ since the only place that the $`\varphi `$ zero mode appears in the Lagrangian is multiplying $`V`$. However since we can use these equations to solve for $`c_{1,2}`$ without fixing $`d_1`$, a more general story is at work; in particular, even for $`b=0`$ we find solutions for arbitrary $`V`$.
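As a concrete illustration (ours, not from the paper) of this counting, one can solve the two matching conditions numerically for the integration constants at fixed Lagrangian data; all numerical values below are arbitrary test inputs, and the fixed-point iteration converges for them:

```python
# Solve 2/c1 = (1/2 - 3b/8) K and 2/c2 = -(3b/8 + 1/2) K,
# with K = V exp(b d1) |c1|^(3b/4) depending implicitly on c1.
import numpy as np

def integration_constants(b, V, d1):
    if abs(abs(b) - 4.0 / 3.0) < 1e-12:
        raise ValueError("b = +-4/3 is excluded for solution (I)")
    c1 = 1.0
    for _ in range(200):                      # fixed-point iteration on c1
        K = V * np.exp(b * d1) * abs(c1) ** (0.75 * b)
        c1 = 2.0 / ((0.5 - 3.0 * b / 8.0) * K)
    c2 = 2.0 / (-(3.0 * b / 8.0 + 0.5) * K)
    return c1, c2

c1, c2 = integration_constants(b=2.0 / 3.0, V=1.0, d1=0.0)
print(c1, c2)   # c1 > 0 and c2 < 0: singularities on both sides of the wall
```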
A constraint on the sign of $`V`$ arises, as we will now discuss, from the requirement that there be singularities (2.1) in the bulk solutions, effectively cutting off the $`x_5`$ direction at finite volume.
More General $`f(\varphi )`$
If instead of (2.1) we include a more general choice of $`f`$ in the action (2.1), the considerations above go through unaltered. The choice of $`f`$ only enters in the matching conditions (2.1) and (2.1) at the domain wall. The modified equations become
$$\frac{8}{3}(\varphi _2^{\prime }(0)-\varphi _1^{\prime }(0))=\frac{\partial f}{\partial \varphi }(\varphi (0))$$
$$3\left(\alpha _2\varphi _2^{\prime }(0)-\alpha _1\varphi _1^{\prime }(0)\right)=-\frac{1}{2}f(\varphi (0))$$
In terms of the integration constants, these become:
$$-\frac{8}{3}\left(\frac{1}{c_1}+\frac{1}{c_2}\right)=\frac{\partial f}{\partial \varphi }\left(\frac{3}{4}\mathrm{log}|c_1|+d_1\right)$$
$$\frac{1}{c_2}-\frac{1}{c_1}=-\frac{1}{2}f\left(\frac{3}{4}\mathrm{log}|c_1|+d_1\right)$$
Clearly for generic $`f(\varphi )`$, one can solve these equations.
Obtaining a Finite 4d Planck Scale
Consider the solution (2.1) on the $`x_5<0`$ side. If $`c_1<0`$, then there is never a singularity. Let us consider the four-dimensional Planck scale. It is proportional to the integral
$$𝑑x_5e^{2A(x_5)}$$
In the $`x_5<0`$ region, this goes like
$$𝑑x_5\sqrt{|\frac{4}{3}x_5+c_1|}$$
If $`c_1<0`$, then there is no singularity, and this integral is evaluated from $`x_5=-\infty `$ to $`x_5=0`$. It diverges. If $`c_1>0`$, then there is a singularity at (2.1). Cutting off the volume integral (2.1) there gives a finite result. Note that the ansatz (2.1) leaves an undetermined integration constant in $`A`$, so one can tune the actual value of the 4d Planck scale by shifting this constant.
In order to have a finite 4d Planck scale, we therefore impose that $`c_1>0`$. This requires $`V(\frac{1}{2}-\frac{3b}{8})>0`$. For the value $`b=2/3`$, natural in string theory (as we will discuss in §3), this requires $`V>0`$. With this constraint, there is similarly a singularity on the $`x_5>0`$ side which cuts off the volume on that side.
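A quick numerical cross-check (ours) of this finiteness statement, using the $`x_5<0`$ profile and an arbitrary $`c_1>0`$:

```python
# Integrate e^{2A} ~ sqrt|4 x5/3 + c1| from the singularity up to the wall.
from scipy.integrate import quad

c1 = 4.0
val, err = quad(lambda x5: abs(4.0 * x5 / 3.0 + c1) ** 0.5, -0.75 * c1, 0.0)
print(val)   # finite; analytically (1/2) c1^{3/2} = 4 for this c1
```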
These conditions extend easily to conditions on $`f(\varphi )`$ in the more general case. We find
$$\begin{array}{cc}& \frac{3}{8}\frac{\partial f}{\partial \varphi }(\varphi (0))-\frac{1}{2}f(\varphi (0))<0\hfill \\ & \frac{3}{8}\frac{\partial f}{\partial \varphi }(\varphi (0))+\frac{1}{2}f(\varphi (0))>0\hfill \end{array}$$
This means that $`f(\varphi )`$ must be positive at the wall (corresponding to a positive tension brane), and that
$$-\frac{4}{3}<\frac{f^{\prime }}{f}<\frac{4}{3}$$
So although $`f`$ does not need to be fine-tuned to achieve a solution of the sort we require, it needs to be such that $`f^{\prime }/f`$ is in the range (2.1).
Let us discuss some of the physics at the singularity. Following [5,11], we can compute the $`x_5`$-dependence of the four-dimensional graviton wavefunction. Expanding the metric about our solution (taking $`g_{\mu \nu }=e^{2A}\eta _{\mu \nu }+h_{\mu \nu }`$), we find
$$h_{\mu \nu }\sim \sqrt{|\frac{4}{3}x_5+c|}$$
At a singularity, where $`|\frac{4}{3}x_5+c|`$ vanishes, this wavefunction also vanishes. Without understanding the physics of the singularity, we cannot determine yet whether it significantly affects the interactions of the four-dimensional modes.
It is also of interest to consider the behavior of the scalar $`\varphi `$ at the singularities. In string theory this determines the string coupling. In our solution (I), we see that
$$\begin{array}{cc}& x_5\to -\frac{3}{4}c_1:\varphi \to -\infty \hfill \\ & x_5\to -\frac{3}{4}c_2:\varphi \to +\infty \hfill \end{array}$$
So in string theory, the curvature singularity on the $`x_5<0`$ side is weakly coupled, while that on the $`x_5>0`$ side is strongly coupled. It may be possible to realize these geometries in a context where supersymmetry is broken by the brane, so that the bulk is supersymmetric. In such a case the stability of the high curvature and/or strong-coupling regions may be easier to ensure. In any case we believe that the results of this section motivate further analysis of these singular regions, which we leave for future work.
Putting everything together, we have found the solution described in case (I) above. It should be clear that since $`f(\varphi )`$ only appears in (2.1) multiplying the delta function “thin wall” source term, we can always use the choice (2.1) in writing matching conditions at the wall for concreteness. To understand what would happen with a more general $`f`$, one simply replaces $`Ve^{b\varphi (0)}`$ with $`f(\varphi (0))`$ and $`bVe^{b\varphi (0)}`$ with $`\frac{\partial f}{\partial \varphi }(\varphi (0))`$ in the matching equations. We will not explicitly say this in each case, but it makes the generalization to arbitrary $`f`$ immediate.
Solution (II):
A second type of solution with $`\mathrm{\Lambda }=0`$ is obtained by taking $`\alpha `$ to have the same sign on both sides of the domain wall. So we have
$$\varphi (x_5)=\varphi _1(x_5)=\pm \frac{3}{4}\mathrm{log}|\frac{4}{3}x_5+c_1|+d_1,x_5<0$$
$$\varphi (x_5)=\varphi _2(x_5)=\pm \frac{3}{4}\mathrm{log}|\frac{4}{3}x_5+c_2|+d_2,x_5>0$$
The matching conditions then require $`b=\mp \frac{4}{3}`$ (correlated with the sign choice in (2.1) and (2.1)) for consistency of (2.1) and (2.1) (in the case with more generic $`f(\varphi )`$, this generalizes to the condition $`\frac{\partial f}{\partial \varphi }(\varphi (0))=\mp \frac{4}{3}f(\varphi (0))`$). This is not a value of $`b`$ that appears from a dilaton coupling in perturbative string theory. It is still interesting, however, as a gravitational low-energy effective field theory where $`V`$ does not have to be fine-tuned in order to preserve four-dimensional Poincare invariance. We find a solution to the matching conditions with
$$\begin{array}{cc}& c_1=c,x_5<0\hfill \\ & c_2=-c,x_5>0\hfill \\ & d_1=d_2=d\hfill \\ & e^{\mp \frac{4}{3}d}=\frac{4}{V}\frac{c}{|c|}\hfill \end{array}$$
for some arbitrary constant $`c`$, and any $`V`$. This gives the results summarized in case (II) above. The values $`b=\pm 4/3`$ required here were excluded from the solutions (I) derived in the last section.
As long as we choose $`c`$ such that there are singularities on both sides of the domain wall, we again get finite 4d Planck scale. As we can see from (2.1) and (2.1), having singularities on either side of the origin requires $`c`$ to be positive. Then we see from (2.1) that we can find a solution for arbitrary positive brane tension $`V`$.
Let us discuss the physics of the singularities in this case. As in solutions (I), the graviton wavefunction decays to zero at the singularity like $`(x-x_{sing})^{\frac{1}{2}}`$. For $`b=4/3`$, $`\varphi \to +\infty `$ at the singularities on both sides, while for $`b=-\frac{4}{3}`$, $`\varphi \to -\infty `$ at the singularities on both sides.
Putting solutions (I) and (II) together, we see that in the $`\mathrm{\Lambda }=0`$ case one can find a Poincare invariant solution with finite 4d Planck scale for any positive tension $`V`$ and any choice of $`b`$ in (2.1). As we have seen, this in fact remains true with (2.1) replaced by a more general dilaton dependent brane tension $`f(\varphi )`$.
Two-Brane Solutions
One can also obtain solutions describing a pair of domain walls localized in a compact fifth dimension. In case (I), one can show that such solutions always involve singularities. In case (II), there are solutions which avoid singularities while maintaining the finiteness of the four-dimensional Planck scale. They however involve extra moduli (the size of the compactified fifth dimension) which may be stabilized by for example the mechanism of . The singularity is avoided in these cases by placing a second domain wall between $`x_5=0`$ and the would-be singularity at $`\frac{4}{3}x_5+c=0`$. This allows us in particular to find solutions for which $`\varphi `$ is bounded everywhere, so that the coupling does not get too strong. This is a straightforward generalization of what we have already done and we will not elaborate on it here.
2.4. $`\mathrm{\Lambda }\ne 0`$ (Solution III)
More generally we can consider the entire Lagrangian (2.1) with parameters $`\mathrm{\Lambda },V,a,b`$. In this case, plugging in the ansatz (2.1) to equations (2.1)–(2.1), we find a bulk solution
$$\begin{array}{cc}& \varphi =-\frac{2}{a}\mathrm{log}\left(\pm \frac{a\sqrt{B}}{2}x_5+d\right)\hfill \\ & B=\frac{\mathrm{\Lambda }}{\frac{4}{3}-12\alpha ^2}\hfill \\ & \alpha =-\frac{8}{9a}\hfill \end{array}$$
We find a domain wall solution by taking one sign in the argument of the logarithm in (2.1) for $`x_5<0`$ and the opposite sign in the argument of the logarithm for $`x_5>0`$. Say for instance that $`a>0`$. Then we could take the $`-`$ sign for $`x>0`$ and the $`+`$ sign for $`x<0`$, and find a solution which terminates at singularities on both sides if we choose $`d>0`$.
The matching conditions then require
$$V=-12\alpha \sqrt{B}$$
and
$$b=-\frac{4}{9\alpha }$$
So we see that here $`V`$ must be fine-tuned to the $`\mathrm{\Lambda }`$-dependent value given in (2.1). This is similar to the situation in , where one fine-tune is required to set the four-dimensional cosmological constant to zero. Like in our solutions in §2.1, there is one undetermined parameter in the Lagrangian. But here it is a complicated combination of $`\mathrm{\Lambda }`$ and $`V`$ (namely, $`\frac{V}{\sqrt{-\mathrm{\Lambda }}}`$), and we do not have an immediate interpretation of variations of this parameter as arising from nontrivial quantum corrections from a sector of the theory.
The fact, apparent from equations (2.1) and (2.1), that $`b=a/2`$ in this solution makes its embedding in string theory natural, as we will explain in the next section.
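The algebra of this solution is compact enough to verify symbolically; the following sketch (ours, not from the paper) checks $`b=a/2`$ and the sign of $`B`$ at the string-motivated point $`a=4/3`$:

```python
import sympy as sp

a = sp.symbols('a', positive=True)
Lam = sp.symbols('Lambda', negative=True)
alpha = -sp.Rational(8, 9) / a        # from the bulk equations
b = -sp.Rational(4, 9) / alpha        # from the matching conditions
print(sp.simplify(b - a / 2))         # prints 0, i.e. b = a/2
B = Lam / (sp.Rational(4, 3) - 12 * alpha**2)
print(sp.simplify(B.subs(a, sp.Rational(4, 3))))  # -Lambda/4 > 0 for Lambda < 0
```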
$`\mathrm{\Lambda }\ne 0`$, $`a=0`$
In this case, the bulk equations of motion become (in terms of $`h\equiv \varphi ^{\prime }`$ and $`g\equiv A^{\prime }`$)
$$\begin{array}{cc}& h^{\prime }+4hg=0\hfill \\ & 6g^2-\frac{2}{3}h^2+\frac{1}{2}\mathrm{\Lambda }=0\hfill \\ & 3g^{\prime }+\frac{4}{3}h^2=0\hfill \end{array}$$
We can solve the second equation for $`g`$ in terms of $`h`$, and then integrate the first equation to obtain $`h(x_5)`$. For $`g\ne 0`$ the third equation is then automatically satisfied. We will not need detailed properties of the solution, so we will not include it here. The solutions are more complicated than those of §2.3. We are currently exploring under what conditions one can solve the matching equations to obtain a wall with singularities cutting off the $`x_5`$ direction on both sides. If such walls exist, they will also exhibit the self-tuning phenomenon of §2.3, since the dilaton zero mode can absorb shifts in $`V`$ and doesn’t appear elsewhere in the action.
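As a crude numerical sketch (ours; simple Euler steps rather than any method used in the paper, with arbitrary initial data), one can integrate this bulk system and watch the constraint:

```python
import numpy as np

Lam = -1.0                                    # arbitrary test value, Lambda < 0
h = 1.0                                       # h = phi' at the starting point
g = np.sqrt((2.0/3.0 * h**2 - Lam/2.0) / 6.0) # fix g = A' by the constraint
dx = 1e-4
for _ in range(10000):
    # h' = -4 h g and 3 g' = -(4/3) h^2, updated simultaneously
    h, g = h + dx * (-4.0 * h * g), g + dx * (-(4.0/9.0) * h**2)
print(6*g**2 - (2.0/3.0)*h**2 + Lam/2.0)      # constraint drift ~ O(dx)
```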
3. Toward a String Theory Realization
3.1. $`\mathrm{\Lambda }=0`$ Cases
Taking $`\mathrm{\Lambda }=0`$ is natural in string theory, since the tree-level vacuum energy in generic critical closed string compactifications (supersymmetric or not) vanishes. One would expect bulk quantum corrections to correct $`\mathrm{\Lambda }`$ in a power series in $`g_s=e^\varphi `$. However, the analysis of §2.3 may still be of interest if the bulk corrections to $`\mathrm{\Lambda }`$ are small enough. This can happen for instance if the supersymmetry breaking is localized in a small neighborhood of the wall and the $`x_5`$ interval is much larger, or more generally if the supersymmetry breaking scale in bulk is small enough.
General $`f(\varphi )`$
The examples we have found in §2 which “self-tune” the 4d cosmological constant to zero have $`\mathrm{\Lambda }=0`$ with a broad range of choices for $`f(\varphi )`$. We interpret this as meaning that quantum corrections to the brane tension, which would change the form of $`f`$, do not destabilize the flat brane solution. The generality of the dilaton coupling $`f(\varphi )`$ suggests that our results should apply to a wide variety of string theory backgrounds involving domain walls. We now turn to a discussion of some of the features of particular cases.
D-branes
In string theory, one would naively expect codimension one D-branes (perhaps wrapping a piece of some compact manifold) to have $`f(\varphi )`$ given by a power series of the form
$$f(\varphi )=e^{\frac{5}{3}\varphi }\sum _{n=0}^{\infty }c_ne^{n\varphi }$$
The $`c_0`$ term represents the tree-level D-brane tension (which goes like $`\frac{1}{g_s}`$ in string frame). The higher order terms in (3.1) represent quantum corrections from the Yang-Mills theory on the brane, which has coupling $`g_{YM}^2=e^\varphi `$.
If one looks for solutions of the equations which arise with the choice (2.1) for $`f(\varphi )`$ with positive $`V`$ and $`b=5/3`$ (the tree level D-brane theory), then there are no solutions with finite 4d Planck scale. The constraints of §2.3 cannot be solved to give a single wall with singularities on both sides cutting off the length in the $`x_5`$ direction. However, including quantum corrections to the D-brane theory to get a more generic $`f`$ as in (3.1), there is a constraint on the magnitude of $`\frac{\partial f}{\partial \varphi }(\varphi (0))`$ divided by $`f(\varphi (0))`$ which can be obeyed. Therefore, one concludes that for our mechanism to be at work with D-brane domain walls, the dilaton $`\varphi `$ must be stabilized away from weak coupling – the loop corrections to (3.1) must be important.
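A toy numerical illustration (ours, with made-up expansion coefficients) of this statement about $`f^{\prime }/f`$ for a D-brane-type tension:

```python
# f = e^{5 phi/3} (a0 + a1 e^phi) gives f'/f = 5/3 + a1 e^phi/(a0 + a1 e^phi).
import numpy as np

a0, a1 = 1.0, -0.3        # purely illustrative coefficients
for phi in (-3.0, 0.0):
    g = np.exp(phi)
    print(phi, 5.0/3.0 + a1*g / (a0 + a1*g))
# At weak coupling (phi -> -infinity) the ratio -> 5/3 > 4/3, violating (2.1);
# at phi = 0 the loop term pulls it to ~1.24, inside the allowed band.
```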
The Case $`f(\varphi )=Ve^{\frac{2}{3}\varphi }`$ and NS Branes
Another simple way to get models which could come out of string theory is to set $`b=2/3`$ in (2.1), so
$$f(\varphi )=Ve^{\frac{2}{3}\varphi }$$
Then (2.1) becomes precisely the Einstein frame action that one would get from a “3-brane” in string theory with a string frame source term proportional to $`e^{-2\varphi }`$. In this case, $`\varphi `$ can also naturally be identified with the string theory dilaton. This choice of $`b`$ is possible in solutions of the sort summarized in result (I) in §2.1.
However, after identifying $`\varphi `$ with the string theory dilaton, if we really want to make this specific choice for $`f(\varphi )`$ we would also like to find branes where it is natural to expect that quantum corrections to the brane tension (e.g. from gauge and matter fields living on the brane) would shift $`V`$, but not change the overall $`\varphi `$ dependence of the source term. This can only happen if the string coupling $`g_s=e^\varphi `$ is not the field-theoretic coupling parameter for the dynamical degrees of freedom on the brane.
Many examples where this happens are known in string theory. For example, the NS fivebranes of type IIB and heterotic string theory have gauge fields on their worldvolume whose Yang-Mills coupling does not depend on $`g_s`$ [15,16,17]. This can roughly be understood from the fact that the dilaton grows to infinity down the throat of the solution, and its value in the asymptotic flat region away from this throat is irrelevant to the coupling of the modes on the brane. Upon compactification, this leads to gauge couplings depending on sizes of cycles in the compactification manifold (in units of $`\alpha ^{\prime }`$) [16,18]. For instance, gauge groups which arise “non-perturbatively” in singular heterotic compactifications (at less supersymmetric generalizations of the small instanton singularity ) are discussed in . There, the 4d gauge couplings on a heterotic NS fivebrane wrapped on a two-cycle go like
$$g_{YM}^2\sim \frac{\alpha ^{\prime }}{R^2}$$
Here $`R`$ is the scale of this 2-cycle in the compactification manifold. In , this was used to interpret string sigma model worldsheet instanton effects, which go like $`e^{-\frac{R^2}{\alpha ^{\prime }}}`$, in terms of nonperturbative effects in the brane gauge group, which go like $`e^{-\frac{8\pi ^2}{g_{YM}^2}}`$. So this is a concrete example in which nontrivial dilaton-independent quantum corrections to the effective action on the brane arise. One can imagine analogous examples involving supersymmetry breaking. In such cases, perturbative shifts in the brane tension due to brane worldvolume gauge dynamics would be a series in $`\frac{\alpha ^{\prime }}{R^2}`$ and not $`g_s=e^\varphi `$.
In particular, one can generalize such examples to cases where the branes are domain walls in 5d spacetime (instead of space-filling in 4d spacetime as in the examples just discussed), but where again the brane gauge coupling is not the string coupling. Quantum corrections to the brane tension in the brane gauge theory then naturally contribute shifts
$$e^{\frac{2}{3}\varphi }V\to e^{\frac{2}{3}\varphi }(V+\delta V)$$
to the (Einstein frame) $`b=2/3`$ source term in (2.1), without changing its dilaton dependence.
Most of our discussion here has focused on the case where $`\varphi `$ is identified with the string theory dilaton. However, in general it is possible that some other string theory modulus could play the role of $`\varphi `$ in our solutions, perhaps for more general values of $`b`$.
Resemblance to Orientifolds
In our analysis of the equations, we find solutions describing a 4d gravity theory with zero cosmological constant if we consider singular solutions and cut off the fifth dimension at these singularities. The simplest versions of compactifications involving branes in string theory also include defects in the compactification which absorb the charge of the branes and cancel their contribution to the cosmological constant in four dimensions, at least at tree level. Examples of these defects include orientifolds (in the context of D-brane worlds), S-duals of orientifolds (in the context of NS brane worlds), and Horava-Witten “ends of the world” (in the context of the strongly coupled heterotic string).
Our most interesting solutions involve two different behaviors on the two sides of the domain wall. On one side the dilaton goes to strong coupling while on the other side it goes to weak coupling at the singularity. This effect has also been seen in brane-orientifold systems .
It would be very interesting to understand whether the singularities we find can be identified with orientifold-like defects, as these similarities might suggest. Then their role (if any) in absorbing quantum corrections to the 4d cosmological constant could be related to the effective negative tension of these defects. However, various aspects of our dilaton gravity solutions are not familiar from brane-orientifold systems. In particular, the existence of solutions with curved 4d geometry on the same footing as our flat solutions does not occur in typical perturbative string compactifications. In any case, note that (as explained in §3.1) our mechanism does not occur in the case of weakly coupled D-branes and orientifolds.
3.2. $`\mathrm{\Lambda }\ne 0`$ Cases
Some of the $`\mathrm{\Lambda }\ne 0`$ cases discussed in §2.4 could also arise in string theory. As discussed in [20,21] one can find closed string backgrounds with nonzero tree level cosmological constant $`\mathrm{\Lambda }<0`$ by considering subcritical strings. In this case, the cosmological term would have dilaton dependence consistent with $`a=4/3`$ in bulk. Using equations (2.1) and (2.1), this implies $`b=2/3`$, which is the expected scaling for a tree-level brane tension in the thin-wall approximation as well.
One would naively expect to obtain vacua with such negative bulk cosmological constants out of tachyon condensation in closed string theory [20,21]. It is then natural to consider these domain walls (in the $`a=4/3,b=2/3`$ case) as the thin wall approximation to “fat” domain walls which could be formed by tachyon field configurations which interpolate between different minima of a closed string tachyon potential. In the context of the Randall-Sundrum scenario, such “fat” walls were studied for example in [11,22,23].
It would be interesting to find cases where the $`\mathrm{\Lambda }\ne 0`$, $`a=0`$ solutions arise from a more microscopic theory. However, it is clear that the dilaton dependence of (2.1) is then not consistent with interpreting $`\varphi `$ as the string theory dilaton. Perhaps one could find a situation where $`\varphi `$ can be identified with some other string theoretic modulus, and $`\mathrm{\Lambda }`$ can be interpreted as the bulk cosmological constant after other moduli are fixed.
4. Discussion
The concrete results of §2 motivate many interesting questions, which we have only begun to explore. Answering these questions will be important for understanding the four-dimensional physics of our solutions.
The most serious question has to do with the nature of the singularities. There are many singularities in string theory which have sensible physical resolutions, either due to the finite string tension or due to quantum effects. Most that have been studied (like flops and conifolds ) involve systems with some supersymmetry, though some (like orbifolds ) can be understood even without supersymmetry. We do not yet know the proper interpretation of our singularities, though as discussed in §3 there are intriguing similarities to orientifold physics in our system. After finding the solutions, we cut off the volume integral determining the four-dimensional Planck scale at the singularities. It is important to determine whether this is a legitimate operation.
It is desirable (and probably necessary in order to address the question in the preceding paragraph) to embed our solutions microscopically into M theory. As discussed in §3, some of our solutions appear very natural from the point of view of string theory, where the scalar $`\varphi `$ can be identified with the dilaton. It would be interesting to consider the analogous couplings of string-theoretic moduli scalars other than the dilaton. Perhaps there are other geometrical moduli which couple with different values of $`a`$ and $`b`$ in (2.1) than the dilaton does.
It is also important to understand the effects of quantum corrections to quantities other than $`f(\varphi )`$ in our Lagrangian. In particular, corrections to $`\mathrm{\Lambda }`$ and corrections involving different powers of $`e^\varphi `$ in the bulk (coming from loops of bulk gravity modes) will change the nature of the equations. It will be interesting to understand the details of curved 4d domain wall solutions to the corrected equations [27,11,10]. More specifically, it will be of interest to determine the curvature scale of the 4d slice, in terms of the various choices of phenomenologically natural values for the Planck scale. Since the observed value of the cosmological constant is nonzero according to studies of the mass density, cosmic microwave background spectral distribution, and supernova events , such corrected solutions might be of physical interest.
Perhaps the most intriguing physical question is what happens from the point of view of four-dimensional effective field theory (if such a description in fact exists). Understanding the singularity in the 5d background is probably required to answer this question. One possibility (suggested by the presence of the singularity and by the self-tuning of the 4d cosmological constant discovered here) is that four-dimensional effective field theory breaks down in this background, at least as far as contributions to the 4d cosmological constant are concerned. In and analogous examples, there is a continuum of bulk modes which could plausibly lead to a breakdown of 4d effective field theory in certain computations. In our theories, cutting off the 5d theory at the singularities leaves finite proper distance in the $`x_5`$ direction. This makes it unclear how such a continuum could arise (in the absence of novel physics at the singularities, which could include “throats” of the sort that commonly arise in brane solutions). So in this system, any breakdown of 4d effective field theory is more mysterious.
Acknowledgements
We are indebted to O. Aharony and T. Banks for interesting discussions which motivated us to investigate this subject. We would also like to thank R. Sundrum for many helpful discussions about closely related topics; we understand that Arkani-Hamed, Dimopoulos, Kaloper and Sundrum have uncovered very similar results . We thank H. Verlinde for interesting discussions, and in particular for several helpful comments about the potential generality of these results. In addition we are grateful to M. Dine, N. Kaloper, S. Shenker, M. Shmakova, L. Susskind and E. Verlinde for stimulating discussions. We would like to acknowledge the kind hospitality of the School of Natural Sciences at the Institute for Advanced Study during the early stages of this work. S.K. is supported in part by the Ambrose Monell Foundation and a Sloan Foundation Fellowship, M.S. is supported in part by an NSF Graduate Research Fellowship, and E.S. is supported in part by a DOE OJI Award and a Sloan Foundation Fellowship. S.K. and E.S. are supported in part by the DOE under contract DE-AC03-76SF00515.
References
[1] V. Rubakov and M. Shaposhnikov, “Extra Space-Time Dimensions: Towards a Solution to the Cosmological Constant Problem,” Phys. Lett. B125 (1983) 139.
[2] P. Horava and E. Witten, “Heterotic and Type I String Dynamics from Eleven-Dimensions,” Nucl. Phys. B460 (1996) 506, hep-th/9510209; E. Witten, “Strong Coupling Expansion of Calabi-Yau Compactification,” Nucl. Phys. B471 (1996) 135, hep-th/9602070; A. Lukas, B. Ovrut, K. Stelle and D. Waldram, “The Universe as a Domain Wall,” Phys. Rev. D59 (1999) 086001, hep-th/9803235.
[3] N. Arkani-Hamed, S. Dimopoulos and G. Dvali, “The Hierarchy Problem and New Dimensions at a Millimeter,” Phys. Lett. B429 (1998) 263, hep-ph/9803315; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, “New Dimensions at a Millimeter and Superstrings at a TeV,” Phys. Lett. B436 (1998) 257, hep-ph/9804398.
[4] Z. Kakushadze and H. Tye, “Brane World,” Nucl. Phys. B548 (1999) 180, hep-th/9809147.
[5] L. Randall and R. Sundrum, “An Alternative to Compactification,” Phys. Rev. Lett. 83 (1999) 4690, hep-th/9906064.
[6] J. Maldacena, “The Large N Limit of Superconformal Field Theories and Supergravity,” Adv. Theor. Math. Phys. 2 (1998) 231, hep-th/9711200; S. Gubser, I. Klebanov and A. Polyakov, “Gauge Theory Correlators from Noncritical String Theory,” Phys. Lett. B428 (1998) 105, hep-th/9802109; E. Witten, “Anti-de Sitter Space and Holography,” Adv. Theor. Math. Phys. 2 (1998) 253, hep-th/9802150.
[7] E. Verlinde and H. Verlinde, “On RG Flow and the Cosmological Constant,” hep-th/9912058.
[8] C. Schmidhuber, “AdS(5) and the 4d Cosmological Constant,” hep-th/9912156.
[9] S. Kachru and E. Silverstein, “4d Conformal Field Theories and Strings on Orbifolds,” Phys. Rev. Lett. 80 (1998) 4855, hep-th/9802183.
[10] S. Kachru, M. Schulz and E. Silverstein, work in progress.
[11] O. DeWolfe, D. Freedman, S. Gubser and A. Karch, “Modeling the Fifth Dimension with Scalars and Gravity,” hep-th/9909134.
[12] M. Cvetic and H. Soleng, “Supergravity Domain Walls,” Phys. Rept. 282 (1997) 159, hep-th/9604090.
[13] N. Arkani-Hamed, S. Dimopoulos, N. Kaloper and R. Sundrum, to appear.
[14] W. Goldberger and M. Wise, “Modulus Stabilization with Bulk Fields,” Phys. Rev. Lett. 83 (1999) 4922, hep-ph/9907447.
[15] E. Witten, “Small Instantons in String Theory,” Nucl. Phys. B460 (1996) 541, hep-th/9511030.
[16] A. Klemm, W. Lerche, P. Mayr, C. Vafa and N. Warner, “Self-Dual Strings and N=2 Supersymmetric Field Theory,” Nucl. Phys. B477 (1996) 746, hep-th/9604034.
[17] N. Seiberg, “Matrix Description of M-theory on $`T^5`$ and $`T^5/Z_2`$,” Phys. Lett. B408 (1997) 98, hep-th/9705221.
[18] S. Kachru, N. Seiberg and E. Silverstein, “SUSY Gauge Dynamics and Singularities of 4d N=1 String Vacua,” Nucl. Phys. B480 (1996) 170, hep-th/9605036; S. Kachru and E. Silverstein, “Singularities, Gauge Dynamics and Nonperturbative Superpotentials in String Theory,” Nucl. Phys. B482 (1996) 92, hep-th/9608194.
[19] J. Polchinski and E. Witten, “Evidence for Heterotic - Type I String Duality,” Nucl. Phys. B460 (1996) 525, hep-th/9510169.
[20] S. de Alwis, J. Polchinski and R. Schimmrigk, “Heterotic Strings with Tree Level Cosmological Constant,” Phys. Lett. B218 (1989) 449.
[21] S. Kachru, J. Kumar and E. Silverstein, “Orientifolds, RG Flows, and Closed String Tachyons,” hep-th/9907038.
[22] M. Gremm, “Four-dimensional Gravity on a Thick Domain Wall,” hep-th/9912060.
[23] C. Csaki, J. Erlich, T. Hollowood and Y. Shirman, “Universal Aspects of Gravity Localized on Thick Branes,” hep-th/0001033.
[24] E. Witten, “Phases of N=2 Theories in Two-Dimensions,” Nucl. Phys. B403 (1993) 159, hep-th/9301042; P. Aspinwall, B. Greene and D. Morrison, “Calabi-Yau Moduli Space, Mirror Manifolds and Space-time Topology Change in String Theory,” Nucl. Phys. B416 (1994) 414, hep-th/9309097.
[25] A. Strominger, “Massless Black Holes and Conifolds in String Theory,” Nucl. Phys. B451 (1995) 96, hep-th/9504090.
[26] L. Dixon, J. Harvey, C. Vafa and E. Witten, “Strings on Orbifolds,” Nucl. Phys. B261 (1985) 678.
[27] N. Kaloper, “Bent Domain Walls as Braneworlds,” Phys. Rev. D60 (1999) 123506, hep-th/9905210.
[28] N. Bahcall, J. Ostriker, S. Perlmutter and P. Steinhardt, “The Cosmic Triangle: Assessing the State of the Universe,” Science 284 (1999) 1481, astro-ph/9906463.
## 1 Introduction - Music, Harmonics and Overtones
Sound is caused by alternate compressions and rarefactions of pressure in the air. The ear-drum vibrates with these so-called longitudinal waves, and that is how we perceive sound. Not every sound, however, pleases the ear. The normal human ear can, in general, differentiate between what is pleasing and what is noise. This section gives the important differences between Music and Noise.
### 1.1 Music
Music has its own definite wave patterns, so that any arbitrary sound cannot be called music. One can, by looking at the wave patterns, find out whether a given sound is musical or not. Music has the following two attributes that distinguish it from noise:
1. Pitch: The pitch of a wave pattern characterises the time after which the wave pattern repeats itself. For the sensation of pitch, therefore, it is essential that the waveform be periodic. A non-periodic waveform will not give the sensation of pitch, and can hence be labeled noise.
2. Timbre: Timbre refers to the quality of sound. A tuning fork, a sitar, a tanpura and a piano may all have the same pitch, yet their timbre may be completely different. Pitch refers to the frequency of repetition of the waveform, whereas timbre depends on the shape of the waveform. What timbre really refers to is the relative amplitudes of the various component single frequencies that make up a particular sound wave.
### 1.2 Overtones and Harmonics
Any complex waveform may be expressed as a linear superposition of a number of sine waves of different frequencies. Notes produced by musical instruments consist of sine waves of one fundamental frequency and of higher frequencies called overtones. For the resultant waveform to be periodic, the overtones have to be integral multiples of the fundamental frequency. Overtones that are integral multiples of the fundamental frequency are called harmonics.
Unless a majority of overtones in a particular sound are harmonic, the waveform will not be periodic and will not have a discernible pitch. Thus for a note to sound musical (“musical”, as used here, just means having a definite pitch, which is very important in Indian Classical accompaniments; this does not mean that Western drums are “non-musical”: they produce rhythm, just like their Indian counterparts, see section 1.3), a majority of the overtones must be harmonic.
### 1.3 Indian and Western Drums
The Tabla is the most well-known of all Indian drums. Amir Khusro created it when he split the ancient Indian drum, the Pakhawaj, into two parts. Of these, the right drum has a black patch in the centre, as mentioned earlier. This is made of a mixture of iron, iron oxides, resin, gum etc. and is stuck firmly on to the membrane. The thickness of this patch decreases radially outwards.
The left hand drum has a wider membrane, and has a black patch similar to the one in the right drum, except that it is unsymmetrically placed on one side of the membrane. It is worth noting here that the left hand drum is not used to produce harmonics but to provide lower frequencies in the overall sound while the Tabla is being played.
The key difference between Indian and western drums is the absence of the central loading in the case of the Western drums. It is shown mathematically in section 4 that a uniform circular membrane cannot produce harmonics. In section 5, we then investigate how the drum may be made harmonic by considering two theoretical models, i.e. radial density distributions.
Thus, this is a study of how the density variation of the membrane affects the frequencies of the overtones.
Indian Classical music is such that a recital consists (usually) of a single performer and a couple of accompaniments. In this situation, the accompaniment (usually a drum, the Tabla for instance) must be harmonic, since otherwise the aharmonicity would completely disrupt the recital.
Western drums, however, play a very different part in Western Classical music. They provide a rhythm to the music produced by the rest of the instruments, and (strictly speaking) do not need to produce harmonics.
## 2 The Wave Equation and its solution
The wave equation, in the case of a membrane, describes how the displacement of a point on the membrane changes with position and with time, when it is disturbed in some way.
The general form of the Wave equation is :
$$\nabla ^2u=\frac{1}{c^2}\frac{\partial ^2u}{\partial t^2}.$$
(1)
For a circular membrane, in polar co-ordinates, the wave equation may be written as
$$\frac{\partial ^2u}{\partial r^2}+\frac{1}{r}\frac{\partial u}{\partial r}+\frac{1}{r^2}\frac{\partial ^2u}{\partial \theta ^2}=\frac{1}{c^2}\frac{\partial ^2u}{\partial t^2}$$
(2)
where $`(r,\theta )`$ describes any arbitrary point on the membrane that is assumed to have:
1. Uniform mass per unit area
2. Uniform tension per unit length
and $`u(r,\theta ,t)`$ is the vertical displacement of the point at a time t. Also,
$$c^2=\frac{\tau }{\rho }$$
(3)
where $`c`$ is the speed at which the wave travels on the membrane, $`\tau `$ is the tension per unit length in the membrane and $`\rho `$ is the mass per unit area of the membrane.
This equation is solved using the technique of separation of variables, where it is assumed that
$$u(r,\theta ,t)=R(r)\mathrm{\Theta }(\theta )T(t).$$
(4)
Here, $`R`$ depends only on $`r`$, $`\mathrm{\Theta }`$ only on $`\theta `$, and $`T`$ only on $`t`$. Making this substitution, and solving, the three quantities $`T`$, $`\mathrm{\Theta }`$ and $`R`$ are found to be:
1. The $`T`$ solution: This is a simple sine (or cosine) function, indicating that the membrane undergoes simple harmonic oscillations in time. Its general form is:
$$T=cos\left(\omega t+\varphi \right),$$
(5)
where $`\omega =ck`$ and $`\varphi `$ is a phase which depends on the initial conditions. However, it can be set to zero without loss of generality.
2. The $`\mathrm{\Theta }`$ solution: This tells us how $`u`$ changes with angle, i.e. how displacement changes as we move along a circular path of fixed radius. This part is:
$$\mathrm{\Theta }=cos(m\theta +\psi ).$$
(6)
$`\psi `$ is again a phase which depends on initial conditions, and may be put to zero without loss of generality. There are certain values of $`\theta `$ for which $`\mathrm{\Theta }(\theta )`$ reduces to zero. Thus, $`u`$ would, at all times, be equal to zero on all diameters along which $`m\theta `$ is an odd multiple of $`\frac{\pi }{2}`$.
Nodal Diameters of a drum are those diameters that remain stationary while the rest of the drum is vibrating. For a particular value of $`m`$, there are $`m`$ symmetrically placed Nodal Diameters.
3. The $`R`$ solution: This describes how the displacement of the membrane above the plane of rest (i.e. $`u`$) changes as we move outward, along a radius. There are certain circles that remain stationary on the membrane while the rest of the drum vibrates. This happens whenever the function $`R(r)`$ crosses the value zero. These circles are called Nodal Circles and their radii correspond to those values of $`r`$ at which $`R(r)`$ reduces to zero. The $`R`$ solution is given by:
$$R(r)=J_m(kr),$$
(7)
where $`J_m`$ is the Bessel function of the first kind of order $`m`$.
A few Bessel functions are plotted in Fig.2 and Fig.3.
### 2.1 Normal Modes
The complete solution may be written as:
$$u(r,\theta ,t)=J_m(k_{mn}r)cos(m\theta )cos(\omega _{mn}t)$$
(8)
The most general solution is thus a superposition of various different modes, where in each mode, the whole drum vibrates with one frequency, has $`m`$ Nodal Diameters and $`n-1`$ nodal circles. These modes are called the Normal Modes of the drum. The first nine Normal Modes are shown in Fig. 4.
### 2.2 The Boundary condition
The membrane is bound in the form a circle over the head of the wooden shell. Therefore, the periphery is always stationary. This has to be taken into account when the solution is being calculated. The condition may be stated mathematically as follows:
$$u(r,\theta ,t)=0$$
(9)
at $`r=a`$ or
$$J_m(k_{mn}a)=0.$$
(10)
That is, the boundary of the membrane should correspond to one of the zeroes of the Bessel function of order $`m`$. This will yield the values of frequencies that are ’allowed’ by the boundary condition. However, we are interested only in the relative ratios of these frequencies.
Let us quantify the ’allowed’ frequencies. Let $`b_{mn}`$ be the $`n`$th zero of the Bessel function of order $`m`$. Then the above boundary condition becomes:
$$k_{mn}=\frac{b_{mn}}{a},$$
(11)
i.e. the allowed frequencies are proportional to the zeroes of Bessel functions. As noted earlier, however, the values of these zeroes have no integral ratio to each other, so that the ratios of the frequencies are also non-integral.
Thus, the simple circular membrane, and hence also Western drums, cannot produce harmonics.
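This aharmonicity is easy to see numerically; the following sketch (ours, using scipy) lists the zeros of the first few Bessel functions relative to the lowest one:

```python
# Ratios of Bessel-function zeros: the overtones of a uniform membrane.
from scipy.special import jn_zeros

base = jn_zeros(0, 1)[0]                      # lowest zero of J_0, ~2.405
for m in range(3):
    print(m, [round(z / base, 3) for z in jn_zeros(m, 3)])
# order 0 gives 1.0, 2.295, 3.598 -- clearly not 1 : 2 : 3
```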
## 3 Making the Drum Harmonic
Indian drums, in general, play a very different role as accompaniments in Classical music as compared to their western counterparts, as mentioned in sections 1.2 and 1.3. Since a uniform membrane does not give us harmonics, we looked for solutions for a membrane with a density variation. The simplest possibility is a loading which varies only with $`r`$. We examined the right hand tabla and tried various symmetric density distributions that resembled the actual loading. We describe here the one that was found to be successful. But first, we note the change produced in the wave equation.
Recall that $`c^2=\frac{\tau }{\rho }`$. Now, however, $`\rho =\rho (r)`$, so that $`c^2(r)=\frac{\tau }{\rho (r)}`$. This causes the wave equation to change from
$$\frac{d^2R}{dr^2}+\frac{1}{r}\frac{dR}{dr}+\left(\frac{\omega ^2}{c^2}-\frac{m^2}{r^2}\right)R=0$$
(12)
for a uniform membrane to
$$\frac{d^2R}{dr^2}+\frac{1}{r}\frac{dR}{dr}+\left(\frac{\omega ^2}{c^2(r)}-\frac{m^2}{r^2}\right)R=0$$
(13)
or, equivalently,
$$\frac{d^2R}{dr^2}+\frac{1}{r}\frac{dR}{dr}+\left(\frac{\rho (r)\omega ^2}{\tau }-\frac{m^2}{r^2}\right)R=0$$
(14)
for a loaded membrane.
The Radial equation can be solved for various distributions. We tried two kinds of loading patterns:
1. The Step function: Concentric rings with varying density were considered. This gave good results, as can be seen in the table.
2. The Continuous Loading: The actual loading on the tabla is a step function to begin with. However, even though it is stuck in parts, the loading becomes more or less continuous after the tabla has been played for some time. The loading therefore looks quite like the function plotted in Fig. 5.
It was also found that if an exponential function is included toward the periphery, the results obtained for both the distributions improve. The form of the function used, along with the continuous loading, was $`ce^{-dr}`$, where $`c,d`$ are constants. The actual function used as loading is shown in Fig. 6.
## 4 Calculation and results
Both the distributions given in the previous section were put into the Radial part of the wave equation, and the equation was solved numerically using the second-order Runge-Kutta method.
In writing down the wave equation for the loaded membrane, the following two assumptions were involved:
1. Normal Modes exist even in the loaded membrane
2. Tension per unit length is the same throughout the membrane, even in the loaded part.
### 4.1 Calculation
It may be seen from Fig. 5 that the function chosen for Continuous loading varies slowly with $`r`$ near the centre of the membrane. Also, in the discrete loading, the density remains constant in a particular concentric circle, so that the density is not varying in the centre. For this reason, the initial conditions (for starting the solution of the wave equation by the Runge-Kutta method) were assumed to be identical to those of the Bessel functions.
The allowed frequencies were found in the following way: First, the Radial equation was written as
$$\frac{d^2R}{dr^2}+\frac{1}{r}\frac{dR}{dr}+\left(\rho (r)k^{\prime 2}-\frac{m^2}{r^2}\right)R=0$$
(15)
where
$$k^{\prime 2}=\frac{\omega ^2}{\tau }.$$
(16)
This is the equivalent of the Bessel equation of order $`m`$ for the loaded case.
Now, keeping $`m=0`$, the value of $`k^{\prime }`$ was varied and the solution plotted as a graph on the screen until its value became zero at the boundary. The corresponding value of $`k^{\prime }`$ was noted. $`k^{\prime }`$ was then increased until the solution again became zero, but this time the solution passed through zero once, meaning that there was a nodal circle. The process was continued for a particular value of $`m`$. Then, $`m`$ was increased by one, the same process was continued, and all the allowed frequency values were noted.
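A compact reconstruction (ours) of this shooting procedure, with a made-up continuous density profile standing in for the actual loading; units are chosen so that the membrane radius and tension are 1:

```python
import numpy as np

def rho(r):                        # illustrative continuous loading, not the real tabla profile
    return 1.0 + 3.0 * np.exp(-6.0 * r)

def R_at_boundary(k, m, r0=1e-4, R0=1.0, dR0=0.0, n=4000):
    def f(r, R, dR):               # R'' from the radial equation above
        return -dR / r - (rho(r) * k**2 - m**2 / r**2) * R
    r, R, dR = r0, R0, dR0         # m = 0 seed; see the series seed below for m >= 2
    h = (1.0 - r0) / n
    for _ in range(n):             # second-order Runge-Kutta (Heun) steps
        k1R, k1d = dR, f(r, R, dR)
        k2R, k2d = dR + h * k1d, f(r + h, R + h * k1R, dR + h * k1d)
        R, dR = R + h * (k1R + k2R) / 2, dR + h * (k1d + k2d) / 2
        r += h
    return R

ks = np.linspace(0.5, 15.0, 400)   # scan k' for sign changes of R(a)
vals = [R_at_boundary(k, m=0) for k in ks]
roots = [0.5 * (ks[i] + ks[i + 1]) for i in range(len(ks) - 1)
         if vals[i] * vals[i + 1] < 0]
print([round(k / roots[0], 3) for k in roots])   # overtone ratios
```

With this made-up profile the ratios need not come out harmonic; the point is only to show the scan-and-match procedure described in the text.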
### 4.2 Problems with the calculation
The method mentioned in the previous section works fine for $`m=0`$ and $`m=1`$, since for the zeroth and first orders at least one of the initial values of the Bessel function and its derivative is non-zero. However, for $`m=2`$ and higher orders, the values of both the Bessel function and its derivative become zero at $`r=0`$. This leads to the solution becoming zero at all points for $`m=2`$ and higher orders.
This happens because the Runge-Kutta method depends on the initial value of the solution and its derivative (i.e. the value at $`r=0`$). Since the iteration begins with the derivative and the initial value as zero, it continues to be so for further values of $`r`$.
To get around this problem, iteration was begun not from $`r=0`$ but from a very small value, in this case $`r=0.0001`$. The values of the Bessel function and its derivative at this point were computed from the series expansion of the Bessel function, considering the first four terms. The rest of the procedure was as in the previous section, but this ’initial value’ had to be calculated for each order separately.
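A sketch (ours) of this series-expansion seed, keeping the first four terms of the standard expansion $`J_m(kr)=\sum _s(-1)^s(kr/2)^{m+2s}/(s!(m+s)!)`$:

```python
from math import factorial

def seed(m, k, r0=1e-4, terms=4):
    R = sum((-1)**s * (k*r0/2)**(m+2*s) / (factorial(s)*factorial(m+s))
            for s in range(terms))
    dR = sum((-1)**s * (m+2*s) * (k*r0/2)**(m+2*s-1) * (k/2)
             / (factorial(s)*factorial(m+s)) for s in range(terms))
    return R, dR

print(seed(2, 5.0))   # tiny but non-zero, so the Runge-Kutta march can start
```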
### 4.3 Results
The most surprising result we got was that for the continuous loading, all the overtones were found to be nearly harmonic, but the “fundamental” itself was higher than what it should have been. In particular, the ratios were found to be 1.07:2:2:3 etc. These results are summarized in Table 1. The base in column 3 is the 2nd normal mode, with 1 nodal diameter. This is done to illustrate the fact that the fundamental is absent and the lowest eigenvalue is 1.07 times the fundamental.
Thus, a theoretical model with the first seven harmonics is possible, which also resembles the actual loading pattern on the Indian drums. However, with the kind of loading that the Indian drums have, the fundamental is absent and a slightly higher note is present instead. This is in accordance with recent observations of the frequencies present in the Indian drums.
## 5 Discussion
The results table shows certain modes with relative frequency ratios that differ from the exact harmonic ratios by small amounts (e.g. 0.01, 0.02). However, the minimum difference required between two frequencies so that a normal human ear can distinguish between them is about 6 to 7 Hz. Considering that the fundamental note in Indian Classical (Sa of the base Saptak) is $`240Hz`$, these differences work out to $`2.4Hz`$ to $`5Hz`$. Thus, the human ear is not able to make out these differences.
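As a one-line check (ours) of the arithmetic, with Sa taken as 240 Hz as above:

```python
for delta in (0.01, 0.02):
    print(delta * 240.0)   # 2.4 Hz and 4.8 Hz, below the 6-7 Hz threshold
```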
Acknowledgements
This investigation was done as a project at St. Stephen’s College, Delhi, and was sponsored by the Deptt. of Science and technology, Govt. of India, and was exhibited, along with other projects done at St. Stephen’s College, at the 85th session of the Indian Science Congress at Hyderabad, 3rd to 10th January, 1998.
We would like to thank Dr. S.C. Bhargava, Mr. N. Raghunathan, Dr. B. Phookun, Dr. S. Aggarwal and Mr. S. Grewal of St. Stephen’s College for valuable suggestions and guidance during the project period. |
## Abstract
Cusps of superconducting strings can serve as GRB engines. A powerful beamed pulse of electromagnetic radiation from a cusp produces a jet of accelerated particles, whose propagation is terminated by the shock responsible for GRB. A single free parameter, the string scale of symmetry breaking $`\eta 10^{14}GeV`$, together with reasonable assumptions about the magnitude of cosmic magnetic fields and the fraction of volume that they occupy, explains the GRB rate, duration and fluence, as well as the observed ranges of these quantities. The wiggles on the string can drive the short-time structures of GRB. This model predicts that GRBs are accompanied by strong bursts of gravitational radiation which should be detectable by LIGO, VIRGO and LISA detectors.
Models of gamma ray bursts (GRBs) face the problem of explaining the tremendous energy released by the central engine . In the case of isotropic emission, the total energy output should be as high as $`4\times 10^{54}`$ ergs. Strongly beamed emission is needed for all known engine models, such as mergers and hypernovae, but such extreme beaming is difficult to arrange. In this paper we show that emission of pulsed electromagnetic radiation from cusps of superconducting cosmic strings naturally solves this problem and explains the observational GRB data using only one engine parameter.
Cosmic strings are linear defects that could be formed at a symmetry breaking phase transition in the early universe . Strings predicted in most grand unified models respond to external electromagnetic fields as thin superconducting wires . As they move through cosmic magnetic fields, such strings develop electric currents. Oscillating loops of superconducting string emit short bursts of highly beamed electromagnetic radiation and high-energy particles .
The idea that GRBs could be produced at cusps of superconducting strings was first suggested by Babul, Paczynski and Spergel (BPS) and further explored by Paczynski . They assumed that the bursts originate at very high redshifts ($`z1001000`$), with GRB photons produced either directly or in electromagnetic cascades developing due to interaction with the microwave background. This model requires the existence of a strong primordial magnetic field to generate the string currents.
As it stands, the BPS model does not agree with observations. The observed GRB redshifts are in the range $`z3`$, and the observed duration of the bursts ($`10^2s\tau 10^3s`$) is significantly longer than that predicted by the model. On the theoretical side, our understanding of cosmic string evolution and of the GRB generation in relativistic jets have considerably evolved since the BPS papers were written. Our goal in this paper is to revive the BPS idea taking stock of these recent advances.
As in the BPS model we shall use the cusp of a superconducting string as the central engine in GRB. It provides the tremendous engine energy naturally beamed. Our main observation is that putting superconducting cusps in a different enviroment, the magnetized plasma at a relatively small redshift $`z`$, results in a different mechanism of gamma radiation, which leads to a good agreement with GRB observational data.
GRB radiation in our model arises as follows. Low-frequency electromagnetic radiation from a cusp loses its energy by accelerating particles of the plasma to very large Lorentz factors. Like the initial electromagnetic pulse, the particles are beamed and give rise to a hydrodynamical flow in the surrounding gas, terminated by a shock, as in the standard fireball theory of GRB (for a review see ). We shall assume that cosmic magnetic fields were generated at moderate redshifts (e.g., in young galaxies during the bright phase of their evolution ). The string symmetry breaking scale $`\eta `$ will be the only string parameter used in our calculations. With reasonable assumptions about the magnitude of cosmic magnetic fields and the fraction of volume in the Universe that they occupy, this parameter is sufficient to account for all main GRB observational quantities: the duration $`\tau _{GRB}`$, the rate of events $`\dot{N}_{GRB}`$, and the fluence $`S`$.
We begin with the description of some properties of strings, which will be used below (for a review see ).
A horizon-size volume at any time $`t`$ contains a few long strings stretching across the volume and a large number of small closed loops. The typical length of a loop at cosmological time $`t`$ and the loop number density are given by
$$l\alpha t,n_l(t)\alpha ^1t^3$$
(1)
The exact value of the parameter $`\alpha `$ in (1) is not known. We shall assume, following , that $`\alpha `$ is determined by the gravitational backreaction, so that $`\alpha k_gG\mu `$, where $`k_g50`$ is a numerical coefficient, $`G`$ is Newton’s constant, $`\mu \eta ^2`$ is the mass per unit length of string, and $`\eta `$ is the symmetry breaking scale of strings.
The loops oscillate and lose their energy, mostly by gravitational radiation. For a loop of invariant length $`l`$ , the oscillation period is $`T_l=l/2`$ and the lifetime is $`\tau _ll/k_gG\mu `$.
An electric field $`E`$ applied along a superconducting string generates an electric current. A superconducting loop of string oscillating in a magnetic field $`B`$ acts as an ac generator and develops an ac current of amplitude
$$J_0e^2Bl.$$
(2)
The local value of the current in the loop can be greatly enhanced in near-cusp regions where, for a short period of time, the string reaches a speed very close to the speed of light. Cusps tend to be formed a few times during each oscillation period. Near a cusp, the string gets contracted by a large factor, its rest energy being turned into kinetic energy. The density of charge carriers, and thus the current, are enhanced by the same factor. The contraction factor increases as one approaches the point of the cusp.
The growth of electric current near the cusp due to string contraction is terminated at a critical value $`J_{max}`$ when the energy of charge carriers becomes comparable to that of the string itself, $`(J/e)^2\mu `$. This gives
$$J_{max}e\eta ,\gamma _{max}(e\eta /J_0).$$
(3)
Alternatively, the cusp development can be terminated by small-scale wiggles on the string . If the wiggles contribute a fraction $`ϵ1`$ to the total energy of the string, then the maximum Lorentz factor is less than (3), and is given by $`\gamma _{max}ϵ^{1/2}`$. The actual value of $`\gamma _{max}`$ is not important for most of the following discussion.
Due to the large current, the cusp produces a powerful pulse of electromagnetic radiation. The total energy of the pulse is given by $`_{em}^{tot}2k_{em}J_0J_{max}l`$, where $`l\alpha t`$ is the length of the loop, and the coefficient $`k_{em}10`$ is taken from numerical calculations . This radiation is emitted within a very narrow cone of openening angle $`\theta _{min}1/\gamma _{max}`$ (for relativistic beaming in GRB see ). The angular distribution of radiated energy at larger angles is given by
$$d_{em}/d\mathrm{\Omega }k_{em}J_0^2l/\theta ^3,$$
(4)
We shall adopt the following simple model of cosmic magnetic fields. We shall assume that magnetic fields were generated at some $`zz_B`$ and then remained frozen in the extragalactic plasma, with
$$B(z)=B_0(1+z)^2,$$
(5)
where $`B_0`$ is the characteristic field strength at the present time. For numerical estimates below we shall use $`z_B4`$, $`B_010^7`$ G, and assume that the fraction of volume of the universe occupied by magnetized plasma is $`f_B0.1`$. We shall also assume that the universe is spatially flat and is dominated by non-relativistic matter.
We shall now estimate the physical quantities characterizing GRBs powered by superconducting strings. For a GRB originating at redshift $`z`$ and seen at angle $`\theta `$ with respect to the string velocity at the cusp, we have from Eqs.(2)-(5)
$$d_{em}/d\mathrm{\Omega }k_{em}e^4\alpha ^3t_0^3B_0^2(1+z)^{1/2}\theta ^3,$$
(6)
where $`t_0`$ is the present age of the Universe. The Lorentz factor of the relevant string segment near the cusp is $`\gamma 1/\theta `$. The duration of the cusp event as seen by a distant observer is
$$\tau _c(1+z)(\alpha t/2)\gamma ^3(\alpha t_0/2)(1+z)^{1/2}\theta ^3.$$
(7)
One can expect that the observed duration of GRB is $`\tau _{GRB}\tau _c`$. This expectation will be justified by the hydrodynamical analysis below.
The fluence, defined as the total energy per unit area of the detector, is
$$S(1+z)(d_{em}/d\mathrm{\Omega })d_L^2(z),$$
(8)
where $`d_L(z)=3t_0(1+z)^{1/2}[(1+z)^{1/2}1]`$ is the luminosity distance.
The rate of GRBs originating at cusps in the redshift interval $`dz`$ and seen at an angle $`\theta `$ in the interval $`d\theta `$ is given by
$$d\dot{N}_{GRB}f_B\frac{1}{2}\theta d\theta (1+z)^1\nu (z)dV(z).$$
(9)
Here, $`\nu (t)n_l(t)/T_l2\alpha ^2t^4`$ is the number of cusp events per unit spacetime volume, $`T_l\alpha t/2`$ is the oscillation period of a loop, $`dV=54\pi t_0^3[(1+z)^{1/2}1]^2(1+z)^{11/2}dz`$ is the proper volume between redshifts $`z`$ and $`z+dz`$, and we have used the relation $`dt_0=(1+z)dt`$.
Since different cusp events originate at different redshifts and are seen at different angles, our model automatically gives a distribution of durations and fluences of GRBs. The angle $`\theta `$ is related to the Lorentz factor of the relevant portion of the string as $`\theta 1/\gamma `$, and from Eqs.(6),(8) we have
$$\gamma (z;S)\gamma _0\alpha _8^1S_8^{1/3}B_7^{2/3}[(\sqrt{1+z}1)^2\sqrt{1+z}]^{1/3}.$$
(10)
Here, $`\gamma _0190,\alpha _8=\alpha /10^8`$, and the fluence $`S`$ and the magnetic field $`B_0`$ are expressed as $`S=S_810^8erg/cm^2`$ and $`B=B_710^7G`$.
Very large values of $`\gamma \gamma _{max}`$, which correspond (for a given redshift) to largest fluences, may not be seen at all because the radiation is emitted into a too narrow solid angle and the observed rates of these events are too small. The minimum value $`\gamma (z;S_{min})`$ is determined by the smallest fluence that is observed, $`S_{min}210^8erg/cm^2`$. Another limit on $`\gamma `$, which dominates at small $`z`$, follows from the condition of compactness and is given by $`\gamma 100`$ (see below).
The total rate of GRBs with fluence larger than $`S`$ is obtained by integrating Eq.(9) over $`\theta `$ from $`\gamma _{max}^1(z)`$ to $`\gamma ^1(z;S)`$ and over $`z`$ from $`0`$ to $`\mathrm{min}[z_m;z_B]`$, with $`z_m`$ from $`\gamma _{max}(z_m)=\gamma (z_m;S)`$. For relatively small fluences, $`S_8<S_c=0.03(\gamma _{max}(0)\alpha _8/\gamma _0)^3B_7^2`$, $`z_B<z_m`$ and we obtain
$`\dot{N}_{GRB}(>S)`$ $``$ $`{\displaystyle \frac{f_B}{2\alpha ^2t_0^4}}{\displaystyle _0^{z_B}}𝑑V(z)(1+z)^5\gamma ^2(z;S)`$ (11)
$``$ $`310^2S_8^{2/3}B_7^{4/3}yr^1.`$ (12)
Remarkably, this rate in our model does not depend on any string parameters and is determined (for a given value of $`S`$) almost entirely by the magnetic field $`B_0`$. The predicted slope $`\dot{N}_{GRB}(>S)S^{2/3}`$ is in a reasonable agreement with the observed one $`\dot{N}_{obs}(>S)S^{0.55}`$ at relatively small fluences .
For large fluences $`S_8>S_c`$, integration of Eq.(9) gives $`\dot{N}_{GRB}(>S)S^{3/2}`$. Observationally, the transition to this regime occurs at $`S_810^210^3`$. This can be accounted for if the cusp development is terminated by small-scale wiggles with fractional energy in the wiggles $`ϵ10^7\alpha _8^2B_7^{4/3}`$. Alternatively, if $`\gamma _{max}`$ is determined by the back-reaction of the charge carriers, Eq.(3), then the regime (12) holds for larger $`S_8`$, and observed steepening of the distribution at large $`S`$ can be due to the reduced efficiency of BATSE to detection of bursts with large $`\gamma `$. Indeed, large $`\gamma `$ results in a large Lorentz factor $`\gamma _{CD}`$ of the emitting region (see below), and at $`\gamma _{CD}10^3`$ photons start to escape from the BATSE range.
The duration of a GRBs originating at redshift $`z`$ and having fluence $`S`$ is readily calculated as
$$\tau _{GRB}200\frac{\alpha _8^4B_7^2}{S_8}(1+z)^1(\sqrt{1+z}1)^2s$$
(13)
Estimated from Eq.(7), $`\tau _{GRB}^{max}10^3\alpha _8s`$, while from Eq.(13) using $`S_{max}110^4erg/cm^2`$ and $`zz_B4`$, one obtains $`\tau _{GRB}^{min}3\alpha _8^4B_7^2ms`$. This range of $`\tau _{GRB}`$ agrees with observations for $`\alpha _81`$ (i.e. $`\eta 210^{14}GeV`$) and $`B_71`$.
Let us now turn to the hydrodynamical phenomena in which the gamma radiation of the burst is actually generated. The low-frequency electromagnetic pulse interacting with surrounding gas produces an ultrarelativistic beam of accelerated particles. This is the dominant channel of energy loss by the pulse. The beam of high energy particles pushes the gas with the frozen magnetic field ahead of it, producing an external shock in surrounding plasma and a reverse shock in the beam material, as in the case of “ordinary” fireball (for a review see ). The difference is that the beam propagates with a very large Lorentz factor $`\gamma _b\gamma `$ where $`\gamma `$ is the Lorentz factor of the cusp. (The precise value of $`\gamma _b`$ is not important for this discussion; it will be estimated in a subsequent publication .) Another difference is that the beam propagates in a very low-density gas. The beam can be regarded as a narrow shell of relativistic particles of width $`\mathrm{\Delta }l/2\gamma ^3`$ in the observer’s frame.
The gamma radiation of the burst is produced as synchrotron radiation of electrons accelerated by external and reverse shocks. Naively, the duration of synchrotron radiation, i.e. $`\tau _{GRB}`$, is determined by the thickness of the shell as $`\tau _{GRB}\mathrm{\Delta }`$. This is confirmed by a more detailed analysis, as follows. The reverse shock in our case is ultrarelativistic . The neccessary condition for that, $`\rho _b/\rho <\gamma _b^2`$, is satisfied with a wide margin (here $`\rho _b`$ is the baryon density in the beam and $`\rho `$ is the density of unperturbed gas). In this case, the shock dynamics and the GRB duration are determined by two hydrodynamical parameters . They are the thickness of the shell $`\mathrm{\Delta }`$ and the Sedov length, defined as the distance travelled by the shell when the mass of the snow-ploughed gas becomes comparable to the initial energy of the beam. The latter is given by $`l_{Sed}(_{iso}/\rho )^{1/3}`$.
The reverse shock enters the shell and, as it propagates there, it strongly decelerates the shell. The synchrotron radiation occurs mainly in the shocked regions of the shell and of the external plasma. The surface separating these two regions, the contact discontinuity (CD) surface, propagates with the same velocity as the shocked plasma, where the GRB radiation is produced.
The synchrotron radiation ceases when the reverse shock reaches the inner boundary of the shell. This occurs at a distance $`R_\mathrm{\Delta }l_{Sed}^{3/4}\mathrm{\Delta }^{1/4}`$ when the Lorentz factor of the CD surface is $`\gamma _{CD}(l_{Sed}/\mathrm{\Delta })^{3/8}0.1B_7^{1/4}n_5^{1/8}(1+z)^{1/2}\gamma ^{3/2}`$, where $`n_5`$ is the ambient baryon number density in units of $`10^5`$ cm<sup>-3</sup>. Note that these values do not depend on the Lorentz factor of the beam $`\gamma _b`$ and are determined by the cusp Lorentz factor $`\gamma `$. The size of the synchrotron emitting region is of the order $`R_\mathrm{\Delta }`$, and the Lorentz factor of this region is equal to $`\gamma _{CD}`$. The compactness condition requires $`\gamma _{CD}10^2`$, which yields $`\gamma 10^2`$ used above. The duration of GRB is given by
$$\tau _{GRB}R_\mathrm{\Delta }/2\gamma _{CD}^2l/2\gamma ^3,$$
(14)
i.e. it is equal to the duration of the cusp event given by Eq.(7). The energy that goes into synchrotron radiation is comparable to the energy of the electromagnetic pulse.
We conclude with several remarks on our GRB model.
(i) Magnetic fields in our scenario could be generated in young galaxies during the bright phase of their evolution and then dispersed by galactic winds in the intergalactic space. One expects that at present the fields are concentrated in the filaments and sheets of the large-scale structure . With sheets of characteristic size $`L(2050)h^1`$ Mpc and thickness $`D5h^1`$ Mpc, we have $`f_BD/L0.1`$. The average field strength can be estimated from the equipartition condition as $`B_010^7`$ G . We expect these values to be accurate within an order of magnitude.
(ii) We emphasize that our model involves a number of simplifying assumptions. All loops at cosmic time $`t`$ were assumed to have the same length $`l\alpha t`$, while in reality there should be a distribution $`n(l,t)`$. The evolution law (5) for $`B(z)`$ and the assumption of $`f_B=\mathrm{const}`$ are also oversimplified. A more realistic model should also account for a spatial variation of $`B`$. As a result, the correlation between $`\tau `$ and $`S`$ suggested by Eq.(13) will tend to be washed out. Finally, we disregarded the possibility of a nonzero cosmological constant which would introduce some quantitative changes in our estimates.
(iii) Apart from the reverse shock instability, small-scale wiggles on strings can naturally produce short-time variation in GRBs. These wiggles, acting like mini-cusps, produce a sequence of successive fireballs, which are the usual explanation of GRB short-time structure .
(iv) Cusps reappear on a loop, producing nearly identical GRBs with a period $`T_l\alpha t/250(1+z)^{1/2}yr`$. In a more realistic model, some fraction of loops would have lengths smaller than $`\alpha t`$ and shorter recurrence periods.
(v) Our model predicts that GRBs should be accompanied by strong bursts of gravitational radiation. The angular distribution of the gravitational wave energy around the direction of the cusp is $`d_g/d\mathrm{\Omega }G\mu ^2/\theta `$, and the dimensionless amplitude of a burst of duration $`\tau `$ originating at redshift $`z`$ can be estimated as
$$hk_g(G\mu )^2(\tau /\alpha t_0)^{1/3}(1+z)^{1/3}[(1+z)^{1/2}1]^1,$$
(15)
or $`h10^{21}\alpha _8^{5/3}z^1(\tau /1s)^{1/3}`$ for $`z1`$. Here, we have used the relation $`F_gh^2/G\tau ^2(1+z)(d_g/d\mathrm{\Omega })d_L^2(z)`$ for the gravitational wave flux and Eq.(7) for the burst duration $`\tau `$. These gravitational wave bursts are much stronger than expected from more conventional sources and should be detectable by the planned LIGO, VIRGO and LISA detectors.
(vi) GRBs have been suggested as possible sources of the observed ultrahigh-energy cosmic rays (UHECR) . This idea encounters two difficulties. If GRBs are distributed uniformly in the universe, UHECR have a classical Greisen-Zatsepin-Kuzmin (GZK) cutoff, absent in the observations. The acceleration by an ultrarelativistic shock is possible only in the one-loop regime (i.e. due to a single reflection from the shock) . For a standard GRB with a Lorentz factor $`\gamma _{sh}300`$ it results in the maximum energy $`E_{max}\gamma _{sh}^2m_p10^{14}eV`$, far too low for UHECR.
Our model can resolve both of these difficulties, assuming that $`gamma_{max}`$ is determined by the current backreaction, Eq.(3).
If the magnetic field in the Local Supercluster (LS) is considerably stronger than outside, then the cusps in LS are more powerful and the GZK cutoff is less pronounced.
Near-cusp segments with large Lorentz factors produce hydrodynamical flows with large Lorentz factors, e.g. $`\gamma 210^4`$ corresponds to $`\gamma _{CD}310^5`$ and $`E_{max}\gamma _{CD}^2m_p110^{20}eV`$. Protons with such energies are deflected in the magnetic field of LS and can be observed, while protons with much higher energies caused by near-cusp segments with $`\gamma 10^5`$ are propagating rectilinearly and generally are not seen. Further details of this scenario will be given elsewhere .
We are grateful to Ken Olum for useful discussions. The work of AV was supported in part by the National Science Foundation and the work of VB and BH by INTAS through grant No 1065. |
no-problem/0001/cond-mat0001108.html | ar5iv | text | # Decay on several sorts of heterogeneous centers: Special monodisperse approximation in the situation of strong unsymmetry. 2. Numerical results for the total monodisperse approximation
## 1 Preliminary remarks
In we have investigated the process of nucleation in the situation of the strong unsymmetry. We have analysed the system of condensation equations and suggested three different approximations.
The first approximation is the total monodisperse approximation. It has been already suggested in and is a rather natural one. In this approximation the total number of the droplets on the first type centers are regarded as those formed at the initial moment of time. Then all these droplets have now one and same size which can be easily calculated. It equals to $`z`$.
Certainly this approximation is suitable in the case of the strong unsymmetry. Namely in this case it was used in . But this approximation can be applied in some other cases. This approximation can be used to estimate the errors of some other approximations.
It is clear that the total monodisperse approximation is more rough than the special monodisperse approximation and the floating monodisperse approximation. Then we shall estimate the errors of the mentioned approximations by the the error of the total monodisperse approximation.
In the special monodisperse approximation we have to introduce the characteristic size $`\mathrm{\Delta }z`$ of the length of the spectrum due to the supersaturation fall. This value is well described in , . Then we have to imagine that the influence of the droplets formed on the first type centers can be described as the monodisperse peak with the number of droplets determined by the special recipe.
To determine the number of droplets in the monodisperse spectrum we must calculate the number of the first type heterogeneous centers which became the centers of the droplets until $`\mathrm{\Delta }z/4`$. It can be done without any influence of the second type heterogeneous centers taken into account.
The reason of concrete choice of the size $`\mathrm{\Delta }z/4`$ is described in in details. So, we needn’t to explain it here. We have only to note that this choice is equivalent to the specific choice of time, i.e. one has to calculate the number of the droplets on the first type centers formed until the first quarter of the nucleation period. More rigorously we have to speak here about the finish of the nucleation period due to the fall of supersaturation.
In fact we can act without the special monodisperse approximation but only with the help of the floating monodisperse approximation. This approximation is similar to the already described one but has one specific feature. In the floating monodisperse approximation the influence of the droplets formed on the first type centers at the ”moment” $`z`$ can be presented as $`z^3N_1(z/4)`$, where $`N_1(z/4)`$ is the number of the droplets formed on the first type centers until $`z/4`$. So, this approximation is formulated for all moments of time and can be used in the arbitrary moment.
Certainly when $`z`$ is near $`\mathrm{\Delta }z`$ this approximation coincides with the special monodisperse approximation. But this approximation is more simple and universal than the previous one. We shall use below the floating monodisperse approximation instead of the special monodisperse one.
## 2 Calculations for the total monodisperse approximation
Now we shall turn to estimate errors of approximations. The errors of substitutions of the subintegral functions by the rectangular form are known and they are rather small. But the error of the approximation itself has to be estimated.
The error of the number of droplets formed on the first type of heterogeneous centers can be estimated in frame of the standard iteration method and it is small. So, we need to estimate only the error in the number of the droplets formed on the second type of heterogeneous centers.
It is absolutely clear that the worst situation occurs when there is no essential exhaustion of heterogeneous centers of the second type.
It seems that the monodisperse approximation will be the worst for the pseudo homogeneous situation, i.e. when the first type centers remain practically unexhausted. But as far as we haven’t any direct proof of this property we shall consider the situation with the arbitrary power of exhaustion.
As the result we can consider the system of the following form
$$G=_0^z\mathrm{exp}(G(x))\theta _1(x)(zx)^3𝑑x$$
$$\theta _1=exp(b_0^z\mathrm{exp}(G(x))𝑑x)$$
with a positive parameter $`b`$ and have to estimate the error in
$$N=_0^{\mathrm{}}\mathrm{exp}(lG(x))𝑑x$$
with some parameter $`l`$.
Parameter $`l`$ shows that we doesn’t consider the influence of the first centers nucleation on itself but analyze the influence of the first centers nucleation on another process with another parameters. This differs our consideration from that made in .
We shall solve this problem numerically and compare our result with some models. In the model of the total monodisperse approximation we get
$$N_A=_0^{\mathrm{}}\mathrm{exp}(lG_A(x))𝑑x$$
where $`G_A`$ is
$$G_A=\frac{1}{b}(1\mathrm{exp}(bD))x^3$$
and the constant $`D`$ is given by
$$D=_0^{\mathrm{}}\mathrm{exp}(x^4/4)𝑑x=1.28$$
We have tried the total monodisperse approximation for $`b`$ from $`0.2`$ up to $`5.2`$ with the step $`0.2`$ and for $`l`$ from $`0.2`$ up to $`5.2`$ with a step $`0.2`$. We calculate the relative error in $`N`$. The results are drawn in fig.1 for $`N_A`$. Here $`r_1`$ is the relative error of the total monodisperse approximation in the number of droplets.
We see that even for $`N_A`$ the relative error is small for all situations with moderate $`l`$. For big $`l`$ the error slightly increases. This corresponds to the evident fact that when $`l`$ is big the nucleation on the second sort centers is finished earlier than on the first sort centers. Here the total monodisperse approximation isn’t valid.
The growth of the error for big $`l`$ is rather slow but inevitably it will lead to the big value. Then this approximation will give a wrong result even in the order of the magnitude.
The calculations presented in further sections show that the maximum of errors in the floating monodisperse approximation lies near $`l=0`$. So, we have to analyse the situation with small values of $`l`$. It was done in fig.2 for $`N_A`$. We see that for $`N_A`$ this situation is even better than the previous situation. It is rather natural because the small values of $`l`$ correspond to more strong ierarchy.
Unfortunately the situation for the floating monodisperse approximation is another. We can not find the maximum error of the results for the monodisperse approximation. We see that this error has the maximum at small $`b`$. Then we have to calculate the situation with $`b=0`$. Here we have to solve the following equation
$$G=_0^{\mathrm{}}\mathrm{exp}(G(x))(zx)^3𝑑x$$
and to compare
$$N=_0^{\mathrm{}}\mathrm{exp}(lG)𝑑x$$
with
$$N_A=_0^{\mathrm{}}\mathrm{exp}(lDz^3)𝑑z$$
and other approximate expressions.
The results of this calculation are interesting mainly for floating monodisperse approximation and will be presented in the next sections.
Fig.1
The relative error of $`N_A`$ drawn as the function of $`l`$ and $`b`$. Parameter $`l`$ goes from $`0.2`$ up to $`5.2`$ with a step $`0.2`$. Parameter $`b`$ goes from $`0.2`$ up to $`5.2`$ with a step $`0.2`$.
One can see the essential negative slope when $`b`$ increases and the slight positive slope when $`l`$ increases.
Fig.2
The relative error of $`N_A`$ drawn as the function of $`l`$ and $`b`$. Parameter $`l`$ goes from $`0.01`$ up to $`0.11`$ with a step $`0.01`$. Parameter $`b`$ goes from $`0.2`$ up to $`5.2`$ with a step $`0.2`$.
One can see the essential negative slope when $`b`$ increases and the slight positive slope when $`l`$ increases. The qualitative character is absolutely the same as in fig. 1. |
no-problem/0001/physics0001033.html | ar5iv | text | # Dream of a Christmas lecture
## Abstract
We recall the origins of differential calculus from a modern perspective. This lecture should be a victory song, but the pain makes it to sound more as a oath for vendetta, coming from Syracuse two milenia before.
A visitor in England, if he is bored enough, could notice that our old 20 pound notes are decorated with a portrait of Faraday imparting the first series of ”Christmas lectures for young people”, which began time ago, back in the XIXth century, at his suggestion. Today they have become traditional activity in the Royal Institution.
This year the generic theme of the lectures was quantum theory and the limits implied by it. The BBC uses to broadcast the full sessions during the holidays, and I decided to enjoy an evening seeing the recording. This day, the third of the series, is dedicated to the time scale of quantum phenomena. The main hall is to be occupied, of course, by the children who have come to enjoy the experimental session, and the BBC director, a senior well trained to control this audience, keeps the attention explaining how the volunteers are expected to enter and exit the scene. While he proceeds to the customary notice, that ”all the demonstrations here are done under controlled conditions and you should not try to repeat them at home”, I dream of a zoom over a first bowl with some of the bank notes, and the teacher starting the lecture.
He wears the white coat and in a rapid gesture drops a match in the bowl, and the pieces of money take fire. The camera goes from the flames to the speaker, who starts:
Money. Man made, artificial, unnatural. Real and Untrue.
And then a slide of a stock market chart:
But take a look to this graph: Why does it move with the same equations that a grain of pollen? Why does it oscillate as randomly as a quantum mechanic system?.
Indeed. It is already a popular topic that the equations used for the derivative market are related to the heat equation, and there is some research running in this address. But the point resonating in my head was a protest, formulated a couple of months before by Mr. W.T. Shaw, a researcher of financial agency, Nomura:
”Money analysts get volatility and other parameters from the measured market data, and this is done by using the inverse function theorem. If a function has a derivative non zero in a point, then it is invertible at this point. But, if we are working out discrete calculus, if we are getting discrete data from the market, how can we claim that the derivative is non zero? Should we say that our derivative is almost non zero? What control do we have over the inversion process?”.
Most meditations in this sense drive oneself to understand the hidings under the concept of stability of a numerical integration process.
But consider just this: discrete, almost zero, almost nonzero calculus!. It is a romantic concept by itself. Infinitesimals were at the core of the greatest priority dispute in Mathematics. On one side, at Cambridge, the second Lucasian chair, Newton. On the other side, at the political service of the elector of Mainz, the mathematical philosophy of Leibniz. And coming from the dark antiquity, old problems: How do you get a straight line from a circle? How do you understand the area of any figure? What is speed? Is the mathematical continuum composed of indivisible ”individua”, mathematical ”atoms without extension”?
Really all the thinking of calculus is pushed by two paradoxes. That one of the volume and that one of the speed.
The first one comes, it is said, from Democritus. Cut a cone with a plane parallel and indefinitely near to the basis. Is the circle on the plane smaller or equal than the basis?
Other version makes the infinite more explicit. Simply cut the cone parallel to the basis. The circle in the smaller cone should be equal to the one in the top of the trunk. But this happens for every cut, lets say you make infinite cuts, always the circles will be equal. How is that different of a cylinder? You can say, well, that the shape, the area, decreases between the cuts, no in the cuts. Ok, good point. But take a slice bounded by two cuts. As we keep cutting we make the slice smaller, indefinitely thinner, until the distinction between to remove a slice and to make a cut is impossible. How can this distinction be kept? And we need to kept it in a mathematically rigorous way, if possible.
The second paradox is a more popular one, coming from the meditations of Zeno. In more than one sense, it is dual to the previous one. Take time instead of height and position instead of circular area. How can an arrow to have a speed? How can an arrow to change position if it is resting at every instant? In other version, it is say that it can not move where it is fixed, and it can not move where it is not yet. Or, as Garcia-Calvo, a linguist and translator of Greek, formulated once: ”One does not kiss while he lives, one does not live when he kisses”.
Seriously taken, the paradox throws strong doubts about the concept of instantaneous speed. Or perhaps about the whole conception of what ”instantaneous” is. While Democritus asked how indefinite parts of space could add up to a volume, Zeno wonders how a movement can be decomposed to run across indefinite parts of time. A Wicked interplay.
It is interesting to notice that physicists modernly do not like to speak of classical mechanics as a limit $`\mathrm{}0`$, but as a cancellation of the trajectories that differ from the classical one. Perhaps this is more acceptable. Anyway, the paradoxes were closed in false by Aristotle with some deep thoughts about the infinity. Old mathematics was recasted for practical uses and, at the end, lost.
But in the late mid ages, some manuscripts were translated again. A man no far from my homeland, in the Ebro river in Spain, took over a Arabic book to be versed into Latin. It was the Elements, that book all you still ”suffer” in the first courses of math in the primary school, do you remember? Circles, angles, triangles, and all that. And, if your teacher is good enough, the art of mathematical skepticism and proof comes with it.
Of course the main interesting thing in the mid ages is a new art, Algebra. But that is a even longer history. To us, our interest is that with the comeback of geometry, old questions were again to be formulated. If continuous becomes, in the limit, without extension, then is such limit divisible? And if it is indivisible, atomic, how can it be?
That automatically brings up other deterred theory to compare with. That one which postulates Nature as composed with indivisible atoms but, having somehow extension, or at least some vacuum between them.
Such speculation had begun to be resuscitated in the start of the XVII, with Galileo Galilei himself using atomic theory to justify heat, colours, smell. His disciple Vincenzo Viviani will write, time-after, that then, with the polemic of the book titled ”Saggiatore”, the eternal prosecution of Galileo actions and discourses began.
Mathematics was needing also such atomic objects, and in fact the first infinitesimal elements were named just that, atoms, before the modern name was accepted.
(By the way, Copernico in “De revolutionibus” explains how the atomic model, with its different scales of magnitude, inspires the astronomical world: the distance of the earth to the center of the stars sphere is said to be negligible by inspiration from the negligibility of atomic scale. It is very funny that some centuries later someone proposed the ”planetary” model of atoms.)
Back to the lecture. Or to the dream. Now the laboratory has activated a sort of TV projector bringing images from the past. Italy.
Viviani. He made a good effort to recover Archimedes and other classical geometers. So it is not strange that the would-to-be first lucasian, Isaac Barrow, become involved when coming to Florence. And Barrow understood how differentiation and integration are dual operations.
Noises…
Perhaps Barrow learn of it during his Mediterranean voyage
Noises of swords and pirates sound here in the TV scene, and Barrow himself enters in the lecture room.
He is still blooding from the encounter with the pirates. Greets the speaker, cleans himself, and smiles to the children in the first row:
”We become involved in a stupid war. Europe went to war about sacraments, you know, the mystery of eucharistic miracle and all that niceties. And there we were, with individia, indivisilia, atoms… things that rule out difference between substance and accidents. You can not make a bread into a divine body if it is only atoms, they say.”
Indeed, someone filed a denounce against Galileo claiming that is theory was against the dogma of transustantiation. Touchy matter, good for protestant faith but not for the dogmas from Trent concilium.
For a moment he raises the head, staring to us, in the upper circle. Then he goes back to the young public: ”Yes, there was war. Protestants, Catholics, Anglicans. Dogmas and soldiers across Europe. Bad time to reject Aristotle, worse even to bring again Democritus. With Democritus comes Lucretius, with Lucretius comes Epicurus. Politically inconvenient, you know. Do the answers pay the risk?”
He goes away. He went away to Constantinople, perhaps to read the only extant copy of the Archimedean law. Perhaps he found the lost Method. Perhaps he lost other books when his ship was burned in Venice.
Yes, Bourbaki says (according ) that Barrow was the first one proofing the duality between derivatives and integration. At least, with his discrete ”almost zero” differential triangle, doubting about the risks of jumping to the limit, was closer to our modern view. Three or four years ago Majid, then still in Cambridge, claimed its resurrection in the non commutative calculus $`f(x)dx=dxf(x\lambda )`$. Even the formulation of fermions in the lattice, according Luscher, depends on this relationship to proof the cancellation of anomalies.
Also we would note that his calculus was ”renormalized” to a finite scale, as instead of considering directly $`\mathrm{\Delta }f/\mathrm{\Delta }x`$, he first scaled this relationship to a finite triangle with side $`f(x)`$. The freedom to choose either the triangle on $`f(x)`$ or the one in $`f(x+\lambda )`$ was lost when people start to neglect this finite scale.
Really, this is mathematical orthodoxy. Consider a series $`\mathrm{sin}(1/n)`$ and another one $`1/n`$, both going to zero. The quotient, then, seems to go to an indefinite $`0/0`$, but if you scale all the series to a common denominator, call it $`S_a(n)/a`$, you will find that $`S_a(n)`$ goes to $`a`$ as $`n`$ increases. Wilson in the seventies made the same trick for statistical field theory (or for quantum field theory), which was at that moment crowded of problematic infinities.
There is also a infinite there in the Barrow idea, but it is a very trivial one. Just the relation between the vertical of the finite triangle and the horizontal of the small one, $`\frac{f(x)}{\mathrm{\Delta }x}`$. It goes to infinity, but this divergence can be cured by subtracting another infinite quantity, $`\frac{f(x+\lambda )}{\mathrm{\Delta }x}`$, so that the limit is finite<sup>1</sup><sup>1</sup>1This example was provided by Alain in Vietri at the request of the public, but it was not to be related to the hoft algebra of trees, as far as I can see.
Barrow died in sanctity. But in his library there was no less that three copies of Lucretius ”De Rerum Natura”, a romam poem about atomistic Nature, already critiquized in the antiquity because in supports the Epicurean doctrine: that gods, if they exist, are not worried about the human affairs, so we must build our moral values from ourselves and our relationships with our friends and society.
In the lecture room, the slides fly one ager other. Back in the XVII, with heat, smell, colour, and other accidents, black storms blow in the air. It has been proposed that the sacred eucaristic mystery was in agreement with Aristotle, as it could be said that the substance of wine and bread was substituted by the substance of the Christ, while the accidents remained. Go tell to the Luterans.
First August 1632. The Compañia de Jesus forbids the teaching of the atomic doctrine. 22nd June 1633, Galileo recants. “Of all the days that was the one / An age of reason could have begun”
In the ”Saggiatore”, Galileo had begun to think of physical movement of atoms as the origin of the heat. It would take three centuries for Einstein to get the Brownian key. But even that was already disentangled of pure mathematics, so it took some other half century for to discover the same equations again, now for the stock market products. The history has not finished.
It sounds not to surprise that Dimakis has related discrete calculus to the Ito calculus, the basic stochastic in the heat equation, the play of money, that Black and Scholes rediscovered. In some sense it is as if the physical world described by mathematics were dependent on mathematics only, as it it were the unique answer to organise things in a localized position.
Dark clouds will block our view. Barrow survived to his ship and crossed Germany and come home to teach Newton. But Newton himself missed something greater when, for sake of simplicity, the limit to zero was taken. In this limit, he can claim the validity of series expansion to solve any differential equation, so it is a very reasonable assumption. Yes, but it had been more interesting to control the series expansion even without such limit.
Leibniz come to the same methods and the jump to the limit is to be the standard. Mathematical atoms, scales and discrete calculus will hide its interplay with the infinitesimal ones for some centuries. Only two years ago Mainz, in voice of Dirk Kreimer, got again the clue to generalized Taylor series. The wood was found to be composed of trees.
Vietri is a small village in the Tyrrenian sea, near Salerno, looking at the bay of Amalfi. Good fishing and intense limoncello liquor. About the 20th of Mars, 1998, there Alain come, to explain the way Kreimer had found a Hopf algebra structure governing perturbative renormalization. The algebra of trees was not only related to Connes Moscovici algebra, but also with the old one proposed by Cayley to control the Taylor series of the vector field differential equation.
And to close the circle, Runge-Kutta numerical integration algorithms can be classified with a hoft algebra of trees. Today it can be said that the generic solution to a differential equation is not just the function, but also some information codified in the Butcher group. Which can be related to the physics monster of this century, the renormalization group we have mentioned before.
Can we control the inversion of the Taylor series using trees? Then these doubts about the inverse function theorem in stock markets could be sorted out. Will us be able to expand in more than one variable? Still ignorabimus.
Worse, it is progressively clear that this kind of pre-Newtonian calculus are a natural receptacle for quantum mechanics. Even the stock market Ito equations are sometimes honoured as ”Feynman-Kac-Ito” formula, so marking its link with the quantum world. The difference comes from the format of the time variable in both worlds. One should think that time is more subtle than the intuitive ”dot” that Newton put in the fluxion equations.
Perhaps we are now, then, simply correcting a flaw made three hundred years ago. A flaw that Nature pointed to us, when it was clear the failure of classical mechanics in the short scale.
But how did come to exist the conditions to such failure? Why did geometry need to be reborn in the XVIIth century? Why did the mathematicians so little information, so that the mistake has a high probability<sup>2</sup><sup>2</sup>2And, by the way, Why there was not the slightest notion of probability in the old mathematics texts, so they were unable even to consider it? to happen?
If calculus, or ”indivisilia”, were linked to atomism already in the old age, it could be a sort of explanation. Archimedes explain that Democritus was the first finding ”without mathematical proof” the volume of the cone. And with Democritean science there was political problems already in Roman times:
A leftist scholar, Farrington, claims that political stability was thought to reside in some platonic tricks, lies, proposed in ”Republic”: a solid set of unskeptic faith going up the pyramid until the divine celestial gods. Epicurus is seen as a fighter for freedom, putting at risk social stability. Against Plato ideals, Epicurus casts in his help the Ionian learning, including Democritean mathematics and physics. According Farrington, if government aspires to platonic republic, it must control or suppress such kind of mathematics and physics.
No surprise, if this is true, that the man who understood the floating bodies and the centers of gravity, who stated the foundations and integration, the process of mechanical discovery and mathematical rigor, who was fervently translated by Viviani and Barrow, was killed and dismissed. To be buried without a name, it could have been Archimedes’ own wish. But to get his books left out of the copy process for centuries until there were extinct, that is a different thing.
”Only a Greek copy of the Floating Bodies extant, found at Constantinople. See here the palimpsest, the math almost cleared, a orthodox liturgy, perhaps St John Chrisostom, wrote above instead.”
”Let me to pass the pages, and here you have, the only known version of Archimedes letter about the Mechanical Method. Read only by three persons, perhaps four, since it was deleted in the Xth century. Was this reading the goal of Barrow in orient?”
And even then, is it the same? It has been altered, the last occasion in this century, when someone painted four evangelists over the Method.
Hmm. Last Connes report quotes the Floating Bodies principle, doesn’t it?. More and more associations. Stop!
And recall.
Had our research been different if we had been fully aware of the indivisilia problems, if we had tried hard for rigour? Perhaps. Only in the XVI, rescued Apollonius and Archimedes, the new mathematics re-taken the old issue. And, as we have seen, in a dark atmosphere. Enough to confuse them and go into classical mechanics instead of deformed mechanics. Instead of quantum mechanics.
The matter of copernicanism has been usually presented as a political issue. Brecht made a brilliant sketch of it while staying in Copenhagen with some friends, physicists which become themselves caught in the dark side of our own century. We suspect that the matter of atomism also has suffered because of this, and now it appears that Differential Geometry itself has run across a world of troubles since the assesination of his founder in Sicily two milenia ago. The truth has been blocked again and again by the status quo, by the ”real world” preferring tales of stable knowledge to inquisitive minds learning to crawl across, and with, the doubt.
If the goal of the Christmas lectures is to move young people to start a career in science, here is our statement: it is for the honour of human spirit. It is because understanding, reading the book of nature, we calm our mind. Call it ataraxia, athambia, or simply tranquility.
But we have been mistaken, wronged, delayed. The world has tricked, outraged, raped us. When we have been wronged, should we not to revenge? Then our main motivation is here: when reality is a lie, the song of science must be a song of vengeance. A man in Syracuse has been killed, all our milenia-old family has been dishonoured. Every mother, every child, every man in Sicily knows then the word. Vendetta.
Go to your blackboards, my children, and sing the song. Just to clear any trace of pain in the soul. |
no-problem/0001/cond-mat0001194.html | ar5iv | text | # Distributions of forces and ’hydrodynamic clustering’ in a shear thickened colloid.
What is the physics of concentrated particles in flow ? Detailed observation is hard, however, simulations of powders/granular media , have provided much insight. They have shown exponential distributions of force and force-chains. The origin of the distribution of force has motivated much theory \[3-6\], although in the main for static systems.
This letter examines force transmission in steady states of sheared colloid systems. It concerns shear thickening, a dramatic effect in which, at high shear rates, concentrated colloids develop high, and sometimes diverging, viscosities . Understanding thickening is of considerable interest \[8-12\]. The concept of ’hydrodynamic clustering’ has been introduced to explain thickening, but its precise meaning has not been explored. The main aim of this letter is to clarify ’hydrodynamic clustering’ , but on route to this commonalities will be found with dry powders.
This work considers colloids of rigid particles suspended in a fluid. Particle motion is overdamped. The equation of motion is
$$𝐑𝐕+𝐅_𝐂+𝐅_𝐁=\mathrm{𝟎},$$
(1)
where $`𝐕,𝐅_𝐂,𝐅_𝐁`$ are $`6N`$ velocity/angular-velocity and colloid and Brownian forces/torque vectors and $`𝐑`$ is a $`6N\times 6N`$ drag matrix of dissipative hydrodynamic interactions.
The results below are, in the main, for a model which includes just the squeeze mode of hydrodynamic lubrication forces between near neighbours. For spheres $`i`$ and $`j`$ with velocities $`𝐯_𝐢`$ and $`𝐯_𝐣`$ this determines a force $`𝐟_{\mathrm{𝐢𝐣}}`$ on particle $`i`$
$$𝐟_{\mathrm{𝐢𝐣}}=\alpha (h)(𝐯_𝐢𝐯_𝐣)𝐧_{\mathrm{𝐢𝐣}}𝐧_{\mathrm{𝐢𝐣}},$$
(2)
where $`𝐧_{\mathrm{𝐢𝐣}}`$ is the unit vector from centre $`i`$ to $`j`$, $`\alpha (h)`$ the drag coefficient and $`h`$ is the gap between the sphere surfaces. $`𝐑`$ is made up of such pair terms. In the absence of dissipations for shear modes, the particles are ’frictionless’. Some simulations have also been performed that include the shear mode.
The algorithm for simulating Eq. (1) with pair terms is given in . The lubrication force opposes the relative motion of the particles. At the high concentrations of interest it is argued that these terms will dominate the long ranged mobility matrix in shear flow. In the case of a hard sphere model the drag coefficient $`\alpha (h)`$ is given by the well known Reynolds formula which diverges as $`\mathit{1}/h`$. The particles simulated here have crude models for short range polymer coats with both a conservative repulsive force and a lubrication interaction modified from that of hard spheres. The coats define a particle contact which is crucial to the physics discussed below.
Experiment ,and simulation , establish that thickening is dominated by contributions from the lubrication forces. However, Reynolds lubrication is insufficient to drive strong thickening. Crude models of particle pairs shearing past each other , lead to the conclusion that any divergence of bulk properties can only be as the logarithm of the inverse gap of closest approach $`h_{min}`$ . It was suggested that N-body effects may be necessary to explain the stronger divergences of experiment.
However, simulations of spheres with Reynolds lubrication do show only logarithmic divergence, much weaker than experiment . The correct interpretation of the arguments in ref. is not that N-body effects are necessary, but that lubrication interactions stronger than Reynolds are necessary. Lubrication interaction can be strongly enhanced by polymer coats on the particles . On compression of the coats the lubrication divergence switches to a higher power of $`(1/h)`$ than Reynolds lubrication. The strength of its lubrication coefficient, depends on its porosity (the smaller the pores, the stronger the lubrication coefficient; see for a plot of this interaction). The coat also has an elastic modulus. It was shown previously that this model fits experimental data .
Simple pair arguments explain the onset of thickening. Thickening occurs once the bulk stress is sufficient to compress the coats against their conservative (spring) forces to a point at which the relaxation time of the coats, $`\tau _c`$ is longer than the inverse shear rate $`1/\dot{\gamma }`$. This is confirmed by the simulations.
Although we can understand the onset of thickening at the pair level, there is also a dependence on volume fraction, with strong thickening occurring only for $`\varphi _v>0.50`$. Simulations do reveal many body effects. Simulations motivated a model of thickening by hydrodynamic clustering: agglomerates of particles bound together by lubrication forces. Assuming such clusters are rigid, explains enhanced stress. However lubrication forces are only active under relative motion, so rigidity and agglomeration under lubrication are not obviously compatible. Indeed the ’clusters’ are reported to be transient and observed to rapidly disappear after cessation of flow \- they exist in response to the stress.
In the simulations below, the thickening regime when $`\dot{\gamma }\tau _c>1`$ is identified with a network of coat contacts of mean coordination in the range 5-6. It is this network that constitutes ’hydrodynamic clustering’. The geometry and kinetics of this network will be explored in detail.
The simulated model has a coat thickness set at $`0.01d`$ and other parameters choosen to fit experiments on perspex particles . The shear rate is defined as the dimensionless Peclet number. Figure 1 shows simulation data at volume fraction $`\varphi _v=0.54`$. Below $`Pe=200`$, the viscosity is close to flat and the particles flow in an ordered array of strings aligned along the flow direction. The system jumps from ordered flow to disordered flow from $`Pe=200`$ to $`Pe=500`$ where it continues to be disordered and thicken at higher rates. This order-disorder transition during the initial stages of thickening is common for mono-disperse spheres, however it is not sufficient nor necessary for thickening which is a feature of the disordered flow. We report elsewhere systems where the onset of thickening is from regimes of disorder and charged systems at $`\varphi _v<0.50`$ which show order disorder changes with minimal thickening. In any case, this letter is a study of the steady state thickening effect within the disordered regime. (The steady state should be contrasted with jamming in hard spheres . A jamming at a critical shear stress can also occur for models with coat interactions if the springs have a maximal force)
Thickening is accompanied by large normal stress differences. In agreement with experiment, $`N_1`$ is negative and the coat lubrication forces are the dominant contribution to the stress tensor. At $`Pe=200`$, $`\dot{\gamma }\tau _c>1`$ and remains so up to circa $`Pe=10^4`$ where the spring force of the highly compressed coats grows rapidly and the divergence of the lubrication coefficient weakens, bringing $`\dot{\gamma }\tau _c`$ close to unity and lowering the thickening.
Figure 2 shows, for several shear rates, the probability distribution for the magnitude of the lubrication force on individual bonds. Bonds are defined as Voronoi near neighbours. Data is shown at four different shear rates. It was confirmed that the effects reported are independent of systems size above N=500 and independent of time step over a decade of variation. On the linear-log plot the distributions at high force are clearly seen to be exponential.
In the ordered phase at $`Pe=100`$ the distribution is a single exponential for the bonds in extension and dominated by a single exponential for the bonds in compression (there is negligible 2nd exponential in the tail). These are not observable on the scale of figure 2.
In the thickening regime at $`Pe=500,1000,3000`$ the distributions over the full range are fit by the sum of two exponential decays, one particular example is shown in the inset. The system is characterised by two well separated characteristic forces. Bulk averages and the thickening are dominated by the growth of the high force distributions. Examination shows that the distribution with the decay at higher force (henceforth the contact network) is just that of the bonds with coats in contact; 45-50% of the bonds are in the contact network; the mean number of contacts per particles is $`5.2,5.7,5.9`$ for $`Pe=500,1000,3000`$ respectively. It is noted that the coordination number is approaching, from below, that of an isostatic network of frictionless particles . In the ordered regime a percolating contact network also exists, but is of lower coordination circa 3.
The finding of an exponential distribution of forces is common to recent simulations and experiment on dry granular media. In shaken powders a bimodal distribution was found . In sheared powders large fluctuations in normal stress were reported . Note, we have also investigated force distributions when aggregation forces are present and for hard spheres with Reynolds lubrication and short repulsive springs. Both these show exponential distributions. The distribution is not the result of a particular force law. Physical argument for this distribution have been given ; it is geometric in origin.
The geometry of the network was studied. Overall, the network is branched. By thresholding on force or interparticle gap, clusters of M particles were defined and their radius of gyration tensor examined. At high force above the high characteristic forces, these had one eigenvalue scaling as M and others scaling roughly root M. Although linear at high force, they are not straight ’force-chains’ . By summing just over the contacts partial stress tensors, $`\sigma `$, and a partial fabric tensor, F, were studied.
$$F=\underset{ij}{}𝐧_{\mathrm{𝐢𝐣}}𝐧_{\mathrm{𝐢𝐣}}.$$
(3)
For the systems at Pe=500,1000 and 3000, table 1 gives data from the high force distributions. For the Pe=100 system it shows data for bonds with $`|f_{bond}|>2\times 10^3`$. The flow geometry has y(x) the shear gradient(flow) direction with $`(x=y)`$ the compression and $`(x=y)`$ the extension directions. The table shows the viscosity $`\sigma _{xy}/Pe`$ and first normal stress difference $`N_1=(\sigma _{xx}\sigma _{yy})`$, and the x and y components of the normalised principal eigenvector of the fabric tensor. The ordered flow at Pe=100 has contacts in compression and extension just close to the gradient direction. For the thickened states contacts both in compression and extension contribute to the shear stress. However, contacts in extension/compression contribute roughly 2/3 to 1/3 of the normal stress difference. For contacts under compression, the fabric tensor has its principal axis just below the compression direction and is relatively independent of shear rate. For the bonds in extension its principal axis lies above the extension direction, thus enhancing their contribution to $`N_1`$, it tends more to the extension direction the higher the shear rate. The lubrication forces are symmetric from extension to compression, but the spring forces are not. It is the latter force that is responsible for the asymmetry between the fabric of the compression and extension wings.
| Peclet Com./Ext. | N1/Pe | visco | x (flow) | y (grad.) |
| --- | --- | --- | --- | --- |
| Pe=100 Com. | 0.7 | 0.4 | -0.43 | 0.9 |
| Pe=100 Ext. | -0.2 | 0.06 | 0.27 | 0.95 |
| Pe=500 Com. | -4 | 19 | -0.75 | 0.65 |
| Pe=500 Ext. | -6 | 9 | 0.58 | 0.81 |
| Pe=1000 Com. | -5 | 34 | -0.74 | 0.66 |
| Pe=1000 Ext. | -12 | 19 | 0.62 | 0.78 |
| Pe=3000 Com. | -11 | 55 | -0.74 | 0.68 |
| Pe=3000 Ext. | -22 | 37 | 0.67 | 0.74 |
Table 1 Contents are described in the text.
The kinetics of the networks was examined. It is straightforward to define an effective deformation tensor for a cluster. ’Rigid’ clusters of particles defined on force or gap thresholds would have very low extension and compression (relative to the applied shear) and just rotate in the flow. No clear evidence of these was found, although some large clusters of contacts did have eigenvalues for the symmetric part of their deformation tensors as low as 1/4 of that for affine shear. Figure 3 shows the large fluctuations in stress on a particle during flow; they build up and decay over epochs of 10% or so strain, at the applied shear rate this is much faster than the natural relaxation of the coat. They are driven by the flow that is changing the geometrically determined stress concentration in the system. (In the frame co-rotating with the flow the systems experience a changing biaxial stress.) Large changes in the distribution of forces under a ’small change’ in the direction of applied stress has been termed ’fragility’ . It is unclear if the 10% strain represents a ’small change’ and whether the changes in the network are an example of ’fragility’.
A key question is whether the contact networks have an inherent length scale. The data does suggest some correlation with density fluctuations. The structure factor in the thickening regime has peaks oriented along the flow direction inside the near neighbour ring. Similar peaks were observed in Neutron scattering from thickened colloids . Selecting particles with high forces and computing a partial structure factor also resulted in low Q-peaks oriented along the flow direction. Typically a particle must have candidate bonds for high forces close to the shear plane. If these are to carry high forces, and if the bonds through the particle are not co-linear, the other bonds must be able to supply large lateral forces to maintain overall balance. A locally high density in which particles coats were compressed in all directions would provide this. The distribution of a local volume fraction parameter defined as a sphere volume divide by the volume of its Voronoi cell was examined. Over all particles this is peaked at 0.54. If the distribution is computed just over particles selected on a high threshold of force magnitude, the peak is shifted to around local volume fractions 0.58-0.6, circa the hard sphere glass transition. Only to this degree can the network be considered ’clustering’. However, some particles at low local volume fractions can also carry high forces.
This work has detailed the nature of steady state thickening in colloids. Some features were common with the physics of slow powder flow - despite the different nature of the physical interactions. Clearly this is due to the generic importance of contact geometry in particle flows. Hopefully this will lead to exchanges of ideas and results. It is noted that the force network here does not conform to the simple notion of ’force chains’ prevalent in much of the literature: it is branched and coordinated close to the iso-static limit, but very high force parts were like short segments of random walks. The existence and relevance of exponential distributions of force have been questioned, but Figure 2 shows that, at least in flow, this is a significant feature.
For the colloid community, the results have given a much needed clarification and detailing of the nature of ’hydrodynamic clustering’. There are no rigid clusters as some have assumed, but a network of contacts whose natural relaxation is slower than the shear time. This network flows with large fluctuations in lubrication force driven by the changing geometry of contacts in the shear flow. It had not been realised that bonds both in compression and extension are involved. The normal stress differences originate in the fabric of the contact network. Low-Q density fluctuations observed in experiment are also associated with the contact network. The growth of the high force distribution and the percolating contact network could be taken as an ’order parameter’ for the thickening transition.
I acknowledge the support of the Unilever colloid physics program, EPSRC soft-solids Grant No GR/L21747 and funds from IFPRI. I have benefited from many useful discussions with R. Ball, R. Farr, L. Silbert, G. Soga, A. Catherall, D. Grinev, S. Edwards and M. Cates.
# Conformal Anomaly and Large Scale Gravitational Coupling
H. Salehi (e-mail: h-salehi@cc.sbu.ac.ir), Y. Bisabr (e-mail: y-bisabr@cc.sbu.ac.ir)
Department of Physics, Sh. Beheshti University, Evin, Tehran 19839, Iran.
## Abstract
We present a model in which the breakdown of conformal symmetry of a quantum stress-tensor due to the trace anomaly is related to a cosmological effect in a gravitational model. This is done by characterizing the traceless part of the quantum stress-tensor in terms of the stress-tensor of a conformal invariant classical scalar field. We introduce a conformal frame in which the anomalous trace is identified with a cosmological constant. In this conformal frame we establish the Einstein field equations by connecting the quantum stress-tensor with the large scale distribution of matter in the universe.
In the absence of a full theory of quantum gravity, one of the theoretical frameworks in which we may improve our understanding of quantum processes in a gravitational field is the semiclassical approximation. In this framework the matter is described by quantum field theory while the gravitational field itself is regarded as a classical object. The gravitational coupling of a quantum field is then investigated through the study of the quantum stress-tensor, i.e. the expectation value of the stress-tensor of the quantum field taken in some physical state. However, since the quantum stress-tensor contains singularities, some renormalization prescriptions are used to obtain a meaningful expression. One of the most remarkable consequences of these prescriptions is the so-called conformal anomaly. This means that the trace of the quantum stress-tensor of a conformal invariant field obtains a nonzero expression while the trace of the classical stress-tensor vanishes identically. The appearance of a nonvanishing trace may be regarded as the breakdown of conformal symmetry. Since the conformal invariance of a theory reflects its invariance under rescaling of lengths, one may expect that the anomalous trace of the quantum stress-tensor may somehow be related to a scale of length.
The purpose of this note is to establish this relation by connecting the trace anomaly with a cosmological constant. We shall deal with this possibility by making a distinction between the trace anomaly and the traceless part of the quantum stress-tensor. Such a distinction is suggested by the results of the renormalization theory, although there is an alternative motivation which can be found in . In this way a simple dynamical model is introduced in which the traceless part is characterized by the stress-tensor of a conformal invariant scalar field. In this model, it is possible to rescale the trace anomaly and convert it into a cosmological length. The conformal symmetry is then broken by this cosmological length and a preferred conformal frame is determined in which the Einstein field equations with a cosmological constant are established.
Let us begin with the results of renormalization of a quantum stress-tensor $`\mathrm{\Sigma }_{\alpha \beta }`$ for a quantum scalar field conformally coupled to a background metric $`g_{\alpha \beta }`$. We use units in which $`\hbar =c=1`$ and follow the sign conventions of Hawking and Ellis . The renormalized tensor satisfies
$$\nabla ^\alpha \mathrm{\Sigma }_{\alpha \beta }=0$$
(1)
$$\mathrm{\Sigma }_\alpha ^\alpha =-2v_1(x)$$
(2)
where
$$v_1(x)=\frac{1}{720}\{\Box R-R_{\alpha \beta }R^{\alpha \beta }+R_{\alpha \beta \delta \gamma }R^{\alpha \beta \delta \gamma }\}$$
(3)
Here $`\nabla _\alpha `$ denotes covariant differentiation and $`\Box \equiv g^{\alpha \beta }\nabla _\alpha \nabla _\beta `$; $`R_{\alpha \beta \delta \gamma }`$ is the Riemann curvature tensor, $`R_{\alpha \beta }`$ is the Ricci tensor and $`R`$ is the curvature scalar. The first equation is a conservation law and the second one indicates the anomalous trace emerging from the renormalization process. The implication of Eq. (2) is that the quantum stress-tensor $`\mathrm{\Sigma }_{\alpha \beta }`$ may be written in the following general form
$$\mathrm{\Sigma }_{\alpha \beta }=\mathrm{\Sigma }_{\alpha \beta }^{(0)}-\frac{1}{2}g_{\alpha \beta }v_1(x)$$
(4)
where $`\mathrm{\Sigma }_{\alpha \beta }^{(0)}`$ is a traceless tensor. In this way, $`\mathrm{\Sigma }_{\alpha \beta }`$ is decomposed into two parts: a traceless part $`\mathrm{\Sigma }_{\alpha \beta }^{(0)}`$ which respects the conformal symmetry and an anomalous part reflecting the quantum characteristics. The traceless condition of $`\mathrm{\Sigma }_{\alpha \beta }^{(0)}`$ is automatically satisfied if we introduce a conformally invariant C-number scalar field $`\varphi `$ satisfying
$$\left(\Box -\frac{1}{6}R\right)\varphi =0$$
(5)
and identify $`\mathrm{\Sigma }_{\alpha \beta }^{(0)}`$ with the stress-tensor of $`\varphi `$, namely
$$T_{\alpha \beta }[\varphi ]=\left(\frac{2}{3}\nabla _\alpha \varphi \nabla _\beta \varphi -\frac{1}{6}g_{\alpha \beta }\nabla _\gamma \varphi \nabla ^\gamma \varphi \right)-\frac{1}{3}\left(\varphi \nabla _\alpha \nabla _\beta \varphi -g_{\alpha \beta }\varphi \Box \varphi \right)+\frac{1}{6}\varphi ^2G_{\alpha \beta }$$
(6)
in which $`G_{\alpha \beta }`$ is the Einstein tensor. The relation (4) then takes the form
$$\mathrm{\Sigma }_{\alpha \beta }=T_{\alpha \beta }-\frac{1}{2}g_{\alpha \beta }v_1(x)$$
(7)
The tracelessness of $`T_{\alpha \beta }`$ is then ensured by (5).
We may try to take the relation (7) as a general condition imposed on $`\mathrm{\Sigma }_{\alpha \beta }`$ in the given fixed background geometry described by the metric tensor $`g_{\alpha \beta }`$. However, in this case, due to Eq. (1) and the nonvanishing trace anomaly, $`T_{\alpha \beta }`$ cannot be expressed as a conserved stress-tensor, and the anomalous trace would provide a quantum source for the traceless tensor $`T_{\alpha \beta }`$. Although such a source could be considered a desirable dynamical characteristic in many contexts, we should note that its appearance on the whole background metric stands in conflict with the dynamical equation (5) for $`\varphi `$ as a C-number field. For this reason we shall follow a different interpretation for the relation (7).
We first note that the conformal coupling of the scalar field $`\varphi `$ implies that no distinction can be made among different conformally related configurations of $`\varphi `$ and $`g_{\alpha \beta }`$ in terms of the dynamical equation (5). Thus, the question which presents itself is which conformal frame should be taken as the physical frame. We shall choose the conformal frame by the condition that $`T_{\alpha \beta }`$, as the stress-tensor of the scalar field $`\varphi `$, should actually be conserved. Following this strategy we consider a conformal transformation
$$\overline{g}_{\alpha \beta }=\mathrm{\Omega }^2(x)g_{\alpha \beta }$$
$$\overline{\varphi }(x)=\mathrm{\Omega }^{-1}(x)\varphi (x)$$
(8)
and write Eq.(7) in a conformal frame described by the metric $`\overline{g}_{\alpha \beta }`$ so that
$$\overline{\mathrm{\Sigma }}_{\alpha \beta }=\overline{T}_{\alpha \beta }+\frac{1}{6}\mathrm{\Lambda }\overline{g}_{\alpha \beta }\overline{\varphi }^2$$
(9)
or, equivalently
$$\overline{G}_{\alpha \beta }+\mathrm{\Lambda }\overline{g}_{\alpha \beta }=6\overline{\varphi }^{-2}(\overline{\mathrm{\Sigma }}_{\alpha \beta }+\tau _{\alpha \beta }(\overline{\varphi }))$$
(10)
where $`\mathrm{\Lambda }`$ denotes a cosmological constant which is taken to be related to the anomalous trace by
$$-3\overline{\varphi }^{-2}\overline{v}_1(x)=\mathrm{\Lambda }$$
(11)
The coefficient $`\overline{\varphi }^{-2}`$ is introduced to make the dimensions of both sides consistent. The tensor $`\tau _{\alpha \beta }(\overline{\varphi })`$ is equal to $`\overline{T}_{\alpha \beta }`$ without the $`G_{\alpha \beta }`$-term and coincides up to a sign with the so-called modified stress-tensor . The relation (11) is a constraint on the conformal factor and singles out a specific conformal frame, which we call the cosmological frame, in which the anomalous trace is related to a cosmological constant. In the cosmological frame, $`\mathrm{\Lambda }`$ serves to characterize a distinguished cosmological length scale which breaks down the conformal symmetry of $`\overline{T}_{\alpha \beta }`$. Let us now consider the trace of Eq. (9)
$$\overline{\mathrm{\Sigma }}_\alpha ^\alpha \sim \mathrm{\Lambda }\overline{\varphi }^2$$
(12)
Remarkably, this relation permits us to estimate the background average value of $`\overline{\varphi }`$, if we measure the trace of $`\overline{\mathrm{\Sigma }}_{\alpha \beta }`$ in the cosmological frame in terms of the large scale distribution of matter<sup>§</sup><sup>§</sup>§ This argument seems to be reasonable since in the cosmological frame the local fluctuations of the quantum stress-tensor, which are characterized by the trace anomaly, are replaced by a cosmological constant term., $`\overline{\mathrm{\Sigma }}_\alpha ^\alpha \sim M/R_0^3`$ where $`M`$ and $`R_0`$ are the mass and the radius of the universe, respectively. Actually, if we take into account the empirical fact that the radius of the universe, $`R_0`$, coincides with its Schwarzschild radius $`2GM`$, where $`G`$ is the gravitational constant, Eq. (12) reduces to
$$\overline{\varphi }^{-2}\sim G$$
(13)
provided $`\mathrm{\Lambda }\sim R_0^{-2}`$ holds, in agreement with observations. Substituting this result into Eq. (10) leads to
$$\overline{G}_{\alpha \beta }+\mathrm{\Lambda }\overline{g}_{\alpha \beta }\sim G\overline{\mathrm{\Sigma }}_{\alpha \beta }$$
(14)
which establishes the usual features of the Einstein field equations with a cosmological constant. Thus the cosmological frame provides a preferred frame in which the background average value of $`\overline{\varphi }^{-2}`$ plays the role of the gravitational constant. This result attributes two physical characteristics to this frame. Firstly, the tensor $`\overline{T}_{\alpha \beta }`$ becomes compatible with the conservation property of a stress-tensor, and secondly the large scale gravitational coupling is described by the Einstein field equations.
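For orientation, the chain of estimates behind Eqs. (12)-(14) can be collected in one place (a heuristic order-of-magnitude sketch, using $`R_0\simeq 2GM`$):

$$\overline{\mathrm{\Sigma }}_\alpha ^\alpha \sim \mathrm{\Lambda }\overline{\varphi }^2\sim \frac{M}{R_0^3},\qquad \mathrm{\Lambda }\sim R_0^{-2}\Rightarrow \overline{\varphi }^2\sim \frac{M}{R_0}\sim \frac{1}{2G}\Rightarrow \overline{\varphi }^{-2}\sim G.$$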
## 1 Introduction
Strong interaction shifts and widths of bound levels of hadronic atoms provide valuable information on the hadron-nucleus interaction at threshold . We call deeply bound hadronic states those that cannot be detected by means of spectroscopic tools ($`i.e.`$ analyzing the emitted $`X`$-rays when the hadron decays from one atomic level to another one in the electromagnetic cascade). This happens for low-lying levels where the overlap between the hadron wave-function and the nucleus is appreciable, and as a consequence the width due to the strong interaction of the hadron with the nucleus is much larger than the electromagnetic width of the level. In these circumstances, the emission of $`X`$-rays is highly suppressed as compared to the hadron absorption by the nucleus. Indeed, the total decay width of the last observable level, using $`X`$-ray spectroscopy techniques, is two or three orders of magnitude larger than that of the level above it. Thus, the widths of these deeply bound states are expected to be very large. If the half-width of a given level is equal to or larger than the separation to the next level, then that state cannot be resolved, so only narrow deeply bound states can be resolved and detected. It is beyond any doubt that the precise experimental determination of binding energies and widths of these states would provide a valuable insight into the complex dynamics of the antikaon-nucleon and antikaon-nucleus systems. Similar studies have been carried out for the pion case, where narrow deeply bound pionic atom states were predicted -, and have been recently detected using nuclear reactions .
By using $`X`$-ray spectroscopy techniques, energy shifts and widths of kaonic atom levels have been measured throughout the whole periodic table<sup>1</sup><sup>1</sup>1The dynamics of all these levels are greatly dominated by pure electromagnetic interactions and they do not correspond to what we call deeply bound states.. A compilation of data can be found in Refs. and . Some $`K^{-}`$-nucleus optical potentials have been successfully fitted to experimental data , - and recently used by Friedman and Gal to predict the existence of narrow deeply bound levels in kaonic atoms . These authors find that the $`K^{-}`$ deeply bound atomic levels are generally narrow, with widths ranging from $`50`$ to $`1500`$ keV over the entire periodic table, and are not very sensitive to the different density dependence of the $`K^{-}`$-nucleus optical potentials that were used. Besides, they also note that, due to the strong attraction of the antikaon-nucleus optical potential, there must exist nuclear kaonic bound levels, deeper than the atomic states, which, in contrast, are expected to be very sensitive to the density dependence of the different optical potentials.
From the microscopical point of view, Ramos and Oset have recently developed an optical potential for the $`K^{-}`$ meson in nuclear matter in a self-consistent microscopic manner. This approach uses a $`s`$-wave $`\overline{K}N`$ interaction obtained by solving a coupled-channel Lippmann-Schwinger equation<sup>2</sup><sup>2</sup>2This approach is inspired by the pioneering works of Refs. -. Similar extensions have been developed in the meson-meson sector -., in the $`S=-1`$ strangeness sector, with a kernel determined by the lowest-order meson-baryon chiral Lagrangian . Though a three-momentum cut-off, which breaks manifest Lorentz covariance, is introduced to regularize the chiral loops, the approach followed by the authors of restores exact unitarity and is able to accommodate the resonance $`\mathrm{\Lambda }(1405)`$. Self-consistency turns out to be a crucial ingredient to derive the $`K^{-}`$-nucleus potential in Ref. and leads to an optical potential considerably more shallow than those found in Refs. , -. This was first pointed out by Lutz in Ref. , where, however, $`\eta `$Y channels were not included when solving the coupled-channel Lippmann-Schwinger equation for the $`\overline{K}N`$ interaction in the free space. Another recent work , where the medium modifications of antikaons in dense matter are studied in a coupled channel calculation, for scenarios more closely related to the environment encountered in heavy-ion collisions, also confirms the importance of self-consistency to find a similarly shallow potential. The depth of the real potential in the interior of nuclei is a topic of current interest in connection with possible kaon condensation in astrophysical scenarios .
In this work we have three aims. First, to see how this new microscopical optical potential , hereafter called $`V_{\mathrm{opt}}^{(1)}`$, describes the known kaonic atom levels and, if possible, to quantify its quality. Second, taking $`V_{\mathrm{opt}}^{(1)}`$ as the starting point, both without modification and also adding to it a phenomenological part fitted to the kaonic atom data, to give predictions for deeply bound kaonic atom levels and compare them with previous predictions. Third, to calculate the binding energies and widths of the nuclear kaonic states for both $`K^{-}`$ and $`\overline{K}^0`$.
In Ref. , some results obtained with the potential $`V_{\mathrm{opt}}^{(1)}`$ have been very recently reported. Here, we try to quantify the deficiencies of this interaction and give more reliable predictions for states not yet observed, thanks to the use of a potential which describes the measured data better. As a matter of example, the nuclear $`K^{-}`$ widths predicted in Ref. are about a factor of two bigger than those obtained in this work. Thus, all nuclear states given in Ref. do have overlap with the continuum, their meaning thus being unclear. Furthermore, we also present here calculations for the $`\overline{K}^0`$-nuclear states, not mentioned in Ref. at all.
The set of experimental data we have used throughout this work contains 63 energy shifts and widths of kaonic atom levels. Those are all data compiled in Refs. and except for the $`3d`$ energy shift in Helium and the $`6h`$ energy shift in Ytterbium <sup>3</sup><sup>3</sup>3 We exclude the first because it is too light to use models based on nuclear matter and the second because there are certain problems with its correct interpretation .. We define the strong shift for each level by means of
$$\epsilon =B_c-B$$
(1)
where $`B`$ and $`B_c`$ are the total and the purely electromagnetic binding energies, respectively. Both of them are negative, and thus a negative (positive) energy shift means that the net effect of the strong potential is repulsive (attractive). To compute the $`K^{-}`$-nucleus bound states, we solve the Klein-Gordon equation (KGE) with an electromagnetic potential, $`V_c`$, and a strong optical potential, $`V_{\mathrm{opt}}`$; it reads:
$$\left(-\nabla ^2+\mu ^2+2\mu V_{\mathrm{opt}}(r)\right)\mathrm{\Psi }=\left(E-V_c(r)\right)^2\mathrm{\Psi }$$
where $`\mu `$ is the $`K^{-}`$-nucleus reduced mass, the real part of $`E`$ is the total meson energy, including its mass, and the imaginary part of $`E`$, changed in sign, is the half-width $`\mathrm{\Gamma }/2`$ of the state. Different models for $`V_{\mathrm{opt}}`$ will be discussed in detail in the next section, whereas for $`V_c(r)`$ we use the Coulomb interaction, taking exactly the finite-size distribution of the nucleus and adding to it the vacuum-polarization corrections .
Both the minimization numerical algorithm and the one used to solve the KGE in coordinate space have been extensively tested in the similar problem of pionic atoms .
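The structure of such a coordinate-space solver can be illustrated with a shooting method. The sketch below is a deliberately simplified, real-arithmetic toy (point-charge Coulomb potential only, no vacuum polarization, and a real energy eigenvalue, whereas the actual problem has complex $`E`$ and a complex $`V_{\mathrm{opt}}`$ and so needs a complex-root search):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

HBARC = 197.327  # MeV fm

def u_at_rmax(E, mu, l, V_opt, V_c, r0=1e-3, r_max=60.0):
    """Shooting function for the reduced radial KGE,
    u'' = [ l(l+1)/r^2 + (mu^2 + 2 mu V_opt - (E - V_c)^2)/(hbar c)^2 ] u,
    integrated outward with u ~ r^(l+1) near the origin; a bound state
    makes u(r_max) vanish."""
    def rhs(r, y):
        k2 = (l * (l + 1) / r**2
              + (mu**2 + 2.0 * mu * V_opt(r) - (E - V_c(r))**2) / HBARC**2)
        return [y[1], k2 * y[0]]
    sol = solve_ivp(rhs, (r0, r_max), [r0**(l + 1), (l + 1) * r0**l],
                    rtol=1e-10, atol=1e-30)
    return sol.y[0, -1]

# Toy check: K- on a point charge Z = 82, circular 4f level;
# the Klein-Gordon result is a binding energy of about 5.6 MeV.
mu, Z, alpha = 493.7, 82, 1.0 / 137.036
V_c = lambda r: -Z * alpha * HBARC / r
E = brentq(lambda E: u_at_rmax(E, mu, 3, lambda r: 0.0, V_c),
           mu - 6.5, mu - 4.5)
print("4f binding energy (MeV):", E - mu)
```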
## 2 Experimental kaonic atom data and the optical potentials
The $`K^{-}`$-nucleus optical potential for kaonic atoms, $`V_{\mathrm{opt}}`$, is related to the $`K^{-}`$ self-energy, $`\mathrm{\Pi }_{K^{-}}`$, inside a nuclear medium. This relation reads:
$`2\mu V_{\mathrm{opt}}(r)`$ $`=`$ $`\mathrm{\Pi }_{K^{-}}(q_0=m_K,\vec{q}=0,\rho _p(r),\rho _n(r)),`$ (2)
where the $`K^{-}`$ self-energy is evaluated at threshold, and $`\rho _{p(n)}(r)`$ is the proton (neutron) density. Neglecting isovector effects, as we will do in the rest of the paper, the optical potential only depends on $`\rho (r)=\rho _p(r)+\rho _n(r)`$. Charge densities are taken from . For each nucleus, we take the neutron matter density approximately equal to the charge one, though we consider small changes, inspired by Hartree-Fock calculations with the DME (density-matrix expansion) . In Table 1 we compile the densities used throughout this work.<sup>4</sup><sup>4</sup>4We use modified harmonic oscillator (two-parameter Fermi)-type densities for light (medium and heavy) nuclei. However, charge (neutron matter) densities do not correspond to proton (neutron) ones because of the finite size of the proton (neutron). We take that into account following the lines of Ref. .
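Table 1 itself is not reproduced here, but the two profile shapes named in the footnote are simple to state; the snippet below (parameter names hypothetical) defines them:

```python
import numpy as np

def rho_mho(r, rho0, R, a):
    """Modified harmonic oscillator profile (light nuclei):
    rho(r) = rho0 (1 + a (r/R)^2) exp(-(r/R)^2)."""
    x = (r / R) ** 2
    return rho0 * (1.0 + a * x) * np.exp(-x)

def rho_2pf(r, rho0, c, z):
    """Two-parameter Fermi profile (medium and heavy nuclei):
    rho(r) = rho0 / (1 + exp((r - c)/z))."""
    return rho0 / (1.0 + np.exp((r - c) / z))

def central_density(profile, A, rmax=20.0, n=4000):
    """Central density normalising a unit-amplitude profile to A nucleons."""
    r = np.linspace(1e-6, rmax, n)
    return A / np.trapz(4.0 * np.pi * r**2 * profile(r), r)
```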
As we mentioned in the introduction, the authors of Ref. have developed an optical potential for the $`K^{-}`$ meson in nuclear matter in a self-consistent microscopic manner. It is based on their previous work on the $`s`$-wave meson-baryon dynamics in the $`S=-1`$ strangeness sector. There, and starting from the lowest-order meson-baryon chiral Lagrangian, a non-perturbative resummation of the bubbles in the $`s`$-channel<sup>5</sup><sup>5</sup>5Here $`s`$ refers to the Mandelstam variable $`E_{CM}^2`$ is performed. Such a resummation leads to an exact restoration of unitarity. The model successfully reproduces the $`\mathrm{\Lambda }(1405)`$ resonance and the $`K^{-}p\rightarrow K^{-}p,\overline{K}^0n,\pi ^0\mathrm{\Lambda },\pi ^0\mathrm{\Sigma },\pi ^+\mathrm{\Sigma }^{-},\pi ^{-}\mathrm{\Sigma }^+`$ cross sections at low energies. The results in nuclear matter are translated to finite nuclei by means of the local density approximation, which turns out to be exact for zero-range interactions , which is the case for the $`s`$-wave part of the $`K^{-}`$-nucleus optical potential for kaonic atoms.
Firstly, we consider the antikaon self-energy as given in Ref. , and use it to define, through Eq. (2), what we call $`V_{\mathrm{opt}}^{(1)}`$. This potential does not have any free parameter: all the needed input is fixed either from studies of meson-baryon scattering in the vacuum or from previous studies of pionic atoms , and thus it is a purely theoretical potential. It consists of $`s`$-wave and $`p`$-wave contributions. The $`s`$-wave part is quite complete, but the $`p`$-wave one is far from complete and only contains the contributions of $`\mathrm{\Lambda }`$-hole and $`\mathrm{\Sigma }`$-hole excitations at first order. For high-lying measured kaonic states, calculations of energies and widths with and without this $`p`$-wave piece turn out to be essentially indistinguishable; thus in what follows we will ignore the $`p`$-wave contribution to $`V_{\mathrm{opt}}^{(1)}`$. We use $`V_{\mathrm{opt}}^{(1)}`$ to compute the 63 shifts and widths of the considered set of data. The obtained $`\chi ^2`$ per number of data is 3.8, indicating that the agreement is fairly good, taking into account that the potential has no free parameters. To better quantify its goodness, we also construct a modified optical potential, which we call $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$, by adding to $`V_{\mathrm{opt}}^{(1)}`$ a phenomenological part linear in density, $`\delta V^{\mathrm{fit}}`$, characterized by a complex constant $`\delta b_0`$, as follows
$`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ $`=`$ $`V_{\mathrm{opt}}^{(1)}+\delta V^{\mathrm{fit}}`$ (3)
$`2\mu \delta V^{\mathrm{fit}}(r)`$ $`=`$ $`-4\pi (1+{\displaystyle \frac{\mu }{m}})\delta b_0\rho (r),`$ (4)
where $`m`$ is the nucleon mass. We determine the unknown parameter $`\delta b_0`$ from a best fit to the previous set of shifts and widths of kaonic atom data; this yields
$`\delta b_0`$ $`=`$ $`[(0.078\pm 0.009)+i(0.25\pm 0.01)]\mathrm{fm}.`$ (5)
and the corresponding $`\chi ^2`$ per degree of freedom of the best fit is $`\chi ^2/dof`$ = 1.6. The errors on $`\delta b_0`$ are just statistical and have been obtained by increasing the value of $`\chi ^2`$ by one unit.
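Schematically, the fit amounts to minimising a standard $`\chi ^2`$ over the complex parameter; in the sketch below the callable `model` is a hypothetical stand-in for the full machinery (it must solve the KGE for every measured level at a given $`\delta b_0`$):

```python
import numpy as np
from scipy.optimize import minimize

def chi2(p, observed, errors, model):
    """chi^2 over the 63 measured shifts and widths."""
    calc = model(complex(p[0], p[1]))   # p = (Re db0, Im db0) in fm
    return np.sum(((calc - observed) / errors) ** 2)

# best = minimize(chi2, x0=[0.1, 0.2], args=(observed, errors, model),
#                 method="Nelder-Mead")
# Statistical errors: move one component away from the minimum,
# re-minimising the other, until chi2 exceeds chi2_min + 1.
```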
As a reference, we also compare these results with the ones obtained from a phenomenological $`t\rho `$-type potential suggested in Refs. and ; let us call it $`V_{\mathrm{opt}}^{(2)}(r)`$:
$`2\mu V_{\mathrm{opt}}^{(2)}(r)`$ $`=`$ $`-4\pi (1+{\displaystyle \frac{\mu }{m}})b_0\rho (r).`$ (6)
By fitting the complex parameter $`b_0`$ in $`V_{\mathrm{opt}}^{(2)}`$ to the same set of data, we obtain
$`b_0`$ $`=`$ $`[(0.52\pm 0.03)+i(0.80\pm 0.03)]\mathrm{fm}`$
$`\chi ^2/dof`$ $`=`$ $`2.15`$ (7)
This result is slightly different from the one given in Ref. : $`b_0=[(0.62\pm 0.05)+i(0.92\pm 0.05)]`$ fm, because the nuclear-matter densities used in both works are not exactly the same.
Now, by comparing $`\delta V^{\mathrm{fit}}`$ with $`V_{\mathrm{opt}}^{(2)}`$ we get a rough estimate of how far the theoretical potential $`V_{\mathrm{opt}}^{(1)}`$ is from the empirical one $`V_{\mathrm{opt}}^{(2)}`$. If we compare the result for $`\delta b_0`$ in Eq. (5) to the one for $`b_0`$ in Eq. (7), we get
$`{\displaystyle \frac{\mathrm{Re}(\delta b_0)}{\mathrm{Re}(b_0)}}`$ $`=`$ $`0.15`$
$`{\displaystyle \frac{\mathrm{Im}(\delta b_0)}{\mathrm{Im}(b_0)}}`$ $`=`$ $`0.32,`$ (8)
which tells us that $`\delta b_0`$ is substantially smaller than $`b_0`$ and hence the theoretical potential, $`V_{\mathrm{opt}}^{(1)}`$, gives the bulk of the fitted (phenomenological) potential $`V_{\mathrm{opt}}^{(2)}`$. Of course, this is only true in the range of low densities which are relevant for the measured atomic levels (see the discussion at the end of this section). Besides, the microscopical potential ($`V_{\mathrm{opt}}^{(1)}`$) needs, in order to provide a better agreement with the data, to have a larger attractive real part and a larger absorptive imaginary part. By looking at Eq. (8) one might quantify these deficiencies by about 15% and 30% for the real and imaginary parts, respectively. Taking into account that the imaginary part of an optical potential provides an effective repulsion and that, from the above discussion, the real part of $`V_{\mathrm{opt}}^{(1)}`$ is not as attractive as it should be, it is clear that $`V_{\mathrm{opt}}^{(1)}`$ is less attractive than what can be inferred from the existing kaonic atom data. However, it is interesting to note that, despite this deficiency of attraction, the purely theoretical potential of Ref. provides an acceptable description of the data, which can be better quantified attending to the value of 3.8 obtained for the $`\chi ^2`$ per number of data, quoted above, and/or looking at the results in Tables 2-4. In these tables, we give the results obtained with this potential for shifts and widths and compare them to the experiment and to results computed with different phenomenological potentials obtained from best fits to the data. As can be seen in the tables, the theoretical potential, $`V_{\mathrm{opt}}^{(1)}`$, quite often predicts too repulsive shifts, and for the lower states it generally predicts too small widths.
In the next sections we will analyze the predictions (energies and widths) of different density dependent optical potentials, all of them describing the known kaonic atom data, for deeply bound atomic as well as nuclear kaonic states . Thus it is worthwhile to consider also the following density dependent potential, $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$,
$`2\mu V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}(r)`$ $`=`$ $`-4\pi (1+{\displaystyle \frac{\mu }{m}})\rho (r)\left(b_0^{\mathrm{exp}}+B_0\left({\displaystyle \frac{\rho (r)}{\rho _0}}\right)^\alpha \right),`$ (9)
used in previous studies and . In these references, $`\rho _0`$ is set to 0.16 fm<sup>-3</sup> and the complex parameter $`b_0^{\mathrm{exp}}`$ is fixed according to the low density limit. Using empirical scattering lengths, it is set to
$`b_0^{\mathrm{exp}}`$ $`=`$ $`(-0.15+i0.62)\mathrm{fm}.`$ (10)
The above potential has three free real parameters to be adjusted: real and imaginary parts of the complex parameter $`B_0`$ and the real parameter $`\alpha `$. A best fit gives
$`B_0`$ $`=`$ $`[(1.62\pm 0.04)+i(0.028\pm 0.009)]\mathrm{fm}`$
$`\alpha `$ $`=`$ $`0.273\pm 0.018`$
$`\chi ^2/dof`$ $`=`$ $`1.83`$ (11)
For the sake of completeness and for a better comparison between potentials, we present in Tables 2-4 the whole set of experimental shifts and widths used in the fits, together with the results and $`\chi ^2/dof`$ from each of the considered potentials: $`V_{\mathrm{opt}}^{(1)}`$, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$, $`V_{\mathrm{opt}}^{(2)}`$ and $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$. The potential $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$, mostly based on the theoretical input of Ref. , gives the best description (smallest $`\chi ^2/dof`$) of the data.
On the other hand, in Fig. 1 we show for <sup>208</sup>Pb both the real and the imaginary parts of the four potentials studied in this work as a function of $`r`$. In both plots, and as a reference, the electromagnetic potential, $`V_c`$, felt by the $`K^{-}`$ meson is also depicted. Comparisons between the four potentials are of prime importance, because the depth of the real potential in the interior of nuclei determines possible kaon condensation scenarios. Another important feature is the shape of the different potentials in relation to the nuclear density. This information can also be extracted from Fig. 1 just by noting that the $`V_{\mathrm{opt}}^{(2)}`$ potential is proportional to $`\rho `$, which is also shown in the plots.
Despite the fact that, for instance, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ and $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$ have totally different depths for distances smaller than 5 fm, both potentials give a good description of the measured atomic levels. This is a clear indication that these measured levels should be sensitive to low values of the nuclear density, as can be appreciated in Fig. 2. There<sup>6</sup><sup>6</sup>6In this figure, we also present wave functions for deeper bound states, not yet detected, which will be discussed in the next sections., we show the modulus squared of the $`7`$i $`K^{-}`$-<sup>208</sup>Pb reduced radial wave function, $`u_{7\mathrm{i}}(r)`$, and compare it with the <sup>208</sup>Pb nuclear density ($`\rho `$). Though the maximum of $`|u_{7\mathrm{i}}(r)|^2`$ is above 30 fm, the overlap of the kaon with the nuclear density reaches its maximum around 6 or 7 fm (see bottom plot). Thus, the atomic data are not sensitive to the values of the optical potentials at the center of the nuclei, but rather to their behavior at the surface. We therefore consider it of interest to amplify the region between 5 and 8 fm in Fig. 1. This can be found in Fig. 3, where we see that at the surface all optical potentials are much more similar than at the center of the nucleus. We would like to point out that the two potentials which better describe the existing data, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ and $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$, have practically the same imaginary part above 7 fm.
To finish this section we would like to stress that the strong interaction effects in kaonic atoms are highly non-perturbative, as evidenced by the fact that although the level shifts are repulsive (i.e. negative) the real part of the optical potentials is attractive. This is a direct consequence of the absorptive part of the potentials being comparable in magnitude to the real part .
## 3 Deeply bound atomic $`K^{-}`$-nucleus levels
We have used the four previous $`K^{-}`$-nucleus optical potentials ($`V_{\mathrm{opt}}^{(1)}`$, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$, $`V_{\mathrm{opt}}^{(2)}`$ and $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$) to predict binding energies and widths of deeply bound atomic states, not yet observed, in $`{}_{}{}^{12}\mathrm{C}`$, $`{}_{}{}^{40}\mathrm{Ca}`$ and $`{}_{}{}^{208}\mathrm{Pb}`$. Results for binding energies, strong shifts and widths are collected in Table 5. Binding energies, $`B`$, are practically independent of the optical potential used. Indeed, for a given nucleus and level, $`B`$ varies at most at the level of one per cent; hence we only present results obtained with the modified Oset-Ramos potential, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$. Deeply bound atomic level widths are much more sensitive to the details of the potential, and approximately follow a regular pattern: $`V_{\mathrm{opt}}^{(2)}`$ widths are the widest, and those calculated with the $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$ potential are narrower than the first ones by a few per cent, as previously stated in Ref. . For the deepest states, the $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ interaction leads to significantly smaller widths (about 20 to 40 %) than the $`V_{\mathrm{opt}}^{(2)}`$ potential. On the other hand, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ and $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$ predictions are more similar, but one still finds appreciable differences. The narrowest widths are obtained with the $`V_{\mathrm{opt}}^{(1)}`$ potential.
In Refs. and , and supported by the results obtained from the empirical $`V_{\mathrm{opt}}^{(2)}`$ and $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$ potentials, a scenario was first presented where the deep atomic states are narrow enough to be separated in most cases, except for some overlap in heavy nuclei<sup>7</sup><sup>7</sup>7In those cases, $`l`$-selective nuclear reactions might resolve the whole spectrum.. The above discussion and the results presented in Table 5 for the more theoretically founded potentials $`V_{\mathrm{opt}}^{(1)}`$ and $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ reinforce such a scenario. To illustrate this point, we present in Fig. 4 binding energies and widths, using the $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ potential, for different atomic states in carbon and lead, to show how separable the different levels are.
As we mentioned in the introduction, the overlap of the $`K^{-}`$ wave function and the nucleus is bigger for these low-lying atomic states than for those accessible via the atomic cascade (see Fig. 5), and thus the precise determination of their binding energies and widths would provide valuable details of the antikaon-nucleus interaction at threshold. Several nuclear reactions have therefore already been suggested to detect them and . A rough comparison of the typical values in Fig. 5 and those in the bottom panel of Fig. 2 helps us to understand why the typical energy shifts and widths are of the order of MeV for deep atomic states, while the magnitude of those for high-lying (measured) atomic states was of the order of keV. The minima of the 1s-atomic modulus squared wave function in Fig. 5 hint at the existence of deeper bound states, as was discussed in Ref. . Those states will be the subject of the next section.
## 4 Nuclear $`K^{-}`$ and $`\overline{K}^0`$ states
All four optical potentials defined in the previous sections also have $`K^{-}`$-nucleus bound levels much deeper and wider than the deep atomic states presented in Table 5. We call these levels nuclear states; an enlightening discussion on their nature and their differences from the atomic states can be found in Ref. . Those states would not exist if the strong interaction were switched off. To obtain all the nuclear levels for a given optical potential and nucleus, we initially set to zero the imaginary part of the optical potential and switch it on gradually, keeping track at every step of the bound levels. We study three different nuclei across the periodic table (carbon, calcium and lead). Results are shown in Table 6. As can be seen, energies and widths depend greatly on the details of the potential used; however, because of the enormous widths predicted for all of them, there exist serious doubts not only about the ability to resolve different states but also about their very existence.
For the case of the atomic states discussed in the previous sections, the net effect of the strong potential was repulsive (negative energy shifts), whereas for these nuclear bound states the resulting effect of the strong potential is attractive, as can be seen in Table 7, where we present $`\overline{K}^0`$-nuclear states. Assuming isospin symmetry and neglecting isovector effects, we obtain those neutral antikaon bound states just by switching off the electromagnetic potential. Looking at both tables (6 and 7), we see that the electromagnetic interaction does not affect at all the widths ($`\mathrm{\Gamma }`$) of the deepest states. It has some effects on the binding energies ($`B`$), and, in some cases, it is responsible for the existence of some levels for $`K^{-}`$ and not for the $`\overline{K}^0`$ case. In any case, we are certainly dealing with a highly non-perturbative (non-linear) scenario. For instance, for all nuclei and levels, the widths are more or less the same and only depend significantly on the potential. This behavior is due to the fact that these nuclear wave-functions are totally inside the nucleus (see for instance the 0s and 0g nuclear $`K^{-}`$-<sup>208</sup>Pb levels in Fig. 2), where all the optical potentials are practically constant and much bigger than the electromagnetic one, as can be seen in Fig. 1. Thus, the electromagnetic dynamics does not play a crucial role, as is explicitly shown in Fig. 6, where we compare $`K^{-}`$ and $`\overline{K}^0`$-nucleus wave functions in <sup>208</sup>Pb. As a matter of fact, the width of any of those levels approximately satisfies $`\mathrm{\Gamma }/2\simeq -\mathrm{Im}(V_{\mathrm{opt}})`$ at the center of the nucleus.
The theoretical potential of Ref. , supplemented by the phenomenological piece $`\delta V^{\mathrm{fit}}`$ in Eq. (3), leads both to the best description of the measured kaonic atom data and to the narrowest nuclear antikaon states. Indeed, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ widths are about four or five times smaller than the ones predicted by the empirical $`V_{\mathrm{opt}}^{(2\mathrm{D}\mathrm{D})}`$ and $`V_{\mathrm{opt}}^{(2)}`$ potentials. Actually, in some cases the $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ interaction predicts states such that $`B+\mathrm{\Gamma }/2`$ is still negative, and thus they might be interpreted as well defined states.
## 5 Conclusions
We have shown that the theoretical potential $`V_{\mathrm{opt}}^{(1)}`$, recently developed by Ramos and Oset and based on a chiral model, gives an acceptable description of the observed kaonic atom states throughout the whole periodic table ($`\chi ^2`$ per number of data of 3.8). Furthermore, it also gives quite reasonable predictions (when compared to results obtained from other phenomenological potentials fitted to available data) for the deep kaonic atom states not yet observed. This is remarkable, because it has no free parameters. Of course, it can be improved by adding to it a small empirical piece fitted to all the available kaonic atom data. In this way, we have constructed $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$, and have used it both to quantify the “deficiencies” of the microscopical potential of Ramos and Oset and to achieve more reliable predictions for the deeper (atomic and nuclear) bound states not yet detected. From the first kind of study, we have concluded that at low densities the combined effect of both real and imaginary parts of the theoretical potential leads to energy shifts more repulsive than the experimental ones. More quantitative results can be drawn from Eq. (8). Besides, deeply bound atomic state energies and widths obtained with $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ confirm the existence of narrow and separable states, as pointed out in Ref. , and therefore subject to experimental observation by means of nuclear reactions. However, there exist appreciable differences among the predicted widths for these states when different potentials are used. Thus, the detection of such states would shed light on the intricacies of the antikaon behavior inside a nuclear medium.
Finally, we have calculated nuclear antikaon states, for which the electromagnetic dynamics does not play a crucial role. There we are in a highly non-perturbative regime and the widths depend dramatically on the potential used, but they do not depend much on the nucleus or level. However, because of the huge values found for the widths, one might have some trouble in identifying them as states. In any case, $`V_{\mathrm{opt}}^{(1\mathrm{m})}`$ leads to the narrowest states, and in some cases, using $`l`$-selective nuclear reactions, one might resolve some states (for instance the ground state in <sup>40</sup>Ca; see Tables 6 and 7).
To end the paper, we would like to point out that one should be cautious when interpreting results and conclusions drawn for the deep atomic and nuclear states not yet detected. For instance, though it is commonly accepted that $`p`$-wave and isovector corrections to the optical potential have little effect at the low densities explored by the available data, this is not necessarily true at the higher densities explored by the deeper bound states.
## Acknowledgments
We wish to thank E. Friedman, A. Gal, E. Oset, A. Ramos and L.L. Salcedo for useful communications. This research was supported by DGES under contract PB98-1367 and by the Junta de Andalucía.
# Antikaon condensation in neutron stars
## I Introduction
The study of dense matter in laboratories reveals many new and interesting results. There is a growing interplay between the physics of dense matter and the physics of compact objects . The composition and structure of a neutron star primarily depend on the nature of the strong interaction. The equation of state of neutron star matter encompasses a wide range of densities, from the density of the iron nucleus at the star's surface to several times the normal nuclear matter density encountered in the core. Since the chemical potentials of nucleons and leptons increase rapidly with density in the star's interior, several novel phases with large strangeness fraction may appear. Among these possibilities, hyperon formation is quite a robust mechanism. This is possible when the neutron chemical potential becomes sufficiently large that neutrons at their Fermi surfaces can decay into hyperons via weak strangeness non-conserving interactions. The threshold densities for hyperon formation strongly depend on the nuclear equation of state .
In recent years considerable interest has grown in the study of the properties of kaons, $`K`$, and antikaons, $`\overline{K}`$, in dense nuclear matter and in neutron star matter. Within a chiral $`SU(3)_L\times SU(3)_R`$ Lagrangian, it was demonstrated by Kaplan and Nelson that a negatively charged antikaon $`K^{-}`$ may undergo Bose-Einstein condensation in dense baryonic matter formed in heavy ion collisions. It was further predicted in chiral perturbation theory that a $`K^{-}`$ condensed phase may form in the core of neutron stars in consonance with kaon-nucleon scattering data and $`K^{-}`$ atomic data . In these chiral Lagrangians, the (anti)kaons are directly coupled to nucleons. The strongly attractive $`K^{-}`$-nucleon interaction increases with density, lowering the effective mass $`m_K^{*}`$ of the antikaons. Consequently, the in-medium energy of the $`K^{-}`$ meson, $`\omega _{K^{-}}`$, decreases, and $`s`$-wave $`K^{-}`$ condensation sets in when $`\omega _{K^{-}}`$ equals the $`K^{-}`$ chemical potential $`\mu _{K^{-}}`$ which, in turn, is equal to the electron chemical potential $`\mu _e`$ for cold catalyzed (neutrino-free) neutron star matter. The typical critical density for $`K^{-}`$ condensation in nucleons-only star matter is about $`(3-4)n_0`$, where $`n_0`$ is the normal nuclear matter density. The exact value, however, depends on the model used, i.e. on the nucleonic equation of state and on the parameters employed, especially on the depth of the attractive $`K^{-}`$ optical potential. The net effect of $`K^{-}`$ condensation in neutron star matter is that $`K^{-}`$ replaces electrons in maintaining charge neutrality, and the energy of the condensate is lowered because of strongly attractive interactions between nucleons and $`K^{-}`$ mesons. Due to the softening of the equation of state, the masses of the stars are reduced in the presence of a $`K^{-}`$ condensate . It was also found that in the presence of hyperons, $`K^{-}`$ condensation is delayed to higher density, and may not even exist in maximum mass stars. Protoneutron (newly born and neutrino rich) stars with a $`K^{-}`$ condensate were shown to have maximum masses larger than those of cold catalyzed (old and neutrino free) stars - a reversal from ordinary nucleons-only stars. Such stars may undergo supernova explosions and then collapse to small mass black holes during deleptonization .
The situation for kaons is, however, quite different. Theoretical investigations based on the Nambu-Jona-Lasinio model , chiral perturbation theory , and the one-boson-exchange model yield a repulsive optical potential for $`K^+`$ in the nuclear medium. Therefore, $`K^+`$ condensation is expected to be forbidden in neutron stars.
In the traditional meson-exchange picture , (anti)kaons and baryons interact via the exchange of $`\sigma `$, $`\omega `$, and $`\rho `$ mesons. In addition to the decrease of $`m_K^{*}`$ with density here, the energy $`\omega _{K^{-}}`$ is lowered since the $`K^{-}`$ meson experiences an attractive vector $`\omega `$-meson potential due to the G-parity of antikaons. In contrast, the isovector $`\rho `$-meson field is repulsive for $`K^{-}`$ and inhibits $`K^{-}`$ condensation, delaying it to higher densities. In neutron stars at densities above normal nuclear matter values, most of the protons convert into neutrons by inverse $`\beta `$-decay, since it is energetically favorable to possess a small number of electrons. The ratio of electrons to nucleons, which in turn equals the proton fraction due to charge neutrality, is typically $`Y_e=Y_p\simeq 0.1-0.15`$. Therefore, neutron star matter is highly asymmetric and quite distinct from ordinary symmetric nuclear matter. This suggests that the $`\rho `$-meson field should have a strong repulsive effect on the $`K^{-}`$ meson. On the other hand, the $`\rho `$-meson field induces an attractive field for the $`\overline{K}^0`$ meson, which is the isodoublet partner of the $`K^{-}`$ meson. This should lower the effective energy of the $`\overline{K}^0`$ meson, $`\omega _{\overline{K}^0}`$, compared to that of the $`K^{-}`$ meson, thereby making $`\overline{K}^0`$ meson condensation more favorable in neutron star matter. The critical density for $`s`$-wave neutral $`\overline{K}^0`$ meson condensation is governed by the condition $`\omega _{\overline{K}^0}=0`$, and should depend sensitively on the equation of state (EOS). Using the variational chain summation method with two-nucleon and three-nucleon interactions, Akmal et al. recently predicted a neutral pion condensation phase transition in neutron star matter at a density of $`0.2`$ fm<sup>-3</sup>. So far no calculation of neutral $`\overline{K}^0`$ condensation and its impact on the gross properties of neutron stars has been performed. In this paper, we investigate the effect of antikaon condensation, with more emphasis on the $`\overline{K}^0`$ condensate, on the constitution and structure of neutron star matter in the standard meson exchange model. For this purpose, we employ the usual relativistic mean field Lagrangian for baryons interacting via meson exchanges and include also the self-interaction of the mesons. The antikaons are treated on the same footing as the baryons and interact by the exchange of the same mesons. For the $`K^{-}`$ meson, the Lagrangian density of Ref. is used, and it is extended to include the $`\overline{K}^0`$ meson. We shall demonstrate within this model that, apart from the $`K^{-}`$ meson (as already studied in Ref. ), the $`\overline{K}^0`$ condensate may also exist inside a neutron star and has a significant influence on the star properties. We shall also show that the threshold densities for the antikaon condensates are quite sensitive to the poorly known high density regime of the equation of state.
The paper is organized as follows. In section II we describe the relativistic mean field (RMF) model of strong interactions; the relevant equations for neutron star matter with antikaon condensates are summarized within this model. In section III the parameters of the model are discussed and the effects of antikaon condensates in neutron star matter are presented. Section IV is devoted to the summary and conclusions.
## II The Formalism
The starting point in the present approach is a relativistic field theoretical model of baryons and (anti)kaons interacting by the exchange of scalar $`\sigma `$, isoscalar-vector $`\omega `$, and isovector-vector $`\rho `$ mesons. The total hadronic Lagrangian can be written as the sum of the baryonic, kaonic, and leptonic parts, i.e. $`\mathcal{L}=\mathcal{L}_B+\mathcal{L}_K+\mathcal{L}_l`$. Considering all the charge states of the baryon octet $`B\in \{n,p,\mathrm{\Lambda },\mathrm{\Sigma }^+,\mathrm{\Sigma }^{-},\mathrm{\Sigma }^0,\mathrm{\Xi }^{-},\mathrm{\Xi }^0\}`$, the baryonic Lagrangian is given by
$`\mathcal{L}_B`$ $`=`$ $`{\displaystyle \sum _B}\overline{\psi }_B\left(i\gamma _\mu \partial ^\mu -m_B+g_{\sigma B}\sigma -g_{\omega B}\gamma _\mu \omega ^\mu -{\displaystyle \frac{1}{2}}g_{\rho B}\gamma _\mu 𝝉_B\cdot 𝝆^\mu \right)\psi _B`$ (3)
$`+{\displaystyle \frac{1}{2}}\left(\partial _\mu \sigma \partial ^\mu \sigma -m_\sigma ^2\sigma ^2\right)-U(\sigma )`$
$`-{\displaystyle \frac{1}{4}}\omega _{\mu \nu }\omega ^{\mu \nu }+{\displaystyle \frac{1}{2}}m_\omega ^2\omega _\mu \omega ^\mu -{\displaystyle \frac{1}{4}}𝝆_{\mu \nu }\cdot 𝝆^{\mu \nu }+{\displaystyle \frac{1}{2}}m_\rho ^2𝝆_\mu \cdot 𝝆^\mu .`$
Here $`\psi _B`$ denotes the Dirac spinor for baryon B with vacuum mass $`m_B`$ and isospin operator $`𝝉_B`$. The scalar self-interaction term
$$U(\sigma )=\frac{1}{3}g_2\sigma ^3+\frac{1}{4}g_3\sigma ^4,$$
(4)
is included to achieve a realistic compression modulus at normal nuclear matter density.
The Lagrangian density for (anti)kaons in the minimal coupling scheme is given by
$$\mathcal{L}_K=D_\mu ^{*}\overline{K}D^\mu K-m_K^2\overline{K}K,$$
(5)
with the covariant derivative $`D_\mu =\partial _\mu +ig_{\omega K}\omega _\mu +ig_{\rho K}𝝉_K\cdot 𝝆_\mu `$. The isospin doublet for the kaons is denoted by $`K\equiv (K^+,K^0)`$ and that for the antikaons by $`\overline{K}\equiv (K^{-},\overline{K}^0)`$. The effective mass of (anti)kaons in this minimal coupling scheme is given by
$$m_K^{*}=m_K-g_{\sigma K}\sigma ,$$
(6)
where $`m_K`$ is the bare kaon mass. In the mean field approximation (MFA) adopted here, the meson fields are replaced by their mean values and the baryon currents by those generated in the presence of the mean meson fields. For uniform and static matter within the MFA, only the time-like components of the vector fields and the isospin 3-component of the $`\rho `$-meson field have non-vanishing values. The mean meson fields are denoted by $`\sigma `$, $`\omega _0`$, and $`\rho _{03}`$.
For $`s`$-wave ($`𝐤=0`$) condensation of antikaons $`\overline{K}`$, the dispersion relation representing the in-medium energies of $`\overline{K}\equiv (K^{-},\overline{K}^0)`$ is given by
$$\omega _{K^{-},\overline{K}^0}=m_K^{*}-g_{\omega K}\omega _0\mp \frac{1}{2}g_{\rho K}\rho _{03},$$
(7)
where the isospin projections $`I_{3\overline{K}}=\mp 1/2`$ for the mesons $`K^{-}`$ ($`-`$ sign) and $`\overline{K}^0`$ ($`+`$ sign) are explicitly written in the expression. Since the $`\sigma `$ and $`\omega `$ fields generally increase with density, and both are attractive for antikaons, the effective energies of $`\overline{K}`$ are lowered in the nuclear medium. Moreover, in nucleons-only matter $`\rho _{03}\propto n_p-n_n`$ ($`n_p`$ and $`n_n`$ are the proton and neutron densities) is negative, thus the $`\rho `$-meson field inhibits $`K^{-}`$ condensation whereas it favors $`\overline{K}^0`$ condensation. Employing G-parity, which simply transforms the sign of the vector potential, the energies of kaons $`K\equiv (K^+,K^0)`$ are expressed as
$$\omega _{K^+,K^0}=m_K^{*}+g_{\omega K}\omega _0\pm \frac{1}{2}g_{\rho K}\rho _{03}.$$
(8)
The $`\omega `$-meson field, being repulsive for the kaons and dominating over the attractive $`\sigma `$-meson field at high densities, suggests that kaon condensation should be highly restricted.
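The sign structure of Eqs. (7) and (8) is a one-line computation; the following transcription (natural units) makes the isospin splitting explicit:

```python
def antikaon_energies(mK_star, g_wK, w0, g_rK, rho03):
    """s-wave in-medium antikaon energies of Eq. (7).  In neutron-rich
    matter rho03 < 0, so the rho term raises omega(K-) and lowers
    omega(K0bar), inhibiting the former condensate and favouring the latter."""
    w_Kminus = mK_star - g_wK * w0 - 0.5 * g_rK * rho03
    w_K0bar = mK_star - g_wK * w0 + 0.5 * g_rK * rho03
    return w_Kminus, w_K0bar
```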
The meson field equations in the presence of baryons and antikaon condensates can be derived from Eqs. (1)-(3) as
$`m_\sigma ^2\sigma `$ $`=`$ $`-{\displaystyle \frac{\partial U}{\partial \sigma }}+{\displaystyle \sum _B}g_{\sigma B}n_B^S+g_{\sigma K}{\displaystyle \sum _{\overline{K}}}n_{\overline{K}},`$ (9)
$`m_\omega ^2\omega _0`$ $`=`$ $`{\displaystyle \sum _B}g_{\omega B}n_B-g_{\omega K}{\displaystyle \sum _{\overline{K}}}n_{\overline{K}},`$ (10)
$`m_\rho ^2\rho _{03}`$ $`=`$ $`{\displaystyle \sum _B}g_{\rho B}I_{3B}n_B+g_{\rho K}{\displaystyle \sum _{\overline{K}}}I_{3\overline{K}}n_{\overline{K}}.`$ (11)
Here the scalar and number densities of baryon $`B`$ are, respectively,
$`n_B^S`$ $`=`$ $`{\displaystyle \frac{2J_B+1}{2\pi ^2}}{\displaystyle \int _0^{k_{F_B}}}{\displaystyle \frac{m_B^{*}}{(k^2+m_B^{*2})^{1/2}}}k^2dk,`$ (12)
$`n_B`$ $`=`$ $`(2J_B+1){\displaystyle \frac{k_{F_B}^3}{6\pi ^2}},`$ (13)
with effective baryonic mass $`m_B^{*}=m_B-g_{\sigma B}\sigma `$, Fermi momentum $`k_{F_B}`$, spin $`J_B`$, and isospin projection $`I_{3B}`$. Note that for $`s`$-wave $`\overline{K}`$ condensation the scalar and vector densities of antikaons are the same, and in the MFA are given by
$$n_{K^{-},\overline{K}^0}=2\left(\omega _{K^{-},\overline{K}^0}+g_{\omega K}\omega _0\pm \frac{1}{2}g_{\rho K}\rho _{03}\right)\overline{K}K=2m_K^{*}\overline{K}K.$$
(14)
The total energy density $`\epsilon =\epsilon _B+\epsilon _{\overline{K}}`$ has contributions from baryons, leptons, and antikaons. The baryonic plus leptonic energy density is
$`\epsilon _B`$ $`=`$ $`{\displaystyle \frac{1}{2}}m_\sigma ^2\sigma ^2+{\displaystyle \frac{1}{3}}g_2\sigma ^3+{\displaystyle \frac{1}{4}}g_3\sigma ^4+{\displaystyle \frac{1}{2}}m_\omega ^2\omega _0^2+{\displaystyle \frac{1}{2}}m_\rho ^2\rho _{03}^2`$ (16)
$`+{\displaystyle \sum _B}{\displaystyle \frac{2J_B+1}{2\pi ^2}}{\displaystyle \int _0^{k_{F_B}}}(k^2+m_B^{*2})^{1/2}k^2dk+{\displaystyle \sum _l}{\displaystyle \frac{1}{\pi ^2}}{\displaystyle \int _0^{k_l}}(k^2+m_l^2)^{1/2}k^2dk.`$
The last term corresponds to the energy of leptons, as required in neutron star matter. The energy density for antikaons is
$$\epsilon _{\overline{K}}=m_K^{*}\left(n_{K^{-}}+n_{\overline{K}^0}\right).$$
(17)
Since antikaons form $`s`$-wave Bose condensates, they do not directly contribute to the pressure so that the pressure is due to baryons and leptons only
$`P`$ $`=`$ $`-{\displaystyle \frac{1}{2}}m_\sigma ^2\sigma ^2-{\displaystyle \frac{1}{3}}g_2\sigma ^3-{\displaystyle \frac{1}{4}}g_3\sigma ^4+{\displaystyle \frac{1}{2}}m_\omega ^2\omega _0^2+{\displaystyle \frac{1}{2}}m_\rho ^2\rho _{03}^2`$ (19)
$`+{\displaystyle \frac{1}{3}}{\displaystyle \sum _B}{\displaystyle \frac{2J_B+1}{2\pi ^2}}{\displaystyle \int _0^{k_{F_B}}}{\displaystyle \frac{k^4dk}{(k^2+m_B^{*2})^{1/2}}}+{\displaystyle \frac{1}{3}}{\displaystyle \sum _l}{\displaystyle \frac{1}{\pi ^2}}{\displaystyle \int _0^{k_l}}{\displaystyle \frac{k^4dk}{(k^2+m_l^2)^{1/2}}}.`$
The pressure due to antikaons is contained entirely in the meson fields via their field equations (7)-(9).
In the interior of neutron stars, nucleons and electrons undergo the usual $`\beta `$-decay processes $`n\rightarrow p+e^{-}+\overline{\nu }_e`$ and $`p+e^{-}\rightarrow n+\nu _e`$. When the electron chemical potential becomes equal to the muon mass, electrons are converted to muons by $`e^{-}\rightarrow \mu ^{-}+\overline{\nu }_\mu +\nu _e`$. In the present study of cold neutron stars we may assume that the neutrinos have left the system freely. Therefore the chemical potentials of nucleons and leptons are governed by the equilibrium conditions
$$\mu _n-\mu _p=\mu _e=\mu _\mu ,$$
(20)
where $`\mu _n`$, $`\mu _p`$, $`\mu _e`$, and $`\mu _\mu `$ are the chemical potentials of neutrons, protons, electrons, and muons, with $`\mu _{n,p}=(k_{F_{n,p}}^2+m_N^{*2})^{1/2}+g_{\omega N}\omega _0+I_{3N}g_{\rho N}\rho _{03}`$. With the onset of $`\overline{K}`$ condensation, the strangeness changing processes that may occur are $`N\rightarrow N+\overline{K}`$ and $`e^{-}\rightarrow K^{-}+\nu _e`$, where $`N\equiv (n,p)`$ and $`\overline{K}\equiv (K^{-},\overline{K}^0)`$ denote the isospin doublets for nucleons and antikaons, respectively. The requirement of chemical equilibrium yields
$`\mu _n-\mu _p`$ $`=`$ $`\mu _{K^{-}}=\mu _e,`$ (21)
$`\mu _{\overline{K}^0}`$ $`=`$ $`0,`$ (22)
where $`\mu _{K^{-}}`$ and $`\mu _{\overline{K}^0}`$ are respectively the chemical potentials of $`K^{-}`$ and $`\overline{K}^0`$. The above conditions dictate the onset of antikaon condensation. When the effective energy of the $`K^{-}`$ meson, $`\omega _{K^{-}}`$, equals its chemical potential, $`\mu _{K^{-}}`$, which in turn is equal to the electrochemical potential $`\mu _e`$, a $`K^{-}`$ condensate is formed, while $`\overline{K}^0`$ condensation follows when its in-medium energy satisfies the condition $`\omega _{\overline{K}^0}=\mu _{\overline{K}^0}=0`$. When other baryons in the form of hyperons are present in the neutron star matter, the standard $`\beta `$-decay processes for nucleons generalize to the form $`B_1\rightarrow B_2+l+\overline{\nu }_l`$ and $`B_2+l\rightarrow B_1+\nu _l`$, where $`B_1`$ and $`B_2`$ are baryons and $`l`$ is a lepton. All the equilibrium conditions involving the baryon octet may then be summarized by a single generic equation
$$\mu _B=\mu _nq_B\mu _e,$$
(23)
where $`\mu _B`$ and $`q_B`$ are, respectively, the chemical potential and electric charge of baryon species $`B`$. The above relations indicate that, in chemical equilibrium, two independent chemical potentials, $`\mu _n`$ and $`\mu _e`$, exist which correspond to baryon number and electric charge conservation. For neutron star matter we need to include also the charge neutrality condition, which in presence of antikaon condensate is expressed as
$$\underset{B}{}q_Bn_Bn_K^{}n_en_\mu =0.$$
(24)
In the present calculation we consider antikaon condensation as a second order phase transition. In principle in neutron star matter with two conserved charges, baryon and electric charge, the condensation can also be treated as a first order phase transition . In this situation, the star would have at low density a normal phase of baryons and leptons followed by a mixed phase of $`K^{}`$ condensate and baryons, and possibly a pure $`K^{}`$ condensed phase at a higher density. However in the present situation this would be more complicated as $`K^{}`$ and $`\overline{K}^0`$ have to be treated as two separate phases, and especially when the threshold density of one condensate may be reached in the mixed phase of nucleons and the other condensate. Treatment of such a triple phase would be numerically very involved and is postponed to a future publication.
## III Results and discussions
In the effective field theoretic approach adopted here, three distinct sets of coupling constants for nucleons, kaons, and hyperons associated with the exchange of $`\sigma `$, $`\omega `$, and $`\rho `$ mesons are required. The nucleon-meson coupling constants generated by reproducing the nuclear matter saturation properties are taken from Glendenning and Moszkowski of Ref. . This set is referred to as GM1 and listed in Table I.
Let us now determine the kaon-meson coupling constants. According to the quark and isospin counting rule, the vector coupling constants are given by
$$g_{\omega K}=\frac{1}{3}g_{\omega N}\mathrm{and}g_{\rho K}=g_{\rho N}.$$
(25)
The scalar coupling constant is obtained from the real part of the $`K^{}`$ optical potential at normal nuclear matter density
$$U_{\overline{K}}\left(n_0\right)=g_{\sigma K}\sigma g_{\omega K}\omega _0.$$
(26)
The negative sign in the vector meson potential is due to G-parity. The critical density of $`\overline{K}`$ condensation should therefore strongly depend on the $`K^{}`$ optical potential. Recent fits to the $`K^{}`$ atomic data indicate a strong attractive potential in dense matter. More recently, in high energy heavy ion collisions enhanced subthreshold $`K^{}`$ production with a steep spectral slope was found . In a simplified model this may be interpreted as a strong $`K^{}`$ attraction in nuclear matter. On the other hand, available $`K^{}N`$ scattering length suggests a repulsive interaction. The reason for this apparent ambiguity is the presence of $`\mathrm{\Lambda }(1405)`$-resonance which is considered to be an unstable $`\overline{K}N`$ bound state just below the $`K^{}p`$ threshold that makes the interpretation of the data more complicated. Recent analysis of $`K^{}`$ atomic data using a hybrid model which combines the relativistic mean field approach in the nuclear interior and a phenomenological density dependent potential at low density showed that the real part of $`K^{}`$ optical potential could be as large as $`U_{\overline{K}}=180\pm 20`$ MeV at normal nuclear matter density while being slightly repulsive at low density in accordance with the low density theorem. Coupled channel formalism which automatically generates the $`\mathrm{\Lambda }(1405)`$-resonance and successfully describes the low energy $`K^{}p`$ scattering data yields an attractive potential for the $`K^{}`$ meson of $`U_{\overline{K}}(n_0)100`$ MeV . In a chirally motivated coupled channel approach, the $`\overline{K}`$ optical potential depth was found to be $`U_{\overline{K}}(n_0)=120`$ MeV . The reason for such a spread in the predicted values lies in the diversity of treating the $`\mathrm{\Lambda }(1405)`$-resonance. We have therefore determined the $`K\sigma `$ coupling constant $`g_{\sigma K}`$ for a set of values of $`U_{\overline{K}}(n_0)`$ starting from $`100`$ MeV to $`180`$ MeV. This is listed in Table II for the set GM1. Since the $`\omega `$-meson potential for $`\overline{K}`$ in this model is $`V_\omega ^K(n_0)=g_{\omega K}\omega _072`$ MeV, a rather large sigma-kaon coupling constant of $`g_{\sigma K}=3.674`$ is required to reproduce a depth of $`180`$ MeV. Note that only for this large depth the value of the scalar coupling is similar to the prediction in the simple quark model i.e., $`g_{\sigma K}=g_{\sigma N}/3`$.
As an alternative approach the kaon-meson coupling constants may be determined from the $`s`$-wave kaon-nucleon ($`KN`$) scattering length. At the tree level, the isospin averaged $`KN`$ scattering length is given by
$$\overline{a}_{KN}=\frac{1}{4}a_0^{I=0}+\frac{3}{4}a_0^{I=1}=\frac{m_K}{4\pi (1+m_K/m_N)}\left(\frac{g_{\sigma K}g_{\sigma N}}{m_\sigma ^2}2\frac{g_{\omega K}g_{\omega N}}{m_\omega ^2}\right).$$
(27)
From the low density theorem, the kaon optical potential depth is given by
$$U_K=\frac{2\pi }{m_K}\left(1+\frac{m_K}{m_N}\right)\overline{a}_{KN}n_B.$$
(28)
Using the experimental values of $`a_0^{I=1}=0.31`$ fm and $`a_0^{I=0}=0.09`$ fm, the scattering length is $`\overline{a}_{KN}=0.255`$ fm. The $`K`$ optical potential depth is then $`U_K+29`$ MeV at $`n_0=0.153`$ fm<sup>-3</sup> and is repulsive. In the GM1 parameter set, for $`\overline{K}`$ depths of $`U_{\overline{K}}(n_0)=100(180)`$ MeV, we obtain $`KN`$ average scattering lengths and $`K`$ optical potential depths of $`\overline{a}_{KN}=0.468(0.319)`$ fm and $`U_K(n_0)+44(36)`$ MeV. With $`U_{\overline{K}}(n_0)=180`$ MeV, the attractive potential obtained for kaon optical potential depth (apart from rather small vector potential of (anti)kaon $`V_\omega ^K`$) is due to the neglect of $`\mathrm{\Lambda }(1405)`$-resonance which is beyond the scope of this paper.
In the present study we shall be mostly concerned with antikaon condensation in nucleons-only matter. In general, the presence of hyperons delays the onset of $`\overline{K}`$ condensation to much higher densities . However, for orientation we shall only provide briefly the effects of hyperons on antikaon condensation in neutron star matter. The vector coupling constants for the hyperons are obtained from $`SU(6)`$ symmetry
$`{\displaystyle \frac{1}{3}}g_{\omega N}`$ $`=`$ $`{\displaystyle \frac{1}{2}}g_{\omega \mathrm{\Lambda }}={\displaystyle \frac{1}{2}}g_{\omega \mathrm{\Sigma }}=g_{\omega \mathrm{\Xi }},`$ (29)
$`g_{\rho N}`$ $`=`$ $`{\displaystyle \frac{1}{2}}g_{\rho \mathrm{\Sigma }}=g_{\rho \mathrm{\Xi }};g_{\rho \mathrm{\Lambda }}=0.`$ (30)
The $`\sigma `$-meson couplings to the hyperons ($`Y`$) can be obtained from the well-depth of $`Y`$ in saturated nuclear matter
$$U_Y^N\left(n_0\right)=g_{\sigma Y}\sigma +g_{\omega Y}\omega _0.$$
(31)
Analysis of energy levels in $`\mathrm{\Lambda }`$-hypernuclei suggests a well-depth of $`\mathrm{\Lambda }`$ in nuclear matter of $`U_\mathrm{\Lambda }^N(n_0)30`$ MeV. For the $`\mathrm{\Sigma }`$ potential the situation is unclear, since there is no evidence for bound $`\mathrm{\Sigma }`$-hypernuclei. The prediction range from completely bound $`\mathrm{\Sigma }`$’s with $`U_\mathrm{\Sigma }^N(n_0)30`$ MeV to unbound with $`U_\mathrm{\Sigma }^N(n_0)+30`$ MeV. A few events in emulsion experiments with $`K^{}`$ beams have been attributed to the formation of $`\mathrm{\Xi }^{}`$-hypernuclei from which the depth of $`\mathrm{\Xi }`$ in nuclear matter has been extracted to be $`U_\mathrm{\Xi }^N(n_0)28`$ MeV. Recently, a few $`\mathrm{\Xi }`$-hypernuclei events have been identified in ($`K^{},K^+`$) reaction from which a tighter constraint on $`\mathrm{\Xi }`$ well-depth of $`18`$ MeV has been imposed . The coupling constants $`g_{\sigma Y}`$ for hyperons have been obtained using these depths.
We now present results for neutron star matter containing nucleons, leptons, and $`\overline{K}`$ condensates for the parameter set GM1 of Tables I and II. In Fig. 1, the nucleon scalar and vector potentials, and the electron chemical potential are displayed as a function of baryon density normalized to the equilibrium value of $`n_0`$ for $`K^{}`$ optical potential depth of $`U_{\overline{K}}(n_0)=120`$ MeV. The solid lines refer to results for nucleons-only matter and the dashed ones correspond to calculation with $`\overline{K}`$ condensate. With $`\overline{K}`$ condensation we find that the scalar potential is enhanced while vector $`\omega `$\- and $`\rho `$-meson potentials are decreased. The kinks in the dashed lines correspond to the onset of $`\overline{K}^0`$ condensation when the meson potentials deviate further from nucleons-only results. The variation in the meson fields may be attributed to the occurrence of the source terms due to $`\overline{K}`$ condensation in the field equations of motion (Eqs. (7)-(9)).
The populations of neutron star matter with and without $`\overline{K}`$ condensation are shown in Fig. 2. In the top panel, the particle abundances of nucleons-only matter are shown. Here all the particle fractions increase with baryon density. The behavior of the proton (and lepton) fractions are determined by nuclear symmetry energy which in turn is controlled by the $`\rho `$-meson field in the RMF models. In the central panel we exhibit the particle abundances of the star matter with $`K^{}`$ condensation at $`U_{\overline{K}}(n_0)=120`$ MeV, which in fact is the first particle to condense among the two antikaons $`\overline{K}(K^{},\overline{K}^0)`$. Once $`K^{}`$ condensate sets in at $`3.05n_0`$, it rapidly increases with density replacing the leptons in maintaining charge neutrality. The proton density which then becomes equal to $`K^{}`$ density eventually turns out to be larger than the neutron density; the neutron density nearly freezes to a constant value. This can be understood from the threshold condition of $`K^{}`$ condensation $`\omega _K^{}=\mu _n\mu _p`$ (see Eq. (17)). Substituting explicitly these values, we obtain
$$\left(k_{F_p}^2+m_N^2\right)^{1/2}=\left(k_{F_n}^2+m_N^2\right)^{1/2}+\left(m_K^{}+\frac{1}{3}g_{\omega N}\omega _0\frac{1}{2}g_{\rho N}\rho _{03}\right).$$
(32)
Since at high densities, the $`\omega `$-meson potential dominates the $`\sigma `$-meson potential and $`\rho _{03}n_pn_n`$ is negative, the term within the bracket is positive. It is therefore evident that $`k_{F_p}>k_{F_n}`$. This has been also observed in Ref. . Hence the net effect of $`K^{}`$ condensation is to lower the electron chemical potential $`\mu _e`$ and the nuclear symmetry energy (see also Fig. 1). In the bottom panel of Fig. 2, we show the particle fractions with both $`K^{}`$ and $`\overline{K}^0`$ condensates. Interestingly, with the onset of $`\overline{K}^0`$ condensate at $`4.83n_0`$, the neutron and proton abundances become identical resulting in an isospin saturated symmetric nuclear matter. This can be easily understood by substituting the threshold condition for $`\overline{K}^0`$ condensation, i.e. $`\omega _{\overline{K}^0}=0`$, from Eq. (5) into Eq. (27). From these relations it is clear that $`\overline{K}^0`$ condensate formation enforces the condition $`k_{F_p}=k_{F_n}`$. Alternatively, one may interpret from the central panel of Fig. 2 that once the neutron and proton densities (Fermi momenta) become identical, the isodoublet partner of $`K^{}`$ meson, i.e. $`\overline{K}^0`$ should be populated. The electrochemical potential $`\mu _e=\mu _n\mu _p`$ then merges with the magnitude of the $`\rho `$-meson potential $`|V_\rho ^N|=|g_{\rho N}\rho _{03}|`$ with $`\rho _{03}n_{\overline{K}^0}n_K^{}`$, as is evident from Fig. 1. At densities above $`7.32n_0`$, the antikaon densities are equal which is a general feature of isospin driving force towards symmetry. The $`\rho `$-meson field and thereby $`\mu _e`$ vanishes. As a consequence we are left with a system composed of exactly equal amount of $`npK^{}\overline{K}^0`$, i.e. a perfectly symmetric matter of nucleons and antikaons. For this matter, the strangeness fraction $`f_S=|S|/B=(n_K^{}+n_{\overline{K}^0})/n_B`$ is as large as unity.
In Fig. 3, the effective masses of the (anti)kaons, $`m_K^{}`$, are shown as a function of normalized density for various values of $`K^{}`$ optical potential depths from $`100`$ MeV to $`180`$ MeV in steps of 20 MeV. Note that $`\overline{K}`$ is only a test particle in the field of nucleons and appear physically only after condensation. The decrease of $`m_K^{}`$ is found to be quite sensitive to the $`K^{}`$ depth which determines the onset of condensation. At a given $`U_{\overline{K}}(n_0)`$, the (anti)kaon mass decreases more strongly with successive condensation of $`K^{}`$ and $`\overline{K}^0`$ mesons. This is a manifestation of increase in the $`\sigma `$-meson field with condensation as can be seen in Fig. 1.
Figure 4 shows the $`s`$-wave condensation energies for $`K^{}`$ meson, $`\omega _K^{}`$ (solid lines), and for $`\overline{K}^0`$ meson, $`\omega _{\overline{K}^0}`$ (dashed lines), as a function of density of various antikaon optical potential depths in the GM1 set. The effective energies follow a similar qualitative trend as $`m_K^{}`$. However, for depths up to $`U_{\overline{K}}(n_0)=140`$ MeV, the effective kaon masses do not vary significantly (see Fig. 3). Therefore, considering Eq. (5), the large drop in $`\omega _{K^{},\overline{K}^0}`$ with density is dominated by $`\omega `$-meson that influences $`\overline{K}`$ condensation. For higher values of $`U_{\overline{K}}`$, the large decrease in $`m_K^{}`$s due to large couplings $`g_{\sigma K}`$ are primarily responsible for condensation. For any value of $`U_{\overline{K}}`$, the isovector potential of nucleons shifts the energy of $`\overline{K}^0`$ meson below that of the $`K^{}`$ meson by $`120`$ MeV. At a given $`U_{\overline{K}}`$, condensation of $`K^{}`$ meson occurs when $`\omega _K^{}`$ intersects the electron chemical potential $`\mu _e`$ in absence of a condensate (shown by dotted line). For all values of $`\overline{K}`$ depth we find that $`K^{}`$ meson is formed before the threshold density for $`\overline{K}^0`$ condensate, i.e. $`\omega _{\overline{K}^0}=0`$, is reached. The critical densities for $`\overline{K}`$ condensates are collected in Table III for different values of $`U_{\overline{K}}(n_0)`$. A careful investigation of Fig. 4 reveals that beyond the densities for $`K^{}`$ condensates (and before $`\overline{K}^0`$ threshold densities), the rate of decrease of the energies $`\omega _{K^{},\overline{K}^0}`$ are attenuated. This can be traced back to the density-dependent behavior of the meson fields in Fig. 1. Above $`K^{}`$ formation density the decrease in $`\omega `$-meson field dominates over the increase in the $`\sigma `$-meson field. This leads to smaller rate of decrease of $`\omega _K^{}`$ and $`\omega _{\overline{K}^0}`$. Moreover, the decrease in the $`\rho `$-meson field due to $`K^{}`$ condensation causes even a smaller decrease of $`\overline{K}^0`$ energy with density. Thus the $`K^{}`$ meson condensate with the help of the vector mesons conspires to delay the appearance of its isospin partner $`\overline{K}^0`$ meson to a higher density. At densities above $`\overline{K}^0`$ condensate, the $`K^{}`$ energy again starts to decrease due to dramatic fall of $`\rho `$-meson potential.
The equation of state (EOS), pressure $`P`$ versus the energy density $`\epsilon `$ is displayed in Fig. 5 for the GM1 parameter set. The solid line represents calculation for nucleons-only star matter while the dashed lines refer to those with $`\overline{K}`$ formation for different values of $`U_{\overline{K}}(n_0)`$. The strong attraction imparted by antikaon condensation makes the EOS softer. The kinks at higher densities in the dashed lines correspond to the appearance of $`\overline{K}^0`$ condensation which further soften the EOS. Also the softness is quite sensitive to the choice of the $`\overline{K}`$ optical potential depth. A large $`U_{\overline{K}}(n_0)`$ corresponds to larger attraction and thereby an enhanced softening of the EOS.
We have used the results of Baym, Pethick and Sutherland to describe the crust consisting of leptons and nuclei at the low-density ($`n_B<0.001`$ fm<sup>-3</sup>) EOS. For the mid-density regime ($`0.001<n_B<0.08`$ fm<sup>-3</sup>) the results of Negele and Vautherin are employed. Above this density, the EOS for the relativistic models have been adopted. It is worth mentioning here that though we have treated $`\overline{K}`$ condensation as a second order phase transition, for higher $`U_{\overline{K}}(n_0)`$ values the incompressibility $`K=9dP/dn_B`$ at the threshold is found to be negative (see Fig. 5). This represents a first order phase transition where we have employed the Maxwell construction to maintain a positive compressibility.
The static neutron star sequences representing the stellar masses $`M/M_{}`$ and the corresponding central energy density $`\epsilon _c`$ are shown in Fig. 6 for nucleons-only matter (solid line) and matter with further inclusion of $`\overline{K}`$ condensate (dashed line) for several values of $`U_{\overline{K}}(n_0)`$. With the occurrence of $`\overline{K}`$ condensation, the maximum masses of neutron stars are reduced due to softening of the EOS. The maximum masses $`M_{\mathrm{max}}`$ and the central densities $`u_{\mathrm{cent}}=n_{\mathrm{cent}}/n_0`$ of the stars are listed in Table III. Interestingly, it is observed that for $`U_{\overline{K}}(n_0)140`$ MeV, the threshold densities for neutral $`\overline{K}^0`$ condensate occur before the central densities for the maximum mass stars. This implies that a significant region of these maximum mass stars will contain $`\overline{K}^0`$ condensate along with $`K^{}`$ condensate. Note that for large values of $`U_{\overline{K}}(n_0)`$, the requirement of Maxwell construction causes a negligible variation of the masses (and radii) of the stars with central density.
Apart from the $`\overline{K}`$ optical potential depth, the critical densities for antikaon condensation depend sensitively on the nuclear equation of state at high density. The GM1 set is obtained from a Walecka-type Lagrangian with a self-interaction term for the scalar meson. Because of the linear density-dependence of the vector $`\omega `$-meson field, it was found that the vector potential overestimates the relativistic Bruckner-Hartree-Fock (RBHF) results at high densities. Bodmer first proposed an additional nonlinear $`\omega `$-meson term in the RMF model of the form
$$_{\omega ^4}=\frac{1}{4}g_4\left(\omega _\mu \omega ^\mu \right)^2.$$
(33)
Later, it was found by Sugahara and Toki that this modification led to a reasonable agreement with the RBHF results. The model parameters obtained by fitting experimental data for binding energies and charge radii of heavy nuclei is referred to as TM1. The parameters of this set are given in Table I. The $`\sigma K`$ coupling constants $`g_{\sigma K}`$ for the TM1 set listed in Table II are found to be much smaller than GM1. This stems from rather large $`\omega K`$ potential of $`V_\omega ^K(n_0)=91`$ MeV for the TM1 set compared to 72 MeV for GM1. In the TM1 set, for $`\overline{K}`$ depths of $`U_{\overline{K}}(n_0)=100(180)`$ MeV, we obtain $`KN`$ average scattering lengths and kaon optical potential depths of $`\overline{a}_{KN}=0.829(0.373)`$ fm and $`U_K(n_0)+82(+3)`$ MeV. In this set even for $`U_{\overline{K}}(n_0)=180`$ MeV, the kaon optical potential depth is repulsive due to rather large $`V_\omega ^K(n_0)`$ in contrast to GM1 set.
The equation of state for nucleons-only star matter with and without antikaon condensation is shown in Fig. 7 for the TM1 set. In contrast to the GM1 set, the EOS for $`np`$-matter at high densities is much softer here. This is due to $`n_B^{1/3}`$ variation of the vector potential compared to simple $`n_B`$ dependence in the GM1 set. Due to a soft EOS, the meson fields in the TM1 set vary rather slowly with density leading to delayed appearance of $`\overline{K}`$ condensate (see Table III). The stellar sequence for this set is shown in Fig. 8. It is found that the masses of the stars with only nucleons and leptons are smaller than those in GM1. On the contrary, with $`\overline{K}`$ condensate the $`M_{\mathrm{max}}`$ of stars for $`U_{\overline{K}}(n_0)120`$ MeV are in fact larger in the TM1 model compared to the GM1 set. The delayed formation of $`\overline{K}`$ in this model suppresses the softening effect of antikaons. In particular, only for $`\overline{K}`$ depth as large as $`160`$ MeV, a $`\overline{K}^0`$ condensate can be formed in the limiting mass star.
The above study suggests that a stiffer EOS is more efficient for antikaon condensation in neutron star matter. The small values of $`n_0=0.145`$ fm<sup>-3</sup> and $`m_N^{}/m_N=0.634`$ with large $`a_{\mathrm{asy}}=36.9`$ MeV in the TM1 set should have, in principle, provided a stiffer EOS . However, the nonlinear $`\omega `$-meson term in this model entails a substantial softening of the EOS at high densities. To investigate the effects of a rather stiff EOS on $`\overline{K}`$ condensation with acceptable nuclear matter properties, we have generated another set of parameters using the Lagrangian of Eq. (1) (i.e. excluding the nonlinear $`\omega `$-meson term of Eq. (28)) and reproducing the saturation properties of the TM1 set. We call this set as GMT and it is presented in Table I. It is to be noted that the coefficients $`g_3`$ in the sets GM1 and GMT are negative. This is associated with the well-known problem that the scalar potential is unbounded from below. We have however found that in these effective field theoretical models the equations of state for the two sets are continuous (as are exhibited) over a wide density range relevant to the neutron star interior. The $`g_{\sigma K}`$ coupling constants presented in Table II for the GMT set are nearly the same as TM1. This arises from similar values of $`\omega K`$ potential $`V_\omega ^K`$ at saturation density although they should differ considerably at higher densities. For the same reason, the values of $`KN`$ scattering lengths and $`K`$ optical potential depths at $`n_0`$ are almost identical in the TM1 and GMT sets.
In Fig. 9, we display the variation of effective kaon mass $`m_K^{}`$ (top panel) and $`\overline{K}`$ condensate energies (bottom panel) for various values of $`U_{\overline{K}}(n_0)`$ for the GMT set. This should be contrasted with Figs. 3 and 4 for the GM1 set. It is found that $`m_K^{}`$ has a smaller decrement with density even for $`U_{\overline{K}}(n_0)=180`$ MeV. Therefore, the variation of $`\omega `$-meson field is chiefly responsible for the large drop in $`\omega _{K^{},\overline{K}^0}`$ in the GMT set. The EOS and the stellar sequences are shown in Figs. 10 and 11 with and without $`\overline{K}`$ condensation. The results for the GMT set are summarized in Table IV. The $`M_{\mathrm{max}}`$ for nucleons-only stars is largest in the GMT set. Since the EOS is very stiff, the chemical potentials for nucleons increase rapidly with density which shift the threshold densities for $`\overline{K}`$ condensation to much smaller values. We find that $`\overline{K}^0`$ condensation occurs well inside the maximum mass stars for all values of $`U_{\overline{K}}(n_0)`$, except for $`100`$ MeV. To delineate the effects of $`\overline{K}^0`$ condensation from $`K^{}`$, we also give in Table IV the maximum masses and corresponding central densities for stars allowing only $`K^{}`$ condensation. It is found that the softening due to $`\overline{K}^0`$ condensate could reduce the maximum mass from that of the $`K^{}`$ condensate further by $`17\%`$ at $`U_{\overline{K}}(n_0)=180`$ MeV. The present investigation points to the fact that a stiff EOS (as for the GMT set) favors strongly the formation of $`\overline{K}^0`$ condensate in neutron stars. In other words, from G-parity one can infer that a soft EOS would enhance kaon production. Indeed, more kaon production was found in heavy ion collisions for a soft EOS rather than a stiff one.
We have discussed in Fig. 3 that $`K^{}`$ condensation which occurs at earlier density than $`\overline{K}^0`$ reduces the magnitude of the vector potentials. Consequently, the effective energy $`\omega _{\overline{K}^0}`$ of $`\overline{K}^0`$ is enhanced shifting its threshold to a higher density. This effect is illustrated in Table V where the stellar properties of stars having only $`\overline{K}^0`$ condensate is considered. It is evident that $`\overline{K}^0`$ formation takes place much earlier in the absence of $`K^{}`$ condensate (see Tables III and IV). Since $`u_{\mathrm{cr}}(\overline{K}^0)<u_{\mathrm{cent}}`$, sizeable amount of $`\overline{K}^0`$ condensate would be present in stars for all $`\overline{K}`$ optical potential depths in the GM1 and GMT sets. However, $`u_{\mathrm{cr}}(\overline{K}^0)`$ being higher than $`u_{\mathrm{cr}}(K^{})`$, and moreover, $`\overline{K}^0`$ cannot replace leptons in maintaining charge neutrality, the EOS is stiffer and the maximum masses of the stars are larger here than those with $`K^{}`$ condensate only.
In Fig. 12, the mass-radius relationship is shown for nucleons-only stars and for stars with different values of $`U_{\overline{K}}(n_0)`$ for the GM1, TM1, and GMT sets. It is found that in all models the limiting mass stars without $`\overline{K}`$ condensate have the largest maximum masses with smallest radii. With increasing $`\overline{K}`$ optical potential depth although the maximum masses decrease, the corresponding radii are however found to be similar for most of the stars in the TM1 and GMT sets. In contrast, stars with smaller masses have larger radii in the GM1 set.
We now briefly consider the case when hyperons are allowed in addition to nucleons. At higher densities when the Fermi energy of nucleons exceeds the effective mass of a hyperon minus its associated interaction, the conversion of nucleons to hyperon is energetically favorable. The hyperon-meson coupling constants are obtained as discussed before. Unless otherwise mentioned we use an attractive $`\mathrm{\Sigma }`$ well-depth of $`30`$ MeV. With this choice the $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }^{}`$ are the first two strange particles that appear at roughly the same density of $`2n_0`$ in all the sets. More massive and positively charged particles than these appear at higher densities. Since the conversion of nucleons to hyperons relieves the Fermi pressure of nucleons, the equation of state is softened. The EOSs for star matter with only nucleons and hyperons are depicted by dash-dotted lines in Figs. 5 and 7 for the GM1 and TM1 sets. The maximum masses and the corresponding central densities are given in Table III. It is evident from the figures that the hyperons emerge at densities much earlier than the threshold density for $`\overline{K}`$ condensate even for $`U_{\overline{K}}(n_0)=180`$ MeV. Since we have seen that a soft EOS prevents $`\overline{K}`$ condensation, the hyperons therefore postpone $`\overline{K}`$ formation to higher densities. This effect is further accentuated because the negatively charged $`\mathrm{\Sigma }^{}`$ and $`\mathrm{\Xi }^{}`$ can replace electrons in maintaining charge neutrality. The resulting drop in $`\mu _e`$ would mean a smaller value of $`\omega _K^{}`$ is required for $`K^{}`$ condensation which leads to higher threshold density. However, we find that in presence of hyperons the rate of decrease of the isovector potential is not so severe compared to that of $`\mu _e`$. Therefore, in a hyperonic medium the shift in the threshold density for $`\overline{K}^0`$ condensate is much smaller than that of $`K^{}`$. For example, in the GM1 set, the critical densities are $`u_{\mathrm{cr}}(K^{})=6.00n_0`$ and $`u_{\mathrm{cr}}(\overline{K}^0)=6.61n_0`$ for $`U_{\overline{K}}(n_0)=120`$ MeV. (This is to be compared with the results of Table III for stars with nucleons and antikaons.) These values are however beyond the central density of $`5.35n_0`$ for maximum mass star containing all baryons but no antikaons. On the other hand, for $`U_{\overline{K}}(n_0)=140`$ MeV, the critical densities of $`u_{\mathrm{cr}}(K^{})=3.41n_0`$ and $`u_{\mathrm{cr}}(\overline{K}^0)=4.42n_0`$ imply that both $`K^{}`$ and $`\overline{K}^0`$ condensates will be present in hyperonic stars. This means that even though hyperons shift the threshold densities of $`K^{}`$ and $`\overline{K}^0`$ condensates to higher values, the possibility of finding hyperon stars with $`\overline{K}^0`$ condensate (in association with $`K^{}`$) is enhanced for large $`\overline{K}`$ optical potential depths.
The standard mean field model used in the hyperon sector was found to be inadequate to describe the strongly attractive hyperon-hyperon interaction observed in double $`\mathrm{\Lambda }`$-hypernuclei. Since the core of a neutron star can be hyperon-rich , the hyperon-hyperon interaction is expected to be important. This may be accounted by considering two additional mesons, the scalar meson $`f_0(975)`$ (denoted by $`\sigma ^{}`$ hereafter) and the vector meson $`\varphi (1020)`$ which couple only to the hyperons ($`Y`$) . The corresponding Lagrangian is
$`^{YY}`$ $`=`$ $`{\displaystyle \underset{B}{}}\overline{\psi }_B\left(g_{\sigma ^{}B}\sigma ^{}g_{\varphi B}\gamma _\mu \varphi ^\mu \right)\psi _B`$ (35)
$`+{\displaystyle \frac{1}{2}}\left(_\mu \sigma ^{}^\mu \sigma ^{}m_\sigma ^{}^2\sigma ^2\right){\displaystyle \frac{1}{4}}\varphi _{\mu \nu }\varphi ^{\mu \nu }+{\displaystyle \frac{1}{2}}m_\varphi ^2\varphi _\mu \varphi ^\mu .`$
The $`\varphi Y`$ couplings are obtained from the SU(6) relation
$$2g_{\varphi \mathrm{\Lambda }}=2g_{\varphi \mathrm{\Sigma }}=g_{\varphi \mathrm{\Xi }}=\frac{2\sqrt{2}}{3}g_{\omega N}.$$
(36)
The $`\sigma ^{}Y`$ couplings are obtained by fitting them to a well-depth $`U_Y^{(Y^{})}`$ for a $`Y`$ in a $`Y^{}`$-bath . Note that the nucleons do not couple to the strange mesons, i.e. $`g_{\sigma ^{}N}=g_{\varphi N}=0`$. The EOS and the star sequence are shown for this situation in Figs. 5 and 6 (dotted lines) for the GM1 set without antikaons. The additional attraction imparted by the $`\sigma ^{}`$ field makes the EOS softer at moderately high densities. At very high densities, the EOS merges with that without the strange mesons due to dominance of the repulsive $`\varphi `$ field over the $`\sigma ^{}`$ field. The maximum star mass obtained here is $`1.721M_{}`$ at a central density of $`5n_0`$.
The rather large values of the meson fields ($`\sigma ^{},\varphi `$) in a hyperonic star should influence antikaon condensation densities. From the strangeness content of $`\overline{K}`$ ($`s`$-quark content) it is evident that the $`\sigma ^{}`$ field is attractive and the $`\varphi `$ field is repulsive for antikaons as also for the hyperons. As in Ref. , the $`\varphi K`$ coupling is obtained from the SU(3) relation, $`g_{\varphi K}=6.04/\sqrt{2}`$, and the $`\sigma ^{}K`$ coupling from the $`f_0(975)`$ decay, $`g_{\sigma ^{}K}=2.65`$. With these couplings and for $`U_{\overline{K}}(n_0)=140`$ MeV, we find that the relatively strong repulsive $`\varphi `$ field in a hyperon-rich medium prevents antikaon condensation even at densities as large as $`7.40n_0`$ (in the GM1 set) where the effective nucleon mass $`m_N^{}`$ drops to zero . Only for large $`\overline{K}`$ optical depth of $`160`$ MeV, and thereby large $`g_{\sigma K}`$ value, critical densities of $`u_{\mathrm{cr}}(K^{})=2.76n_0`$ and $`u_{\mathrm{cr}}(\overline{K}^0)=4.13n_0`$ are reached.
The above discussions of hyperonic star matter are confined to $`\mathrm{\Sigma }`$-depth of $`30`$ MeV. Repulsive depth of $`U_\mathrm{\Sigma }^N(n_0)+30`$ MeV has been also estimated . On refitting the scalar coupling $`g_{\sigma \mathrm{\Sigma }}`$ to this depth and limiting to the Lagrangian of Eq. (1) only, we find that in all the models $`\mathrm{\Sigma }^{}`$ formation occurs at a much higher density and the resulting EOS is now stiffer compared to those with attractive $`\mathrm{\Sigma }`$-depth. For example, in the GM1 set, a star of maximum mass $`1.798M_{}`$ is formed with a central density of $`5.13n_0`$. All the $`\mathrm{\Sigma }`$ particles do not exist in this star while the $`\mathrm{\Xi }^{}`$ occurs at a early density of $`2.6n_0`$.
In this paper, we have employed the relativistic mean field model where the nucleon-meson coupling constants are fitted to certain bulk properties at normal nuclear matter density. The model is then extended to high density regime to investigate the effects of antikaon condensation. The underlying assumption in the RMF models is that the meson field operators in the Euler-Lagrange equations are replaced by their mean values because the source terms in those equations increase with baryon density. Also, the coupling constants of the RMF models are density independent. There are other approaches where gross properties of dense matter relevant to neutron stars are calculated in the framework of nonrelativistic Brueckner-Hartree-Fock and variational chain summation (VCS) calculations using modern nucleon-nucleon interaction . Recently, Akmal, Pandharipande and Ravenhall (APR) have studied neutron star properties in the VCS method using a new Argonne $`V_{18}`$ nucleon-nucleon interaction (which fit all the data of the Nijmegen data base) with relativistic boost corrections and a fitted three-nucleon interaction. This calculation represents at present perhaps the most sophisticated many-body approach to dense matter. The bulk properties obtained in the APR model are binding energy $`E/B=16`$ MeV, nuclear symmetry energy $`a_{\mathrm{asy}}=33.94`$ MeV, and incompressibility $`K=266`$ MeV at a saturation density of $`n_0=0.16`$ fm<sup>-3</sup>. To gauge the uncertainties involved in the extrapolation of the RMF model at high density, we compare the nucleons-only matter results with those of the APR model. For this purpose we use the model which contains nonlinear interaction for the scalar $`\sigma `$-meson only. (Note that the TM1 model which includes also self-interaction of $`\omega `$-meson is fitted to finite nuclear properties.) The five parameters in this RMF model are fitted to the above mentioned (four) saturation properties plus the effective nucleon mass of $`m_N^{}/m_N=0.70`$ and 0.78 for the two representative cases considered here. In Fig. 13 we show the results for pure neutron matter (PNM) and symmetric nuclear matter (SNM) in the RMF model (solid lines), and in the APR model with pion condensation (dashed lines) and their low density extrapolations without a pion condensed phase (dotted lines) . It is clear from this figure that the agreement between the two models breaks down at high density. The APR model takes into account possibly all leading many-body correlation effects. It was also demonstrated that the strong tensor correlation gives rise to neutral pion condensation in PNM and SNM at densities of 0.20 fm<sup>-3</sup> and 0.32 fm<sup>-3</sup>, respectively. The RMF model, however, cannot address the question of density dependent correlation in dense matter, and moreover the coupling constants are density independent. This may be crucial to the deviation of the RMF results from the APR calculations. Consequently, the threshold densities of antikaon condensation may be affected. On the other hand, the advantage of the RMF models is that relativity is in-built and this causes the EOS to be causal. However, in the APR model the EOS becomes superluminal at densities close to the maximum masses of the stars.
We have computed nucleons-only star masses using this parameter set in the RMF model. For the case $`m_N^{}/m_N=0.70`$, the maximum masses of the neutron star matter (pure neutron matter) are $`M_{\mathrm{max}}=2.29`$ $`(2.45)M_{}`$ at central densities of $`n_{\mathrm{cent}}=0.92`$ (0.80) fm<sup>-3</sup>. For the case $`m_N^{}/m_N=0.78`$, the respective values are $`M_{\mathrm{max}}=2.00`$ $`(2.20)M_{}`$ at $`n_{\mathrm{cent}}=1.09`$ (0.91) fm<sup>-3</sup>. Comparing with the respective results of $`M_{\mathrm{max}}=2.20`$ $`(2.21)M_{}`$ in the APR model with pion condensation, we find that in the RMF model the maximum mass of a neutron star is more sensitive to the amount of proton fraction. It was shown that a neutron star may cool rapidly by the so-called direct URCA process in which neutrinos generated in the stars interior carry away the energy. The threshold density for this process depends on the symmetry energy which in turn is determined by the proton fraction. For the APR model the threshold is at a density of $`n_B=0.78`$ fm<sup>-3</sup> corresponding to a star of mass $`2.0M_{}`$. In the RMF model, for $`m_N^{}/m_N=0.70`$ and 0.78, the threshold densities are respectively, 0.59 and 0.61 fm<sup>-3</sup>, which correspond to neutron stars of masses 2.09 and $`1.78M_{}`$. The relatively rapid rise of symmetry energy with density manifests in a smaller threshold density for the direct URCA process in the RMF models.
It may be worth mentioning that the role of nucleon-nucleon and kaon-nucleon correlations in kaon condensation in neutron star matter was studied by Pandharipande et al. . The kaon energy for low density matter was calculated from Lenz potential while at high density the Hartree potential was used. The strong $`NN`$ and $`KN`$ correlations in the Hartree potential were shown to lead to a dramatic reduction of the antikaon attraction at high densities that could raise the threshold density for antikaon condensation to much higher values. Recently, kaon energy in neutron matter has been calculated analytically by solving the Klein-Gordon equation in the Wigner-Seitz cell approximation. It was found that the transition from the low density Lenz potential to a high density Hartree potential occurs at $`4n_0`$. However, in a chiral perturbation theory, Waas et al. found that short-range correlations has only a moderate effect on the antikaon condensation densities.
## IV Summary and Conclusions
We have investigated antikaon, $`K^{}`$\- and $`\overline{K}^0`$-meson, condensation in neutron stars within a relativistic mean field approach where the interaction between the baryons and antikaons are generated by the exchange of $`\sigma `$, $`\omega `$, and $`\rho `$ mesons. Three different parameter sets (GM1, TM1, and GMT) have been exploited in this calculation. In the GM1 and GMT sets the Lagrangian contain self-interaction for $`\sigma `$-meson only, while the set TM1 incorporates also nonlinear interaction for $`\omega `$-meson. The softest nuclear equation of state follows from TM1 set whereas the stiffest one from GMT set. We find that the critical densities for $`K^{}`$ and in particular $`\overline{K}^0`$ condensations depend sensitively on the choice of antikaon optical potential depth and more strongly on the nuclear equation of state. The threshold density of $`\overline{K}^0`$ always lie above that of $`K^{}`$ meson. With the appearance of $`K^{}`$ and $`\overline{K}^0`$ condensates, the overall equation of state becomes softer compared to the situation without antikaon condensation. This leads to a reduction in the maximum star masses. With the softest EOS (TM1 set), $`\overline{K}^0`$ condensate may be formed inside maximum mass stars only when the $`\overline{K}`$ optical depth is quite large. On the other hand for the stiffest nuclear EOS (GMT set), $`\overline{K}^0`$ can occur well inside the maximum mass stars for rather small values of antikaon optical potential depth. The appearance of $`K^{}`$ meson leads to a smaller variation of the vector fields with density. The decrease of $`\overline{K}^0`$ energy with density is therefore attenuated resulting in its formation at higher density than in the case when $`K^{}`$ meson is prohibited.
With the onset of $`K^{}`$ condensation (neglecting $`\overline{K}^0`$ meson), the proton fraction rises dramatically and even crosses the neutron fraction at some critical density. From the threshold conditions for $`K^{}`$ and $`\overline{K}^0`$ formations, i.e. $`\omega _K^{}=\mu _n\mu _p`$ and $`\omega _{\overline{K}^0}=0`$, it is evident that the critical point of identical proton and neutron densities (Fermi momenta) is precisely the density where $`\overline{K}^0`$ formation sets in. In earlier model studies of $`K^{}`$ condensation only (where $`\overline{K}^0`$ condensation were ignored), identical values of neutron and proton abundances were observed at high densities after $`K^{}`$ condensation. It may therefore be instructive to revisit the antikaon condensation scenario including $`\overline{K}^0`$ meson in these models. With the onset of $`\overline{K}^0`$ condensation, there is a competition in the formation of $`K^{}p`$ and $`\overline{K}^0n`$ pairs resulting in a perfectly symmetric matter for nucleons and antikaons inside neutron stars. As the proton fraction is much enhanced in this situation, it may have profound implication on the cooling properties of neutron stars.
With the hyperon-meson couplings constants determined from hypernuclear data and $`SU(6)`$ symmetry relation, the hyperons are formed earlier than antikaons. Since the EOS becomes softer, antikaon condensation can only occur at high densities. The negatively charged hyperons cause a larger reduction in the electron chemical potential compared to that of the isovector potential. This indicates that in contrast to nucleons only stars, $`\overline{K}^0`$ has a higher formation probability in hyperonic stars, but only for relatively larger values of $`\overline{K}`$ optical potential depth.
For all the cases studied here, the maximum masses of the stars are found to be larger than the precise current observational lower limit of $`1.44M_{}`$ imposed by the larger mass of the binary pulsar PSR 1913 + 16 . The recent discovery of high-frequency brightness oscillations in low-mass X-ray binaries provides a promising new avenue for determining masses and radii of neutron stars . Large neutron star masses with an upper limit of $`M2.02.3M_{}`$ have been extracted from kilohertz quasi-periodic-oscillations (QPO). The present investigation suggests that indeed both $`K^{}`$ and $`\overline{K}^0`$ condensates are possible in stars of such large masses. On the other hand, hyperon degrees of freedom which lower the maximum mass considerably, would be clearly excluded in these massive stars. In contrast, also very small masses of $`M1.35\pm 0.04M_{}`$ have been accurately determined in binary pulsars . It is clear that exotic degrees of freedom like antikaons and hyperons must occur in these pulsars at few times the normal nuclear matter density.
It has been inferred that large magnetic field $`10^{18}`$G could exist at the core of neutron stars . Though such large interior field is not accessible to direct observation, their existence could influence certain gross properties of neutron stars, such as cooling properties of the stars . Recently, there has been a considerable debate about whether a charged Bose gas in a constant magnetic field could exhibit a Bose-Einstein condensation . Some calculations ruled out such a possibility whereas others predicted that it could occur in a magnetic field . The formation of a Bose-Einstein condensate of neutral Bose gas such as a system of $`\overline{K}^0`$ in a strong magnetic field may be an interesting possibility.
###### Acknowledgements.
We would like to thank Jürgen Schaffner-Bielich and Avraham Gal for helpful discussions and remarks. S.P. was financially supported by the Alexander von Humboldt Foundation.
FIG. 1. The mean meson potentials for nucleons and the electrochemical potential versus the baryon density $`n_B/n_0`$ in the GM1 set for nucleons-only star matter (solid lines) and for matter with further inclusion of antikaon, $`K^{}`$ and $`\overline{K}^0`$, condensation (dashed lines). The $`\overline{K}`$ optical potential depth is $`U_{\overline{K}}=120`$ MeV at the normal nuclear matter density of $`n_0=0.153`$ fm<sup>-3</sup>. Above the critical density for $`\overline{K}^0`$ condensation, $`n_{\mathrm{cr}}(\overline{K}^0)4.83n_0`$, the electrochemical potential $`\mu _e`$ coincides with $`V_\rho ^N=g_{\rho N}\rho _{03}`$.
FIG. 2. The proper number densities $`n_i`$ of various compositions in neutron star matter in the GM1 model. The results are for nucleons-only matter (top panel), matter with further inclusion of $`K^{}`$ condensation (central panel), and matter where also $`\overline{K}^0`$ condensation is considered (bottom panel). The $`\overline{K}`$ optical potential at normal nuclear matter density is $`U_{\overline{K}}=120`$ MeV.
FIG. 3. The variation of effective antikaon mass $`m_K^{}/m_K`$ as a function of baryon density $`n_B/n_0`$ for star matter with nucleons and antikaons in the GM1 set. The different curves from the top to bottom on the left side of the graph correspond to $`\overline{K}`$ optical potential depths at normal nuclear matter density of $`U_{\overline{K}}=100,120,140,160,180`$ MeV.
FIG. 4. The effective energy of $`K^{}`$ (solid lines) and $`\overline{K}^0`$ (dashed lines) versus baryon density in the GM1 set. The different curves for each antikaon have the same meaning as in Fig. 3.
FIG. 5. The equation of state, pressure $`P`$ vs. energy density $`\epsilon `$ in the GM1 set. The results are for nucleons-only ($`np`$) star matter (solid line) and for matter with further inclusion of $`K^{}`$ and $`\overline{K}^0`$ condensation (dashed lines) for antikaon optical potential depths at normal density of $`U_{\overline{K}}=100,120,140,160`$ MeV. The equations of state for stars with nucleons and hyperons without any antikaons ($`npH`$) are shown for Lagrangian of Eq. (1) (dash-dotted line) and with further inclusion of Eq. (29) (dotted line).
FIG. 6. The neutron star sequences near the limiting mass in the GM1 set. The different curves have the same meaning as in Fig 5. The filled circles correspond to the maximum masses, and the arrows indicate the minimum mass stars that possess a $`\overline{K}^0`$ condensate at their centers.
FIG. 7. Same as Fig. 5 but for the TM1 set. The antikaon optical potential depths at normal nuclear matter density are $`U_{\overline{K}}=120,140,160,180`$ MeV.
FIG. 8. Same as Fig. 6 but for the TM1 set. The antikaon optical potential depths are same as in Fig. 7.
FIG. 9. In the top panel the effective antikaon mass $`m_K^{}/m_K`$, and in the bottom panel the effective energy of $`K^{}`$ (solid lines) and $`\overline{K}^0`$ (dashed lines) versus baryon density in the GMT set. The different curves from the top to bottom on the left side of the graph in each panel correspond to $`U_{\overline{K}}(n_0)=100,120,140,160,180`$ MeV.
FIG. 10. Same as Fig. 5 but for the GMT set. The values of $`U_{\overline{K}}(n_0)`$ are same as in Fig. 9.
FIG. 11. Same as Fig. 6 but for the GMT set. The values of $`U_{\overline{K}}(n_0)`$ are same as in Fig. 9.
FIG. 12. The mass-radius relation for neutron star sequences for nucleons-only stars and for stars with further inclusion of $`K^{}`$ and $`\overline{K}^0`$ condensation for the GM1 (top panel), TM1 (central panel), and GMT (bottom panel) parameter sets. The filled circles correspond to the maximum masses of the stars without and with $`\overline{K}`$ condensate. For the latter case, the circles from right to left in each panel are obtained with $`U_{\overline{K}}(n_0)=100,120,140,160,180`$ MeV.
FIG. 13. The pure neutron matter (PNM) and symmetric nuclear matter (SNM) energies as a function of baryon density. The calculations are in the RMF model (solid lines) with effective nucleon mass of $`m_N^{}/m_N=0.70`$ and 0.78, and in the APR model with pion condensation (dashed lines) and their low density extrapolations without a pion condensed phase (dotted lines); for details see text. |
no-problem/0001/cond-mat0001263.html | ar5iv | text | # Localization-delocalization transition in disordered systems with a direction
## Abstract
Using the supersymmetry technique, we study the localization-delocalization transition in quasi-one-dimensional non-Hermitian systems with a direction. In contrast to chains, our model captures the diffusive character of carriers’ motion at short distances. We calculate the joint probability of complex eigenvalues and some other correlation functions. We find that the transition is abrupt and occurs as a result of an interplay between two saddle-points in the free energy functional.
Non-Hermitian models with disorder have attracted recently a considerable attention . Non-Hermitian Hamiltonians appear in context of flux lines in superconductors , in transfer phenomena in lossy media , in hydrodynamics , in QCD , in quantum mechanics of open systems .
The most important property of the non-Hermitian Hamiltonians is that their eigenenergies can be complex independently of their origin. Very interesting physical systems are models with a direction. The simplest Hamiltonian $``$ of such models is written as
$$=\frac{(𝐩+i𝐡)^2}{2m}+V(𝐫),$$
(1)
where $`𝐩`$ is the momentum operator, $`V\left(𝐫\right)`$ is a random potential and $`𝐡`$ is a constant vector. One comes to the Hamiltonian $``$, Eq. (1), reducing the $`d+1`$ dimensional classical problem of vortices in a superconductor with columnar defects to a $`d`$-dimensional quantum problem. Then, the vector $`𝐡`$ is proportional to the component of the magnetic field perpendicular to the line defects .
Considering the one-dimensional (1D) version of the Hamiltonian $``$, Eq. (1), the authors of Ref. predicted a localization-delocalization transition that occurs at $`h_c=l_c^1`$, where $`l_c`$ is the localization length at $`𝐡=0`$. At $`hl_c^1`$ the “imaginary vector-potential” $`𝐡`$ can be removed by the “gauge transformation”
$$\varphi _k\left(𝐫\right)=\varphi _{0k}\left(𝐫\right)\mathrm{exp}\left(\mathrm{𝐡𝐫}\right),\overline{\varphi }_k\left(𝐫\right)=\varphi _{0k}\left(𝐫\right)\mathrm{exp}\left(\mathrm{𝐡𝐫}\right)$$
(2)
In Eq. (2), $`\varphi _k\left(𝐫\right)`$ and $`\overline{\varphi }_k\left(𝐫\right)`$ are the right and left eigenfunctions, $`\varphi _{0k}\left(𝐫\right)`$ are those at $`𝐡=0`$. Carrying out the transformation, Eq. (2), we see that the eigenvalues of the localized states $`ϵ_k`$ do not depend on $`𝐡`$. However, the gauge transformation applies only if the functions $`\varphi _k\left(𝐫\right)`$ do not grow at infinity. For eigenfunctions of the form $`\varphi _0\mathrm{exp}(|𝐫|/l_c)`$, this is possible if $`h<h_c`$. For $`h>h_c`$, one should take extended wave functions, which leads to complex eigenenergies.
Qualitative arguments of Ref. have been confirmed for the disordered chains numerically as well as analytically . At the same time, they are quite general and one might expect that such a transition occurs in more complex systems provided all states at $`𝐡=0`$ are localized. For example, one can think of disordered quasi-one-dimensional wires and films. Using the mapping of Ref. , one expects such models to be relevant for description of the vortices in slabs and 3D bulk superconductors. The wires and films are richer than the chains because at distances larger than the mean free path $`l`$ but smaller than the localization length $`L_c`$ the motion is diffusive, whereas in disordered chains only ballistic motion or localization are possible. As a result, properties of the localization-delocalization transition in wires and films can differ. The problem has not been addressed yet and our paper is aimed to provide its basic understanding.
We study the distribution function of complex eigenenergies $`P(\epsilon ,y)`$, where $`\epsilon `$ and $`y`$ are the real and imaginary parts of the energy, and correlations of eigenfunctions of the Hamiltonian $``$, Eq. (1), at different points for wires using the supersymmetry technique . A proper non-linear supermatrix $`\sigma `$-model has been derived in Ref. . Its free energy functional $`F_h\left[Q\right]`$ reads
$$F_h\left[Q\right]=\frac{\pi \nu }{8}STr[D\left(Q\right)^24\left(\gamma \mathrm{\Lambda }+y\mathrm{\Lambda }_1\tau _3\right)Q]𝑑𝐫$$
(3)
where $`D`$ is the classical diffusion coefficient, $`\nu `$ is the density of states, and the standard notations for the supertrace $`Str`$ and supermatrices $`Q`$, $`\tau _3`$, $`\mathrm{\Lambda }`$, and $`\mathrm{\Lambda }_1`$ are used . The free energy functional $`F_h\left[Q\right]`$, Eq. (3), differs from the conventional one $`F_0\left[Q\right]`$ (describing the Hermitian problem) by the presence of the “gauge-invariant” combination $`Q=`$ $`Q+𝐡[Q,\mathrm{\Lambda }]`$ instead of $`Q`$.
The zero-dimensional (0D) version of the $`\sigma `$-model is equivalent to models of random non-Hermitian matrices , where no transition occurs. But the $`\sigma `$-model, Eq. (3), holds in any dimension and one might expect that already the 1D version would manifest the transition. So, one could describe the transition using the transfer-matrix technique developed for the $`\sigma `$-models .
Surprisingly, the 1D $`\sigma `$-model with $`F_h\left[Q\right]`$, Eq. (3), does not have any transition and physical quantities of interest are smooth functions of $`h`$ giving always a finite probability of complex eigenenergies. At first glance, this result is in an evident contradiction with the arguments of Ref. . The resolution of this paradox is quite interesting. It turns out that the $`\sigma `$-model with $`F_h[Q]`$, Eq. (3), describes properly only the region of the delocalized states. The functional $`F_0[Q]`$ should be used in the localized regime, for any finite $`h<h_c`$, where $`h_c`$ is a critical vector-potential. This replacement of the free energy, resembling a first order transitions, results in an abrupt change of the distribution of eigenvalues. At $`h<h_c`$, all eigenenergies are real whereas at $`h>h_c`$ one gets a broad distribution in the complex plane. In the limit $`hh_c`$, our results are universal for any dimension. All this is in a contrast with the results obtained for the chains, where the spectrum changes smoothly, showing the appearance of “arcs” when increasing $`h`$ , or “wings” when starting from the regime of the strong non-Hermiticity .
Having formulated the rough picture of what happens when changing the parameter $`h`$, let us present some details of calculations. We first introduce the quantities which are to be studied. Since eigenvalues of the Hamiltonian (1) can be complex, it is convenient to double the size of relevant matrices , thus “hermitizing” the problem. In such an approach, the role of the Green function is played by the function
$$B(𝐫,𝐫^{})=\underset{k}{}\frac{\gamma \varphi _k(𝐫)\overline{\varphi }_k(𝐫^{})}{(ϵϵ_k^{})^2+(yϵ_k^{\prime \prime })^2+\gamma ^2}$$
(4)
with an eigenenergy $`ϵ_k=ϵ_k^{}+iϵ_k^{\prime \prime }`$. The “density-density correlator” in the present context is given by
$$Y(𝐫,𝐫^{};ϵ,y)=\frac{1}{\pi }\underset{\gamma 0}{lim}B(𝐫,𝐫^{})B(𝐫^{},𝐫),$$
(5)
where brackets imply impurity averaging. The limit $`\gamma 0`$, understood in all correlators, becomes important as soon as $`h0`$. The function $`Y(𝐫,𝐫^{};ϵ,y)`$, Eq. (5), establishes a link between the localization properties and the joint probability density of complex eigenenergies
$$P(ϵ,y)=\frac{1}{V}d𝐫d𝐫^{}Y(𝐫,𝐫^{};ϵ,y)$$
(6)
where $`V`$ is the volume. We introduce also another important correlator $`X(𝐫,𝐫^{};ϵ,y)=C(𝐫,𝐫^{};ϵ,y)+C(𝐫^{},𝐫;ϵ,y),`$
$$C(𝐫,𝐫^{};ϵ,y)=\frac{1}{2\pi }\underset{\gamma 0}{lim}B(𝐫,𝐫^{})B^{}(𝐫,𝐫^{})$$
(7)
The correlator $`Y(𝐫,𝐫^{};ϵ,y)`$, Eq. (5), is invariant under the transformation, Eq. (2), but $`X(𝐫,𝐫^{};ϵ,y)`$ is not.
We further express as usual the correlation functions in terms of integrals over eight-component superfield $`\psi (𝐫)`$, average over the white-noise disorder potential $`V(𝐫)`$, decouple $`\psi ^4`$ term by $`8\times 8`$ matrix $`Q(𝐫)`$ and integrate over $`\psi \left(𝐫\right)`$. As a result, we obtain a functional integral over $`Q`$ with a free energy functional
$`F[Q]={\displaystyle \frac{1}{2}}{\displaystyle STr\left(\frac{\pi \nu Q^2(𝐫)}{4\tau }\mathrm{ln}[i+\frac{Q(𝐫)}{2\tau }]\right)𝑑𝐫}`$
$$=(𝐩^2𝐡^2)/2mϵ+i\gamma \mathrm{\Lambda }+i\mathrm{\Lambda }_1^{}$$
(8)
where $`^{}=i𝐡/m+y\tau _3`$ and $`\tau `$ is the mean free time.
The next standard step is to find the minimum of $`F[Q]`$ neglecting $`^{}`$. The minimum is reached at
$$Q=V\mathrm{\Lambda }\overline{V}$$
(9)
$`V\overline{V}=1`$, $`V^+=K\overline{V}K`$. (The notations are the same as in Ref. ). Expanding the functional $`F[Q]`$ near the minimum in the gradients of $`Q`$ and in $`^{}`$ one comes to the functional $`F_h[Q]`$, Eq. (3). This is exactly the way how the $`\sigma `$-model, Eq. (3), was derived in Ref. . However, it has been mentioned that this $`\sigma `$-model is not valid in the localized regime. What is wrong in the derivation?
It not difficult to find that the functional $`F\left[Q\right]`$ has also another minimum. Performing the transformation
$$\stackrel{~}{Q}=\mathrm{exp}\left(\mathrm{\Lambda }_1\mathrm{𝐫𝐡}\right)Q\mathrm{exp}\left(\mathrm{\Lambda }_1\mathrm{𝐫𝐡}\right)$$
(10)
we rewrite the functional $`F\left[Q\right],`$ Eq. (8) in terms of $`\stackrel{~}{Q}`$. As a result, the imaginary vector-potential $`𝐡`$ is removed from $`F`$ and the minimum is achieved at
$$\stackrel{~}{Q}=V\mathrm{\Lambda }\overline{V}$$
(11)
which corresponds to $`Q`$ varying in space.
Which of these two saddle points should be chosen? The answer depends on the value of $`h`$. To clarify this question we consider the correlation functions $`Y(𝐫,𝐫^{};ϵ,y)`$ and $`X(𝐫,𝐫^{};ϵ,y)`$, Eqs. (5,7). They can be written as functional integrals over the supermatrices $`Q`$
$$Y\left(X\right)=\frac{\pi \nu ^2}{4}Q_{42}^{1\pm }Q_{24}^{1\pm }Q_{42}^{2\pm }Q_{24}^{2\pm }_Q$$
(12)
where the sign $`+`$ ($``$) corresponds to the correlator $`Y`$ $`\left(X\right)`$, $`Q^{1\pm }=Q^{11}(𝐫)\pm Q^{22}(𝐫)`$, $`Q^{2\pm }=Q^{12}(𝐫)\pm Q^{21}(𝐫)`$, and the matrices $`Q^{}`$ are taken at $`𝐫^{}`$. The symbol $`\mathrm{}_Q`$ stands for averaging with the functional $`F\left[Q\right]`$, Eq. (8). To calculate the integral in Eq. (12) let us use the transformation, Eq. (10), and take the saddle-point, Eq. (11). Since the combination of the supermatrices $`Q`$ entering Eq. (12) for the function $`Y`$ is invariant under the transformation, Eq. (10), $`𝐡`$ is gauged out in this function. Therefore, one can use the standard results of the transfer-matrix approach developed for the Hermitian case. The final result for the correlator $`Y(𝐫,𝐫^{};ϵ,y)`$
$$Y(𝐫,𝐫^{};ϵ,y)=p_{\mathrm{}}\left(r\right)\delta \left(y\right)$$
(13)
where $`r=\left|𝐫𝐫^{}\right|`$, contains the function $`p_{\mathrm{}}\left(r\right)`$ characterizing localization properties
$$p_{\mathrm{}}\left(r\right)=\underset{k}{}\left|\varphi _{0k}\left(𝐫\right)\right|^2\left|\varphi _{0k}\left(𝐫^{}\right)\right|^2\delta \left(ϵϵ_k\right)$$
(14)
In disordered wires, the function $`p_{\mathrm{}}\left(r\right)`$ describes the decay of the wave functions at $`rL_c`$
$$p_{\mathrm{}}\left(r\right)\frac{1}{4\sqrt{\pi }L_c}\left(\frac{\pi ^2}{8}\right)^2\left(\frac{4L_c}{r}\right)^{3/2}\mathrm{exp}\left(\frac{r}{4L_c}\right)$$
(15)
where $`L_c`$ is the localization length. For the unitary ensemble it equals $`L_c=2\pi \nu SD`$, $`S`$ is the cross-section.
In contrast, the vector-potential $`𝐡`$ enters explicitly the pre-exponential of the function $`X`$ after making the transformation, Eq. (10). However, the dependence on $`𝐡`$ is simple and calculations for $`X(𝐫,𝐫^{};ϵ,y)`$ can be performed in the same way as for $`Y(𝐫,𝐫^{};ϵ,y)`$ yielding
$$X(𝐫,𝐫^{};ϵ,y)=\mathrm{cosh}\left(2hr\right)p_{\mathrm{}}\left(r\right)\delta \left(y\right)$$
(16)
Eqs. (13,16) demonstrate that, provided one may perform the transformation, Eq. (10), all eigenenergies remain real. However, the validity of this procedure depends on the value of $`h`$. The function $`X(𝐫,𝐫^{};ϵ,y)`$ does not grow at infinity only if $`h<h_c`$ where
$$h_c=\left(8L_c\right)^1$$
(17)
At $`h>h_c`$ Eqs. (13,16) cannot be used because this would correspond to growing wave functions, which are forbidden for a closed geometry. This agrees with the arguments of Ref. . According to Ref. one should use at $`h>h_c`$ extended states having complex eigenenergies.
In the present formalism, the other saddle-point, Eq. (9), of the free energy $`F[Q]`$ should be taken in the regime $`h>h_c`$. This leads to the $`\sigma `$-model in the form of Eq. (3). Expecting that the eigenenergies become complex, we determine their distribution function $`P(ϵ,y)`$, Eq. (6). Following the transfer-matrix technique we write the function $`P(ϵ,y)`$ in the form
$$P(ϵ,y)=\frac{\pi \nu ^2S}{4}\mathrm{\Psi }\left(Q\right)(Q_{42}^{1+}P_{24}^{1+}Q_{42}^{2+}P_{24}^{2+})dQ$$
(18)
In Eq. (18), $`\mathrm{\Psi }\left(Q\right)`$ is the partition function of the semi-infinite wire, the matrix function $`P`$ is the partition function between the points $`r`$ and $`r^{}`$ multiplied by $`Q^{}\mathrm{\Psi }(Q^{})`$ and integrated over $`r^{}`$. As usual, proper differential equations for $`\mathrm{\Psi }`$ and $`P`$ are derived from $`F[Q]`$, Eq. (3).
In order to carry out these calculations it is necessary to choose a parametrization of the supermatrices $`Q`$. For the non-Hermitian problem involved the parametrization of Ref. is most natural. For simplicity we consider the unitary ensemble, where the supermatrices $`Q`$ can be parametrized in the form
$`Q=T\left(\widehat{\theta }\right)Q_0\overline{T}\left(\widehat{\theta }\right)\text{}Q_0=\left(\begin{array}{cc}\mathrm{cos}\widehat{\phi }& \tau _3\mathrm{sin}\widehat{\phi }\\ \tau _3\mathrm{sin}\widehat{\phi }& \mathrm{cos}\widehat{\phi }\end{array}\right),`$
$$\widehat{\phi }=\left(\begin{array}{cc}\phi & 0\\ 0& i\chi \end{array}\right)\text{}\widehat{\theta }=\left(\begin{array}{cc}\theta & 0\\ 0& i\theta _1\end{array}\right)$$
(19)
In Eqs. (19), the supermatrices $`T(\widehat{\theta })`$ contain not only real variables $`\widehat{\theta }`$ but also Grassmann variables. The “angles” vary in the following intervals $`\pi /2<\phi <\pi /2`$, $`\mathrm{}<\chi <\mathrm{}`$, $`\pi <\theta <\pi `$, $`\mathrm{}<\theta _1<\mathrm{}`$.
The variables $`\widehat{\theta }`$ and $`\widehat{\phi }`$ are not equivalent. For example, neglecting the gradient terms in Eq. (3) one comes to the 0D free energy $`F^{\left(0\right)}`$, containing the variables $`\widehat{\phi }`$ only
$$F^{\left(0\right)}[\widehat{\phi }]=\stackrel{~}{h}^2(\lambda _1i\stackrel{~}{y}/2\stackrel{~}{h}^2)^2+\stackrel{~}{h}^2(\lambda +\stackrel{~}{y}/2\stackrel{~}{h}^2)^2,$$
(20)
where $`\lambda _1=\mathrm{sinh}\chi `$, $`\lambda =\mathrm{sin}\phi `$, $`\stackrel{~}{h}^2=2h^2L_c^2`$ and $`\stackrel{~}{y}=2yL_c^2/D`$. In contrast to real magnetic field $`H`$, gradually suppressing and finally freezing out some degrees of freedom with increasing $`H`$, the imaginary magnetic field $`h`$ shifts the saddle point as a whole. Noticeable changes in behavior occur only at $`\pm \stackrel{~}{y}2\stackrel{~}{h}^2`$, where $`\mathrm{sin}\phi 1`$.
Calculations performed in Ref. for the 0D case show that the variables $`\widehat{\theta }`$ play a minor part in 0D. They do not enter $`F^{\left(0\right)}[\widehat{\phi }]`$ but their role is even less pronounced due to the singularity of the Jacobian at $`\theta `$, $`\theta _10`$. A detailed discussion of Ref. leads to the conclusion that one should replace the $`\widehat{\theta }`$-dependent part of the Jacobian by a constant and put everywhere else $`\widehat{\theta }=0`$. It is also interesting to note that the free energy $`F_h\left[Q\right]`$, Eq. (3), is not invariant against the replacement, Eq. (10). Using the parametrization, Eqs. (19), one obtains that this replacement leads to the shift $`\stackrel{~}{\theta }=\theta 2i\mathrm{𝐫𝐡}`$, $`\stackrel{~}{\theta _1}=\theta _12\mathrm{𝐫𝐡}`$. This shift changes the contour of the integration over $`\pi <\theta <\pi `$ in a complicated way, thus demonstrating the violation of the “gauge symmetry”.
Effective disappearance of the variables $`\widehat{\theta }`$ shows that the correspondence between the problem under consideration and random flux models is rather obscure because in the latter, the variables $`\widehat{\theta }`$ play a crucial role. For the quantities we have studied, Eqs. (5,6,7), this mapping is inadequate.
As in the 0D case, one should put everywhere $`\theta `$, $`\theta _1=0`$ when deriving the transfer matrix equations. As a result, differential equations for $`\mathrm{\Psi }`$ and $`P`$ contain $`\lambda `$ and $`\lambda _1`$ only
$`\widehat{}\mathrm{\Psi }`$ $`=`$ $`F^{\left(0\right)}\mathrm{\Psi },\widehat{}P^+=L_c(i\lambda _1i\lambda )\mathrm{\Psi },`$ (21)
$`\widehat{}`$ $`=`$ $`{\displaystyle \frac{1}{J_0}}_\lambda (1\lambda ^2)J_0_\lambda +{\displaystyle \frac{1}{J_0}}_{\lambda _1}(1+\lambda _1^2)J_0_{\lambda _1},`$ (22)
where $`J_0=1/(\lambda _1+i\lambda )^2`$.
Analysis of Eqs. (21) for $`hL_c1`$ is to some extent similar to the one in the limit of high frequencies for the conventional problem of localization . We obtain
$`\mathrm{\Psi }\mathrm{exp}[\stackrel{~}{h}(\lambda _1+i\stackrel{~}{y}/2\stackrel{~}{h}^2)^2\stackrel{~}{h}(\lambda \stackrel{~}{y}/2\stackrel{~}{h}^2)^2],P^+\mathrm{\Psi }/2\stackrel{~}{h}`$
which gives after substitution into Eq. (18)
$$P(ϵ,y)\frac{\nu }{4\stackrel{~}{h}^2}\{\begin{array}{ccc}1,& & \hfill |\stackrel{~}{y}|<2\stackrel{~}{h}^2\\ 0,& & \hfill |\stackrel{~}{y}|>2\stackrel{~}{h}^2\end{array}$$
(23)
The form of the function $`P(ϵ,y)`$, Eq. (23), is the same as the 0D result of Refs. . This result does not depend on the dimensionality and corresponds to the elliptic law for strongly non-Hermitian random matrices .
Analogous calculations for $`Y`$, Eqs. (4, 5) yield
$$Y(r)\nu \beta \mathrm{exp}(2\beta |r|),\beta =\sqrt{(h^2y^2/4Dh^2)}.$$
(24)
Analytical study of Eqs. (21) is hardly feasible at $`hL_c1`$. To solve them numerically, we use the standard over-relaxation method with Chebyshev acceleration. The results for the distribution $`P(ϵ,y)`$ are presented in Fig. 1 for several values of $`\stackrel{~}{h}`$. The lowest value of $`\stackrel{~}{h}=1/4\sqrt{2}`$ corresponds to the critical $`h_c`$, Eq. (17). For comparison we present the 0D result of Refs. .
The curves for the 1D and 0D cases are rather close to each other. However, their evolution with decreasing $`h`$ is drastically different. The 0D curve tends smoothly to the $`\delta `$-function $`\nu \delta \left(y\right)`$ when $`h0`$, wheareas the 1D curve changes abruptly to this expression at $`h=h_c`$. In the region $`h>h_c`$ the states are extended and the exponential decay of the correlation function $`Y\left(r\right)`$, Eq. (24), is obtained after summation over many states with different phase differences.
In conclusion, we studied the localization-delocalization transition for non-Hermitian disordered systems with a direction, Eq. (1) and broken time-reversal symmetry. We found that the transition occurs due to an interplay between two saddle points in the free energy $`F[Q]`$, which results in the abrupt transition at the critical field $`h=h_c.`$ Below this field, all eigenvalues are real and wave functions are localized. Above the critical field the eigenvalues form a broad distribution in the complex plane. We believe that the abrupt transition found is quite natural for the original problem of the vortex lines in the presence of columnar defects . Provided the geometry of the sample is closed and it is infinitely long, vortex lines should abruptly change their behavior, from being pinned by columns to become oriented along the magnetic field.
We acknowledge useful discussions with I.L. Aleiner, A. Altland, C.W.J. Beenakker and Y.V. Fyodorov. This work was supported by SFB 237 “Unordnung und Grosse Fluktuationen” |
no-problem/0001/cond-mat0001114.html | ar5iv | text | # Decay on several sorts of heterogeneous centers: Special monodisperse approximation in the situation of strong unsymmetry. 4. Numerical results for the approximation of essential asymptotes
## 1 Calculations
We have to recall again the system of the condensation equations. It can be written in the following form
$$G=_0^z\mathrm{exp}(G(x))\theta _1(x)(zx)^3𝑑x$$
$$\theta _1=exp(b_0^z\mathrm{exp}(G(x))𝑑x)$$
with a positive parameter $`b`$ and have to estimate the error in
$$N=_0^{\mathrm{}}\mathrm{exp}(lG(x))𝑑x$$
with some parameter $`l`$.
We shall solve this problem numerically and compare our result with the already formulated models. In the model of the total monodisperse approximation we get
$$N_A=_0^{\mathrm{}}\mathrm{exp}(lG_A(x))𝑑x$$
where $`G_A`$ is
$$G_A=\frac{1}{b}(1\mathrm{exp}(bD))x^3$$
and the constant $`D`$ is given by
$$D=_0^{\mathrm{}}\mathrm{exp}(x^4/4)𝑑x=1.28$$
Numerical results are shown in .
In the model of the floating monodisperse approximation we have to calculate the integral
$$N_B=_0^{\mathrm{}}\mathrm{exp}(lG_B(x))𝑑x$$
where $`G_B`$ is
$$G_B=\frac{1}{b}(1\mathrm{exp}(b_0^{z/4}\mathrm{exp}(x^4/4)𝑑x))z^3$$
$$G_B\frac{1}{b}(1\mathrm{exp}(b(\mathrm{\Theta }(Dz/4)z/4+\mathrm{\Theta }(z/4D)D)))z^3$$
Numerical results are shown in .
It is very attractive to spread the approximation for the last integral at small $`z`$ for all $`z`$ (as it was done in the intermediate situation in when we solved the algebraic equation on the parameters of the spectrum (in the intermediate situation it is absolutely justified). Then we came to the third approximation
$$N_C=_0^{\mathrm{}}\mathrm{exp}(lG_C(x))𝑑x$$
where $`G_C`$ is
$$G_C\frac{1}{b}(1\mathrm{exp}(bz/4))z^3$$
This approximation will be called as ”approximation of essential asymptotes”. The real advantage of this approximation is the absence of the exponential nonlinearity. When this approximation will be introduced into equation on the parameters of the condensation process there will be no numerical difficulties to solve it.
We have tried all mentioned approximations for $`b`$ from $`0.2`$ up to $`5.2`$ with the step $`0.2`$ and for $`l`$ from $`0.2`$ up to $`5.2`$ with a step $`0.2`$. We calculate the relative error in $`N`$. The results are drawn in fig.1 for $`N_C`$ where the relative errors are marked by $`r_3`$.
We see that the relative errors of $`N_B`$ and $`N_C`$ are very small and practically the same. One can not find the difference between fig.1 in and fig.1 here.
The maximum of errors in $`N_B`$ and $`N_C`$ lies near $`l=0`$. So, we have to analyse the situation with small values of $`l`$. It was done in fig.2 for $`N_C`$. We see that we can not find the maximum error because it increases at small $`b`$. Then we have to calculate the situation with $`b=0`$. Here we have to solve the following equation
$$G=_0^{\mathrm{}}\mathrm{exp}(G(x))(zx)^3𝑑x$$
and to compare
$$N=_0^{\mathrm{}}\mathrm{exp}(lG)𝑑x$$
with
$$N_A=_0^{\mathrm{}}\mathrm{exp}(lDz^3)𝑑z$$
$$N_B=_0^{\mathrm{}}\mathrm{exp}(l(\mathrm{\Theta }(z/4D)Dz^3+\mathrm{\Theta }(Dz/4)z^4/4))𝑑z$$
$$N_C=_0^{\mathrm{}}\mathrm{exp}(lz^4/4)𝑑z$$
We can not put here $`l=0`$ directly.
The results are shown in fig.3. One can see one curve with two wings. The upper wing corresponds to the error of $`N_A`$ and the lower corresponds to the relative error in $`N_B`$ and $`N_C`$. At $`l=0`$ these wings come together. We see that our hypothesis (the worst situation for the floating monodisperse approximation takes place when the first type heterogeneous centers are unexhausted) is really true.
The worst situation is when $`b`$ is near zero and $`l`$ lies also near zero. Here we can use the total monodisperse approximation to estimate the error. It is clear that the relative error in $`N_A`$ is greater than in $`N_B`$ (not in $`N_C`$). So, we can calculate $`r_1`$, see that when $`b`$ goes to zero it decreases (it is clear also from the physical reasons) and estimate the error $`r_2`$ at $`b=0`$, $`l=0`$ by $`R_1`$ calculated at small $`b`$ and $`l=0`$. Then one can see that it is small.
An interesting problem is to see whether $`N_B`$ and $`N_C`$ are different or no. Earlier we can not see the difference. In fig.4 one can see the ratio $`r_2/r_3`$ plotted at $`b=0`$ and can note that only for $`l0.01÷0.02`$ one can see the small difference. It means that to see the difference between these approximations the ratio between the scale of the first type centers nucleation and the scale of the second type centers nucleation must be giant. Even at giant values the difference is small.
Fig.1
The relative error of $`N_C`$ drawn as the function of $`l`$ and $`b`$. Parameter $`l`$ goes from $`0.2`$ up to $`5.2`$ with a step $`0.2`$. Parameter $`b`$ goes from $`0.2`$ up to $`5.2`$ with a step $`0.2`$.
One can see the maximum at small $`l`$ and moderate $`b`$. One can not separate $`N_B`$ and $`N_C`$ according to fig.1 in and fig.1.
Fig.2
The relative error of $`N_C`$ drawn as the function of $`l`$ and $`b`$. Parameter $`l`$ goes from $`0.01`$ up to $`0.11`$ with a step $`0.01`$. Parameter $`b`$ goes from $`0.2`$ up to $`5.2`$ with a step $`0.2`$.
One can see the maximum at small $`l`$ and small $`b`$. One can note that now the values of $`b`$ corresponding to maximum of the relative errors become small. One can not separate $`N_B`$ and $`N_C`$ according to fig.2 in and fig.2 here.
Fig.3
The relative errors of $`N_A`$, $`N_B`$ and $`N_C`$ drawn as the function of $`l`$ at $`b=0`$. Parameter $`l`$ goes from $`0.01`$ up to $`5.01`$.
One can see two wings which come together for $`b`$ near $`0`$. The upper wing corresponds to the relative error of $`N_A`$. The lower wing corresponds to the relative errors of $`N_B`$ and $`N_C`$. One can not separate them.
One can see that $`r_1`$ decreases when $`b`$ goes to $`0`$ and can estimate $`r_2`$ by $`r_1`$ at small $`l`$.
Fig.4
The ratio $`r_2/r_3`$ drawn as the function of $`l`$ at $`b=0`$. Parameter $`l`$ goes from $`0.01`$ up to $`5.01`$.
One can see the difference between $`r_2`$ and $`r_3`$ only at very small values of $`b`$ (only the first two points corresponding to $`l=0.01`$ and $`l=0.02`$). So, $`N_3`$ can be also considered as a suitable approximation. |
no-problem/0001/cond-mat0001379.html | ar5iv | text | # Antiferromagnetic Alignment and Relaxation Rate of Gd Spins in the High Temperature Superconductor GdBa2Cu3O7-δ
## Abstract
The complex surface impedance of a number of GdBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> single crystals has been measured at 10, 15 and 21 GHz using a cavity perturbation technique. At low temperatures a marked increase in the effective penetration depth and surface resistance is observed associated with the paramagnetic and antiferromagnetic alignment of the Gd spins. The effective penetration depth has a sharp change in slope at the Néel temperature, $`T_N`$, and the surface resistance peaks at a frequency dependent temperature below 3K. The observed temperature and frequency dependence can be described by a model which assumes a negligibly small interaction between the Gd spins and the electrons in the superconducting state, with a frequency dependent magnetic susceptibility and a Gd spin relaxation time $`\tau _s`$ being a strong function of temperature. Above $`T_N`$, $`\tau _s`$ has a component varying as $`1/(TT_N)`$, while below $`T_N`$ it increases $`T^5`$.
One of the surprising early discoveries about high temperature superconductors was their apparent insensitivity to out-of-plane magnetic ions, with the superconducting properties of YBCO remaining almost unchanged when yttrium was replaced by magnetic rare earth ions, with the exception of Ce, Pr and Tb . Furthermore, the low temperature antiferromagnetic properties were observed to be independent of doping. The thermodynamic superconducting and antiferromagnetic properties of GdBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (GBCO) therefore appear to be completely uncoupled . However, there remains the possibility that the mutual interaction of the rare earth (RE) spins and the superconducting electrons could lead to changes in their dynamic properties. Interest in the antiferromagnetic alignment of the rare earth spins has also re-emerged as a likely explanation of the anomalous increase in the microwave surface resistance of GBCO thin films observed at low temperatures .
We have therefore undertaken a systematic microwave investigation of the antiferromagnetic spin alignment in a number of GBCO single crystals. The surface impedance has been measured to well below the Gd Néel temperature $`T_N2.2`$K, using a hollow dielectric resonator technique . Measurements at 10, 15 and 21 GHz confirm the influence on microwave properties of the alignment of the Gd spins and enable us to determine the Gd spin relaxation time $`\tau _s`$ both above and below the Néel temperature.
In the superconducting state the microwave surface impedance is given by
$$Z_s=\sqrt{\frac{\mathrm{i}\mu _r\mu _0\omega }{(\sigma _1\mathrm{i}\sigma _2)}},$$
(1)
where $`\sigma _1`$ is the normal state quasi-particle conductance $`n_{qp}e^2\tau _{qp}/m`$ ($`\tau _{qp}`$ is the quasi-particle scattering time). In the ideal superconducting state $`\sigma _2(T)=n_s(T)e^2/m\omega =(\lambda (T)^2\mu _0\omega )^1`$, where $`\lambda (T)`$ is the penetration depth unperturbed by any coexistent magnetic properties and $`n_s`$ is the superfluid density. However, in the presence of magnetic spins $`\sigma _2=(\lambda ^2\mu _r\mu _0\omega )^1`$ and we can assume a relaxation model for a frequency dependent permeability, $`\mu _r(T,\omega )=1+\chi (T)/(1+\mathrm{i}\omega \tau _s(T))`$, with a spin lattice relaxation time $`\tau _s`$. In this situation the field penetration is modified but not the intrinsic penetration depth which is related to the superfluid fraction. For a antiferromagnetic system above the Néel temperature $`T_N`$ we expect $`\chi (T)=C/(T+T_N)`$, with $`C=n\mu _B^2p^2/12\pi k_B`$, where $`n`$ is the number of spins per unit volume. DC magnetic measurements give an effective moment $`p=7.82`$ , in good agreement with the free ion value of 7.92. Rewriting $`Z_s`$ as $`[\mathrm{i}\mu _0\omega /\sigma _{eff}]^{1/2}`$, we can define an effective conductivity as
$$\sigma _{eff}=\frac{(\sigma _1\mathrm{i}\sigma _2)}{1+\chi /(1+\mathrm{i}\omega \tau _s)}.$$
(2)
The real and imaginary parts are give by
$$\sigma _{1eff}=\frac{(1+\omega ^2\tau _s^2)(\mathrm{\Gamma }\sigma _1+\sigma _2\chi \omega \tau _s)}{\mathrm{\Gamma }^2+\chi ^2\omega ^2\tau _s^2},$$
(3)
and
$$\sigma _{2eff}=\frac{(1+\omega ^2\tau _s^2)(\sigma _1\chi \omega \tau _s\mathrm{\Gamma }\sigma _2)}{\mathrm{\Gamma }^2+\chi ^2\omega ^2\tau _s^2},$$
(4)
where $`\mathrm{\Gamma }=1+\omega ^2\tau _s^2+\chi `$. At low temperature, we will show that $`\omega \tau _s>1`$, so that
$$\sigma _{1eff}\sigma _1+\sigma _2\chi /\omega \tau _s$$
(5)
and
$$\sigma _{2eff}\sigma _2(1+\chi /\omega ^2\tau _s^2+2/\omega ^2\tau _s^2)\sigma _1\chi /\omega \tau _s,$$
(6)
where we retained terms to second order in $`1/\omega \tau _s`$ because $`\sigma _2\sigma _1`$.
In fitting our data we have also assumed no change in the quasi-particle conductance from the Gd spin fluctuations. To test the validity of this model, we have measured the surface impedance of several GBCO single crystals both above and below $`T_N`$ and at several microwave frequencies.
The GBCO crystals were grown in BaZrO<sub>3</sub> crucibles to minimise contamination .The largest crystal used in these measurements was $`1.0\times 1.1\times 0.06\mathrm{mm}^3`$ with a $`T_c`$ of 93K. During the course of the experiments, with the sample held overnight at room temperature under vacuum, the microwave transition at $`T_c`$ was observed to broaden, with a second transition emerging at $`63`$K. This second transition is similar to earlier measurements by Srikanth et al. on a similarly grown YBCO crystal, which was interpreted as a second energy gap . We believe the anomaly is more likely to be associated with oxygen diffusion out of the sample, resulting in surface regions of oxygen deficient, 60K phase. This may be a generic problem for BaZrO<sub>3</sub>-grown HTS crystals held in vacuum at room temperature for any length in time. However, no associated changes were observed in the low temperature microwave properties in our measurements.
The surface impedance was measured using a cylindrical dielectric resonator with a 2mm hole passing along its axis. The resonator was placed centrally within an OFHC copper cavity. Measurements were made using the TE<sub>01n</sub> resonant modes with a typical unloaded $`Q`$ values of $`10^5`$ at 10GHz. The temperature of the dielectric resonator and copper cavity was held constant at the helium bath temperature. The sample was placed at a magnetic field antinode of the dielectric resonator, and was supported on the end of a long sapphire rod passing centrally through the resonator. The crystal could be heated from 1.2K to well above $`T_c`$ by a heater mounted outside the cavity. Experiments were performed with the $`c`$-axis parallel or perpendicular to the rf magnetic field, to investigate the affect of magnetic anisotropy. From neutron diffraction experiments, the Gd spins are known to align along the $`c`$-axis . Measurements were made at three of the resonant modes of the dielectric resonator close to 10, 15 and 21 GHz with suitable positioning of the sample.
The microwave properties were determined by a conventional cavity perturbation method using a HP 8722C Network analyser with additional data processing to obtain accurate measurements of the resonant frequency $`f_0`$ and half power bandwidth $`f_B`$ of the dielectric cavity resonances. The changes in these values with temperature can be related to the surface impedance by the cavity perturbation formula, $`\mathrm{\Delta }f_B(T)2\mathrm{i}\mathrm{\Delta }f_0(T)=\mathrm{\Gamma }(R_s+\mathrm{i}\mathrm{\Delta }X_s)`$. The resonator constant $`\mathrm{\Gamma }`$ was determined from measurements with the sample replaced by a chemically polished niobium sample of the same size and known resistivity. We were able to achieve a measurement accuracy and reproducibility for $`R_s`$ of $`\pm 20\mu \mathrm{\Omega }`$ and an error in $`\mathrm{\Delta }\lambda =\pm 0.3\mathrm{\AA }`$, for a sample of area $`0.5\times 1.1\mathrm{mm}^2`$ at 10GHz.
Figure 1 shows measurements of $`R_s(T)`$ at the three frequencies. For $`T30`$K the losses are quadratic in $`\omega `$, consistent with $`\omega \tau _{qp}1`$. In absolute terms, the losses are somewhat larger than observed in the best YBCO crystals, but are comparable with $`R_s`$ data for near optimally doped BSCCO crystals . At lower temperatures there is a marked and strongly frequency dependent rise in losses, which we associate with the paramagnetic alignment of the Gd spins above the Néel temperature. The losses at 10GHz peak at 3.5K, significantly above the Néel temperature of $`2.2\mathrm{K}`$ (see inset of Figure 1). These losses then decrease by more than an order of magnitude at the lowest temperatures.
The spin alignment also leads to a small but pronounced increase in the reactive part of the surface impedance at low temperatures, as shown for two frequencies in figure 2. This corresponds to an increased penetration depth with a sharp change in slope defining the Néel temperature. Above $`7`$K , the reactance increases as $`T^2`$, in contrast to the $`T`$-dependence observed for high quality YBCO crystals. The magnitude of these losses and the $`T^2`$ dependence of the penetration depth imply a larger quasi-particle scattering than observed in optimally doped YBCO crystals . Above 10K we have the expected $`\omega `$ dependence of $`X_s`$, but at low temperatures ($`T<7`$K) the reactance has a $`\omega ^1`$ dependence, as predicted by our model. The inset illustrates measurements with the crystal aligned with its $`c`$-axis parallel and perpendicular to the microwave magnetic field (open and closed circles respectively). In a parallel microwave field, the expected direction of spin alignment, the reactance drops linearly below a well-defined transition temperature $`T_N2.25`$K, but in the perpendicular configuration the susceptibility below $`T_N`$ is much flatter and may even go through a small maximum. Similar results to those illustrated in Figures 1 and 2 were observed for all three single crystals investigated. The crystals included slightly underdoped, as-grown and close to optimum-doped oxygen annealed samples, from different growth batches.
To extract an effective conductivity $`\sigma _{1eff}`$ from $`R_s=\frac{1}{2}\omega ^2\mu _0^2\sigma _{1eff}\lambda ^3`$, we have assumed: (i) a value for $`\lambda (0)=140\mathrm{n}\mathrm{m}`$, typical of high quality YBCO samples (ii) above 10K $`\sigma _{2eff}\sigma _2`$, and (iii) below 10K $`\lambda (T)T^2`$ consistent with the $`T^2`$ temperature dependence of $`X_s(T)\omega \mu _0\lambda (T)`$ in figure 2. We have also assumed that $`\sigma _1`$ is not significantly affected by the Gd spin fluctuations. Figure 3 shows the temperature dependence of the derived values of $`\sigma _{1eff}`$ for all three frequencies measured. Because the contribution to the effective conductivity from the Gd spins varies $`1/\omega \tau _s`$, using the 10 and 15 GHz data, we can extract $`\sigma _1`$ giving the solid line. The derived temperature dependence is similar to the variation of $`\sigma _1(\mathrm{T})`$ observed in YBCO single crystals. It increases to a broad peak at about 25K, reflecting the increase in the quasi-particle scattering lifetime at low temperatures.
In our model, the additional losses when $`\omega \tau _s>1`$ are largely associated with the paramagnetic relaxation of the Gd spins in the microwave field. These losses, which we interpret as a decrease in the Gd spin relaxation rate on approaching and passing through the antiferromagnetic phase transition, peak at a frequency dependent temperature significantly above $`T_N`$. In this respect, we note that there is no significant change in $`\sigma _{1eff}`$ at the Néel temperature $`T_N`$ (see inset of figure 3). Any such affect is masked by the much larger changes in $`\tau _s`$, figure 4.
The region near $`T_N`$ might be expected to be dominated by antiferromagnetic spin fluctuations. We assume that close to $`T_N`$, $`\tau _s`$ involves a temperature dependent term varying as $`[T_N/(TT_N)]^\alpha `$. A fit to this relation for $`\alpha =1`$ is shown in figure 4. Below the antiferromagnetic transition $`\tau _s`$ increases by an order of magnitude varying $`T^5`$ with an apparent change in slope at $`T_N`$. The inset of figure 4 shows $`\chi (T)`$ used to derive the temperature dependence of $`\tau _s`$. This assumes a Curie-Weiss temperature dependence above $`T_N`$ and a slight drop in $`\chi (T)`$ below $`T_N`$ consistent with the measurement shown in the inset of figure 2.
The model we have applied assumes that there is no interaction between the quasi-particles and spins (this is consistent with specific heat data on GBCO where no change is observed between semiconducting and superconducting samples ). Susceptibility measurements on non-superconducting GBCO have shown that $`\chi (T)`$ fits a 2-d Ising model above $`T_N`$ . Below the transition $`\chi (T)`$ remains anomalously high, deviating from the Ising model. This is consistent with our reactance measurements, where we see only a small change in $`\mathrm{\Delta }X_s(T)`$ below $`T_N`$, figure 2. This is in contrast to what is seen in other RE substitutions (Sm, Dy and Nd) where the specific heat data can be fitted to a 2-d Ising model both above and below $`T_N`$ .
In the insets of figures 1 and 2, the solid lines fitted to the data have been evaluated using equations 3 and 4. We have assumed: (i) $`\chi (T)=C/(T+T_N)`$, with a value of $`C=n\mu _B^2p^2/12\pi k_B`$ corresponding to a derived magnetic moment $`p=9.5`$, slightly larger than deduced from magnetic measurements (ii) the derived Gd spin relaxation time plotted in figure 4, and (iii) the complex electronic conductivity given by the derived value of $`\sigma _1`$ in figure 3 and a value of $`\sigma _2`$ assuming $`\lambda (0)=140\mathrm{n}\mathrm{m}`$. The excellent fit to the experimental data supports our theoretical model. In particular, there appears to be no need to invoke any additional effects, such as a modification of the electronic mean free path from the Gd spin fluctuations.
In summary we have presented extensive microwave surface impedance measurements on GBCO single crystals at several frequencies to investigate the influence of the antiferromagnetic alignment of the Gd spin at low temperatures. We are able to describe the experimental results by a model involving the increase in magnetic susceptibility associated with antiferromagnetic alignment and a strongly temperature dependent relaxation time. The derived spin lattice relaxation time increases below $`T_N`$ with a temperature dependence $`T^5`$. Above $`T_N`$, $`\tau _s1/(TT_N)`$. Within the accuracy of our measurements, $`\sigma _1(T)`$ is not affected by the antiferromagnetic alignment of the Gd spins.
We thank G. Walsh and D. Brewster for valuable technical support. We also thank A. Porch and M. Hein for useful discussions. This research is supported by the EPSRC, UK. |
no-problem/0001/hep-ph0001292.html | ar5iv | text | # Vortex Phases in Condensed Matter and Cosmology
## 1 Introduction
One possible explanation for the baryon asymmetry of the Universe is that it was generated in a 1st order electroweak phase transition. This possibility cannot be realized in the Standard Model (SM), though, since there is no 1st order transition for $`m_H>72`$ GeV . In the MSSM there is still room for a strong 1st order transition, if the right-handed stop is lighter than the top: a consequence of such a circumstance would be a Higgs lighter than $`110`$ GeV, not yet experimentally excluded for the MSSM.
In this paper, we will consider another possibility for increasing the strength of the electroweak phase transition. Indeed, remaining in the SM but imposing an external magnetic field $`H_{\text{ext}}`$ on the system, has a strengthening effect. It turns out that for baryon number non-conservation there is an opposing effect due to the sphaleron dipole moment, but we nevertheless consider it interesting to map out the phase diagram in some detail for $`H_{\text{ext}}0`$, as an analogy with superconductors (Sec. 3) suggests that the system might have quite unexpected properties. The case of small Higgs masses in $`H_{\text{ext}}0`$, as well as the first results on large Higgs masses, were already reviewed in, and we concentrate here on the physical case $`m_H>m_Z`$.
## 2 Cosmological motivation for $`H_{\text{ext}}0`$
The physical relevance of considering $`H_{\text{ext}}0`$ comes from the observation that the existence of galactic magnetic fields today may well imply the existence of primordial seed fields in the Early Universe. In order to get large enough length scales, it seems conceivable that even in the most favourable case of strongly “helical” fields, the seed fields should have a correlation length at least of the order of the horizon radius at the electroweak (EW) epoch. Such large length scales could possibly be produced during the inflationary period of Universe expansion (see, e.g., ref. and references therein).
If a primordial spectrum of magnetic fields is generated during inflation, it is on the other hand also true that after a while the fields are essentially homogeneous at small length scales. Indeed, magnetohydrodynamics,
$$\frac{\stackrel{}{H}_Y}{t}=\frac{1}{\sigma }^2\stackrel{}{H}_Y+\times (\stackrel{}{v}\times \stackrel{}{H}_Y),$$
(1)
tells that magnetic fields diffuse away at scales $`l<(t/\sigma )^{1/2}(M_{\text{Pl}}/T)^{1/2}T^1`$, where $`\sigma `$ is conductivity. At the EW epoch $`T100`$ GeV, this gives $`l_{\text{EW}}10^7/T`$, a scale much larger than the typical correlation lengths $``$ a few $`\times T^1`$.
A further question is the magnitude of magnetic fields. An equipartition argument would say that only a small fraction of the total (free) energy density can be in magnetic fields. This leads to $`H_Y/T^2<2`$. In conclusion, there could well be essentially homogeneous and macroscopic (hypercharge) magnetic fields around at $`T100`$ GeV, with a magnitude $`H_Y/T^21`$.
## 3 Superconductors in $`H_{\text{ext}}0`$
As a further motivation for studying in detail the electroweak case, let us recall the very rich structure found in quite an analogous system, superconductors under an external magnetic field. Denoting the inverses of the spatial scalar and vector correlation lengths by $`m_H,m_W`$ (and $`xm_H^2/(2m_W^2)`$), the usual starting point for superconductor studies, the 3d continuum scalar electrodynamics (or the Ginzburg-Landau, GL, theory), predicts at the tree-level two qualitatively different responses of the system to an external magnetic field:
In the type I case, $`m_H<m_W`$, a flux cannot penetrate the superconducting phase. However, superconductivity is destroyed by $`H_{\text{ext}}`$. The way in which this transition has to take place is that the superconducting and the normal phases coexist at $`H_{\text{ext}}^c`$. This implies a 1st order transition.
In the type II case, $`m_H>m_W`$, on the other hand, the flux can penetrate the system via an Abrikosov vortex lattice. At a large enough $`H_{\text{ext}}`$ the system then continuously changes to the normal phase.
It is now a very interesting observation that fluctuations change the nature of the tree-level type II transition described above in an essential way. Indeed, much of the vortex lattice phase is observed to be removed, but it is also found in high-$`T_c`$ superconductors (which are strongly of type II) that the continuous transition changes to a 1st order one: for a particularly clear signal, see Fig. 2 in, reproduced in Fig. 1(left).
At the same time, high-$`T_c`$ superconductors are of course a complicated layered and highly anisotropic material, so it is not immediately clear whether the 1st order transition observed is also a property of, say, the simple continuum GL theory. Let us list arguments in favour of and against this possibility:
* There is an analytic prediction of a 1st order transition, starting just from the GL theory. However, it is based on an $`ϵ`$-expansion around $`d=6`$, and relies on $`ϵ=3`$ being small. Other analytic arguments (see, e.g., ) also lack a small expansion parameter. A set of lattice simulations have been carried out which favour the possibility of a phase transition directly in the GL theory. However, the theory actually simulated is not GL but some approximation thereof, and moreover, the effects of discretization artifacts in the simulations have not been systematically investigated.
* There are, on the other hand, other simulation results which argue that a layered structure is essential for the 1st order transition. However, these simulations use again an approximate form of the theory, whose validity for the full GL model is not clear. Finally, direct lattice simulations in the full GL model have so far failed to see a transition, see Fig. 1(right). However, one can argue that due to the high computational cost, these simulations do not necessarily yet represent the thermodynamical limit with respect to the number of vortices.
To summarize, we consider it at the moment an open problem what is the “minimal” continuum model which may display a 1st order transition between the vortex phase and the normal phase. Understanding this issue would be very important for, e.g., the considerations to which we now turn.
## 4 The Electroweak Theory in $`H_{\text{ext}}0`$ at Tree-Level
To analyse the behaviour of the electroweak theory in an external magnetic field, we can directly consider the dimensionally reduced 3d action
$$_{3\mathrm{d}}=\frac{1}{4}G_{ij}^aG_{ij}^a+\frac{1}{4}F_{ij}F_{ij}+(D_i\varphi )^{}D_i\varphi +y\varphi ^{}\varphi +x(\varphi ^{}\varphi )^2.$$
(2)
Here $`G_{ij}^a,F_{ij}`$ are the SU(2) and U<sub>Y</sub>(1) field strengths, and $`\varphi `$ is the Higgs doublet. In terms of the physical 4d parameters, $`x`$ and $`y`$ are expressed as
$$x0.12\frac{m_H^2}{m_W^2},y4.5\frac{TT_0}{T_0},$$
(3)
where $`T_0`$ equals the critical temperature up to radiative corrections. By a magnetic field we now mean, in the symmetric phase of the theory, an Abelian U<sub>Y</sub>(1) magnetic field $`H_Y`$. In the broken phase, this goes dynamically to the electromagnetic field $`H_{\text{EM}}`$.
The tree-level phase diagram of the theory in Eq. (2) is shown in Fig. 2. This is quite similar to that of the GL model. For $`m_H>m_Z`$ the ground state solution of the equations of motion is inhomogeneous in a certain range of $`H_Y`$. This Ambjørn-Olesen (AO) phase is the analogue of the Abrikosov vortex lattice of superconductors. There are some differences, as well: for instance, the Higgs phase is not really a Meissner phase, as at low temperatures and fields, the magnetic field can pass through the system in a homogeneous configuration. Another notable difference is that the “vortices” (see Fig. 2(right)) are not topological objects in the same sense as in superconductors, as the Higgs vev does not vanish at the core of the profile.
For future reference, let us recall one way of understanding the appearance of the “instability” leading to the AO-phase. The point is that at tree-level, there are charged excitations in both phases of the system which can be arbitrarily light close enough to the phase transition. In the presence of a magnetic field, the corresponding energies behave as Landau levels. It can then happen that some excitations become essentially “tachyonic”, leading to an instability: e.g. in the broken phase for large $`H_{\text{EM}}`$,
$$m_{W,\text{eff}}^2=m_W^2eH_{\text{EM}}<0.$$
(4)
## 5 The Electroweak Theory in $`H_{\text{ext}}0`$ with Fluctuations
In order to include systematically the effects of fluctuations, we have studied the system in Eq. (2) with lattice simulations. We refer there for the details of the simulations, as well as for the justification of the following main result: for the values of $`H_{\text{ext}}`$ studied, we have not observed the AO phase, nor any phase transition at all for $`m_H>m_Z`$! Let us discuss here to what extent we can now understand such a contrast with the high-$`T_c`$ behaviour.
For small values of $`H_Y`$, the discrepancy can be understood as being due to SU(2) confinement. For instance, the $`W`$ is always massive in contrast to perturbation theory, so that Eq. (4) cannot be satisfied for arbitrarily small $`H_{\text{EM}}`$. It is however difficult to turn this argument into a quantitative one.
Another way to express the issue is that the only gauge-invariant degrees of freedom which can become massless are a neutral scalar (the Higgs), and the photon. Close to the endpoint (see Fig. 3), the system can thus be non-perturbatively described by an effective theory of the form ($`\varphi \mathrm{IR}`$)
$$=\frac{1}{4}F_{ij}F_{ij}+\frac{1}{2}(_i\varphi )^2+h\varphi +\frac{1}{2}m^2\varphi ^2+\frac{1}{4}\lambda \varphi ^4+\gamma _1\varphi F_{ij}F_{ij}+\mathrm{}.$$
(5)
However, in this theory there are no charged excitations, hence no Landau levels and instabilities, unlike at tree-level!
On the other hand, the effective theory in Eq. (5) can in principle break down for very large fields, and also far away from the endpoint, and one may ask what happens then? It is here that the case of superconductors again becomes relevant. As discussed at the end of Sec. 3, it might be that even in superconductors some extra structure such as layers is needed in order to have a vortex phase and the associated 1st order transition. If so, then it is unlikely that there would be any remnant of the AO phase in the fluctuating electroweak system even at large $`H_Y`$. If no layers are needed, on the contrary, there just might be one.
## 6 Conclusions
It appears that even if there is an external magnetic field present, the SM electroweak transition terminates at $`m_H<90`$ GeV, and above that there is no structure at all, see Fig. 3. In particular, the Ambjørn-Olesen phase seems not to be realized at realistic magnetic fields. Thus an electroweak phase transition within the SM does not leave a cosmological remnant. An interesting theoretical open issue is still what happens at very large magnetic field strengths — a question which involves quite intriguing analogies also with the behaviour of experimentally accessible high-$`T_c`$ superconductors.
## Acknowledgments
I thank K. Kajantie, T. Neuhaus, P. Pennanen, A. Rajantie, K. Rummukainen, M. Shaposhnikov and M. Tsypin for collaboration and discussions on various topics mentioned in this talk. This work was partly supported by the TMR network Finite Temperature Phase Transitions in Particle Physics, EU contract no. FMRX-CT97-0122. |
no-problem/0001/nucl-th0001047.html | ar5iv | text | # 𝜌 - nucleus bound states in Walecka model
## Abstract
Possible formation of $`\rho `$ nucleus bound state is studied in the framework of Walecka model. The bound states are found in different nuclei ranging from $`{}_{}{}^{3}He`$ to $`{}_{}{}^{208}Pb`$. These bound states may have a direct bearing on the recent experiments on the photoproduction of $`\rho `$ meson in the nuclear medium.
The properties of hadrons in the nuclear medium is a field of current interest. These properties at high density and/or temperature are important for the study of neutron stars and supernovae. The possible explanations of the experimental results of heavy ion collisions will also depend on the in-medium behaviour of the hadrons. The recent observation of enhanced dilepton production in the low invariant mass domain in heavy ion collider experiments has triggered speculation that the effective $`\rho `$\- meson mass decreases in the nuclear medium. The studies using Chiral perturbation theory shows that even at finite densities there may be a partial restoration of chiral symmetry leading to the decrease of vector meson masses from their free values .
A number of experiments have been done to study the possible shift of the masses of vector mesons $`\omega `$, $`\eta `$ and $`\rho `$ . The most notable experiments regarding $`\rho `$ meson mass modification are $`K^+{}_{}{}^{12}C`$ elastic cross section measurements at 800 MeV that have also revealed an enhancement which can be attributed to a shift in $`\rho ^0`$ mass ; the need for shifted meson masses to explain the spin transfer in the $`\stackrel{}{p}`$-$`A`$ scattering experiment at IUCF ; and photoproduction experiments at the INS facility in Tokyo via the reaction $`{}_{}{}^{3}He(\gamma ,\pi ^+,\pi ^{})`$ at photon energies below the free production threshold of 1083 MeV for the $`\rho ^0`$ meson . The anlysis of the experimental data yielded $`m_{\rho ^0}^{}`$ of 642$`\pm `$40 MeV/c<sup>2</sup> and 669$`\pm `$32 MeV/c<sup>2</sup> in the photon energies 800-880 MeV and 880-960 MeV respectively. The reanalysis of the single and double $`\mathrm{\Delta }`$ experiment gave the $`\rho ^0`$ mass to be 490$`\pm `$40 MeV/c<sup>2</sup> in the photon energy range 380-700 MeV . The similar trend has been found by Bianchi et al. who have developed a model based on hadronic fluctuations of the real photon to describe the total photonucleon and photonuclear cross sections. A decrease in $`\rho `$ meson mass in different nuclei is found to be necessary to explain the experimental data.
The substantial change in the $`\rho `$ meson mass led some authors to argue that such large decrease in mass can not be explained by the mean field picture of the nuclear matter . These authors suggest that this decrease in mass should be taken as a signature of the partial restoration of chiral symmetry in ground state nuclei. On the other hand, Bhattacharyya et al. showed that the above conclusion is premature since a proper inclusion of the relevant interactions in a mean field description does give a large decrease in the $`\rho `$ meson mass as suggested by the experimental results.
In a recent work Popendreou et al. have shown that the reduction in $`\rho ^0`$ meson mass can also be explained through a $`\rho ^0`$\- nucleus bound state. They found that the depth of the potential required to produce the bound state is consistent with that expected from Dirac phenomenology using Brown-Rho scaling. In the present report we have studied the formation of $`\rho ^0`$-nucleus bound state using Walecka model.
The Walecka model is one of the most popular mean field model for nuclear matter . Though there have been lot of modifications this model still serves as the most effective mean field theory of nuclear matter. Starting from the Walecka model Lagrangian one can write down the coupled differential equation for the fields in the mean field approximation. These equations are solved self consistently for different nuclei to get the density distributions. The $`\rho ^0`$ meson mass is then evaluated as the pole of the propagator to get the mass variation with density as well as radius . The evaluation of $`\rho ^0`$ mass involves two coupling constants. One is the vector coupling of $`\rho ^0`$ meson with nucleon ($`g_\rho `$) and the other is the tensor coupling of $`\rho ^0`$ with nucleons ($`f_\rho `$ or $`c_\rho =f_\rho /g_\rho `$). In general, one can not fix the value of the tensor coupling constant within the premise of Walecka model itself. So in practice, one can take $`g_\rho `$ as well as $`c_\rho `$ both from some other source like Bonn potential or QCD sum rules or keep the $`g_\rho `$ as obtained within Walecka model and use $`c_\rho `$ from other calculations . As shown in , the reduction in the $`\rho ^0`$ meson mass is different for different parameter sets. The maximum reduction in mass is obtained from Walecka model parametr set where as the QCD sum rule parameter set yields the minimum reduction. In the following we have used the parameter set $`g_\rho =8.912`$ as obtained in Walecka model by fitting the asymmetric energy for nuclear matter and $`f_\rho =2.866`$ to describe the $`\rho `$-nucleus bound states. Similar values for $`f_\rho `$ are obtained from Bonn potential as well as QCD sum rule calculations . The average $`\rho ^0`$ masses in $`{}_{}{}^{3}He`$ for this parameter set are $`657`$ MeV and $`600`$ MeV without and with tensor interaction respectively.
The $`\rho ^0`$ meson consists of same flavour quark - antiquarks and as a result it is not expected to feel the Lorentz Vector potential generated by the nuclear environment. The total potential felt by $`\rho ^0`$ is then given by $`m_{\rho ^0}^{}(r)m_{\rho ^0}`$ where $`m_{\rho ^0}^{}`$ now depends on the position from the center of the nucleus. So in a nucleus the static $`\rho ^0`$ meson field $`\varphi _\rho `$ is given by,
$$[^2+E_\rho ^2m_{\rho ^0}^2]\varphi _\rho =0.$$
(1)
To incorporate the width of $`\rho ^0`$ meson in our estimate for the bound states we assume the phenomenological form as suggested by Saito et al. .
$$\stackrel{~}{m_{\rho ^0}^{}}=m_{\rho ^0}^2(r)\frac{i}{2}\{[m_{\rho ^0}m_{\rho ^0}^{}(r)]\gamma _\rho +\mathrm{\Gamma }_\rho \}m_{\rho ^0}^{}(r)\frac{i}{2}\mathrm{\Gamma }_\rho ^{}(r)$$
(2)
where $`\mathrm{\Gamma }_\rho `$ =150 MeV is the width of $`\rho `$ meson in free space. $`\gamma _\rho `$ is treated as a phenomenological parameter chosen to describe the in-medium meson width $`\mathrm{\Gamma }_\rho ^{}`$. So we actually solve the equation
$$[^2+E_\rho ^2\stackrel{~}{m^2}_{\rho ^0}(r)]\varphi _\rho (r)=0$$
(3)
The above equation has been solved in the coordinate space using relaxation method . This is done in the following way. First we separate the eqn.(3) in radial and angular parts. Then the wave function $`\varphi _\rho `$ and energy $`E_\rho `$ are written as $`\varphi _\rho =\varphi _\rho ^1+i\varphi _\rho ^2`$ and $`E_\rho =E_\rho ^1+iE_\rho ^2`$, where the superscripts $`1`$ and $`2`$ denote the real and imaginary parts of the relevant quantities. Substituting these in the radial part of the wave equation one gets two second order coupled differential equations for real and imaginary part of the wave function. These are then solved for the real and imaginary part of the energy $`E_\rho `$. The single particle energy $`E`$ can be defined in terms of complex eigenenergies $`E_\rho `$ as
$$E_\rho =E+m_\rho i\frac{\mathrm{\Gamma }}{2}$$
(4)
The single particle energies $`E=Re(E_\rho m_\rho )`$ and the full width $`\mathrm{\Gamma }`$ for different nuclei are given in Table 1 for two different parameter sets PI and PII. For PI and PII the vector coupling $`g_\rho `$ is 8.912 whereas the tensor coupling $`f_\rho `$ = 2.866 for PI and $`f_\rho =0`$ for PII. The table shows that a non-zero $`\gamma _\rho `$ increases the width where as the real part remains almost same. This is evident from eqn. (2) which shows that a non-zero $`\gamma _\rho `$ increases the imaginary part of the potential. Similar effect has been found in case of $`\omega `$-nucleus bound states as well. The energies in table 1 show that we get very strongly bound $`\rho `$-nucleus states in our present formalism. Without the tensor coupling, the $`E=132.95`$ MeV in the $`{}_{}{}^{3}He`$ in $`l=0`$ state. The similar value for the $`\rho `$ bound state has been found in ref.. On the other hand, we find that $`E=199.14`$ MeV in $`{}_{}{}^{3}He`$ in the presence of tensor coupling. This high binding is clearly the result of a large drop in the $`\rho ^0`$ mass in the presence of tensor coupling. In other words, to get less binding one has to reduce the effect of tensor coupling or introduce new effects like form factor which will restrict the dramatic change in the $`\rho ^0`$ mass. |
no-problem/0001/cond-mat0001195.html | ar5iv | text | # References
Glass Transition Temperature and Fractal Dimension of Protein Free Energy Landscapes
Nelson A. Alves<sup>a,</sup><sup>1</sup><sup>1</sup>1alves@quark.ffclrp.usp.br and Ulrich H.E. Hansmann<sup>b,</sup><sup>2</sup><sup>2</sup>2hansmann@mtu.edu
<sup>a</sup>Departamento de Física e Matemática, FFCLRP
Universidade de São Paulo. Av. Bandeirantes 3900. CEP 014040-901
Ribeirão Preto, SP, Brazil
<sup>b</sup>Department of Physics, Michigan Technological University
Houghton, MI 49931-1291, USA
> Abstract
>
> The free-energy landscape of two peptides is evaluated at various temperatures and an estimate for its fractal dimension at these temperatures calculated. We show that monitoring this quantity as a function of temperature allows to determine the glass transition temperature. Keywords: Energy landscape, folding funnel, fractal dimension, protein folding, generalized-ensemble simulations
It is well known that a large class of proteins folds spontaneously into unique, globular shape which is determined solely by the sequence of amino acids (the monomers) . Folding of a protein into this three-dimensional structure (in which it is biologically active) is now often described by energy landscape theory and the funnel concept . In this “new view” of folding it is assumed that the energy landscape of a protein is rugged, however, unlike for a random hetero-polymer, there is a sufficient overall slope so that the numerous folding routes converge towards the native structure. The particulars of the funnel-like energy landscape determine the transitions between the different thermodynamic states . For instance, a common scenario for folding may be that first the protein collapses from a random coil to a compact state. This coil-to-globular transition is characterized by the collapse transition temperature $`T_\theta `$. In the second stage, a set of compact structures is explored. The final stage involves a transition, characterized by the folding temperature $`T_f(T_\theta )`$, from one of the many local minima in the set of compact structures to the native conformation.
However, the essence of the funnel landscape idea is competition between the tendency towards the folded state and trapping due to ruggedness of the landscape. It is expected that for a good folder the temperature, $`T_g`$, where glass behavior sets in, has to be significantly lower than the folding temperature $`T_f`$ . It follows that it is important to calculate these two temperatures when examining the folding of a protein in computer simulations. While the folding temperature $`T_f`$ can be easily determined by monitoring the changes of a suitable order parameter with temperature, the situation is less obvious for the glass transition temperature $`T_g`$ which is normally determined from the slowing down of folding times with temperature in kinetic simulations . To measure $`T_g`$ from equilibrium properties of the protein one can use the intimate connection between “roughness” and fractality. Especially, we expect that the fractal dimension of the folding funnel will increase with increasing roughness of the free energy landscape. We conjecture that the glass transition temperature $`T_g`$ is associated with a change of the fractal dimension of the folding funnel and propose to measure $`T_g`$ by calculating the fractal dimension of the protein free energy landscape as a function of temperature. We remark that our approach differs from Ref. where recently for some peptides the fractal properties of the time series of the potential energy were studied.
While landscape theory and the funnel concept were developed from studies of minimalistic protein models without reference to specific proteins, we intend to probe our assumption for two distinct peptides. The first peptide is Met-enkephalin, which has the amino acid sequence Tyr-Gly-Gly-Phe-Met. In previous work evidence was presented that the folding of this peptide can be described within the funnel concept. Estimators for the collapse temperature $`T_\theta =295\pm 30`$ K and the folding temperature $`T_f=230\pm 30`$ K were presented . As the second peptide we choose poly-alanine of chain length $`N=20`$, which undergoes at $`T=508(5)`$ K a sharp transition between a completely ordered helical state and a random (coil) state . Hence, we expect no finite glass transition temperature for this polypeptide, and the thermal behavior of the fractal dimension should differ significantly from that of Met-enkephalin.
Our simulations of both peptides relied on a detailed, realistic description of the intramolecular interactions. Such simulations are known to be notoriously difficult, because at low temperatures simulations based on canonical Monte Carlo or molecular dynamics techniques will get trapped in one of the multitude of local minima separated by high energy barriers, and physical quantities cannot be calculated accurately. Only recently, with the development of generalized-ensemble techniques such as multicanonical sampling and simulated tempering , did the calculation of accurate low-temperature thermodynamic quantities become feasible in protein simulations . Hence, the use of one of these novel techniques, multicanonical sampling , was crucial for our project.
In a multicanonical algorithm conformations with energy $`E`$ are assigned a weight $`w_{mu}(E)\propto 1/n(E)`$, $`n(E)`$ being the density of states. A simulation with this weight generates a random walk in energy space, and a large range of energies is sampled. Hence, one can use re-weighting techniques to calculate the free energy $`G(x)`$ as a function of the chosen reaction coordinate $`x`$ over a wide temperature range by
$$G(x,T)=-k_BT\mathrm{log}[P(x)w_{mu}^{-1}(E(x))e^{-E(x)/k_BT}]-C.$$
(1)
Here, $`P(x)`$ is the distribution of $`x`$ as obtained in our multicanonical simulation, and $`C`$ is chosen so that the lowest value of $`G(x)`$ is set to zero for each temperature. Unlike in a canonical simulation, the weight $`w_{mu}`$ is not known a priori, and estimators have to be calculated using the procedures described in Refs. .
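As an illustration, the re-weighting of Eq. (1) can be sketched in a few lines of Python; the sample arrays, the weight callable `log_w_mu`, and the binning below are hypothetical placeholders, not part of the original analysis:

```python
import numpy as np

kB = 1.0   # Boltzmann constant in the simulation's energy units

def free_energy(x, E, log_w_mu, T, bins=50):
    """Re-weight a multicanonical time series to G(x, T) of Eq. (1).

    x, E     : reaction coordinate and energy recorded each sweep
    log_w_mu : callable returning log w_mu(E) used in the simulation"""
    logw = -log_w_mu(E) - E / (kB * T)       # log[w_mu^{-1}(E) e^{-E/kBT}]
    logw -= logw.max()                       # guard against overflow
    hist, edges = np.histogram(x, bins=bins, weights=np.exp(logw))
    ok = hist > 0
    G = -kB * T * np.log(hist[ok])
    G -= G.min()                             # the constant C of Eq. (1)
    return 0.5 * (edges[1:] + edges[:-1])[ok], G
```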
The main problem in characterizing the roughness of the high-dimensional folding funnel of a protein is the choice of an appropriate reaction coordinate. We choose for Met-enkephalin the overlap with the (known) ground state, $`O`$, defined by
$$O=1-\frac{1}{90n_F}\underset{i=1}{\overset{n_F}{\sum }}|\alpha _i-\alpha _i^{(GS)}|,$$
(2)
where the $`\alpha _i^{(GS)}`$ denote the $`n_F`$ dihedral angles of the ground state conformation. Similarly, we choose as order parameter for poly-alanine the helicity
$$q=\frac{\stackrel{~}{n}_H}{N-2},$$
(3)
which allows us to distinguish between helical and coil configurations of poly-alanine. Here $`\stackrel{~}{n}_H`$ is the number of helical residues in a conformation, without counting the first and last residues which can move freely and will not be part of a helical segment.
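A minimal sketch of the two order parameters, Eqs. (2) and (3), might look as follows; the periodic wrapping of the dihedral-angle differences into [0, 180] degrees is our assumption about the angle convention:

```python
import numpy as np

def overlap(alpha, alpha_gs):
    """Eq. (2): overlap with the ground state; angles in degrees.
    Differences are wrapped into [0, 180] (assumed angle convention)."""
    d = np.abs(np.asarray(alpha) - np.asarray(alpha_gs)) % 360.0
    d = np.minimum(d, 360.0 - d)
    return 1.0 - d.sum() / (90.0 * len(d))

def helicity(n_H, N):
    """Eq. (3): q = n_H/(N-2); first and last residues are not counted."""
    return n_H / (N - 2)
```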
The fractal dimension of the free energy landscape can be calculated from different definitions, and different definitions can yield different information about the graph under study . From a theoretical point of view the Hausdorff-Besicovitch definition is the proper one to characterize the geometrical complexity; however, it is very difficult to evaluate numerically. More widely used techniques to obtain a dimension of an arbitrary set are box-counting (and its generalized version) and the method introduced by Higuchi , which we use for our analysis.
To define a fractal dimension, Higuchi considers a finite set of observations $`X(j),j=1,2,\mathrm{\dots },N`$, taken at regular intervals in the reaction coordinate, and evaluates the length $`L_m(k)`$ of the corresponding graph for different interval lengths $`k`$, obtained from the sequences
$$X_k^m:X(m),X(m+k),X(m+2k),\mathrm{\dots },X(m+[\frac{N-m}{k}]k),$$
(4)
where $`m=1,2,\mathrm{\dots },k`$ and $`[\frac{N-m}{k}]`$ denotes the integer part of $`(N-m)/k`$. The length of the graph is calculated as
$$L_m(k)=\frac{N-1}{k}\underset{i=1}{\overset{[\frac{N-m}{k}]}{\sum }}\frac{|X(m+ik)-X(m+(i-1)k)|}{k[\frac{N-m}{k}]}.$$
(5)
If the behavior of the graph has fractal characteristics over the available range $`k`$ then
$$<L(k)>\sim k^{-d},$$
(6)
where $`d`$ is the fractal dimension and $`<L(k)>`$ is the average value over $`k`$ partial lengths of the graph.
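Higuchi's construction, Eqs. (4)-(6), is straightforward to implement; the following sketch (array names and the fitting range are illustrative) returns the estimate of $`d`$ as the negative slope of a log-log fit:

```python
import numpy as np

def higuchi_dimension(X, kmax):
    """Estimate the fractal dimension d from <L(k)> ~ k^(-d), Eqs. (4)-(6)."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    ks, Lk = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(1, k + 1):              # m = 1, ..., k
            n = (N - m) // k                   # integer part of (N-m)/k
            if n < 1:
                continue
            idx = (m - 1) + k * np.arange(n + 1)
            dist = np.abs(np.diff(X[idx])).sum()
            lengths.append(dist * (N - 1) / (k * k * n))   # Eq. (5)
        ks.append(k)
        Lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(ks), np.log(Lk), 1)
    return -slope                              # d of Eq. (6)
```

For a straight-line graph the sum in Eq. (5) gives $`L_m(k)=(N-1)/k`$, so the routine returns $`d=1`$, as it should.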
We now start presenting our results, which rely on 2,000,000 sweeps for both Met-enkephalin and poly-alanine. The potential energy function $`E_{tot}`$ that we used is given by the sum of an electrostatic term $`E_C`$, a Lennard-Jones term $`E_{LJ}`$, and a hydrogen-bond term $`E_{hb}`$ for all pairs of atoms in the peptide, together with a torsion term $`E_{tors}`$ for all torsion angles. The parameters for the energy function were adopted from ECEPP/2 (as implemented in the KONF90 program ). We further fix the peptide bond angles $`\omega `$ to their common value $`180^{}`$, do not explicitly include the interaction of the peptide with the solvent, and set the dielectric constant $`ϵ`$ equal to 2.
In Fig. 1 we show the free energy landscape of Met-enkephalin as a function of the overlap with the ground state for $`T=230`$ K. The funnel towards the ground state is clearly visible in this plot. In previous work it was found that at this temperature no long-living traps exist, and therefore the funnel is relatively smooth. In Fig. 2 we show the corresponding logarithm of the averaged curve length $`<L(k)>`$ as a function of the interval length $`k`$ for this temperature. The straight line corresponds to the least-squares fit to the linear model obtained from Eq. (6). The error bars are the standard deviations obtained from the $`k`$ sets $`L_m(k)`$ for the full statistics. Here we present the fractal dimension obtained from the full statistics, without introducing any binning procedure for our data. Hence, the errors reported here for the final estimates of $`d`$ are related to deviations from the linear behavior of $`\mathrm{ln}<L(k)>`$ in Eq. (6).
Repeating the above analysis for various temperatures, we obtain a plot of the fractal dimension as a function of temperature, which is displayed for Met-enkephalin in Fig. 3. Several distinct regions can be observed in this graph. In the high temperature region the fractal dimension seems to be constant and deviates only slightly from that of a one-dimensional graph: we find $`d\approx 1.15`$. In Refs. it was shown that this temperature region is dominated by extended coil structures with little resemblance to the ground state. Hence the energy landscape is a rapidly increasing function of the overlap, with the minimum at small values of that order parameter. With decreasing temperature the fractal dimension of the free energy landscape increases until it reaches a local maximum of $`d_1=1.33\pm 0.05`$ at $`T=280\pm 40`$ K (the quoted uncertainty in the temperature is an upper estimate, given by the range of temperatures for which the measured fractal dimension $`d`$ lies within the error bars of $`d_1`$). This temperature seems to correspond to $`T_\theta =295\pm 20`$ K, the collapse temperature found in earlier work for Met-enkephalin. At $`T_\theta `$ both extended coil structures and an ensemble of collapsed structures can exist, and the free energy landscape reflects the large fluctuations at this temperature. In Ref. it was shown that at this temperature a funnel-like structure of the landscape starts to appear, which becomes clearly visible at the folding temperature $`T_f=230\pm 30`$ K. We find in our plot of the fractal dimension no indication of this folding transition, presumably because that temperature lies within the same peak. Instead we observe that the fractal dimension decreases again with further decreasing temperature. This is consistent with our previous results and indicates that as the temperature decreases the ground state structure becomes more and more favored in the ensemble of compact structures. Actually, the folding temperature is defined by the condition that half of the observed configurations are ground-state-like, and that temperature, $`T_f=230\pm 30`$ K, corresponds roughly to the mid point of the fractal dimension plot between its maximum of $`d=1.33\pm 0.05`$ at $`T=280\pm 40`$ K and the low-temperature minimum of $`d=1.25\pm 0.02`$ at a temperature $`T=180\pm 30`$ K. Below that temperature, the fractal dimension increases rapidly again, indicating the onset of glassy behavior and the appearance of long-living traps. Hence, we identify this temperature as the glass temperature and find for Met-enkephalin
$$T_g=180\pm 30K.$$
(7)
This estimate is consistent with $`T_g=160\pm 30K`$ as determined by an approximate calculation in Ref. .
As mentioned above, it is expected that for a protein $`T_f>T_g`$, i.e. a good folder can be characterized by the relation
$$\frac{T_f}{T_g}>1.$$
(8)
The result for $`T_f=230\pm 30`$ K (as quoted in Ref. ) and our new estimate for the glass transition temperature $`T_g`$ indeed lead to $`T_f/T_g=1.28>1`$. This value of the ratio clearly demonstrates that Met-enkephalin is a good folder according to the above criterion. Our result is consistent with an alternative characterization of folding properties. Thirumalai and collaborators have pointed out that the kinetic accessibility of the native conformation can be classified by the parameter
$$\sigma =\frac{T_\theta -T_f}{T_\theta },$$
(9)
i.e., the smaller $`\sigma `$ is, the more easily a protein can fold. With the central values $`T_\theta =295`$ K and $`T_f=230`$ K, taken from Ref. , we have for Met-enkephalin $`\sigma \approx 0.2`$, which implies reasonably good folding properties according to Ref. . Hence, we see that there is a strong correlation between the folding criterion ($`T_f/T_g>1`$) proposed by Bryngelson and Wolynes and the one by Thirumalai and co-workers .
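As a quick numerical check of the two criteria with the central values quoted above:

```python
T_theta, T_f, T_g = 295.0, 230.0, 180.0     # central values [K] from the text
print(T_f / T_g)                  # 1.28 > 1   (Bryngelson-Wolynes criterion)
print((T_theta - T_f) / T_theta)  # 0.22 ~ 0.2 (Thirumalai's sigma, Eq. (9))
```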
It is interesting to compare the above graph with the behavior of the fractal dimension for poly-alanine, with its sharp helix-coil transition. Following the recipe described for Met-enkephalin, we obtain from an analysis of the free energy landscapes $`G(q,T)`$ (with $`q`$ defined in Eq. 3) a plot of the fractal dimension as a function of temperature for poly-alanine chains of length $`N=20`$. The graph is displayed in Fig. 4.
Here we also observe an almost flat curve at high temperatures, with a small value ($`d\approx 1`$) of the fractal dimension of the free energy landscape as a function of the helicity, which is our order parameter for this system. The small value of the fractal dimension again indicates the relatively smooth landscape of the peptide in this temperature region, which is dominated by coil structures. Again the fractal dimension increases with decreasing temperature until a maximum of $`d=1.6\pm 0.3`$ is reached at $`T=510\pm 30`$ K for $`N=20`$.
This temperature corresponds to the helix-coil transition temperature $`T_c=508\pm 5`$ K of Ref. . Below that temperature the fractal dimension again decreases rapidly, but unlike for Met-enkephalin it does not increase again at some lower temperature. The rapid decrease in $`d`$ reflects the observation that below $`T_c`$ the system exists almost exclusively in a single configuration, namely as a single extended $`\alpha `$-helix, and that the transition between coil and helix states is either first-order-like or can be described as a strong second order transition . Since the ground state structure is so strongly energetically favored for poly-alanine, we find no indication of a glass transition at lower temperatures. Such behavior is expected for pronounced transitions of the above type.
Let us summarize our results. We have used generalized-ensemble simulations to calculate the free-energy landscapes of two peptides as a function of a suitable reaction coordinate over a large temperature range. We have measured the fractal dimension of these energy landscapes and studied its thermal behavior. Our results show that the fractal dimension $`d(T)`$ as a function of temperature is sensitive to thermodynamic transitions in the molecules. In particular, it is possible to determine estimators for the glass transition temperature $`T_g`$ from this quantity.
Acknowledgements:
U.H. was a visitor at the Ribeirão Preto campus of the Universidade de São Paulo when this work was performed. He thanks the Departamento de Física e Matemática for the kind hospitality extended to him, and FAPESP for a generous travel grant. Financial support from FAPESP and from a Research Excellence Fund of the State of Michigan is gratefully acknowledged.
Figure Captions:
1. Free energy of Met-enkephalin as a function of the overlap with the (known) ground state O for $`T=230`$ K. The results are calculated from a generalized-ensemble simulation of 2,000,000 Monte Carlo sweeps.
2. Linear regression for ln$`<L(k)>`$ (as defined in Eq. (6)) for $`T=230`$ K.
3. Fractal dimension of the free-energy landscape of Met-enkephalin as a function of temperature.
4. Fractal dimension of the free-energy landscape of Poly-alanine (with chain length $`N=20`$) as a function of temperature. |
## 1 Introduction
The understanding of the role played by quenched impurities in the nature of phase transitions is one of the significant subjects in statistical physics, and it has been a topic of substantial interest for many authors in the last two decades . According to the Harris criterion, quenched randomness is a relevant perturbation at a second-order critical point when the specific-heat exponent $`\alpha `$ of the pure system is positive. Following the earlier work of Imry and Wortis, who argued that quenched disorder could produce rounding of a first-order phase transition and thus induce second-order phase transitions, the introduction of randomness to systems undergoing a first-order phase transition has been comprehensively considered. It was shown by Hui and Berker that bond randomness can have a drastic effect on the nature of a first-order phase transition by phenomenological renormalization group arguments , and this feature has been placed on a firmer basis with the rigorous proof of vanishing of the latent heat . Their theory was numerically checked with the Monte Carlo (MC) method by Chen, Ferrenberg and Landau (CFL) , where the eight-state Potts model was studied with random-bond disorder. Experimental evidence in two-dimensional systems was found that in the order-disorder phase transitions of adsorbed atomic layers, the critical exponents are modified, with the addition of disorder, from the original four-state Potts model universality class of the pure case . On the other hand, no modification is found when the pure system belongs to the Ising universality class . The theoretical study of such kinds of disordered systems is also an active field where a resort to intensive MC simulations is often helpful .
It is well known that the pure Potts model in two dimensions (2D) has a second order phase transition when the number of Potts states $`q\le 4`$, and the transition is first order for $`q>4`$. As the specific-heat exponent $`\alpha `$ of the pure system is always positive for $`q>2`$, disorder will be a relevant perturbation for the Potts model. As a result, the transitions are second order for all 2D $`q`$-state Potts models in the presence of quenched disorder, and the impurities have a particularly strong effect for $`q>4`$, even changing the order of the transitions.
In this paper, we discuss the dynamic scaling features of the random-bond Potts model (RBPM) through MC simulations, in order to estimate the critical exponents. We consider the important questions of whether there exists an Ising-like universality class for the RBPM and how the critical behavior is affected by the introduction of disorder into the pure system . The large-scale MC results by CFL and in Ref. suggest that, in 2D, such random systems should belong to the pure Ising universality class. These results are also coherent with experiment . In recent papers, however, Cardy and Jacobsen studied the random-bond Potts model for several values of $`q`$ with a different approach based on the connectivity transfer matrix (TM) formalism of Blöte and Nightingale . Their estimates of the critical exponents led to a continuous variation of $`\beta /\nu `$ with $`q`$, which is in sharp disagreement with the MC results for $`q=8`$ . We hope that the critical behavior measured in this paper will play a role in settling this controversy. Furthermore, we test the short-time dynamic (STD) MC approach for the first time on spin systems with quenched disorder and show its efficiency by numerical studies, which is also one of the main aims of this paper.
## 2 Model and Method
The Hamiltonian of $`q`$-state Potts model with quenched random interactions can be written,
$`\beta H=-{\displaystyle \underset{<i,j>}{\sum }}K_{ij}\delta _{\sigma _i\sigma _j},K_{ij}>0,`$ (1)
where the spin $`\sigma `$ can take the values $`1,\mathrm{\dots },q`$, $`\beta =1/k_BT`$ is the inverse temperature, $`\delta `$ is the Kronecker delta function, and the sum is over all nearest-neighbor pairs on a 2D lattice. The dimensionless couplings $`K_{ij}`$ are selected from two positive (ferromagnetic) values $`K_1`$ and $`K_2=rK_1`$, with the strong-to-weak coupling ratio $`r=K_2/K_1`$ called the disorder amplitude, according to a bimodal distribution,
$`P(K)=p\delta (K-K_1)+(1-p)\delta (K-K_2).`$ (2)
When $`p=0.5`$, the system is self-dual and the exact critical point can be determined by ,
$`(e^{K_c}-1)(e^{K_c^{\prime }}-1)=q.`$ (3)
where $`K_c`$ and $`K_c^{\prime }`$ are the corresponding critical values of $`K_1`$ and $`K_2`$, respectively, at the transition point. For $`r=1`$, which corresponds to the pure case, the critical point is located at $`K_c=\text{log}(1+\sqrt{q})`$ and the phase transition is first-order for $`q>4`$. With an additional random-bond distribution, however, new second-order phase transitions are induced for all $`q`$-state Potts models, and the new critical points are determined according to Eq.(3) for different values of the disorder amplitude $`r`$ and the state parameter $`q`$.
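For illustration, the self-dual point of Eq. (3) can be obtained numerically; the bracketing interval below is an assumption:

```python
import numpy as np
from scipy.optimize import brentq

def K_c(q, r):
    """Self-dual critical point of Eq. (3): (e^K - 1)(e^{rK} - 1) = q."""
    return brentq(lambda K: np.expm1(K) * np.expm1(r * K) - q, 1e-9, 20.0)

# r = 1 recovers the pure-model value K_c = log(1 + sqrt(q))
assert abs(K_c(8, 1.0) - np.log(1.0 + np.sqrt(8.0))) < 1e-8
print(K_c(8, 10.0))       # weak coupling K_1; the strong one is K_2 = r*K_1
```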
In this work we chose $`q=8`$, which is known to have a strong first-order phase transition, in the hope of finding a new second-order phase transition caused by the quenched disorder and thereby showing the effect of impurities on first-order systems. The strength of the disorder was chosen for several values of $`r`$, as was done in , to check the Ising-like universality class. To minimize the number of bond configurations needed for the disorder averaging, we confined our study to bond distributions in which there are the same number of strong and weak bonds in each of the two lattice directions. This procedure should reduce the variation between different bond configurations, with no loss of generality.
We performed our simulations by the STD method on 2D square lattices with periodic boundary conditions. Such dynamic MC simulations have been successfully performed to estimate the critical temperatures $`T_c`$ and the critical exponents $`\theta `$, $`\beta `$, $`\nu `$ and the dynamic exponent $`z`$ for the 2D Ising model and the 2D 3-state Potts model , since for both models there exist second order phase transitions. Recently this approach has also been extensively applied to the Fully Frustrated XY model and spin glass systems to study the critical scaling characteristics and to estimate all the dynamic and static critical exponents .
Traditionally it was believed that universality and scaling relations can be found only in the equilibrium stage or long-time regime. In Ref. , however, it was discovered that for a magnetic system initially in a state at very high temperature $`T\gg T_c`$, which is suddenly quenched to the critical temperature $`T_c`$ and then evolves according to a dynamics of model A , there emerges a universal dynamic scaling behavior already within the short-time regime, which satisfies,
$`M^{(k)}(t,\tau ,L,m_0)=b^{-k\beta /\nu }M^{(k)}(b^{-z}t,b^{1/\nu }\tau ,b^{-1}L,b^{x_0}m_0),`$ (4)
where $`M^{(k)}`$ is the $`k`$th moment of the magnetization, $`\tau =(T-T_c)/T_c`$ is the reduced temperature, $`\beta `$ and $`\nu `$ are the well known static critical exponents, and $`b`$ is a scaling factor. The variable $`x_0`$, a new independent exponent, is the scaling dimension of the initial magnetization $`m_0`$. This dynamic scaling form is generalized from finite size scaling in the equilibrium stage. Importantly, the scaling behavior of Eq.(4) can be applied both to dynamic exponent measurements and to estimates of the static exponents originally defined in equilibrium.
We begin our study with the evolution of the magnetization in the initial stage of the dynamic relaxation, starting at very high temperature with small magnetization ($`m_0\approx 0`$). For a sufficiently large lattice ($`L\rightarrow \mathrm{\infty }`$), from Eq.(4), by setting $`\tau =0`$ and $`b=t^{1/z}`$, it is easy to derive that
$`M^{(k)}(t,m_0)=t^{-k\beta /\nu z}M^{(k)}(1,t^{x_0/z}m_0).`$ (5)
When $`k=1`$ we get the most important scaling relation on which our measurements of the critical exponent $`\theta `$ are based,
$`M(t)\sim m_0t^\theta ,\theta =(x_0-\beta /\nu )/z.`$ (6)
As a result, the magnetization undergoes an initial increase at the critical point $`K_c`$ after a microscopic time $`t_{mic}`$. This prediction is supported by a number of MC investigations, which have been applied to detect all the static and dynamic critical exponents as well as the critical temperatures . The advantage of dynamic MC simulations is that they may eliminate critical slowing down, since the measurements are performed in the early time stages of the evolution where the spatial and time correlation lengths are small.
In our simulations, the time evolution of $`M(t)`$ is calculated through the definition
$`M(t)`$ $`=`$ $`{\displaystyle \frac{1}{N}}[{\displaystyle \frac{q<M_O>-1}{q-1}}].`$ (7)
Here $`M_O=\text{max}(M_1,M_2,\mathrm{\dots },M_q)`$, with $`M_i`$ being the number of spins in the $`i`$th state among the $`q`$ states. $`<\mathrm{}>`$ denotes the average over initial configurations generated with independent random number sequences, and $`[\mathrm{}]`$ the average over quenched random-bond disorder configurations. $`N=L^2`$ is the number of spins on the square lattice, and $`q=8`$ is chosen.
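A sketch of the estimator of Eq. (7) for a single configuration might read as follows; we use the normalized form $`(qM_O/N-1)/(q-1)`$, which equals 1 in a fully ordered state (an assumption about the intended normalization):

```python
import numpy as np

def potts_M(spins, q):
    """Normalized Potts magnetization of one configuration (cf. Eq. (7)):
    M = (q*M_O/N - 1)/(q - 1), with M_O the largest state population."""
    counts = np.bincount(spins.ravel(), minlength=q)
    return (q * counts.max() / spins.size - 1.0) / (q - 1.0)
```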
The susceptibility plays an important role in equilibrium. Its finite size behavior is often used to determine the critical temperature and the critical exponents $`\gamma /\nu `$ and $`\beta /\nu `$ . For the STD approach, the time-dependent susceptibility (the second moment of the magnetization) is also interesting and important. For the random-bond Potts model, the second moment of the magnetization is usually defined as
$`M^{(2)}(t)`$ $`=`$ $`{\displaystyle \frac{1}{N}}[(<M^2(t)>-<M(t)>^2)].`$ (8)
To study the scaling behavior of the second moment of the magnetization, we have to take initial states with $`m_0=0`$ to start the relaxation processes. Because the spatial correlation length at the beginning of the relaxation is small compared with the lattice size $`L`$ in the short-time regime of the dynamic evolution, the second moment behaves as $`M^{(2)}(t,L)\sim L^{-d}`$. Then the finite size scaling Eq.(4) induces a power-law behavior at the critical temperature,
$`M^{(2)}(t)\sim t^y,y=(d-2\beta /\nu )/z.`$ (9)
From the scaling analysis of the spatial correlation function we easily see that the non-equilibrium spatial correlation length grows as $`\xi \sim t^{1/z}`$. Therefore $`M^{(2)}(t)\sim \xi ^{(d-2\beta /\nu )}`$.
In the above considerations the dynamic relaxation process was assumed to start from a disordered state or with small magnetization $`m_0`$. Another interesting and important process is the dynamic relaxation from a completely ordered state. The initial magnetization is located exactly at its fixed point $`m_0=1`$, where scaling of the form,
$`M^{(k)}(t,\tau ,L)=b^{-k\beta /\nu }M^{(k)}(b^{-z}t,b^{1/\nu }\tau ,b^{-1}L),`$ (10)
is expected. This scaling form looks the same as the dynamic scaling form in the long-time regime; however, it is now assumed to be valid already in the macroscopic short-time regime.
For the magnetization itself, $`b=t^{1/z}`$ yields, for a sufficiently large lattice,
$`M(t,\tau )=t^{-\beta /\nu z}M(1,t^{1/\nu z}\tau ).`$ (11)
This leads to a power-law decay behavior of
$`M(t,\tau )\sim t^{-c_1},c_1=\beta /\nu z,`$ (12)
at the critical point ($`\tau =0`$). The formula can be used to calculate the critical exponents $`\beta /\nu `$ and $`z`$. For a small but nonzero $`\tau `$, the power-law behavior will be modified by the scaling function $`M(1,t^{1/\nu z}\tau )`$, which has been used to determine the critical temperatures . Furthermore, by introducing a Binder cumulant
$`U(t,L)={\displaystyle \frac{M^{(2)}(t,L)}{(M(t,L))^2}}-1,`$ (13)
a similar power-law behavior at the critical point induced from the scaling Eq.(10) shows that,
$`U(t,L)\sim t^{c_2},c_2=d/z,`$ (14)
on a large enough lattice. Here, unlike for the relaxation from the disordered state, the fluctuations caused by the initial configurations are much smaller. In practical simulations, these measurements of the critical exponents and critical temperature are better in quality than those from the relaxation process starting from disordered states.
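The cumulant and the power-law fits can be sketched as follows; the fitting window $`t\in [10,200]`$ follows the one used below, while everything else is illustrative:

```python
import numpy as np

def binder_cumulant(M_samples):
    """U = <M^2>/<M>^2 - 1 (Eq. (13)) from configuration samples at fixed t."""
    M = np.asarray(M_samples, dtype=float)
    return (M ** 2).mean() / M.mean() ** 2 - 1.0

def powerlaw_exponent(t, y, tmin=10, tmax=200):
    """Slope of log y vs log t; e.g. c2 = d/z for y = U(t), Eq. (14),
    or -c1 = -beta/(nu*z) for y = M(t), Eq. (12)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    sel = (t >= tmin) & (t <= tmax)
    slope, _ = np.polyfit(np.log(t[sel]), np.log(y[sel]), 1)
    return slope
```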
## 3 MC Simulations and Results
Since it was pointed out that the Heat-bath algorithm is more efficient than the Metropolis algorithm in the STD , and the universality is satisfied for different algorithms, we only perform the MC simulations with the Heat-bath algorithm at the critical points of the 2D eight-state RBPM for an optimal disorder amplitude $`r^{*}=10`$, which lies in the random fixed point regime with the largest value of the central charge, $`c=1.5300(5)`$ . Samples for averages are taken over 300 disorder distribution configurations and about 500 independent initial configurations on square lattices $`L^2`$ with $`L`$ up to 128. Statistical errors are simply estimated by performing three groups of averages with different random seeds selected for the initial configurations. It should be noted that, except for $`M(t)`$, the measurements of $`M^{(2)}(t)`$ and $`U(t)`$ are restricted to initial states with $`m_0=0`$ or $`m_0=1`$. Importantly, it was verified that the critical exponents retain the same values as those in the equilibrium or long-time stage of the relaxation . Therefore we can measure these exponents based on the corresponding scaling relations in the initial stages of the relaxation.
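For concreteness, a single heat-bath sweep for the random-bond Potts model might be sketched as follows; the bond-array layout and the site-by-site update order are illustrative assumptions, not a description of the code actually used:

```python
import numpy as np

rng = np.random.default_rng(1)

def heatbath_sweep(s, Kx, Ky, q):
    """One heat-bath sweep of the 2D random-bond Potts model, Eq. (1).

    s : (L, L) spins in 0..q-1;  Kx[i, j], Ky[i, j]: couplings on the
    bonds (i, j)-(i, j+1) and (i, j)-(i+1, j); periodic boundaries."""
    L = s.shape[0]
    for i in range(L):
        for j in range(L):
            w = np.zeros(q)
            neigh = ((Kx[i, j],           s[i, (j + 1) % L]),
                     (Kx[i, (j - 1) % L], s[i, (j - 1) % L]),
                     (Ky[i, j],           s[(i + 1) % L, j]),
                     (Ky[(i - 1) % L, j], s[(i - 1) % L, j]))
            for K, sn in neigh:
                w[sn] += K              # local field on each Potts state
            p = np.exp(w - w.max())     # heat-bath probabilities ~ e^{w}
            s[i, j] = rng.choice(q, p=p / p.sum())
    return s
```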
We start our simulations by verifying the power-law scaling behavior of $`M(t)`$ for several values of the disorder amplitude at the critical points $`K_c(r)`$ (as shown in Table 1). The initial configurations are prepared with small magnetizations $`m_0=0.06,0.04`$, 0.02 and with exactly zero magnetization. In Fig. 1, the time evolutions of the magnetization $`M(t)`$ for different disorder amplitudes $`r`$ on a $`64^2`$ lattice are displayed on a double-log scale. We can easily see that all the curves exhibit the power-law behavior predicted by Eq.(6). Thus $`\theta `$ can be estimated from the slopes of the curves. The values of $`\theta `$ as a function of the disorder amplitude $`r`$ for small initial magnetization $`m_0`$ are presented in Table 1.
We then set $`m_0=0`$ to measure the second moment of the magnetization. The power-law behaviour of the second moment $`M^{(2)}(t)`$ is observed in Fig. 2, where the curves for different lattice sizes are plotted. Again, they present a very clean power-law increase. Values of the scaling dimension $`y=(d-2\beta /\nu )/z`$ determined from the slopes of the curves during $`t=[10,200]`$ are listed in Table 2.
We furthermore set $`m_0=1`$ to observe the evolution of the magnetization and the Binder cumulant, both of which should show the power-law behavior predicted by Eq.(12) and Eq.(14). Their curves are plotted in Figs. 3 and 4, respectively. The values of the scaling dimensions $`c_1=\beta /\nu z`$ and $`c_2=d/z`$ are then estimated, and are also presented in Table 2. Now the results for $`y`$, $`c_1=\beta /\nu z`$ and $`c_2=d/z`$ can be used to estimate the critical exponent $`\beta /\nu `$, shown in Table 2. For comparison, also listed in Table 2 are the corresponding results of the scaling dimensions for the Ising and $`q=3`$ Potts models on 2D (3D) square (cubic) lattices, and in Table 3 we summarize the results for the critical exponent $`\beta /\nu `$ up to the present.
## 4 Summary and Conclusion
In this paper we have investigated the short-time critical dynamics of the random-bond Potts model on 2D lattices, in order to verify whether it has a second order phase transition in the Ising-like universality class, by an STD study. Dynamic scaling behaviour was found, and has been used to estimate the critical exponents $`\theta `$, $`z`$ and $`\beta /\nu `$. Our main results are summarized in Table 2; they are obtained from the slopes of the power-law curves of $`M(t)`$, $`M^{(2)}(t)`$ and $`U(t)`$ on double-log scales by least-squares fits.
Our work shows that for the RBPM there exists a power-law behavior, which is the typical feature of a continuous phase transition in the STD processes. The $`r`$-dependence of the dynamic exponent $`\theta `$ gives evidence that the dynamic MC behavior differs from that of the pure Ising model, and the values of the magnetic exponent $`\beta /\nu `$ in our calculation seem to agree with those given by the TM formalism , but not with those by CFL . Furthermore, we found that the values of the dynamic exponent $`\theta `$ depend on the disorder amplitude $`r`$, and this suggests that the dynamic exponent $`z`$ may also depend on $`r`$, which would be interesting to examine in future studies.
In conclusion, this study presents numerical evidence that the quenched impurities in the RBPM can induce new second-order phase transitions, but these appear not always to belong to the Ising-like universality class, although the new critical exponent $`\theta `$ is, within error bars, the same for the $`r=10`$ RBPM and the Ising model in the present calculations. Second, as the effect of critical slowing down in the equilibrium stage is more severe for the RBPM than for pure systems, cluster algorithms have, up to now, been frequently used in MC simulations of the 2D RBPM . In the present paper, however, we have applied STD MC simulations, which use local updating schemes, to the 2D RBPM; the fact that dynamic MC simulations can avoid critical slowing down in the STD processes, where the spatial correlation length is still small, makes it easier to calculate the critical exponents. An important subject for further study is, for example, how the critical exponent $`\beta /\nu `$ depends on the state parameter $`q`$ and the disorder amplitude $`r`$, which can be examined by a systematic simulation using the STD method in order to clarify the crossover behaviour from the random fixed point to a percolation-like limit . This is being studied at present.
We acknowledge helpful discussions with Y. Aoki and H. Shanahan. This research was initiated during a visit to the University of Tsukuba by HPY, who also acknowledges the Center for Computational Physics for its hospitality; the MC simulations were performed there on the DEC workstations.
FIGURE CAPTIONS
FIGURE 1: The time evolution of magnetization showing the $`r`$–dependence of $`\theta `$, plotted with a double-log scale on a lattice of $`64\times 64`$ with $`m_0=0.01`$.
FIGURE 2: The time evolution of second moment of magnetization starting from absolute random states, plotted with a double-log scale on lattices of $`32\times 32`$, $`64\times 64`$ and $`128\times 128`$.
FIGURE 3: The power-law decay of magnetization starting from fully ordered states, plotted with a double-log scale on lattices of $`32\times 32`$, $`64\times 64`$ and $`128\times 128`$. The finite size effect is obvious when the lattice size $`L<64`$.
FIGURE 4: The time evolution of the Binder cumulant starting from fully ordered states, plotted with a double-log scale on lattices of $`32\times 32`$, $`64\times 64`$ and $`128\times 128`$.
Induced coherence with and without induced emission
## Abstract
We analyze signal coherence in the setup of Wang, Zou and Mandel, where two optical downconverters have indistinct idler modes. Quantum interference, caused by indistinguishability of paths, has a visibility proportional to the transmission amplitude between idlers. Classical interference, caused by induced emission, may be complete for any finite transmission.
The phenomenon of induced emission (that is, emission stimulated in a system by an input from another system) is well-known in laser technology . It causes the phase of the amplified field to adopt the same phase as the incident locking field. It can also be used in parametric down conversion to lock the phase of the idler, and hence that of the signal (since the phase sum of the signal and idler is locked to the pump phase) . If the field used to lock the idler of one downconverter (DC2 in Fig. 1) is itself the idler output of another downconverter (DC1 in Fig. 1), the two signal fields will be locked in phase also. Thus they will have (in principle) perfect first order coherence and so will interfere at the final beam splitter in Fig. 1. If there is no connection between the two downconverters, and hence no induced emission, the two signals will be incoherent, and there will be no interference. The classical explanation for this is that in parametric downconversion the phases of the signal and idler vary randomly from shot to shot, with only their sum being fixed by the pump phase.
The above arguments are completely classical. Wang, Zou and Mandel (WZM) used a completely different (quantum mechanical) explanation, based on indistinguishability of paths, to explain the interference they observed in their realization of the experiment shown in Fig. 1. They did this for the very good reason that there was no induced emission in their experiment, as the downconversion rates were so low that the probability of both crystals producing a downconverted pair was negligible. Nonetheless, their analysis showed that for perfect matching of idler modes, the signal fields from DC1 and DC2 show perfect interference, while the interference is lost if the idler fields are distinguishable.
The quantum analysis used by WZM is the only correct explanation of their experiment. However, the existence of a classical theory which also reproduces these coherence features poses the following question: when is interference due to induced emission and when is it due to indistinguishability of quantum transition paths? Put another way, what, in the results of WZM, is the signature of quantum induced coherence, as distinct from classical induced emission? In this letter we show that the signature is the linear dependence of the coherence on the transmission amplitude $`t`$ from the output of idler 1 to the input of idler 2.
Before presenting our analysis, we note that the question of classical versus quantum explanations for first-order interference in parametric down conversion has arisen before with reference to an experiment of Herzog et al. . In this experiment both signal and idler fields were reflected and passed through a single down-converter a second time. Both classical and quantum arguments predict first-order interference features in the resulting fields, but also here with different magnitudes of visibility . An elegant experiment with a single, but spatially extended, down-converter was recently performed, where the same discussion appears as to whether the signal and idler fields stimulate down conversion of later pump pulses further along the crystal, or whether different pulses interfere because of the indistinguishability between photons created at different times and places inside the crystal .
We turn now to our, fully quantum, analysis of the WZM experiment. In an appropriate limit the system can be described by four modes, $`s_1,i_1`$ (the signal and idler for DC1) and $`s_2,i_2`$ (the signal and idler for DC2). Consider an arbitrary operator in the Hilbert space of these four modes. The equation giving its transformation from its value $`O`$ before the interaction to its value $`O^{\prime }`$ after the action of the downconverters and the idler transmission between DC1 and DC2 is
$$O^{\prime }=U_1^{\mathrm{\dagger }}U_t^{\mathrm{\dagger }}U_2^{\mathrm{\dagger }}OU_2U_tU_1$$
(1)
Here $`U_\mu `$ for $`\mu =1,2`$ describe the downconversion in the undepleted pump approximation. The crystals and pumps are assumed to be identical so that
$$U_\mu =\mathrm{exp}[-i\chi (a_{s_\mu }a_{i_\mu }+a_{s_\mu }^{\mathrm{\dagger }}a_{i_\mu }^{\mathrm{\dagger }})]$$
(2)
where the $`a`$’s represent annihilation operators. In between the downconversions the idler from DC1 is put through a beam splitter, and becomes the idler for DC2. This is described by
$$U_t=\mathrm{exp}[(\mathrm{arcsin}t)(a_{i_1}^{\mathrm{\dagger }}a_{i_2}-a_{i_2}^{\mathrm{\dagger }}a_{i_1})],$$
(3)
where the beam splitter transmittance $`t`$ can vary between zero (where idler 2 is independent from idler 1) and unity (where idler 1 output is equal to idler 2 input).
Using Eq. (1) we easily obtain the following
$`a_{s_1}^{\prime }`$ $`=`$ $`a_{s_1}\mathrm{cosh}\chi -ia_{i_1}^{\mathrm{\dagger }}\mathrm{sinh}\chi `$ (4)
$`a_{s_2}^{\prime }`$ $`=`$ $`a_{s_2}\mathrm{cosh}\chi -ira_{i_2}^{\mathrm{\dagger }}\mathrm{sinh}\chi -it(a_{i_1}^{\mathrm{\dagger }}\mathrm{cosh}\chi +ia_{s_1}\mathrm{sinh}\chi )\mathrm{sinh}\chi `$ (6)
where $`r=\sqrt{1-t^2}`$. Since all of the initial fields are in the vacuum state, it is easy to obtain the expectation values
$`\langle a_{s_1}^{\prime \mathrm{\dagger }}a_{s_1}^{\prime }\rangle `$ $`=`$ $`\mathrm{sinh}^2\chi `$ (7)
$`\langle a_{s_2}^{\prime \mathrm{\dagger }}a_{s_2}^{\prime }\rangle `$ $`=`$ $`\mathrm{sinh}^2\chi (r^2+t^2\mathrm{cosh}^2\chi )`$ (8)
$`\langle a_{s_1}^{\prime \mathrm{\dagger }}a_{s_2}^{\prime }\rangle `$ $`=`$ $`\mathrm{sinh}^2\chi t\mathrm{cosh}\chi `$ (9)
Note that the two signal modes have equal intensity only in the limit $`\chi \ll 1`$.
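Eqs. (7)-(9) can be checked numerically in a truncated Fock space; in the sketch below the phase conventions of the unitaries are our assumption (chosen to reproduce Eqs. (4) and (6)), and the truncation and parameter values are arbitrary test choices:

```python
import numpy as np
from scipy.linalg import expm

N = 5                                        # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator
I = np.eye(N)

def on_mode(op, k):                          # embed op on mode k of (s1,i1,s2,i2)
    mats = [I, I, I, I]
    mats[k] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

As1, Ai1, As2, Ai2 = (on_mode(a, k) for k in range(4))
chi, t = 0.3, 0.5
U1 = expm(-1j * chi * (As1 @ Ai1 + As1.T @ Ai1.T))
U2 = expm(-1j * chi * (As2 @ Ai2 + As2.T @ Ai2.T))
Ut = expm(np.arcsin(t) * (Ai1.T @ Ai2 - Ai2.T @ Ai1))

vac = np.zeros(N ** 4); vac[0] = 1.0
psi = U2 @ (Ut @ (U1 @ vac))                 # <0|O'|0> = <psi|O|psi>

n1 = (psi.conj() @ As1.T @ As1 @ psi).real   # Eq. (7)
n2 = (psi.conj() @ As2.T @ As2 @ psi).real   # Eq. (8)
c12 = abs(psi.conj() @ As1.T @ As2 @ psi)    # Eq. (9), modulus
sh2, ch = np.sinh(chi) ** 2, np.cosh(chi)
print(n1, sh2)
print(n2, sh2 * ((1 - t ** 2) + t ** 2 * ch ** 2))
print(c12, sh2 * t * ch)
```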
The maximum obtainable visibility between two fields in an experiment is given by the modulus of the first order coherence function between those fields,
$$g^{(1)}(1,2)=\left|\langle a_1^{\mathrm{\dagger }}a_2\rangle \right|/\sqrt{\langle a_1^{\mathrm{\dagger }}a_1\rangle \langle a_2^{\mathrm{\dagger }}a_2\rangle }$$
(10)
In this case we find between the two final signal fields
$$g^{(1)}(1,2)=\frac{t\mathrm{cosh}\chi }{\sqrt{1+t^2\mathrm{sinh}^2\chi }}.$$
(11)
Noting that idler 1, before it enters the beam splitter, has the same statistics as signal 1, we can rewrite (11) in terms of the mean photon number $`\overline{n}_1=\mathrm{sinh}^2\chi `$ entering the beam splitter as
$$g^{(1)}(1,2)=t\sqrt{\frac{1+\overline{n}_1}{1+t^2\overline{n}_1}}.$$
(12)
In this form it is easy to consider the relevant limits. The single-photon regime, which is the regime of the experiment and theory in Ref. , occurs for $`\overline{n}_1\ll 1`$. That is, the probability of a downconversion at DC1 is small, and hence the probability of having downconversions at both crystals is negligible. Then the analysis of WZM applies and we expect the maximum visibility to be equal to $`t`$. This is exactly what Eq. (12) predicts.
The opposite regime is that where $`\overline{n}_1\gg 1`$. Here there are many photons on average in all of the downconverted beams. Thus, we could expect the classical argument to apply (although our analysis remains of course completely quantum mechanical). That is, the phase of idler 1 should lock the phase of idler 2 for any nonzero transmittance $`t`$. Again, this is reproduced by Eq. (12), which in the limit $`\overline{n}_1\rightarrow \mathrm{\infty }`$ is equal to unity for $`t>0`$ and zero for $`t=0`$.
For finite values of the first idler photon number output $`\overline{n_1}`$, the maximum visibility is a concave-down function of $`t`$, as shown in Fig. 2. It is evident that even photon numbers of order unity produce marked deviations from linearity. This would be interesting to observe experimentally.
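The crossover between the two regimes is easy to tabulate directly from Eq. (12); the sampled values of $`t`$ and $`\overline{n}_1`$ below are arbitrary:

```python
import numpy as np

def visibility(t, n1):
    """Eq. (12): g^(1) between the two signal beams."""
    return t * np.sqrt((1.0 + n1) / (1.0 + t ** 2 * n1))

ts = np.linspace(0.0, 1.0, 6)
for n1 in (0.01, 1.0, 100.0):
    print(n1, np.round(visibility(ts, n1), 3))
# n1 << 1: g ~ t (linear, the quantum signature); n1 >> 1: g -> 1 for any t > 0
```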
To conclude, we have shown that, regardless of the number of photons involved, the first-order coherence of two signal beams is unity when one idler perfectly seeds the second, and zero when the two are independent. The difference between the quantum (single-photon) and classical (many-photon) regimes is for intermediate values of $`t`$, the transmittance of the beam splitter which transmits the first idler output into the second crystal. A linear dependence of visibility on $`t`$, as seen convincingly in Ref. , is the true signature of induced coherence without induced emission.
## 1 Introduction
Scintillating crystal detectors have been widely used as electromagnetic calorimeters in high energy physics , as well as in medical and security imaging and in the oil-extraction industry. They offer many potential merits for low-energy (keV-MeV range), low-background experiments.
A CsI(Tl) scintillating crystal detector is being constructed to be placed near a reactor core to study low energy neutrino interactions . In subsequent sections, we discuss the motivations for this choice of detector technology, the physics topics to be addressed, the basic experimental design and the prototype performance parameters. The various background processes and the experimental means to identify and suppress them are considered. Various extensions based on this detector technique are summarized at the end.
## 2 Physics and Detector Motivations
High energy (GeV) neutrino beams from accelerators have been very productive in the investigations of electroweak, QCD and structure-function physics, and have blossomed into mature programs at CERN and Fermilab. However, the use of low energy (MeV) neutrinos as a probe to study particle and nuclear physics has not been well explored.
Nuclear power reactors are an abundant source of electron anti-neutrinos ($`\overline{\nu _\mathrm{e}}`$) in the MeV range. Previous experiments with reactor neutrinos primarily focused on the interaction
$$\overline{\nu _\mathrm{e}}+\mathrm{p}\rightarrow \mathrm{e}^++\mathrm{n}$$
to look for neutrino oscillations . The choice of this interaction channel is due to its relatively large cross-section, the very distinct experimental signature (a prompt e<sup>+</sup> followed by a delayed neutron capture), and the readily available detector choice of liquid scintillator providing the proton target. Using this interaction, the reactor neutrino spectrum has been measured to a precision of $`\sim `$2% in the 3 MeV to 7 MeV range by the Bugey experiment .
The only other processes measured at the MeV range for $`\overline{\nu _\mathrm{e}}`$ are $`\overline{\nu _\mathrm{e}}`$-electron and $`\overline{\nu _\mathrm{e}}`$-deuteron interactions, and their accuracies are at the 30-50% and 10-20% range, respectively. There are motivations to improve on these measurements, to investigate complementary detection techniques, and to study the other unexplored channels.
Gamma-ray spectroscopy has been a standard technique in the nuclear sciences (that is, in the investigations of physics at the MeV range), with the use of scintillating crystals or solid-state detectors. Gamma-lines of characteristic energies give unambiguous information on the presence and transitions of specific isotopes, allowing a unique interpretation of both the signal and background processes.
Several intrinsic properties of crystal scintillators make them attractive candidates for low-background experiments in neutrino and astro-particle physics. The experimental difficulty in building a high-quality gamma detector for MeV neutrino physics has been the large target mass required. However, in recent years, big electro-magnetic calorimeter systems (such as the 40 tons of CsI(Tl) crystals in the current B-factory experiments ) have been built for high energy physics experiments, using CsI(Tl) crystals with silicon PIN photo-diode readout . In addition, NaI(Tl) detectors in the 100 kg mass range have been used in Dark Matter WIMP searches , producing some of the most sensitive results.
The CsI-crystal production technology is by now well matured, and the cost has been reduced enormously due to the large demand. It has become realistic and affordable to build a CsI detector with a target mass in the 1-ton range for a reactor neutrino experiment. The detector is technically much simpler to build and to operate than, for instance, gas chambers and liquid scintillators. The detector mass can be readily scaled up to tens of tons if the first experiment yields interesting results or leads to other potential applications. Other scintillating crystal detectors can easily be customized for various potential applications.
The properties of CsI(Tl) crystals, together with those of a few common scintillators, are listed in Table 1. The CsI(Tl) crystal offers certain advantages over the other possibilities. It has relatively high light yield and high photon absorption (or short radiation length). It is mechanically stable and easy to machine, and is only weakly hygroscopic. There is no need for a hermetic container to seal the detector from the ambient humidity (as required, for instance, by NaI(Tl)). This minimizes radioactive background as well as energy loss in the passive elements which will degrade energy resolution. In particular, CsI(Tl) provides strong attenuation for $`\gamma `$’s of energy less than 500 keV. As a result, it is possible to realize a compact detector design with minimal passive materials within the fiducial volume, and with an efficient external shielding configuration.
## 3 Neutrino Interactions on CsI(Tl)
### 3.1 Neutrino-Electron Scattering
The cross section for the process
$$\overline{\nu _\mathrm{e}}+\mathrm{e}^{-}\rightarrow \overline{\nu _\mathrm{e}}+\mathrm{e}^{-}$$
gives information on the electro-weak parameters ($`\mathrm{g}_\mathrm{V}`$, $`\mathrm{g}_\mathrm{A}`$, and $`\mathrm{sin}^2\theta _\mathrm{W}`$), and is sensitive to small neutrino magnetic moments ($`\mu _\nu `$) and to the mean square charge radius ($`<\mathrm{r}^2>`$). The $`(\nu _\mathrm{e}\mathrm{e})`$ and $`(\overline{\nu _\mathrm{e}}\mathrm{e})`$ scatterings are two of the most realistic systems where the interference effects between Z \[neutral currents (NC)\] and W \[charged currents (CC)\] exchanges can be studied . Many of the existing and proposed solar neutrino detectors (Super-Kamiokande, Borexino, HELLAZ, HERON …) make use of the $`\nu _\mathrm{e}`$-e interactions as their detection mechanisms. The fact that $`\nu _\mathrm{e}`$-e scattering can proceed via both W and Z exchanges, while $`\nu _\mu `$-e and $`\nu _\tau `$-e are purely neutral-current processes, is the physics basis of resonant neutrino oscillation in matter (the “MSW” effect) .
In an experiment, what can be measured is the recoil energy of the electron (T). The differential cross section can be expressed as :
$`{\displaystyle \frac{d\sigma }{dT}}(\nu e)`$ $`=`$ $`{\displaystyle \frac{G_F^2m_e}{2\pi }}[(g_V\pm x\pm g_A)^2+(g_V\pm x\mp g_A)^2[1-{\displaystyle \frac{T}{E_\nu }}]^2+(g_A^2-(g_V\pm x)^2){\displaystyle \frac{m_eT}{E_\nu ^2}}]`$
$`+{\displaystyle \frac{\pi \alpha _{em}^2\mu _\nu ^2}{m_e^2}}[{\displaystyle \frac{1-T/E_\nu }{T}}]`$
where $`g_V=2\mathrm{sin}^2\theta _\mathrm{W}-\frac{1}{2}`$ and $`g_A=-\frac{1}{2}`$ in the Standard Model, while the upper (lower) signs refer to $`\nu _\mu e`$ and $`\nu _\tau e`$ ($`\overline{\nu _\mu }e`$ and $`\overline{\nu _\tau }e`$) scattering, where only NC are involved, and
$$x=\frac{2M_W^2}{3}<\mathrm{r}^2>\mathrm{sin}^2\theta _\mathrm{W}.$$
For $`\nu _ee`$ scattering, both NC and CC and their interference terms contribute, so that the cross sections can be evaluated by replacing $`g_V\rightarrow g_V+1`$ and $`g_A\rightarrow g_A+1`$. The $`\mu _\nu `$ term has a $`\frac{1}{\mathrm{T}}`$ dependence. Accordingly, experimental searches for the neutrino magnetic moment should focus on the reduction of the (usually background-limited) threshold for the recoil electron energy.
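For orientation, the differential cross section (without the charge-radius term, i.e. with $`x=0`$) can be evaluated with a short script; the numerical constants are standard values inserted by us, not numbers from this proposal:

```python
import numpy as np

G_F    = 1.16637e-11     # Fermi constant [MeV^-2]
m_e    = 0.511           # electron mass [MeV]
alpha  = 1.0 / 137.036   # fine-structure constant
hbarc2 = 3.894e-22       # (hbar*c)^2 [MeV^2 cm^2]
s2w    = 0.2312          # sin^2(theta_W)

def dsigma_dT(T, Enu, f=0.0, antineutrino=True):
    """nu_e(bar)-e differential cross section [cm^2/MeV]; f = mu_nu / mu_B."""
    gV = 0.5 + 2.0 * s2w                 # g_V + 1 with g_V = 2 s2w - 1/2
    gA = -0.5 if antineutrino else 0.5   # g_A + 1, sign-flipped for nu-bar
    sm = (G_F ** 2 * m_e / (2.0 * np.pi)) * (
        (gV + gA) ** 2
        + (gV - gA) ** 2 * (1.0 - T / Enu) ** 2
        + (gA ** 2 - gV ** 2) * m_e * T / Enu ** 2)
    mm = (np.pi * alpha ** 2 / m_e ** 2) * f ** 2 * (1.0 / T - 1.0 / Enu)
    return (sm + mm) * hbarc2

# Standard Model vs mu_nu = 1e-10 mu_B at T = 0.1 MeV, E_nu = 2 MeV:
print(dsigma_dT(0.1, 2.0), dsigma_dT(0.1, 2.0, f=1e-10))
```

The $`1/T`$ growth of the magnetic-moment term relative to the flat Standard Model term is what drives the push towards low thresholds.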
The $`\mathrm{g}_\mathrm{A}`$ vs. $`\mathrm{g}_\mathrm{V}`$ parameter space to which $`(\overline{\nu _\mathrm{e}}\mathrm{e})`$ scattering is sensitive is depicted in Figure 1. The complementarity with $`(\nu _\mu \mathrm{e},\overline{\nu _\mu }\mathrm{e},\nu _\mathrm{e}\mathrm{e})`$ can be readily seen. The expected recoil energy spectrum is displayed in Figure 2a, showing the Standard Model expectations and the case of an anomalous neutrino magnetic moment of $`10^{-10}\mu _\mathrm{B}`$. The present published limit is $`1.9\times 10^{-10}\mu _\mathrm{B}`$ . The number of events in the two cases as a function of the measurement threshold is depicted in Figure 2b. It can be seen that the rate is in the range of O(1) events per kg of CsI(Tl) per day \[$`\equiv `$ “pkd”\]<sup>3</sup><sup>3</sup>3For simplicity, we denote “events per kg of CsI(Tl) per day” by pkd in this article. at about 100 keV threshold for a reactor neutrino flux of $`10^{13}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, which poses a formidable experimental challenge in terms of background control.
Therefore, investigations of $`(\overline{\nu _\mathrm{e}}\mathrm{e})`$ cross-sections with reactor neutrinos allow one to study electro-weak physics at the MeV range, to probe charged- and neutral-current interference, and to look for an anomalous neutrino magnetic moment. The present experimental situation is discussed in Ref. and summarized in Table 2. In particular, a re-analysis of the Savannah River results , based on an improved reactor neutrino spectrum and the Standard Model $`\mathrm{sin}^2\theta _\mathrm{W}`$ value, suggested that the measured ($`\overline{\nu _\mathrm{e}}`$ e) cross-sections at 1.5-3.0 MeV and 3.0-4.5 MeV are 1.35$`\pm `$0.4 and 2.0$`\pm `$0.5 times larger than the expected values, respectively, and can be interpreted to be consistent with a $`\mu _\nu `$ in the range of (2-4)$`\times 10^{-10}\mu _\mathrm{B}`$. Various astrophysics considerations from the time duration of the supernova SN1987A burst , stellar cooling and Big Bang Nucleosynthesis provide more stringent bounds on $`\mu _\nu `$, down to the $`10^{-11}`$-$`10^{-13}\mu _\mathrm{B}`$ level, but these are model-dependent. An anomalous neutrino magnetic moment in the range $`\mu _\nu \sim 10^{-10}\mu _\mathrm{B}`$ has been considered as a solution to the Solar Neutrino Puzzle . There are motivations to improve the cross-section measurements and magnetic moment sensitivities further with laboratory experiments. Several other projects are underway .
A 500 kg CsI(Tl) crystal calorimeter (fiducial mass 200-300 kg) will have more target electrons than previous experiments and current projects , as shown in Table 2. The signature for $`(\overline{\nu _\mathrm{e}}\mathrm{e})`$ will be a single hit out of the several hundred channels. As discussed in Section 6, the crystal scintillator approach may provide low detection threshold, high photon attenuation, and powerful diagnostic capabilities for background understanding. All these features can potentially improve the sensitivities for both cross-section measurements and magnetic moments studies.
### 3.2 Neutrino Interactions on $`{}_{}{}^{133}\mathrm{Cs}`$ and $`{}_{}{}^{127}\mathrm{I}`$
Neutral current excitation (NCEX) on nuclei by neutrinos
$$\overline{\nu _\mathrm{e}}+(\mathrm{A},\mathrm{Z})\rightarrow \overline{\nu _\mathrm{e}}+(\mathrm{A},\mathrm{Z})^{*}$$
has been observed only in the case of <sup>12</sup>C with intermediate energy (O(10 MeV)) neutrinos. Excitations at lower energies using reactor neutrinos have been studied theoretically but not observed.
Crystal scintillators, having good $`\gamma `$ resolution and capture efficiency, are well suited to the study of these processes. The experimental signature will be an excess of events at the characteristic gamma-energies correlated with the Reactor-ON periods. Using CsI(Tl) as active target nuclei, the candidate $`\gamma `$-lines with M1 transitions include 81 and 160 keV for $`{}_{}{}^{133}\mathrm{Cs}`$, and 58, 202 and 418 keV for $`{}_{}{}^{127}\mathrm{I}`$. The use of NCEX as the detection mechanism for solar neutrinos and Dark Matter WIMPs has been discussed. Competitive limits have been set in WIMP searches based on the NCEX channel with $`{}_{}{}^{127}\mathrm{I}`$ and <sup>129</sup>Xe .
There are no theoretical predictions for these transitions in $`{}_{}{}^{133}\mathrm{Cs}`$ and $`{}_{}{}^{127}\mathrm{I}`$. One may expect a range similar to the 480 keV case for <sup>7</sup>Li , which would be O(0.01-0.1) pkd at a reactor neutrino flux of $`10^{13}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. In addition, there is theoretical work suggesting that the NCEX cross-sections on <sup>10</sup>B and <sup>11</sup>B are sensitive to the axial isoscalar component of NC interactions and the strange quark content of the nucleon. The study of neutrino-induced interactions on nuclei is one of the principal programs of the ORLaND proposal , based on intermediate energy neutrinos from a spallation neutron source.
For completeness, we mention that $`\overline{\nu _\mathrm{e}}`$ can also interact with $`{}_{}{}^{133}\mathrm{Cs}`$ and $`{}_{}{}^{127}\mathrm{I}`$ via the charged-current (CC) channels. There are two modes: (I) Inverse beta decay
$$\overline{\nu _\mathrm{e}}+(\mathrm{A},\mathrm{Z})\rightarrow \mathrm{e}^++(\mathrm{A},\mathrm{Z}-1)^{*}$$
has the distinct signatures of two back-to-back 511 keV $`\gamma `$s from the positron annihilation, plus the characteristic $`\gamma `$-lines from the daughter nuclei, and the positron itself; (II) Resonant orbital electron capture
$$\overline{\nu _\mathrm{e}}+\mathrm{e}^{-}+(\mathrm{A},\mathrm{Z})\rightarrow (\mathrm{A},\mathrm{Z}-1)^{*}$$
takes place only in a narrow range of neutrino energy equal to the Q-value of the transition. The signatures are the characteristic gamma lines of the excited daughter nuclei. Calculations do not exist, but the event rates are expected to be further suppressed since they involve the conversion of a proton to a neutron in neutron-rich nuclei. The $`\overline{\nu _\mathrm{e}}`$N-CC interactions have been considered as a means to detect the low-energy terrestrial neutrinos due to the radioactivity in the Earth's lithosphere .
## 4 Experimental Design
Since mid-1997, a Collaboration has been built up to pursue the experimental program discussed in Section 3 – the study of low energy neutrino interactions using reactor neutrinos as source and CsI(Tl) crystals as detector .
The experiment will be performed at the Nuclear Power Station II at Kuo-sheng, on the northern shore of Taiwan. The experimental location is about 28 m from one of the reactor cores, and 102 m from the other. Each of the cores is a boiling water reactor with 2.9 GW of thermal power output, giving a total flux of about $`5.6\times 10^{12}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ at the detector site. The site is at the lowest level of the reactor building, with about 25 mwe of overburden, as depicted schematically in Figure 3.
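The quoted flux can be checked with a back-of-the-envelope estimate; the yield of $`\sim `$6 anti-neutrinos per fission at $`\sim `$200 MeV per fission is a typical textbook assumption, not a number taken from this experiment:

```python
import numpy as np

P_th    = 2.9e9                 # thermal power per core [W]
E_fiss  = 200.0 * 1.602e-13     # energy release per fission [J]
nu_fiss = 6.0                   # anti-neutrinos per fission (typical)

S = P_th / E_fiss * nu_fiss     # source strength [nu/s] per core
flux = sum(S / (4.0 * np.pi * (100.0 * d) ** 2) for d in (28.0, 102.0))
print(flux)   # ~6e12 cm^-2 s^-1, in rough agreement with the quoted 5.6e12
```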
To fully exploit the advantageous features of the scintillating crystal approach in low-energy low-background experiments, the experimental configuration should enable the definition of a fiducial volume with a surrounding active 4$`\pi `$-veto, and minimal passive materials.
The schematic design of the experiment is shown in Figure 4. The detector will consist of about 480 kg of CsI(Tl) crystals<sup>4</sup><sup>4</sup>4Manufacturer: Unique Crystals, Beijing, arranged in a $`17\times 15`$ matrix. One CsI(Tl) crystal unit has a hexagonal cross-section with 2 cm sides and a length of 20 cm, giving a mass of 0.94 kg. Two such units are glued optically at one end to form a module. The light output is read out at both ends by 29 mm diameter photo-multipliers (PMTs) with low-activity glass windows<sup>5</sup><sup>5</sup>5Hamamatsu CR110 customized, which provide about 50% coverage of the end surfaces. The design of the PMT base is optimized for high gain (low threshold) and good linearity over a large dynamic range. The modular design enables the detector to be constructed in stages.
Individual crystals are wrapped with 4 layers of 70 $`\mu `$m thick teflon sheets to provide diffused reflection for optimal light collection. The sum of the two PMT signals ($`\mathrm{Q}_{\mathrm{tot}}=\mathrm{Q}_1+\mathrm{Q}_2`$) gives the energy of the event, while their difference provides a measurement of the longitudinal position. The outer layers, as well as the last few cm near the readout surfaces, will be used as active veto. The exact definitions of the fiducial and veto volumes can be fine-tuned based on the actual background, and can differ for different energy ranges.
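A sketch of how energy and longitudinal position can be derived from the two PMT signals, assuming a simple exponential light attenuation along the crystal (the attenuation length below is an illustrative value, not a measured property of these modules):

```python
import numpy as np

LAMB = 30.0    # effective light attenuation length [cm] -- illustrative
L    = 40.0    # module length (two 20 cm units) [cm]

def energy_and_position(Q1, Q2):
    """Energy from the PMT sum; position from the asymmetry R = (Q1-Q2)/(Q1+Q2).

    Model: Q1 ~ E*exp(-x/LAMB), Q2 ~ E*exp(-(L-x)/LAMB), so
    R = tanh((L - 2x)/(2*LAMB)) and x = L/2 - LAMB*arctanh(R)."""
    E = Q1 + Q2                      # up to a position-dependent calibration
    R = (Q1 - Q2) / (Q1 + Q2)
    x = L / 2.0 - LAMB * np.arctanh(R)
    return E, x
```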
The schematic of the electronics system is depicted in Figure 5. The PMT signals are fed to amplifiers and shapers, and are finally digitized by 8-bit Flash Analog-to-Digital Converter (FADC) modules running at a clock rate of 20 MHz. The shaping is optimized for the $`\mu `$s time-scale rise and fall times, such that noise spikes from single photo-electrons are smeared out and suppressed. Typical scintillation pulses due to $`\gamma `$ and $`\alpha `$ events as measured by the system are displayed in Figure 6. A precision pulse generator provides a means to calibrate and monitor the performance and stability of the electronics system.
The trigger conditions include: (a) having any one or more channels above a pre-set “high threshold” (typically 50-100 keV equivalent), and (b) not having a cosmic veto signal within a previous time-bin of typically 100 $`\mu `$s. Once these conditions are fulfilled, all channels with signals above a “low threshold” (typically 10-30 keV equivalent) will be read out. The logic control circuit enables complete acquisition of delayed signatures up to several ms, to record cascade events in decay series like $`{}_{}{}^{238}\mathrm{U}`$ and $`{}_{}{}^{232}\mathrm{Th}`$.
The FADC, the trigger units, and the logic control and calibration modules are read out and controlled by a VME-based data acquisition system, connected by a PCI-bus to a PC running the LINUX operating system. The on-line and off-line software architecture, together with their inter-connections, are shown schematically in Figure 7. The on-site data taking conditions can be remotely monitored from the home-base laboratories via telephone line. (Internet access to the Nuclear Power Plant is not allowed.) Data are saved on hard disks on-site, which are replaced at weekly intervals. They are duplicated and stored on magnetic tapes and CDs for subsequent off-line analysis. The detailed design and performance of the electronics, data acquisition and control systems will be the subject of a forthcoming article.
The compact CsI(Tl) detector enables an efficient shielding to be built. The schematic of the shielding configuration is depicted in Figure 8. Cosmic-rays and their related events will be vetoed by an outermost layer of plastic scintillators. The typical veto gate-time will be $``$100 $`\mu `$s to allow for delayed signatures due to neutron interactions. Ambient radioactivity is suppressed by 15 cm of lead and 5 cm of steel. The steel layer also provides the mechanical structure of the system. Neutrons, mostly cosmic-induced in the lead and steel, are slowed down and then absorbed by 25 cm of boron-loaded polyethylene. The inner 5 cm of oxygen-free-high-conductivity (OFHC) copper serves to suppress residual radioactivity from the shielding materials. The copper layers can be dismounted and replaced by more polyethylene, allowing flexibility to optimize the shielding conditions for different physics goals. The CsI(Tl) target will be placed inside an electrically-shielded and air-tight box made of copper sheet. The entire inner target space will be covered by a plastic bag flushed with dry nitrogen to prevent radioactive radon gas from diffusing into the target region.
To enable detector access and maintenance, the entire shielding assembly consists of three parts: (1) a fixed shielding house, (2) a movable trolley on which the target detector sits, and (3) a front door which can be moved on wheels. All the access pipes and cable trays are bent so that there is no direct line-of-sight between the inner target and the external background. Ports are provided to allow insertion of radiation sources for regular monitoring and calibration. These ports are blocked by copper plugs during normal data taking.
## 5 Performance of Prototype Modules
Extensive measurements on the crystal prototype modules have been performed. The response is depicted in Figure 9, showing the variation of collected light for $`\mathrm{Q}_1`$, $`\mathrm{Q}_2`$ and $`\mathrm{Q}_{\mathrm{tot}}`$ as a function of position within one crystal module. The charge unit is normalized to unity at the <sup>137</sup>Cs photo-peak (660 keV) for both Q<sub>1</sub> and Q<sub>2</sub> at their respective ends, while the error bars denote the FWHM width at that energy. The discontinuity at L=20 cm is due to the optical mis-match between the glue (n=1.5) and the CsI(Tl) crystal (n=1.8). It can be seen that $`\mathrm{Q}_{\mathrm{tot}}`$ depends on position at the 10-20% level. A FWHM energy resolution of 10% is achieved at 660 keV, and its variation with energy follows the $`\mathrm{E}^{-\frac{1}{2}}`$ relation. The detection threshold (where signals are measured at both PMTs) is $`<`$20 keV. A good linearity of the PMT response is achieved for energies from 20 keV to 20 MeV.
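The quoted scaling can be expressed as a one-line estimator, extrapolating the measured 10% FWHM at 660 keV with the $`\mathrm{E}^{-\frac{1}{2}}`$ relation; a sketch:

```python
def fwhm_resolution(E_keV, ref_res=0.10, ref_E=660.0):
    """FWHM energy resolution, extrapolating the 10% measured at the
    137Cs photo-peak with the E^(-1/2) scaling quoted in the text."""
    return ref_res * (ref_E / E_keV) ** 0.5

for E in (100, 660, 2000):
    print(f"{E:5d} keV : {100.0 * fwhm_resolution(E):4.1f} % FWHM")
```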
The longitudinal position can be obtained from the dimensionless ratio $`\mathrm{R}=(\mathrm{Q}_1-\mathrm{Q}_2)/(\mathrm{Q}_1+\mathrm{Q}_2)`$, whose variation at the <sup>137</sup>Cs photo-peak energy along the crystal length is displayed in Figure 10. The ratio of the RMS spread in R to the slope gives the longitudinal position resolution. The measured resolutions are 2 cm, 3.5 cm and 8 cm at 660 keV, 200 keV and 30 keV, respectively. The dependence of R on energy is negligible, at less than the 10<sup>-3</sup> level.
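In practice the position reconstruction amounts to inverting the measured R(L) calibration line; a sketch, where the slope and offset would be taken from a source scan such as Figure 10 (the numerical values below are illustrative, not the measured ones):

```python
def position_from_R(q1, q2, slope_per_cm, offset=0.0):
    """Longitudinal position from the charge asymmetry R = (Q1-Q2)/(Q1+Q2),
    assuming the linear R(L) dependence shown in Fig. 10."""
    R = (q1 - q2) / (q1 + q2)
    return (R - offset) / slope_per_cm

print(position_from_R(q1=0.60, q2=0.40, slope_per_cm=0.02))  # -> 10.0 cm
```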
In addition, CsI(Tl) provides powerful pulse shape discrimination (PSD) to differentiate $`\gamma `$/e events from those due to heavily ionizing particles like $`\alpha `$’s, which have a faster fall time, as shown in Figure 6. The typical $`\alpha `$/$`\gamma `$ separation in CsI(Tl) with the “Partial Charge vs. Total Charge” method is depicted in Figure 11, demonstrating an excellent separation of $`>`$99% above 500 keV. Unlike in liquid scintillators, $`\alpha `$’s are only slightly quenched in their light output in CsI(Tl). The quenching factor depends on the Tl concentration and on measurement parameters like the shaping time: for full integration of the signals, the suppression is typically 50% .
## 6 Background Considerations
### 6.1 Merits of Crystal Scintillator
The suppression, control and understanding of the background are very important in all low background experiments. The scintillating crystal detector approach offers several merits to these ends. The essential points are:
Large Photon Attenuation:
With its high-Z nuclei, CsI(Tl) provides very good attenuation to $`\gamma `$-rays, especially at the low energy range below 500 keV. For instance, the attenuation lengths for a 100 keV $`\gamma `$-ray are 0.12 cm and 6.7 cm, respectively, for CsI(Tl) and liquid scintillator. That is, 10 cm of CsI(Tl) has the same attenuating power as 5.6 m of liquid scintillator at this low energy. Consequently, the effects of external ambient $`\gamma `$ background, like those from the readout device, electronics components, construction materials and radon diffusion are negligible after several cm of active veto layer. Therefore, the background at low energy will mostly originate within the fiducial volume due to the internal components.
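The quoted equivalence follows from equating the exponential attenuation factors of the two media; a quick arithmetic check:

```python
import math

lam_csi, lam_ls = 0.12, 6.7  # 100 keV attenuation lengths, cm
t_csi = 10.0                 # CsI(Tl) thickness considered in the text, cm

t_ls = t_csi * lam_ls / lam_csi  # liquid-scintillator thickness with the
print(f"{t_ls / 100.0:.1f} m")   # same gamma survival probability: ~5.6 m
print(f"survival probability: {math.exp(-t_csi / lam_csi):.1e}")
```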
For CsI(Tl), which is non-hygroscopic and does not need a hermetic seal system to operate, “internal components” include only two materials: the crystal itself and the teflon wrapping sheets, typically at a mass ratio of 1000:1. Teflon is known to have very high radio-purity (typically better than the ppb level for the <sup>238</sup>U and <sup>232</sup>Th series) . As a result, the experimental challenge becomes much more focussed: the control and understanding of the internal radio-purity and of the long-lived cosmic-induced background of the CsI(Tl) crystal itself.
Characteristic Detector Response:
The detection threshold is lower and the energy resolution of CsI(Tl) is better than those of typical liquid and plastic scintillators with the same modular mass. Furthermore, in an O(100 kg) CsI(Tl) detector system, the keV-MeV photons originating within the crystal will be fully captured. These features, together with the PSD capabilities for $`\alpha `$-particles and the granularity of the detector design, provide important diagnostic tools for understanding the physical processes in the system. Once the background channels are identified, understood and measured, subtraction of their associated effects can be performed.
When the dominant background contributions are from internal contaminations, two complementary strategies can be deployed for background control : (I) consistent background subtraction, using the measured spurious $`\alpha `$ or $`\gamma `$ peaks which indicate residual radioactivity inside the crystal, and (II) the conventional Reactor ON$``$OFF subtraction.
The background count rate will be stable and not affected by external parameters such as the ambient radon concentration, details of the surrounding equipment configuration and cosmic veto inefficiencies. Consequently, the systematic uncertainties can be reduced, and both background suppression procedures will be more robust. In addition, the large target mass helps to reduce statistical uncertainties. The spectral shape can also be analyzed to provide additional handles. For instance, the comparison of the signal rates between the “$`<1\mathrm{MeV}`$” and “$`>1\mathrm{MeV}`$” samples can enhance the sensitivities of the magnetic moment studies.
### 6.2 Background Channels
The merits discussed above allow a compact detector design and hence, efficient and cost-effective shielding configurations. While care and the standard procedures are adopted for suppressing the ambient radioactivity background (radon purging, choice of clean construction materials, photon-counting measurements with germanium detectors, use of PMT with low-activity glass), the key background issue remains that of internal background from the CsI(Tl) itself. The different contributions and their experimental handles are discussed below.
#### 6.2.1 Internal Intrinsic Radioactivity
As noted above, $`\alpha `$’s are only slightly quenched in their light output in CsI(Tl), unlike in liquid scintillators. Crystals contaminated by uranium or thorium would therefore give rise to multiple peaks above 3 MeV, as reported in Ref. . The absence of multiple peak structures in our prototype crystals suggests a <sup>238</sup>U and <sup>232</sup>Th concentration of less than the $`10^{-12}`$ g/g level \[$``$1 pkd\], assuming the decay chains are in equilibrium. In addition, the direct counting method with a high-purity germanium detector shows <sup>40</sup>K and <sup>137</sup>Cs contaminations of less than the $`10^{-10}`$ g/g \[$``$1700 pkd\] and $`4\times 10^{-18}`$ g/g \[$``$1200 pkd\] levels, respectively. The mass spectrometry method sets a limit on <sup>87</sup>Rb of less than $`8\times 10^{-9}`$ g/g \[$``$210 pkd\].
Internal radioactivity background typically consists of $`\alpha `$, $`\beta `$ and $`\gamma `$ emissions which have characteristic energies and temporal correlations. Residual background below the measured limits can be identified and subtracted based on the on-site data. By careful studies of the timing and energy correlations among the distinct $`\alpha `$ signatures, precise information can be obtained on the radioactive contaminants in the cases where the <sup>238</sup>U and <sup>232</sup>Th decay series are not in equilibrium, so that the associated $`\beta `$/$`\gamma `$ background can be accounted for accurately. For instance, Dark Matter experiments with NaI(Tl) reported trace contaminations (in the range of $`10^{-19}`$ to $`10^{-18}`$ g/g \[25-250 pkd\]) of <sup>210</sup>Pb in the detector, based on the $`\gamma `$-peak at 46.5 keV and the equivalent peak for $`\alpha `$’s at 5.4 MeV . Accordingly, $`\beta `$-decays from <sup>210</sup>Bi can be subtracted from the signal. Similarly, the residual $`\beta `$-decays of <sup>40</sup>K and <sup>137</sup>Cs can be accounted for based on their respective characteristic $`\gamma `$-lines measurable from the data.
#### 6.2.2 Cosmic-Induced Radioactivity
The experiment is located at a site with 25 mwe overburden, which is sufficient to effectively attenuate the primary hadronic component of cosmic rays. The “prompt” cosmic events can be easily identified, since: (a) the plastic scintillator veto can tag them at better than 95% efficiency, (b) bremsstrahlung photons from cosmic-rays on the shielding cannot reach the inner fiducial volume of the target, as explained in Section 6.1, (c) cosmic-induced neutrons, mostly from the lead, are attenuated and absorbed efficiently by the boron-loaded polyethylene layers, and (d) background originating from cosmic-rays traversing the CsI(Tl) target leads to unmistakably large pulses ($``$20 MeV for one crystal).
The more problematic background are due to the long-lived (longer than ms) unstable isotopes created by the various nuclear interaction processes:
1. Neutron Capture
Ambient neutrons or those produced at the lead shielding have little probability of being captured by the CsI crystal target, being attenuated efficiently by the boron-loaded polyethylene. Cosmic-induced neutrons (in the MeV energy range) originating from the target itself have a high probability of leaving the target. Residual neutrons can be captured by the target nuclei $`{}_{}{}^{133}\mathrm{Cs}`$ and $`{}_{}{}^{127}\mathrm{I}`$, predominantly via (n,$`\gamma `$) reactions
$`\mathrm{n}+{}_{}{}^{133}\mathrm{Cs}\rightarrow {}_{}{}^{134}\mathrm{Cs}(\sigma =30\mathrm{b};\mathrm{Q}=6.89\mathrm{MeV});`$
$`\mathrm{n}+{}_{}{}^{127}\mathrm{I}\rightarrow {}_{}{}^{128}\mathrm{I}(\sigma =6\mathrm{b};\mathrm{Q}=6.83\mathrm{MeV})`$
with relatively large cross-sections.
The daughter isotope <sup>134</sup>Cs ($`\tau _{\frac{1}{2}}=2.05\mathrm{yr};\mathrm{Q}=2.06\mathrm{MeV}`$) decays with 70% branching ratio by beta-decay (end point 658 keV), plus the emission of two $`\gamma `$’s (605 keV and 796 keV), and therefore will not give rise to a single hit in the low-energy region. The isotope <sup>128</sup>I ($`\tau _{\frac{1}{2}}=25\mathrm{min};\mathrm{Q}=2.14\mathrm{MeV}`$), on the other hand, decays with a 79% branching ratio via a lone beta-decay, which will mimic the single-hit signature. The neutron production rate on-site at the CsI(Tl) target is estimated to be about 50 pkd. Folding in the capture efficiency of the target ($`25`$%) and of $`{}_{}{}^{127}\mathrm{I}`$ in particular ($`14`$%), the <sup>128</sup>I production rate is about 1.8 pkd (see the arithmetic sketch after this list).
The neutron capture rate in the CsI target can be measured by tagging $`\gamma `$-bursts of energy 6.8 MeV. Knowing the capture rate, the contribution to the low-energy background due to <sup>128</sup>I can be evaluated and subtracted. Furthermore, the three-fold coincidence from the <sup>134</sup>Cs decays can be measured, providing additional information on the neutron capture rate.
2. Muon Capture
Cosmic-muons can be stopped by the target nuclei and subsequently captured via
$`\mu ^{}+(\mathrm{A},\mathrm{Z})\rightarrow (\mathrm{A}-\mathrm{Y},\mathrm{Z}-1)+\gamma \text{'s}+\mathrm{Y}\text{ neutrons},`$
where Y can be 0, 1, 2, …, with $`<\mathrm{Y}>\sim 1.2`$. The daughter isotopes for Y=1,2,3 are all stable, while the Y=0 case (less than 5% probability) will give rise to <sup>133</sup>Xe and <sup>127</sup>Te, both of which can lead to low-energy single-site background events: <sup>133</sup>Xe ($`\tau _{\frac{1}{2}}=5.3\mathrm{days};\mathrm{Q}=428\mathrm{keV}`$) decays by beta emission (end-point 347 keV) plus a $`\gamma `$-ray at 81 keV, while <sup>127</sup>Te ($`\tau _{\frac{1}{2}}=9.4\mathrm{hr};\mathrm{Q}=690\mathrm{keV}`$) decays with a lone beta. The estimated muon capture rate on-site at the CsI target is $``$30 pkd, so the background contributions from the Y=0 channels are less than 1.5 pkd.
3. Muon-Induced Nuclear Dissociation
Cosmic-muons can disintegrate the target nuclei via ($`\gamma `$,n) interactions or by spallation , at estimated rates of $``$10 pkd and $``$1 pkd, respectively. Among the various decay configurations of the final-state nuclei of the ($`\gamma `$,n) processes, <sup>132</sup>Cs and <sup>126</sup>I, only about 20% (or $``$2 pkd) of the cases give rise to single-hit background. The other decays give characteristic and identifiable signatures. For instance, <sup>132</sup>Cs decays by electron capture, resulting in the emission of a $`\gamma `$-ray at 668 keV plus X-rays from xenon. These can easily be tagged and used as a reference to subtract the single-hit background.
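The rate folding referred to in item 1 above is simple multiplicative arithmetic; a minimal check using the numbers quoted in the text:

```python
n_rate    = 50.0  # estimated on-site neutron production rate, pkd
f_capture = 0.25  # fraction of those neutrons captured in the target
f_iodine  = 0.14  # fraction of the captures occurring on 127I

i128_rate = n_rate * f_capture * f_iodine
print(f"128I production rate ~ {i128_rate:.1f} pkd")  # ~1.8 pkd
```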
#### 6.2.3 Reactor-ON Correlated Background
Previous experiments with reactor neutrinos as well as on-site measurements indicate that the $`\gamma `$ and neutron background associated with “reactor-ON” is essentially zero outside the reactor core and within reasonable shielding. The target region is proton-free and therefore neutrino-induced background from $`\overline{\nu _\mathrm{e}}`$-p is negligible. These interactions, however, will occur in the polyethylene shielding. The prompt e<sup>+</sup> gives rise at most to 511 keV $`\gamma `$-rays, while the neutron (energy range 1-10 keV) is mostly captured by the <sup>10</sup>B in the polyethylene, producing only 480 keV $`\gamma `$-rays. Both of these low energy $`\gamma `$-backgrounds will be efficiently attenuated by the copper shielding and the active veto.
### 6.3 Sensitivity Goals
From the various background considerations discussed in the previous sections, it can be seen that while background control is non-trivial, as in all other low-energy neutrino experiments, the crystal scintillator approach offers more experimental handles to suppress and identify the background. The efficient $`\gamma `$-peak detection, the fine granularity and the PSD capabilities of the CsI(Tl) detector provide enhanced analyzing and diagnostic power for the background understanding. The dominant contribution to the sensitivities is the internal background in the CsI(Tl) target. The experimental challenge is focussed, and therefore more elaborate procedures can be deployed to study and enhance the radio-purity of this one material as the experiment evolves.
The present studies place limits on the internal radio-purity at less than the 1000 pkd level. Residual contaminations, if they exist, can be further studied and measured by various methods such as photon counting with germanium detectors, neutron activation analysis and mass spectrometry, as well as by the spectroscopic and time-correlation input from on-site data taking. The effects due to cosmic-induced long-lived isotopes are typically in the range of a few pkd. Background from both channels can be reduced by consistent background subtraction once the sources are identified and measured, and a suppression factor of 10<sup>2</sup> is achievable. Such background subtraction strategies have been successfully used in accelerator neutrino experiments. As an illustration, the CHARM-II experiment measured about 2000 neutrino-electron scattering events from a sample of candidate events 20 times larger . Events due to the various background processes were identified and subtracted, such that a few % uncertainty in the signal rate was achieved.
It can be seen from Figure 2b that a Standard Model rate of $``$1 pkd can be expected for a detection threshold of 100 keV. After performing the consistent background suppression, a residual Background-to-Signal ratio of less than 10 before Reactor ON$``$OFF subtraction is realistic. Therefore, based on Reactor ON/OFF periods of 100 and 50 days, respectively, a fiducial mass of 300 kg of CsI(Tl) target, and a detector systematic uncertainty of 1% in performing the Reactor ON$``$OFF subtraction, sensitivity goals of $`3\times 10^{-11}\mu _\mathrm{B}`$ for the magnetic moment search and a 5-10% uncertainty in the cross-section measurements can be projected. A comparison with previous and on-going experiments is summarized in Table 2.
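A back-of-the-envelope counting estimate shows how these inputs combine; the sketch below assumes Poisson statistics and takes the 1% systematic as acting on the subtracted background integral, which is our reading of the text rather than a prescription given in it:

```python
import math

S, B_over_S = 1.0, 10.0                  # signal rate (pkd) and residual B/S
mass, t_on, t_off = 300.0, 100.0, 50.0   # fiducial kg, days ON, days OFF
syst = 0.01                              # systematic of the ON-OFF subtraction

B = B_over_S * S
N_on, N_off = (S + B) * mass * t_on, B * mass * t_off
signal = N_on - (t_on / t_off) * N_off
stat = math.sqrt(N_on + (t_on / t_off) ** 2 * N_off)
total = math.sqrt(stat**2 + (syst * B * mass * t_on) ** 2)
print(f"stat {stat/signal:.1%}, total {total/signal:.1%}")  # ~3%, ~10%
```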
Under similar assumptions, an event rate of $`>`$0.005 pkd can be observed for the $`\overline{\nu _\mathrm{e}}`$-NCEX channel via the various candidate lines in the 50-500 keV range, corresponding to a cross-section sensitivity of $`2\times 10^{-45}\mathrm{cm}^2`$.
## 7 Status and Prospects
By the end of 1999, the design and prototype studies of the experiment had been completed. Construction is intensely underway. A complete 100-kg system, with full electronics and shielding, is expected to be installed on-site at the Reactor Plant by summer 2000. Data taking will commence while the second 100-kg system is added in a later phase. Future upgrades and modifications of the experiment will depend on the first results, with the goal of eventually achieving a 500 kg system.
The detector design adopted in this experiment can be applied to other low-energy low-background experiments based on scintillating crystal detectors , such as Dark Matter WIMP searches , sub-MeV Solar Neutrino detection (indium-loaded , LiI(Eu) or GSO crystals have been proposed), and further studies of $`\overline{\nu _\mathrm{e}}`$-NCEX on other isotopes like <sup>7</sup>Li, <sup>10</sup>B and <sup>11</sup>B. Experience and results from the reactor neutrino experiment with CsI(Tl) crystals reported in this work will provide valuable input to these projects.
Much flexibility is available for detector optimization based on this generic and easily scalable design. Different modules can be made of different crystals. More and longer crystals can be glued to form one module. Different crystals can be glued together, in which case the event location among the various crystals can be deduced from the different pulse shapes. A passive target can be inserted to replace a crystal module. New wrapping materials can be used instead of teflon: there is an interesting new development with sol-gel coatings, which can be as thin as a few microns , thereby further reducing the passive material within the fiducial volume.
The authors are grateful to the technical staff of their institutes for the invaluable support, and to the CYGNUS Collaboration for the loan of the veto plastic scintillators. This work was supported by contracts NSC 87-2112-M-001-034 and NSC 88-2112-M-001-007 from the National Science Council, Taiwan, as well as NSF 01-5-23336 and NSF 01-5-23374 from the National Science Foundation, U.S.A. |
# Islands, craters, and a moving surface step on a hexagonally reconstructed (100) noble metal surface
## I Introduction
The heavy noble metal $`(100)`$ surfaces possess the so-called hex-reconstruction, where the top monolayer spontaneously converts from square to (approximately) hexagonal (in fact, triangular) order, with a lateral density increase of 25-30% . The second atomic layer, immediately below the first, remains instead square, with a bulk-like lateral density, and only a minor local perturbation at the domain walls, or solitons, which the hex layer forms by (incommensurate) epitaxy onto it.
Let us imagine the flat, hex-reconstructed $`(100)`$ surface and consider what should happen if we were to ideally deposit one further monolayer on top of it. The new top monolayer should be itself hex-reconstructed, because that is the lowest energy configuration for this surface. However the former top layer, now covered and turned into a second layer, must de-construct, from the hex state back to a square lattice. Conversely, we may imagine removing the top monolayer. The former, unreconstructed second layer must now acquire some extra atoms, in order to become hex-reconstructed, again because that is the lowest energy state.
The questions now are: how exactly should all this happen? Where do the excess atoms, expelled from the covered layer, go? Conversely, where do the extra atoms needed for top layer reconstruction come from? And what other consequences does this peculiar situation have? A related problem which provided some inspiration for this work is that of the surprising nonlinearity observed, mainly by King et al. in surface adsorption of molecules (CO, O$`_2,`$ D<sub>2</sub>) versus coverage, on a hex reconstructed Pt $`(100)`$ substrate.
In this paper we describe work which lays the ground for addressing some of these questions.
We carried out Molecular Dynamics (MD) simulations addressing specifically the hex-reconstructed Au $`(100)`$. The temperature dependence of the top layer density allows us to study phenomena associated with density changes by simply changing the sample temperature in a particle-conserving system: upon increasing temperature, the lateral density of a flat reconstructed Au $`(100)`$ surface tends to increase. In our simulations we find an increase from the $`T=0`$ lateral density of 1.24, relative to the bulk, to 1.35 at $`T=1100K`$. This behavior is in close agreement with experiment for both Au and Pt . Thus, heating is equivalent to removing atoms, cooling to adding atoms.
The work done so far includes the following:
* interplay of step and hex reconstruction, showing their important mutual influence;
* sudden formation of a small adsorbed island (spontaneously expelled by cooling) with accompanying deconstruction of the covered portion;
* sudden formation of a small crater (spontaneously formed by heating) with accompanying reconstruction of the uncovered substrate portion;
* sudden step retraction (obtained by heating) with reconstruction of the uncovered substrate portion, via “incorporation” of step edge atoms.
In the following, we shall briefly summarize some of our results, leaving a more proper and detailed account for a separate publication.
## II Method
The hexagonal reconstruction of Au, Pt and Ir(100) consists of a spontaneously stabilized 2D close-packed monolayer on top of the otherwise square (100) crystal. This phenomenon is reasonably well understood, but quite hard to handle at the electronic level . We found quite some time ago that it is possible to reproduce it with quantitative accuracy for Au(100), using classical potentials of the many-body type, the glue potentials, specially and carefully optimized for Au. Within that approximation, it is possible to carry out extensive simulations of some of the situations imagined above, and obtain from them a microscopic insight into those otherwise puzzling questions.
With the glue potential for gold, we carried out classical MD simulations. Newton’s equations were integrated numerically, allowing for large length-scales (at least 50 Å lateral size) and long simulation times (at least 1 nsec), working at temperatures high enough to attain sufficient atom mobility.
The geometry chosen for simulating our surfaces is an N-layer (001) slab, (N=12-16) with periodic boundary conditions (PBC) along the (100) and (010) directions. The top (001) surface was free, whereas the bottom one consisted of 3 frozen bulk-like layers. The typical total atom number ranged from 20000 to 30000 <sup>*</sup><sup>*</sup>*The free $`(001)`$ surface of Au is generally reconstructed, with a periodicity which depends on temperature, and is generally incommensurate . Our cell can only accommodate a commensurate periodicity (at least in the absence of steps) and we choose that to be $`(5\times 1)`$, or $`(5\times 25)`$, rather close to the actual one, $`(5\times 34)`$. We do not expect these small deviations to be very important. An additional aspect is that of rotations of the reconstructed overlayer. The experimental rotational angle is given as $`0.84^{}`$ , jumping to zero at $`1000K`$. Rotations are basically incompatible with periodic boundary conditions, and have been neglected..
In order to simulate a surface with a single step, we generated new suitable PBCs which transform an $`A`$ layer into a $`B`$ layer (within an ABAB.. (100) stacking sequence) when crossing the slab boundary in the direction orthogonal to the step. This construction allows the study of an isolated step which interacts only with its repeated image, and the terrace size remains constant during the system evolution.
Temperature was controlled, and the system was carefully and gradually heated through velocity rescaling. Mobility of surface atoms became non-negligible for $`T>1000K`$ (well below melting, $`T_m=1336K`$ for Au).
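Schematically, velocity-rescaling temperature control multiplies all velocities by a common factor so that the kinetic temperature matches the target; a minimal sketch (units and constants are illustrative, not those of the actual simulation code):

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def rescale_velocities(v, m, T_target):
    """Velocity-rescaling thermostat: scale all velocities so that the
    instantaneous kinetic temperature matches T_target.
    v: (N,3) velocities, m: (N,) masses, in consistent units."""
    ekin = 0.5 * np.sum(m[:, None] * v**2)
    T_now = 2.0 * ekin / (3.0 * len(m) * KB)
    return v * np.sqrt(T_target / T_now)
```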
The simulation of atom addition and removal represents a difficult task in the canonical ensemble. Working fully canonically, i.e., conserving particles, we were able to obtain a similar outcome by exploiting a peculiar feature of the $`(100)`$ hex reconstructed surface, namely the fact that its lateral density increases with temperature, notably by about 5% from 700 to 1000 K.
When a step is present, as is the case here, it will retract upon heating to accommodate for this top layer density increase. The net movement of the step in fact provides in this way for us a very natural method to gauge the optimal spontaneous change of surface lateral density. Moreover, we can observe how the lateral density varies locally, depending on the planar coordinate relative to the step position.
If we consider now a flat, step-free surface, and we heat it up suddenly, the associated increase of optimal lateral hex layer density will induce a strong tensile surface stress, which cannot be relieved in the absence of a defect. If that stress overcomes a certain critical limit, we can expect the sudden formation of one or more craters, leading to a state quite similar to that which we could have obtained by removing atoms at fixed temperature. Conversely, sudden cooling should lead to compressive stress, and eventually islands of excess atoms will pop up to relieve that stress.
This method is completely ad hoc for our situation, and it can only work in practice if the hex layer is relatively free to slide parallel to itself. Luckily, we found that this is the case for gold in the temperature range 800-1000K used here.
An alternative, and certainly more standard way to add or remove atoms in order to study island or crater formation would be Grand Canonical Monte Carlo (GCMC). Although we did succeed in implementing it for certain Au surfaces, we eventually found the MD technique described above more useful for the present purposes. GCMC is in the first place very difficult to equilibrate, and moreover it does not provide as much desirable information on the dynamics, as MD does.
Conversely, GCMC can be of great help in all cases where a strong density change (like in the square$``$hex transformation) must be handled. We will demonstrate how that works for Au $`(100)`$ in a forthcoming paper .
## III A step on the hex Au $`(100)`$ surface
We simulated a $`(100,1,1)`$ vicinal surface of Au $`(100)`$, with a single step and a wide terrace (maximum terrace size was $`50\times 50`$ nearest neighbor distances). Fig. 1 (upper part) shows the side view of the surface with a step.
The atoms of the second layer are marked in white; we note that the part of the second layer covered by the terrace is unreconstructed (A) whereas the (B) part is reconstructed. If we follow the lateral density of the second layer from A to B, we will cross a transition zone in correspondence with the step. The width of this zone marks the surface correlation length as probed by the step, and can be extracted from MD simulations, and the result for the temperature of $`T=900K`$ are shown in Fig. 1 (lower part). The correlation length is roughly $`5`$ Å at $`900K`$ and increases to $`10`$ Å at $`1250K`$ (not shown). This result gives a measure of the influence of the step on the lateral coordination in its neighborhood, and reveals an interesting interplay between step and reconstruction.
## IV Island formation
The reconstructed $`(100)`$ surface of noble metals undergoes an order-disorder transition at about $`T=0.8T_m`$. Low temperature deconstruction can, however, be induced by adsorption of molecular species. The Cambridge group has carried out thorough studies of the adsorption of light molecules such as CO, D<sub>2</sub> and O<sub>2</sub> onto a fully reconstructed Pt $`(100)`$ surface. They found that molecular islands form, but that the growth rate of the islands increases extremely slowly at low concentrations, roughly like the fourth power of the molecular coverage (we shall call this King’s law). The explanation offered for this phenomenon is that while the hex-reconstructed surface is unreactive and will not bind the molecules, the opposite is true for the deconstructed, square $`(100)`$ surface. The latter however must nucleate (under the island), and this requires a finite island size, of no less than 4 ad-molecules.
It should be noted that King’s exponent of about $`4`$ implies that about $`8`$ Pt atoms must switch from hex to square for the adsorption process to grow. Hence King’s law is most likely telling us a property of the clean surface. This is confirmed by the circumstance that the exponent is not very dependent on the adsorbed species. The next observation is that very much the same deconstruction must take place with homoepitaxy. Hence we expect that upon deposition of Pt on Pt$`(100)`$, or on Au$`(100)`$, particularly at high temperatures when equilibrium can be established, there should be a minimum critical island size, of order 8 or so atoms, related to deconstruction of the substrate.
We mimicked homogeneous atom addition/removal through the already mentioned temperature jump technique. An alternative method was to prepare the system with a certain surface excess density, and to wait for the excess atoms to form an island. In both cases, the island/crater growth was mainly determined by the density difference between the square and hexagonal phases and not by the initial conditions of the simulation; the dynamics can therefore be trusted as true island/crater growth dynamics, whatever the method used to set the lateral density excess/deficit.
In this section we focus on the island formation case, adding atoms at a temperature of 1200K in order to have sufficient mobility of the surface atoms.
An initial excess density of 0.06 $`\rho _b`$ ($`\rho _b`$ being the lateral density of a bulk $`(100)`$ layer) at the surface causes the appearance of small fluctuating islands. However these islands are readsorbed quickly by the substrate so long as their size is smaller than about 10 atoms. If the size is greater than 10-15 atoms, however, the island is not readsorbed and begins to grow. Figure 2(a) shows the time evolution of the maximum island size on the surface; there is clearly a critical size above which the island size grows.
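The “maximum island size” monitored in Figure 2(a) can be extracted from the adatom occupancy with a simple connected-component count; a minimal sketch (the site grid and the 4-neighbour connectivity are our choices):

```python
import numpy as np
from scipy import ndimage

def max_island_size(occupied):
    """Size (in atoms) of the largest connected adatom island.
    occupied: 2D boolean map of occupied adatom sites."""
    labels, n = ndimage.label(occupied)  # 4-connectivity by default
    return int(np.bincount(labels.ravel())[1:].max()) if n else 0

demo = np.zeros((8, 8), dtype=bool)
demo[2:4, 2:5] = True  # a 6-atom island
demo[6, 6] = True      # an isolated adatom
print(max_island_size(demo))  # -> 6
```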
Our explanation for this critical size is the following: as long as the island is small, there is no deconstruction under it. The border of the island plays the same role as the single step described in the previous section, and no deconstruction occurs until the diameter of the island exceeds the surface correlation length as felt by the step. Only when the size of the island exceeds this value can the substrate deconstruct and the growth continue, now fueled by the excess atoms ejected by the lower layer. These atoms correspond to the density difference between hexagonal and square order under the island, and they make the island growth very fast. The deconstruction is almost complete when the island has a size of about 25-30 atoms, as shown in a snapshot of the simulation (t=175 ps) in Figure 3.
## V Crater formation
A symmetric situation with respect to the last section is the formation of craters on a flat surface. We used temperature as the driving force for the density change at the surface, increasing the temperature from $`T=800K`$ to $`T=950K`$. At this temperature, the surface atom mobility is sufficiently high. The main results concerning craters are the following:
* The formation of craters requires a slightly smaller nucleus, with a critical size $`N_H`$ of about 8-10 atoms. Figure 2(b) shows this behaviour: the size (in atoms) of a crater is plotted versus the simulation time. At a size of about 8, a jump occurs and the crater growth subsequently continues linearly, up to saturation. This jump is associated with the reconstruction of the crater bottom.
* The mechanism for further growth of craters with $`N>N_H`$ is the following: atoms in the hole, initially arranged in a square lattice, are undercoordinated; atoms at the boundary of the crater are “eaten up”, increasing the size of the hole. The growth is less dramatic than that of the island.
Summarizing, there appears to be a connection between the critical nucleus for growth of craters and of islands and the reconstruction correlation length as probed by a step on a surface. We can infer from the two situations we have examined (the crater and the island) that reconstruction and deconstruction play a crucial role in determining their onset. The size of the critical nucleus predicted for Au $`(100)`$ is 8-10 atoms, in remarkably close agreement with the 8-atom size which can be extracted from King’s exponent of 4 for molecular adsorption on Pt $`(100)`$.
## VI Discussion
We have found that reconstruction/deconstruction introduces a natural critical size for the nucleation of islands and craters of Au on Au (100). This size does not show the normal dependence upon supersaturation expected of ordinary nucleation processes, and appears more as an intrinsic characteristic of this surface. The reason can be as follows. The nucleation free energy barrier as a function of increasing size has a nonstandard shape, with a large, sudden drop at around 15 atoms, when substrate reconstruction/deconstruction can occur. That drop has the effect of pinning the critical size, making it independent of supersaturation and, possibly, also of the adsorbed species. Figure 4 shows a very schematic representation of this point in the crater case. For small craters, the bottom of the crater is not reconstructed. At a radius of about 6 Å, deconstruction can occur and a jump in the free energy is observed; upon changing the supersaturation, the jump position (which depends only on the interplay between step and reconstruction) does not change.
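Figure 4 can be reproduced qualitatively by adding a fixed drop at the deconstruction radius to the classical nucleation form; all parameter values in the sketch below are purely illustrative:

```python
import numpy as np

def G_crater(r, dmu=0.05, gamma=0.10, r_dec=6.0, dG_dec=1.0):
    """Schematic crater free energy (arbitrary units): a bulk gain
    -dmu*pi*r^2 plus an edge cost 2*pi*gamma*r, with a fixed drop dG_dec
    once the crater bottom can reconstruct at r >= r_dec."""
    G = -dmu * np.pi * r**2 + 2.0 * np.pi * gamma * r
    return G - dG_dec * (r >= r_dec)

r = np.linspace(0.0, 10.0, 101)
print(f"barrier height: {G_crater(r).max():.2f}")  # for this parameter choice
```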
## Acknowledgments
We thank D. A. King and Bo Persson for fruitful and constructive discussions. Work of D. P. at SISSA is directly supported by MURST.
## Figure captions
* Figure 1. Upper part: side view of a surface with a single step. The second layer atoms are drawn in white. (A) denotes the part of the second layer which lies under the terrace and is unreconstructed, (B) denotes the uncovered reconstructed terrace. Lower part: profile of the lateral density in the “second layer” for $`T=900K`$. The layer has a square structure to the left, where it is covered by a terrace, and a dense hexagonal structure to the right. The interface between the two zones is smoothed by the presence of a finite correlation length of the step. At $`T=1200K`$ (not shown) the profile is smoother, and the correlation length larger.
* Figure 2. (a): the maximum island size for an excess density of about $`0.06`$. There is a jump in the island size and a growth up to a saturation value of about 38 atoms, corresponding to the initial excess density. The arrow indicates the onset of deconstruction of the substrate under the island. This deconstruction is completed at a size of about 20. (b): the crater formation case; time evolution of a crater size. Please note the jump at a critical value for the size.
* Figure 3. (color) Snapshot of the island growth simulation after 175 $`ps`$. Red atoms are adatoms. The view from the bottom shows the almost complete deconstruction under the red island, bigger than the critical size.
* Figure 4. Schematic profile of the Gibbs free energy (at different chemical potentials) upon formation of a crater of radius $`r`$. The jump occurs at the crater reconstruction, and does not depend on supersaturation.
## 1 Introduction
The forward proton spectrometer (FPS) is part of the H1 detector at the HERA collider at DESY . From 1995, the first year of FPS operation, until 1997 HERA collided 27.5 GeV positrons with 820 GeV protons. In 1998 positrons were replaced by electrons <sup>1</sup><sup>1</sup>1Throughout this paper electron is a generic name for $`e^{}`$ and $`e^+`$. and the proton energy was increased to 920 GeV.
The aim of the FPS is to extend the acceptance of the H1 detector in the very forward direction, which is the direction of the outgoing proton beam. The central tracking chambers cover the angular range down to polar angles of $`5^o`$. At smaller angles energy deposits are observed in detectors close to the outgoing proton beam direction. The liquid argon calorimeter, the plug calorimeter and the proton remnant tagger in combination with the forward muon system are sensitive to forward activity down to polar angles of about 1 mrad . The FPS has been built to measure forward-going protons which are scattered at polar angles below this range.
Forward going protons with energies close to the kinematic limit of the incident proton beam energy arise from diffractive processes, which in Regge theory are described by Pomeron exchange . At lower proton energies processes with meson exchange become the dominant production mechanism . The measurement of diffractive processes offers the possibility to investigate the structure of the exchanges .
The FPS measures the trajectory of protons emerging from electron-proton collisions at the interaction point. The HERA magnets in the forward beam line separate scattered protons from the circulating proton beam. After a distance of about 60 m the scattered protons deviate typically a few millimeters from the central proton orbit and can be measured by tracking detectors.
These detectors are multi-layer scintillating fiber detectors read out by position-sensitive photo-multipliers (PSPMs). Scintillator tiles cover the sensitive detector area and are used to form trigger signals. To provide the necessary aperture for the proton injection into HERA the detectors are mounted in movable plunger vessels, so-called Roman Pots, which are retracted during the beam filling and orbit tuning process . When stable conditions are reached the detectors are moved close to the circulating beam.
Due to the optics and the construction of the beam line only one secondary particle with at least 500 GeV energy can reach the FPS stations. The track parameters of the measured trajectories are used to reconstruct the energy and the scattering angles at the interaction point based on the knowledge of bending strengths and positions of the HERA magnets.
Four FPS stations at distances between 64 m and 90 m from the interaction point measure the trajectory of scattered protons. In 1995 two vertical stations were built, which approach the beam from above. They detect protons in the kinematic range $`0.5<E^{}/E_p<0.95`$, where $`E^{}`$ and $`E_p`$ are the energies of the scattered and the incident protons, respectively. In 1997 two horizontal stations were added, which approach the beam from the outer side of the proton ring. Complementary to the vertical FPS stations, they are sensitive in the range $`E^{}/E_p>0.9`$.
This paper is organized as follows: section 2 describes the FPS hardware components. In section 3 the operation of the FPS and the experience after three years of data taking are discussed. The main detector characteristics, such as acceptance, efficiency and resolution, are given in section 4. This section also describes the method of energy reconstruction and calibration. A summary of the relevant FPS parameters and an outlook on physics results are given in section 5.
## 2 Detector Components
The positions of the four FPS stations along the forward beam line are shown in fig.1. The two vertical stations are placed at 81 m and 90 m behind the BU00 dipole magnet, which bends the proton beam 5.7 mrad upwards. In front of this magnet at 64 m the first horizontal station is placed while the second horizontal station is located behind the BU00 magnet at 80 m.
### 2.1 Mechanics
All FPS stations consist of two major mechanical components: the plunger vessel which is the movable housing of the detector elements, and the fiber detectors with their readout components.
The plunger vessel is a cylinder made of 3 mm stainless steel. The inner volume, where the fiber detectors are mounted, has a diameter of 140 mm. To reduce the amount of material in front of the detectors thin windows of 0.3 mm steel are welded to the main cylinder. The bottom sections at the positions of the detectors are closed by 0.5 mm steel plates.
The plunger vessel can be moved close to the proton beam orbit due to the flexible connection via steel bellows to a flange of the beam pipe. In the vertical stations the plunger vessel is driven by spindles which are connected by a belt to a stepping motor. Due to space limitations the horizontal stations have a hydraulic moving system. The maximum range for the detector movement is 50 mm in the vertical stations and 35 mm in the horizontal stations. All stations are equipped with external position measuring devices <sup>2</sup><sup>2</sup>2Messtaster Metro MT60, Dr.J.Heidenhain GmbH, D-8225 Traunreut. which give the actual detector positions with 10 $`\mu `$m precision. These values are recorded every second by the slow control system.
A sketch of the detector insert of the horizontal stations is shown in fig.2a. The detector carrier is an aluminum tube with a carbon fiber end part to support the fiber detectors. The platform above the detector carrier houses the PSPMs and the front-end electronics. The trigger photo-multipliers (PMTs) are fixed on the detector carrier inside the plunger vessel. In the vertical stations the basic arrangement is the same, but the trigger PMTs are placed outside the plunger vessel on the same platform as the PSPMs.
For the reconstruction of the proton energy the positions of all FPS stations with respect to the HERA proton beam have to be known. For this purpose a measuring plate with four marks is inserted into the plunger vessel. These marks are geodetically surveyed with a precision of 100 $`\mu `$m. The position of the fiber detectors with respect to the outer marks is measured on a scanner with a precision of 20 $`\mu `$m.
The front-end electronics together with the PSPMs and trigger PMTs has a power consumption of about 150 W. To keep the temperature below $`30^o`$C the walls of the platform housing the electronic components are water cooled.
The front-end electronics of all FPS stations is surrounded by an external lead shielding against synchrotron radiation. The horizontal stations have in addition a soft iron shielding to reduce the influence of magnetic stray fields on the PSPMs.
For safety reasons the plunger vessels are filled with dry nitrogen gas, so that in case of leakage only a small amount of dry gas enters the HERA machine vacuum.
### 2.2 Fiber Detectors
Scintillating fibers arranged in multi-layer structures represent a fast and robust tracking detector. The vertex smearing and beam divergence at the interaction point set the scale for the spatial detector resolution. Due to these constraints, the energy resolution cannot be improved by a detector resolution better than 100$`\mu `$m.
A view of the arrangement of the fiber detectors in the vertical stations is given in fig.2b. Each pot is equipped with two identical subdetectors measuring two coordinates transverse to the beam direction. The subdetectors are separated by 60 mm to allow the reconstruction of a local track segment of the proton trajectory. Each subdetector consists of two coordinate detectors with fibers inclined by $`\pm 45^o`$ with respect to the symmetry axis of the plunger vessel. This angle was chosen to avoid a strong bending of the light guide fibers inside the plunger vessel.
Each coordinate detector consists of five fiber layers to ensure good spatial resolution and efficiency. The fibers of 1 mm diameter are positioned in parallel to each other with a pitch of 1.05 mm within each layer. Neighbouring fiber layers are staggered by 0.21 mm to obtain the best spatial resolution.
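As a rough cross-check of the design (our arithmetic, not a number quoted here), five layers staggered by one fifth of the 1.05 mm pitch give an effective segmentation of 0.21 mm, whose uniform-bin r.m.s. lies below the 100 $`\mu `$m scale set by the beam optics:

```python
pitch_mm, n_layers = 1.05, 5
eff_pitch = pitch_mm / n_layers         # 0.21 mm effective segmentation
sigma_um = 1e3 * eff_pitch / 12 ** 0.5  # r.m.s. of a uniform distribution
print(f"ideal single-hit resolution ~ {sigma_um:.0f} um")  # ~61 um
```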
The size of the fiber detector was chosen to detect most of the scattered protons in one hemisphere of the proton orbit. According to Monte Carlo simulations a detector size of about $`5\times 5`$ cm<sup>2</sup> meets this requirement in the vertical stations, while in the horizontal stations detectors of half this size are sufficient. Hence in the vertical stations 48 fibers are combined into one layer while 24 fibers per layer are used in the horizontal stations.
The precision of the fiber positions at the detector end face was measured by a microscope. The typical deviation from the nominal fiber position is 10 $`\mu `$m.
The scintillating fibers are thermally spliced to light guide fibers which transmit the scintillation signals to the PSPMs. The light guides have a length of 50 cm in the vertical stations and 30 cm in the horizontal stations. The light transmission of all spliced connections was measured and only those with a transmission better than 80 % were accepted for the detector production.
To optimize the mechanical stability of the spliced connections the fibers of several producers were tested. The chosen scintillating fibers <sup>3</sup><sup>3</sup>3POLIFI 02-42-100, Pol.Hi.Tech.,S.P.Turanense Km.44400 - 67061 Carsoli(AQ),Italy. have an attenuation length of 3.5 m and a trapping efficiency of 4 %. In a test run the light yield of 4.5 photo-electrons per millimeter fiber traversed by a minimum ionizing particle was measured at the end of 2 m light guides . For the detectors in the horizontal stations fibers with double cladding were used. Due to a trapping efficiency of nearly 7 % these fibers have an enhanced light yield.
The light guide fibers are glued into a plastic mask to feed the scintillation signals into the PSPM channels. All fibers of a coordinate detector are combined into the same mask to be read out by one PSPM.
### 2.3 Position-Sensitive Photo-Multipliers
The position-sensitive photo-multiplier (PSPM) is an efficient device to detect scintillation signals from many fibers simultaneously. Different types of PSPMs are used in the vertical and horizontal stations. For the vertical stations the 64-channel PSPM H4139-20 <sup>4</sup><sup>4</sup>4HAMAMATSU PHOTONICS K.K. Electron Tube Center,314-5 Shimokanzo, 438-0193 Japan. was chosen. This device gave the best results for the readout of the fiber detectors in terms of efficiency and resolution at that time. For the FPS upgrade the 124-channel PSPM MCPM-124 <sup>5</sup><sup>5</sup>5MELZ, Electrozavodskaja 14, 107061 Moscow, Russia. was applied in the horizontal stations.
The fundamental difference between the two types of PSPMs is the electron multiplication system. The H4139-20 has a fine-mesh dynode system which gives a gain above $`10^6`$ while the MCPM-124 is equipped with two micro-channel plates which produce a gain typically one order of magnitude less. An important feature of the MCPM-124 is the electro-static focusing and the anti-distortion electrode between the photo-cathode and the first micro-channel plate. Due to the long path of the photo-electrons this device is very sensitive to magnetic fields. A detailed investigation of the micro-channel plate PSPM can be found in . The main characteristics of both PSPM types are compiled in table 1.
For the correct recognition of fiber hits the cross talk to neighbouring pixels should be small. The values quoted by the producer range around a few percent. However a large contribution of electronic cross talk can increase the overall cross talk above the level of 10 %.
To read out all 240 fibers of a coordinate detector in the vertical stations by 64 channels of the H4139-20 type implies that 4 fibers have to be coupled to one PSPM pixel. The pixel size of 4 mm diameter is large enough to place 4 fibers of 1 mm diameter without increasing the cross talk. The consequence of this 4-fold fiber multiplexing is a 4-fold ambiguity in the hit recognition. Since the FPS stations aim to measure only one particle track this ambiguity can be resolved by a corresponding segmentation of the trigger planes into four scintillator tiles (section 2.4).
In the horizontal stations all 120 fibers of a coordinate detector can be read out by the 124 channels of the MCPM-124 without multiplexing. The pixel size of $`1.5\times 1.5`$ mm<sup>2</sup> matches well the fiber diameter of 1 mm. The distortions of the regular anode pixel grid due to the electro-static focusing are compensated by an appropriate design of the fiber mask.
A general principle of the fiber-to-PSPM-pixel mapping is that neighbouring fibers are not read out by neighbouring PSPM pixels. This scheme reduces the influence of cross talk on the spatial resolution of the detector.
All PSPMs have 2 pixels which are coupled via light guide fibers to light emitting diodes for monitoring and test purposes.
### 2.4 Trigger Counters
The trigger counters consist of scintillator planes covering the sensitive area of the fiber detectors. Each coordinate detector is equipped with one scintillator plane, placed before or behind the fiber detector. Altogether there are four scintillator planes in each FPS station.
The arrangement of the scintillator planes with respect to the fiber detectors in the vertical stations can be seen in fig.2b. To resolve the spatial ambiguity due to the 4-fold fiber multiplexing the scintillator planes are segmented into four tiles. Each tile covers 12 fibers with a unique fiber-to-PSPM pixel mapping. The trigger tiles are made of 5 mm thick scintillator material BC-408 <sup>6</sup><sup>6</sup>6BICRON, 12345 Kinsman Road, Newbury, Ohio 44065-9577,USA.. Bundles of 240 light guide fibers of 0.5 mm diameter are used to transmit the light signals to the trigger PMTs. The bundles are glued to the end faces of the scintillator tiles and have a length of 50 cm. Altogether 16 PMTs XP1911 <sup>7</sup><sup>7</sup>7Philips Components, Postbus 90050, 5600 PB Eindhoven, Netherlands. are used to read out the trigger signals in a vertical station.
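A toy decoder makes the ambiguity resolution concrete; the specific fiber-to-pixel cabling below is a hypothetical mapping for illustration (the real map is not given here), but the principle, namely that the fired trigger tile selects one of the four fiber candidates sharing a pixel, is as described above:

```python
def decode_fiber(pixel, fired_tile, fibers_per_tile=12, n_tiles=4):
    """Resolve the 4-fold readout ambiguity in a vertical-station layer.
    Each PSPM pixel is assumed to read one fiber in each of the four tile
    regions; the fired trigger tile picks the actual fiber (toy mapping)."""
    candidates = [t * fibers_per_tile + pixel % fibers_per_tile
                  for t in range(n_tiles)]
    return candidates[fired_tile]

print(decode_fiber(pixel=7, fired_tile=2))  # -> fiber 31 in this toy map
```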
In the horizontal stations the scintillator planes are not segmented and only 3 mm thick. Each plane is connected to two bundles of light guide fibers to transmit the scintillation signals to two PMTs R5600 <sup>8</sup><sup>8</sup>8HAMAMATSU PHOTONICS K.K. Electron Tube Center, 314-5 Shimokanzo, 438-0193 Japan., a metal package PMT characterized by compact size and low weight. Since in the horizontal stations the trigger PMTs are placed inside the plunger vessel, the light guide bundles have a length of only 15 cm. The main parameters of the trigger PMTs are summarized in table 2.
### 2.5 Electronics
The main parts of the FPS electronics are given in a block diagram in fig.3. It can be subdivided into three parts:
* the front-end components: preamplifiers and comparators for the signals of the PSPMs and trigger PMTs, mounted close to the detectors;
* the conversion, pipelining and trigger electronics, located in crates a few meters away from the FPS stations in the HERA tunnel;
* the VME master controller, residing outside the HERA tunnel, which organizes the data readout and the trigger processing.
In the vertical stations, which are equipped with the H4139-20, one can expect an anode charge of 0.5 pC per fiber hit. A charge sensitive preamplifier with a sensitivity of 1 V/pC is used as the first step in the signal processing chain, followed by a differential driver circuit for common mode rejection. In the horizontal stations the preamplifier sensitivity is enlarged to 30 V/pC due to the lower gain of the MCPM-124. The analog signals of the trigger PMTs are fed into amplifiers to enlarge their amplitudes by a factor of 10 before the digitization.
The first stage of the signal processing is a FADC with 6 bit resolution and 1 V range. It is strobed by the HERA clock and the digitized signals are packed into 8 bit wide pipeline registers. Since HERA has a bunch crossing interval of 96 nsec and the first level trigger decision from the main H1 detector is available only after 2.5 $`\mu `$sec all data have to be stored in pipelines. The FPS pipeline boards have a length of 32 bunch crossings.
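The pipeline can be pictured as a ring buffer clocked at the bunch-crossing frequency; a toy model (our own), with the trigger latency expressed in bunch crossings (2.5 $`\mu `$s / 96 ns is about 26, safely inside the 32-deep buffer):

```python
from collections import deque

class FADCPipeline:
    """Toy model of a 32-cell FADC pipeline."""
    def __init__(self, depth=32):
        self.buf = deque(maxlen=depth)

    def clock(self, fadc_word):        # called once per bunch crossing
        self.buf.append(fadc_word)

    def readout(self, latency_bc=26):  # sample of the triggered crossing
        return self.buf[-1 - latency_bc]
```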
In addition to the FADC readout the signals of the trigger PMTs are fed into comparator boards with a remotely controlled threshold. The output signals are transmitted to the trigger board which compares the hit pattern with trigger conditions consistent with a proton track segment. The logical OR of all conditions comprises the local trigger signal of a FPS station. It is stored together with the hit pattern of all trigger PMTs on the pipeline board.
All local trigger signals are combined into FPS trigger elements for the central H1 trigger processor. To compensate for the different distances of the FPS stations from the interaction point, programmable delay circuits are used to synchronize the FPS trigger pulses with the HERA clock.
The FPS data are transmitted via a bi-directional fiber optic link to the input FIFO of the VME master controllers. All FPS stations are read out in parallel with a strobe frequency of 5 MHz in a maximum readout time of 52 $`\mu `$sec.
A second fiber optic link is used to transmit the signals from the trigger boards to the VME master controllers outside the HERA tunnel. In the other direction the HERA clock signals and information from the central H1 trigger unit are transmitted to each FPS station.
## 3 Operation and Data Taking
### 3.1 Operation of the FPS
The beam profile determines how close the detectors can be moved to the proton beam orbit. It can be calculated from the emittance and the $`\beta `$-function of the proton machine. At the positions of the FPS stations the width and the height of the beam profile are given in table 3. It is a flat ellipse at 90 m and 80.5 m and becomes wider at 64 m.
Assuming a distance of 10 standard deviations of the beam profile plus a safety margin of 2 mm as the closest distance of approach, the detectors in the vertical stations can be moved up to 4 mm to the central proton orbit. In the horizontal stations the distance of closest approach is about 30 mm at 64 m and 20 mm at 80 m.
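The quoted distances follow directly from this criterion and the beam widths of table 3, as the following illustrative sketch shows (Python; the numbers are taken from the text and table, the station labels are mine):

```python
# Illustrative arithmetic only: 10 sigma of the beam profile plus 2 mm.
SAFETY_MARGIN_MM = 2.0
beam_sigma_mm = {
    ("90 m", "sigma_y"): 0.22,    # vertical stations approach from above
    ("80.5 m", "sigma_y"): 0.25,
    ("64 m", "sigma_x"): 2.49,    # horizontal stations approach in x
    ("80.5 m", "sigma_x"): 1.86,
}

for (station, coord), sigma in beam_sigma_mm.items():
    closest = 10.0 * sigma + SAFETY_MARGIN_MM
    print(f"{station} ({coord}): {closest:.1f} mm")
# -> about 4 mm for the vertical stations and roughly 27 mm / 21 mm for the
#    horizontal ones, consistent with the ~30 mm and 20 mm quoted above.
```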
During injection and ramping of the beams all FPS detectors are in their parking positions far from the circulating beam. When stable beam conditions are reached, an automatic insert procedure is started. The detectors are moved in steps of 100 $`\mu `$m while the counting rates in the trigger tiles and in the beam loss monitors mounted on the beam pipe at 64 m, 83 m and 95 m are observed. For the vertical station at 90 m the rate of the forward neutron calorimeter, located at a distance of 107 m from the interaction point, is also recorded.
As the detectors are moved towards the proton beam a gradual increase of the rates is observed as long as the plunger vessels remain in the shadow of the HERA machine collimators which cut the tails of the proton beam profile. When the plunger vessel leaves the collimator shadow the rates increase steeply.
The automatic insert procedure evaluates the gradient of the rate increase to stop the movement. If the increase between two consecutive steps exceeds a predefined level the movement is stopped and the detectors have reached their working positions.
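Schematically, the procedure can be summarized as in the following sketch (Python pseudocode; move_step, read_rates and the numerical stop level are hypothetical placeholders, not the actual FPS slow-control interface):

```python
# Schematic sketch of the automatic insert procedure described above.
STEP_UM = 100          # step size quoted in the text
MAX_INCREASE = 500.0   # predefined stop level for the rate increase (assumed, in Hz)

def insert_detector(move_step, read_rates):
    previous = read_rates()      # summed tile and beam-loss-monitor rates
    while True:
        move_step(STEP_UM)       # move 100 um closer to the beam
        current = read_rates()
        if current - previous > MAX_INCREASE:
            return               # steep rise: working position reached
        previous = current
```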
The insert procedure starts with the station nearest to the interaction point at 64 m, because particles scattered off the bottom of the plunger vessel affect the rates in the FPS stations further downstream. The whole procedure until all detectors have reached their working positions takes about 20 minutes.
The slow control system records detector parameters in time intervals of seconds and stores them into the database. Some important parameters are the detector positions, the values of the HERA beam position monitors and the trigger rates.
The radiation dose accumulated in all FPS stations over one year of HERA operation was measured. At the positions of the fiber detectors the highest dose was 28 krad, in the horizontal station at 64 m. The other stations are well protected by the yoke of the BU00 dipole magnet and accumulated doses below 1 krad. The doses in all stations are well below the limits of radiation damage of scintillating fibers.
An emergency retract system protects the detectors and PSPMs against radiation damage by badly tuned beams or accidental beam loss. When the rate monitors indicate a rate above a critical threshold all detectors are quickly moved to their parking positions.
An important aspect of FPS operation is the safety of HERA running. The motor drives of all FPS stations are connected to the H1 emergency power net, and in case of a power break all detectors are automatically retracted. In addition, the horizontal stations have a spring loaded system to retract the detectors in case of problems. During the detector insert procedure several checks protect the proton beam against accidental detector movement by human intervention or computer failures.
### 3.2 Data Acquisition Program
The data acquisition (DAQ) has two major tasks: it organizes the data transfer from the front-end electronics to the central H1 event builder and monitors data quality parameters.
When a first level trigger decision stops the filling of the pipeline a controller program starts the readout of the FADC data of a specific set of pipeline stages, which includes all PSPM and trigger PMT signals. The digital trigger data of five consecutive stages are read out including the central pipeline stage containing the FADC data. Due to the long distance to the central H1 detector the signals of a proton traversing the FPS stations reside close to the end of the pipeline.
The master controllers receive the data via a fiber-optical link and perform a pedestal subtraction with zero suppression. Finally, the DAQ program reformats the data for the off-line analysis and sends them to the central event builder.
The size of a typical FPS event is 200 bytes. About 1 % to 2 % of the total H1 data sample contain such an FPS event. The deadtime due to the FPS DAQ is typically 1.2 msec.
The monitor program allows readout parameters to be modified, such as the comparator thresholds of the trigger PMTs, the definition of the trigger conditions, the FADC strobe delays and the zero suppression thresholds of the PSPM amplitudes. In addition, it shows control histograms and an online event display.
## 4 Detector Performance
### 4.1 Trigger Rate and Efficiency
The trigger rate of the scintillator tiles has contributions from two major sources: the signals from traversing particles, either scattered protons or shower particles from beam-gas or beam-wall interactions, and the background rate due to PMT noise and synchrotron radiation from the electron machine. To reduce the synchrotron background rate all stations are shielded with lead plates.
The majority of the data were recorded with the trigger condition that at least 3 out of 4 trigger planes per station have fired. Moreover, in the vertical stations combinations of scintillator tiles which correspond to a track topology define narrow forward cones. In the horizontal stations the unsegmented scintillator planes are coupled to two PMTs which are used in coincidence to reduce the contribution of PMT noise.
The resulting trigger rates of the vertical FPS stations are between 2 kHz and 4 kHz for typical luminosity conditions with proton and electron currents of 60 mA and 20 mA, respectively. The coincidence rate of the vertical stations varies between 0.5 kHz and 1.5 kHz. In the horizontal stations the trigger rates are significantly higher. This is mostly due to the wide beam profile in the horizontal coordinate (table 3) and the absence of shielding by the BU00 dipole magnet in front of the station at 64 m. The typical coincidence rate of the horizontal stations is about 10 kHz.
The four local trigger signals of all FPS stations are combined into 8 FPS trigger elements. They are used in combination with trigger elements from other H1 subdetectors in the central trigger processor. Three physics triggers with the signature of a forward going proton are formed for low multiplicity, photo-production and deep inelastic scattering events.
The efficiency of the trigger tiles is determined from redundant signals which can be attributed to the same forward track. The typical values of the tile efficiency are well above 98 %.
### 4.2 Hit Identification and Track Reconstruction
The signals of all PSPM channels are classified into hit, noise, or cross talk signals before the track reconstruction.
To reduce the influence of cross talk on the track reconstruction a filter algorithm is applied. Possible cross talk signals in the neighbourhood of channels with large amplitudes are suppressed, while isolated channels with small amplitudes are kept. A signal above an amplitude threshold of 2 $`\sigma _i`$ + 1 FADC counts, where $`\sigma _i`$ is the pedestal variation of the i-th PSPM channel, is accepted as a hit if at least one of the two associated trigger tiles has fired.
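A minimal sketch of this acceptance rule reads as follows (the function name and interface are illustrative only):

```python
def is_hit(amplitude_fadc, sigma_ped, tiles_fired):
    """Return True if the channel passes the 2*sigma_ped + 1 count threshold
    and at least one of the two associated trigger tiles has fired."""
    above_threshold = amplitude_fadc > 2.0 * sigma_ped + 1.0
    return above_threshold and any(tiles_fired)
```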
In the first step of the track reconstruction the fiber hits in a coordinate detector are grouped into clusters compatible with a forward track segment. Hits in at least two layers are required for each cluster.
As described in section 2.2 each FPS station contains two identical subdetectors separated by 60 mm. Each cluster in the first detector is combined with each cluster in the second detector to obtain a track projection. The slope of these projections is used to select forward going protons. A typical slope distribution with a narrow peak related to forward protons is shown in fig.4a.
Two projections each having at least 5 out of 10 hit layers are combined to a spatial track. A scatter plot of track points in the middle plane between both subdetectors is shown in fig.4b. Only very few fake tracks due to misidentified track projections or ambiguous combinations in multi-track events can be seen outside the sensitive detector area.
All spatial tracks inside the sensitive detector area are used to form global tracks for each pair of vertical and horizontal FPS stations. Before this step the track points have to be corrected for the detector positions. The large distance between two FPS stations allows the slopes of global tracks to be measured with an accuracy of a few $`\mu `$rad.
For a minimum multiplicity of 5 fiber hits the probability to find a local track projection is 86 %. This results in a reconstruction efficiency of about 50 % for protons passing both vertical stations. In the horizontal stations this efficiency is smaller due to the lower layer efficiency (section 4.3).
### 4.3 Fiber Detector Efficiency and Resolution
The performance of the fiber detectors is described by the layer efficiency, which is defined as the probability that a fiber layer indicates a hit if it is traversed by a charged particle. The layer efficiency depends on the light yield, the attenuation of the scintillating and light guide fibers, and losses in the readout chain and the hit finding algorithm. Another source of inefficiency is the dead material between neighbouring fibers. Assuming an effective fiber core diameter of 900 $`\mu `$m the geometrical layer efficiency is 86 % for a fiber pitch of 1050 $`\mu `$m.
The hit multiplicity of reconstructed tracks is used to evaluate the layer efficiencies. For a typical run range the layer efficiencies of all FPS stations are shown in fig.5. In the vertical stations the values vary between 50 % and 80 % with an average around 65 %. In the horizontal stations the layer efficiencies are slightly lower. They range from 30 % to 70 % with an average around 50 %. The lower values in the horizontal stations can be explained by the lower quantum efficiency and gain of the MCPM-124, as given in table 1. In all FPS stations a degradation of the layer efficiencies between 5 % and 10 % per year was observed. Possible reasons are the aging of PSPMs and fiber detectors, intensified by the synchrotron radiation and beam induced background during injection and beam steering in the HERA tunnel. To maintain the quality of the FPS stations, fiber detectors with too low a layer efficiency and PSPMs with degraded gain were exchanged during the accelerator shut-down periods.
The spatial resolution of the fiber detectors is determined by the overlap region of hit fibers in subsequent layers. Due to the 210 $`\mu `$m staggering of fibers in neighbouring layers a theoretical resolution of 60 $`\mu `$m can be obtained if all fibers are 100 % efficient. In practice this resolution is deteriorated by inefficient fibers, noise hits and the dead material between the fibers.
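Both geometrical numbers can be checked by simple arithmetic; in the sketch below the 60 $`\mu `$m figure is reproduced under the assumption that the 210 $`\mu `$m staggering cell acts like a uniform distribution with rms cell/sqrt(12):

```python
# Order-of-magnitude checks of the two geometrical statements above.
import math

core_um, pitch_um, stagger_um = 900.0, 1050.0, 210.0
print(f"geometrical layer efficiency: {core_um / pitch_um:.0%}")        # ~86 %
print(f"theoretical resolution: {stagger_um / math.sqrt(12):.0f} um")   # ~61 um
```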
For a horizontal station the measured spatial resolution as a function of the hit multiplicity is shown in fig.6. The average value of the resolution is 150 $`\mu `$m, in good agreement with results from prototype measurements at the DESY electron test beam. Due to the different combinations of overlapping fibers a particular hit multiplicity results in overlap regions of different sizes. This effect propagates into the variance of the resolution, which starts for all multiplicities around 100 $`\mu `$m and ranges up to 300 $`\mu `$m for the lowest multiplicities.
The spatial detector resolution has to be compared with the uncertainty of the proton trajectory due to vertex smearing and beam divergence at the interaction point. Depending on the proton energy these sources give an additional uncertainty between 50 $`\mu `$m and 150 $`\mu `$m at the positions of the FPS stations. The Coulomb scattering of a proton passing a FPS station results in an uncertainty of similar size in the stations further downstream. Due to these inevitable contributions, the total resolution cannot be significantly improved by a better spatial resolution of the fiber detectors.
### 4.4 Energy and Angle Reconstruction
The energy reconstruction is based on the knowledge of the optics of the proton beam line . Since only dipoles and quadrupoles are placed between the interaction point and the FPS stations the deflections in the horizontal and vertical projections are decoupled. This allows the energy of the scattered proton to be reconstructed independently in both projections.
The coordinate system in the following description is defined by a horizontal $`X`$-axis, a vertical $`Y`$-axis and a $`Z`$-axis pointing in the beam direction. The intercept $`X`$ and slope $`X^{\prime }`$ of global tracks in the horizontal projection are related to the energy E and the scattering angle $`\mathrm{\Theta }_x`$ of the proton at the interaction point by two linear equations:
$`X=a_x(E)+b_x(E)\mathrm{\Theta }_x`$ (1)
$`X^{\prime }=c_x(E)+d_x(E)\mathrm{\Theta }_x`$ (2)
A corresponding set of equations holds for the vertical projection. The transfer functions $`a_x(E),b_x(E),c_x(E)`$ and $`d_x(E)`$ are determined from Monte Carlo simulations of forward protons at the reference positions $`Z`$ = 85 m and $`Z`$ = 72 m for the vertical and horizontal stations respectively.
While the reconstruction in the horizontal projection has a unique solution for energy and angle, the vertical projection has two solutions in most cases. In the vertical stations a large fraction of these double solutions has unphysically large scattering angles and can thus be rejected. For the remaining tracks the solution with the energy closer to the energy found in the horizontal projection is accepted. The final proton energy is the weighted average of the energies from both projections.
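Schematically, the inversion of eqs. (1) and (2) in one projection can be organized as in the following sketch (illustrative only; brentq is one possible root finder, and the transfer functions passed in are hypothetical stand-ins for the Monte Carlo parametrizations):

```python
# Eliminate Theta_x using eq. (1), then solve eq. (2) for the energy E.
from scipy.optimize import brentq

def reconstruct(X, Xp, a, b, c, d, E_lo=450.0, E_hi=820.0):
    def residual(E):
        theta = (X - a(E)) / b(E)        # eq. (1) solved for Theta_x
        return Xp - c(E) - d(E) * theta  # eq. (2) must then vanish
    E = brentq(residual, E_lo, E_hi)     # unique root in the horizontal projection
    return E, (X - a(E)) / b(E)          # energy and scattering angle
```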
The energy spectrum of scattered protons measured with the vertical FPS stations during 1996 data taking with 820 GeV proton beam energy is shown in fig.7a. It begins around 500 GeV and drops sharply above 750 GeV. The upper edge of the energy spectrum reflects the fact that in both vertical stations the closest distance of approach to the circulating proton beam is about 4 mm.
The errors of the reconstructed energies are shown in fig.7b. The mean error is 6 GeV at 700 GeV proton energy and decreases to 2 GeV for protons at 500 GeV.
To evaluate the absolute energy scale error, the FPS measurements were compared with predictions of the proton energy from the central H1 detector. Events with a high photon virtuality, where the hadronic final state is well contained in the central detector, were used for this purpose. Based on a sample of 17 events we estimate an energy scale error of 10 GeV.
The reconstructed spectra of the polar angles $`\mathrm{\Theta }_x`$ and $`\mathrm{\Theta }_y`$ of scattered protons are shown in fig.7c,d. The acceptance in the vertical stations is limited to values $`|\mathrm{\Theta }_{x,y}|<0.4`$ mrad corresponding to transverse momenta $`p_T<300`$ MeV/c. The mean error of the $`\mathrm{\Theta }_x`$ measurement is 5 $`\mu `$rad independent of the proton energy, while the mean error of $`\mathrm{\Theta }_y`$ increases from 5 $`\mu `$rad at 500 GeV up to 100 $`\mu `$rad at 700 GeV.
### 4.5 Calibration of the Detector Positions
The transfer functions in (1) and (2) are calculated with respect to the nominal beam orbit. Since the actual beam orbit varies for different proton fills, a fill-dependent calibration of the detector positions is necessary before the track parameters can be used for the energy reconstruction. This calibration is based on the comparison of global tracks with a Monte Carlo sample using the nominal beam optics.
Fig.8a shows the scatter plot of intercepts $`X`$ and slopes $`X^{\prime }`$ of global tracks in the horizontal projection at the reference position $`Z`$ = 85 m before the calibration. Superimposed are the lines of constant energy and scattering angles assuming the nominal interaction point and beam optics. Certain combinations of $`X`$ and $`X^{\prime }`$ are forbidden, but this region is partly occupied by uncalibrated tracks. This discrepancy is based on the difference between nominal and actual beam orbit and can be improved by additional offsets for slopes and intercepts of global tracks - the calibration constants $`\mathrm{\Delta }X`$ and $`\mathrm{\Delta }X^{\prime }`$. They are determined in a maximum likelihood fit minimizing the number of tracks in the forbidden region. Fig.8b shows the scatter plot after the calibration, with far fewer tracks in the forbidden region.
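Conceptually, the calibration fit can be sketched as follows (illustrative Python; `forbidden` is a hypothetical predicate encoding the nominal-optics boundary, and a real fit would smooth or likelihood-weight the integer count):

```python
# Find offsets (dX, dX') minimizing the number of tracks in the forbidden region.
import numpy as np
from scipy.optimize import minimize

def calibrate(X, Xp, forbidden):
    def n_forbidden(shift):
        dX, dXp = shift
        return float(np.sum(forbidden(X + dX, Xp + dXp)))
    return minimize(n_forbidden, x0=[0.0, 0.0], method="Nelder-Mead").x
```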
In the vertical projection the same scheme is applied. Due to the shape of the scatter plot a unique solution for the calibration constants $`\mathrm{\Delta }Y`$ and $`\mathrm{\Delta }Y^{\prime }`$ can only be found if the energy measurement in the horizontal projection is used as an additional constraint.
In the horizontal FPS stations the calibration can be done in a similar way. In addition, the diffractive photo-production of $`\rho `$-mesons, where the final state is completely measured in the central detector and the FPS, offers an independent method to check the energy scale.
## 5 Summary and Outlook
For the H1 experiment at the HERA collider a spectrometer was built to measure forward protons with energies greater than 500 GeV in the angular range below 1 mrad with respect to the proton beam direction. Such protons escape the central detector through the beam pipe and can be detected at a large distance from the interaction point, where their positions deviate by a few millimeters from the circulating proton beam.
The FPS consists of two vertical stations at 81 m and 90 m which approach the beam from above and two horizontal stations at 64 m and 80 m which approach the beam from the outer side of the HERA ring.
The main components of the FPS are fiber detectors located in Roman Pots, which can be moved close to the proton orbit. The detectors consist of 5 staggered layers of scintillating fibers of 1 mm diameter. The scintillating fibers are spliced to light guide fibers which transmit the signals to position-sensitive photo-multipliers. For triggering each FPS station is equipped with four planes of scintillator tiles.
The fiber detectors in the vertical FPS stations have a spatial resolution of 150 $`\mu `$m and a typical layer-efficiency of 65 %. This allows the proton trajectory to be reconstructed through both stations for about 50 % of the triggered events. In the horizontal FPS stations the spatial resolution is as good as in the vertical stations, but the layer-efficiency is slightly less, typically 50 %.
The local track elements are combined into global tracks for the pairs of vertical and horizontal FPS stations. The parameters of these global tracks are used to evaluate the energy and scattering angle of the proton at the interaction point. The deflection of scattered protons in the horizontal and vertical projection is decoupled and the energy can be measured twice. For 820 GeV proton beam energy the vertical stations measure scattered protons in the energy range between 500 GeV and 750 GeV. The corresponding error varies between 2 GeV and 6 GeV for the lowest and highest energies respectively, with an additional energy scale error of 10 GeV.
The kinematic range of the horizontal FPS stations is complementary to that of the vertical stations. The horizontal stations give access to the diffractive region with $`x_{IP}<0.1`$, where $`x_{IP}=1-E^{\prime }/E_p`$ is the fractional energy of the exchange. This is illustrated in fig.9.
Until 1999 an integrated luminosity of about 15 pb<sup>-1</sup> was collected with the vertical stations and 5 pb<sup>-1</sup> with the horizontal FPS stations. First physics results for the structure function with a leading proton have been published from the 1995 data taken with the vertical stations. The semi-inclusive structure function $`F_2^{LP(3)}`$ with leading protons in the kinematic range 580 GeV $`<E^{\prime }<`$ 740 GeV and $`p_T<`$ 200 MeV was measured. In another analysis the photo-production cross section with leading protons and two jets was evaluated and compared to theoretical predictions.
Acknowledgement
The technical help provided by the workshops of the DESY laboratories at Hamburg and Zeuthen is gratefully acknowledged. In particular we thank the technicians H.J.Seidel and P.Pohl. The continuous assistance of the HERA machine, survey and vacuum groups is essential for the successful operation of the Roman Pot devices. We thank D.P.Johnson (Brussels), B.Stella (Rome) and J.Zsembery (Saclay) for their help in the early phase of this project. This project was supported by the INTAS-93-43 grant.
References
[1] H1 Collaboration, I.Abt et al., Nucl.Instr.and Meth. A386 (1997) 310, ibid. 348.
[2] H1 Collaboration, T.Ahmed et al., Phys.Lett. B348 (1995) 681;
H1 Collaboration, S.Aid et al., Nucl.Phys. B463 (1996) 3;
H1 Collaboration, S.Aid et al., Nucl.Phys. B468 (1996) 3;
H1 Collaboration, S.Aid et al., Nucl.Phys. B472 (1996) 3, ibid. 32.
[3] T.Regge, Nuovo Cimento 14 (1959) 951;
G.Chew and S.Frautschi, Phys.Rev.Lett. 7 (1961) 394;
A.Kaidalov, Phys.Rep. 50 (1979) 157;
K.Goulianos, Phys.Rep. 101 (1983) 169;
A.Donnachie and P.Landshoff, Phys.Lett. B296 (1992) 227.
[4] G.Alberi and G.Goggi, Phys.Rep. 74 (1981) 1;
G.Chew, S.Frautschi and S.Mandelstam, Phys.Rev. 126 (1962) 1202;
A.W.Thomas and C.Boros, ADP-98-79/T346, hep-ph/9812264v2, 1999.
[5] G.Ingelmann and P.Schlein, Phys.Lett. B152 (1985) 256;
A.Donnachie and P.Landshoff, Nucl.Phys. B303 (1988) 634.
[6] ZEUS Collaboration, M.Derrick et al., Phys.Lett. B356 (1995) 129;
H1 Collaboration, C.Adloff et al., Z.Phys. C76 (1997) 613;
H1 Collaboration, ”Measurement of the diffractive structure function $`F_2^{D3}`$ at low and high $`Q^2`$ at HERA”, contr. paper to the 29th Int.Conf.on High Energy Physics ICHEP’98, Vancouver, Canada, 1998.
[7] U.Amaldi et al., Phys.Lett. 43B (1973) 231;
R.Battiston et al., Nucl.Instr.and Meth. A238 (1985) 35;
A.Brandt et al., Nucl.Instr.and Meth. A327 (1993) 412;
F.Abe et al., Phys.Rev. D50 (1994) 5518;
ZEUS Collaboration, J.Breitweg et al., Eur.Phys.J. C2 (1998) 246.
[8] J.Bähr et al., DESY preprint 92-176, 1992;
J.Bähr et al., DESY preprints 93-200, 93-201, 1993;
J.Bähr et al., Nucl.Instr.and Meth. A330 (1993) 103;
J.Bähr et al., Nucl.Instr.and Meth. A371 (1996) 380.
[9] J.Bähr et al., DESY-Zeuthen preprint 95-01, 1995.
[10] D.C.Carey, ”The Optics of Charged Particle Beams”, Harwood Academic Publishers, New York, 1987;
K.G.Steffen, ”High-Energy Beam Optics”, Interscience Monographs and Texts in Physics and Astronomy, Vol.17, John Wiley and Sons, New York, 1995;
K.Wille, ”Physik der Teilchenbeschleuniger und Synchrotronstrahlungsquellen”, Teubner, Stuttgart, 1992;
J.Roßbach and P.Schmüser, ”Basic course on accelerator optics”, DESY preprint M-93-02, 1993.
[11] M.Beck et al., Nucl.Instr.and Meth. A381 (1996) 330;
T.Nunnemann, Thesis MPIH-V7-1999, University of Heidelberg, 1998.
[12] C.Zorn, Nucl.Phys. B32 (1993) 377;
J.Bähr et al., DESY preprint 99-079, 1999, physics/9907019.
[13] The filter algorithm corrects the measured amplitudes $`S_i`$ by subtracting the weighted amplitudes of the direct and diagonal neighbours according to:
$`\widehat{S_i}=1.025\left(S_i-0.15\sum _{dir.}S_j-0.10\sum _{diag.}S_j\right)`$
[14] H1 Collaboration, ”The Forward Proton Spectrometer of H1”, contr. paper pa17-025 to the 28th Int.Conf. on High Energy Physics ICHEP’96, Warsaw, Poland, 1997.
[15] H1 Collaboration, C.Adloff et al., Z.Phys. C75 (1997) 607;
H1 Collaboration, C.Adloff et al., DESY 99-010, 1999, hep-ex/9902019, subm. to Eur.Phys.J. C.
[16] H1 Collaboration, C.Adloff et al., Eur.Phys.J. C6 (1999) 587.
[17] B.List, Thesis, University of Hamburg, 1996.
[18] C.Wittek, Thesis, University of Hamburg, 1997.
Table 1: Typical parameters of the position-sensitive photo-multipliers H4139-20 and MCPM-124 used in the vertical and horizontal FPS stations, respectively.
| Characteristics | H4139-20 | MCPM-124 |
| --- | --- | --- |
| photo-cathode material | bi-alkaline | multi-alkaline |
| window | glass | fiber optic |
| size | $`40\times 40`$ mm<sup>2</sup> | 25 mm diam. |
| quantum efficiency | typ. 20 % | typ. 15 % |
| at 400 nm | | |
| dynode structure | proximity mesh | micro-channel plates |
| stages | 16 | 2 |
| anode pixels | 8 x 8 | 10 x 10 + 4 x 6 |
| pixel pitch | 5.08 mm | 2.2 mm |
| pixel size | 4 mm diam. | 1.5 $`\times `$ 1.5 mm<sup>2</sup> |
| max.voltage | 2.7 kV | 2.9 kV |
| gain | $`10^6`$ at 2.5 kV | 3 $`\times 10^5`$ at 2.8 kV |
| pulse time | 2.7 nsec rise | 2.5 nsec total |
| uniformity | 1 : 3 | 1 : 3 |
| average cross talk | 1 % | 1 - 2 % |
Table 2: Typical parameters of the trigger photo-multipliers XP1911 and R5600 used in the vertical and horizontal FPS stations, respectively.
| Characteristics | XP1911 | R5600 |
| --- | --- | --- |
| photo-cathode material | bi-alkaline | bi-alkaline |
| window | glass | glass |
| size | 15 mm diam. | 8 mm diam. |
| quantum efficiency | 20 % | 20 % |
| at 400 nm | | |
| dynode structure | linear focused | metal channel |
| stages | 10 | 8 |
| max.voltage | 1.9 kV | 1.0 kV |
| gain | $`10^6`$ at 1.2 kV | $`10^6`$ at 0.8 kV |
| pulse time | 2.3 nsec rise | 0.65 nsec rise |
Table 3: The horizontal and vertical profile of the 820 GeV proton beam at the interaction point and at the positions of the FPS stations in terms of standard deviations $`\sigma _x`$ and $`\sigma _y`$ of a Gaussian parametrization. For the emittance the value 17$`\pi `$ mm $`\cdot `$ mrad was assumed.
| z / m | $`\sigma _x`$/mm | $`\sigma _y`$/mm |
| --- | --- | --- |
| 0 | 0.17 | 0.05 |
| 64 | 2.49 | 0.82 |
| 80.5 | 1.86 | 0.25 |
| 90 | 1.38 | 0.22 |
Figure Captions
A schematic view of forward beam line a) in the horizontal b) in the vertical projection indicating the 10 $`\sigma `$ beam envelope, the main magnets and the FPS stations at 64 m, 80 m, 81 m and 90 m.
A horizontal FPS station a) with fiber detectors in parking position and a vertical FPS station b) showing the fiber detector end faces and the scintillator planes of the trigger counters.
The electronic scheme: the PSPM and trigger PMT signals are transmitted via optical links to the VME master controllers - in the other direction control signals (HERA clock, pipeline enable, fast clear) are sent to the FPS stations.
The local tracks in a horizontal FPS station: a) a slope spectrum with the peak signal of forward proton tracks, b) spatial track points with the shape of the detector area.
The layer efficiencies of the fiber detectors a,b) in the vertical stations during 1996 data taking and c,d) in the horizontal stations during 1998 running.
The spatial resolution of the fiber detectors in the horizontal stations as a function of the multiplicity of fiber hits. The bars describe the variance according to the different sizes of the fiber overlap regions for combinations with the same hit multiplicity.
The energy and angular spectra measured with the vertical FPS stations in 1996: a) the energy E obtained from the measurements in both projections, b) the error $`\mathrm{\Delta }`$E as a function of the energy E, and the polar angles c) $`\mathrm{\Theta }_x`$, and d) $`\mathrm{\Theta }_y`$.
The scatter plots of intercepts X and slopes dX/dZ of global tracks in the vertical stations a) before and b) after the calibration of the detector positions. The full lines correspond to constant energies from 420 GeV to 820 GeV, the dashed lines are for constant angles from $`-0.7`$ mrad to $`+0.7`$ mrad.
The geometrical acceptance as a function of the fractional momentum $`x_{IP}`$ of the exchange for protons passing a) the vertical and b) the horizontal FPS stations.
## 1 Introduction
The Cosmological Principle was first adopted when observational cosmology was in its infancy; it was then little more than a conjecture, embodying ’Occam’s razor’ for the simplest possible model. Observations could not then probe to significant redshifts, the ‘dark matter’ problem was not well-established and the Cosmic Microwave Background (CMB) and the X-Ray Background (XRB) were still unknown. If the Cosmological Principle turned out to be invalid then the consequences for our understanding of cosmology would be dramatic; for example, the conventional way of interpreting the age of the Universe, its geometry and matter content would have to be revised. Therefore it is important to revisit this underlying assumption in the light of new galaxy surveys and measurements of the background radiations.
As with any other idea about the physical world, we cannot prove a model, but only falsify it. Proving the homogeneity of the Universe is particularly difficult, as we observe the Universe from one point in space, and we can directly deduce only isotropy. The practical methodology we adopt is to assume homogeneity and to assess the level of fluctuations relative to the mean, and hence to test for consistency with the underlying hypothesis. If the assumption of homogeneity turns out to be wrong, then there are numerous possibilities for inhomogeneous models, and each of them must be tested against the observations.
Despite the rapid progress in estimating the density fluctuations as a function of scale, two gaps remain:
(i) It is still unclear how to relate the distributions of galaxies and mass (i.e. ‘biasing’); (ii) relatively little is known about fluctuations on intermediate scales between those of local galaxy surveys ($`100h^{-1}`$ Mpc) and the scales probed by COBE ($`1000h^{-1}`$ Mpc).
Here we examine the degree of smoothness with scale by considering redshift and peculiar velocity surveys, radio sources, the XRB, the Ly-$`\alpha `$ forest, and the CMB. We discuss some inhomogeneous models and show that a fractal model on large scales is highly improbable. Assuming an FRW metric we evaluate the ‘best fit Universe’ by performing a joint analysis of cosmic probes.
## 2 Cosmological Principle(s)
Cosmological Principles were stated over different periods in human history based on philosophical and aesthetic considerations rather than on fundamental physical laws. Rudnicki (1995) summarized some of these principles in modern-day language:
- The Ancient Indian: The Universe is infinite in space and time and is infinitely heterogeneous.
- The Ancient Greek: Our Earth is the natural centre of the Universe.
- The Copernican CP: The Universe as observed from any planet looks much the same.
- The Generalized CP: The Universe is (roughly) homogeneous and isotropic.
- The Perfect CP: The Universe is (roughly) homogeneous in space and time, and is isotropic in space.
- The Anthropic Principle: A human being, as he/she is, can exist only in the Universe as it is.
We note that the Ancient Indian principle can be viewed as a ‘fractal model’. The Perfect CP led to the steady state model, which although more symmetric than the CP, was rejected on observational grounds. The Anthropic Principle is becoming popular again, e.g. in explaining a non-zero cosmological constant. Our goal here is to quantify ‘roughly’ in the definition of the generalized CP, and to assess if one may assume safely the Friedmann-Robertson-Walker (FRW) metric of space-time.
## 3 Probes of Smoothness
### 3.1 The CMB
The CMB is the strongest evidence for homogeneity. Ehlers, Geren and Sachs (1968) showed that by combining the CMB isotropy with the Copernican principle one can deduce homogeneity. More formally the EGS theorem (based on the Liouville theorem) states that “If the fundamental observers in a dust spacetime see an isotropic radiation field, then the spacetime is locally FRW”. The COBE measurements of temperature fluctuations $`\mathrm{\Delta }T/T\sim 10^{-5}`$ on scales of $`10^{\circ }`$ give via the Sachs-Wolfe effect ($`\mathrm{\Delta }T/T=\frac{1}{3}\mathrm{\Delta }\varphi /c^2`$) and the Poisson equation rms density fluctuations of $`\frac{\delta \rho }{\rho }\sim 10^{-4}`$ on $`1000h^{-1}\mathrm{Mpc}`$ (e.g. Wu, Lahav & Rees 1999; see Fig 3 here), i.e. the deviations from a smooth Universe are tiny.
### 3.2 Galaxy Redshift Surveys
Figure 1 shows the distribution of galaxies in the ORS and IRAS redshift surveys. It is apparent that the distribution is highly clumpy, with the Supergalactic Plane seen in full glory. However, deeper surveys such as LCRS show that the fluctuations decline as the length-scales increase. Peebles (1993) has shown that the angular correlation functions for the Lick and APM surveys scale with magnitude as expected in a universe which approaches homogeneity on large scales.
Existing optical and IRAS (PSCz) redshift surveys contain about $`10^4`$ galaxies. Multifibre technology now allows us to measure redshifts of millions of galaxies. Two major surveys are underway. The US Sloan Digital Sky Survey (SDSS) will measure redshifts to about 1 million galaxies over a quarter of the sky. The Anglo-Australian 2 degree Field (2dF) survey will measure redshifts for 250,000 galaxies selected from the APM catalogue. About 80,000 2dF redshifts have been measured so far (as of December 1999). The median redshift of both the SDSS and 2dF galaxy redshift surveys is $`\overline{z}\sim 0.1`$. While they can provide interesting estimates of the fluctuations on scales of hundreds of Mpc’s, the problems of biasing, evolution and $`K`$-correction would limit the ability of SDSS and 2dF to ‘prove’ the Cosmological Principle (cf. the analysis of the ESO slice by Scaramella et al 1998 and Joyce et al. 1999).
### 3.3 Peculiar Velocities
Peculiar velocities are powerful as they probe directly the mass distribution (e.g. Dekel et al. 1999). Unfortunately, as the errors of distance measurements increase with distance, the scales probed are smaller than the interesting scale of transition to homogeneity. Conflicting results on both the amplitude and coherence of the flow suggest that peculiar velocities cannot yet set strong constraints on the amplitude of fluctuations on scales of hundreds of Mpc’s. Perhaps the most promising method for the future is the kinematic Sunyaev-Zeldovich effect, which allows one to measure the peculiar velocities of clusters out to high redshift.
The agreement between the CMB dipole and the dipole anisotropy of relatively nearby galaxies argues in favour of large scale homogeneity. The IRAS dipole (Strauss et al 1992, Webster et al 1998, Schmoldt et al 1999) shows an apparent convergence of the dipole, with a misalignment angle of only $`15^{\circ }`$. Schmoldt et al. (1999) claim that 2/3 of the dipole arises from within $`40h^{-1}\mathrm{Mpc}`$, but again it is difficult to ‘prove’ convergence from catalogues of finite depth.
### 3.4 Radio Sources
Radio sources in surveys have typical median redshift $`\overline{z}\sim 1`$, and hence are useful probes of clustering at high redshift. Unfortunately, it is difficult to obtain distance information from these surveys: the radio luminosity function is very broad, and it is difficult to measure optical redshifts of distant radio sources. Earlier studies claimed that the distribution of radio sources supports the ‘Cosmological Principle’. However, the wide range in intrinsic luminosities of radio sources would dilute any clustering when projected on the sky (see Figure 2). Recent analyses of new deep radio surveys (e.g. FIRST) suggest that radio sources are actually clustered at least as strongly as local optical galaxies (e.g. Cress et al. 1996; Magliocchetti et al. 1998). Nevertheless, on the very large scales the distribution of radio sources seems nearly isotropic. Comparison of the measured quadrupole in a radio sample in the Green Bank and Parkes-MIT-NRAO 4.85 GHz surveys to the theoretically predicted ones (Baleisis et al. 1998) offers a crude estimate of the fluctuations on scales $`\lambda \sim 600h^{-1}`$ Mpc. The derived amplitudes are shown in Figure 3 for the two assumed Cold Dark Matter (CDM) models. Given the problems of catalogue matching and shot-noise, these points should be interpreted at best as ‘upper limits’, not as detections.
### 3.5 The XRB
Although the X-ray Background (XRB) was discovered in 1962, its origin is still unknown, but it is likely to be due to sources at high redshift (for a review see Boldt 1987; Fabian & Barcons 1992). The XRB sources are probably located at redshift $`z<5`$, making them convenient tracers of the mass distribution on scales intermediate between those in the CMB as probed by COBE, and those probed by optical and IRAS redshift surveys (see Figure 3).
The interpretation of the results depends somewhat on the nature of the X-ray sources and their evolution. By comparing the predicted multipoles to those observed by HEAO1 (Lahav et al. 1997; Treyer et al. 1998; Scharf et al. 1999) we estimate the amplitude of fluctuations for an assumed shape of the density fluctuations (e.g. CDM models). Figure 3 shows the amplitude of fluctuations derived at the effective scale $`\lambda \sim 600h^{-1}`$ Mpc probed by the XRB. The observed fluctuations in the XRB are roughly as expected from interpolating between the local galaxy surveys and the COBE CMB experiment. The rms fluctuations $`\frac{\delta \rho }{\rho }`$ on a scale of $`600h^{-1}`$ Mpc are less than 0.2 %.
### 3.6 The Lyman-$`\alpha `$ Forest
The Lyman-$`\alpha `$ forest reflects the neutral hydrogen distribution and therefore is likely to be a more direct tracer of the mass distribution than galaxies are. Unlike galaxy surveys, which are limited to the low redshift Universe, the forest spans a large redshift interval, typically $`1.8<z<4`$, corresponding to a comoving interval of $`\sim 600h^{-1}\mathrm{Mpc}`$. Also, observations of the forest are not contaminated by complex selection effects such as those inherent in galaxy surveys. It has been suggested qualitatively by Davis (1997) that the absence of big voids in the distribution of Lyman-$`\alpha `$ absorbers is inconsistent with the fractal model. Furthermore, all lines-of-sight towards quasars look statistically similar. Nusser & Lahav (1999) predicted the distribution of the flux in Lyman-$`\alpha `$ observations in a specific truncated fractal-like model. They found that indeed in this model there are too many voids compared with the observations and conventional (CDM-like) models for structure formation. This too supports the common view that on large scales the Universe is homogeneous.
## 4 Is the Universe Fractal ?
The question of whether the Universe is isotropic and homogeneous on large scales can also be phrased in terms of the fractal structure of the Universe. A fractal is a geometric shape that is not homogeneous, yet preserves the property that each part is a reduced-scale version of the whole. If the matter in the Universe were actually distributed like a pure fractal on all scales then the Cosmological Principle would be invalid, and the standard model in trouble. As shown in Figure 3, current data already strongly constrain any non-uniformities in the galaxy distribution (as well as the overall mass distribution) on scales $`>300h^{-1}\mathrm{Mpc}`$.
If we count, for each galaxy, the number of galaxies within a distance $`R`$ from it, and call the average number obtained $`N(<R)`$, then the distribution is said to be a fractal of correlation dimension $`D_2`$ if $`N(<R)\propto R^{D_2}`$. Of course $`D_2`$ may be 3, in which case the distribution is homogeneous rather than fractal. In the pure fractal model this power law holds for all scales of $`R`$.
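As a minimal illustration of this definition, the following Python sketch (a toy example on a synthetic point set, not an analysis of any survey) estimates $`D_2`$ as the slope of $`\mathrm{log}N(<R)`$ against $`\mathrm{log}R`$:

```python
# Toy estimate of the correlation dimension D2 for a homogeneous Poisson
# sample, for which D2 should come out close to 3 (up to edge effects).
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(1000, 3))
radii = np.logspace(-1.3, -0.7, 6)

# average number of neighbours within R (brute force, small sample)
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
N_of_R = [(dist < R).sum(axis=1).mean() - 1.0 for R in radii]  # subtract self

D2 = np.polyfit(np.log(radii), np.log(N_of_R), 1)[0]
print(f"D2 = {D2:.2f}")   # ~3 for a homogeneous distribution
```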
The fractal proponents (Pietronero et al. 1997) have estimated $`D_2\approx 2`$ for all scales up to $`500h^{-1}\mathrm{Mpc}`$, whereas other groups have obtained scale-dependent values (for review see Wu et al. 1999 and references therein).
Estimates of $`D_2`$ from the CMB and the XRB are consistent with $`D_2=3`$ to within $`10^{-4}`$ on the very large scales (Peebles 1993; Wu et al. 1999). While we reject the pure fractal model in this review, the performance of CDM-like models of fluctuations on large scales has yet to be tested without assuming homogeneity a priori. On scales below, say, $`30h^{-1}\mathrm{Mpc}`$, the fractal nature of clustering implies that one has to exercise caution when using statistical methods which assume homogeneity (e.g. in deriving cosmological parameters). We emphasize that we only considered one ‘alternative’ here, which is the pure fractal model where $`D_2`$ is a constant on all scales.
## 5 More Realistic Inhomogeneous Models
As the Universe appears clumpy on small scales it is clear that assuming the Cosmological Principle and the FRW metric is only an approximation, and one has to average the density carefully in Newtonian cosmology (Buchert & Ehlers 1997). Several models in which the matter is clumpy (e.g. ’Swiss cheese’ and voids) have been proposed (e.g. Zeldovich 1964; Krasinski 1997; Kantowski 1998; Dyer & Roeder 1973; Holz & Wald 1998; Célérier 1999; Tomita 1999). For example, if the line-of-sight to a distant object is ‘empty’ it results in a gravitational lensing de-magnification of the object. This modifies the FRW luminosity-distance relation, with a clumping factor as another free parameter. When applied to a sample of SNIa the density parameter of the Universe $`\mathrm{\Omega }_m`$ could be underestimated if FRW is used (Kantowski 1998; Perlmutter et al. 1999). Metcalf & Silk (1999) pointed out that this effect can be used as a test for the nature of the dark matter, i.e. to test if it is smooth or clumpy.
## 6 A ‘Best Fit Universe’: a Cosmic Harmony ?
Several groups (e.g. Eisenstein, Hu & Tegmark 1998; Webster et al. 1998; Gawiser & Silk 1998; Bridle et al. 1999) have recently estimated cosmological parameters by joint analysis of data sets (e.g. CMB, SN, redshift surveys, cluster abundance and peculiar velocities) in the framework of FRW cosmology. The idea is that cosmological parameters can be better estimated through the complementarity of the different probes.
While this approach is promising and we will see more of it in the next generation of galaxy and CMB surveys (2dF/SDSS/MAP/Planck), it is worth emphasizing a ‘health warning’ on this approach. First, the choice of parameter space is arbitrary, and in the Bayesian framework there is freedom in choosing a prior for the model. Second, the ‘topology’ of the parameter space is only helpful when ‘ridges’ of two likelihood ‘mountains’ cross each other (e.g. as in the case of the CMB and the SN). It is more problematic if the joint maximum ends up in a ‘valley’. Finally, there is the uncertainty that a sample does not represent a typical patch of the FRW Universe well enough to yield reliable global cosmological parameters.
### 6.1 Cosmological Parameters
Webster et al. (1998) combined results from a range of CMB experiments with a likelihood analysis of the IRAS 1.2Jy survey, performed in spherical harmonics. This method expresses the effects of the underlying mass distribution on both the CMB potential fluctuations and the IRAS redshift distortion. This breaks the degeneracy e.g. between $`\mathrm{\Omega }_m`$ and the bias parameter. The family of CDM models analysed corresponds to a spatially-flat Universe with an initially scale-invariant spectrum and a cosmological constant $`\lambda `$. Free parameters in the joint model are the mass density due to all matter ($`\mathrm{\Omega }_m`$), Hubble’s parameter ($`h=H_0/100`$ km sec<sup>-1</sup> Mpc<sup>-1</sup>), IRAS light-to-mass bias ($`b_{iras}`$) and the variance in the mass density field measured in an $`8h^{-1}`$ Mpc radius sphere ($`\sigma _8`$). For fixed baryon density $`\mathrm{\Omega }_b=0.02/h^2`$ the joint optimum lies at $`\mathrm{\Omega }_m=1-\lambda =0.41\pm 0.13`$, $`h=0.52\pm 0.10`$, $`\sigma _8=0.63\pm 0.15`$, $`b_{iras}=1.28\pm 0.40`$ (marginalised 1-sigma error bars). For these values of $`\mathrm{\Omega }_m,\lambda `$ and $`H_0`$ the age of the Universe is $`16.6`$ Gyr.
The above parameters correspond to the combination of parameters $`\mathrm{\Omega }_m^{0.6}\sigma _8=0.4\pm 0.2`$. This is in good agreement with results from cluster abundance (Eke et al. 1998), $`\mathrm{\Omega }_m^{0.5}\sigma _8=0.5\pm 0.1`$. By combining the abundance of clusters with the CMB and IRAS, Bridle et al. (1999) found $`\mathrm{\Omega }_m=1-\lambda =0.36`$, $`h=0.54`$, $`\sigma _8=0.74`$, and $`b_{iras}=1.08`$ (with error bars similar to those above).
On the other hand, results from peculiar velocities yield higher values (Zehavi & Dekel 1999), $`\mathrm{\Omega }_m^{0.6}\sigma _8=0.8\pm 0.1`$. By combining the peculiar velocities (from the SFI sample) with the CMB and SN Ia one obtains overlapping likelihoods at the level of $`2\sigma `$ (Bridle et al. 2000). The best fit parameters are $`\mathrm{\Omega }_m=1-\lambda =0.42`$, $`h=0.63`$, and $`\sigma _8=1.24`$.
### 6.2 Hyper-Parameters
A complication that arises in combining data sets is that there is freedom in assigning the relative weights of different measurements. A Bayesian approach to the problem utilises ‘Hyper Parameters’ (Lahav et al. 2000).
Assume that we have 2 independent data sets, $`D_A`$ and $`D_B`$ (with $`N_A`$ and $`N_B`$ data points respectively) and that we wish to determine a vector of free parameters $`𝐰`$ (such as the density parameter $`\mathrm{\Omega }_\mathrm{m}`$, the Hubble constant $`H_0`$ etc.). This is commonly done by minimising
$$\chi _{\mathrm{joint}}^2=\chi _A^2+\chi _B^2,$$
(1)
(or maximizing the sum of log likelihood functions). Such procedures assume that the quoted observational random errors can be trusted, and that the two (or more) $`\chi ^2`$s have equal weights. However, when combining ‘apples and oranges’ one may wish to allow freedom in the relative weights. One possible approach is to generalise Eq. 1 to be
$$\chi _{\mathrm{joint}}^2=\alpha \chi _A^2+\beta \chi _B^2,$$
(2)
where $`\alpha `$ and $`\beta `$ are ‘Lagrange multipliers’, or ‘Hyper-Parameters’ (hereafter HPs), which are to be evaluated in a Bayesian way. There are a number of ways to interpret the meaning of the HPs. A simple example of the HPs is the case that
$$\chi _A^2=\sum _i\frac{1}{\sigma _i^2}[x_{\mathrm{obs},i}-x_{\mathrm{pred},i}(𝐰)]^2,$$
(3)
where the sum is over $`N_A`$ measurements and corresponding predictions and errors $`\sigma _i`$. Hence by multiplying $`\chi ^2`$ by $`\alpha `$ each error effectively becomes $`\alpha ^{-1/2}\sigma _i`$. But even if the measurement errors are accurate, the HPs are useful in assessing the relative weight of different experiments. It is not uncommon that astronomers discard measurements (i.e. by assigning $`\alpha =0`$) in an ad-hoc way. The procedure we propose gives an objective diagnostic as to which measurements are problematic and deserve further understanding of systematic or random errors.
If the prior probabilities for $`\mathrm{ln}(\alpha )`$ and $`\mathrm{ln}(\beta )`$ are uniform then one should consider the quantity
$$-2\mathrm{ln}P(𝐰|D_A,D_B)=N_A\mathrm{ln}(\chi _A^2)+N_B\mathrm{ln}(\chi _B^2)$$
(4)
instead of Eq. 1.
More generally for $`M`$ data sets one should minimise
$$-2\mathrm{ln}P(𝐰|\mathrm{data})=\sum _{j=1}^{M}N_j\mathrm{ln}(\chi _j^2),$$
(5)
where $`N_j`$ is the number of measurements in data set $`j=1,\mathrm{},M`$. It is as easy to calculate this statistic as the standard $`\chi ^2`$. The corresponding HPs can be identified as $`\alpha _{\mathrm{eff},j}=N_j/\chi _j^2`$ (where the $`\chi _j^2`$’s are evaluated at the values of the parameters $`𝐰`$ that minimise eq. 4) and they provide useful diagnostics on the reliability of different data sets. We emphasize that a low HP assigned to an experiment does not necessarily mean that the experiment is ‘bad’, but rather it calls attention to look for systematic effects or better modelling. The method is illustrated (Lahav et al. 2000) by estimating the Hubble constant $`H_0`$ from different sets of recent CMB experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang).
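As a toy illustration of this prescription, the following Python sketch (entirely illustrative: the data sets, errors and the single fitted parameter are invented for the example) minimizes eq. 5 for two simulated data sets and reads off the effective HPs $`\alpha _{\mathrm{eff},j}=N_j/\chi _j^2`$:

```python
# Combine two toy data sets by minimizing sum_j N_j ln(chi_j^2(w)).
import numpy as np
from scipy.optimize import minimize_scalar

def chi2(w, x, sigma):
    return np.sum(((x - w) / sigma) ** 2)

rng = np.random.default_rng(1)
data_A = rng.normal(1.0, 0.1, 50)   # well-behaved experiment
data_B = rng.normal(1.3, 0.1, 20)   # offset experiment (hidden systematic)

def objective(w):
    return (len(data_A) * np.log(chi2(w, data_A, 0.1))
            + len(data_B) * np.log(chi2(w, data_B, 0.1)))

w_best = minimize_scalar(objective, bounds=(0.0, 2.0), method="bounded").x
for name, d in [("A", data_A), ("B", data_B)]:
    print(name, "alpha_eff =", len(d) / chi2(w_best, d, 0.1))
# The discrepant set B receives alpha_eff well below 1, flagging it for scrutiny.
```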
## 7 Discussion
Analysis of the CMB, the XRB, radio sources and the Lyman-$`\alpha `$ forest, which probe scales of $`100-1000h^{-1}\mathrm{Mpc}`$, strongly supports the Cosmological Principle of homogeneity and isotropy. They rule out a pure fractal model. However, there is a need for more realistic inhomogeneous models for the small scales. This is in particular important for understanding the validity of cosmological parameters obtained within the standard FRW cosmology.
Joint analyses of the CMB, IRAS, SN, cluster abundance and peculiar velocities suggest $`\mathrm{\Omega }_m=1-\lambda \approx 0.4`$. With the dramatic increase of data, we should soon be able to map the fluctuations with scale and epoch, and to analyze jointly LSS (2dF, SDSS) and CMB (MAP, Planck) data, taking into account generalized forms of biasing.
Acknowledgments I thank my collaborators for their contribution to the work presented here.
## Abstract
For the Becchi-Rouet-Stora-Tyutin (BRST) invariant extended action for any gauge theory, there exists another off-shell nilpotent symmetry. For linear gauges, it can be elevated to a symmetry of the quantum theory and used in the construction of the quantum effective action. Generalizations for nonlinear gauges and actions with higher order ghost terms are also possible.
Introduction: Quantization of gauge theories requires gauge-fixing, and for most gauges, the introduction of ghost fields. The resulting theory is invariant, not under the gauge symmetry itself, but under the Becchi-Rouet-Stora-Tyutin (BRST) symmetry. Nilpotence of the BRST transformation allows it to be extended to a symmetry of the quantum theory at all orders of the perturbation series, which allows order by order cancellation of infinities by the introduction of appropriate operators in the action. The quantum effective action is then the most general function invariant under this symmetry as well as all other known quantum symmetries of the theory.
In any useful gauge theory, gauge fields are coupled to many other fields. For general gauge theories, several of these fields may have the same Lorentz and gauge transformation properties. This leads to an enormous number of possible terms in the quantum effective action. If the theory is renormalizable, most of these terms have to vanish, leaving only those which are identical with the tree-level action, up to multiplicative constants. The BRST symmetry, imposed through the Slavnov-Taylor operator, ensures this stability for the physical ghost-free gauge-invariant part of the action. The demonstration of stability of the gauge-fixing and ghost terms requires auxiliary conditions.
In Landau-type gauges, the auxiliary conditions used are the ghost equation, the antighost equation, and their commutators with the Slavnov-Taylor operator. In more general linear gauges the antighost equation picks up a nonlinear breaking term and thus loses its usefulness. This is particularly inconvenient for linear interpolating ($`R_\xi `$ type) gauges. It is also inconvenient for gauge theories involving several fields with similar group and/or Lorentz transformation properties. For example, in theories involving non-Abelian two-form fields, one finds auxiliary vector fields and corresponding scalar ghosts. These can mix with the usual vector or ghost fields. As a result the proof of stability of general linear (including interpolating) gauges in such theories can be quite long. For nonlinear gauges and for actions containing terms of quartic or higher order in the ghost fields, proofs of stability are even more complicated.
In this paper I show that the BRST-invariant extended action for any gauge theory admits another, gauge fermion dependent, nilpotent symmetry, which does not seem to have been noticed earlier. This symmetry differs from the BRST symmetry only in its action on trivial pairs. For some special kinds of action, for example those which are quadratic in ghost fields, it becomes identical with the BRST symmetry upon using equations of motion. However, off shell it is always different from BRST, and can be used as an auxiliary condition to uniquely determine the quantum effective action of a gauge theory, including ghost and gauge-fixing terms. This symmetry holds in general linear gauges in addition to BRST, so it is particularly convenient for proving the uniqueness and stability of the ghost and gauge fixing sector of gauge theories outside Landau gauge, unlike the usual algebraic renormalization scheme. Below, I construct this symmetry, first when the theory has only fermionic ghosts, and then for a theory with both fermionic and bosonic ghosts. As a simple illustration I will apply this construction to the example of Yang-Mills theory, but the real convenience of this symmetry becomes apparent when it is applied to theories with larger field content, such as theories of $`p`$-form gauge fields. A generalization of the construction, somewhat similar to the well known antifield construction (see for a review) suggests itself for theories with higher order ghost terms, and is discussed at the end of the paper.
The extended ghost sector of the tree-level quantum action of a gauge theory can be written in the general form
$`S_{ext}^c=h^Af^A+{\displaystyle \frac{1}{2}}\lambda h^Ah^A+\overline{\omega }^A\mathrm{\Delta }^A.`$ (1)
The anticommuting antighosts $`\overline{\omega }^A`$ and the corresponding auxiliary fields $`h^A`$ form what are known as trivial pairs. Here the index $`A`$ stands for the collection of all indices, $`f^A=0`$ is the gauge-fixing condition, $`\lambda `$ is a constant gauge-fixing parameter, and $`\mathrm{\Delta }^A`$ is the BRST variation of the gauge-fixing function, $`\mathrm{\Delta }^A=sf^A`$. The sum over $`A`$ includes the integration over space-time, and $`f^A`$ has been chosen to be independent of $`\overline{\omega }^A`$ and $`h^A`$. This part of the action remains invariant if the trivial pair transform under BRST as
$`s\overline{\omega }^A=-h^A,sh^A=0,`$ (2)
and can be written as a BRST differential of a gauge-fixing fermion $`\mathrm{\Psi }`$,
$`S_{ext}^c=s\left(-\overline{\omega }^Af^A-\frac{1}{2}\lambda \overline{\omega }^Ah^A\right)\equiv s\mathrm{\Psi }.`$ (3)
On the other hand, I can rearrange $`S_{ext}^c`$ as
$`S_{ext}^c`$ $`=`$ $`\frac{1}{2}\lambda \left(h^A+\frac{1}{\lambda }f^A\right)\left(h^A+\frac{1}{\lambda }f^A\right)-\frac{1}{2\lambda }f^Af^A+\overline{\omega }^A\mathrm{\Delta }^A`$ (4)
$`=`$ $`\frac{1}{2}\lambda \left(\left(h^A+\frac{2}{\lambda }f^A\right)-\frac{1}{\lambda }f^A\right)\left(\left(h^A+\frac{2}{\lambda }f^A\right)-\frac{1}{\lambda }f^A\right)-\frac{1}{2\lambda }f^Af^A+\overline{\omega }^A\mathrm{\Delta }^A`$ (5)
$`=`$ $`h^{\prime A}f^A+\frac{1}{2}\lambda h^{\prime A}h^{\prime A}+\overline{\omega }^A\mathrm{\Delta }^A,`$ (6)
where I have defined $`h^{\prime A}=-h^A-\frac{2}{\lambda }f^A`$. Now $`S_{ext}^c`$ has the same functional form as before, but in terms of a redefined auxiliary field $`h^{\prime A}`$. It follows that $`S_{ext}^c`$ is invariant under a new set of transformations:
$`\stackrel{~}{s}\overline{\omega }^A=-h^{\prime A}`$ $`\Rightarrow `$ $`\stackrel{~}{s}\overline{\omega }^A=h^A+\frac{2}{\lambda }f^A,`$ (7)
$`\stackrel{~}{s}h^{\prime A}=0`$ $`\Rightarrow `$ $`\stackrel{~}{s}h^A=-\frac{2}{\lambda }\stackrel{~}{s}f^A,`$ (8)
$`\stackrel{~}{s}`$ $`=`$ $`s`$ on all other fields. (9)
It follows that $`\stackrel{~}{s}`$ is nilpotent on all fields, $`\stackrel{~}{s}^2=0`$: for example, $`\stackrel{~}{s}^2\overline{\omega }^A=\stackrel{~}{s}h^A+\frac{2}{\lambda }\stackrel{~}{s}f^A=0`$, while $`\stackrel{~}{s}^2h^A=-\frac{2}{\lambda }\stackrel{~}{s}^2f^A=0`$ because $`\stackrel{~}{s}`$ coincides with the nilpotent $`s`$ on the fields appearing in $`f^A`$. It should be emphasized that $`\stackrel{~}{s}`$ is a symmetry of the original ($`s`$-invariant) action itself, not some special feature of the construction procedure.
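The purely bosonic part of this rearrangement is easy to verify independently. The following sketch (Python with sympy; the variable names are mine, and the Grassmann-odd term $`\overline{\omega }^A\mathrm{\Delta }^A`$, which is untouched by the redefinition, is omitted) checks that eq. (6) indeed reproduces the functional form of eq. (1):

```python
# Symbolic check of the completing-the-square step in eqs. (4)-(6):
# with h' = -h - (2/lambda) f the quadratic part keeps its functional form.
import sympy as sp

h, f, lam = sp.symbols("h f lam")
h_prime = -h - 2 * f / lam

original  = h * f + sp.Rational(1, 2) * lam * h**2
rewritten = h_prime * f + sp.Rational(1, 2) * lam * h_prime**2

assert sp.simplify(original - rewritten) == 0
print("h'f + (lam/2) h'^2 = hf + (lam/2) h^2")
```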
When the extended sector corresponds to the gauge-fixing of an anticommuting gauge field, as can happen for theories with reducible gauge symmetries, the construction is slightly more complicated, since the auxiliary fields now have odd ghost number. Typically, the extended ghost sector in this case can be written with anticommuting auxiliary fields $`\overline{\alpha }^A`$, $`\alpha ^A`$ as
$`S_{ext}^a=\overline{\alpha }^Af^A+\overline{f}^A\alpha ^A+\zeta \overline{\alpha }^A\alpha ^A+\overline{\beta }^A\mathrm{\Delta }^A.`$ (10)
In this, $`f^A`$ is the anticommuting gauge-fixing function, $`\mathrm{\Delta }^A=sf^A`$, and $`\overline{\beta }^A`$ is the corresponding commuting antighost. The term $`\overline{f}^A\alpha ^A`$ is a rearrangement of the appropriate terms in $`\overline{\omega }^A\mathrm{\Delta }^A`$ which appear in $`S_{ext}^c`$ of Eq. (1) for the usual gauge symmetries. Such terms are not affected by the redefinitions in Eq. (6), so they will appear in Eq. (10). In addition to Eq. (2), the BRST transformations on the extended sector now include $`s\overline{\beta }^A=\overline{\alpha }^A,s\overline{\alpha }^A=s\alpha ^A=0`$, and $`s(S_{ext}^c+S_{ext}^a)=0`$, although $`S_{ext}^c`$ and $`S_{ext}^a`$ are not separately BRST-invariant.
Just as in the case with commuting auxiliary fields, the terms in $`S_{ext}^a`$ can be rearranged,
$`S_{ext}^a`$ $`=`$ $`\zeta \left(\overline{\alpha }^A+{\displaystyle \frac{1}{\zeta }}\overline{f}^A\right)\left(\alpha ^A+{\displaystyle \frac{1}{\zeta }}f^A\right)-{\displaystyle \frac{1}{\zeta }}\overline{f}^Af^A+\overline{\beta }^A\mathrm{\Delta }^A`$ (11)
$`=`$ $`\zeta \left(\left(\overline{\alpha }^A+{\displaystyle \frac{2}{\zeta }}\overline{f}^A\right)-{\displaystyle \frac{1}{\zeta }}\overline{f}^A\right)\left(\left(\alpha ^A+{\displaystyle \frac{2}{\zeta }}f^A\right)-{\displaystyle \frac{1}{\zeta }}f^A\right)-{\displaystyle \frac{1}{\zeta }}\overline{f}^Af^A+\overline{\beta }^A\mathrm{\Delta }^A`$ (12)
$`=`$ $`\zeta \overline{\alpha }^{\prime A}\alpha ^{\prime A}+\overline{\alpha }^{\prime A}f^A+\overline{f}^A\alpha ^{\prime A}+\overline{\beta }^A\mathrm{\Delta }^A,`$ (13)
where I have now defined $`\overline{\alpha }^{\prime A}=-\left(\overline{\alpha }^A+{\displaystyle \frac{2}{\zeta }}\overline{f}^A\right)`$ and $`\alpha ^{\prime A}=-\left(\alpha ^A+{\displaystyle \frac{2}{\zeta }}f^A\right)`$. As before, a new set of BRST transformations can be defined for $`S_{ext}^a`$,
$`\stackrel{~}{s}\overline{\beta }^A`$ $`=`$ $`\overline{\alpha }^{\prime A}=-\left(\overline{\alpha }^A+{\displaystyle \frac{2}{\zeta }}\overline{f}^A\right),`$ (14)
$`\stackrel{~}{s}\overline{\alpha }^{\prime A}`$ $`=`$ $`0\Rightarrow \stackrel{~}{s}\overline{\alpha }^A=-{\displaystyle \frac{2}{\zeta }}\stackrel{~}{s}\overline{f}^A,`$ (15)
$`\stackrel{~}{s}\alpha ^{\prime A}`$ $`=`$ $`0\Rightarrow \stackrel{~}{s}\alpha ^A=-{\displaystyle \frac{2}{\zeta }}\stackrel{~}{s}f^A,`$ (16)
$`\stackrel{~}{s}`$ $`=`$ $`s\text{ on all other fields.}`$ (17)
Since $`\alpha ^A`$ was the result of the BRST variation of some field, $`\alpha ^{\prime A}`$ has to be the variation under $`\stackrel{~}{s}`$ of the same field, and $`\stackrel{~}{s}\overline{f}^A`$ must be calculated according to the rules of Eq. (9). In addition, the action of $`\stackrel{~}{s}`$ must be the same as that of $`s`$ for the fields contained in $`f^A`$. Then $`\stackrel{~}{s}^2=0`$ on all fields, and $`\stackrel{~}{s}(S_{ext}^c+S_{ext}^a)=0`$.
Example: Let me consider a concrete example, and construct this symmetry for Yang-Mills theory in an arbitrary (linear or nonlinear) gauge-fixing function $`f^a`$. The tree-level quantum action is in this case
$$S=\int d^4x\left(-\frac{1}{4}F_{\mu \nu }^aF^{a\mu \nu }+h^af^a+\overline{\omega }^a\mathrm{\Delta }^a+\frac{1}{2}\xi h^ah^a\right),$$
(18)
where $`a`$ is the gauge index. This is invariant under the BRST transformations
$`sA_\mu ^a`$ $`=`$ $`\partial _\mu \omega ^a+gf^{abc}A_\mu ^b\omega ^c,s\overline{\omega }^a=-h^a,`$ (19)
$`s\omega ^a`$ $`=`$ $`-{\displaystyle \frac{1}{2}}gf^{abc}\omega ^b\omega ^c,sh^a=0.`$ (20)
Following the rules of Eq. (9), I obtain
$`\stackrel{~}{s}\overline{\omega }^a`$ $`=`$ $`h^a+{\displaystyle \frac{2}{\xi }}f^a,\stackrel{~}{s}h^a=-{\displaystyle \frac{2}{\xi }}\mathrm{\Delta }^a,`$ (21)
$`\stackrel{~}{s}`$ $`=`$ $`s\text{ on all other fields.}`$ (22)
By construction $`\stackrel{~}{s}`$ is a symmetry of the action, $`\stackrel{~}{s}S=0`$, and nilpotent, $`\stackrel{~}{s}^2=0`$. It is straightforward to check these two properties explicitly for this example of Yang-Mills theory.
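For completeness, the invariance is easy to verify on the gauge-fixing sector alone; the Yang-Mills term is untouched because $`\stackrel{~}{s}=s`$ on $`A_\mu ^a`$ and $`\omega ^a`$, while $`\stackrel{~}{s}\mathrm{\Delta }^a=s^2f^a=0`$, so that
$`\stackrel{~}{s}\left(h^af^a+\overline{\omega }^a\mathrm{\Delta }^a+{\displaystyle \frac{1}{2}}\xi h^ah^a\right)=\left(-{\displaystyle \frac{2}{\xi }}\mathrm{\Delta }^af^a+h^a\mathrm{\Delta }^a\right)+\left(h^a+{\displaystyle \frac{2}{\xi }}f^a\right)\mathrm{\Delta }^a-2h^a\mathrm{\Delta }^a=0,`$
where the $`f^a\mathrm{\Delta }^a`$ terms cancel because $`f^a`$ is commuting.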
Any symmetry is a useful property of a theory; just how useful depends on both the symmetry and the theory. Let me show here how this symmetry can be used jointly with BRST to ease calculations. The quantum effective action $`\mathrm{\Gamma }[\chi ,K]`$ defined in the presence of background c-number sources $`K^A`$ for the BRST variations $`F^A=s\chi ^A`$ obeys the Zinn-Justin equation $`(\mathrm{\Gamma },\mathrm{\Gamma })=0`$, where $`(,)`$ is the antibracket in terms of $`\chi ^A`$ and their sources for BRST variations, $`K^A`$. Note that $`\mathrm{\Gamma }[\chi ,K]`$ does not contain the sources for the BRST variations of auxiliary fields of the type $`h^A,\alpha ^A,\overline{\alpha }^A`$, etc. which are BRST invariant. Also note that $`\mathrm{\Gamma }_{N,\mathrm{\infty }}`$, which is the infinite part of the $`N`$-loop contribution to $`\mathrm{\Gamma }`$, does not contain the sources for the BRST variations of antighosts of the type $`\overline{\omega }^A`$, because their BRST variations are linear in the fields .
For most physically interesting cases the effective action is at most linear in the remaining $`K^A`$ on dimensional grounds. This is the case for pure Yang-Mills fields, as well as several theories with Yang-Mills fields coupled to various other fields, in four dimensions. For these theories, the Zinn-Justin equation reduces to the statement that for infinitesimal $`ϵ`$, $`S_R+ϵ\mathrm{\Gamma }_{N,\mathrm{\infty }}`$ is invariant under the quantum BRST symmetry $`s_R`$, which is just the most general nilpotent symmetry built out of the fields in the theory, and which reduces to the original BRST symmetry at tree-level .
Let me look at this class of theories, viz., those for which $`\mathrm{\Gamma }[\chi ,K]`$ has been shown to be at most linear in the $`K^A`$. Let me also assume that the quantum BRST transformation $`s_R`$ has been found by solving the Zinn-Justin equation. In order to see the effect of the gauge-dependent symmetry $`\stackrel{~}{s}`$ on the quantum theory, I take the same effective action $`\mathrm{\Gamma }[\chi ,K]`$ with the same sources. Of course $`\stackrel{~}{s}`$ is a gauge-dependent symmetry; nonetheless, it can be elevated to a symmetry of the quantum effective action if the gauge-fixing functions are linear in the fields. I shall denote the minimal fields by $`\varphi ^A`$ and non-minimal fields by $`\lambda ^A`$. Then $`\stackrel{~}{s}\varphi ^A=s\varphi ^A`$, and consequently $`\stackrel{~}{s}s\varphi ^A=0`$. Applying $`\stackrel{~}{s}`$ to the partition function gives (since the tree-level action $`S`$ is invariant under $`\stackrel{~}{s}`$),
$$\langle F^A\rangle \frac{\delta _L\mathrm{\Gamma }[\chi ,K]}{\delta \varphi ^A}+\langle \stackrel{~}{s}\lambda ^A\rangle \frac{\delta _L\mathrm{\Gamma }[\chi ,K]}{\delta \lambda ^A}+\langle \stackrel{~}{s}s\lambda ^A\rangle K^A[\lambda ]=0.$$
(23)
Here $`\langle \cdot \rangle `$ denotes the quantum average in the presence of sources, specified such that the quantum average of a field is the field itself . So far the gauge could be arbitrary. For the special case where the gauge-fixing functions are assumed to be linear in the fields, $`\stackrel{~}{s}\lambda ^A`$ as defined in Eq. (9) and Eq. (17) is either linear in the fields or equals the BRST variation of some linear function of the fields. Either way, $`\stackrel{~}{s}\lambda ^A`$ is known explicitly. In addition, the effective action does not contain the sources for BRST variations of $`(h^A,\alpha ^A,\overline{\alpha }^A)`$ etc. and only $`S_R`$ contains the sources for the BRST variations of $`(\overline{\omega }^A,\overline{\beta }^A)`$ etc. Then I can read off from Eq. (23) that $`S_R+ϵ\mathrm{\Gamma }_{N,\mathrm{\infty }}`$ is invariant under $`\stackrel{~}{s}_R`$, which is just $`\stackrel{~}{s}`$ as calculated in terms of the quantum BRST transformation $`s_R`$.
Going back to the example of Yang-Mills theory in four dimensions, I obtain directly from Eq. (23) that in a linear gauge the quantum symmetry corresponding to $`\stackrel{~}{s}`$ is given just by Eqs. (21) and (22) with $`s`$ and $`\stackrel{~}{s}`$ replaced by $`s_R`$ and $`\stackrel{~}{s}_R`$, respectively, where $`s_R`$ is the usual quantum BRST transformation for Yang-Mills fields . The ghost sector of the general effective action can now be obtained through an extremely short calculation. Let me define $`s_R^{\prime }=\frac{1}{2}(s_R-\stackrel{~}{s}_R)`$. Then
$`s_R^{\prime }\overline{\omega }^a`$ $`=`$ $`-\left(h^a+{\displaystyle \frac{1}{\xi }}f^a\right),s_R^{\prime }h^a={\displaystyle \frac{1}{\xi }}\mathrm{\Delta }_R^a,`$ (24)
$`s_R^{\prime }`$ $`=`$ $`0\text{ on all other fields}.`$ (25)
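Explicitly, the two transformations cancel on every field except the trivial pair:
$`s_R^{\prime }\overline{\omega }^a={\displaystyle \frac{1}{2}}\left[-h^a-\left(h^a+{\displaystyle \frac{2}{\xi }}f^a\right)\right]=-\left(h^a+{\displaystyle \frac{1}{\xi }}f^a\right),\qquad s_R^{\prime }h^a={\displaystyle \frac{1}{2}}\left[0+{\displaystyle \frac{2}{\xi }}\mathrm{\Delta }_R^a\right]={\displaystyle \frac{1}{\xi }}\mathrm{\Delta }_R^a.`$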
Since Yang-Mills theory is power-counting renormalizable in four dimensions, the infinite part of the $`N`$-loop quantum effective action, after infinities up to $`N-1`$ loops have been absorbed into counterterms, is an integrated local functional of mass dimension four. So on dimensional grounds and because the effective action must have zero ghost number, it can be at most quadratic in the trivial pair $`\lambda ^A\equiv (\overline{\omega }^a,h^a)`$ . So I can write
$$\mathrm{\Gamma }=S_C+\lambda ^AX^A+\lambda ^A\lambda ^BX^{AB},$$
(26)
where $`S_C`$ does not contain any ghost or auxiliary field, and $`X^A`$ and $`X^{AB}`$ do not contain any of the $`\lambda ^A`$. Then the coefficients of different powers of the $`\lambda ^A`$ in the equation $`s_R^{\prime }\mathrm{\Gamma }=0`$ must vanish. In particular, the terms quadratic or linear in $`\lambda ^A`$ give
$`X_{\overline{\omega }\overline{\omega }}^{ab}=X_{\overline{\omega }h}^{ab}`$ $`=`$ $`0,`$ (27)
$`-X_{\overline{\omega }}^a+{\displaystyle \frac{2}{\xi }}\mathrm{\Delta }_R^bX_{hh}^{ab}`$ $`=`$ $`0,`$ (28)
while the terms independent of $`\lambda ^A`$ in $`s_R^{\prime }\mathrm{\Gamma }=0`$ give
$$-f^aX_{\overline{\omega }}^a+\mathrm{\Delta }_R^aX_h^a=0.$$
(29)
In addition, antighosts and auxiliary fields transform among themselves under BRST, so I can also consider the coefficients of $`\lambda ^A`$ in $`s_R\mathrm{\Gamma }=0`$ to obtain some independent equations,
$$s_RX_{\overline{\omega }}^a=0=s_RX_{hh}^{ab},X_{\overline{\omega }}^a=s_RX_h^a.$$
(30)
Here the function $`X_{hh}^{ab}`$ has vanishing mass dimension and ghost number. It follows from the above equation that $`X_{hh}^{ab}`$ is purely numerical; since we are dealing with the SU(N) algebra and $`X_{hh}^{ab}`$ is clearly symmetric in $`(a,b)`$, it must be proportional to $`\delta ^{ab}`$. Then
$$X_{hh}^{ab}=\frac{\xi Z_\omega }{2}\delta ^{ab},X_{\overline{\omega }}^a=Z_\omega \mathrm{\Delta }_R^a\text{ and }X_h^a=Z_\omega f^a,$$
(31)
for some constant $`Z_\omega `$. (The last equation follows from combining Eqs. (29) and (30).) Therefore the quantum effective action takes the form
$$\mathrm{\Gamma }=S_C+Z_\omega \overline{\omega }^a\mathrm{\Delta }_R^a+Z_\omega h^af^a+\frac{\xi }{2}Z_\omega h^ah^a,$$
(32)
where $`S_C`$ is the most general ghost-free polynomial of dimension four symmetric under $`s_R`$ and all linear symmetries of the classical theory. Note that I did not need to assume any specific gauge-fixing function, only that it is linear.
It is known that the problem of stability of the ghost (and gauge-fixing) sector of gauge theories can be solved by using the ghost and antighost equations as auxiliary conditions in the usual algebraic renormalization scheme . However, those equations are in their most useful form in the Landau gauge $`\xi =0`$, while the symmetry $`\stackrel{~}{s}`$ is defined for a non-zero $`\xi `$ and cannot even be constructed directly for $`\xi =0`$. (Of course, the $`\xi \to 0`$ limit can be taken after the effective action has been found.) Therefore the symmetry $`\stackrel{~}{s}`$ is not a reformulation of the usual auxiliary conditions. In particular, the use of $`\stackrel{~}{s}`$ as an auxiliary condition in the algebraic renormalization scheme, as opposed to the ghost and antighost equations, can be thought of as being complementary to those auxiliary conditions. This symmetry is especially useful in dealing with Yang-Mills type theories with a large number of fields, for which various different interpolating linear gauges are allowed. Examples are theories with $`p`$-form fields, as in the first order formulation of Yang-Mills theory or the topological mass generation mechanism. The technique described here provides a straightforward way of verifying the uniqueness of the gauge-fixing and ghost sector of such theories, as has been done in .
Generalizations: The calculations for the example were done assuming that the gauge condition is linear in the fields. This was mainly for convenience — just as for the usual BRST symmetry, results in linear gauges are easier to calculate and interpret. But even in nonlinear gauges, or for actions with quartic ghost terms, there is a corresponding nilpotent symmetry. Let me show the construction for Yang-Mills theory in four dimensions, generalizations to many other cases being fairly simple. First, the gauge-fixing fermion of Eq. (3) is generalized to include terms quadratic in the antighost, so that
$`S_{ext}^c`$ $`=`$ $`s\mathrm{\Psi }=s(-\overline{\omega }^af_0^a-{\displaystyle \frac{1}{2}}\xi \overline{\omega }^ah^a-{\displaystyle \frac{1}{2}}\overline{\omega }^a\overline{\omega }^bf_1^{ab})`$ (33)
$`=`$ $`\overline{\omega }^a\mathrm{\Delta }_0^a+h^af_0^a+{\displaystyle \frac{1}{2}}\xi h^ah^a+h^a\overline{\omega }^bf_1^{ab}-{\displaystyle \frac{1}{2}}\overline{\omega }^a\overline{\omega }^b\mathrm{\Delta }_1^{ab},`$ (34)
where $`f_0^a`$ and $`f_1^{ab}`$ do not contain $`\overline{\omega }^a`$ or $`h^a`$, but are arbitrary otherwise, and $`\mathrm{\Delta }_0^a=sf_0^a`$ and $`\mathrm{\Delta }_1^{ab}=sf_1^{ab}`$. For Yang-Mills theory in four dimensions, $`\mathrm{\Psi }`$ must be of dimension three or less, so there are no further terms. Now I can ‘complete the square’ as before, and write
$`S_{ext}^c`$ $`=`$ $`{\displaystyle \frac{1}{2}}\xi \left(h^a+{\displaystyle \frac{1}{\xi }}f^a\right)\left(h^a+{\displaystyle \frac{1}{\xi }}f^a\right)-{\displaystyle \frac{1}{2\xi }}f^af^a+\overline{\omega }^a\mathrm{\Delta }_0^a-{\displaystyle \frac{1}{2}}\overline{\omega }^a\overline{\omega }^b\mathrm{\Delta }_1^{ab},`$ (35)
where $`f^a=f_0^a+\overline{\omega }^bf_1^{ab}`$. Then as before I can define $`h^{\prime a}=-\left(h^a+{\displaystyle \frac{2}{\xi }}f^a\right)`$ and write
$`S_{ext}^c=\overline{\omega }^a\mathrm{\Delta }_0^a+h^{\prime a}f_0^a+{\displaystyle \frac{1}{2}}\xi h^{\prime a}h^{\prime a}+h^{\prime a}\overline{\omega }^bf_1^{ab}-{\displaystyle \frac{1}{2}}\overline{\omega }^a\overline{\omega }^b\mathrm{\Delta }_1^{ab},`$ (36)
which has the same functional form as Eq. (34), but with $`h^a`$ replaced by $`h^{\prime a}`$. So the new symmetry transformations are
$`\stackrel{~}{s}\overline{\omega }^a`$ $`=`$ $`h^a+{\displaystyle \frac{2}{\xi }}f_0^a+{\displaystyle \frac{2}{\xi }}\overline{\omega }^bf_1^{ab},`$ (37)
$`\stackrel{~}{s}h^a`$ $`=`$ $`-{\displaystyle \frac{2}{\xi }}\mathrm{\Delta }_0^a-{\displaystyle \frac{2}{\xi }}h^bf_1^{ab}-{\displaystyle \frac{4}{\xi ^2}}f_0^bf_1^{ab}-{\displaystyle \frac{4}{\xi ^2}}\overline{\omega }^cf_1^{bc}f_1^{ab}+{\displaystyle \frac{2}{\xi }}\overline{\omega }^b\mathrm{\Delta }_1^{ab},`$ (38)
$`\stackrel{~}{s}`$ $`=`$ $`s\text{ on all other fields}.`$ (39)
Again, this is a symmetry of the action $`S_{ext}^c`$, and therefore a symmetry of the full action including the gauge-invariant terms. Note that $`\stackrel{~}{s}`$ is again nilpotent by construction, $`\stackrel{~}{s}^2=0`$.
For the examples given so far, including the last one, $`\stackrel{~}{s}`$ differs from $`s`$ by a ‘trivial symmetry’ ,
$`\stackrel{~}{s}-s=\eta ^{AB}{\displaystyle \frac{\delta S}{\delta \chi ^A}}{\displaystyle \frac{\delta }{\delta \chi ^B}},`$ (40)
where in this case $`\chi ^A`$ is restricted to run over $`(\overline{\omega }^A,h^A)`$ and $`\eta ^{AB}`$ is graded antisymmetric in $`(A,B)`$. What makes $`\stackrel{~}{s}`$ special is the fact that it is nilpotent, since adding a trivial symmetry to BRST does not make an off-shell nilpotent symmetry in general. On the other hand, for general BRST-invariant actions, the two symmetries $`s`$ and $`\stackrel{~}{s}`$ need not be related by a trivial symmetry. For general actions, i.e., those which may include higher order ghost terms, the construction of $`\stackrel{~}{s}`$ can be generalized to give a nilpotent symmetry. To see how that can be done, note that for the examples above, $`h^{\prime A}=\delta \mathrm{\Psi }/\delta \overline{\omega }^A`$ up to a constant coefficient, as if $`h^{\prime A}`$ were the antifield of $`\overline{\omega }^A`$. It is worth emphasizing that $`h^{\prime A}`$ is not the antifield for $`\overline{\omega }^A`$. But this similarity suggests a generalization of the previous constructions in the following way.
Given a gauge invariant action $`S_0`$, let the ghost fields be defined as usual, and introduce a trivial pair $`(\overline{\omega }^A,h^A)`$ for each generator, with the BRST transformation law $`s\overline{\omega }^A=-h^A,sh^A=0`$. The gauge-fixing fermion $`\mathrm{\Psi }`$ is then constructed as some functional of ghost number $`-1`$, subject to any other known symmetry or dimensional restriction. The ghost sector of the action is then $`s\mathrm{\Psi }`$, so that the total action $`S_0+s\mathrm{\Psi }`$ is BRST-invariant. Now let a new BRST transformation $`\stackrel{~}{s}`$ be defined as $`\stackrel{~}{s}\overline{\omega }^A=-h^{\prime A},\stackrel{~}{s}h^{\prime A}=0`$, where $`h^{\prime A}=\delta \mathrm{\Psi }/\delta \overline{\omega }^A`$, and $`\stackrel{~}{s}=s`$ on all other fields. A new gauge fixing fermion $`\mathrm{\Psi }^{\prime }`$ is then constructed by replacing $`h^A`$ by $`h^{\prime A}`$ in $`\mathrm{\Psi }`$, i.e., $`\mathrm{\Psi }^{\prime }=\mathrm{\Psi }[h^A\to h^{\prime A}]`$, and a new ghost action is constructed as $`\stackrel{~}{s}\mathrm{\Psi }^{\prime }`$. The (new) total action $`S_0+\stackrel{~}{s}\mathrm{\Psi }^{\prime }`$ is then invariant under $`\stackrel{~}{s}`$.
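As a consistency check of this prescription, take the quadratic gauge fermion of Eq. (3), $`\mathrm{\Psi }=-(\overline{\omega }^Af^A+\frac{1}{2}\lambda \overline{\omega }^Ah^A)`$; then
$`h^{\prime A}={\displaystyle \frac{\delta \mathrm{\Psi }}{\delta \overline{\omega }^A}}=-\left(f^A+{\displaystyle \frac{1}{2}}\lambda h^A\right)={\displaystyle \frac{\lambda }{2}}\left[-\left(h^A+{\displaystyle \frac{2}{\lambda }}f^A\right)\right],`$
which reproduces the redefined auxiliary field of Eq. (6) up to the constant factor $`\lambda /2`$, as claimed.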
If $`\mathrm{\Psi }`$ is chosen to be the most general gauge fixing fermion, $`s\mathrm{\Psi }`$ would be the most general $`s`$-exact functional of vanishing ghost number. But now there are two actions, one constructed with $`s\mathrm{\Psi }`$, and the other with $`\stackrel{~}{s}\mathrm{\Psi }^{\prime }`$, and these two need not be equal when written in terms of the same $`h^A`$. For the situations where (as in all the examples above) $`\stackrel{~}{s}\mathrm{\Psi }[h^A\to h^{\prime A}]=s\mathrm{\Psi }`$ up to a finite number of irrelevant constants, the total action is invariant under two different off-shell nilpotent symmetries $`s`$ and $`\stackrel{~}{s}`$. This can be an immensely useful property for proving the uniqueness of the ghost action for complicated theories. In addition, since $`\stackrel{~}{s}`$ differs from BRST transformations only by its action on the trivial pair, it has the same cohomology as the BRST transformation itself . So there is no additional complication in calculating the structure of anomalies in the theory, which is determined fully by the BRST cohomology.
In summary, any BRST invariant action in linear or nonlinear gauge has another off-shell, nilpotent symmetry with the same cohomology as the BRST transformation. If the action contains up to quartic ghost terms, it is always symmetric under both BRST and this transformation. If it contains higher order ghost terms, one can construct another action which is symmetric under the new BRST transformation, and whose gauge-invariant component agrees with that of the original action. If the ghost sectors of the two actions agree as well, both transformations leave it invariant. This can simplify calculations of the counterterms, especially when the gauge-fixing term is linear in the fields.
Acknowledgement: It is a pleasure to thank M. Henneaux for a helpful comment. |
Coherent e+e- pair creation at high energy muon colliders. Talk at the Workshop Studies on Colliders and Collider Physics at the Highest Energies: Muon Colliders at 10 TeV to 100 TeV, 27 September - 1 October, 1999 Montauk, New York, USA, to be published by the American Institute of Physics.
## Abstract
It is shown that at muon colliders with the energy in the region of 100 TeV the process of coherent pair creation by the muon in the field of the opposing beam becomes important and imposes some limitations on collider parameters.
One of the main advantages of muon colliders is that the muon is much heavier than the electron and therefore the radiation (beamstrahlung) in beam collisions is suppressed. The relative energy loss during the beam collision is $`\mathrm{\Delta }E/E\propto EB^2/m^4`$.
However, there is another process in beam collisions which may be important for a high energy muon collider: the coherent e<sup>+</sup>e<sup>-</sup> pair creation. In this process the e<sup>+</sup>e<sup>-</sup> pair is created by a virtual photon in the strong field of the opposing muon beam (effective field $`B\simeq |E|+|B|`$). The process of coherent pair creation is very important for e<sup>+</sup>e<sup>-</sup> linear colliders . This process has a large probability at
$$\kappa =(\omega /m_ec^2)(B/B_0)>1,B_0=\alpha e/r_e^2\approx 4.4\times 10^{13}\text{ Gauss}.$$
(1)
At a 100 TeV muon collider the energy and beam field are even higher than those at linear e<sup>+</sup>e<sup>-</sup> colliders. So, one can expect that this process will be important for a muon collider as well, because naively the cross section of this process depends only on $`E,B,m_e`$, but not on $`m_\mu `$.
However, there is one effect in this process which makes the situation at electron and muon beams very different. In e<sup>+</sup>e<sup>-</sup> collisions, the maximum energy of virtual photons is almost equal to the electron energy, while at $`\mu \mu `$ colliders the maximum photon energy depends also on the mass of the produced system. This can be understood in the following way . The minimum value of the photon virtuality, which corresponds to the case when the virtual photon has zero transverse momentum ,
$$Q_{min}^2=-q_{min}^2=-(p-p^{\prime })^2\simeq \frac{m^2\omega ^2}{E(E-\omega )},$$
(2)
where $`m`$ is the mass of the beam particles. Also, the cross section of e<sup>+</sup>e<sup>-</sup> pair production is large only near the threshold, $`W^2\sim 4m_e^2`$. Besides, the cross section is negligible for $`Q_{max}^2>W^2`$, i.e. $`Q_{max}^2\sim m_e^2`$. As a result, from the inequality $`Q_{min}<Q_{max}`$ it follows
$$\omega <\gamma _\mu m_ec^2.$$
(3)
So, only photons with the energy $`\omega <\gamma _\mu m_ec^2\approx (1/200)E_\mu `$ contribute to the process of coherent pair creation.
Nevertheless, at the 100 TeV $`\mu \mu `$ collider even such “low energy” photons can produce e<sup>+</sup>e<sup>-</sup> pairs. Indeed, for $`N=0.8\times 10^{12},\sigma _x=2\times 10^{-5}`$ cm, $`\sigma _z=0.25`$ cm, $`E=50`$ TeV (“evolutionary” $`\mu \mu `$(100) collider)
$$\kappa \simeq \gamma _\mu \frac{B}{B_0}\simeq 0.85.$$
(4)
Here I took $`B\simeq eN/\sigma _x\sigma _z`$, which is close to the maximum effective beam field ($`|E|+|B|`$).
The probability of e<sup>+</sup>e<sup>-</sup> creation by the muon in the transverse magnetic field per unit length for $`\kappa <1`$
$$W\simeq \frac{0.013\alpha ^3R^{5/2}}{r_e\gamma _\mu }e^{-2\sqrt{3}/R},$$
(5)
where $`R=\gamma _\mu (B/B_0)`$.<sup>1</sup><sup>1</sup>1Here I distinguish $`R`$ and $`\kappa `$ because $`\kappa `$ is only approximately equal to $`\gamma _\mu B/B_0`$, while $`R`$ is equal to this expression by definition. For $`R=0.85`$, $`\sigma _z=0.25`$ cm, $`E_\mu =50`$ TeV the probability of e<sup>+</sup>e<sup>-</sup> pair creation by the muon during its life (about 1000 beam collisions) is
$$p\simeq 1000W\sigma _z\simeq 0.1.$$
(6)
This is a large probability, the maximum that can be accepted. A further two-fold increase of the $`R`$ value would lead to a one order of magnitude decrease of the luminosity. Note that in the process of e<sup>+</sup>e<sup>-</sup> creation the muon loses about 1/200 of its energy, which is much larger than the energy spread at muon colliders ($`10^{-4}`$), so the considered muon will no longer contribute to the luminosity (due to chromatic aberration).
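These estimates are straightforward to reproduce. Below is a minimal numerical sketch (Gaussian units; it assumes the beam-field estimate $`B\simeq eN/\sigma _x\sigma _z`$ and the “evolutionary” collider parameters quoted above; it is illustrative only):

```python
import math

# "Evolutionary" 100 TeV muon collider parameters quoted in the text
N       = 0.8e12    # muons per bunch
sigma_x = 2e-5      # transverse beam size [cm]
sigma_z = 0.25      # bunch length [cm]
E_mu    = 50e12     # muon beam energy [eV]

e_esu = 4.803e-10   # elementary charge [esu]
r_e   = 2.818e-13   # classical electron radius [cm]
alpha = 1 / 137.036
m_mu  = 105.66e6    # muon mass [eV]
B0    = 4.4e13      # critical field [Gauss]

gamma = E_mu / m_mu
B = e_esu * N / (sigma_x * sigma_z)   # effective beam field B ~ eN/(sigma_x sigma_z)
R = gamma * B / B0                    # field parameter, ~0.85 as in Eq. (4)

# Pair-creation probability per unit length, Eq. (5), valid for R < 1
W = 0.013 * alpha**3 * R**2.5 / (r_e * gamma) * math.exp(-2 * math.sqrt(3) / R)
p = 1000 * W * sigma_z                # ~1000 collisions per muon lifetime, Eq. (6)
print(f"B = {B:.2e} G, R = {R:.2f}, W = {W:.1e} /cm, p = {p:.2f}")  # p ~ 0.1
```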
Let us now compare coherent pair creation with beamstrahlung, where the muon can also emit a sufficiently hard photon. We have seen that the probability of coherent pair creation is large when $`\kappa \simeq EBe\hbar /(m_\mu m_e^2c^5)>1`$. In beamstrahlung, the muon is “lost” when the characteristic photon energy $`E_\gamma /E\sim EBe\hbar /(m_\mu ^3c^5)=\kappa (m_e/m_\mu )^2>\delta \sim 10^{-4}`$ (see B.King’s table of the 100 TeV “evolutionary” muon collider). One can see that the expression for beamstrahlung does not contain $`m_e`$ and is smaller by a factor of $`(m_\mu /m_e)^2\approx 4\times 10^4`$ than the characteristic parameter in coherent pair creation; however the upper limit on the beamstrahlung parameter is also smaller by a numerically similar factor. This means that both processes become important approximately at the same values of the muon energy and beam field. Which process is more important depends on the energy acceptance of the final focus system. For the considered parameters the coherent pair creation is more important.
The coherent e<sup>+</sup>e<sup>-</sup> pair creation in beam collisions is essential for the 100 TeV muon collider and imposes some limitations on design parameters. |
Taxonomy of Stock Market Indices
## Abstract
We investigate sets of financial non-redundant and nonsynchronously recorded time series. The sets are composed by a number of stock market indices located all over the world in five continents. By properly selecting the time horizon of returns and by using a reference currency we find a meaningful taxonomy. The detection of such a taxonomy proves that interpretable information can be stored in a set of nonsynchronously recorded time series.
PACS numbers: 89.70.+c
One key aspect of information theory is that unpredictable time series, namely time series which are poorly redundant or not redundant at all, are characterized by statistical properties which are almost indistinguishable from those observed in basic random processes such as, for example, Bernoulli or Markov processes. Within this theoretical framework it may appear paradoxical that some time series generated in complex systems, which play a vital role in biological and economic settings, are essentially unpredictable and characterized by a negligible or rather low redundancy. Prominent examples are the time series of the price changes of assets traded in financial markets and the symbolic series of coding regions of DNA .
In this letter we show that an approximately non-redundant, nonsynchronously recorded time series may carry different levels of interpretable information provided that it can be analyzed synchronously together with other time series of the same kind. In other words, we show that in addition to the information related to the redundant nature of the time series, other sources of information may be present in a non-redundant time series, and that such additional information can be extracted by comparing the considered time series with analogous ones. Our work focuses on time series monitoring financial markets located all over the world. With our study, we aim to detect in a quantitative way the existence of links between different stock markets.
It is worth pointing out that the study of the dynamics of stock exchange indices located all over the world has additional levels of complexity with respect to, for example, the dynamics of a portfolio of stocks traded in a single stock market. To cite just two of the most prominent ones: (i) stock markets located all over the world have different opening and closing hours; and (ii) transactions in different markets are done using different currencies, which themselves fluctuate with respect to one another. It is then important to quantify the degree of similarity between the dynamics of stock indices of nonsynchronous markets trading in different currencies.
Here we present a study showing that meaningful information can be extracted from a set of stock index time series. The different levels of interdependence and complexity of the data are elucidated by applying the same methodology to several modified sets of the investigated time series. We are able to show that it is possible to extract a group of taxonomies that directly reflects geographical and economic links between several countries over the years. This is obtained by using only the almost non-redundant time series of several stock indices of financial markets located all over the world.
The efficient market paradigm states that stock returns of financial price time series are unpredictable . Within this paradigm, time evolution of stock returns is well described by a random process . Several empirical analyses of real market data have proven that returns time series are approximately described by unpredictable non-redundant time series . The absence of redundancy is not complete in real markets and the presence of residual redundancy has been detected . A minimized degree of redundancy is required to avoid the presence of arbitrage opportunities.
We investigate two sets of data – (i) the nonsynchronous time evolution of $`n=24`$ daily stock market indices computed in local currencies during the time period from January 1988 to December 1996, and (ii) the closure value of the 51 Morgan Stanley Capital International (MSCI) country indices computed daily in local currencies or in USA dollars in the time period from January 1996 to December 1999. The stock indices used in our research belong to stock markets distributed all over the world in five continents.
We already stated that a set of stock index time series is essentially different from a portfolio of stocks traded in a single stock market. Specifically, the fact that trading may occur at different times in two different cities implies that some markets are open while others are closed (the most prominent example concerns the New York and Tokyo stock markets). This makes a rigorously synchronous analysis of a large number of stock indices located all over the world impossible. An analysis of daily data of, say, closure values may induce spurious correlations introduced just by the specific time at which the variables are stored. The effects of nonsynchronous trading in time series analysis are well documented in the economic literature . In fact, different degrees of correlation between the New York and Tokyo markets are estimated depending on whether one considers the closure-closure or the closure-opening returns of the two markets. In particular it has been empirically detected that the highest degree of correlation between these two markets is observed between the open-closure return of the New York stock exchange at day $`t`$ and the opening-closure of the Tokyo stock market at day $`t+1`$ .
The aim of this study is to consider a large set of indices. It is of course impossible to collect a set of indices located all over the world which are synchronous with respect to the opening and closing hours. This intrinsic limitation motivates us to consider a one-week time horizon, for which the nonsynchronous hourly mismatch of our data is minimized.
We aim to discover the presence of interpretable information in a set of time series. We proceed by determining a quasi-synchronous correlation coefficient of the weekly differences of the logarithms of the closure values of the indices. The correlation coefficient is
$$\rho _{ij}=\frac{<Y_iY_j>-<Y_i><Y_j>}{\sqrt{(<Y_i^2>-<Y_i>^2)(<Y_j^2>-<Y_j>^2)}}$$
(1)
where $`i`$ and $`j`$ are the numerical labels of indices, $`Y_i=\mathrm{ln}S_i(t)-\mathrm{ln}S_i(t-1)`$ and $`S_i(t)`$ is the last value of the trading week $`t`$ for the index $`i`$. The correlation coefficient is computed between all the possible pairs of indices present in the database. The statistical average is a temporal average performed on all the trading days of the investigated time period. We then obtain the $`n\times n`$ matrix of correlation coefficients for weekly logarithm index differences (which almost coincide with index returns). Correlation matrices have been recently investigated within the framework of random matrix theory . Here we take a different perspective: we use the method introduced in ref. . Specifically we assume that the subdominant ultrametric space associated with a metric distance may reveal part of the economic information stored in the time series. This is obtained by defining a quantitative distance between each pair of elements $`i`$ and $`j`$, $`d(i,j)=\sqrt{2(1-\rho _{ij})}`$, and then using this distance matrix $`𝐃`$ to determine the minimum spanning tree (MST) connecting the $`n`$ indices. The MST, a theoretical concept of graph theory , allows one to obtain, in a direct and unique way, the subdominant ultrametric space and the hierarchical organization of the elements (indices in our case) of the investigated data set. Subdominant ultrametric space has been fruitfully used in the description of frustrated complex systems. The archetype of this kind of system is a spin glass .
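A minimal sketch of this construction (illustrative only: a random-matrix placeholder stands in for the real weekly index returns, and scipy's minimum-spanning-tree routine stands in for any standard MST algorithm):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Placeholder data: T weekly log-differences Y_i for n indices
rng = np.random.default_rng(0)
returns = rng.normal(size=(208, 24))        # stand-in for real index data

rho = np.corrcoef(returns, rowvar=False)    # n x n correlation matrix, Eq. (1)
d = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))  # d(i,j) = sqrt(2(1 - rho))
np.fill_diagonal(d, 0.0)

mst = minimum_spanning_tree(d)              # n - 1 links defining the taxonomy
for i, j in zip(*mst.nonzero()):
    print(f"link {i:2d} -- {j:2d}   d = {mst[i, j]:.3f}")
```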
In the rest of this letter, we show that the group of taxonomies found by considering the subdominant ultrametric matrices $`𝐃^<`$ associated with the distance matrices $`𝐃`$, obtained from different sets of quasi-synchronously recorded time series investigated in local currencies or in USA Dollars, is directly interpretable.
We first investigated the set of 24 indices of 20 different countries recorded during the period 1988-1996. We divide the entire period into six partially overlapping four-year periods. The first covers the years 1988-1991, the second 1989-1992, and so on. Each four-year period comprises 207 or 208 weekly records for each time series. In all the periods we detect distinct clusters of North-America, Europe and Asia-Pacific stock indices. The North-America cluster is rather stable over the years and includes the USA indices Dow Jones 30, Standard & Poor’s 500, Nasdaq 100 and Nasdaq Composite. The European cluster increases in size, starting in the first period as the one formed by Amsterdam AEX, Paris CAC40, Frankfurt DAX and London FTSE and ending as a FTSE, AEX, DAX, CAC40, Madrid General and Oslo General cluster in the last period. The Milan Comit index always stays out of the European cluster in the investigated periods. This is not so surprising because Italy was the only large European economy rather far from the so-called Maastricht parameters during that period. The Asia-Pacific cluster is also expanding as time goes on. It starts as a Kuala Lumpur Comp., Singapore Straits Times Industrial and Bangkok SET cluster and ends as a Kuala Lumpur, Singapore, Hong Kong Hang-Seng, Bangkok, Australia All Ordinary, Jakarta Comp. and Philippines Comp. cluster. Japanese stock indices do not join the Asia-Pacific cluster and Japan behaves as a poorly linked country. The same occurs for the BSE30 index (of India) and the South-American indices.
In Fig. 1 we show the hierarchical trees obtained for the first and the last averaging time period. The presence of clusters is observed in both periods but the tree of the second period has larger clusters. In summary our study shows that regional links between different economies emerge directly from time series. Moreover, an increase of the size of observed clusters and a relative stability of the clusters over the years is detected.
With the aim of expanding this analysis over one of the largest sets of indices today available, we consider the set of 51 world indices computed by MSCI. For a so large set of indices the point of view of the investor becomes crucial. In other words it is important to consider the problem also from the perspective of an international investor simultaneously monitoring the various markets. Several aspects of the different countries needed to be taken into account to make an appropriate comparison, they include the difference in currency values, levels of taxation etc.. Here we consider the most important of these differences namely the fact that the performances of different stock markets need to be compared by an international investor by using one reference currency. To evaluate the impact of a change of currency in the computation of indices, we consider the 51 MSCI country indices either in local currencies and in USA Dollars.
The 51 indices belongs to 51 different countries located in all continents. They comprises so-called emerged and emerging markets. Indices can be found at the web site http://www.mscidata.com. The data are daily data and covers the period 1996-1999. In Fig. 2a we show the result of our analysis performed by investigating weekly closure data in local currencies during the period 1996-1999. Four distinct clusters are detected (indicated in the bottom of the figure by a solid line). The cluster number one is essentially a North-American (green lines indicating USA and Canada indices) and European cluster (blue lines). There is only one country index from the Asia-Pacific area and it is Australia (red line). The cluster number two comprises 4 South-America country indices and the number three is composed by 6 Asia-Pacific country indices whereas the small cluster number four comprises India and Pakistan. The only world region that does not explicitly show index clustering is the world region of Africa-Middle East (purple lines). However, it is worth noting that several of these country indices are found at the extreme right of the hierarchical tree namely they are all quite far from any other country. Once again Japan index is disconnected from the Asia-Pacific cluster and is observed at the external edge of the South-America cluster. Between European countries the ones which are outside cluster one are The Czech Republic, Greece, Turkey and Luxembourg. Of these four countries only the Luxembourg is considered by MSCI an emerged market.
The same analysis is then repeated for the same indices in the same period but using indices computed in USA Dollars. The hierarchical tree of this investigation is shown in Fig, 2b. The overall structure observed in Fig. 2a is conserved but some relevant changes are detected. For example the Australian index leaves cluster one and links together with New Zealand in cluster three of this figure. Japan moves still far being now the first read line after the Asian-Pacific cluster, the small India-Pakistan cluster disappears and Peru’ links at the edge of cluster one. In summary the results of our analysis show that the computing of the indices in a single reference currency can modify the obtained hierarchical structure. However, the changes detected in the specific investigated period are not dramatic and limited to few countries.
To verify if the nonsynchronous recording of daily data indeed affects our findings we also determine the hierarchical tree for daily closure changes for the same set of indices used to obtain the tree of Fig. 2b. This new hierarchical tree shows the same overall structure observed in the tree of Fig. 2a but with a number of different links which are probably induced by the use of nonsynchronous time series. Specifically we observe that almost all the American indices cluster together (Brazil, Argentina, Mexico, USA, Canada and Peru’) and South-Africa cluster with the (in this case just) European cluster.
In conclusion, we have shown that sets of stock index time series located all over the world can be used to extract economic information about the links between different economies provided that the effects of the nonsynchronous nature of the time series and of the different currencies used to compute the indices are properly taken into account.
We are grateful to Fabrizio Lillo for fruitful discussions and to an anonymous referee for helpful comments. G. Bonanno and R.N. Mantegna wish to thank INFM, ASI and MURST for financial support; N. Vandewalle is financially supported by FNRS.
Measurement of Charge Asymmetries in Charmless Hadronic 𝐵 Meson Decays
## Abstract
We search for CP-violating asymmetries ($`𝒜_{\mathrm{CP}}`$) in the $`B`$ meson decays to $`K^\pm \pi ^{\mp }`$, $`K^\pm \pi ^0`$, $`K_S^0\pi ^\pm `$, $`K^\pm \eta ^{\prime }`$, and $`\omega \pi ^\pm `$. Using 9.66 million $`\mathrm{{\rm Y}}(4S)`$ decays collected with the CLEO detector, the statistical precision on $`𝒜_{\mathrm{CP}}`$ is in the range of $`\pm 0.12`$ to $`\pm 0.25`$ depending on decay mode. While CP-violating asymmetries of up to $`\pm 0.5`$ are possible within the Standard Model, the measured asymmetries are consistent with zero in all five decay modes studied.
preprint: CLNS 99/1651 CLEO 99-17
S. Chen,<sup>1</sup> J. Fast,<sup>1</sup> J. W. Hinson,<sup>1</sup> J. Lee,<sup>1</sup> N. Menon,<sup>1</sup> D. H. Miller,<sup>1</sup> E. I. Shibata,<sup>1</sup> I. P. J. Shipsey,<sup>1</sup> V. Pavlunin,<sup>1</sup> D. Cronin-Hennessy,<sup>2</sup> Y. Kwon,<sup>2,</sup><sup>*</sup><sup>*</sup>*Permanent address: Yonsei University, Seoul 120-749, Korea. A.L. Lyon,<sup>2</sup> E. H. Thorndike,<sup>2</sup> C. P. Jessop,<sup>3</sup> H. Marsiske,<sup>3</sup> M. L. Perl,<sup>3</sup> V. Savinov,<sup>3</sup> D. Ugolini,<sup>3</sup> X. Zhou,<sup>3</sup> T. E. Coan,<sup>4</sup> V. Fadeyev,<sup>4</sup> Y. Maravin,<sup>4</sup> I. Narsky,<sup>4</sup> R. Stroynowski,<sup>4</sup> J. Ye,<sup>4</sup> T. Wlodek,<sup>4</sup> M. Artuso,<sup>5</sup> R. Ayad,<sup>5</sup> C. Boulahouache,<sup>5</sup> K. Bukin,<sup>5</sup> E. Dambasuren,<sup>5</sup> S. Karamnov,<sup>5</sup> S. Kopp,<sup>5</sup> G. Majumder,<sup>5</sup> G. C. Moneti,<sup>5</sup> R. Mountain,<sup>5</sup> S. Schuh,<sup>5</sup> T. Skwarnicki,<sup>5</sup> S. Stone,<sup>5</sup> G. Viehhauser,<sup>5</sup> J.C. Wang,<sup>5</sup> A. Wolf,<sup>5</sup> J. Wu,<sup>5</sup> S. E. Csorna,<sup>6</sup> I. Danko,<sup>6</sup> K. W. McLean,<sup>6</sup> Sz. Márka,<sup>6</sup> Z. Xu,<sup>6</sup> R. Godang,<sup>7</sup> K. Kinoshita,<sup>7,</sup>Permanent address: University of Cincinnati, Cincinnati OH 45221 I. C. Lai,<sup>7</sup> S. Schrenk,<sup>7</sup> G. Bonvicini,<sup>8</sup> D. Cinabro,<sup>8</sup> L. P. Perera,<sup>8</sup> G. J. Zhou,<sup>8</sup> G. Eigen,<sup>9</sup> E. Lipeles,<sup>9</sup> M. Schmidtler,<sup>9</sup> A. Shapiro,<sup>9</sup> W. M. Sun,<sup>9</sup> A. J. Weinstein,<sup>9</sup> F. Würthwein,<sup>9,</sup>Permanent address: Massachusetts Institute of Technology, Cambridge, MA 02139. D. E. Jaffe,<sup>10</sup> G. Masek,<sup>10</sup> H. P. Paar,<sup>10</sup> E. M. Potter,<sup>10</sup> S. Prell,<sup>10</sup> V. Sharma,<sup>10</sup> D. M. Asner,<sup>11</sup> A. Eppich,<sup>11</sup> J. Gronberg,<sup>11</sup> T. S. Hill,<sup>11</sup> D. J. Lange,<sup>11</sup> R. J. Morrison,<sup>11</sup> H. N. Nelson,<sup>11</sup> R. A. Briere,<sup>12</sup> B. H. Behrens,<sup>13</sup> W. T. Ford,<sup>13</sup> A. Gritsan,<sup>13</sup> J. Roy,<sup>13</sup> J. G. Smith,<sup>13</sup> J. P. Alexander,<sup>14</sup> R. Baker,<sup>14</sup> C. Bebek,<sup>14</sup> B. E. Berger,<sup>14</sup> K. Berkelman,<sup>14</sup> F. Blanc,<sup>14</sup> V. Boisvert,<sup>14</sup> D. G. Cassel,<sup>14</sup> M. Dickson,<sup>14</sup> P. S. Drell,<sup>14</sup> K. M. Ecklund,<sup>14</sup> R. Ehrlich,<sup>14</sup> A. D. Foland,<sup>14</sup> P. Gaidarev,<sup>14</sup> L. Gibbons,<sup>14</sup> B. Gittelman,<sup>14</sup> S. W. Gray,<sup>14</sup> D. L. Hartill,<sup>14</sup> B. K. Heltsley,<sup>14</sup> P. I. Hopman,<sup>14</sup> C. D. Jones,<sup>14</sup> D. L. Kreinick,<sup>14</sup> M. Lohner,<sup>14</sup> A. Magerkurth,<sup>14</sup> T. O. Meyer,<sup>14</sup> N. B. Mistry,<sup>14</sup> C. R. Ng,<sup>14</sup> E. Nordberg,<sup>14</sup> J. R. Patterson,<sup>14</sup> D. Peterson,<sup>14</sup> D. Riley,<sup>14</sup> J. G. Thayer,<sup>14</sup> P. G. Thies,<sup>14</sup> B. Valant-Spaight,<sup>14</sup> A. Warburton,<sup>14</sup> P. Avery,<sup>15</sup> C. Prescott,<sup>15</sup> A. I. Rubiera,<sup>15</sup> J. Yelton,<sup>15</sup> J. Zheng,<sup>15</sup> G. Brandenburg,<sup>16</sup> A. Ershov,<sup>16</sup> Y. S. Gao,<sup>16</sup> D. Y.-J. Kim,<sup>16</sup> R. Wilson,<sup>16</sup> T. E. Browder,<sup>17</sup> Y. Li,<sup>17</sup> J. L. Rodriguez,<sup>17</sup> H. Yamamoto,<sup>17</sup> T. Bergfeld,<sup>18</sup> B. I. 
Eisenstein,<sup>18</sup> J. Ernst,<sup>18</sup> G. E. Gladding,<sup>18</sup> G. D. Gollin,<sup>18</sup> R. M. Hans,<sup>18</sup> E. Johnson,<sup>18</sup> I. Karliner,<sup>18</sup> M. A. Marsh,<sup>18</sup> M. Palmer,<sup>18</sup> C. Plager,<sup>18</sup> C. Sedlack,<sup>18</sup> M. Selen,<sup>18</sup> J. J. Thaler,<sup>18</sup> J. Williams,<sup>18</sup> K. W. Edwards,<sup>19</sup> R. Janicek,<sup>20</sup> P. M. Patel,<sup>20</sup> A. J. Sadoff,<sup>21</sup> R. Ammar,<sup>22</sup> A. Bean,<sup>22</sup> D. Besson,<sup>22</sup> R. Davis,<sup>22</sup> I. Kravchenko,<sup>22</sup> N. Kwak,<sup>22</sup> X. Zhao,<sup>22</sup> S. Anderson,<sup>23</sup> V. V. Frolov,<sup>23</sup> Y. Kubota,<sup>23</sup> S. J. Lee,<sup>23</sup> R. Mahapatra,<sup>23</sup> J. J. O’Neill,<sup>23</sup> R. Poling,<sup>23</sup> T. Riehle,<sup>23</sup> A. Smith,<sup>23</sup> J. Urheim,<sup>23</sup> S. Ahmed,<sup>24</sup> M. S. Alam,<sup>24</sup> S. B. Athar,<sup>24</sup> L. Jian,<sup>24</sup> L. Ling,<sup>24</sup> A. H. Mahmood,<sup>24,</sup><sup>§</sup><sup>§</sup>§Permanent address: University of Texas - Pan American, Edinburg TX 78539. M. Saleem,<sup>24</sup> S. Timm,<sup>24</sup> F. Wappler,<sup>24</sup> A. Anastassov,<sup>25</sup> J. E. Duboscq,<sup>25</sup> K. K. Gan,<sup>25</sup> C. Gwon,<sup>25</sup> T. Hart,<sup>25</sup> K. Honscheid,<sup>25</sup> D. Hufnagel,<sup>25</sup> H. Kagan,<sup>25</sup> R. Kass,<sup>25</sup> J. Lorenc,<sup>25</sup> T. K. Pedlar,<sup>25</sup> H. Schwarthoff,<sup>25</sup> E. von Toerne,<sup>25</sup> M. M. Zoeller,<sup>25</sup> S. J. Richichi,<sup>26</sup> H. Severini,<sup>26</sup> P. Skubic,<sup>26</sup> and A. Undrus<sup>26</sup>
<sup>1</sup>Purdue University, West Lafayette, Indiana 47907
<sup>2</sup>University of Rochester, Rochester, New York 14627
<sup>3</sup>Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309
<sup>4</sup>Southern Methodist University, Dallas, Texas 75275
<sup>5</sup>Syracuse University, Syracuse, New York 13244
<sup>6</sup>Vanderbilt University, Nashville, Tennessee 37235
<sup>7</sup>Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061
<sup>8</sup>Wayne State University, Detroit, Michigan 48202
<sup>9</sup>California Institute of Technology, Pasadena, California 91125
<sup>10</sup>University of California, San Diego, La Jolla, California 92093
<sup>11</sup>University of California, Santa Barbara, California 93106
<sup>12</sup>Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
<sup>13</sup>University of Colorado, Boulder, Colorado 80309-0390
<sup>14</sup>Cornell University, Ithaca, New York 14853
<sup>15</sup>University of Florida, Gainesville, Florida 32611
<sup>16</sup>Harvard University, Cambridge, Massachusetts 02138
<sup>17</sup>University of Hawaii at Manoa, Honolulu, Hawaii 96822
<sup>18</sup>University of Illinois, Urbana-Champaign, Illinois 61801
<sup>19</sup>Carleton University, Ottawa, Ontario, Canada K1S 5B6
and the Institute of Particle Physics, Canada
<sup>20</sup>McGill University, Montréal, Québec, Canada H3A 2T8
and the Institute of Particle Physics, Canada
<sup>21</sup>Ithaca College, Ithaca, New York 14850
<sup>22</sup>University of Kansas, Lawrence, Kansas 66045
<sup>23</sup>University of Minnesota, Minneapolis, Minnesota 55455
<sup>24</sup>State University of New York at Albany, Albany, New York 12222
<sup>25</sup>Ohio State University, Columbus, Ohio 43210
<sup>26</sup>University of Oklahoma, Norman, Oklahoma 73019
CP-violating phenomena arise in the Standard Model because of the single complex parameter in the quark mixing matrix . Such phenomena are expected to occur widely in $`B`$ meson decays and will be searched for by all current $`B`$-physics initiatives in the world. However, there is currently no firm experimental evidence for CP violation outside the neutral kaon system, where direct CP violation has been recently observed.
Direct CP violation, i.e., a difference between the rates for $`\overline{B}\to \overline{f}`$ and $`B\to f`$, will occur in any decay mode for which there are two or more contributing amplitudes which differ in both weak and strong phases. This rate difference gives rise to an asymmetry, $`𝒜_{\mathrm{CP}}`$, defined as
$$𝒜_{\mathrm{CP}}\equiv \frac{\mathcal{B}(\overline{B}\to \overline{f})-\mathcal{B}(B\to f)}{\mathcal{B}(\overline{B}\to \overline{f})+\mathcal{B}(B\to f)}.$$
(1)
For the simple case of two amplitudes $`T,P`$ with $`T\ll P`$, $`𝒜_{\mathrm{CP}}`$ is given by
$$𝒜_{\mathrm{CP}}\approx 2|\frac{T}{P}|\mathrm{sin}\mathrm{\Delta }\varphi _\mathrm{w}\mathrm{sin}\mathrm{\Delta }\varphi _\mathrm{s}.$$
(2)
Here $`\mathrm{\Delta }\varphi _\mathrm{s}`$ and $`\mathrm{\Delta }\varphi _\mathrm{w}`$ refer to the difference in strong and weak phases between $`T`$ and $`P`$.
The decay $`B\to K^\pm \pi ^{\mp }`$, for instance, involves a $`b\to u`$ W-emission amplitude ($`T`$) with the weak phase $`\mathrm{Arg}(V_{ub}^{*}V_{us})\approx \gamma `$ and a $`b\to s`$ penguin amplitude ($`P`$) with the weak phase $`\mathrm{Arg}(V_{tb}^{*}V_{ts})=\pi `$ or $`\mathrm{Arg}(V_{cb}^{*}V_{cs})=0`$. Theoretical expectations of $`|T/P|\sim 1/4`$ in $`B\to K^\pm \pi ^{\mp }`$ thus allow for $`𝒜_{\mathrm{CP}}`$ as large as $`\pm 0.5`$.
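Numerically, Eq. (2) with $`|T/P|=1/4`$ and maximal phase factors ($`\mathrm{sin}\mathrm{\Delta }\varphi _\mathrm{w}=\mathrm{sin}\mathrm{\Delta }\varphi _\mathrm{s}=1`$) gives $`|𝒜_{\mathrm{CP}}|\approx 2\times \frac{1}{4}\times 1\times 1=0.5`$, which is the bound quoted above.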
The CP-violating phases may arise from either the Standard Model CKM matrix or from new physics , while the CP-conserving strong phases may arise from the absorptive part of a penguin diagram or from final state interaction effects . Precise predictions for $`𝒜_{\mathrm{CP}}`$ are not feasible at present as both the absolute value and the strong interaction phases of the contributing amplitudes are not calculable. However, numerical estimates can be made under well-defined model assumptions and the dependence on both model parameters and CKM parameters can be probed. Recent calculations of CP asymmetries under the assumption of factorization have been published by Ali et al. and are listed in Table I for the modes examined in this paper. A notable feature of the model used in Ref. is that soft final state interactions are neglected, leading to rather small CP-invariant phases. However, it has been argued recently that CP-conserving phases due to soft rescattering could be large , possibly leading to enhanced $`|𝒜_{\mathrm{CP}}|`$ .
In this Letter, we present results of searches for CP violation in decays of $`B`$ mesons to the three $`K\pi `$ modes, $`K^\pm \pi ^{\mp }`$, $`K^\pm \pi ^0`$, $`K_S^0\pi ^\pm `$, the mode $`K^\pm \eta ^{\prime }`$, and the vector-pseudoscalar mode $`\omega \pi ^\pm `$. These decay modes are selected because they have well-measured branching ratios and significant signal yields in our data sample . In addition, these decays are self-tagging; the flavor of the parent $`b`$ or $`\overline{b}`$ quark is tagged simply by the sign of the high momentum charged hadron. In the decay $`B\to K^\pm \pi ^{\mp }`$ we assume that the charge of the kaon tags the charge of the $`b`$ quark.
The data used in this analysis was collected with two configurations of the CLEO detector at the Cornell Electron Storage Ring (CESR). It consists of an integrated luminosity of $`9.13\mathrm{fb}^{-1}`$ taken on the $`\mathrm{{\rm Y}}`$(4S) resonance, corresponding to 9.66 million $`B\overline{B}`$ pairs, and $`4.35\mathrm{fb}^{-1}`$ taken below $`B\overline{B}`$ threshold, used for continuum background studies. CLEO is a general purpose solenoidal magnet detector, described in detail elsewhere . Cylindrical drift chambers in a 1.5T solenoidal magnetic field measure momenta and specific ionization ($`dE/dx`$) of charged tracks. Photons are detected using a 7800-crystal CsI(Tl) electromagnetic calorimeter. For the second configuration, the innermost tracking chamber was replaced by a 3-layer, double-sided silicon vertex detector, and the gas in the main drift chamber was changed from an argon-ethane to a helium-propane mixture. These modifications led to improved $`dE/dx`$ resolution in the main drift chamber, as well as improved momentum resolution. Two thirds of the data used in the present analysis was taken with the improved detector configuration.
Efficient track quality requirements are imposed on charged tracks. Pions and kaons are identified by $`dE/dx`$. The separation between kaons and pions for typical signal momenta $`p\sim 2.6`$ GeV$`/c`$ is $`1.7`$ and $`2.0`$ standard deviations ($`\sigma `$) for the two detector configurations. Candidate $`K_S^0`$ are selected from pairs of tracks forming well-measured displaced vertices with a $`\pi ^+\pi ^{-}`$ invariant mass within $`2\sigma `$ of the $`K_S^0`$ mass. Pairs of photons with an invariant mass within 2.5$`\sigma `$ of the nominal $`\pi ^0`$ mass are kinematically fitted with the mass constrained to the nominal $`\pi ^0`$ mass. For the high momentum $`K_S^0`$ and $`\pi ^0`$ candidates reconstructed with these requirements, the ratio of signal to combinatoric background is better than 10. Electrons are rejected based on $`dE/dx`$ and the ratio of the track momentum to the associated shower energy in the CsI calorimeter; muons are rejected based on the penetration depth in the instrumented steel flux return. Resonances are reconstructed through the decay channels $`\eta ^{\prime }\to \eta \pi ^+\pi ^{-}`$ with $`\eta \to \gamma \gamma `$, $`\eta ^{\prime }\to \rho \gamma `$ with $`\rho \to \pi ^+\pi ^{-}`$, and $`\omega \to \pi ^+\pi ^{-}\pi ^0`$.
The $`𝒜_{\mathrm{CP}}`$ analyses presented here are closely related to the corresponding branching fraction determinations published elsewhere. We briefly summarize here the main points of the analysis.
We calculate a beam-constrained $`B`$ mass $`M=\sqrt{E_\mathrm{b}^2-p_B^2}`$, where $`p_B`$ is the $`B`$ candidate momentum and $`E_\mathrm{b}`$ is the beam energy. The resolution in $`M`$ ranges from 2.5 to 3.0 $`\mathrm{MeV}`$, where the larger resolution corresponds to the $`B^\pm \to K^\pm \pi ^0`$ decay. We define $`\mathrm{\Delta }E=E_1+E_2-E_\mathrm{b}`$, where $`E_1`$ and $`E_2`$ are the energies of the daughters of the $`B`$ meson candidate. The resolution on $`\mathrm{\Delta }E`$ is mode-dependent. For final states without photons, the $`\mathrm{\Delta }E`$ resolutions for the two configurations of the CLEO detector are 26 and 20 MeV. We accept candidates with $`M`$ within $`5.2-5.3`$ $`\mathrm{GeV}`$ and $`|\mathrm{\Delta }E|<200`$ MeV, and extract yields and asymmetries with an unbinned maximum likelihood fit. The fiducial region in $`M`$ and $`\mathrm{\Delta }E`$ includes the signal region and a substantial sideband for background determination. Sideband regions are also included around each of the resonance masses ($`\eta ^{\prime }`$, $`\eta `$, and $`\omega `$) for use in the likelihood fit. For the $`\eta ^{\prime }\to \rho \gamma `$ case, the $`\rho `$ mass is not included in the fit; we require $`0.5\mathrm{GeV}<m_{\pi \pi }<0.9\mathrm{GeV}`$.
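For illustration, these two kinematic variables are simple to compute; a minimal sketch (the beam energy and the candidate momenta below are hypothetical numbers, in GeV):

```python
import math

E_beam = 5.290   # hypothetical beam energy at the Y(4S) [GeV]

def beam_constrained_mass(momenta):
    """M = sqrt(E_b^2 - |p_B|^2), with p_B the vector sum of daughter momenta."""
    px, py, pz = (sum(p[i] for p in momenta) for i in range(3))
    return math.sqrt(E_beam ** 2 - (px ** 2 + py ** 2 + pz ** 2))

def delta_E(energies):
    """Delta E = E_1 + E_2 - E_b."""
    return sum(energies) - E_beam

# Hypothetical nearly back-to-back K pi candidate
p_K, p_pi = (2.60, 0.0, 0.0), (-2.62, 0.0, 0.0)
E_K = math.sqrt(2.60 ** 2 + 0.4937 ** 2)   # kaon mass hypothesis
E_pi = math.sqrt(2.62 ** 2 + 0.1396 ** 2)  # pion mass hypothesis
print(beam_constrained_mass([p_K, p_pi]), delta_E([E_K, E_pi]))
```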
The main background arises from $`e^+e^{-}\to q\overline{q}`$ (where $`q=u,d,s,c`$). Such events typically exhibit a two-jet structure and can produce high momentum back-to-back tracks in the fiducial region. To reduce contamination from these events, we calculate the angle $`\theta _\mathrm{s}`$ between the sphericity axis of the candidate tracks and showers and the sphericity axis of the rest of the event. The distribution of $`\mathrm{cos}\theta _\mathrm{s}`$ is strongly peaked at $`\pm 1`$ for $`q\overline{q}`$ events and is nearly flat for $`B\overline{B}`$ events. For $`K\pi `$ modes, we require $`|\mathrm{cos}\theta _\mathrm{s}|<0.8`$ which eliminates $`83\%`$ of the background. For $`\eta ^{\prime }`$ and $`\omega `$ modes, the requirement is $`|\mathrm{cos}\theta _\mathrm{s}|<0.9`$. Additional discrimination between signal and $`q\overline{q}`$ background is obtained from event shape information used in a Fisher discriminant ($`\mathcal{F}`$) technique as described in detail in Ref. .
Using a detailed GEANT-based Monte Carlo simulation we determine overall detection efficiencies of 0.48 ($`K^\pm \pi ^{\mp }`$), 0.38 ($`K^\pm \pi ^0`$), 0.15 ($`K^0\pi ^\pm `$), 0.13 ($`K^\pm \eta ^{\prime }`$), and 0.26 ($`\omega \pi ^\pm `$). These efficiencies include secondary branching fractions for $`K^0\to K_S^0\to \pi ^+\pi ^{-}`$ and $`\pi ^0\to \gamma \gamma `$ as well as for the $`\eta ^{\prime }`$ and $`\omega `$ decay modes where applicable.
To extract signal and background yields we perform unbinned maximum-likelihood fits using $`\mathrm{\Delta }E`$, $`M`$, $`\mathcal{F}`$, $`|\mathrm{cos}\theta _B|`$ (if not used in $`\mathcal{F}`$), $`dE/dx`$, daughter resonance mass, and helicity angle in the daughter decay. The free parameters to be fitted are the asymmetry $`(\overline{f}-f)/(\overline{f}+f)`$ and the sum ($`\overline{f}+f`$) in both signal and background. In most cases there is more than one possible signal hypothesis and its corresponding background hypothesis, e.g., we fit simultaneously for $`K^\pm \pi ^0`$ and $`\pi ^\pm \pi ^0`$ to ensure proper handling of the $`K/\pi `$ identification information. The probability density functions (PDFs) describing the distribution of events in each variable are parametrized by simple forms (Gaussian, polynomial, etc.) whose parameters are determined in separate studies. For signal PDF shapes, parameters are determined by fitting simulated signal events. Backgrounds in these analyses are dominated by continuum $`e^+e^{-}\to q\overline{q}`$ events, and we determine parameters of the background PDFs by fitting data collected below the $`\mathrm{{\rm Y}}(4S)`$ resonance. The uncertainties associated with such fits are charge symmetric in all PDFs except the $`dE/dx`$ parametrization. The $`dE/dx`$ information was calibrated such that any residual charge asymmetry is negligible compared to the statistical errors for $`𝒜_{\mathrm{CP}}`$.
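A toy version of such a fit may clarify the structure. The following sketch uses one observable, a Gaussian signal on a flat background, and scipy for the minimization; all shapes, yields and the tagging convention are hypothetical stand-ins, not the CLEO parametrization:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy sample: x plays the role of e.g. Delta E; q = +/-1 is the flavor tag
n_sig, n_bkg, A_true = 80, 400, 0.0
x = np.concatenate([rng.normal(0.0, 0.025, n_sig),
                    rng.uniform(-0.2, 0.2, n_bkg)])
q = np.concatenate([rng.choice([+1, -1], n_sig, p=[(1 - A_true) / 2,
                                                   (1 + A_true) / 2]),
                    rng.choice([+1, -1], n_bkg)])

def nll(pars):
    """Negative log-likelihood of the per-event PDF in (x, q)."""
    f_sig, A = pars                           # signal fraction and asymmetry
    sig = np.exp(-0.5 * (x / 0.025) ** 2) / (0.025 * np.sqrt(2 * np.pi))
    bkg = np.full_like(x, 1.0 / 0.4)          # flat over [-0.2, 0.2]
    pdf = (f_sig * sig * (1 - q * A) + (1 - f_sig) * bkg) / 2.0
    return -np.sum(np.log(pdf))

res = minimize(nll, x0=[0.2, 0.0], bounds=[(0.0, 1.0), (-0.95, 0.95)])
print(f"fitted asymmetry A_CP = {res.x[1]:+.3f}")
```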
The experimental determination of charge asymmetries in this analysis depends entirely on the properties of high momentum tracks. The charged meson that tags the parent $`b/\overline{b}`$ flavor has in each case a momentum between 2.3 and 2.8 GeV/c. In independent studies, using very large samples of high momentum tracks, we searched for and set stringent limits on the extent of possible charge-correlated bias in the CLEO detector and analysis chain for tracks in the $`2`$–$`3`$ GeV$`/c`$ momentum range. Based on a sample of 8 million tracks, we find an $`𝒜_{\mathrm{CP}}`$ bias of less than $`\pm 0.002`$ introduced by differences in reconstruction efficiencies for positive and negative high momentum tracks.
For $`K^\pm \pi ^{\mp }`$ combinations, where differential charge-correlated efficiencies must also be considered in correlation with $`K/\pi `$ flavor, we use 37,000 $`D^0\to K\pi (\pi ^0)`$ decays and set a limit on the $`𝒜_{\mathrm{CP}}`$ bias of $`\pm 0.005`$. These $`D^0`$ meson decays, together with an additional 24,000 $`D_{(s)}^\pm `$ meson decays, are also used to set an upper limit of 0.4 MeV$`/c`$ on any charge-correlated or charge-strangeness-correlated bias in the momentum measurement. The resulting limit on $`𝒜_{\mathrm{CP}}`$ bias from this source is $`\pm 0.002`$. We conclude that there is no significant $`𝒜_{\mathrm{CP}}`$ bias introduced by track reconstruction or selection.
Our ability to distinguish the final states $`K^+\pi ^{-}`$ and $`K^{-}\pi ^+`$ depends entirely on particle identification using $`dE/dx`$. In addition, all other decay modes depend to varying degrees on $`dE/dx`$ to distinguish between $`B\to X\pi ^+`$ and $`B\to XK^+`$, $`X`$ being a $`K_S^0,\pi ^0,\eta ^{\prime }`$ or an $`\omega `$. The $`dE/dx`$ response was carefully calibrated in order to remove any possible charge dependence.
We calibrate the $`dE/dx`$ response using radiative $`\mu `$ pair events, assuming that for a given velocity $`dE/dx`$ is the same for $`\mu ^\pm ,\pi ^\pm `$, and $`K^\pm `$. We then compare the $`dE/dx`$ response for positive and negative tracks in the momentum range $`2`$–$`3`$ GeV$`/c`$ from all hadronic events in the CLEO data sample. The large available statistics allows us to split the data into subsets and to verify the stability of the calibration over time. The fully calibrated $`dE/dx`$ is then verified using kinematically identified kaons and pions of $`2`$–$`3`$ GeV$`/c`$ from $`D^0\to K\pi (\pi ^0)`$ decays. The $`dE/dx`$ distributions for $`K^\pm `$ and $`\pi ^\pm `$ from this sample are shown in Fig. 1. No significant differences are seen between different charge species. The statistical uncertainty in this comparison translates into a possible $`𝒜_{\mathrm{CP}}`$ bias of $`\pm 0.01`$ for $`K^\pm \pi ^{\mp }`$, and less for all other final states. We conservatively assign a total systematic error of $`\pm 0.02`$ in all five decay modes.
As an additional check we measure the asymmetry of the background events in each decay mode, and find that all are consistent with the expected null result for continuum background. The results for the asymmetry in continuum background are $`0.024\pm 0.038`$ ($`K^\pm \pi ^{\mp }`$), $`0.003\pm 0.032`$ ($`K^\pm \pi ^0`$), $`0.017\pm 0.037`$ ($`K_S^0\pi ^\pm `$), $`0.006\pm 0.070`$ ($`\eta ^{\prime }(\to \eta \pi \pi )K^\pm `$), $`0.009\pm 0.015`$ ($`\eta ^{\prime }(\to \rho \gamma )K^\pm `$), and $`0.001\pm 0.010`$ ($`\omega \pi ^\pm `$). We further confirm that our analysis method does not introduce a bias in the measured $`𝒜_{\mathrm{CP}}`$ in the analysis of simulated events with known asymmetries.
We conclude that any possible systematic bias on $`𝒜_{\mathrm{CP}}`$ is negligible compared to the statistical errors of our measurements. Our 90$`\%`$ confidence level (CL) ranges are calculated by adding statistical and systematic errors in quadrature.
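A minimal sketch of how such a range is assembled, assuming Gaussian errors (the intervals quoted here ultimately come from the likelihood scans described below, so this is only indicative); the numbers are placeholders, not measured values:

```python
import math

def cl90(a_cp, stat, syst):
    """90% CL interval, stat and syst errors added in quadrature (Gaussian)."""
    err = math.hypot(stat, syst)
    return a_cp - 1.645 * err, a_cp + 1.645 * err

# Placeholder numbers, purely to show the mechanics:
print(cl90(-0.10, 0.10, 0.02))
```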
We summarize the results in Table I and Fig. 2. The dependence of the likelihood function on $`𝒜_{\mathrm{CP}}`$ for each of the five decay modes is depicted in Fig. 3. This figure was obtained by re-optimizing the likelihood function at each fixed value of $`𝒜_{\mathrm{CP}}`$ to account for correlations between the free parameters in the fit.
We see no evidence for CP violation in the five modes analyzed here and set 90% CL intervals that reduce the possible range of $`𝒜_{\mathrm{CP}}`$ by as much as a factor of four. It has been suggested that $`𝒜_{\mathrm{CP}}`$ in $`K^\pm \pi ^{\mp }`$ and $`K^\pm \pi ^0`$ should be almost identical within the Standard Model. Based on the average $`𝒜_{\mathrm{CP}}`$ in these two decay modes we calculate a $`90\%`$ CL range of $`-0.28<`$ $`𝒜_{\mathrm{CP}}`$ $`<+0.05`$.
While the sensitivity is not yet sufficient to probe the rather small $`𝒜_{\mathrm{CP}}`$ values predicted by factorization models, extremely large $`𝒜_{\mathrm{CP}}`$ values that might arise if large strong phase differences were available from final state interactions are firmly ruled out. For the cases of $`K\pi `$ and $`\eta ^{\prime }K`$, we can exclude $`|𝒜_{\mathrm{CP}}|`$ greater than 0.30 and 0.23 at 90% CL, respectively.
We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation, the U.S. Department of Energy, the Research Corporation, the Natural Sciences and Engineering Research Council of Canada, the A.P. Sloan Foundation, the Swiss National Science Foundation, and the Alexander von Humboldt Stiftung.
# Frobenius-Schur Indicators, the Klein-bottle Amplitude, and the Principle of Orbifold Covariance
## Abstract
The “orbifold covariance principle”, or OCP for short, is presented to support a conjecture of Pradisi, Sagnotti and Stanev on the expression of the Klein-bottle amplitude.
Frobenius-Schur indicators were introduced in Ref. to distinguish between real and pseudo-real primary fields of a CFT, i.e. those primaries whose two-point function is symmetric (resp. antisymmetric) with respect to braiding. They have a simple expression in terms of the usual data of a CFT, i.e. the fusion rule coefficients $`N_{pq}^r`$, the exponentiated conformal weights (or statistic phases) $`\omega _p=\mathrm{exp}\left(2\pi i(\mathrm{\Delta }_p-c/24)\right)`$ and the matrix elements of the modular transformation $`S:\tau \mapsto -\frac{1}{\tau }`$, which reads
$$\nu _p=\underset{q,r}{\sum }N_{qr}^pS_{0q}S_{0r}\left(\frac{\omega _q}{\omega _r}\right)^2$$
(1)
where the sum runs over the primary fields, and the label $`0`$ refers to the vacuum. The basic result about the Frobenius-Schur indicator $`\nu _p`$ is that it is three-valued: its value is +1 for real primaries, -1 for pseudo-real ones, and 0 if $`p\ne \overline{p}`$.
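As a concrete illustration of Eq. (1), the following Python sketch evaluates $`\nu _p`$ for the Ising model, whose modular data (S-matrix, conformal weights, fusion rules) are standard; all three primaries are real, so each indicator comes out +1. The ordering of the primaries is our own choice:

```python
import numpy as np

# Frobenius-Schur indicators from Eq. (1):
#   nu_p = sum_{q,r} N_{qr}^p S_{0q} S_{0r} (omega_q / omega_r)^2
# evaluated for the Ising model with primaries ordered as (1, eps, sigma).
c = 0.5                                    # central charge
Delta = np.array([0.0, 0.5, 1.0 / 16.0])   # conformal weights
omega = np.exp(2j * np.pi * (Delta - c / 24.0))   # statistic phases

S = 0.5 * np.array([[1.0, 1.0, np.sqrt(2.0)],
                    [1.0, 1.0, -np.sqrt(2.0)],
                    [np.sqrt(2.0), -np.sqrt(2.0), 0.0]])

# Fusion rules N[q, r, p] = N_{qr}^p
N = np.zeros((3, 3, 3), dtype=int)
for q in range(3):
    N[0, q, q] = N[q, 0, q] = 1            # fusion with the vacuum
N[1, 1, 0] = 1                             # eps x eps = 1
N[1, 2, 2] = N[2, 1, 2] = 1                # eps x sigma = sigma
N[2, 2, 0] = N[2, 2, 1] = 1                # sigma x sigma = 1 + eps

ratio2 = np.outer(omega, 1.0 / omega) ** 2  # (omega_q / omega_r)^2
for p in range(3):
    nu = sum(N[q, r, p] * S[0, q] * S[0, r] * ratio2[q, r]
             for q in range(3) for r in range(3))
    print(p, round(nu.real, 10))            # prints 1.0 for all three
```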
Besides the original motivation to characterize simply the symmetry properties of two-point functions, Frobenius-Schur indicators have been applied in the study of simple current extensions, of boundary conditions, and of WZNW orbifolds. They have also appeared in the work of Pradisi, Sagnotti and Stanev on open string theory, although in a disguised form, as the coefficients of the Klein-bottle amplitude for a CFT whose torus partition function is the charge conjugation modular invariant. This connection has been noticed in several papers since then, and arguments have been presented to support this Ansatz. Recently, the Klein-bottle amplitude has been computed using 3D techniques in Ref. , and the result agrees with the Ansatz of Pradisi, Sagnotti and Stanev. Unfortunately, there is still an important piece of evidence missing, namely the validity of the Ansatz depends on the positivity conjecture
$$N_{pqr}\nu _p\nu _q\nu _r\ge 0$$
(2)
for any three primaries $`p,q,r`$. Although in some special cases the positivity conjecture can be shown to hold, no general proof is available at the moment. Therefore, it seems relevant to present another argument strongly supporting the Ansatz of Pradisi, Sagnotti and Stanev, which is completely independent of the previous ones. This argument is based on what we call the “orbifold covariance principle”, which we’ll explain in a moment.
First of all, let’s summarize some basics of permutation orbifolds which will be needed in the sequel. For any Conformal Field Theory $`𝒞`$ and any permutation group $`\mathrm{\Omega }`$, one can construct a new CFT $`𝒞\wr \mathrm{\Omega }`$ by orbifoldizing the $`n`$-fold tensor power of $`𝒞`$ by the twist group $`\mathrm{\Omega }`$, where $`n`$ is the degree of $`\mathrm{\Omega }`$, and the resulting CFT is called a permutation orbifold. One can compute most interesting quantities of the permutation orbifold $`𝒞\wr \mathrm{\Omega }`$ from the knowledge of $`𝒞`$, e.g. one has explicit expressions for the torus partition function, the characters, the matrix elements of modular transformations, etc. Not only may one compute the relevant quantities, but the resulting expressions have a simple geometrical meaning: besides the obvious symmetrizations involved, one has to include instanton corrections arising from the twisted sectors, related to the non-trivial coverings of the world-sheet. This recipe works for arbitrary oriented surfaces, and may be generalized to the unoriented case, in particular the Klein-bottle. But to obtain the explicit expression of the Klein-bottle amplitude, one has first to make a short detour into uniformization theory.
In case of orientable surfaces, uniformization theory tells us that a closed surface is obtained by quotienting its universal covering surface - which is either the sphere for genus 0, the plane for genus 1, or the upper half-plane for genus bigger than 1 - by a suitable discrete group of holomorphic transformations isomorphic to its fundamental group: in case the genus is greater than one, this is a hyperbolic Fuchsian group, for genus 1 this is a group of translations, and the genus 0 case is trivial. For non-orientable surfaces one has to include orientation reversing, i.e. antiholomorphic transformations as well. A suitable presentation of the fundamental group of the Klein-bottle looks as follows:
$$\langle a,b\,|\,b^{-1}ab=a^{-1}\rangle $$
(3)
i.e. the fundamental group is generated by two elements $`a`$ and $`b`$ satisfying the single defining relation $`aba=b`$. So we have to look for one holomorphic and one antiholomorphic affine transformation that satisfy the above relation. As the uniformizing group is only determined up to conjugacy, we may use this freedom to transform the generators into the following canonical form:
$`a:z\mapsto z+it`$ (4)
$`b:z\mapsto \overline{z}+{\displaystyle \frac{1}{2}}`$ (5)
with $`t<0`$. The meaning of the parameter $`t`$ may be recovered by considering the oriented (two-sheeted) cover of the Klein-bottle: this is a torus with purely imaginary modular parameter equal to $`\frac{1}{it}`$. So different Klein-bottles are parametrized by $`t`$, and may be obtained by identifying the points of the complex plane under the action of the group generated by the two transformations in Eq. (4).
We can now embark upon computing the Klein-bottle amplitude $`K^\mathrm{\Omega }`$ of the permutation orbifold $`𝒞\wr \mathrm{\Omega }`$ in terms of the corresponding amplitude $`K`$ of $`𝒞`$. The general recipe tells us that we have to consider each homomorphism from the fundamental group into the twist group $`\mathrm{\Omega }`$, i.e. each pair $`x,y\in \mathrm{\Omega }`$ that satisfies $`x^y=x^{-1}`$. Each such homomorphism determines a covering of the Klein-bottle, which is not connected in general, its connected components being in one-to-one correspondence with the orbits $`\xi \in 𝒪(x,y)`$ of the group generated by $`x`$ and $`y`$. There are two kinds of orbits: those on which the group $`\langle x,y^2\rangle `$ generated by $`x`$ and $`y^2`$ acts transitively, the corresponding connected coverings being Klein-bottles again; and those which decompose into two orbits $`\xi _\pm `$ under the action of $`\langle x,y^2\rangle `$, the corresponding coverings being tori. Accordingly, we have $`𝒪(x,y)=𝒪_{-}(x,y)\cup 𝒪_+(x,y)`$, where $`𝒪_{-}(x,y)`$ contains those orbits whose corresponding covering is a Klein-bottle. There is a simple numerical characterization of these two cases: $`𝒪_{-}(x,y)`$ consists of those orbits $`\xi \in 𝒪(x,y)`$ which contain an odd number of $`x`$-orbits. The uniformizing groups of the above connected components, hence their moduli, may be determined as the point stabilizers of the corresponding orbits. Each homomorphism gives a contribution equal to the product of the partition functions of the connected components of the corresponding covering, and the total Klein-bottle amplitude is the sum of all these contributions divided by the order of the twist group $`\mathrm{\Omega }`$. All in all, we get the result
$$K^\mathrm{\Omega }(t)=\frac{1}{\left|\mathrm{\Omega }\right|}\underset{x,y\in \mathrm{\Omega }}{\sum }\delta _{x^y,x^{-1}}\underset{\xi \in 𝒪_{-}(x,y)}{\prod }K\left(\frac{\lambda _\xi ^2t}{\left|\xi \right|}\right)\underset{\xi \in 𝒪_+(x,y)}{\prod }Z\left(\frac{\left|\xi \right|}{2\lambda _\xi ^2it}+\frac{\kappa _\xi }{\lambda _\xi }\right)$$
(6)
where $`Z`$ is the torus partition function of the theory. In the above formula, $`\left|\xi \right|`$ stands for the length of the orbit $`\xi `$, $`\lambda _\xi `$ is the length of the $`x`$-orbits contained in $`\xi `$, while $`\kappa _\xi `$ is the smallest non-negative integer such that $`x^{\kappa _\xi }y^{\left|\xi \right|/\lambda _\xi }`$ stabilizes the points of $`\xi \in 𝒪_+(x,y)`$.
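The combinatorial data entering Eq. (6) can be enumerated directly. The following Python sketch, taking a small symmetric group as the twist group, lists the pairs $`(x,y)`$ with $`x^y=x^{-1}`$ and splits the $`\langle x,y\rangle `$-orbits into $`𝒪_{-}`$ (odd number of $`x`$-orbits, Klein-bottle covers) and $`𝒪_+`$ (torus covers); the moduli $`\lambda _\xi `$, $`\kappa _\xi `$ themselves are not computed here:

```python
from itertools import permutations

def compose(a, b):          # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def orbits(gens, n):
    """Orbits on {0,...,n-1} of the group generated by gens."""
    seen, out = set(), []
    for start in range(n):
        if start in seen:
            continue
        orb, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for g in gens:
                if g[i] not in orb:
                    orb.add(g[i]); stack.append(g[i])
        seen |= orb
        out.append(frozenset(orb))
    return out

n = 3
Omega = list(permutations(range(n)))
for x in Omega:
    for y in Omega:
        # condition x^y = y^{-1} x y = x^{-1}
        if compose(inverse(y), compose(x, y)) != inverse(x):
            continue
        x_orbits = orbits([x], n)
        minus, plus = [], []
        for xi in orbits([x, y], n):
            k = sum(1 for o in x_orbits if o <= xi)   # x-orbits inside xi
            (minus if k % 2 else plus).append(xi)
        print(x, y, "Klein covers:", len(minus), "torus covers:", len(plus))
```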
Let’s now turn to the “orbifold covariance principle”. Suppose we have an equality of the form
$$L=R$$
(7)
where $`L`$ and $`R`$ denote some quantities of the CFT. If such an identity is to hold universally in any CFT, it should obviously hold in any permutation orbifold as well, i.e. Eq.(7) should imply
$$L^\mathrm{\Omega }=R^\mathrm{\Omega }$$
(8)
where we denote by $`L^\mathrm{\Omega }`$ (resp. $`R^\mathrm{\Omega }`$) the value of $`L`$ (resp. $`R`$) in the $`\mathrm{\Omega }`$ permutation orbifold. As this should hold for an arbitrary permutation group $`\mathrm{\Omega }`$, this gives us an infinite number of highly nonlinear consistency conditions for Eq.(7) to be valid, provided we can express $`L^\mathrm{\Omega }`$ and $`R^\mathrm{\Omega }`$ in terms of $`L`$ and $`R`$ respectively. This is what we call the “orbifold covariance principle”, or OCP for short.
In the case at hand, consider the two component quantity
$$L=\left(\begin{array}{c}Z(\tau )\\ K(t)\end{array}\right)$$
which, according to the Pradisi-Sagnotti-Stanev Ansatz, should equal
$$R=\left(\begin{array}{c}\underset{p}{\sum }\chi _p(\tau )\overline{\chi _{\overline{p}}(\tau )}\\ \underset{p}{\sum }\nu _p\chi _p(\frac{1}{it})\end{array}\right)$$
Note that it is at this point that we restrict ourselves to theories with the charge conjugation invariant. It is now straightforward to verify that the Ansatz $`L=R`$ indeed satisfies the OCP. This follows from the following general results :
$`Z^\mathrm{\Omega }(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{\left|\mathrm{\Omega }\right|}}{\displaystyle \underset{(x,y)\in \mathrm{\Omega }^{\left\{2\right\}}}{\sum }}{\displaystyle \underset{\xi \in 𝒪(x,y)}{\prod }}Z\left({\displaystyle \frac{\mu _\xi \tau +\kappa _\xi }{\lambda _\xi }}\right)`$ (9)
$`\chi _{p,\varphi }(\tau )`$ $`=`$ $`{\displaystyle \frac{1}{\left|\mathrm{\Omega }_p\right|}}{\displaystyle \underset{(x,y)\in \mathrm{\Omega }_p^{\left\{2\right\}}}{\sum }}\overline{\varphi }(x,y){\displaystyle \underset{\xi \in 𝒪(x,y)}{\prod }}\omega _{p(\xi )}^{\frac{\kappa _\xi }{\lambda _\xi }}\chi _{p(\xi )}\left({\displaystyle \frac{\mu _\xi \tau +\kappa _\xi }{\lambda _\xi }}\right)`$ (10)
$`\nu _{p,\varphi }`$ $`=`$ $`{\displaystyle \frac{1}{\left|\mathrm{\Omega }_p\right|}}{\displaystyle \underset{x,y^2\in \mathrm{\Omega }_p}{\sum }}\delta _{x^y,x^{-1}}\varphi (x,y^2){\displaystyle \underset{\xi \in 𝒪_{-}(x,y)}{\prod }}\nu _{p(\xi )}{\displaystyle \underset{\xi \in 𝒪_+(x,y)}{\prod }}C_{p(\xi _+)}^{p(\xi _{-})}`$ (11)
In these formulae $`\mathrm{\Omega }^{\left\{2\right\}}`$ denotes the set of commuting pairs of elements of the group $`\mathrm{\Omega }`$, $`p`$ is an $`n`$-tuple of primaries, considered as a map on $`\{1,\mathrm{\dots },n\}`$, $`\mathrm{\Omega }_p`$ is the stabilizer in $`\mathrm{\Omega }`$ of the map $`p`$ under the natural induced action, and $`\varphi `$ is an irreducible character of the double of the stabilizer $`\mathrm{\Omega }_p`$. For a pair $`(x,y)\in \mathrm{\Omega }_p^{\left\{2\right\}}`$, we denote by $`𝒪(x,y)`$ the set of orbits on $`\{1,\mathrm{\dots },n\}`$ of the group generated by $`x`$ and $`y`$, while for a given orbit $`\xi \in 𝒪(x,y)`$, $`\lambda _\xi `$ denotes the common length of the $`x`$-orbits contained in $`\xi `$, $`\mu _\xi `$ denotes their number, and $`\kappa _\xi `$ is the smallest nonnegative integer such that $`y^{\mu _\xi }x^{\kappa _\xi }`$ belongs to the stabilizer of $`\xi `$, while $`p(\xi )`$ denotes the value of the map $`p`$ on the orbit $`\xi `$ (on which it is constant because both $`x`$ and $`y`$ stabilize $`p`$). Finally, $`C_p^q`$ denotes the charge conjugation matrix, i.e. $`C=S^2`$, while the notation $`𝒪_\pm (x,y)`$ and $`\xi _\pm `$ has been explained previously in connection with Eq. (6).
With the aid of Eq.(10) and Eq.(6), after performing the required summations, we arrive at the result that $`L=R`$ implies $`L^\mathrm{\Omega }=R^\mathrm{\Omega }`$, confirming the claim. This should be viewed as a strong consistency check of the Pradisi-Sagnotti-Stanev Ansatz.
Of course, the above argument does not exhaust the potential of the OCP, it is just intended to illustrate the application of this powerful tool. As it is possible to apply the OCP even in cases where a formal proof is out of reach for present day techniques, it should be considered as an important tool in the investigation of Conformal Field Theories. |
# Chiral phase transition at high temperature and density in the QCD-like theory
## I INTRODUCTION
At zero temperature and zero (baryon number) density, the chiral symmetry in quantum chromodynamics (QCD) is dynamically broken. It is generally believed that at sufficiently high temperature and/or density the QCD vacuum undergoes a phase transition into a chirally symmetric phase. This chiral phase transition plays an important role in the physics of neutron stars and the early universe and it may be realized in heavy-ion collisions. The lattice simulation is a powerful tool to study the chiral phase transition at finite temperature $`(T\ne 0)`$, and it is now being developed also for finite chemical potential $`(\mu \ne 0)`$. However, effective theories of QCD are still useful for various nonperturbative phenomena including the phase transition.
Recently, the importance of studying the phase structure, especially the position of the tricritical point, has been pointed out in Ref. . The Nambu–Jona-Lasinio (NJL) model, in which the interaction is induced by instantons, and the random matrix model have shown almost the same results concerning the tricritical point. It is also interesting to study the possibility of a color superconducting phase at high baryon density \[2,4–8\]. However, we may neglect this phase in the high temperature region where we found the tricritical point. In this paper, we concentrate on the chiral phase transition between $`SU(N_f)_L\times SU(N_f)_R`$ and $`SU(N_f)_{L+R}`$ using the effective potential and the QCD-like theory. One usually studies the phase structure of QCD in terms of the Schwinger–Dyson equation (SDE) or the effective potential \[9–13\]. However, the SDE alone is not sufficient for this study, in particular when there is a first order phase transition; we therefore use the effective potential. The QCD-like theory, provided with the effective potential for composite operators and the renormalization group, is successful in describing the chiral symmetry breaking in QCD . This type of theory is occasionally called QCD in the improved ladder approximation. The phase diagram in the QCD-like theory has been studied in Refs. . However, the position of the tricritical point is largely different from that obtained from the NJL model and the random matrix model.
In this paper we use a modified form of the Cornwall–Jackiw–Tomboulis (CJT) effective potential which is convenient for a variational approach. The formulation is given for the case where the chiral symmetry is explicitly broken at zero temperature and density. We then consider the CJT effective potential in the improved ladder approximation at finite temperature and/or density. Motivated by Refs. , we re-examine the chiral phase transition and phase structure in the chiral limit.
This paper is organized as follows. In Sec. II we formulate the effective potential for composite operators and extend it to finite temperature and density. In Sec. III we first determine the value of $`\mathrm{\Lambda }_{\text{QCD}}`$ by the condition $`f_\pi =93`$ MeV at $`T=\mu =0`$ and then calculate the effective potential at finite $`T`$ and/or $`\mu `$ numerically. Using those results, we study the phase structure in the $`T`$-$`\mu `$ plane. Sec. IV is devoted to the conclusion. We fix the mass scale by the condition $`\mathrm{\Lambda }_{\text{QCD}}=1`$, except in Sec. III.
## II EFFECTIVE POTENTIAL FOR QUARK PROPAGATOR
### A CJT effective potential at zero temperature and density
At zero temperature and zero density, the CJT effective potential for QCD in the improved ladder approximation is expressed as a functional of $`S(p)`$ the quark full propagator :
$`V[S]`$ $`=`$ $`V_1[S]+V_2[S],`$ (1)
$`V_1[S]`$ $`=`$ $`{\displaystyle \int \frac{d^4p}{(2\pi )^4i}\text{Tr}\left[\mathrm{ln}(S_0^{-1}(p)S(p))-S_0^{-1}(p)S(p)+1\right]},`$ (2)
$`V_2[S]`$ $`=`$ $`{\displaystyle \frac{i}{2}}C_2{\displaystyle \int \frac{d^4p}{(2\pi )^4i}\int \frac{d^4q}{(2\pi )^4i}\overline{g}^2(p,q)\text{Tr}\left(\gamma _\mu S(p)\gamma _\nu S(q)\right)D^{\mu \nu }(p-q)},`$ (3)
where $`C_2=(N_c^2-1)/(2N_c)`$ is the quadratic Casimir operator for the color $`SU(N_c)`$ group, $`S_0(p)`$ is the bare quark propagator, $`\overline{g}^2(p,q)`$ is the running coupling of one–loop order, $`D^{\mu \nu }(p)`$ is the gluon propagator (which is diagonal in the color space) and “Tr” refers to Dirac, flavor and color matrices. The two-loop potential $`V_2`$ is given by the vacuum graph of the fermion one-loop diagram with one gluon exchange (see Fig. 1).
After Wick rotation, we use the following approximation according to Higashijima and Miransky
$`\overline{g}^2(p_E,q_E)=\theta (p_E-q_E)\overline{g}^2(p_E)+\theta (q_E-p_E)\overline{g}^2(q_E).`$ (4)
In this approximation and in the Landau gauge, no renormalization of the quark wave function is required and the CJT effective potential is expressed in terms of $`\mathrm{\Sigma }(p_E)`$ the dynamical mass function of quark:
$`V[\mathrm{\Sigma }(p_E)]`$ $`=`$ $`V_1[\mathrm{\Sigma }(p_E)]+V_2[\mathrm{\Sigma }(p_E)],`$ (5)
$`V_1[\mathrm{\Sigma }(p_E)]`$ $`=`$ $`2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}\mathrm{ln}{\displaystyle \frac{\mathrm{\Sigma }^2(p_E)+p_E^2}{m^2(\mathrm{\Lambda })+p_E^2}}`$ (7)
$`+4{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}{\displaystyle \frac{\mathrm{\Sigma }(p_E)(\mathrm{\Sigma }(p_E)-m(\mathrm{\Lambda }))}{\mathrm{\Sigma }^2(p_E)+p_E^2}},`$
$`V_2[\mathrm{\Sigma }(p_E)]`$ $`=`$ $`-6C_2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}{\displaystyle \frac{d^4q_E}{(2\pi )^4}}{\displaystyle \frac{\overline{g}^2(p_E,q_E)}{(p_E-q_E)^2}}`$ (9)
$`\times {\displaystyle \frac{\mathrm{\Sigma }(p_E)}{\mathrm{\Sigma }^2(p_E)+p_E^2}}{\displaystyle \frac{\mathrm{\Sigma }(q_E)}{\mathrm{\Sigma }^2(q_E)+q_E^2}}.`$
Here, an overall factor (the number of light quarks times the number of colors) is omitted and $`m(\mathrm{\Lambda })`$ is the bare quark mass. In the above equations we temporarily introduced the ultraviolet cutoff $`\mathrm{\Lambda }`$ in order to make the bare quark mass well-defined.
The extremum condition for $`V`$ with respect to $`\mathrm{\Sigma }(p_E)`$ leads to the following SDE for the quark self-energy
$`\mathrm{\Sigma }(p_E)=m(\mathrm{\Lambda })+3C_2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4q_E}{(2\pi )^4}}{\displaystyle \frac{\overline{g}^2(p_E,q_E)}{(p_E-q_E)^2}}{\displaystyle \frac{\mathrm{\Sigma }(q_E)}{\mathrm{\Sigma }^2(q_E)+q_E^2}}.`$ (10)
In Higashijima–Miransky approximation, since the argument of the running coupling has no angle dependence, we first perform the angle integration. As a result, we understand that the procedure is achieved equivalently by replacing $`(p_E-q_E)^{-2}`$ by $`\theta (p_E-q_E)(p_E^2)^{-1}+\theta (q_E-p_E)(q_E^2)^{-1}`$ in Eq. (10). Then we can reduce Eq. (10) to the following differential equation
$`{\displaystyle \frac{\mathrm{\Sigma }(p_E)}{\mathrm{\Sigma }^2(p_E)+p_E^2}}={\displaystyle \frac{(4\pi )^2}{3C_2}}{\displaystyle \frac{d}{p_E^2dp_E^2}}\left({\displaystyle \frac{1}{\mathrm{\Delta }(p_E)}}{\displaystyle \frac{d\mathrm{\Sigma }(p_E)}{dp_E^2}}\right),`$ (11)
and the two boundary conditions
$`{\displaystyle \frac{1}{\mathrm{\Delta }(p_E)}}{\displaystyle \frac{d\mathrm{\Sigma }(p_E)}{dp_E^2}}|_{p_E=0}`$ $`=`$ $`0,`$ (12)
$`\mathrm{\Sigma }(p_E)+{\displaystyle \frac{𝒟(p_E)}{\mathrm{\Delta }(p_E)}}{\displaystyle \frac{d\mathrm{\Sigma }(p_E)}{dp_E^2}}|_{p_E=\mathrm{\Lambda }}`$ $`=`$ $`m(\mathrm{\Lambda }),`$ (13)
where the functions
$`𝒟(p_E)={\displaystyle \frac{\overline{g}^2(p_E)}{p_E^2}}`$ (14)
and
$`\mathrm{\Delta }(p_E)=-{\displaystyle \frac{d}{dp_E^2}}𝒟(p_E),`$ (15)
are introduced.
Substituting Eqs. (10) and (11) into Eqs. (7) and (9), we obtain
$`V[\mathrm{\Sigma }(p_E)]`$ $`=`$ $`2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}\mathrm{ln}{\displaystyle \frac{\mathrm{\Sigma }^2(p_E)+p_E^2}{m^2(\mathrm{\Lambda })+p_E^2}}`$ (17)
$`+2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}{\displaystyle \frac{\mathrm{\Sigma }(p_E)(\mathrm{\Sigma }(p_E)-m(\mathrm{\Lambda }))}{\mathrm{\Sigma }^2(p_E)+p_E^2}}`$
$`=`$ $`2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}\mathrm{ln}{\displaystyle \frac{\mathrm{\Sigma }^2(p_E)+p_E^2}{m^2(\mathrm{\Lambda })+p_E^2}}`$ (19)
$`+{\displaystyle \frac{2(4\pi )^2}{3C_2}}{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}\left[\mathrm{\Sigma }(p_E)-m(\mathrm{\Lambda })\right]{\displaystyle \frac{d}{p_E^2dp_E^2}}\left({\displaystyle \frac{1}{\mathrm{\Delta }(p_E)}}{\displaystyle \frac{d\mathrm{\Sigma }(p_E)}{dp_E^2}}\right)`$
$`=`$ $`2{\displaystyle \int ^\mathrm{\Lambda }}{\displaystyle \frac{d^4p_E}{(2\pi )^4}}\mathrm{ln}{\displaystyle \frac{\mathrm{\Sigma }^2(p_E)+p_E^2}{m^2(\mathrm{\Lambda })+p_E^2}}`$ (21)
$`-{\displaystyle \frac{2}{3C_2}}{\displaystyle \int ^{\mathrm{\Lambda }^2}}𝑑p_E^2{\displaystyle \frac{1}{\mathrm{\Delta }(p_E)}}\left({\displaystyle \frac{d}{dp_E^2}}\mathrm{\Sigma }(p_E)\right)^2+V_S,`$
where we used a partial integration in the last line and
$`V_S`$ $`=`$ $`F(\mathrm{\Lambda })-F(0),`$ (22)
$`F(p_E)`$ $`=`$ $`{\displaystyle \frac{2}{3C_2}}\left[\mathrm{\Sigma }(p_E)-m(\mathrm{\Lambda })\right]{\displaystyle \frac{1}{\mathrm{\Delta }(p_E)}}{\displaystyle \frac{d\mathrm{\Sigma }(p_E)}{dp_E^2}}.`$ (23)
Hereafter we consider the effective potential in the continuum limit $`(\mathrm{\Lambda }\to \mathrm{\infty })`$. Let us begin by evaluating $`F(\mathrm{\Lambda })`$ using the running coupling
$`\overline{g}^2(p_E)`$ $`=`$ $`{\displaystyle \frac{2\pi ^2a}{\mathrm{ln}p_E^2}},a\equiv {\displaystyle \frac{24}{11N_c-2n_f}},`$ (24)
and the corresponding asymptotic form of the mass function
$`\mathrm{\Sigma }(p_E)`$ $`\simeq `$ $`m(\mathrm{\Lambda })\left({\displaystyle \frac{\mathrm{ln}p_E^2}{\mathrm{ln}\mathrm{\Lambda }^2}}\right)^{-a/2}+{\displaystyle \frac{\sigma }{p_E^2}}(\mathrm{ln}p_E^2)^{a/2-1},`$ (25)
where $`n_f`$ is the number of flavors which controls the running coupling. Throughout this paper, we put $`N_c=n_f=3`$, namely $`a=8/9`$. The parameter $`\sigma `$ is related to the order parameter of the chiral symmetry $`\langle \overline{q}q\rangle `$ as
$`\sigma =-{\displaystyle \frac{2\pi ^2a\langle \overline{q}q\rangle }{3}}.`$ (26)
When the chiral symmetry is exact, i.e., $`m(\mathrm{\Lambda })=0`$, using Eqs. (24) and (25) we can easily show that $`F(\mathrm{\Lambda })`$ vanishes in the continuum limit, i.e., $`\underset{\mathrm{\Lambda }\to \mathrm{\infty }}{\mathrm{lim}}F(\mathrm{\Lambda })=0`$. As for $`F(0)`$, since we introduce an infrared finite running coupling and mass function in Eqs. (29) and (30), we can set $`F(0)=0`$. Altogether, in the continuum limit, we get $`V_S=0`$ and the modified version of the CJT effective potential is obtained as
$`V[\mathrm{\Sigma }(p_E)]`$ $`=`$ $`2{\displaystyle \int \frac{d^4p_E}{(2\pi )^4}\mathrm{ln}\frac{\mathrm{\Sigma }^2(p_E)+p_E^2}{p_E^2}}`$ (28)
$`-{\displaystyle \frac{2}{3C_2}}{\displaystyle \int 𝑑p_E^2\frac{1}{\mathrm{\Delta }(p_E)}\left(\frac{d}{dp_E^2}\mathrm{\Sigma }(p_E)\right)^2}.`$
We can also show that $`V_S=0`$, namely Eq. (28) holds for nonzero bare quark mass .
A few comments are in order.
(1) The extremum condition for Eq. (28) with respect to $`\mathrm{\Sigma }(p_E)`$ leads to Eq. (11) which is equivalent to the original equation (10) in Higashijima–Miransky approximation apart from the two boundary conditions. We will take account of these conditions when we introduce the trial mass function.
(2) In the chiral limit, Eq. (28) is the same as the expression given in Refs. . However, even if the chiral symmetry is explicitly broken, we can use the same expression for $`V`$ . We do not require the finite renormalization adopted in Ref. .
Now we are in a position to introduce a modified running coupling and a trial mass function. We use the following QCD-like running coupling
$`\overline{g}^2(p_E)={\displaystyle \frac{2\pi ^2a}{\mathrm{ln}(p_E^2+p_R^2)}},`$ (29)
where $`p_R`$ is a parameter to regularize the divergence of the QCD running coupling at $`p_E=1`$ (i.e., at $`p_E=\mathrm{\Lambda }_{\text{QCD}}`$ in our units). This running coupling approximately develops according to the one-loop QCD renormalization group equation, while it smoothly approaches a constant as $`p_E^2`$ decreases.
Hereafter we consider the chiral limit; i.e., the $`m(\mathrm{\Lambda })=0`$ case. Corresponding to the QCD-like running coupling, the SDE with the two boundary conditions suggests the following trial mass function
$`\mathrm{\Sigma }(p_E)={\displaystyle \frac{\sigma }{p_E^2+p_R^2}}\left[\mathrm{ln}(p_E^2+p_R^2)\right]^{a/2-1},`$ (30)
where $`\sigma `$ is the same as before.
Using Eqs. (29) and (30), we can express $`V[\mathrm{\Sigma }(p_E)]`$ as a function of $`\sigma `$ the order parameter. A further discussion of the CJT effective potential and the dynamical chiral symmetry breaking in QCD-like theory at zero temperature and density can be found in Refs. .
### B Effective potential at finite temperature and density
In this subsection we discuss the effective potential at finite temperature and density. In order to calculate the effective potential at finite temperature and density, we apply the imaginary time formalism
$`{\displaystyle \int \frac{dp_4}{2\pi }f(p_4)}\to T{\displaystyle \underset{n=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}}f(\omega _n+i\mu ),(n\in 𝒁),`$ (31)
where $`\omega _n=(2n+1)\pi T`$ is the fermion Matsubara frequency and $`\mu `$ represents the quark chemical potential. In addition, we need to define the running coupling and the (trial) mass function at finite $`T`$ and/or $`\mu `$. We adopt the following real functions for $`𝒟_{T,\mu }(p)`$ and $`\mathrm{\Sigma }_{T,\mu }(p)`$
$`𝒟_{T,\mu }(p)`$ $`=`$ $`{\displaystyle \frac{2\pi ^2a}{\mathrm{ln}(\omega _n^2+𝒑^2+p_R^2)}}{\displaystyle \frac{1}{\omega _n^2+𝒑^2}},`$ (32)
$`\mathrm{\Sigma }_{T,\mu }(p)`$ $`=`$ $`{\displaystyle \frac{\sigma }{\omega _n^2+𝒑^2+p_R^2}}\left[\mathrm{ln}(\omega _n^2+𝒑^2+p_R^2)\right]^{a/2-1}.`$ (33)
In Eq. (32) we do not introduce the $`\mu `$ dependence in $`𝒟_{T,\mu }(p)`$. The gluon momentum squared is the most natural argument of the running coupling at zero temperature and density, in the light of the chiral Ward–Takahashi identity . Then it is reasonable to assume that $`𝒟_{T,\mu }(p)`$ does not depend on the quark chemical potential.
As concerns the mass function, we use the same function as in Eq. (30), except that we replace $`p_4`$ with $`\omega _n`$. As already noted in Sec. II A, the quark wave function does not suffer renormalization in the Landau gauge for $`T=\mu =0`$, while the same does not hold at finite $`T`$ and/or $`\mu `$. However, we assume that the wave function renormalization is not required even at finite $`T`$ and/or $`\mu `$, for simplicity.
Furthermore, we neglect the $`T`$-$`\mu `$ dependent terms in the quark and gluon propagators which arise from the perturbative expansion. We expect that the phase structure is not so affected by these approximations.
Using Eqs. (32) and (33), it is easy to write down the effective potential at finite temperature and chemical potential (see Appendix). Assuming the mean-field expansion, the effective potential can be expanded as a power series in $`\sigma `$ with finite coefficients $`a_{2n}(T,\mu )`$
$`V(\sigma ;T,\mu )=a_2(T,\mu )\sigma ^2+a_4(T,\mu )\sigma ^4+\mathrm{\cdots }.`$ (34)
Once we know the value of $`\sigma _{min}`$, the location of the minimum of $`V`$, we can determine the value of $`\langle \overline{q}q\rangle `$ using the following relation
$`\langle \overline{q}q\rangle =-T{\displaystyle \underset{n}{\sum }}{\displaystyle \int \frac{d^3p}{(2\pi )^3}\text{Tr}S_{T,\mu }(p)},`$ (35)
where $`S_{T,\mu }(p)`$ is the quark propagator at finite $`T`$ and/or $`\mu `$ in our approximations and “Tr” refers to Dirac and color matrices. However, in this paper, we still determine $`\langle \overline{q}q\rangle `$ through the relation $`\langle \overline{q}q\rangle =-(3/2\pi ^2a)\sigma _{min}`$. We have confirmed that this relation works well even at finite $`T`$ and/or $`\mu `$.
## III CHIRAL PHASE TRANSITION AT HIGH TEMPERATURE AND DENSITY
In our numerical calculation, as mentioned before, we put $`N_c=n_f=3`$. Furthermore, since it is known that quantities such as $`\langle \overline{q}q\rangle `$ and $`f_\pi `$ are quite stable under the change of the infrared regularization parameter , we fix $`t_R\equiv \mathrm{ln}(p_R^2/\mathrm{\Lambda }_{\text{QCD}}^2)`$ to $`0.1`$ and determine the value of $`\mathrm{\Lambda }_{\text{QCD}}`$ by the condition $`f_\pi =93`$ MeV at $`T=\mu =0`$. We approximately reproduce $`f_\pi `$ using the Pagels–Stokar formula :
$`f_\pi ^2=4N_c{\displaystyle \int \frac{d^4p_E}{(2\pi )^4}\frac{\mathrm{\Sigma }(p_E)}{(\mathrm{\Sigma }^2(p_E)+p_E^2)^2}\left(\mathrm{\Sigma }(p_E)-\frac{p_E^2}{2}\frac{d\mathrm{\Sigma }(p_E)}{dp_E^2}\right)},`$ (36)
and obtain $`\mathrm{\Lambda }_{\text{QCD}}=738`$ MeV. The value of $`\mathrm{\Lambda }_{\text{QCD}}`$ is almost the same as the one obtained in the previous paper in which we used Eqs. (7) and (9).
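For orientation, the Pagels–Stokar integral of Eq. (36) with the trial mass function of Eq. (30) reduces to a one-dimensional radial integral that is easy to evaluate numerically. The sketch below works in units $`\mathrm{\Lambda }_{\text{QCD}}=1`$ with $`t_R=0.1`$; the value of $`\sigma `$ is an arbitrary assumption, purely to illustrate the integration (in the analysis $`\sigma `$ is fixed by minimizing the effective potential):

```python
import numpy as np
from scipy.integrate import quad

Nc, nf = 3, 3
a = 24.0 / (11 * Nc - 2 * nf)        # = 8/9
t_R = 0.1
pR2 = np.exp(t_R)                    # infrared regulator, p_R^2 = e^{t_R}
sigma = 0.5                          # assumed trial value, for illustration only

def Sigma(p2):                       # trial mass function, Eq. (30)
    u = p2 + pR2
    return sigma * np.log(u) ** (a / 2 - 1) / u

def dSigma_dp2(p2):                  # analytic derivative of Eq. (30)
    u, b = p2 + pR2, a / 2 - 1
    L = np.log(u)
    return sigma * L ** (b - 1) * (b - L) / u**2

def integrand(p):
    p2 = p * p
    S = Sigma(p2)
    return p**3 * S * (S - 0.5 * p2 * dSigma_dp2(p2)) / (S * S + p2) ** 2

# d^4p_E = 2 pi^2 p^3 dp, so f_pi^2 = (Nc / 2 pi^2) * radial integral
fpi2 = Nc / (2 * np.pi**2) * quad(integrand, 0.0, np.inf)[0]
print("f_pi =", np.sqrt(fpi2), "(in units of Lambda_QCD)")
```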
### A $`T\ne 0,\mu =0`$ case
Fig. 2 shows the $`T`$–dependence of the effective potential at $`\mu =0`$. We see that $`\sigma _{min}`$, the minimum of the effective potential, goes to zero continuously as the temperature grows. Thus we have a second-order phase transition at $`T_c=129`$ MeV. Fig. 3 shows the temperature dependence of $`\langle \overline{q}q\rangle ^{1/3}`$.
### B $`T=0,\mu \ne 0`$ case
Fig. 4 shows the $`\mu `$–dependence of the effective potential at $`T=0`$. For small values of $`\mu `$, the absolute minimum is nontrivial. However, we find that the trivial and the nontrivial minima coexist at $`\mu =422`$ MeV. For larger values of $`\mu `$, the energetically favored minimum moves to the origin. Thus we have a first-order phase transition at $`\mu _c=422`$ MeV. Fig. 5 shows the chemical potential dependence of $`\langle \overline{q}q\rangle ^{1/3}`$. The chiral condensate vanishes discontinuously at $`\mu =\mu _c`$.
### C $`T\ne 0,\mu \ne 0`$ case
In the same way as the previous two cases, we determine the critical line on the $`T`$-$`\mu `$ plane (see Fig.6). The position of the tricritical point “$`P`$” is determined by the condition
$`a_2(T_P,\mu _P)=a_4(T_P,\mu _P)=0,`$ (37)
in Eq. (34). Solving this equation, we have
$`(T_P,\mu _P)=(107,210)\text{MeV}.`$ (38)
We have varied $`t_R`$ from $`0.1`$ to $`0.3`$ in order to examine the $`t_R`$ dependence of the position of $`P`$. As a result, for instance, we have
$`(T_P,\mu _P)`$ $`=`$ $`(104,207)\text{MeV}\text{for }t_R=0.2,`$ (39)
$`=`$ $`(101,208)\text{MeV}\text{for }t_R=0.3.`$ (40)
We note that the value of $`\mathrm{\Lambda }_{\text{QCD}}`$ has been determined at $`T=\mu =0`$ by the condition $`f_\pi =93`$ MeV for each value of $`t_R`$. Thus we confirmed that the position of $`P`$ is stable under the change of $`t_R`$.
## IV CONCLUSION
In this paper we studied the chiral phase transition at high temperature and/or density in the QCD-like theory.
We extended the effective potential to finite $`T`$ and $`\mu `$ and studied the phase structure. We found the second-order phase transition at $`T_c=129`$ MeV along the $`\mu =0`$ line and the first-order phase transition at $`\mu _c=422`$ MeV along the $`T=0`$ line. We also studied the phase diagram and found a tricritical point $`P`$ at $`(T_P,\mu _P)=(107,210)`$ MeV. Phase diagrams with similar structure have been obtained in other QCD-like theories . As concerns the position of the tricritical point, however, our result is not close to theirs. Let us consider the reason why our model gives the different result. In Ref. , they used the momentum independent coupling and the mass function without logarithmic behavior. The values of $`T_c`$ and $`\mu _c`$ of Ref. are about the same as ours. However, the position of the tricritical point is in the region of small $`\mu `$. The discrepancy may arise from the fact that: (1) They did not use the variational method, but numerically solved the SDE; (2) The treatment of the gluon propagator at finite $`T`$ and/or $`\mu `$ is different from ours. Our result is rather consistent with that of the NJL model and the random matrix model . They obtained
$`T_P\simeq 100\mathrm{MeV},3\mu _P\simeq (600{-}700)\mathrm{MeV}.`$ (41)
Recently it was pointed out that the values of $`T`$ and $`\mu `$ achieved in high-energy heavy-ion collisions may be close to the tricritical point and it may be possible to observe some signals . Thus it is significant that three different models show almost the same results.
Finally, some comments are in order. In this paper, we modified the form of the CJT effective potential at $`T=\mu =0`$ using the two representations of the SDE. Our formulation of the effective potential is entirely based on the Higashijima–Miransky approximation. It is known that this approximation breaks the chiral Ward–Takahashi identity. Therefore, it is preferable to formulate the effective potential without this approximation. However, it seems that the results do not depend on the choice of the argument momentum . Moreover, the treatment of the quark and the gluon propagators at finite $`T`$ and/or $`\mu `$ is somewhat oversimplified in the present work. We would like to consider the wave function renormalization and more appropriate functional forms for $`𝒟_{T,\mu }(p)`$ and $`\mathrm{\Sigma }_{T,\mu }(p)`$. By including a finite quark mass, we can study a more realistic situation where the chiral symmetry is explicitly broken. In studies of the SDE with a finite quark mass, it is known that there is a difficulty in removing a perturbative contribution from the quark condensate . In the effective potential approach, however, we are free from such a difficulty. The study of the phase structure with a finite quark mass is now in progress . Furthermore, we also plan to study quark pairing, including color superconductivity \[2,4–8\] and “color-flavor locking” (for the $`N_c=N_f=3`$ case), in the QCD-like theory.
## APPENDIX
In this appendix, we show the effective potential explicitly. In the first place, we consider the case of zero temperature and finite chemical potential.
Using Eq. (33), we obtain
$`V_1`$ $`=`$ $`2{\displaystyle \frac{d^4p_E}{(2\pi )^4}\mathrm{ln}\frac{\mathrm{\Sigma }_{T,\mu }^2(p)+(p_4+i\mu )^2+𝒑^2}{(p_4+i\mu )^2+𝒑^2}}`$ (42)
$`=`$ $`{\displaystyle \frac{1}{4\pi ^3}}{\displaystyle \int _p}\mathrm{ln}\left[{\displaystyle \frac{(\mathrm{\Sigma }_{T,\mu }^2(p)+p_4^2+𝒑^2-\mu ^2)^2+(2\mu p_4)^2}{(p_4^2+𝒑^2-\mu ^2)^2+(2\mu p_4)^2}}\right],`$ (43)
where the imaginary part of $`V_1`$ is an odd function of $`p_4`$ and has therefore been removed in Eq. (43), and
$`{\displaystyle \int _p}={\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}𝑑p_4{\displaystyle \int _0^{\mathrm{\infty }}}d|𝒑|𝒑^2.`$ (44)
In Eq. (28), we carry out the momentum-differentiation and, then, use Eqs. (32) and (33). $`V_2`$ is obtained as
$`V_2`$ $`=`$ $`-{\displaystyle \frac{4\sigma ^2}{3\pi ^3C_2a}}{\displaystyle \int _p}{\displaystyle \frac{(p_4^2+𝒑^2)^2[\mathrm{ln}(p_4^2+𝒑^2+p_R^2)]^{a-2}}{(p_4^2+𝒑^2+p_R^2)\mathrm{ln}(p_4^2+𝒑^2+p_R^2)+p_4^2+𝒑^2}}`$ (46)
$`\times {\displaystyle \frac{1}{(p_4^2+𝒑^2+p_R^2)^3}}\left[\mathrm{ln}(p_4^2+𝒑^2+p_R^2)+1-{\displaystyle \frac{a}{2}}\right]^2.`$
At finite temperature and chemical potential, the $`p_4`$ integration in Eqs. (43) and (46) is replaced by the sum over the Matsubara frequencies. |
# On the Clustering of GRBs on the Sky
## Introduction
Recent optical identification of Gamma-ray bursts (GRBs) has established the cosmological origin of GRBs, and redshifts have been measured in 9 cases (for a comprehensive list of references on this subject see greiner ). However, the physical origin of these bursts, their environment, and their relationship with other astrophysical objects still remain an unsolved puzzle. If these bursts are associated with the underlying large scale structure in the universe, then they should show clustering in their positions on the sky, as expected of cosmological objects.
One way to search for the clustering is to determine the two-point angular auto-correlation function of the burst positions blum ; hart ; lamb . We compute this quantity for the 4th (current) BATSE catalog (2494 objects) in the next section. In §3 we calculate the two-point correlation function from existing, viable, theoretical models of structure formation. §4 summarizes the main results.
## Two-point correlation function
Given a two-dimensional distribution of $`N`$ point objects in a solid angle $`\mathrm{\Omega }`$, the two-point angular correlation function is defined using the relation peebles :
$$n_{\mathrm{DD}}=nd\mathrm{\Omega }N(1+w(\theta ))$$
(1)
Here $`n_{\mathrm{DD}}`$ is the total number of pairs between angular separation $`\theta `$ and $`\theta +d\theta `$; $`n=N/\mathrm{\Omega }`$; and $`d\mathrm{\Omega }`$ is an infinitesimal solid angle centered around $`\theta `$. $`w(\theta )`$ is the two-point correlation function. It measures the excess of pairs over a random Poisson distribution at a given separation $`\theta `$. Eq. (1) is not very convenient for estimating the two-point correlation function and several alternative estimators of the two-point angular correlation function have been suggested. We experimented with several estimators peeb1 ; davis ; hamil ; landy . The advantage of using either hamil or landy is that the error on the two-point function is nearly Poissonian; the leading term in the error for the other two estimators is $`1/N`$, which can dominate over the Poisson term for large bin size landy . In this paper we report results using the estimator given by Landy and Szalay landy :
$$\stackrel{~}{w}(\theta )=\frac{n_{\mathrm{DD}}2n_{\mathrm{DR}}+n_{\mathrm{RR}}}{n_{\mathrm{RR}}}$$
(2)
Here $`n_{\mathrm{DD}}`$ is the number of pairs (for a given $`\theta `$) in the GRB catalog, $`n_{\mathrm{RR}}`$ is the number of pairs in a mock, random, isotropic sample, and $`n_{\mathrm{DR}}`$ is the catalog-random pair count. The variance of $`\stackrel{~}{w}(\theta )`$ is given by:
$$\delta \stackrel{~}{w}(\theta )^2\simeq \frac{1}{n_{\mathrm{DD}}}.$$
(3)
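The estimator of Eq. (2) is straightforward to implement. The following Python sketch applies it to a mock isotropic “catalog” against a denser random sample, so the recovered $`w(\theta )`$ should scatter around zero within the errors of Eq. (3); bin edges and sample sizes are illustrative choices (kept small for speed), and no exposure-function or localization effects are modeled:

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_points(n):
    """Unit vectors drawn uniformly on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def pair_counts(a, b, edges, auto=False):
    """Histogram of pair separations (degrees) between point sets a and b."""
    cos = np.clip(a @ b.T, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos))
    if auto:                       # keep each pair once, drop self-pairs
        theta = theta[np.triu_indices_from(theta, k=1)]
    return np.histogram(theta.ravel(), bins=edges)[0].astype(float)

D = isotropic_points(500)          # mock "catalog"
R = isotropic_points(2000)         # denser random sample
edges = np.linspace(0.0, 180.0, 37)

DD = pair_counts(D, D, edges, auto=True)
RR = pair_counts(R, R, edges, auto=True)
DR = pair_counts(D, R, edges)

# Normalize counts to pair totals before combining (Landy-Szalay)
nd, nr = len(D), len(R)
dd = DD / (nd * (nd - 1) / 2)
rr = RR / (nr * (nr - 1) / 2)
dr = DR / (nd * nr)
w = (dd - 2 * dr + rr) / rr
print(np.round(w[:6], 4))          # consistent with zero
```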
In Figure 1 (Left Panel) we show the angular correlation function with 1$`\sigma `$ error bars for the current BATSE catalog (2494 objects), with the 1$`\sigma `$ errors given by Eq. (3).
* The two-point angular correlation function is consistent with zero on nearly all angular scales of interest.
* From Figure 1 (Left Panel) it is seen that at several angular scales a 1$`\sigma `$ detection of the correlation function seems to be possible. To make a definitive statement about a detection we need to take into account several uncertainties in our analysis. One of the dominant sources of uncertainty is the heterogeneity of the sample with respect to the error in angular positions of the GRBs (the localization uncertainty varies from $`1^{\circ }`$ to $`10^{\circ }`$). This means that errors at $`\theta \lesssim 10^{\circ }`$ are much larger than seen in Figure 1 (Left Panel). Another major source of uncertainty comes from the anisotropic exposure function of the BATSE instrument, which results in a non-zero correlation function even for a completely isotropic intrinsic distribution (for more details see http://www.batse.msfc.nasa.gov/batse/grb/catalog/). Though it is possible that some of the signal at large angular scales is not an artifact, more careful analysis would be required to confirm it.
## Theoretical Predictions
The two-point angular correlation function can be related to the three-dimensional two-point correlation function $`\xi (r)`$ using Limber’s equation (for details see peebles ). If we assume that the GRBs constitute a volume-limited sample up to a distance $`r_{\mathrm{max}}`$ and that the comoving number density of objects is constant, Limber’s equation reduces to:
$$w(\theta )=\frac{\int _0^{r_{max}}\int _0^{r_{max}}r_1^2r_2^2𝑑r_1𝑑r_2\xi (r_{12},z_1,z_2)}{\left[\int _0^{r_{max}}r^2𝑑r\right]^2}$$
(4)
Here
$$r_{12}^2=r_1^2+r_2^2-2r_1r_2\mathrm{cos}\theta .$$
(5)
$`r`$ is the coordinate distance in an isotropic, homogeneous universe. The two-point correlation function is related to the power spectrum $`P(k)`$ of the density fluctuations as:
$$\xi (r,t)=b^2\frac{1}{2\pi ^2}\int _0^{\mathrm{\infty }}k^2𝑑kP(k,t)\frac{\mathrm{sin}(kr)}{kr}.$$
(6)
$`b`$, the bias factor, denotes the clustering of visible matter relative to the dark matter. While its absolute value is still uncertain, the relative bias between nearby rich clusters of galaxies and optically-identified galaxies is $`\sim 5`$. Hence, if GRBs originate in clusters rather than ordinary galaxies their correlation can be 25 times larger. In this paper, we use the linear perturbation theory predictions for $`P(k,t)`$. We have checked that for the angular scales of interest ($`\theta \gtrsim 5^{\circ }`$) it is a reasonable assumption. We use the BBKS fit bardeen for the linear power spectrum of the standard CDM (sCDM) model and some of its variants. We normalize the power spectrum by requiring $`\sigma _8=0.7`$. The time dependence of the linear power spectrum is $`P(k,t)\propto (1+z)^{-2}`$, which is also the time dependence of the two-point correlation function. It should be noted that in general the two-point correlation function depends on both the separation between two points and their redshifts, as indicated in Eq. (4). However, the two-point correlation function is negligible for points separated by a large enough redshift difference. Therefore, for most purposes $`\xi (r,t)\propto (1+z)^{-2}`$, where $`z`$ refers to the redshift of either of the two points.
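For reference, the linear $`\xi (r)`$ entering Eq. (6) (with $`b=1`$) can be evaluated numerically from the BBKS transfer-function fit. The sketch below assumes an sCDM shape parameter $`\mathrm{\Gamma }=\mathrm{\Omega }h=0.5`$ and the $`\sigma _8=0.7`$ normalization used in the text, with $`k`$ in h/Mpc and $`r`$ in Mpc/h; the finite $`k`$-cutoff is an assumption that is adequate at these separations:

```python
import numpy as np
from scipy.integrate import quad

Gamma, sigma8, kmax = 0.5, 0.7, 200.0

def T_bbks(k):                            # BBKS transfer-function fit
    q = k / Gamma
    return (np.log(1 + 2.34 * q) / (2.34 * q)
            * (1 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def P_unnorm(k):
    return k * T_bbks(k) ** 2             # n = 1 primordial spectrum

def W_th(x):                              # top-hat window function
    return 3 * (np.sin(x) - x * np.cos(x)) / x**3

# Normalize to sigma_8 with an 8 Mpc/h top-hat sphere
s8_unnorm = quad(lambda k: k**2 * P_unnorm(k) * W_th(8 * k) ** 2
                 / (2 * np.pi**2), 1e-4, kmax, limit=400)[0]
A = sigma8**2 / s8_unnorm

def xi(r):
    # xi(r) = (1/2 pi^2) Int k^2 P(k) sin(kr)/(kr) dk  (QAWO for the sin)
    f = lambda k: A * k * P_unnorm(k) / (2 * np.pi**2 * r)
    return quad(f, 1e-4, kmax, weight='sin', wvar=r, limit=400)[0]

for r in (1.0, 5.0, 10.0, 20.0):
    print(r, round(xi(r), 4))
```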
In Figure 1 (Right Panel) we show the theoretically predicted angular two-point correlation function for the sCDM model. The bias $`b`$ is taken to be one. If the observed GRBs constitute a complete sample up to $`z\sim 1`$ and they are assumed to be associated with highly biased structures like rich clusters, the value of the correlation function is $`\sim 10^{-4}`$ at $`5^{\circ }`$. This is the smallest angular scale at which information is possible in the BATSE catalog. At larger angles the correlation function typically scales as $`\theta ^{-1}`$.
## Conclusions and Summary
The two-point correlation function of the 4th (current) BATSE catalog (2494 objects) is consistent with zero at nearly all angular scales that can be probed in the BATSE catalog. This result is consistent with theory if the GRBs are assumed to trace the dark matter distribution with some bias and constitute a complete sample up to $`z\sim 1`$.
When can a detection of the two-point correlation function become possible? The error in the two-point correlation function scales as $`n_{\mathrm{DD}}^{-1/2}`$ (Eq. 3) and $`n_{\mathrm{DD}}\propto N^2`$, $`N`$ being the number of objects in the catalog. Therefore the error in estimating the two-point correlation function scales as $`1/N`$. Theory suggests that the value of the correlation function at $`\theta =5^{\circ }`$ is $`\sim 10^{-4}`$ if the GRB sample is assumed to be complete up to $`z\sim 1`$. We check that $`n_{\mathrm{DD}}`$ at $`\theta \simeq 5^{\circ }`$ is $`\sim 10^{-2}`$ times the total number of pairs ($`N^2/2`$) in the GRB sample. This would suggest that a detection might become possible at this angular scale when the number of objects in the sample exceeds $`10^5`$.
Future surveys like HETE-II and SWIFT will localize the GRBs to a few arc-minutes. This means smaller angular scales could be probed. And as the theoretically-predicted two-point correlation function scales as $`\theta ^{-1}`$, the probability of detection will increase. SWIFT will detect nearly 1000 objects over a period of 3 years with an angular resolution of $`\sim 1^{\prime \prime }`$. However, though the two-point correlation function is large at these angular scales, the average separation between 1000 objects on the complete sky is $`\sim 6^{\circ }`$. Therefore, as long as $`w(\theta )\ll 1`$, the probability of finding an object within a few arcseconds of another is negligible. It is possible that $`w(\theta )\gtrsim 1`$ at sub-arcsecond scales. However, detailed analysis, taking into account the non-linear correction to the power spectrum of density perturbations, is needed to make precise theoretical predictions for the future surveys.
Acknowledgements: JG is supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF/DLR) under contract No. 50 QQ 9602 3. RJG acknowledges a travel grant from DFG (KON 1973/1999 and GR 1350/7-1) to attend this conference. SKS thanks S. Bharadwaj for pointing out an error. We thank D. H. Hartmann for valuable comments on the anisotropic exposure function of BATSE.
# What can BeppoSAX tell us about short GRBs: An update from the Subsecond GRB Project
## Introduction
The nature and the origin of short bursts, namely the events with a duration $`\lesssim `$ 2 s, which clearly represent a separate class from longer ones (Kouveliotou, 1993), is one of the most interesting and puzzling open problems in GRB studies. The celestial distribution of these events is clearly isotropic and their gamma-ray spectrum is predominantly harder than that of longer ones, while the two classes have the same peak flux range. A lot of questions naturally arise when considering the issue and comparing the phenomenology with the successful paradigm of fireball models: are the short bursts caused by a different emission mechanism? Do they originate in extragalactic sources as well? Is their LogN-LogS euclidean (Tavani, 1998)? Are there other duration subclasses (Cline, 1999)? Do they possess an afterglow signature like the longer ones? At present no prompt counterpart detection at any wavelength is available and the recent afterglow discoveries refer only to the long-burst class, leaving a conspicuous gap in our knowledge of the phenomenon. Hence, the identification and localization of at least some events of this kind, which we call “subsecond bursts”, remains a key objective of GRB research, an objective that can only be pursued at the moment by means of the BeppoSAX satellite and, in the near future, by means of missions like Hete2, Integral and Swift.
We present here some statistical considerations on the BeppoSAX triggering sensitivity to short bursts with the GRBM. Finally, the non-detection of subsecond events in the sample of WFC-selected GRBs is analysed and the new real-time procedure for on-ground triggering at the Scientific Operation Centre is briefly summarized.
## Estimation of GRBM Efficiency in triggering Short Bursts
The basic procedure for GRB counterpart detection at the BeppoSAX Scientific Operation Centre, performed since the very beginning of the mission, relies on the visual check of PDS Lateral Shield and WFC ratemeters around GRBM trigger times (Coletta, 1998). This procedure has recently been improved and extended, by discriminating as far as possible the short, particle-induced GRBM fake events from real bursts (Feroci, 1999), by monitoring the BATSE triggers as well, and by performing an additional on-ground search for excesses of counts on Lateral Shields and WFCs below the trigger threshold (Smith, 1999; Gandolfi, 1999). In this work we address the general problem of the BeppoSAX sensitivity to GRBs with duration less than 2 s, and we try to test the significance of the experimental results in the field, i.e. the non-detection of subsecond bursts at the quick-look level. The first point to evaluate is the efficiency of the GRBM in triggering short events, since the fundamental method used to identify GRBs - the only one systematically applied to data for a long time - is the check of the counts at trigger times both in WFC and LS lightcurves with 1 s bins. If the sample of BeppoSAX triggers does not include the majority of short bursts, we would expect a priori a very low probability of finding subsecond X-ray counterparts, because the general SOC monitoring of WFC transients at the quick-look level has a poor sensitivity to excesses of counts that last less than 8 s (the standard bin adopted to scan lightcurves on an orbital basis). In order to estimate the GRBM sensitivity in the duration range of interest with respect to BATSE, we have selected the subsample of common triggers in the period 10/11/1996 - 15/9/1998 (BeppoSAX/SOC Catalog, 1999; BATSE Current Catalog, 1999). This guarantees the reality of SAX events (and automatically discriminates “fake” triggers), which are also analyzed in high time resolution mode (8 ms bins) in order to obtain the T90 (i.e. the time interval during which they emit 90$`\%`$ of their fluence). We find 111 common triggers, with 24 events under the 2 s threshold, corresponding to 22$`\%`$ of the total. Comparing this value with the 26$`\%`$ of the BATSE duration distribution and assuming that the probability is described by a binomial, we conclude that the number of short bursts detected is still compatible with completeness, with a probability of 15$`\%`$, but is almost certainly affected by a bias against very brief events. In fact the bins with GRBs under 0.5 s exhibit a slight deficit, of the order of 25$`\%`$, with respect to the BATSE 4B duration distribution: consistently with expectations, the triggering efficiency is probably poor in that range due to the small fluence of the shortest events (Fig. 1). An offline analysis of BATSE positions, WFC fields of view and ratemeter counts at the relevant trigger times confirms the non-detection of a counterpart in each of the 24 cases.
## Increasing the Sensitivity with On-Ground Triggering: The BeppoSAX Subsecond Bursts Project
The efficiency of the GRBM depends strongly on the spectral properties of the burst and on its maximum flux in the 1 s on-board trigger time bin. Other selection effects that could affect GRB detection are mainly geometrical and depend on the effective area illuminated by the burst: in the case of an event exactly on the axis of the LS (this is the most favorable condition to catch the counterpart in the WFC field of view if the involved lateral shield is unit 1 or unit 3) the area is minimum and the probability of triggering the GRBM is consequently minimized. For this reason an on-ground triggering procedure (part of the BeppoSAX Subsecond GRB Project) was implemented at SOC last summer (Gandolfi, 1999), with the aim of increasing the global detection efficiency for short and faint long bursts. This semi-automatic procedure relies only on the comparison between lateral shield and WFC ratemeters over the whole orbit with a 1 s bin. The new trigger condition, which has been empirically chosen in order to achieve the best sensitivity to short events without overloading the amount of quick-look work and is certainly more effective than the on-board one, selects events which have a 3$`\sigma `$ excess of counts in the lateral shield and at the same time a 3.5$`\sigma `$ excess in the corresponding WFC ratemeter bin. The WFC threshold guarantees that all the bursts detectable in the celestial image, that is those with a global one-bin fluence of at least 70 counts in the 2-10 keV range (Smith, 1999), are selected. Interesting examples of short events detected by the new semi-automatic routine are GRB991014 and GRB991106: the first is slightly too long to be considered a subsecond event, the latter is probably an X-ray/GRB without a detected signal in the GRBM. Other candidate short bursts have been found, but the WFC excess of counts never satisfied the above condition and the transients were in fact not detectable in the corresponding WFC celestial image. The detection of a number of such events, even if it does not allow the bursts to be localized, could help to constrain the spectral properties of short GRBs with statistical significance.
## Experimental Results and Implications
The present number of GRBs with an X-ray prompt counterpart detected by BeppoSAX is 27, and none is classified as a short event. Is this result compatible with the expectations? If we assume completeness of our GRBM detections in the subsecond duration range (and we have seen that we are at least almost complete), we would expect 26$`\%`$ of short bursts also in the sample of GRBs with a revealed counterpart, i.e. $`\sim `$7. This is not the case, but we must remember that just 17 of these events were triggered by the GRBM, the only reliable detection method if the X-ray counterpart has a very low fluence (the GRB quick look procedure guarantees an optimal analysis of the WFC, at 1 s or finer resolution, only around trigger times). Hence any statistical consideration should be based on the GRBM trigger catalog, which has been, and is being, carefully inspected in real time by the Duty Scientists at SOC. Furthermore, the most stringent limit to subsecond counterpart identification is the WFC ratemeter sensitivity in the 1 s bin inspected, which decreases with the decreasing duration of the transient. Considering the duration distribution of short GRBs and the X-ray counterpart peak flux distribution for long bursts (Frontera, 1999; various IAUCs), we can estimate a rough global detection efficiency: requiring a minimum 3$`\sigma `$ signal in the 1 s bin of the WFC ratemeter, under the hypothesis of an average offset of 10 degrees from the centre of the field of view and of a Crab-like spectrum for the source, we should miss about 50$`\%`$ of the expected events. The conversion from the peak flux to the fluence that is compared to the threshold in counts for the bin ($`\sim `$36 in a standard WFC empty field) is done assuming a constant and maximum emission during T90. The expected number of subsecond counterparts in the GRBM-selected sample is therefore $`N_{xs}`$ = $`f_{\gamma s}`$ $`\times `$ $`\eta `$ $`\times `$ $`N_x`$, where $`f_{\gamma s}`$ is the percentage of short gamma-ray bursts, $`\eta `$ the estimated global detection efficiency for short X-ray counterparts, and $`N_x`$ the global number of X-ray counterparts detected by the BeppoSAX GRBM. With all the above assumptions we find a value of 2 subsecond events. The non-detection is not statistically significant because, considering again the binomial distribution (with p = $`f_{\gamma s}`$ $`\times `$ $`\eta `$ the probability of detecting the counterpart of a short GRB instead of a long one, x the GRBM-selected global number of counterpart detections, in this case none, and n the dimension of the sample, that is 17 events), its probability P(x,n,p) corresponds to P(0,17,0.13) $`\approx `$ 0.09. This result strongly encourages increasing the external trigger capability at the quick look level in order to maximize the probability of finding a subsecond counterpart. In fact, extending the real-time standard GRB procedure, with its robust and tested efficiency, to candidate events not selected by the GRBM will surely increase the chance of identifying low flux events.
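The arithmetic behind these two numbers is compact enough to spell out. The short sketch below (inputs taken directly from the text; the script itself is only illustrative) reproduces both the expected number of subsecond counterparts and the quoted non-detection probability:

```python
# Sketch of the two estimates quoted in this section.
f_gamma_s = 0.26   # fraction of short GRBs in the BATSE duration distribution
eta       = 0.5    # estimated global detection efficiency for short X-ray counterparts
N_x       = 17     # GRBM-triggered bursts with a revealed X-ray counterpart

# Expected number of subsecond counterparts in the GRBM-selected sample.
N_xs = f_gamma_s * eta * N_x
print(f"expected subsecond counterparts: {N_xs:.1f}")      # ~2

# Probability of detecting none of them: binomial with p = f_gamma_s * eta.
p = f_gamma_s * eta
P0 = (1 - p) ** N_x
print(f"P(0, {N_x}, {p:.2f}) = {P0:.2f}")                  # ~0.09
```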
## Conclusions
At present, no secure prompt X-ray counterparts of subsecond GRBs have been noticed in the BeppoSAX WFC celestial images, and we have shown that the GRBM triggered detections in this range are compatible with completeness, with a probable slight deficit at the shortest durations. This implies, taking into account the WFC sensitivity limits and assuming an X-ray to $`\gamma `$-ray peak ratio similar to that of longer GRBs, that $`\sim `$2 events out of the 17 triggered bursts with a revealed counterpart should be subsecond bursts. The non-detection is clearly not sufficient to test on a statistical basis any spectral hypothesis (i.e. the correctness of the assumption on the X-ray to $`\gamma `$-ray peak ratio). Furthermore, we have been able to rely systematically on non-triggered detections (that represent the 63$`\%`$ of the whole sample of discovered counterparts) only for about the last 6 months, thanks to the new on-ground triggering routine implemented at SOC. No inference can be made on the basis of the entire counterpart catalog (i.e. $`\sim `$3-4 events out of 27), because untriggered events with a prompt counterpart could easily have been missed at the quick look level with the old standard GRB procedure. On the other hand, archival analysis of the GRBM trigger catalog and of the WFC ratemeters of the whole mission is in progress, and the optimization of the quick look procedures to identify and analyze at best the high time resolution data has now been achieved within the frame of the BeppoSAX Subsecond Bursts Project.
# The Space Density of Primordial Gas Clouds near Galaxies and Groups and their Relation to Galactic HVCs
## 1 Introduction
After an extensive review of the High Velocity Cloud (HVC) literature, Wakker & van Woerden (1997) concluded that no single origin can account for the properties of the HVC population of neutral hydrogen clouds that surround the Milky Way Galaxy. Instead, several mechanisms, including infalling extragalactic clouds, cloud circulation within the Galactic halo driven by a galactic fountain, and a warped outer arm extension to the Galaxy, must be invoked. The Magellanic Stream and associated HVC complexes find their explanation in tidal interactions within the Local Group (LG) (cf. Putman et al., 1998).
A defining property for the HVCs has been the lack of associated stellar emission. This also means that spectroscopic parallax methods cannot be used to measure distances to the clouds, and the lack of known distances, in turn, hinders the calculation of the clouds’ physical properties. Recently, absorption lines (cf. van Woerden et al., 1999) and Balmer recombination line emission driven by reprocessing of the ionizing radiation originating in the Galactic star forming regions (Bland-Hawthorn et al., 1998; Bland-Hawthorn & Maloney, 1999) have been used to specify distances to a few of the clouds, indicating a range of distances from within the Galactic halo to greater than 50 kpc.
Recent interest has returned to the idea that HVCs could be primordial objects raining on the Galaxy, as either remnants from the formation of the LG or as representatives from an intergalactic population of dark matter dominated mini-halos in which hydrogen has collected and remained stable on cosmological time scales (Blitz et al. 1999, BSTHB; Braun & Burton 1999, BB). The requirement of gravitational stability without star formation places lower limits on the distances of the clouds from the Sun – a sort of independent distance indicator that can be applied to each cloud individually, depending on its H i flux, angular extent and velocity profile width. Typical derived distances are of order 1 Mpc, implying that this segment of the HVC population (1) inhabits the LG rather than the Galactic halo, (2) has typical H i mass per cloud greater than $`10^7\mathrm{M}_{\odot }`$, and (3) increases the LG H i mass budget by contributing $`4\times 10^{10}\mathrm{M}_{\odot }`$ in the case of the BSTHB sample. For the BB sample, which is restricted to 65 confirmed “compact HVCs”, the integral H i content adds $`10^9\mathrm{M}_{\odot }`$ to the LG.
Further impetus to search for a primordial population of low mass objects comes from simulations of galaxy and group formation (cf. Klypin et al., 1999), which predict of order 10 times more $`10^8\mathrm{M}_{\odot }`$ mini-halos in the LG than can be counted in the dwarf galaxy population. The association of HVCs with this missing population, as well as arguments based on the kinematics of the cloud population (BSTHB and BB), makes an appealing picture for the extragalactic/LG explanation.
Similar concerns motivated the extragalactic 21cm line survey by Weinberg et al. (1991), whose study of several representative environments (clusters and voids) found only gas-rich galaxies containing stars. Several extragalactic H i surveys of substantially larger volumes, down to more sensitive H i mass limits, have also found no objects with HVC properties (i.e. H i detections without associated starlight) (Zwaan et al., 1997; Spitzak & Schneider, 1999; Kilborn et al., 1999).
Clearly, if the HVC phenomenon is a common feature of galaxy formation and evolution, then extragalactic surveys of the halos and group environments of nearby galaxies should show evidence for this population. We take two approaches to the problem of placing the local HVC population in an extragalactic context. The first (in sections 2 and 3) is to compute the H i mass function for the Local Group, both for optically selected group members and for group members plus HVC populations as modeled by BSTHB and BB. The second, separate approach (section 4) is to calculate the probability that the narrow strip that the Arecibo<sup>1</sup><sup>1</sup>1The Arecibo Observatory is part of the National Astronomy and Ionosphere Center, which is operated by Cornell University under a cooperative agreement with the National Science Foundation. H i Strip Survey (AHiSS) (Sorar, 1994; Zwaan et al., 1997) makes through the halos of $`\sim `$200 galaxies and $`\sim `$14 groups would detect members of HVC populations in those systems.
## 2 The Local Group H i Mass Function
The H i mass function (HiMF) is used in extragalactic astronomy to quantify the space density of gas-rich galaxies and possible intergalactic clouds as a function of H i mass. For the field galaxy population, the HiMF has been determined accurately for $`M_{\mathrm{HI}}>10^{7.5}h_{65}^{-2}\mathrm{M}_{\odot }`$ and can be fit satisfactorily with a faint end slope $`\alpha \simeq -1.2`$ (Briggs & Rao, 1993; Zwaan et al., 1997; Kilborn et al., 1999). At lower masses, $`M_{\mathrm{HI}}<10^{7.5}h_{65}^{-2}\mathrm{M}_{\odot }`$, where there is considerable uncertainty due to the small number of detections, Schneider et al. (1998) have found evidence for a steep upturn in the tail of the HiMF. Although this steep tail has a tantalizing similarity to the signature of massive HVCs, it appears that at least one of the two H i signals responsible for the rise comes from a normal galaxy, and the other is too close to a bright star to exclude faint optical emission (Spitzak & Schneider, 1999).
We construct the HiMF for the LG from H i measurements of all known LG members as compiled recently by Mateo (1998). Within the LG volume, a total of 22 galaxies are known in which H i has been detected. The statistics are therefore poor, and consequently the HiMF is noisy. The LG HiMF is in one sense the best measured HiMF, with data over six orders of magnitude, compared to three or four orders of magnitude for the determination for the field galaxy population. On the other hand, the LG HiMF may suffer from severe selection effects due to obscuration by dust in the disk of the Milky Way galaxy. In order to estimate how many galaxies might have escaped detection so far, Mateo (1998) plotted the cumulative number of galaxies as a function of $`1-\mathrm{sin}|b|`$, for the total sample of LG galaxies. If LG galaxies were distributed equally over the sky, the resulting histogram should be a straight line. We applied the same method to the sample of galaxies with H i detections and found that 4 to 7 galaxies with H i masses $`>10^4\mathrm{M}_{\odot }`$ are likely to be missing at low Galactic latitude. The missing galaxies would be predominantly the ones with low optical surface brightness, but since there is no clear correlation between surface brightness and H i mass, it is not possible to make a more refined correction to the HiMF than just adding one galaxy to each half decade mass bin below $`M_{\mathrm{HI}}=10^8\mathrm{M}_{\odot }`$. For larger galaxies with $`M_{\mathrm{HI}}>10^8\mathrm{M}_{\odot }`$, we assume that the census of the LG galaxy population is complete (cf. Henning et al., 1998).
The result is presented in the top panel of Fig. 1, where the points represent the LG HiMF and the line is the field HiMF derived by Zwaan et al. (1997) scaled vertically so as to fit the points in the region around $`M_{\mathrm{HI}}=10^9\mathrm{M}_{\odot }`$ where the curve has been measured accurately. This scaling is justified given the fact that the HiMF for optically selected and H i selected galaxies is identical (cf. Briggs & Rao, 1993; Zwaan et al., 1997). The scaling accounts for the overdensity of the LG, which amounts to a factor of 25, assuming that the LG volume is 15 Mpc<sup>3</sup>. Also shown is a HiMF with a steep upturn proposed by Schneider et al. (1998). The large divergence between the extrapolated curves from Zwaan et al. (1997) and Schneider et al. (1998) illustrates the uncertainty in the HiMF below $`M_{\mathrm{HI}}=10^{7.5}\mathrm{M}_{\odot }`$. The LG HiMF for optically selected galaxies is remarkably flat, with the faint-end slope of a Schechter function fit of $`\alpha \simeq -1.0`$.
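For reference, the Schechter parametrization referred to here is assumed to take its standard form,

$$\theta (M_{\mathrm{HI}})\,dM_{\mathrm{HI}}=\theta ^{\ast }\left(\frac{M_{\mathrm{HI}}}{M^{\ast }}\right)^\alpha \mathrm{exp}\left(-\frac{M_{\mathrm{HI}}}{M^{\ast }}\right)d\left(\frac{M_{\mathrm{HI}}}{M^{\ast }}\right),$$

so that $`\alpha =-1`$ corresponds to equal numbers of galaxies per logarithmic mass interval; the “remarkably flat” LG HiMF quoted above is flat in exactly this sense.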
Studies of the H i content of galaxies in different environments (including voids [Szomoru et al. 1996], clusters [McMahon 1993] and groups [Kraan-Korteweg et al. 1999]) have shown that the shape of the HiMF for $`M_{\mathrm{HI}}>10^8\mathrm{M}_{\odot }`$ is independent of cosmic density. The fact that we find here that the HiMF of optically selected LG members is flat down to H i masses of a few $`\times 10^4\mathrm{M}_{\odot }`$ does not, however, ensure that the field HiMF is also flat to these low masses. Although the crossing time of the LG is approximately equal to a Hubble time, there are clear indications of interactions (Mateo, 1998, and references therein). The H i distributions in the lowest luminosity LG dwarfs are often highly asymmetric, perhaps indicative of tidal distortions. It is quite possible that low mass systems are destroyed or merged, which could cause the LG HiMF to be flatter than that of the field.
## 3 H i Mass Functions for Extragalactic HVCs
We now derive HiMFs for the population of extragalactic HVCs as proposed by BSTHB and BB. For the compact, isolated HVCs identified by BB, we estimate H i masses by using their measurements of the integrated fluxes and the assumption that all clouds are at the same distance of 1 Mpc, as suggested by BB. Typical H i masses are then a few times $`10^7\mathrm{M}_{\odot }`$ and the sizes are approximately 15 kpc. The resulting HiMF is indicated by the light shaded histogram in the second panel of Fig. 1. For the BSTHB sample we use the compilation of HVCs by Wakker & van Woerden (1991, WW), which forms the main source for the BSTHB analysis. For each cloud the distance $`r_\mathrm{g}`$ at which the cloud is gravitationally stable is calculated. Adopting BSTHB’s value of $`f=0.1`$ for the ratio of baryonic mass to total mass leads to typical distances of self-gravitating HVCs of $`\sim 1`$ Mpc.
The dark shaded histogram represents the HiMF for the BSTHB clouds, assuming that all clouds are at their distance $`r_\mathrm{g}`$. The resulting HiMF shows a clear peak at approximately $`3\times 10^7\mathrm{M}_{\odot }`$, equal to the typical H i mass estimated by BSTHB. The largest uncertainty in the determination of $`M_{\mathrm{HI}}`$ is $`f`$, which might vary from cloud to cloud. We tested the effect of a varying $`f`$ on the HiMF by applying to each individual cloud a random value of $`f`$ in the range $`0.03`$ to $`0.3`$. The resulting HiMF (shown by the unshaded histogram) is not significantly different from the HiMF based on a fixed $`f`$ in the region of interest $`>10^7\mathrm{M}_{\odot }`$.
The space density of the BB HVC population is comparable to the HiMF for optically selected LG galaxies, but it is obvious that the BSTHB HVCs outnumber normal galaxies by a factor 5 to 10 in the range $`10^{7.5}<M_{\mathrm{HI}}<10^9\mathrm{M}_{\odot }`$. This implies that if the BSTHB cloud population is typical for galaxy groups, H i surveys in groups should have encountered 5 to 10 dark H i clouds for every detected galaxy. This is clearly at variance with the observations.
At what distances from the Milky Way must the HVCs be located so that their HiMF is not in conflict with the observed field HiMF? Since the virial distance $`r_\mathrm{g}`$ is directly proportional to $`f`$, we can test this by varying $`f`$ from 0.2 to 0.025 and calculating the resulting HiMF. The results are shown in the third panel of Fig. 1. The space density of HVCs in the LG can only be brought into agreement with the observed field HiMF if the median value of $`f`$ is lowered to $`\sim 0.02`$, a value much lower than what is normally observed in galaxies. The median distance of such clouds must then be smaller than $`\sim 200`$ kpc.
## 4 Expected number of extragalactic HVC detections
The Arecibo H i Strip Survey (AHiSS), which is discussed in detail in Sorar (1994) and Zwaan et al. (1997), puts limits on the space density of primordial gas clouds in external galaxy groups and around galaxies. The survey was taken in drift-scan mode and consists of two strips at constant declinations, together covering $`20^\mathrm{h}`$ of RA over a redshift range from $`0`$ to $`7500`$ $`\mathrm{km}\mathrm{s}^{-1}`$. The limiting column density was $`10^{18}\mathrm{cm}^{-2}`$, which is lower than that of most of the HVCs in the WW compilation and those presented in BB. The sky coverage is small (15 deg<sup>2</sup> excluding the side lobes) but the survey strip passes through the halo regions of many groups and galaxies as shown in Fig. 2. The unique character of this survey therefore makes it more suitable for assessing the HVC problem than other surveys of equal size. From the Lyon-Meudon Extragalactic Database (LEDA<sup>2</sup><sup>2</sup>2We have made use of the Lyon-Meudon Extragalactic Database (LEDA) supplied by the LEDA team at the CRAL-Observatoire de Lyon (France). ) we selected all known galaxies with projected distances $`<1h_{65}^{-1}`$ Mpc from the two AHiSS strips. The circles in Fig. 2 indicate shells with 1 Mpc radii around the galaxies. Since the discussion in BSTHB is primarily focussed on galaxy groups, we also selected all cataloged groups within 1 Mpc of the strips. Galaxy groups were drawn from Willick et al. (1997), who used the Mark III catalog, and Garcia (1993) who selected groups from the LEDA galaxy sample.
We fill the volumes around the selected groups and galaxies with a synthetic population of HVCs similar to the one proposed in BSTHB. To construct such an ensemble we make use of the compilation of HVCs by WW as discussed in section 3. Although BSTHB put particular emphasis on galaxy groups, we choose to consider clouds around individual galaxies as well. Hierarchical formation scenarios do not distinguish between galaxies and groups in the relative number of satellites (Klypin et al., 1999). Further motivation comes from the fact that in the LG, subclustering of dwarfs is observed around the Milky Way and M31 (Mateo, 1998).
The $`3^{\prime }`$ beam of the Arecibo telescope subtends $`d_{\mathrm{beam}}=0.87D\mathrm{kpc}`$ at a distance $`D`$ Mpc. The typical sizes of the HVCs discussed in BSTHB are 28 kpc and the lowest column density clouds could therefore be detected out to distances of 32 Mpc, beyond which the average HVC would no longer fill the beam. HVCs with column densities in excess of the limiting value of $`10^{18}\mathrm{cm}^{-2}`$ could be detected to larger distances. The limiting column density for clouds at large distances where $`d_{\mathrm{cloud}}<d_{\mathrm{beam}}`$ is $`N_{\mathrm{lim}}=10^{18}\times (d_{\mathrm{beam}}/d_{\mathrm{cloud}})^2`$. For each group and galaxy, a fraction of the volume of the surrounding sphere is scanned by the Arecibo beam. The number of clouds within that volume is calculated, taking into account the column densities and sizes of the individual clouds.
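The geometry of this criterion is compact enough to spell out in code. The sketch below (illustrative; the function names are ours) reproduces the beam-filling distance and the diluted column-density limit quoted above:

```python
# Illustrative translation of the detectability criterion described above.
def beam_diameter_kpc(D_Mpc):
    """Linear size subtended by the 3' Arecibo beam at distance D (Mpc)."""
    return 0.87 * D_Mpc

def limiting_column_density(D_Mpc, d_cloud_kpc, N_lim_fill=1e18):
    """Effective column-density limit (cm^-2); below the beam-filling
    distance the limit is N_lim_fill, beyond it there is a dilution penalty."""
    d_beam = beam_diameter_kpc(D_Mpc)
    if d_cloud_kpc >= d_beam:
        return N_lim_fill                      # cloud fills the beam
    return N_lim_fill * (d_beam / d_cloud_kpc) ** 2

# A 28 kpc BSTHB-type cloud fills the beam out to 28/0.87 ~ 32 Mpc:
print(28.0 / 0.87)                             # ~32.2 Mpc
print(limiting_column_density(100.0, 28.0))    # diluted limit at 100 Mpc
```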
Table 1 lists the number of clouds that would have been detected in the Arecibo H i strip survey if a population of extragalactic HVCs existed with the BSTHB properties. We calculate the numbers of clouds differently for groups and for galaxies. Since BSTHB do not specify the exact radial distribution of clouds, we tested three different radial distribution functions to fill the volumes with clouds: 1) a spherical volume of radius $`R`$, 2) a thin spherical shell of radius $`R`$, and 3) a thick spherical shell with clouds distributed according to a Gaussian about the radius $`R`$ with dispersion $`\sigma =R/3`$. The latter distribution most closely resembles the derived distribution of $`r_\mathrm{g}`$ given in BSTHB. The numbers in the table are based on $`R=1\mathrm{Mpc}`$, the value preferred by both BSTHB and BB. The group halos are filled with 450 clouds, the number of HVCs identified by WW, excluding complexes A, C, M, the Outer Arm and the Magellanic Stream. To calculate the expected number of clouds around galaxies, the number of clouds associated with each galaxy is scaled in direct proportion to the ratio of the galaxy luminosity to the integral luminosity of the LG. This leads to a median number of clouds per galaxy of 40.
Table 1 shows that the expected number of detections is essentially independent of how the clouds are distributed around the groups and galaxies. For these samples we should detect approximately 250 clouds around galaxies and 70 around groups if the HVC population of BSTHB is a common feature of nearby galaxies. Restricting our analysis to the compact clouds of BB reduces these numbers to 39 and 9. For a uniformly filled spherical distribution of BSTHB clouds, the distribution of H i masses of the expected detections is shown in the lowest panel in Fig. 1. This figure illustrates that our analysis is sensitive to typical H i clouds (compare the second panel of Fig. 1), and not only to the most massive ones.
The robustness of the result is demonstrated in Table 1, where the numbers in parentheses indicate the expected number of detections if the detection threshold is increased from $`5\sigma `$ to $`7\sigma `$. The average decrease is 25%. A 50% decrease occurs if the detection threshold is set at $`10\sigma `$. This extremely conservative threshold would still predict more than 100 detections.
## 5 Conclusions
The hypothesis that HVCs are primordial gas clouds with typical H i masses of a few $`\times 10^7\mathrm{M}_{\odot }`$ at distances of $`\sim 1`$ Mpc from the Galaxy is not in agreement with observations of nearby galaxies and groups. Blind H i surveys of the extragalactic sky would have detected these clouds if they exist around all galaxies or galaxy groups in numbers equal to those suggested for the Local Group. These results are highly significant: the Arecibo H i strip survey would have detected approximately 250 clouds around individual galaxies and 70 in galaxy groups.
We are grateful to B. Wakker for discussions and for providing the list of HVC parameters. J. Bland-Hawthorn, L. Blitz, R. Braun, J. van Gorkom, and H. van Woerden are thanked for useful comments. |
# Supercooled Water: Dynamics, Structure and Thermodynamics
## I Introduction
Water is among the most abundant substances on earth, and also among the most familiar, as it forms a significant part of the natural environment, and of living organisms. In addition to these obvious reasons, the continued study of water by physical scientists stems from the many remarkable and peculiar properties it possesses, some of which are part of the popular lore (such as the fact that liquid water is heavier than ice near ambient conditions). In the crystalline state, water exists in as many as twelve, possibly fourteen, forms of ice. Echoes of this polymorphism are to be found in the amorphous state as well. When prepared as an amorphous solid, or glass, by ultrafast cooling or by depositing vapor on a cold substrate, water is found in two forms, a high density and a low density form .
A substantial fraction of the study of liquid water has focussed on its properties in the supercooled state, where its unusual properties are amplified. Analyzing a range of dynamic and thermodynamic quantities, Speedy and Angell showed that these quantities appeared to diverge at a finite temperature in a power law fashion, leading to the hypothesis of a thermodynamic singularity at $`T_s=228K`$. Computer simulation studies and analysis of simplified models have in recent years led to the proposal of two other possibilities, namely the existence of a liquid-liquid critical point at low temperatures, and the scenario wherein no thermodynamic singularities need be invoked to explain the anomalies observed in water.
However, there has been a renewed focus recently on the idea that regardless of a thermodynamic singularity, strong changes in both the thermodynamic and dynamic properties of supercooled water may be expected in the vicinity of $`T_s`$. Evidence from computer simulations suggests that the strong temperature dependence of dynamical quantities, and the power law temperature dependence, may be explained as manifestations of an avoided dynamical singularity described by the mode coupling theory of the glass transition. Analysis of experimental data appears to indicate that the rapid changes in the entropy of supercooled water must cross over to slower changes in the vicinity of $`T_s`$, and correspondingly, the heat capacity must display a maximum. It has been argued that upon crossing this temperature, water changes character from being a very “fragile” liquid at higher temperatures, to a “strong” liquid at lower temperatures. Specifically, the stronger than Arrhenius temperature dependence of relaxation times seen at temperatures higher than $`T_s`$ is argued to cross over to Arrhenius dependence at lower temperatures (the experimental situation is ambiguous at present, with measurements in Ref. arguing in favor of continuous change down to the glass transition temperature $`135K`$).
Among the crucial issues one must address in understanding the above possibilities is the nature of the structural change that takes place as liquid water is supercooled. In particular, the nature of the structural change that accompanies a fragile to strong crossover must be understood. In this paper we present a preliminary report of investigations to address these questions, which focus on analyzing the properties of local potential energy minima or “inherent structures” sampled by a simulation model of water. Similar investigations have recently proved a fruitful approach to studying changes in liquid state properties under supercooling of model atomic liquids as well as liquid water. In particular, we focus on the comparison of changes in dynamics with changes in the energies of the inherent structures sampled, and structural change as evidenced by changes in the nearest neighbor geometrical properties. A specific structural feature we consider is the fraction of water molecules that are four-coordinated and the fraction with five or more molecules in their first neighbor shell. These latter “bifurcated bond” arrangements have previously been shown to facilitate structural rearrangement and hence faster dynamics.
## II Simulation Details
We perform molecular dynamics (MD) simulations of 216 water molecules interacting via the SPC/E pair potential . The simulations we describe here are for a fixed density of $`\rho =1.0g/cm^3`$, for temperatures ranging from $`210K`$ to $`700K`$. For $`T\le 300`$ K, we simulate two independent systems to improve statistics, as the long relaxation times make time averaging more difficult. Full details of the simulation protocol used are described in . At the studied density, the mode coupling singular temperature $`T_c`$ has been estimated to be $`T_c=(194\pm 1)K`$.
Inherent structures are obtained, with a sample of 100 equilibrated liquid configurations as starting configurations, by performing a local minimization of the potential energy, using conjugate gradient minimization. The iterative procedure for minimizing the energy is performed until changes in potential energy per iteration are no more than $`10^{-15}kJ/mol`$. A normal mode analysis confirms that the potential energy changes with positive curvature along all directions away from the minimum energy configurations so obtained.
We calculate the oxygen-oxygen pair correlation function for both the equilibrium liquid configurations and the inherent structures. The integrated intensity under the first peak of the pair correlation function (up to the cutoff radius $`0.31nm`$) yields the coordination number. Dynamics is probed by calculating the self intermediate scattering function at wavevector 18.55 nm<sup>-1</sup>, i.e. the Fourier transform of the van Hove self-correlation function
$$G_s(r,t)=\frac{1}{N}\sum _{i=1}^{N}\delta (|𝐫_i(t)-𝐫_i(0)|-r).$$
(1)
Stretched exponential fits of the self intermediate scattering function yield relaxation times $`\tau `$ as a function of temperature. Average values of the inherent structure potential energy and pressure are also calculated.
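The $`\tau `$ extraction referred to here amounts to fitting the decay of $`F_s(k,t)`$ to the stretched exponential (Kohlrausch) form $`F_s(k,t)=A\mathrm{exp}(-(t/\tau )^\beta )`$. A minimal sketch of such a fit, on synthetic data with illustrative parameters (not the simulation output), is:

```python
# Sketch of the tau extraction: fit a stretched exponential to F_s(k,t).
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, A, tau, beta):
    return A * np.exp(-(t / tau) ** beta)

# fake F_s(k,t) with tau = 10 ps, beta = 0.7, plus a little noise
t = np.logspace(-1, 3, 60)                      # time in ps
rng = np.random.default_rng(0)
fs = stretched_exp(t, 1.0, 10.0, 0.7) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(stretched_exp, t, fs, p0=(1.0, 5.0, 0.8))
A_fit, tau_fit, beta_fit = popt
print(f"tau = {tau_fit:.2f} ps, beta = {beta_fit:.2f}")
```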
## III Results
Figure 1(a) shows the relaxation times plotted on a logarithmic scale against inverse temperature. Clear deviations from the high temperature Arrhenius behavior,
$$\tau =\tau _0\mathrm{exp}(E/k_BT)$$
(2)
(where $`E`$ is a constant) are observed for $`T\lesssim 275K`$. This is seen more clearly by plotting $`T\mathrm{ln}(\tau /\tau _0)`$ (where $`\tau _0`$ is obtained from an Arrhenius fit of the highest $`5`$ temperatures), as shown in Fig. 1(b), which shows strong deviation of $`T\mathrm{ln}(\tau /\tau _0)`$ below $`T\approx 275K`$, indicating that the temperature dependence of relaxation times at low temperatures can no longer be described by the Arrhenius form.
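The diagnostic of Fig. 1(b) is simple to reproduce schematically: fit the Arrhenius form to the highest temperatures to obtain $`\tau _0`$, then monitor $`T\mathrm{ln}(\tau /\tau _0)`$, which is constant (equal to $`E/k_B`$) for Arrhenius behavior. The values below are placeholders, not the simulation data:

```python
# Sketch of the Arrhenius-deviation diagnostic of Fig. 1(b).
import numpy as np

T   = np.array([700., 600., 500., 400., 350., 300., 260., 230.])   # K
tau = np.array([0.1, 0.15, 0.3, 0.9, 2.0, 8.0, 90.0, 4000.0])      # ps (placeholders)

# Arrhenius fit ln(tau) vs 1/T on the 5 hottest points: slope = E/k_B (in K)
slope, intercept = np.polyfit(1.0 / T[:5], np.log(tau[:5]), 1)
tau0 = np.exp(intercept)

diagnostic = T * np.log(tau / tau0)     # constant if Arrhenius; grows otherwise
print(np.round(diagnostic))
```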
Figure 2(a) shows the average inherent structure energies as a function of temperature. For comparison, the equilibrium potential energy is shown in Figure 2(b). The inherent structure energies change continuously, from a modest temperature dependence at high temperatures to a substantial temperature dependence at lower temperatures. In particular, in the range of temperatures where the relaxation times begin to display non-Arrhenius behavior, the inherent structure energies show considerable temperature dependence. This behavior is analogous to that found for a model atomic fluid.
Figure 3 shows the oxygen-oxygen pair correlation function, (a) for the equilibrated liquid and (b) for the inherent structures. In contrast to the model atomic liquid studied in , both pair correlation functions show marked temperature dependence at fixed density. In both Figs. 3(a) and 3(b), the first peak of the pair correlation function becomes sharper upon decreasing the temperature. Further, there is a systematic reduction of the intensity between the first and second neighbor peaks. The integrated intensity under the first peak, which gives the average number of neighbor molecules in the first coordination shell, approaches the value of $`4`$ from above as the temperature decreases. This implies that the configurations sampled by the liquid approach those of a four-coordinated random network.
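The “integrated intensity under the first peak” used here and in Sec. II is the standard coordination-number integral, $`n_c=4\pi \rho \int _0^{r_c}g(r)r^2dr`$ with $`r_c=0.31`$ nm. A minimal numerical sketch (with a toy $`g(r)`$ standing in for the simulation data) is:

```python
# Coordination number from the radial distribution function.
import numpy as np

def coordination_number(r, g, rho, r_cut=0.31):
    """Integrate 4*pi*rho*g(r)*r^2 up to r_cut (lengths in nm, rho in nm^-3)."""
    mask = r <= r_cut
    return 4.0 * np.pi * rho * np.trapz(g[mask] * r[mask] ** 2, r[mask])

# water at 1 g/cm^3 has a molecular number density of about 33.4 nm^-3
rho = 33.4
r = np.linspace(0.01, 0.6, 600)                      # nm
g = 1.0 + 2.5 * np.exp(-((r - 0.28) / 0.03) ** 2)    # toy first peak near 0.28 nm
print(coordination_number(r, g, rho))                # toy value, not simulation output
```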
To quantify this further, Fig. 4 shows the histogram of the fraction of molecules with a given coordination number. For equilibrated configurations, the histogram changes from a rather broad one to one that is peaked around the value $`4`$ as the temperature decreases. The same trend is visible for the inherent structures, although even at high temperatures the distribution is quite narrowly peaked around the value $`4`$. Such a comparison permits us to separate deviations from four-coordination arising from thermal agitation from those arising from configurational change.
From the data in Figs. 3 and 4, we calculate the fraction of molecules that are four-coordinated, and of those that have fivefold or higher coordination, i.e. which have bifurcated bonds. Fig. 5 shows the temperature dependence of the fraction of four-coordinated water molecules. For both the equilibrated configurations and the inherent structures, this fraction approaches $`one`$ as $`T\to T_c`$. As with the deviations from Arrhenius behavior, the range of temperatures displaying a substantial increase in this fraction also shows substantial temperature dependence of the average inherent structure energies.
Figure 6 shows the fraction of molecules with fivefold or higher coordination. Complementary to the variation of the fraction of four-coordinated molecules, this fraction approaches values very close to zero as $`T_c`$ is approached.
## IV Summary
We have presented simulation results that demonstrate the significant correlations between relaxation times, average energies of local potential energy minima sampled, and structural features for a simulation model of water. In particular, as the mode coupling temperature $`T_c`$ estimated for this model liquid at the studied density is approached, the structure of the liquid appears to approach that of a four-coordinated network, free of defects to the extent permissible by the bulk density of the liquid. Such a structural change can potentially explain the speculated dramatic changes in both the dynamic and thermodynamic properties of water across this crossover temperature. Below the crossover temperature, no further structural rearrangement may be expected, and the configurational entropy of the liquid may become ‘frozen in’ at the value that corresponds to the random tetrahedral network (plus the residual defects that may be present at concentrations varying with density). Thus the rate of change in entropy of the liquid would change substantially near the crossover temperature. Similarly, because of the significant temperature dependence of the fraction of bifurcated bonds – which facilitate structural rearrangement – above the crossover temperature, and the relative constancy below, the temperature dependence of the dynamical properties may also be expected to show a corresponding crossover. Further work is in progress to strengthen these notions.
We acknowledge useful discussions with Prof. C. A. Angell. FS acknowledges INFM-PRA and MURST-PRIN, and HES acknowledges NSF, for financial support. |
# On Color Superconductivity in External Magnetic Field
## 1 Introduction.
It was shown some time ago that QCD at high baryon density is a color superconductor . Recently there has been significant activity in studying color superconductivity, caused mainly by the observation that the superconducting gap can be as high as 100 MeV. It is believed that in nature and in laboratory experiments color superconductivity can occur in high energy ion collisions and in the cores of neutron stars. Since the temperatures obtained in high energy ion collisions are usually significantly higher than the temperatures characteristic of the color superconductor/quark gluon plasma phase transition at the relevant baryon densities, it is unlikely that color superconductivity can be observed in high energy ion collisions . Thus, it is possible that perhaps the only chance of observing color superconductivity is connected with astrophysical observations of neutron stars. Therefore, the study of the astrophysical implications of the occurrence of color superconductivity in the cores of neutron stars is a topical and important problem.
As is well known, neutron stars typically possess very strong magnetic fields, up to $`10^{13}G`$. For usual superconductors, strong enough external magnetic fields destroy superconductivity. Consequently, it is important to study how the presence of an external magnetic field affects the physics of neutron stars with color superconducting cores. This problem was considered in a recent paper by Alford, Berges, and Rajagopal (for earlier studies and other astrophysical aspects of the occurrence of color superconductivity in the cores of neutron stars see ). Since there is a ’modified’ photon, which remains massless in the color superconducting phase (in the general case it is a mixture of the electromagnetic field and some gluon fields), Alford, Berges, and Rajagopal showed that an external magnetic field partially penetrates inside color superconducting cores and that it is very unlikely that magnetic fields typical for neutron stars can destroy color superconductivity in the cores of neutron stars.
In this paper we continue the study of color superconductivity in an external magnetic field. In Sect. 2 we discuss the reason why the mixing angles in the CFL and 2SC phases are different despite the fact that the CFL gap goes to the 2SC gap for $`m_s\to \infty `$. We show in Sect. 3 that although flavor symmetry is explicitly broken in an external magnetic field, all values of gaps in their coset spaces of possible solutions in the CFL phase are equivalent in an external magnetic field. Our conclusions are given in Sect. 4.
## 2 Why are mixing angles in the 2SC and CFL phases different?
In paper the CFL phase with three massless quarks and the 2SC phase with two massless quarks were explicitly considered. On the other hand, it is known that in real QCD the mass of the strange quark cannot be neglected . According to , it is expected that for realistic values of $`m_s`$ a CFL phase with 5 independent order parameters is realized. Therefore, it is natural to consider explicitly the physical properties of the CFL phase with $`m_s\ne 0`$ in an external magnetic field. (In what follows we consider only the case of the so-called ’sharp boundary’ transition between nuclear and superconducting quark matter. The case of a ’smooth boundary’ transition is trivial because there is not even partial exclusion of magnetic field flux inside the superconducting region .)
According to , the same linear combination of the electromagnetic and gluon fields which is massless in the CFL phase with $`m_s=0`$ is also massless in the CFL phase with $`m_s\ne 0`$. Therefore, the mixing angle of the usual photon with the gluon fields in the CFL phase with $`m_s\ne 0`$ coincides with the mixing angle in the CFL phase with three massless quarks and, consequently, the same part of the magnetic field flux penetrates inside a color superconductor in the CFL phase with $`m_s\ne 0`$. However, one question remains to be answered. As shown in , for $`m_s\to \infty `$ the gap in the CFL phase with $`m_s\ne 0`$ goes to the 2SC gap. Obviously, the extremely heavy strange quark decouples from the low energy dynamics in this case (of course, in the opposite limit $`m_s\to 0`$, the gap in the CFL phase with $`m_s\ne 0`$ goes to the gap of the CFL phase with three massless quarks). Therefore, one would naively expect that the mixing angle is the same in all three phases. However, according to , the mixing angle in the 2SC phase is two times smaller than in the CFL phase. Why are the mixing angles in these phases different?
To answer this question we first consider in more detail the definition of the mixing angle. The color superconducting gap is a vacuum expectation value of two quark fields, $`<\psi _\alpha ^i\gamma _5C\psi _\beta ^j>`$; therefore, it transforms under flavor and color transformations as a tensor product of two natural representations of the flavor and color symmetry groups. If we group the indices i and $`\alpha `$ as well as j and $`\beta `$, then the gap $`\mathrm{\Delta }_{\alpha \beta }^{ij}`$ is a matrix with respect to the two combined indices i,$`\alpha `$ and j,$`\beta `$. The action of the covariant derivative on the gap is
$$D_\mu \mathrm{\Delta }=(\partial _\mu +ieA_\mu Q_{t.r.}+igA_\mu ^aT_{t.r.}^a)\mathrm{\Delta },$$
(1)
where $`t.r.`$ means tensor representation, $`(Q_{t.r.}\mathrm{\Delta })_{\alpha \beta }^{ij}=Q^{ii_1}\mathrm{\Delta }_{\alpha \beta }^{i_1j}+\mathrm{\Delta }_{\alpha \beta }^{ii_1}(Q^T)^{i_1j}`$,
$`(T_{t.r.}^a\mathrm{\Delta })_{\alpha \beta }^{ij}=T_{\alpha \alpha _1}^a\mathrm{\Delta }_{\alpha _1\beta }^{ij}+\mathrm{\Delta }_{\alpha \alpha _1}^{ij}(T^a)_{\alpha _1\beta }^T`$ ($`A^T`$ denotes the transpose of a matrix $`A`$), Q=diag(2/3, -1/3, -1/3) is the generator of electromagnetic transformations, $`T^a=\frac{\lambda ^a}{2}`$ are the generators of color transformations, and $`\lambda ^a`$ are the Gell-Mann matrices. In the general case, to find an operator which is equal to zero acting on the gap, we consider an operator of the general form $`\stackrel{~}{Q}=Q+a_1T^1+\cdots +a_8T^8`$ and seek a solution of the equation $`\stackrel{~}{Q}\mathrm{\Delta }=0`$; this gives us the sought operator $`\stackrel{~}{Q}`$. By representing $`ieA_\mu Q+igA_\mu ^aT^a`$ as
$$i\left(\begin{array}{cccc}A_\mu & A_\mu ^1& \cdots & A_\mu ^8\end{array}\right)\left(\begin{array}{c}eQ\\ gT^1\\ \vdots \\ gT^8\end{array}\right)$$
and inserting $`O^TO`$, where
$$O=\left(\begin{array}{cccc}n_0& n_1& \cdots & n_8\\ \vdots & & & \vdots \\ m_0& m_1& \cdots & m_8\end{array}\right)$$
is an orthogonal 9$`\times `$9 matrix, we find the elements $`n_0,n_1,\ldots ,n_8`$ from the equation $`en_0Q+gn_1T^1+\cdots +gn_8T^8=a\stackrel{~}{Q}=a(Q+a_1T^1+\cdots +a_8T^8)`$
$$n_0=a/e,\qquad n_1=\frac{aa_1}{g},\;\ldots ,\;n_8=\frac{aa_8}{g}$$
(2)
and, consequently, the corresponding massless linear combination of the electromagnetic and gluon fields is $`\stackrel{~}{A_\mu }=n_0A_\mu +n_1A_\mu ^1+\cdots +n_8A_\mu ^8`$. A generalized mixing angle is defined as $`\mathrm{arccos}`$ of the element $`O_{11}`$ of the matrix O, i.e., it is $`\alpha =\mathrm{arccos}n_0`$. (We say a generalized mixing angle because a 9$`\times `$9 orthogonal matrix cannot be parametrized by one independent parameter, unlike the familiar case of mixing of two gauge fields in the Standard Model. However, since only the element $`O_{11}`$ of the matrix $`O`$ is important for us, it is convenient to define a generalized mixing angle as $`\mathrm{arccos}`$ of the element $`O_{11}`$.) Since $`OO^T=1`$ for orthogonal matrices, we have $`n_0^2+n_1^2+\cdots +n_8^2=1`$, which gives $`a=\frac{eg}{\sqrt{g^2+(a_1^2+\cdots +a_8^2)e^2}}`$. Therefore, for the CFL phase, we find that the mixing angle is $`\alpha _{CFL}=\mathrm{arccos}\frac{g}{\sqrt{g^2+4e^2/3}}\approx \frac{2e}{\sqrt{3}g}\approx 1/10`$, where we assumed that $`\alpha _s=\frac{g^2}{4\pi }\approx 1`$ at the scale of baryon densities typical for neutron star cores; consequently, the gauge field $`\stackrel{~}{A_\mu }=\frac{gA_\mu -eA_\mu ^3-\frac{e}{\sqrt{3}}A_\mu ^8}{\sqrt{g^2+\frac{4e^2}{3}}}`$ is massless in the CFL phase. In paper the mixing angle obtained was two times smaller, $`\alpha _{ABR}=\mathrm{arccos}\frac{g}{\sqrt{g^2+e^2/3}}\approx \frac{e}{\sqrt{3}g}\approx 1/20`$, for the CFL phase. The discrepancy of our result with that of is because the authors of used color generators $`T^a`$ normalized to 2, while we used the standard definition of the color generators $`T^a=\frac{\lambda ^a}{2}`$ (of course, our result can be easily recovered if we replace $`g`$ by $`g`$/2 in ). According, e.g., to the Particle Data Group , the standard definition of the covariant derivative in QCD, which defines the strong coupling constant properly normalized at a fixed scale, is $`D_\mu =\partial _\mu +igA_\mu ^aT^a`$, where $`T^a=\lambda ^a/2`$ and $`\lambda ^a`$ are the Gell-Mann matrices; therefore, the color generators $`T^a`$ are normalized to $`\frac{1}{2}`$. Thus, we conclude that the correct value of the mixing angle in the CFL phase is $`\alpha _{CFL}=\mathrm{arccos}\frac{g}{\sqrt{g^2+4e^2/3}}\approx 1/10`$. Although our mixing angle is two times larger than the one found in , it does not change the qualitative conclusions of , because the mixing angle is still a small number and only a small part of the magnetic field flux is excluded inside the color superconducting region.
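Both statements lend themselves to a direct numerical check. The sketch below (our own illustrative script, not part of the derivation) builds the CFL gap given explicitly in Eq. (3) below, verifies that $`\stackrel{~}{Q}=Q-(T^3+T^8/\sqrt{3})`$ annihilates it under the tensor-representation action defined after Eq. (1), and evaluates the mixing angle for the assumed $`\alpha _s\approx 1`$:

```python
# Numerical check: (i) Q~ annihilates the CFL gap; (ii) alpha_CFL ~ 1/10.
import numpy as np

d = np.eye(3)
T3 = np.diag([0.5, -0.5, 0.0])
T8 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))
Q  = np.diag([2.0 / 3.0, -1.0 / 3.0, -1.0 / 3.0])

# CFL gap Delta[i, j, alpha, beta] = k1 d^i_a d^j_b + k2 d^i_b d^j_a
k1, k2 = 1.0, -0.3                       # arbitrary independent order parameters
D = k1 * np.einsum('ia,jb->ijab', d, d) + k2 * np.einsum('ib,ja->ijab', d, d)

def act_flavor(M, D):    # (M_tr D)^{ij} = M^{ii1} D^{i1 j} + D^{i i1} (M^T)^{i1 j}
    return np.einsum('ik,kjab->ijab', M, D) + np.einsum('ikab,jk->ijab', D, M)

def act_color(M, D):     # same structure acting on the color indices
    return np.einsum('ac,ijcb->ijab', M, D) + np.einsum('ijac,bc->ijab', D, M)

Qtilde_D = act_flavor(Q, D) - act_color(T3 + T8 / np.sqrt(3.0), D)
print(np.allclose(Qtilde_D, 0.0))        # True: Q~ annihilates the gap

alpha_s = 1.0                            # order-one strong coupling, as assumed above
g = np.sqrt(4.0 * np.pi * alpha_s)
e = np.sqrt(4.0 * np.pi / 137.0)
print(np.arccos(g / np.sqrt(g**2 + 4.0 * e**2 / 3.0)))   # ~0.1 rad
```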
Let us now return to the question posed above about why the mixing angles in the CFL phase and the 2SC phase differ even though the CFL gap goes to the 2SC gap for $`m_s\to \infty `$. Let us recall that the 2SC gap is $`\mathrm{\Delta }_{\alpha \beta }^{ij}=\mathrm{\Delta }ϵ^{ij}ϵ_{\alpha \beta 3}`$, where the flavor indices run over 1 and 2. It is easy to check that the operator $`\stackrel{~}{Q}=Q-(T^3+\frac{T^8}{\sqrt{3}})`$, which is equal to zero acting on the CFL gap, is also equal to zero acting on the 2SC gap. Nevertheless, according to , the mixing angles in the CFL and 2SC phases differ. Why? The answer is that the generator $`T^3`$ is equal to zero acting on the 2SC gap, in contrast to the case of the CFL phase. Therefore, we can add this generator to our $`\stackrel{~}{Q}`$ with any coefficient, i.e., the corresponding equation for $`\stackrel{~}{Q}`$ does not have a unique solution. Obviously, a different choice of the coefficients of the color generators which are equal to zero acting on the gap gives a different value of the mixing angle. To find the correct $`\stackrel{~}{Q}`$ in this case, we use the condition of minimum energy. The smaller the mixing angle, the larger the part of the external magnetic field flux that penetrates inside a color superconductor. The minimum of the mixing angle $`\alpha =\mathrm{arccos}\frac{g}{\sqrt{g^2+\sum _{i=1}^8a_i^2e^2}}`$ obviously corresponds to the minimum of $`\sum _{i=1}^8a_i^2`$. For the 2SC phase, it is easy to show that the minimum of energy is given by $`\stackrel{~}{Q}=Q-\frac{T^8}{\sqrt{3}}`$. (Note also that the choice of the diagonal SU(3) generators in the natural representation of the group is ambiguous. If we consider the $`T^3`$ and $`T^8`$ used in , then we obtain $`\stackrel{~}{Q}=Q-\frac{1}{2}(T^3+\frac{T^8}{\sqrt{3}})`$, which obviously gives the same $`\sum _{i=1}^8a_i^2`$, i.e., the same mixing angle.) Thus, the mixing angle in the 2SC phase is indeed two times smaller, $`\alpha _{2SC}\approx \frac{e}{\sqrt{3}g}\approx \frac{1}{20}`$, than in the CFL phase.
## 3 Explicit flavor symmetry breaking.
Color and flavor symmetries are spontaneously broken in the CFL phase with $`m_s=0`$ or $`m_s\ne 0`$. However, since color and flavor transformations are symmetries of the theory, all color and flavor transformed gaps $`U\mathrm{\Delta }`$ have the same energy and, therefore, $`SU_L(3)\times SU_R(3)\times SU_c(3)\times U_B(1)/SU_{L+R+c}(3)`$ and $`SU_L(2)\times SU_R(2)\times SU_c(3)\times U_B(1)/SU_{L+R+c}(3)`$ are the corresponding coset spaces of possible solutions for gaps in the CFL phase with 3 massless quarks and in the CFL phase with $`m_s\ne 0`$, respectively.
Since the generator of electromagnetic transformations does not commute with SU(3) or SU(2) flavor transformations, one can expect that a color superconductor in the CFL phase chooses a specific value in its coset space of possible solutions in an external electromagnetic field. Indeed, if $`A_\mu \ne 0`$, then the Lagrangian has only $`SU_c(3)`$ color symmetry. Thus, the flavor symmetry is explicitly broken if $`A_\mu \ne 0`$. We show below that since the CFL gap locks flavor and color, one can, in fact, use color transformations and, therefore, the corresponding flavor transformed gap $`U\mathrm{\Delta }`$ has the same energy as the initial gap $`\mathrm{\Delta }`$ even if $`A_\mu \ne 0`$ (the case of a color transformed gap is, of course, trivial because Q commutes with color transformations).
We first consider the CFL phase with 3 massless quarks. The corresponding gap is
$$\mathrm{\Delta }_{\alpha \beta }^{ij}=k_1\delta _\alpha ^i\delta _\beta ^j+k_2\delta _\beta ^i\delta _\alpha ^j.$$
(3)
The operator $`\stackrel{~}{Q}=Q-(T^3+\frac{T^8}{\sqrt{3}})`$ is equal to zero acting on the gap. Let us consider a flavor transformed gap $`U\mathrm{\Delta }`$. Since $`[Q,U]\ne 0`$, $`\stackrel{~}{Q}U\mathrm{\Delta }\ne 0`$ in the general case and we should seek another operator $`\stackrel{~}{Q_U}`$, which is equal to zero acting on the gap $`U\mathrm{\Delta }`$, i.e., we seek a solution of the equation
$$(Q_{t.r.}+a_1T_{t.r.}^1+\cdots +a_8T_{t.r.}^8)U_{t.r.}\mathrm{\Delta }=0,$$
(4)
which gives
$$Q+\sum _{i=1}^{8}a_iU(T^i)^TU^+=0$$
(5)
or what is more convenient for analysis
$$U^+QU+\sum _{i=1}^{8}a_i(T^i)^T=0,$$
(6)
where we used the fact that $`k_1`$ and $`k_2`$ are independent order parameters. Multiplying Eq.(6) by $`T^j`$ and taking the trace, we obtain a system of equations for $`a_i`$
$$\sum _{i=1}^{8}a_itr((T^i)^TT^j)=tr(U^+QUT^j).$$
(7)
It is easy to check that for U=1 we obtain the old solution $`a_3=1,a_8=\frac{1}{\sqrt{3}}`$, and the other $`a_i=0`$. Our analysis is simplified by noting that Q can be represented as $`Q=-\frac{I}{3}+A`$, where $`I`$ is the unit matrix and $`A`$ is a matrix whose only nonzero element is $`A_{11}=1`$. Indeed, since $`U^+U=1`$ and $`trT^i=0`$, we need to calculate only $`trU^+AUT^j`$ on the right-hand side of Eq.(7). By parametrizing U as follows
$$U=\left(\begin{array}{ccc}u_1& u_2& u_3\\ v_1& v_2& v_3\\ w_1& w_2& w_3\end{array}\right),$$
we find $`a_i`$ and then $`\sum _{i=1}^8a_i^2`$, which is equal to
$$\sum _{i=1}^{8}a_i^2=\frac{4}{3}(|u_1|^2+|v_1|^2+|w_1|^2)^2.$$
(8)
Since U is a unitary matrix, we have $`U^+U=1`$, which gives us $`|u_1|^2+|v_1|^2+|w_1|^2=1`$. Therefore, we obtain from Eq.(8) that $`\sum _{i=1}^8a_i^2=4/3`$ for any flavor transformed gap. Obviously, a similar analysis can be used for the CFL phase with $`m_s\ne 0`$. The case of the 2SC phase is trivial because flavor symmetry is not spontaneously broken in this phase. Thus, although the generator of electromagnetic transformations does not commute with flavor transformations and we have an explicit flavor symmetry breaking, all flavor transformed CFL gaps have the same energy. Of course, the reason for this is that the CFL gap locks flavor and color and, therefore, the corresponding equation for the operator $`\stackrel{~}{Q_U}`$, which is equal to zero acting on $`U\mathrm{\Delta }`$, coincides with the equation for the operator $`\stackrel{~}{Q}`$ (with unitarily transformed color generators $`U^+T^iU`$; see Eq.(5)), which is equal to zero acting on $`\mathrm{\Delta }`$, and this, of course, gives us the same $`\sum _{i=1}^8a_i^2`$ because the color generators are defined up to a unitary transformation.
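This invariance is easy to verify numerically. The following sketch (illustrative, with the standard Gell-Mann matrices) draws a random unitary flavor rotation, solves the linear system of Eq. (7) for the coefficients $`a_i`$, and checks that $`\sum _{i=1}^8a_i^2=4/3`$ independently of $`U`$:

```python
# Numerical check of Eq. (8): sum(a_i^2) = 4/3 for a random flavor rotation U.
import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)           # Gell-Mann matrices lambda^1..lambda^8
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2.0                                      # standard normalization T^a = lambda^a/2
Q = np.diag([2 / 3, -1 / 3, -1 / 3]).astype(complex)

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# Gram system of Eq. (7); the overall sign convention does not affect sum(a_i^2).
G = np.array([[np.trace(T[i].T @ T[j]) for i in range(8)] for j in range(8)]).real
b = np.array([np.trace(U.conj().T @ Q @ U @ T[j]) for j in range(8)]).real
a = np.linalg.solve(G, b)
print(a @ a)        # 1.3333... = 4/3, independent of the random U
```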
## 4 Conclusions.
We considered how an external magnetic field influences color superconductivity in the CFL phase with 3 massless quarks, the CFL phase with $`m_s\ne 0`$, and the 2SC phase with 2 massless quarks. We explained why the mixing angles in the 2SC and CFL phases are different even though the gap of the CFL phase with $`m_s\ne 0`$ goes to the 2SC gap for $`m_s\to \infty `$. We showed that despite explicit flavor symmetry breaking in an external magnetic field, all values of flavor transformed gaps in their coset spaces of possible solutions in the CFL phase are equivalent.
The author thanks A.A. Natale for helpful discussions and encouragement. I am grateful to V.P. Gusynin and V.A. Miransky for reading the manuscript and critical remarks. The author thanks K. Rajagopal for comments on the manuscript. I acknowledge useful conversations with M. Nowakowski and I. Vancea. This work was supported in part by FAPESP grant No. 98/06452-9. |
# Convex lattice polytopes and cones with few lattice points inside, from a birational geometry viewpoint
## 1. Introduction
It is pretty well-known that toric Fano varieties of dimension $`k`$ with terminal singularities correspond to convex lattice polytopes $`P\subset \mathbb{R}^k`$ of positive finite volume, such that $`P\cap \mathbb{Z}^k`$ consists of the point $`0`$ and the vertices of $`P`$ (cf., e.g. , ). Likewise, $`\mathbb{Q}`$-factorial terminal toric singularities essentially correspond to lattice simplexes with no lattice points inside or on the boundary (except the vertices). There has been a lot of work, especially in the last 20 years or so, on the classification of these objects. The main goal of this paper is to bring together these and related results, which are currently scattered in the literature. We also want to emphasize the deep similarity between the classification of toric Fano varieties and the classification of $`\mathbb{Q}`$-factorial toric singularities.
This paper does not contain any new results. It does contain a sketch of an alternative proof of the qualitative version of the theorem of Hensley (cf. ). The paper is organized as follows. In section 2 we discuss some known results about toric Fano varieties, i.e. convex lattice polytopes. In section 3 we discuss some results about $`\mathbb{Q}`$-factorial toric singularities, i.e. simplicial rational cones. In section 4 we explain some similarities between the above topics, and give a short geometric proof of Hensley’s theorem. We also point out some open questions.
We should note that our interest in this subject is motivated by classification questions arising in the Minimal Model Program (cf. , , ). We assume that the reader is familiar with the basic constructions of the theory of toric varieties. The good reference sources for these are the books of W. Fulton (cf. ), T. Oda (cf. ) and the paper of V. Danilov (cf. ). For a discussion of related problems from a purely combinatorial point of view, we refer to the survey by Gritzmann and Wills (cf. ). Some good discussion and references can also be found in the book of G. Ewald .
Notations. By lattice polytopes of dimension $`k`$ we will mean closed polytopes of finite positive volume in $`\mathbb{R}^k`$ whose vertices belong to the lattice $`\mathbb{Z}^k\subset \mathbb{R}^k`$.
Acknowledgments. We thank V. Batyrev who first introduced us to this circle of problems. We also thank G. Sankaran and J.-M. Kantor for helpful discussions.
## 2. Toric Fano varieties
As explained in , cf. also , the isomorphism classes of toric Fano varieties $`X`$ of dimension $`k`$ are in 1-to-1 correspondence with isomorphism classes of convex lattice polytopes of dimension $`k`$ with a fixed lattice point $`0`$ inside. Depending on how bad the singularities of $`X`$ are allowed to be we have the following sets of equivalence classes of toric Fano varieties, and point-containing convex lattice polytopes.
* Smooth, to be denoted by $`S=S(k)`$
* Terminal, to be denoted by $`T=T(k)`$
* Canonical, to be denoted by $`C=C(k)`$
* Gorenstein, to be denoted by $`G=G(k)`$
* For every $`\epsilon `$ such that $`0<\epsilon \le 1`$, $`\epsilon `$-logterminal, $`T_\epsilon =T_\epsilon (k)`$
* For every $`\epsilon `$ such that $`0<\epsilon \le 1`$, $`\epsilon `$-logcanonical, $`C_\epsilon =C_\epsilon (k)`$
* For every $`n\in \mathbb{N}`$, those with Gorenstein index $`n`$, $`G_n=G_n(k)`$
Here the singularity is called $`\epsilon `$-logterminal ($`\epsilon `$-logcanonical) if its total log-discrepancy is greater than (or equal to) $`\epsilon `$ (cf. ). Please consult , or , or section 3 of this paper for the corresponding combinatorial conditions.
Obviously, $`T=T_1,`$ $`C=C_1,`$ $`G=G_1.`$ From the definitions, $`S\subset T\subset C`$ and $`T\subset T_\epsilon \subset C_\epsilon `$ for any $`\epsilon \le 1.`$ Also, $`S\subset G_1\subset G_n\subset C_{1/n}`$ for any $`n.`$ In dimension two, some of these classes are the same, because all terminal singularities are smooth and all canonical singularities are Gorenstein. This is true for all singularities, not necessarily toric (cf. e.g. ).
There are two types of results on classification of toric Fano varieties: the general finiteness theorems and the explicit classification theorems.
The most general finiteness result is given by the following theorem.
###### Theorem 2.1.
(A. Borisov -L. Borisov, 1992, ) For any $`k,`$ $`\epsilon >0,`$ the set $`C_\epsilon (k)`$ is finite.
A weaker version of this theorem was proven earlier by V. Batyrev.
###### Theorem 2.2.
(V. Batyrev, 1982, ) For any $`k,n,`$ the set $`G_n(k)`$ is finite.
It should be mentioned that the above theorems are toric cases of the more general boundedness conjectures for Fano varieties (cf. ). The combinatorial statement that corresponds to Theorem 2.1 is the following.
###### Theorem 2.3.
(D. Hensley, 1983, ) For any $`k,`$ $`\epsilon >0,`$ there are only finitely many (up to $`GL_k(\mathbb{Z})`$ action) convex lattice polytopes $`P`$ of dimension $`k`$ such that $`(\epsilon P)\cap \mathbb{Z}^k=\{0\}.`$
This theorem was first proven by D. Hensley in 1983 (cf. ). Hensley also proved a bound on the volume of such polytopes. This bound was improved by Lagarias and Ziegler (cf. ). The proof of the above theorem in is similar but ineffective. In section 4 of this paper we will sketch a simple geometric proof of it (also ineffective). Theorem 2.3 also has the following interesting corollary.
###### Theorem 2.4.
(D. Hensley, 1983, ) For any $`k,m`$ there are only finitely many convex lattice polytopes of dimension $`k`$ with exactly $`m`$ points strictly inside, up to lattice isomorphisms.
One must note that in the above theorem $`m\ge 1,`$ otherwise the statement is false.
The particular classification theorems are mostly concerned with the sets $`S`$, $`T,`$ $`C`$, and $`G`$ for small values of $`k.`$ The smooth case was studied the most. For $`k=2`$ the classification is very easy: there are only 5 examples, namely $`P^1\times P^1`$ and $`P^2`$ with up to three blown-up points. (The points that can be blown up are the closed orbits of the torus action on $`P^2`$).
For $`k=3`$ the classification was done independently by V. Batyrev (cf. ), and by K. Watanabe and M. Watanabe (cf. ). It consists of 18 examples. For $`k=4`$ the situation is more complicated. There was a lot of work on this, beginning with the thesis of V. Batyrev (cf. ). Unfortunately, it contained some mistakes in the case-by-case analysis, which resulted in missing cases. This was partially fixed in . The recent preprint of H. Sato contains 124 polytopes, which is most probably the complete list. For $`k5`$ there are some general results, due to V. Batyrev, G. Ewald (cf. ), Voskresenskii and Klyachko (cf. ), and others. We refer to the preprints of Batyrev (cf. ) and Sato (cf. ), and the book of Ewald (cf. ) for explanations and further references.
The set $`T(k)`$ is known for $`k=2`$ where $`T(2)=S(2)`$. For $`k=3`$ all $``$factorial toric Fano varieties with Picard number 1 were classified by A. Borisov and L. Borisov (cf. ). Such varieties correspond to lattice tetrahedra with one lattice point inside and no points on edges or faces. There are 8 examples. All but one of the corresponding varieties are weighted projective spaces with the following weights.
(1,1,1,1), (1,1,1,2), (1,1,2,3), (1,2,3,5), (1,3,4,5), (2,3,5,7), (3,4,5,7)
Combinatorially, to obtain the corresponding lattice tetrahedra one can take the tetrahedra in $`^3`$ such that $`0`$ is a linear combination of vertices with the above coefficients. Then for the lattice one should take the lattice in $`^3`$ generated by the vertices of such a tetrahedron.
One more variety is a quotient of $`P^4`$ by some action of the group $`/5`$. This corresponds to taking the lattice tetrahedron that corresponds to $`P^4`$ and enlarging the lattice to some bigger lattice with relative index 5.
This classification relies on a computer, but its essential part is computer-free. The whole set $`T(3)`$ was also studied by A. Borisov and L. Borisov with extensive use of a computer (cf. ). We found the minimal and maximal such polyhedra, with respect to the natural embedding ordering. There are 13 minimal and 9 maximal ones. The total list was never produced, because the computational complexity of checking the pairwise non-equivalence of polyhedra was too big for the slow computer that we used at that time. It is expected to contain several hundred examples.
The set $`C(2)`$ is pretty easy to determine. It consists of 16 elements. It was determined, among others, by V. Batyrev (cf. ). We refer to for a sketch of an easy proof. We should note that the paper of S. Rabinowitz (cf. ) on this topic, referred to in , unfortunately misses one example. The set $`C(3)`$ is probably too big for a reasonable classification. The set of all simplexes in there was determined by A. Borisov and L. Borisov, using a computer (cf. ). It contains 225 elements.
The Gorenstein toric Fano varieties are important for mathematical physics. As noticed by Batyrev (cf. ) they provide examples for the mirror symmetry conjecture. For this reason they received considerable attention, especially among physicists. As noticed before, the set $`G(2)`$ is equal to $`C(2)`$, so it is known (cf. the paragraph above). The set $`G(3)`$ was studied by M. Kreuzer and H. Skarke, using a computer. They found 4319 such polytopes. Kreuzer and Skarke went further to obtain some results for $`k=4`$. A good account of these results can be found at M. Kreuzer’s webpage “http://tph16.tuwien.ac.at/~kreuzer/CY”.
## 3. $``$factorial toric singularities
A $``$factorial toric singularity $`X`$ of dimension $`k`$ is just a quotient of the affine space $`A^k`$ by a finite abelian subgroup of $`GL_k`$. It corresponds to a simplicial rational cone $`C`$ in $`^k=(^k),`$ where $`^k`$ is the lattice of one-dimensional subgroups of the torus. This cone does not contain any non-trivial linear subspaces of $`^k.`$ We will also assume that it has maximal dimension, otherwise the corresponding variety is (non-canonically) isomorphic to a product of some torus and a lower-dimensional toric singularity (cf., e.g. , , ).
Let us denote by $`P_1,P_2,\mathrm{},P_k`$ the lattice points on the extremal rays of $`C`$ that are closest to zero. There is exactly one linear function $`\phi `$ on $`^k`$ such that $`\phi (P_i)=1`$ for all $`i.`$ Because $`C`$ is rational, $`\phi `$ has rational coefficients.
The Gorenstein index of the singularity $`X(C)`$ is equal to $`(\phi (^k):),`$ the least common multiple of the denominators of the coefficients of $`\phi `$. The minimal log-discrepancy is the smallest value of $`\phi `$ on the lattice points in $`C.`$ Actually, there are two versions of minimal log-discrepancy, the total log-discrepancy and the Shokurov log-discrepancy. The first one is defined using the exceptional divisors of all possible birational morphisms $`YX.`$ The second one only uses the divisors whose image is the distinguished point of $`X.`$ In our case we have such a distinguished point, the closed orbit of the torus action. The total log-discrepancy is the smallest non-zero value of $`\phi `$ on the lattice points in the closed cone $`C.`$ The Shokurov log-discrepancy is the smallest value of $`\phi `$ on the lattice points in the open cone $`C.`$
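As a concrete illustration of these definitions, the following sketch computes $`\phi `$ and the Gorenstein index for a simplicial cone. The example rays are our own hypothetical choice, not taken from the text; the functional is obtained by solving $`\phi (P_i)=1`$ exactly over the rationals, and the Gorenstein index is read off as the least common multiple of the denominators of its coefficients (computing the log-discrepancies would additionally require scanning lattice points of the cone). The sketch uses `math.lcm`, available in Python 3.9+.

```python
from fractions import Fraction
from math import lcm

def phi_from_rays(rays):
    # Solve phi(P_i) = 1 by Gauss-Jordan elimination over the rationals.
    k = len(rays)
    A = [[Fraction(x) for x in P] + [Fraction(1)] for P in rays]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [a / A[col][col] for a in A[col]]
        for r in range(k):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] for i in range(k)]

rays = [(1, 0, 0), (0, 1, 0), (1, 1, 2)]    # hypothetical primitive ray generators
phi = phi_from_rays(rays)
print(phi)                                   # [1, 1, -1/2]
print(lcm(*(c.denominator for c in phi)))    # Gorenstein index: 2
```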
The total log-discrepancy is obviously not bigger than the Shokurov log-discrepancy, and is also at most 1. The Shokurov log-discrepancy is equal to $`k`$ for smooth points, and is at most $`k/2`$ otherwise. Both log-discrepancies are positive, which reflects the fact that $``$factorial toric singularities (and any quotient singularities in general) are log-terminal (cf. , ).
The singularity is called $`\epsilon `$logterminal ($`\epsilon `$logcanonical) if the total log-discrepancy is greater than (or equal to) $`\epsilon `$. If $`\epsilon =1`$ we get just the definitions of terminal (canonical) singularities. These classes of singularities are very important for the Minimal Model Program (cf. , ). The most general finiteness result in this area is that for any fixed $`k`$ and $`\epsilon `$ all $`\epsilon `$logterminal ($`\epsilon `$logcanonical) singularities form finitely many “series” (cf. ). In order to explain what this means, let us first review the known classification results for terminal and canonical singularities for small $`k.`$
Terminal singularities form the most restrictive class of singularities that has to be allowed in the Minimal Model Program to make it work. As such, they have been extensively studied. In dimension 2 it is easy to see that every terminal singularity is smooth. In dimension 3 an analytic classification exists, due in the general case to S. Mori, M. Reid, and others (cf. , ). A part of it is the classification in the toric case. It is the following theorem, often referred to as the “Terminal Lemma”.
###### Theorem 3.1.
(D. Morrison - G. Stevens, 1984, ) Every three-dimensional terminal toric singularity is isomorphic to a quotient of $`A^3`$ by a group $`\mu _n`$ which acts linearly with weights $`\frac{1}{n}(1,a,na)`$ for some $`n`$ and $`a/n,`$ with $`gcd(a,n)=1`$. Here $`\mu _n`$ is the group of $`n`$th roots of unity. The notation $`\frac{1}{n}(1,a,na)`$ means that $`\rho \mu _n`$ multiplies the first coordinate by $`\rho ,`$ the second coordinate by $`\rho ^a,`$ and the third coordinate by $`\rho ^{(na)}.`$
In fact, in this theorem is stated for three-dimensional cyclic quotient singularities. However, it is easy to see that every isolated $``$factorial toric singularity is a cyclic quotient. Theorem 3.1 can also be stated as follows. Suppose $`x=(x_1,x_2,x_3)`$ is a generator of the finite cyclic subgroup of the torus $`T^3=^3/^3`$ that corresponds to the singularity. Then the singularity is terminal if and only if (up to permutation of variables) $`x_1+x_20`$ mod $``$, and for all $`k`$ such that $`kx0`$ in $`^3/^3`$ none of the coordinates of $`kx`$ is $`0`$ in $`/`$. In other words, the subgroup belongs to one of three fixed 2-dimensional subtori of $`T^3`$ and intersects some of their smaller subtori trivially.
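This criterion is easy to test by machine. The sketch below instead uses the Reid-Tai inequality, a standard criterion not stated in this survey (so treat it as an imported ingredient): an isolated cyclic quotient $`\frac{1}{n}(w_1,w_2,w_3)`$ is terminal exactly when the fractional parts satisfy $`\{kw_1/n\}+\{kw_2/n\}+\{kw_3/n\}>1`$ for all $`k=1,\mathrm{},n1`$. By Theorem 3.1, every triple that survives the filter should reduce to the form $`\frac{1}{n}(1,a,na)`$ after permuting coordinates and changing the generator.

```python
from math import gcd

def is_terminal(n, w):
    # Reid-Tai: sum_i {k*w_i/n} > 1 for all k = 1..n-1,
    # written with integers as sum_i (k*w_i mod n) > n.
    return all(sum(k * wi % n for wi in w) > n for k in range(1, n))

n = 7
hits = [(a, b, c)
        for a in range(1, n) for b in range(a, n) for c in range(b, n)
        if all(gcd(x, n) == 1 for x in (a, b, c))
        and is_terminal(n, (a, b, c))]
print(hits)  # note each surviving triple has two entries summing to 0 mod 7
```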
The proof of Theorem 3.1 relies on the combinatorial lemma due to G. K. White, for which D. Morrison and G. Stevens also proposed a new proof. To explain this lemma, and why it is relevant, we first need to explain how $``$factorial terminal toric singularities are related to the lattice-free simplexes. Here by a lattice-free simplex (or, in general, a lattice-free polytope) we will mean a simplex (or polytope) whose vertices are in the lattice, and which contains no other lattice points inside or on the boundary.
In one direction, to any $``$factorial terminal toric singularity one can associate a simplex, namely the set of all points $`x`$ in $`C`$ such that $`\phi (x)1.`$ The terminality of the singularity is equivalent to the simplex being lattice-free. In the other direction, for any lattice-free simplex with a distinguished vertex one can construct a rational cone by translating the simplex to put this vertex at the origin and generating the simplicial cone using the other vertices. This cone will determine a $``$factorial terminal toric singularity. Thus, the equivalence classes of the $``$factorial terminal toric singularities are in one-to-one correspondence with the equivalence classes of the lattice-free simplexes with a distinguished vertex. With this in mind, the classification of such singularities in dimension three is equivalent to Theorem 3.2 below. To formulate this theorem we first need the following definition.
###### Definition 3.1.
(cf., e.g. ) Suppose $`P`$ is a convex lattice-free polytope (or, in general, any convex body in $`R^k`$). Then its width is the minimum of the lengths of its projections to $``$ using linear functions on $`R^k`$ with integer coefficients.
It is clear that if a convex lattice polytope has width 1, it contains no lattice points inside (though it may still have some on the boundary). The following theorem is a kind of converse statement for tetrahedra.
###### Theorem 3.2.
(G. K. White, 1964, ). Every lattice-free tetrahedron has width 1.
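Theorem 3.2 is easy to probe numerically. The brute-force search below examines only integer functionals with coefficients in a small box, so in general it returns an upper bound on the width; the tetrahedron is a hypothetical lattice-free example of Reeve type, chosen by us purely for illustration.

```python
import itertools

def width_upper_bound(vertices, B=3):
    # Minimize max f(v) - min f(v) over nonzero integer functionals f
    # with coefficients in [-B, B]; this bounds the lattice width from above.
    k = len(vertices[0])
    best = None
    for f in itertools.product(range(-B, B + 1), repeat=k):
        if any(f):
            vals = [sum(fi * vi for fi, vi in zip(f, v)) for v in vertices]
            spread = max(vals) - min(vals)
            best = spread if best is None else min(best, spread)
    return best

T = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 2)]  # lattice-free tetrahedron
print(width_upper_bound(T))  # 1, achieved e.g. by f = (1, 1, -1)
```

Since a polytope of positive volume has width at least 1, the bound of 1 is exact here, as White's theorem predicts.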
In fact, G. K. White proved a stronger statement. He allowed the tetrahedra to have lattice points on one pair of opposite edges. There is also the following generalization of Theorem 3.2.
###### Theorem 3.3.
Every 3-dimensional lattice-free polytope has width 1.
We were unable to trace the proof of this theorem. V. Danilov attributes it to M. A. Frumkin (cf. ). H. Scarf attributes it to R. Howe (cf. ). It looks like neither of the proofs has been published.
We should note that if one allows points on the boundary, the width may be bigger than 1, even in dimension two. On the other hand, the width is always bounded by a function of the dimension. This result is in fact very general: it holds for all convex lattice-free bodies in $`^k,`$ not necessarily lattice polytopes. This theorem is quite old, and there is a long history of improvements on the bound. We refer to the paper of Kannan and Lovász for a very good bound and further references. We should also mention that in higher dimensions there exist lattice-free simplexes of arbitrarily large width, by the result of J.-M. Kantor (cf. ).
In dimension 4 much less is known. D. Morrison and G. Stevens classified all abelian quotients of dimension 4 that are both terminal and Gorenstein (cf. ). In 1988 S. Mori, D. Morrison, and I. Morrison used a computer to study terminal $`(/p)`$quotients, for prime $`p`$ (cf. ). The singularities they discovered seemed to fall into a finite number of series, similar to the series of three-dimensional terminal singularities above. More precisely, they found 1 three-parameter series, 2 two-parameter series, 29 one-parameter series and several thousand 0-parameter series (what they called unstable singularities). Here the series of terminal singularities of dimension 3 is considered to have 2 parameters. They conjectured that their list was complete. This was partially proven in 1990 by G. Sankaran (cf. ). He showed that all the “stable” four-dimensional prime quotient singularities are among those found in . Together with our result (cf. ), this implies that the Mori-Morrison-Morrison list is complete up to possibly finitely many exceptions. Given the extensive computer evidence of , it is quite likely that there are indeed no other such singularities.
For canonical singularities, the classification is known for $`k3`$. For $`k=2`$ it is very easy to see that they are cyclic quotients of the type $`\frac{1}{n}(1,n1)`$ for some $`n`$. In dimension 3 it was done by M.-N. Ishida and N. Iwashita (cf. ). In fact, their result is very general. They obtained a complete classification of all 3-dimensional canonical toric singularities, including those that are not $``$factorial. In the $``$factorial case there are two 2-parameter series, one 1-parameter series and two 0-parameter series (exceptional singularities). We refer to , Theorem 4.1 for the details. We should also note that similar but weaker results were obtained independently by D. R. Morrison, using different methods (cf. ).
In general, for any fixed $`k`$ and $`\epsilon `$ the $`\epsilon `$logterminal (logcanonical) singularities form a finite number of series. The general definition of a series is somewhat complicated; we refer to for the details. The implication for the cyclic quotients of prime index is the following. We formulate it for the $`\epsilon `$logterminal case; the same is true verbatim for the $`\epsilon `$logcanonical singularities.
###### Theorem 3.4.
(A. Borisov, 1997, ) For any fixed $`k`$ and $`\epsilon `$ there is a finite collection of closed subgroups $`\{V_i\}`$ of the torus $`T^k=^k/^k`$ and a finite collection of their closed subgroups $`V_{i,j}V_i`$ such that the $`(/p)`$quotient singularity is $`\epsilon `$logterminal if and only if a generator of the corresponding subgroup of $`T^k`$ belongs to $`V_i\backslash (\underset{j}{}V_{i,j})`$ for some i. (It is clear that the above condition does not depend on the choice of a generator).
This theorem, and a more general theorem of for all $``$factorial toric singularities, in fact follows from a theorem of J. Lawrence (cf. ). A good approximation to this theorem, which is sufficient for Theorem 3.4 above, is the following.
###### Theorem 3.5.
(J. Lawrence, 1991, ) Suppose $`S`$ is a closed subset of a finite-dimensional torus $`T`$ such that $`nSS`$ for all $`nN.`$ Then $`S`$ is a finite union of the closed subgroups of $`T.`$
The main theorem of Lawrence is more general.
###### Theorem 3.6.
Suppose $`T`$ is as above (or, possibly, $`T`$ is not a torus but a closed subgroup of some torus). Suppose $`U`$ is an open subset of $`T`$ (or, more generally, a full subset of $`T`$, i.e. for all closed subgroups $`L`$ of $`T`$ the intersection of $`L`$ and $`U`$ is either empty or contains a relatively open subset of $`L`$). Consider all closed subgroups of $`T`$ that don’t intersect $`U`$. Then the number of maximal elements among them, with respect to inclusion, is finite.
Lawrence’s proof is very elegant and well-written, which makes the paper a must-read for anyone seriously interested in the subject. It uses some geometry of numbers. One can also give a geometric proof of its weaker version (Theorem 3.5 above). It is similar to the geometric proof of the Hensley theorem, which will be discussed in the next section. It is however somewhat complicated, and will possibly appear elsewhere.
## 4. Some similarities and open questions
It was noticed in particular by J. Lawrence that the topics of the above two sections have something in common. Namely, the kind of geometry of numbers involved in the proof of the main theorem in is similar to what was used by Hensley in (and later by Lagarias-Ziegler, cf. , and Borisov-Borisov, cf. ). Another similarity is the following. The classification of terminal (and canonical) weighted projective spaces by Borisov-Borisov (cf. ) is very similar to the case-by-case analysis of Sankaran (cf. ).
In our opinion, the main driving force behind both the results of Hensley and Lawrence lies in some elementary properties of the dynamics of multiplication by integers on a torus. To explain this, we will sketch a short conceptual proof of the (qualitative) theorem of Hensley. One can also prove in a similar manner a weaker theorem of Lawrence (Theorem 3.5). But it is somewhat complicated, so it will possibly appear elsewhere. We should also note that the same ideas were used in to prove Shokurov’s conjecture that minimal discrepancies of toric singularities can only accumulate from above.
###### Theorem 4.1.
(D. Hensley, , also , ) For any $`k`$ and $`\epsilon >0,`$ there are only finitely many (up to $`GL_k()`$ action) convex lattice polytopes $`P`$ of dimension $`k`$ such that $`(\epsilon P)^k=\{0\}.`$
Sketch of the proof. For simplicity, we will present the proof for the case $`\epsilon =1`$. However, essentially the same proof works in the general case. First of all, using the Minkowski Lemma one can reduce the problem to the case of simplexes whose vertices generate the whole lattice (cf., e.g. ). If we move the simplex to put one of the vertices to zero, we get two lattices. The first one, which we will now call $`^k`$, is generated by the vertices. The second one is the original lattice. It contains $`^k`$, and the quotient group is generated by the point $`O`$ in $`P`$ which corresponds to the zero of the original lattice. This reduces the problem to showing that for every $`k`$ there are just finitely many points $`O`$ in the standard open simplex $`\mathrm{\Delta }T^k`$ such that the finite subgroup of $`T^k`$ generated by $`O`$ contains no other points from $`\mathrm{\Delta }`$.
Suppose there are infinitely many such points. By compactness of $`\overline{\mathrm{\Delta }}`$ we can find an infinite sequence $`\{O_i\}`$ of such points that converge to some point $`O^{}\overline{\mathrm{\Delta }}.`$
Suppose first that $`O^{}\mathrm{\Delta }.`$ Then for some natural $`n>1`$ the point $`nO^{}`$ also belongs to $`\mathrm{\Delta }.`$ If $`nO^{}O^{}`$ then for $`i`$ big enough $`O_i`$ is close to $`O^{}`$ and $`nO_i`$ is close to $`nO^{}`$, so they are different points in $`\mathrm{\Delta },`$ a contradiction. If $`nO^{}=O^{}`$ then take $`\epsilon >0`$ such that the ball $`B_\epsilon (O^{})`$ of radius $`\epsilon `$ is contained in $`\mathrm{\Delta }`$. For $`i`$ big enough $`|O_iO^{}|<\epsilon /n.`$ Therefore the point
$$nO_i=nO^{}+n(O_iO^{})=O^{}+n(O_iO^{})$$
is in $`\mathrm{\Delta },`$ and is different from the point $`O_i=O^{}+(O_iO^{})`$, a contradiction.
Finally, if $`O^{}`$ belongs to the boundary of $`\mathrm{\Delta },`$ choose the face $`\mathrm{\Delta }^{}`$ to whose interior it belongs. Choose $`n>1`$ so that $`nO^{}`$ also belongs to the interior of $`\mathrm{\Delta }^{}`$. Then the same argument as above works, because, since the $`O_i`$ approach $`O^{}`$ from the inside of $`\mathrm{\Delta },`$ the $`nO_i`$ also approach $`nO^{}`$ from the inside of $`\mathrm{\Delta }.`$ One just needs to choose $`\epsilon `$ small enough so that $`B_\epsilon (O^{})`$ only intersects the faces that contain $`O^{}.`$ This completes the proof.
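The reduction above can be explored numerically in dimension two. The sketch below enumerates, for a few denominators $`n`$, the points $`O=(a/n,b/n)`$ of the open standard triangle whose cyclic group in $`T^2`$ meets the triangle in no point other than $`O`$ itself; finiteness of the full list over all $`n`$ corresponds, via the reduction just described, to what the theorem asserts. This is only an illustration of the statement, not part of the proof.

```python
from fractions import Fraction

def in_open_simplex(p):
    return all(x > 0 for x in p) and sum(p) < 1

def lonely_points(n):
    out = []
    for a in range(1, n):
        for b in range(1, n):
            O = (Fraction(a, n), Fraction(b, n))
            if not in_open_simplex(O):
                continue
            P, alone = tuple((2 * x) % 1 for x in O), True
            while P != O:              # walk through the cyclic group <O>
                if in_open_simplex(P):
                    alone = False
                    break
                P = tuple((x + y) % 1 for x, y in zip(P, O))
            if alone:
                out.append(O)
    return out

for n in (3, 5, 7, 11, 13):
    print(n, lonely_points(n))
```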
We would like to mention now several possible directions of research in this area.
1) Try to classify toric Fano varieties of small dimension with relatively mild singularities, using a computer.
2) Try to understand better the smooth and Gorenstein toric Fano varieties. Consult and for some particular conjectures.
3) Write a computer code that would find explicitly the finite set of series given in Theorem 3.4. In particular, automate the argument of Sankaran (cf. ).
4) Try to generalize Theorem 3.4 to the non-$``$factorial case. One problem with this is that it is not exactly clear what one should mean by a series of singularities in this more general context. One non-trivial result in this direction is due to M.-N. Ishida and N. Iwashita (cf. ). As a related question, what can be said about the set of all lattice-free polytopes of given dimension?
5) Try to understand better the set of Shokurov log-discrepancies of $``$factorial toric singularities. We proved in that these log-discrepancies can only accumulate from above, and only to such log-discrepancies of smaller dimensions. This proved the toric case of a more general conjecture of Shokurov. It also suggests that “stable” cyclic quotient singularities somehow come from lower-dimensional singularities. This was also noticed by I. Morrison (cf. ). It would be interesting to understand exactly how it happens, and maybe get a better conceptual understanding of terminal cyclic quotients of arbitrary dimension.
6) The effective versions of Hensley’s theorem (cf. , ) provide bounds for the volumes of the corresponding polytopes that are asymptotically close to the actually existing examples (cf. ). It would be very interesting to find an effective version of the theorem of Lawrence, and to determine the asymptotics of the number of series, and other parameters involved. We should note here the paper of J.-M. Kantor, who proved the existence of higher-dimensional lattice-free simplexes with arbitrarily large width (cf. ).
There are many other open questions in the area. Some of them can be found in the survey of Gritzmann and Wills (cf. ). We would like to stress that many of the problems and methods involved here are quite elementary. On the other hand, the area is related to many very advanced areas of modern mathematics and mathematical physics. This makes it a good starting ground for beginning researchers. We hope that this short survey will help bring some more people into this interesting area.
# Pair contact process with diffusion – A new type of nonequilibrium critical behavior?
## Abstract
In the preceding article Carlon et al. investigate the critical behavior of the pair contact process with diffusion. Using density matrix renormalization group methods, they estimate the critical exponents, raising the possibility that the transition might belong to the same universality class as branching annihilating random walks with even numbers of offspring. This is surprising since the model does not have an explicit parity-conserving symmetry. In order to understand this contradiction, we estimate the critical exponents by Monte Carlo simulations. The results suggest that the transition might belong to a different universality class that has not been investigated before.
Symmetries and conservation laws are known to play an important role in the theory of nonequilibrium critical phenomena . As in equilibrium statistical mechanics, most phase transitions far from equilibrium are characterized by certain universal properties. The number of possible universality classes, especially in 1+1 dimensions, is believed to be finite. Typically each of these universality classes is associated with certain symmetry properties.
One of the most prominent universality classes of nonequilibrium phase transitions is directed percolation (DP) . According to a conjecture by Janssen and Grassberger, any phase transition from a fluctuating phase into a single absorbing state in a homogeneous system with short-range interactions should belong to the DP universality class, provided that there are no special attributes such as quenched disorder, additional conservation laws, or unconventional symmetries . Roughly speaking, the DP class covers all models following the reaction-diffusion scheme $`A2A`$, $`A\mathrm{}`$. Regarding systems with a single absorbing state the DP conjecture is well established nowadays. However, even various systems with infinitely many absorbing states have been found to belong to the DP class as well .
Exceptions from DP are usually observed if one of the conditions listed in the DP conjecture is violated. This happens, for instance, in models with additional symmetries. An important example is the so-called parity-conserving (PC) universality class, which is represented most prominently by branching annihilating random walks with two offspring $`A3A,\mathrm{\hspace{0.17em}2}A\mathrm{}`$ . In 1+1 dimensions this process can be interpreted as a $`Z_2`$-symmetric spreading process with branching-annihilating kinks between oppositely oriented absorbing domains. Examples include certain kinetic Ising models , interacting monomer-dimer models , as well as generalized versions of the Domany-Kinzel model and the contact process with two symmetric absorbing states .
A very interesting model, which is studied in the present work, is the (1+1)-dimensional pair contact process (PCP) $`2A3A`$, $`2A\mathrm{}`$ . Depending on the rate for offspring production, this model displays a nonequilibrium transition from an active into an inactive phase. Without diffusion the PCP has infinitely many absorbing states and the transition is found to belong to the universality class of DP. The pair contact process with diffusion (PCPD), however, is characterized by a different type of critical behavior. In the inactive phase, for example, the order parameter no longer decays exponentially; instead it is governed by an annihilating random walk with an algebraic decay. Moreover, the PCPD has only two absorbing states, namely, the empty lattice and the state with a single diffusing particle. For these reasons the transition is expected to cross over to a different universality class. The PCPD, also called the annihilation/fission process, was first proposed by Howard and Täuber as a model interpolating between “real” and “imaginary” noise. Based on a field-theoretic renormalization group study, they predicted non-DP critical behavior at the transition.
In the preceding article, Carlon, Henkel, and Schollwöck investigate a lattice model of the PCPD with random-sequential updates. In contrast to Ref. , each site of the lattice can be occupied by at most one particle, leading to a well-defined particle density in the active phase. Performing a careful density matrix renormalization group (DMRG) study , Carlon et al. estimate two of four independent critical exponents. Depending on the diffusion rate $`d`$, their estimates for $`\theta =z`$ vary in the range $`1.60(5)\mathrm{}1.87(3)`$ while $`\beta /\nu _{}`$ is found to be close to $`0.5`$. Since these values are close to the PC exponents $`z=1.749(5)`$ and $`\beta /\nu _{}=0.499(2)`$, they suggest that the transition might belong to the PC universality class.
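For concreteness, here is one possible random-sequential realization of the reactions $`2A3A`$, $`2A\mathrm{}`$ with at most one particle per site and diffusion rate $`d`$. The detailed choices below (how the neighbor and the offspring site are picked) are ours and are not guaranteed to match the precise rates of the models discussed here, so the critical point of this sketch need not coincide with the values quoted later.

```python
import numpy as np

def pcpd_sweep(s, p, d, rng):
    # One Monte Carlo sweep of a 1D PCPD variant (illustrative rules):
    # a chosen particle diffuses with probability d; otherwise, if its
    # random neighbor is occupied, the pair annihilates (probability p)
    # or places an offspring on the empty site beyond the pair.
    L = len(s)
    for _ in range(L):
        i = rng.integers(L)
        if not s[i]:
            continue
        j = (i + rng.choice((-1, 1))) % L
        if rng.random() < d:              # diffusion attempt
            if not s[j]:
                s[i], s[j] = 0, 1
        elif s[j]:                        # sites (i, j) form a pair
            if rng.random() < p:          # 2A -> 0
                s[i] = s[j] = 0
            else:                         # 2A -> 3A
                k = (2 * j - i) % L
                if not s[k]:
                    s[k] = 1

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 256)               # random initial configuration
for _ in range(1000):
    pcpd_sweep(s, p=0.12, d=0.1, rng=rng)
print(s.mean())                           # particle density
```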
The conjectured PC transition poses a puzzle. In all cases investigated so far, the PC class requires an exact symmetry on the level of microscopic rules. In 1+1 dimensions this symmetry may be realized either as a parity conservation law or as an explicit $`Z_2`$ symmetry relating two absorbing states. In the PCPD, however, the dynamic rules are neither parity conserving nor invariant under an obvious symmetry transformation. Yet how can the critical properties of the transition change without introducing or breaking a symmetry? As a possible way out, there could be a hidden symmetry in the model, but we have good reasons to believe that there is no such hidden symmetry or conservation law in the PCPD. This would imply that the PC class is not characterized by a “hard” $`Z_2`$ symmetry on the microscopic level, rather, it may be sufficient to have a “soft” equivalence of two different absorbing states in the sense that they are reached by the dynamics with the same probability.
In this paper I suggest that the transition in the PCPD might belong to a different yet unknown universality class. The reasoning is based on the conservative point of view that a “soft” equivalence between two absorbing states is not sufficient to obtain PC critical behavior. As described in Ref. , the essence of the PC class is a competition between two types of absorbing domain that are related by an exact $`Z_2`$ symmetry. Close to criticality these growing domains are separated by localized regions of activity. In 1+1 dimensions, these active regions may be interpreted as kinks between oppositely oriented domains, which, by their very nature, perform an unbiased parity-conserving branching-annihilating random walk. In the PCPD, however, it is impossible to give an exact definition of “absorbing domains.” We can, of course, consider empty intervals without particles as absorbing domains. Yet, what is the meaning of a domain with only one diffusing particle? And even if such a definition were meaningful, what would be the boundary between an empty and a “single-particle” domain? Moreover, in PC models there are two separate sectors of the dynamics (namely, with even and odd particle numbers), whereas there are no such sectors in the PCPD. In fact, even when looking at typical space-time trajectories, the PCPD differs significantly from a standard branching-annihilating random walk with two offspring (see Fig. 1). In particular, offspring production in the PCPD occurs spontaneously in the bulk when two diffusing particles meet, whereas a branching-annihilating random walk generates offspring all along the particle trajectories. Therefore, it is reasonable to expect that the two critical phenomena are not fully equivalent.
In order to investigate this question in more detail, it is useful to compare the DMRG estimates with numerical results obtained by Monte Carlo simulations. It is important to note that there are two possible order parameters, namely, the particle density
$$\rho _1(t)=\frac{1}{L}\underset{i}{}s_i(t)$$
(1)
and the density of pairs of particles
$$\rho _2(t)=\frac{1}{L}\underset{i}{}s_i(t)s_{i+1}(t),$$
(2)
where $`L`$ is the system size and $`s_i(t)=0,1`$ denotes the state of site $`i`$ at time $`t`$. High-precision simulations show that the critical behavior at the transition is characterized by unusually strong corrections to scaling . These scaling corrections are demonstrated in Fig. 2, where the temporal decay of the two order parameters for $`d=0.1`$ is shown as a function of time running up to almost $`10^6`$ time steps. The pronounced curvature of the data in the double-logarithmic plot demonstrates the presence of strong corrections to scaling. Interestingly, the two curves bend in opposite directions and tend toward the same slope. Thus, in contrast to the mean-field prediction, $`\rho _1(t)`$ and $`\rho _2(t)`$ seem to scale with the same exponent. Discriminating between the negative curvature of $`\rho _1(t)`$ and the positive curvature of $`\rho _2(t)`$, we estimate the critical point and the exponent $`\delta =\beta /\nu _{}`$ by
$$p_c=0.1112(1),\delta =\beta /\nu _{}=0.25(2).$$
(3)
While this estimate deviates only slightly from the known PC value $`0.286(2)`$, other exponents deviate more significantly. Performing dynamic simulations starting with a single pair of particles , we measure the survival probability $`P(t)`$ that the system has not yet reached one of the two absorbing states , the average number of particles $`N_1(t)`$ and pairs $`N_2(t)`$, and the mean square spreading from the origin $`R^2(t)`$ averaged over the surviving runs. At criticality, these quantities should obey asymptotic power laws, $`P(t)t^\delta ^{}`$, $`N_1(t)N_2(t)t^\eta `$, and $`R^2(t)t^{2/z}`$, with certain dynamical exponents $`\delta ^{}`$ and $`\eta `$. Notice that in non-DP spreading processes the two exponents $`\delta =\beta /\nu _{}`$ and $`\delta ^{}=\beta ^{}/\nu _{}`$ may be different. Going up to $`2\times 10^5`$ time steps we obtain the estimates
$$\delta ^{}=0.13(2),\eta =0.13(3),z=1.83(5).$$
(4)
Although the precision of these simulations is only moderate, the estimates differ significantly from the PC exponents $`\delta ^{}=0.286`$, $`\eta =0`$ in the even sector and $`\delta ^{}=0`$, $`\eta =0.285`$ in the odd sector. The exponent $`z`$, on the other hand, seems to be close to the PC value $`1.75`$.
The most striking deviation is observed in the exponent $`\beta `$, which is not accessible in DMRG studies. Here the estimates seem to decrease with increasing numerical effort. As an upper bound we find
$$\beta <0.67.$$
(5)
Even more recently, Ódor studied a slightly different version of the PCPD on a parallel computer, reporting the estimate $`\beta =0.58(1)`$, which is incompatible with the PC exponent $`\beta =0.92(2)`$.
In summary, the critical behavior of the PCPD is affected by strong corrections to scaling, which makes it extremely difficult to estimate the critical exponents. Although the DMRG estimates presented in are very accurate, they have to be taken with care since they are affected by scaling corrections as well. Thus, the apparent coincidence with the exponents of the PC class may be accidental. Comparing other exponents, in particular the density exponent $`\beta `$ and the cluster exponents $`\delta ^{}`$ and $`\eta `$, the PC hypothesis can be ruled out.
I would like to thank J. Cardy, E. Carlon, P. Grassberger, M. Henkel, M. Howard, J. F. F. Mendes, G. Ódor, U. Schollwöck, U. Täuber, and F. van Wijland for stimulating discussions. |
# Glassy properties and localization of interacting electrons in two-dimensional systems
## Abstract
We present a computer simulation study of a disordered two-dimensional system of localized interacting electrons at thermal equilibrium. It is shown that the configuration of occupied sites within the Coulomb gap persistently changes at temperatures much less than the gap width. This is accompanied by large time dependent fluctuations of the site energies. The observed thermal equilibration at low temperatures suggests a possible glass transition only at $`T=0`$. We interpret the strong fluctuations in the occupation numbers and site energies in terms of the drift of the system between multiple energy minima. The results also imply that interacting electrons may be effectively delocalized within the Coulomb gap. Insulating properties, such as hopping conduction, appear as a result of long equilibration times associated with glassy dynamics. This may shed new light on the relation between the metal-insulator transition and glassy behavior.
pacs: 72.20.Ee,75.10.Nr,64.70.Pf
The existence of glassy properties in a system of strongly disordered localized interacting electrons was predicted some time ago , thus introducing the terms “Coulomb glass” or “electron glass”. Recently, experimental confirmation of this concept has been obtained through the observation of very slow relaxation times characteristic of glassy dynamics . Another important manifestation of glassy properties, namely the existence of multiple low energy minima in the energy landscape of the system, has been known since the first computer simulations of the Coulomb glass . However, while there is strong evidence for glassy behavior, no finite temperature thermodynamic glass transition has been observed either experimentally or numerically. Furthermore, the two dimensional (2D) Coulomb glass has much in common with various 2D spin glass (SG) models, where there is strong numerical evidence that no finite temperature phase transition occurs . We wish to point out that our results also support the absence of any finite temperature phase transition in the 2D Coulomb glass.
The goal of this Letter is to study whether and how the multi-minima structure of the energy landscape, which results from electron-electron interaction, affects the fluctuation properties of the Coulomb glass. Previous finite temperature computer simulations have confirmed the existence of a robust and stable gap in the density of states (DS) around the Fermi level, known as the Coulomb gap. However, as far as we know, the following question has not been studied: Is the configuration of occupied sites within the gap time independent, or does it change persistently with time due to equilibrium fluctuations?
Such fluctuations may be very important for transport if the equilibration time of the system is not too long. For example, near a metal non-metal transition the equilibration time decreases sharply . It should be remembered that the standard theory of hopping conduction in localized systems is based upon percolation theory, and it assumes that the basic electronic configurations and site energies near the Fermi level are time independent. This means that the percolation paths are also time independent. It has long been suggested that this single particle picture of transport may be altered by electron-electron interactions ; however, as yet no alternative picture has emerged. For clean systems and systems with weak disorder, it was recently demonstrated that the persistent change of the occupancy configurations may cause the mechanism of the conductivity to change from percolation to diffusion.
We present here computer simulations designed to study the effect of thermodynamic fluctuations on the site occupation numbers and energies in the Coulomb glass. Our results indeed show a persistent change of the occupation configuration within the Coulomb gap, even at temperatures well below the gap width. This persistent change creates a time dependent random potential, which causes fluctuations of the site energies. The fluctuations are much larger than the temperature, and for sites within the gap, are of the order of the gap width. We interpret these results in terms of a drift of the total energy of the system between different minima of the energy landscape. It can also be considered as some kind of classical delocalization effect.
For the purpose of our study we use the standard Coulomb glass Hamiltonian given by
$$H=\underset{i}{}\varphi _in_i+\frac{1}{2}\underset{ij}{}\frac{e^2}{r_{ij}}\left(n_i\nu \right)\left(n_j\nu \right).$$
(1)
The electrons occupy sites on a 2D lattice, $`n_i=0,1`$ are the occupation numbers of these sites and $`r_{ij}`$ is the distance between sites $`i`$ and $`j`$. The quenched random site energies $`\varphi _i`$ are distributed uniformly within an interval $`[A,A]`$. To make the system neutral each site has a positive background charge $`\nu e`$, where $`\nu `$ is the average occupation number, i.e. the filling factor of the lattice. We concentrate here on the case $`\nu =1/2`$, where the Fermi level is zero due to electron hole symmetry. It is expected that the features of this model relevant for our purpose are dimensionality, Coulomb interactions, and strong diagonal disorder. Hereafter we take the lattice constant $`a`$ as a length unit and $`e^2/a`$ as an energy unit. Then, the single particle energy at site $`i`$ is given by
$$ϵ_i=\varphi _i+\underset{j}{}\frac{1}{r_{ij}}\left(n_j\nu \right).$$
(2)
It has long been established that near the Fermi level the long range Coulomb interaction cannot be considered as a perturbation. This results in a soft gap in the density of single-particle states (DS), known as the Coulomb gap.
In this letter we study numerically the equilibrium fluctuations of the occupation numbers and site energies within the gap, at temperatures well below the gap width. To this end we use the standard Metropolis algorithm, where the rate of a hopping transition depends only on the energy difference between the initial and final configurations. Specifically, the rate does not depend on the hopping distance, which is limited only by the system size. The use of such transition rates greatly decreases the equilibration time of the system compared to short range hopping transitions. Note that the Hamiltonian of Eq. (1) does not contain any dynamics in itself, and therefore the simulation time does not reflect any physical time. However, averaging over the simulation time is equivalent to ensemble averaging, assuming the system is in thermal equilibrium. Related approaches are commonly used to study the thermodynamics of various SG models .
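A minimal version of this Metropolis procedure, for the Hamiltonian of Eq. (1) on a small torus, might look as follows. The lattice size, temperature, and number of hops are arbitrary illustrative choices, and the naive $`O(L^2)`$ potential evaluation would be too slow for the production sizes quoted below.

```python
import numpy as np

rng = np.random.default_rng(1)
L, A, T, nu = 16, 1.0, 0.1, 0.5
phi = rng.uniform(-A, A, size=(L, L))      # quenched random site energies
n = np.zeros((L, L), int)
n.ravel()[rng.choice(L * L, L * L // 2, replace=False)] = 1  # half filling

d = np.minimum(np.arange(L), L - np.arange(L))   # shortest path on the torus
with np.errstate(divide="ignore"):
    K = 1.0 / np.sqrt(d[:, None] ** 2 + d[None, :] ** 2)
K[0, 0] = 0.0                                    # no self-interaction

def potential(x, y, q):
    # Coulomb potential at (x, y) created by the charge array q
    I = (np.arange(L)[:, None] - x) % L
    J = (np.arange(L)[None, :] - y) % L
    return float(np.sum(q * K[I, J]))

def metropolis_hop():
    (xs, ys), (xd, yd) = rng.integers(0, L, (2, 2))
    if n[xs, ys] == 1 and n[xd, yd] == 0:
        q = n - nu
        q[xs, ys] = q[xd, yd] = 0.0              # their mutual term cancels
        dE = (phi[xd, yd] - phi[xs, ys]
              + potential(xd, yd, q) - potential(xs, ys, q))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            n[xs, ys], n[xd, yd] = 0, 1

for _ in range(20000):
    metropolis_hop()
```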
The simulations were performed on a square lattice of $`L\times L`$ sites with periodic boundary conditions. In this torus geometry, the distance between two sites is taken as the length of the shortest path between them. All results obtained were averaged over $`P`$ different sets of the random energies $`\{\varphi _i\}`$. Unless stated otherwise, the values $`L=50`$ and $`P=100`$ were used throughout. Also, we use the notation whereby $`\langle x\rangle `$ means time averaging of the quantity $`x`$, whereas $`\overline{x}`$ denotes averaging over sets of random energies $`\{\varphi _i\}`$.
The dynamics within the Coulomb gap can be seen in two ways. One is the time dependence of the single particle energies, which we call spectral diffusion. The other is the time dependence of the configuration of occupied sites within the gap. These two phenomena are closely related since the average occupation number of all sites with energy $`ϵ`$ is given in thermal equilibrium by the Fermi function . Thus, at low enough temperatures, the occupation number of a site changes when its energy crosses the Fermi level.
To study the spectral diffusion, we mark after $`t_w`$ Monte Carlo (MC) steps all sites whose single particle energies are in a narrow interval $`[W,W]`$ within the gap, and then observe the evolution of the distribution of these energies as the simulation proceeds. We find that after some number of MC steps, this distribution reaches an asymptotic form which is independent of $`t_w`$, unless $`t_w`$ is shorter than the equilibration time. Fig. 1 shows the final energy distribution, averaged over sets of $`\{\varphi _i\}`$, for $`A=1`$ and various values of the width $`W`$ and the temperature $`T`$. The DS of the entire system, which exhibits the Coulomb gap, is also shown. One can see that for any given $`T`$, the final distribution does not depend on $`W`$ for small $`W`$, and its width is much larger than $`W`$. Furthermore, the width of the distribution is much larger than the temperature.
Another way to observe spectral diffusion is to measure the time average of the single-particle energy at site $`i`$, $`\langle ϵ_i\rangle `$, and the standard deviation at the same site, $`\mathrm{\Delta }_i=\sqrt{\langle ϵ_i^2\rangle -\langle ϵ_i\rangle ^2}`$. We perform this calculation for all sites and create a function $`\overline{\mathrm{\Delta }(ϵ)}`$. Fig. 2 shows this function for $`A=1`$ and various temperatures. From this figure it is clear that the standard deviation for all sites is much larger than the temperature, while for sites near the Fermi level the standard deviation is 2-3 times larger than for other sites. Sites with large $`\mathrm{\Delta }`$ are expected to be “active” sites, meaning they often change their occupation numbers as their energies cross the Fermi level. The occupation number changes of these sites are accompanied by a reorganization of the local configuration of occupied sites, which in turn is responsible for the larger value of $`\mathrm{\Delta }`$. On the other hand, the sites with smaller $`\mathrm{\Delta }`$ are “passive” sites, and change their energies only in response to the random time dependent potential created by the active sites.
Fig. 3(a) shows the maximum and minimum values of the standard deviation $`\overline{\mathrm{\Delta }(ϵ)}`$, as a function of temperature. From the results it appears that these functions tend to finite values as $`T0`$. It is also interesting to consider the width of the curves in Fig. 2 as a function of temperature. We define this width, $`E_w`$, as the energy at which $`\overline{\mathrm{\Delta }(ϵ)}=ϵ`$. The meaning of $`E_w`$ is that sites which satisfy $`ϵ<E_w`$ have energy fluctuations larger than their average energy, and therefore are active. From Fig. 2 it is also apparent that these sites have a larger value of $`\mathrm{\Delta }`$, thus supporting our understanding that these are indeed the active sites of the system. The width $`E_w`$ as a function of temperature is plotted in Fig. 3(a), and it also appears to tend to a finite value as $`T0`$.
The above results may indicate that the active sites are predominantly within the Coulomb gap. This is reasonable, since the occupation number of sites within the gap is strongly affected by interactions. However, at $`A=1`$ all characteristic energies, including the gap width, are of order unity. Thus, to check whether the active sites are indeed within the gap, it is necessary to simulate $`A>1`$. Then, the width of the gap decreases with $`A`$ as $`E_g1/A`$. The results from these simulations are also presented in Fig. 2, where $`\overline{\mathrm{\Delta }(ϵ)}\sqrt{A}`$ is plotted as a function of $`ϵA`$. For these plots we use $`L=200`$ and $`P=20`$, and the temperature for each $`A`$ is $`T=0.05/A`$, keeping it constant in units of the gap width. Using this scaling, the curves for $`A>1`$ collapse into one, and correspond to the curve for $`A=1,T=0.05`$. Thus, the energy region containing active sites scales as $`1/A`$, and the active sites are indeed within the Coulomb gap. Moreover, since the number of active sites decreases with increasing $`A`$, the values of $`\mathrm{\Delta }`$ should also decrease. This is indeed supported by the data in Fig. 2.
Thus, it appears that the configuration of occupied sites within the Coulomb gap persistently changes in thermodynamic equilibrium. To obtain more information about this motion, one can study the correlation function of occupation numbers. We do this by constructing a vector $`𝐃(t_w)`$ after $`t_w`$ MC steps have been performed, whose components are the occupation numbers $`n_i`$ of all sites within a given energy range $`[W,W]`$. The vector is normalized so that $`𝐃(t_w)𝐃(t_w)=1`$. As the simulation proceeds, we check the occupation number of these same sites, construct the vector $`𝐃(t_w+t)`$, and calculate $`C(t_w,t)=\overline{𝐃(t_w)𝐃(t_w+t)}`$. Correlation functions analogous to $`C(t_w,t)`$ are commonly used to measure the similarity between two configurations . For identical configurations $`C(t_w,t)=1`$, while if there is no correlation $`C(t_w,t)=0.5`$. Basically, we are interested in $`C_{\mathrm{}}=lim_{t_w\mathrm{}}lim_t\mathrm{}C(t_w,t)`$, which is a measure of the similarity of two arbitrary states of the system at thermal equilibrium. For a non-interacting system,
$$C_{\mathrm{}}=\frac{_W^Wf^2(\varphi )𝑑\varphi }{_W^Wf(\varphi )𝑑\varphi },$$
(3)
where $`f(\varphi )`$ is the Fermi function. Thus, for the non-interacting system $`C_{\mathrm{}}=1T/W`$ at $`WT`$ and $`C_{\mathrm{}}=0.5`$ at $`TW`$.
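As a consistency check of the quoted $`WT`$ limit (a standard computation, included here only for completeness):

```latex
% particle-hole symmetry: f(-\varphi) = 1 - f(\varphi), hence
\int_{-W}^{W} f\,d\varphi = W, \qquad
\int_{-W}^{W} f^{2}\,d\varphi
  = W - \int_{-W}^{W} f(1-f)\,d\varphi
  \simeq W - \int_{-\infty}^{\infty} \frac{d\varphi}{4\cosh^{2}(\varphi/2T)}
  = W - T \quad (W \gg T),
```

so that Eq. (3) gives C = (W - T)/W = 1 - T/W, as stated above.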
In order to evaluate $`C_{\mathrm{}}`$ from the simulation, we measure $`C(t_w,t)`$ as a function of $`t`$ for a given $`t_w`$, and wait long enough so that $`C(t_w,t)`$ becomes independent of $`t`$. We denote this saturated value as $`C(t_w,\mathrm{})`$. We then increase $`t_w`$ until $`C(t_w,\mathrm{})`$ becomes independent of $`t_w`$, and thus obtain our estimate of $`C_{\mathrm{}}`$. The value of $`t_w`$ at which $`C(t_w,\mathrm{})`$ becomes independent of $`t_w`$ is the equilibration time $`\tau _{eq}`$ of the system.
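In code, the bookkeeping for one disorder realization can be as simple as the sketch below; the snapshot arrays `n_tw` and `n_later` and the energy array `eps` are assumed to come from a simulation such as the one sketched earlier, and the normalization by the first snapshot makes identical configurations give 1 and uncorrelated half-filled ones give about 0.5.

```python
import numpy as np

def overlap(n_tw, n_later, eps, W):
    # C(t_w, t) for one disorder realization: overlap of two occupation
    # snapshots restricted to sites with |energy| <= W, normalized so
    # that identical configurations give 1.
    m = np.abs(eps) <= W
    return float(n_tw[m] @ n_later[m]) / float(n_tw[m] @ n_tw[m])
```

Averaging this over disorder realizations gives $`C(t_w,t)`$; $`C_{\mathrm{}}`$ is read off once the result stops drifting with both $`t`$ and $`t_w`$.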
In this light, an important question is whether the Hamiltonian of Eq. (1) exhibits a finite temperature glass transition in 2D. If so, then below the transition temperature the equilibration time should increase with system size $`L`$, and our results may also depend on $`L`$. To test this we have studied the size dependence of the equilibration times , and found that within the temperature range studied here, namely $`T0.05`$, the equilibration times (measured in MC steps/site) saturate as a function of $`L`$. The value of $`L`$ at which the equilibration times become $`L`$-independent is the correlation length of the system, and is smaller than the system size we use to produce the results. The $`L`$-independent equilibration times strongly increase with decreasing temperature, making it difficult to study temperatures below $`T=0.05`$. However, since at $`A=1`$ the temperature $`T=0.05`$ is well below any relevant energy scale, we conclude that there is no finite temperature phase transition.
The results for $`C_{\mathrm{}}`$ for the interacting system are shown in Fig. 3(b) as a function of temperature, for $`A=1`$ and $`W=0.3`$. The corresponding function for the non-interacting system, calculated directly from Eq. (3), is also shown. Attempting to extrapolate the results to lower temperature, we obtain $`lim_{T0}(1C_{\mathrm{}})0.15`$. Thus, for any two different configurations in thermal equilibrium as $`T`$ approaches zero, about $`15\%`$ of the sites within the energy interval $`[0.3,0.3]`$ will have different occupation numbers. This also means that about $`30\%`$ of sites in this interval are active. Note that by increasing $`W`$ we include more sites in the correlation function $`C_{\mathrm{}}`$, but according to our results for the spectral diffusion, most of these sites remain passive as $`T0`$. Thus, $`C_{\mathrm{}}`$ should increase as $`W`$ increases, as we indeed observe for $`W=0.6`$.
We view the large values of $`1C_{\mathrm{}}`$ at low $`T`$ as a manifestation of the multiple minima of the total energy landscape of the system. These minima, known as pseudoground states (PS’s) in the context of the Coulomb glass , are close in energy to the ground state, but differ from each other by a finite fraction of the site occupation numbers. Their properties have been widely studied theoretically , and they have been used to explain the experimentally observed long relaxation time . The details of our interpretation are as follows: The Gaussian fluctuations of the total system energy are $`T\sqrt{C_VL^2}`$, where the specific heat $`C_V`$ is of the order of $`T/E_g`$. At finite temperature the system drifts through all PS’s that are inside the fluctuation interval of the total energy. This drift causes a persistent change of the occupation number configuration, and creates a time dependent random potential which is responsible for the spectral diffusion of the site energies. The picture should remain $`T`$-independent at very low $`T`$, provided the fluctuation interval contains many PS’s. Since the number of PS’s increases exponentially with the volume of the system , this last condition should be fulfilled down to zero temperature in a macroscopic system.
The following example may clarify our interpretation. Suppose there is no disorder in the system, so that $`A=0`$. Then at low temperature the system forms a Wigner crystal, which is two-fold degenerate on a square lattice at half filling. These two states represent our PS’s. If transitions between them are permitted, then all sites in the system continuously change their occupation number and the energy of each site fluctuates with a standard deviation of order unity. Note that a major difference between the Wigner crystal and the disordered system is that the former has a gap in the excitation spectrum, while the latter possesses a continuous spectrum of PS’s.
It is important to point out that our results cannot be explained by assuming that the excitations of the system are separated pairs of sites, with electrons hopping back and forth between the sites of each pair. This assumption would mean that electrons are effectively localized in space. Since the energy density of such excitations is constant at low energies, meaning the number of available excitations decreases linearly with temperature, one immediately obtains that $`lim_{T0}(1C_{\mathrm{}})T/W`$, like in the non-interacting system. The same temperature dependence is obtained even if excitations involve a few electrons that change their positions simultaneously (so-called many-electron excitations ). In fact, any picture based upon confined separated excitations which do not interact with each other would mean that $`lim_{T0}(1C_{\mathrm{}})T/W`$. Since our data definitely contradict this temperature dependence, we conclude that such excitations cannot explain our results.
Thus, the results presented in this work may indicate the existence of a classical delocalization effect which is related to the glassy properties of the system. The formal criterion for the existence of this effect is that $`lim_{T0}(1C_{\mathrm{}})`$ is nonzero. The limit $`T0`$ should be taken in such a way that the thermodynamic fluctuations of the total energy are larger than the energy distance between PS’s. In a macroscopic system such a limit should always be possible. It is important, however, that the equilibration time of the system increases greatly as the temperature goes to zero. This increase should be even more pronounced if one takes into account the exponential dependence of the tunneling rate on the tunneling distance. Therefore, as the temperature decreases, the system will be frozen in phase space during times shorter than the equilibration time. We believe this to be the cause of the insulating properties of the Coulomb glass, such as hopping conduction. A possible confirmation of this view may be found in the experimental observation that the equilibration time decreases sharply near the metal-insulator transition. A similar view was also suggested by Pastor and Dobrosavljevic ; however, they considered a dynamic mean field theory with infinite range interaction, thus neglecting the effect of dimensionality.
In summary, we have presented strong computational evidence that in a disordered 2D system of localized interacting electrons, the configuration of occupied sites within the Coulomb gap persistently changes with time. This effect persists down to temperatures well below the Coulomb gap width, and is accompanied by a time dependent random potential responsible for fluctuations of the site energies within the gap. We view this as a classical delocalization effect, which may be suppressed at low temperature due to very long equilibration times. This suggests that the transport properties of the system may be intimately related to glassy behavior. Thus, the system may be considered as a “slow metal”. The increase of the localization radius due to quantum mechanical overlap may transform it into a normal metal. We thank Z. Ovadyahu and A. Vaknin for many helpful discussions. A.E. would also like to acknowledge fruitful discussions with A. I. Larkin and B. I. Shklovskii. This work was supported by the US-Israel Binational Science Foundation Grant 9800097, and by the Forchheimer Foundation. |
# New tests for a singularity of ideal MHD
## Abstract
Analysis using new calculations with 3 times the resolution of the earlier linked magnetic flux tubes confirms the transition from singular to saturated growth rate reported by Grauer and Marliani for the incompressible case. However, all of the secondary tests point to a transition back to a stronger growth rate at a different location at late times. Similar problems in ideal hydrodynamics are discussed, pointing out that initial negative results eventually led to better initial conditions that did show evidence for a singularity of Euler. Whether singular or near-singular growth in ideal MHD is eventually shown, this study could have a bearing on fast magnetic reconnection, high energy particle production and coronal heating.
The issue currently leading to conflicting conclusions about ideal 3D, incompressible MHD is similar to what led to conflicting results on whether there is a singularity of the 3D incompressible Euler equations. With numerical simulations, it was first concluded that uniform mesh calculations with symmetric initial conditions such as 3D Taylor-Green were not yet singular . Next, a preliminary spectral calculation found weak evidence in favor of a singularity in a series of Navier-Stokes simulations at increasing Reynolds numbers, but larger adaptive mesh or refined mesh calculations did not support this result . Eventually, numerical evidence in favor of a singularity of Euler was obtained using several independent tests applied to highly resolved, refined mesh calculations of the evolution of two anti-parallel vortex tubes . To date, these calculations have met every analytic test for whether there could be a singularity of Euler.
Several other calculations have also claimed numerical evidence for a singularity of Euler . While in all of these cases the evidence is plausible, with the perturbed cylindrical shear flow using the BKM $`\omega _{\mathrm{}}`$ test , for none has the entire battery of tests used for the anti-parallel case been applied. We have recently repeated one of the orthogonal cases and have applied the BKM test successfully. In all cases using the BKM test, $`\omega _{\mathrm{}}A/(T_ct)`$ with $`A19`$.
To be able to make a convincing case for the existence of a singularity in higher dimensional partial differential equations, great care must be taken with initial conditions, demonstrating numerical convergence, and comparisons to all known analytic or empirical tests. On the other hand, if no singularity is suspected, some quantity that clearly saturates should be demonstrated, such as the strain causing vorticity growth . It is an even more delicate matter to claim that someone else’s calculations or conclusions are incorrect. If it is a matter of suspecting there is inadequate resolution, one must attempt to reproduce the suspicious calculations as nearly as possible and show where inadequate resolution begins to corrupt the calculations and how improved resolution changes the results.
An example of how a detailed search for numerical errors should be conducted can be found in the extensive conference proceedings that appeared prior to the publication of the major results supporting the existence of a singularity of Euler for anti-parallel vortex tubes . The primary difference with earlier work was in the initial conditions. It was found that compact profiles were an improvement, but only if used in conjunction with a high wavenumber filter. Otherwise, the initial unfiltered energy spectrum of the bent anti-parallel vortex tubes went as $`k^{-2}`$. Oscillations in the spectrum at high wavenumber in unfiltered initial conditions for linked magnetic flux tubes are shown in Figure 1; the initial MHD spectrum is steep enough that these oscillations eventually become unimportant.
The purpose of this letter is to address the claim that a new adaptive mesh refinement (AMR) calculation by Grauer and Marliani supersedes our uniform mesh calculations and that eventually there is a transition to exponential growth. Note that this claim was made without any evidence for whether their numerical method was converged. In all of our earlier calculations, once the calculations became underresolved, we also saw transitions to exponential growth.
Not knowing exactly the initial condition used by the new AMR calculations , or where and how much grid refinement was used, together with the short notice we have been given to reply, has made this a challenge. Fortunately, we were in the process of new $`648^3`$ calculations in a smaller domain of $`4.3^3`$, yielding effectively 3 times the local resolution of our earlier work in a $`(2\pi )^3`$ domain on a $`384^3`$ mesh. The case with an initial flux tube diameter of $`d=0.65`$, so that the tubes slightly overlap, appears to be closer to their initial condition and so will be the focus of this letter. The importance of our other initial condition, with $`d=0.5`$ and no initial overlap of the tubes, is that it is less influenced by an initial current sheet that forms near the origin, which is claimed to be the source of the saturation of the nonlinear terms. The $`d=0.5`$ case was used for the compressible calculations.
Using semi-log coordinates, Figure 2 plots the growth of $`\omega _{\infty }`$ and $`J_{\infty }`$ for our new high resolution incompressible calculation and Figure 3 plots $`J_{\infty }`$ for a new compressible calculation. By taking the last time all relevant quantities on the $`384^3`$ and $`648^3`$ grids were converged, $`J`$ being the worst, and then assuming that the smallest scales decrease linearly towards a possible singular time, an estimate of the last time the $`648^3`$ calculation was valid was made. To test exponential versus inverse linear growth, fits were taken between $`T=1.72`$ and 1.87, then extrapolated to large $`T`$. The large figure shows that either an exponential or a singular $`1/(T_c-t)`$ form could fit the data, while the inset shows that, taking an estimated singular time of $`T_c=2.15`$ and multiplying by $`(T_c-T)`$, at least $`J_{\infty }`$ and $`\omega _{\infty }`$ have consistent singular behavior over this time span. The strong growth of $`P_{\mathrm{\Omega }J}=\int dV(\omega _ie_{ij}\omega _j-\omega _id_{ij}J_j-2\epsilon _{ijk}J_id_{j\mathrm{}}e_{\mathrm{}k})`$, which is the production of $`\int dV(\omega ^2+J^2)`$, is discussed below. The $`384^3`$ curve for $`1/J`$ demonstrates that lack of resolution tends to exaggerate exponential growth. For the compressible calculations it can be seen that there also is an exponential regime that changes into a regime with $`1/J_{\infty }\propto (T_c-t)`$.
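The two extrapolations can be reproduced with a short script along the following lines; the function name, the default fit window, and the use of plain least-squares line fits are illustrative choices of ours, not the diagnostics actually used for the figures:

```python
import numpy as np

def compare_growth_fits(t, J, t_fit=(1.72, 1.87)):
    """Fit a peak amplitude J(t) inside a window to both growth laws.

    Exponential growth is linear in (t, ln J); singular growth
    J ~ A/(T_c - t) is linear in (t, 1/J), and the zero crossing of
    the fitted line gives an estimate of the singular time T_c.
    """
    t, J = np.asarray(t), np.asarray(J)
    m = (t >= t_fit[0]) & (t <= t_fit[1])
    exp_rate = np.polyfit(t[m], np.log(J[m]), 1)[0]      # exponential growth rate
    slope, intercept = np.polyfit(t[m], 1.0 / J[m], 1)   # straight line through 1/J
    T_c = -intercept / slope                             # where 1/J extrapolates to zero
    return exp_rate, T_c
```

Plotting $`J_{\infty }(T_c-t)`$ against $`t`$ with the fitted $`T_c`$, as in the inset of Figure 2, then tests whether the singular form remains consistent outside the fit window.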
Using the new incompressible calculations and applying the entire battery of tests, based upon Figure 2 we would agree that for the incompressible case there is a transition as reported , and signs of saturation at this stage are shown below. Whether the transition is to exponential growth for all times as claimed , or whether there is a still later transition to different singular behavior, will be the focus of this letter. We will look more closely at the structure of the current sheet we all agree exists for signs of saturation.
The case against a singularity in early calculations of Euler was the appearance of vortex sheets, and through analogies with the current in 2D ideal MHD, a suggestion that this leads to a depletion of nonlinearity. The fluid flow most relevant to the linked flux rings is 3D Taylor-Green, due to the initial symmetries . For both TG and linked flux tubes, two sets of anti-parallel vortex pairs form that are skewed with respect to each other and are colliding. In TG, just after the anti-parallel vortex tubes form there is a period of slightly singular development. This is suppressed once the pairs collide with each other, and then vortex sheets dominate for a period. The vortex sheets are very thin, but go across the domain, so fine localized resolution might not be an advantage at this stage. At late phases in TG, the ends of the colliding pairs begin to interact with each other, so that at 4 corners locally orthogonal vortices begin to form. Due to resolution limitations, an Euler calculation of Taylor-Green has not been continued long enough to determine whether, during this phase, singular behavior might develop. We would draw a similar conclusion for all of MHD cases studied to date , that there might not be enough local resolution to draw any final conclusions even if AMR is applied.
While Taylor-Green has not been continued far enough to rule out singularities, the final arrangement of vortex structures led first to studies of interacting orthogonal vortices , and then anti-parallel vortices (see references in ). Both of these initial conditions now appear to develop singular behavior. An important piece of evidence for a singularity of Euler was that near the point of a possible singularity, the structure could not be described simply as a vortex sheet. Therefore, there is a precedent to earlier work suggesting sheets, suppression of nonlinearity, and no singularities to later work showing fully three-dimensional structure and singular behavior.
The initial singular growth of $`J_{\mathrm{}}`$ and $`\omega _{\mathrm{}}`$ for the linked flux rings, then the transition to a saturated growth rate, might be due to the same skewed, anti-parallel vortex pair interaction as in Taylor-Green. Even if this is all that is happening, the strong initial vorticity production and shorter dynamical timescale (order of a few Alfvén times) than earlier magnetic reconnection simulations with anti-parallel flux tubes is a significant success of these simulations. It might be that the vortices that have been generated are strong enough to develop their own Euler singularity. However, the interesting physics is how the magnetic field and current interact with the vorticity. Do they suppress the tendency of the vorticity to become singular, or augment that tendency?
One sign for saturation of the linked flux ring interaction would be if the strongest current remains at the origin in this sheet. Figure 4 plots the positions of $`J_{\infty }`$ and $`\omega _{\infty }`$ from the origin as a function of time. During the period where exponential growth is claimed , $`J_{\infty }`$ is at the origin, which would support the claims of saturation. However, this situation does not persist.
By analogy to the movement of the $`L^{\infty }`$ norms of the components of the stress tensor $`u_{i,j}`$ in Euler, we expect that the positions of $`J_{\infty }`$ and $`\omega _{\infty }`$ should approach each other and an extrapolated singular point in ideal MHD. Figure 4 supports the prediction that the positions of $`J_{\infty }`$ and $`\omega _{\infty }`$ should approach each other, but so far not in a convincingly linear fashion. This is addressed next. We have similar trends for the positions of $`J_{\infty }`$ and $`\omega _{\infty }`$ in the compressible calculations.
Figure 5 gives an overall view of the current, vorticity and magnetic field around the inner current sheet. The vortex pattern has developed out of the four initial vortices, two sets of orthogonal, anti-parallel pairs that are responsible for the initial compression and stretching of the current sheet. By this late time, the ends of those vortices have begun to interact as new sets of orthogonal vortex pairs. The lower right inset in Figure 5 is a 2D $`x/(y=z)`$ slice through this domain that goes through $`J_{\infty }`$ at $`t=1.97`$ to show that while $`J_{\infty }`$ is large at the origin $`(0,0,0)`$, $`J_{\infty }`$ is larger where it is being squeezed between the new orthogonal vortices. Along one of the new vortices $`\vec{B}`$ is parallel to and overlying $`\vec{\omega }`$, and on the orthogonal partner they are anti-parallel and overlying.
The location of $`\omega _{\infty }`$ is not in the vortex lines shown, but is on the outer edges of the current sheet. Therefore, the exact position of $`\omega _{\infty }`$ in Figure 4 is an artifact of the initial development and does not accurately reflect the position of the $`\vec{\omega }`$ most directly involved in amplifying $`J_{\infty }`$, which is probably why the positions of $`J_{\infty }`$ and $`\omega _{\infty }`$ are not approaching each other faster. The continuing effects of the initial current sheet are probably also behind the strong exponential growth of $`P_{\mathrm{\Omega }J}`$ in Figure 2, stronger even than the possible singular growth of $`J_{\infty }`$ and $`\omega _{\infty }`$ in the inset. More detailed analysis in progress should show that near the position of $`J_{\infty }`$, the growth of $`P_{\mathrm{\Omega }J}`$ and the position of $`\omega _{\infty }`$ are more consistent with our expectations for singular growth, and has already shown that some of the components of $`P_{\mathrm{\Omega }J}`$ have consistent singular growth.
As noted, for Euler all available calculations find $`|\omega |_{\infty }\approx A/(T_c-t)`$ with $`A\approx 19`$. $`A`$ represents how much smaller the strain along $`\omega _{\infty }`$ is than $`\omega _{\infty }`$. Here, $`A\approx 4`$, indicating stronger growth in $`\omega _{\infty }`$ for ideal MHD than for Euler. Another Euler result was that the asymptotic energy spectrum as the possible singularity was approached was $`k^{-3}`$, whereas purely sheet-like structures in vorticity should yield a $`k^{-4}`$ spectrum. $`k^{-3}`$ indicates a more complicated 3D structure than sheets. In Figure 1 the late time spectra are again $`k^{-3}`$.
The next initial condition we will investigate will be magnetic flux and vortex tubes that nearly overlie each other and are orthogonal to their partners. Our new calculations of orthogonal vortex tubes for Euler show that they start becoming singular as filaments are pulled off of the original tubes and these filaments become anti-parallel, suggesting that the fundamental singular interaction in Euler is between anti-parallel vortices. Whether the next step for ideal MHD is to become anti-parallel or something else can only be determined by new calculations. AMR might be useful, but great care must be taken with the placement of the inner domains and a large mesh will still be necessary. The complicated structures in the domain in Figure 5 are not fully contained in this innermost $`162^3`$ mesh points and the innermost domain should go out to the order of $`300^3`$ points. There are examples of how to use AMR when there are strong shears on the boundaries of sharp structures . This uncertainty of where to place the mesh is why we believe in using uniform mesh calculations as an unbiased first look at the problem.
These final results are hardly robust and their usefulness is primarily to suggest a new, more localized initial condition and to show that none of the calculations to date is the full story. That $`J`$ and $`\omega `$ have shown singular behavior for as long as they have has been surprising. Recall that for Euler, velocity, vorticity and strain are all manifestations of the same vector field, but for ideal MHD there are two independent vector fields, even though the only analytic result in 3D is a condition on the combination, $`\int \left[\omega _{\infty }(t)+J_{\infty }(t)\right]dt\rightarrow \infty `$ . Eventually, one piece of evidence for singular growth must be a demonstration of strong coupling between the current and vorticity so that they are acting as one vector field. It could be that our strong growth is due to the strongly helical initial conditions and there are no singularities. This would still be physically interesting since helical conditions could be set up by footpoint motion in the corona.
Could the magnetic and electric fields blow up too? There are signs this might be developing around the final position of $`J_{\mathrm{}}`$, in which case there might exist a mechanism for the direct acceleration of high energy particles. This has been considered on larger scale , but to our knowledge a mechanism for small-scale production of super-Dreicer electric fields has not been proposed before. A singular rise in electric fields could explain the sharp rise times in X-ray production in solar coronal measurements , which could be a consequence of particle acceleration coming from reconnection. This would also have implications for the heating of the solar corona by nanoflares and the production of cosmic rays.
This work has been supported in part by an EPSRC visiting grant GR/M46136. NCAR is supported by the National Science Foundation.
# Dependence of the BEC transition temperature on interaction strength: A perturbative analysis
## I Introduction
It may well look like a long-solved textbook exercise, but the variation of the dilute Bose gas’ critical temperature with the interaction strength has not yet found a conclusive answer. To date, all authors assume a continuous behavior in the limit of weak interaction, $`\lim_{a\rightarrow 0}T_c=T_c^0`$, where $`T_c^0`$ is the transition temperature of the non–interacting system, and $`a`$ is the s–wave scattering length. However, the sign, the proportionality constant $`c`$, and the exponent $`\eta `$ in the expression for the shift in the critical temperature at fixed density $`\rho `$
$$\frac{T_c-T_c^0}{T_c^0}=\pm c\left[a^3\rho \right]^\eta ,$$
(1)
are still subject to considerable debate. Early calculations by Fetter and Walecka and by Toyoda predict a decrease in temperature, $`T_c<T_c^0`$ (yet it should be noted that the expression derived by Fetter and Walecka yields zero for a point potential). However, more recent calculations indicate the opposite \[3–10\]. Concerning the exponent, one finds in the literature a set of predicted rational values which range from $`\eta =1/6`$ , , to $`\eta =1/2`$ . The most recent analytical investigations converge towards the value $`\eta =1/3`$ , , , , i. e. they predict a linear dependence of the temperature shift on the scattering length. This result is also backed by Monte Carlo simulations , , and by an ingenious extrapolation of experimental data for the strongly interacting condensed He4 in vycor glass . Still, the result of Toyoda, $`\eta =1/6`$ , continues to find support . The proportionality constant, finally, has been predicted to assume a variety of values, which for $`\eta =1/3`$ range from $`c=0.3`$ to $`c=5.1`$ . The Paris group’s most recent numerical analysis, for example, points at $`c=2.3`$ , a value which is close to the theoretical prediction of Baym and collaborators , while the extrapolation of the experimental data on He4 in vycor glass favors $`c=5.1`$, which is closer to an early prediction, $`c=4.66`$, of Stoof .
It is frequently maintained that ordinary perturbation theory cannot be applied, as it is plagued by seemingly insurmountable infrared divergencies (see ). We point out that this conclusion is based on the implicit assumption that the grand–canonical statistics, which is governed by a chemical potential, is a sensible approximation to the real system, i. e. a system where – as a matter of principle – not the chemical potential, but rather the total number of particles is fixed, possibly at a very large value. While this assumption of thermodynamic equivalence does indeed hold in a system with sufficiently strong interactions, it must be rejected in the limit $`a\rightarrow 0`$. In this limit the grand–canonical statistics implies fluctuations of the ground state occupation, which for temperatures at and below $`T_c`$ turn out to be extravagantly large, $`\mathrm{\Delta }n_0\sim 𝒪(N)`$ \[15–17\]. It is these unphysical fluctuations which doom to failure any attempt to reliably compute the shift of the Bose gas critical temperature, in the non–interacting limit $`a\rightarrow 0`$, when resorting to ordinary perturbation theory in the grand–canonical ensemble.
As the ground state giant fluctuations are easily traced back to the fluctuations in the total number of particles, which in the grand–canonical statistics turn into an unacceptable $`\mathrm{\Delta }N\sim N`$ for $`T\lesssim T_c^0`$, a safe way out is to resort to statistical ensembles where the total number of particles is not allowed to fluctuate. In the canonical and microcanonical ensembles, for example, the ground state fluctuations of the non–interacting system exhibit a scaling $`\mathrm{\Delta }n_0\sim 𝒪(N^{2/3})`$ which – although still anomalous – turns out to be sufficiently suppressed for ordinary perturbation theory to be applicable.
Indeed, as we shall demonstrate in this letter, first order perturbation theory in the canonical ensemble yields the following shift in the critical temperature (where $`\lambda _0`$ is the De Broglie thermal wave length at $`T=T_c^0`$):
$`{\displaystyle \frac{T_c-T_c^0}{T_c^0}}`$ $`=`$ $`-{\displaystyle \frac{2}{5}}\left[{\displaystyle \frac{8\pi }{3\zeta (3/2)}}\right]{\displaystyle \frac{a}{\lambda _0}}`$ (2)
$`\simeq `$ $`-0.93\left[a^3\rho \right]^{\frac{1}{3}},`$ (3)
which – contrary to some early expectations – is neither zero nor infinite.
## II The Hamiltonian
We consider a uniform system of $`N`$ weakly interacting bosons in a volume $`V=L^3`$, imposing periodic boundary conditions. The Hamiltonian reads
$$\widehat{H}=\widehat{H}_0+\widehat{H}_{\mathrm{int}},$$
(4)
where $`\widehat{H}_0`$ is the Bose gas kinetic energy,
$$\widehat{H}_0=\underset{𝐤}{\sum }\epsilon _k\widehat{n}_𝐤,\epsilon _k=\frac{\hbar ^2𝐤^2}{2m},$$
(5)
and $`\widehat{H}_{\mathrm{int}}`$ describes the particle pair interaction,
$$\widehat{H}_{\mathrm{int}}=\frac{u}{2N}\underset{\mathrm{𝐩𝐤𝐪}}{\sum }\widehat{b}_𝐩^{\dagger }\widehat{b}_𝐪^{\dagger }\widehat{b}_{𝐪-𝐤}\widehat{b}_{𝐩+𝐤},u=\frac{4\pi \hbar ^2aN}{mV}.$$
(6)
Here $`𝐤=(2\pi /L)𝐧`$ is a wave vector, with $`𝐧`$ a vector of integers, $`\widehat{b}_𝐤`$, $`\widehat{b}_𝐤^{\dagger }`$ are bosonic particle annihilation and creation operators, $`\widehat{n}_𝐤=\widehat{b}_𝐤^{\dagger }\widehat{b}_𝐤`$ is the associated number operator, $`m`$ denotes the particle mass, and $`a`$ denotes the s–wave scattering length.
## III The counting statistics
We shall be working at fixed density $`\rho =N/V`$ (equivalently: fixed specific volume $`v=\rho ^{-1}`$), but variable total number of particles $`N`$ (and, concomitantly, variable system volume $`V`$). The first issue to be faced is to provide a definition of the transition temperature, which – as we recall – only acquires the meaning of a critical temperature in the thermodynamic limit $`N\rightarrow \infty `$, $`V\rightarrow \infty `$, $`\rho =N/V`$ constant.
We base our definition on the counting statistics of the zero–momentum state,
$$P_n(\beta ;N)=\frac{1}{Z(\beta ;N)}\mathrm{Tr}\left\{\delta _{\widehat{n}_0,n}e^{-\beta \widehat{H}}\delta _{\widehat{N},N}\right\},$$
(7)
which is the probability to find $`n`$ particles (out of $`N`$ total particles) in the zero–momentum state $`𝐩=\hbar 𝐤=0`$. Here, $`\widehat{N}=\sum _𝐤\widehat{n}_𝐤`$ is the operator for the total number of particles, $`\delta _{a,b}`$ is the Kronecker delta, and $`Z(\beta ;N)`$ is the canonical partition function,
$$Z(\beta ;N)=\mathrm{Tr}\left\{e^{-\beta \widehat{H}}\delta _{\widehat{N},N}\right\}.$$
(8)
In the non-interacting limit, the counting statistics at high temperatures is a strictly decreasing function of $`n`$, i. e. $`P_n>P_{n+1}`$ . For sufficiently low temperatures it displays a single peak at $`n\approx \overline{n}\sim 𝒪(N)`$. Assuming that a system of weakly interacting bosons behaves correspondingly, we introduce the auxiliary function
$$\stackrel{~}{D}(\beta ;N)\equiv \mathrm{Tr}\left\{\left[\delta _{\widehat{n}_0,0}-\delta _{\widehat{n}_0,1}\right]e^{-\beta \widehat{H}}\delta _{\widehat{N},N}\right\}.$$
(9)
The cross–over from the high–temperature regime, where $`\stackrel{~}{D}>0`$, to the low–temperature regime, where $`\stackrel{~}{D}<0`$, is assumed to occur at a certain value $`\beta =\beta _{*}`$, which is defined by the relation
$$\stackrel{~}{D}(\beta _{*};N)=0.$$
(10)
For fixed density $`\rho `$, and fixed scattering length $`a`$, the solution of this equation depends on the total number of particles $`N`$: $`\beta _{*}=\beta _{*}(N)`$. We stipulate that, in the thermodynamic limit, the cross–over temperature $`T_{*}=(k_\mathrm{B}\beta _{*})^{-1}`$ coincides with the critical temperature of Bose–Einstein condensation:
$$\underset{N\rightarrow \infty }{lim}T_{*}(N)=T_\mathrm{c}.$$
(11)
This identification, being non–trivial for an interacting system, will be verified below for the non–interacting case.
## IV Perturbative analysis of $`T_{*}`$
We determine $`\beta _{*}`$ using a series expansion in $`\widehat{H}_{\mathrm{int}}`$. The Dyson series of $`\stackrel{~}{D}=\stackrel{~}{D}(\beta ;N)`$ reads
$$\stackrel{~}{D}=\stackrel{~}{D}_0+\stackrel{~}{D}_1+\stackrel{~}{D}_2+\mathrm{},$$
(12)
where $`\stackrel{~}{D}_n\stackrel{~}{D}_n(\beta ;N)`$ is of $`n`$–th order in $`\widehat{H}_{\mathrm{int}}`$. The first two terms are given by
$$\stackrel{~}{D}_0(\beta ;N)=\mathrm{Tr}\left\{[\delta _{\widehat{n}_0,0}-\delta _{\widehat{n}_0,1}]e^{-\beta \widehat{H}_0}\delta _{\widehat{N},N}\right\},$$
(13)
$$\stackrel{~}{D}_1(\beta ;N)=-\beta \mathrm{Tr}\left\{\left[\delta _{\widehat{n}_0,0}-\delta _{\widehat{n}_0,1}\right]\widehat{H}_{\mathrm{int}}e^{-\beta \widehat{H}_0}\delta _{\widehat{N},N}\right\}.$$
(14)
To solve Eq. (10) we set
$$\beta _{*}=\beta _{*}^{(0)}+\mathrm{\Delta }\beta _{*},$$
(15)
where $`\beta _{*}^{(0)}`$ denotes the cross–over inverse temperature of the non–interacting Bose gas, and $`\mathrm{\Delta }\beta _{*}`$ is a correction which is assumed to be small. The defining equation for $`\beta _{*}^{(0)}`$ reads
$$\stackrel{~}{D}_0(\beta _{*}^{(0)};N)=0,$$
(16)
and the shift, to leading order, reads
$$\frac{\mathrm{\Delta }\beta _{*}}{\beta _{*}^{(0)}}=\frac{\stackrel{~}{D}_1(\beta ;N)}{\beta \stackrel{~}{E}_0(\beta ;N)}|_{\beta =\beta _{*}^{(0)}},$$
(17)
where
$$\stackrel{~}{E}_0(\beta ;N)=-\frac{\partial }{\partial \beta }\stackrel{~}{D}_0(\beta ;N).$$
(18)
### A Exact Relations
Observing $`\epsilon _0=0`$, which implies that $`\widehat{H}_0`$ does not depend on $`\widehat{n}_0`$, we may recast Eq. (13) into the form
$$\stackrel{~}{D}_0(\beta ;N)=\mathrm{Tr}_{\mathrm{ex}}\left\{\left[\delta _{\widehat{N}_{\mathrm{ex}},N}-\delta _{\widehat{N}_{\mathrm{ex}},N-1}\right]e^{-\beta \widehat{H}_0}\right\},$$
(19)
where $`\mathrm{Tr}_{\mathrm{ex}}`$ denotes the trace over the occupation of excited states $`𝐤\ne 0`$, and
$$\widehat{N}_{\mathrm{ex}}\equiv \underset{𝐤\ne 0}{\sum }\widehat{n}_𝐤$$
(20)
denotes the operator of the number of particles in the excited states. Furthermore, using
$$\widehat{H}_{\mathrm{int}}=\frac{u}{N}\widehat{N}(\widehat{N}-1)-\frac{u}{N}\underset{𝐤}{\sum }\frac{\widehat{n}_𝐤(\widehat{n}_𝐤-1)}{2}+\widehat{R}$$
(21)
where $`\widehat{R}`$ has no diagonal elements in the Fock basis, and observing that $`[\delta _{\widehat{n}_0,0}-\delta _{\widehat{n}_0,1}]\widehat{n}_0(\widehat{n}_0-1)=0`$, one finds
$$\stackrel{~}{D}_1(\beta ;N)=\frac{u\beta }{N}\mathrm{Tr}_{\mathrm{ex}}\left\{\left[\delta _{\widehat{N}_{\mathrm{ex}},N}-\delta _{\widehat{N}_{\mathrm{ex}},N-1}\right]\underset{𝐤\ne 0}{\sum }\frac{\widehat{n}_𝐤(\widehat{n}_𝐤-1)}{2}e^{-\beta \widehat{H}_0}\right\}-(N-1)u\beta \stackrel{~}{D}_0(\beta ;N).$$
(22)
Note that, due to the definition of $`\beta _{*}^{(0)}`$, the second term does not contribute to the shift $`\mathrm{\Delta }\beta _{*}`$.
To proceed, we use the Laplace representation of the Kronecker delta
$$\delta _{\widehat{N}_{\mathrm{ex}},N}=\frac{1}{2\pi i}\int _{-i\pi }^{+i\pi }d\alpha \,e^{(N-\widehat{N}_{\mathrm{ex}})\alpha }$$
(23)
and perform the trace $`\mathrm{Tr}_{\mathrm{ex}}`$. We then face
$$\stackrel{~}{D}_0(\beta ;N)=\frac{1}{2\pi i}\int _{-i\pi }^{i\pi }d\alpha \left[1-e^{-\alpha }\right]e^{\stackrel{~}{F}(\alpha )},$$
(24)
$$\stackrel{~}{D}_1(\beta ;N)=\frac{u\beta }{N}\frac{1}{2\pi i}\int _{-i\pi }^{i\pi }d\alpha \left[1-e^{-\alpha }\right]\left(\underset{𝐤\ne 0}{\sum }n_𝐤^2(\alpha )\right)e^{\stackrel{~}{F}(\alpha )}-(N-1)u\beta \stackrel{~}{D}_0(\beta ;N),$$
(25)
where $`n_𝐤(\alpha )=n_𝐤(\alpha ;\beta ,N)`$,
$$n_𝐤(\alpha ;\beta ,N)=\frac{1}{e^{\alpha +\beta \epsilon _k}-1},$$
(26)
and $`\stackrel{~}{F}(\alpha )\equiv \stackrel{~}{F}(\alpha ;\beta ,N)`$,
$$\stackrel{~}{F}(\alpha ;\beta ,N)=N\alpha +N\frac{v}{\lambda ^3}\stackrel{~}{g}_{5/2}(\alpha ).$$
(27)
Here $`\lambda \equiv \lambda (\beta )`$ is the thermal De Broglie wave length,
$$\lambda (\beta )=\sqrt{2\pi \hbar ^2/(mk_\mathrm{B}T)},$$
(28)
and $`\stackrel{~}{g}_{5/2}(\alpha )\equiv \stackrel{~}{g}_{5/2}(\alpha ;\beta ,N)`$ is a discrete predecessor of a Bose integral function,
$$\stackrel{~}{g}_{5/2}(\alpha ;\beta ,N)=-\frac{\lambda ^3}{Nv}\underset{𝐤\ne 0}{\sum }\mathrm{ln}\left[1-e^{-\alpha -\beta \epsilon _k}\right].$$
(29)
Upon identifying $`\alpha =-\beta \mu `$, where $`\mu `$ denotes the chemical potential in the grand–canonical ensemble, we note that for fixed $`\alpha `$ the corresponding value $`\stackrel{~}{F}(\alpha )`$ is nothing but the ideal Bose gas grand–canonical free energy, and $`n_𝐤(\alpha )`$ is the grand–canonical mean occupation.
### B Continuum Approximation
For sufficiently large $`N`$, and for the interesting range of thermal De Broglie wavelengths such that $`\lambda ^3\rho \ll N`$, we may invoke the continuum approximation, and replace the discrete sum over momenta by an integral, $`\sum _{𝐤\ne 0}\rightarrow \frac{Nv}{(2\pi )^3}\int d^3k`$.<sup>*</sup><sup>*</sup>*In an extended version of this paper we shall include a systematic study of the corrections to the continuum approximation. We then obtain
$`\stackrel{~}{F}(\alpha ;\beta ,N)`$ $`\rightarrow `$ $`F(\alpha ;\beta ,N)=N\alpha +N{\displaystyle \frac{v}{\lambda ^3}}g_{5/2}(\alpha ),`$ (30)
$`{\displaystyle \underset{𝐤\ne 0}{\sum }}n_𝐤^2(\alpha ;\beta ,N)`$ $`\rightarrow `$ $`N{\displaystyle \frac{v}{\lambda ^3}}\left(g_{1/2}(\alpha )-g_{3/2}(\alpha )\right),`$ (31)
where the $`g_\sigma (\alpha )`$ denote the Bose–Einstein integral functions
$$g_\sigma (\alpha )=\frac{1}{\mathrm{\Gamma }(\sigma )}\int _0^{\infty }dx\frac{x^{\sigma -1}}{e^{x+\alpha }-1}.$$
(32)
Concomitant with the above replacements we also have:
$`\stackrel{~}{D}_0`$ $`\rightarrow `$ $`D_0={\displaystyle \int \frac{d\alpha }{2\pi i}\left[1-e^{-\alpha }\right]e^{F(\alpha ;\beta ,N)}},`$ (33)
$`\beta \stackrel{~}{E}_0`$ $`\rightarrow `$ $`\beta E_0=N{\displaystyle \frac{v}{\lambda ^3}}{\displaystyle \frac{3}{2}}I_{5/2}(\beta ;N),`$ (34)
$`\stackrel{~}{D}_1`$ $`\rightarrow `$ $`D_1={\displaystyle \frac{v}{\lambda ^3}}\beta u\left[I_{1/2}(\beta ;N)-I_{3/2}(\beta ;N)\right]-(N-1)\beta uD_0,`$ (35)
where
$$I_\sigma (\beta ;N)=\int \frac{d\alpha }{2\pi i}\left[1-e^{-\alpha }\right]g_\sigma (\alpha )e^{F(\alpha ;\beta ,N)}.$$
(36)
### C Expansion in $`N^{-1/3}`$
As $`N`$ is assumed to be large, $`N\gg 1`$, it is now tempting to evaluate $`D_0,D_1`$ in the saddle point approximation. Yet, due to the $`𝒪(N^{-2/3})`$ proximity of the saddle point to the branch point of $`F`$, this procedure is doomed to fail, and a completely different treatment must be developed.<sup>†</sup><sup>†</sup>†In field theory, the proximity turns into a confluence which renders meaningless the non–interacting limit of the theory at $`T=T_0`$.
Since we expect the integrals to be dominated by the small values of $`\alpha `$, we resort to the Robinson representation :
$$g_\sigma (\alpha )=\mathrm{\Gamma }(1-\sigma )\alpha ^{\sigma -1}+\underset{n=0}{\overset{\infty }{\sum }}\frac{(-1)^n}{n!}\zeta (\sigma -n)\alpha ^n.$$
(37)
Save for the branch cut of $`\alpha ^{\sigma -1}`$, which runs along the negative real axis, the Robinson expansion converges absolutely for $`|\alpha |<2\pi `$. Note that the radius of convergence covers the domain of integration in the above $`\alpha `$–integrals.
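As a quick numerical sanity check of Eq. (37) against the defining integral (32), one can compare the two for, say, $`\sigma =5/2`$ at a small positive $`\alpha `$; the truncation order and test values below are arbitrary choices of ours:

```python
import mpmath as mp

def g_integral(sigma, alpha):
    # Eq. (32): g_sigma(alpha) = (1/Gamma(sigma)) int_0^inf x^(sigma-1)/(e^(x+alpha) - 1) dx
    f = lambda x: x**(sigma - 1) / (mp.exp(x + alpha) - 1)
    return mp.quad(f, [0, mp.inf]) / mp.gamma(sigma)

def g_robinson(sigma, alpha, nmax=25):
    # Eq. (37): Gamma(1-sigma) alpha^(sigma-1) + sum_n (-1)^n/n! zeta(sigma-n) alpha^n
    tail = mp.nsum(lambda n: (-alpha)**n / mp.factorial(n) * mp.zeta(sigma - n),
                   [0, nmax])
    return mp.gamma(1 - sigma) * alpha**(sigma - 1) + tail

sigma, alpha = mp.mpf('2.5'), mp.mpf('0.1')
print(g_integral(sigma, alpha))
print(g_robinson(sigma, alpha))   # the two evaluations should agree to high accuracy
```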
Exploiting the Robinson representation, the exponent reads
$$F=\mathrm{ln}C-Y\alpha +\frac{2}{3}X\alpha ^{3/2}+X𝒪(\alpha ^2),$$
(38)
where $`\mathrm{ln}C=[\zeta (5/2)/(\lambda ^3\rho )]N`$ is a constant, $`𝒪(\alpha ^2)`$ denotes some analytic function, which may be extracted from Eq. (37), and
$$X=\frac{2\sqrt{\pi }}{\zeta (3/2)}\frac{\lambda _0^3}{\lambda ^3}N,\qquad Y=\left[\frac{\lambda _0^3}{\lambda ^3}-1\right]N,$$
(39)
with $`\lambda _0^3=\zeta (3/2)/\rho `$, the thermal De Broglie wave length of the non–interacting gas evaluated at the transition temperature.
We note that for given $`N`$, the solution of Eq. (10) implies a relation between $`X`$, which is $`O(N)`$, and $`Y`$. As we expect $`\lambda _{*}\rightarrow \lambda _0`$ in the limit $`N\rightarrow \infty `$, the scaling of $`Y_{*}`$ with $`N`$ is not obvious, yet
$$ϵ=\frac{Y}{X}$$
(40)
will certainly be small. Introducing the transformation of the integration variable,
$$\alpha \rightarrow \tau =ϵ^{-2}\alpha $$
(41)
the free energy reads
$$F=\mathrm{ln}C+\mathrm{\Lambda }\left(-\tau +\frac{2}{3}\tau ^{3/2}\right)+\mathrm{\Lambda }ϵr_{5/2}(\tau ;ϵ),$$
(42)
where
$$\mathrm{\Lambda }=\frac{Y^3}{X^2},$$
(43)
and $`r_{5/2}`$ is a regular function,
$$r_{5/2}(\tau ;ϵ)=\frac{\tau ^2}{2\sqrt{\pi }}\underset{\nu =0}{\overset{\infty }{\sum }}\frac{(-ϵ^2)^\nu }{(\nu +2)!}\zeta (1/2-\nu )\tau ^\nu .$$
(44)
Since we shall find that $`\mathrm{\Lambda }_{*}^{(0)}\sim O(1)`$ at the cross–over temperature $`T=T_{*}^{(0)}`$, and concomitantly $`ϵ_{*}^{(0)}\sim 𝒪(N^{-1/3})`$ for large $`N`$, we may invoke a formal expansion in $`ϵ`$,
$$\frac{D_0}{C}=ϵ^4K_1(\mathrm{\Lambda })+ϵ^5\frac{\zeta (1/2)}{4\sqrt{\pi }}\mathrm{\Lambda }K_3(\mathrm{\Lambda })+𝒪(ϵ^6),$$
(45)
$$\frac{I_{1/2}-I_{3/2}}{C}=\frac{\zeta (1/2)-\zeta (3/2)}{C}D_0+ϵ^3\sqrt{\pi }K_{1/2}(\mathrm{\Lambda })+ϵ^4\frac{\zeta (1/2)}{4}\mathrm{\Lambda }K_{5/2}(\mathrm{\Lambda })+𝒪(ϵ^5),$$
(46)
$$\frac{I_{5/2}}{C}=\frac{\zeta (5/2)}{C}D_0-ϵ^6\zeta (3/2)K_2(\mathrm{\Lambda })-ϵ^7\left[\frac{\zeta (3/2)\zeta (1/2)}{4\sqrt{\pi }}\mathrm{\Lambda }K_4-\frac{4\sqrt{\pi }}{3}K_{5/2}\right]+𝒪(ϵ^8),$$
(47)
where we have introduced the family of functions
$$K_\nu (\mathrm{\Lambda })=\frac{1}{2\pi i}\int d\tau \,\tau ^\nu \mathrm{exp}\left\{\mathrm{\Lambda }\left(-\tau +\frac{2}{3}\tau ^{3/2}\right)\right\}.$$
(48)
The functions $`K_\nu `$ obey the recurrence relation
$$K_{\nu +3/2}=K_{\nu +1}-\frac{\nu +1}{\mathrm{\Lambda }}K_\nu ,$$
(49)
which is easily proven by expressing $`K`$ in terms of $`X`$ and $`Y`$, using the inverse of the transformation (41).
### D Results
Upon inserting Eq. (45) into Eq. (16), the condition which fixes the cross–over temperature of the non–interacting gas reads
$$K_1(\mathrm{\Lambda }_{*}^{(0)})=0.$$
(50)
up to corrections $`𝒪(N^{-1/3})`$. This equation is easily solved numerically, yielding the result $`\mathrm{\Lambda }_{*}^{(0)}=0.334`$. Expressed in terms of temperature we have
$$T_{*}^{(0)}=T_c^0\left[1+\left(\frac{32\pi \mathrm{\Lambda }_{*}^{(0)}}{27\zeta (3/2)^2}\right)^{1/3}\frac{1}{N^{1/3}}+𝒪(N^{-2/3})\right]$$
(51)
where $`T_c^0`$ is the critical temperature of the non–interacting Bose gas. Note that for $`N`$ finite, the cross–over temperature is slightly higher than the transition temperature of the ideal Bose gas, but in the thermodynamic limit $`N\mathrm{}`$ they coincide.
Collecting terms and observing that $`D_0(\beta _{*}^{(0)};N)=0`$, the interaction induced shift reads
$$\frac{\mathrm{\Delta }\beta _{*}}{\beta _{*}^{(0)}}=-\frac{8\pi }{3\zeta (3/2)}\frac{a}{\lambda _{*}^{(0)}}\left[\frac{K_{1/2}(\mathrm{\Lambda })}{\mathrm{\Lambda }K_2(\mathrm{\Lambda })}\right]_{\mathrm{\Lambda }=\mathrm{\Lambda }_{*}^{(0)}},$$
(52)
up to corrections of order $`𝒪(N^{-1/3})`$. The shift involves the ratio $`f=K_{1/2}/(\mathrm{\Lambda }K_2)|_{\mathrm{\Lambda }=\mathrm{\Lambda }_{*}^{(0)}}`$. Exploiting the recurrence relation (49), and observing that $`K_1(\mathrm{\Lambda }_{*}^{(0)})=0`$, we find $`f=-2/5`$. We note that this value is exact, as it does not depend on the numerical value of $`\mathrm{\Lambda }_{*}^{(0)}`$.
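One way to check this value directly uses only the recurrence (49): evaluating it at $`\nu =-1`$, $`\nu =0`$ and $`\nu =1/2`$, and inserting $`K_1(\mathrm{\Lambda }_{*}^{(0)})=0`$, gives

$$K_{1/2}=K_0,\qquad \mathrm{\Lambda }K_{3/2}=\mathrm{\Lambda }K_1-K_0=-K_0,\qquad \mathrm{\Lambda }K_2=\mathrm{\Lambda }K_{3/2}-\frac{3}{2}K_{1/2}=-\frac{5}{2}K_{1/2},$$

so that $`f=K_{1/2}/(\mathrm{\Lambda }K_2)=-2/5`$ regardless of where the root of $`K_1`$ lies.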
We are now in the position to consider the thermodynamic limit of Eq. (52). Identifying $`\lim_{N\rightarrow \infty }T_{*}=T_c`$, the result reads
$`{\displaystyle \frac{\mathrm{\Delta }T_c}{T_c^0}}\equiv {\displaystyle \frac{T_c-T_c^0}{T_c^0}}`$ $`=`$ $`-{\displaystyle \frac{2}{5}}{\displaystyle \frac{8\pi }{3\zeta (3/2)}}{\displaystyle \frac{a}{\lambda _0}}`$ (53)
$`=`$ $`-{\displaystyle \frac{2}{5}}{\displaystyle \frac{8\pi }{3\zeta (3/2)^{4/3}}}a\rho ^{1/3}=-0.93a\rho ^{1/3}.`$ (54)
We thus find a negative shift in the critical temperature, growing linearly with the scattering length. This result can be compared with the other fully analytical prediction existing in the literature, derived by Baym, Blaizot and Zinn–Justin :
$$\frac{\mathrm{\Delta }T_c}{T_c^0}=\frac{8\pi }{3\zeta (3/2)^{4/3}}a\rho ^{1/3}=2.33a\rho ^{1/3}.$$
(55)
The two results exhibit the same scaling, but differ in sign and by a factor of $`2/5`$ in the proportionality constant.
The prediction of Baym et al. has been obtained by evaluating the leading order in the $`1/N`$ expansion for an $`O(N)`$ field theory model that coincides with the original Bose Hamiltonian for $`N=2`$, and by observing that the final result does not explicitly depend on $`N`$. However, the result is strictly proven only for large $`N`$, and whether it is reliable also for $`N=2`$ is still an open problem.
On the other hand, our approach based on the counting statistics and on ordinary perturbation theory in the canonical ensemble indicates that in the limit of ultraweak interaction there are contributions, otherwise possibly neglected in other approaches, that tend to suppress quantum effects. Clarification of this issue may be expected from higher order perturbation theory.
## V Acknowledgments
We would like to thank Frank Laloë, Sandro Stringari, Gordon Baym, Markus Holzmann and Maciej Lewenstein for friendly and stimulating discussions in the pleasant atmosphere of the ESF Conference on BEC held at San Feliu de Guixol in the Fall 1999.
# Fluctuations of the inverse participation ratio at the Anderson transition
## Abstract
Statistics of the inverse participation ratio (IPR) at the critical point of the localization transition is studied numerically for the power-law random banded matrix model. It is shown that the IPR distribution function is scale-invariant, with a power-law asymptotic “tail”. This scale invariance implies that the fractal dimensions $`D_q`$ are non-fluctuating quantities, contrary to a recent claim in the literature. A recently proposed relation between $`D_2`$ and the spectral compressibility $`\chi `$ is violated in the regime of strong multifractality, with $`\chi \rightarrow 1`$ in the limit $`D_2\rightarrow 0`$.
PACS numbers: 72.15.Rn, 71.30.+h, 05.45.Df, 05.40.-a
Strong fluctuations of eigenfunctions represent one of the hallmarks of the Anderson metal-insulator transition. These fluctuations can be characterized by a set of inverse participation ratios (IPR)
$$P_q=\int d^dr|\psi (𝐫)|^{2q}.$$
(1)
In a pioneering work , Wegner found from the renormalization-group treatment of the $`\sigma `$-model in $`2+ϵ`$ dimensions that the IPR show at criticality an anomalous scaling with respect to the system size $`L`$,
$$P_q\propto L^{-D_q(q-1)}.$$
(2)
Equation (2) should be contrasted with the behavior of the IPR in a good metal (where eigenfunctions are ergodic), $`P_q\propto L^{-d(q-1)}`$, and, on the other hand, in the insulator (localized eigenfunctions), $`P_q\propto L^0`$.
The scaling (2) characterized by an infinite set of critical exponents $`D_q`$ implies that the critical eigenfunction represents a multifractal distribution . The notion of a multifractal structure was first introduced by Mandelbrot and was later found relevant in a variety of physical contexts, such as the energy dissipating set in turbulence, strange attractors in chaotic dynamical systems, and the growth probability distribution in diffusion-limited aggregation; see for a review. During the last decade, multifractality of critical eigenfunctions has been a subject of intensive numerical studies . Among all the multifractal dimensions, $`D_2`$ plays the most prominent role, since it determines the spatial dispersion of the diffusion coefficient at the mobility edge .
In fact, to make the statement (2) precise, one should specify what exactly is meant by $`P_q`$ in its left-hand side. Indeed, the IPR’s fluctuate from one eigenfunction (or one realization of disorder) to another. Should one take the average $`\langle P_q\rangle `$? Or, say, the most probable one? Will the results differ? More generally, this poses the question of the form of the IPR distribution function at criticality.
In a recent Letter , Parshin and Schober addressed this problem via numerical simulations for the 3D tight-binding model. Their main finding is that the fractal dimension $`D_2`$ is not a well defined quantity, but rather shows universal fluctuations characterized by some distribution function $`𝒫(D_2)`$ of a width of order unity. If true, this would force one to reconsider virtually all aspects of the multifractality phenomenon, such as the notion of the singularity spectrum $`f(\alpha )`$, the form of the eigenfunction correlations and of the density response at the mobility edge, etc. In view of such a challenge to the common lore, the issue requires to be unambiguously clarified.
We begin by reminding the reader of the existent analytical results concerning the IPR fluctuations. While the direct analytical study of the Anderson transition in 3D is not feasible because of the lack of a small parameter, statistics of energy levels and eigenfunctions in a metallic mesoscopic sample (dimensionless conductance $`g\gg 1`$) can be studied systematically in the framework of the supersymmetry method; see for a review. Within this approach, the IPR fluctuations were studied recently . In particular, the 2D geometry was considered, which, while not being a true Anderson transition point, shows many features of criticality, in view of the exponentially large value of the localization length. It was found that the distribution function of the IPR $`P_q`$ normalized to its average value $`\langle P_q\rangle `$ has a scale invariant form. In particular, the relative variance of this distribution (characterizing its relative width) reads
$$\mathrm{var}(P_q)/\langle P_q\rangle ^2=Cq^2(q-1)^2/\beta ^2g^2,$$
(3)
where $`C\sim 1`$ is a numerical coefficient determined by the sample shape (and the boundary conditions), and $`\beta =1`$ or 2 for the case of unbroken (resp. broken) time reversal symmetry. It is assumed here that the index $`q`$ is not too large, $`q^2\ll \beta \pi g`$. These findings motivated the conjecture that the IPR distribution at criticality has in general a universal form, i.e. that the distribution function $`𝒫(P_q/P_q^{\mathrm{typ}})`$ is independent of the size $`L`$ in the limit $`L\rightarrow \infty `$. Here $`P_q^{\mathrm{typ}}`$ is a typical value of the IPR, which can be defined e.g. as a median of the distribution $`𝒫(P_q)`$. Normalization of $`P_q`$ by its average value $`\langle P_q\rangle `$ (rather than by the typical value $`P_q^{\mathrm{typ}}`$) would restrict generality of the statement; see the discussion below. Practically speaking, the conjecture of Ref. is that the distribution function of the IPR logarithm, $`𝒫(\mathrm{ln}P_q)`$, simply shifts along the $`x`$-axis with changing $`L`$. In contrast, the statement of Ref. is that the width of this distribution function scales proportionally to $`\mathrm{ln}L`$.
While the above-mentioned analytical results for the 2D case are clearly against the statement of , their applicability to a generic Anderson transition point may be questioned. Indeed, the 2D metal represents only an “almost critical” point, and the consideration is restricted to the weak disorder limit $`g\gg 1`$ (weak coupling regime in the field-theoretical language), while all the realistic metal-insulator transitions (conventional Anderson transition in 3D, quantum Hall transition etc.) take place in the regime of strong coupling.
To explore the IPR fluctuations at criticality in the strong coupling regime, we have performed numerical simulations of the power-law random banded matrix (PRBM) ensemble. This model of the Anderson critical point introduced in is defined as the ensemble of random Hermitian $`N\times N`$ matrices $`\widehat{H}`$ (real for $`\beta =1`$ or complex for $`\beta =2`$). The matrix elements $`H_{ij}`$ are independently distributed Gaussian variables with zero mean $`\langle H_{ij}\rangle =0`$ and the variance
$$\langle |H_{ij}|^2\rangle =a^2(|i-j|),$$
(4)
where $`a(r)`$ is given by
$$a^2(r)=\left[1+\frac{1}{b^2}\frac{\mathrm{sin}^2(\pi r/N)}{(\pi /N)^2}\right]^{-1}.$$
(5)
Here $`0<b<\infty `$ is a parameter characterizing the ensemble, whose significance will be discussed below. The crucial feature of the function $`a(r)`$ is its $`1/r`$–decay for $`r\gg b`$. Indeed, for $`r\ll N`$ Eq. (5) reduces to
$$a^2(r)=[1+(r/b)^2]^{-1}.$$
(6)
The formula (5) is just a periodic generalization of (6), allowing to diminish finite-size effects (an analog of periodic boundary conditions).
In a straightforward interpretation, the model describes a 1D sample with random long-range hopping, the hopping amplitude decaying as $`1/r`$ with the length of the hop. Also, such an ensemble arises as an effective description in a number of physical contexts. Referring the reader to Refs. for details (see also ), we only give a brief summary of the main relevant analytical findings. The PRBM model formulated above is critical at arbitrary value of $`b`$; it shows all the key features of the Anderson critical point, including multifractality of eigenfunctions and non-trivial spectral compressibility (to be discussed below). Perhaps, the most appealing property of the ensemble is the existence of the parameter $`b`$ which labels the critical point: Eqs. (4), (5) define a whole family of critical theories parametrized by $`b`$ . This is in full analogy with the family of the conventional Anderson transition critical points parametrized by the spatial dimensionality $`2<d<\infty `$. The limit $`b\gg 1`$ is analogous to $`d=2+ϵ`$ with $`ϵ\ll 1`$; it allows a systematic analytical treatment (weak coupling expansion for the $`\sigma `$-model). The opposite limit $`b\ll 1`$ corresponds to $`d\gg 1`$, where the transition takes place in the strong disorder (strong coupling) regime, and is also accessible to an analytical treatment using the method of . This makes the PRBM ensemble a unique laboratory for studying general features of the Anderson transition. Criticality of the PRBM ensemble was recently confirmed in numerical simulations for $`b=1`$ .
We have calculated the distribution function of the IPR in the case $`\beta =1`$ for system sizes ranging from $`N=256`$ to $`N=4096`$ and for various values of $`b`$, by numerically diagonalizing the Hamiltonian matrix defined in Eq. (4) using standard techniques. The statistical average is over a few thousand matrices for the large system sizes, and up to $`10^5`$ matrices at $`N=256`$. Specifically, we have considered an average over wavefunctions having energies in a small energy interval about the band center, with a width of about $`10\%`$ of the band width.
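A minimal version of this computation can be sketched as follows; the symmetrization convention for the real ($`\beta =1`$) matrices, the width of the central energy window, and all function names are illustrative assumptions of ours rather than the production code behind the figures:

```python
import numpy as np

def prbm_matrix(N, b, rng):
    """One realization of the beta = 1 PRBM ensemble, Eqs. (4)-(5)."""
    r = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    # variance profile a^2(r) of Eq. (5)
    a2 = 1.0 / (1.0 + (np.sin(np.pi * r / N) / (np.pi / N)) ** 2 / b**2)
    H = rng.normal(size=(N, N)) * np.sqrt(a2)
    return (H + H.T) / np.sqrt(2.0)   # symmetrize to obtain a real random matrix

def ipr_logs(N=256, b=1.0, q=2, n_samples=1000, window=0.1, seed=0):
    """Collect ln P_q for eigenstates within a central energy window."""
    rng = np.random.default_rng(seed)
    logs = []
    for _ in range(n_samples):
        w, v = np.linalg.eigh(prbm_matrix(N, b, rng))
        band = w.max() - w.min()
        sel = np.abs(w - w.mean()) < 0.5 * window * band    # ~10% central window
        P_q = np.sum(np.abs(v[:, sel]) ** (2 * q), axis=0)  # Eq. (1), discretized
        logs.extend(np.log(P_q))
    return np.asarray(logs)
```

Histogramming `ipr_logs` for several $`N`$ and overlaying the curves after a horizontal shift reproduces the test of scale invariance, and the slope of the median of $`\mathrm{ln}P_2`$ versus $`\mathrm{ln}N`$ estimates $`-D_2`$.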
Fig. 1 displays our result for the distribution of the IPR logarithm, $`𝒫(\mathrm{ln}P_2)`$. It is clearly seen that the distribution function does not change its shape or width with increasing $`N`$. After shifting the curves along the $`x`$-axis, they all lie on top of each other, forming a scale-invariant IPR distribution. Of course, the far tail of this universal distribution becomes increasingly better developed with increasing $`N`$. From the shift of the distribution $`𝒫(\mathrm{ln}P_2)`$ with $`N`$ we find the fractal dimension $`D_2=0.75\pm 0.05`$. Analogous results are obtained for other values of $`b`$ and $`q`$ and will be published elsewhere .
We conclude therefore that the distribution of the IPR (normalized to its typical value) is indeed scale-invariant, as was conjectured in and in disagreement with Ref. . A natural question that can be asked is why the authors of failed to find this universality? We speculate that, possibly, the system sizes $`L`$ used in their numerical simulations were too small for observing the universal form of $`𝒫[\mathrm{ln}(P_2/P_2^{\mathrm{typ}})]`$ in the limit $`L\rightarrow \infty `$ .
The value of the fractal dimension $`D_2`$ found from the scaling of the shift of the distribution with $`N`$ is shown in Fig. 2 as a function of the parameter $`b`$ of the PRBM model. The numerical results agree very well with the analytical asymptotics in the limit of large $`b`$, $`\eta \equiv 1-D_2=1/\pi b`$ , and of small $`b`$, $`D_2=2b`$ . We have also calculated the spectral compressibility $`\chi `$ characterizing fluctuations of the number $`n`$ of energy levels in a sufficiently large energy window $`\delta E`$, $`\mathrm{var}(n)=\chi \langle n\rangle `$. The results are also shown in Fig. 2, and are in perfect agreement with the large-$`b`$ asymptotics, $`\chi =1/2\pi b`$ , as well.
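The compressibility estimate can be sketched in the same spirit, reusing `prbm_matrix` from the snippet above; the pooled-ensemble unfolding and the window of 32 unfolded levels are assumptions of ours:

```python
import numpy as np

def spectral_compressibility(b=1.0, N=512, n_samples=300, n_levels=32, seed=1):
    """Estimate chi from var(n) = chi <n> for a window around the band center."""
    rng = np.random.default_rng(seed)
    spectra = [np.linalg.eigvalsh(prbm_matrix(N, b, rng)) for _ in range(n_samples)]
    pooled = np.sort(np.concatenate(spectra))
    counts = []
    for w in spectra:
        # unfold: rank within the pooled ensemble approximates the mean level staircase
        u = np.searchsorted(pooled, w) / n_samples
        counts.append(np.sum(np.abs(u - N / 2) < n_levels / 2))
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()
```

For a Poisson level sequence this estimator tends to 1, while for a rigid random-matrix spectrum it is strongly suppressed, so it interpolates between the two limits in the expected way.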
In a remarkable recent work , Chalker, Lerner and Smith employed Dyson’s idea of Brownian motion through the ensemble of Hamiltonians to link the spectral statistics with wavefunction correlations. On this basis, it was argued in Ref. that the following exact relation between $`\chi `$ and $`D_2`$ holds:
$$\chi =(d-D_2)/2d.$$
(7)
According to (7), the spectral compressibility should tend to $`1/2`$ in the limit $`D_2\rightarrow 0`$ (very sparse multifractal), and not to the Poisson value $`\chi =1`$. However, the numerical data of Fig. 2 show that, while being an excellent approximation at large $`b`$ (we remind that for our system $`d=1`$), the relation (7) is increasingly strongly violated with decreasing $`b`$. In particular, in the limit $`b\rightarrow 0`$ (when $`D_2\rightarrow 0`$) the spectral compressibility tends to the Poisson limit, $`\chi \rightarrow 1`$. The same conclusion was reached analytically in for the PRBM model with broken time reversal invariance. Similar violation of (7) is indicated by numerical data for the tight-binding model in dimensions $`d>4`$ . It would be interesting to see why the derivation of (7) in fails at small $`b`$.
Let us now comment on the necessity to distinguish between the average value $`P_q`$ and the typical value $`P_q^{\mathrm{typ}}`$. This is related to the question of the asymptotic behavior of the distribution $`𝒫(P_q)`$ at anomalously large $`P_q`$. It was found in the 2D case that the distribution has a power-law tail $`𝒫(P_q)P_q^{1x_q}`$ with $`x_q=2\beta \pi g/q^2`$ (as before, $`g1`$ and $`q^2\beta \pi g`$ assumed). We believe that the power-law asymptotics with some $`x_q>0`$ is a generic feature of the Anderson transition point. This is confirmed by our numerical simulations, as illustrated in Fig. 1. For not too large $`q`$ the index $`x_q`$ is sufficiently large ($`x_q>1`$), so that there is no essential difference between $`P_q`$ and $`P_q^{\mathrm{typ}}`$. However, with increasing $`q`$ the value of $`x_q`$ decreases. Once it drops below unity, the average $`P_q`$ starts to be determined by the upper cut-off of the power-law “tail”, determined by the system size. As a result, for $`x_q<1`$ the average shows a scaling $`P_qL^{\stackrel{~}{D}_q(q1)}`$ with an exponent $`\stackrel{~}{D}_q`$ different from $`D_q`$ as defined from the scaling of $`P_q^{\mathrm{typ}}`$ (see above). In this situation the average value $`P_q`$ is not representative and is determined by rare realizations of disorder. Therefore, the condition $`x_q=1`$ corresponds to the point $`\alpha _{}`$ of the singularity spectrum with $`f(\alpha _{})=0`$. If one performs the ensemble averaging in the regime $`x_q<1`$, one finds $`\stackrel{~}{D}_q`$ as the fractal exponent and (after the Legendre transform) the function $`f(\alpha )`$ continuing beyond the point $`\alpha _{}`$ into the region $`f(\alpha )<0`$ . With this definition, the fractal exponent $`\stackrel{~}{D}_q0`$ as $`q\mathrm{}`$. On the other hand, the fractal exponent $`D_q`$ defined above from the scaling of the typical value $`P_q^{\mathrm{typ}}`$ (or, equivalently, of the whole distribution function) corresponds to the spectrum $`f(\alpha )`$ terminating at $`\alpha =\alpha _{}`$ and saturates $`D_q\alpha _{}`$ in the limit $`q\mathrm{}`$.
In the region $`x_q>1`$ (corresponding to $`f(\alpha )>0`$) the two definitions of the fractal exponents are identical, $`D_q=\stackrel{~}{D}_q`$. This is in particular valid at $`q=2`$ for the Anderson transition in 3D and for the Quantum Hall transition.
As has been mentioned above, the two limits $`b\gg 1`$ and $`b\ll 1`$ can be studied analytically. Let us announce the corresponding results for the IPR statistics; details will be published elsewhere . As shown in Fig. 3, the “phase boundary” $`q_c(b)`$ separating the regimes of $`x_q>1`$ ($`D_q=\stackrel{~}{D}_q`$) and $`x_q<1`$ ($`D_q>\stackrel{~}{D}_q`$) has the asymptotics $`q_c=(2\pi b)^{1/2}`$ ($`b\gg 1`$) and $`q_c=2.4056`$ ($`b\ll 1`$). Notice that this implies $`D_2=\stackrel{~}{D}_2`$ for all $`b`$. The corresponding power-law tail exponent $`x_2`$ is equal to $`\pi b/2`$ at $`b\gg 1`$ and to $`3/2`$ at $`b\ll 1`$. The values of $`x_q`$ at $`q<q_c`$ (for $`b\ll 1`$) as well as at $`q>q_c`$ are given in Fig. 3.
Finally, it is worth mentioning that the meaning of universality of the IPR distribution at the critical point is the same as for the conductance distribution or for the level statistics. Specifically, the IPR distribution does depend on the system geometry (i.e., on the shape and on the boundary conditions). However, for a given geometry it is independent of the system size and of microscopic details of the model, and is an attribute of the relevant critical theory.
In conclusion, we have studied the IPR statistics in the family of the PRBM models of the Anderson transition. Our main findings are as follows: (i) The distribution function of the IPR (normalized to its typical value $`P_q^{\mathrm{typ}}`$) is scale-invariant, as was conjectured in . (ii) The scaling of $`P_q^{\mathrm{typ}}`$ with the system size defines the fractal exponent $`D_q`$, which is a non-fluctuating quantity, in contrast to . (iii) The universal distribution $`𝒫(z\equiv P_q/P_q^{\mathrm{typ}})`$ has a power-law tail $`\propto z^{-1-x_q}`$. At sufficiently large $`q`$ one finds $`x_q<1`$, and the average value $`\langle P_q\rangle `$ becomes non-representative and scales with a different exponent $`\stackrel{~}{D}_q\ne D_q`$. (iv) The relation (7) between the spectral compressibility and the fractal dimension $`D_2`$, argued to be exact in Ref. , is violated in the strong-multifractality regime. In particular, $`\chi \rightarrow 1`$ in the limit of a very sparse multifractal ($`D_2\rightarrow 0`$).
Discussions with V.E. Kravtsov, L.S. Levitov, D.G. Polyakov, I. Varga and I.Kh. Zharekeshev are gratefully acknowledged. This work was supported by the SFB 195 der Deutschen Forschungsgemeinschaft.
## 1 Introduction
The energy of extra-galactic proton cosmic rays should not exceed the GZK bound . The bound, about $`10^{19}`$ eV, is based on the known interactions of nucleon primaries with the photon background of intergalactic space. The $`GZK`$ bound is tantamount to an upper limit on cosmic ray energies, inasmuch as nuclei and photons have lower energy cutoffs . Yet an experimental puzzle exists, as evidence for air shower events with energies above the GZK bound has steadily accumulated over the last 35 years . There seem to be inadequate sources nearby to account for such events, and the sources are almost certainly extragalactic . A number of showers with energies reliably determined to be above $`10^{20}`$ eV have been observed in recent years , deepening the puzzle.
A completely satisfactory explanation of the so-called $`GZK`$-violating events is still lacking. Models have been constructed to explain the puzzle by invoking “conventional” extragalactic sources of proton $`UHE`$ acceleration, as reviewed recently in Ref. . Other models introduce exotic primaries such as magnetic monopoles or exotic sources such as unstable superheavy relic particles , or appeal to topological defects . Except for monopoles, all of the proposed sources must be within 50-100 Mpc to evade the GZK bound, a requirement which is difficult to satisfy.
The $`10^{20}`$ eV events potentially pose a confrontation between observation and fundamental particle physics. Except for the neutrino, there are no candidates among established elementary particles that could cross the requisite intergalactic distances of about 100 Mpc or more . The neutrino would be a candidate for the events, if its interaction cross section were large enough, but this requires physics beyond the Standard Model. The neutrino-nucleon total cross section $`\sigma _{tot}`$ is the crucial issue: Flux estimates of $`UHE`$ neutrinos produced by extra-galactic sources and GZK-attenuated nucleons and nuclei vary widely, but suffice to account for the shower rates observed. Very significantly, there is a hint of correlations in direction of the events, both with one another and candidate sources . Event-pointing toward distant sources, if confirmed, would require a neutral, long-lived primary, reducing the possibilities for practical purposes to the neutrino plus new physics to explain the cross section.
Current understanding of the $`UHE`$ Standard Model $`\sigma _{tot}`$ is based on small-x QCD evolution and $`W^\pm ,Z`$ exchange physics . This physics is extremely well understood and has been directly tested up to $`s=10^5`$ GeV<sup>2</sup> with recent HERA-based updates . These calculations are then extrapolated to the region of $`10^{20}`$ eV primary energy, leading to cross sections in the range $`10^{-4}-10^{-5}`$ mb, far too small to explain the GZK-violating air shower events. The observationally indicated cross section is completely out of reach of the extrapolations of $`W`$\- and $`Z`$\- exchange mechanisms.
Since the neutrino-nucleon cross section at $`10^{20}`$ eV has never been directly measured, it is quite reasonable to surmise that new physical processes may be at work. Total cross sections at high energies are dominated by characteristics of the t-channel exchanges. The growth of $`\sigma _{tot}`$ with energy, in turn, is directly correlated with the spin of exchanged particles. Exchange of new (W- or Z-like) massive vector bosons would produce $`\sigma _{tot}`$ growing at the same rate as the standard one, failing to explain the puzzle. If the data indicates a more rapid growth with energy, one is forced to consider higher spin, with the next logical possibility being massive spin-2 exchange. We reiterate that this deduction is data-driven; if data indicates (a) corrrelations with source directions, and (b) $`\sigma _{tot}`$ in the $`mb`$ and above range, there are few options other than neutrinos interacting by massive spin-2 exchanges.
Recent theoretical progress has opened up the fascinating possibility of massive spin-2, t-channel exchange in the context of large “extra” dimensions , while the fundamental scale can be related to a string scale of order several TeV . In this context the Kaluza-Klein ($`KK`$) excitations of the graviton act like a tower of massive spin-2 particle exchanges . We will show that large $`UHE`$ neutrino cross sections, sufficient to generate the observed showers, are a generic feature of this developing framework . At the same time the new contributions to $`\sigma _{tot}`$ at energies below a center of mass energy $`\sqrt{s}`$ of $`500`$ GeV are several orders of magnitude below the Standard Model component. In fact, the new physics we propose to explain the puzzle of the $`GZK`$-violating events is consistent with all known experimental limits.
## 2 The $`\nu N`$ Cross Section with Massive Spin-2 Exchange
The low-energy, 4-Fermi interaction total cross section, $`\sigma _{tot}`$, grows like $`s^1`$ over many decades of energy. Perturbative unitarity implies that at an invariant $`cm`$ energy $`\sqrt{s}`$ large compared to the exchange mass $`m_W`$, the growth rate slows to at most a logarithmic energy dependence. The shift from power-law to logarithmic growth is seen to occur in the Standard Model. There is a second effect: above 100 TeV the total number of targets (quark-antiquark pairs) grows roughly like $`(E_\nu )^{0.4}`$. This fractional power, in turn, leads to a formula $`\sigma _{tot}=1.2\times 10^{-5}mb(E_\nu /10^{18}\mathrm{eV})^{0.4}`$ as a reasonable approximation to the Standard Model calculation.
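As a sanity check on these numbers, the Standard Model approximation can be evaluated directly; the following is a minimal sketch (Python), with the normalization and power-law index taken from the formula above and everything else arithmetic.

```python
# Quick numerical check of the quoted Standard Model approximation,
# sigma_tot ~ 1.2e-5 mb * (E_nu / 1e18 eV)^0.4.

def sigma_sm_mb(e_nu_ev: float) -> float:
    """Approximate SM neutrino-nucleon cross section in mb."""
    return 1.2e-5 * (e_nu_ev / 1e18) ** 0.4

for e in (1e18, 1e20):
    print(f"E_nu = {e:.0e} eV  ->  sigma_tot ~ {sigma_sm_mb(e):.2e} mb")
# At 1e20 eV this gives ~7.6e-5 mb, far below the mb scale needed
# to initiate air showers high in the atmosphere.
```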
Exchange of additional spin-1 bosons cannot produce faster growth with energy than just described. However, a massive spin-2 exchange grows quite quickly with energy on very general grounds. A dimensionless spin-2 field gets its couplings from derivatives, which translate to factors of energy. Thus the naive cross section grows like $`E_\nu ^3`$, in the “low-energy” regime.
These general features are exemplified in the Feynman rules for this regime developed by several groups. To be consistent with the literature, we will describe the interaction as “graviton” exchange, implying the standard picture of a tower of spin-2 $`KK`$ modes. The parton level $`\nu `$ gluon differential cross section is given by,
$$\frac{d\widehat{\sigma }^{Gg}}{d\widehat{t}}=\frac{\pi \lambda ^2}{2M_S^8}\frac{\widehat{u}}{\widehat{s}^2}\left[2\widehat{u}^3+4\widehat{u}^2\widehat{t}+3\widehat{u}\widehat{t}^2+\widehat{t}^3\right]$$
(1)
Here $`M_S`$ is the cutoff on the graviton mass, and $`\lambda `$ is the effective coupling at the scale $`M_S`$ that cuts off the graviton $`KK`$ mode summation. The magnitude of the parameter $`\lambda `$ has been lumped into the scale parameter $`M_S`$, hence $`\lambda =\pm 1`$ for our purposes. In Eqs. (1) and (2) we take $`|\widehat{t}|\ll M_S^2`$, which leads to the simple factor $`1/M_S^8`$. This suffices for our extrapolation, but the full $`\widehat{t}`$ dependence is used in the partial wave projections to check the unitarity constraint (discussed momentarily). The corresponding parton level $`\nu `$ quark differential cross section is given by,
$$\frac{d\widehat{\sigma }^{Gq}}{d\widehat{t}}=\frac{\pi \lambda ^2}{32M_S^8}\frac{1}{\widehat{s}^2}\left[32\widehat{u}^4+64\widehat{u}^3\widehat{t}+42\widehat{u}^2\widehat{t}^2+10\widehat{u}\widehat{t}^3+\widehat{t}^4\right]$$
(2)
We include the contribution of the two valence quarks as well as the $`\overline{u}`$, $`\overline{d}`$ and $`\overline{s}`$ sea quarks. The $`Z`$-graviton interference terms are included with negative $`\lambda `$, though their contribution is very small compared to the other terms. The negative sign gives a slight enhancement in the final result for $`\sigma _{tot}`$. Collider physics and astrophysics constrain the effective scale $`M_S`$ to be above 1 TeV, with lower numbers of extra dimensions leading to stronger constraints.
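For readers who want to experiment with these expressions, a minimal sketch of the parton-level formulas of Eqs. (1) and (2) follows (Python; natural units, with $`\lambda =\pm 1`$ as in the text — the sample kinematic point is our own illustrative choice, not from the paper).

```python
import numpy as np

def dsigma_dt_Gg(s_hat, t_hat, M_S, lam=1.0):
    """Parton-level nu-gluon differential cross section, Eq. (1),
    valid for |t_hat| << M_S**2 (natural units)."""
    u_hat = -s_hat - t_hat  # massless 2 -> 2 kinematics
    bracket = 2*u_hat**3 + 4*u_hat**2*t_hat + 3*u_hat*t_hat**2 + t_hat**3
    return (np.pi * lam**2 / (2 * M_S**8)) * (u_hat / s_hat**2) * bracket

def dsigma_dt_Gq(s_hat, t_hat, M_S, lam=1.0):
    """Parton-level nu-quark differential cross section, Eq. (2)."""
    u_hat = -s_hat - t_hat
    bracket = (32*u_hat**4 + 64*u_hat**3*t_hat + 42*u_hat**2*t_hat**2
               + 10*u_hat*t_hat**3 + t_hat**4)
    return (np.pi * lam**2 / (32 * M_S**8)) * bracket / s_hat**2

# Example point: s_hat = 0.25 TeV^2, t_hat = -0.01 TeV^2, M_S = 1 TeV,
# i.e. safely inside the low-energy regime where Eqs. (1)-(2) apply.
print(dsigma_dt_Gg(0.25, -0.01, 1.0), dsigma_dt_Gq(0.25, -0.01, 1.0))
```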
### 2.1 Unitarity
The complete theory of massive $`KK`$ modes is not yet developed, making it impossible to know the exact cross sections at asymptotic energies. The situation is analogous to the case of the 4-Fermi theory before the Standard Model. By observing the $`s^1`$ growth of $`\sigma _{tot}`$ it was possible to deduce a massive vector exchange long before a consistent theory existed. In much the same way, present data indicate a spin-2 exchange while the analogous complete “standard model” of gravitons does not yet exist. Unlike the electroweak case, in either the data-driven or extra-dimensions scenario we must face a strongly interacting, non-perturbative problem in the high energy regime, $`s\gg M_S^2`$. Perturbative unitarity breaks down as a host of new channels opens up in that regime. The low-energy effective theory remains an accurate description within a particular domain of consistency. Extrapolation of the 4-Fermi predictions to higher energies is possible by matching the consistent, low-energy description with the asymptotic demands of unitarity. Similarly, we resolve the difficulties of massive graviton exchange in the high energy regime by matching the $`\sqrt{s}<M_S`$ predictions, where the perturbative calculation is under control, to the $`\sqrt{s}\gg M_S`$ non-perturbative regime.
We proceed by first evaluating the theory’s partial wave amplitudes to find the highest energy where the low energy effective theory is applicable. Taking the case $`\nu +q\rightarrow \nu +q`$, and including the full $`Q^2`$ dependence of the propagator, we find that the unitarity bound on the $`J=0`$ projection of the helicity amplitude $`T_{++,++}`$ gives the strongest bound. For example, with the number of extra dimensions $`n=2`$ we find $`\sqrt{s}\lesssim 1.7M_S`$, while with $`n=4`$ we find $`\sqrt{s}\lesssim 2.0M_S`$. As mentioned earlier, the most attractive value of $`M_S`$ is in the TeV range.
The invariant energies of the highest energy cosmic rays are approximately 1000 units of the scale $`M_S\sim 1`$ TeV, well beyond the low-energy regime. A phenomenological prescription consistent with unitarity is clearly necessary to extrapolate the low energy amplitudes. We now turn to describing and motivating three different asymptotic forms that span reasonable possibilities: $`log(s)`$, $`s^1`$ and $`s^2`$ growth of $`\sigma _{tot}(s)`$. There is no guarantee a priori that any should extrapolate from low to high energy with $`M_S\sim 1`$ TeV and produce hadronic-size cross sections at $`10^{20}`$ eV. As we shall see, surprisingly, they all do!
As a first version of an extrapolation model, we use a well known result from general features of local quantum field theory. The Froissart bound, reflecting the unitarity constraint on cross sections from exchange of massive particles, dictates that $`\sigma _{tot}`$ grows no more rapidly than $`\left(\mathrm{log}(\widehat{s}/M_S^2)\right)^2`$. The bound is an asymptotic one, and strictly speaking incapable of limiting behavior at any finite energy; moreover, the bound is probably violated in the case of graviton exchange. It is quite conservative to use the Froissart bound, with its mild logarithmic growth in s, as a first test case. In terms of the differential cross section, we have at high energy
$$\frac{d\widehat{\sigma }}{d\widehat{t}}\sim \frac{\mathrm{const}}{(M_S^2-\widehat{t})M_S^2}\mathrm{log}(\widehat{s}/M_S^2)$$
(3)
We then propose the following interpolating formula which reproduces Eq. (1) in the low energy limit and Eq. (3) in the high energy limit
$`{\displaystyle \frac{d\widehat{\sigma }^{Gg}}{d\widehat{t}}}`$ $`=`$ $`{\displaystyle \frac{\pi \lambda ^2}{2M_S^2(M_S^2+\widehat{s})^2(M_S^2-\widehat{t})}}{\displaystyle \frac{\widehat{u}}{\widehat{s}^2}}\left[2\widehat{u}^3+4\widehat{u}^2\widehat{t}+3\widehat{u}\widehat{t}^2+\widehat{t}^3\right]`$ (4)
$`\times `$ $`\left[1+\xi \mathrm{log}(1+\widehat{s}/M_S^2)\right]`$ (5)
The $`\nu `$-quark parton level cross section is similarly extrapolated to high energies. We have introduced the parameter $`\xi `$, which we will allow to vary between 1 and 10. It cannot be much larger than 10, since then the low energy cross section would be modified, violating consistency. We use these parton-level expressions to calculate $`\sigma _{tot}`$. We convolve the parton-level cross section with CTEQ 4.6 parton distribution functions, which give continued growth in the $`UHE`$ regime from the small-x effect. As pointed out previously, the cross section can be expected to grow to “strong interaction” magnitudes, where parton coalescence and string effects ultimately come into play.
A different constraint for unitarizing would be $`s^2`$ growth. Regge theory would suggest $`s^2`$ growth for spin-2 exchange at small, fixed $`t`$, which (in fact) occurs in this theory when the $`t`$ values are restricted self-consistently. Thus the use of $`s^2`$ growth makes a comparatively mild alteration of the perturbative predictions and follows from eikonal unitarization of Reggeized graviton exchange. Another unitarization procedure indicates a linear growth in $`s`$, which represents a case intermediate to the other two. These cases serve to establish a fair range of possibilities for models of unitarization. For the $`s^1`$ and $`s^2`$ models, the extrapolation form of Eq. (1) we choose is
$$\frac{d\widehat{\sigma }^{Gg}}{d\widehat{t}}=\frac{\pi \lambda ^2}{2M_S^{6-2p}(M_S^2+\widehat{s})^p(M_S^2-\beta \widehat{t})}\frac{\widehat{u}}{\widehat{s}^2}\left[2\widehat{u}^3+4\widehat{u}^2\widehat{t}+3\widehat{u}\widehat{t}^2+\widehat{t}^3\right],$$
(6)
where $`p=1,0`$ for $`s^1,s^2`$. A similar extrapolation is applied to the cross section formula for the $`\nu `$ quark parton case, Eq. (2).
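A short sketch of the unitarized extrapolations may make their structure clearer; the following (Python) implements the $`log(s)`$ interpolation of Eqs. (4)-(5) and the power-law form of Eq. (6) for the gluon channel, with the parameters $`(\xi ,\beta ,p)`$ left free as in the text.

```python
import numpy as np

def dsigma_dt_log(s_hat, t_hat, M_S, xi=10.0, lam=1.0):
    """Eqs. (4)-(5): log(s)-unitarized nu-gluon cross section."""
    u_hat = -s_hat - t_hat
    bracket = 2*u_hat**3 + 4*u_hat**2*t_hat + 3*u_hat*t_hat**2 + t_hat**3
    pref = np.pi * lam**2 / (2 * M_S**2 * (M_S**2 + s_hat)**2 * (M_S**2 - t_hat))
    return pref * (u_hat / s_hat**2) * bracket * (1 + xi * np.log1p(s_hat / M_S**2))

def dsigma_dt_power(s_hat, t_hat, M_S, p=1, beta=1.0, lam=1.0):
    """Eq. (6): p=1 gives sigma ~ s, p=0 gives sigma ~ s^2 asymptotically."""
    u_hat = -s_hat - t_hat
    bracket = 2*u_hat**3 + 4*u_hat**2*t_hat + 3*u_hat*t_hat**2 + t_hat**3
    pref = np.pi * lam**2 / (2 * M_S**(6 - 2*p) * (M_S**2 + s_hat)**p
                             * (M_S**2 - beta * t_hat))
    return pref * (u_hat / s_hat**2) * bracket

# Illustrative call at an arbitrary kinematic point (TeV units):
print(dsigma_dt_log(0.25, -0.01, 1.0), dsigma_dt_power(0.25, -0.01, 1.0, p=1))
```

To obtain $`\sigma _{tot}`$, these parton-level forms must still be integrated over $`\widehat{t}`$ and convolved with the parton distribution functions, as described above.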
Note that we use the detailed perturbative low energy calculations that follow from Ref. to anchor the low energy end. As we discuss below, we also explore the allowed range of $`M_S`$ values, corresponding to a range of numbers of extra large dimensions. As far as we know, these consistency features in the present context have not previously appeared in the literature.
An essential feature of spin-2 exchange, complementary to the growth of $`\sigma _{tot}`$ with energy in the $`UHE`$ region, is suppression of observable effects in the low energy regime of existing data. Turning the problem around to a “data-driven” view, the energy dependence of the new physics must be so strong that millibarn-scale total cross sections at $`\sqrt{s}\sim 10^3`$ TeV are suppressed well below the Standard Model values below 1 TeV. This also follows in our approach.
## 3 Results
Results of the calculations based on our models (Eqs. 4 and 6) are given in Fig. 1. For the $`ln(s)`$ model, shown by the dotted line, we find that for incident neutrino energy of order $`10^{12}`$ GeV, $`\sigma _{tot}`$ is roughly $`0.5`$ mb, with the effective cutoff scale $`M_S=1`$ TeV and the choice $`\xi =10`$. This cross section is remarkably consistent with the low end of the range required to explain the GZK-violating events. Looking at higher values of $`M_S`$, we find that $`\sigma _{tot}`$ falls so steeply as a function of $`M_S`$ that $`M_S\sim 1`$ TeV is *required* for this model to be a viable explanation of cosmic ray events with $`E\gtrsim 10^{19}`$ eV.
The results of the calculations with the $`s^1`$ model are shown in the dash-dotted line ($`\beta =0.1`$) and short-dashed line ($`\beta =1.0`$). The value of $`M_S`$ is fixed at 1 TeV; $`\beta `$ values of 0.1 and 1.0 are chosen to illustrate that this intermediate growth model easily produces hadronic size cross sections above the GZK cutoff. We find that raising the cutoff $`M_S`$ much above 2 TeV suppresses the cross section well below the range required for the GZK events, independently of the value of $`\beta `$.
The long-dashed curve (Fig. 1) shows the case with $`s^2`$ growth for values $`M_S=3`$ TeV and $`\beta =1.0`$. The growth of $`\sigma _{tot}`$ to tens of millibarns at the highest energies shows the viability of this model even at $`M_S`$ values above 1 TeV.
The key feature we emphasize about Fig. 1 is that $`\sigma _{tot}`$ somewhere in the range 1-100 mb is obtained at the highest energies for reasonable parameter choices in all three unitarization models. As a significant corollary, we find that if $`M_S`$ is larger than 1-10 TeV (depending on the model), then the low scale gravity models fail to increase $`\sigma _{tot}`$ by enough to explain the $`GZK`$-violating events. Thus the analysis serves to put an upper bound on the cutoff mass<sup>1</sup><sup>1</sup>1Our requirement is consistent with estimates of lower bounds as discussed in Ref., for example.
For completeness let us note that the interaction cross section with CMB photons is completely negligible because the cm-energy is about $`10^{-13}`$ that of a proton target. On the other hand, the mean free path of $`UHE`$ neutrinos with $`E_\nu >10^{19}`$ eV and $`mb`$-scale cross sections is of the order of a few kilometers in the upper atmosphere. This is still much larger than the mean free path of other particles, such as protons, at the same energy. If the ultrahigh energy showers are indeed initiated by neutrinos, the $`log(s)`$ and $`s^1`$ models predict considerable vertical spread in the point of origin of the shower. The $`s^2`$ model is capable of much larger $`\sigma _{tot}`$.
Experimental extraction of cross sections is complicated by shower development fluctuations, but it would be useful to make all possible efforts to bound the mean free path of the primaries responsible for $`GZK`$-violating events. There is a further, interesting variant of the “double-bang” $`\nu _\tau `$ signal: secondary events may come from a neutrino dumping part of its energy in a first collision, undergoing a second collision a few kilometers later. This process may serve to separate primaries with $`mb`$-scale cross sections from protons or nuclei, which have much shorter mean free paths.
### 3.1 Signatures of Massive Spin-2 Exchange at Km-Scale Neutrino Detectors
It is also interesting to consider alternate signatures of a $`\nu N`$ neutral-current cross section exceeding the Standard Model one. The cosmic neutrino detectors AMANDA and RICE, for example, are expected to explore the TeV-10 PeV energy regime. The cross sections we find exceeding those of the Standard Model might be tested in these experiments. Of particular interest is the angular distribution of events. The diffuse background cosmic ray neutrino flux is expected to be isotropic. We can, therefore, measure the deviations of $`\sigma _{tot}`$ from $`SM`$ predictions by measuring the ratio of upward- to downward-moving events. This ratio is plotted in Fig. 2 as a function of the incident neutrino energy. The plots show a few representative choices of $`M_S`$ and extrapolation parameter $`\beta `$, using the $`s^1`$ model for illustration. The $`up/down`$ ratio starts to deviate very strongly from the $`SM`$ value for incident neutrino energies greater than about 5 PeV, beyond which the ratio falls very sharply to zero. This can in principle be measured at RICE, which is sensitive to precisely the energy regime of 100 TeV to 100 PeV. The event rate in this energy region is also expected to be significant. Note that only the neutral-current events are affected. A more detailed, but also attractive, extension of this technique is $`UHE`$ Earth tomography, recently shown practicable with existing flux estimates, and also capable of measuring $`\sigma _{tot}`$ indirectly. The graviton-exchange predictions are not especially sensitive to the precise value of $`\beta `$ in this region, but do depend strongly on the cutoff scale $`M_S`$. Large scale detectors such as RICE will be able to explore the range $`M_S<2`$ TeV.
We conclude by reiterating that the highest energy cosmic rays are on the verge of confronting fundamental particle physics. Exciting projects underway, including AGASA, Hi-RES, and AUGER, should be able to collect enough data to resolve the issues. The puzzle of $`GZK`$-violating events can be experimentally resolved by establishing neutral primaries via angular correlations. That feature would imply new physics of neutrinos, with the cross section then indicating massive spin-2 exchange rather directly. Experimentally bounding the primary interaction cross section below that of protons is another viable strategy in the $`s^1`$ and $`log(s)`$ scenarios.
Models of extra dimensions with $`KK`$ modes consistent with current limits predict $`\nu N`$ cross sections sufficiently large to produce such interactions, and are consistent with the observation of events in the upper atmosphere at primary energies greater than $`10^{20}`$–$`10^{21}`$ eV. Depending on the observational outcome, then, the subject matter could become very important, and further development is well warranted.
Acknowledgments: We thank Tom Weiler for discussions and suggestions on the manuscript. PJ thanks Prashanta Das, Prakash Mathews and Sreerup Raychaudhuri for useful discussions. This work was supported in part by U.S. DOE Grant number DE-FG03-98ER41079 and the Kansas Institute for Theoretical and Computational Science. |
# Testing population synthesis models with globular cluster colors
## 1 Introduction
Predicting the integrated spectral energy distributions of stellar populations is important in the solution of many problems in astronomy, from determining the ages of globular clusters to modeling counts of faint galaxies at high redshift. Beginning with the early work of Tinsley (1968), successive generations of modelers have combined the best available data on stellar structure and evolution to predict the appearance of the combined light of generations of stars. Although the subject of population synthesis has a long history, it is an active area of research: synthesis techniques and many of the input data (isochrones, opacities, spectral libraries) continue to be improved.
There is good evidence that globular clusters (GCs) are internally homogeneous in age and metallicity (Heald et al., 1999; Stetson, 1993). GCs are the best observational analogs of modelers’ ‘simple stellar populations’, i.e. populations of stars formed over a short time out of gas with homogeneous chemical composition. Broadband colors are among the simplest predictions of population synthesis models, so comparing the models’ predicted colors to cluster colors is the natural zeroth-order test of compatibility between the models and reality (Huchra, 1996). In this paper we compare the broad-band $`UBVRIJHK`$ colors predicted by three modern population synthesis models with the colors of Galactic and M31 GCs. Our observational database is the first one to include extensive coverage of the $`JHK`$ bandpasses, and the first with spectroscopic metallicities for all clusters. We use the cluster metallicities to bin the clusters for comparison to the appropriate models. In this way we determine the cluster-to-model offset separately for each color and avoid the ambiguity in comparing model and cluster colors in two-color diagrams.
## 2 Input data and comparison procedure
For M31 clusters, the observational data are from the Barmby et al. (2000) catalog. For the Galactic clusters we obtained optical colors, metallicity, and reddening from the June 1999 version of the Harris (1996) catalog, and IR colors from Frogel et al. (1980), as reported in Brodie & Huchra (1990) (but with the reddening correction applied by Brodie & Huchra (1990) removed and the reddening values in Harris (1996) used instead). We dereddened the clusters’ colors using the values of $`E_{B-V}`$ given in the catalogs and the Cardelli et al. (1989) extinction curve for $`R_V=3.1`$. For M31, we excluded clusters where the error in the spectroscopic metallicity was $`\sigma _{[\mathrm{Fe}/\mathrm{H}]}>0.5`$, and clusters suspected of being young on the basis of strong Balmer absorption or blue $`B-V`$ colors (see Barmby et al., 2000). For both galaxies, we excluded clusters with $`E_{B-V}>0.5`$; there are 103 M31 and 85 Galactic clusters in the final sample. Photometric data are not available in all bandpasses for all clusters: only about two-thirds have measured $`R`$ and $`I`$, and fewer than half have $`H`$.
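To illustrate the dereddening step, the sketch below (Python) applies band-dependent extinction ratios to correct an observed color; the $`A_\lambda /A_V`$ values are approximate numbers for a Cardelli et al. (1989) $`R_V=3.1`$ curve quoted from memory rather than from the paper, and should be checked against the original before serious use.

```python
# Approximate A_lambda / A_V ratios for a Cardelli et al. (1989) R_V = 3.1
# extinction curve (indicative values only -- verify against the paper).
A_OVER_AV = {'U': 1.57, 'B': 1.33, 'V': 1.00, 'R': 0.75,
             'I': 0.48, 'J': 0.28, 'H': 0.19, 'K': 0.11}

def deredden_color(band1, band2, observed_color, ebv, rv=3.1):
    """Return the intrinsic color (band1 - band2) given E(B-V)."""
    a1 = A_OVER_AV[band1] * rv * ebv   # extinction in band1, magnitudes
    a2 = A_OVER_AV[band2] * rv * ebv
    return observed_color - (a1 - a2)

# Example: a cluster with observed V-K = 3.00 and E(B-V) = 0.10
print(deredden_color('V', 'K', 3.00, 0.10))   # ~2.72, i.e. bluer
```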
We compare the cluster colors to those for simple stellar populations of ages 8, 12, and 16 Gyr from three sets of models: those of Worthey<sup>1</sup><sup>1</sup>1The version we used updates the Worthey (1994) models by including a more realistic treatment of the horizontal branch for $`[\mathrm{Fe}/\mathrm{H}]<-1.0`$., Bruzual and Charlot (hereafter BC) (both the Worthey and BC models are reported in Leitherer et al., 1996), and Kurth et al. (1999) (hereafter KFF). Although model colors are tabulated in smaller age increments (typically 1 Gyr), initially it is more reasonable to use the models as a rough guide to relative ages rather than attempting to derive precise cluster ages from them. The Worthey models are computed at \[Fe/H\] values of $`-2.0`$, $`-1.5`$, $`-1.0`$, $`-0.5`$, and $`-0.25`$ dex, and the BC and KFF models are computed at \[Fe/H\] values of $`-2.33`$ (KFF models only), $`-1.63`$, $`-0.63`$, and $`-0.32`$ dex. We compared clusters to both the Salpeter IMF (Worthey’s ‘vanilla’ models) and Scalo (1986) (Miller & Scalo, 1979, in the Worthey models) IMF versions of the models. Worthey (1994) finds that some of his model colors have defects (e.g. $`B-V`$ is too red by 0.04-0.06<sup>m</sup> due to problems in the theoretical stellar atmospheres and the color-temperature calibration), but the sizes of these defects are not well-determined, so we do not correct for them. Figure 1 shows data and models in two frequently-used two-color diagrams.
Since the models are computed at discrete values of \[Fe/H\], we use the spectroscopic metallicities of the clusters to compare only clusters with comparable metallicities ($`\pm 0.25`$ dex) to each model. The Galactic cluster metallicities given in Harris (1996) are on the Zinn & West (1984) (ZW) metallicity scale, and the M31 cluster metallicities are also tied to this scale through the calibration of Brodie & Huchra (1990). Recent work (Carretta & Gratton, 1997; Rutledge et al., 1997) suggests that the ZW scale may be non-linear at both high and low metallicities. We retain the ZW scale in this paper because we found that using the Carretta & Gratton (1997) scale to assign clusters to model comparison bins made little difference in our results. We caution, however, that the effect of changing the metallicity scale is unknown for the $`[\mathrm{Fe}/\mathrm{H}]=-0.25`$ model bin. The transformation from the ZW to CG scales is only defined for $`[\mathrm{Fe}/\mathrm{H}]_{\mathrm{ZW}}<-0.5`$, the lower limit of this metallicity bin.
We calculated the mean offsets between model and cluster colors (referenced to $`V`$) for each metallicity bin; Figures 2–3 show some representative comparisons. We plot $`\mathrm{\Delta }(X-V)`$ for all bandpasses $`X`$ to make clear the differences in spectral energy distributions between models and data; we remind the reader that the offsets for bandpasses redward of $`V`$ thus have the opposite sign from the usual colors. One general characteristic of the models visible in the Figures is that younger-aged models predict bluer colors. The exception is the KFF Scalo model for $`[\mathrm{Fe}/\mathrm{H}]=-1.63`$, which predicts only very small color differences ($`0.01^m`$) between ages of 12 and 16 Gyr. The effect of the IMF on the colors appears to depend on both metallicity and age. For the Worthey $`[\mathrm{Fe}/\mathrm{H}]=-1.50`$ models, Miller-Scalo IMF colors are redder than Salpeter model colors at all ages, but for the $`[\mathrm{Fe}/\mathrm{H}]=-0.50`$ models, the Miller-Scalo IMF colors are bluer for 8 and 12 Gyr and almost identical for 16 Gyr. BC predict almost no color difference between the Salpeter and Scalo IMF models of the same age and metallicity.
A striking feature in Figures 2 and 3 is the range of discrepancies between models and data. For example, the largest difference between the Worthey model with parameters (Salpeter IMF, $`[\mathrm{Fe}/\mathrm{H}]=-1.50`$, age 16 Gyr) and the mean colors of clusters with $`-1.75\le [\mathrm{Fe}/\mathrm{H}]\le -1.25`$ is $`0.04^m`$ in $`U-V`$. The same models with $`[\mathrm{Fe}/\mathrm{H}]=-0.50`$ are well offset from the data at all colors except $`U-V`$; the largest offset is $`0.23^m`$ in $`V-K`$. To determine the best-fitting models, we quantify the overall goodness-of-fit for each model/cluster metallicity bin pair as:
$$F=\frac{\sum _k|\mathrm{\Delta }(X-V)_k|/\sigma _k^2}{\sum _k1/\sigma _k^2}$$
(1)
The color differences $`\mathrm{\Delta }(X-V)_k`$ are weighted by $`1/\sigma _k^2`$, where $`\sigma _k`$ are the standard errors in the mean colors of objects in the bin. Table 1 gives the $`\mathrm{\Delta }`$ and $`F`$ values for the best fitting models in each metallicity bin.
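A compact sketch of this statistic (Python; the array names and the example offsets are ours) may help make the weighting explicit.

```python
import numpy as np

def fit_statistic(delta, sigma):
    """Weighted mean absolute model-minus-cluster color offset, Eq. (1).

    delta : mean offsets Delta(X-V) over the bandpasses X
    sigma : standard errors of the corresponding mean cluster colors
    """
    delta = np.asarray(delta)
    w = 1.0 / np.asarray(sigma)**2
    return np.sum(np.abs(delta) * w) / np.sum(w)

# Example with made-up offsets for (U-V, B-V, V-R, V-I, V-K):
print(fit_statistic([-0.04, 0.05, 0.01, -0.02, 0.03],
                    [0.03, 0.02, 0.02, 0.03, 0.04]))
```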
## 3 Discussion
Table 1 shows that the best-fitting models fit the data quite well, with typical color offsets of $`0.02-0.03^m`$. The two bandpasses with the most significant offsets are $`U`$ and $`B`$: the models are too blue in $`U-V`$ and too red in $`B-V`$. Neither offset shows a clear trend with metallicity. The offsets are likely not due to systematics in the photometric system or in the extinction curve. While problems with the photometric systems might be expected in the $`R`$ and $`I`$ bands (due to conversion between the Johnson and Cousins $`RI`$ systems), both data and models use the well-defined Johnson $`UBV`$ system. Problems in the reddening curve also seem unlikely for the same reasons. We suspect that the offsets are more likely due to systematic errors in the models. The $`B-V`$ offset in particular is likely due to the flux libraries used. Both Worthey (1994) and Lejeune (1997) found their model $`B-V`$ colors to be $`0.04-0.06^m`$ too red compared to empirical solar-metallicity spectra, even after correcting to the empirical color-temperature scale. This suggests a possible problem with the stellar atmosphere models of Kurucz (1995), upon which both libraries are based.
The cause of the offset in $`U-V`$ is not as clear. This offset is actually worse than it appears: since we compute the $`U-V`$ colors for the BC and KFF models as $`(U-B)+(B-V)`$, the red $`B-V`$ colors compensate for some of the $`U-B`$ defect, which is actually larger than the defect in $`U-V`$. Worthey (1994) – whose models give $`U-V`$ directly – finds that his model $`U-V`$ is too blue compared to solar neighborhood stars and elliptical galaxies. Worthey cites problems with the $`U`$ fluxes from the stellar libraries as a possible cause: modeling the many blended atomic and molecular lines blueward of $`B`$ is difficult, and many of the necessary opacities are not well determined. This cannot be the only cause of model problems in $`U`$, since the BC and KFF models, which use the same stellar library, predict different $`U-V`$ colors. The treatment of the horizontal branch in the models is another possible source of problems in the $`U-V`$ colors because the HB emits most of the blue light. However, systematic problems with the model HB color (which depends on metallicity) would presumably produce a $`U-V`$ offset dependent on metallicity, which we do not observe. Observational error is another possible contributor to the $`U-V`$ offset, as many of the $`U-B`$ colors of the M31 clusters are poorly determined (see Table 3 of Barmby et al., 2000). Understanding the rest-frame $`U`$ flux of stellar populations becomes increasingly important when studying high-redshift galaxies and global star formation history, and further investigation of the models in this bandpass is clearly warranted.
A secondary result in Table 1 is that age determines which model best fits the data. Higher-metallicity cluster colors are best fit by 8 Gyr models, regardless of IMF. Lower-metallicity cluster colors ($`[\mathrm{Fe}/\mathrm{H}]_{\mathrm{bin}}\le -1.00`$) are best fit by 12 or 16 Gyr models. The best-fit age depends on the IMF for several of the models, but not in any systematic fashion. This result is consistent with the determinations of relative ages for Galactic clusters by Rosenberg et al. (1999). These authors determined relative ages of 35 Galactic globular clusters from a homogeneous set of $`V`$, $`V-I`$ color-magnitude diagrams. They compared theoretical isochrones with the observational CMDs to determine ages using two independent methods. They found that the clusters with $`[\mathrm{Fe}/\mathrm{H}]_{\mathrm{CG}}>-0.9`$ were $`\sim 17`$% younger than clusters with $`[\mathrm{Fe}/\mathrm{H}]_{\mathrm{CG}}<-1.2`$, with the intermediate-metallicity clusters showing a $`\sim 25`$% age dispersion. These results are model-dependent, as are ours, but the results’ similarity implies that either there is a real difference between metal-rich and metal-poor clusters or there is a systematic problem in the models in one of the metallicity regimes.
What possible systematic errors in our input data or comparison procedure could produce the result that the metal-rich clusters are younger? We redid the comparison procedure considering the clusters of each galaxy separately, and still found younger ages for the most metal-rich clusters. Although M31 has a greater proportion of the metal-rich clusters, younger ages are found for both M31 and Galactic metal-rich clusters. Cohen & Matthews (1994) suggest that the spectroscopic metallicities of the most metal-rich M31 clusters measured by Huchra et al. (1991) are systematically too high. If this is true, the clusters would appear too blue compared to old, higher-metallicity models, and the best-fit model would be younger. We compared the metal-rich M31 clusters to the Worthey $`[\mathrm{Fe}/\mathrm{H}]=-1.0`$ models, and the best-fitting model had age 16 Gyr. However, the goodness-of-fit was better for the young, metal-rich models than for the older, more metal-poor model, so we conclude that younger ages are still favored for these clusters. Overestimating the reddening of the metal-rich clusters would make the derived intrinsic colors too blue and yield younger ages. This seems unlikely, given that the color-metallicity relations for Galactic and M31 clusters match well throughout their metallicity range (see Barmby et al., 2000), and the methods of reddening determination for M31 and Galactic clusters are different.
If the detection of younger ages for metal-rich globular clusters is real, it has implications for galaxy formation. A range of GC ages implies that GC formation took place over an extended period of time. Conditions for GC formation were not particular to the early universe, an assertion supported by observations of ‘proto-globular’ clusters in present-day merging galaxies (e.g. Zepf et al., 1999). More precise knowledge of the distribution of cluster ages in each galaxy would be extremely useful in understanding cluster system formation. If the age distribution is continuous, the relation between age and metallicity might hold clues as to what factors controlled the cluster formation rate. If the age distribution is bimodal – with most clusters old and coeval and the remainder younger and coeval – then some event must have triggered the second episode of GC formation. Perhaps the younger clusters were stripped from or accreted along with satellite galaxies of M31 and the Galaxy.
## 4 Conclusions
Comparison of three sets of population synthesis models with integrated colors of M31 and Galactic globular clusters shows that the models reproduce the redder average cluster colors to within the observational uncertainties. The poorer agreement in $`U-V`$ and $`B-V`$ is likely due to systematic errors in the models. Younger-age models are required to best match the colors of the metal-rich clusters, consistent with the findings of Rosenberg et al. (1999) that the most metal-rich Galactic clusters are younger than the bulk of the globular cluster population. A range of ages for globular clusters implies that conditions for cluster formation were not restricted to the early universe. The cluster age distribution has important implications for galaxy and globular cluster system formation, and attempts to determine it more precisely are needed.
# Radio Properties of $`z>4`$ Optically-Selected Quasars
## 1 Introduction
Already in the late 1960s it was becoming apparent that the rapid increase in quasar comoving space densities with redshift did not continue beyond $`z\sim 2`$. Several authors suggested the existence of a redshift cutoff beyond which a real decrease in quasar densities occurs, unrelated to observational selection effects (e.g.,Sandage 1972). Numerous studies have now shown that this is indeed the case. For example, Schmidt, Schneider, & Gunn (1995a), applying the $`V/V_{\mathrm{max}}`$ test to a well-defined sample of 90 quasars detected by their Ly$`\alpha `$ emission in the Palomar Transit Grism Survey, conclude that the comoving quasar density decreases from $`z=2.75`$ to $`z=4.75`$. Warren, Hewett, & Osmer (1994) show that the quasar space density falls by a factor of a few between $`z=3.3`$ and $`z=4.0`$. Shaver et al. (1996b) show a similar decrease in space density exists for radio-loud quasars, implying that the decline is not simply due to obscuration by intervening galaxies.
An unresolved question, however, is how the decline compares between the optically-selected and radio-selected populations. Two definitions are generally used to demarcate radio-quiet and radio-loud quasars. One criterion is the radio-optical ratio $`R_{\mathrm{r-o}}`$ of the specific fluxes at restframe 6 cm (5 GHz) and 4400 Å (Kellerman et al. 1989); radio-loud sources typically have $`R_{\mathrm{r-o}}`$ in the range $`10`$–$`1000`$ while most radio-quiet sources have $`0.1<R_{\mathrm{r-o}}<1`$. However, Peacock, Miller, & Longair (1986) and Miller, Peacock, & Mead (1990) point out that $`R_{\mathrm{r-o}}`$ is physically meaningful as a discriminating parameter only if radio and optical luminosities are linearly correlated. This is apparently not the case; no linear correlation has been observed for radio-loud quasars (Stocke et al. 1992), and the fraction of optically-faint, optically-selected quasars which are radio-loud would be higher if radio and optical properties were linearly correlated (Peacock et al. 1986; Miller et al. 1990).
The second definition, which can result in ambiguous taxonomy for some quasars, is to divide the populations at some restframe radio luminosity (Miller et al. 1990; Schneider et al. 1992). Different authors use slightly different luminosities to discriminate the populations; for what follows, we identify radio-loudness using the Gregg et al. (1996) cutoff value for the 1.4 GHz specific luminosity, $`L_{1.4\mathrm{GHz}}=10^{32.5}h_{50}^2`$ $`\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$ ($`10^{24}h_{50}^2`$ $`\mathrm{W}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}`$).
Differential evolution between the radio-quiet and radio-loud quasar populations would be a fascinating result, which could change (or challenge) our understanding of different types of active galactic nuclei (AGN). For example, Wilson & Colbert (1995) propose a model whereby radio-loud AGN are products of coalescing supermassive black holes in galaxy mergers; such a process would result in rapidly spinning black holes, capable of generating highly-collimated jets and powerful radio sources. Such a model could naturally explain a lag (if there is indeed one) between the appearance of powerful radio-loud quasars in the early Universe and the more common radio-quiet quasars.
Several groups have reported on followup of optically-selected quasar surveys at radio frequencies (e.g.,Kellerman et al. 1989; Miller et al. 1990; Schneider et al. 1992; Visnovsky et al. 1992; Hooper et al. 1995; Schmidt et al. 1995b; Goldschmidt et al. 1999). Typically between 10% and 40% of the quasars are detected in the radio. One notable exception is Kellerman et al. (1989) who, using the Very Large Array (VLA), detect more than 80% of 114 quasars observed from the Palomar-Green Bright Quasar Survey (BQS; Schmidt & Green 1983), a relatively low-redshift sample. Visnovsky et al. (1992) report on radio observations of 124 quasars in the redshift range $`1<z<3`$ selected from the Large Bright Quasar Survey (LBQS; Hewett, Foltz, & Chaffee 1995). The sample, chosen to complement the predominantly low-redshift BQS, leads them to conclude that the fraction of radio-loud quasars decreases with increasing redshift. A similar conclusion is reached by Schneider et al. (1992), who report on 5 GHz VLA observations of 22 optically-selected quasars at $`z>3.1`$. La Franca et al. (1994), combining the results of several radio followup studies of optically-selected quasars, also concludes that the radio-loud fraction of optically-selected quasars decreases with increasing redshift; however, this conclusion is statistically robust only when the BQS sample is included.
Hooper et al. (1995) report on 8.4 GHz VLA observations of an additional 132 LBQS quasars, supplementing the sample discussed by Visnovsky et al. (1992). Contrary to the results discussed above, Hooper et al. (1995) find that the radio-loud quasar fraction of the LBQS exhibits a peak around $`z\sim 1`$, is constant at $`\sim `$10% until $`z\sim 2.5`$, and then increases substantially to $`z=3.4`$, the highest redshift in the LBQS. They suggest the enhanced radio-loud fraction may be the result of an increased radio-loud fraction at bright optical absolute magnitudes ($`M_B\lesssim -27.4`$).
The most recent thorough analysis of the issue is presented by Goldschmidt et al. (1999), who report on 5 GHz VLA observations of 87 optically-selected quasars from the Edinburgh Quasar Survey. They combine these results with all published radio surveys of optically-selected quasar samples other than the BQS sample, which has been shown to be incomplete by a factor of three (Goldschmidt et al. 1992). Miller, Rawlings, & Saunders (1993) suggest that this incompleteness is not random with respect to radio properties, implying that it is inappropriate to use the BQS for determinations of the radio-loud fraction. Goldschmidt et al. (1999) further fortify their sample by correlating several large quasar surveys with two recent, large-area, deep radio surveys: Faint Images of the Radio Sky at Twenty centimeters (FIRST; Becker, White, & Helfand 1995) and the NRAO VLA Sky Survey (NVSS; Condon et al. 1998). This work samples the optical luminosity–redshift plane more thoroughly than any previous analysis. Considering only quasars in the absolute magnitude interval $`-25.5\ge M_B\ge -26.5`$, they find hints that the radio-loud fraction decreases with increasing redshift, though they note that the result is not statistically significant in the narrow $`M_B`$ range considered.
In short, the existence of differential evolution between the radio-quiet and radio-loud quasar populations remains uncertain. Some researchers find evidence that the radio-loud fraction of optically-selected quasars decreases with increasing redshift (e.g.,Visnovsky et al. 1992), while others find no (or only marginal) evidence for evolution (e.g.,La Franca et al. 1994; Goldschmidt et al. 1999), and yet other researchers find that the radio-loud fraction increases with increasing redshift (e.g.,Hooper et al. 1995). We attempt to re-address the issue by providing a more accurate measure of the radio-loud fraction at $`z>4`$. In §2 we report on 5 GHz VLA observations of a sample of 32 $`z>4`$ optically-selected quasars. The Schneider et al. (1992) 5 GHz VLA study includes 13 $`z>4`$ quasars; we include these sources in our analysis. In §3 we report on correlations of larger, more recent surveys of $`z>4`$ quasars with deep, large-area radio surveys. §4 sets up the methodology for studying the radio-loud fraction, discussed in §5.
Throughout we adopt the cosmology consistent with previous work in this field: an Einstein-de Sitter cosmology with $`H_0=50h_{50}`$ $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }=1`$, and vanishing cosmological constant, $`\mathrm{\Lambda }=0`$.
## 2 Observations
### 2.1 VLA Sample Selection
The 32 $`z>4`$ quasars which comprise our VLA sample are listed in Table 1. The sample is from the Palomar multicolor survey (Kennefick, Djorgovski, & de Carvalho 1995; Djorgovski et al. 1999) and represents all $`z>4`$ quasars found from that survey at the time of the radio observations. Six of the quasars are equatorial objects found previously in a similar search by the Automated Plate Machine (APM) group (Irwin, McMahon, & Hazard 1991) which fall within the Palomar multicolor survey selection criteria. These 32 sources comprise a statistically complete and well-understood sample of optically-selected $`z>4`$ quasars; they should be an unbiased set as far as the radio properties are concerned.
Calculation of the absolute $`B`$ magnitudes listed in Table 1 is discussed in §4.
### 2.2 VLA Observations
The VLA observations were made on UT 1997 March 14, from 01:30 to 17:30 LST. The targets were observed at 4835 and 4885 MHz. Each frequency was observed with 50 MHz bandwidth and both polarizations. The configuration was ‘B’, with a maximum baseline of approximately 10 km and a resolution of $`1\stackrel{\prime \prime }{.}3`$. Observations were made in nodding mode, in which we oscillated between 40 $`s`$ observations of a calibrator and 240 $`s`$ observations on a target. This procedure allowed us to better track and remove atmospheric fluctuations. Calibration and imaging followed standard procedures. Each target was observed for $`3045`$ min, including calibration. The rms noise was typically between 30 and 60 $`\mu `$Jy.
We detect four quasars from this sample in the VLA imaging, where a match is liberally defined to correspond to a radio source lying within 10″ of the optical source. In fact, the four matches all have optical-to-radio positional differences less than 2″, implying that the identifications are unlikely to be spurious. Three sources represent newly identified high-redshift, radio-loud quasars. The quasar BRI1050-0000 was previously identified as a $`10.6\pm 0.2`$ mJy source at 4.8 GHz by McMahon et al. (1994). The source PSS1618+4125 has a 4$`\sigma `$ radio detection 16$`\stackrel{\prime \prime }{.}`$0 away from the optical identification; we do not consider this a positive match.
## 3 Optically-Selected $`z>4`$ Quasars in FIRST and NVSS
To augment the targeted VLA sample discussed above, we have correlated an updated list of all 134 $`z>4`$ quasars known to us in mid-1999 with two recent, large-area radio surveys. The FIRST survey (Becker et al. 1995), which is still in production mode, is a radio map of the northern celestial sphere at 21 cm (1.4 GHz), reaching a typical limiting flux density of 1.0 mJy (5$`\sigma `$). The 1.4 GHz NVSS survey (Condon et al. 1998) covers a larger area to a shallower flux density limit; the 5$`\sigma `$ limit of NVSS is 2.25 mJy. For luminosity distance $`d_L`$ and radio spectral index $`\alpha `$ ($`S_\nu \propto \nu ^\alpha `$), the restframe specific luminosity $`L_\nu =4\pi d_L^2S_\nu /(1+z)^{1+\alpha }`$, where both $`L_\nu `$ and $`S_\nu `$ are measured at the same frequency. At $`z=4`$ and for a typical quasar spectral index of $`\alpha =-0.5`$, the FIRST survey reaches a 3$`\sigma `$ limiting 1.4 GHz specific luminosity of $`\mathrm{log}L_{1.4}(h_{50}^2\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1})=32.6`$. The comparable limit for the NVSS survey is $`\mathrm{log}L_{1.4}(h_{50}^2\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1})=32.9`$. Therefore, using the radio luminosity definition of radio loudness, these surveys incompletely sample radio-loud quasars at high redshift.
Of the 134 $`z>4`$ quasars, 51 reside in portions of the celestial sphere observed thus far by FIRST. We deemed a radio detection in FIRST to be associated with a high-redshift quasar if it lay within 10″ of the quasar optical position. Six high-redshift, radio-loud quasars were identified, of which four were previously known (see Table 2). One, GB1428+4217 (Hook & McMahon 1998), is the highest-redshift radio-loud quasar currently known ($`z=4.72`$); it was initially identified on the basis of being a strong, flat-spectrum radio source coincident with an unresolved, red, stellar optical source. The three sources with VLA nomenclature were initially identified by correlating the FIRST survey with the digitized 2nd generation Palomar sky survey (Stern et al. 1999). Two new, high-redshift, radio-loud quasars have been identified: BRI0151-0025 at $`z=4.20`$ and PSS1057+3409 at $`z=4.12`$. The former was also identified in the targeted 5 GHz survey discussed above (§2), implying a radio spectral index $`\alpha =-0.41`$. The latter was undetected in the targeted 5 GHz survey, implying an unusually steep spectral index, $`\alpha <-2.05`$ (3$`\sigma `$), or significant variability on the time scale corresponding to the epochs of the two radio surveys.
We have also correlated the list of $`z>4`$ quasars with the NVSS catalog. 129 of the 134 quasars are in portions of the sky surveyed by NVSS. A radio detection in NVSS was deemed associated with a high-redshift quasar if it lay within 30″ of the quasar optical position. We found a total of 14 radio identifications (see Table 3); 6 of these quasars were initially discovered on the basis of their radio emission (Hook et al. 1995; Hook & McMahon 1998; Stern et al. 1999) and three were previously known to be radio-loud (Schneider et al. 1992; McMahon et al. 1994; Zickgraf et al. 1997). Five new high-redshift, radio-loud quasars have been identified, of which two were identified in the targeted 5 GHz survey (§2): BRI0150-0025 ($`\alpha =-0.77`$) and BRI1050-0000 ($`\alpha =-0.04`$).
Although 30″ is a rather large matching radius, the surface density of sources in NVSS is $`\sim 60`$ deg<sup>-2</sup>, so that the probability of a chance coincidence of an NVSS source with an optical source is only 0.013. We therefore expect that 1.7 of our radio identifications are spurious. We note that of the four radio–optical offsets larger than 10″, the three largest correspond to known radio-loud quasars, illustrating that the 30″ search radius was appropriate. The fourth largest radio–optical offset belongs to RXJ1759.4+6638, an X-ray selected quasar identified from deep ROSAT observations near the North Ecliptic Pole (Henry et al. 1994). NVSS reports a 3.6 mJy source 11$`\stackrel{\prime \prime }{.}`$6 away. Sensitive VLA 1.5 GHz C-array mapping of this region by Kollgaard et al. (1994) finds no object brighter than 0.5 mJy associated with the source, apparently at odds with the NVSS results.
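The chance-coincidence estimate is simple enough to verify inline; the following sketch (Python) reproduces the quoted numbers.

```python
import math

n_density = 60.0            # NVSS source surface density, deg^-2
radius_deg = 30.0 / 3600.0  # 30 arcsec search radius, in degrees
n_quasars = 129             # z > 4 quasars covered by NVSS

p_chance = n_density * math.pi * radius_deg**2
print(p_chance)              # ~0.013 per quasar
print(n_quasars * p_chance)  # ~1.7 expected spurious matches
```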
We are aware of only one $`z>4`$ radio-loud quasar not included by these exercises: PKS1251-407 at $`z=4.46`$ (Shaver, Wall, & Kellerman 1996a), which resides too far south to be within the VLA surveys.
The list of $`z>4`$ quasars comes from diverse sources, limiting the significance of these detections to studies of the quasar luminosity function at high redshift. However, 107 of the 129 $`z>4`$ quasars were initially identified from optical surveys, either the Palomar multicolor survey (Djorgovski et al. 1999), the APM survey (Irwin et al. 1991), or the long-term program of Schneider et al. (1989, 1997). These surveys should be largely unbiased with respect to radio properties. In the following analysis, FIRST/NVSS detections of these sources will provide lower limits to the radio-loud quasar fraction at high-redshift.
## 4 Results
### 4.1 1.4 GHz Specific Luminosity
The specific luminosity at 1.4 GHz is straightforward to calculate. For $`S_\nu \propto \nu ^\alpha `$ at radio frequencies,
$$L_\nu (\nu _1)=4\pi d_L^2\frac{1}{(1+z)^{1+\alpha }}\left(\frac{\nu _1}{\nu _2}\right)^\alpha S_\nu (\nu _2)$$
(1)
where $`L_\nu (\nu _1)`$ is the specific luminosity at rest-frame frequency $`\nu _1`$ and $`S_\nu (\nu _2)`$ is the flux density at observed frequency $`\nu _2`$. For the adopted cosmology, chosen to be consistent with previous work in this field, the luminosity distance
$$d_L=(2c/H_0)(1+z-\sqrt{1+z}).$$
(2)
In Fig. 1 we present the results of the radio surveys, plotting $`L_{1.4\mathrm{GHz}}`$ against redshift, with the radio luminosity cutoff between radio-loud and radio-quiet quasars indicated by a horizontal dashed line. Again, consistent with previous work in this field, we take $`\alpha =-0.5`$; this is a typical value for radio-loud quasars.
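Eqs. (1) and (2) are easy to package into a helper; the sketch below (Python) computes the restframe 1.4 GHz specific luminosity from an observed 1.4 GHz flux density, assuming the same cosmology and $`\alpha =-0.5`$ used in the text.

```python
import math

C_KM_S = 2.998e5   # speed of light, km/s
MPC_CM = 3.086e24  # centimeters per Mpc

def lum_distance_cm(z, h0=50.0):
    """Eq. (2): Einstein-de Sitter luminosity distance, in cm."""
    d_mpc = (2.0 * C_KM_S / h0) * (1.0 + z - math.sqrt(1.0 + z))
    return d_mpc * MPC_CM

def l_nu_1p4ghz(s_mjy, z, alpha=-0.5, h0=50.0):
    """Eq. (1) with nu_1 = nu_2 = 1.4 GHz: restframe specific
    luminosity in erg/s/Hz from the observed flux density in mJy."""
    s_cgs = s_mjy * 1e-26          # mJy -> erg/s/cm^2/Hz
    d_l = lum_distance_cm(z, h0)
    return 4.0 * math.pi * d_l**2 * s_cgs / (1.0 + z)**(1.0 + alpha)

# Example: the FIRST ~3-sigma limit (0.6 mJy) at z = 4
print(math.log10(l_nu_1p4ghz(0.6, 4.0)))   # ~32.5, near the quoted 32.6
```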
### 4.2 Absolute $`B`$ Magnitude
In order to compare our results at $`z>4`$ with previous results, we must transform the apparent $`r`$ magnitude, $`m_r`$ ($`\lambda _{\mathrm{obs}}\sim 6400`$ Å), to a restframe absolute $`B`$ magnitude, $`M_B`$ ($`\lambda _{\mathrm{rest}}\sim 4400`$ Å). We follow the methodology of Kennefick et al. (1995), in which a model quasar spectrum is used to relate $`m_r`$ to the apparent magnitude at wavelength $`1450(1+z)`$ Å. A power-law optical spectrum is then assumed longward of Ly$`\alpha `$ to relate this to $`M_B`$.
The absolute magnitude $`M`$ is given by (e.g.,Hogg 1999),
$$M_{\mathrm{AB}}(\lambda )=m_{\mathrm{AB}}(\lambda )-5\mathrm{log}(d_L/10\mathrm{pc})+2.5\mathrm{log}\left[(1+z)\frac{L_\nu [\lambda /(1+z)]}{L_\nu (\lambda )}\right],$$
(3)
where both the apparent magnitude $`m`$ and the absolute magnitude $`M`$ are measured at the same wavelength. The middle term on the right is called the distance modulus and the last term is the k-correction. Magnitudes are referred to the $`AB`$ system (Oke 1974), defined as $`m_{\mathrm{AB}}(\lambda )\equiv -2.5\mathrm{log}S_\nu (\lambda )-48.60`$, where apparent flux density $`S_\nu `$ is measured in units of $`\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$. Absolute magnitudes at differing wavelengths are related by
$$M_{\mathrm{AB}}(\lambda _1)-M_{\mathrm{AB}}(\lambda _2)=-2.5\mathrm{log}\left[\frac{L_\nu (\lambda _1)}{L_\nu (\lambda _2)}\right].$$
(4)
Setting $`\lambda _1=4400`$ Å and $`\lambda _2=1450(1+z)`$ Å, the above equations are combined to yield
$$M_{\mathrm{AB}}(4400)=m_{\mathrm{AB}}[1450(1+z)]-5\mathrm{log}(d_L/10\mathrm{pc})+2.5\mathrm{log}\left[(1+z)\frac{L_\nu (1450)}{L_\nu (4400)}\right].$$
(5)
We assume a power-law optical spectrum longward of Ly$`\alpha `$, $`L_\nu \propto \nu ^{\alpha _{\mathrm{opt}}}`$. For a standard quasar optical spectral index $`\alpha _{\mathrm{opt}}=-0.5`$ (e.g.,Richstone & Schmidt 1980; Schneider et al. 1992), consistent with previous work in this field, the offset between Vega-based $`M_B`$ and $`AB`$-system $`M_{\mathrm{AB}}(4400)`$ is given by Kennefick et al. (1995),
$$M_B=M_{\mathrm{AB}}(4400)+0.12.$$
(6)
We note that the adopted optical spectral index is likely too flat. Preliminary results from near-infrared imaging of $`z>4`$ quasars to target the rest-frame $`B`$ show substantial scatter in spectral slopes, but with an average slope slightly steeper than the standard (lower-redshift) value (Djorgovski et al. , in preparation). The final expression relating $`M_B`$ to $`m_{\mathrm{AB}}[1450(1+z)]`$ is then
$$M_B=m_{\mathrm{AB}}[1450(1+z)]-5\mathrm{log}(d_L/10\mathrm{pc})+2.5\mathrm{log}(1+z)+1.21\alpha _{\mathrm{opt}}+0.12.$$
(7)
Our observable is the (Vega-based) $`r`$ magnitude, $`m_r`$. An analytic expression relating the apparent magnitude at $`1450(1+z)`$ Å to $`m_r`$ is not possible for the redshift range we are considering as Ly$`\alpha `$ emission and the (redshift-dependent) Ly$`\alpha `$ forest traverse the $`r`$-band for $`4<z<5`$. Instead we compute this relation, or k-correction,
$$k\equiv m_{\mathrm{AB}}[1450(1+z)]-m_{\mathrm{AB}}(r)$$
(8)
using the Francis et al. (1991) composite (LBQS) quasar spectrum modified by the Madau (1995) model of the hydrogen opacity of the Universe. We restrict this calculation to the Ly$`\alpha `$ forest (ignoring the other Lyman series absorptions). For $`\lambda _{\mathrm{obs}}<1216(1+z)`$ Å, Madau (1995) finds the optical depth of the intergalactic medium is well-represented as the sum of Ly$`\alpha `$-forest absorptions and hydrogen absorption associated with metal line systems: respectively,
$$\tau _{\mathrm{eff}}=0.0036\left(\frac{\lambda _{\mathrm{obs}}}{1216}\right)^{3.46}+0.0017\left(\frac{\lambda _{\mathrm{obs}}}{1216}\right)^{1.68}.$$
(9)
This model well describes the observed flux decrements at the redshifts considered.
We assume that the Ly$`\alpha `$ forest is sparse in the LBQS composite, relative to the thick jungle we are calculating at $`z\sim 4`$, and scale the LBQS composite by $`e^{-\tau _{\mathrm{eff}}}`$ shortward of Ly$`\alpha `$. The k-correction is calculated by convolving the resultant, redshift-dependent model spectra with the Gunn $`r`$ filter (see Fig. 2). We show the same curve for the Kron $`R`$ filter as well; these k-corrections match reasonably well with the calculations of Kennefick, Djorgovski, & Meylan (1996) for a different model quasar spectrum and different parameterization of the Ly$`\alpha `$ forest. At the desired accuracy of these calculations, it is appropriate to adopt the flat-spectrum (in $`S_\nu `$) relation between the $`AB`$ and Vega-based $`r`$ magnitudes, $`m_r-m_{\mathrm{AB}}(r)\approx -0.21`$, so that our final expression relating $`M_B`$ to the observed $`m_r`$ is
$$M_B=m_r-5\mathrm{log}(d_L/10\mathrm{pc})+2.5\mathrm{log}(1+z)+1.21\alpha _{\mathrm{opt}}+k_{\mathrm{eff}},$$
(10)
where $`k_{\mathrm{eff}}=k+0.33`$.
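To make the chain from $`m_r`$ to $`M_B`$ concrete, here is a compact sketch (Python). It implements the Madau (1995) effective opacity of Eq. (9) and the bookkeeping of Eq. (10); the k-correction $`k`$ itself requires the Francis et al. (1991) composite spectrum and the Gunn $`r`$ filter curve (Fig. 2), so it is taken as an input rather than recomputed, and the example inputs below are purely illustrative.

```python
import math

def tau_eff(lam_obs_ang):
    """Eq. (9): Madau (1995) effective optical depth blueward of Ly-alpha
    (Ly-alpha forest plus hydrogen in metal-line systems)."""
    x = lam_obs_ang / 1216.0
    return 0.0036 * x**3.46 + 0.0017 * x**1.68

def absolute_b_mag(m_r, z, k, alpha_opt=-0.5, h0=50.0):
    """Eq. (10): absolute B magnitude from the observed Gunn r magnitude.
    k is the k-correction of Eq. (8), read off a curve like Fig. 2."""
    d_mpc = (2.0 * 2.998e5 / h0) * (1.0 + z - math.sqrt(1.0 + z))
    dist_mod = 5.0 * math.log10(d_mpc * 1e6 / 10.0)   # d_L converted to pc
    k_eff = k + 0.33
    return m_r - dist_mod + 2.5 * math.log10(1.0 + z) + 1.21 * alpha_opt + k_eff

# Illustrative (made-up) inputs: m_r = 19.0, z = 4.2, k = -0.8
print(absolute_b_mag(19.0, 4.2, -0.8))   # ~ -28, a luminous quasar
```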
This method of determining $`M_B`$ is slightly different from the approach used by Schneider et al. (1992) in their VLA followup of 22 optically-selected quasars at $`z>3.1`$. Schneider et al. (1992) use the extinction-corrected spectrophotometric flux density at $`1450\times (1+z)`$ Å and assume an optical spectral energy index of $`-0.5`$ to calculate the absolute $`B`$ magnitude. We have recalculated $`M_B`$ for their sample using our method. The root-mean-square (rms) difference between the methods is 0.30 mag for the 13 quasars above $`z=4`$ in their sample. If we only consider the nine quasars at $`4.0<z<4.4`$, the rms difference between the calculated absolute $`B`$-band magnitudes is 0.23 mag. This is comparable to the photometric uncertainties. In what follows, we use the revised $`M_B`$ for the Schneider et al. (1992) sample. Absolute $`B`$ magnitudes should be correct to $`\sim 0.3`$ magnitudes.
### 4.3 Coverage of the Optical Luminosity–Redshift Plane
Fig. 3 summarizes the location of the $`z>4`$ quasars in the optical luminosity–redshift plane. Consonant with their extreme distances, these quasars have extremely bright absolute magnitudes, $`M_B\lesssim -26`$. To probe evolution in the radio-loud quasar fraction, we require comparison with a sample of comparably luminous quasars at lower redshift. Fig. 4 illustrates such a sample: we have augmented the $`z>4`$ quasars with several lower-redshift surveys from the literature, summarized below. As before, k-corrections have been calculated assuming that both the radio spectral index $`\alpha `$ and the optical spectral index $`\alpha _{\mathrm{opt}}`$ are equal to $`-0.5`$. We do not include radio surveys of optically-selected quasars which have few objects at our target optical luminosity (e.g., the Edinburgh quasar sample considered in Goldschmidt et al. 1999).
Miller sample: Miller et al. (1990) report on sensitive 5 GHz VLA observations of 105 quasars at $`1.8<z<2.5`$. They provide apparent magnitudes at the restframe wavelength of 1475 Å. Following La Franca et al. (1994) and Goldschmidt et al. (1999), we transform to $`B`$ using $`m_B=m_{1475}+0.23`$; this is correct at the average redshift of the sample for the assumed optical spectral index. The typical limiting flux density of this survey is $`S_{1.4\mathrm{GHz}}=0.5`$ mJy ($`3\sigma `$). For $`z\sim 2`$, this translates to a radio luminosity of $`L_{1.4\mathrm{GHz}}\sim 1.5\times 10^{32}h_{50}^2`$ $`\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$, well below the radio-loud cutoff.
La Franca sample: La Franca et al. (1994) report on 5 GHz VLA observations of 23 optically-selected quasars with $`B<19.4`$ at intermediate redshift ($`0.8<z<3.4`$). The average 3$`\sigma `$ limits to detections are $`\sim 0.12`$ mJy, corresponding to $`L_{1.4\mathrm{GHz}}\sim 8\times 10^{31}h_{50}^2`$ $`\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$ at $`z\sim 3`$. This is well below the radio-loud cutoff. We use the absolute luminosities provided in the paper.
LBQS sample: 256 quasars from the LBQS have been observed at 8.4 GHz with the VLA by Visnovsky et al. (1992) and Hooper et al. (1995). This large survey, sampling $`0.2\lesssim z\lesssim 3.5`$ and $`-22.5\gtrsim M_B\gtrsim -29`$, suffers from strongly correlated luminosity and redshift, as expected for a flux-limited sample: at a given redshift, $`M_B`$ only spans $`\sim 1.5`$ magnitudes. We use the absolute magnitudes provided in the papers. The median 3$`\sigma `$ noise limit of $`S_{8.4\mathrm{GHz}}=0.29`$ mJy corresponds to $`L_{1.4\mathrm{GHz}}\sim 2.5\times 10^{32}h_{50}^2`$ $`\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1}`$, again below the radio-loud cutoff. We do not include the additional 103 LBQS quasars with 8.4 GHz VLA observations discussed in Hooper et al. (1996), as that sample comprises largely optically less-luminous, lower-redshift quasars.
Fig. 5 illustrates the location of the quasars in $`M_B`$–$`L_{1.4\mathrm{GHz}}`$ space. For the purposes of the following analysis, we assume non-detections have radio fluxes equal to their 3$`\sigma `$ noise value. We omit the new $`z>4`$ radio-loud quasars detected by FIRST and NVSS in the following analysis, as those surveys are insufficiently sensitive to reach the radio-loud cutoff at $`z>4`$. The detection limits of the other surveys are sufficiently deep that few non-detections are above the radio-loud/radio-quiet boundary; we conservatively classify those few sources as radio-loud below. Our total sample is 428 quasars spanning $`0.2<z<4.7`$, $`-22.7>M_B>-28.7`$, and $`30.08<\mathrm{log}L_{1.4\mathrm{GHz}}(h_{50}^2\mathrm{ergs}\mathrm{s}^{-1}\mathrm{Hz}^{-1})<35.7`$.
## 5 Radio-Loud Fraction
### 5.1 Radio-Loud Fraction as a Function of Optical Luminosity
We first consider whether we detect optical luminosity dependence in the radio-loud fraction. We consider two redshift ranges where we have large samples of quasars, and divide the sample into absolute magnitude bins where the quasars are approximately smoothly distributed. Fig. 6 shows this fraction for each redshift bin. Error bars shown are the square root of the variance, $`f(1-f)/N`$, where $`f`$ is the radio-loud fraction and $`N`$ is the number of quasars in the bin considered (e.g., Schneider et al. 1992). The impression from Fig. 6 is that for a given redshift bin, the radio-loud fraction is independent of optical luminosity. This result stands in contrast to the analysis of Goldschmidt et al. (1999), who find that the radio-loud fraction increases with luminosity for each redshift bin considered. Consideration of the radio-loud fraction plotted in Fig. 7 of their paper suggests that the optical luminosity dependence claimed at $`1.3<z<2.5`$ depends largely on the poorly measured radio-loud fraction at $`M_B=-28`$. However, at $`0.3<z<1.3`$, their data convincingly show optical luminosity dependence in the radio-loud fraction. Comparing the ordinate between the two panels of Fig. 6 suggests that the radio-loud fraction remains approximately constant with redshift; we consider this next.
### 5.2 Radio-Loud Fraction as a Function of Redshift
Since there is little evidence of optical luminosity dependence of the radio-loud fraction in our data set, we decrease our errors by considering the radio-loud fraction in a larger absolute magnitude range, $`-28<M_B<-26`$. Fig. 7 shows the results of this analysis, with error bars calculated as above. At $`1.75<z<2.5`$, we classify 20 out of the 153 quasars in the optical luminosity range considered as radio-loud, corresponding to $`13.1\pm 2.7`$% of the quasars. At $`z>4`$, we classify 4 out of the 34 optically-selected quasars as radio-loud (considering only the Schneider et al. (1992) sample and our targeted VLA survey). This corresponds to $`11.8\pm 5.5`$% of the quasars being radio-loud. No evolution in the radio-loud fraction is detected.
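The fraction and error-bar arithmetic can be checked in a few lines (a minimal sketch of our own, using the binomial variance $`f(1-f)/N`$ defined above):

```python
from math import sqrt

def radio_loud_fraction(n_loud, n_total):
    # binomial fraction with error bar sqrt(f(1 - f)/N)
    f = n_loud / n_total
    return f, sqrt(f * (1.0 - f) / n_total)

for label, n_loud, n_total in [("1.75 < z < 2.5", 20, 153), ("z > 4", 4, 34)]:
    f, err = radio_loud_fraction(n_loud, n_total)
    print(f"{label}: {100 * f:.1f} +/- {100 * err:.1f} %")
# -> 13.1 +/- 2.7 % and 11.8 +/- 5.5 %, matching the values quoted above
```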
The FIRST/NVSS detections of optically-selected quasars at $`z>4`$ also provide a lower limit to the radio-loud fraction at early cosmic epoch. Of the 107 $`z>4`$ optically-selected quasars, 79 have optical luminosities $`-26>M_B>-28`$. From this restricted sample, 35 overlap with FIRST, of which one was detected, implying a statistically weak radio-loud fraction $`>3`$%. All 79 quasars in the luminosity range considered overlap with NVSS; four were detected, implying a radio-loud fraction $`>5`$% at $`z>4`$.
## 6 Conclusions
We report on two programs to study the radio properties of optically-selected quasars at high redshift. First, we consider deep, targeted 5 GHz imaging of 32 $`z>4`$ quasars selected from the Palomar multicolor quasar survey. Four sources are detected. We also correlate a comprehensive list of 134 $`z>4`$ quasars, comprising all such sources we were aware of as of mid-1999, with two deep 1.4 GHz VLA sky surveys. We find five new radio-loud quasars, not including one quasar identified in the targeted program. In total, we report on eight new radio-loud quasars at $`z>4`$; only seven such sources are in the literature currently.
We use this new census to probe the evolution of the radio-loud fraction with redshift. We find that, for $`-25\gtrsim M_B\gtrsim -28`$ and $`2\lesssim z\lesssim 5`$, the radio-loud fraction is independent of optical luminosity. We also find no evidence for the radio-loud fraction depending on redshift. If the conventional wisdom that radio-loud AGN are preferentially identified with early-type galaxies remains robust at high redshift, this result could have implications regarding the formation epoch of late-type versus early-type galaxies. In hierarchical models of galaxy formation, one expects the late-type (less massive) systems to form first. Mergers are required to form the early-type (more massive) systems. Eventually, at high enough redshift, one would then expect the radio-loud fraction of AGN to fall precipitously. Our results show this epoch lies beyond $`z\sim 4`$, providing further evidence for an early formation epoch for early-type galaxies.
###### Acknowledgements.
We acknowledge the efforts of the POSS-II and DPOSS teams, which produced the PSS sample used in this work, and, in particular, N. Weir, J. Kennefick, R. Gal, S. Odewahn, R. Brunner, V. Desai, and J. Darling. The DPOSS project is supported by a generous grant from the Norris Foundation. The VLA of the National Radio Astronomy Observatory is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation. We thank Hyron Spinrad and Pippa Goldschmidt for useful comments and Chris Fassnacht for interesting discussion. DS acknowledges support from IGPP/Livermore grant IGPP/LLNL 98-AP017. SGD acknowledges partial support from the Bressler Foundation. |
# $`^{27}`$Al Impurity-Satellite NMR and Non-Fermi-Liquid Behavior in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>
## I Introduction
The magnetism of heavy-fermion $`f`$-electron materials is quenched at low temperatures by conduction-electron (Kondo) screening. A many-body ground state is formed that has traditionally been described within Landau’s Fermi-liquid theory both for dilute alloys (the single-impurity Kondo problem) and for concentrated “Kondo lattice” systems (with or without lattice disorder). But the thermodynamic and transport properties of a number of $`f`$-electron “heavy-fermion” metals and alloys do not behave as predicted by Fermi-liquid theory. The inapplicability of this picture is signaled by (a) weak power-law or logarithmic divergences of the specific heat Sommerfeld coefficient $`\gamma (T)=C(T)/T`$ and the magnetic susceptibility $`\chi (T)`$, both of which are constant in Fermi-liquid theory, and (b) a temperature dependence of the electrical resistivity which is weaker than the $`T^2`$ prediction of Fermi-liquid theory. Better theoretical and experimental understanding of these so-called non-Fermi-liquid (NFL) systems has been the goal of a considerable amount of research in recent years.
Two broad classes of theoretical explanation of NFL behavior have emerged: (a) proximity to a zero-temperature quantum critical point (QCP), of either a single-ion or a cooperative nature, and (b) the effect of lattice disorder on the Kondo properties of the $`f`$ ions. In the QCP picture the NFL behavior is due to quantum fluctuations associated with a critical point at zero temperature. This mechanism is operative in uniform systems, since disorder is not required. In disorder-driven scenarios the effect of structural disorder on $`f`$-electron many-body effects such as the Kondo effect and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction between $`f`$ ions produces a broad inhomogeneous distribution of the local susceptibilities $`\chi _j`$ associated with the $`f`$ ions. Uncompensated Kondo ions far from the singlet ground state, which are not described by Fermi-liquid theory, give rise to large values of $`\chi _j`$ and the NFL properties of the material. QCP and disorder-driven mechanisms need not be mutually exclusive, however, since critical fluctuations of a disordered system might also be involved in NFL behavior.
The single-ion “Kondo disorder” picture, in which the uncompensated ions are assumed not to interact, was developed to explain nuclear magnetic resonance (NMR) experiments in the NFL alloys UCu<sub>5-x</sub>Pd<sub>x</sub>, $`x=1.0`$ and 1.5, that revealed wide distributions of frequency shifts reflecting the required susceptibility inhomogeneity. In the Kondo disorder model structural disorder gives rise to a wide distribution of local Kondo temperatures $`(T_K)_j`$; this leads to a distribution of $`\chi _j`$ which becomes correspondingly wide at low temperatures. The “Griffiths-phase” model of Castro Neto et al. takes into account RKKY interactions between uncompensated $`f`$-ion moments in the disordered system. These RKKY interactions couple the uncompensated moments into clusters, the thermal behavior of which can be described in terms of Griffiths singularities associated with the distribution of cluster sizes.
Both the Kondo-disorder and Griffiths-phase theories make definite predictions of the inhomogeneous spread in magnetic susceptibility that is capable of being measured by magnetic resonance techniques such as NMR. The present paper compares these predictions with $`^{27}`$Al NMR spectra in unaligned and field-aligned powder samples of the NFL alloys U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>, $`x=0.7`$, 0.8, and 0.9.
The isostructural alloy series U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> exhibits NFL behavior for intermediate to high thorium concentrations. The phase diagram of U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> is shown in Fig. 1.
The heavy-fermion end compound UPd<sub>2</sub>Al<sub>3</sub> exhibits coexistence of antiferromagnetic order ($`T_N=14`$ K) and superconductivity ($`T_c=2`$ K) that persists for low Th concentrations ($`x<0.2`$). For $`x>0.6`$ the electrical resistivity, specific heat, and magnetic susceptibility are all indicative of NFL behavior.
The RKKY coupling between $`^{27}`$Al nuclei and U-ion spins affects the $`^{27}`$Al NMR in a number of ways, of which the most important for our purposes is the paramagnetic shift $`K`$ of the field for resonance at fixed frequency; this shift is expected to be proportional to the U-ion susceptibility $`\chi `$. Any inhomogeneity in $`\chi `$ leads to a corresponding distribution of shifts that broadens the NMR line. The relation between the rms spread $`\delta \chi `$ in $`\chi `$ and the NMR linewidth $`\sigma `$ is described elsewhere, where it is shown that $`\sigma /(\overline{K}H_0)`$, where $`\overline{K}`$ is the spatially averaged relative shift and $`H_0`$ is the applied field, is an estimator for the fractional rms spread $`\delta \chi /\overline{\chi }`$, where $`\overline{\chi }`$ is the spatially averaged susceptibility. We can write $`\sigma /(\overline{K}H_0)`$ equivalently as $`\delta K/\overline{K}`$, where $`\delta K\sigma /H_0`$ is the relative rms spread in shifts.
NMR line broadening can also arise from dynamic (lifetime) effects due to nuclear spin-lattice relaxation. Pulsed NMR techniques can be used to estimate such lifetime broadening independently of the spectral width, and we have found that in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> spin-lattice relaxation is far too small to contribute significantly to the observed spectral linewidths.
In a diluted magnetic alloy such as U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> the NMR shift is also distributed because of the spatial dependence of the RKKY interaction and the random positions of the magnetic ions, even if the susceptibility associated with these ions were uniform. Thus additional broadening due to susceptibility inhomogeneity may be difficult to resolve. This is unlike the situation in an alloy with ligand disorder (i.e., disorder in the nonmagnetic ions of the compound), where the magnetic-ion sublattice is ordered. In this case the only source of broadening is susceptibility inhomogeneity, as long as the ligand disorder does not affect the RKKY interaction significantly. Even if it does the two broadening mechanisms can be distinguished.
If, however, specific near-neighbor magnetic-ion configurations of the observed nuclei are probable enough in a dilute alloy, and if the shifted NMR frequencies of these nuclei are large enough to be resolved, rather than merely contributing to “dilution broadening” of the resonance line, then the shifts and linewidths of these impurity satellites may be studied separately. Impurity satellites provide a much better characterization of the inhomogeneous susceptibility distribution in an alloy with $`f`$-ion sublattice dilution (such as U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>) than is possible if the satellites are not resolved.
An important result of the present studies was the observation of resolved $`^{27}`$Al impurity satellites in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>. The shifts and linewidths of these satellites have been analyzed to provide information on the inhomogeneous distribution of local susceptibility in this system. We find that the width of the susceptibility distribution is considerably smaller than required to explain NFL behavior, and a different mechanism must be sought. This result suggests that, in contrast to the situation in ligand-disorder alloys, $`f`$-sublattice disorder may not produce enough inhomogeneity in the $`f`$-electron–conduction-electron hybridization to drive the NFL behavior.
## II Disorder-driven NFL theories
In the following we briefly describe the single-ion Kondo-disorder and Griffiths-phase models, after which we consider their applicability to the NFL behavior in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>.
The simplest implementation of the single-ion Kondo disorder model assumes that the $`f`$ ions are coupled to the conduction-electron bath by a random distribution of Kondo coupling constants $`g=\rho 𝒥`$, where $`\rho `$ is the conduction-electron density of states at the Fermi energy and $`𝒥`$ is the local-moment–conduction-electron exchange energy. Then $`g`$ is related to the Kondo temperature $`T_K`$ by
$$T_K=E_F\mathrm{exp}(-1/g),$$
(1)
where $`E_F`$ is the Fermi energy of the host metal. Thus a modest distribution of $`g`$ can give rise to a broad distribution of $`T_K`$ if most values of $`g`$ are small. If the distribution function $`P(T_K)`$ is broad enough so that $`P(T_K=0)`$ does not vanish, then at any nonzero temperature $`T`$ those $`f`$ ions for which $`T_K<T`$ will not be compensated. Fermi-liquid theory does not apply to them, and they give rise to the NFL behavior.
Such uncompensated $`f`$ ions dominate thermal and transport properties at low temperatures. The magnetic susceptibility is correspondingly distributed, as can be seen from the Curie-Weiss law
$$\chi (T,T_K)=\frac{𝒞}{T+\alpha T_K}$$
(2)
($`\alpha \approx 1`$) that approximately characterizes the Kondo physics, and is inhomogeneous on an atomic scale. Fitting this model to the temperature dependence of the bulk (i.e., spatially averaged) magnetic susceptibility $`\overline{\chi (T)}`$ gives the distribution of $`g`$, which is then used to predict $`\delta \chi (T)/\overline{\chi (T)}`$.
In the Griffiths-phase theory various physical properties are predicted to diverge at low temperatures as weak power laws of temperature. For example, the electronic specific heat $`C(T)`$ and the spatially averaged magnetic susceptibility $`\overline{\chi (T)}`$ are given by
$$C(T)/T\propto \overline{\chi (T)}\propto T^{-1+\lambda },$$
(3)
and the fractional rms spread $`\delta \chi (T)/\overline{\chi (T)}`$ is given by
$$\frac{\delta \chi (T)}{\overline{\chi (T)}}\propto T^{-\lambda /2}.$$
(4)
The nonuniversal exponent $`\lambda `$ is a parameter that determines the degree of NFL character. The Griffiths phase is characterized by $`\lambda \le 1`$, so that the susceptibility diverges at zero temperature (the case $`\lambda =1`$ is marginal and gives rise to logarithmic divergences). Since $`C(T)/T`$ and $`\overline{\chi (T)}`$ have the same temperature dependence, the Wilson ratio $`\overline{\chi (T)}T/C(T)`$ is independent of temperature in this picture. The procedure used to compare the Griffiths-phase theory with experiment is similar to that described above for the Kondo-disorder analysis: bulk susceptibility data are fit to the corresponding theoretical expressions, and the calculated $`\delta \chi (T)/\overline{\chi (T)}`$ is compared with NMR data.
## III Experiment
Samples of U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>, $`x=0.7`$, 0.8, and 0.9, were prepared as described previously. The arc-melted ingots were crushed and passed through a 100-$`\mu `$m sieve. $`^{27}`$Al NMR experiments were carried out on unaligned powders and also on epoxy-cast powder samples in which the single-crystal powder grains were aligned by a 6-T magnetic field during hardening of the epoxy. The crystal symmetry is hexagonal (space group $`P6/mmm`$) and thus the susceptibility is uniaxially anisotropic. The direction of largest magnetic susceptibility is in the basal ($`ab`$) plane, so that in order to orient the $`c`$ axes of the grains it was necessary to rotate the sample around an axis perpendicular to the magnetic field while the epoxy hardened. The rotation axis then defines the $`c`$-axis orientation of the grains. The anisotropic susceptibility obtained from a field-aligned sample of U<sub>0.1</sub>Th<sub>0.9</sub>Pd<sub>2</sub>Al<sub>3</sub> is shown in Fig. 2, where the strong anisotropy that permits the field alignment can be seen.
Also shown in Fig. 2 is the result of fitting the Griffiths-phase model prediction for the uniform susceptibility to the experimental $`\chi _{ab}(T)`$ data, as described above in Sec. II. Aside from an overall scale factor the fit parameters are the exponent $`\lambda `$ and a high-energy cutoff $`ϵ_0`$ for the distribution of cluster energies. Best fit was found for $`\lambda =0.90`$ and $`ϵ_0/k_B=130`$ K. A good fit was also obtained with the Kondo-disorder model, where the parameters describing the disorder were taken to be the Fermi energy $`E_F`$ [cf. Eq. (1)] and the mean $`\overline{g}`$ and standard deviation $`\delta g`$ of an assumed Gaussian distribution of coupling constants. The values $`E_F=1`$ eV, $`\overline{g}=0.149`$, and $`\delta g=0.020`$ gave the best fit (not shown). The results of these fits were used to calculate numerical values of $`\delta \chi (T)/\overline{\chi (T)}`$, which are compared with results of the NMR experiments in Sec. IV.
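To illustrate how the Kondo-disorder fit translates into a prediction for the susceptibility spread, here is a minimal numerical sketch (ours, not the analysis code used in this work): it samples the Gaussian coupling distribution fitted above ($`E_F=1`$ eV, $`\overline{g}=0.149`$, $`\delta g=0.020`$), converts each $`g`$ to a Kondo temperature via Eq. (1), and evaluates $`\delta \chi /\overline{\chi }`$ from the Curie-Weiss form of Eq. (2) with the simplifying choices $`\alpha =1`$ and unit Curie constant.

```python
import numpy as np

E_F = 1.0 / 8.617e-5                 # 1 eV expressed in kelvin
rng = np.random.default_rng(0)
g = rng.normal(0.149, 0.020, 200_000)
g = g[g > 0]                         # discard unphysical couplings
T_K = E_F * np.exp(-1.0 / g)         # distribution of local Kondo temperatures

for T in (5.0, 50.0, 250.0):         # kelvin
    chi = 1.0 / (T + T_K)            # Curie-Weiss form, alpha = 1, C = 1
    print(f"T = {T:5.1f} K:  dchi/chi = {chi.std() / chi.mean():.3f}")
```

The spread grows strongly as $`T`$ decreases, which is the qualitative behavior tested against the NMR linewidths below.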
Field-swept $`^{27}`$Al NMR spectra were obtained using pulsed-NMR spin-echo signals and the frequency-shifted and summed Fourier transform processing technique described by Clark et al. The echo-decay lifetime $`T_2`$ was found to be sufficiently long (hundreds of $`\mu `$s) so that no correction for the echo decay was needed.
### A Spurious phases or impurity satellites in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>?
Figure 3 shows as an example a spectrum from an unaligned powder sample of U<sub>0.2</sub>Th<sub>0.8</sub>Pd<sub>2</sub>Al<sub>3</sub>.
In U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> the $`mmm`$ point symmetry of the Al site is lower than cubic, so that the $`^{27}`$Al nuclear Zeeman levels are split by the quadrupole interaction between the $`^{27}`$Al nuclear quadrupole moment $`Q`$ and the crystalline electric field gradient $`q`$. Then quadrupole satellite resonances, corresponding to $`m_I\leftrightarrow m_I-1`$ ($`m_I\ne 1/2`$) nuclear spin transitions, are observed in the form of broad peaks shifted from a narrow central $`(1/2\leftrightarrow -1/2)`$ transition. The positions of the quadrupole satellites are shifted to first order in the coupling constant $`e^2qQ`$ and depend on crystallite orientation, so that the quadrupolar satellites are “powder-pattern broadened” by the random orientations of the powder grains in the sample. The central transition is shifted only to second order in $`e^2qQ`$, however, and suffers much less powder-pattern broadening than the satellites when $`e^2qQ`$ is smaller than the $`^{27}`$Al nuclear Zeeman frequency.
The most important feature of Fig. 3 is the group of extra lines on the low-field side of the central transition. (There is also a hint of this structure in some of the quadrupole satellites.) The extra lines, which also appear for Th concentrations $`x=0.7`$ and $`0.9`$, were initially suspected to be due to spurious metallurgical phases, although x-ray powder diffraction measurements indicated the samples were single-phase. More than one extra “minority” line was observed, but in the following only the most intense minority line is compared with the “principal” central transition (Fig. 3).
The temperature dependence of the principal and minority shifts is shown in Fig. 4.
It can be seen that the shift of the principal line is independent of temperature, whereas the minority line exhibits a Curie-Weiss-like temperature-dependent shift. If the minority line were due to a spurious phase, these results would suggest that the principal phase is a nearly pure thorium compound, whereas the spurious phase or phases have a high uranium concentration. This seems rather unlikely, given the absence of x-ray evidence for phase segregation and the chemical similarity of thorium and uranium, and we take the results shown in Fig. 4 as initial evidence against the spurious-phase hypothesis.
Figure 5 gives a Clogston-Jaccarino plot of shift versus susceptibility per mole uranium, with temperature an implicit variable.
A linear relation between shift and $`\overline{\chi (T)}`$ is expected for impurity satellites if the transferred hyperfine field that couples the nuclear spin to the U moment is temperature-independent, and is indeed observed for small to moderate values of $`\overline{\chi (T)}`$. The slopes of these linear relations are more or less independent of thorium concentration $`x`$, as expected for a local hyperfine interaction. [The nonlinear behavior for larger $`\overline{\chi (T)}`$ is not well understood, but may arise from changes in crystal-field level populations with temperature.]
The temperature independence of the principal-line shift is explained by the large number of configurations of more distant U ions and the oscillatory dependence of the RKKY interaction, which lead to broadening but little average shift contribution to the principal line. Varying the Th concentration $`x`$ can change the host-metal band structure and lead to an $`x`$-dependent host Knight shift. The fact that the principal-line shifts do not vary monotonically with Th concentration is not well understood, but may be due to shift anisotropy together with sample-dependent preferential orientation of the powder grains.
Comparison of the principal- and minority-line shifts yields strong additional evidence that the minority lines are impurity satellites rather than due to spurious phases. One of the most striking features of Fig. 5 is the fact that extrapolations of the minority-line shifts to zero $`\overline{\chi (T)}`$ (infinite temperature) are in agreement with the temperature-independent principal-line shifts. This agreement would be a complete coincidence if the minority lines were due to spurious phases, but follows naturally if they are impurity satellites.
We conclude that the observed properties of the spectra establish the minority lines as impurity satellites rather than due to spurious phases.
### B Spectra from field-aligned samples
The $`^{27}`$Al impurity-satellite spectra from unaligned powder samples of U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> described above are difficult to interpret, due to possible preferential orientation and broadening from shift anisotropy. We therefore used field-aligned samples for further NMR experiments.
Figure 6 shows a representative field-aligned spectrum from U<sub>0.1</sub>Th<sub>0.9</sub>Pd<sub>2</sub>Al<sub>3</sub> for applied field $`𝐇_0`$ parallel to the $`c`$ axis ($`𝐇_0\parallel 𝐜`$).
The sharp lines indicate good alignment of the powder-grain $`c`$ axes, and the impurity satellites are more apparent than in the unaligned powder spectra. The lines in the center of the spectrum are the $`(1/2\leftrightarrow -1/2)`$ central transition bulk line and impurity satellites. Impurity satellites can also be seen associated with each quadrupole satellite. These are well resolved on the low-field side of the quadrupole-split spectrum but not on the high-field side, due to the combination of an anisotropic NMR shift and slightly larger quadrupole splittings (at fixed frequency) of the impurity satellites relative to the bulk lines.
It can also be seen in Fig. 6 that the quadrupole satellites are somewhat broader than the central transition, due presumably to incomplete alignment and/or disorder in the quadrupole interaction; as mentioned above this results in first-order quadrupole broadening of the quadrupole satellites but only second-order broadening of the central transitions. Henceforth we consider only the central-transition bulk line and impurity satellites.
Field-swept spectra were taken with a field range of about 300 Oe around the central transition and a small field step ($`\sim `$3 Oe) to have good resolution of narrow lines. Spectra were obtained as a function of temperature and field for both field directions ($`𝐇_0\parallel 𝐜`$ and $`𝐇_0\perp 𝐜`$) on field-aligned samples of U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>, $`x=0.7,0.8,`$ and $`0.9`$.
Impurity satellites and line shapes in dilute magnetic alloys have been treated by Walstedt and Walker, who showed on very general grounds that in the absence of susceptibility inhomogeneity the line shapes and widths of the bulk line and impurity satellites are the same. Inhomogeneity results in an increase of the satellite linewidths relative to that of the bulk line. We find that a Lorentzian line shape fits the bulk line best. Following the result of Walstedt and Walker, the impurity satellites were fit with a Lorentzian with the same width as the bulk line, convoluted with an extra (Gaussian) broadening function that describes the susceptibility inhomogeneity.
Simple statistical considerations were used to constrain the fits. The probability $`P_n^{n_0}(x)`$ of finding $`n`$ and only $`n`$ uranium impurities in a given near-neighbor (Th,U) shell around an Al site is given by
$$P_n^{n_0}(x)=y^n(1-y)^{n_0-n}C_n^{n_0},$$
(5)
where $`y=1-x`$ is the U concentration, $`n_0`$ is the total number of U sites in the shell, and $`C_n^{n_0}=n_0!/[n!(n_0-n)!]`$ is the binomial coefficient. The intensity of each line (i.e., the area under the line) including the bulk line (for which $`n=0`$ for all resolved near-neighbor shells) is proportional to this probability. The fitting procedure then consists of taking the number of most probable U-ion configurations to be the number of resolved impurity satellites, fixing the ratios of the satellite and bulk line intensities according to Eq. (5), and varying the shifts and widths of all lines for best fit.
The crystal structure of U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>, including near-neighbor (Th,U) shells around a reference $`^{27}`$Al site out to the third shell, is shown in Figure 7.
Each of the first three near-neighbor shells contains four sites ($`n_0=4`$). We found, however, that if we took $`n_0=4`$ for both the nearest-neighbor and next-nearest-neighbor shells we could not fit the spectra well. Among various possibilities we find that the choice of $`n_0^{\mathrm{nn}}=4`$ for the nearest-neighbor shell and $`n_0^{\mathrm{nnn}}=8`$ for the next-nearest-neighbor shell gives the best fit to the spectra. This implies that an effective next-nearest-neighbor shell is a combination of the second and third shells in the crystal structure. The distance from the reference Al site to the nearest shell is 3.402 Å, to the second shell is 5.096 Å, and to the third shell is 6.828 Å, so that the difference in (U,Th)-Al distances between the second and third shells is about 1.7 Å. This is not small on the atomic scale, and we do not understand why the RKKY coupling constants for these shells are apparently so nearly equal.
For these choices of near-neighbor shell sizes, and assuming an isotropic RKKY interaction, we have 5 distinct nearest-neighbor-shell configurations ($`n^{\mathrm{nn}}=0,1,2,3,4`$) and 9 distinct next-nearest-neighbor-shell configurations ($`n^{\mathrm{nnn}}=0,1,2,\mathrm{},8`$). The total number of distinct configurations associated with these two shells is therefore $`5\times 9=45`$. Table I gives the probabilities of the six most probable of these 45 configurations from Eq. (5) for two uranium concentrations $`y=0.1`$ and 0.2 ($`x=0.9`$ and 0.8, respectively).
The configurations are designated by $`(p,q)`$, where $`p`$ and $`q`$ are the number of U ions in the nearest-neighbor and (effective) next-nearest-neighbor shells, respectively. We have taken the fit spectrum to include the lines corresponding to these six configurations (i.e., five impurity satellites and the bulk line) as shown in Fig. 8(a).
Fixing the relative areas of the lines according to Table I leaves as free parameters the area of the bulk line and the shifts and linewidths of all the lines.
The positions and widths of the lines in Fig. 8(a) are the result of a fit to the spectrum of U<sub>0.1</sub>Th<sub>0.9</sub>Pd<sub>2</sub>Al<sub>3</sub> for $`𝐇_0𝐜`$, $`T=30`$ K, and a spectrometer frequency of 17.221 MHz. This fit is shown together with the data in Fig. 8(b). It is important to note that the assignment of each line to a configuration is consistent with the oscillatory RKKY interaction. Consider for example the $`(0,1)`$ and $`(1,0)`$ impurity satellites, which are on opposite sides of the $`(0,0)`$ bulk line. Then one more uranium in the second shell should push the $`(0,2)`$ satellite further away from the $`(0,0)`$ line than the $`(0,1)`$ satellite. Similarly, the $`(1,1)`$ satellite should lie between the $`(1,0)`$ and $`(0,1)`$ satellites due to the opposite signs of the nearest- and next-nearest-shell interactions, and the $`(1,2)`$ satellite should lie between the $`(1,0)`$ and $`(0,2)`$ satellites. All these properties are satisfied by the fits without having been put in “by hand.” We argue that the success of this fitting procedure is excellent evidence that the satellites have been correctly identified.
The model is very successful, in the sense that it gives good fits to the field-aligned spectra for both field directions and for $`x=0.9`$ and $`0.8`$. Shown in Fig. 9 are some examples of the spectra and their fits.
It can be seen, however, that the fit for $`x=0.8`$ is not quite as good as for $`x=0.9`$. This might be associated with the fact that the total probability to find any of the six resolved-satellite U-ion neighbor configurations (cf. Table I) is quite high (0.9116) for $`x=0.9`$, but is only 0.6528 for $`x=0.8`$. The remaining 39 configurations must contribute to the broadening function, and if the probability associated with a few of these configurations is appreciable one might suspect that a simple Gaussian approximation to the line shape could break down. In line with this speculation it was found that spectra from U<sub>0.3</sub>Th<sub>0.7</sub>Pd<sub>2</sub>Al<sub>3</sub> were even more difficult to fit.
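These totals follow directly from Eq. (5); a quick sketch of the arithmetic (the helper names are ours), using the shell sizes $`n_0^{\mathrm{nn}}=4`$ and $`n_0^{\mathrm{nnn}}=8`$ adopted above:

```python
from math import comb

def shell_prob(n_U, n_sites, y):
    # Eq. (5): probability of exactly n_U uranium ions on n_sites
    return comb(n_sites, n_U) * y ** n_U * (1.0 - y) ** (n_sites - n_U)

for y in (0.1, 0.2):                 # U concentration y = 1 - x
    probs = {(p, q): shell_prob(p, 4, y) * shell_prob(q, 8, y)
             for p in range(5) for q in range(9)}
    top6 = sorted(probs.values(), reverse=True)[:6]
    print(f"y = {y}: six most probable configurations sum to {sum(top6):.4f}")
# -> 0.9116 for y = 0.1 (x = 0.9) and 0.6528 for y = 0.2 (x = 0.8)
```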
Even so, U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> is the first system in which impurity satellites have been resolved at such high magnetic impurity concentrations ($`y=0.2`$). This is due in part to the relatively small number of $`f`$-ion sites in the near-neighbor shells compared, for example, to the case in dilute CuFe alloys, where $`n_0^{\mathrm{nn}}=12`$.
## IV Disorder and NFL behavior in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>
Field-swept $`^{27}`$Al NMR spectra from field-aligned powder samples were obtained at a spectrometer frequency of 17.221 MHz over the temperature range 5–250 K. For $`𝐇_0\parallel 𝐜`$ all impurity satellites have very weak or temperature-independent shifts (not shown). Such behavior is expected because the $`c`$ axis is the magnetic “hard” axis.
Figure 10 gives the temperature dependence of $`^{27}`$Al impurity satellite shifts $`K_{ab}(T)`$ relative to the bulk line for $`𝐇_0\perp 𝐜`$.
There are strong similarities between results for the two concentrations, and the $`(0,1)`$ and $`(0,2)`$ satellites have stronger temperature dependences than the rest of the impurity satellites. The linewidths $`\sigma _{ab}(T)`$ of impurity satellites $`(0,1)`$ and $`(0,2)`$ also have strong temperature dependence at low temperatures as shown in Fig. 11.
It is not surprising to see that each satellite has almost the same width at high temperatures where the magnetic susceptibility is small. The $`(1,2)`$ satellite for U<sub>0.2</sub>Th<sub>0.8</sub>Pd<sub>2</sub>Al<sub>3</sub> is an exception, as it shows only a weak temperature dependence and a larger width at high temperatures compared to the other satellites. This behavior is not understood, but it can be seen from Fig. 8(a) that the $`(1,2)`$ satellite is the weakest of the impurity satellites and is not well resolved; there may be considerable systematic error in the parameters for this satellite.
Only the $`(0,1)`$ and $`(0,2)`$ impurity satellites have strong temperature-dependent linewidths and shifts for $`𝐇_0\perp 𝐜`$. We use the $`(0,1)`$ satellite in U<sub>0.1</sub>Th<sub>0.9</sub>Pd<sub>2</sub>Al<sub>3</sub> for further analysis, because for this satellite there is only one U moment in the immediate $`^{27}`$Al environment. This is in contrast to the situation in ligand-disorder NFL systems, e.g., UCu<sub>5-x</sub>Pd<sub>x</sub> (Refs. and ) and CeRhRuSi<sub>2</sub> (Refs. and ), where the $`f`$ ions are concentrated and nuclear spins couple strongly to substantially more than one neighboring $`f`$-ion moment. In the latter case it can be shown that the correlation length $`\xi _\chi (T)`$ that characterizes the random spatial variation of the susceptibility has to be taken into account. The use of a single-U-ion impurity satellite means that no information can be obtained concerning $`\xi _\chi (T)`$, but by the same token the quantity $`\sigma _{ab}(T)/(K_{ab}(T)H_0)=\delta K_{ab}(T)/K_{ab}(T)`$ gives $`\delta \chi (T)/\chi (T)`$ independent of the (unknown) value of $`\xi _\chi (T)`$.
Figure 12 plots $`\delta K_{ab}(T)/K_{ab}(T)`$ for the $`(0,1)`$ impurity satellite versus the bulk susceptibility $`\chi (T)`$ in U<sub>0.1</sub>Th<sub>0.9</sub>Pd<sub>2</sub>Al<sub>3</sub>, $`𝐇_0\perp 𝐜`$, again with temperature an implicit parameter.
Also shown are the theoretical predictions of $`\delta \chi (T)/\chi (T)`$ from the single-ion Kondo-disorder and Griffiths-phase theories, obtained as described in Sec. II. In spite of the fact that at low temperatures the satellites are significantly broader than the central transition (Fig. 11), the theoretical predictions considerably overestimate the experimental results. This suggests that the effect of disorder in this system is too weak to account for its NFL behavior. The satellites are simply too narrow, a fact which, ironically, is essential to their experimental observation. Impurity satellites have also been observed in Y<sub>0.8</sub>Th<sub>0.2-y</sub>U<sub>y</sub>Pd<sub>3</sub>, but in this case the satellite widths become very wide and unresolved at low temperatures. We also note that in U<sub>0.1</sub>Th<sub>0.9</sub>Pd<sub>2</sub>Al<sub>3</sub> $`\delta K_{ab}(T)/K_{ab}(T)`$ varies relatively slowly with $`\chi (T)`$ (Fig. 12), i.e., the NMR linewidth is nearly proportional to the shift. This is in contrast to the behavior expected from both the Kondo-disorder and Griffiths-phase pictures, where the spread in susceptibilities is a rapidly-growing fraction of the average susceptibility as the temperature is lowered.
## V Conclusions
$`^{27}`$Al NMR in the U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> alloy system has revealed satellite NMR lines due to specific uranium configurations around Al sites. These impurity satellites facilitate determination of the effect of disorder on paramagnetism in this system. The probability of finding a given configuration, which is related to the intensity of the corresponding line, follows a simple statistical calculation. We used a procedure that fixed the intensities according to their probabilities to fit the field-aligned spectra for $`x=0.9`$ and $`0.8`$ and both field directions ($`𝐇_0\parallel 𝐜`$ and $`𝐇_0\perp 𝐜`$). Each impurity satellite is thereby associated with a specific near-neighbor uranium configuration.
The linewidths and shifts in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> do not have much temperature dependence for $`𝐇_0\parallel 𝐜`$, as is also the case for the $`c`$-axis susceptibility $`\chi _c(T)`$. In contrast, two of the impurity satellites have Curie-Weiss-like temperature-dependent linewidths and shifts for $`𝐇_0\perp 𝐜`$. But the linewidths do not increase much more rapidly with decreasing temperature than the shift, so that $`\delta K/K`$ does not exhibit the rapid increase with bulk susceptibility $`\chi `$ expected from the disorder-driven models. These mechanisms also overestimate the observed linewidth at low temperatures, suggesting that the disorder in this system is not strong enough to account for its NFL behavior.
In the disorder-driven models the origin of the disorder is variation of the $`f`$-electron/conduction-electron hybridization matrix element with local $`f`$-ion environment. Disorder is found to be an important contributor to NFL behavior in alloys with ligand disorder, such as UCu<sub>5-x</sub>Pd<sub>x</sub> (Ref. ) and CeRhRuSi<sub>2</sub> (Ref. ). One might suspect, however, that the immediate $`f`$-ion environment is not as strongly disordered in dilute solid solutions of $`f`$ ions, given that in the limit of infinite dilution all $`f`$ ions have identical environments. NFL behavior in such systems might therefore be due to some other mechanism. This conclusion must be regarded as speculative, however, since U concentrations of 10% and 20% can hardly be considered dilute. In addition, U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> is the only $`f`$-ion diluted system studied to date using NMR.
NFL behavior in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub> is observed over a wide range of Th concentrations, and the thermodynamic and transport properties obey single-ion scaling. These results suggest that a quantum critical point associated with cooperative behavior is not the NFL mechanism in this system. Two models that rely neither on cooperative effects nor on disorder have been applied specifically to U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>. These are (a) the quadrupolar Kondo model, which assumes a non-Kramers doubly degenerate nonmagnetic ground state, and (b) the electronic polaron model of Liu, which assumes that the $`f`$-electron energies are close to the Fermi level and that transport involves polaron-like hopping between $`f`$ sites. Neither of these theories predicts strong disorder in the magnetic susceptibility, and from the standpoint of our NMR results both remain candidates for NFL behavior in U<sub>1-x</sub>Th<sub>x</sub>Pd<sub>2</sub>Al<sub>3</sub>.
## Acknowledgments
We are grateful to A. H. Castro Neto for helpful discussions and comments. This research was supported by the U.S. NSF, Grant nos. DMR-9418991 (U.C. Riverside) and DMR-9705454 (U.C. San Diego), by the U.C. Riverside Academic Senate Committee on Research, and by the Research Corporation (Whittier College). |
# Cosmic rays and neutrino interactions beyond the standard model
## 1 Introduction
It has been suggested that the neutrino-nucleon cross section could be enhanced by new physics beyond the electroweak scale in the center-of-mass frame, or above about a PeV in the nucleon rest frame. A specific implementation of this possibility is given in theories with $`n`$ additional dimensions and a quantum gravity scale $`M\sim `$ TeV that has recently received much attention in the literature because it provides an alternative solution (i.e., without supersymmetry) to the hierarchy problem in grand unifications of gauge interactions. In such scenarios, the exchange of bulk gravitons (Kaluza-Klein modes) can lead to an extra contribution to any two-particle cross section given by
$$\sigma _g\simeq \frac{4\pi s}{M^4}\simeq 10^{-27}\left(\frac{\mathrm{TeV}}{M}\right)^4\left(\frac{E}{10^{20}\mathrm{eV}}\right)\mathrm{cm}^2,$$
(1)
where the last expression applies to a neutrino of energy $`E`$ hitting a nucleon at rest. Note that a neutrino would typically start to interact in the atmosphere and therefore become a primary candidate for the highest energy cosmic rays for $`\sigma _{\nu N}\gtrsim 10^{-27}\mathrm{cm}^2`$, i.e. for $`E\gtrsim 10^{20}`$eV, assuming $`M\sim 1`$TeV.
The total charged-current neutrino-nucleon cross section is given by the sum of Eq. (1) and the cross section within the Standard Model, which can be estimated by
$$\sigma _{\nu N}^{SM}(E)\simeq 2.36\times 10^{-32}\left(\frac{E}{10^{19}\mathrm{eV}}\right)^{0.363}\mathrm{cm}^2$$
(2)
in the energy range $`10^{16}\mathrm{eV}\lesssim E\lesssim 10^{21}`$eV.
The total cross section is dominated by a contribution of the form Eq. (1) at energies $`E\gtrsim E_{\mathrm{th}}`$, where, for $`M\gtrsim 1`$TeV, the threshold energy can be approximated by
$$E_{\mathrm{th}}\simeq 2\times 10^{13}\left(\frac{M}{\mathrm{TeV}}\right)^{6.28}\mathrm{eV}.$$
(3)
This would be reflected by a linear energy dependence of the typical column depth of induced shower development if the optical depth in the detection medium is of order unity, or by a flattening of the differential detection rate by one power of the energy if the optical depth is smaller than unity. Comparison with observations would either reveal signatures for these scenarios or constrain them in a way complementary to and independent of many studies on signatures in human made accelerators or other laboratory experiments that have recently appeared in the literature.
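For illustration, the threshold can be recovered numerically by equating Eqs. (1) and (2); a sketch of our own, using the normalizations as given above:

```python
from scipy.optimize import brentq

def sigma_graviton(E_eV, M_TeV):
    # Eq. (1): graviton-exchange contribution
    return 1.0e-27 * M_TeV ** -4 * (E_eV / 1.0e20)

def sigma_sm(E_eV):
    # Eq. (2): Standard Model charged-current cross section
    return 2.36e-32 * (E_eV / 1.0e19) ** 0.363

for M in (1.0, 2.0, 5.0):            # quantum gravity scale in TeV
    E_th = brentq(lambda E: sigma_graviton(E, M) - sigma_sm(E), 1e10, 1e25)
    print(f"M = {M} TeV: E_th = {E_th:.1e} eV,  Eq. (3): {2e13 * M**6.28:.1e} eV")
```

The numerical roots agree with Eq. (3), including the $`M^{6.28}=M^{4/0.637}`$ scaling that follows from the two power laws.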
## 2 A Bound from the “Cosmogenic” Neutrino Flux
Fig. 1 shows neutrino fluxes for the atmospheric background at different zenith angles (hatched region marked “atmospheric”), for proton blazars that are photon optically thick to nucleons and whose flux was normalized to recent estimates of the blazar contribution to the diffuse $`\gamma `$-ray background (“proton blazar”), for neutrinos created as secondaries from the decay of charged pions produced by ultra-high energy (UHE) nucleons interacting with the cosmic microwave background (“cosmogenic”), and for a model where UHE cosmic rays are produced by decay of particles close to the Grand Unification Scale (“SLBY98”, see Ref. for details).
Apart from the atmospheric neutrino flux only the cosmogenic neutrinos are guaranteed to exist due to the known existence of UHE cosmic rays, at least if these contain nucleons and are not exclusively of galactic origin.
The non-observation of deeply penetrating air showers by the experiments indicated in Fig. 1 in the presence of this cosmogenic flux can now be translated into an upper limit on the total neutrino-nucleon cross section $`\sigma _{\nu N}\simeq \sigma _{\nu N}^{SM}+\sigma _g`$ by scaling the diffuse neutrino flux limits from the Standard Model cross section Eq. (2). Using the conservative, lower estimate of the cosmogenic flux in Fig. 1 yields
$$\sigma _{\nu N}(E=10^{19}\mathrm{eV})\lesssim 2.4\times 10^{-29}\mathrm{cm}^2,$$
(4)
as long as $`\sigma _{\nu N}(E=10^{19}\mathrm{eV})\lesssim 10^{-27}\mathrm{cm}^2`$, such that neutrinos would give rise to deeply penetrating air showers. Using Eq. (1) results in
$$M\gtrsim 1.4\mathrm{TeV}.$$
(5)
It is interesting to note that these limits do not depend on the number $`n`$ of extra dimensions, in contrast to some other astrophysical limits such as from graviton emission from a hot supernova core into the extra dimensions which depend more explicitly on phase space integrations (see Sect. 3 below).
As can be seen from Fig. 1, with an experiment such as OWL, the upper limit on the cross section Eq. (4) could improve by about 4 orders of magnitude, and the lower limit on $`M`$ consequently by about a factor 10.
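The arithmetic behind Eq. (5), and the projected improvement, takes only a few lines (a sketch using the expressions above):

```python
# Eq. (4) limit at E = 1e19 eV vs. Eq. (1) with M = 1 TeV:
sigma_limit = 2.4e-29                       # cm^2
sigma_g_at_1TeV = 1.0e-27 * (1.0e19 / 1.0e20)
M_min = (sigma_g_at_1TeV / sigma_limit) ** 0.25
print(f"M > {M_min:.1f} TeV")               # -> 1.4 TeV, Eq. (5)

# a 10^4 times stronger cross-section limit tightens M by 10^(4/4) = 10:
print(f"projected M > {M_min * 1.0e4 ** 0.25:.0f} TeV")   # -> 14 TeV
```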
## 3 Comparison with Other Astrophysical and Laboratory Bounds
There are also astrophysical constraints on $`M`$ which result from limiting the emission of bulk gravitons into the extra dimensions. The strongest constraints in this regard come from nucleon-nucleon bremsstrahlung in type II supernovae . These constraints read $`M\gtrsim 50`$TeV, $`M\gtrsim 4`$TeV, and $`M\gtrsim 1`$TeV, for $`n=2,3,4`$, respectively, and, therefore, $`n\ge 4`$ is required if neutrino primaries are to serve as a primary candidate for the UHE cosmic ray events observed above $`10^{20}`$eV (note that $`n=7`$ for the superstring and $`n=22`$ for the heterotic string). This assumes that all extra dimensions have the same size given by
$$r\simeq M^{-1}\left(\frac{M_{\mathrm{Pl}}}{M}\right)^{2/n}\simeq 2\times 10^{-17}\left(\frac{\mathrm{TeV}}{M}\right)\left(\frac{M_{\mathrm{Pl}}}{M}\right)^{2/n}\mathrm{cm},$$
(6)
where $`M_{\mathrm{Pl}}`$ denotes the Planck mass. The above lower bounds on $`M`$ thus translate into the corresponding upper bounds $`r\lesssim 3\times 10^{-4}`$mm, $`r\lesssim 4\times 10^{-7}`$mm, and $`r\lesssim 2\times 10^{-8}`$mm, respectively.
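A small sketch evaluating Eq. (6) for the supernova bounds quoted above; the reduced Planck mass $`M_{\mathrm{Pl}}\simeq 2.4\times 10^{15}`$ TeV is used (an assumption on our part, since the convention is not stated here), which reproduces the quoted bounds to within factors of a few:

```python
M_PL = 2.4e15        # reduced Planck mass in TeV (an assumption)
R0 = 2.0e-17         # cm; the 1/M prefactor of Eq. (6) for M = 1 TeV

def r_mm(n, M_TeV):
    return 10.0 * R0 / M_TeV * (M_PL / M_TeV) ** (2.0 / n)   # cm -> mm

for n, M in [(2, 50.0), (3, 4.0), (4, 1.0)]:   # supernova bounds on M
    print(f"n = {n}, M > {M:>4} TeV  ->  r < {r_mm(n, M):.0e} mm")
# -> ~2e-04, 4e-07, 1e-08 mm: the quoted bounds to within factors of a few
```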
UHE cosmic rays and neutrinos together with other astrophysical and cosmological constraints thus provide an interesting testing ground for theories involving extra dimensions which represent one possible kind of physics beyond the Standard Model. In this context, we mention that in theories with large compact extra dimensions mentioned above, Newton’s law of gravity is expected to be modified at distances smaller than the length scale given by Eq. (6). Indeed, there are laboratory experiments measuring gravitational interaction at small distances (for a recent review of such experiments see Ref. ), which also probe these theories. Thus, future UHE cosmic ray experiments and gravitational experiments in the laboratory together have the potential of providing rather strong tests of these theories. These tests would be complementary to constraints from collider experiments . |
# Signs in the $`cd`$-index of Eulerian partially ordered sets
## 1 Introduction
In the past thirty years or more, there has been much interest in combinatorial questions about polytopes and other geometric complexes and partial orders. Of central importance is the flag vector of a partially ordered set (poset) and various combinatorial parameters derived from it. One of these parameters is the $`cd`$-index, defined for Eulerian posets, a class that contains face lattices of polytopes. The $`cd`$-index was discovered by Fine and introduced in the literature by Bayer and Klapper (). It has captured the imagination, both for what is known and for what is not known about it. It embodies in an elegant way the linear relations of flag vectors of Eulerian posets (the generalized Dehn-Sommerville relations of Bayer and Billera ); the number of coefficients in the $`cd`$-index is a Fibonacci number. It is known to be nonnegative for polytopes (see Stanley ), but it is not known what it counts, except in special cases (see Purtill ). Among polytopes, the $`cd`$-index is minimized by the simplices (see Billera and Ehrenborg ). Novik () gives lower bounds for $`cd`$-coefficients of odd-dimensional simplicial manifolds (or, more generally, Eulerian Buchsbaum complexes).
Stanley () proved the nonnegativity of the $`cd`$-index for “S-shellable” regular CW-spheres (including polytopes). In he proposes the following as the main open problem concerning the $`cd`$-index: Is the $`cd`$-index nonnegative for all Gorenstein posets? (These are the Cohen-Macaulay Eulerian posets.) In fact, some parts of the $`cd`$-index are nonnegative for all Eulerian posets. In this paper we determine which $`cd`$-words have nonnegative coefficients for all Eulerian posets. For all other $`cd`$-words, we show how to construct Eulerian posets with arbitrarily large negative coefficients. The proofs grow out of the ideas of , which studies the cone of flag vectors of Eulerian posets.
## 2 Definitions
An Eulerian poset is a graded partially ordered set $`P`$ in which every interval has the same number of elements of even and of odd rank. For $`P`$ an Eulerian poset, the dual poset, obtained by reversing the order relation, is also Eulerian. The $`cd`$-index of an Eulerian poset is an invariant based on the numbers of chains in the poset. For $`P`$ an Eulerian poset of rank $`n+1`$ and $`S\subseteq [1,n]`$, $`f_S(P)`$ is the number of chains in $`P`$ of the form $`\widehat{0}<x_1<x_2<\cdots <x_k<\widehat{1}`$, where $`\{\text{rank}(x_i):1\le i\le k\}=S`$. The $`2^n`$-tuple of flag numbers $`f_S(P)`$ (as $`S`$ ranges over all subsets of $`[1,n]`$) is called the flag vector of $`P`$. The flag $`h`$-vector is obtained by performing inclusion-exclusion on the flag vector. Thus $`h_S=\sum _{T\subseteq S}(-1)^{|S-T|}f_T`$ or, equivalently, $`f_S=\sum _{T\subseteq S}h_T`$. Write a generating function in noncommuting variables, $`\mathrm{\Psi }(a,b)=\sum h_Su_S`$, where $`u_S=u_1u_2\cdots u_n`$ with $`u_i=a`$ if $`i\notin S`$ and $`u_i=b`$ if $`i\in S`$. For every Eulerian poset, there is a polynomial $`\mathrm{\Phi }(c,d)`$ in noncommuting variables $`c`$ and $`d`$ for which $`\mathrm{\Psi }(a,b)=\mathrm{\Phi }(a+b,ab+ba)`$. The polynomial $`\mathrm{\Phi }(c,d)`$ (or $`\mathrm{\Phi }_P(c,d)`$ when we need to specify the poset $`P`$) is called the $`cd`$-index of the poset. The $`cd`$-index of the dual of the Eulerian poset $`P`$ is obtained by reversing every $`cd`$-word in the $`cd`$-index of $`P`$. The coefficient of a $`cd`$-word $`w`$ is written as $`[w]`$ (or $`[w]_P`$ when we need to specify the poset $`P`$). We think of each $`d`$ as occupying two positions in a $`cd`$-word, namely, the positions of $`ab`$ or $`ba`$ in the corresponding $`ab`$-words. Let $`\text{supp}(w)`$ be the set of positions of $`d`$ in $`w`$.
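To make these definitions concrete, here is a small computational sketch (ours; the helper names are hypothetical). It expands each $`cd`$-word into $`ab`$-words via $`c=a+b`$, $`d=ab+ba`$, computes the flag $`h`$-vector by inclusion-exclusion, and solves the resulting linear system for the $`cd`$-coefficients; for the rank 3 face lattice of a triangle it returns $`\mathrm{\Phi }=c^2+d`$.

```python
from itertools import chain, combinations, product
import numpy as np

def cd_words(n):
    # all cd-words of degree n (deg c = 1, deg d = 2); Fibonacci-many of them
    if n == 0:
        return ['']
    words = ['c' + w for w in cd_words(n - 1)]
    if n >= 2:
        words += ['d' + w for w in cd_words(n - 2)]
    return words

def ab_expansion(word):
    # expand via c = a+b, d = ab+ba; every resulting ab-word occurs once
    poly = ['']
    for ch in word:
        suffixes = ('a', 'b') if ch == 'c' else ('ab', 'ba')
        poly = [w + s for w in poly for s in suffixes]
    return set(poly)

def subsets(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def cd_index(flag_f, n):
    # flag_f maps frozenset S (a subset of [1,n]) to the flag number f_S
    ab = [''.join(p) for p in product('ab', repeat=n)]
    h = {u: sum((-1) ** (u.count('b') - len(T)) * flag_f[frozenset(T)]
                for T in subsets(i + 1 for i, ch in enumerate(u) if ch == 'b'))
         for u in ab}                       # h_S by inclusion-exclusion
    words = cd_words(n)
    exps = [ab_expansion(w) for w in words]
    A = np.array([[u in e for e in exps] for u in ab], float)
    coeffs, *_ = np.linalg.lstsq(A, np.array([h[u] for u in ab], float),
                                 rcond=None)
    return dict(zip(words, np.rint(coeffs).astype(int)))

# rank 3 face lattice of a triangle: f_empty = 1, f_1 = 3, f_2 = 3, f_12 = 6
flag = {frozenset(): 1, frozenset({1}): 3, frozenset({2}): 3,
        frozenset({1, 2}): 6}
print(cd_index(flag, 2))                    # -> {'cc': 1, 'd': 1}
```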
Stanley () notes a useful variation of the $`cd`$-index. The $`ce`$-index is obtained by replacing every $`d`$ in $`\mathrm{\Phi }(c,d)`$ by $`(cc-ee)/2`$. Alternatively, one gets the $`ce`$-index from $`\mathrm{\Psi }(a,b)`$—even for non-Eulerian posets—by letting $`c=a+b`$ and $`e=a-b`$. The $`ce`$-index is thus a polynomial in the noncommuting variables $`c`$ and $`e`$, where for Eulerian posets the $`e`$’s occur only in pairs. Write $`L_Q`$ for the coefficient of the word $`v_Q=v_1v_2\cdots v_n`$, where $`v_i=c`$ if $`i\notin Q`$ and $`v_i=e`$ if $`i\in Q`$. The vector of coefficients of the $`ce`$-index, $`(L_Q(P))`$, is also known as the $`L`$-vector of $`P`$.
For an Eulerian poset $`P`$, $`L_Q(P)=0`$ unless $`Q`$ is an even set, that is, $`Q`$ is the union of disjoint intervals of even cardinality. We say $`Q`$ evenly contains $`S`$, written $`S\subseteq _eQ`$, if $`S`$ and $`Q`$ are even sets, $`S\subseteq Q`$, and the difference set $`Q-S`$ is also an even set. An “Eulerian” $`ce`$-word $`v_Q`$ is converted to a sum of $`cd`$-words by replacing consecutive pairs of $`e`$’s in $`v_Q`$ by $`cc-2d`$ so that no $`e`$’s remain. This means that a $`cd`$-word $`w`$ occurs in the expansion of a $`ce`$-word $`v_Q`$ if and only if $`\text{supp}(w)\subseteq _eQ`$. Thus the coefficient in the $`cd`$-index of a $`cd`$-word $`w`$ in which $`d`$ occurs $`r`$ times is
$$[w]=(-2)^r\sum _{\text{supp}(w)\subseteq _eQ}L_Q.$$
(1)
(See for more information on $`L`$-vectors.)
In determining the cone of flag vectors of all graded posets (), Billera and Hetyei construct sequences of posets with convergent (normalized) flag vectors. Bayer and Hetyei apply a doubling operation to some of these to get sequences of Eulerian posets. Given an interval $`I=[i,j]\subseteq [1,n]`$, a rank $`n+1`$ poset $`P`$ and a positive integer $`N`$, let $`D_I^N(P)`$ be the rank $`n+1`$ poset obtained by replacing $`P_I`$, the subposet of $`P`$ consisting of elements with ranks in $`I`$, by $`N`$ copies of itself. The (horizontal) double $`DP`$ of a poset $`P`$ is the result of starting with $`P`$ and successively applying the operators $`D_{\{i\}}^2`$, for $`1\le i\le n`$. (In the Hasse diagram of $`P`$, every element of rank $`1`$ through $`n`$ is doubled, each copy inheriting the order relations of the original.) For $`\mathcal{I}`$ a set of subintervals of $`[1,n]`$, $`\mathcal{I}`$ is an even interval system if (1) no interval of $`\mathcal{I}`$ is contained in another, (2) every interval of $`\mathcal{I}`$ is of even cardinality, and (3) the intersection of any two intervals of $`\mathcal{I}`$ is of even cardinality. For each even interval system $`\mathcal{I}`$ over $`[1,n]`$, there exists a sequence of Eulerian posets, $`DP(n,\mathcal{I},N)`$, whose normalized flag vectors (and hence, normalized $`cd`$-indices and $`ce`$-indices) converge. These are obtained by starting with a rank $`n+1`$ chain, successively applying the operators $`D_I^N`$ for the intervals $`I\in \mathcal{I}`$, and finally taking the horizontal double.
For $`\mathcal{I}`$ an interval system of $`k`$ intervals, write $`L_S(DP(n,\mathcal{I}))=\lim _{N\to \infty }L_S(DP(n,\mathcal{I},N))/N^k`$. (Here $`2^nN^k`$ is the number of maximal chains in $`DP(n,\mathcal{I},N)`$.) The symbol $`DP(n,\mathcal{I})`$ is referred to as a limit poset. These $`ce`$-index coefficients are given by the formula
$$L_S(DP(n,\mathcal{I}))=\sum _{j=0}^k(-1)^j\left|\{1\le i_1<\cdots <i_j\le k:I_{i_1}\cup \cdots \cup I_{i_j}=S\}\right|,$$
(2)
where $`\mathcal{I}=\{I_1,I_2,\ldots ,I_k\}`$. See for details. The formula applies for non-Eulerian limit posets as well; in that case it can give nonzero $`L_Q`$ for noneven sets $`Q`$.
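Formula (2) is straightforward to implement; the following sketch (with hypothetical helper names) performs the inclusion-exclusion over subfamilies of $`\mathcal{I}`$, and for $`\mathcal{I}=\{[1,n]\}`$ it returns $`L_{\emptyset }=1`$ and $`L_{[1,n]}=-1`$, the values used in Lemma 1 below.

```python
from itertools import combinations

def interval(a, b):
    return frozenset(range(a, b + 1))

def limit_L_vector(intervals):
    # formula (2): L_S = sum_j (-1)^j #{j-subfamilies whose union is S}
    L = {}
    for j in range(len(intervals) + 1):
        for sub in combinations(intervals, j):
            S = frozenset().union(*sub)
            L[S] = L.get(S, 0) + (-1) ** j
    return {S: v for S, v in L.items() if v != 0}

n = 8
print(limit_L_vector([interval(1, n)]))
# -> {frozenset(): 1, frozenset({1,...,8}): -1}: L_empty = 1, L_[1,n] = -1
```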
We use one other result stated and proved in (but implicit in ).
###### Proposition (Inequality Lemma)
Let $`T`$ and $`V`$ be subsets of $`[1,n]`$ such that for every maximal interval $`I`$ of $`V`$, $`|I\cap T|\le 1`$. Write $`S=[1,n]-V`$. For $`P`$ any rank $`n+1`$ Eulerian poset,
$$\sum _{R\subseteq T}(-2)^{|T-R|}f_{S\cup R}(P)\ge 0.$$
Equivalently,
$$(-1)^{|T|}\sum _{T\subseteq Q\subseteq V}L_Q(P)\ge 0.$$
## 3 The Main Result
###### Theorem
1. For the following $`cd`$-words $`w`$, the coefficient of $`w`$ as a function of Eulerian posets has greatest lower bound 0 and has no upper bound:
1. $`c^idc^j`$, with $`\mathrm{min}\{i,j\}\le 1`$
2. $`c^idcd\cdots cdc^j`$ (at least two $`d`$’s alternating with $`c`$’s, $`i`$ and $`j`$ unrestricted)
2. The coefficient of $`c^n`$ in the $`cd`$-index of every Eulerian poset is 1.
3. For all other $`cd`$-words $`w`$, the coefficient of $`w`$ as a function of Eulerian posets has neither lower nor upper bound.
Note. For $`n\ge 5`$, there are $`\binom{n-2}{2}/3+4`$ $`cd`$-words of the types described in Part 1. This is a small portion of the $`cd`$ words for large $`n`$.
Proof: The fact that the coefficient of $`c^n`$ is 1 is immediate from the definition and is included only for completeness.
Let $`w`$ be any $`cd`$-word containing $`r`$ copies of $`d`$, with $`r\ge 1`$. Let $`\mathcal{I}`$ be the set of two-element intervals of the positions of $`d`$ in $`w`$. Compute the coefficient of $`w`$ in the $`cd`$-index of $`DP(n,\mathcal{I})`$. If $`Q`$ properly contains $`\text{supp}(w)`$, then by equation (2), $`L_Q(DP(n,\mathcal{I}))=0`$. So by equations (1) and (2), for $`DP(n,\mathcal{I})`$, the coefficient of $`w`$ is $`[w]=(-2)^rL_{\text{supp}(w)}(DP(n,\mathcal{I}))=(-2)^r(-1)^r=2^r`$. This is the limit as $`N`$ goes to infinity of $`1/N^r`$ times $`[w]`$ for $`DP(n,\mathcal{I},N)`$. So $`(DP(n,\mathcal{I},N))`$ is a sequence of Eulerian posets with $`cd`$-coefficients $`[w]`$ not bounded above.
To show nonnegativity in Part 1, we use equation (1) and the Inequality Lemma. If $`w=dc^j`$, let $`S=\emptyset `$ (so $`V=[1,n]`$) and $`T=\{1\}`$. Then the coefficient of $`w`$ is $`[w]=2(-1)\sum _{T\subseteq Q\subseteq V}L_Q\ge 0`$. If $`w=cdc^j`$ let $`S=\{1\}`$ (so $`V=[2,n]`$) and $`T=\{2\}`$. Then the coefficient of $`w`$ is $`[w]=(-2)\sum _{\{2,3\}\subseteq _eQ}L_Q=`$ $`2(-1)\sum _{T\subseteq Q\subseteq V}L_Q\ge 0`$, because $`L_Q`$ is zero unless $`Q`$ is an even set. The cases of $`w=c^id`$ and $`w=c^idc`$ follow by duality.
Let $`w=c^idcd\cdots cdc^j`$, with $`d`$ occurring $`r`$ times, $`r\ge 2`$; thus $`\text{supp}(w)=\{i+1,i+2,i+4,i+5,\ldots ,i+3r-2,i+3r-1\}`$. Let $`S=\{i+3,i+6,\ldots ,i+3r-3\}`$, $`V=[1,n]-S`$ and $`T=\{i+2,i+4,i+7,\ldots ,i+3r-5,i+3r-2\}`$. Here $`S`$ is the set of positions of the $`c`$’s between $`d`$’s and $`T`$ is a set of one position for each $`d`$, adjacent to the positions of the interior $`c`$’s. The coefficient of $`w`$ is $`[w]=(-2)^r\sum _{\text{supp}(w)\subseteq _eQ}L_Q`$. The set $`Q`$ evenly contains $`\text{supp}(w)`$ if and only if $`Q`$ is an even set and $`T\subseteq Q\subseteq V`$. Since $`L_Q`$ is zero unless $`Q`$ is an even set, $`[w]=2^r(-1)^r\sum _{T\subseteq Q\subseteq V}L_Q\ge 0`$.
The double of the chain, $`DC^{n+1}`$, has $`cd`$-index $`c^n`$, so for the $`cd`$-coefficients in Part 1, the lower bound of 0 is actually attained.
It remains to show that the coefficients of the $`cd`$ words in Part 3 can be arbitrarily negative. We use several lemmas.
###### Lemma 1
For every even $`n4`$ the coefficient of $`dc^{n4}d`$ as a function of Eulerian posets has no lower bound.
Proof: Let $`=\{[1,n]\}`$. By equation (2) the only nonzero entries in the $`L`$-vector of $`DP(n,)`$ are $`L_{\mathrm{}}=1`$ and $`L_{[1,n]}=1`$. By (1) the coefficient of $`dc^{n4}d`$ in the $`cd`$-index of $`DP(n,)`$ is $`(2)^2(1)=4`$. This is the limit as $`N`$ goes to infinity of $`1/N^2`$ times $`[dc^{n4}d]`$ for $`DP(n,,N)`$. So $`(DP(n,,N))`$ is a sequence of Eulerian posets with $`cd`$-coefficients $`[dc^{n4}d]`$ not bounded below. (A formula of Ehrenborg and Readdy () gives directly that the $`cd`$-index of $`DP(n,,N)`$ is $`(N+1)c^nN(cc2d)^{n/2}`$.) $`\mathrm{}`$
In Bayer and Hetyei discuss constructions of Eulerian posets whose normalized $`L`$-vectors converge to sums of $`L`$-vectors of non-Eulerian Billera-Hetyei limit posets. (A few examples are found in \[2, Appendix A\].)
###### Lemma 2
For every odd $`n7`$ the coefficient of $`dc^{n4}d`$ as a function of Eulerian posets has no lower bound.
Proof: Write $`C^{n+1}`$ for the chain of rank $`n+1`$. Let
$$P^I(N)=D_{[1,2]}^{N+1}D_{[3,n3]}^{N+1}D_{[4,n2]}^{N+1}D_{[n1,n]}^{N+1}(C^{n+1});$$
let
$$P^{II}(N)=D_{[1,n3]}^{N+1}D_{[3,n2]}^{N^2}D_{[4,n]}^{N+1}(C^{n+1});$$
and let
$$P^{III}(N)=D_{[1,n]}^{N^4}(C^{n+1}).$$
Create a poset $`P(N)`$ from these three posets by identifying the elements of $`P^{II}(N)`$ with the elements of $`P^I(N)`$ at ranks 0, 1, 2, $`n1`$, $`n`$, and $`n+1`$, and then identifying the elements of $`P^{III}(N)`$ with the elements of $`P^I(N)`$ and $`P^{II}(N)`$ only at ranks 0 and $`n+1`$. The doubles $`DP(N)`$ of these posets are Eulerian, and the normalized $`L`$-vectors converge as $`N`$ goes to infinity. Write $`L_Q(DP)=lim_N\mathrm{}L_Q(DP(N))/f_{[1,n]}(DP(N))`$. Then $`L_Q(DP)=L_Q(DP(n,_1))+L_Q(DP(n,_2))+L_Q(DP(n,_3))`$, where $`_1=\{[1,2],[3,n3],[4,n2],[n1,n]\}`$, $`_2=\{[1,n3],[3,n2],[4,n]\}`$, and $`_3=\{[1,n]\}`$. The only nonzero $`L_Q`$ for which $`\{1,2,n1,n\}_eQ`$ are $`L_{\{1,2,n1,n\}}(DP)=1`$, $`L_{[1,n]\{3\}}(DP)=1`$ and $`L_{[1,n]\{n2\}}(DP)=1`$, so by equation (1) the coefficient of $`dc^{n4}d`$ in the $`cd`$-index of $`DP`$ is $`4`$. This is the limit as $`N`$ goes to infinity of $`1/f_{[1,n]}(DP(N))`$ times $`[dc^{n4}d]`$ for $`DP(N)`$. So $`(DP(N))`$ is a sequence of Eulerian posets with $`cd`$-coefficients $`[dc^{n4}d]`$ not bounded below. (In fact, a flag vector calculation gives the coefficient of $`dc^{n4}d`$ for $`DP(N)`$ as $`4(N^2N^4)`$.) $`\mathrm{}`$
The proof of Lemma 2 asserts that $`DP(N)`$ is Eulerian. It is easy to check by equation (2) that $`L_Q(DP(n,_1))+L_Q(DP(n,_2))+L_Q(DP(n,_3))=0`$ if $`Q`$ is not an even set. This condition must hold if every $`DP(N)`$ is an Eulerian poset. But to prove that $`DP(N)`$ is Eulerian requires us to show that every interval of the poset has the same number of elements of even rank and of odd rank. We show the details in one case. Let $`[x,y]`$ be an interval of $`P(N)`$ with $`x`$ of rank 2 and $`y`$ of rank $`n1`$. For the Eulerian condition to hold on corresponding intervals in $`DP(N)`$, the interval $`[x,y]`$ of $`P(N)`$ must have one more element of even rank than of odd rank. If $`x`$ and $`y`$ are in the subposet $`P^{III}(N)`$, then $`[x,y]`$ has exactly one element of each rank, so the condition is met. Suppose $`x`$ and $`y`$ are identified elements of $`P^I(N)`$ and $`P^{II}(N)`$. In the open interval $`(x,y)`$ in $`P^I(N)`$, ranks 3 and $`n2`$ each have $`N+1`$ elements and each other rank has $`(N+1)^2`$ elements. In the open interval $`(x,y)`$ in $`P^{II}(N)`$, each rank has $`N^2`$ elements. So the number of even-rank elements in $`[x,y]`$ is $`2+((N+1)^2+N^2)(n5)/2`$, and the number of odd-rank elements in $`[x,y]`$ is $`2(N+1)+2N^2+((N+1)^2+N^2)(n7)/2`$. The difference is 1. Note that neither $`P^I(N)`$ nor $`P^{II}(N)`$ satisfies the Eulerian condition for $`[x,y]`$ by itself. The two subposets balance each other to achieve the Eulerian property. This works for all intervals.
###### Lemma 3
The coefficient of $`ccdcc`$ as a function of rank 7 Eulerian posets has no lower bound.
Proof: The following limit poset is given in Appendix A of . Let $`P^I(N)=D_{[1,2]}^ND_{[2,6]}^N(C^7)`$ and $`P^{II}(N)=D_{[1,5]}^ND_{[5,6]}^N(C^7)`$. Let $`P(N)`$ be formed from these two posets by identifying the elements at ranks 0, 1, 6, and 7. The double $`DP(N)`$ of this poset is Eulerian. In the limit, the normalized $`L`$-vector includes the following values: $`L_{34}=L_{1234}=L_{3456}=0`$, and $`L_{123456}=1`$. These are the $`L_Q`$ that contribute to the coefficient of $`ccdcc`$ in the $`cd`$-index, $`[ccdcc]=2(L_{34}+L_{1234}+L_{3456}+L_{123456})=2`$. As argued before, this gives a sequence of Eulerian posets with $`cd`$-coefficients $`[ccdcc]`$ not bounded below. (In fact, for $`DP(N)`$, $`[ccdcc]=2(N1)^2`$.) $`\mathrm{}`$
###### Lemma 4
Let $`u`$ and $`v`$ be $`cd`$-words. If the coefficient of $`u`$ as a function of Eulerian posets has no lower bound, then the coefficients of $`uv`$ and $`vu`$ as functions of Eulerian posets have no lower bounds.
Proof: In Stanley considers a “join” operation, which produces an Eulerian poset $`PQ`$ from two Eulerian posets $`P`$ and $`Q`$. He shows that the $`cd`$-indices satisfy $`\mathrm{\Phi }_{PQ}(c,d)=\mathrm{\Phi }_P(c,d)\mathrm{\Phi }_Q(c,d)`$. Let $`u`$ be a $`cd`$-word of length $`m`$ and $`v`$ a $`cd`$-word of length $`n`$. Let $`B`$ be the rank $`n+1`$ Boolean algebra. Every $`cd`$-word of length $`n`$ has a positive coefficient in the $`cd`$-index of $`B`$. (This is proved most easily from the Ehrenborg-Readdy formula for the $`cd`$-index of a pyramid in .) Let $`P_N`$ be a sequence of rank $`m+1`$ Eulerian posets for which $`\underset{N\mathrm{}}{lim}[u]_{P_N}=\mathrm{}`$. Then $`\underset{N\mathrm{}}{lim}[uv]_{P_NB}=\underset{N\mathrm{}}{lim}[vu]_{BP_N}=\mathrm{}`$. $`\mathrm{}`$
We now complete the proof of the theorem. Every $`cd`$-word not included in Parts 1 and 2 of the theorem contains the subword $`ccdcc`$ or a subword of the form $`dc^{n4}d`$ for $`n41`$. Thus, by Lemmas 1 through 4, the coefficients of these $`cd`$-words as functions of Eulerian posets have no lower bounds. $`\mathrm{}`$
Acknowledgments: The referee was most generous with suggestions for improving the paper. I also wish to thank Gábor Hetyei for introducing me to the construction of Eulerian posets crucial to this paper, and for other helpful discussions. |
no-problem/0001/hep-th0001181.html | ar5iv | text | # Bound states in the three-dimensional ϕ⁴ model.
Three dimensional statistical systems with global $`Z_2`$ symmetry, the Ising model being the classic example, lie in the universality class of the $`\varphi ^4`$ field theory. Critical phenomena in such systems are known to be accurately described by simple perturbative methods . Given the success of perturbative methods, the appearance of excited states in the broken symmetry phase of the critical Ising model and in the 3D $`\varphi ^4`$ theory, which were found in , comes as a surprise, since scalar field theory apparently describes only one particle as long as interactions can be treated perturbatively. We shall argue that this is not the case and there is room for a rich spectrum of excitations in the broken symmetry phase of the $`\varphi ^4`$ theory even if the interaction is weak.
The excited states show up as poles of the correlation functions in the complex momentum plane and give visible contribution to certain universal quantities. The first excited state lies just below the two-particle threshold: its mass is $`M=1.83(3)m`$ , where $`m`$ is the mass gap. The closeness of $`M`$ to the threshold suggests the interpretation of this excitation as a weakly coupled bound state of two elementary excitations.
Indeed, the two-particle forces are attractive in the broken-symmetry phase of the $`\varphi ^4`$ theory, and bound states of two or more elementary quanta may in principle be formed. In four dimensions, these states indeed exist in the low-temperature regime, but disappear as the continuum limit is approached , in agreement with triviality.
In this letter we address the three-dimensional case. Numerical simulations show that non-perturbative states survive the continuum limit in 3d . We shall argue that these states can be identified with the multiparticle bound states. By considering the Ising realization of the $`\varphi ^4`$ model and using duality we shall also show that there is an exact one-to-one mapping between the bound states of the Ising model (and hence, thanks to universality, also of the $`\varphi ^4`$ theory) and the glueball states of the gauge Ising model.
Bound states in the $`\varphi ^4`$ theory. We consider the $`\varphi ^4`$ theory:
$$S=d^3x\left[\frac{1}{2}(\varphi )^2+\lambda (\varphi ^2v^2)^2\right].$$
The field $`\sigma =\varphi v`$ acquires the mass $`m^2=8\lambda v^2`$ at the tree level and is reasonably weakly coupled in the critical regime , since the critical value of the dimensionless interaction constant, $`\lambda /m`$, is not too big. The forces between elementary quanta of the field $`\sigma `$ are attractive: This can be shown by inspecting the scattering of two non-relativistic particles. There are three leading-order diagrams contributing to this process, shown in Fig. 1.
The contact interaction, diagram $`(a)`$, contributes $`12\lambda `$ to the scattering amplitude. As a first approximation, we can neglect altogether the momentum flow in diagrams $`(b)`$ and $`(c)`$. In this way, diagrams $`(b)`$ and $`(c)`$ contribute $`12\lambda `$ and $`72\lambda `$, respectively. Collecting the three terms together, we get for the amplitude $`𝒜=48\lambda `$. The positive sign of the amplitude means that the particles attract each other.
In this limit, the non-relativistic Hamiltonian describing the interaction of two particles is
$$H=\frac{𝐩_1^2}{2m}+\frac{𝐩_2^2}{2m}\frac{12\lambda }{m^2}\delta (𝐱_1𝐱_2),$$
(1)
which reproduces the field-theory scattering amplitude in the Born approximation . Note the factor of $`1/(2m)^2`$, which accounts for the relativistic normalization of the wave functions in field theory. The quantum-mechanical system of two dimensional particles interacting via a $`\delta `$-type potential develops short-distance divergences and requires a regularization . In our case, the cutoff is proportional to $`m`$, because at momenta of order $`m`$ the non-relativistic approximation becomes inadequate.
The binding energy $`\mathrm{\Delta }m`$, $`\mathrm{\Delta }1`$, is determined by the Schrödinger equation for the relative motion wave function:
$$\left(\frac{1}{m}^2+\mathrm{\Delta }m\right)\psi (𝐫)=\frac{12\lambda }{m^2}\psi (0)\delta (𝐫),$$
(2)
which, after the Fourier transform, gives the consistency condition:
$$1=\frac{12\lambda }{m^2}\frac{d^2p}{(2\pi )^2}\frac{1}{p^2/m+\mathrm{\Delta }m}=\frac{3\lambda }{\pi m}\mathrm{ln}\frac{\mathrm{\Lambda }^2}{\mathrm{\Delta }m^2}.$$
(3)
Taking $`\mathrm{\Lambda }^2=\kappa m^2`$, we get for the binding energy:
$$\mathrm{\Delta }=\kappa \mathrm{exp}\left(\frac{\pi m}{3\lambda }\right).$$
(4)
The constant $`\kappa `$ cannot be determined in the approximation used above and requires the inclusion of loop corrections and of the momentum dependence of diagrams $`b`$ and $`c`$. These corrections can be systematically taken into account in the approach based on the Bethe-Salpeter equation, which will be reported elsewhere ; here we only quote the result: $`\kappa =4/\sqrt{3}`$.
The above discussion tells us that only one bound state of two elementary quanta may exist in the broken phase of the $`\varphi ^4`$ model in three dimensions. This is in accord with numerical simulations . However, a rich spectrum of bound states, with different values of the angular momentum, can be found if we look at the bound states of three or more elementary quanta. Bound states of $`n2`$ particles could be studied in principle within the Bethe-Salpeter approach, but even the non-relativistic approximation described above becomes too complicated as the number of particles involved in the bound state increases. Up to our knowledge, the only existing result in the literature is a discussion of the $`0^+`$ bound state of three particles which can be found in . The counterpart of this state is also seen in the numerical simulations . An easier way to understand the qualitative features of the bound states is to study the Ising model in the low-temperature phase far below criticality.
The Ising model at low temperature. To proceed in understanding the structure of the bound state spectrum in the $`\varphi ^4`$ theory, let us address the same problem in the case of the low temperature phase of the 3d Ising model. The Ising model and the $`\varphi ^4`$ theory belong to the same universality class. Therefore they should have the same spectrum in the critical limit. Indeed, in it was shown in a Monte Carlo study that the two models share the same spectrum of non-perturbative states.
The main advantage of working with the Ising model is that the spectrum can be analyzed in a low temperature expansion of the transfer matrix (see ). The starting point of this expansion is to ignore the interactions between time-slices. In this approximation, the vectors that correspond to a single configuration on a time-slice become eigenvectors of the transfer matrix. The eigenvalues of the transfer matrix are directly given by the number of frustrated bonds.
In this framework, the bound state of two particles is obtained by flipping two nearby spins. If we flip two spins which are separated by a distance of more than one lattice spacing, the total number of frustrated bonds is exactly twice that of a single particle. On the contrary, if we flip two nearby spins, the number of frustrated bonds is reduced by two. This difference is the binding energy of the bound state. The fact that we are constrained to choose the two spins in nearest neighbor sites is another way to state that the attractive force between the two particles has a very short range. This procedure can be iterated, and one can construct clusters of $`k`$ nearby flipped spins which have a non-zero binding energy and are related to bound states of higher mass.
It is also possible to select bound states of non-zero angular momentum. These combinations can be constructed by using standard group-theoretical techniques. They are discussed in . Let us only recall here two results which are of interest for the present analysis.
On a (2+1) dimensional lattice the group of rotations and parity reflections is reduced to the dihedral group $`D^4`$ which has four one-dimensional and one two-dimensional irreducible representations. The $`0^+`$ state is associated to the trivial one-dimensional irreducible representation. The $`2^+`$ and $`2^{}`$ states are degenerate and correspond to two other one-dimensional representations. The simplest possible realization of the $`2^{}`$ state is represented in Fig. 2. At least three flipped spins are needed to create such a state (the $`2^+`$ state could be also realized in a simpler way, but a general theorem forces its mass to be the same of the $`2^{}`$ one in the continuum limit). Thus we expect that this state should appear as a bound state of at least three elementary quanta. The last one-dimensional irreducible representation corresponds to the $`0^{}`$ state. The simplest possible representation of this state is reported in Fig. 3 and requires at least four flipped spins. Finally, all the states with odd angular momentum are collected in the two dimensional representation. As in the $`2^+`$,$`2^{}`$ case they are all degenerate in parity.
Let us summarize the pattern of bound states as it emerges from these considerations. With two elementary quanta we may only create a bound state with quantum numbers $`0^+`$. We shall denote it with $`0^{+,}`$ to distinguish it from the single particle excitation which has the same quantum numbers. With three quanta we may create a pair of bound states $`2^+`$ and $`2^{}`$ and a new $`0^+`$ excitation that we shall call $`0^{+,}`$. With four particles we shall have a $`2^{\pm ,}`$ pair, a $`0^{+,}`$ state and a new state with quantum numbers $`0^{}`$. With five particles a new pair of states of the type $`1^\pm `$ appears, and so on.
Let us now make the crucial assumption that the binding energy is always much smaller that the mass of the constituent particles: then the mass of each bound state will be essentially given by the number of particles needed for its formation, minus a small correction given by the binding energy. In this way one obtains a detailed prediction of the qualitative features of the spectrum, based only on the interpretation of the states as bound states and the group-theoretical facts described above.
Numerical simulations show that not only these predictions are fulfilled, but that the same qualitative features of the spectrum survive well beyond the low-temperature regime and into the scaling region. Connected correlators of several composite operators are computed in Monte Carlo simulations and used to extract the spectrum in the various angular momentum channels. The measured masses exactly follow the pattern suggested above. This is a strong indication that the spectrum is indeed made of bound states of the elementary quanta, and that these bound states survive in the continuum limit.
The fact that states with angular momentum $`2`$ are lighter than that with angular momentum $`0^{}`$ is rather unexpected in standard quantum field theory. However it is a well established feature of the glueball spectrum in (lattice) gauge theories. This is the first hint that the spectrum of bound states of the 3d Ising model has something to do with the glueballs of gauge theory. The reason for this is obviously the duality between spin model and gauge model, that we will now discuss.
Duality. Duality is usually expressed as an exact equality between partition functions in infinite volume, hence in principle it does not automatically implies that the two theories must have the same spectrum. However it can be shown that duality holds not only in the thermodynamic limit but also for finite lattices. This correspondence is not trivial and requires a careful analysis of the boundary conditions of the two models . Since the approach to the thermodynamic limit of the finite volume partition function is driven by the full spectrum of excited states of the theory, the finite volume duality implies that the spectra of the two models must coincide. In particular the bound state of quantum numbers $`J^P`$ of the Ising spin model coincides (hence has exactly the same mass) with the $`J^P`$ glueball of the gauge Ising model. This identification has two interesting consequences. The first one is that the Bethe-Salpeter approach to the calculation of bound states in $`\varphi ^4`$ theory, described above, becomes an analytical tool to evaluate the masses of the first states of the glueball spectrum of the gauge Ising model. In principle (apart from technical difficulties) this could be extended to the whole glueball spectrum, and represents a powerful alternative to the Isgur-Paton model, which in the case of the Ising gauge model gives rather poor result . A second, more important consequence of this identification is that it gives a possible explanation for a peculiar degeneracy observed in the Monte Carlo estimates of the glueball masses in the 3d gauge Ising model for which no alternative explanation exists. This intriguing feature of the spectrum can be immediately appreciated by looking at Tab. 1, (data taken from ). In Tab. 1 the asterisks denote the radial excitations, thus $`0^+`$ is the lowest state in the family with quantum numbers $`0^+`$, $`0^{+,}`$ the next one and $`0^{+,}`$ the third one. $`0^+`$ is related by duality to the single-particle state of the 3d Ising model, $`0^{+,}`$ to the first bound state and so on. The degeneracy involves the pairs $`(0^{+,},2^\pm )`$, $`(0^{},2^{\pm ,})`$, $`(0^,,1^\pm )`$ (the last one is only roughly established, it holds within the errors). Let us stress that this degeneracy has no obvious physical reason. The only one which we would expect on physical grounds is the one between $`J^+`$ and $`J^{}`$ states (for $`J0`$) (see for a discussion of this point) which is indeed present and has been already taken into account in Tab. 1. Moreover it is not explained by the Isgur-Paton model (last column of Tab. 1). This degeneracy seems to be a rather deep phenomenon since it is also present in the glueball spectrum of the SU(2) model in (2+1) dimensions. In the third column of Tab. 1 we report the data on SU(2) obtained by Teper (the underlined values are our extrapolations of the finite-$`\beta `$ values reported in ). One can easily see that the same pattern of degeneracy is present both in the SU(2) and in the Ising gauge spectra. On the contrary, all these degeneracies seem to be lost in the $`SU(N)`$, $`(N>2)`$ case (see the data reported in ).
This degeneracy is well explained by the interpretation of the glueballs as bound states of the dual spin model: the degenerate glueball states are simply bound states of the same number $`n_c`$ of constituent particles, namely $`n_c=3,4,5`$ respectively for the $`(0^{+,},2^\pm )`$, $`(0^{},2^{\pm ,})`$ and $`(0^,,1^\pm )`$ degeneracies. In fact, according to the assumption stated above, the major contribution to the mass of the bound state is given by the number of elementary quanta involved. The dependence on the various quantum numbers is encoded in the binding energies $`\mathrm{\Delta }`$ which however give only a small correction to the mass. This results in the approximate degeneracies observed in the simulations. Notice that we do not expect to have exact degeneracies, since there is no reason to expect the binding energy to be exactly the same for different bound states.
Conclusions. Our analysis shows that bound states are very likely to exist in the broken-symmetry phase of 3d $`\varphi ^4`$ and Ising models. Their existence can be inferred both from the Bethe-Salpeter equation of the field theory and the strong-coupling analysis of the spin model, and is strongly confirmed by numerical simulations.
Duality allows one to apply the same analysis to the glueball spectrum of the 3d Ising gauge model, which exactly coincides with the one of the spin model. The interpretation of the latter as a spectrum of bound states provides a natural explanation for several features of the glueball spectrum, such as its peculiar dependence on the angular momentum and its characteristic degeneracies.
Acknowledgements. We would like to thank M. Campostrini, A. Pelissetto, P. Rossi and E. Vicari for fruitful discussions, and the organizers of the INTAS meeting 1999 for providing the stimulating environment in which this work was started. This work was partially supported by the European Commission TMR programme ERBFMRX-CT96-0045. The work of K.Z. was supported by the PIMS Postdoctoral Fellowship and by NSERC of Canada. |
no-problem/0001/cond-mat0001187.html | ar5iv | text | # Comment on ”Observation of Spin Injection at a Ferromagnet-Semiconductor Interface”, by P.R. Hammar et al.
In a recent Letter Hammar et al. claim the observation of injection of a spin-polarized current in a two-dimensional electron gas (2DEG). This is an important observation, since, despite considerable effort of several groups, all attempts to realize spin-injection into a 2DEG using purely electrical measurements have failed sofar. However, in my opinion the claim made in is not correct, and the observed behaviour can be explained by a combination of a magneto resistance (Hall) effect with a spin-independent rectification effect due to the presence of a metal-semiconductor junction.
The interpretation of the data depends crucially on the theoretical description formulated in . A 2DEG is considered, connected to a ferromagnetic electrode. The key ingredient is that the electron spin is conserved in the 2DEG, but the spin-orbit interaction of the Rashba type induces an asymmetry between electrons moving in a particular direction with different spin directions. As a result rectification is predicted, which depends on the direction $`M`$ of the magnetization of the ferromagnetic electrode.
However, at low currents linear response theory requires that $`V(I)=V(I)`$. Rectification can only occur for currents $`I`$ beyond the linear transport regime, under the condition that the transport properties of the electrons are energy dependent, and that the current in both directions is carried by electrons with different energies. However, no energy scale which determines the onset of rectification is discussed in , where the rectification depends on the direction of the current only. Therefore the theory of cannot be correct, and it is not possible to detect spin injection in this way.
Turning attention to the experiment, it should be noted first that the authors test the prediction of spin-dependent current rectification by performing a four-terminal measurement. They reverse the current direction by interchanging one current and one voltage lead. This however is not a critical test of the theory, since predicts a change in resistance when the current direction is reversed, but the current and voltage leads themselves are not changed. The reciprocity theorem for multi-terminal measurements states that (in the linear transport regime) the resistance should be invariant under interchange of the current and voltage leads, accompanied by a reversal of the magnetic field $`B`$. Although in the experiment only one current and voltage probe is interchanged, the result should be almost equivalent to interchanging both pairs, since the other pair is connected by the low resistance ferromagnetic electrode, and interchanging this pair should make little difference on the measured resistance. Therefore the observed behaviour is consistent with the reciprocity theorem, and the change in resistance can possibly be caused by a Hall effect due to the reversal of the magnetic field (e.g. resulting from the fringe fields at the edges of the ferromagnetic electrode). An estimate of the effect on the resistance is difficult however, due to the uncertainty where the current from the 2DEG actually enters the ferromagnet.
The second important aspect is that the authors study transport through a semiconductor-metal system, which contains a barrier. Judged from the measured resistance, which is low and comparable to the resistance of the 2DEG itself, the barrier is not very effective. However, on the voltage scale of $`0.1V`$ which the authors use, some sort of rectification might occur, leading to $`V(I)V(I)`$. This effect will not depend on the direction of the magnetization M, since the electron spin is not relevant for rectification in metal-semiconductor junctions. In other words: $`V(I)_M=V(I)_M`$.
The presented data therefore does not exclude that the observed effects are due to a combination of a (magnetization independent) rectification, combined with a magneto resistance (Hall) effect. It is however possible to distinguish between this explanation and that of the authors. To support their case they should show that the rectification behaviour observed in the $`V(I)`$ characteristics changes sign, when the magnetization $`M`$ is reversed. In other words, for a fixed configuration of current and voltage leads, they should show explicitly that the relation $`V(I)_M=V(I)_M`$ is obeyed, for the full current range. In particular this implies that when $`M`$ switches direction when the coercive field is exceeded, and a change $`\mathrm{\Delta }R`$ is observed for positive $`I`$, an opposite change $`\mathrm{\Delta }R`$ should be observed at negative $`I`$. This behaviour can then clearly be distinguished from a Hall effect, which predicts that the change in resistance $`\mathrm{\Delta }R`$ will have the same sign for both current directions. |
no-problem/0001/quant-ph0001053.html | ar5iv | text | # Positive Maps Which Are Not Completely Positive
## Abstract
The concept of the half density matrix is proposed. It unifies the quantum states which are described by density matrices and physical processes which are described by completely positive maps. With the help of the half-density-matrix representation of Hermitian linear map, we show that every positive map which is not completely positive is a difference of two completely positive maps. A necessary and sufficient condition for a positive map which is not completely positive is also presented, which is illustrated by some examples.
Entanglement has become one of the central concept in quantum mechanics, specially in quantum information. A quantum state of a bipartite system is entangled if it cannot be prepared locally or it cannot be expressed as a convex combination of direct product states of two subsystems. This kind of state is also called inseparable. Though easily defined, it is very hard to recognize the inseparability of a mixed state of a bipartite system.
An operational-friendly criterion of separability was proposed by Peres peres . This criterion is based on the observation that the partial transposition of a separable density matrix remains positive semidefinite. That the partial transposition of a density matrix is not positive semidefinite infers the inseparability of the density matrix. This provides a necessary condition for the separability. There exist entangled states with positive partial transposition, which exhibit bound entanglement bound . Examples of such kind were first provided in Ref. phoro and then constructed in Ref. upb systematically with the help of unextendible product basis.
Later on, by noticing that the transposition is a positive map (to be described later in details), a necessary and sufficient condition of the separability was proposed in Ref. horox : A bipartite state is separable iff it is still positive semidefinite under all positive maps acting on a subsystem. In other words, a density matrix of a bipartite system is inseparable iff there exists a positive map acting on a subsystem such that the image of the density matrix is not positive semidefinite. Hence the inseparability can be recognized by positive maps which are not completely positive.
Completely positive maps, which are able to describe the most general physical process kraus , are better understood than positive maps which are not completely positive. Positive maps from Hilbert space $`_2`$ (two-dimensional) to $`_2`$ or $`_3`$ are all decomposable decp , which are characterized by transposition and completely positive maps only. As a result in the cases of $`_2\times _2`$ and $`_2\times _3`$ the transposition criterion is also a sufficient condition for separability horox . Therefore further understandings of positive maps which are not completely positive will facilitate the recognition and classification of the inseparable mixed states.
As a direct calculation will show, under an orthonormal and complete basis $`\{|n\}_{n=0}^{L1}`$ the transposition of an $`L\times L`$ matrix $`\rho `$ can be expressed as
$$\rho ^T=\text{Tr}\rho \underset{m,n=0}{\overset{L1}{}}\sigma _{mn}\rho \sigma _{mn}^{},$$
(1)
where $`\sigma _{mn}=(|mn||nm|)/\sqrt{2}`$. We see immediately that the transposition is a difference of two completely positive maps. And this statement will be proved to hold true for all positive maps which are not completely positive, which will be also characterized by a necessary and sufficient condition in this Letter.
For this purpose we shall first develop an extremely useful tool — half density matrix that unifies the description of the quantum states and physical processes. And then we derive a half-density-matrix representation of an arbitrary Hermitian linear map from which our main results are obtained. Along with the introduction of the concept of half density matrix, its relations to the ensembles and the purifications of mixed states are clarified and its applications in the field of quantum information such as quantum teleportation tele are also presented.
Normally, quantum states, pure or mixed, are described by density matrices, positive semidefinite operators (whose eigenvalues are all nonnegative) on the Hilbert space of the system. Because of its property of positive semidefinite the density matrix $`\rho `$ can always be written as $`\rho =TT^{}`$ where matrix $`T`$ is called here as the half density matrix (HDM) for a quantum state.
Obviously, the half density matrix for a given density matrix is not unique. For example $`TU`$ and $`T`$ are corresponding to the same mixed state $`\rho =TT^{}`$ whenever $`U`$ is unitary. Generally, the half density matrix $`T`$ for a mixed state $`\rho `$ of an $`s`$-level system is an $`s\times L`$ rectangular matrix with $`Lr=\text{Rank}(\rho )`$, i.e., a linear map from an $`L`$-dimensional Hilbert space $`_L`$ to an $`s`$-dimensional Hilbert space $`_s`$. The rank $`r`$ of the density matrix equals to the rank of the half density matrix $`T`$ and $`r=1`$ for pure state.
Under an orthonormal and complete bases $`\{|m\}_{m=0}^{s1}`$ and $`\{|n\}_{n=0}^{L1}`$ of Hilbert spaces $`_s`$ and $`_L`$, a typical half density matrix of dimension $`s\times L`$ can be constructed as $`T_e=V^{}(\mathrm{\Delta }_s,0_{s\times (Ls)})`$ , where $`\mathrm{\Delta }_s`$ is a diagonal $`s\times s`$ matrix formed by all the square roots of the eigenvalues of $`\rho `$ (the singular numbers of $`T_e`$) and $`V`$ is an $`s\times s`$ unitary matrix diagonalizing the density matrix $`\rho `$. Obviously we have $`\rho =T_eT_e^{}`$. As a direct result of the singular number decomposition of an arbitrary matrix sglr we have the following
Lemma: Given a density matrix $`\rho `$ of an $`s`$-level system, an $`s\times L`$ matrix $`T`$ is a half density matrix for $`\rho `$, i.e., $`\rho =TT^{}`$, if and only if there exists an $`L\times L`$ unitary matrix $`U`$ such that $`T=T_eU`$.
When written explicitly in the established basis, the relation $`\rho =T_eT_e^{}`$ results in exactly an ensemble formed by all the eigenvectors $`V^{}|m`$ of the mixed state, which is referred to as eigen-ensemble here. In this way every half density matrix $`T`$ of a mixed state $`\rho `$ corresponds to an ensemble of the mixed state. The above Lemma tells us that every ensemble of a given mixed state is related to the eigen-ensemble by a unitary matrix which has been proved by other means hjw . Therefore the half density matrix of a density matrix is physically equivalent to an ensemble of the corresponding mixed state.
Every mixed state $`\rho `$ admits a purification schu , a pure state $`|\varphi `$ of a bipartite system including this system as a subsystem such that $`\rho =\text{Tr}_2|\varphi \varphi |`$. Under the established basis, a general pure state in $`_s_L`$ is
$$|\varphi =\underset{m=0}{\overset{s1}{}}\underset{n=0}{\overset{L1}{}}C_{mn}|m_1|n_2:=T|\mathrm{\Phi }_L.$$
(2)
Here pure state $`|\mathrm{\Phi }_L=_{n=0}^{L1}|n_1|n_2`$ lives in Hilbert space $`_L_L`$ and $`T`$ is a linear map from $`_L`$ to $`_s`$ acting on the first $`L`$-dimensional Hilbert space $`_L`$. Under the given bases linear map $`T`$ is represented by an $`s\times L`$ rectangular matrix with matrix elements given by $`m|T|n=C_{mn}`$. When the pure state $`|\varphi `$ is normalized we have $`\text{Tr}(T^{}T)=1`$. Alternatively, we also have $`|\varphi =T^T|\mathrm{\Phi }_s`$ with state $`|\mathrm{\Phi }_s`$ defined in $`_s_s`$ similar to state $`|\mathrm{\Phi }_L`$. The linear map $`T^T:_s_L`$ acts on the second $`_s`$ and it is represented by the transposition of $`T`$ under the established basis.
Tracing out the second system we obtain a reduced density matrix of the first subsystem $`\rho _s=TT^{}`$ and similarly $`\rho _L=T^TT^{}`$ for the second subsystem. That is to say $`T`$ is the HDM for the reduced half density matrix $`\rho _s=\text{Tr}_L|\varphi \varphi |`$ of the first subsystem and its transposition $`T^T`$ for $`\rho _L=\text{Tr}_s|\varphi \varphi |`$. Thus a one-to-one correspondence between a normalized pure state $`|\varphi `$ of a bipartite system, a purification, and a linear map $`T`$ satisfying $`\text{Tr}(T^{}T)=1`$, a half density matrix, is established. Therefore a half density matrix $`T`$ is also equivalent to a purification of the mixed state. The linear map $`T`$ is also referred to as the half density matrix of a bipartite pure state, which is unique by definition. If $`s=L`$ the polar decomposition of $`T`$ will result in the useful Schmit-decomposition.
The pure bipartite state is separable iff the rank of its half density matrix is one. For a pure product state $`|v_s|w_L`$ the half density matrix is $`|vw^{}|`$ where $`|w^{}`$ is the index state of state $`|w`$ defined by $`|w^{}=w|\mathrm{\Phi }_L`$ schu . For later use we define a mirror operator $`M_L=|\mathrm{\Phi }_L\mathrm{\Phi }_L|`$ in the Hilbert space $`_L_L`$, which has the property $`w^{}|M_L|w^{}=|ww|`$. The partial transposition of the mirror operator $`X=M_L^{T_1}`$ is in fact the exchanging (or swapping) operator introduced by Werner werner (denoted as $`V`$ there).
As an application, we consider a state $`|\varphi _{12}|\psi _3`$ of a tripartite system with all three subsystems 1,2 and 3 being $`s`$-level systems. Let $`T_\varphi `$ denote the HDM of the bipartite state $`|\varphi _{12}`$ and $`|k;l_{23}=T_{kl}|\mathrm{\Phi }_s_{23}`$ denote an orthonormal complete basis for systems 2 and 3 with HDMs $`T_{kl}`$ satisfying $`\mathrm{Tr}T_{kl}T_{k^{}l^{}}^{}=\delta _{kk^{}}\delta _{ll^{}}`$ for orthogonality and $`_{kl}T_{kl}𝒪T_{kl}^{}=\text{Tr}𝒪`$ for completeness. We then have expansion
$$|\varphi _{12}|\psi _3=\underset{k,l=0}{\overset{s1}{}}T_\varphi T_{kl}^{}|\psi _1|k;l_{23}.$$
(3)
This describes exactly a quantum teleportation of an unknown quantum state $`|\psi `$ from system 3 to system 1 when both $`T_\varphi `$ and $`T_{kl}`$ are unitary or state $`|\varphi _{12}`$ and basis $`|k;l_{23}`$ are maximally entangled states ys .
The mixed state $`\rho _{sL}`$ of an ($`s\times L`$) bipartite system can also be equivalently and conveniently characterized by HDMs of pure bipartite states. Let $`\{|\varphi _i,p_i\}_{i=1}^R`$ be an ensembles of $`\rho _{sL}`$ we have
$$\rho _{sL}=\underset{i=1}{\overset{R}{}}p_i|\varphi _i\varphi _i|=\underset{i=1}{\overset{R}{}}A_iM_LA_i^{},$$
(4)
where we have denoted $`A_i`$ as the half density matrix of the pure state $`\sqrt{p_i}|\varphi _i`$, i.e., $`\sqrt{p_i}|\varphi _i=A_i|\mathrm{\Phi }_L`$. Obviously HDMs defined by $`\stackrel{~}{A}_i=_jU_{ij}A_j`$ characterize the same density matrix whenever $`U`$ is a unitary matrix. And from the Lemma we know that given a density matrix this is the only freedom that the half density matrices can have.
The density matrix expressed in the form as in Eq.(4) can be easily manipulated by local operations. For example the density matrix under operation $`U_sU_L^{}`$ is transformed to density matrix specified by half density matrices $`U_sA_iU_L^{}`$. The tilde operation $`\rho \stackrel{~}{\rho }`$ introduced in Ref. 2qubit to obtain explicitly the entanglement of formation of two-qubit is simply an anti-linear transformation $`\stackrel{~}{A}_i=\text{Tr}A_i^{}A_i^{}`$.
In the discussions above we have defined the half density matrices for the states of a single system, for pure bipartite states, and for mixed bipartite states. The physical processes can also be characterized by half density matrices. A general physical process which can include unitary evolutions, tracing out one system, and general measurements is described by trace-preserving completely positive maps kraus ; schu , which is a special kind of Hermitian linear map.
A Hermitian linear map sends linearly Hermitian operators to Hermitian operators that may live in different Hilbert spaces. Let $``$ denote a general Hermitian linear map from Hilbert space $`_L`$ to $`_s`$. Because the map $``$ is linear the map $`_L`$ is also a Hermitian linear map from $`_L_L`$ to $`_s_L`$, where $`_L`$ denotes the identity map on $`_L`$. Recalling that the mirror operator $`M_L=|\mathrm{\Phi }_L\mathrm{\Phi }_L|`$ is defined on $`_L_L`$, its image
$$H_{sL}=_L(M_L)$$
(5)
is therefore a Hermitian operator in $`_s_L`$. Let $`|\psi _i^+=A_i|\mathrm{\Phi }_L`$ $`(ii_+)`$ denote the eigenvectors corresponding to the positive eigenvalues of $`H_{sL}`$ and $`|\psi _i^{}=B_i|\mathrm{\Phi }_L`$ $`(ii_{})`$ to negative eigenvalues of $`H_{sL}`$, where $`i_\pm `$ is the number of the positive/negative eigenvalues of $`H_{sL}`$. We then have
$$H_{sL}=\underset{i=1}{\overset{i_+}{}}A_iM_LA_i^{}\underset{i=1}{\overset{i_{}}{}}B_iM_LB_i^{},$$
(6)
in which the norms of the eigenvectors $`|\psi _i^\pm `$ have been taken to be the absolute values of corresponding eigenvalues. Because the eigenvectors corresponding to different eigenvalues are orthonormal we have $`\text{Tr}(A_iB_j^{})=0`$ for all $`i`$ and $`j`$. In this sense two families of half density matrices $`\{A_i\}`$ and $`\{B_i\}`$ are orthogonal to each other.
For a pure state $`P_w=|ww|`$ in the Hilbert space $`_L`$ we have $`P_w=w^{}|M_L|w^{}`$ where $`|w^{}`$ is the index state of $`|w`$. As a result we have $`(P_w)=w^{}|H_{sL}|w^{}`$. Taking into account of the linearity of the Hermitian map $``$, we finally obtain
$$(H)=\underset{i=1}{\overset{i_+}{}}A_iHA_i^{}\underset{i=1}{\overset{i_{}}{}}B_iHB_i^{},$$
(7)
where $`H`$ is an arbitrary Hermitian matrix in $`_L`$. This is called the half-density-matrix representation of a Hermitian linear map. As one result we have
$$\mathrm{\Phi }_s|_s(\mathrm{\Sigma }_{sL})|\mathrm{\Phi }_s=\text{Tr}(H_{sL}\mathrm{\Sigma }_{sL}^T)$$
(8)
for an arbitrary Hermitian matrix $`\mathrm{\Sigma }_{sL}`$ in $`_s_L`$. As another consequence, a one-to-one correspondence between the Hermitian maps $`:_L_s`$ and Hermitian matrix $`H_{sL}`$ (an observable) in $`_s_L`$ can be established
$$(H)=\text{Tr}_L(H_{sL}H^T)$$
(9)
in addition to Eq.(5).
The HDM representation of Hermitian linear map is not unique. Suppose two integers $`Mi_+`$ and $`Ni_{}`$ and let $`SU(M,N)`$ denote the pseudo-unitary group formed by $`(M+N)\times (M+N)`$ matrices satisfying $`S\eta S^{}=\eta `$ where $`\eta =I_M(I_N)`$ and $`I_{M(N)}`$ is the $`M\times M`$ ($`N\times N`$) identity matrix. If we define a family of HDMs $`\{T_i\}_{i=1}^{M+N}`$ as $`T_i=A_i`$ $`(1ii_+)`$, $`T_i=B_i`$ $`(M+1iM+i_{})`$ and $`T_i=0`$ otherwise and take an arbitrary element $`S`$ of $`SU(M,N)`$, a new family of HDMs $`\{\stackrel{~}{T}_i\}_{i=1}^{M+N}`$ defined by $`\stackrel{~}{T}_i=_jS_{ij}T_j`$ represents the same Hermitian linear map
$$(H)=\underset{i=1}{\overset{M}{}}\stackrel{~}{T}_iH\stackrel{~}{T}_i^{}\underset{j=1}{\overset{N}{}}\stackrel{~}{T}_jH\stackrel{~}{T}_j^{}.$$
(10)
A positive map is a special Hermitian linear map which maps any positive semidefinite operator to a positive semidefinite operator. A Hermitian linear map $`𝒮:_L_s`$ is positive if and only if $`\text{Tr}(𝒮(Q_L)P_s)=\text{Tr}(H_{sL}^TP_sQ_L)0`$ for all pure product state $`P_sQ_L`$ where $`H_{sL}=𝒮_L(M_L)`$. In the following $`𝒮`$ is always a positive map.
A completely positive (CP) map is a positive map which keeps its positivity when the system it acts on is embedded as a subsystem in an arbitrary larger system. That is, for a CP map $`𝒮:_L_s`$ and an arbitrary positive integer $`k`$ the induced map $`𝒮_k`$ from $`_L_k`$ to $`_s_k`$ is positive.
However it is enough to check whether the image $`H_{sL}=𝒮_L(M_L)`$ of the mirror operator $`M_L`$ is positive semidefinite or not. If it is positive semidefinite, then the negative part in the HDM representation Eq.(7) disappears, which yields exactly the operator-sum representation of a CP map schu
$$𝒮(\rho )=\underset{i=1}{\overset{i_+}{}}A_i\rho A_i^{}.$$
(11)
If the trace is preserved, we have further $`_iA_i^{}A_i=1`$. Therefore the operator-sum representation of a CP map can also be referred to as a half-density-matrix representation. Especially, if $`H_{sL}`$ equals to the identity matrix $`I_sI_L`$, the corresponding CP map is simply the trace operation $`𝒮_T(\rho )=I_s\text{Tr}\rho `$.
A positive map which is not completely positive (non-CP) is nonetheless a Hermitian map so that it has a HDM representation as Eq.(7), from which we obtain $`𝒮=𝒮_A𝒮_B`$ where two CP maps $`𝒮_{A,B}`$ are represented by HDMs $`\{A_i\}`$ and $`\{B_i\}`$ respectively. Two CP maps $`𝒮_{A,B}`$ are said to be orthogonal if their HDMs are orthogonal to each other, i.e., $`\text{Tr}(A_iB_j^{})=0`$ for all $`i,j`$. We see that $`H_{sL}`$ can not be positive semidefinite.
Conversely, if the Hermitian matrix $`H_{sL}`$ has at least one negative eigenvalue then it determines a non-CP positive map. Let $`|\psi `$ denote an eigenvector corresponding to one of the negative eigenvalues of $`H_{sL}`$ and $`P_\psi =|\psi \psi |`$. From identity (8) we see immediately that $`_s𝒮(P_\psi ^T)`$ is not positive semidefinite, i.e., map $`𝒮`$ is not completely positive. We note that the eigenspace corresponding to the negative eigenvalues of $`H_{sL}`$ contains no product state because of positivity. To summarize, we have the following
Theorem: Every positive map which is not completely positive is a difference of two orthogonal completely positive maps; A Hermitian linear map $`𝒮:_L_s`$ is positive but not completely positive if and only if for all pure product state $`P_sQ_L`$ in $`_L_s`$ we have $`\text{Tr}(H_{sL}P_sQ_L)0`$ while $`H_{sL}=𝒮_L(M_L)`$ is not positive semidefinite.
This theorem provides an obvious way to construct a non-CP positive map form $`_L`$ to $`_s`$. First, we choose a proper Hermitian matrix $`H_{sL}`$ in $`_s_L`$ satisfying the conditions specified in the above theorem. Then a non-CP positive map $`𝒮:_L_s`$ is determined by $`𝒮(\rho _L)=\text{Tr}_L(H_{sL}\rho _L^T)`$.
As the first example we consider the the exchanging operator defined in $`_L_L`$ by $`X=M_L^{T_1}`$ or explicitly
$$X=\underset{m,n=0}{\overset{L1}{}}|m,nn,m|.$$
(12)
The exchanging operator $`X`$ has two eigenvalues $`\pm 1`$ and $`\sigma _{mn}|\mathrm{\Phi }_L`$ $`(m>n)`$ are the eigenvectors corresponding to eigenvalue $`1`$. Therefore $`X`$ is not positive semidefinite and for any pure product states $`|pp=|v|w`$ we have $`pp|X|pp=|v|w|^20`$ as specified by the above theorem. In fact the resulting non-CP positive map on $`_L`$ is exactly the transposition $`\rho ^T=\text{Tr}_2(X\rho ^T)`$. By writing $`X`$ in its diagonal form we obtain $`\rho ^T=𝒮_T(\rho )𝒮_\sigma (\rho )`$, where the CP map $`𝒮_\sigma `$ is represented by HDMs $`\{\sigma _{mn}\}`$ and $`𝒮_T`$ is the trace operation.
As the second example we consider a Hermitian matrix in $`_L_L`$ defined by $`H_R=I_LI_LM_L`$. It is not positive semidefinite because $`\mathrm{\Phi }_L|H_R|\mathrm{\Phi }_L<0`$ and for every product states $`|pp`$ we have $`pp|H_R|pp=1|v|w^{}|^20`$. Accordingly, a non-CP positive map is defined on $`_L`$ as $`\mathrm{\Lambda }(\rho )=\text{Tr}\rho \rho `$, which provides the reduction criterion reduct ; red0 : Every inseparable state in $`_L_L`$ which loses its positivity under map $`_L\mathrm{\Lambda }`$ is distillable and in the distillation procedure provided in Ref. reduct the HDM of pure bipartite state serves as the filtering operation. Because $`\mathrm{\Lambda }(\rho )=𝒮_\sigma (\rho ^T)`$, the reduction map $`\mathrm{\Lambda }`$ is a decomposable positive map, which is generally of form $`𝒮_d(\rho )=𝒮_1(\rho )+𝒮_2(\rho ^T)`$ with $`𝒮_{1,2}`$ being two CP maps.
The last example makes use of an unextendible product basis upb , a set of orthonormal product basis $`\{|\alpha _i|\beta _i\}_{i=1}^S`$ of $`_s_L`$ where $`S<sL`$ and there is no other pure product state that is orthogonal to this set of basis. If we denote $`P=|\alpha _i\alpha _i||\beta _i\beta _i|`$ then $`\stackrel{~}{\rho }=(1P)/(sLS)`$ represents an inseparable states with positive partial transposition. If we define
$$ϵ=\underset{|\alpha |\beta }{\mathrm{min}}\alpha |\beta |P|\alpha \beta $$
it can be sure that $`0<ϵS/sL`$ bt . Denoting $`\rho _0`$ as a normalized density matrix in $`_s_L`$ which has the property $`\text{Tr}(\rho _0\stackrel{~}{\rho })>0`$, we define a Hermitian matrix as $`H_ϵ=Pϵd\rho _0`$ where
$$\frac{1}{d}=\underset{|\alpha |\beta }{\mathrm{max}}\alpha |\beta |\rho _0|\alpha \beta $$
and $`1dsL`$. Matrix $`H_ϵ`$ is not positive semidefinite since $`\text{Tr}H_ϵ\stackrel{~}{\rho }=ϵd\text{Tr}(\rho _0\stackrel{~}{\rho })<0`$ and for an arbitrary pure product state $`\alpha |\beta |H_ϵ|\alpha |\beta 0`$. If we choose $`\rho _0=I_sI_L/sL`$ then a non-CP positive map is defined by
$$𝒮_ϵ(\rho )=\underset{i=1}{\overset{S}{}}T_i\rho T_i^{}ϵ\text{Tr}\rho ,$$
(13)
where $`T_i=|\alpha _i\beta _i^{}|`$ is the half density matrix of the product base $`|\alpha _i|\beta _i`$. In Ref. bt $`\rho _0`$ is taken as a maximally entangle state and $`d=\mathrm{min}(s,L)`$. Positive map $`𝒮_ϵ`$ is indecomposable because $`_s𝒮_ϵ(\stackrel{~}{\rho })`$ is not positive semidefinite while $`_s𝒮_d(\stackrel{~}{\rho })`$ is positive semidefinite for any decomposable map.
In conclusion, the concept of the half density matrix was studied and its applications to the quantum information are discussed in some detail. Based on the half-density-matrix representation of a Hermitian linear map, we proved that every positive map which is not completely positive is a difference of two completely positive maps. A necessary and sufficient condition for a non-CP positive map is given, which provides a way of constructing such kind of maps. Some examples are also presented. Further applications of the half density matrix in the quantum information and other fields can be expected and the understandings of positive maps provided here may be helpful the recognition of the inseparable quantum states and to the quantification of the entanglement moe .
The author gratefully acknowledges the financial support of K. C. Wong Education Foundation, Hong Kong. |
no-problem/0001/math0001140.html | ar5iv | text | # 1 Introduction
## 1 Introduction
One of the hardest tasks in topological graph theory is the problem to determine the minimal crossing number of graph embeddings. Only very few results are known and there are several outstanding conjectures concerning the crossing numbers of, e.g., the complete graph $`K_n`$, the complete bipartite graph $`K_{n,m}`$, and the product of cycles $`C_n\times C_m`$ for arbitrary values of $`m,n`$.
A main obstacle for finding lower bounds of crossing numbers by means of knot theory is the fact that, in contrast to knot diagrams, the diagram of a planar graph in general cannot be unknotted by crossing changes. Likewise, it is in general not possible to obtain a diagram with minimal crossing number by crossing changes in an arbitrary diagram. In Section 2 of this paper, the class of graphs having this property is investigated. A graph belonging to this class is called minimalizable and, in the case that a minimalizable graph possesses a unique minimal diagram up to crossing changes, it is called strongly minimalizable. The rôle that the graph’s automorphism group is playing with respect to being minimalizable is investigated and examples of strongly minimalizable graphs are given, namely, those graphs whose automorphism group is either trivial or an appropriate product of symmetric groups.
In Section 3, a general method is described how crossing number problems in graph theory can be reduced to determine crossing numbers of finitely many graph diagrams by knot theoretical means. As an example, a planarity criterion for minimalizable graphs that arises from knot theory is stated and it is proved in Section 4.
## 2 Minimalizable Graphs
The graphs considered in the following are allowed to have multiple edges and loops. A topological graph is a 1-dimensional cell complex which is related to an abstract graph in the obvious way. If $`G`$ is a topological graph, then a graph $`𝒢`$ in $`^3`$ is the image of an embedding of $`G`$ into $`^3`$. Two graphs $`𝒢_1`$, $`𝒢_2`$ in $`^3`$ are called equivalent or ambient isotopic if there exists an orientation preserving autohomeomorphism of $`^3`$ which maps $`𝒢_1`$ onto $`𝒢_2`$. Embeddings of topological graphs in $`^3`$ can be examined via graph diagrams, i.e., images under regular projections to an appropriate plane equipped with over-under information at double points. Two graph diagrams $`D`$ and $`D^{}`$ are called equivalent or ambient isotopic if the corresponding graphs in $`^3`$ are equivalent. Equivalent graph diagrams can be transformed into each other by a finite sequence of so-called Reidemeister moves combined with orientation preserving homeomorphisms of the plane to itself, see or .
The crossing number cr(G) of a graph $`G`$ is the minimal number of double points in a regular projection of any graph embedding in 3-space. In graph theory, the equivalent definition of $`cr(G)`$ as minimal number of crossings in a good drawing of $`G`$ is more common, see for an introduction.
Definition A graph $`G`$ is called minimalizable if a diagram with minimal crossing number can be obtained from an arbitrary diagram of $`G`$ by a choice of crossing changes followed by an ambient isotopy. $`G`$ is called strongly minimalizable if it is minimalizable and possesses a minimal diagram that is unique up to crossing changes followed by an ambient isotopy.
Remark It is well-known that the (planar) graph with one vertex and one edge corresponding to a classical knot is strongly minimalizable. There exist graphs, even planar ones, that are not minimalizable, see .
###### Theorem 1
Any two diagrams of a strongly minimalizable graph are equivalent up to crossing changes, i.e., they can be transformed into each other by a finite sequence of crossing changes and ambient isotopies.
$`\mathrm{}`$
Remark Observe that every finite sequence of crossing changes and ambient isotopies applied to a graph diagram can be reduced such that all crossing changes are realized in a single diagram. This can be seen in the same way as for the equivalent definitions of the unknotting number of a knot via crossing changes and ambient isotopies, see .
###### Theorem 2
A planar minimalizable graph is strongly minimalizable.
Proof: This follows immediately from a result by Lipson , Corollary 6.
$`\mathrm{}`$
Remark In , a planar minimalizable graph is called trivializable and it is shown that a subgraph of a trivializable graph again is trivializable. In general, a subgraph of a strongly minimalizable graph is not strongly minimalizable, nor even minimalizable. This follows from the fact that every graph is isomorphic to a subgraph of the complete graph $`K_n`$ for some $`n`$ and this is strongly minimalizable (see Corollary 6 below). On the other hand, there are infinitely many examples of graphs that are not minimalizable, see , , .
As mentioned above, a knot represented by a knot diagram always can be unknotted by appropriate crossing changes. Likewise, a knotted arc that connects two points in the plane can be unknotted by crossing changes. This can be achieved by traveling along the arc from one end to the other and choosing self-crossings such that each crossing is passed as an overcrossing when reached for the first time.
Furthermore, up to crossing changes followed by an ambient isotopy, there are only finitely many ways of drawing a graph in the plane. The number of essentially different drawings depends on the graph’s automorphism group.
###### Lemma 3
Let $`G`$ be a graph with vertices $`v_1,\mathrm{},v_n`$ and let $`w_1,\mathrm{},w_n`$ be distinct points in the plane. Then, up to crossing changes followed by an ambient isotopy, $`G`$ has a unique diagram such that the vertex $`v_i`$ corresponds to the point $`w_i`$ for $`i=1,\mathrm{},n`$.
Proof: The proof is carried out by induction on the number $`k0`$ of graph edges. If $`G`$ has no edge then there is nothing to show since the $`n`$ points in the plane can be arranged arbitrarily by ambient isotopy.
Now let $`G`$ have $`k1`$ edges and let $`D`$ and $`D^{}`$ be diagrams of $`G`$ such that $`w_1,\mathrm{},w_n`$ correspond to the graph vertices. Choose an arbitrary edge $`e`$ of $`G`$. By induction hypothesis, the diagrams arising from $`D`$ and $`D^{}`$ by deleting the arcs corresponding to $`e`$ allow a choice of crossing changes such that the resulting diagrams are equivalent. Apply these crossing changes to the diagrams $`D`$ and $`D^{}`$, respectively, and change those crossings in which the arcs $`a`$ in $`D`$ and $`a^{}`$ in $`D^{}`$ related to $`e`$ are involved as follows. Choose the crossings of $`a`$ with an arc different from $`a`$ such that $`a`$ is always above the other arc and change the self-crossings of $`a`$ such that $`a`$ is transformed into an unknotted arc. Apply the same procedure to the arc $`a^{}`$ in $`D^{}`$. The two resulting diagrams of $`G`$ are easily seen to be equivalent.
$`\mathrm{}`$
###### Theorem 4
A graph $`G`$ with trivial automorphism group is strongly minimalizable.
Proof: Let $`G`$ have $`n`$ vertices. Since $`n`$ distinct points in the plane can be moved arbitrarily by ambient isotopy, an embedding of $`G`$ into the plane is determined by connecting $`n`$ fixed points by arcs corresponding to the incidence relation given by $`G`$. By Lemma 3, the only ambiguity, up to crossing changes followed by an ambient isotopy, to connect the points in the plane by arcs arises from the automorphisms of $`G`$.
$`\mathrm{}`$
Remark If a graph $`G`$ possesses no vertices of degree two then there is an appropriate subdivsion of $`G`$ that has trivial automorphism group. The additional vertices are topologically uninteresting but, of course, important for the property of being minimalizable. Adding vertices of degree two in the described way has the same effect as colouring the edges of the graph and considering only graph automorphisms and modifications of graph diagrams that respect colourings.
###### Theorem 5
Let $`G`$ be a graph with $`n`$ vertices. If the automorphism group of $`G`$ is isomorphic to $`S_{n_1}\times \mathrm{}\times S_{n_k}`$ with $`n_1+\mathrm{}+n_k=n`$ then $`G`$ is strongly minimalizable.
Proof: As explained in the proof of Theorem 4, only the effect of connecting a fixed set of points by arcs in two different ways arising from an automorphism of $`G`$ has to be investigated. Because of the structure of the automorphism group given, the $`n`$ points can be partitioned into subsets corresponding to the partition $`n=n_1+\mathrm{}+n_k`$ and such that whenever two points belonging to different subsets have to be connected by an arc then every point of the first subset has to be connected with every point of the second subset and vice versa. Likewise, whenever two different points belonging to the same subset have to be connected then any two different points of this subset have to be connected by an arc and if there is a vertex incident with a loop then every vertex of the subset is incident with a loop. Corresponding statements hold in the case of multiple edges. Thus, the arising diagram of $`G`$ is unique up to crossing changes followed by an ambient isotopy.
$`\mathrm{}`$
Remark The construction that is given in the proof of Theorem 5 shows that the graph $`G`$ has $`K_{n_i,n_j}`$ as a subgraph if two points of the subsets corresponding to $`n_i`$ and $`n_j`$ are connected by an arc, and it has $`K_{n_i}`$ as a subgraph if two points of the subset corresponding to $`n_i`$ are connected by an arc.
###### Corollary 6
For $`n,n_1,\mathrm{},n_k`$, the complete graph $`K_n`$ and the complete $`k`$-partite graph $`K_{n_1,\mathrm{},n_k}`$ are strongly minimalizable.
$`\mathrm{}`$
## 3 Crossing Number Problems
Applying the results of the previous section, an algorithm can be given to reduce the problem of finding the crossing number of a given graph to the problem of determining the crossing numbers of finitely many graph diagrams up to ambient isotopy. Start with an arbitrary diagram of the graph and calculate the crossing numbers of the graph embeddings corresponding to all possible choices of crossing information for the diagram’s double points. Then change the diagram by connecting vertices in the plane by arcs in a different manner corresponding to an automorphism of the graph. As before, calculate the crossing numbers of all related graphs in $`^3`$, and carry on until all graph automorphisms have been applied to the starting diagram. The desired crossing number of the graph is the minimum taken over all crossing numbers calculated.
By this procedure, a complicated problem of topological graph theory is transformed into finitely many knot theoretical problems. Of course, the determination of crossing numbers for graphs in $`^3`$ is, in general, no easy task either. Some results concerning the crossing numbers of particular classes of graphs can be found in and .
The planarity criterion that is stated in the following theorem is an example how results on crossing numbers for graphs can be deduced from the calculation of crossing numbers of graph embeddings in $`^3`$. A proof of the theorem is given in Section 4 where the knot theoretical ingredients are described, too.
###### Theorem 7
Let $`G`$ be a minimalizable graph and let $`D`$ be an arbitrary diagram of $`G`$. Furthermore, let there exist a graph vertex $`v`$ in $`D`$ of degree four such that the following conditions are fulfilled.
1. For each of the four pairs of neighbouring edges of $`v`$, there exists a cycle that contains the two edges such that the two cycles in $`D`$ belonging to opposite pairs only meet in $`v`$.
2. If $`D^{}`$ arises from $`D`$ by arbitrary crossing changes then replacing $`v`$ with an appropriate crossing gives a diagram of an embedded graph with crossing number at least two.
Then $`G`$ is non-planar.
Remark Because of the fact that a graph that possesses a non-planar subgraph is non-planar itself, Theorem 7 may be applied to graphs that have no (suitable) vertex of degree four by considering an appropriate subgraph.
Observe that the second part of condition i) in Theorem 7 excludes the situation depicted in Fig. 1, i.e., cutting through the vertex gives two curves that are linked.
Example
$`K_5`$ is non-planar as can easily be seen by applying Theorem 7 to the diagram depicted in Fig. 2 for an arbitrary vertex $`v`$. Condition i) obviously is fulfilled. For both choices of over-under information for the diagram’s single crossing (indeed, the two arising diagrams are equivalent), $`v`$ can be replaced by an appropriate crossing such that the resulting diagram contains a Hopf link. Thus condition ii) is fulfilled, too.
Of course, it is well-known that $`K_5`$ is non-planar and, by Kuratowski’s theorem, that every non-planar graph contains a $`K_5`$\- or a $`K_{3,3}`$-minor. The graph considered in the next example contains a $`K_{3,3}`$-minor.
Example
With the same argumentation as in the previous example, the graph depicted in Fig. 3 is non-planar. Again, both of the two vertices of degree four are appropriate for applying Theorem 7.
In general, very little is known about the behaviour of the crossing number when a new graph is constructed from one or more given graphs. For example, it is an open problem wether the crossing number is additive with respect to a connected sum $`G_1\mathrm{\#}G_2`$ of two graphs $`G_1`$ and $`G_2`$, i.e., two edges $`e_1=\{v_1,v_2\}G_1`$ and $`e_2=\{w_1,w_2\}G_2`$ that are not loops are replaced by edges $`e_1^{}=\{v_1,w_1\}`$ and $`e_2^{}=\{v_2,w_2\}`$ (the definition can be extended to subdivisions of $`G_1`$ and $`G_2`$). The corresponding problem for graph embeddings is well-known in knot theory, and the additivity of crossing numbers for connected sums of knots and links is an old outstanding conjecture. In the case of abstract graphs, it is not even clear if a connected sum of two (strongly) minimalizable graphs is (strongly) minimalizable.
###### Theorem 8
Let $`G`$ and $`G^{}`$ be (strongly) minimalizable graphs. Then the following hold.
1. The disjoint union $`GG^{}`$ of $`G`$ and $`G^{}`$ is (strongly) minimalizable and
$$cr(GG^{})=cr(G)+cr(G^{}).$$
2. Any one-point union $`GG^{}`$ of $`G`$ and $`G^{}`$ is (strongly) minimalizable and
$$cr(GG^{})=cr(G)+cr(G^{}).$$
Proof: For arbitrary diagrams of $`GG^{}`$ and $`GG^{}`$, respectively, a sequence of crossing changes can be chosen such that arcs corresponding to $`G`$ always overcross arcs corresponding to $`G^{}`$ at the diagram’s double points. Denote the arising diagrams by $`D^{}`$ and $`D^{}`$, respectively.
For the graph in $`^3`$ that belongs to $`D^{}`$, it may be assumed that the part corresponding to $`G`$ lies in the half space above the projection plane and the part corresponding to $`G^{}`$ lies beneath the projection plane. Clearly, there is an equivalent diagram $`DD^{}`$ which is the disjoint union of a diagram $`D`$ of $`G`$ and $`D^{}`$ of $`G^{}`$. Since both graphs are minimalizable, there are sequences of crossing changes that realize $`cr(G)`$ and $`cr(G^{})`$ in $`D`$ and $`D^{}`$, respectively, showing that $`cr(GG^{})cr(G)+cr(G^{})`$. The opposite inequality holds trivially and it follows that the crossing number is additive with respect to connected sums. Furthermore, crossing changes in $`DD^{}`$ realize $`cr(GG^{})`$ and therefore $`GG^{}`$ is (strongly) minimalizable.
Similarly, the graph in $`^3`$ that belongs to $`D^{}`$ may be thought to consist of a part corresponding to $`G`$ lying in the upper half space and a part corresponding to $`G^{}`$ lying in the lower half space except one point $`v`$ in which the graphs intersect. Deform the graph corresponding to $`G^{}`$ such that its projection is contained completely in one region of the subdiagram of $`D^{}`$ belonging to $`G`$ (except for the point $`v`$). This gives an equivalent diagram $`DD^{}`$ that is a one-point union of a diagram $`D`$ of $`G`$ and $`D^{}`$ of $`G^{}`$. The rest of the proof is completely the same as in the case of $`GG^{}`$.
$`\mathrm{}`$
## 4 Proof of Theorem 7
For the proof of Theorem 7, some knot theoretical definitions and results have to be given. The objects under consideration, as they are needed here, are described in more detail in . For general knot theoretical terminology see, e.g., , , , , . In the following, a link is an embedding of a graph in $`^3`$ that consists of one or more disjoint loops.
A tangle is a part of a link diagram in the form of a disk with four arcs emerging from it, see Fig. 4,
where the tangle’s position is indicated by labeling the emerging arcs with letters $`a`$, $`b`$, $`c`$, $`d`$ in a clockwise ordering (or simply one of them with ”$`a`$”). An equivalence relation for tangles is given via ambient isotopy that fixes the ends of the tangle’s arcs. For a tangle $`t`$, there are two possible ways to connect the four ends by two arcs in the plane that do not intersect, see Fig. 4. The arising link diagrams $`N(t)`$ and $`D(t)`$ are called closures of $`t`$.
Denote the two different tangles with crossing number zero by $`0`$ and $`\mathrm{}`$, and the two diffent tangles with crossing number one by $`1`$ and $`\overline{1}`$. They belong to the class $``$ of rational tangles. There is an important connection between rational tangles and Reidemeister moves of type V at a graph vertex as depicted in Fig. 5.
Indeed, a rational tangle can be defined as the result of applying a sequence of moves as depicted in Fig. 5 to one of the tangles $`0`$, $`\mathrm{}`$, $`1`$, $`\overline{1}`$ instead of a graph vertex.
Rational tangles can be uniquely classified by rational numbers, see and , and there is a normal form for rational tangles from which the crossing number and an alternating diagram that realizes this number can easily be read off, see Fig. 6. For a rational tangle $`r`$, let $`|r|`$ denote its crossing number.
From a given graph diagram $`D`$ that contains a vertex of degree four, there can be obtained new graph diagrams by substituting rational tangles for the graph vertex, see . To do this in a well-defined way, it is necessary to give an orientation to the vertex, i.e., labeling an edge incident with the vertex with the letter $`a`$. Then a rational tangle can be substituted for the graph vertex in the obvious way such that the edge labeled with ”a” fits together with the corresponding arc emerging from $`r`$. An example is given in Fig. 7.
In the same manner as in , where only diagrams of 4-regular graphs are considered, the following theorem can be shown.
###### Theorem 9
Let $`D`$ be a graph diagram that has a vertex of degree four for which an orientation is chosen. Then, the set $`\{D_r|r\}`$ is invariant with respect to ambient isotopy.
$`\mathrm{}`$
A link diagram $`D`$ is called reduced if it does not contain a crossing point $`p`$ such that $`D\{p\}`$ has more components than $`D`$ as depicted in Fig. 8.
Since a reduced alternating link diagram has minimal crossing number, see , , for proofs of this famous Tait Conjecture, the following lemma can readily be deduced from a rational tangle’s normal form.
###### Lemma 10
For every rational tangle $`r`$ with $`|r|2`$, either $`N(r)`$ or $`D(r)`$ has crossing number $`|r|`$.
$`\mathrm{}`$
Using the same technique as for the proof of Lemma 10, namely, the span of the Jones polynomial which is additive with respect to connected sums $`D_1\mathrm{\#}D_2`$ of knot diagrams $`D_1`$, $`D_2`$, the following easily can be shown.
###### Lemma 11
For a rational tangle $`r`$ with $`|r|2`$ and arbitrary knot diagrams $`D_1`$, $`D_2`$, either $`D_1\mathrm{\#}N(r)\mathrm{\#}D_2`$ or $`D_1\mathrm{\#}D(r)\mathrm{\#}D_2`$ has crossing number $`|r|`$.
$`\mathrm{}`$
Remark Indeed, Lemma 11 holds for any reduced alternating diagram instead of $`N(r)`$ or $`D(r)`$, respectively, and for connected sums with finitely many diagrams $`D_1,\mathrm{},D_k`$. The typical situation that occurs in the proof of Theorem 7 is depicted in Fig. 9.
Proof of Theorem 7: Assume that $`\stackrel{~}{D}`$ is a crossing-free diagram of $`G`$. Following Theorem 2, there exists a diagram $`D^{}`$ equivalent with $`\stackrel{~}{D}`$ which arises from $`D`$ by crossing changes. Choose a vertex-orientation for $`v`$ in $`D^{}`$ and consider the diagrams $`D_r^{}`$ with rational tangles $`r`$. If $`r`$ has crossing number at least two then $`D_r^{}`$ likewise has crossing number at least two because of condition i), which obviously is fulfilled for the diagram $`D^{}`$ as well as for $`D`$, since a subdiagram of $`D_r^{}`$ has crossing number at least two by Lemma 11. Furthermore, either $`D_1^{}`$ or $`D_{\overline{1}}^{}`$ has crossing number at least two because of condition ii). Thus, there are at most three diagrams $`D_r^{}`$, namely, the diagrams $`D_0^{}`$, $`D_{\mathrm{}}^{}`$, $`D_1^{}`$ or the diagrams $`D_0^{}`$, $`D_{\mathrm{}}^{}`$, $`D_{\overline{1}}^{}`$, that have crossing number strictly less than two.
But considering diagrams $`\stackrel{~}{D}_r`$, corresponding to an arbitrary vertex of degree four in $`\stackrel{~}{D}`$ with chosen vertex-orientation, immediately yields a contradiction since each of the four diagrams $`\stackrel{~}{D}_0`$, $`\stackrel{~}{D}_{\mathrm{}}`$, $`\stackrel{~}{D}_1`$, $`\stackrel{~}{D}_{\overline{1}}`$ obviously has crossing number zero or one.
$`\mathrm{}`$ |
no-problem/0001/hep-th0001090.html | ar5iv | text | # Quantum extreme black holes at finite temperature and exactly solvable models of 2d dilaton gravity
## I Introduction
In it was argued that thermodynamic reasonings allow one to ascribe nonzero temperature $`T`$ to extreme black holes since the Euclidean geometry remains regular irrespectively of the value of $`T`$. As a result, it was conjectured that an extreme black hole may be in thermal equilibrium with ambient radiation at any temperature. This conclusion was criticized in where it was pointed out that the prescription of is suitable only in the zero-loop approximation and does not survive at quantum level since the allowance for quantum backreaction leads to thermal divergencies in the stress-energy tensor that seems to destroy a regular horizon completely, so the demand of regularity of this tensor on the horizon seems to enforce the choice $`T=0`$ for extreme black holes unambiguously.
The aim of the present paper is to show that in dilaton gravity there exists possibility to combine a finite curvature at the horizon with divergencies of the stress-energy tensor $`T_\mu ^\nu `$. As a result, extreme black holes with $`T0`$ and finite curvature on the horizon may exist. Namely, the above mentioned divergencies may under certain conditions be compensated by the corresponding divergencies in the classical part of field equations due to derivatives of the dilaton field. We stress that the geometries discussed below are self-consistent solutions of field equations with backreaction of quantum fields taken into account. In spite of the geometry itself turns out to be regular in the sense that the curvature measured from outside is finite at the horizon, the solution as the whole which includes, apart from the metric, the dilaton field as well, is singular. (The similar result was obtained quite recently for nonextreme black holes ).
It was observed earlier that ”standard” extreme black holes with $`T=0`$ exhibit some weak divergencies at the horizon . The phenomenon we are discussing is qualitatively different. First, these divergencies are inherent in our case to the components $`T_\mu ^\nu `$ themselves, whereas in $`T_\mu ^\nu `$ were assumed to be finite and divergencies revealed themselves in the energy measured by a free falling observer which is proportional to the ratio $`(T_1^1T_0^0)/g_{00\text{ }}`$($`x^0`$ and $`x^1`$are a temporal and spatial coordinates in the Schwarzschild gauge). Second, divergencies considered in arise for a generic extreme black hole with $`T=0`$, whereas in our case they are due to $`T0`$ entirely. Third, backreaction of quantum field produces a singularity at the horizon in , whereas in our case the curvature remains finite there.
It is worth noting that in quantum domain the issue of existence of extreme black holes is non-trivial by itself. In Ref. it was argued that the existence of quantum extreme black holes is inconsistent with field equations at all. However, this conclusion was derived for the particular class of CGHS-like models , so its region of validity is very limited, as, in fact, the authors themselves note at the end of their paper. In the present paper we show that for a certain class of dilaton gravity theories (i) extreme black holes with quantum corrections taken into account do exist; (ii) the temperature of quantum fields in their background may be nonzero. We exploit the approach which was elaborated earlier in Refs. , , where it was applied to nonextreme black holes and semi-infinite throats.
## II General structure of solutions
Consider the action of the dilaton gravity
$$I=I_0+I_{PL}\text{,}$$
(1)
where
$$I_0=\frac{1}{2\pi }_Md^2x\sqrt{g}[F(\varphi )R+V(\varphi )(\varphi )^2+U(\varphi )]$$
(2)
and the Polyakov-Liouville action incorporating effects of Hawking radiation and its backreaction on the black hole metric can be written as
$$I_{PL}=\frac{\kappa }{2\pi }_Md^2x\sqrt{g}[\frac{(\psi )^2}{2}+\psi R]\text{.}$$
(3)
The function $`\psi `$ obeys the equation
$$\mathrm{}\psi =R\text{,}$$
(4)
where $`\mathrm{}=_\mu ^\mu `$, $`\kappa =N/24`$ is the quantum coupling parameter, $`N`$ is number of scalar massless fields, $`R`$ is a Riemann curvature. We omit the boundary terms in the action as we are interested only in field equations and their solutions.
A generic quantum dilaton-gravity system is not integrable. However, if the action coefficients obey the relationship
$$V=\omega (u\frac{\kappa \omega }{2})\text{,}$$
(5)
where $`\omega =U^{}/U`$, the system becomes exactly solvable , . Then static solution are found explicitly and even for a finite arbitrary temperature $`T`$ of quantum fields measured at infinity :
$`ds^2`$ $`=`$ $`g(dt^2+d\sigma ^2)\text{}g=\mathrm{exp}(\psi _0+2y)\text{}y=\lambda \sigma \text{}\psi _0={\displaystyle \omega 𝑑\varphi }=\mathrm{ln}U(\varphi )+const\text{.}`$ (6)
$`F^{(0)}(\varphi )`$ $`=`$ $`f(y)e^{2y}By+C\text{}B=\kappa (1T^2/T_0^2)\text{}T_0=\lambda /2\pi \text{,}`$ (7)
and the auxiliary function $`\psi =\psi _0+\gamma \sigma `$, $`\gamma =2\lambda (T/T_01)`$. Here $`F^{(0)}=F\kappa \psi _0`$, $`y=\lambda \sigma `$, $`\lambda =\sqrt{U}/2`$. At right infinity, where spacetime is supposed to be flat, the stress-energy tensor has the form
$$T_\mu ^{\nu (PL)}=\frac{\pi N}{6}T^2(1,1)\text{.}$$
(9)
The coordinate $`\sigma `$ is related to the Schwarzschild one $`x`$, where
$$ds^2=dt^2g+g^1dx^2\text{,}$$
(10)
by the formula $`x=g𝑑\sigma `$. We assume that the function $`\omega (\varphi )`$ changes its sign nowhere, so in what follows we may safely use the quantity $`\psi _0`$ instead of $`\varphi `$ (in fact, this can be considered as reparametrization of the dilaton).
The solutions (6) include different types of object - nonextreme black holes, ”semi-infinite throats” and soliton-like solutions, depending on the boundary conditions imposed on the function $`\psi _0`$ on the horizon . Now we want to elucidate whether the models (5) include also extreme black holes with a finite curvature at the horizon and under what conditions. Near a horizon the metric of a typical extreme black hole must have the standard form (subscript ”h” indicates that a quantity is calculated at the horizon)
$$gg_0(xx_h)^2\frac{g_0}{y^2}$$
(11)
or, equivalently,
$$\psi _02y+\mathrm{ln}y^2$$
(12)
At the right infinity the function $`\psi _0`$ must obey the relation
$$\psi _02y$$
(13)
that a spacetime have the Minkowski form and, thus, $`\psi _0+\mathrm{}`$ in accordance with general properties indicated in .
Provided eqs. (11) and (12) are satisfied, the Hawking temperature
$$T_H=\frac{1}{4\pi }(\frac{dg}{dx})_h=\underset{y\mathrm{}}{lim}\frac{\lambda dg}{4\pi gdy}=\frac{\lambda }{2\pi }\underset{y\mathrm{}}{lim}(1\frac{1}{2}\frac{d\psi _0}{dy})=0$$
(14)
as it should be for an extreme black hole, the Riemann curvature $`R=\lambda ^2g^1\frac{d^2}{dy^2}\mathrm{ln}g2g_0^1\lambda ^2`$is finite. As near the horizon $`y\mathrm{}`$, $`f(y)By`$, we obtain that the function
$$F^{(0)}\frac{B}{2}(\psi _0\mathrm{ln}\frac{\psi _0^2}{4})$$
(15)
at $`\psi \mathrm{}`$.
If eq. (15) is satisfied, the function $`\psi _0(\varphi )`$ which is the solution of (6) has the desirable asymptotic (12) and, therefore, the metric function behaves according to (11). To achieve the behavior (13) at infinity, it is sufficient to enforce the condition $`F^{(0)}e^{\psi _0}`$ at $`\psi +\mathrm{}`$.
As we are looking for solutions of field equations which are regular in the region between the horizon and infinity, we must exclude a possible singularity of curvature. Since $`R=\lambda ^2g^1\frac{d^2\psi _0}{dy^2}`$ and $`\frac{d\psi _0}{dy}=\frac{f^{}(y)}{F^{}(\psi _0)}`$ (prime denotes derivative with respect to a corresponding argument), the dangerous point is $`\psi _0^{}`$ where $`F^{(0)}(\psi _0^{})=0`$ and $`\frac{d\psi _0}{dy}`$ may diverges. Let $`B>0`$. Then $`\frac{d\psi _0}{dy}`$ is finite in the corresponding point $`y^{}`$, provided $`f^{}(y^{})=0`$. It is achieved by the choice $`C=F^{(0)}(\psi _0^{})+\frac{B}{2}(\mathrm{ln}\frac{B}{2}1)C^{}`$. If $`B<0`$, the function $`F^{(0)}(\psi _0)`$ must be monotonic to ensure the absence of singularities. For $`B=0`$, as we will see below, extreme black holes do not exist.
Let us consider an example for which all above-mentioned conditions are satisfied:
$$F^{(0)}=e^{2\gamma }B\gamma +C^{}\text{,}$$
(16)
where $`\gamma =\gamma (\psi _0)`$. Then eq. (6) has the obvious solution: $`y=\gamma (\psi _0)`$. If we choose the function $`\gamma `$ in such a way that the solution of this equation $`\psi _0(y)`$ has the needed asymptotics (12), we obtain an extreme black hole in accordance with (11). It is also clear that $`F^{(0)}=0`$ in the same point where $`f^{}(y)=0`$, so the condition of absence of singularities is satisfied.
Let us look now at the behavior of the components of the stress-energy tensor of quantum field near the horizon. For definiteness, let us choose the $`T_1^1`$ component. It can be written in the Schwarzschild gauge as (see, for example, ; it is assumed that the field is conformally invariant, our definition of $`T_\mu ^{\nu (PL)}`$ differs by sign from the quantum stresses discussed in )
$$T_1^{1(PL)}=\frac{\pi }{6g}[T^2(\frac{g^{}}{4\pi })^2]\text{.}$$
(18)
Since in our case $`TT_H`$, the expression in square brackets remains nonzero at the horizon, $`T_1^1g^1(xx_h)^2y^2`$, so stresses diverge strongly.
## III properties of solutions
It was observed in that for black holes described exact solutions (6) the Hawking temperature $`T_H=\lambda /2\pi `$ is nonzero constant irrespectively of the particular kind of the model that generalizes the earlier observation for the RST model . Therefore, it would seem that black holes entering the class of solutions under discussion may be nonextreme only. The reason why, along with nonextreme holes, the class of solutions contains, nevertheless, extreme ones too, can be explained in terms of the function $`\psi `$. It was assumed in and that this quantity is finite on the horizon to ensure the finiteness of the Riemann curvature. Then the derivative $`\frac{d\psi }{dy}0`$ at the horizon, where $`y\mathrm{}`$, and $`T_H=\lambda /2\pi `$ in accordance with (14). However, we saw that even notwithstanding $`\psi `$ diverges on the horizon, we can find the solutions with a finite $`R`$ on the horizon, provided the function in question has the asymptotic (12). Thus, we gain qualitatively new types of solutions due to relaxing the condition of the regularity of $`\psi `$ on the horizon.
It is worth paying attention to two nontrivial features of solutions under discussion: (i) the character of solutions in the classical ($`\kappa =0`$) and quantum ($`\kappa 0`$) cases is qualitatively different, (ii) extreme black holes with a finite curvature at the horizon are possible even if $`TT_H=0`$. First, let us consider the point (i). Even if $`T=0`$, in the quantum case the coefficient $`B0`$ and $`f\mathrm{}`$ at the horizon, whereas in the classical case ($`B=0`$) $`fC=const`$. Therefore, in the classical case the value of the coupling coefficient $`F`$ is finite on the horizon: according to (6), $`F_h=F_h^{(0)}=C`$. As a result, in the vicinity of the horizon $`\psi _0=\psi _{0h}`$ plus exponentially small corrections, so $`\psi _0(y)`$ is regular and a nonextreme black hole becomes nonextreme, as explained in the precedent paragraph. On the other hand, $`F^{(0)}`$ must diverge at the horizon in the quantum case. Thus, we arrive at a rather unexpected conclusion: for exactly solvable models of dilaton gravity (5) extreme black holes are due to quantum effects only and disappear in the classical case $`\kappa =0`$. This fact is insensitive to the value of temperature, so classical extreme black holes of the given type are absent even if $`T=0`$. For the same reasons quantum extreme black hole are absent in the exceptional case $`T=T_0`$, when $`B=0`$.
It was observed in that for a rather wide class of Lagrangians the typical situation is such that classical extreme black holes may exist, whereas quantum correction to the action destroy the character of the solution completely and do not allow the existence of extreme black holes. In our case, however, the situation is completely opposite: extreme black holes are possible in the quantum (but not classical) case. It goes without saying that our result does not contradict the existence of extreme black holes in the classical dilaton gravity theories in general but concerns only the special class of Lagrangians (5) and their classical limit. (It is worth recalling that the condition (5) was imposed to ensure the solvability of the quantum theory, whereas in the classical case every dilaton-gravity model is integrable, so there is no need in supplementary conditions like (5)). The results of either or the present paper show clearly that quantum effects not only may lead to quantum corrections of the classical metric but radically change the character of the geometry and topology.
The most interesting feature of solutions obtained in the present paper is the possibility to have black holes at $`TT_H=0`$. In the previous paper we showed that nonextreme black holes with $`TT_H`$ may exist. The reason in both cases is the same and I repeat it shortly. The usual argument to reject the possibility $`TT_H`$ relies on two facts: 1) this inequality makes the behavior of the stress-energy tensor of quantum fields singular in the vicinity of the horizon, 2) in turn, such a behavior of the stress-energy tensor is implied to inevitably destroy a regular horizon by strong backreaction. Meanwhile, a new subtlety appears for dilaton gravity as compared to the usual case. As, in addition to a metric and quantum fields, there is one more object - dilaton, there exists the possibility that divergencies in the stress-energy tensor are compensated completely by gradients of a dilaton field to give a metric regular at the horizon (at least, from outside). And for some models, provided the conditions describe above are fulfilled, this situation is indeed realized. Thus, dilaton gravity shares the point 1) with general relativity but the condition 2) may break down.
It is worthwhile to note that, as follows from (6), the temperature which determines the asymptotic value of the energy density at infinity, is the function of the given coefficient $`B`$ which enters the form of the action coefficient $`F^{(0)}(\psi )`$: $`T=T_0\sqrt{1B/\kappa }`$, so the solution under discussion has sense for $`B\kappa `$ only. In particular, for ”standard” extreme holes with $`T=0`$ the coefficient $`B=\kappa 0`$. It follows from (5), (12), (12), that near the horizon (i.e. in the limit $`y\mathrm{}`$) the coefficient $`V`$ in the action (2) behaves like $`V\frac{\omega ^2}{2}(\kappa B)`$, so the condition $`B\kappa `$ ensures the right sign of the term with $`(\varphi )^2`$.
The fact that for a given $`B`$ we obtain the fixed value of the temperature is contrasted with that for nonextreme black holes with $`TT_H`$ where the temperature is not fixed by the form of the action but takes its value within the whole range . This can be explained as follows. In the extreme case we must achieve $`T_H=0`$ that entails fine tuning in the asymptotic behavior of $`\psi _0(y)`$ that forces us to choose the coefficient at the linear part of $`F^{(0)}`$ equal to $`B`$ exactly. Meanwhile, in the nonextreme case, where the value of $`T_H`$ is not fixed beforehand, the corresponding coefficient is bounded only by two conditions which guarantee the existence of the horizon and the finiteness of the curvature. These conditions result in two inequalities, so the constraint is less restrictive (see for details).
It is worth recalling that Trivedi has found nonanalytical behavior of the metric of extremal black holes near the horizon: $`g\alpha _1(xx_h)^2+\alpha _2(xx_h)^{3+\delta }`$ (see eq. (27) of ). In our case corrections to the leading term of (11) also may be nonanalytical, depending on the properties of the function $`F^{(0)}(\psi _0)`$ (for the example (16) all is determined by the choice of the function $`\gamma (\psi _0)`$ ). Moreover, it is readily seen that in our case the nonanaliticity may reveal itself in the leading term of $`g`$. Indeed, let the function $`g`$ have the asymptotic
$$gg_0(xx_h)^\delta $$
(19)
with $`\delta >2`$ that is compatible with the extremality condition $`T_H=0`$. This case can be handled in the same manner as for $`\delta =2`$. From (6) and (10) it follows that now we must have near the horizon the asymptotics
$$\psi _02y+\alpha \mathrm{ln}(y)\text{,}$$
(20)
with $`\alpha =\delta /(\delta 1)`$ instead of (12). One more case arises when $`gg_0e^x`$, so $`ye^x`$, the horizon lies at $`x=\mathrm{}`$. Then we obtain the same formula (20) with $`\alpha =1`$.
From the physical viewpoint, the model considered in represents charged black holes in the dilaton theory motivated by the reduction from four dimensions. The corresponding model is not exactly solvable either due to the presence of the electromagnetic field or due to the form of the action coefficients which do not fall into the class of exactly solvable models , . On the other hand, our model is exactly solvable and deals with uncharged black holes. Their extremality is due to quantum effects entirely.
The relevant physical object in dilaton gravity includes not only a metric but both the metric and dilaton field. In the situations analyzed above the coupling $`F^{(0)}`$ between curvature and dilaton describe inevitably diverges at the horizon as seen from eq. (6) in the limit $`y`$ $`\mathrm{}`$ corresponding to the horizon. Therefore, although the curvature is finite on the horizon, the solution as the whole exhibit singular behavior. Moreover, even the metric part of the solution can be called ”regular” only in the very restricted sense: the region inside the horizon hardly has a physical meaning at all since an outer observer cannot penetrate it because of strong divergencies in the stress-energy tensor on the horizon surface, so the manifold is geodesically incomplete. All these features tell us that in fact we deal with the class of objects which occupies the intermediate positions between ”standard” regular black holes and naked singularities. In the previous paper we described the nonextreme type of such objects, in the present paper we found its extreme version.
## IV summary
We have proved that extreme black holes do exist in exactly solvable models of dilaton gravity with quantum corrections taken into account. It turned out that they are contained in general solutions of exactly solvable models. The existence of extreme black holes depends strongly on the behavior of the action coefficients near the horizon (see eq. (15)) and is insensitive to their concrete form far from it where it is only assumed that their dependence on the dilaton field ensures the absence of singularities.
We found solutions which shares features of both black holes and naked singularities and represent ”singularities without singularities”. We showed that quantum fields propagating in the background of extreme black holes may nonzero temperature at infinity. In fact, this means that such fields cannot be called Hawking radiation since the geometry does not enforce the value of the temperature. These properties of solutions with $`TT_H`$ are the same as in the nonextreme case . The qualitatively new feature inherent to extreme black holes consists in that the type of solutions we deal with is very sensitive to quantum effects: taking the classical limit destroys completely our solutions with $`TT_H=0`$, so the solutions under discussion are due to quantum effects only. (In contrast to it, solutions describing nonextreme black holes with $`TT_H`$ considered in have sense in the classical limit.)
The separate issue which deserves separate consideration is what value of entropy should be ascribed to the objects we found.
I am grateful to Sergey Solodukhin for valuable comments. |
no-problem/0001/nlin0001045.html | ar5iv | text | # Truncation-type methods and Bäcklund transformations for ordinary differential equations: the third and fifth Painlevé equations
## 1 Introduction
In a recent paper we introduced a new approach to finding Bäcklund transformations for ordinary differential equations (ODEs). This truncation-type method constituted an extension to ODEs of an approach that had been successfully developed for partial differential equations (PDEs) . The main idea in is to consider truncation as a mapping that preserves the locations of a natural subset of movable singularities. The generic solution of each of the Painlevé equations (except $`P_I`$) has pairs of movable simple poles with leading order coefficients of opposite sign. Thus the set of all poles of a solution $`y(x)`$ decomposes into the union of two nonintersecting subsets $`𝒫_+`$ and $`𝒫_{}`$, where $`𝒫_+`$ is the set of poles with positive choice of coefficient and $`𝒫_{}`$ is that with negative choice. In what follows we transform a generic solution $`y(x)`$ of a Painlevé equation to a solution $`Q(x)`$ of the same equation, but with possibly different parameters, as
$$y(x)=\rho (x)+Q(x),$$
(1)
where we demand that $`\rho (x)`$ has poles at the elements of $`𝒫_+`$ and $`Q(x)`$ has them at $`𝒫_{}`$, or vice versa. We are then able to find Bäcklund transformations through a procedure that relies only on singularity analysis of the transformed equation.
In we applied this approach to $`P_{II}`$ and $`P_{IV}`$. In addition to obtaining Bäcklund transformations for these equations we also discussed transformations to related ODEs (the ODEs satisfied by $`\rho (x)`$), and also the application of a “double-singularity approach.” In the present work we apply our approach to $`P_{III}`$ and $`P_V`$, in order to obtain Bäcklund transformations. Consideration of related ODEs and the double-singularity approach for $`P_{III}`$ and $`P_V`$ will be considered elsewhere . Descriptions of Bäcklund transformations for $`P_{III}`$ and $`P_V`$ can be found in .
## 2 Bäcklund transformations for $`P_{III}`$
We take the third Painlevé equation in the form
$$y^{\prime \prime }=\frac{y^2}{y}\frac{y^{}}{x}+\frac{\alpha y^2+\beta }{x}+\gamma ^2y^3\frac{\delta ^2}{y}$$
(2)
where for reasons of convenience we have renamed two of the paramaters as $`\gamma ^2`$ and $`\delta ^2`$ (conventionally labelled as $`\gamma `$ and $`\delta `$ respectively). A generic solution $`y(x)`$ of the third Painlevé equation is transformed to another solution $`Q(x)`$ of the same equation but with possibly different parameters $`a,b,c`$ and $`d`$ as
$$y(x)=\rho (x)+Q(x).$$
(3)
The dominant terms in the expression that results from the substitution of equation (3) into (2) are, assuming $`\gamma 0`$,
$$\rho \rho ^{\prime \prime }\rho ^2\gamma ^2\rho ^4,$$
(4)
which can be integrated to give
$$\rho ^{}\pm \gamma \rho ^2.$$
(5)
Taking first of all the minus sign in this last, we write
$$\rho ^{}=\gamma \rho ^2+\sigma \rho .$$
(6)
Substituting this into the transformed version of equation (2) and looking once again at dominant terms, we get
$$\sigma (x)=2\gamma Q(x)\frac{\alpha +\gamma }{\gamma x}+\frac{\tau }{\rho }.$$
(7)
Using this in the transformed equation yields a linear equation for $`\rho `$, which has to be compatible with the Riccati equation
$$\rho ^{}=\gamma \rho ^2\left(2\gamma Q(x)+\frac{\alpha +\gamma }{\gamma x}\right)\rho +\tau .$$
(8)
The analysis of the resulting compatibility condition depends on whether or not $`\tau `$ is assumed to depend only on $`x`$ or if it is allowed to depend also on $`Q(x)`$. (We do not consider here the possible dependence of $`\tau `$ on $`Q^{}(x)`$.) Assuming $`\tau =\tau (x,Q(x))`$ and using the fact that $`Q(x)`$ satisfies $`P_{III}`$, we obtain a polynomial in $`Q^{}(x)`$. The highest order coefficient gives
$$Q^2\tau _{QQ}Q\tau _Q+\tau =0,$$
(9)
whose general solution is given by
$$\tau =\left[f_1(x)+f_2(x)\mathrm{log}Q\right]Q.$$
(10)
If we now insert this form for $`\tau `$ in the compatibility condition and set to zero coefficients of the resulting polynomial in $`Q`$, $`Q^{}`$ and $`\mathrm{log}Q`$, we first obtain
$`f_1(x)`$ $`=`$ $`{\displaystyle \frac{a\alpha }{\gamma x}},`$ (11)
$`f_2(x)`$ $`=`$ $`0,`$ (12)
$`c^2`$ $`=`$ $`\gamma ^2,`$ (13)
$`d^2`$ $`=`$ $`\delta ^2,`$ (14)
which means that the actual form of $`\tau `$ is
$$\tau (x,Q)=\frac{a\alpha }{\gamma x}Q.$$
(15)
Note that if we had taken $`\tau `$ to be a function of $`x`$ only, our compatibility condition would have led to $`\tau =0`$ and thus to $`a=\alpha `$, and we would only have obtained restricted results. The main difference between the application of our method to $`P_{III}`$, and its previous application to $`P_{II}`$ and $`P_{IV}`$ , is that for $`P_{III}`$, allowing $`\tau `$ to depend on $`Q`$ leads to more general results, whereas for $`P_{II}`$ and $`P_{IV}`$ it does not.
Inserting the above results into the compatibility condition, we obtain from the next higher order term the following shift between the parameters,
$$2\gamma (b\beta )+ba\beta \alpha =0.$$
(16)
We now consider separately the remainder of our compatibility condition for the two cases $`b=0`$ and $`b0`$. We take first the case $`b0`$. Solving equation (16) for $`a`$ and substituting back into the compatibility condition gives the following additional constraints between the parameters,
$`(b\beta )(b+\beta )(\gamma b\delta \alpha 2\gamma \delta )(\gamma b+\delta \alpha +2\gamma \delta )`$ $`=`$ $`0,`$ (17)
$`(b\beta )(b+\beta )(\gamma b\delta \alpha 2\gamma \delta )(\gamma b+\delta \alpha +2\gamma \delta )(2\gamma \beta +\beta \alpha \gamma b)`$ $`=`$ $`0.`$ (18)
Taking $`b=\beta `$ in (17) just leads to the identity $`y(x)=Q(x)`$. However the remaining factors in (17) lead to nontrivial Bäcklund transformations. Taking $`b=\beta `$ leads to the Bäcklund transformation
$`\rho `$ $`=`$ $`2Q^2{\displaystyle \frac{A_1(x,Q,Q^{})}{A_2(x,Q,Q^{})}},`$ (19)
$`A_1`$ $`=`$ $`(2\gamma ^2x+\alpha \gamma x)Q^{}+(\alpha \gamma ^2x+2\gamma ^3x)Q^2(6\gamma ^2+5\alpha \gamma +\alpha ^2)Q`$ (20)
$`\beta \gamma ^2x,`$
$`A_2`$ $`=`$ $`\gamma ^2x^2Q^2+2\gamma ^3x^2Q^2Q^{}2\gamma ^2xQQ^{}+\gamma ^4x^2Q^42\gamma ^3xQ^3`$ (21)
$`(3\gamma ^2+4\alpha \gamma +\alpha ^2)Q^22\beta \gamma ^2xQ\delta ^2\gamma ^2x^2,`$
$`a`$ $`=`$ $`\alpha 4\gamma ,`$ (22)
$`b`$ $`=`$ $`\beta ,`$ (23)
$`c^2`$ $`=`$ $`\gamma ^2,`$ (24)
$`d^2`$ $`=`$ $`\delta ^2,`$ (25)
whereas taking $`\gamma b\pm \delta (\alpha +2\gamma )=0`$ (which requires $`\delta 0`$) leads to
$`\rho `$ $`=`$ $`{\displaystyle \frac{(\gamma \beta \pm \delta \alpha \pm 2\gamma \delta )Q^2}{\delta \left[\gamma xQ^{}+\gamma ^2xQ^2+(\alpha +\gamma )Q\pm \delta \gamma x\right]}},`$ (26)
$`a`$ $`=`$ $`\gamma \left(2\pm {\displaystyle \frac{\beta }{\delta }}\right),`$ (27)
$`b`$ $`=`$ $`\left(2\delta +{\displaystyle \frac{\alpha \delta }{\gamma }}\right),`$ (28)
$`c^2`$ $`=`$ $`\gamma ^2,`$ (29)
$`d^2`$ $`=`$ $`\delta ^2.`$ (30)
Here the choice of sign of $`\delta `$ arises because of the way we have written this parameter in $`P_{III}`$.
If we now consider the case with $`b=0`$ and we use this in the remainder of our compatibility condition, we recover the two Bäcklund transformations above (with $`b=0`$), and in addition
$`\rho `$ $`=`$ $`{\displaystyle \frac{(a\alpha )Q^2}{\gamma xQ^{}+\gamma ^2xQ^2+(\gamma +\alpha )Q}},`$ (31)
$`b`$ $`=`$ $`\beta =0,`$ (32)
$`c^2`$ $`=`$ $`\gamma ^2,`$ (33)
$`d`$ $`=`$ $`\delta =0.`$ (34)
We note however that in the case $`\beta =\delta =0`$, $`P_{III}`$ is explicitly solvable .
We now consider taking the opposite sign in front of the term in $`\rho ^2`$ in (6), i.e.
$$\rho ^{}=\gamma \rho ^2+\sigma \rho .$$
(35)
However the results thus obtained can be written down simply by changing the sign of $`\gamma `$ in the results obtained above.
Thus far we have assumed $`\gamma 0`$. However, since the change of variables
$$y(x)=\frac{1}{m(x)}$$
(36)
transforms $`P_{III}`$ in the variable $`y(x)`$ into $`P_{III}`$ in the variable $`m(x)`$ but with parameters $`\alpha ^{}=\beta `$, $`\beta ^{}=\alpha `$, $`\gamma ^2=\delta ^2`$ and $`\delta ^2=\gamma ^2`$, we can treat the case $`\gamma =0`$ as the case $`\delta ^{}=0`$ after such a change of variables. This then requires that $`\gamma ^{}0`$, i.e. $`\delta 0`$. The remaining case $`\gamma =\delta =0`$ can be dealt with by another change of variables which maps $`P_{III}`$ with parameters $`\alpha `$, $`\beta `$, $`\gamma =0`$ and $`\delta =0`$ onto $`P_{III}`$ with parameters $`\alpha ^{\prime \prime }=0`$, $`\beta ^{\prime \prime }=0`$, $`\gamma ^{\prime \prime }=2\alpha `$ and $`\delta ^{\prime \prime }=2\beta `$ (and noting that if in addition $`\alpha =0`$ or $`\beta =0`$ the original copy of $`P_{III}`$ is in any case explicitly solvable ).
The four Bäcklund transformations obtained by taking (26)—(30) together with the possible change of sign $`\gamma \gamma `$ correspond to the four fundamental Bäcklund transformations for $`P_{III}`$ in the case $`\gamma \delta 0`$ (denoted $`T_i`$, $`i=1,2,3,4`$, in ). All other known Bäcklund transformations for $`P_{III}`$ in this case $`\gamma \delta 0`$ can be expressed in terms of these $`T_i`$ together with (36) and simple rescalings .
The two Bäcklund transformations obtained by taking (19)—(25) together with the possible change of sign $`\gamma \gamma `$ correspond in the case $`\delta =0`$ (and $`\beta \gamma 0`$) to a second iteration of a Bäcklund transformation given in (see also Theorem 4.1 in for $`\gamma =0`$, or Transformation V in , for $`\alpha \delta 0`$) combined with (36) and suitable rescalings. In the case $`\delta 0`$ they correspond to the second iteration of appropriate combinations of (26)—(30) (i.e. of the transformations $`T_i`$ in ). However the general formulation of these second iterations presented here, of Bäcklund transformations which are usually treated separately, appears to be new.
## 3 Bäcklund transformations for $`P_V`$
We take $`P_V`$, with a similar renaming of paramaters as used for $`P_{III}`$, in the form
$$y^{\prime \prime }=\left(\frac{1}{2y}+\frac{1}{y1}\right)y^2\frac{y^{}}{x}+\frac{(y1)^2}{2x^2}\left(\alpha ^2y\frac{\beta ^2}{y}\right)+\frac{\gamma y}{x}\frac{\delta ^2y(y+1)}{2(y1)}.$$
(37)
When applying the approach outlined above to $`P_V`$ in this form, the presence of a resonance at first order after the leading term means that $`\sigma `$ \[in an equation corresponding to (6)\] is not determined by a linear algebraic equation. Rather then continuing with the resulting set of equations thus obtained, we make instead the change of variables
$$y(x)=\frac{m(x)}{m(x)1}$$
(38)
which gives
$`m(m1)x^2m^{\prime \prime }`$ $`=`$ $`\left(m{\displaystyle \frac{1}{2}}\right)x^2m^2+m(1m)xm^{}+\delta ^2x^2m^5`$ (39)
$`\left(\gamma +{\displaystyle \frac{5}{2}}x\delta ^2\right)xm^4+2(\gamma +\delta ^2x)xm^3`$
$`+{\displaystyle \frac{1}{2}}\left(\beta ^2\alpha ^2\delta ^2x^22\gamma x\right)m^2\beta ^2m+{\displaystyle \frac{1}{2}}\beta ^2,`$
in which form no resonance at first order after the leading term interferes with the determination of the corresponding $`\sigma `$. A generic solution $`m(x)`$ of this last equation is transformed into another solution $`Q(x)`$ of the same equation (39) but with possibly different values of the parameters that we denote by $`a,b,c`$ and $`d`$, as
$$m(x)=\rho (x)+Q(x).$$
(40)
Substitution of (40) into equation (39) gives an equation whose dominant terms near a pole of $`\rho `$, assuming $`\delta 0`$, are
$$\delta ^2\rho ^5+\rho \rho ^2\rho ^2\rho ^{\prime \prime },$$
(41)
which can be integrated to give
$$\rho ^{}\pm \delta \rho ^2.$$
(42)
We consider first the case with the plus sign in this last, and write
$$\rho ^{}=\delta \rho ^2+\sigma \rho .$$
(43)
Substituting this into the transformed version of equation (39) and looking once again at dominant terms, we get
$$\sigma (x)=2\delta Q(x)\frac{\delta +\gamma +\delta ^2x}{\delta x}+\frac{\tau }{\rho }.$$
(44)
Using this in the transformed equation we obtain, as we did for the fourth Painlevé equation , a quadratic in $`\rho `$ which has to be compatible with the Riccati equation
$$\rho ^{}=\delta \rho ^2+\left(2\delta Q(x)\frac{\delta +\gamma +\delta ^2x}{\delta x}\right)\rho +\tau .$$
(45)
Again the analysis of the resulting compatibility condition depends on whether or not $`\tau `$ is assumed to depend only on $`x`$, or on both $`x`$ and $`Q(x)`$. \[Again we do not consider here the case where $`\tau `$ is allowed to depend also on $`Q^{}(x)`$.\] Assuming $`\tau =\tau (x,Q(x))`$, and using the fact that $`Q`$ satisfies $`P_V`$, we obtain a polynomial in $`Q^{}`$ whose higher order coefficient gives the following differential equation for $`\tau `$,
$$2Q(2Q^34Q^2+3Q1)\tau _{QQ}+(4Q^3+6Q^21)\tau _Q+2(2Q^22Q1)\tau =0.$$
(46)
This has general solution
$$\tau =(2Q1)\left[f_1(x)+f_2(x)\mathrm{log}\left(Q\frac{1}{2}+\sqrt{Q(Q1)}\right)\right]+2f_2(x)\sqrt{Q(Q1)}.$$
(47)
Inserting this form for $`\tau `$ into the compatibility condition we next obtain
$`f_1(x)`$ $`=`$ $`{\displaystyle \frac{1}{2\delta x}}(c\gamma ),`$ (48)
$`f_2(x)`$ $`=`$ $`0,`$ (49)
$`d^2`$ $`=`$ $`\delta ^2,`$ (50)
which means that $`\tau `$ in fact takes the form
$$\tau =\frac{1}{2\delta x}(c\gamma )(2Q1).$$
(51)
From this last we see that if we had assumed $`\tau `$ to be a function of $`x`$ only, then as for $`P_{III}`$ we would only have obtained restricted results. Thus once again we see that allowing $`\tau `$ to depend on $`Q`$ leads to more general results.
Using the above results in our compatibility condition, we obtain from the next higher order term the following shift between the parameters,
$$a^2=\frac{\gamma ^2c^2+2\delta ^2(\alpha ^2+\beta ^2b^2)+2\delta (\gamma c)}{2\delta ^2}.$$
(52)
Using this in our compatibility condition then leads to
$`b^2`$ $`=`$ $`{\displaystyle \frac{4\beta ^2\delta ^3+2\left[(c+\gamma )\beta ^2+(c\gamma )(\alpha ^21)\right]\delta ^2+(\gamma ^23c^2+2c\gamma )\delta c^3+c\gamma ^2}{4\delta ^2(c+\delta )}}`$
where we have assumed that $`c+\delta 0`$ (the case $`c+\delta =0`$ has to be considered separately). Substituting back once again then provides the additional condition
$$(c\gamma )(c+\gamma +2\delta )(c+\delta \delta \alpha \delta \beta )(c+\delta +\delta \alpha \delta \beta )(c+\delta \delta \alpha +\delta \beta )(c+\delta +\delta \alpha +\delta \beta )=0.$$
(54)
This provides essentially three different cases to consider, since the last three factors are related to the third under changes of sign of $`\alpha `$ and $`\beta `$. These three cases are:
$`c`$ $`=`$ $`\gamma ,`$ (55)
$`c`$ $`=`$ $`\gamma 2\delta ,`$ (56)
$`c`$ $`=`$ $`\delta (\beta +\alpha 1).`$ (57)
The case $`c=\gamma `$ corresponds to the identity transformation (this is easily seen by substituting this condition into the above expressions for $`a^2`$ and $`b^2`$.) The other two cases lead to two nontrivial Bäcklund transformations, i.e.
$`\rho `$ $`=`$ $`Q(Q1){\displaystyle \frac{A_1(x,Q,Q^{})}{A_2(x,Q,Q^{})}},`$ (58)
$`A_1`$ $`=`$ $`2\delta x(\delta +\gamma )Q^{}+2\delta ^2x(\delta +\gamma )Q^2+2(\delta ^2\delta ^3x+2\gamma \delta \delta ^2\gamma x+\gamma ^2)Q`$ (59)
$`+\delta ^2(\beta ^2\alpha ^21)2\gamma \delta \gamma ^2,`$
$`A_2`$ $`=`$ $`\delta ^2x^2Q^2+2\delta ^3x^2(QQ^2)Q^{}+\delta ^4x^2Q^42\delta ^4x^2Q^3\alpha ^2\delta ^2`$ (60)
$`+(\delta ^4x^2\delta ^22\gamma \delta \gamma ^2)Q^2+(\delta ^2+\alpha ^2\delta ^2\beta ^2\delta ^2+2\gamma \delta +\gamma ^2)Q,`$
$`a^2`$ $`=`$ $`\beta ^2,`$ (61)
$`b^2`$ $`=`$ $`\alpha ^2,`$ (62)
$`c`$ $`=`$ $`\gamma 2\delta ,`$ (63)
$`d^2`$ $`=`$ $`\delta ^2,`$ (64)
and
$`\rho `$ $`=`$ $`{\displaystyle \frac{2Q(Q1)(\alpha \delta +\beta \delta \gamma \delta )}{2\delta xQ^{}2\delta ^2xQ^2+2(\gamma +\delta +\delta ^2x)Q+\delta (\alpha \beta 1)\gamma }},`$ (65)
$`a^2`$ $`=`$ $`\left[{\displaystyle \frac{\gamma +\delta (\alpha \beta +1)}{2\delta }}\right]^2,`$ (66)
$`b^2`$ $`=`$ $`\left[{\displaystyle \frac{\gamma +\delta (\alpha \beta 1)}{2\delta }}\right]^2,`$ (67)
$`c`$ $`=`$ $`\delta (\beta +\alpha 1),`$ (68)
$`d^2`$ $`=`$ $`\delta ^2.`$ (69)
We also have of course Bäcklund transformations obtained from the above under combinations of $`\alpha \alpha `$, $`\beta \beta `$. As mentioned earlier, we also have our compatibility condition for the case $`c+\delta =0`$, which needs to be considered separately. However this leads only to restricted cases of the above Bäcklund transformations.
We now consider taking the opposite sign in front of the term in $`\rho ^2`$ in (43), i.e.
$$\rho ^{}=\delta \rho ^2+\sigma \rho .$$
(70)
The results thus obtained are as above with $`\delta \delta `$.
In the above we have assumed $`\delta 0`$. We note that $`P_V`$ in the case $`\delta =0`$ can be reduced to $`P_{III}`$ with $`\gamma \delta 0`$ , which we considered in the previous section.
The Bäcklund transformations represented by (65)—(69) do not seem to have been given before. They correspond to a composition of what appear to be the only previously known (see ) auto-Bäcklund transformations for $`P_V`$ in the case $`\delta 0`$. These known Bäcklund transformations, as well as this composistion, are derived in our approach by consideration of the ODE satisfied by $`\rho (x)`$ (we leave details of this derivation to ; the resulting discussion is analogous to our discussion of $`P_{IV}`$ in ). The Bäcklund transformations represented by (58)—(64) correspond to second iterations, with appropriate choices of signs, of (65)—(69).
## 4 Conclusions
We have applied our approach to obtaining Bäcklund transformations, which is based on mappings preserving natural subsets of movable poles, to $`P_{III}`$ and $`P_V`$. For $`P_{III}`$ we have recovered the four fundamental Bäcklund transformations for the case $`\gamma \delta 0`$, as well as what appears to be a new general formulation of the second iteration of these or of a further Bäcklund transformation which exists in the case $`\gamma \delta =0`$. For $`P_V`$ our approach allows us to recover what appear to be all known Bäcklund transformations (details in ), and in addition a composition of these which seems not to have been given previously. A crucial point in obtaining such general results for $`P_{III}`$ and $`P_V`$ is our allowing $`\tau `$ to depend not only on $`x`$ but also on $`Q(x)`$. Further aspects of our approach for $`P_{III}`$ and $`P_V`$ are considered in .
## Acknowlegements
The research of PRG was supported in part by the DGICYT under contract PB95-0947. NJ acknowledges support from the Australian Research Council. AP thanks the Ministry of Education and Culture of Spain for a post-doctoral fellowship.
## Appendix
In this appendix we briefly compare the approach outlined here with the standard Painlevé truncation. We take as an example the third Painlevé equation $`P_{III}`$.
Seeking a solution of (2) in the form of a so-called truncated Painlevé expansion, we obtain
$$y=\frac{\phi ^{}}{\gamma \phi }\frac{1}{2\gamma }\left(\frac{\phi ^{\prime \prime }}{\phi ^{}}+\frac{\alpha +\gamma }{\gamma x}\right),$$
(71)
together with an equation of the form $`A\phi ^1+B=0`$, for some $`A`$, $`B`$.
The truncated expansion (71) can be compared with the results obtained in Section 2 by linearising the Riccati equation (8), (15) by setting $`\rho =\phi ^{}/(\gamma \phi )`$, which then leads to
$$Q=\frac{1}{2\gamma \frac{(a\alpha )\phi }{x\phi ^{}}}\left(\frac{\phi ^{\prime \prime }}{\phi ^{}}+\frac{\alpha +\gamma }{\gamma x}\right),$$
(72)
and thus (3) becomes
$$y=\frac{\phi ^{}}{\gamma \phi }\frac{1}{2\gamma \frac{(a\alpha )\phi }{x\phi ^{}}}\left(\frac{\phi ^{\prime \prime }}{\phi ^{}}+\frac{\alpha +\gamma }{\gamma x}\right).$$
(73)
This is the same as the truncated Painlevé expansion (71) only in the case $`a=\alpha `$, and thus we see that the only results obtainable using (71) will be those for this restricted case. That is, we can only obtain the identity or restricted cases of the Bäcklund transformations presented in Section 3. Deriving these results requires applying the method presented in rather than a standard truncation approach.
Similar remarks hold for $`P_{IV}`$ and $`P_V`$, or in general when the function $`\tau 0`$. |
no-problem/0001/gr-qc0001051.html | ar5iv | text | # Cosmological Constant, Quintessence and Scalar-Tensor Theories of Gravity**footnote *Delivered at The Ninth Workshop on General Relativity and Gravitation, Hiroshima University, Nov. 3–6, 1999: International Seminar Nara 99, Space, Time and Interactions, Nara Women’s University, December 4–5, 1999. To appear in Proceedings.
## I Introduction
For the last few years we have heard much about indications, coming from such observations like age problem, large-scale structure, lens statistics and type Ia supernovae, for the presence of a small but nonzero cosmological constant. The expected size is measured in units of the parameter $`\mathrm{\Omega }_\mathrm{\Lambda }\mathrm{\Lambda }/\rho _{\mathrm{crit}}`$, with the most favored number somewhere around $`0.60.7`$ .
Acceptance of the conclusion from a theoretical point of view is limited, however, because $`\mathrm{\Lambda }`$ has been considered to be highly artificial, unnatural and arbitrary. Alternative ideas have been discussed to produce nearly the same end result. One of the key observations is that in the language of ideal fluid in cosmology, introducing $`\mathrm{\Lambda }`$ is equivalent to assuming an equation of state, $`p=\rho `$. This aspect of negative pressure is what one can implement in terms of a scalar field that couples to ordinary matter roughly as weakly as gravity.
The study of this type of a “gravitational scalar field” has a long history . Recently, Chiba et al. presented a detailed analysis showing how the assumed negative pressure can be compared with observations . They did not confine themselves to a scalar field, calling it X-matter. Caldwell et al. were the first to use a new name “quintessence,” as the fifth element of the universe in addition to the four; baryons, radiation, dark matter and neutrinos . They focussed more on the scalar field, discussing what the potential of the field should be like in order to fit the observations. An inverse-power potential is preferred over the exponential potential, for example.
We complain, however, that many of the analyses are too phenomenology-oriented . We are talking about $`\mathrm{\Lambda }`$, which has been much discussed, often under such a grand name as the “cosmological constant problem,” at the center of which lies the question why the observed value or its upper bound is so small compared with what we expect naturally from a theoretical point of view, by as much as 120 orders of magnitude .
In this connection we remind the audience that some progress has been achieved in terms of scalar-tensor theories. We believe that these theories are promising because they might share important aspects with attempts toward unified theories, and also because they provide the simplest derivation of an exponential potential of the scalar field, allowing us to implement the “scenario of a decaying cosmological constant” . According to this scenario, we find $`\mathrm{\Lambda }`$ not to be a true constant, but to decay like $`\mathrm{\Lambda }(t)\sim t^{-2}`$. In the (reduced) Planckian unit system of $`c=\mathrm{}=8\pi G=M_\mathrm{P}^{-2}=1`$, the present age of the universe, $`t_0\sim 10^{10}\mathrm{y}`$, is about $`10^{60}`$, hence giving $`\mathrm{\Lambda }(t_0)\sim 10^{-120}`$; today’s cosmological constant is small only because our universe is old, not because of an unnatural fine-tuning of the parameters to the accuracy of 120 orders of magnitude.
This “success,” however, is not good enough for understanding the accelerated universe suggested by the recent observations, which indicate a $`\mathrm{\Lambda }(t)`$ that behaves nearly constant for some duration of time toward the present epoch, or at least falls off more slowly than $`t^{-2}`$. This requires a deviation from the purely exponential potential, though without losing its advantage of implementing the scenario of a decaying $`\mathrm{\Lambda }(t)`$. A proposed modification of the prototype Brans-Dicke (BD) model will be presented in Section V, which will be preceded, however, by Sections II–IV. In Section II, we show that the prototype BD model with $`\mathrm{\Lambda }`$ added entails a serious drawback. A remedy is proposed in Section III. A possible link to non-Newtonian gravity is suggested in Section IV. Section VI is for discussion, including the issue of time-variation of coupling constants, the novel view on the coincidence problem, the expected chaos-like nature of the solution, and possible extensions of the model.
The present article is based on the talks intended to outline basic ideas and main conclusions of Refs. [11–14], in which more details and references will be found.
## II Prototype BD model with $`\mathrm{\Lambda }`$
We start with the Lagrangian
$$ℒ_{\mathrm{BD}}=\sqrt{-g}\left(\frac{1}{2}\xi \varphi ^2R-ϵ\frac{1}{2}g^{\mu \nu }\partial _\mu \varphi \partial _\nu \varphi -\mathrm{\Lambda }+L_{\mathrm{matter}}\right),$$
(1)
where we use symbols similar to but slightly different from the original ones ;
$$4\xi \omega =ϵ=\mathrm{Sign}(\omega ),\phi _{\mathrm{BD}}=(\xi /2)\varphi ^2.$$
We use the reduced Planckian unit system, in which $`\mathrm{\Lambda }`$ is naturally expected to be of the order unity, as suggested in many of theoretical models of unification.
An important assumption is made that the scalar field is decoupled from ordinary matter fields at the level of the Lagrangian. This feature distinguishes the prototype BD model from the preceding model due to Jordan . In the field equation, however, we find that the scalar field does couple to the trace of the matter energy-momentum tensor. This is due to the mixing interaction between the scalar field and the spinless component of the metric tensor implied by the nonminimal coupling term. The scalar field couples to matter only via the metric field. This explains why the scalar field in this model respects Weak Equivalence Principle (WEP) which is a privilege of the force endowed with spacetime geometry.
Conformal transformations defined by
$$g_{\mu \nu }\rightarrow g_{*\mu \nu }=\mathrm{\Omega }^2(x)g_{\mu \nu },$$
(2)
always provide a powerful tool in analyzing theories with a nonminimal coupling, like the first term on the right-hand side of (1). Of particular importance is the choice
$$\mathrm{\Omega }=\xi ^{1/2}\varphi ,$$
(3)
by which we eliminate the nonminimal coupling. In fact the same Lagrangian (1) can be put into the form
$$ℒ_{\mathrm{BD}}=\sqrt{-g_*}\left(\frac{1}{2}R_*-\frac{1}{2}g_*^{\mu \nu }\partial _\mu \sigma \partial _\nu \sigma -V(\sigma )+L_{\mathrm{matter}}\right),$$
(4)
in which the term of the curvature scalar is the standard Einstein-Hilbert (EH) term, and the canonical scalar field is now $`\sigma `$ related to the original $`\varphi `$ by
$$\varphi =\xi ^{-1/2}e^{\zeta \sigma },$$
(5)
where
$$\zeta ^{-2}=6+ϵ\xi ^{-1}.$$
(6)
We assume that the right-hand side is positive;
$$ϵ\xi ^{-1}>-6.$$
(7)
Notice also that the term of $`\mathrm{\Lambda }`$ acts now as an exponential potential as alluded before;
$$V(\sigma )=\mathrm{\Lambda }\mathrm{\Omega }^{-4}=\mathrm{\Lambda }e^{-4\zeta \sigma }.$$
(8)
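The chain of definitions (3), (5) and (8) can be checked symbolically in a few lines; the following minimal sketch (Python/sympy, purely illustrative) recovers the exponential potential:

```python
import sympy as sp

xi, zeta, sigma, Lam = sp.symbols('xi zeta sigma Lambda', positive=True)

varphi = xi**sp.Rational(-1, 2)*sp.exp(zeta*sigma)   # Eq. (5)
Omega = sp.sqrt(xi)*varphi                           # Eq. (3)
V = Lam*Omega**(-4)                                  # Eq. (8), first equality
print(sp.simplify(V))                                # -> Lambda*exp(-4*zeta*sigma)
```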
We say that we have moved from the original Jordan conformal frame, abbreviated by J frame, to Einstein (E) frame. Generally speaking, quantities in different conformal frames (CFs) are related to each other in an unambiguous manner. For this reason different CFs are sometimes said to be equivalent to each other. Physics may look different, however. Then a question may arise how we can single out a particular CF out of (infinitely many) CFs. We do it according to what clock or meter stick we use. Remember that (2) represents a local change of units , as will be evident if it is put into the form $`ds_*^2=\mathrm{\Omega }^2(x)ds^2`$. These features will be demonstrated later by explicit examples.
Let us choose E frame for convenience of mathematical analysis, for the moment. Consider spatially flat Robertson-Walker cosmology. Looking for the solution in which the scalar field is spatially uniform, we find the analytic solution;
$`a_*(t_*)`$ $`=`$ $`t_*^{1/2},`$ (9)
$`\sigma (t_*)`$ $`=`$ $`\overline{\sigma }+\frac{1}{2}\zeta ^{-1}\mathrm{ln}t_*,`$ (10)
$`\rho _m(t_*)`$ $`=`$ $`\left(1-\frac{1}{4}\zeta ^{-2}\right)t_*^{-2}\times \{\begin{array}{c}\frac{3}{4},\hfill \\ 1,\hfill \end{array}`$ (13)
$`\rho _\sigma (t_*)`$ $`=`$ $`t_*^{-2}\times \{\begin{array}{c}\frac{3}{16}\zeta ^{-2},\hfill \\ \frac{1}{4}\left(\zeta ^{-2}-1\right),\hfill \end{array}`$ (16)
where $`t_*`$ is the cosmic time in E frame, $`\rho _m`$ is the ordinary matter density, while $`\rho _\sigma \equiv \dot{\sigma }^2/2+V(\sigma )`$ may be interpreted as an effective cosmological constant $`\mathrm{\Lambda }_{\mathrm{eff}}`$. The upper and lower lines in (13) and (16) are for the radiation- and dust-dominated universe, respectively. We first notice that (16) shows that $`\mathrm{\Lambda }_{\mathrm{eff}}`$ decays like $`t_*^{-2}`$, thus implementing the scenario of a decaying cosmological constant.
The solution (9)-(16), which is in fact an attractor reached asymptotically, differs in many ways from the one obtained from the equation without $`\mathrm{\Lambda }`$. For example, (9) for the scale factor holds true not only for radiation-dominance but also for dust-dominance; the exponent for dust dominance without $`\mathrm{\Lambda }`$ is given by $`2/(3+2\zeta ^2)`$, which is close to $`2/3`$ if $`\zeta ^2\ll 1`$.
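The attractor nature of the solution is easy to confirm numerically. Below is a minimal sketch (Python/scipy; $`\zeta =1`$ and the initial data are arbitrary choices of ours, in the reduced Planckian units) that integrates the E frame equations for $`\sigma `$ with the exponential potential plus a radiation fluid, and shows that $`\rho _\sigma /(\rho _\sigma +\rho _m)`$ relaxes to the value $`1/(4\zeta ^2)`$ implied by (13) and (16):

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta, Lam = 1.0, 1.0                  # illustrative values

def rhs(t, y):
    # y = [sigma, dsigma/dt, rho_radiation]; H from the Friedmann constraint
    s, sd, rho_r = y
    V = Lam*np.exp(-4.0*zeta*s)
    H = np.sqrt((0.5*sd**2 + V + rho_r)/3.0)
    return [sd, -3.0*H*sd + 4.0*zeta*V, -4.0*H*rho_r]

sol = solve_ivp(rhs, [1.0, 1e8], [0.0, 0.0, 1.0], method='LSODA', rtol=1e-10)
s, sd, rho_r = sol.y[:, -1]
rho_sigma = 0.5*sd**2 + Lam*np.exp(-4.0*zeta*s)
print(rho_sigma/(rho_sigma + rho_r), 1.0/(4.0*zeta**2))   # both ~0.25
```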
More important is, however, that particle masses depend on $`\sigma `$, and hence on time. The mass term of the quark, for example, is given by
$$ℒ_{mq}=-\sqrt{-g}m_0\overline{q}q,$$
(17)
with a constant $`m_0`$, due to the assumed absence of $`\varphi `$ in the matter Lagrangian. In E frame, however, we have
$$ℒ_{mq}=-\sqrt{-g_*}m_*\overline{q}_*q_*,$$
(18)
with $`q_*=\mathrm{\Omega }^{-3/2}q`$, while
$$m_*=m_0\mathrm{\Omega }^{-1}=m_0e^{-\zeta \sigma }\propto t_*^{-1/2},$$
(19)
where we have used (10) to derive the last expression in (19).
If we consider the period of primordial nucleosynthesis, for example, (19) would imply a decrease of the nucleon mass by, say, as much as $`70`$%. This is unacceptable because the physical processes are analyzed in terms of quantum mechanics in which particle masses are taken as constant. The physical CF corresponding to this situation is J frame. We should then study the solution in J frame.
We may restart with the cosmological equations in J frame. There is, however, another interesting approach by using the relations
$`dt_*`$ $`=`$ $`\mathrm{\Omega }dt,`$ (20)
$`a_*`$ $`=`$ $`\mathrm{\Omega }a`$ (21)
which can be obtained directly from (2). Combining these with $`\mathrm{\Omega }\propto \varphi \propto t_*^{1/2}`$ obtained from (3), (5) and (10), we derive
$$a=\text{const},$$
(22)
implying that the universe looks static in J frame. This can be understood even in E frame by noting that $`a_*`$ and $`m_*^{-1}`$ grow in the same way $`\propto t_*^{1/2}`$, and that assuming time-independent mass is equivalent to choosing the inverse mass as the time unit.
We have a dilemma; varying mass in E frame and static scale factor in J frame. We admit that the conclusion is not final, because we assumed the simplest nonminimal coupling term and a purely constant $`\mathrm{\Lambda }`$ in J frame. These can be modified. More detailed study shows, however, that we must accept rather extreme choices of parameters. We would find it increasingly difficult and unnatural to understand why the simple standard cosmology works so well. It seems better to modify the model at the more fundamental level, still based on simple physical principles.
## III Dilaton model
In view of (9) which happens to coincide with the standard result for the radiation-dominated universe, we try to see if we can have a constant mass in E frame instead of J frame. Let us assume the E frame mass term,
$$ℒ_{mq}=-\sqrt{-g_*}m_q\overline{q}_*q_*,$$
(23)
where $`m_q`$ is chosen to be constant. This can be transformed back to J frame;
$$ℒ_{mq}=-\sqrt{-g}\xi ^{1/2}m_q\overline{q}q\varphi ,$$
(24)
a simple Yukawa interaction, with the coupling constant $`\xi ^{1/2}m_q`$ which is in fact dimensionless, as will be confirmed by re-installing $`M_\mathrm{P}^{-1}`$. We have no mass term in the starting Lagrangian. The nonminimal coupling implies that we have no dimensional gravitational constant either. For this reason we have scale invariance, or dilatation symmetry, which might be chosen as the simple physical principle we mentioned above. We have a conserved dilatation current in J frame. The presence of the mass term (23) and the EH term in E frame suggests that dilatation symmetry is broken spontaneously, with $`\sigma `$ a massless Nambu-Goldstone boson, called a dilaton.
Allowing $`\varphi `$ to be present in the J frame Lagrangian implies that WEP is violated. We still maintain Equivalence Principle at the more fundamental level, in the sense that the tangent space of curved spacetime is Minkowskian. Breaking WEP as a phenomenological law can be tolerated within the limit allowed by the fifth-force type experiments . We now have E frame as a physical CF, in which we develop radiation-dominated cosmology showing the standard result with constant particle masses. The scalar field is left decoupled from ordinary matter in E frame. This simple situation changes, however, once we take the effect of interactions among matter fields into account.
Consider the quark-gluon interaction, for example. Calculation based on relativistic quantum field theory suffers always from divergences. To put this under control we use the recipe of dimensional regularization. We start with spacetime dimension $`N`$ which we choose off the physical value 4. Divergences can be represented by poles $`(N-4)^{-1}`$ if we confine ourselves to 1-loop diagrams for the moment. After applying renormalization techniques with $`N\ne 4`$, we finally go to the limit $`N=4`$ at the end of the calculation. We now observe that the scalar field is decoupled from matter only at $`N=4`$. In other words, we have a coupling proportional to $`N-4`$ which may cancel the pole coming from divergences, thus leaving a nonzero finite result. This is a typical way we obtain “quantum anomalies,” which play important roles in particle physics. In the same way, we are left with a nonzero matter coupling of the scalar field in general.
The QCD calculation yields
$$ℒ_{mq1}=-\sqrt{-g_*}\zeta _qm_q\overline{q}_*q_*\sigma ,$$
(25)
where
$$\zeta _q=\frac{5\alpha _s}{\pi }\zeta \approx 0.3\zeta ,$$
(26)
with $`\alpha _s\approx 0.2`$ the QCD counterpart of the fine-structure constant. The occurrence of $`\alpha _s`$, which is specific to the quark, implies composition-dependence of the force mediated by $`\sigma `$, in accordance with violation of WEP, as discussed before.
For more-loop diagrams we obtain terms of higher power of $`\sigma `$. Combined with the mass term (23), these coupling terms will give a $`\sigma `$-dependent mass, as before. We will show, however, that the dependence might be reasonably weaker than in the prototype model. Consider a nucleon, which constitutes a major component of the real world, but is composite, making the same calculation as for the quark inapplicable. In view of the rather small quark-mass contribution ($`\approx 60`$ MeV) to the total nucleon mass $`\approx 940`$ MeV, we estimate an effective coupling strength of the nucleon;
$$\zeta _N\approx 0.02\zeta .$$
(27)
The smallness of $`\zeta _N/\zeta \ll 1`$ seems to justify E frame as a physical CF to a good approximation, although strictly speaking we should move to another CF in which the nucleon mass is truly constant. By this process $`\varphi `$ reappears in front of the scalar curvature, thus making the gravitational constant time-dependent. The predicted rate of change based on (27) is $`\dot{G}/G\sim 10^{-12}\mathrm{y}^{-1}`$, roughly consistent with the observational upper bound obtained so far .
We also find that the exponent of the scale factor in the dust-dominated universe is given by
$$\alpha _*=\frac{2}{3}\left(1-\frac{\overline{\zeta }}{\zeta }\right)\approx \frac{2}{3},$$
(28)
where $`\overline{\zeta }`$ is an average of $`\zeta `$’s for particles that comprise dust matter. In this way we are in a better position than with the model discussed in Section II.
We also add that scale invariance is finally broken explicitly when the quantum effect is included. $`\sigma `$ is now a pseudo NG boson which acquires a nonzero mass. This seems to raise a tempting possibility that $`\sigma `$ provides yet another origin of non-Newtonian gravity , as will be further pursued.
## IV Non-Newtonian gravity
We decompose $`\sigma `$ into the cosmological background part and the locally fluctuating component;
$$\sigma (x)=\sigma _b(t_*)+\sigma _f(x).$$
(29)
Substituting this into (25) yields the matter coupling of each component.
Consider the $`\sigma _f`$-coupling. A 1-quark-loop diagram will give a self-mass $`\mu _f`$ basically given by
$$\mu _f^2\sim m_q^2M_{sb}^2,$$
(30)
where we assumed the cut-off energy of the order of $`M_{sb}`$, the mass scale of supersymmetry breaking. We use the values $`m_q\approx 5`$ MeV and $`M_{sb}\approx 1`$ TeV, which are $`2\times 10^{-21}`$ and $`4\times 10^{-16}`$, respectively, in the reduced Planckian unit system, obtaining $`\mu _f\approx 0.84\times 10^{-36}\approx 2.1\times 10^{-9}`$ eV, and the corresponding force-range $`\lambda =\mu _f^{-1}\approx 1.2\times 10^{36}=9.6\times 10^3\mathrm{cm}\approx 100\mathrm{m}`$. This turns out to be a macroscopic size considered typically as the force-range of non-Newtonian gravity , though obviously latitudes of several orders of magnitude should be allowed.
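The unit conversions behind these numbers are elementary and easily reproduced; the script below (our own arithmetic, with the reduced Planck mass and length inserted by hand) recovers the quoted values:

```python
M_P = 2.44e18             # reduced Planck mass in GeV (assumed input)
l_P = 8.1e-33             # reduced Planck length in cm (assumed input)

m_q  = 5e-3/M_P           # 5 MeV  -> ~2e-21 in Planckian units
M_sb = 1e3/M_P            # 1 TeV  -> ~4e-16

mu_f = m_q*M_sb           # Eq. (30)
print(mu_f)               # ~0.84e-36
print(mu_f*M_P*1e9, 'eV') # ~2.1e-9 eV
print(l_P/mu_f, 'cm')     # ~9.6e3 cm, i.e. ~100 m
```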
On the other hand, it has been argued that the force of $`\sigma `$ should be of long-range . A simple estimate of this kind can be obtained by considering a component of the Einstein equation $`3H_*^2=\rho _\sigma +\rho _m`$. The left-hand side would be $`\sim t_*^{-2}`$, giving $`\rho _\sigma \sim V\sim t_*^{-2}`$. We may also expect that the mass-squared $`\mu ^2`$ will be given by $`\mu ^2\sim V^{\prime \prime }\sim V`$. Combining these we reach a conclusion $`\lambda \sim t_*`$, which is the size of the whole visible universe, in sharp contrast with the previous estimate.
In this connection we point out that the latter argument applies to $`\sigma _b`$, a classical solution of a nonlinear equation, while $`\mu _f^2`$ in the former is a squared mass of a quantized field, which is based on the solution of a linear harmonic oscillator. These two can be simply different from each other. An example is provided by the sine-Gordon equation in 2 dimensions, featuring a quantized mesonic excitation and the classical soliton solution, which show behaviors entirely different from each other .
We do not know if the 2-dimensional model provides an appropriate analogy, nor if our cosmological equations in 4 dimensions allow a soliton-type solution. Nevertheless the lesson learned from this example seems highly suggestive. Without any further details at this moment, we offer a conjecture that $`\sigma _f`$ mediates an intermediate-range force between local objects, with $`\sigma _b`$ still rolling down the exponential slope slowly.
We also point out that $`\sigma _b`$ may also acquire self-energy through its own matter coupling. As it turns out, this amounts to part of the vacuum energy of matter fields. In the sense of quantum field theory, this contribution is given roughly by $`M_{sb}^4`$, which provides another origin of the cosmological constant, with the size represented by $`\mathrm{\Omega }_\mathrm{\Lambda }`$ as large as $`10^{60}`$. Since $`M_{sb}`$ depends on $`\sigma `$, we would have the potential too large by the same amount. The same “success” as for the contribution from $`\mathrm{\Lambda }`$ in the J frame Lagrangian does not seem to be expected. An entirely different mechanism, based perhaps on nonlinearity or topological nature, is called for. If a solution is discovered finally, however, we would likely be left with non-Newtonian gravity featuring finite force-range and WEP violation. Quintessence might be quintessentially a fifth force.
The phenomenological aspects of this putative force have been analyzed in terms of the static potential between two nuclei;
$$V_{5ab}(r)=-\frac{Gm_am_b}{r}\left(1+\alpha _{5ab}e^{-r/\lambda }\right).$$
(31)
The size of the coefficient $`\alpha _{5ab}`$ is given essentially by
$$\alpha _{5ab}\sim \alpha _{5N}=2\zeta _N^2\approx 10^{-3},$$
(32)
where the estimate at the far right side is based on (27) together with $`\zeta \approx 1`$ assumed. Given the experimental constraints , we are on the verge of immediate exclusion, though it might be premature to draw a final conclusion before more detailed analyses are completed on various kinds of uncertainties.
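The quoted size of $`\alpha _{5N}`$ follows from (27) in one line (with $`\zeta \approx 1`$ assumed, as above):

```python
zeta = 1.0                # assumed, as in the text
zeta_N = 0.02*zeta        # Eq. (27)
print(2*zeta_N**2)        # ~8e-4, i.e. alpha_5N ~ 10^-3
```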
## V Two-scalar model
We now discuss how the exponential potential should be modified to understand an accelerated universe. We start with a clue which we find in a transient solution for the exponential potential.
Suppose $`\sigma (t_*)(=\sigma _b)`$ starts moving ahead even slightly. The potential will decrease rapidly, and then the frictional force coming from the cosmological expansion will decelerate $`\sigma `$ until it comes to an almost complete stop. $`\rho _\sigma `$ will fall off much faster than $`\rho _m`$, and then level off to a near constant. This state lasts until $`\rho _m`$ comes down to be comparable again with $`\rho _\sigma `$, when the system enters the asymptotic phase described by (9)-(16). The plateau of $`\rho _\sigma `$ thus formed is expected to mimic a cosmological constant.
Detailed numerical simulation shows, however, that $`\sigma `$ starts moving again toward the asymptotic behavior a little too early, before $`\rho _\sigma `$ crosses $`\rho _m`$. Remember that this crossing, if it occurred, would have implied $`\mathrm{\Omega }_\mathrm{\Lambda }=0.5`$. As a consequence, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ would fall short of values as “large” as 0.6–0.7. We must invent a mechanism to prevent this premature departure of $`\sigma `$.
After somewhat random effort on a trial-and-error basis, we came across a proposal . We introduce another scalar field $`\mathrm{\Phi }`$ which couples to $`\sigma `$ through the E frame potential,
$$V(\sigma ,\mathrm{\Phi })=e^{-4\zeta \sigma }\left(\mathrm{\Lambda }+\frac{1}{2}m^2\mathrm{\Phi }^2\left[1+\gamma \mathrm{sin}(\kappa \sigma )\right]\right),$$
(33)
as shown graphically in Fig. 1, where $`m,\gamma `$ and $`\kappa `$ are constants roughly of the order unity. We will study the consequences simply for the success, without any discussion of the theoretical origin, at this moment. In Fig. 2, we show an example of the solution, in which quantities are plotted against $`u\equiv \mathrm{log}t_*`$.
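For readers who wish to experiment with (33), a minimal integration can be set up as below (Python/scipy). The parameter values and initial data here are placeholders of ours, not those behind Fig. 2; reproducing the figure requires the moderate fine-tuning described below, and the small matter coupling of $`\sigma `$ is neglected:

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta, Lam, m, gam, kap = 1.6, 1.0, 4.8, 0.8, 10.0   # placeholder values

def V(s, P):
    return np.exp(-4*zeta*s)*(Lam + 0.5*m**2*P**2*(1 + gam*np.sin(kap*s)))

def dV_ds(s, P):
    return (-4*zeta*V(s, P)
            + np.exp(-4*zeta*s)*0.5*m**2*P**2*gam*kap*np.cos(kap*s))

def dV_dP(s, P):
    return np.exp(-4*zeta*s)*m**2*P*(1 + gam*np.sin(kap*s))

def rhs(t, y):
    s, sd, P, Pd, rho_m = y
    H = np.sqrt((0.5*sd**2 + 0.5*Pd**2 + V(s, P) + rho_m)/3.0)
    return [sd, -3*H*sd - dV_ds(s, P),
            Pd, -3*H*Pd - dV_dP(s, P),
            -3*H*rho_m]                      # pressureless matter

y0 = [2.0, 0.0, 0.5, 0.0, 1.0]               # placeholder initial data
sol = solve_ivp(rhs, [1.0, 1e10], y0, method='LSODA', rtol=1e-8)
print(sol.y[0, -1], sol.y[2, -1])            # final sigma and Phi
```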
We notice that $`\sigma `$ and $`\mathrm{\Phi }`$ in the upper diagram show alternate occurrence of rapid changes and near standstills. Corresponding to this we find plateau behaviors of $`\rho _s`$, the energy density of the $`\sigma \mathrm{\Phi }`$ system, in the lower diagram. There are two of them in this particular example. In both of them $`\rho _s`$ crosses $`\rho _m`$, resulting in sufficient acceleration of the scale factor, as clearly recognized as two mini-inflations in the upper and middle diagrams.
We moderately fine-tuned initial values and parameters such that the second mini-inflation takes place around the present epoch, $`u\approx 60`$. It is rather easy to “predict” values like $`t_0=(1.1\text{–}1.4)\times 10^{10}\mathrm{y}`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6\text{–}0.8`$ and $`h=0.6\text{–}0.8`$, with more details described in Ref. .
We provide an intuitive explanation briefly on how we understand this result. The decelerated $`\sigma `$ settles at a place which is determined only by the “initial” values, quite independent of where it is with respect to the sinusoidal potential which had sunk so much. As the time elapses, the potential grows in terms of the strength relative to the frictional force. It may happen that $`\sigma `$ finds itself near one of the minima of the sinusoidal potential. $`\sigma `$ is then trapped, tighter and tighter, as the potential strength proportional to $`\mathrm{\Phi }^2`$ builds up again in the relative sense as above. This serves to prevent $`\sigma `$ from escaping too early, and consequently to have the plateau behavior to last sufficiently long. The growing potential implies, however, that also $`\mathrm{\Phi }`$ is attracted toward the central valley $`\mathrm{\Phi }=0`$ stronger and stronger. Finally the time will come when $`\mathrm{\Phi }`$ starts moving downward. This entails a sudden collapse of the potential wall that has confined $`\sigma `$. With the accumulated energy $`\sigma `$ is catapulted forward. See Fig. 3 showing a magnified view of this behavior. In short this “trap-and-catapult mechanism” is a result of a superb collaboration of the two scalar fields.
The decelerated $`\sigma `$ may find itself again near a potential minimum. Then nearly the same pattern of behavior will repeat itself, as in the above example. Otherwise, $`\sigma `$ and $`\mathrm{\Phi }`$ will make a smooth transition to the asymptotic phase, as in the model without $`\mathrm{\Phi }`$. No mini-inflation and hence no extra acceleration take place. There are many different combinations of these typical behaviors.
## VI Discussion
Our argument in Section IV for a possible link to non-Newtonian gravity will remain unchanged for $`\sigma `$ even in the two-scalar model. Experimental searches for the non-Newtonian force with improved accuracy are strongly encouraged. On the other hand, $`\mathrm{\Phi }`$ will most likely stay extremely massive leaving virtually no trace in the macroscopic physics.
In Fig. 2 we find $`\rho _s`$ and $`\rho _m`$ fall off interlacing each other, maintaining the dependence $`\propto t_*^{-2}`$ as an overall behavior. In this way we inherit the scenario of a decaying cosmological constant, unlike many other models of quintessence.
We have not attempted to determine the parameters and initial values uniquely. We add some remarks, however. They should be determined such that a plateau behavior has lasted long enough to cover the present epoch and the period of nucleosynthesis, as shown in Fig. 2. By requiring this we can guarantee that particle masses must have been the same as today, also keeping the period well in the radiation-dominated epoch. There is a wide area in parameter space where this is not the case, thus endangering one of the most successful part of standard cosmology.
Some of the coupling constants, including $`G`$, may depend on time, probably through their dependence on scalar fields. Assuming that we are today during the period of a plateau behavior entails no time variability of the coupling constants, or variability below the level of $`t_0^{-1}\sim 10^{-10}\mathrm{y}^{-1}`$ by perhaps several orders of magnitude, apparently in good agreement with the upper bounds obtained so far . The previous argument on $`\dot{G}/G`$ in Section III no longer applies to the epochs during a plateau period.
Given the required long plateau behavior, one may wonder if the modification of the model proposed in Sections II-III loses its ground. No need is present to distinguish CFs from each other during a plateau behavior. The argument still holds true, however, for epochs much earlier than nucleosynthesis. We must avoid concluding a contracting universe, for example, at very early epochs .
Suppose we are really witnessing an extra acceleration of the universe. Suppose also this is due to a fixed constant $`\mathrm{\Lambda }`$. It then follows that we are in a truly remarkable, special once-for-all era throughout the whole history of the universe. Setting aside a special version of the Anthropic Principle, this should be too arrogant a view, a way of representing the coincidence problem. From the point of view of the two-scalar model, on the other hand, we are looking at only one of the repeated events. We still face the coincidence problem, but certainly to a lesser degree.
We can show that the analytic solution (9)-(16), supplemented with $`\mathrm{\Phi }=0`$, is still an asymptotic solution. Before reaching this fixed-point attractor, however, there can be many different types of transient behaviors, depending sensitively on the initial values. In particular, we have many different possibilities with respect to how many plateau behaviors we experience before the solution finally enters the asymptotic phase. This number, and hence the qualitative character of the solution, changes suddenly as an initial value changes smoothly. This reminds us of the well-known chaotic behavior in nonlinear dynamics. We remind readers that two scalar fields have four degrees of freedom, beyond the minimum of three required for a system to possibly be chaotic . The presence of a fixed-point attractor rather than a strange attractor in phase space, however, implies that the behavior is not genuinely chaotic, but only chaos-like. The dependence on initial values and parameters is still so sensitive that the system seems to deserve further scrutiny. It may provide a new example of dissipative structure.
In this sense we are leaving the traditional view that the universe we see now should depend as little as possible on initial values. Nothing seems seriously wrong if, on the contrary, today’s universe is, like many other phenomena in nature around us, still in a transient state of evolution depending strongly on what the universe was like at very early times .
We finally add a comment on a possible extension of the model. (After completing the manuscript we learned about other detailed studies on the extended models .) The starting Lagrangian in J frame may be thought of as the theoretical one obtained by the process of dimensional reduction from a still more fundamental Lagrangian, perhaps in higher dimensions. It is then likely that $`\mathrm{\Lambda }`$ is multiplied by a monomial of $`\varphi `$. This provides a possible extension of the simplest model considered so far. As we find, we still reach an exponential potential, with $`\zeta `$ in (8) replaced by another factor depending on the exponent of the monomial. This suggests that the analyses carried out here can be applied to more general models.
References
1. For the most recent result, see, for example, A.G. Riess et al., Astron. J. 116 (1998), 1009: S. Perlmutter et al., Astroph. J. 517 (1999), 565.
2. P. Jordan, Z. Phys. 157 (1959), 112.
3. C. Brans and R.H. Dicke, Phys. Rev. 124 (1961), 925.
4. P.J.E. Peebles and B. Ratra, Astrophys. J. 325 (1988), L17; B. Ratra and P.J.E. Peebles, Phys. Rev. D37 (1988), 3406; T. Damour and K. Nordtvedt, Phys. Rev. D48 (1993), 3436; D.I. Santiago, D. Kallingas and R.V. Wagoner, Phys. Rev. D58 (1998), 124005.
5. T. Chiba, N. Sugiyama and T. Nakamura, Mon. Not. R. Astron. Soc. 289 (1997), L5.
6. R.R. Caldwell, R. Dave and P.J. Steinhardt, Phys. Rev. Lett. 80 (1998), 1582.
7. J.A. Frieman and I. Waga, Phys. Rev. D57 (1998), 4642; P. Ferreira and M. Joyce, Phys. Rev. D58 (1998), 023503; A.R. Liddle and R.J. Scherrer, Phys. Rev. D59 (1999), 023509; W. Hu, D.J. Eisenstein, M. Tegmark and M. White, Phys. Rev. D59 (1999), 023512; G. Huey, L. Wang, R.R. Caldwell and P.J. Steinhardt, Phys. Rev. D59 (1999), 063005; P.J. Steinhardt, L. Wang and I. Zlatev, Phys. Rev. D59(1999), 123504; F. Perrotta, C. Baccigalup and S. Matarrese, Phys. Rev. D61 (2000), 023507; M. Bento and O. Bertolami, Gen. Rel. Grav. 31 (1999), 1461; F. Rosati, hep-ph/9906427.
8. S. Carroll, Phys. Rev. Lett. 81 (1998), 3067; T. Chiba, Phys. Rev. D60 (1999), 083508; N. Bartolo and M. Pietroni, Phys. Rev. D61 (2000), 203518.
9. See, for example, S. Weinberg, Rev. Mod. Phys. 61(1989), 1.
10. A.D. Dolgov, The very early universe, Proceedings of Nuffield Workshop, 1982, Cambridge University Press, 1982; Y. Fujii and T. Nishioka, Phys. Rev. D42 (1990), 361.
11. Y. Fujii, Prog. Theor. Phys. 99 (1998), 599.
12. Y. Fujii, Fundamental parameters in cosmology, Proceedings of the XXXIIIrd Rencontres de Moriond, Editions Frontieres, 1998, p. 93, gr-qc/9806089.
13. Y. Fujii, A two-scalar model for a small but nonzero cosmological constant, gr-qc/9908021.
14. Y. Fujii, Quintessence, scalar-tensor theories and non-Newtonian gravity, gr-qc/9911064.
15. R.H. Dicke, Phys. Rev. 125 (1962), 2163.
16. E. Fischbach and C.L. Talmadge, The search for non-Newtonian gravity, AIP Press-Springer, 1998.
17. M.P. Locher, Nucl. Phys. A527 (1991), 73c.
18. R.W. Hellings et al., Phys. Rev. Lett. 51 (1983), 1609.
19. See also R.D. Peccei, J. Sola and C. Wetterich, Phys. Lett. B195 (1987), 183.
20. Y. Fujii, Nature Phys. Sci. 234 (1971), 5: Int. J. Mod. Phys. A6 (1991), 3505.
21. See, for example, S.V. Ketov, Fortsch. Phys. 45 (1997), 237.
22. Y. Fujii and T. Nishioka, Phys. Lett. B254 (1991), 347; Y. Fujii, Astropart. Phys. 5 (1996), 133.
23. See, for example, Y. Fujii, M. Omote and T. Nishioka, Prog. Theor. Phys. 92 (1992), 521; Y. Fujii, A. Iwamoto, T. Fukahori, T. Ohnuki, M. Nakagawa, H. Hidaka, Y. Oura and P. Möller, The nuclear interaction at Oklo 2 billion years ago, to be published in Nucl. Phys. B, hep-ph/9809549v2, and papers cited therein.
24. See, for example, J.N. Cornish and J. Levin, Phys. Rev. D53 (1996), 3022; R. Easther and K. Maeda, Class. Quant. Grav. 16 (1999), 1637.
25. L. Amendola, Phys. Rev. D60(1999), 043501; J.P. Uzan, Phys. Rev. D59(1999), 123510; F. Perrotta, C. Baccigalup and S. Matarrese, Phys. Rev. D61(2000), 023507; O. Bertolami and P.J. Martins, gr-qc/9910056. |
# The Off State of GX 339–4
## 1 Introduction
The black hole candidate GX 339–4 was discovered by Markert et al. (1973) with the OSO–7 satellite and was soon noted for its similarity in X-rays to the classical black hole candidate Cyg X–1 (Markert et al. 1973; Maejima et al. 1984; Dolan et al. 1987). The source exhibits aperiodic and quasi-periodic modulations on time scales spanning from milliseconds to years over a wide range of wavelengths. It spends most of the time in the so-called X-ray low state (LS) which has a power-law spectrum with spectral index $`\alpha \approx 1.5`$–2 (Ricketts 1983; Maejima et al. 1984) and strong (30–40% rms) band-limited noise (Nowak et al. 1999; Belloni et al. 1999). In the high state (HS), it becomes brighter (in the 2–10 keV band) and exhibits an ultra-soft spectral component plus a steeper power-law (Maejima et al. 1984; Belloni et al. 1999), while the temporal variability is only a few percent rms (Grebenev et al. 1993; Belloni et al. 1999). It also shows a very high state (VHS; Miyamoto et al. 1991) with broad band noise of 1–15% rms and 3–10 Hz quasi-periodic oscillations (QPOs) seen in its fast time variability, but with a higher X-ray luminosity than in the HS. Recently, an intermediate state (IS) was reported by Méndez and van der Klis (1997) and its spectral and timing properties are similar to the VHS but with a much lower luminosity. Finally, an ‘off’ state has also been reported (see Markert et al. 1973; Motch et al. 1985; Ilovaisky et al. 1986; Asai et al. 1998), in which the X-ray fast time variability is consistent with that seen in the LS (Méndez & van der Klis 1997) while the energy spectrum (power law with $`\alpha `$ of 1.5–2) is similar to the LS but with a 2–10 keV flux which is $`\sim 10`$ times lower, or even fainter than in the LS. A summary of the different states and their properties is given in Table 1.
The optical counterpart of GX 339–4 was identified by Doxsey et al. (1979) as a $`V\approx 18`$ blue star, but subsequent observations showed that it exhibited a wide range of variability, from $`V=15.4`$ to 20.2 (Motch et al. 1985; Corbet et al. 1987), in its X-ray LS and ‘off’ state, while $`V=16`$–18 (Motch et al. 1985) in the X-ray HS. Simultaneous optical/X-ray observations also showed a remarkable anti-correlation in the optical and soft X-ray (3–6 keV) fluxes during a transition from X-ray LS to HS (Motch et al. 1985), the cause of which is unknown. However, Ilovaisky et al. (1986) showed that there are times when the optical flux can be correlated with the X-ray luminosity. A possible orbital period of 14.8 hr from optical photometry was reported by Callanan et al. (1992). At present, there is no dynamical mass estimate available for the compact object (which would establish the black-hole nature of the compact object), since there has not yet been a spectroscopic detection of the mass-losing star.
In this Letter, we report on recent BeppoSAX and optical observations of GX 339–4 during its current X-ray ‘off’ state and compare these data with black hole soft X-ray transients (BHSXTs) in quiescence.
## 2 Observations and Data Reductions
### 2.1 RXTE/ASM
The All Sky Monitor (ASM; Levine et al. 1996) on board the Rossi X-ray Timing Explorer (RXTE; Bradt et al. 1993) has monitored GX 339–4 several times daily in its 2–12 keV pass-band since February 1996. The source remained at a low flux level ($`\sim 2`$ ASM cts/s) until early January 1998 ($`\sim `$ MJD 50810), although some variations were seen (see Fig. 1). Pointed Proportional Counter Array (PCA) observations during this period indicate that it is in the LS (Wilms et al. 1999). After MJD 50800, the source flux increased dramatically to $`\sim 20`$ ASM cts/s, where it stayed for nearly 200 days before declining. Belloni et al. (1999) reported that the source underwent a LS to HS transition, probably through an IS ($`\sim `$ MJD 50820). The source changed back to the LS again in February 1999 ($`\sim `$ MJD 51200), as indicated by a sharp increase in the hard X-rays (BATSE: 20–100 keV) and radio emission (Fender et al. 1999). Note that the ASM hardness ratio (5–12/1.3–3 keV) rises significantly (see Fig. 1, lower panel) when the source changes from HS to LS. After June 1999 ($`\sim `$ MJD 51330), the ASM count rate dropped further and the source intensity fell below the 3-$`\sigma `$ detection level (see Fig. 1). This is a strong indication that the source entered a so-called ‘off’ state at that time.
### 2.2 BeppoSAX NFI
We observed GX 339–4 with the Narrow Field Instruments (NFI) on board BeppoSAX between August 13.5 and 14.1, 1999 UT (marked in Fig. 1). The NFI consist of two co-aligned imaging instruments providing a field of view of $`37^{\prime }\times 57^{\prime }`$: the Low-Energy Concentrator Spectrometer (LECS; 0.1–10 keV; Parmar et al. 1997) and the Medium Energy Concentrator Spectrometer (MECS; 1.6–10.5 keV; Boella et al. 1997). The other two NFI, non-imaging instruments are the Phoswich Detector System (PDS; 12–300 keV; Frontera et al. 1997) and the High-Pressure Gas Scintillation Proportional Counter (HP–GSPC; 4–120 keV; Manzo et al. 1997).
During our observations, the HP–GSPC was turned off due to its recent anomalous behaviour (see the news web page of the BeppoSAX SDC, http://www.sdc.asi.it/latestnews.html). The net exposure times are 12.8 ks for the LECS, 24.6 ks for the MECS and 11.1 ks for the PDS. The J2000 coordinates of the source derived from the MECS data are R.A. = 17h 02m 48s, Dec. = −48° 47′ 37.5″, with a 90% confidence uncertainty radius of 56″ (see Fiore et al. 1999), which is consistent with the position of GX 339–4. We confirm that there is no other X-ray source in the field of view which could potentially contaminate our target. We applied an extraction radius of $`4^{\prime }`$ centred on the source position for both LECS and MECS images so as to obtain the source lightcurves and spectra. The MECS background was extracted by using long archival exposures on empty sky fields. We also checked the background of the source-free regions in the image and it is similar to the empty sky fields. For the LECS spectrum, we extracted the background from two semi-annuli in the same field of view as the source (see Parmar et al. 1999 for the reduction procedure). Both the extracted spectra were rebinned by a factor of 3 so as to accumulate at least 20 photons per channel and to sample the spectral full-width at half-maximum resolution (Fiore et al. 1999). A systematic error of 1% was added to both LECS and MECS spectra to take account of the systematic uncertainties in the detector calibrations (Guainazzi et al. 1998). Data were selected in the energy ranges 0.8–4.0 keV (LECS), 1.8–10.5 keV (MECS) and 15–220 keV (PDS) to ensure a better instrumental calibration (Fiore et al. 1999). A normalization factor was included for the LECS and PDS relative to the MECS in order to correct for the NFIs’ flux intercalibration (see Fiore et al. 1999).
### 2.3 Optical
The optical counterpart of GX 339–4 (V821 Ara) was observed at the South African Astronomical Observatory (SAAO) using the 1.9-m telescope and the UCT-CCD fast photometer (O’Donoghue 1995) on 1999 August 10 (i.e. 3 days before the BeppoSAX observation reported here), when the soft X-ray (2–12 keV) flux was very low (see also Fig. 1). The observing conditions were generally good, with typical seeing $`\sim 1.5`$ arcsec. The exposure times on GX 339–4 were 240 s in the $`B`$-band and 60 s in the $`V`$-band. Debiasing and flat-fielding were performed with standard IRAF routines. Due to moderate crowding of the counterpart with a nearby but fainter neighbour, point spread function (PSF) fitting was employed in order to obtain good photometry with DAOPHOT II (Stetson 1987). The instrumental magnitude of our target star was then calibrated into the standard UBV system using a standard star of similar colour observed at approximately the same time. Since the standard star counts were determined using a large aperture, local standard stars in the target field were used to determine the offset between the PSF magnitudes and these aperture results.
## 3 Results
During both the BeppoSAX NFI and optical observations, the source is barely detected in the RXTE/ASM and the count rate often drops below the 3-$`\sigma `$ ASM detection limit. The lightcurve and hardness ratio shown in Fig. 1 suggest that the source transited from the LS to a lower flux level, presumably an ‘off’ state after MJD 51330. The BeppoSAX lightcurves in the various energy bands do not show any evidence for variability on timescales from 100s to 5000s (the 3-$`\sigma `$ upper limit on the semi-amplitude is 0.3%). We have also checked for the LS-like fast time ($`<`$ 100 s) variability as seen typically in the LS of black hole X-ray binaries (see e.g. van der Klis 1995), but low counting statistics prevented us from setting useful upper limits.
The broad-band (0.8–50 keV) spectrum of GX 339–4 from the LECS, MECS and PDS data is satisfactorily ($`\chi _\nu ^2=1.09`$ for 56 degrees of freedom (d.o.f.)) fitted by a single power-law plus absorption. The best fit spectral parameters are summarized in Table 2 and the spectrum is shown in Fig. 3. We do not see any significant Fe-K line emission between 6.4–6.7 keV, with a 90% confidence upper limit of $`\sim 600`$ eV on the equivalent width. We note that there is a residual in the LECS below 0.9 keV; this might be due to a very soft black-body component or line emission near the Fe-L complex (e.g. Vrtilek et al. 1988). We have also fitted the spectrum with single black-body and bremsstrahlung models, but they are unacceptable ($`\chi _\nu ^2>2`$).
In the optical, the source was seen at $`B=20.1\pm 0.1`$ and $`V=19.2\pm 0.1`$.
## 4 Discussion
Our X-ray (0.8–50 keV) and optical observations of GX 339–4 took place during a very low intensity (2–12 keV) X-ray state, presumably the X-ray ‘off’ state (see Fig. 1). Comparing with ASCA and RXTE observations obtained when the source was in a LS (Wilms et al. 1999; Belloni et al. 1999; see also Table 1), the spectral index and neutral hydrogen column are similar ($`\alpha \approx 1.6`$, $`N_H\approx 5\times 10^{21}\mathrm{cm}^{-2}`$), but our observed flux of $`2.2\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (2–10 keV) is much lower, by 2–3 orders of magnitude. This confirms that the source indeed changed to a very low luminosity state, as indicated by the RXTE/ASM data. Note that the observed column density is also consistent with that derived from optical reddening, i.e. $`N_H=(6.0\pm 0.6)\times 10^{21}\mathrm{cm}^{-2}`$ (see Zdziarski et al. 1998 for more detail). At a distance of 4 kpc, the observed soft X-ray (0.5–10 keV) luminosity is $`6.6\times 10^{33}\mathrm{erg}\mathrm{s}^{-1}`$. Ilovaisky et al. (1986) reported an ‘off’ state seen by EXOSAT with a luminosity of $`1.1\times 10^{35}\mathrm{erg}\mathrm{s}^{-1}`$ in the 0.5–10 keV band; our measurement is a factor of $`\sim 17`$ below that. We also note that an upper limit was obtained by ASCA in 1993 of $`5\times 10^{32}\mathrm{erg}\mathrm{s}^{-1}`$ (Asai et al. 1998), which suggests that the source was in the ‘off’ state as well. In addition, Nolan et al. (1982) observed the source in the 12–200 keV band and claimed that one of the observations was in the ‘off’ state, with a flux level of $`4.7\times 10^{-10}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the 20–50 keV band. This luminosity is only comparable with the recent RXTE/HEXTE observations (Wilms et al. 1999) in the LS, while our PDS observation indicates that the source was down to $`3.5\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the same energy band. Therefore, the Nolan et al. (1982) observation was actually not in the ‘off’ state. We have obtained the first firm detection of GX 339–4 at low intensity up to 50 keV.
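The flux-to-luminosity conversion here is the usual $`L=4\pi d^2F`$; a quick check of the arithmetic (ours, not part of the reduction pipeline):

```python
import numpy as np

kpc = 3.086e21            # cm
d = 4.0*kpc               # assumed distance to GX 339-4
F = 2.2e-12               # observed 2-10 keV flux, erg cm^-2 s^-1

print(4.0*np.pi*d**2*F)   # ~4.2e33 erg/s in 2-10 keV; the quoted
                          # 6.6e33 erg/s refers to the wider 0.5-10 keV band
```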
In the optical, our observed magnitudes of $`V=19.2`$ and $`B=20.1`$ are faint compared to the wide range of reported magnitudes: $`B\gtrsim 21`$ (Ilovaisky 1981) and $`V=15.4`$ (Motch et al. 1982). Our result is, however, comparable to the observations by Ilovaisky (1981) and Remillard and McClintock (1987), and this is the third time that the $`B`$ magnitude has been seen to fall to $`\sim 20`$. The observed colour, $`B-V`$, is 0.9, which is also consistent with previous observations in different X-ray states (Makishima et al. 1986; Ilovaisky et al. 1986; Corbet et al. 1987). We note that the optical and X-ray emission can be anti-correlated during state transitions (e.g. Motch et al. 1985; observations during the LS to HS transition). Our observations show that this may not be the case in the current ‘off’ state, since both the X-ray and optical emission are at very low luminosity, and it suggests they are correlated from the LS to the ‘off’ state.
More recently, Corbel et al. (1999) show that the X-ray and radio fluxes during the ‘off’ state and the LS are well correlated. GX 339–4 was detected at a level of $`0.27\pm 0.06`$ mJy (at 8640 MHz) with the Australia Telescope Compact Array (ATCA) on 1999 August 17 (MJD 51407) and at a comparable level two weeks later (Corbel et al. 1999). This is one of the lowest levels ever detected in the radio, but still considerably above the upper limits to the radio emission during the HS (see Fender et al. 1999). It is very similar to GS 2023+338, which still showed ongoing radio emission after the X-ray source had turned off (see Han & Hjellming 1992). Adding up all the results from the reported X-ray ‘off’ state and LS, and the behaviour at other wavelengths, it can be concluded that the ‘off’ state is indeed an extension of the LS, but at lower intensity. One might therefore expect rapid ($`<`$ 100 s) temporal variability in the ‘off’ state, but we were unable to verify this due to the low counting statistics. We note that Méndez & van der Klis (1997) derived a 3$`\sigma `$ upper limit on the variability in the 0.002–10 Hz range of 26% rms in their observations in the ‘off’ state.
Our observed luminosity in the 0.5–10 keV band is comparable to the BHSXT GS 2023+338 and 4U 1630–47 in quiescence (Asai et al. 1998; Menou et al. 1999; Parmar et al. 1997). The quiescent X-ray luminosities and X-ray/optical luminosity ratios of several BHSXTs are given in Table 3 for comparison. Note that the quiescent spectrum of A0620–00 can be fitted either by a power law or black-body model, presumably due to the narrow energy range of ROSAT. All the objects except 4U 1630–47 in Table 3 have firm detections such that the spectra can be determined. We note that BHSXTs can be fitted with a power law spectrum in general, while neutron star SXTs (NSSXTs) can be fitted by power law or black-body models (see Asai et al. 1998). Recently, Rutledge et al. (1999) fitted the spectra from NSSXTs with a hydrogen atmosphere model and found that the derived parameters (radius and kT) of A0620–00 and GS 2023+338 were different from those found for NSSXTs. Although the results are based on ROSAT data, Asai et al. (1998) show similar findings by re-analysing ASCA data. It suggests that the quiescent X-ray spectrum can provide additional information to distinguish between black holes and neutron stars. Significant X-ray variability in quiescence was observed in GS 2023+338 (Wagner et al. 1994), 4U 1630–47 (Parmar et al. 1997) and A0620–00 (Asai et al. 1998; Menou et al. 1999), while we have obtained a similar result for GX 339–4 (i.e. by comparing with Asai et al. 1998 and other ‘off’ state observations). This is a strong indication that the BHSXTs in quiescence are not totally turned off and that the ‘off’ state of GX 339–4 is an extended LS, as discussed above. GX 339–4 is similar to the quiescent state of BHSXTs, as will also be discussed below.
We convert the optical magnitude into an optical (300–700 nm) luminosity of $`2.4\times 10^{34}\mathrm{erg}\mathrm{s}^{-1}`$ (assuming $`A_V=3.6`$ and $`E_{B-V}=1.2`$; Zdziarski et al. 1998). The ratio of the soft X-ray (0.5–10 keV) and optical (300–700 nm) luminosities, $`L_X/L_{opt}`$, is $`\sim 0.27`$, which is higher than for other BHSXTs (see Table 3). This could be due to a somewhat higher X-ray luminosity for GX 339–4 (see Table 3). All these results resemble the quiescent state spectrum predicted by advection-dominated accretion flow models (ADAF; see Narayan & Yi 1995; Esin et al. 1997). In the current ADAF model for BHSXTs in quiescence, the viscously dissipated energy is stored in the gas and advected into the black hole, so as to account for the low luminosity of BHSXTs in quiescence. This model is in good agreement with the observed optical, UV and X-ray emission of systems such as GS 2023+338 and A0620–00 (Narayan et al. 1996; Narayan et al. 1997a). More recently, Fender et al. (1999) and Wilms et al. (1999) have applied ADAF models to explain the observed X-ray emission of GX 339–4 in the LS. Narayan et al. (1997b) point out that the ADAF luminosity depends significantly on the accretion rate. Therefore GX 339–4 maintains a certain accretion rate in the LS for most of the time, but the ‘off’ state reported here represents a sudden decrease in the accretion rate. It therefore suggests that the source is not totally turned off, as is observed.
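The quoted ratio follows directly from the two luminosities given above:

```python
L_X, L_opt = 6.6e33, 2.4e34   # 0.5-10 keV and 300-700 nm luminosities (from the text)
print(L_X/L_opt)              # ~0.27
```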
Following Narayan et al. (1997), the mass accretion rate of GX 339–4 in the ‘off’ state corresponds to $`\sim 10^{16}\mathrm{g}\mathrm{s}^{-1}`$. It is also consistent with the $`\dot{M}\sim 10^{15}`$–$`10^{16}\mathrm{g}\mathrm{s}^{-1}`$ predicted by binary-evolution models (King et al. 1996; Menou et al. 1999), assuming an orbital period of 14.8 h (Callanan et al. 1992) and a black hole mass of 5 $`M_{}`$ (Zdziarski et al. 1998). This accretion rate is very similar to that of GS 2023+338 (Narayan et al. 1997a) in quiescence. This relatively large accretion rate could be due to an evolved companion star, like GS 2023+338, which supports the possibility of an evolved subgiant in GX 339–4 (Callanan et al. 1992). More detailed ADAF modelling may be needed in order to study the nature of the ‘off’ state of GX 339–4 and make a direct comparison with BHSXTs in quiescence.
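A back-of-the-envelope way to see the size of this number: writing $`L=\eta \dot{M}c^2`$, the quoted $`\dot{M}`$ corresponds to a radiative efficiency $`\eta `$ of order $`10^{-3}`$, far below the canonical thin-disc value of 0.1, as expected for an ADAF. The sketch below (our own arithmetic) uses the band-limited 0.5–10 keV luminosity, so it is only indicative:

```python
c = 3e10                            # cm/s
L = 6.6e33                          # erg/s; band-limited, so only indicative
for eta in (0.1, 1e-3):             # thin-disc vs ADAF-like efficiency
    print(eta, L/(eta*c**2), 'g/s') # ~7e13 vs ~7e15 g/s
```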
Note that apart from the above, the observed range in $`V`$ magnitude of GX 339–4 is similar to the expected outburst amplitude for an SXT with an orbital period of 14.8 hr (Shahbaz & Kuulkers 1998). Together with the evidence for the similarity of the spectrum and luminosity, we suggest the ‘off’ state of GX 339–4 to be equivalent to BHSXTs in quiescence. The result is important since GX 339–4 undergoes state transitions on much shorter timescales, while most of the BHSXTs exhibit outbursts every 10–20 years. Finally, we note that we will need a good radial velocity measurement of GX 339–4 in its ‘off’ state when the contamination of the X-ray irradiated disc is at a minimum, so as to measure or constrain the black hole mass.
## Acknowledgments
We are grateful to the BeppoSAX SDC science team for their assistance in the data reduction. We thank Robert Rutledge and Lars Bildsten for constructive comments. EK thanks Rob Fender for stimulating discussions. This paper utilizes quick-look results provided by the ASM/RXTE team. AKHK is supported by a Hong Kong Oxford Scholarship.
# The Second Peak: The Dark-Energy Density and the Cosmic Microwave Background
## Abstract
Supernova evidence for a negative-pressure dark energy (e.g., cosmological constant or quintessence) that contributes a fraction $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7`$ of closure density has been bolstered by the discrepancy between the total density, $`\mathrm{\Omega }_{\mathrm{tot}}\simeq 1`$, suggested by the location of the first peak in the cosmic microwave background (CMB) power spectrum and the nonrelativistic-matter density $`\mathrm{\Omega }_m\simeq 0.3`$ obtained from dynamical measurements. Here we show that the impending identification of the location of the second peak in the CMB power spectrum will provide an immediate and independent probe of the dark-energy density. As an aside, we show how the measured height of the first peak probably already points toward a low matter density and places upper limits to the reionization optical depth and gravitational-wave amplitude.
A “cosmic-concordance” model now seems to be falling into place . The central and most intriguing feature of this model is a negative-pressure dark energy (e.g., cosmological constant or quintessence) that contributes a fraction $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7`$ of closure density. Supernova evidence for this dark energy has been bolstered by the discrepancy between the total density, $`\mathrm{\Omega }_{\mathrm{tot}}\equiv \mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }\simeq 1`$, suggested by the location of the first peak in the cosmic microwave background (CMB) power spectrum, and the nonrelativistic-matter density, $`\mathrm{\Omega }_m\simeq 0.3`$, obtained from dynamical measurements.
This dark energy has implications of the utmost importance not only for cosmology, but for fundamental physics as well. It can be viewed equivalently/alternatively as a correction to general relativity or as some new exotic form of matter. It would have significant implications for the evolution of large-scale structure in the Universe, for particle theory, and possibly for quantum gravity. Theorists have expanded the realm of possibilities for this dark energy from a simple cosmological constant to quintessence, a variable cosmological constant driven by the rolling of some new scalar field . Given the extraordinary ramifications, it is crucial to test for a nonzero dark-energy density as thoroughly as possible. There are already several promising possibilities; e.g., statistics of gravitational-lens systems , the Alcock-Paczyński test , and cross-correlation of the CMB with some tracer of the density at lower redshifts .
The purpose of this article is to show that impending measurements of the location of the second peak in the CMB power spectrum will provide an additional and independent probe of the dark-energy density. We argue that the location of the second peak depends primarily on the matter density and on the Hubble constant ($`h`$ in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$), and plot contours of the second-peak location in the $`\mathrm{\Omega }_m`$-$`h`$ parameter space. If the Hubble constant is fixed by independent observations (e.g., from the Hubble Space Telescope [HST]), then the second-peak location determines the matter density, or equivalently, the dark-energy density. As an aside, we also illustrate how recent measurements of the height of the first peak may already be pointing to a low value of $`\mathrm{\Omega }_m`$.
The aim of CMB mapping experiments is to measure the temperature $`T(\widehat{n})`$ as a function of position $`\widehat{n}`$ on the sky . The temperature can then be expanded in spherical harmonics, $`a_{lm}=\int d\widehat{n}\,Y_{lm}^{*}(\widehat{n})T(\widehat{n})`$, and rotationally invariant multipole moments (the “power spectrum”), $`C_l=\sum _m|a_{lm}|^2/(2l+1)`$, can be constructed. Given the values of several cosmological parameters, predictions for the power spectrum can be made; Fig. 1 shows a few models. The peak structure is due to oscillations in the primordial plasma .
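In practice this expansion is a one-liner with modern software; a minimal sketch using the publicly available healpy package on a simulated map illustrates the two definitions:

```python
import numpy as np
import healpy as hp

nside = 64
T = np.random.standard_normal(hp.nside2npix(nside))  # toy temperature map

alm = hp.map2alm(T)   # a_lm coefficients of the map
cl = hp.alm2cl(alm)   # C_l = sum_m |a_lm|^2/(2l+1)
print(cl[:5])
```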
The location in $`l`$ of the first peak depends strongly on $`\mathrm{\Omega }_{\mathrm{tot}}`$ and only very weakly on the values of other cosmological parameters, and so it provides a robust indicator of the geometry of the Universe . A compilation of data from a number of recent experiments indicates a peak near $`l\simeq 200`$ , and data from the test flight of BOOMERANG clearly shows a peak at this location. We thus assume that, as argued by Dodelson and Knox , the verdict is in: the Universe is flat. (Strictly speaking, such a peak location could be fit in an open or closed Universe with combinations of very strange values for other cosmological parameters ; we regard this as a mathematical possibility, but a physical improbability.)
If the geometry is fixed, the location of the second peak in the CMB power spectrum depends primarily (though not entirely) on the expansion rate of the Universe at the epoch of recombination , and this depends on the nonrelativistic-matter density and the Hubble constant. In principle, variations in several other parameters can change the precise location of the second peak. However, the second-peak location shifts very little as each of these uncertain parameters is allowed to vary within its acceptable range. For example, if measurements of the deuterium abundance fix $`\mathrm{\Omega }_bh^2=0.019\pm 0.001`$ , as Tytler asserts, then allowable shifts in the baryon-to-photon ratio produce negligible shifts in the location of the second peak. To be safe, we show results below for the more conservative range, $`0.015<\mathrm{\Omega }_bh^2<0.023`$, advocated by Olive, Steigman, and Walker . The spectral index $`n`$ of primordial density perturbations changes the amplitudes of the peaks, but allowable variations in $`n`$ ($`\pm 0.3`$ ) lead to even smaller uncertainties in the second-peak location than those from uncertainty in the baryon density. Moreover, a more recent analysis that includes constraints from degree-scale CMB anisotropies and large-scale structure finds that the allowed range for $`n`$ is much tighter—within 5% of unity . Reionization may reduce the amplitudes of all the peaks, but it will not strongly affect their locations, and the same is true of gravitational waves. Plausible neutrino masses would have a negligible effect on the peak locations . Higher-order effects, such as weak gravitational lensing , the Rees-Sciama effect , or unsubtracted foregrounds would primarily affect the heights or shapes of the peaks but leave their locations intact. The second-peak location is similarly insensitive to whether the dark energy is a cosmological constant or quintessence . We also expect magnetic fields to have no more than a small effect on the peak location .
Fig. 2 shows contours of $`l_2`$, the location of the second peak, in the two-dimensional parameter space ($`\mathrm{\Omega }_m`$,$`h`$) in which it varies most strongly. Results are shown for the allowable range of $`\mathrm{\Omega }_bh^2`$. Had we included contours for $`n=0.8`$ and $`n=1.2`$, they would have fallen very well within the range spanned by the allowed values of the baryon-to-photon ratio.
The location of the second peak picks out a specific contour in the $`\mathrm{\Omega }_m`$-$`h`$ plane. When combined with the range, $`0.6<h<0.8`$ , allowed by HST, determination of the second-peak location will provide a constraint to the matter density. For example, if the second peak turns out to be located at $`l_2\simeq 625`$, then it will suggest $`\mathrm{\Omega }_m\simeq 0.2`$ (for the entire allowable range for $`\mathrm{\Omega }_bh^2`$). A value $`l_2\simeq 550`$ will allow $`0.1\lesssim \mathrm{\Omega }_m\lesssim 0.4`$ (likewise, for the allowed range of $`\mathrm{\Omega }_bh^2`$). If it turns out that $`l_2\simeq 525`$, then a broader range of larger values, $`0.2\lesssim \mathrm{\Omega }_m\lesssim 0.6`$, will be allowed. Smaller values of $`l_2`$ allow larger values of $`\mathrm{\Omega }_m`$, and they are also less constraining. If $`\mathrm{\Omega }_m\lesssim 0.4`$, as suggested by supernova data , then the second peak must be at $`l_2>475`$.
The contours in Fig. 2 show that $`l_2`$ jumps to very large values for large $`\mathrm{\Omega }_m`$ and large $`h`$ (the upper right-hand corner). In this region of parameter space, the amplitude of the second peak actually becomes so small that the second peak disappears, and the de facto second peak is what would have otherwise been the third peak. To illustrate, the long-dash (magenta) curve in Fig. 1 shows the $`C_l`$ for $`\mathrm{\Omega }_m=0.8`$, $`h=0.9`$, and $`\mathrm{\Omega }_bh^2=0.015`$. The region of the $`\mathrm{\Omega }_m`$-$`h`$ parameter space in which this confusion between the second and third peaks arises conflicts with the age of the Universe, the shape of the power spectrum, and as discussed below, the amplitude of the first peak. We therefore dwell no further on this possibility.
There are some caveats we should make. The allowed range of $`\mathrm{\Omega }_m`$ for any given value of $`l_2`$ can be broadened if a larger range of values for the Hubble constant is allowed. So, for example, if the second peak is found to be at $`l_2=525`$, it will be possible that $`\mathrm{\Omega }_m=1`$, but only if the Hubble constant is $`h=0.5`$, considerably lower than the HST value. (This could be tested further with the third peak, as indicated in Fig. 1.) A smaller baryon density would shift the second peak to larger values of $`l`$ and thus allow slightly larger values of $`\mathrm{\Omega }_m`$ for fixed $`l_2`$. However, such small values of the baryon density would conflict not only with Tytler’s results, but would be additionally discordant with baryon abundances in x-ray clusters. Some combination of other effects (e.g., the optical depth, primordial spectrum, neutrino masses, recombination history, etc.) could move the peak, but such a conspiracy seems unlikely. Thus, the weakest link in the relation between $`l_2`$ and $`\mathrm{\Omega }_m`$ is probably uncertainty in the Hubble constant, as indicated in Fig. 2. Even if independent measurements of the Hubble constant are discarded, the location of the second peak will provide a useful constraint to the $`\mathrm{\Omega }_m`$-$`h`$ parameter space.
The main focus here is on the location of the second peak. However, it is easy and important to see that the observed height of the first peak already points toward a low density if primordial perturbations have a flat scale-invariant (i.e., $`n=1`$) spectrum. It is natural to expect that at least some small fraction $`\tau `$ of CMB photons re-scattered from reionized electrons after the nominal surface of last scatter at redshift $`z\simeq 1100`$. If so, then the amplitude of the peaks in the power spectrum will be suppressed by a factor $`e^{-\tau }`$. Fig. 3 shows contours of the optical depth $`\tau `$ inferred by comparing the predicted height of the first peak with the measured value of $`70`$ $`\mu `$K for the allowable range of $`\mathrm{\Omega }_bh^2`$, and for a flat (i.e., $`n=1`$) primordial spectrum and for an $`n=1.2`$ primordial spectrum. Since $`\tau <0`$ is impossible, those regions of parameter space in which $`\tau <0`$ is inferred are ruled out. If primordial perturbations have a flat spectrum, then the amplitude of the first peak thus rules out a considerable portion of the high-$`\mathrm{\Omega }_m`$–high-$`h`$ parameter space. Moreover, notice that a shift of 0.2 in $`n`$ is roughly equivalent to a shift of about 0.25 in $`\tau `$. Thus, constraints to the $`\mathrm{\Omega }_m`$-$`h`$ parameter space from the height of the first peak can be relaxed if the spectral index $`n`$ is raised a bit, while models with $`n\lesssim 0.8`$ are likely inconsistent, as they would require a negative optical depth over virtually the entire plausible range of $`\mathrm{\Omega }_m`$ and $`h`$. The constraints may also be relaxed if the actual amplitude is a bit different from the value, 70 $`\mu `$K, used here, as may be allowed by reasonable calibration and/or statistical uncertainties.
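The inference of $`\tau `$ from the first-peak height amounts to a one-line inversion of the $`e^{-\tau }`$ suppression factor. In the sketch below the no-reionization model amplitude is a hypothetical placeholder, while the 70 $`\mu `$K measured value is taken from the text.

```python
import math

A_measured = 70.0   # measured first-peak height in micro-K (from the text)
A_model = 85.0      # hypothetical model prediction with tau = 0, in micro-K

# The peak amplitude is suppressed by exp(-tau), so tau = ln(A_model / A_measured).
tau = math.log(A_model / A_measured)
print(f"inferred optical depth tau = {tau:.2f}")  # a negative value would rule the model out
```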
It is also interesting to note that in currently favored models (i.e., cosmic-concordance models with $`n\simeq 1`$), the optical depth to the surface of last scatter cannot be too large, $`\tau \lesssim 0.2`$. A stochastic gravitational-wave background could mimic the effect of reionization by supplying power on large angular scales at which the CMB power spectrum is normalized to the COBE amplitude. Thus, the upper limits to $`\tau `$ can be translated directly into upper limits to the gravitational-wave amplitude $`𝒯`$ (see for a precise definition) by identifying $`e^{-2\tau }`$ with $`𝒮/(𝒯+𝒮)`$. Doing so, the nominal limit $`\tau \lesssim 0.2`$ suggests that no more than one-third of the large-angle power in the CMB is due to gravitational waves, and this improves slightly the limit to the gravitational-wave amplitude from COBE .
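The one-third figure follows directly from the identification quoted above; a minimal check:

```python
import math

tau_max = 0.2  # nominal upper limit on the optical depth

# Identifying exp(-2*tau) with S/(T+S), the tensor (gravitational-wave)
# fraction of the large-angle power is T/(T+S) = 1 - exp(-2*tau).
gw_fraction = 1.0 - math.exp(-2.0 * tau_max)
print(f"maximum gravitational-wave fraction = {gw_fraction:.2f}")  # ~0.33
```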
It has long been appreciated that the richness of the peak structure in the CMB power spectrum will eventually allow simultaneous determination of a number of cosmological parameters when the CMB power spectrum is measured with sufficient precision. However, it has also been repeatedly emphasized that a strong degeneracy in the $`(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda },h)`$ parameter space exists (e.g., ), as indicated, for example, by the elongation of the error ellipses forecast for MAP and Planck along the $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }`$ line in the $`\mathrm{\Omega }_m`$-$`\mathrm{\Omega }_\mathrm{\Lambda }`$ parameter space (e.g., Fig. 2 in Ref. ). In this paper, we have noted that by implementing recent measurements of the geometry, baryon density, and especially the Hubble constant, we can break this degeneracy and thus link the location of the second peak fairly robustly to the cosmological constant. (The value of $`h`$ changes considerably as one goes from one end of the aforementioned CMB ellipse in the $`\mathrm{\Omega }_m`$-$`\mathrm{\Omega }_\mathrm{\Lambda }`$ parameter space to the other.) This observation is additionally noteworthy given the accumulation of independent evidence for some sort of dark energy, the identification of the first peak, and the approaching discovery of the second peak. Thus, by visual inspection alone, we may be able to learn something significant about the cosmological constant once the second peak is identified. Of course, mapping the second peak is also of the utmost importance as it will provide additional confirmation of the paradigm of structure formation from primordial adiabatic perturbations that underlies the entire analysis. We thus eagerly await the discovery of the second peak.
We thank P. Ullio for useful comments. We used CMBFAST to calculate the power spectra. This work was supported in part by the DoE, NSF, and NASA. AB also acknowledges the support of a Lee A. DuBridge Fellowship.
# A semi–classical over–barrier model for charge exchange between highly charged ions and one–optical electron atoms
## I Introduction
The electron capture processes in collisions of slow, highly charged ions with neutral atoms and molecules are of great importance not only in basic atomic physics but also in applied fields such as fusion plasmas and astrophysics.
In past years a number of measurements have been carried out on collisions between highly charged ions and rare gases or molecules , in which one or several electrons were transferred from the neutral target to the charged projectile:
$$A^{+q}+B\to A^{(q-j)+}+B^{j+}.$$
(1)
Their results, together with those from a number of other laboratories, yielded a curve which can be fitted by a single scaling law (a linear relationship) when plotting the cross section $`\sigma `$ versus the projectile charge $`q`$: it is almost independent of the projectile species and of the impact velocity $`v`$ (at least in the low–speed range $`v<1`$ au). When one extends experiments to different target species, the same linear relation holds between $`\sigma `$ and $`q/I_t^2`$, with $`I_t`$ the ionization potential of the target .
It is found that this scaling law could be predicted, in the limit of very high projectile charge, by a modification of an extended classical over–barrier model (ECBM), allowing for multiple electron capture, proposed by Niehaus . Quite recently a confirmation of this scaling has come from a sophisticated quantum–mechanical calculation .
Similar experiments were carried out more recently for collisions between ions and alkali atoms . The results show that the linear trend is roughly satisfied, but the slope of the straight line is grossly overestimated by the ECBM: in Fig. 1 we show some data points (stars with error bars) together with the analytical curve from the ECBM (dashed curve) which, for one–electron atoms, is written
$$\sigma =2.6\times 10^3q/I_t^2[10^{-20}\mathrm{m}^2]$$
(2)
($`I_t`$ in eV). It should be noticed that the experimental data are instead well fitted by the results of a Classical Trajectory Monte Carlo (CTMC) code .
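As a quick numerical check of the scaling law (2), using only the quantities quoted in the text:

```python
def sigma_ecbm(q, I_t_eV):
    """ECBM scaling of Eq. (2): cross section in units of 1e-20 m^2."""
    return 2.6e3 * q / I_t_eV ** 2

# Example: a q = 20 projectile on Cs (I_t = 3.9 eV).
print(sigma_ecbm(20, 3.9))  # ~3.4e3, i.e. ~3.4e-17 m^2
```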
The ECBM of ref. works in a simplified one-dimensional geometry where the only physically meaningful spatial dimension is along the internuclear axis. It does not take into account the fact that the electrons move in three-dimensional space. This means that only a fraction of the electrons can actually fulfil the conditions dictated by the model. For rare gases and molecules, which have a large number of active electrons, this may not be a problem (i.e., there are nearly always one or more electrons which can participate in the collision). For alkali atoms with only one active electron, on the other hand, an overestimate of the capture probability by OBMs could be foreseen.
With present–day supercomputers there are relatively few difficulties in computing cross sections from numerical integration of the time-dependent Schrödinger equation (e.g. refer to ref. ). Notwithstanding this, simple models are still valuable since they provide analytical estimates which are easy to adapt to particular cases, and give physical insight into the features of the problem. For this reason new models are still being developed .
In this paper we present a modified OBM which yields better agreement with the experimental data of ref. .
## II The model
We start from the same approach as Ostrovsky (see also ): let r be the position vector of the electron relative to the neutral atom (T) and R the internuclear vector between T and the projectile P (see Fig. 2 for a picture of the geometry: it is an adaptation of Figure 1 from ref. ). Let us consider the plane containing the electron, P and T, and use cylindrical polar coordinates $`(\rho ,z,\varphi )`$ to describe the position of the electron within this plane. We can choose the angle $`\varphi =0`$ and the $`z`$ direction along the internuclear axis. We will assume that the target atom can be described as a hydrogen-like atom, which is not a bad approximation when dealing with alkali atoms.
The total energy of the electron is
$$E=\frac{p^2}{2}+U=\frac{p^2}{2}-\frac{Z_t}{\sqrt{\rho ^2+z^2}}-\frac{Z_p}{\sqrt{\rho ^2+(R-z)^2}}.$$
(3)
$`Z_p`$ and $`Z_t`$ are the charge of the projectile and the effective charge of the target seen by the electron, respectively, and we are using atomic units.
We can also approximate $`E`$ as
$$E(R)=-E_n-\frac{Z_p}{R}.$$
(4)
$`E_n`$ is given by the quantum–mechanical value: $`E_n=Z_t^2/(2n^2)`$. This expression is asymptotically correct as $`R\to \infty `$.
On the plane (e, P, T) we can draw a section of the equipotential surface
$$U(z,\rho ,R)=-E_n-\frac{Z_p}{R}.$$
(5)
This represents the limit of the region classically allowed to the electron. When $`R\to \infty `$ this region decomposes into two disconnected circles centered on each of the two nuclei. The initial conditions determine which of the two regions the electron actually occupies.
As $`R`$ diminishes there may eventually come a time when the two regions become connected. It is easy to solve eq. (5) for $`R`$ by imposing that $`\rho _m=0`$ and that there must be a unique solution for $`z`$ with $`0<z<R`$:
$$R_m=\frac{Z_t+2\sqrt{Z_tZ_p}}{E_n}.$$
(6)
In the spirit of OBMs it is the opening of the equipotential curve between P and T which leads to a leakage of electrons from one nucleus to the other, and therefore to charge exchange. Along the internuclear axis the potential $`U`$ has a maximum at
$$z=z_0=R\frac{\sqrt{Z_t}}{\sqrt{Z_p}+\sqrt{Z_t}}.$$
(7)
Whether the electron crosses this potential barrier depends upon its initial conditions. These are chosen from a statistical ensemble, which we will leave unspecified for the moment. Let $`N_\mathrm{\Omega }`$ be the fraction of trajectories which lead to electron loss at time $`t`$, and $`W(t)`$ the probability for the electron to be still bound to the target at time $`t`$. The fraction of losses in the interval $`(t,t+dt)`$ is given by
$$dW(t)=-N_\mathrm{\Omega }\frac{dt}{T_{em}}W(t),$$
(8)
with $`T_{em}`$ the period of the electron motion along its orbit. A simple integration yields the leakage probability
$$P_l=1-\mathrm{exp}\left(-\frac{1}{T_{em}}\int _{-\infty }^{+\infty }N_\mathrm{\Omega }\,dt\right).$$
(9)
In order to actually integrate Eq. (9) we need to know the collision trajectory; an unperturbed straight line with impact parameter $`b`$ is assumed:
$$R=\sqrt{b^2+(vt)^2}.$$
(10)
At this point it is necessary to give an explicit expression for $`N_\mathrm{\Omega }`$. The electron is supposed to be in the ground state ($`n=1,l=m=0`$), so $`T_{em}`$ becomes
$$T_{em}=2\pi /Z_t^3.$$
(11)
Ref. adopts geometrical reasoning: the classical electron trajectories, with zero angular momentum, are ellipses squeezed onto the target nucleus. The only trajectories which are allowed to escape are those whose aphelia are directed towards the opening, within the angle $`\pm \theta _m`$. The integration over this angle yields an analytical expression for $`N_\mathrm{\Omega }`$ (Eq. 17 of ref. ). In Fig. 1 we show the results obtained using Ostrovsky’s model (dotted curve, eqns. 8 and 17 of ref. ). (Beware of a small difference in notation between the present paper and : here we use an effective charge for the target, $`Z_t=\sqrt{2E_n}`$, while uses an effective quantum number $`n_t=1/\sqrt{2E_n}`$ with the effective charge of the target set to 1.) Notice that, from direct inspection of the analytical formula, one sees that the scaling law is not exactly satisfied, at least at small values of the parameter $`q/I_t^2`$, and this is clearly visible in the plot. The result is almost identical to the scaling (2).
The present approach is based on the electron position rather than on the electron direction . The recipe used here is (I) to neglect the dependence on the angle: all electrons have the same probability of escaping, regardless of their initial phase. Instead, (II) the lost electrons are precisely those which, when picked from the statistical ensemble, are found farther from nucleus T than the distance $`z_0`$:
$$N_\mathrm{\Omega }=\int _{z_0}^{\infty }f(r)\,dr,$$
(12)
with $`f(r)`$ the electron distribution function.
There is no unique choice for $`f(r)`$: the (phase-space) microcanonical distribution
$$\stackrel{~}{f}(𝐫,𝐩)\propto \delta \left(E_n+\frac{p^2}{2}-\frac{Z_t}{r}\right)$$
(13)
($`\delta `$ is the Dirac delta) has often been used in the literature since the works , as it is known that, when integrated over the spatial coordinates, it reproduces the correct quantum–mechanical momentum distribution function for the electron in the ground state (more recently the same formalism has been extended to Rydberg atoms ). After integration over the momentum variables one gets instead the spatial distribution function
$$f_{mc}(r)=\frac{Z_t(2Z_t)^{3/2}}{\pi }r^2\sqrt{\frac{1}{r}-\frac{Z_t}{2}},\qquad r<2/Z_t$$
(14)
and zero elsewhere (the subscript “mc” emphasizes that it is obtained from the microcanonical distribution). However, this choice was found to give poor results. This could be expected from the fact that (14) does not extend beyond $`r=2/Z_t`$ and therefore misses all the large impact–parameter collisions. In the spirit of the present approach, it should instead be important to have an accurate representation of the spatial distribution. We therefore use for $`f(r)`$ the quantum mechanical formula for an electron in the ground state:
$$f_{1s}(r)=4Z_t^3r^2\mathrm{exp}\left(-2Z_tr\right)$$
(15)
which, when substituted in (12), gives
$$N_\mathrm{\Omega }=\left[1+2z_0Z_t+2(z_0Z_t)^2\right]\mathrm{exp}\left(-2z_0Z_t\right).$$
(16)
Since the choice for $`f(r)`$ does not derive from any classical consideration, we call this method a “semi–classical” OBM.
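The recipe above is easily turned into a numerical estimate. The sketch below evaluates $`P_l`$ of Eq. (9) along the trajectory (10), using $`z_0`$ from Eq. (7), $`N_\mathrm{\Omega }`$ from Eq. (16) and $`T_{em}`$ from Eq. (11). Two ingredients are our own assumptions, not spelled out in the text: the leakage is switched off while the barrier is still closed ($`R>R_m`$, Eq. (6)), and the cross section is obtained from the standard impact-parameter integral.

```python
import numpy as np

def leakage_probability(b, v, Zp, Zt, En, nt=4001):
    """P_l of Eq. (9); atomic units throughout."""
    T_em = 2.0 * np.pi / Zt**3                            # Eq. (11)
    R_m = (Zt + 2.0 * np.sqrt(Zt * Zp)) / En              # Eq. (6)
    t = np.linspace(-3.0 * R_m / v, 3.0 * R_m / v, nt)
    R = np.sqrt(b**2 + (v * t)**2)                        # Eq. (10)
    z0 = R * np.sqrt(Zt) / (np.sqrt(Zp) + np.sqrt(Zt))    # Eq. (7)
    x = 2.0 * Zt * z0
    N = (1.0 + x + 0.5 * x**2) * np.exp(-x)               # Eq. (16)
    N[R > R_m] = 0.0      # assumption: no leakage while the barrier is closed
    return 1.0 - np.exp(-np.sum(N) * (t[1] - t[0]) / T_em)

def cross_section(v, Zp, Zt, En, b_max=80.0, nb=400):
    """sigma = 2*pi * int b P_l(b) db (standard relation, not from the text)."""
    b = np.linspace(1e-3, b_max, nb)
    P = np.array([leakage_probability(bi, v, Zp, Zt, En) for bi in b])
    return 2.0 * np.pi * np.sum(b * P) * (b[1] - b[0])

# Example: I^{25+} on Cs; En = 3.9 eV in atomic units, Zt = sqrt(2*En),
# and v ~ 0.1 au roughly matches the 1.5*q keV impact energy quoted below.
En = 3.9 / 27.211
print(cross_section(v=0.1, Zp=25.0, Zt=np.sqrt(2.0 * En), En=En))  # in au of area
```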
Notice that, in principle, one could go further and compute $`f(r)`$ from a very accurate wavefunction obtained from quantum mechanical computations (see ), but this is beyond the purpose of the present paper (it is worth mentioning a number of other attempts at building stationary distributions $`f(r)`$, mainly in connection with CTMC studies, see ).
The $`f(r)`$ of Eq. (15) reproduces neither the correct momentum distribution nor the correct energy distribution (which could be obtained only by using eq. (13)). However, it is shown in that this choice gives an energy distribution for the electrons, $`f(E)`$, peaked around the correct value $`E_n`$, with $`<E>=E_n`$, where $`<\mathrm{}>`$ is the average over $`f(E)`$.
Some important remarks need to be made here. First of all, a question to be answered is: why use an unperturbed distribution, when the correct one should be significantly modified by the approach of the projectile? The answer is, obviously, that this choice allows the calculations to be performed analytically. We are doing here a sort of classical counterpart of a quantum–mechanical Born calculation: there, too, the matrix elements are computed as scalar products over unperturbed states, regardless of any perturbation induced by the projectile. In the following, however, some considerations about possible improvements over this simple approximation will be made.
A second question regards the meaning of the factor $`dt/T_{em}`$ in eq. (8): in Ostrovsky’s paper this is the fraction of electrons which enter the loss zone during the time interval $`dt`$, and it is valid under the hypothesis of a uniform distribution of initial phases of the electrons. In our case this assumption ceases to be valid: electrons actually spend different fractions of their time at different radial distances from T, depending on their energy. We will make a (hopefully not too severe) approximation by assuming that, on average, expression (8) still holds.
## III Results
### A Iodine - Cesium
This study has been prompted by the ion-atom experiments of : first of all, therefore, we apply the above model to the process of electron capture
$$\mathrm{I}^{q+}+\mathrm{Cs}\to \mathrm{I}^{(q-1)+}+\mathrm{Cs}^+$$
(17)
with $`q`$ ranging from 6 to 30. The impact energy is $`1.5\times q`$ keV . The ionization potential of Cesium is $`I_t=3.9`$ eV. The solid line in Fig. 1 is the result of the present model: the agreement is fairly good.
### B Bare ions - Hydrogen
As a second test, we have computed the cross sections for the capture reactions
$$\mathrm{H}+\mathrm{O}^{8+}\to \mathrm{H}^++\mathrm{O}^{7+}$$
(18)
and
$$\mathrm{H}+\mathrm{He}^{2+}\to \mathrm{H}^++\mathrm{He}^+$$
(19)
and compared them with similar calculations done using the molecular approach by Harel et al . The results are summarized in Fig. 3. There is a sharp discrepancy in the behaviour for $`v\to 0`$, where the present model predicts an increasing cross section. At very low speed it is the concept itself of an atomic distribution function which becomes questionable, and molecular aspects become important. Besides, quantum effects such as the discreteness of the energy levels also play a major role and are completely missed by this approach. In the higher velocity part, the present model underestimates the more accurate value by a factor of 2 for process (18), but the error is much smaller, just 25%, for process (19). These two ions have been chosen ad hoc: they correspond to values of the ratio $`Z_t/Z_p=1/8`$ and $`1/2`$ respectively. In the (I, Cs) test this ratio ranged from $`1/12`$ to $`1/60`$ depending upon the projectile charge. This means that in the former case the perturbation of the projectile on the electron distribution function is comparable to the (I, Cs) case, while in the latter it is much smaller. We expect the electron distribution function to be more and more perturbed as $`Z_t/Z_p\to 0`$.
## IV Summary and conclusions
We have developed in this paper a very simple OBM for charge exchange. It exploits some features of the quantum mechanical version of the problem, thus differing from similar models which are purely classical. The agreement with experiment is much better than that of previous calculations where a comparison could be made. It is far from excellent, but reasons for the (partial) failure have been suggested.
As it stands, the model is well suited to one-optical-electron atoms (since it uses hydrogen–like wavefunctions); we therefore expect that other classical OBMs can still work better for the many-electron targets studied in previous experiments.
Some improvements could be added to the present model: a possible line of investigation is coupling the present method with a very simplified quantum mechanical calculation of the evolution of the wavefunction. From this one should compute $`f`$ not as coming from a single state, but as a linear combination including also excited wavefunctions (the relative weights in the combination should be given by the quantum mechanical calculation). Work in this direction is currently underway.
## Acknowledgments
It is a pleasure to thank the staff at National Institute for Fusion Science (Nagoya), and in particular Prof. H. Tawara and Dr. K. Hosaka for providing the data of ref. and for useful discussions about the subject. The referees through their suggestions and criticism have made the manuscript readable.
# Low Frequency Insights Into Supernova Remnants
## 1. Introduction
In order to understand shocks in supernova remnants (SNR) we need to separate three issues: intrinsic properties of the explosions themselves, the character of the SNR environment, and observational constraints. In order to obtain fundamental facts about the explosion such as the age and energy released we must understand the structure of the circumstellar medium. In order to interpret observations we must understand observational limits imposed by the nature of single dish and interferometric observations.
Low frequency interferometric observations can help disentangle these three overlapping issues and will have the opportunity to contribute to three multifrequency issues: finding X-ray synchrotron emission, measuring spectral curvature predicted by particle theory, and clearing up uncertainties in observed spectral index variations.
## 2. Information from Total Flux Measurements
In order to understand supernovae (SN) and SNR, we must separate SNR from their environs. Since SNR are found preferentially near star forming regions, in the galactic plane, they are, not by chance, often in complex regions of the sky (for a convincing example consult the Effelsberg galactic plane surveys: Reich, Reich, & Fürst 1990, 1997).
Even something as simple as measuring the total flux from a SNR requires imaging to avoid confusing the emission from nearby sources. While interferometers can over-resolve the remnant, losing total flux information, fluxes obtained from single dish measurements often confuse the SNR with nearby objects.
An individual electron of energy $`E`$ radiates its peak synchrotron emission at frequency $`\nu \propto E^2B`$. Since frequency is proportional to the energy squared, we need the leverage of much wider frequency “baselines” to study subtle changes in the electron spectrum. Comparing observations from 6 to 20 cm spans only a range of 1.7 in energy, whereas 74 MHz to 4.6 GHz buys a factor in energy of 8.
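The leverage argument is simple arithmetic on the $`\nu \propto E^2B`$ relation. In the sketch below the band-edge frequencies assumed for 20 and 6 cm (1.5 and 5 GHz) are approximate conversions of our own, which is why the first ratio comes out near 1.8 rather than exactly the 1.7 quoted above.

```python
import math

def energy_ratio(nu_low_hz, nu_high_hz):
    # At fixed magnetic field, nu ~ E^2 B implies E ~ sqrt(nu).
    return math.sqrt(nu_high_hz / nu_low_hz)

print(energy_ratio(1.5e9, 5.0e9))   # 20 cm vs 6 cm: ~1.8 in electron energy
print(energy_ratio(74e6, 4.6e9))    # 74 MHz vs 4.6 GHz: ~7.9, a factor of ~8
```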
Low frequencies also avoid a scale problem. For the VLA, many galactic SNR are large enough that they are over-resolved at wavelengths shorter than 6 cm. Total flux is more reliably measured at lower frequencies. While low-frequency interferometric observations can be affected by absorption, even a lower limit to the flux would help firm up the predictions.
## 3. X-ray Synchrotron Emission
X-ray emission in SNR is generally considered to be thermal; however, certain SNR look suspiciously similar in the X-ray and radio (such as G41.1-0.2; see Dyer & Reynolds 1999). Morphological similarity does not prove the X-rays are synchrotron – but at the very least it suggests that X-rays are being excited at the same location as the relativistic electrons that produce radio synchrotron emission.
In fact, X-ray observations of some SNR, like SN1006 (Koyama et al. 1995) and G347.3-0.5 (Slane et al. 1999), show that the spectra are dominated by synchrotron emission. A more serious threat to our understanding of shocks is the possibility that other SNR could have a smaller synchrotron component confusing the thermal emission – this would stymie thermal fits and prevent accurate measurements of shock temperatures and elemental abundances.
Models have been developed by Reynolds (1996, 1998) to describe this emission. Two simple models, SRCUT and SRESC, are available in XSPEC 11.0. These models rely on the radio flux and spectral index as reported by Green (1998). The models differ subtly – the precise shape of the X-ray synchrotron spectrum can be used to determine properties of the SNR, including the age of the shock, magnetic fields, and electron energies and synchrotron losses. The models depend on accurate extrapolations of the radio synchrotron spectrum over eight orders of magnitude of frequency. The current state of this knowledge is very poor, as can be seen from examining collections of flux measurements (Trushkin 1998; see G041.1-0.2 for example). Reported fluxes can vary by a factor of 1/3 to 2, sometimes even between measurements made by the same instrument. Most single dish instruments do not have the resolution to separate SNR from nearby sources, and absolute fluxes are not well calibrated from one instrument to another. The uncertainties reported in the literature are often absent or are optimistic underestimates. Low frequency interferometric observations can contribute reliable measurements with (most importantly) accurate uncertainties, allowing us to separate thermal and non-thermal X-rays.
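To see how sensitive such an extrapolation is, the sketch below propagates a pure power law from 1 GHz up to X-ray frequencies; the numbers are illustrative only, and the real SRCUT/SRESC models roll off rather than continuing as a straight power law.

```python
def extrapolated_flux(nu_hz, s_1ghz, alpha):
    """Power-law extrapolation S(nu) = S_1GHz * (nu / 1 GHz)**(-alpha)."""
    return s_1ghz * (nu_hz / 1e9) ** (-alpha)

nu_x = 2.4e17  # ~1 keV in Hz: eight orders of magnitude above 1 GHz
for alpha in (0.45, 0.50, 0.55):  # hypothetical radio spectral indices
    print(alpha, extrapolated_flux(nu_x, 1.0, alpha))
# A +/-0.05 change in alpha shifts the extrapolated X-ray flux by a factor
# of ~2.6, comparable to the factor 1/3 to 2 scatter in reported radio fluxes.
```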
## 4. Spectral Curvature
Non-linear first-order Fermi shock acceleration has been shown to be the leading model describing particle acceleration in SNR shocks. Since protons determine the shock structure, particle codes studying shocks have in the past ignored electrons. However, it is commonly held that higher energy electrons have longer scattering lengths across the shock. If this is true, electrons interact differently with the shock: highly energetic electrons see a shock with a higher compression ratio than low energy electrons, and therefore gain more energy than their low energy counterparts. Tests with particle codes including electrons by Ellison & Reynolds (1991, 1992) showed that the synchrotron spectrum, while very close, is not exactly a power law – it deviates very slightly, concave upwards or flattening to higher energies. This subtle curvature had already been found observationally in single dish measurements of well studied remnants such as Tycho and Kepler.
This is one of the few methods by which limits can be set on the magnetic field independent of the electron energy, putting us closer to the goal of finding intrinsic properties of the SNR. In some cases a single accurate measurement at low frequency can discriminate between models with different magnetic fields.
## 5. Spectral Index Variations and Inherent Problems
It is worth noting that the spectral index variations theorists look for, to obtain insight into shock mechanisms, should be very small. SNR look very similar from one frequency to the next. In addition, if parts of the remnant varied widely, it would be unlikely that the spatial average would come as close as it does to a power law over three orders of magnitude in frequency.
Studies of spectral index variations across the face of the remnant bring out the worst in interferometric measurements. There are two serious problems underlying spectral index fluctuations reported in the literature. First, even with scaled arrays, interferometric observations at different frequencies have slightly different UV coverage. This difference is compounded by processing with non-linear deconvolution methods. Second, if we are to believe the small effects we are looking for, we must be able to quantify the noise accurately – and the noise on extended sources, processed through CLEAN or MEM, is not well understood. A 3$`\sigma `$ effect is meaningful only if $`\sigma `$ is well known. We found that re-observing a SNR with the VLA with slightly better UV coverage yielded spectral index variations on the same scale as previous observations; however, these variations were in different locations with different signs (Dyer & Reynolds 1999). This was true even when linear regression algorithms were used, designed to take into account an offset due to lack of short-spacing information.
However, some SNR do show statistically significant variations (Agüeros & Green 1999). The situation can be improved by adding single dish data, by deriving indices from observations at three or four frequencies rather than two (including lower frequencies), by testing algorithms designed to avoid the zero-spacing problem, and finally by better understanding the statistical noise across diffuse CLEANed emission.
The last two issues could be addressed by a small but critical project – the techniques used to find spectral index variations in SNR (regression methods, T-T plots and spectral tomography) could be used to look for spectral index variations where we know there should be none: in thermal H II regions such as the Orion nebula. A thermal nebula should have a flat spectrum ($`\nu ^{-0.1}`$) with no spectral index variations beyond statistical fluctuations; the variations found would therefore tell us something about the noise in our spectral index maps of SNR. A thermal nebula also provides extended emission where the unknown effects of CLEAN on source noise could be checked.
## References
Agüeros, M. A. & Green, D. A. 1999, MNRAS, 305, 957
Dyer, K.K. & Reynolds, S.P. 1999, ApJ, 526, 365
Ellison, D.C. & Reynolds, S.P. 1991, ApJ, 382, 242
Green D.A. 1998,‘A Catalogue of Galactic Supernova Remnants (1998 September version)’, Mullard Radio Astronomy Observatory, Cambridge, United Kingdom (available on the World-Wide-Web at “http://www.mrao.cam.ac.uk/surveys/snrs/”)
Koyama, K., Petre, R., Gotthelf, E. V., Hwang, U., Matsura, M., Ozaki, M. & Holt, S. S. 1995, Nature, 378, 255
O’Sullivan, C. & Green, D. A. 1999, MNRAS, 303, 575
Reich, P., Reich, W. & Fürst, E. 1997, A&AS, 126, 413
Reich, W., Reich, P. & Fürst, E. 1990, A&AS, 83, 539
Reynolds, S. P. 1996, ApJL, 459, L13
Reynolds, S. P. 1998, ApJ, 493, 357
Reynolds, S.P. & Ellison, D.C. 1992, ApJ, 399, L75
Slane, P., Gaensler, B.M., Dame, T.M., Hughes, J.P., Plucinsky, P.P. & Green, A. 1999, ApJ, 525, 357
Trushkin 1998, 200 Galactic SNR, CATS Database - Astrophysical CATalogs support System, http://cats.sao.ru
# Universal c-axis conductivity of high-𝑇_𝑐 oxides in the superconducting state
## Abstract
The anisotropy in the temperature dependence of the in-plane and c-axis conductivities of high-$`T_c`$ cuprates in the superconducting state is shown to be consistent with a strong in-plane momentum dependence of both the quasiparticle scattering rate and the interlayer hopping integral. Applying the cold spot scattering model recently proposed by Ioffe and Millis to the superconducting state, we find that the c-axis conductivity varies approximately as $`T^3`$ in an intermediate temperature regime, in good agreement with the experimental result for optimally doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> and $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8+\mathrm{x}}`$.
Microwave surface impedance measurements have provided important information on the pairing symmetry and quasiparticle relaxation in the superconducting state of high-$`T_c`$ oxides. For YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> (YBCO) (and similarly for other high-$`T_c`$ compounds), the in-plane microwave conductivity $`\sigma _{ab}`$ exhibits a large peak centered at approximately 25K in the superconducting state . This peak structure of $`\sigma _{ab}`$ is due to a competition between the quasiparticle lifetime and the normal fluid density. From $`T_c`$ down to 25K the quasiparticle lifetime increases much more rapidly than the normal fluid density decreases, causing $`\sigma _{ab}`$ to rise with decreasing temperature. At low temperature, the quasiparticle lifetime reaches a limit and increases very slowly, but the normal fluid density continues to fall; $`\sigma _{ab}`$ therefore falls with decreasing temperature.
The different temperature dependences of $`\sigma _{ab}`$ and $`\sigma _c`$ are not what one might expect within the conventional theory of anisotropic superconductors. In this paper we present a detailed theoretical analysis of the c-axis conductivity. We shall show that the decrease of $`\sigma _c`$ immediately below $`T_c`$ is explained by the fact that the region near the nodes, where the long-lived quasiparticles exist, does not contribute to the c-axis transport because of the anisotropic interlayer hopping integral.
Let us first consider the behaviour of quasiparticle scattering in high-$`T_c`$ materials. An important feature revealed by photoemission measurements is that the lifetime of quasiparticles is long along the Brillouin-zone diagonals and short along other directions on the Fermi surface, in both the normal and superconducting states. Based on this experimental result and the anisotropic temperature dependence of the in-plane and c-axis resistivity, Ioffe and Millis (IM) proposed a cold spot model to account for the normal state transport data. They assumed that the scattering rate of quasiparticles contains a large angular dependent part, which vanishes quadratically as the momentum approaches the $`(0,0)(\pi ,\pi )`$ line with negligible frequency and temperature dependence, and an isotropic but temperature dependent part, i.e. $`\mathrm{\Gamma }_\theta =\mathrm{\Gamma }_0\mathrm{cos}^22\theta +\tau ^{-1}`$, where $`\theta `$ is the angle between the in-plane momentum of the electron and the a-axis. This type of scattering rate was also used by Hussey et al. in the analysis of the angular dependent $`c`$-axis magnetoresistance. In Ref. , Ioffe and Millis have further assumed that $`\tau ^{-1}`$ has the conventional Fermi liquid form $`\tau _{FL}^{-1}=\frac{T^2}{T_0}+\tau _{imp}^{-1}`$. With this phenomenological model, they gave a good explanation of the temperature dependences of several transport coefficients in the normal state. Van der Marel has recently shown that this model also provides a good description of both the in-plane and c-axis optical conductivities in the normal state.
The cold spot scattering rate, as discussed by IM, may be caused by the interaction of electrons with nearly singular $`d_{x^2-y^2}`$ pairing fluctuations. In the superconducting state, as the $`d_{x^2-y^2}`$-wave channel scattering is enhanced, the assumption of cold spot scattering made by IM is strengthened. This is indeed consistent with the recent photoemission data measured by Valla et al . Thus a detailed comparison between theoretical calculations and experimental measurements of the transport coefficients in the superconducting state provides a crucial test for the cold spot model.
In high-Tc superconductors, the electronic structure is highly anisotropic. In particular, the interlayer hopping integral depends strongly on the in plane momentum of electrons. For tetragonal compounds, the c-axis hopping integral is shown to have the form
$$t_c=t_{\perp }\mathrm{cos}^2(2\theta ).$$
(1)
This anisotropic interlayer hopping integral is a basic property of high-$`T_c`$ materials. It results from the hybridization between the bonding O 2p orbitals and virtual Cu 4s orbitals in each CuO<sub>2</sub> plane and holds for all high-$`T_c`$ cuprates with tetragonal symmetry, independent of the number of CuO<sub>2</sub> layers per unit cell. This form of the $`c`$-axis hopping integral was first found in the band structure calculations of high-$`T_c`$ oxides. However, as shown in Refs. , it is valid irrespective of the approximations used in these calculations. For Hg<sub>2</sub>BaCuO<sub>4</sub> or other non-body-centered tetragonal compounds, $`t_{\perp }`$ is approximately independent of $`\theta `$. For a body-centered tetragonal compound, such as La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, $`t_{\perp }`$ does depend on $`\theta `$, but in the vicinity of the gap nodes it is finite. Since in the superconducting state the physical properties are mainly determined by the quasiparticle excitations near the gap nodes, for simplicity we ignore the $`\theta `$-dependence of $`t_{\perp }`$ in the discussion given below.
In YBCO, the CuO planes are dimpled with displacements of O in the $`c`$ direction. O displacements, together with the CuO chains in YBCO, reduce the crystal symmetry and introduce a finite hybridization between the $`\sigma `$ and $`\pi `$ bands. This hybridization results in a small but finite $`t_c`$ along zone diagonals which will change the low temperature behavior of the electromagnetic response functions. However, at not too low temperatures, Eq. (1) is still a good approximation.
The conductivity is determined by the imaginary part of the current-current correlation function. If vertex corrections are ignored, it can be shown that the conductivity is given by
$`\sigma _\mu `$ $`=`$ $`\frac{\alpha _\mu }{\pi }\int _{-\infty }^{+\infty }d\omega \left(-\frac{\partial f(\omega )}{\partial \omega }\right)\int _0^{2\pi }\frac{d\theta }{2\pi }u_\mu ^2(\theta )M(\theta )`$ (2)
$`M(\theta )`$ $`=`$ $`\frac{\pi }{\mathrm{\Gamma }_\theta }\mathrm{Re}\frac{\left(\omega +i\mathrm{\Gamma }_\theta \right)^3-\omega \mathrm{\Delta }_0^2\mathrm{cos}^22\theta }{\left[\left(\omega +i\mathrm{\Gamma }_\theta \right)^2-\mathrm{\Delta }_0^2\mathrm{cos}^22\theta \right]^{3/2}},`$ (3)
where $`u_{ab}(\theta )=1`$, $`u_c(\theta )=\mathrm{cos}^2(2\theta )`$, $`\alpha _{ab}=e^2v_F^2N(0)/4`$, $`\alpha _c=e^2t_{\perp }^2N(0)/4`$, $`N(0)`$ is the density of states of electrons at the Fermi level, and $`f(\omega )`$ is the Fermi function. In obtaining the above equation, the retarded Green’s function of the electron, $`G_{ret}(k,\omega ),`$ is assumed to be
$$G_{ret}(k,\omega )=\frac{1}{\omega -\xi _k\tau _3-\mathrm{\Delta }_\theta \tau _1+i\mathrm{\Gamma }_\theta },$$
(4)
where $`\xi _k=\epsilon _{ab}(k)-t_{\perp }\mathrm{cos}k_zu_c(\theta )`$ is the energy dispersion of the electron, $`\mathrm{\Delta }_\theta =\mathrm{\Delta }_0\mathrm{cos}2\theta `$ is the d-wave gap parameter, $`\mathrm{\Gamma }_\theta `$ is the quasiparticle scattering rate, and $`\tau _1`$ and $`\tau _3`$ are the Pauli matrices.
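A brute-force numerical evaluation of Eqs. (2)–(3) is straightforward. In this sketch the frequency cutoff, grid sizes and the retarded branch of the complex square root are our own choices; in the normal state ($`\mathrm{\Delta }_0=0`$) the routine reduces to Eq. (5), which serves as a check.

```python
import numpy as np

def conductivity(alpha_mu, u_mu, gamma, delta0, T, n_omega=401, n_theta=400):
    """sigma_mu from Eqs. (2)-(3); u_mu and gamma are callables of theta."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    omega = np.linspace(-20.0 * T, 20.0 * T, n_omega)
    dth, dom = 2.0 * np.pi / n_theta, omega[1] - omega[0]
    # -df/domega for the Fermi function f(omega) = 1/(exp(omega/T) + 1)
    mdf = 1.0 / (4.0 * T * np.cosh(omega / (2.0 * T)) ** 2)
    g = gamma(theta)                              # Gamma_theta
    d2 = (delta0 * np.cos(2.0 * theta)) ** 2      # Delta_theta^2
    w = omega[:, None] + 1j * g[None, :]
    s = np.sqrt(w ** 2 - d2[None, :])
    s = np.where(s.imag < 0.0, -s, s)             # retarded branch: Im(sqrt) >= 0
    M = (np.pi / g[None, :]) * np.real((w ** 3 - omega[:, None] * d2[None, :]) / s ** 3)
    kern = (u_mu(theta) ** 2)[None, :] * M
    return alpha_mu / np.pi * np.sum(mdf[:, None] * kern) * dom * dth / (2.0 * np.pi)

# Example (arbitrary units): c-axis weight with a cold-spot rate and a d-wave gap.
sigma_c = conductivity(1.0, lambda th: np.cos(2.0 * th) ** 2,
                       lambda th: 0.15 * np.cos(2.0 * th) ** 2 + 1e-3,
                       delta0=0.02, T=0.005)
print(sigma_c)
```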
In the normal state, $`\mathrm{\Delta }_0=0`$, and Eq. (2) becomes
$$\sigma _\mu =\alpha _\mu \int _0^{2\pi }\frac{d\theta }{2\pi }\frac{u_\mu ^2(\theta )}{\mathrm{\Gamma }_\theta }.$$
(5)
If the scattering is isotropic, i.e. $`\mathrm{\Gamma }_\theta =\mathrm{\Gamma }(T)`$ independent of $`\theta `$, then $`\sigma _{ab}`$ and $`\sigma _c`$ should have the same temperature dependence, in contradiction with experiments. However, if $`\mathrm{\Gamma }_\theta =\mathrm{\Gamma }_0\mathrm{cos}^22\theta +\tau ^1(T)`$, then
$`\sigma _{ab}`$ $`=`$ $`\frac{\alpha _{ab}}{\sqrt{\tau ^{-1}(\mathrm{\Gamma }_0+\tau ^{-1})}},`$ (6)
$`\sigma _c`$ $`=`$ $`\frac{\alpha _c}{2\mathrm{\Gamma }_0}\left(1-\frac{2\tau ^{-1}}{\mathrm{\Gamma }_0}+\frac{2\tau ^{-1}}{\mathrm{\Gamma }_0}\sqrt{\frac{\tau ^{-1}}{\mathrm{\Gamma }_0+\tau ^{-1}}}\right).`$ (7)
When $`\mathrm{\Gamma }_0\gg \tau ^{-1}`$, $`\sigma _{ab}`$ is proportional to $`\sqrt{\tau }`$, not $`\tau `$. This is the result first obtained by IM with a Boltzmann equation analysis. If $`\tau ^{-1}`$ varies quadratically with $`T`$, as in conventional Fermi liquid theory, the resistivity, i.e. $`\sigma _{ab}^{-1}`$, varies linearly with $`T`$. This provides a phenomenological account of the linear resistivity of optimally doped cuprates. $`\sigma _c`$ depends on two parameters, $`\alpha _c/\mathrm{\Gamma }_0`$ and $`\tau \mathrm{\Gamma }_0`$. In the limit $`\mathrm{\Gamma }_0\gg \tau ^{-1}`$, $`\sigma _c`$ depends very weakly on $`T`$ and extrapolates to a finite value $`\alpha _c/\left(2\mathrm{\Gamma }_0\right)`$ at $`T=0`$K, in qualitative agreement with the experimental data. These results indicate that the simple cold spot model captures the key features of high-$`T_c`$ transport properties in the normal state, although its microscopic mechanism is still unclear.
In the superconducting state, Eq. (2) cannot be integrated analytically. However, in the temperature regime $`T_c>T\gg \tau _0^{-1}`$, where $`\tau _0=<\mathrm{\Gamma }_\theta ^{-1}>`$ is the thermal average of $`\mathrm{\Gamma }_\theta ^{-1}`$, the leading order approximation in $`\mathrm{\Gamma }_\theta `$ is valid and the conductivity is given by
$$\sigma _\mu \simeq \alpha _\mu \int _{-\infty }^{+\infty }d\omega \left(-\frac{\partial f(\omega )}{\partial \omega }\right)\int _0^{2\pi }\frac{d\theta }{2\pi }\frac{u_\mu ^2(\theta )}{\mathrm{\Gamma }_\theta }\mathrm{Re}\frac{\left|\omega \right|}{\sqrt{\omega ^2-\mathrm{\Delta }_\theta ^2}}.$$
(8)
$`\tau _0^{-1}`$ can be estimated from the experimental data for the in-plane microwave conductivity $`\sigma _{ab}`$ and the normal fluid density with the generalized Drude formula: $`\sigma _{ab}\propto n_{ab}\tau _0`$. For optimally doped YBCO, $`\tau _0^{-1}`$ is less than $`1`$K at low temperatures and increases with increasing temperature. At $`60`$K, $`\tau _0^{-1}`$ is about $`6`$K. Close to $`T_c`$, $`\tau _0^{-1}`$ becomes larger but is still much less than the temperature. This means that the leading order approximation in $`\mathrm{\Gamma }_\theta `$ is valid in nearly the whole temperature range in which the experimental measurements have been done so far, at least for optimally doped YBCO.
If $`\mathrm{\Gamma }_\theta =\mathrm{\Gamma }(T)=\tau ^{-1}(T)`$ does not depend on $`\theta `$, Eq. (8) can be simplified to
$$\sigma _\mu =\frac{e^2n_\mu (T)\tau }{2m},$$
(9)
where $`n_\mu (T)`$ is the normal fluid density which decreases with decreasing temperatures. This is nothing but the generalized Drude formula which was first used by Bonn et al. in their data analysis for the in-plane microwave conductivity in the superconducting state. From Eq. (9), it is easy to show that the ratio of the in- and out-of-plane conductivities $`\sigma _{ab}/\sigma _c`$ is proportional to the ratio $`n_{ab}/n_c`$, i.e. $`\sigma _{ab}/\sigma _c=`$ $`n_{ab}/n_c`$. However, this does not agree with experiments, even qualitatively. It implies that the scattering rate must be anisotropic, as mentioned previously.
In the cold spot model, $`\mathrm{\Gamma }_\theta =\mathrm{\Gamma }_0\mathrm{cos}^22\theta +\tau ^{-1}(T)`$, and $`\sigma _{ab}`$ and $`\sigma _c`$ behave very differently. Eq. (8) can now be approximately written as
$`\sigma _a`$ $`\simeq `$ $`\frac{T\tau \alpha _a}{\mathrm{\Delta }_0}\int _{-\infty }^{+\infty }dx\left(-\frac{\partial f(xT)}{\partial x}\right)\frac{\left|x\right|}{\sqrt{1+T^2\mathrm{\Gamma }_0\tau x^2/\mathrm{\Delta }_0^2}},`$ (10)
$`\sigma _c`$ $`\simeq `$ $`\frac{9\alpha _c\zeta (3)T^3}{2\mathrm{\Gamma }_0\mathrm{\Delta }_0^3}-\frac{\left(2\mathrm{ln}2\right)T\alpha _c}{\tau \mathrm{\Gamma }_0^2\mathrm{\Delta }_0}+\frac{\alpha _c\sigma _a}{\alpha _a\tau ^2\mathrm{\Gamma }_0^2},`$ (11)
where $`\zeta (3)=1.202`$. In the high temperature limit $`\mathrm{\Gamma }_0\tau T^2/\mathrm{\Delta }_0^2\gg 1`$,
$`\sigma _{ab}`$ $`\simeq `$ $`\alpha _{ab}\sqrt{\frac{\tau }{\mathrm{\Gamma }_0}},`$ (12)
$`\sigma _c`$ $`\simeq `$ $`\frac{9\alpha _c\zeta (3)T^3}{2\mathrm{\Gamma }_0\mathrm{\Delta }_0^3}=9\zeta (3)\sigma _{n,c}(0K)\frac{T^3}{\mathrm{\Delta }_0^3},`$ (13)
where $`\sigma _{n,c}(0K)=\alpha _c/2\mathrm{\Gamma }_0`$ is the extrapolated normal state c-axis conductivity at $`0`$K. These equations reveal a few interesting properties of the conductivities. Firstly, $`\sigma _{ab}`$ is proportional to $`\sqrt{\tau }`$ and does not depend explicitly on $`\mathrm{\Delta }_0`$. This $`\sqrt{\tau }`$ dependence of $`\sigma _{ab}`$ is an extension of Eq. (6) to the superconducting state, which means that $`\sigma _{ab}`$ (excluding the fluctuation peak at $`T_c`$) will change smoothly across $`T_c`$, since $`\tau `$ changes continuously at $`T_c`$. The temperature dependence of $`\tau `$ in the superconducting state is unknown, but phenomenologically it can be determined from the measured in-plane conductivity via Eq. (12). Secondly, $`\sigma _c`$ decreases monotonically with decreasing temperature and behaves approximately as $`T^3`$ in the above temperature regime ($`\mathrm{\Delta }_0`$ depends very weakly on temperature except close to $`T_c`$). Furthermore, $`\sigma _c`$ does not depend on $`\tau `$, which means that this $`T^3`$ behavior is universal and independent of the impurity scattering, provided the scattering is sufficiently weak that coherent interlayer tunneling dominates. There are also no free adjustable parameters in Eq. (13), since both $`\sigma _{n,c}(0K)`$ and $`\mathrm{\Delta }_0`$ can be determined directly from experiments. This therefore provides a good opportunity to test the cold spot scattering model by comparison with experiments.
In the low temperature limit $`\mathrm{\Gamma }_0\tau T^2/\mathrm{\Delta }_0^2\ll 1`$, Eq. (8) leads to the following results
$`\sigma _{ab}`$ $`\simeq `$ $`\frac{(2\mathrm{ln}2)T\tau \alpha _{ab}}{\mathrm{\Delta }_0},`$ (14)
$`\sigma _c`$ $`\simeq `$ $`\frac{675\zeta (5)\alpha _c\tau }{4}\left(\frac{T}{\mathrm{\Delta }_0}\right)^5.`$ (15)
In this limit neither $`\sigma _{ab}`$ nor $`\sigma _c`$ depends on $`\mathrm{\Gamma }_0`$. This means that the $`\mathrm{\Gamma }_0\mathrm{cos}^22\theta `$ term in $`\mathrm{\Gamma }_\theta `$ is not important in this temperature regime. In fact, Eqs. (14) and (15) are just the results for $`\sigma _{ab}`$ and $`\sigma _c`$ in an isotropic scattering system, as given by Eq. (9), since the normal fluid densities $`n_{ab}`$ and $`n_c`$ behave as $`T`$ and $`T^5`$ for tetragonal high-$`T_c`$ compounds at low temperatures, respectively . In real materials where impurity scattering is not negligible, as discussed in Refs. , this $`T^5`$ behaviour of the $`c`$-axis normal fluid density is fairly unstable and will be replaced by a $`T^2`$ law at low temperatures. In this case, the temperature dependence of $`\sigma _c`$ will also be changed.
The condition $`\mathrm{\Gamma }_0\tau T^2/\mathrm{\Delta }_0^2\gg 1`$ can be written as $`T/\mathrm{\Delta }_0\gg 1/\sqrt{\mathrm{\Gamma }_0\tau }`$. Since $`\tau _0^{-1}>\tau ^{-1}`$, the condition $`T/\mathrm{\Delta }_0\gg 1/\sqrt{\mathrm{\Gamma }_0\tau }`$ holds if $`T/\mathrm{\Delta }_0\gg 1/\sqrt{\mathrm{\Gamma }_0\tau _0}`$ is satisfied. If we assume $`\mathrm{\Gamma }_0\approx 0.15`$ eV , which is the value used by IM in their analysis of normal state transport coefficients, and $`\tau _0^{-1}\approx 6`$K as given by the experimental data for YBCO at $`60`$K, then $`1/\sqrt{\mathrm{\Gamma }_0\tau _0}`$ is estimated to be about $`0.06`$. Thus the condition $`\mathrm{\Gamma }_0\tau T^2/\mathrm{\Delta }_0^2\gg 1`$ holds at least when $`T/\mathrm{\Delta }_0\gg 0.06`$ for YBCO. Since $`\mathrm{\Delta }_0\approx 2T_c`$ at low temperatures, Eqs. (12) and (13) are therefore valid when $`T/T_c\gg 0.12`$ for optimally doped YBCO.
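The validity estimate above is a one-line unit conversion; here $`\mathrm{}=k_B=1`$, so that a rate quoted in kelvin is an energy:

```python
import math

k_B = 8.617e-5            # eV per K
Gamma0 = 0.15             # eV, the value used by IM
tau0_inv = 6.0 * k_B      # eV: tau_0^{-1} ~ 6 K at 60 K, from the text

print(1.0 / math.sqrt(Gamma0 / tau0_inv))  # ~0.059, i.e. about 0.06
```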
To compare the above results with experiments, we plot in Figure 1 the experimental data for $`\sigma _c`$ at 22GHz as a function of $`(T/T_c)^3`$ for $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{6.95}`$ . From $`30`$K up to $`T_c`$, $`\sigma _c`$ exhibits a $`T^3`$ behavior within experimental error, in agreement with Eq. (13). We have fitted the experimental data with other power laws of $`T/T_c`$ in the same temperature range, but found that none of them fits the experimental data as well as the $`T^3`$ power law. For this material, there are no experimental data on the temperature dependence of $`\sigma _c`$ in the normal state, but just above $`T_c`$, $`\sigma _c\approx 6.3\times 10^4\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$ . Since the normal state $`\sigma _c`$ depends very weakly on $`T`$ at low $`T`$ for optimally doped YBCO , we can approximately take this value of $`\sigma _c`$ as the extrapolated normal state c-axis conductivity at $`0`$K, i.e. $`\sigma _{n,c}(0K)\approx 6.3\times 10^4\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$. Substituting this value into Eq. (13), we obtain $`\sigma _c\approx 6.8\times 10^5(T/\mathrm{\Delta }_0)^3\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$. By fitting this theoretical result to the corresponding experimental data in Figure 1, we find that $`\mathrm{\Delta }_0/T_c\approx 2.6`$. This value of $`\mathrm{\Delta }_0/T_c`$ agrees with all other published data for optimally doped YBCO within experimental uncertainty. This agreement indicates that not only the leading temperature dependence but also the absolute value of $`\sigma _c`$ predicted by Eq. (13) agrees with experiments.
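The quoted prefactor is just $`9\zeta (3)\sigma _{n,c}(0K)`$; a quick verification:

```python
zeta3 = 1.2020569
sigma_nc_0K = 6.3e4   # Ohm^-1 m^-1, normal-state value just above Tc
print(f"{9.0 * zeta3 * sigma_nc_0K:.2e}")  # ~6.8e5, matching Eq. (13) as used above
```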
For YBCO, $`t_c`$ is small but finite at the gap nodes. This has not been considered in the above discussion and, if included, will change the low temperature behavior of $`\sigma _c`$. What effect leads to the upturn of $`\sigma _c`$ at low $`T`$ remains a puzzle to us; the model presented here is inadequate to address this issue. Perhaps the interplay between the CuO chains and the CuO<sub>2</sub> planes is playing an important role. A possible cause for the upturn of $`\sigma _c`$ is the proximity effect between CuO chains and CuO<sub>2</sub> planes. However, a detailed analysis of this is too complicated to carry out at present, and at this point we are unable to show explicitly whether this effect alone can lead to the observed upturn in $`\sigma _c`$. Another possibility is that this upturn is due to the onset of coherent tunnelling in the $`c`$-axis hopping, whereas before this upturn the c-axis hopping is incoherent. If this is the case, the assumption made in this paper would fail. However, since the dramatic increase in the quasiparticle lifetime occurs at a temperature much higher than the $`\sigma _c`$ upturn temperature, we believe this possibility is unlikely to be relevant.
Figure 2 shows the $`c`$-axis quasiparticle conductivity obtained by Latyshev et al. from an intrinsic mesa tunneling measurement for BSCCO. $`\sigma _c`$ of BSCCO also exhibits a $`T^3`$ behavior in a broad temperature range in the superconducting state. However, the onset temperature of this $`T^3`$ term ($`\approx 45`$K) is higher than that for YBCO. This is because BSCCO is very anisotropic and disorder effects are stronger than in YBCO. The crossover temperature from the impurity dominated limit at low temperatures to the intrinsic limit at high temperatures, as estimated in Ref. , is about 30K. The value of $`\sigma _{n,c}(0K)`$ for this material is difficult to determine because of the pseudogap effect. Since the opening of a pseudogap always reduces the value of $`\sigma _{n,c}`$ in the normal state, we can take the value of $`\sigma _{n,c}`$ just above $`T_c`$ ($`\approx 3\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$) as a lower bound on $`\sigma _{n,c}(0K)`$. The c-axis conductivity measured at a voltage well above the pseudogap is nearly temperature independent and higher than the corresponding conductivity in the limit $`V\to 0`$ (Figure 2 of Ref. ). This conductivity, $`\sigma _{n,c}(eV>\mathrm{\Delta }_0)\approx 8\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$, sets an upper bound on $`\sigma _{n,c}(0K)`$. Thus $`\sigma _{n,c}(0K)`$ is between $`3\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$ and $`8\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$. By fitting the experimental data for $`\sigma _c`$ from 45K to $`T_c`$ with Eq. (13), we find that $`\mathrm{\Delta }_0/T_c`$ lies within (2.3, 3.1). This range of $`\mathrm{\Delta }_0/T_c`$ is consistent with the published data for BSCCO within experimental uncertainty.
In conclusion, we have studied the temperature behavior of the microwave conductivity of high-$`T_c`$ cuprates in the superconducting state within the framework of the low energy electromagnetic response theory of superconducting quasiparticles. We found that the c-axis conductivity varies approximately as $`T^3`$ and does not depend on $`\tau (T)`$ in the temperature regime $`T_c>T\gg \mathrm{\Delta }_0/\sqrt{\mathrm{\Gamma }_0\tau }`$. This universal temperature dependence of $`\sigma _c`$ agrees quantitatively with the experimental data for YBCO and BSCCO. Our study shows that it is important to include the anisotropy of the interlayer hopping integral in the analysis of $`c`$-axis transport properties of high-$`T_c`$ cuprates.
We thank D. Broun, A. J. Millis, C. Panagopoulos, A. J. Schofield for useful discussions, and Yu. I. Latyshev for sending us the experimental data shown in Figure 2. T.X. was supported in part by the National Science Fund for Distinguished Young Scholars of the National Natural Science Foundation of China.
# The discovery of photospheric nickel in the hot DO white dwarf REJ0503-289 (Based on observations made with the Goddard High Resolution Spectrograph on board the Hubble Space Telescope)
## 1 Introduction
About one quarter of all white dwarfs have helium-rich photospheres and are classified according to the relative strengths of $`\mathrm{He}\mathrm{ii}`$ and $`\mathrm{He}\mathrm{i}`$ lines in their optical spectra. These are determined by the ionization balance of the He plasma, which depends on the effective temperature of the star. The DB white dwarfs display only $`\mathrm{He}\mathrm{i}`$ absorption lines, and cover the temperature range from 11000 to 30000K. Temperatures below 11000K are too low to excite $`\mathrm{He}\mathrm{i}`$ sufficiently to yield observable lines, leading to the featureless DC white dwarfs. In contrast, the hot DO stars contain $`\mathrm{He}\mathrm{ii}`$ lines alone, the ionization of helium requiring higher effective temperatures than found in the DB white dwarfs. The upper limit to the DO temperature range, at approximately 120000K, is associated with the helium-, carbon- and oxygen-rich PG1159 stars (also denoted DOZ by Wesemael et al. 1985, WGL), which are the proposed precursors of the DO white dwarfs. The 45000K lower temperature limit of the DO range is some 15000K higher than the beginning of the DB sequence, presenting a so-called DO-DB gap between 30000K and 45000K, first noted by Liebert et al. (1986). Subsequent surveys of white dwarfs have failed to find examples of He-rich objects within the gap, which continues to present a problem in our understanding of white dwarf evolution.
It is clear, from the existence of the DO-DB gap and the changing ratio of H-rich to He-rich objects, that white dwarf photospheric compositions evolve as the stars cool. A number of physical mechanisms may be operating. For example, the presence of elements heavier than H or He in white dwarf photospheres is the result of radiative forces acting against the downward pull of gravity, preventing these heavy elements sinking out of the atmosphere (e.g. Chayer, Fontaine & Wesemael 1995). A possible explanation of the DO-DB gap is that, following the AGB and PN phases of mass-loss, residual hydrogen mixed in the stellar envelope floats to the surface converting the DO stars into DAs. Later, the onset of convection may mix the hydrogen layer, depending on its thickness, back into the He-rich lower layers, causing the stars to reappear on the DB sequence.
Any understanding of the possible evolutionary processes depends on several important measurements, including the determination of effective temperature, surface gravity and photospheric composition. Several detailed studies have been carried out for DA white dwarfs (e.g. Marsh et al. 1997; Wolff et al. 1998; Holberg et al. 1993, 1994; Werner & Dreizler 1994), but comparatively little work has been carried out on the DO stars. This is partly due to the smaller number of stars available for detailed study, but also arises from the comparative difficulty of establishing a reliable self-consistent temperature determination from the $`\mathrm{He}\mathrm{ii}`$ lines. The most recent and probably the most detailed study of the DO white dwarf sample has been carried out by Dreizler and Werner (1996). They applied the results of non-LTE model atmosphere calculations to the available optical and UV spectra to determine the atmospheric parameters of 14 stars, confirming the existence of the DO-DB gap. Dreizler and Werner found the mean mass of the white dwarfs in their sample to be $`0.59\pm 0.08\mathrm{M}_{\odot }`$, very close to the mean masses of the DA and DB samples. A large scatter in heavy element abundances was found, even for stars with similar parameters, and no clear trend along the cooling sequence could be seen.
With such a small sample of DO white dwarfs available for study, each individual object is significant. Discovered as a result of the ROSAT WFC all-sky survey in the EUV, the DO white dwarf REJ0503-289 (WD0501-289, MCT0501-2858) is particularly important because of its low interstellar column density (Barstow et al. 1994). As a consequence, it is the only DO white dwarf which can be observed throughout the complete spectral range from optical to X-ray and has been the subject of intense study. However, despite this attention, it has not proved possible to generate a model spectrum that is consistent at all wavelengths. Optical determinations of the effective temperature, from a single ESO NTT spectrum, yield a value of $`70000`$K, with a log surface gravity of 7.5 (Barstow et al. 1994; Dreizler & Werner 1996). Similar values are obtained from an LTE analysis of the far-UV $`\mathrm{He}\mathrm{ii}`$ lines in an ORFEUS spectrum of the star (Vennes et al. 1998). In contrast, a lower temperature of 63000K is required to reproduce the flux level of the EUVE spectrum and simultaneously match the $`\mathrm{C}\mathrm{iii}`$ and $`\mathrm{C}\mathrm{iv}`$ line strengths in the IUE high dispersion spectra (Barstow et al. 1996). However, such a low temperature is incompatible with the absence of $`\mathrm{He}\mathrm{i}`$ 4471Å and $`\mathrm{He}\mathrm{i}`$ 5876Å absorption lines in the optical spectrum, which provides a lower limit on $`T_{\mathrm{eff}}`$ of $`65000`$K.
Analyses of IUE high dispersion spectra yield measurements of the abundances of C (also obtained from the optical spectrum), N, O and Si and give limits on the presence of Ni and Fe. More recently, phosphorus has also been detected in the REJ0503-289 ORFEUS spectrum (Vennes et al. 1998). The presence of heavy elements in the atmosphere of any white dwarf, DO or DA, is known to have a significant effect on the temperature structure of the photosphere and the emergent spectrum. With millions of absorption lines in the EUV wavelength range, the influence of iron and nickel is particularly dramatic. This is illustrated very clearly in the DA stars, where the Fe and Ni opacity produces a steep drop in the observed flux, compared to that expected from a pure H atmosphere (e.g. Dupuis et al. 1995; Lanz et al. 1996; Wolff et al. 1998). In addition, for an H-rich model atmosphere including significant quantities of Fe and Ni, the change in atmospheric structure also alters the predicted Balmer line profiles. Inclusion of these effects, together with an NLTE analysis, has the effect of yielding lower Balmer line effective temperatures compared with those determined from a pure-H model atmosphere. This results in a net downward shift of the temperature scale for the hottest heavy element-rich objects (Barstow, Hubeny & Holberg 1998).
Compared to the extensive studies of the effect of Fe and Ni (and other elements) on the atmospheres of DA white dwarfs, as discussed above, little has been done in the case of the DO stars. First, there are few detections of these species in the atmospheres of the DOs (see Dreizler & Werner 1996). Second, it has been more difficult to calculate suitable stellar model atmospheres for comparison with the data. However, it is possible that their inclusion in such computations might eventually solve the problem of the EUV flux. We present an analysis of HST spectra of REJ0503-289 obtained with the GHRS, which reveal the presence of Ni in the atmosphere of the star, but yield only upper limits to the abundance of Fe. We analyse a recent optical spectrum of the star, to determine the effective temperature and surface gravity, and evaluate the possible influence of photospheric Ni and trace Fe on the estimated temperature.
## 2 Observations
### 2.1 Optical spectrum
The optical spectrum of REJ0503-289 was obtained by one of us (D.F.) during an observational campaign aimed at the determination of the temperature scale of DA white dwarfs (Finley, Koester & Basri 1997). The spectrum of REJ0503-289 was obtained on 1992 September $`22^{\mathrm{nd}}`$ at the 3 m Shane telescope of the Lick Observatory, using the Kast double spectrograph. The spectrograph was configured to cover the optical wavelength range from 3300Å up to 7500Å in one exposure. A dichroic mirror divides the beam at about 5500Å. The blue spectrum was recorded with a 1200 x 400 Reticon CCD, providing a resolution varying from a little over 4Å FWHM at 4000Å to about 6Å FWHM at 5000Å, with an entrance slit width of $`2\mathrm{"}`$. The red spectrum was recorded separately on a second 1200 x 400 Reticon CCD. Exposure times were set to obtain a peak S/N of $`100`$ in the blue spectral region, with the actual S/N ranging from 50 to 110. More details of the instrument set-up as well as the data reduction are described by Finley, Koester & Basri (1997).
### 2.2 GHRS on the Hubble Space Telescope
The HST far UV spectra used in this work were obtained from two separate observing programmes carried out during cycle 6 in 1996 and 1997 by two of us (Barstow and Werner), using the Goddard High Resolution Spectrograph (GHRS) before its replacement during the second servicing mission. All spectra utilised the G160M grating, yielding a spectral resolution of $`0.018`$Å rms, with the grating angle adjusted to sample different wavelength ranges. The Barstow programme obtained 6 separate exposures covering wavelengths 1233-1271Å (2 spectra), 1369-1406Å (3 spectra) and 1619-1655Å (1 spectrum). The main purpose of the multiple exposures was to monitor any possible variation in absorption line strengths that might be associated with a reported episodic wind (Barstow & Sion 1994). In contrast, the observations of Werner comprised just two single spectra spanning the ranges 1225-1265Å and 1335-1375Å. Table 1 summarises all the GHRS observations. Within the observational errors, there is no evidence for any changes in the profile of any of the $`\mathrm{N}\mathrm{v}`$ (1238.8/1242.8Å), $`\mathrm{O}\mathrm{v}`$ (1371.3Å) and $`\mathrm{Si}\mathrm{iv}`$ (1393.8/1402.8Å) resonance lines. Measurement of the individual equivalent widths (Table 2) also fails to show any of the variability that might be associated with the episodic wind reported by Barstow & Sion (1994).
With no apparent variability and repeated exposure of particular spectral ranges, it is possible to generate a final data set with an overall improvement in signal-to-noise, by merging and coadding the data in an appropriate way. The procedure followed here was to first coadd the results of the initial observations, where the spectral ranges are identical for each exposure, weighting them according to their exposure time. Secondly, the results of this exercise were merged with the later observations, also weighted for exposure time, but only in the regions where the data overlap. The resulting spectra are shown in Fig. 1.
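The coaddition scheme lends itself to a short sketch. The snippet below implements the two steps described above – an exposure-time-weighted average of repeated exposures on a common wavelength grid, followed by a merge that combines two spectra only where their wavelength ranges overlap. The function names and grids are our own illustrative choices, not part of the original reduction software.

```python
import numpy as np

def coadd(flux_list, exptimes):
    """Exposure-time-weighted mean of spectra sharing one wavelength grid."""
    w = np.asarray(exptimes, dtype=float)
    f = np.asarray(flux_list, dtype=float)
    return (w[:, None] * f).sum(axis=0) / w.sum()

def merge(wave_a, flux_a, t_a, wave_b, flux_b, t_b):
    """Combine two spectra, weighting by exposure time only in the overlap."""
    lo, hi = max(wave_a[0], wave_b[0]), min(wave_a[-1], wave_b[-1])
    out = flux_a.copy()
    sel = (wave_a >= lo) & (wave_a <= hi)          # overlap region on grid a
    fb = np.interp(wave_a[sel], wave_b, flux_b)    # resample b onto grid a
    out[sel] = (t_a * flux_a[sel] + t_b * fb) / (t_a + t_b)
    return out

# e.g. two repeated exposures coadded first, then merged with a later one:
# combined = coadd([f1, f2], [1400.0, 1400.0])
# final = merge(wave, combined, 2800.0, wave2, f3, 1200.0)
```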
### 2.3 Temporal variability
Although our GHRS spectra of REJ0503-289 show little evidence of temporal variability, such variations have been reported from IUE data. As previously mentioned, Barstow & Sion (1994) reported evidence of variations in the $`\mathrm{C}\mathrm{iv}`$, $`\mathrm{O}\mathrm{v}`$ and $`\mathrm{He}\mathrm{ii}`$ features in two SWP echelle spectra of REJ0503-289 obtained 13 months apart. There exist two additional spectra of this star, obtained subsequent to the Barstow & Sion work, which show further evidence of significant variations in the $`\mathrm{C}\mathrm{iv}`$ resonance lines (see Holberg, Barstow & Sion 1998). In Fig. 9 we show a comparison of the region of the $`\mathrm{C}\mathrm{iv}`$ resonance lines in all four spectra of REJ0503-289, together with the predicted synthetic spectrum computed from a $`\mathrm{tlusty}`$ model. As is evident, the two spectra on the right, obtained in Nov. 1994, show much more prominent $`\mathrm{C}\mathrm{iv}`$ lines compared with those on the left, obtained in Dec. 1992 and Jan. 1994. In each spectrum a vertical line marks the photospheric velocity of the star. It appears that the most pronounced change is associated with a strengthening of the blue wings of the $`\mathrm{C}\mathrm{iv}`$ lines and a possible development of a blue-shifted component approximately 10 months later in 1994. Similar blue-shifted components are to be found in the majority of the hot He-rich degenerates observed with IUE (Holberg, Barstow & Sion 1998). A discussion of the nature of these blue-shifted features in DO stars is presented in Holberg, Barstow & Sion (1999).
There is additional evidence of temporal variability in the comparison between the equivalent widths as measured in the GHRS data and in the IUE spectra. Holberg, Barstow & Sion (1998) presented results from a coadded version of all four SWP spectra. These authors report equivalent widths for the $`\mathrm{N}\mathrm{v}`$ resonance lines which are 30% less than those in Table 6. Such a change in equivalent width is significantly outside the range of mutual uncertainties of the two data sets. The $`\mathrm{Si}\mathrm{iv}`$ resonance lines in both data sets, however, remain consistent within mutual errors.
## 3 Data analysis
### 3.1 Non-LTE model atmospheres
We have used theoretical model atmospheres calculated using two independent computer programmes in this study: the $`\mathrm{tlusty}`$ code developed by Hubeny (Hubeny 1988; Hubeny & Lanz 1992, 1995) and Werner’s $`\mathrm{pro2}`$ suite (Werner 1986, Werner & Dreizler 1999, Dreizler & Werner 1993). Both take account of non-LTE effects in the calculations and include extensive line blanketing.
The $`\mathrm{tlusty}`$ models are an extension of work carried out on the atmospheres of hot heavy element-rich DA white dwarfs by Lanz et al. (1996) and Barstow et al. (1998) and have been described extensively in those papers. Briefly, the models include a total of 26 ions of H, He, C, N, O, Si, Fe and Ni. Radiative data for the light elements have been extracted from TOPBASE, the database for the Opacity Project (Cunto et al. 1993), except for carbon, for which extended model atoms were used. For iron and nickel, all the levels predicted by Kurucz (1988) are included, taking into account the effect of over 9.4 million lines.
To facilitate analysis of both the optical and far UV data sets, an initial grid of models was calculated for the determination of effective temperature and surface gravity, spanning a range of $`T_{\mathrm{eff}}`$ from 65000K to 80000K (5000K steps) and of log g from 7.0 to 8.0 (0.5 dex steps). To limit the computation time, these models only treated the elements H, He and C. It is essential to deal explicitly with carbon because it is the only element, apart from He, visible in the optical spectrum. In addition, the 1548/1550Å resonance lines have a strong influence on the temperature structure of the photospheric models and, as a result, influence the $`\mathrm{He}\mathrm{ii}`$ line strengths. The grid was extended to include the heavier elements N, O, Si, Fe and Ni for the single value of log g=7.5, but all values of $`T_{\mathrm{eff}}`$. Table 3 lists the element abundances included in the model grid.
$`\mathrm{pro2}`$ is a code for calculating NLTE model atmospheres in radiative and hydrostatic equilibrium using plane-parallel geometry. The $`\mathrm{pro2}`$ models are an extension of the work of Dreizler & Heber (1998). In addition to H, He, C, N, and O we included Fe and Ni in the same way as treated by Werner, Dreizler & Wolff (1995). In this set of model atmospheres we used the previously determined parameters for $`T_{\mathrm{eff}}`$ and log g (Dreizler & Werner 1996), which took into account the strong influence of H, He and C on the structure of the atmosphere, for the same reasons discussed above.
The physical approximations as well as the atomic input data used in $`\mathrm{tlusty}`$ and $`\mathrm{pro2}`$ are very similar, but the codes themselves, the numerical techniques and the treatment of the atomic data are completely independent. It is, therefore, very interesting to compare the two sets of model atmospheres. Differences allow a reliable estimation of the systematic errors. It is very satisfying from the point of view of the modelers that these lie below other uncertainties, such as the flux calibration of the spectra. This is demonstrated in Table 3 and in Fig. 10, where the 1228Å–1252Å and 1338Å–1375Å regions are compared with synthetic spectra from $`\mathrm{pro2}`$ (top) and $`\mathrm{tlusty}`$ (bottom). The data are smoothed by a 0.1Å (fwhm) gaussian to reduce the noise in the observed spectrum. It is interesting to note, in Fig. 10, that the line broadening approximation adopted in $`\mathrm{pro2}`$ gives slightly better results than that used by $`\mathrm{tlusty}`$/$`\mathrm{synspec}`$. Vennes et al. (1998), using IUE and ORFEUS spectra of REJ0503-289, together with LTE models, determined abundances for C, N, and O which on average are over an order of magnitude greater than those determined here. In their study, only Si had a larger abundance, by a factor of 4, than our results.
## 4 An optical determination of effective temperature and surface gravity
The technique of using the H Balmer lines to estimate the effective temperatures and surface gravities of the H-rich DA white dwarfs is well-established (e.g. Kidder 1991; Bergeron Saffer & Liebert 1992). These measurements are made by comparing the observed line profiles with the predictions of synthetic stellar spectra, computed from theoretical model atmospheres, searching and interpolating a model grid to find the best match. An objective test, such as a $`\chi ^2`$ analysis, is used to determine the best-fit solution which also allows formal determination of the measurement uncertainties. While some questions have been raised about the limitations of the technique for the highest temperature DA white dwarfs, with heavy element contaminated atmospheres and weak Balmer lines (Napiwotzki 1992, Napiwotzki & Rauch 1994, Barstow Hubeny & Holberg 1998), it remains the most important and widely used means of determining the DA temperature scale.
In contrast, it has been more difficult to establish a similar standard technique for the DO white dwarfs. This has partly arisen from problems in obtaining good agreement between the models and the data. In particular, it has been difficult to find models that can match all observed $`\mathrm{He}\mathrm{ii}`$ profiles simultaneously (e.g. see Dreizler & Werner 1996). A particular problem has usually been obtaining a good fit to the $`\lambda 4686`$ line. Consequently, in determining $`T_{\mathrm{eff}}`$ and log g for their sample of DO white dwarfs (the first comprehensive survey), Dreizler & Werner (1996) were forced to rely on a simple visual comparison to select the best-fit model. There is no reason to suppose that the parameters determined for REJ0503-289 and the other DOs studied are seriously in error. However, it is certainly difficult to determine the possible uncertainties in the measurements. Here we develop a more objective approach to obtaining $`T_{\mathrm{eff}}`$ and log g by fitting the He lines present in the optical spectrum of REJ0503-289 in a manner similar to the technique applied to the DA white dwarf Balmer lines.
Several $`\mathrm{He}\mathrm{ii}`$ lines are visible in the optical spectrum of REJ0503-289 (Fig. 11), from 4339Å to 6560Å. In addition, there are important $`\mathrm{He}\mathrm{i}`$ lines at 4471Å and 5876Å, not detected in the spectrum, which are very sensitive to effective temperature and, therefore, should be included in any analysis. However, for this spectrum, the 5876Å line falls in a region of comparatively noisy data, providing a weaker constraint than the 4471Å feature. Hence, we only consider the latter line in this analysis. For this paper, we have adapted the technique used in our previous work (e.g. Barstow et al. 1998), splitting the spectrum into discrete regions spanned by the absorption lines for comparison with the synthetic spectra. The features included and the wavelength ranges needed to capture the complete lines are listed in Table 4. The complex $`\mathrm{C}\mathrm{iv}`$/$`\mathrm{He}\mathrm{ii}`$ blend is covered in a single section of data and the $`\mathrm{He}\mathrm{i}`$ 4471Å feature is incorporated with the $`\mathrm{He}\mathrm{ii}`$ 4542Å line.
We used the programme $`\mathrm{xspec}`$ (Shafer et al. 1991) to compare the spectral models with the observational data. $`\mathrm{xspec}`$ utilises a robust $`\chi ^2`$ minimisation routine to find the best match to the data. All the lines included were fitted simultaneously and an independent normalisation constant was applied to each, reducing the effect of any wavelength dependent systematic errors in the flux calibration of the spectrum. $`\mathrm{xspec}`$ interpolates the synthetic spectra linearly between points in the model grid. Any wavelength or velocity shifts were accounted for by allowing the radial velocity of the lines to vary (taking identical values for each line) during the fit. Once the best match to the velocity had been obtained, this parameter was fixed, being of no physical interest in this work. The carbon abundance was fixed at a single value of C/He$`=0.005`$ throughout the analysis. Provided the model corresponding to minimum $`\chi ^2`$ can be considered to be a ‘good’ fit ($`\chi _{red}^2<2`$: $`\chi _{red}^2=\chi ^2/\nu `$, where $`\nu `$ is the number of degrees of freedom), the uncertainties in $`T_{\mathrm{eff}}`$ and log g can be determined by considering the departures in $`\chi ^2`$ ($`\delta \chi ^2`$) from this minimum. For the two parameters of interest in this analysis (the variables $`T_{\mathrm{eff}}`$ and log g), a $`1\sigma `$ uncertainty corresponds to $`\delta \chi ^2=2.3`$ (see Press et al. 1992). It should be noted that this only takes account of the statistical uncertainties in the data and does not include any possible systematic effects related to the model spectra or data reduction process.
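As an illustration of the error estimate, the sketch below evaluates a $`\chi ^2`$ surface on a ($`T_{\mathrm{eff}}`$, log g) grid and reads off the $`1\sigma `$ region from $`\delta \chi ^2=2.3`$. The paraboloid standing in for the real $`\chi ^2`$ surface, and all numerical values, are illustrative assumptions; in practice the surface would come from the $`\mathrm{xspec}`$ fits described above.

```python
import numpy as np

# Illustrative chi^2 surface on the (Teff, log g) grid; in the real analysis the
# surface comes from the simultaneous xspec fits of the He line profiles.
teff = np.linspace(65000.0, 80000.0, 61)
logg = np.linspace(7.0, 8.0, 41)
T, G = np.meshgrid(teff, logg, indexing="ij")
chi2 = 150.0 + ((T - 72660.0) / 2000.0) ** 2 + ((G - 7.5) / 0.15) ** 2  # stand-in

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
chi2_min = chi2[i, j]
print("best fit:", teff[i], logg[j], "chi2_min =", chi2_min)

# 1-sigma joint region for two interesting parameters: delta chi^2 <= 2.3
inside = chi2 <= chi2_min + 2.3
print("Teff  1-sigma:", T[inside].min(), "-", T[inside].max())
print("log g 1-sigma:", G[inside].min(), "-", G[inside].max())
```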
Fig. 11 and Table 5 show the good agreement achieved between the best fit model and the data. The value of $`\chi _{red}^2`$ (1.49) is clearly indicative of a good match between model and data. However, inspection of the fit to the $`\mathrm{He}\mathrm{ii}`$ 4686Å line shows that the predicted line strength is weaker than observed, particularly in the line core. If the 4686Å line is ignored, the agreement between model and data improves, lowering $`\chi _{red}^2`$ to 1.38 (see Table 5). There is also an increase in the estimated temperature, from 72660K to 74990K, and a slight lowering of the surface gravity (by 0.1 dex).
It has recently been reported by Barstow et al. (1998) that the presence of significant quantities of heavy elements, in particular Fe and Ni, in the atmospheres of DA white dwarfs has a significant effect on the temperature scale. For example, a lower value of the effective temperature is measured from the Balmer line profiles when these elements are included self-consistently in the model calculations, in comparison with the results of pure H or H+He models. In the analysis of the He-rich star REJ0503-289, discussed here, we included a significant abundance of carbon but no other heavy elements in the theoretical models. However, it is known from earlier work (Barstow & Sion 1994; Barstow et al. 1996) that O, N and Si are definitely present, although at lower abundances (with respect to He) than C, and there were also hints at the possible presence of Fe and Ni.
To test the possible effect of these other heavy elements we extended the spectral grid, fixing log g at a single value of 7.5 but covering the original 65000K to 80000K temperature range. The nominal heavy element abundances incorporated are listed in Table 3. To assess the possible impact of treating all the heavy elements on the determination of effective temperature, the He-line analysis was repeated using the full heavy element models but with fixed values of the Fe/He and Ni/He abundances ($`10^{-5}`$ in both cases). A decrease of approximately 4% in the effective temperature is seen, but it should be noted that the experimental error bars overlap considerably.
## 5 Determination of heavy element abundances from the GHRS data
Since the effective temperature and surface gravity of REJ0503-289 are close to 70000K and 7.5 respectively, one of the points of the model atmosphere grid, we adopt these values when applying model calculations to this particular analysis. Fig. 1 shows the merged GHRS spectrum, described in section 2.2, together with a synthetic spectrum computed for the nominal C, N, O and Si abundances listed in Table 3 and with the Fe and Ni abundances fixed at $`10^{-5}`$. The synthetic spectrum has been convolved with a 0.042Å (fwhm) gaussian function to represent the instrumental response. The positions of all lines with a predicted equivalent width greater than or equal to 5mÅ are marked. Several strong lines from highly ionized species of C, N, O and Si are clearly visible, which are most likely to be of photospheric origin. For the most part these have already been identified in the IUE echelle spectra of this star (Barstow & Sion 1994; Holberg, Barstow & Sion 1998) but we list them here, together with their rest wavelengths, measured wavelengths and measured equivalent widths (Table 6). However, the improved signal-to-noise of the GHRS spectrum, compared to even the coadded IUE data, allows the detection of several new features, which are included in Table 6. Another important factor may be the absence of any so-called reseau marks in the GHRS, an important feature of the IUE spectra arising from the spatial calibration. Also visible is a single interstellar line of $`\mathrm{Si}\mathrm{ii}`$ at 1260.4221Å.
Apart from these very strongest lines, which are clearly detected, any other possible absorption features are close to the limits of detection imposed by the general signal-to-noise of the data. However, further inspection of Fig. 1 shows a number of coincidences between possible features and predicted $`\mathrm{Ni}\mathrm{v}`$ lines. Several of these observed features, at 1250.4Å, 1257.6Å (Fig. 1c) and 1266.4Å (Fig. 1d), are almost strong enough to constitute detections in their own right. On the basis of these features alone, the evidence for the presence of Ni in the photosphere of REJ0503-289 is not very strong. As a further test we coadded the eight $`\mathrm{Ni}\mathrm{v}`$ lines predicted to be the strongest ($`\lambda \lambda `$1244.23, 1250.41, 1252.80, 1253.25, 1254.09, 1257.66, 1264.62, 1266.40) in velocity space (Fig. 12). This technique has been applied to IUE data in the past to detect elements such as Fe and Ni, which do not have any particularly strong resonance transitions but large numbers of comparatively weak features (see e.g. Holberg et al. 1994). The procedure involves shifting the spectra (in this case the GHRS data) into a velocity frame of reference centred on the wavelength of a particular line and then summing and averaging the results for several lines. If apparent weak features are just random fluctuations of the noise, the coaddition process will tend to eliminate them whereas, if the features are real, summing the spectra will produce a more significant combined absorption line. The typical experimental uncertainties on the original coadded GHRS spectrum are 7 – 10%, whereas the scatter from data point to data point on the velocity coadded spectrum is around 2%.
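A minimal sketch of this velocity-space stacking is given below. The line list is the eight $`\mathrm{Ni}\mathrm{v}`$ wavelengths quoted above, while the synthetic spectrum, grid parameters and function names are illustrative assumptions.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def coadd_in_velocity(wave, flux, line_waves, vgrid):
    """Shift the spectrum into the velocity frame of each line, then average.

    Random noise averages down roughly as 1/sqrt(N), while a real absorption
    feature present at every line wavelength survives the stacking.
    """
    stack = [np.interp(vgrid, C_KMS * (wave - w0) / w0, flux) for w0 in line_waves]
    return np.mean(stack, axis=0)

# The eight strongest predicted Ni V lines (Angstrom), from the text above.
ni_v_lines = [1244.23, 1250.41, 1252.80, 1253.25, 1254.09, 1257.66, 1264.62, 1266.40]

# Illustrative spectrum: flat continuum with 8% noise (cf. the 7-10% quoted above).
rng = np.random.default_rng(0)
wave = np.linspace(1230.0, 1280.0, 8000)
flux = 1.0 + 0.08 * rng.standard_normal(wave.size)

vgrid = np.arange(-500.0, 501.0, 10.0)     # km/s
stacked = coadd_in_velocity(wave, flux, ni_v_lines, vgrid)
print(flux.std(), stacked.std())           # point-to-point scatter drops after stacking
```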
Fig. 12 shows the results of the coaddition of the 8 Ni lines. For comparison, the same procedure was carried out for the synthetic spectrum. An absorption feature is clearly detected in both the data and model, at approximately the same strength. This is clear evidence that Ni is present in the photosphere of REJ0503-289 at an abundance of $`10^{-5}`$ with respect to He, although the predicted, coadded line strength is a little stronger than the observation. Interestingly, while there are fewer Fe lines expected to be present in these spectral ranges and their predicted equivalent widths are typically smaller than those of the Ni lines, there are no similar coincidences where significant Fe lines are expected. However, the constraints placed on the Fe abundance by individual lines are not particularly restrictive, only implying an Fe abundance below $`10^{-5}`$ (see Fig. 1f). Again, coaddition of the nine strongest predicted Fe lines ($`\lambda `$1361.826, 1373.589, 1373.679, 1376.337, 1376.451, 1378.561, 1387.095, 1387.937, 1402.237) provides a more sensitive indication as to whether or not there is Fe present in the atmosphere (Fig. 13). However, Fe is not detected in this case, as there is no sign of any absorption feature. Comparing the coadded spectrum with models calculated for a range of Fe abundances from $`10^{-6}`$ to $`10^{-5}`$ (Fig. 13) gives an improved upper limit to the Fe abundance of $`10^{-6}`$.
## 6 Discussion
The availability of GHRS spectra of REJ0503-289, coupled with a new optical spectrum of the star, has revealed important information regarding the structure and evolution of this interesting DO white dwarf. We have used the optical spectrum to determine the temperature and surface gravity of the star; the results are broadly in agreement with earlier determinations from the original ESO optical observation (Barstow et al. 1994; Dreizler & Werner 1996) and from the ORFEUS far-UV spectrum published by Vennes et al. (1998).
Perhaps the most important part of this particular optical analysis is the more objective determination of the values of $`T_{\mathrm{eff}}`$ and log g, using a spectral fitting technique, and of their respective errors. Whether or not these results might be considered to be in agreement or disagreement with the results of Vennes et al. (1998) depends somewhat on how we choose to make the measurement, with or without the $`\mathrm{He}\mathrm{ii}`$ 4686 line and using models with or without the elements heavier than H, He or C. In fact such a comparison is probably not particularly instructive, as Vennes et al. used LTE models (compared to our non-LTE calculations) including only H and He. What is important is that we find that the inclusion of heavy elements in the models may have an influence on the outcome of the temperature determination, as it does for the hot DA white dwarfs. Significant abundances of N, O, Si, Fe and Ni (abundances estimated from the GHRS spectra), in addition to the H, He and C treated in the simpler models, lower the value of $`T_{\mathrm{eff}}`$ by approximately 2500K. Nevertheless, a detailed study of any DO star probably needs to be entirely self-consistent, combining temperature/gravity and abundance determinations using spectra from at least the visible and UV wavelength ranges. We note that, in this analysis, a higher Fe/He abundance than observed was included in the models used. Therefore, the observed $`T_{\mathrm{eff}}`$ change should only be regarded as an upper limit, and it must be remembered that systematic errors may be of similar magnitude.
Detections of nitrogen, oxygen and silicon in the far UV spectra have already been reported by other authors (e.g. Barstow et al. 1996; Barstow & Sion 1994; Dreizler & Werner 1996), while carbon is clearly seen in both UV and visible bands. However, while the IUE data may have hinted at the presence of Fe and/or Ni (see Barstow et al. 1996), we are able to demonstrate for the first time, using the GHRS data, that the star really does contain significant quantities of nickel. This is revealed initially in marginal detections of the strongest individual Ni lines but clearly confirmed when the eight strongest Ni lines are coadded in velocity space. This is the first detection of Ni in a non-DA white dwarf.
Nickel has also been observed in a number of very hot H-rich DA white dwarfs (e.g. Holberg et al. 1994; Werner & Dreizler 1994) and recently reported for one other hot DO (PG0108+101, Dreizler 2000), but it is always associated with the presence of iron. Furthermore, the measured Fe abundance is typically larger than that of Ni, by factors between 1 and 20. Hence, it is very surprising, given the detection of Ni in REJ0503-289, that we find no evidence at all of any Fe. Not even coadding the regions of the strongest predicted Fe lines in velocity space reveals the slightest hint of an absorption feature. Indeed, this technique allows us to place a tighter upper limit on the abundance of Fe, at $`10^{-6}`$, than imposed by the individual lines. Thus, the abundance of Ni is greater than the abundance of Fe in REJ0503-289, in the opposite sense to what is observed in the DA white dwarfs, the Fe/Ni ratio being about one order of magnitude lower than found in those stars. Interestingly, the only other DO star with any iron group elements is PG1034+001. KPD0005+5106 was reported to have Fe VII features, but GHRS and coadded IUE spectra do not show these features (Werner et al. 1996; Sion et al. 1997). Dreizler & Werner (1996) find log(Fe/He) $`=-5`$ and log(Ni/He) $`<-5`$ in PG1034+001. For PG0108+101, Dreizler (2000) gives log(Fe/He) $`=-4.3`$ and log(Ni/He) $`=-4.3`$. Thus, the only other DOs with detectable iron and nickel have Fe/Ni ratios $`>1`$, like those of the DA stars.
The relative abundances observed in REJ0503-289 are clearly in disagreement with the cosmic abundance ratio of Fe/Ni ($`\approx 18`$). Unfortunately, the predictions of radiative levitation calculations do not offer much help in explaining the observations. First, while Fe and Ni levitation has been studied in DA white dwarfs, the predicted abundances are much larger than observed when all possible transitions (from the Kurucz line lists, Kurucz 1992) are included in the calculations (Chayer et al. 1994). Interestingly, better agreement is achieved, for Fe at least, when the subset of lines available in TOPBASE (Cunto et al. 1993) is used and after several physical improvements to the calculations (Chayer et al. 1995). Until recently, no similar calculations were available for Ni. Although much of the radiative levitation work has concentrated on DA white dwarfs, Chayer, Fontaine & Wesemael (1995) did deal with radiative levitation in He-rich atmospheres, but only considering elements up to and including Fe. The predicted Fe abundance for a star with the temperature and gravity of REJ0503-289 is in excess of $`10^{-4}`$, two orders of magnitude above the level observed by us.
Recently, Dreizler (1999, 2000) has calculated NLTE model atmospheres taking the radiative levitation and gravitational settling self-consistently into account. In agreement with the results of Chayer, Fontaine & Wesemael (1995) the predicted iron abundance is far in excess of the observed one. The observed Fe/Ni ratio, however, can be reproduced qualitatively by these new models, which predict an excess of the nickel abundance over the iron abundance by a factor of three, but it is also clear that the stratified models do not work very well for the DOs, which are best represented by chemically homogeneous calculations.
This clear anomaly mirrors the comparison between predicted and observed abundances for most of the heavy elements (see Table 3). Only the observed abundance of oxygen is close to its predicted value. Consequently, it seems reasonable to conclude that the theoretical calculations are deficient in some way. Chayer et al. (1995) make very clear statements about what physical effects are considered in their work and what, for various good reasons, they do not deal with. Perhaps the most important limitation is that the current published results for He-rich stars are for static atmospheres, whereas there is some evidence for active mass-loss in He-rich objects, including REJ0503-289 (Barstow & Sion 1994). On the other hand, recent studies of DA white dwarf atmospheres show evidence of heavy element stratification (Barstow et al. 1999; Holberg et al. 1999; Dreizler & Wolff 1999). This calls into question the validity of trying to compare ‘abundances’ determined from homogeneous models with the radiative levitation predictions, where depth dependent elemental abundances are a direct result of the calculations.
## 7 Conclusion
We have presented the first direct detection of nickel (Ni/He$`=10^{-5}`$) in the photosphere of the hot DO white dwarf REJ0503-289, together with a new determination of $`T_{\mathrm{eff}}`$ and log g utilising an objective spectral fitting technique. Nickel has been seen previously in the atmospheres of hot H-rich white dwarfs, but this is one of the first similar discoveries in a He-rich object, detection of Ni in PG0108+101 having also been recently reported by Dreizler (2000). It is also one of a very small number of detections of Fe group elements in any of the DO white dwarfs. A careful search for the presence of Fe in the star only yields an upper limit of Fe/He$`=10^{-6}`$, implying an Fe/Ni ratio a factor of 10 lower than seen in the H-rich white dwarfs. Although there are no published theoretical predictions, from radiative levitation calculations, for the abundance of Ni in He-rich photospheres, the observed Fe abundance is some two orders of magnitude below that expected. An explanation of the observed heavy element abundances in this star clearly requires new studies of the various competing effects that determine photospheric abundances, including radiative levitation and possible mass loss via winds. In addition, model atmosphere calculations need to consider the possible effect of depth-dependent heavy element abundances. Some work of this nature has already been undertaken for DA white dwarfs but is only just beginning for He-rich objects. This work appears to be necessary to explain the continued problem of the inconsistency between the results of the far-UV analyses and the EUV spectrum, which cannot be matched by a model incorporating the abundances measured here at the optically determined temperature, the predicted EUV flux level exceeding that observed by a factor of 2–3. However, the early indication of studies using self-consistent stratified models is that they do not match the observations very well.
## Acknowledgements
The work of MAB was supported by PPARC, UK, through an Advanced Fellowship. JBH and EMS wish to acknowledge support for this work from NASA grant NAG5-3472 and through grant GO6628 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, incorporated under NASA contract NAS5-26555. HST data analysis in Tübingen is supported by the DLR under grants 50 OR 96029 and 50 OR 97055. Data analysis and interpretation were performed using NOAO $`\mathrm{iraf}`$, NASA HEASARC and Starlink software. We would like to thank the support scientists at the Space Telescope Science Institute for their help in producing successful observations of REJ0503-289.
# W UMa-type Binary Stars in Globular Clusters
## 1 Introduction
Our thinking about close binary stars in globular clusters (GC’s) has undergone a tremendous change in the last decade. Once, such binaries were thought to be totally absent in the GC environment; now they appear to be of great importance to the dynamical evolution of the clusters. A large volume of research on the dynamical effects of the binary systems on the cluster evolution, including important and complex inter-relations with the evaporation and tidal-stripping effects, has been summarized in several reviews starting with Hut et al. (1992), with updates in Sections 9.5 and 9.6 of Meylan & Heggie (1997) and in McMillan et al. (1998). Solar-type contact binaries (also known as W UMa-type variable stars) contain the smallest amounts of angular momentum that binary systems made of Main Sequence components can have. They represent the last stages of the angular momentum loss (AML) evolution of primordial binaries or are one of the products of the dynamical intra-cluster interactions. Together with the RR Lyr-type pulsating stars on the horizontal branch and the SX Phe-type pulsating stars among Blue Stragglers, the W UMa-type binaries are the most common type of variables in the GC’s.
The number of W UMa-type binaries in GC’s has expanded in the recent four years from 24 (Mateo 1996a) to the current number of 86. This paper attempts to integrate and analyze the available data for these variables in order to establish the most essential results and to guide further research. The analysis is similar in its goals to the study of the Galactic Disk systems which compared the W UMa binaries in the galactic field – as seen in the direction of the galactic Bulge in the OGLE survey – with those in several old open clusters (Rucinski 1998b = R98). Searches for contact binaries in GC’s are much more difficult than in open clusters. They require – on top of a generous allocation of observing time permitting variability detection and monitoring over several nights – at least moderate-size telescopes located in sites with excellent seeing. Cores of some clusters remain too compact for photometry of individual stars even in perfect seeing; such clusters must be observed with the Hubble Space Telescope, for which continuous monitoring of variability over long periods of time is difficult to arrange. It is expected, however, that new methods of analysis of difference images, such as those developed by Alard & Lupton (1998) and Alard (1999), will result in substantial progress in detection and analysis of variable stars in very dense fields close to the cores; one of the first applications to GC’s (RR Lyr stars in the core of M5) is by Olech et al. (1999).
Because of the action of two possibly mutually reinforcing types of processes, dynamical interactions and magnetic braking, rather than one – as in the Galactic Disk and in open clusters, where only the latter mechanism would be sufficiently effective – one may expect a high frequency of occurrence of the contact systems in globular clusters, possibly even higher than in the Disk. It is, in fact, surprisingly high in the Disk: As shown in R98, the Disk systems show an increase in the frequency of occurrence among F–K dwarfs over time, in the accessible range of 0.7 – 7 Gyr, reaching a spatial frequency as high as about one such binary (counted as one object) per 80 – 100 single stars in the Galactic Disk. Contact systems with spectral types earlier than about middle A-type and orbital periods longer than 1.3 – 1.5 days are less common, with a currently un-measurable frequency of occurrence, as shown in Rucinski (1998a); this long-period cutoff may be a function of the parent population because contact binaries with periods up to 2.5 – 3 days are known to exist in the LMC (Rucinski 1999). The high frequency of contact binaries, with a gradual increase with age in old open clusters, is consistent with the prolonged magnetic-wind braking AML process acting over a time scale of some 1 – 5 Gyr (R98) and producing relatively long-lived contact systems.
Contrary to expectations based on the above reasoning, the preliminary indications from individual studies that reported discoveries of contact binaries in GC’s (and are cited in this paper) do not confirm the high frequency of occurrence in these clusters. As several authors of such papers already remarked – but bearing in mind the tremendous technical difficulties – the frequency appears to be relatively low, at the level of a small fraction of a percent, and some clusters do not seem to have W UMa-type binaries at all. An attempt to assess this matter is presented in this paper. It is argued that, at this moment, we cannot really say much about the frequency of occurrence of the contact systems on the Main Sequence, below the Turn-Off Point; however, the frequency among the Blue Stragglers appears to be high, some 2 to 3 times higher than among the stars of the Galactic Disk.
The current paper consists of a description of the sample of clusters with contact binaries in Section 2, followed by the sample itself in Section 3. The observed metallicity effects affecting the absolute-magnitude calibration (which is used to select the members) and affecting the observed properties of the systems are described in Sections 4 and 5. The color-magnitude and the period-color relations are discussed in Sections 6 and 7. Details concerning systems with EB-type light curves, Blue Straggler contact systems and the frequency of occurrence are given in Sections 8–10. Conclusions are stated in Section 11.
## 2 The cluster sample
The sample of GC’s surveyed deep enough to include Main Sequence stars, below the Turn-Off Point (TOP), currently consists of 14 clusters. The sample is quasi-random in the sense that several authors contributed the data using their own preferences, but that practically all data have come from ground-based telescopes. Dr. Kałużny and his collaborators, who contributed most of the results, selected primarily the nearest clusters, with moderately developed cores (permitting photometry close to the centers) and avoiding those with small galactic latitudes, within $`|b|<10^{\circ }`$. The essential parameters characterizing the clusters are given in Table 1, with clusters arranged according to the NGC number. To insure uniformity of these parameters, they have been taken from the database of Harris (1996), version June 22, 1999, which is available at: http://physun.physics.mcmaster.ca/Globular.html. In Table 1 we give the following parameters: the galactic coordinates $`l`$, $`b`$ in degrees, the galacto-centric distance $`R_{GC}`$ in kpc, the reddening $`E_{B-V}`$, the observed distance modulus $`(m-M)_V`$, the metallicity parameter $`[Fe/H]`$ and the concentration parameter $`c`$ (for collapsed cores, $`c=2.5`$). The galacto-centric distances span a wide range, $`3.5<R_{GC}<18.5`$ kpc, while the metallicities occur within the representative range $`-2.22<[Fe/H]<-0.73`$. The sample is dominated by clusters with moderate and low concentration; clusters with $`c\simeq 2.5`$ are under-represented. Eight clusters have galactic latitude $`|b|<20^{\circ }`$, indicating a possibility of large interstellar extinction and heavy contamination by Milky Way stars; for three clusters, NGC 4372, NGC 6121 and NGC 6441, the reddening is large, $`E_{B-V}>0.3`$.
The cluster sample requires some comments:
1. While Table 1 lists 14 clusters, the entry for NGC 5904 (M5) actually reports a null result because all the systems suggested as contact binaries by Yan & Reid (1996) turned out to be spurious detections, as has been shown by Kałużny et al. (1999). The cluster has been monitored for variability of its stars by several investigators (see references to Table 1).
2. NGC 4372 is a very important cluster because of its low metallicity and the large number of systems discovered in its direction. Unfortunately, because of its low galactic latitude of $`b=-10^{\circ }`$, it has a large and patchy reddening. The discovery paper by Kałużny & Krzeminski (1993) gives, in its Table 3, the observed data in $`(B-V)`$ and $`(V-I)`$, but the patchiness-corrected data are given only for the former color index. The corrections have been calculated for $`(V-I)`$ assuming the relation $`\mathrm{\Delta }E_{V-I}=1.24\mathrm{\Delta }E_{B-V}=1.24[(B-V)_c-(B-V)]`$. The same slope is used throughout this paper in deriving the values of $`E_{V-I}`$ from $`E_{B-V}`$. On top of the patchy reddening, the values of the mean reddening and of the distance modulus are very uncertain for this cluster. While Kałużny & Krzeminski assume $`E_{B-V}=0.48`$ and $`(m-M)_V=14.8`$, the Harris database (Table 1 here) quotes $`E_{B-V}=0.39`$ and $`(m-M)_V=15.01`$. For consistency, the Harris set has been used here, but – as we discuss in the next Section – this choice makes a very important difference in ascertaining membership of systems detected in this cluster.
3. The color-magnitude diagram of the core of NGC 6752 has been studied with the Hubble Space Telescope (Rubenstein & Bailyn 1997). It indicates a relatively high frequency of binary stars, at a level of 15 – 38 percent in the inner core, but below 16 percent beyond the core. The relative frequency must therefore be a strong function of the radial distance from the cluster center. Unfortunately, searches for variable stars in the clusters analyzed in this paper are not uniform in this respect: Some clusters were observed with the inclusion of the cores, some were observed only at some distance from the center, where crowding was assumed to be tolerable. This casts a large uncertainty on any considerations involving numbers of contact systems and their frequency of occurrence relative to other stars.
## 3 Cluster members
All contact binary systems discovered in globular clusters are listed in Table 2. The Fourier analysis of the light curves was not used to verify the W UMa-type or the EW shape of the light curves (as in the selection of the OGLE sample in Rucinski 1997a) because of the partial coverage of some of the light curves, which could produce incorrect values of the Fourier coefficients; the general appearance of the light curve and the original classification by the discoverers were the only criteria used here. Contact systems with unequally deep eclipses (EB-type light curves) have been retained; they are marked as such in Table 2. Systems having light curves suggesting detached components (EA type) are not considered here.
The empty entries in Table 2 are due to the fact that most of the photometric searches have been done either in the $`B`$ and $`V`$ band-passes or in the $`V`$ and $`I_C`$ band-passes (the subscript indicating that the $`I`$ band is of the Cousins system will not be used from now on). Thus, the available data split into two sets, forcing us to discuss all relations and all properties in parallel in $`V`$ and $`(B-V)`$ and in $`V`$ and $`(V-I)`$. We will call these the $`BV`$-set and the $`VI`$-set. Only one cluster was observed in all three bands, NGC 4372 (Kałużny & Krzeminski 1993), permitting consideration of two photometric indices, $`(B-V)`$ and $`(V-I)`$, for 8 contact systems in this cluster. As luck would have it, this is the cluster which is the most heavily reddened, with a highly patchy and uncertain extinction.
Table 2 lists the variables with the names and designations as assigned in the discovery papers. For each system, we give the orbital period in days, then the maximum brightness $`V`$, $`(B-V)`$ and/or $`(V-I)`$ and the total variability amplitude $`A_V`$. The last columns give our results on the membership of the parent clusters and the variability/membership type (see below). Assignment of the membership has been performed by comparison of “observed” absolute magnitudes, derived from the distance modulus: $`M_V^{obs}=V-(m-M)_V`$, with those derived from the calibrations, $`M_V^{BV}=M_V(\mathrm{log}P,B-V)`$ or $`M_V^{VI}=M_V(\mathrm{log}P,V-I)`$, as described in more detail below. A simple criterion for membership was used, calling systems with deviations $`\mathrm{\Delta }M_V=M_V^{obs}-M_V^{cal}`$ smaller than 0.5 mag the Class 1 members and those with $`0.5<|\mathrm{\Delta }M_V|<1.0`$ the Class 2 members (here $`M_V^{cal}`$ takes the meaning of $`M_V^{BV}`$ or $`M_V^{VI}`$, depending on which one has been available). Deviations larger than one magnitude were assumed to signify that the binary is not a member but a foreground or background projection.
The calibrations used in this paper have been:
$`M_V^{BV}=-4.44\mathrm{log}P+3.02(B-V)_0+0.12`$ (1)
$`M_V^{VI}=-4.43\mathrm{log}P+3.63(V-I)_0-0.31`$ (2)
The $`M_V^{BV}`$ calibration is based on the Hipparcos data (Rucinski & Duerbeck 1997), while the one for $`M_V^{VI}`$ was developed and served well for the analysis of all the OGLE data (Rucinski 1997a, 1997b, 1998a, 1998b). No allowance for the lowered metallicity of GC’s has been made in the calibration. We discuss this important issue in the next Section 4.
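The membership criterion described above reduces to a few lines of arithmetic. The sketch below implements it with Equations 1 and 2 and the $`E_{V-I}=1.24E_{B-V}`$ reddening slope from Section 2; the function names and the example values are illustrative assumptions, not entries from the tables.

```python
import math

def m_v_bv(period, bv0):
    """Eq. 1: calibrated M_V from the period (days) and de-reddened (B-V)."""
    return -4.44 * math.log10(period) + 3.02 * bv0 + 0.12

def m_v_vi(period, vi0):
    """Eq. 2: calibrated M_V from the period (days) and de-reddened (V-I)."""
    return -4.43 * math.log10(period) + 3.63 * vi0 - 0.31

def membership(v_max, period, mod_v, e_bv, bv=None, vi=None):
    """Classify a system from Delta M_V = M_V(obs) - M_V(cal)."""
    m_obs = v_max - mod_v                         # "observed" M_V from the cluster modulus
    if bv is not None:
        m_cal = m_v_bv(period, bv - e_bv)
    else:
        m_cal = m_v_vi(period, vi - 1.24 * e_bv)  # E_(V-I) = 1.24 E_(B-V)
    dm = m_obs - m_cal
    if abs(dm) < 0.5:
        return dm, "Class 1 member"
    if abs(dm) < 1.0:
        return dm, "Class 2 member"
    return dm, "non-member (projection)"

# Illustrative numbers only, not a system from Table 2:
print(membership(v_max=17.8, period=0.30, mod_v=13.5, e_bv=0.05, bv=0.75))
```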
The deviations $`\mathrm{\Delta }M_V`$ are shown in graphical form in Figure 1, versus the orbital period and color index, for both color-index sets. While many systems do fall within the band of $`\mathrm{\Delta }M_V`$ close to zero, a large fraction of systems are actually foreground projections onto the fields of the observed clusters. This fully agrees with the high frequency of the W UMa-type binaries in the Galactic Disk. The data based on the $`(V-I)`$ index show a somewhat better consistency, with smaller scatter in $`\mathrm{\Delta }M_V`$ and with a better definition of the group of the Class-1 members. This may be due to a weaker dependence of this index on the interstellar reddening and/or on the metallicity, but may possibly be simply due to the lesser photometric difficulties of observing red stars in the $`V`$ and $`I`$ bands than in the $`B`$ and $`V`$ bands. As pointed out to the author by Dr. Kałużny (private communication), selection of the best photometric bands is not an easy matter, so that $`(V-I)`$ may not always be preferable to $`(B-V)`$: The upper MS is better observable in $`B`$ than in $`I`$ because the numerous red dwarfs from the lower MS produce a strong background in $`I`$; the $`I`$ band is also inconvenient for red stars because Asymptotic Giant Branch stars are usually strongly over-exposed and prevent photometry of other stars close to cluster centers.
The two penultimate columns of Table 1 give the numbers of contact systems in the GC’s. $`n_{det}`$ is the total number of systems detected in the direction of a cluster, while $`n_{C1}`$ and $`n_{C2}`$ are the numbers of Class-1 and Class-2 members, respectively. $`n_{BS}`$ is the number of Blue Stragglers (of both classes). $`N_{BS}`$ are very approximate estimates of the numbers of Blue Stragglers which were monitored for variability in the GC’s. We will discuss these data later, in Sections 9 and 10. The approximate locations of the color indices at the Turn-Off Point are marked by vertical broken lines in Figure 1. Note that practically all systems to the blue of these lines (i.e. the Blue Stragglers) are members of the clusters.
The cluster-member selection process described above assumes that the calibration formulae, Eq. 1 and Eq. 2, are applicable to contact systems in globular clusters. This is verified by assuming that the most typical systems which have been detected in a given direction are actually genuine members of the clusters. In effect, we require that the calibrations reproduce the modal (most probable) values of the deviations $`\mathrm{\Delta }M_V`$. The histograms of the deviations are shown in Figure 2. Disregarding the complication of the metallicity dependence of $`M_V^{cal}`$, which will be discussed in the next section, we can observe the following facts:
1. There are 35 Class-1 systems in the sample, that is roughly 1/2 of the total.
2. The number of the cluster members would increase by four if we add Class-2 systems which have blue color indices and short periods characteristic of Population II Blue Straggler systems (Section 9).
3. We have no good argument to claim that any of the eight remaining Class-2 systems is a cluster member.
4. There are 21 definite non-members in the sample, that is about 1/3 of the total number. All but one are foreground Disk systems.
5. The only obvious background system is V6 in NGC 6752. The $`\mathrm{\Delta }M_V`$ deviation of about 2.5 mag. suggests that the system is some 3 times further away than the cluster, at a distant periphery of the Galaxy at a distance of some 15 kpc.
6. As discussed in Kałużny et al. (1998a), there exists an ambiguity in the orbital period for the only contact system detected in M3, V238. It appears that neither of the acceptable periods places the system in the cluster. The two possibilities are joined by a dotted line in Figure 1. Dr. Kałużny (private communication) suspects – from analysis of the individual CCD images – that blending with a red giant is the cause of the photometric difficulties with V238.
## 4 Metallicity effects in the $`M_V`$ calibrations
The cluster-selection process requires a clear conceptual separation of the effects of lowered metallicity on the simplified $`M_V`$ calibrations given by Equations 1 and 2 (they are called here $`M_V^{cal}`$ or specified as $`M_V^{BV}`$ or $`M_V^{VI}`$), which we use for selecting the cluster members, from the effects genuinely influencing the observed binary properties, such as the absolute magnitudes $`M_V^{obs}`$ (derived from cluster moduli), the de-reddened color indices or the orbital periods. Here we concentrate on the effects on $`M_V^{cal}`$, solely from the observational point of view, without going into the details of how genuine properties are affected; this will be discussed in the next Section 5. We note that once we select a sample of the cluster members, we will discuss only the directly observed quantities; the values of $`M_V^{cal}`$ will not be used from this point on.
In Rucinski (1995) arguments were presented that the $`M_V^{cal}=M_V(\mathrm{log}P,\mathrm{color})`$ calibrations require small but significant corrections for metallicity variations. The corrections would reflect the blue colors of Population II stars, which have low atmospheric blanketing due to the weak spectral lines of metals. It was argued that while the period term in Equations 1 and 2 would account for any differences in the system size, the color-index term would require a correction as a proxy for the effective temperature. The data and the available calibrations were very preliminary at that time; the current material is much richer, so that the need for the corrections can be re-evaluated. This matter is very closely linked to the selection of the cluster members, so we must explain the details of this process.
Figure 3 shows the same deviations $`\mathrm{\Delta }M_V`$ as in Figure 1, but this time plotted versus $`[Fe/H]`$ for the clusters. The slanting lines give the metallicity corrections $`\delta M_V^{BV}=-0.3[Fe/H]`$ and $`\delta M_V^{VI}=-0.12[Fe/H]`$, as suggested in Rucinski (1995). To visualize how the corrections would impact the membership selection process, the $`\mathrm{\Delta }M_V`$ deviations – with the metallicity corrections applied – are shown as broken-line histograms in Figure 2.
These figures contain the only available information on the need for the corrections and on how their use would influence the selection of the cluster members. The situation is relatively simple for the $`VI`$ set: because of the well-known low sensitivity of this color index to metallicity variations, the current data neither strongly suggest nor reject the need for the metallicity corrections, although the case of no corrections seems slightly preferable; for simplicity, we assume that they are not needed. The data for the $`BV`$ set show a large scatter in the band of the expected members, and it is difficult to decide whether the corrections are really needed. The crucial clusters for resolving the problem for the $`BV`$ set are the two clusters with the lowest metallicities, $`[Fe/H]<-2`$: NGC 4372 and NGC 5466.
As remarked before, NGC 4372 is seen at low galactic latitude through patchy interstellar extinction. However, the cluster was observed in $`BVI`$, so two sets of color indices exist and a consistency check is in principle possible. Assuming the data of Harris’ database (as given in the table of cluster data), the $`M_V^{VI}`$ calibration indicates that none of the contact systems can be classified as a Class-1 member, while the $`M_V^{BV}`$ calibration suggests that one system, V4, is a Class-1 member. However, if we follow the assumptions of Kałużny & Krzeminski (1993), $`E_{BV}=0.48`$ and $`(m-M)_V=14.8`$, then large shifts in $`\mathrm{\Delta }M_V`$, by 0.49 and 0.61 mag. for the two color-index sets, occur: while $`M_V^{obs}`$ becomes fainter by $`+0.21`$ mag., the predicted $`M_V^{BV}`$ and $`M_V^{VI}`$ become brighter by $`0.28`$ and $`0.40`$ mag. As a result, all data points for this cluster slide down in the figure by the amounts shown there by arrows. While for the Harris data only the system V4 would be a Class-1 member in the $`BV`$ set (without confirming evidence from the $`VI`$ set), now the systems V4, V16 and V22 would be Class-1 members for the $`BV`$ set and V5, V16 and V22 would be Class-1 members for the $`VI`$ set. One then gains in consistency between the two sets for V16 and V22, but the matter of membership remains unclear for V4 and V5. It is obvious that only three or four systems among the eight might be members of NGC 4372, but we cannot be absolutely sure which ones. In this situation, we have taken a conservative approach and conclude that the data for NGC 4372 are too uncertain to establish the membership of the systems in this cluster; this cluster cannot tell us much about the need for a metallicity correction in the calibration. We note that, judging by its period–color combination, the system V22 is a genuine Blue Straggler and, irrespective of how large its deviation $`\mathrm{\Delta }M_V`$ is, it is almost certainly a member of NGC 4372 (see Section 9).
NGC 5466, whose metallicity of $`[Fe/H]=-2.22`$ is even more extreme than that of NGC 4372, follows the Hipparcos calibration for the Disk stars (Rucinski & Duerbeck (1997)) very well, without any metallicity correction. The two contact binaries must be genuine members simply because at the galactic latitude of $`b=+74^{}`$ the chances of having Disk population stars within the cluster are practically zero. Also, both stars are exceptionally blue, belonging to the Blue Straggler group of contact binaries (see Section 9); equally blue systems occur very rarely in the Galactic Disk.
Guided mostly by the case of NGC 5466 and by the resulting simplicity of the assumption, we conclude that the $`M_V`$ calibrations established for $`[Fe/H]=0`$ apparently work well for low values of metallicity and that $`[Fe/H]`$ corrections in the expressions for $`M_V^{cal}`$ are apparently not needed. As we will see in the next sections, the color indices of the low-metallicity systems are definitely much bluer than those of Disk systems, so that – without any corrections – one would expect values of $`M_V^{cal}`$ indicating artificially higher luminosities. Since this is not observed, some other metallicity-dependent factor must provide a compensating effect through the period term (this may be, for example, the unaccounted-for influence of the mass ratio). Concerning the previous, apparently erroneous result in Rucinski (1995), we remark here that the correction of $`0.3\times [Fe/H]`$ for the $`(BV)`$-based calibration was suggested for an old version of the $`M_V^{BV}`$ calibration, which has been superseded by the much better Hipparcos calibration. The corresponding correction for the $`(VI)`$ calibration of $`0.12\times [Fe/H]`$ was suggested for consistency with the one for $`(BV)`$, but it was always recognized that it was smaller than the measurement and definition uncertainty of the calibration itself.
## 5 Expected effects of low metallicity
In summarizing the main effects of lowered metallicity, we discuss two effects which manifest themselves very differently: bluer atmospheres (at the same effective temperature) and smaller stellar sizes.
Decreased metallicity influences the atmospheric structure in that metal lines produce less blanketing, so that stars become bluer. Only this effect was discussed in Rucinski (1995), together with its possible influence on the $`M_V`$ calibrations. For consistency with the previous results, we use the same relations between the color-index changes and $`[Fe/H]`$ as evaluated from the models of Buser & Kurucz (1992). The expected changes in the color indices $`(BV)`$ and $`(VI)`$ are given for a MS star atmosphere at $`T_{eff}=5000`$ K in the accompanying table. Obviously, several qualifications may be in order here: single-star model-atmosphere results may be inapplicable to magnetically active contact binaries, and the color-index data have been calculated for spherical stars, whereas strong and variable limb-darkening effects are always present in contact binaries.
Low-metallicity systems are expected to have smaller dimensions and – for the same contact geometry as for Galactic Disk systems – should have shorter periods than Population I systems. In what is currently the sole study of the properties of Population II contact binaries, Webbink (1979) stressed the influence of the prolonged angular-momentum loss for such old objects. The author remarked on the smaller sizes of such systems relative to Population I systems, but did not discuss this point any further. Some guidance on the expected effect can be found in the mass–radius relation and its dependence on the stellar metallicity. For a fixed mass, Kepler’s law enforces $`\mathrm{log}P=\mathrm{const}+(3/2)\mathrm{log}A`$. If the geometry of contact is the same irrespective of the chemical composition, so that the relative sizes of the components, $`r_i=R_i/A`$, are independent of metallicity, then smaller stellar sizes $`R`$ should lead to a smaller $`A`$ and to shorter orbital periods. Again, a qualification may be in order: the period changes may depend on metallicity in a much more complex way than through simple radius scaling, because the internal structure of Population II contact binaries does not have to be identical to that of Population I systems.
The mass–radius relation for low-metallicity stars is currently a subject of very lively discussion, stimulated by the distance determinations for subdwarfs provided by the Hipparcos mission. One of the most recent theoretical investigations of stellar models with varying abundances is by Castellani et al. (1999). The absolute-magnitude calibration for stars with $`T_{eff}=5000`$ K (Eq. (1) in that paper) can be re-written into a metallicity–radius dependence: $`\mathrm{\Delta }\mathrm{log}R=0.158[Fe/H]+0.035[Fe/H]^2-0.485\mathrm{\Delta }Y`$, with the solar abundance assumed to be $`Z_{\odot }=0.0169`$ and $`[Fe/H]=\mathrm{log}Z-\mathrm{log}Z_{\odot }`$. The size of the correction for the helium-abundance changes, $`\mathrm{\Delta }Y`$, which accompany the metallicity changes as the stellar population ages, is a difficult matter. The authors discuss the large uncertainty in the ratio $`C=\mathrm{\Delta }Y/\mathrm{\Delta }Z`$, which is currently very poorly known; some observational results suggest $`C=3\pm 2`$, while the authors consider $`C`$ of about 5 – 6. The metallicity–radius relation is shown in the corresponding figure for three values of $`C`$.
With the expected values of $`\mathrm{\Delta }\mathrm{log}P=(3/2)\mathrm{\Delta }\mathrm{log}R`$ one can compute the expected variations of the absolute magnitude. We have a choice here: we can simply assume that the luminosity scales with the square of the radius, or we can use the $`\mathrm{log}P`$ terms in Equations 1 and 2. The former choice would suggest $`\mathrm{\Delta }M_V=-5\mathrm{\Delta }\mathrm{log}R`$, while the latter gives a steeper dependence, $`\mathrm{\Delta }M_V=-4.44\times (3/2)\mathrm{\Delta }\mathrm{log}R`$ (the period-term coefficients are basically identical for both color-index sets). In what follows, we will use the second expression for consistency with the adopted expressions for $`M_V^{cal}`$, observing that the steepness of the period term (which probably hides many unaccounted-for period-dependent effects) may actually explain the unexpected absence of the $`[Fe/H]`$ term in the equations giving $`M_V^{cal}`$.
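The chain of scalings just described is easy to evaluate numerically. The sketch below assumes $`\mathrm{\Delta }Z=Z_{\odot }(10^{[Fe/H]}-1)`$ when converting $`C=\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ into $`\mathrm{\Delta }Y`$; the coefficients are those quoted above.

```python
# Expected radius, period and absolute-magnitude shifts for metal-poor
# contact binaries (a sketch of the Section 5 estimates).
Z_SUN = 0.0169

def dlog_r(feh, c_ratio):
    """Castellani et al. (1999) relation, with Delta Y = C * Delta Z (assumed)."""
    delta_y = c_ratio * Z_SUN * (10.0**feh - 1.0)   # Delta Z = Z - Z_sun
    return 0.158*feh + 0.035*feh**2 - 0.485*delta_y

for feh in (-0.7, -1.5, -2.2):
    for c_ratio in (1.0, 3.0, 6.0):
        dr = dlog_r(feh, c_ratio)
        dlogp = 1.5 * dr                  # Kepler's law at fixed relative radii
        dmv = -4.44 * dlogp               # period term of the M_V calibration
        print(f"[Fe/H]={feh:+.1f}  C={c_ratio:.0f}:  "
              f"dlogR={dr:+.3f}  dlogP={dlogp:+.3f}  dM_V={dmv:+.2f} mag")
```

For $`[Fe/H]=-2.2`$ and a small $`C`$ this gives $`\mathrm{\Delta }M_V+1`$ mag, comparable to the roughly one-magnitude downward shift of the sequence noted in the next section.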
## 6 Color-magnitude diagram
The color-magnitude diagrams for the sample of GC members display the observed $`M_V^{obs}`$ and the de-reddened color indices, so that uncertainties in the metallicity corrections to $`M_V^{cal}`$ do not enter this figure directly (but only through the selection of cluster members, via the deviations $`\mathrm{\Delta }M_V`$). The Class-1 members, which we consider genuine members of the clusters, are marked by filled circles, and the range of metallicity ($`[Fe/H]`$ smaller or larger than $`-1.5`$) is shown by the size of the symbol. The figure also contains the theoretical isochrones computed by the Padova group (Bertelli et al. (1994)) for 14 Gyr and for three values of metallicity, $`[Fe/H]=-1.66`$, $`-1.26`$ and $`-0.66`$. In addition, the data for members of old open clusters are shown by small x-symbols, following the results in R98.
The most striking feature of the color-magnitude diagrams for Population II contact binaries is the shift of the contact-binary sequence by about one magnitude below that for Galactic Disk systems. A similar shift is well known for single subdwarfs, but it is seen here for contact binaries for the first time. The sequence is relatively well defined and extends uniformly on both sides of the Turn-Off Point (TOP), similarly to old open clusters such as Cr 261 (R98).
A group of Blue Stragglers (BS) in the low-metallicity clusters, to the blue of the TOP, is the most conspicuous group of contact binaries in the color-magnitude diagrams. A few Class-2 systems also have the very blue color indices of BS stars; since such systems practically do not occur among Disk contact systems, we think that they are not foreground projections but genuine cluster members with $`\mathrm{\Delta }M_V`$ deviations larger than 0.5 mag. These systems are, in the $`BV`$ set, V22 in NGC 4372 and V8 in NGC 6752, and in the $`VI`$ set, V22 in NGC 4372 and V65 in NGC 5139. Thus, V22 in the controversial cluster NGC 4372, with its Blue Straggler characteristics, is intrinsically the brightest system in the current GC sample, with $`M_V^{obs}=1.83`$, $`(BV)_0=0.24`$ and $`(VI)_0=0.30`$.
## 7 Period-color diagram
The period-color diagram is, to some degree, a more natural and precise way to display properties of contact binaries than the color-magnitude diagram. This is because one quantity, the orbital period, is known basically without error, at least when compared with photometric errors. Only one photometric quantity – the color index – then enters into the picture. The color index can still be affected by several factors, in addition to the measurement errors, the most severe being the uncertainty in the reddening correction, but the period-color relation is not affected by errors in the distance modulus.
The two period-color diagrams for the GC sample use the same symbols as the color-magnitude diagrams; the reader is encouraged to view the two sets of figures simultaneously and to note the common features. The figure contains the Short-Period Blue-Envelopes (SPBE) for Disk systems, shown as continuous lines for both color-index sets (Rucinski 1998b , Rucinski 1997a ). Their shapes are given by $`(BV)_{SPBE}=0.04P^{-2.25}`$ and $`(VI)_{SPBE}=0.053P^{-2.1}`$ (the orbital period $`P`$ is in days). While the numerical values in these definitions do not have any physical meaning, the curves are important because they delineate the location of the least-evolved contact systems in the Disk Population sample. As many theoretical investigations have indicated, contact systems normally evolve away from the Main Sequence, toward larger stellar dimensions, longer orbital periods and cooler atmospheres. Interstellar reddening also increases the color index. Thus, the SPBE has the meaning of a Zero-Age Main Sequence for the Disk population systems.
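For reference, the envelope curves are trivial to tabulate; the short sketch below simply evaluates them as quoted above (the exponents are negative, since shorter-period systems lie at redder envelope colors).

```python
# Short-Period Blue-Envelope curves quoted above; P in days.
def spbe_bv(p):
    return 0.04 * p**-2.25

def spbe_vi(p):
    return 0.053 * p**-2.1

for p in (0.25, 0.30, 0.40, 0.60):
    print(f"P = {p:.2f} d:  (B-V)_SPBE = {spbe_bv(p):.2f},  (V-I)_SPBE = {spbe_vi(p):.2f}")
```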
The above considerations should also apply to Population II contact systems except that, as pointed out by Webbink, the angular-momentum loss due to gravitational radiation emission may win over the long time scales involved and eventually lead to a shortening of the orbital periods. As seen in the period-color diagrams, the GC systems indeed have periods shorter than those of Disk systems, but this is expected irrespective of the mechanism that formed them: as discussed in Section 5, to be a contact system, a low-metallicity binary must have a short period because its components are small. Judging by the size and direction of the metallicity corrections (shown by arrows), the relatively larger effect is observed in $`\mathrm{\Delta }\mathrm{log}P`$ than in the color-index shifts $`\mathrm{\Delta }(BV)`$ and $`\mathrm{\Delta }(VI)`$. Although the two changes can partly compensate each other in their control of the absolute magnitude, the compensation is not exact, because the arrows are not parallel to the lines of constant $`M_V`$. This was visible in the color-magnitude diagrams, where the GC systems were obviously fainter than the Disk systems. It is striking how different Population II contact systems are from the Galactic Disk systems, but also how similar they are within their own group. Apparently, a change in metallicity from the solar $`[Fe/H]0`$ to $`-0.7`$ or so produces a relatively larger change in their properties than a further change to the $`-2.2`$ observed for the most extreme-metallicity systems. Since most changes are due to the change in component dimensions, this argues for a relatively small value of the currently poorly known coefficient $`C=\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ (see Section 5).
## 8 EB-type systems
The sample of binary stars considered in this paper contains six systems with EB-type light curves. Such light curves are characterized by unequally deep eclipses, but with strong variations between minima suggesting the possibility of physical contact. The OGLE sample (Rucinski 1997b ) contained only 2 systems of this type among 98 systems in a volume-limited sample of contact binaries selected using a Fourier light-curve shape filter. In this paper, we do not use this filter because it is sensitive to the phase coverage and tends to be too discriminatory. Thus, we assume that the six systems are related to contact systems. In fact, these may be various forms of semi-detached binaries in the pre-contact or broken-contact stages, with either the more massive, hotter or the less massive, cooler components filling their Roche lobes (Eggleton (1996)). The latter cases appear to be less common, but a good example of a very short period Algol has recently been identified in W Crv (Rucinski & Lu (1999)).
Three of the six systems appear in the $`BV`$ set and three in the $`VI`$ set. Inspection of the relevant figure and table shows that all but one are Class-1 cluster members; the one with a slightly larger $`\mathrm{\Delta }M_V`$ is a Class-2 member. Thus, they all follow the absolute-magnitude calibrations for normal contact binaries and seem to be genuine cluster members. This relatively high frequency of occurrence among contact systems, 6 among 35 (or 39 if the 4 Class-2 Blue Stragglers are added), is unexplained and interesting. We note that two of the six systems are Blue Stragglers.
## 9 Amplitude distribution and the Blue Stragglers
It was noted in Section 6 that the contact-system sequence continues across the Turn-Off Point into the Blue Straggler domain without any obvious changes in the period, color or absolute-magnitude properties. Now we address the only property available to characterize the light curves: the variability amplitude and its distribution.
The Blue Stragglers (BS) are an important group of stars in old stellar clusters. It is now recognized that they must form through binary evolution processes, although it appears that there are actually many such processes and it is not easy to find out which one occurs most commonly (Leonard (1996), Mateo 1996b ). BS formation and evolution is such a large and active area that special meetings have been devoted to it (Saffer (1993)), and very active research continues. Contact BS are relatively easy to identify for two reasons: (1) they are bluer than typical Galactic Disk systems, which start appearing at $`(BV)_0<0.3`$ (R98); (2) their photometry is relatively less difficult than for the Main Sequence stars, because they sit photometrically well above the level where the – usually formidable – crowding problems for the MS stars set in. The question is: are they in any other way different from the “normal” Main-Sequence contact systems in the GCs? One such property can be the amplitudes of light variations. Although the amplitude statistics involve a convolution of the distribution of mass ratios with the distribution of orbital inclinations, as was discussed in Rucinski 1997b , a lack of large amplitudes must mean that large mass ratios ($`q\to 1`$, with $`q=M_2/M_1\le 1`$) do not occur: when the components differ in size, only small amplitudes are possible.
The observed amplitudes for the GC members are plotted against the intrinsic color indices. For the color-index values, we assume that systems with $`(BV)_0<0.4`$ or $`(VI)_0<0.55`$ are Blue Stragglers. The amplitudes do show a change at the Turn-Off Point, but this change is well defined only for the $`VI`$ set: the large amplitudes are observed only for the systems to the red of the TOP, that is, for the genuine MS systems. The two-sided Kolmogorov–Smirnov tests comparing the distributions on the two sides of the TOP, limited to Class-1 systems, indicate that the difference in the distributions is not significant for the $`BV`$ set, but the probability of random chance producing the observed difference for the $`VI`$ set is only $`1.1\times 10^{-5}`$. The significance changes only slightly when Class-2 members are added, with probabilities of a random result of 0.05 for the $`BV`$ set and $`0.71\times 10^{-5}`$ for the $`VI`$ set. The change in the observed amplitude distribution at the TOP is therefore very highly significant for the $`VI`$ set, but insignificant for the $`BV`$ set. At this moment, we have no explanation for this difference between the color-index sets, except that – as pointed out several times in this paper – it may be related to the photometric difficulties in the $`B`$ band for red stars and to the larger and more uncertain reddening corrections in the $`(BV)`$ color index.
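A minimal version of this comparison is sketched below; the two amplitude arrays are placeholders standing in for the actual Class-1 measurements, which are not tabulated here.

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder amplitudes (mag) on the two sides of the Turn-Off Point.
ampl_blue_of_top = np.array([0.15, 0.22, 0.28, 0.31, 0.35])  # Blue Straggler side
ampl_red_of_top = np.array([0.42, 0.55, 0.61, 0.70, 0.78])   # Main Sequence side

stat, p_value = ks_2samp(ampl_blue_of_top, ampl_red_of_top)
print(f"KS statistic = {stat:.2f}, chance probability = {p_value:.2e}")
```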
On the basis of the KS probabilities quoted above, one would be tempted to claim that the BSs indeed have smaller amplitudes than the Main-Sequence contact systems. However, a word of caution is in order: it is very important that we do not see small amplitudes among the Main Sequence systems below the TOP, only large ones. Small amplitudes are obviously missing there, because for random orbital inclinations systems showing small amplitudes should always be more common than systems showing large amplitudes. The histograms in the upper parts of the panels of the amplitude figure contain the expected amplitude distributions calculated in Rucinski 1997b for two assumptions about the mass-ratio distribution, a flat $`Q(q)`$ and $`Q(q)=(1-q)`$. The data for the Disk systems in the OGLE sample suggested that the distribution may actually decay even more strongly toward $`q\to 1`$ than $`(1-q)`$, because large amplitudes are exceedingly rare in an unbiased sample (this is very much unlike the sky-field sample). In the case of the present GC sample, a comparison of the theoretical distributions with the observed ones suggests that the observed data may be severely modified by strong detection selection effects for systems below the TOP. While almost all systems in the BS group appear to be detected (there is only a small depression in the distribution for $`Ampl_V<0.2`$), the MS systems appear to be entirely missed for $`Ampl_V<0.3`$, owing to the difficulties of accurate photometry in conditions where measurement errors and crowding problems rapidly increase with apparent magnitude.
## 10 Frequency of W UMa-type systems in globular clusters
The discovery selection effect described in the previous section may lead to an under-estimate of the number of contact Main Sequence (below the TOP) systems by a factor of about 5 to 10. The previous results for the sky-field sample should be recalled here as a sobering experience: for several years the frequency of contact binaries of one per one thousand MS stars was considered a well-established, “textbook-level” fact, in spite of warnings that the sky sample of known contact binaries contained only large-amplitude systems (see Figure 2 in Kałużny & Rucinski (1994)). Only the systematic characterization of the OGLE sample (R98) showed that the apparent frequency (we distinguish the apparent frequency, which is uncorrected for missed low-inclination systems, from the spatial frequency, which is about 1.5 to 2 times higher; the correction factor depends on the mass-ratio distribution, Rucinski 1997b ) is about ten times higher, reaching about one contact system per 100 – 130 normal F–K dwarfs. A very similar situation re-emerges here, except that this time we can directly suspect – from the difference in the amplitude distributions on the two sides of the TOP – that the discovery selection effects are more severe below the TOP. We cannot correct for these selection effects because they must be different for each cluster and are very difficult to quantify.
In this situation, it was felt prudent to abandon attempts to determine the frequency below the TOP and to concentrate on the frequency data for the BS systems. This is in turn complicated by the lack of information in the discovery papers on the number of Blue Stragglers which were monitored for variability. A simple but potentially risky assumption has been made at this point: that the numbers of BSs shown in the diagrams of the discovery papers equal the numbers of systems which were actually monitored. Frequently, stars without measurable color indices (and not shown in color-magnitude diagrams) are monitored for variability, but one may hope that this would not happen in the BS region. Thus, approximate numbers of the Blue Stragglers, $`N_{BS}`$, have been estimated by simply counting data points on the color-magnitude diagrams. It is stressed that these are very approximate estimates; for example, for $`\omega `$ Cen the available estimate for the fields D–F (Kałużny et al. 1997b ) was multiplied by two to obtain the total number in all observed fields. For 47 Tuc (Kałużny et al. 1998b ) unpublished data were supplied by Dr. Kałużny. The estimates are listed together with the numbers of contact BS systems, $`n_{BS}`$, with which they should be compared. In doing so we immediately see the problem of the numerically very small values of $`n_{BS}`$, which are obviously subject to relatively large Poissonian fluctuations. Thus, we are still very far from being able to correlate the frequency of contact BS systems with cluster properties such as the metallicity index, $`[Fe/H]`$, or the concentration parameter, $`c`$. In this situation another disputable step was made in assuming that the average frequency is the same for all GCs. This can be derived by simply summing $`N_{BS}`$ and $`n_{BS}`$ for all clusters and taking their ratio. The result is that 20 contact binaries are observed among about 900 Blue Stragglers. Thus the average inverse frequency is $`f_{BS}=45\pm 10`$ normal BS stars per one contact system. This frequency is the apparent one, that is, it applies to systems which can be discovered, without any corrections for systems missed because of low orbital inclinations. Since we do not know the mass-ratio distribution for the contact BS systems, we cannot correct for these missed systems to evaluate the true spatial frequency.
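The quoted uncertainty follows from treating the 20 detections as a Poisson count, as in the short sketch below.

```python
import math

# 20 contact systems among ~900 monitored Blue Stragglers; the error budget
# is dominated by the Poisson scatter of the small numerator.
n_contact, n_bs = 20, 900
f_bs = n_bs / n_contact                     # inverse apparent frequency
sigma_f = f_bs / math.sqrt(n_contact)       # fractional error ~ 1/sqrt(20)
print(f"f_BS = {f_bs:.0f} +/- {sigma_f:.0f} Blue Stragglers per contact system")
```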
The inverse apparent frequency of $`f_{BS}=45`$ is significantly different from the one observed for the Disk stars, which is approximately $`f_{Disk}`$ = 100 – 130. Thus, the Blue Straggler population of the globular clusters contains some 2 – 3 times more contact binaries than the Old Disk stars. We note also that an inverse frequency at the level of $`45\pm 10`$ is in perfect accord with the lack of detections in clusters poor in Blue Stragglers, where the absence can be explained simply by Poisson fluctuations.
The high frequency of contact binaries among Blue Stragglers is also visible in the color-index and period distributions. These distributions are obviously far from rigorous in the statistical sense, yet they do show interesting trends. When compared with the Disk systems (R98), the GC contact binaries occupy only the blue ends of the $`(BV)_0`$ and $`(VI)_0`$ distributions. Note, however, that the blue end points are approximately the same for the GC and Disk sample distributions, indicating that the Disk sample may contain an admixture of low-metallicity objects similar to those in globular clusters. While the red systems are most likely under-represented in the GC sample because of the selection biases against faint, red systems, we see a definite lack of long-period systems, which are intrinsically the brightest and should be easily detected. Of course, temporal-window biases for the GC sample may have contributed here (in the sense that the monitoring programs were by necessity short), especially when compared with the excellent data for the 5 kpc OGLE sample (R98), which defines the long-period part of the period distribution.
## 11 Conclusions
Although at least 1/3 of the 86 systems presumably located in the analyzed globular clusters are foreground projections from the Disk, which must be carefully weeded out, we can state with confidence that contact binaries in globular clusters are definitely different from the very common Disk population systems. Their main feature is short orbital periods, resulting from the small dimensions of the components. This is seen not only in the distribution of the orbital periods, but also in the low luminosities. Thus, the downward shift of the contact-binary subdwarf sequence below that for the Disk systems is primarily due to the reduced-dimension effect, not to the blue shift caused by reduced blanketing. The long-period systems are intrinsically more luminous and easier to discover, so that their absence is highly significant.
While the metallicity effects are clearly seen in the properties of the Population II contact binaries, to our surprise they are not visible in the $`M_V^{cal}=M_V^{cal}(\mathrm{log}P,color)`$ calibrations. More exactly, they do not manifest themselves in the calibration based on the $`(VI)`$ color index; the data utilizing the $`(BV)`$ color index are too poor to establish whether that calibration needs a metallicity term. For simplicity, we assumed that both calibrations (which are used only to select the systems, not to analyze them) do not require any $`[Fe/H]`$-dependent terms. It is recommended that the $`VI`$ bandpass combination be used in the future, because the $`BV`$ data may indeed show some metallicity dependence but – more importantly – are more susceptible to photometric errors for red stars and to reddening-correction uncertainties.
Very little can be said about contact binaries located on the Main Sequence, below the Turn-Off Point (TOP). Severe discovery selection effects are suspected from the peculiar distribution of amplitudes, with the small amplitudes missing. In contrast, the 20 Blue Straggler contact binaries known at this time give a reasonable estimate of their frequency of occurrence: one such system (counted as one object) per $`45\pm 10`$ Blue Stragglers. This apparent frequency is about 2 – 3 times higher than that for the Disk systems among normal F–K stars. It is entirely possible that the same mechanism which produces a continuous sequence of contact binaries across the TOP in old open clusters simply had more time to produce more contact systems in globular clusters.
The author would like to express his indebtedness to the authors of the original papers on individual globular clusters. Special thanks are due to Dr. Janusz Kałużny for his particular contribution to the field, for his enthusiastic help at various stages of this work, and for detailed suggestions. Thanks are also due to Dr. Bohdan Paczynski for his useful comments on the draft of the manuscript.
The Nambu–Goto action as the one of the quantized space–time excitation.
## Abstract
The concept of a quantized space–time built of formless finite fundamental elements is suggested. This space–time can be defined as a set of coverings of the continuum space–time by simply connected, non-overlapping regions of any form and arbitrary sizes, equipped with a probability measure. The functional-integration method and the space–time action problem are analyzed. A string in this space–time is considered as an excitation of a number of fundamental elements forming a one-dimensional curve. Within this picture, it is possible to indicate the way in which the volume term of the space–time action yields the Nambu–Goto action.
PACS numbers: 02.40.-k; 04.20.Gz; 04.60.Nc; 11.25.-w.
Classical space–time is a continuum which serves as an arena for the motion of interacting particles. This concept of space–time description persists in quantum mechanics and quantum field theory, where the space–time properties do not depend on particle structure and propagation, while particles are described by wave functions with finite values throughout space–time. The connection between space–time structure and matter appears in the theory of gravitation. The space–time of the classical theory of gravitation is also a continuum, but the quantum description of space–time properties at small distances is connected with the existence of a fundamental length ($`l_f`$) and a discrete structure of space–time.
The lattice space–time with a fixed lattice is the most commonly investigated case . But consideration of a space–time with a fixed lattice leads to several problems. The first, and seemingly the most essential, is the passage from the lattice space–time to the continuum in the limit $`l_f\to 0`$. The lattice space–time has the power of a countable set. Any subdivision of the lattice yields a set of the same power. Thus, in the limit $`l_f\to 0`$, any lattice space–time with any subdivision remains a set with the power of a countable set. Second, the resulting equations in the lattice space, and in other spaces with a fixed fundamental-element form, depend on the form of the fundamental element. Third, the equations in the lattice space are non-invariant under continuous symmetry operations.
Over the last thirty years, discrete geometry has developed in the direction of the formless fundamental element. The Regge calculus and the space–time foam idea made the first steps on this way. In refs. the stereohedra space is investigated, whose fundamental element can take one of a set of forms. Random-lattice field theory is analyzed in . The quantum configuration space investigated in is a method of quantized space–time description based on a non-fixed, “floating” lattice.
An alternative approach to the geometrical basis of particle physics is superstring theory . Strings are the fundamental objects, and their excited states describe all particles. But the strings are described as geometrical objects propagating in a background continuum space–time. Therefore strings can be considered as the basic objects for describing fundamental particles, but they cannot be considered as the fundamental units of space–time.
In this work, the concept of a space (space–time) consisting of formless finite fundamental elements (FFFEs) is suggested. On the one hand, it provides a basis for space–time quantization. Consideration of this concept leads to the idea of a space–time action with one of its terms proportional to the volume of an FFFE. This method of space–time quantization is a consistently geometrical approach to the physics of fundamental particles and interactions. In this space–time the particles are excited states of the fundamental elements, and the interactions are connected with transformations of the Riemannian space–time of FFFEs, which describes the space–time with particle-like excitations.
On the other hand, strings in this space–time picture can be considered as propagating excitations of any number of FFFEs forming a one-dimensional space-like curve (in the sense of the FFFE space–time). The volume term of the space–time action of this excitation yields the Nambu–Goto action term. This picture of the quantized space–time, with strings as its excitations, makes it possible to connect the two approaches and assigns to strings the meaning of fundamental space–time units.
The space of FFFEs can be defined as the set of coverings of the continuum space by any number of non-overlapping, simply connected regions of any form and arbitrary sizes. This set is equipped with a probability measure, i.e. each covering contributes to the space with some probability. This measure enables calculations based on the set of coverings. Obviously, the probability measure is such that the average size of an FFFE equals $`l_f`$, and the average number of FFFEs localized in a continuum space region of volume $`V`$ is $`N=[Vl_f^{-n}]`$. However, configurations with FFFE sizes greatly different from $`l_f`$ also have finite probabilities: for example, the configuration in which one fundamental element extends over all of the space–time (or over the investigated manifold), or the configuration of the continuum space–time region itself, i.e. the covering of this region by points. This set of coverings has the power of the continuum. Therefore the limiting passage from the space of FFFEs to the continuum space can be carried out correctly.
The general construction for calculations on this set of coverings is the continual (functional) integral. In agreement with the central idea of functional-integral theory, the calculation of quantum quantities amounts to integrating over all possible configurations of the space–time of FFFEs (i.e. coverings of the space–time), taking the corresponding probability measure into account.
Consider the general construction of a functional integral. In flat space–time it is:
$$Z=\int 𝒟V\,e^{-S(s_i)},$$
(1)
where $`𝒟V`$ is a measure on the set of coverings, $`S(s_i)`$ is the flat space–time vacuum action, and $`s_i`$ is the set of element parameters (sizes, areas, volume). Here the integration runs over all coverings of the continuum space by non-overlapping, simply connected regions of any form and any size.
$$<f(\{a\})>=\frac{\int 𝒟V\,e^{-S(s_i)}f_{\{a\}}(x^i)}{\int 𝒟V\,e^{-S(s_i)}},$$
(2)
where $`f_{\{a\}}(x^i)`$ are the values of the function $`f`$ on the regions of the covering set which form the element with FFFE space–time coordinates $`\{a\}`$, and $`f(x^i)`$ is a function defined in the continuum space.
The functional-integral construction requires information about the action. The space–time vacuum action of Minkowski space–time can depend only on the geometrical characteristics of the fundamental elements: the volume, the $`m`$-dimensional areas ($`m<n`$), and the sizes. Suppose that one term of the four-dimensional space–time vacuum action is proportional to the volume of a fundamental element:
$$S_{vac}=A\hbar ^{-1}G^{-2}c^6\underset{FE}{\int }\sqrt{\eta }\,𝑑V,$$
(3)
where $`A`$ is a numerical factor. Here $`dV`$ and $`\sqrt{\eta }`$ are continuum-space quantities. The integration in (3) runs over one region of some covering of the continuum space–time. This term is analogous to the “space–time foam” action proportional to the volume . But this term alone is insufficient to describe the equilibrium configuration of the space–time of FFFEs: the total actions (3) of all configurations from the set of coverings are equal, whereas the action minimum must correspond to the classical configuration, i.e., in the case considered, the continuum space–time configuration.
The complete expression for the space–time action meeting this requirement must contain other terms besides the volume term. A possible term is one proportional to the total $`(n-1)`$-dimensional area of an FFFE. Under this assumption the vacuum space–time action is
$$S=\alpha \underset{i}{\sum }V_i+\beta \underset{i}{\sum }(S_{n-1})_i,$$
(4)
where the sums run over all FFFEs.
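A drastically simplified, one-dimensional illustration of the average (2) with the action (4) is sketched below: the “elements” are $`n`$ segments partitioning an interval of length $`L`$, each contributing two zero-dimensional boundary points, so that $`S=\alpha L+2\beta n`$; the values of $`\alpha `$, $`\beta `$, $`L`$ and the cutoff are arbitrary illustrative choices.

```python
import numpy as np

# One-dimensional toy of the covering average: configurations are labelled by
# the number n of segments, the action (4) reduces to S = alpha*L + 2*beta*n,
# and segment positions drop out because they do not change S.
alpha, beta, L, n_max = 1.0, 0.5, 10.0, 200

n = np.arange(1, n_max + 1)
weights = np.exp(-(alpha * L + 2.0 * beta * n))   # probability measure e^{-S}
weights /= weights.sum()

mean_size = np.sum(weights * (L / n))             # weighted <element size>, eq. (2)
print(f"average element size = {mean_size:.3f}")
```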
Let us now indicate the way in which the Nambu–Goto term of the string action might be obtained from the space–time action (3). For this analysis, the expression for the action of a space–time element is required. This action is the average value of the action $`S`$, computed by the functional integration (2). We denote it by $`S_{FFFE}`$:
$$S_{FFFE}=<S_{\{a\}}>=A\hbar L_{pl}^{-4}\underset{FFFE}{\int }\sqrt{\eta }𝑑x^1𝑑x^2𝑑x^3𝑑x^4$$
(5)
for the four–dimensional space–time.
A string in the space–time of FFFEs can be considered as an excitation of a number of FFFEs forming a one-dimensional space-like structure (in the sense of the FFFE space–time). In its own reference frame, the action of this excitation takes the form
$$S=A\hbar L_{pl}^{-4}\underset{FFFEs}{\int }\sqrt{\eta }𝑑x^1𝑑x^2𝑑l𝑑\tau ,$$
(6)
where $`\tau `$ is the proper time, $`l`$ is the proper space-like coordinate of the excitation, and $`x^1,x^2`$ are the transverse space-like coordinates. Here the integration runs over the set of FFFEs participating in the propagation of the excitation. Integration over the transverse coordinates yields the average values of the FFFE sizes, i.e. $`l_f`$. Assuming $`l_fL_{pl}`$, we obtain ($`\gamma `$ is the determinant of the two-dimensional metric tensor):
$$S=A\hbar L_{pl}^{-2}\int 𝑑l𝑑\tau \sqrt{\gamma },$$
(7)
i.e. the Nambu–Goto action for a string. This result is not completely rigorous, owing to the problem of transforming the four-dimensional metric determinant $`\eta `$ in (6) into the two-dimensional one $`\gamma `$ in (7), and to the absence of a correct definition of one-dimensional integration. In this picture, the $`p`$-branes are considered analogously as $`p`$-dimensional space-like excitations of FFFEs, and the volume term (6) of the FFFE space–time excitation yields the bosonic term of the $`p`$-brane action.
Considering strings as excitations of the quantized space–time is a step toward understanding superstring properties at Planck distances. In this view of strings and $`p`$-branes, all these objects are identical at Planck distances, because the excitation of a single element has no dimension in the sense of the FFFE space–time.
The author thanks M.S. Orlov, A.V. Klochkov, E.V. Klochkova, A.V. Sokolov, G.S. Sokolova and A.B. Vankov for their friendly support during this work, A.A. Amerikantsev, M.E. Golod and A.N. Lobanov for their technical assistance, and M.A. Tyntarev for useful discussions.
Spinodal Instabilities and the Dark Energy Problem
## Abstract
The accelerated expansion of the Universe measured by high redshift Type Ia Supernovae observations is explained using the non-equilibrium dynamics of naturally soft boson fields. Spinodal instabilities inevitably present in such systems yield a very attractive mechanism for arriving at the required equation of state at late times, while satisfying all the known constraints on models of quintessence.
One of the most startling developments in observational cosmology is the mounting evidence for the acceleration of the expansion rate of the Universe. Coupled with cluster-abundance and CMB observations, these data can be interpreted as evidence for a cosmological constant $`\mathrm{\Lambda }`$ which contributes an amount $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7`$ to the critical energy density while matter contributes $`\mathrm{\Omega }_{\text{matter}}\simeq 0.3`$, leading to a flat FRW cosmology (see and references therein).
While introducing a cosmological constant may be a cosmologically sound explanation of the observations, it is a worrisome thing to do indeed from the particle physics point of view. It is hard enough to try to explain a vanishing cosmological constant, given the various contributions from quantum zero-point energies as well as from the classical theory, but at least one could envisage either a symmetry argument (such as supersymmetry, if it were unbroken) or a dynamical approach (such as the ill-fated wormhole approach) that could do the job. It is much more difficult to see how cancellations between all possible contributions would give rise to a non-zero remnant of order $`10^{-47}`$ GeV<sup>4</sup>, which is extremely small compared to $`M_{\text{Planck}}^4`$ or $`M_{\text{SUSY}}^4`$, the “natural” values expected in a theory with gravity or one with a supersymmetry breaking scale $`M_{\text{SUSY}}`$. Even from the cosmological perspective, a cosmological constant begs the question of why its effects are dominating now, as opposed to any time prior to today, especially given its different redshifting properties compared to matter or radiation energy density.
These fine-tuning problems can be at least partially alleviated if, instead of a constant energy density, a dynamically varying one is used to drive the accelerated expansion. This is the idea behind quintessence models. A scalar field whose equation of state violated the strong energy condition (i.e. with $`\rho +3P<0`$) during its evolution would serve just as well as a cosmological constant in explaining the data, as long as its equation of state satisfied the various known constraints for such theories . However, an arbitrary scalar field whose energy density dominates the expansion rate is not sufficient to escape all fine-tuning problems; in particular, for a field of mass so small that it would only start evolving towards its minimum relatively recently (i.e. masses of order the inverse Hubble radius), the ratio of matter energy density $`\rho _m`$ to field energy density $`\rho _\varphi `$ would need to be incredibly fine tuned at early times so as to have $`\rho _m/\rho _\varphi \sim 1`$ today. The quintessence approach uses so-called tracker fields, with potentials that drive the field to attractor configurations having a fixed value of $`\rho _m/\rho _\varphi `$. Thus, for these models, regardless of the initial conditions, the intermediate-time value of $`\rho _m/\rho _\varphi `$ will always be the same. The only fine tuning required in these models is the timing of the deviation of $`\rho _\varphi `$ from the tracking solutions, needed to allow it to satisfy the condition $`\rho _\varphi +3P_\varphi <0`$ today.
What we propose is a working alternative to the idea of tracker fields without the usual fine tuning problems. Recent work on the non-equilibrium dynamics of quantum fields has shown that under certain circumstances the back reaction of quantum fluctuations can have a great influence on the evolution of the quantum expectation value of a field , to the extent that using the classical equations of motion can grossly misrepresent the actual dynamics of the system. What we will show below is that we can make use of this modified dynamical behavior to construct models that might allow for a more natural setting for a late time cosmological constant.
The class of models we consider are those using pseudo-Nambu–Goldstone bosons (PNGBs) to construct theories with naturally light scalars. Such models have been used for late-time phase transitions, as well as to give rise to a cosmological “constant” that eventually relaxed to zero, not unlike what we want to do here. However, our take on these models will be significantly different from that of the earlier work .
We can write the required energy density as $`\rho _{\text{Dark Energy}}\left(10^{-3}\text{ eV}\right)^4`$, which is suggestive of a light neutrino mass scale. There is a way to construct models of scalar fields coupled to neutrinos in which the scalar-field potential naturally (in the technical sense of ’t Hooft) incorporates the small mass scale $`m_\nu ^4`$.
Consider a Lagrangian containing a Yukawa coupling of the form:
$$\mathcal{L}_{\text{Yuk}}=\underset{j=0}{\overset{N-1}{\sum }}\left(m_0+\epsilon \mathrm{exp}i\left(\frac{\mathrm{\Phi }}{f}+\frac{2\pi j}{N}\right)\right)\overline{\nu }_{jL}\nu _{jR}+\text{h.c.}$$
(1)
The scale $`f`$ is the scale at which the global symmetry that gives rise to the Nambu–Goldstone mode $`\mathrm{\Phi }`$ is spontaneously broken. The Lagrangian $`\mathcal{L}_{\text{Yuk}}`$ is to be thought of as part of the low-energy effective theory of $`\mathrm{\Phi }`$ coupled to neutrinos at energies below $`f`$.
The term proportional to $`\epsilon `$ could be obtained by a coupling to a Higgs field $`\chi `$ that acquires an expectation value $`\chi =f/\sqrt{2}\mathrm{exp}i\frac{\mathrm{\Phi }}{f}`$. Note that in the absence of $`m_0`$ this Yukawa term possesses a continuous chiral $`U\left(1\right)`$ symmetry. The term proportional to $`m_0`$ breaks this symmetry explicitly to a residual discrete $`Z_N`$ symmetry given by:
$$\nu _j\to \nu _{j+1},\text{ }\nu _{N-1}\to \nu _0,\text{ }\mathrm{\Phi }\to \mathrm{\Phi }+\frac{2\pi f}{N}.$$
(2)
This interaction can generate an effective potential for the Nambu–Goldstone mode $`\mathrm{\Phi }`$, which must vanish in the limit $`m_0\to 0`$, equivalent to the vanishing of the neutrino masses. Since $`\mathrm{\Phi }`$ is an angular degree of freedom, it should not be a surprise that the effective potential is periodic and of the form
$$V\left(\mathrm{\Phi }\right)=M^4\left(1+\mathrm{cos}\frac{N\mathrm{\Phi }}{f}\right).$$
(3)
Here $`M`$ should be associated with a light neutrino mass, $`m_\nu \sim 10^{-3}`$ eV.
Here we have followed the working hypothesis of , which states that the effective vacuum energy is dominated by the heaviest fields still evolving towards their true minimum. We assume that the superlight PNGB field $`\mathrm{\Phi }`$, with associated mass of order $`m_\nu ^2/f`$, will be the last field still rolling down its potential. Thus we have chosen by hand the constant in eq.(3) so that when $`\mathrm{\Phi }`$ reaches the minimum it will have zero cosmological constant associated with it. This choice is essentially a choice of the zero of energy at asymptotically late times.
The finite-temperature behavior associated with these models is extremely interesting. For $`N\ge 3`$ the $`\mathrm{\Phi }`$-dependent part of the potential can be written as
$$c\left(T\right)M^4\mathrm{cos}\frac{N\mathrm{\Phi }}{f},$$
(4)
where $`c\left(T\right)`$ vanishes at high temperature $`T`$. Thus the high-temperature phase of the theory has a non-linearly realized $`U\left(1\right)`$ symmetry in which the $`\mathrm{\Phi }`$ potential becomes exactly flat with value $`M^4`$. Since $`M\sim m_\nu `$, this cosmological-constant contribution will have no effect during nucleosynthesis and through the matter-dominated phase until $`T\sim M`$. At this time $`c\left(T\right)`$ reaches its asymptotic value of unity and we have the potential in eq.(3). For $`N=2`$, $`c\left(T\right)`$ changes sign continuously, passing through zero at the critical temperature, so that the high-temperature minima become the low-temperature maxima and vice versa. There is a $`Z_2`$ symmetry in both the low- and high-temperature phases.
The potential in eq.(3) has regions of spinodal instability, i.e. regions where the effective mass squared is negative. These occur when $`\mathrm{cos}N\mathrm{\Phi }/f>0.`$ If $`\mathrm{\Phi }`$ is in such a region, modes of sufficiently small comoving wavenumber follow an equation of motion that, at least at early times, is that of an inverted harmonic oscillator. This instability then drives the non-perturbative growth of quantum fluctuations until the field reaches the spinodal line where $`\mathrm{cos}N\mathrm{\Phi }/f=0`$. Since the quantum fluctuations grow non-perturbatively large, we have to resum perturbation theory to regain sensible behavior, and this is done by the Hartree truncation . The prescription is to first expand $`\mathrm{\Phi }`$ around its (time-dependent) expectation value $`\langle \mathrm{\Phi }(\stackrel{}{x},t)\rangle \equiv \varphi \left(t\right)`$ as
$$\mathrm{\Phi }(\stackrel{}{x},t)=\varphi \left(t\right)+\psi (\stackrel{}{x},t),$$
(5)
where the tadpole condition $`\langle \psi (\stackrel{}{x},t)\rangle =0`$ gives the equation of motion for $`\varphi \left(t\right)`$. The Hartree approximation consists of inserting eq.(5) into eq.(3), expanding the cosines and sines, and then making the following replacements:
$$\mathrm{cos}\frac{N\psi }{f}\to \left(1-\frac{N^2\left(\psi ^2-\langle \psi ^2\rangle \right)}{2f^2}\right)\mathrm{exp}\left(-\frac{N^2\langle \psi ^2\rangle }{2f^2}\right),\mathrm{sin}\frac{N\psi }{f}\to \frac{N\psi }{f}\mathrm{exp}\left(-\frac{N^2\langle \psi ^2\rangle }{2f^2}\right).$$
(6)
The equations for the field $`\varphi `$ and the fluctuation modes $`f_k`$, coupled to the scale factor $`a\left(t\right)`$, are
$$\ddot{\varphi }+3\frac{\dot{a}}{a}\dot{\varphi }-\frac{NM^4}{f}\mathrm{exp}\left(-\frac{N^2\langle \psi ^2\rangle }{2f^2}\right)\mathrm{sin}\frac{N\varphi }{f}=0,$$
(7)
$$\ddot{f}_k+3\frac{\dot{a}}{a}\dot{f}_k+\left(\frac{k^2}{a^2}-\frac{N^2M^4}{f^2}\mathrm{exp}\left(-\frac{N^2\langle \psi ^2\rangle }{2f^2}\right)\mathrm{cos}\frac{N\varphi }{f}\right)f_k=0,$$
(8)
with
$$\langle \psi ^2\rangle =\int \frac{d^3k}{\left(2\pi \right)^3}\left|f_k\right|^2.$$
(9)
The effective Friedmann equation for the scale factor is obtained by use of semiclassical gravity, i.e. by using $`\langle T_{\mu \nu }\rangle `$ to source the Einstein equations:
$$\frac{\dot{a}^2}{a^2}=\frac{8\pi }{3M_p^2}\left[\rho _m(t)+\frac{1}{2}\dot{\varphi }^2+\frac{1}{2}\langle \dot{\psi }^2\rangle +\frac{1}{2a^2}\langle (\stackrel{}{\nabla }\psi )^2\rangle +M^4\left(1+\mathrm{cos}(N\varphi /f)\mathrm{exp}\left(-\frac{N^2\langle \psi ^2\rangle }{2f^2}\right)\right)\right],$$
(10)
with
$$\rho _m(t)=\rho _m(t_i)\frac{a^3(t_i)}{a^3(t)}$$
(11)
being the matter density, with $`t_i`$ the time at which the PNGB field begins its evolution.
The interesting feature of the above equations of motion is the appearance of the terms involving $`\mathrm{exp}\left(-N^2\langle \psi ^2\rangle /2f^2\right)`$. These multiply the terms in the potential and its various derivatives that contain the non-trivial $`\varphi `$ dependence. What we expect to happen is that, as the spinodally unstable modes grow, they force $`\langle \psi ^2\rangle `$ to grow as well. This in turn rapidly drives the exponential terms to zero, leaving a term proportional to $`M^4`$ in the Friedmann equation, which acts as a cosmological constant at late times.
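This mechanism can be illustrated by a schematic integration of eqs.(7) and (8) in a static background ($`a=1`$), using the dimensionless variables $`\chi =N\varphi /f`$ and $`\tau =(NM^2/f)t`$; the coupling $`g`$, the mode grid and the initial mode amplitude below are illustrative choices, not the parameters used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

NQ = 48
q = np.linspace(0.05, 2.5, NQ)   # k/(N M^2/f); modes with q^2 < cos(chi) are unstable
dq = q[1] - q[0]
g = 0.05                          # stands in for N^2/(4 pi^2 f^2) times the mode measure

def rhs(t, y):
    chi, dchi = y[0], y[1]
    fr, fi = y[2:2+NQ], y[2+NQ:2+2*NQ]
    dfr, dfi = y[2+2*NQ:2+3*NQ], y[2+3*NQ:]
    sigma = g * np.sum(q**2 * (fr**2 + fi**2)) * dq   # ~ N^2 <psi^2> / f^2
    supp = np.exp(-0.5 * sigma)                        # the Hartree suppression factor
    m2 = q**2 - supp * np.cos(chi)                     # spinodally negative at small q
    return np.concatenate(([dchi, supp * np.sin(chi)], dfr, dfi, -m2 * fr, -m2 * fi))

# Zero mode above the spinodal line; eq.(14)-like spectrum for the modes.
y0 = np.concatenate(([0.3, 0.0], np.zeros(NQ), 0.3 / q**1.5, np.zeros(2 * NQ)))
sol = solve_ivp(rhs, (0.0, 25.0), y0, max_step=0.05)

fr, fi = sol.y[2:2+NQ, -1], sol.y[2+NQ:2+2*NQ, -1]
sigma_end = g * np.sum(q**2 * (fr**2 + fi**2)) * dq
print(f"final suppression factor exp(-Sigma/2) = {np.exp(-0.5*sigma_end):.3g}")
```

The unstable modes inflate $`\langle \psi ^2\rangle `$ within a few units of $`\tau `$, after which the suppression factor is driven essentially to zero and only the $`M^4`$ term survives, as described above.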
If we consider the $`N\ge 3`$ models, then at temperatures larger than $`T_{\text{crit}}\sim M`$ the potential is just given by $`M^4`$ and is swamped by both the matter and radiation contributions to the energy density. Since the potential is flat, we expect the zero mode to be equally likely to attain any value between $`0`$ and $`2\pi `$, and, in particular, we expect a probability of order $`1/2`$ for the initial value to lie above the spinodal line. If there was an inflationary period before this phase transition, we expect the zero mode to take on the same value throughout the region that will become the observable universe today.
As the temperature decreases, the non-trivial parts of the potential turn on and the zero mode begins its evolution towards the minimum once the Universe is old enough, i.e. once $`H\left(T_{\text{roll}}\right)\sim m_\varphi \sim M^2/f`$. At the same time, if the zero mode started above the spinodal line, the fluctuations begin their spinodal growth. Whether the spinodal instabilities have any cosmological effect depends crucially on a comparison of two time scales: the time $`t_{*}`$ it takes the zero mode to reach the spinodal point at $`\varphi _{\text{spinodal}}/f=\pi /2N`$ under purely classical evolution (i.e. neglecting the fluctuations), and the time $`t_{\text{spinodal}}`$ it takes for the fluctuations to sample the minima of the tree-level potential, so that $`N^2\langle \psi ^2\rangle /f^2\sim 𝒪\left(1\right)`$. Since the growth of the instabilities stops at times later than $`t_{*}`$, for spinodal instabilities to be at all relevant to the evolution of $`\varphi `$ we need $`t_{\text{spinodal}}\lesssim t_{*}`$. By looking at the equations of motion we can argue that
$$t_{\text{spinodal}}\simeq \frac{f}{2M^2}\mathrm{ln}\frac{f^2}{N^2\langle \psi ^2\rangle \left(t_i\right)}+\frac{3}{2H_i},$$
(12)
where $`t_i`$ is the time at which the zero mode starts to roll and $`H_i`$ is the Hubble parameter at this time. Furthermore, the early-time behavior of the equations of motion gives us
$$t_{*}\simeq \frac{f}{M^2}\mathrm{ln}\frac{f}{N\varphi (t_i)}+\frac{3}{2H_i}.$$
(13)
Comparing eqs.(12,13), we see that for the spinodal instabilities to be significant we need $`\varphi ^2(t_i)\lesssim \langle \psi ^2\rangle \left(t_i\right)`$.
The other condition that needs to be met is that there should be sufficient time for the spinodal instabilities to dominate the zero mode evolution before today. This will ensure that the expansion of the Universe will be driven by the remnant cosmological constant $`M^4`$ at the times relevant to the SNIa observations. For large enough initial fluctuations we can make the spinodal time as early as we need.
What sets the scale of the initial fluctuations? If we assume a previous inflationary phase, we can treat the PNGB as a minimally coupled massless field, and the standard inflationary results should apply. The initial conditions for the mode functions are then given by:
$$f_k\left(t_i\right)=\frac{i}{\sqrt{2k^3}}H_{DS}\text{ for }\kappa \le k\le H_i,$$
(14)
where $`\kappa `$ is an infrared cutoff corresponding to the horizon size during the De Sitter phase and $`H_{DS}`$ is the Hubble parameter during inflation. The short-wavelength modes ($`k>H_i`$) have their conformal vacuum initial conditions.
$$\frac{\psi ^2\left(t_i\right)}{f^2}\frac{H_{DS}^2}{4\pi ^2f^2}\left(N_{\text{e-folds}}60\right).$$
(15)
and for $`H_{DS}\sim 10^{13}`$ GeV, $`N_{\text{e-folds}}-60\sim 10^5`$, and $`f\sim 10^{15}`$ GeV, we would only need $`\varphi (t_i)/f\lesssim 0.5`$. These are not outlandish parameter values, and we see that very little fine tuning is required. In fact, there is a great deal of freedom in choosing the values of these parameters, the only requirement being that the ratio of parameters appearing in (15) not be so small that the required value of $`\varphi (t_i)/f`$ is overly restricted.
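A quick numerical check of eq.(15) for these values:

```python
import math

# Back-of-the-envelope evaluation of eq. (15) for the quoted parameters.
h_ds, f = 1.0e13, 1.0e15        # GeV
ne_minus_60 = 1.0e5
var_over_f2 = h_ds**2 / (4 * math.pi**2 * f**2) * ne_minus_60
print(f"<psi^2>/f^2 ~ {var_over_f2:.2f}")   # ~0.25, so phi(t_i)/f <~ 0.5 suffices
```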
In the figures below we use these parameters, together with $`M\simeq 5.5\times 10^{-3}`$ eV corresponding to neutrino masses, beginning the evolution at a redshift $`1+z=1200`$. In Fig. 1 we plot the numerical evolution of the zero mode and the growth of the fluctuations, while Fig. 2 shows the equation of state of the PNGB field, and the total equation of state including the PNGB and matter components, as a function of redshift. What we quickly infer from these graphs is that the evolution of the Universe becomes dominated by the remnant cosmological term, leading to an evolution toward a late-time equation of state $`w\equiv P/\rho \to -1`$. The equation of state today is seen to be $`w\simeq -0.7`$, indicating a matter component $`\mathrm{\Omega }_{\text{matter}}=0.3`$ and a cosmological-constant-like component $`\mathrm{\Omega }_{\text{pngb}}=0.7`$. Because the PNGB component has an equation of state $`w=-1`$ by a redshift as high as $`z=50`$, these results reproduce the best-fit spatially flat cosmology of the SNIa data.
One feature of this model is that the parameter $`M`$ is directly related to the measured value of today’s Hubble constant. We find
$$M=\left(5.5\times 10^{-3}\text{eV}\right)\left(\frac{H_0}{65\frac{\text{km}}{\text{s}\cdot \text{Mpc}}}\right)^{1/2}\left(\frac{\mathrm{\Omega }_{\text{pngb}}}{0.7}\right)^{1/4},$$
(16)
which is to be compared with the observed $`90\%`$ confidence range of $`\mathrm{\Delta }m^2`$ from the Super-Kamiokande contained-events analysis, $`5\times 10^{-4}\text{eV}^2<\mathrm{\Delta }m^2<6\times 10^{-3}\text{eV}^2`$, and with the more recent up-down asymmetry analysis, which indicates a range $`1.5\times 10^{-3}\text{eV}^2<\mathrm{\Delta }m^2<1.5\times 10^{-2}\text{eV}^2`$; the small and large mixing angle MSW solutions of the solar neutrino problem yield a range of $`5\times 10^{-6}\text{eV}^2<\mathrm{\Delta }m^2<4\times 10^{-4}\text{eV}^2`$.
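To make the comparison explicit, the sketch below evaluates eq. (16) at its fiducial values and asks which of the quoted windows contain $`M`$, i.e. satisfy $`\sqrt{\mathrm{\Delta }m_{\mathrm{min}}^2}\le M\le \sqrt{\mathrm{\Delta }m_{\mathrm{max}}^2}`$; for these inputs only the MSW solar window does.

```python
import math

def M_eV(H0=65.0, Omega_pngb=0.7):
    # Eq. (16): M in eV as a function of H_0 (km/s/Mpc) and Omega_pngb.
    return 5.5e-3 * (H0 / 65.0) ** 0.5 * (Omega_pngb / 0.7) ** 0.25

M = M_eV()
windows_eV2 = {                      # the Delta m^2 ranges quoted above
    "Super-K contained events": (5e-4, 6e-3),
    "Super-K up-down asymmetry": (1.5e-3, 1.5e-2),
    "MSW solar (small/large angle)": (5e-6, 4e-4),
}
for name, (lo, hi) in windows_eV2.items():
    inside = math.sqrt(lo) <= M <= math.sqrt(hi)
    print(f"{name}: sqrt(dm2) in [{math.sqrt(lo):.1e}, {math.sqrt(hi):.1e}] eV;"
          f" contains M = {M:.1e} eV: {inside}")
```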
There is no shortage of models to explain the accelerating expansion of the universe. However, most options lack motivation and require either significant fine tuning of initial conditions or the introduction of a fine-tuned small scale into the fundamental Lagrangian. We too have a fine-tuned scale: the neutrino mass. We can take solace, however, in the fact that this fine tuning is related to a particle that can be found in the Particle Data Book, with known mechanisms to produce the required value and experiments dedicated to its measurement.
The model itself is also relatively benign, not requiring invocations of String or M-theory to justify its potential. Chiral symmetry breaking leading to PNGBs is not unheard of in nature (pions do exist, after all!), and should probably be expected in GUT or SUSY symmetry-breaking phase transitions involving coupled scalars. This, together with the dynamical effects of backreaction, allows the present model to explain the data with only minor tuning of initial conditions.
###### Acknowledgement 1
D.C. was supported by a Humboldt Fellowship while R.H. was supported in part by the Department of Energy Contract DE-FG02-91-ER40682. |
no-problem/0001/cond-mat0001404.html | ar5iv | text | # Skyrmion Strings and Anomalous Hall Effect in Double Exchange Systems.
## Abstract
We perform Monte Carlo simulations to obtain quantitative results for the anomalous Hall resistance, $`R_A`$, observed in colossal magnetoresistance manganites. $`R_A`$ arises from the interaction between the spin magnetization and topological defects via spin-orbit coupling. We study these defects and how they are affected by the spin-orbit coupling within the framework of the double exchange model. The anomalous Hall resistance we obtain agrees in sign, order of magnitude and shape with the experimental data.
Doped perovskite manganites have attracted much attention lately, since they show the so-called colossal magnetoresistance. These materials undergo a ferromagnetic-paramagnetic transition accompanied by a metal-insulator transition. The Double-Exchange (DE) mechanism plays a major role in explaining this magnetic transition. In the DE picture, the carriers moving through the lattice are strongly ferromagnetically coupled to the $`Mn`$ core spins, and this produces a modulation of the hopping amplitude between neighboring $`Mn`$ ions.
The Hall resistivity $`\rho _H`$ in ferromagnets has two contributions, one proportional to the magnetic field $`𝐇`$ and the other to the spin magnetization $`𝐌`$: $`\rho _H`$=$`R_OH+R_AM`$. $`R_O`$ and $`R_A`$ denote the ordinary (O) and the anomalous (A) Hall resistances (HR). The existence of $`R_A`$ requires a coupling of the orbital motion of the electrons to $`𝐌`$, and the anomalous Hall effect (AHE) is usually explained in terms of skew scattering due to spin-orbit (s-o) interaction.
Several groups have measured $`\rho _H`$ of the doped $`Mn`$ oxide $`La_{0.7}Ca_{0.3}MnO_3`$ at different temperatures ($`T`$). These experiments found that $`R_O`$ and $`R_A`$ have opposite signs, that $`R_A`$ is much bigger than $`R_O`$, and that $`R_A`$ peaks at a $`T`$ above $`T_c`$ and decreases slowly at higher $`T`$. These effects cannot be explained with the conventional skew-scattering theory. Recently, it was proposed that in $`Mn`$ oxides $`R_A`$ arises from the interaction of $`𝐌`$ with non-trivial spin textures (topological charges) via s-o coupling. The number of topological charges in the three-dimensional (3D) Heisenberg model increases exponentially with $`T`$, and Ye et al. extrapolated these results to DE materials, obtaining a rapid increase of $`R_A`$ at low $`T`$. For $`T>T_c`$, although in 3D there is no theory of topological defects, Ye et al. were able to estimate the overall shape of $`R_A`$.
In this work we perform Monte Carlo simulations in order to obtain a quantitative form of $`R_A`$ in DE systems. We compute the number of topological defects as a function of $`T`$ and, by introducing the s-o interaction, we couple the defect orientation to $`𝐌`$. In this way we obtain an AHR which has the same sign, shape and order of magnitude as found experimentally.
Hamiltonian. The electronic and magnetic properties of the $`Mn`$ oxides are described by the DE Hamiltonian,
$`\widehat{H}=`$ $`-`$ $`t{\displaystyle \underset{ij,\sigma }{\sum }}e^{ia\frac{e}{\hbar c}A_{i\widehat{\delta }}}d_{i,\sigma }^+d_{j,\sigma }`$ (1)
$`-`$ $`J_H{\displaystyle \underset{i,\sigma ,\sigma ^{\prime }}{\sum }}d_{i,\sigma }^+𝝈_{\sigma ,\sigma ^{\prime }}d_{i,\sigma ^{\prime }}\cdot 𝐒_i+\widehat{H}_{so}+\widehat{H}_z,`$ (2)
here $`d_{i,\sigma }^+`$ creates an electron with spin $`\sigma `$ at site $`i`$, $`t`$ is the hopping amplitude between nearest-neighbor sites, $`J_H`$ is the Hund's rule coupling energy, $`𝐒_i`$ is the core spin at site $`i`$, and $`\widehat{H}_{so}`$ is the spin-orbit (s-o) interaction. In the tight-binding approximation, the effect of $`𝐇`$ is to modify the hopping matrix element by introducing the phase $`a\frac{e}{\hbar c}A_{i\widehat{\delta }}`$, where $`a`$ is the lattice parameter, $`j=i+\widehat{\delta }`$, and $`𝐀`$ is the vector potential corresponding to $`𝐇`$. The last term in the Hamiltonian is the Zeeman coupling $`\widehat{H}_z=-g\mu _b𝐇\cdot \underset{i}{\sum }𝐒_i`$. We assume that the $`Mn`$ ions form a perfect cubic lattice.
In the limit of infinite $`J_H`$, the electron spin at site $`i`$ should be parallel to $`𝐒_i`$, and $`\widehat{H}`$ becomes
$$\widehat{H}=-t\underset{ij}{\sum }\mathrm{cos}\frac{\theta _{i,j}}{2}e^{i\left(\varphi (i,j)/2+a\frac{e}{\hbar c}A_{i\widehat{\delta }}\right)}c_i^+c_j+\widehat{H}_{so}+\widehat{H}_z.$$
(3)
Now $`c_i^+`$ creates an electron at site $`i`$ with spin parallel to $`𝐒_i`$, with $`𝐦_i=𝐒_i/S`$ and $`\mathrm{cos}\theta _{i,j}=𝐦_i\cdot 𝐦_j`$. The first term in Eq. (3) describes the motion of electrons in a background of core spins. The electron hopping is modulated by the nearest-neighbor spin overlap, $`\mathrm{cos}(\theta _{i,j}/2)`$, the kinetic energy being minimal when all core spins are parallel. This is the DE mechanism for the existence of a ferromagnetic metallic ground state in doped $`Mn`$ oxides. When the orientation of the electron spin is moved around a closed loop, the quantum system picks up a Berry's phase proportional to the solid angle enclosed by the tip of the spin on the unit sphere. $`\varphi (i,j)`$ is this Berry's phase, defined mathematically as the solid angle subtended by the unit vectors $`𝐦_i`$, $`𝐦_j`$ and $`\widehat{z}`$ on the unit sphere.
The phase $`\varphi (i,j)`$ in the Hamiltonian affects the motion of electrons in the same way as does the phase arising from a physical magnetic field. $`\varphi (i,j)`$ is related to internal gauge fields generated by non-trivial spin textures, which appear in the system as $`T`$ increases. In the absence of s-o coupling the phases $`\varphi (i,j)`$ are random and the net internal gauge field is zero. In the presence of s-o coupling there is a privileged orientation for the spin textures and a non-zero average internal gauge field appears.
In the following we study, as a function of T, the appearance of topological defects in the DE model. Once the spin textures are characterized, we study the effect that the s-o coupling has on the defects and we analyze their contribution to the Hall effect.
Topological defects in the DE model. The temperature induces fluctuations in the ferromagnetic state of the core spins, and at $`T>T_c`$ the system becomes paramagnetic. Typically $`T_c\sim 300`$ K, which is much smaller than the electron Fermi energy, $`t\sim 0.1`$ eV; therefore we take the conduction electrons to be at zero temperature.
To first order in the electron wave function, the temperature dependence of the magnetic properties is described by the classical action,
$$S=-\beta t\langle c_i^+c_j\rangle _0\underset{ij}{\sum }\mathrm{cos}\frac{\theta _{i,j}}{2}e^{i\varphi (i,j)/2}+\beta \widehat{H}_z,$$
(4)
where $`\langle \cdots \rangle _0`$ denotes the expectation value at $`T=0`$.
The system described by Eq. (3) is expected to behave with $`T`$ similarly to the Heisenberg model. In particular, at low $`T`$, we expect the occurrence of point singularities like those occurring in the 3D Heisenberg model. These point singularities are called topologically stable because no local fluctuations in a uniform system can produce them. The defects are classified by the topological charge, $`Q`$, which represents the number of times, and the sense in which, spins on a closed surface surrounding the defect cover the surface of a unit sphere in spin space. An example of a topological defect with $`Q=+1(-1)`$ occurs at a position from which all the spins point radially outward (inward).
In order to locate the topological charges in the lattice model we follow the prescription of Berg and Lüscher. For each unit cube of the lattice we divide each of its six faces into two triangles. The three unit-normalized spins at the corners of a triangle $`l`$ define a signed area $`A_l`$ on the unit sphere. The topological charge enclosed by the unit cube is given by
$$Q=\frac{1}{4\pi }\underset{l=1}{\overset{12}{\sum }}A_l.$$
(5)
This definition of $`Q`$ ensures that, when using periodic boundary conditions, the total topological charge in the system is zero. $`Q`$ in each cube is an integer, and the magnitude of nonzero charges is almost always equal to unity. Only at rather high temperatures are a few defects with $`|Q|\ge 2`$ found.
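To illustrate the prescription, the sketch below computes $`Q`$ for a single unit cube. The signed spherical-triangle areas are evaluated with the two-argument arctangent (Van Oosterom-Strackee) solid-angle formula, and the hard-coded triangulation, two outward-oriented triangles per face with corners indexed by their coordinate bits, is our own convention for this sketch, not necessarily the one used in the paper.

```python
import numpy as np

def signed_area(s1, s2, s3):
    # Signed solid angle of the spherical triangle spanned by three unit
    # spins; the sign follows the cyclic order of the vertices.
    num = np.dot(s1, np.cross(s2, s3))
    den = 1.0 + np.dot(s1, s2) + np.dot(s2, s3) + np.dot(s3, s1)
    return 2.0 * np.arctan2(num, den)

# Corner i = 4*ix + 2*iy + iz sits at (2*ix-1, 2*iy-1, 2*iz-1). The twelve
# triangles (two per face) are ordered so every face normal points outward;
# a hedgehog then encloses exactly 4*pi of signed area.
TRIANGLES = [(4, 7, 5), (4, 6, 7), (0, 1, 3), (0, 3, 2),
             (2, 3, 7), (2, 7, 6), (0, 5, 1), (0, 4, 5),
             (1, 5, 7), (1, 7, 3), (0, 6, 4), (0, 2, 6)]

def cube_charge(spins):
    # spins: (8, 3) array of unit vectors on the cube corners; returns Q.
    omega = sum(signed_area(spins[i], spins[j], spins[k])
                for i, j, k in TRIANGLES)
    return int(round(omega / (4.0 * np.pi)))

corners = np.array([[2 * ix - 1, 2 * iy - 1, 2 * iz - 1]
                    for ix in (0, 1) for iy in (0, 1) for iz in (0, 1)],
                   dtype=float)
hedgehog = corners / np.linalg.norm(corners, axis=1, keepdims=True)
print(cube_charge(hedgehog), cube_charge(-hedgehog))  # +1 -1: monopole pair
```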
$`Q`$ can be interpreted as the number of magnetic monopoles enclosed by the unit cube. The quantities $`A_l\varphi _0/4\pi `$ represent the magnetic flux piercing the triangle $`l`$. Here $`\varphi _0=hc/e`$ is the magnetic flux quantum. Following this analogy we assign to each point of the lattice, $`i`$, a three-dimensional internal magnetic field $`𝐛_i`$.
The phase $`\varphi (i,j)`$ can be written in the form,
$$\frac{\varphi (i,j)}{2}=a\frac{e}{\hbar c}a_{i\widehat{\delta }},$$
(6)
where $`𝐛=\nabla \times 𝐚`$. Here it is clear that the $`𝐛`$ associated with a spin texture affects the motion of an electron just as an external magnetic field does.
In a system with a uniform magnetization at the surface, the interaction between a positive and a negative defect is finite and, in the continuous Heisenberg model, increases linearly with separation. Defects with opposite charges are therefore closely bound in pairs at low $`T`$. These pairs of defects are Skyrmion strings (dipoles) which begin at a monopole ($`Q`$=+1) and end at an antimonopole ($`Q`$=-1). The Skyrmions are characterized by a dipole $`𝐏`$ joining $`Q`$=-1 and $`Q`$=+1,
$$𝐏=\frac{a^2}{\varphi _0}\underset{i}{\sum }𝐛_i.$$
(7)
By performing Monte Carlo simulations on the classical variables $`𝐦_i`$, we have studied the dependence on $`T`$ of the number of defects in the system. With this definition of $`Q`$, the number of positive and negative defects is the same, and the important quantity is the average defect pair density $`n`$. In Fig. 1 we plot $`n`$ as a function of $`T`$ for a DE system of size 16$`\times `$16$`\times `$16. We have checked that our results are free of finite-size effects. In the same figure we plot $`m(T)`$. In the DE system $`T_c\simeq 1.06t\langle c_i^+c_j\rangle _0`$. The results correspond to $`g\mu _bH`$=0 and $`g\mu _bH`$=0.067$`T_c`$ ($`H\sim 10`$ Tesla).
We have studied the correlations between defects. For $`T<T_c`$ the $`Q`$=+1 and $`Q`$=-1 defects are strongly coupled, forming Skyrmions. The Skyrmions are very dilute, almost independent, and their density can be fitted by
$$n=\alpha e^{-\beta E_c}.$$
(8)
Here $`E_c`$ and $`\alpha `$ are, respectively, the core energy and the entropy of the Skyrmions. Numerically we obtain $`E_c=7.05T_c`$. This value is slightly smaller than that obtained for the Heisenberg model, $`E_c^{Heis}`$ = 8.7 $`T_c`$. At $`T>T_c`$ the number of defects increases sharply and it becomes very difficult to pair up defects with opposite charges in an unambiguous way. The core energy is practically independent of $`H`$; this result is in agreement with that obtained for pure two-dimensional Skyrmions. The entropy $`\alpha `$ is related to the degeneracy of the Skyrmions in the orientation of $`𝐏`$ and to thermally activated twist and dilatation soft modes. We obtain that for $`H`$=0, $`\alpha `$=51. Note that $`\alpha ^{Heis}\sim 320`$. For finite $`𝐇`$ the six (1,0,0) directions remain degenerate, but some of the twist soft modes become more gapped and the value of the entropy is reduced.
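Extracting $`E_c`$ and $`\alpha `$ from such data amounts to a straight-line fit in an Arrhenius plot, since eq. (8) gives $`\mathrm{ln}n=\mathrm{ln}\alpha -E_c/T`$. A minimal sketch:

```python
import numpy as np

# Fit ln(n) against 1/T. The "data" below are synthetic, noiseless numbers
# standing in for the Monte Carlo densities, with T in units of T_c.
T = np.array([0.40, 0.50, 0.60, 0.70, 0.80])
n = 51.0 * np.exp(-7.05 / T)

slope, intercept = np.polyfit(1.0 / T, np.log(n), 1)
print("E_c   =", -slope, "T_c")      # 7.05
print("alpha =", np.exp(intercept))  # 51
```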
By performing calculations in a system constrained to contain a single Skyrmion with a given dipole $`𝐏`$, we have calculated the energy of isolated Skyrmions. In this way we have checked that in the absence of s-o coupling the energy of the Skyrmion depends only on the absolute value of $`𝐏`$ and not on its orientation. We obtain a core energy of $`E_c^{(1,0,0)}`$ =7.05$`T_c`$ for a Skyrmion with $`𝐏`$=(1,0,0), $`E_c^{(1,1,0)}`$ =8.65 $`T_c`$ for $`𝐏`$=(1,1,0), and $`E_c^{(1,1,1)}`$ =10.11 $`T_c`$ for $`𝐏`$=(1,1,1). The dependence of the energy on $`P`$ confirms the strong confinement of the topological defects. The increase of the Skyrmion energy with $`P`$ implies that for $`T<T_c`$ the only relevant pairs are those with $`P`$=1.
In the case of zero s-o coupling, the Skyrmion core energy does not depend on the direction of $`𝐏`$ and the average internal gauge magnetic field is zero, $`\langle 𝐛\rangle `$=0. The s-o coupling privileges an orientation of the Skyrmions, and this results in a finite value of $`\langle 𝐛\rangle `$.
Spin-orbit interaction. The s-o coupling has two contributions, the interaction of the spin of the carriers with the ion electric field, and the magnetic interaction between the carriers with the core spins $`𝐒_i`$. In the $`J_H\mathrm{}`$ limit, both contributions have the same form and,
$$\widehat{H}_{so}=\lambda _{so}\frac{x}{2}\frac{a^2}{\varphi _0}S\underset{i}{\sum }𝐦_i\cdot 𝐛_i,$$
(9)
where $`x`$ is the carrier concentration. For $`\lambda _{so}\ne 0`$ and $`m\ne 0`$, the Skyrmions are preferentially oriented. If $`𝐇\parallel \widehat{z}`$, then $`𝐦\parallel \widehat{z}`$ and a net $`z`$ component of $`𝐛`$ results.
We have included the s-o interaction in the Monte Carlo simulation and find that $`b_z`$ is linear in $`\lambda _{so}`$ over the whole range of $`T`$. In Fig. 2 we plot, for a system of size $`12\times 12\times 12`$, $`\varphi _z`$=$`b_za^2/\varphi _0`$ as a function of $`T`$. For $`T<T_c`$ our results are free of finite-size effects. For $`T>T_c`$ increasing the system size does not change the overall shape.
It is interesting to analyze the results for $`\varphi _z`$ as a function of the density of defects $`n`$. The main effect of the s-o coupling is to favor the appearance of Skyrmions polarized parallel to $`𝐦`$, with density $`n_{-}`$, over Skyrmions with $`𝐏`$ antiparallel to $`𝐦`$, with density $`n_+`$. This asymmetry produces a $`z`$-component of the internal gauge field, $`b_z=(\varphi _0/a^2)(n_{-}-n_+)`$. Assuming that the Skyrmions are independent,
$$n_{-}-n_+=\frac{1}{3}n\mathrm{sinh}(\epsilon _0/T),$$
(10)
where
$$\epsilon _0=\frac{\lambda _{so}x}{2}\stackrel{~}{m}(T),$$
(11)
is the s-o interaction energy of a Skyrmion with $`𝐏=(0,0,1)`$, and $`\stackrel{~}{m}(T)`$ is the spin polarization inside the Skyrmion. Finally, within the independent-Skyrmion picture, after linearizing Eq. (10),
$$\varphi _z^{ind}=\frac{n}{6}\frac{\lambda _{so}}{T}x\stackrel{~}{m}(T).$$
(12)
$`\stackrel{~}{m}(T)`$ is expected to be a function of $`m(T)`$. By comparing the expression for $`\varphi _z^{ind}`$ with the Monte Carlo results (Fig. 2), we obtain that for $`T\lesssim T_c`$, $`\stackrel{~}{m}(T)`$=$`m(T)/5`$. That means that the spin polarization inside the Skyrmions is five times smaller than the average spin polarization. It is interesting to note how the independent-Skyrmion picture describes not just the low-temperature regime, but also the trends of $`b_z`$ at $`T\gtrsim T_c`$.
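The linearization behind eq. (12) is easy to verify numerically. In the sketch below, $`\lambda _{so}`$=5 K and $`T_c`$=270 K are the values quoted further on, $`x`$=0.3 matches the doping, and $`\alpha `$, $`E_c`$ are the fitted values above; the magnetization curve is a crude linear stand-in, not the measured $`m(T)`$.

```python
import numpy as np

lam_so = 5.0 / 270.0             # lambda_so in units of T_c
x, alpha, E_c = 0.3, 51.0, 7.05

T = np.linspace(0.3, 0.95, 14)        # T / T_c
n = alpha * np.exp(-E_c / T)          # eq. (8)
m_tilde = (1.0 - T) / 5.0             # crude stand-in for m(T)/5
eps0 = 0.5 * lam_so * x * m_tilde     # eq. (11)

exact = (n / 3.0) * np.sinh(eps0 / T)            # eq. (10): n_- - n_+
linear = (n / 6.0) * (lam_so / T) * x * m_tilde  # eq. (12): phi_z^ind
print(np.max(np.abs(exact / linear - 1.0)))      # ~3e-7: linearization is safe
```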
The sign of $`\varphi _z`$ depends on the sign of $`\lambda _{so}`$. Physically, we expect the motion of the electrons to generate an internal gauge field which acts to cancel the applied field, so a negative sign for $`b_z`$ is expected.
Anomalous Hall effect. The Hall resistivity can be written as $`\rho _H=(H+b)/nec`$. Comparing this expression with the definition of $`R_A`$ we obtain
$$\frac{R_A}{R_O}=\frac{a^3}{g\mu _b}\frac{b_z}{m}.$$
(13)
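For orientation, eq. (13) can be evaluated with $`b_z=\varphi _z\varphi _0/a^2`$. The lattice constant and $`g`$-factor below are typical manganite values that we assume, and $`\varphi _z`$, $`m`$ are purely illustrative inputs.

```python
# Order-of-magnitude evaluation of eq. (13) in Gaussian units.
a = 3.9e-8                 # cm; assumed Mn-Mn spacing in La_0.7Ca_0.3MnO_3
phi0 = 4.14e-7             # G cm^2; flux quantum hc/e
g_muB = 2.0 * 9.274e-21    # erg/G; assumed g = 2
phi_z, m = 2.0e-5, 0.8     # illustrative b_z a^2/phi0 and reduced magnetization

b_z = phi_z * phi0 / a**2
print(a**3 * b_z / (g_muB * m))   # R_A/R_O ~ 20 for these inputs
```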
By using the appropriate parameters, we plot in Fig. 3 $`R_A/R_O`$ for $`La_{0.7}Ca_{0.3}MnO_3`$. We take $`T_c\simeq 270`$ K, $`\lambda _{so}`$=5 $`K`$ and $`g\mu _bH=0.067T_c`$.
Fig. 3 is the main result of our calculation. We obtain an AHR which i) is negative, ii) increases exponentially and becomes evident at $`T\sim T_c/2`$, iii) at temperatures close to $`T_c`$ is around 20 times bigger than the ordinary Hall resistance, and iv) has a maximum at temperatures slightly higher ($`\sim 30`$ K) than $`T_c`$. The subsequent decrease is due to thermal fluctuations that destroy the directional order of the Skyrmions. If we take into account the fact that the conductance at $`T>T_c`$ is mainly due to polaron hopping, an extra factor of $`1/T^2`$ is expected, leading to a steeper decrease. Our results are in good agreement with the data obtained experimentally. We have checked that these results do not depend significantly on the applied magnetic field.
In addition, we have found, by diagonalizing the electron Hamiltonian, that there is no significant electronic charge associated with the Skyrmions. This is in contrast to quantum Hall ferromagnets, where the topological and electrical charges are equivalent. The Skyrmions we have studied appear with increasing $`T`$, but we have found that they can also appear at $`T=0`$ when spin defects (in particular, antiferromagnetic islands) are present in the system.
In closing, we have computed the anomalous Hall resistance in colossal magnetoresistance manganites. We have obtained, as a function of temperature, the average density of Skyrmions in the double-exchange model. By introducing a spin-orbit interaction, we obtain a coupling between the Skyrmion orientation and the magnetization, which results in the appearance of an anomalous Hall resistance. The sign, order of magnitude and shape of the obtained anomalous Hall resistance are in agreement with the experimental data.
This work was supported by the Cicyt of Spain under Contract No. PB96-0085 and by the CAM under Contract No. 07N/0027/1999. |
no-problem/0001/astro-ph0001259.html | ar5iv | text | # Could merged star-clusters build up a small galaxy?
## 1. Introduction
Interacting galaxies like the Antennae (NGC 4038/4039; Whitmore & Schweizer 1995) or Stephan's Quintet (HCG 92; Hunsberger 1997) show intense star-burst activity in their tidal features. High-resolution images from the HST resolve these regions into many compact groups of young massive star clusters (i.e. super-clusters) and/or tidal-tail dwarf galaxies, with typical radii of 100–500 pc. Here we aim to study the future fate of these super-clusters.
We begin with an overview of the numerical method and explain the setup of our simulations. We then show results obtained so far and conclude with an outlook on future work we intend to pursue.
## 2. Superbox
Superbox is a hierarchical particle-mesh code with high-resolution sub-grids focusing on the cores and on the star clusters as a whole, moving with them through the simulation area (Fellhauer et al. 2000). For a particle-mesh code, it has a highly accurate force calculation based on a nearest grid-point (NGP) scheme. The main advantages of Superbox are its speed and its low memory requirement, which make it possible to use a high particle number with high grid resolution on ordinary desktop computers.
## 3. Setup
As a model for our massive star clusters we use, for each, a Plummer sphere with 100,000 particles, a Plummer radius $`R_{\mathrm{pl}}=6`$ pc and a cutoff radius $`R_{\mathrm{cut}}=15`$ pc, giving a total mass of $`10^6\mathrm{M}_{\odot }`$ and a crossing time of $`1.4`$ Myr. Twenty of these clusters are placed in a compact group orbiting in a logarithmic potential of the parent galaxy,
$`\mathrm{\Phi }`$ $`=`$ $`{\displaystyle \frac{1}{2}}v_{\mathrm{circ}}^2\mathrm{ln}\left(R_{\mathrm{gal}}^2+r^2\right),`$ (1)
with $`R_{\mathrm{gal}}=4`$ kpc and $`v_{\mathrm{circ}}=220`$ km/s. The case $`v_{\mathrm{circ}}=0`$ is dealt with in Kroupa (1998). The distribution of the clusters within the super-cluster is also Plummer-like, with different Plummer radii (Table 1). The tidal radius is $`\sim `$ 2.4 kpc at apo-galacticon and 1.2 kpc at peri-galacticon. The orbits have the same eccentricity in all cases, and begin at apo-galacticon ($`x=60`$ kpc, $`y,z=0`$) with $`v_y=150`$ km/s ($`v_x=v_z=0`$).
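To make the setup concrete, here is a minimal sketch of the initial conditions: sampling one truncated Plummer sphere and evaluating the acceleration of the logarithmic potential of eq. (1). Units are handled loosely (positions in pc, $`v_{\mathrm{circ}}`$ in km/s), so this is a geometry illustration rather than the actual Superbox setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def plummer_positions(n, r_pl=6.0, r_cut=15.0):
    # Invert the Plummer cumulative mass fraction u = r^3/(r^2 + R_pl^2)^(3/2),
    # i.e. r = R_pl / sqrt(u**(-2/3) - 1), rejecting radii beyond the cutoff.
    pos = []
    while len(pos) < n:
        u = 1.0 - rng.random()                 # u in (0, 1]
        r = r_pl / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
        if r > r_cut:
            continue
        d = rng.normal(size=3)
        pos.append(r * d / np.linalg.norm(d))  # isotropic direction
    return np.array(pos)                       # (n, 3), in pc

def log_potential_accel(xyz, v_circ=220.0, r_gal=4.0e3):
    # From eq. (1): a = -grad(Phi) = -v_circ^2 * xyz / (R_gal^2 + r^2).
    r2 = np.sum(xyz ** 2, axis=-1, keepdims=True)
    return -v_circ ** 2 * xyz / (r_gal ** 2 + r2)

cluster = plummer_positions(10000)
print(np.median(np.linalg.norm(cluster, axis=1)))          # ~6.5 pc here
print(log_potential_accel(np.array([[6.0e4, 0.0, 0.0]])))  # at apo-galacticon
```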
## 4. First Results
### 4.1. Global properties of the Super-Cluster
In all runs some of the clusters merge very rapidly, within the first $`\sim 100`$ Myr, as seen in Fig. 1.
The number of surviving clusters drops with increasing concentration of the super-cluster (SC). The orbits of the clusters inside their super-cluster also change rapidly, owing to the very short relaxation time of the SC ($`T_{\mathrm{relax}}\sim T_{\mathrm{cr}}^{\mathrm{sc}}`$). Thereafter the merger-object and the surviving clusters move on epicycles around the orbit of the original SC about the galactic centre, with increasing amplitude and period due to tidal heating (Fig. 2). The surviving star clusters remain in the vicinity of the merger-object (Fig. 3).
In the intermediate-concentration case (run5) we find 2 merger-objects. To check whether this was just a chance event, we performed a second run with a different random-number seed for the star-cluster positions; it also gave 2 objects. Interesting in this context is that Theis (1996) found that two clusters form in a cold collapse of a stellar system in an external tidal field. Violent relaxation in an external tidal field needs further study in the present context.
### 4.2. Internal properties of merger-objects
In those cases where the SC initially has a low concentration, the resulting merger-system is an extended object (several kpc across) with a low density and an off-centre nucleus (Fig. 3). The radial density profile follows an exponential distribution (Fig. 4). The velocity dispersion in these objects drops to $`\sim `$ 5 km/s (Fig. 5), which is comparable with the measured dispersions of the dSph galaxies of the Milky Way, and is slightly anisotropic.
In the case of the most concentrated SC, the merger-object is a dense, compact spheroidal object (Fig. 3) with a high-density core ($`10^4\mathrm{M}_{\odot }/\mathrm{pc}^3`$). The density profile follows a power law, $`r^{-3}`$, out to 0.8–1.0 kpc (Fig. 4). The velocity dispersion of this spheroidal dwarf galaxy is anisotropic and around 15 km/s (Fig. 5). It has a half-mass radius of about 300 pc. The size of this object is too large in comparison with even the biggest globular clusters, and the central density is far too high to be comparable with any dE, dIrr or dSph galaxy. However, it has properties similar to the initial satellites studied by Kroupa (1997), and is thus likely to evolve into a dSph-like satellite. But further investigation of this issue is necessary.
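The profile diagnostics used here are simple to reproduce: bin particle radii into spherical shells and check whether $`\mathrm{ln}\rho `$ is straight against $`\mathrm{ln}r`$ (power law) or against $`r`$ (exponential). The positions below are a synthetic stand-in for simulation output.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(scale=300.0, size=(20000, 3))   # fake remnant, lengths in pc

def radial_density(pos, nbins=25):
    r = np.linalg.norm(pos, axis=1)
    edges = np.logspace(np.log10(r.min()), np.log10(r.max()), nbins + 1)
    counts, _ = np.histogram(r, bins=edges)
    vol = 4.0 * np.pi / 3.0 * (edges[1:] ** 3 - edges[:-1] ** 3)
    return np.sqrt(edges[1:] * edges[:-1]), counts / vol   # bin centres, rho

r_mid, rho = radial_density(pos)
ok = rho > 0
slope = np.polyfit(np.log(r_mid[ok]), np.log(rho[ok]), 1)[0]  # power-law index
scale = -1.0 / np.polyfit(r_mid[ok], np.log(rho[ok]), 1)[0]   # exp. scale (pc)
print(slope, scale)   # compare the two fits' residuals to pick the model
```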
## 5. Conclusion and Outlook
We have found that even if new high-resolution images show that most of the so-called tidal dwarf galaxies are groups of young compact massive star clusters, these clusters are likely to merge on a short time-scale. The properties of the merger-objects differ with the scale-length of their initial distribution. We find large fluffy objects with properties similar to those of the local dSph galaxies, as well as very compact and massive spheroidal objects, which may be similar to the progenitors of some of the local present-day dSph satellites.
In future work we intend to investigate the influence of the choice of orbit around the parent galaxy and how it alters the results. We will focus on the transition between bound and unbound objects, and look for a region of parameter space where 2 merger-objects (a binary system) are more likely to form. Our further research will also address the future fate of the merger-objects and their possible counterparts in reality.
## References
Fellhauer, M., et al. 2000, submitted to NewA
Hunsberger, S. D. 1997, BAAS, 29, 1406
Kroupa, P. 1997, NewA, 2, 139-164
Kroupa, P. 1998, MNRAS, 300, 200-204
Theis, C. 1996 Astron.Soc.Pac.Conf.Ser., 112, 35-44
Whitmore, B. C., Schweizer, F. 1995, AJ, 109, 960-980, 1412-1416
#### Acknowledgments.
We thank C. M. Boily for providing helpful comments, and R. Spurzem for supporting this project at the ARI.
U. Fritze-von Alvensleben: It would be interesting to try and see if the age distribution among clusters that are clustered is different (younger) from that of YSCs distributed more homogeneously. What do you expect to happen to the merged clusters that you compared to a dSph? Isn't it bound to sink into the core of the merger remnant by dynamical friction? The bulk of the YSCs is at $`d\lesssim 10`$ kpc from the nucleus of NGC 4038, i.e. not further away than NGC 4039 (nucleus)!
Answer: We assume that at least some of the tidal-tail dwarfs seen to form in outer ($`>30`$ kpc) tidal arms are composed of clusters of young massive star clusters. For our models we use an analytic galactic potential, because in the mass range of the merger-object, about $`10^7\mathrm{M}_{\odot }`$, dynamical friction does not play a significant role.
E. Grebel: Could you comment on how the merged clusters will resemble a dSph galaxy in their properties? (E.g. dSphs don't show rotation, have very low density and surface brightness, etc.)
Answer: It is too early to quantify the reply in detail, but we expect the merged object to show properties that could make it look similar to the progenitors of some of the dSph satellites. The merged object is spheroidal, has a high specific frequency of globular clusters, and has low angular momentum, which however depends on the initial conditions. Its stellar population contains stars from the mother galaxy, as well as stars formed during the star burst, and maybe stars (and clusters) formed during a possible later accretion event of a co-moving gas cloud. Many of these issues are discussed in Kroupa (1998).
J. Gallagher (comment): Since super-star-clusters are often born in groups – the luminous clumps – destruction via cluster merging is of general interest. For example, if this process reduces the survival rate of massive-star-clusters, it might help to explain why intermediate age examples seem to be rare.
D. McLaughlin: What evidence is there that the clustered clusters in the Antennae will actually merge? There are $`10^9\mathrm{M}_{\odot }`$ clouds of gas in this system, so it may be that young clusters are clustered because several form in any given cloud, but that they disperse after gas loss. With no information on either the cluster-cluster velocity dispersion or the time when the parent cloud was disrupted, it would be difficult to rule out this possibility.
Answer: This is an important issue, and very similar to the problem of forming bound star clusters. While not disproving rapid dispersal entirely, the argument which makes it less likely is as follows. The cluster-cluster velocity dispersion, $`\sigma _{\mathrm{clcl}}`$, is either small, which will lead to a bound merger-object; or, if $`\sigma _{\mathrm{clcl}}`$ is near virial for the stellar mass in the super-cluster, then our models take care of that. If, however, $`\sigma _{\mathrm{clcl}}`$ is virial for the stars plus a much larger mass in gas, then $`\sigma _{\mathrm{clcl}}>20`$ km/s, assuming a star-formation efficiency of 20 per cent and a pre-gas-removal super-cluster configuration as in run5 here. Thus, within 10 Myr, the object will have expanded to a radius of at least 350 pc. Since many of the super-clusters are still very concentrated, and about 10 Myr old, rapid expansion does not appear to be taking place, especially since there are at least about 10 observed super-clusters and they would have had to start in unrealistically concentrated configurations for them to appear with the sizes they have now. This is further discussed in Kroupa (1998).
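A quick unit check of the time-scale in this answer (conversion only; the 350 pc figure also folds in the initial extent of the super-cluster and a $`\sigma _{\mathrm{clcl}}`$ above the 20 km/s lower bound):

```python
# 1 km/s is about 1.02 pc/Myr, so unbound clusters with sigma_clcl = 20 km/s
# drift ~200 pc apart within 10 Myr, proportionally more for larger sigma.
PC_PER_MYR_PER_KMS = 3.156e13 / 3.086e13   # km travelled in 1 Myr / km per pc
sigma, age = 20.0, 10.0                     # km/s, Myr
print(sigma * PC_PER_MYR_PER_KMS * age, "pc")   # ~205 pc
```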