no-problem/0003/hep-ph0003287.html
## 1 Introduction
The extraordinary smallness of the observational upper bound on the cosmological constant conflicts with naive field theoretic expectations. This is a well-known fundamental problem in our understanding of the interplay between gravity and the known quantum field theories (see, e.g., the review literature). Recent observations suggesting a small non-zero value for the cosmological constant make this problem even more severe, since an exact zero, possibly the result of a yet unknown symmetry, is replaced by a small number, $\epsilon_{vac}/M_p^4\approx 2\times 10^{-123}$. In any fundamental unified theory this number would have to be calculable.
A possible alternative is a homogeneous contribution to the energy density of the universe which varies with time. It is typically connected to a time varying scalar field – the cosmon – which relaxes asymptotically for large time to zero potential energy. The late time behavior in these models of “quintessence” is insensitive to the initial conditions due to the stable attractor properties of the asymptotic solution. The homogeneous fraction of the cosmic energy density may be constant or slowly increase with time (say from 0.1 during nucleosynthesis to 0.7 today), in sharp contrast to a cosmological constant, which needs to be adjusted such that it becomes important precisely today.
It has been argued that the relaxation to a zero value of the effective potential, rather than to a constant, is connected to the dilatation anomaly. In the absence of a fundamental theory it is, however, not obvious how to verify (or falsify) this assertion. We explore here an alternative possibility, namely that the value to which the effective cosmon potential relaxes is itself governed by a field which dynamically adjusts the “cosmological constant” to zero. We consider our proposal as an existence proof that such a mechanism can work. It is conceivable that simpler and more elegant models can be found once the basic adjustment mechanism is identified.
Rubakov has recently suggested a mechanism for the dynamical adjustment of the cosmological constant to zero that avoids Weinberg’s no-go theorem in a very interesting way (cf. other recent work on adjustment mechanisms). In his scenario, a scalar field governing the value of the cosmological constant rolls down a potential and approaches the zero of the potential, i.e., the point where the cosmological constant vanishes, as $t\to\infty$. Such behavior is realized by a diverging kinetic term, which depends on a second scalar field. This field, a Brans-Dicke field that couples to the current value of the cosmological constant, ensures the stability of the solution.
However, in Rubakov’s model the universe is inflating after the adjustment of the cosmological constant. This makes it necessary to add a period of reheating, which implies the need to fine tune the minimum of the inflaton potential to zero. In this sense, the fine tuning problem for the total effective potential is now shifted to the inflaton sector. Furthermore, it is difficult to imagine testable phenomenological consequences of such an adjustment at inflation.
The present paper suggests a dynamical adjustment mechanism for the cosmological constant that can be at work in a realistic, late Friedmann-Robertson-Walker universe. In this model, the energy density is dominated by non-standard-model dark matter together with a cosmon field $\phi$. This field can account for a homogeneous part of the total energy density which does not participate in structure formation and may lead to an accelerated expansion of the universe today. As in Rubakov’s scenario, the cosmological constant, characterized by a field $\chi$, rolls down a potential and approaches zero asymptotically. This is realized by a kinetic term for $\chi$ that depends on $\phi$ and diverges as $t^4$ at large $t$, where $\phi(t)\to\infty$. To make this solution insensitive to changing initial conditions, a Brans-Dicke field $\sigma$ is introduced. This field ‘feels’ the current value of the cosmological constant and provides the required feedback to the diverging kinetic term.
Thus, a realistic, late cosmology with an asymptotically vanishing cosmological constant arises. Baryons can be added as a small perturbation and do not affect the stability of the solution. In the concrete numerical example provided below, their coupling to the Brans-Dicke field $\sigma$ is not yet realistic.
In the following, the cosmological model outlined above is explicitly constructed.
## 2 Adjusting a scalar potential to zero in a given Friedmann-Robertson-Walker background
The action of the present model can be decomposed according to
$$S=S_E+S_{SF}+S_{SM},$$
(1)
where $S_E$ is the Einstein action, $S_{SF}$ the scalar field action, and $S_{SM}$ the standard model action, which is written in the form
$$S_{SM}=S_{SM}[\psi,g_{\mu\nu},\chi]=\int d^4x\,\sqrt{g}\,\mathcal{L}_{SM}(\psi,g_{\mu\nu},\chi).$$
(2)
Here $g=\det(g_{\mu\nu})$ and $\psi$ stands for the gauge fields, fermions and non-singlet scalar fields of the standard model (or some supersymmetric or grand unified extension). The scalar singlet $\chi$ is assumed to govern the effective UV-cutoffs of the different modes of $\psi$, thereby influencing the effective cosmological constant. Units are chosen such that $M^2=(16\pi G_N)^{-1}=1$.
Integrating out the fields $\psi$, one obtains (up to derivative terms)
$$S_{SM}=\int d^4x\,\sqrt{g}\,V(\chi).$$
(3)
Let the potential $V(\chi)$ have a zero, $V(\chi_0)=0$ with $\alpha=V^{\prime}(\chi_0)$, and rename the field according to $\chi\to\chi_0+\chi$. Then the action near $\chi=0$ becomes
$$S_{SM}=\int d^4x\,\sqrt{g}\,\alpha\chi.$$
(4)
Due to this potential the field $\chi$ will decrease (for $\alpha>0$) during its cosmological evolution. It can be prevented from rolling through the zero by a diverging kinetic term.
First, let the geometry be imposed on the system, i.e., assume a flat FRW universe with Hubble parameter $H=(2/3)t^{-1}$, independent of the dynamics of $\chi$. With a kinetic Lagrangian
$$\mathcal{L}_{SF}=\frac{1}{2}\partial^\mu\chi\,\partial_\mu\chi\,F(t),$$
(5)
one obtains the following equation of motion for $\chi$:
$$\ddot{\chi}+(3H+\dot{F}/F)\dot{\chi}+(\partial V/\partial\chi)/F=0.$$
(6)
Here $F(t)=t^4$ is an externally imposed condition which will be realized below by the quintessence field, which serves as a clock. Equation (6) has the particular solution $\chi=(\alpha/6)t^{-2}$, which provides an acceptable late cosmology since all energy densities associated with $\chi$ scale as $t^{-2}$.
Clearly, it requires fine tuning of the initial conditions to achieve the desired behavior $\chi\to 0$ as $t\to\infty$, which is realized in this particular solution. However, this fine tuning can be avoided by adding a Brans-Dicke field $\sigma$ that ‘feels’ the deviation of $\chi$ from zero and provides the appropriate ‘feedback’ to the kinetic term so that $\chi$ reaches zero asymptotically independent of its initial value.
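The fine-tuning statement can be made explicit: for this background the homogeneous solutions of Eq. (6) are $\chi=c_1+c_2t^{-5}$, so a generic initial condition relaxes to a nonzero constant $c_1$ rather than to zero. The following minimal numerical sketch (our illustration, with the assumed normalization $\alpha=1$ and illustrative initial data) makes this visible:

```python
# Illustrative check of Eq. (6) with the imposed background H = (2/3)/t,
# F(t) = t^4 and V = alpha*chi. General solution:
#   chi(t) = c1 + c2*t**-5 + (alpha/6)*t**-2,
# so only initial data with c1 = 0 give chi -> 0 as t -> infinity.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0  # assumed normalization

def rhs(t, y):
    chi, chidot = y
    H = (2.0 / 3.0) / t
    F_dot_over_F = 4.0 / t           # F = t^4
    F = t**4
    return [chidot, -(3.0 * H + F_dot_over_F) * chidot - alpha / F]

t0, t1 = 1.0, 1.0e3
# On the particular solution chi = (alpha/6) t^-2 at t = 1:
y_tuned = [alpha / 6.0, -alpha / 3.0]
# Generic initial data: same velocity, chi shifted by 0.1:
y_generic = [alpha / 6.0 + 0.1, -alpha / 3.0]

for y0, label in [(y_tuned, "tuned"), (y_generic, "generic")]:
    sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12)
    print(f"{label:8s} chi({t1:g}) = {sol.y[0, -1]: .3e}  "
          f"(particular solution: {alpha / 6.0 / t1**2:.3e})")
```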
The field $\sigma$ has a canonical kinetic term and it is coupled to $\mathcal{L}_{SM}$ by the substitution $g_{\mu\nu}\to g_{\mu\nu}\sqrt{\sigma}$ in Eq. (2),
$$S_{SM}=S_{SM}[\psi,g_{\mu\nu}\sqrt{\sigma},\chi]=\int d^4x\,\sigma\sqrt{g}\,\mathcal{L}_{SM}(\psi,g_{\mu\nu}\sqrt{\sigma},\chi).$$
(7)
Integrating out the fields $\psi$, one obtains now
$$S_{SM}=\int d^4x\,\sqrt{g}\,\alpha\sigma\chi$$
(8)
near $\chi=0$. The scalar field Lagrangian is now taken to be
$$\mathcal{L}_{SF}=\frac{1}{2}(\partial\chi)^2F(\sigma,t)+\frac{1}{2}(\partial\sigma)^2-\beta\sigma t^{-2},$$
(9)
where $F(\sigma,t)=\sigma^2t^4$. The additional $t$ dependence of the $\beta$ term is again introduced ad hoc and will later on be realized by the dynamics of the cosmon.
Now Eq. (6) is supplemented with the equation of motion for $\sigma$,
$$\ddot{\sigma}+3H\dot{\sigma}+\alpha\chi-\beta t^{-2}-\sigma\dot{\chi}^2t^4=0.$$
(10)
The combined equations have the asymptotic solution $\chi=\chi_0t^{-2}$ and $\sigma=\sigma_0=\text{const.}$ with $\chi_0=3\beta/\alpha$ and $\sigma_0=\alpha^2/(18\beta)$. The last term in Eq. (9) was introduced to allow this solution. The above solution is stable, i.e., for a range of initial conditions one still finds the desired asymptotic behavior $\chi\propto t^{-2}$ and $\sigma\to\text{const.}$ for $t\to\infty$. This is easy to check numerically setting, e.g., $\alpha=\beta=1$; a sketch of such a check is given below. The stability does not depend on the precise values of these parameters.
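A minimal sketch of this numerical check (ours, not the authors’ code; the initial data are illustrative) integrates Eqs. (6) and (10) together with $F=\sigma^2t^4$:

```python
# Illustrative integration of the coupled chi-sigma system, Eqs. (6) and (10),
# in the imposed background H = (2/3)/t, with alpha = beta = 1. Perturbed
# initial data should approach chi -> (3 beta/alpha) t^-2, sigma -> alpha^2/(18 beta).
import numpy as np
from scipy.integrate import solve_ivp

alpha = beta = 1.0
chi0 = 3.0 * beta / alpha            # coefficient of the t^-2 asymptotics
sigma0 = alpha**2 / (18.0 * beta)    # asymptotic constant

def rhs(t, y):
    chi, chid, sig, sigd = y
    H = (2.0 / 3.0) / t
    F = sig**2 * t**4
    F_dot_over_F = 2.0 * sigd / sig + 4.0 / t
    chidd = -(3.0 * H + F_dot_over_F) * chid - alpha * sig / F
    sigdd = (-3.0 * H * sigd - alpha * chi + beta / t**2
             + sig * chid**2 * t**4)
    return [chid, chidd, sigd, sigdd]

t0, t1 = 1.0, 1.0e4
# Start 20% away from the asymptotic solution (illustrative perturbation):
y0 = [1.2 * chi0, -2.0 * chi0, 0.8 * sigma0, 0.0]
sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-14)
print("chi * t^2 ->", sol.y[0, -1] * t1**2, "  (expected", chi0, ")")
print("sigma     ->", sol.y[2, -1], "  (expected", sigma0, ")")
```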
Thus, an asymptotic decay of the energy densities associated with the fields $\chi$ and $\sigma$, which is sufficiently fast to be consistent with a FRW cosmology, is realized without any fine tuning. What remains to be done is the replacement of the various $t$-dependent functions by the dynamics of appropriate fields.
## 3 Adding matter, quintessence, and gravitational dynamics
To embed the above adjustment mechanism in a realistic universe that includes matter and gravitational dynamics, assume first that the energy densities associated with $\chi$ and $\sigma$ remain small compared to the total energy density throughout the evolution. The total energy density is taken to consist of 20% dark matter and 80% quintessence. The cosmon field can be used to realize the explicit $t$ dependence in Eq. (9).
It is known that a system with matter and a scalar field $\phi$ that is governed by an exponential potential $V_Q(\phi)=e^{-a\phi}$ gives rise to a realistic late cosmology with a fixed ratio of matter and field energy densities. The differential equations describing the system read
$$\ddot{\phi}+3H\dot{\phi}+V_Q^{\prime}(\phi)=0$$ (11)
$$6H^2=\rho+\frac{1}{2}\dot{\phi}^2+V_Q(\phi)$$ (12)
$$\dot{\rho}+3H\rho=0,$$ (13)
where $\rho$ is the density of dark matter. One finds the stable solution $H=(2/3)t^{-1}$, $\phi=(2/a)\ln t$ and $\rho=\rho_0t^{-2}$. For $a^2=2$ one has $\rho_0=2/3$, which corresponds to a realistic dark matter to quintessence ratio. The explicit time dependence in Eq. (9) can now be replaced by a coupling to $\phi$. Technically, this is realized by the substitution $t^{-2}\to e^{-a\phi}$ in $\mathcal{L}_{SF}$.
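The quoted scaling solution can be verified by direct substitution; the following short symbolic check is our sketch of that verification:

```python
# Plug-in check that H = (2/3)/t, phi = (2/a) ln t, rho = rho0/t^2 solve
# Eqs. (11)-(13) with V_Q(phi) = exp(-a phi), and that a^2 = 2 gives rho0 = 2/3.
import sympy as sp

t, a, rho0 = sp.symbols("t a rho0", positive=True)
H = sp.Rational(2, 3) / t
phi = (2 / a) * sp.log(t)
VQ = sp.exp(-a * phi)                     # reduces to t**-2
rho = rho0 / t**2

eq11 = sp.diff(phi, t, 2) + 3 * H * sp.diff(phi, t) - a * VQ  # V_Q' = -a V_Q
eq12 = 6 * H**2 - rho - sp.Rational(1, 2) * sp.diff(phi, t)**2 - VQ
eq13 = sp.diff(rho, t) + 3 * H * rho

print(sp.simplify(eq11 * a * t**2))  # 2 - a**2: Eq. (11) requires a^2 = 2 here
print(sp.simplify(eq13))             # 0: Eq. (13) holds identically
print(sp.solve(sp.Eq(eq12.subs(a, sp.sqrt(2)), 0), rho0))  # [2/3]
```

(For general $a$ the solution needs an additive constant in $\phi$; with the normalization chosen here, $a^2=2$ closes the system.)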
Now the $\chi$–$\sigma$ system of the last section and the $\phi$–$\rho$–gravity system described above have to be combined, and it has to be checked whether the stability of each separate system suffices to ensure the stability of the complete system.
The complete Lagrangian, including the curvature term and the effective standard model action, Eq. (8), reads
$$\mathcal{L}=-R+\frac{1}{2}(\partial\chi)^2F(\sigma,\phi)+\frac{1}{2}(\partial\sigma)^2+\frac{1}{2}(\partial\phi)^2+V(\chi,\sigma,\phi),$$
(14)
where
$$F(\sigma,\phi)=\sigma^2e^{2a\phi}\qquad\text{and}\qquad V(\chi,\sigma,\phi)=\alpha\sigma\chi+(1-\beta\sigma)e^{-a\phi}.$$
(15)
In a flat FRW universe, it gives rise to the equations of motion
$$\ddot{\chi}+(3H+\dot{F}/F)\dot{\chi}+\frac{1}{F}\frac{\partial V}{\partial\chi}=0$$ (16)
$$\ddot{\sigma}+3H\dot{\sigma}+\frac{\partial}{\partial\sigma}\left(V-\frac{1}{2}\dot{\chi}^2F\right)=0$$ (17)
$$\ddot{\phi}+3H\dot{\phi}+\frac{\partial}{\partial\phi}\left(V-\frac{1}{2}\dot{\chi}^2F\right)=0$$ (18)
$$6H^2=\frac{1}{2}\dot{\chi}^2F+\frac{1}{2}\dot{\sigma}^2+\frac{1}{2}\dot{\phi}^2+V+\rho$$ (19)
$$\dot{\rho}+3H\rho=0.$$ (20)
They have the asymptotic solution
$$\chi=\chi_0t^{-2},\qquad\sigma=\sigma_0,\qquad\phi=\phi_0+(2/a)\ln t,\qquad\rho=\rho_0t^{-2},\qquad H=(2/3)t^{-1}.$$
(21)
For the parameters $\alpha=\beta=1$ and $a^2=2$, one finds
$$\chi_0=\frac{3}{c},\qquad\sigma_0=\frac{1}{18c},\qquad\phi_0=\frac{\ln c}{\sqrt{2}},\qquad\rho_0=\frac{5}{3}-\frac{1}{c}-\frac{1}{6c^2},$$
(22)
where $c=\left(1+\sqrt{11/9}\right)/2$. The clustering part of the energy density is $\rho/(6H^2)\approx 0.21$.
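For reference, evaluating these constants numerically (a trivial check, our addition) reproduces the quoted clustering fraction:

```python
# Numerical evaluation of the constants in Eq. (22); with H = (2/3)/t and
# rho = rho0/t^2, the clustering fraction is rho/(6 H^2) = rho0/(8/3).
import math

c = (1.0 + math.sqrt(11.0 / 9.0)) / 2.0
chi0 = 3.0 / c
sigma0 = 1.0 / (18.0 * c)
phi0 = math.log(c) / math.sqrt(2.0)
rho0 = 5.0 / 3.0 - 1.0 / c - 1.0 / (6.0 * c**2)

print(f"c = {c:.4f}, chi0 = {chi0:.4f}, sigma0 = {sigma0:.5f}, phi0 = {phi0:.5f}")
print(f"rho0 = {rho0:.4f}, rho/(6 H^2) = {rho0 / (8.0 / 3.0):.3f}")  # ~0.21
```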
We have checked numerically that the above solution is stable, i.e., that a small variation of the initial conditions does not affect the asymptotic behavior for $t\to\infty$. The stability does not depend on the precise values of the parameters $\alpha$, $\beta$ and $a$.
A small amount ($\sim 1\%$) of baryons can be introduced as a perturbation. In the present context, baryons are quite different from the above non-standard-model dark matter since, by virtue of Eq. (7), they couple to the Brans-Dicke field $\sigma$. Thus, the baryonic energy density is $\sim\sigma n_B$, where $n_B$ is the number density of baryons. It has been checked that such an additional term does not affect the stability of the above solution. Solar system tests of the post-Newtonian approximation to general relativity place an upper bound on the coupling of baryons to almost massless scalar fields. In the present setting, the relevant coupling depends on $c$, whose present numerical value is not compatible with phenomenology in this respect. However, this can probably be avoided in a more carefully constructed model or by the ad hoc introduction of a kinetic term for $\sigma$ that grows for large $t$. We also have not yet implemented the desirable slow decrease of $\rho/(6H^2)$, which would influence the detailed dynamics.
## 4 Conclusions
In the present letter, a dynamical adjustment mechanism for the cosmological constant is constructed. It can be at work in a late FRW universe and ensures that the cosmological constant vanishes asymptotically. The existence of a working late-time adjustment mechanism is interesting because of its possible observational consequences.
The field theoretic model that is used to realize this adjustment mechanism is generic but relatively complicated. In particular, it involves a Brans-Dicke field which, for the set of parameters used above, couples too strongly to standard model matter. However, there seems to be no reason why, with a different choice of parameters or scalar potentials, it should not be possible to avoid this phenomenological problem. A systematic investigation of such possibilities requires the analytic understanding of the stability of the system – a task that appears to be relatively straightforward.
The beauty of adjustment mechanisms lies in the fact that they are independent of all the intricacies of the field theoretic standard model vacuum. As a new ingredient we use time or, equivalently, the value of the cosmon field as an essential parameter. Recently suggested adjustment mechanisms with an extra dimension are related to this idea since they also employ a new parameter, the position in the extra dimension, and adjust the cosmological constant to zero only at a certain value of this parameter.
One may hope that this new approach to the construction of adjustment mechanisms will eventually lead to a completely realistic and testable model.
no-problem/0003/gr-qc0003030.html
# Gravitational Aharonov-Bohm effect and gravitational lensing
## 1 Introduction
The Aharonov-Bohm (AB) effect is due to the influence of the vector potential on the phase of the wave function of a charged particle. The existence of a similar effect for gravitational fields was also pointed out in connection with phenomena expected when working with rotating reference frames or rotating sources of gravity. In fact many authors (see for instance Aharonov and Carmi, Sakurai, Semon, Tsai and Neilson) showed how the Sagnac effect could be explained in terms of an inertial AB effect, both at the classical and at the quantum level. A truly gravitational effect was worked out by Harris, exploiting the similarity between the electromagnetic field and the description of the gravitational field in terms of gravitoelectric and gravitomagnetic components to show precisely the existence of an AB effect induced by the rotation of a massive body.
This gravitational contribution to what is otherwise known as the Sagnac effect in a terrestrial environment (surface of the Earth or satellites orbiting it) is very tiny, but there are situations of astronomical interest where the conditions could be different.
One such field where to look for a gravitational AB effect is gravitational lensing (GL). The influence of gravity on the path of a light ray was considered long before the advent of general relativity, dating back to the XVIII century; the idea of GL appeared as early as 1920, when it was somehow foreshadowed by Eddington; it then slowly grew in theorists’ consideration, but it was only at the end of the 1970s that observational evidence for such a phenomenon was found, and since then the examples have multiplied and with them the interest of the astronomers’ community. Nowadays a rich literature exists exploring many details of the lensing mechanism (a complete account of the theory may be found in the references). One of the problems tackled when examining light coming from different images of one single object is that of the phase relations between each of them and the others. A phase difference arises from the different travel times of the light rays following different paths. The difference in times has its origin both in the different length of the path as seen by an inertial observer and in the intensity of the gravitational fields encountered during the travel.
Now, coming back to the gravitational AB effect, one may expect that it also contributes to the phase difference between light rays following different paths, at least in the case that the bending source is a compact object endowed with a strong gravitational field and a very high angular momentum. The purpose of this paper is to evaluate this contribution. Section II summarizes the essentials of the gravitational AB effect; section III determines the gravitational AB contribution to the time delay, and finally section IV contains the conclusions and a short discussion of the results.
## 2 The gravitational Aharonov-Bohm effect
The gravitational AB effect originates in the peculiarities of the space-time around a rotating mass. Rotation per se leads to a desynchronization of clocks laid along a closed path contouring the rotation axis; this is the actual explanation of the Sagnac effect, i.e. of the phase difference between light rays co-rotating and counter-rotating with a turntable. This happens because the worldlines of points on such a turntable are helices in four dimensions, and this implies that no unique space exists for them, whereas time is polydromic (multivalued).
A similar situation is induced by a Kerr metric, just as is the case for the polydromic scalar potential of the magnetic field of a current. Studying the Sagnac effect in a Kerr metric, one finds that a phase difference between light rays contouring the source of the gravitational field in opposite directions is present even when the light source (and observer) is not rotating. This is what can most properly be qualified as a gravitational AB effect, because it may be reduced to a situation where a stationary source sends light beams on opposite sides of a rotating mass towards a stationary observer.
In the simple case of a stationary observer and a circular equatorial path for the light beams, the difference in travel time between the two oppositely rotating rays when they come back to the observer, is:
$$\delta\tau=\frac{4\pi}{c^2}\frac{R_S}{R}\frac{a}{\sqrt{1-\frac{R_S}{R}}}$$
(1)
Here $R_S$ is the Schwarzschild radius of the central body ($R_S=2GM/c^2$), $M$ is its mass, $a$ is the ratio between the angular momentum and the mass, and $R$ is the radius of the considered equatorial circumference. Considering a light source diametrically opposite to the observer, the time difference between the two paths would of course be half the value of (1).
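To get a feeling for the size of the effect, one can evaluate Eq. (1) numerically; the compact-object parameters below are rough illustrative assumptions (they are not taken from the paper):

```python
# Illustrative evaluation of Eq. (1) for an assumed millisecond-pulsar-like body.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 1.4 * 1.989e30   # kg, assumed mass
I = 1.0e38           # kg m^2, assumed moment of inertia
P = 1.5e-3           # s, assumed spin period

J = 2.0 * math.pi * I / P          # angular momentum
a = J / M                          # a = J/M, as defined in the text
R_S = 2.0 * G * M / c**2           # Schwarzschild radius
R = 3.0 * R_S                      # assumed radius of the circular light path

delta_tau = (4.0 * math.pi / c**2) * (R_S / R) * a / math.sqrt(1.0 - R_S / R)
print(f"R_S = {R_S / 1e3:.2f} km, a = {a:.3e} m^2/s, delta tau = {delta_tau:.3e} s")
```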
## 3 Phase difference originated by the gravitational Aharonov-Bohm effect.
The situation we consider is that of a light source and an observer, stationary and far away from each other. Somewhere in between there is a massive and rotating object which partly deflects the light rays. The configuration is the one shown in Fig. 1. For simplicity we assume that the rotation axis of the (let us call it so) gravitational lens (GL) is orthogonal to the plane of the figure.
Space-time is almost everywhere flat; the curvature is relevant only in the vicinity of the lens. Furthermore the distances $l_1$ and $l_2$ are supposed to be far greater than the size of the region where the curvature matters. As a result of this assumption, and of the fact that in astronomical situations the deviation angles are always very small, the lens is treated as acting on a plane perpendicular to the line of sight from the observer.
This means that the light rays may be thought of as straight lines broken at the lens plane. To find out the position of the break point, where the whole deviation is supposed to occur, Fermat’s principle can be used. In practice the position of the break point must be such that the proper (for the observer) arrival time of light is stationary under any variation of the position itself.
Now, in our conditions, we assume that the metric around the lens is the Kerr metric:
$$g=\left(\begin{array}{cccc}c^2\,\frac{r^2-R_Sr+\rho^2\cos^2\vartheta}{r^2+\rho^2\cos^2\vartheta}& 0& 0& \rho c\,\frac{R_Sr}{r^2+\rho^2\cos^2\vartheta}\sin^2\vartheta\\ 0& -\frac{r^2+\rho^2\cos^2\vartheta}{r^2-R_Sr+\rho^2}& 0& 0\\ 0& 0& -\left(r^2+\rho^2\cos^2\vartheta\right)& 0\\ \rho c\,\frac{R_Sr}{r^2+\rho^2\cos^2\vartheta}\sin^2\vartheta& 0& 0& -\frac{\left(\rho^4+r^2\rho^2\right)\cos^2\vartheta+\rho^2\left(r^2+R_Sr\sin^2\vartheta\right)+r^4}{r^2+\rho^2\cos^2\vartheta}\sin^2\vartheta\end{array}\right)$$
(2)
The origin of the coordinates is at the center of the lens and $`\rho =a/c`$. The plane containing the light trajectories is so oriented that $`a>0`$ (counterclockwise rotation).
The coordinate time lapse along an infinitesimal null worldline is in general (Latin indices span the space coordinates):
$$dt=\frac{1}{g_{00}}\left[-g_{0i}dx^i\pm\sqrt{\left(g_{0i}g_{0j}-g_{ij}g_{00}\right)dx^idx^j}\right]$$
(3)
Specializing to the Kerr metric and for $\vartheta=\pi/2=\text{const}$, (3) becomes
$$dt_\pm=d\phi\,\frac{r}{c\left(r-R_S\right)}\left[-\rho\frac{R_S}{r}\pm f(r,\phi)\right]$$
(4)
where $f(r,\phi)=\sqrt{\frac{\rho^2R_S^2}{r^2}+\left(1-\frac{R_S}{r}\right)\frac{\rho^2\left(r+R_S\right)+r^3}{r}+\left(1-\frac{R_S}{r}\right)\frac{r^2}{r^2-R_Sr+\rho^2}\left(\frac{dr}{d\phi}\right)^2}$
Time, by default, flows toward the future ($dt>0$). To ensure this we see that when the ray goes past the lens on the left ($d\phi>0$) it must be
$$dt_l=d\phi\,\frac{r}{c\left(r-R_S\right)}\left[-\rho\frac{R_S}{r}+f(r,\phi)\right]$$
(5)
When, vice versa, it passes on the right ($d\phi<0$), it must be
$$dt_r=d\phi\,\frac{r}{c\left(r-R_S\right)}\left[-\rho\frac{R_S}{r}-f(r,\phi)\right]=\left|d\phi\right|\,\frac{r}{c\left(r-R_S\right)}\left[\rho\frac{R_S}{r}+f(r,\phi)\right]$$ (7)
The difference between (5) and (7) is the origin of the time delay between the two beams. If, by any means, one could impose the same geometrical path on both sides, and if source, GL and observer are aligned, the total time difference is:
$$\delta t=\frac{2}{c}\rho R_S\int_0^\pi\frac{d\phi}{r-R_S}$$
(8)
This situation corresponds geometrically to the formation of a Chwolson ring (usually called an Einstein ring, although Einstein noticed the effect now named after him only in 1936, whereas Chwolson did so in 1924). In general one can expect that the angular momentum of the GL produces unequal paths on the sides of the bender; if however the additional bending is small with respect to the usual potential effect, (8) may be used to estimate the contribution to the phase difference coming from the angular momentum alone.
Accepting the idea that the light rays are straight, we can immediately write their equation. Reference is made to Fig. 1; the result for $0\le\phi\le\pi/2$ is
$$r_1=\frac{l_1\mathrm{sin}\alpha }{\mathrm{sin}\alpha \mathrm{cos}\phi +\mathrm{cos}\alpha \mathrm{sin}\phi }$$
(9)
and for $\pi/2\le\phi\le\phi_s$
$$r_2=\frac{l_1\tan\alpha}{\sin\phi-\frac{l_1}{l_2}\tan\alpha\cos\phi}$$
(10)
Now we can introduce (9) and (10) into (8) and perform the integration. Considering the smallness of $\alpha$, the result is (see appendix):
$$\delta t\approx 4\frac{\rho}{c}\frac{R_S}{l_1\alpha}=4\frac{a}{c^2}\frac{R_S}{l_1\alpha}=8\frac{G}{c^4}\frac{J}{l_1\alpha}$$
(11)
The corresponding phase difference for radiation of frequency $\nu$ is of course:
$$\delta\Phi=2\pi\nu\,\delta t=16\pi\nu\frac{G}{c^4}\frac{J}{l_1\alpha}$$
(12)
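As a cross-check of the small-angle result, Eq. (8) can be integrated numerically along the broken paths (9)-(10); the lens and geometry parameters below are arbitrary illustrative assumptions:

```python
# Numerical check: integrate Eq. (8) along the paths (9)-(10) and compare
# with the approximate formula (11). All parameter values are assumptions.
import math
from scipy.integrate import quad

c = 2.998e8
l1 = l2 = 3.086e22        # m (~1 Mpc), assumed distances
alpha = 1.0e-5            # rad, assumed angle
R_S = 3.0e12              # m, assumed Schwarzschild radius of the lens
rho = 1.0e12              # m, assumed a/c of the lens

def r_of_phi(phi):
    if phi <= math.pi / 2.0:                                  # Eq. (9)
        return l1 * math.sin(alpha) / math.sin(alpha + phi)
    return l1 * math.tan(alpha) / (math.sin(phi)              # Eq. (10)
            - (l1 / l2) * math.tan(alpha) * math.cos(phi))

integral, _ = quad(lambda p: 1.0 / (r_of_phi(p) - R_S),
                   0.0, math.pi, points=[math.pi / 2.0], limit=200)
dt_exact = (2.0 / c) * rho * R_S * integral            # Eq. (8)
dt_small_angle = 4.0 * (rho / c) * R_S / (l1 * alpha)  # Eq. (11)
print(f"Eq. (8):  {dt_exact:.4e} s")
print(f"Eq. (11): {dt_small_angle:.4e} s")
```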
## 4 Conclusion
We have found a simple formula for the time delay (and phase difference) between two light rays running on opposite sides of a rotating and gravitating object. In the final formula, $\alpha$ is an observational parameter which actually depends in turn on the mass and angular momentum of the source. The determination of its value may be done, as said in Sect. III, using Fermat’s principle; however, for an order of magnitude estimate it is enough to know that it is approximately
$$\alpha\approx\frac{R_S}{b}$$
where $b$ is the impact parameter of the light beam. It is in turn
$$b\approx l_1\alpha$$
then
$$\alpha\approx\sqrt{\frac{R_S}{l_1}}\sim\frac{1}{c}\sqrt{\frac{GM}{l_1}}$$
Finally:
$$\delta t\sim\frac{1}{c^3}\sqrt{\frac{G}{Ml_1}}\,J$$
(13)
In our simplified situation $\delta t$ is the whole delay between the arrivals of light rays from two different images of the same source. In a more realistic situation one should add to (11) the geometrical delay, originating from the difference in length of the two paths, and the potential delay (here I am using the terminology found in the literature), due to the fact that, if the paths are geometrically different, the light rays cross regions with different gravitational potentials.
The actual value of the delay depends of course on the parameters of the bender, but the order of magnitude formula (13) suggests that it may not be entirely negligible. The conversion of the delay into a phase shift opens the possibility of detecting it by interferometric techniques.
## 5 Appendix
In order to evaluate (8) we start from the two expressions (9) and (10).
Suppose $R_S\ll l_1\sin\alpha$. When $r=r_1$ one has:
$$\frac{1}{r-R_S}=\frac{1}{\frac{l_1\sin\alpha}{\sin\alpha\cos\phi+\cos\alpha\sin\phi}-R_S}\approx\left(1+\frac{R_S}{l_1\sin\alpha}\left(\sin\alpha\cos\phi+\cos\alpha\sin\phi\right)\right)\frac{\sin\alpha\cos\phi+\cos\alpha\sin\phi}{l_1\sin\alpha}$$
Consequently the first part of the integral in (8) is
$$I_1=\frac{2}{c}\rho R_S\int_0^{\pi/2}\frac{d\phi}{r-R_S}\approx\frac{1}{2c}\rho\frac{R_S}{l_1\sin\alpha}\left(4\sin\alpha+\frac{R_S}{l_1\sin\alpha}\left(\pi+4\sin\alpha\cos\alpha\right)+4\cos\alpha\right)\approx\frac{2}{c}\rho\frac{R_S}{l_1\sin\alpha}\left(\sin\alpha+\cos\alpha\right)$$
Now let us play the same game when $r=r_2$. One has:
$$\frac{1}{r-R_S}=\frac{1}{\frac{l_1\sin\alpha}{\cos\alpha\sin\phi-\frac{l_1}{l_2}\sin\alpha\cos\phi}-R_S}\approx\frac{1}{l_1\sin\alpha}\left(\cos\alpha\sin\phi-\frac{l_1}{l_2}\sin\alpha\cos\phi\right)\left(1+\frac{R_S}{l_1\sin\alpha}\left(\cos\alpha\sin\phi-\frac{l_1}{l_2}\sin\alpha\cos\phi\right)\right)$$
The second part of the integral in (8) is:
$$I_2=2\frac{\rho}{c}R_S\int_{\pi/2}^\pi\frac{d\phi}{r-R_S}\approx 2\frac{\rho}{c}\frac{R_S}{l_1\sin\alpha}\left(\cos\alpha+\frac{l_1}{l_2}\sin\alpha\right)$$
Finally the whole integral is:
$$\delta t=I_1+I_2\approx 2\frac{\rho}{c}\frac{R_S}{l_1\sin\alpha}\left[2\cos\alpha+\left(1+\frac{l_1}{l_2}\right)\sin\alpha\right]$$
The angle $\alpha$ is necessarily small in an astronomical configuration; then:
$$\delta t\approx 4\frac{\rho}{c}\frac{R_S}{l_1\alpha}$$
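The expansion above can also be checked symbolically; the following sketch (ours) integrates the leading-order integrands and expands the result for small $\alpha$:

```python
# Symbolic check of the appendix: integrate the leading-order integrands
# of (8) over the two arcs and confirm delta_t -> 4 rho R_S/(c l1 alpha).
import sympy as sp

phi, alpha, l1, l2, RS, rho, c = sp.symbols("phi alpha l_1 l_2 R_S rho c",
                                            positive=True)
s1 = sp.sin(alpha) * sp.cos(phi) + sp.cos(alpha) * sp.sin(phi)              # from (9)
s2 = sp.cos(alpha) * sp.sin(phi) - (l1 / l2) * sp.sin(alpha) * sp.cos(phi)  # from (10)

I1 = sp.integrate(s1 / (l1 * sp.sin(alpha)), (phi, 0, sp.pi / 2))
I2 = sp.integrate(s2 / (l1 * sp.sin(alpha)), (phi, sp.pi / 2, sp.pi))
dt = 2 * rho * RS / c * (I1 + I2)

print(sp.simplify(dt))            # 2 rho R_S (2 cos(a) + (1 + l1/l2) sin(a))/(c l1 sin(a))
print(sp.series(dt, alpha, 0, 1)) # leading term: 4 rho R_S/(c l1 alpha)
```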
no-problem/0003/astro-ph0003274.html
# Evidence for topological nonequilibrium in magnetic configurations
## I Introduction.
Formation of singularities, or current sheets, is one of the striking features of astrophysical as well as tokamak plasmas. Such singularities are key to understanding active phenomena related to fast magnetic field reconnection. For example, fast dynamos rely on fast reconnection of magnetic field lines. Despite their importance, key issues related to current sheet formation are still not well understood. Supposing, e.g., that they are formed due to instabilities, one has to assume that fluid dynamical processes are able to slowly deform equilibrium magnetic field configurations (and thereby build up regions of field gradients) without significant reconnection until a marginal state is reached. At this threshold, instability-driven reconnection would then lead to release of the stored free energy on the (observed) time scales thought to be too short to be consistent with, for example, Sweet-Parker reconnection. However, it has long been recognized that in the presence of reconnection, it is not obvious how one can attain (meta)stable configurations which store significant free energy. Furthermore, it is not clear why reconnection would not simply return the system to the marginal state, thus releasing only a small fraction of the available free energy.
In this paper, we explore one possible solution to these puzzles: We consider specific magnetic field configurations which could arise from a slow evolution of (stable) quasi-equilibria, and then examine their subsequent (unforced) evolution. Our aim is to show that there exist configurations that evolve initially at the slow rate, but that can reach a point at which spontaneous current sheet formation occurs. These configurations have been referred to as “topological nonequilibria” (TN), and lead to situations in which the topology of the field is such that in a relaxed equilibrium state it inevitably contains discontinuities. TN results in spontaneous reconnection, because no external forces are involved; and in the cases we shall examine, the result is that extraction of all of the available free energy becomes possible.
Finally, we note that an important aspect of this problem relates to the fact that there is a direct correspondence between magnetostatic equilibria and steady Euler flows, as pointed out by Moffatt; this problem is therefore closely connected to the possible formation of singularities in hydrodynamics.
## II Description of the approach.
### A The idea of topological nonequilibrium.
The main ideas of topological nonequilibrium (henceforth, TN) were formulated rigorously by Moffatt. Consider an ideally conductive viscous (ICV) flow. We restrict ourselves to incompressible flows. Starting with an initial magnetic field $\mathbf{B}(\mathbf{x},t=0)=\mathbf{B}_0(\mathbf{x})$ of arbitrary topology, one expects that such a configuration will relax to a static state, with zero velocity field and nontrivial magnetic field $\mathbf{B}_E$. The latter configuration is then called ‘topologically accessible’ because the field’s topology does not change during this frozen-in evolution. If this relaxed equilibrium state contains discontinuities, then all of the states in the evolution are referred to as TN. It may be expected in a realistic situation, when small but finite resistivity $\eta$ is taken into account, that these discontinuities evolve into finite-width current sheets, resulting in efficient reconnection and dissipation of the magnetic field. Unfortunately, there are only a few special cases for which it is possible to demonstrate that TN exists. In this paper, we restrict ourselves to analysis of two-dimensional configurations, and study the evolution of two generic field configurations which can lead to TN.
Of course, in general, there is no reason that a given initial configuration is at equilibrium. However, one would normally expect that, after relaxation, such a configuration would evolve to attain a smooth equilibrium, and that the magnetic field evolution subsequently stops. Some initial field topologies, however, cannot possibly relax to a smooth equilibrium, resulting in TN. It is obvious that use of the word “nonequilibrium” is not strictly correct, because in the final state the field is at equilibrium as long as the diffusivity vanishes exactly. However, in the spirit of maintaining already existing tradition, we retain this terminology.
As a result of relaxation, the magnetic field will reach an equilibrium state. In two dimensions, $B_x=\partial_yA$, $B_y=-\partial_xA$, and in equilibrium the flux $A$ obeys the equation
$$\nabla^2A=-4\pi\frac{dP(A)}{dA},$$
(1)
where $P=p+B_z^2/(8\pi)$, $p=p(A)$, and $B_z=B_z(A)$. As an aside, we note that if the pressure $p$ can be neglected, then this equilibrium is force-free; this may occur in specific applications such as in the solar corona. In addition, the total pressure $P$ (Eq. (1)) ought to substantially exceed the transverse magnetic energy $B_x^2+B_y^2$ in order to justify the incompressibility assumption for the evolution to this equilibrium state.
Equation (1) is trivially satisfied in the one-dimensional case. To start with, suppose that $A$ is a function of $x$ only, corresponding to straight field lines parallel to the $y$ axis. An arbitrary initial distribution is generally not at equilibrium. However, after relaxation, the field reaches the well-known equilibrium
$$\frac{B_y(x)^2}{8\pi}+P(x)=\text{const},$$
(2)
automatically satisfying (1). The same is true for axisymmetric configurations, when $A=A(r)$, and the field consists of concentric circles.
Going to two dimensions complicates the problem considerably. Consider first a configuration with closed nested field lines, so that $A(x,y)$ has one maximum (minimum). The configuration is depicted in Fig. 1, and we will refer to it below as “case A”. Of course, the axisymmetric configuration is topologically accessible from this configuration, and therefore it can reach equilibrium. The question, however, is whether this equilibrium is unique.
If there exists a magnetostatic equilibrium with type A field topology with essentially arbitrary field line geometry, e.g., with elliptic field lines, as in Fig. 1, then we would expect an arbitrary type A configuration to relax to this equilibrium without dramatic changes in its geometry. However, suppose this equilibrium exists only in axisymmetric form, i.e., can only be realized with concentric (field line) circles; then, if this configuration were placed between magnetic “walls” (such as regions of strong magnetic fields in the solar corona, or solar wind), as in Fig. 2a, we would expect the formation of discontinuities (because it would not be possible to evolve to the equilibrium state). Furthermore, if we allowed for nonvanishing diffusion, then such a configuration would not settle down until all magnetic lines are reconnected, and the bubble seen in Fig. 2 disappears entirely.
It is useful to expand slightly on the astrophysical relevance of this case. Our point is that “case A” shown in Fig. 2a can be regarded as an abstraction of a commonly expected field configuration in the solar atmosphere: Consider the emergence of a magnetic flux tube from the solar interior to the corona, where it enters a highly conducting medium already suffused by pre-existing magnetic fields. If one abstracts such an emerging flux tube as a rising cylinder, then the expected field topology in planes perpendicular to the tube axis should be similar to case A: The nested closed field lines in such planes then represent the toroidal field component of the emerging flux tube; and the magnetic “walls” shown in Fig. 2a represent the projections in such planes of the magnetic fields of the surrounding magnetized coronal plasma.
The second type of field topology we consider below is what we call type B; this more complicated topology is a “rosette structure” (Fig. 3a), which has been investigated experimentally . In terms of the flux function $`A(x,y)`$, this configuration consists of two maxima, e.g., two “mountains”, surrounded by a pedestal, i.e., two magnetic islands surrounded by closed magnetic field lines going around the two islands; the field vanishes outside the zero-line. If the type B topology cannot exist in smooth equilibrium, then a current sheet develops, resulting in efficient reconnection of field lines until all field lines of the islands are reconnected, and eventually only one island remains, of the topology of the type A. In contrast, if this kind of topology does exist in smooth equilibrium, then nothing dramatic would happen, and the configuration would relax to this equilibrium without any discontinuities. The astrophysical context in which this type of configuration may be created is similar to that just described above: consider the emergence of two adjacent twisted solar flux tubes into a non-magnetized ambient corona; again, the field structure in a cross-section perpendicular to the tube axes will appear as shown in Fig. 3a. Thus, in both cases A and B, we are dealing with the generic case of bounded magnetic flux systems (i.e., systems of magnetic field lines which lie within a finite bounding surface on which the field vanishes), which can be regarded as abstractions of, for example, isolated flux tubes emerging into the highly conductive solar corona.
Generally, finding TN states is far from trivial. To illustrate, let us return to the type A topology. The axisymmetric equilibrium solution is not unique. For example, one can construct a solution to (1),
$$A(x,y)=\sin kx\,\sin ky,$$
depicted in Fig. 4a, which has the same topology of field lines as depicted in Fig. 1. This asymmetric field is at equilibrium, so that the general answer to the question of whether, say, elliptic configurations of the form shown in Fig. 1 can be at equilibrium, is affirmative. An analogous construction can be carried out for the topology of type B; in this case, the solution of (1) can be constructed as
$$A_z=\sum_nA_ne^{ik_x^{(n)}x+ik_y^{(n)}y},$$
where $(k_x^{(n)})^2+(k_y^{(n)})^2=\text{const}$. This example is depicted in Fig. 4b; the rosette structure shown is at equilibrium without any discontinuities.
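Both examples are easy to confirm symbolically; the sketch below (ours) checks that each Laplacian is proportional to $A$ itself, so that Eq. (1) is satisfied with a quadratic $P(A)$:

```python
# Check that the two example flux functions satisfy Eq. (1): grad^2 A must be
# a function of A alone, so that dP/dA = -grad^2 A/(4 pi) is well defined.
import sympy as sp

x, y, k = sp.symbols("x y k", real=True)

# Case A example: A = sin(kx) sin(ky); here grad^2 A = -2 k^2 A.
A1 = sp.sin(k * x) * sp.sin(k * y)
print(sp.simplify(sp.diff(A1, x, 2) + sp.diff(A1, y, 2) + 2 * k**2 * A1))  # 0

# Type-B-style superposition with (k_x^(n))^2 + (k_y^(n))^2 = k^2 fixed
# (the two modes and their amplitudes are illustrative choices):
A2 = sp.sin(k * x) + sp.Rational(1, 2) * sp.sin(k * y)
print(sp.simplify(sp.diff(A2, x, 2) + sp.diff(A2, y, 2) + k**2 * A2))      # 0
```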
The situation changes if a zero-line (a line where $\mathbf{B}_\perp=0$, $\mathbf{B}_\perp=\{B_x,B_y\}$, and generally $B_z\neq 0$) is present, such as the dashed line shown in Fig. 1 for the type A, and in Fig. 3 for the type B field topology. The zero-line possesses two remarkable properties. First, the magnetic field remains zero on this line in the presence of ICV flow. Thus, if we write the ideal induction equation in the form,
$$\frac{d\mathbf{B}}{dt}=(\mathbf{B}\cdot\nabla)\mathbf{v},$$
which in 2D reads,
$$\frac{d\mathbf{B}_\perp}{dt}=(\mathbf{B}_\perp\cdot\nabla)\mathbf{v},\qquad\frac{dB_z}{dt}=0,$$
(3)
it is easy to see that because the left-hand side describes transport of any fluid element (in particular, of the zero-line) by the motion, and because the right-hand side corresponds to the change of the field along the Lagrangian trajectory (and the right-hand side vanishes on the zero-line), this equation will preserve the property $\mathbf{B}_\perp=0$ on the zero-line.
Second, if the zero-line has a constant (along the line) curvature, e.g., is a straight line or a circle, and if the field is also analytical, then the entire configuration will have the same geometry as the zero-line. In other words, if the zero-line is a straight line, then all other field lines are straight as well; alternatively, if the zero-line is a circle, then the analytical equilibrium configuration consists of concentric circles. The proof is easily constructed by expanding $A(x,y)$ in the vicinity of the zero-line. Note, however, that the constant curvature zero-line is a special case (although it can be regarded as a representation of the emergence of magnetic flux on, for example, the solar surface, in which geometrically symmetric flux bundles straddle the separating “neutral line”); in general, the zero-line is arbitrary in shape, as shown in Figs. 1–3. Nevertheless, we may conjecture that the zero-line imposes a severe constraint on the geometry: that is, we conjecture that the existence of this line results in unique (smooth) solutions of equation (1) in the form of magnetic field lines with constant curvature.
One of the considerations in favor of this conjecture is as follows. Without loss of generality, $A\to 0$ outside the configuration, and thus $A=0$ on the boundary (whose shape is as yet unspecified), corresponding to a Dirichlet problem for equation (1). On the other hand, because $B_x=B_y=0$ on the same boundary, we have $\partial_nA=0$, corresponding to a Neumann problem. The problem is thus over-constrained, and one would expect this to lead to degeneracy of the solution. That is, these specific boundary conditions are expected to restrict the shape of the boundary itself, and thus in turn to restrict the topology of possible equilibria. Although the boundary conditions are specified, and the problem is thus rigorously formulated, the above statement regarding the over-constrained nature of our problem nevertheless has not been shown to be useful in constructing a formal mathematical proof concerning the geometry of the configuration in the presence of a zero-line. Formally, it is easier to discard the TN for a given topology by direct construction of a solution with the needed properties. Generally, it is not clear at all how to construct a formal proof that the only solution of (1) with boundary condition $\mathbf{B}_\perp=0$ for the type A topology is unique, and axisymmetric, thus defining the shape of the boundary itself! It is even less clear how to prove that there is no smooth solution for the type B topology, assuming that this statement is true.
Finally, we note that one can simply produce artificial discontinuities, but these are irrelevant to our discussion. To illustrate, suppose that we place superconductive walls at the locations of the thick lines in both panels of Fig. 4. In that case, the configurations will be at equilibrium (for both type A and B topologies), and the field will be smooth everywhere except at the boundaries (where the field jumps from a finite value just outside the walls to zero at the walls in order to meet the boundary condition of zero field within the superconducting walls). This type of discontinuity is irrelevant to the astrophysical problem we are aiming at, and we therefore do not discuss it any further.
### B Description of the solution method.
One of the powerful ways to study the formation of current sheets is via numerical simulation. However, in numerical simulations the Lundquist number, $S=c_AL/\eta$ ($c_A$ the Alfvén speed, and $L$ the characteristic length), which is critical for this problem, is far below the values encountered in natural systems, viz., under astrophysical conditions. When $S$ is not sufficiently large, the separation between typical reconnection times and typical fluid dynamical times may not be large; it is therefore difficult to interpret realistic resistive calculations in the context of a problem in which current sheet formation is to occur without topological changes. On the other hand, numerical schemes which attempt to circumvent this problem by solving the ideal MHD equations suffer from the difficulty that such schemes may be subject to numerical instability, so that it becomes difficult to distinguish between numerical artifact and physically correct current sheet formation. When discontinuities in the magnetic field appear, traditional numerical MHD codes tend to either break down, or to introduce a small amount of resistivity to broaden the current sheets (so that, for example, their width is larger than a mesh cell). In certain situations, the symmetry of the problem can be exploited to study the approach to the ideal solution, i.e., $\eta\to 0$. However, typical simulations actually add some amount of numerical resistivity, so that numerical solutions of the ideal induction equation correspond to solutions of that equation with an added effective diffusivity. In studies of reconnection it is known that the specification of boundary conditions on the magnetic field and velocity (which specify the rate at which magnetic field and plasma are brought into the reconnection layer) may affect the rate of reconnection. In our simulations we study spontaneous formation of singularities by isolating the flux system from the boundaries. We surround our flux bundle by a (transverse) field-free region, and we place the boundaries far away from the bundle, thereby minimizing the effect of the boundary conditions on the formation of the current sheets.
We address this issue as a relaxation problem in the framework of ICV flows. Our approach involves a direct numerical simulation of ICV flows, i.e., solving the set of equations (3) and the momentum equation,
$$\frac{d\mathbf{v}}{dt}=\frac{\partial\mathbf{v}}{\partial t}+(\mathbf{v}\cdot\nabla)\mathbf{v}=-\frac{1}{\rho}\nabla p+\frac{1}{4\pi\rho}\left\{\nabla\times\mathbf{B}\right\}\times\mathbf{B}+\nu\nabla^2\mathbf{v},$$
(4)
with $\nabla\cdot\mathbf{v}=0$. We use a Lagrangian approach to solve the induction equation (3). More specifically, the magnetic field inside the region of interest is represented by a large number of field lines; the evolution of the field lines is then followed using the exact Lundquist solution, i.e., knowing the initial strength of the magnetic field on a fluid element connecting two nearby points on a field line, the final strength is proportional to the length of the segment, as it is stretched by the motions. We assume for all cases that the magnetic field vanishes on the outermost field line (the dashed curves shown in the figures). The number of field lines which fill the domain is chosen so that the subsequent field evolution can be followed without leaving gaps in the final state, i.e., we determine the number of initial field lines by fixing the spatial resolution of the final state; we discuss this point further immediately below. As an important aside, we note that the initial magnetic field is smooth, implying that the current system, $\mathbf{j}(x,y)$, which is defined by Ampere’s law
$$\nabla\times\mathbf{B}=\frac{4\pi}{c}\mathbf{j},$$
is smooth as well, i.e., there are no current sheets initially.
The momentum equation (4), in contrast, is solved using standard finite difference techniques, with finite viscosity. However, the requirement of coupling the magnetic field evolution to the momentum equation does lead to a complication for computing the Lorentz force. The key issue is that the momentum equation requires the Lorentz force to be evaluated on a homogeneous spatial grid, while the magnetic field evolution is given in Lagrangian space. We resolve this issue by (quadratically) interpolating the Lorentz force at each time step onto the homogeneous grid used by the momentum equation (4); similarly, we use quadratic interpolation from the momentum equation mesh to evaluate the velocity field on the Lagrangian mesh. In order to minimize interpolation errors, we fix the number of field lines such that every Eulerian grid domain is pierced by at least a few field lines throughout the calculation. Note here that interpolation errors do lead to inaccuracies in the solution of the flow and magnetic fields, but by construction cannot lead to changes in the magnetic field topology. Note also that we have checked for convergence of the solutions as the spatial resolution of our calculation is increased; our conclusion is that the results presented here do not depend on grid resolution. Our solution corresponds to the limit $`\eta 0`$, in the sense that the topology is strictly conserved, but with finite viscosity; thus, the computational scheme we use forces the relaxation to be due solely to viscous damping, and as a consequence, the field relaxes to an equilibrium state. Our approach has the dual virtues that the boundary conditions for the magnetic field do not need to be specified, and that the field topology is preserved; it is therefore appropriate for the study of TN.
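The Lagrangian update of the field strength can be illustrated with a toy sketch (ours, not the authors’ code; the velocity field is an assumed incompressible example, and the full scheme would couple this to the momentum equation):

```python
# Toy version of the Lundquist update: advect two nearby points of a field
# line in a prescribed incompressible flow; for frozen-in incompressible flow
# the field strength scales with the length of the connecting segment.
import numpy as np

def velocity(p):
    x, y = p
    # Assumed cellular flow, div v = 0:
    return np.array([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)])

def advect(p, dt, steps):
    for _ in range(steps):
        k1 = velocity(p)                       # midpoint (RK2) step
        p = p + dt * velocity(p + 0.5 * dt * k1)
    return p

p1 = np.array([1.0, 1.0])
dl0 = np.array([1.0e-4, 0.0])                  # illustrative initial segment
p2 = p1 + dl0

q1 = advect(p1, 1.0e-3, 2000)
q2 = advect(p2, 1.0e-3, 2000)
stretch = np.linalg.norm(q2 - q1) / np.linalg.norm(dl0)
print(f"B_final/B_initial = stretching factor = {stretch:.4f}")
```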
If the viscosity is large, then (3)-(4) describe monotonic relaxation to equilibrium. We can estimate the relaxation time as follows: from (4) we find that $v\sim c_A^2L/\nu=c_AS_\nu$, where $S_\nu=c_AL/\nu$. This viscous regime is realized if $S_\nu\ll 1$. The relaxation time is then $t_\nu\sim L/v=\tau_A/S_\nu$, with $\tau_A=L/c_A$. In the opposite limiting case, $S_\nu\gg 1$, the system undergoes (strong) Alfvén oscillations ($v\sim c_A$), with a period $\tau_A$, decaying on a viscous time $t_\nu\sim L^2/\nu=\tau_AS_\nu$. These two cases can be jointly described by an interpolation formula,
$$t_\nu =\tau _A(S_\nu +1/S_\nu ),$$
from which it follows that the relaxation time is large for both limiting cases (in terms of $\tau_A$). Thus, optimal relaxation to equilibrium occurs for $S_\nu\sim O(1)$; in the simulations, we used the value $S_\nu=5$. It is important that $S_\nu$ not be too large: An important constraint on the value of $S_\nu$ is that the simulations remain stable. This constraint is not met if $S_\nu$ is too large; because the two dynamical equations are solved in different coordinates [eq. (3) in Lagrangian, and eq. (4) in Eulerian coordinates], errors arise from the interpolation from one coordinate system to the other, and therefore the calculations make sense only if these errors are damped sufficiently by viscosity.
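For orientation, tabulating the interpolation formula shows why a moderate $S_\nu$ was chosen:

```python
# t_nu/tau_A = S_nu + 1/S_nu is minimized (value 2) at S_nu = 1; the value
# S_nu = 5 used in the simulations keeps the relaxation time modest.
for S_nu in (0.1, 0.5, 1.0, 5.0, 10.0, 100.0):
    print(f"S_nu = {S_nu:6.1f} -> t_nu/tau_A = {S_nu + 1.0 / S_nu:8.2f}")
```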
## III Description of the results.
We conducted two series of numerical experiments for case A. In the first series, we consider the relaxation of this type of topology without any external field, as depicted in Fig. 1. We explored different initial shapes of the field lines, including ellipse-like, diamond-like, and other similar configurations. In addition, for a fixed shape, we explored different distributions of the flux function $`A(x,y)`$, i.e., different functional dependences $`A(s)`$, where $`s`$ labels the field lines. The results are always the same: the field ends up in an axially symmetric state, provided the field vanishes on the outermost field line.
In another set of experiments, this same configuration (case A) is placed between magnetic walls, as in Fig. 2a; in this case, the system always evolves to create discontinuities, as in Fig. 2b, where the field lines are taken from one of our simulation runs. We see that as the “bubble” evolves, it attempts to become axisymmetric, but as it does so, two discontinuities begin to form, as depicted in Fig. 2b; see also Fig. 5a. It is interesting to note that, for some initial conditions, a current point is formed, rather than a current sheet (or a line in two dimensions), suggesting that finite conductivity could presumably result in fast reconnection; that is, according to the Sweet-Parker mechanism, the reconnection rate $v_d\propto 1/\sqrt{\ell}$, where $\ell$ is the length of the current sheet, so that a short current sheet speeds up the reconnection. (In the classical Sweet-Parker mechanism, $\ell=L$, and $v_d=c_A/S^{1/2}$.)
It is crucial to note here that the evolution we just described is not forced by the walls; thus, the field and fluid near the walls (i.e., on the wall side of the zero-lines) has the equilibrium property (2). To see this, note that on the zero-line, the total pressure is continuous. Thus, we could replace the initial elliptical field configuration (the “bubble”) lying between the two zero-lines with a field-free region whose gas pressure exactly balances the total pressure on the wall side of the zero-lines. The resulting configuration is clearly in equilibrium, and makes clear that the walls do not push the bubble, i.e., that the evolution of the bubble is entirely driven by the fact that it is not in equilibrium. Thus, it is as the bubble tries to become axisymmetric, and pushes back the walls, that the two discontinuities are formed. In principle, if the bubble could reach equilibrium with ellipse-shaped field lines as in Fig. 2(a), then it would not even interact with the walls, and the equilibrium of the whole configuration would be smooth. The field evolution to TN here described, i.e., evolution from an initially smooth state to a state containing a singularity, is therefore an intrinsic property of the initially smooth state, rather than being forced by external means.
Consider now the type B configurations. We again conducted two series of experiments. In the first series of numerical experiments, we studied different kinds of initial states, with different initial field line shapes, and with different distributions $A(s)$; in all cases, we again required that the outermost line must be a zero-line, as shown in Fig. 3. We found for the type B configuration that a field discontinuity always appeared, as in Fig. 3b (which is taken from one of our simulations), no matter what the initial distribution of $A$, or what kind of analytical representation of the initial field lines we used.
In the second set of experiments, we simulated the evolution of the magnetic field with different numbers of field lines. The issue is as follows: The magnetic field gradient at $x=0$ increases during the evolution, so that the current, $\nabla\times\mathbf{B}$, approaches a $\delta$-function (Fig. 5). It is not possible to observe this tangential discontinuity because in the simulations, the field is described via a finite (albeit a very large) number of field lines. Our hypothesis is that in the limit of an infinite number of field lines $N\to\infty$, the current at $x=0$ tends to infinity; in order to test this hypothesis, we increased the number $N$ in a succession of simulations that were otherwise identical. According to our hypothesis, we expected the current to grow roughly as $1/\Delta$, where $\Delta$ is the closest distance between the $X$-point and the nearest field line; the experiments confirmed this expectation.
## IV Conclusion.
The fundamental result emerging from our simulations is that the vanishing magnetic field on the outermost field line imposes strict constraints on the geometry of equilibrium: The type A topology can be at equilibrium only if it is axisymmetric; and therefore, if constrained by external walls, it is at TN. Similarly, the type B rosette structure develops discontinuities, but only in the presence of an external zero-line. The presence of zero-lines is thus an important aspect of topological nonequilibria.
Finally, we comment briefly on the applicability of these results to astrophysical situations. Observations of the solar atmosphere commonly show topologically unconnected magnetic flux systems which are seen to interact (viz., emerging flux loops). In such circumstances, in which one expects to encounter small but finite resistivity, these flux systems are initially unlinked, but as they are pushed together (and begin to reconnect), flux linkage is expected to occur and to lead to a field topology analogous to that depicted in Fig. 2, or to the generic type B configuration, discussed here. The magnetic flux surrounding these two islands would be initially weak, and the current sheet which is formed is therefore expected to be weak. However, during the course of reconnection, more flux will be pushed outside the two islands, thus accelerating the process of reconnection. This process may therefore be self-accelerating, resulting in final (spontaneous) reconnection; preliminary numerical simulations of a resistive case of this sort suggest that the reconnection rate $v_d$ scales as $c_A/S^\alpha$, where $\alpha$ is a small power, $\alpha\sim O(0.1)$. If confirmed, this would imply that the reconnection is fast enough to satisfy the observed (solar) constraints on reconnection times. (Recall that while the Sweet-Parker reconnection time for typical parameters corresponding to the solar corona is about three years, the time corresponding to $v_d=c_A/S^{0.1}$ is only 30 minutes, which is comparable to the energy release time scale for large solar flares, related to the so-called “long-enduring” events.) Therefore the two topologies depicted in Figs. 2 and 3 may be regarded as generic examples of “fast” reconnection and activity in magnetically active astrophysical systems.
###### Acknowledgements.
We have benefited considerably from comments by, and discussions with, H.K. Moffatt, E.N. Parker, and T. Emonet. This work has been supported by the NASA Space Physics Theory Program at the University of Chicago (RR, SIV) and, in part, by NSF grant ATM-9320575, and NASA contracts NAS5-96081 and NASW-5017 to SAIC (ZM and JAL).
# Fracture energy of gels
## 1 Introduction
Fracture is an old subject in material science. However, it is still extensively studied from the viewpoint both of macroscopic mechanics (fracture mechanics) and of molecular scale physics. One of the recent topics in macroscopic studies is an instability of fast cracks. Experiments Fineberg and simulations Abraham show that the steady state propagation of a straight crack becomes unstable at a critical crack speed of the order of the Rayleigh speed in the material; accordingly, roughening and branching of the crack path occur. It is believed that the instability is essentially caused by the qualitative change of the stress field which occurs as the crack speed approaches the speed of surface waves Langer ; Freund . On the other hand, molecular scale processes in the vicinity of the crack front can also be responsible for the macroscopic behavior of fracture. For example, some metals undergo the so-called brittle-ductile transition. This transition is attributed to the temperature dependence of the mobility of dislocations emitted at the crack front Serbena . The effects of molecular scale processes differ greatly depending on the system in question, and thus a unified understanding of fracture is difficult even at the phenomenological level.
Systems classified as soft matter often show elastic or viscoelastic behavior under normal conditions, and they undergo fracture with well-defined crack front lines. Fracture in soft matter is interesting in the following two aspects. Firstly, the structural units have large spatial sizes and respond slowly to external forces. Thus the molecular processes near crack fronts can change drastically in slow fracture, which is experimentally controllable. Secondly, the bulk response of these materials to external forces deviates from linear elasticity. To describe fracture phenomena in soft matter, we should extend the usual linear fracture mechanics according to the physical nature of the system in question, for example, the large elastic deformability of rubbers Thomas ; Andrews , the bulk viscoelastic effect of polymer melts DG96 and the anisotropic elastic energy of smectics DG90liq .
Polymer gels DG are one of the typical soft matter systems. Various phenomena in gels such as gelation, deformation and phase transitions have been extensively studied. Nevertheless, few studies have been carried out on the fracture of gels Bonn ; Tanaka96 ; Tanaka98 . This is partly because gels have less industrial significance than hard solids from the viewpoint of strength. However, in the field of polymer physics this topic is interesting because we have much knowledge of fracture processes in other polymer systems Kinloch and of the various physical properties of polymer gels Li , which may be useful as a reference frame for studying the fracture of gels. For example, Gent and co-workers Gent81 showed that the strength of butadiene rubber and that of chemically-bonded interfaces between the rubbers decrease with increasing cross-link density of the rubber. de Gennes DG90 interpreted these effects as a consequence of friction during the chain pull-out process at the crack front. In gels the friction should be very small because they contain large amounts of solvent, and it is not clear whether frictional dissipation is a dominant factor or not. The existence of solvent causes other unique effects, such as the coupling between the deformation of the polymer network and the chemical potential of the solvent. To understand the fracture of gels, we first need experimental results which show the fundamental features of the phenomena.
In this study we investigate the dependence of the fracture energy of acrylamide gels on the crack speed $`V`$ and on the cross-link density, in order to obtain such fundamental experimental results. Generally, the fracture energy is determined by microscopic processes near crack fronts and appears in macroscopic descriptions of fracture as an important parameter. Thus, the fracture energy is an essential physical quantity for understanding the nature of fracture in gels. However, it is difficult to apply the usual techniques for measuring the fracture energy of hard solids to gels, because of their extreme softness and large deformability. We have developed a novel method suitable for measuring the fracture energy of gels and have precisely measured the fracture energy of four kinds of acrylamide gel with different cross-link densities. In our method, we have adopted a peel-test-like geometry to drive the fracture steadily and have taken account of the roughness of the fracture surfaces in evaluating the fracture energy of the gels. The following results are obtained. (i) For each cross-link density, the fracture energy is an increasing function of $`V`$, and a crossover occurs as $`V`$ is varied: on the faster-$`V`$ side of the crossover the fracture energy increases linearly with $`V`$, while on the slower-$`V`$ side, where roughening of the fracture surfaces is pronounced and the correction for the true fracture surface area plays an important role, the fracture energy increases with $`V`$ at a larger rate than on the faster-$`V`$ side. (ii) At a given value of $`V`$, both the value of the fracture energy and its rate of increase with $`V`$ decrease with increasing cross-link density.
## 2 Experiment
Samples— We use as samples four kinds of acrylamide gels which have the same polymer concentration and different cross-link densities. The amount of each reagent for preparing the acrylamide gels is shown in Table 1. Acrylamide monomer (AA, $`M_w`$ =71.08) constitutes the sub-chains and methylenebisacrylamide (BIS, $`M_w`$ =154.17) constitutes the cross-links. Ammonium persulphate (APS) is an initiator and tetramethylethylenediamine (TEMD) is an accelerator of the radical polymerization of AA and BIS. We will distinguish the samples by the codes 4BIS–10BIS, as shown in Table 1. To make pillar-shaped gels (2cm$`\times `$1.8cm$`\times `$14cm), pre-gel solutions produced according to Table 1 were poured into containers in which molds were arranged and left for 24 hours at 25 °C.
The gels were taken out of the molds and used for the fracture experiments. The values of the Young’s modulus $`E`$ and of the transverse wave velocity $`V_t`$ of the gels are also shown in Table 1. The values of $`E`$ were measured by compressing the gels. To calculate $`V_t`$ we used a density $`\rho `$ = 1.01 (g/cm<sup>3</sup>) and a Poisson’s ratio $`\nu `$ = 1/2.
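For an incompressible elastic medium, $`V_t`$ follows from the shear modulus $`\mu =E/2(1+\nu )`$, which equals $`E/3`$ for $`\nu =1/2`$. A minimal sketch of the conversion; the value of $`E`$ below is an assumed, representative number (the measured moduli are in Table 1, not reproduced here):

```python
import numpy as np

E   = 1.2e5   # assumed Young's modulus of an acrylamide gel, dyn/cm^2
rho = 1.01    # density, g/cm^3
nu  = 0.5     # Poisson's ratio (incompressible limit)

mu  = E / (2.0 * (1.0 + nu))   # shear modulus; equals E/3 for nu = 1/2
V_t = np.sqrt(mu / rho)        # transverse (shear) wave speed
print(f"V_t = {V_t:.0f} cm/s") # ~2e2 cm/s for these numbers
```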
Peel-test-like method— In order to measure the fracture energy of the gel we developed a method similar to the peel test. In Fig. 1 we present a gel fractured by this method. The set-up of the experiment is as follows. We put the pillar-shaped gel on an aluminum plate and heated the aluminum plate with a gas burner for about ten seconds. By this treatment the sample gel is fixed on the aluminum plate. We attached a strip of filter paper to the upper surface of the gel. The filter paper adheres tightly to the gel. We created an initial notch on one of the smallest surfaces of the gel and made it propagate by 2cm by pulling the filter paper by hand. One end of the filter paper was connected to a stepping motor located well above the gel (1.8m) through a strain-gauge and a wire. A thin layer of the gel ($`\sim `$1mm) is peeled off by rolling up the wire with the stepping motor. The control parameter is the rolling speed $`V`$, which is equal to the crack speed $`V`$ in our 90° peeling geometry. The measured quantity is the peeling force $`F(t)`$.
Fracture energy— The fracture energy $`G`$ is defined as the energy needed to create a unit area of fracture surface. In our peel-test-like method the fracture energy $`G`$ is calculated from the following equation,
$$G=\frac{F}{w}$$
(1)
where $`F`$ is the measured force and $`w`$ is the width of the pillar-shaped gel (see Fig. 1). In order to understand this relation, let us suppose that the crack front in Fig. 1 steadily propagates over a distance $`\mathrm{\Delta }x`$ along the direction of the longest axis. The increase in area of the fracture surface due to this propagation of the crack front is $`w\times \mathrm{\Delta }x`$, and the energy required to extend the fracture surface is the work done on the gel, $`F\times \mathrm{\Delta }x`$. Therefore the fracture energy $`G`$ is $`(F\times \mathrm{\Delta }x)/(w\times \mathrm{\Delta }x)`$, which reduces to Eq. (1).
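In practice $`G`$ is evaluated from the force trace recorded during steady propagation. A minimal sketch; the force trace and the width $`w`$ = 1.8 cm are illustrative, and the averaging over the central third of the steady interval follows the procedure described in Sec. 3:

```python
import numpy as np

def fracture_energy(F, w):
    """F: force samples (dyn) during steady crack propagation; w: width (cm).
    Returns G = <F>/w (erg/cm^2), averaging over the central third."""
    n = len(F)
    F_central = F[n // 3 : 2 * n // 3]
    return np.mean(F_central) / w

F_trace = 1.0e4 + 500.0 * np.random.randn(300)  # illustrative trace, dyn
print(f"G = {fracture_energy(F_trace, w=1.8):.3g} erg/cm^2")
```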
Roughness of fracture surfaces— In this study we evaluate the roughness of the fracture surfaces using replicas produced by molding the fracture surfaces with silicone rubber. The replica was cut along the plane which was normal to the global fracture surface and parallel to the direction of fracture propagation. The shape of the cross section was recorded by an image scanner (Microtek, ScanMaker III). A quantity which can be regarded as a measure of the roughness of the fracture surfaces was extracted from the image of the cross section.
## 3 Results
Figure 2a shows $`F(t)`$ at $`V=`$ 0.4cm/s. The arrows indicate the initiation and the termination of the fracture propagation. The fracture propagates steadily in the period of time between the arrows. We divided the period of time corresponding to the steady state fracture propagation equally into three periods and evaluated the fracture energy $`G`$ using the time average of $`F(t)`$ over the central period.
Figure 2b shows $`F(t)`$ at $`V=`$ 0.04 cm/s. As in Fig. 2a, there is a period of time corresponding to steady state fracture propagation. However, the fluctuation of $`F(t)`$ in Fig. 2b is larger than that in Fig. 2a, even during the period of steady state fracture propagation. The increase in the fluctuation of $`F(t)`$ is characteristic of slow fracture and is accompanied by roughening of the fracture surfaces. We show evidence for this roughening of the fracture surfaces later (see Fig. 5 and Fig. 6).
Figure 3 is a plot of the fracture energy $`G`$ as a function of the crack speed $`V`$. At fast values of $`V`$ ($`V>`$ 1cm/s), $`G(V)`$ depends linearly on $`V`$, and both $`G(V)`$ and $`dG/dV`$ decrease with increasing BIS concentration of the samples. Figure 4 is a plot of the fracture energy $`G(V)`$ of 4BIS, 6BIS and 8BIS for $`V<`$ 1cm/s. A common feature of $`G(V)`$ for these samples is that there is a region of $`V`$ where $`G`$ increases with decreasing $`V`$, so that $`G(V)`$ has a minimum (shown by the upward arrows in Fig. 4) at a certain value of $`V`$; hereafter, we will call this value $`V_{min}`$. The dependence of $`G(V)`$ on BIS concentration is non-monotonic below $`V\sim `$ 0.8cm/s.
As $`V`$ decreases across $`V_{min}`$, the roughness of the fracture surfaces grows (the roughening at slow fracture). In Figures 5a-c we show the morphologies of the fracture surfaces of 6BIS at different crack speeds. The bars represent 0.9 cm. Figures 5e-g show the cross sections of the fracture surfaces shown in Figs. 5a-c, respectively. The cross section is along the plane which is perpendicular to the global fracture surfaces and contains the center lines of the fracture surfaces (the $`x`$-axis in Fig. 5d). The vertical size of the cross section corresponds to 3cm and the horizontal size is magnified 2.5 times compared with the true scale. The shape of the right-hand side boundary of the cross section corresponds to $`h(x)`$ shown in the illustration, i.e., the height of the fracture surface measured at each point of the $`x`$-axis. Figure 5a is a fracture surface of 6BIS above $`V_{min}`$. At such crack speeds most parts of the fracture surface are flat, and a few steps exist on the global fracture surface, which appear as lines in Fig. 5a. Around $`V_{min}`$, such steps are frequently produced and the roughness of the fracture surfaces begins to grow (Fig. 5b). As $`V`$ decreases further, the roughness of the fracture surfaces becomes more remarkable (Fig. 5c).
To quantify the roughness of the fracture surfaces we introduce a quantity $`R`$ defined by the following equations.
$`R\equiv {\displaystyle \int _{l_c}}\sqrt{1+(dh/dx)^2}𝑑x/{\displaystyle \int _{l_c}}𝑑x`$ (2)
$`=<\sqrt{1+(dh/dx)^2}>,`$
where the range of integration $`l_c`$ represents the distance along the $`x`$-axis which corresponds to the central period of time over which the average of $`F(t)`$ is taken (see the first paragraph of this section), and the symbol $`<\mathrm{}>`$ represents the spatial average over this distance. The numerator on the right-hand side of (2) is the contour length of $`h(x)`$ over the distance; thus $`R`$ is equal to 1 for a completely flat fracture surface ($`dh/dx=0`$) and increases from 1 as the roughness of the fracture surface increases. Therefore, $`R`$ is an index of the roughness of the fracture surfaces.
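Numerically, $`R`$ is just the ratio of the contour length of the digitized profile $`h(x)`$ to its baseline length, and the area correction introduced below is then a one-line division by $`R^2`$. A minimal sketch with an illustrative profile:

```python
import numpy as np

def roughness_index(x, h):
    """R = <sqrt(1 + (dh/dx)^2)>: contour length of h(x) per unit baseline."""
    integrand = np.sqrt(1.0 + np.gradient(h, x) ** 2)
    # trapezoidal integration of the contour length
    contour = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return contour / (x[-1] - x[0])

x = np.linspace(0.0, 3.0, 500)                # cm; a flat h gives R = 1
h = 0.05 * np.sin(2.0 * np.pi * x / 0.3)      # illustrative rough profile, cm
R = roughness_index(x, h)
print(f"R = {R:.3f}; corrected energy is G / R^2")
```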
In Fig. 6 we show $`R`$ as a function of the crack speed $`V`$ for the four kinds of sample gels. $`R(V)`$ of the gels has a common feature: at fast values of $`V`$, $`R`$ is close to 1, and with decreasing $`V`$, $`R`$ begins to increase at a value of $`V`$ close to $`V_{min}`$. This fact clearly shows the correlation between the roughening of the fracture surfaces and the increase in $`G(V)`$ with decreasing $`V`$ across $`V_{min}`$.
When we take into account the roughness of the fracture surfaces, we should correct the fracture energy by dividing it by $`R^2`$. Strictly speaking, we should measure the same quantity as $`R`$ along the lateral direction in Fig. 5, $`R^{}`$, and divide $`G(V)`$ by $`RR^{}`$. However, the structures causing the roughening are the steps on the fracture surfaces extending at 45° from the $`x`$-axis. Therefore we can expect that $`R`$ and $`R^{}`$ are very close. In Fig. 7 and Fig. 8, we show the corrected fracture energy $`\overline{G}(V)\equiv G(V)/R(V)^2`$. The behavior of $`\overline{G}(V)`$ at fast values of $`V`$ is qualitatively identical to that of $`G(V)`$, i.e., $`\overline{G}(V)`$ increases linearly with $`V`$, and $`\overline{G}(V)`$ and $`d\overline{G}/dV`$ decrease with BIS concentration. On the other hand, when $`V`$ decreases, a crossover in $`\overline{G}(V)`$ occurs within a narrow range of $`V`$, and below the crossover range $`d\overline{G}/dV`$ becomes larger than above it. As a result of the correction, $`\overline{G}(V)`$ at each value of $`V`$ in this region depends monotonically on BIS concentration, as in the fast-$`V`$ region.
Our results for the corrected fracture energy $`\overline{G}`$ can be summarized as follows:
1. At a given value of $`V`$, $`\overline{G}(V)`$ decreases with increasing BIS concentration.
2. At fast values of $`V`$ ($`V>`$ 1cm/s), $`\overline{G}(V)`$ for each BIS concentration linearly increases with $`V`$.
3. The increasing rate $`d\overline{G}/dV`$ decreases with increasing BIS concentration.
4. With decreasing $`V`$ across a crossover range, $`d\overline{G}/dV`$ becomes larger.
## 4 Discussion
In the first half of this section we will discuss the corrected fracture energy $`\overline{G}`$. As shown in Figs. 7-8, the fracture energy $`\overline{G}(V)`$ of the gels is several hundred times as large as the surface tension of water (about 72 dyn/cm at 25 °C, Handbook ). Thus $`\overline{G}(V)`$ reflects the energy needed to break the network structure of the gels near crack fronts, and consists of two parts: one due to cutting polymer chains of the gel network, $`G_{cut}`$; the other due to viscous resistance, $`G_{vis}`$.
$$\overline{G}=G_{cut}+G_{vis}.$$
(3)
Gels synthesized from monomer solutions contain various kinds of defects. The characterization of such gels is an open problem in polymer physics, and we cannot give a quantitative discussion of our results from the microscopic viewpoint. However, the elastic modulus of the gels used in this study increases with BIS concentration, as shown in Table 1. This shows that the actual cross-link density increases with BIS concentration. The following qualitative discussion can be made on $`\overline{G}(V)`$.
We first consider the number of polymer chains cut on the fracture surface of the gels. If all cross-links of a gel disappeared, the system would be a solution of linear polymers, and we could divide it into two pieces without cutting any polymer chain. At the threshold cross-link density of gelation, we need to cut a finite number of polymer chains. With increasing cross-link density, we need to cut more polymer chains on the fracture surfaces. From this consideration, we expect that $`G_{cut}`$ increases with BIS concentration.
We next consider the number of elements which cause viscous resistance to the extension of a fracture surface. Dangling chains, which have free ends, should be such structures. Another possibility is sub-chains long enough to penetrate into other parts of the gel. As the cross-link density decreases, i.e., as the network structure of the gel becomes looser, the number of such elements increases. Thus, we can expect that $`G_{vis}`$ increases with decreasing BIS concentration.
From the above considerations, we can conclude that result (i) shows that $`G_{vis}`$ overwhelms $`G_{cut}`$ at the values of $`V`$ accessed in this study. Result (ii) shows that $`G_{vis}`$ can be represented in the following form,
$$G_{vis}=\alpha V.$$
(4)
Result (iii) shows that the prefactor $`\alpha `$ decreases with increasing BIS concentration. This is reasonable because $`\alpha `$ should be an increasing function of the number of elements contributing to viscous resistance.
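Given measured pairs $`(V,\overline{G})`$ in the linear regime, $`\alpha `$ can be read off as the slope of a straight-line fit, with the intercept giving a rough bound on $`G_{cut}`$. A minimal sketch with illustrative numbers, not our measured data:

```python
import numpy as np

# Illustrative (V, G) pairs in the linear regime V > 1 cm/s:
V = np.array([1.0, 2.0, 4.0, 8.0])           # cm/s
G = np.array([3.1e3, 5.0e3, 8.8e3, 1.65e4])  # erg/cm^2

# G(V) = G_cut + alpha * V  ->  slope = alpha, intercept ~ G_cut
alpha, G_cut = np.polyfit(V, G, 1)
print(f"alpha = {alpha:.3g} erg s/cm^3, intercept = {G_cut:.3g} erg/cm^2")
```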
The physical meaning of (iv) becomes clear if we exchange the ordinate and the abscissa of Fig. 7 and recall that $`\overline{G}`$ is proportional to the force driving the fracture. Above a critical value of $`V`$, an increase of the driving force $`\overline{G}`$ causes a larger increase of $`V`$ than below the critical value. This type of nonlinear relation between a driving force and the conjugate rate is often observed in soft polymeric systems; for example, the relation between stress and strain rate in polymer melts (shear thinning) Doi-Ed and the relation between loading and detaching speed at glass-rubber interfaces stitched by linear polymers Brown , etc. In essence these phenomena are explained by conformational changes of polymers induced by the driving force Ajdari . This mechanism may be applicable to the fracture of gels.
At present, we cannot identify the origin of $`G_{vis}`$. One possibility is bulk viscoelastic dissipation. Moreover, strongly nonlinear processes localized near crack fronts, for example chain pull-out, may contribute to $`G_{vis}`$. To clarify this point, we need further experiments using better-controlled gels.
Now we discuss the phenomena related to the roughening of the fracture surfaces. As shown in Figs. 4-6, the apparent fracture energy $`G(V)`$ increases with decreasing $`V`$, accompanied by the roughening of the fracture surfaces. This means that fracture of the gels does not follow the path of minimum dissipation. In other words, under slow fracture conditions gels increase their strength by undulating their crack front lines. At the phenomenological level, a similar $`V`$-$`G`$ curve has been reported for the fracture of glassy polymers such as PMMA Kinloch ; Kausch , where it is attributed to crazing. Crazing, which is a kind of plastic deformation, is characteristic of glassy polymers, and we cannot expect a concrete molecular-level relation between the $`V`$-$`G`$ curve of gels and that of glassy polymers.
The scale-invariant nature of rough fracture surfaces is one of the topics in the physical study of fracture. Theoretically, it has been studied as a stochastic effect Bouchaud . In the gels, the roughening is caused by well-defined elements, i.e., steps on the fracture surfaces extending in the directions of $`\pm `$ 45° from the $`x`$-direction.<sup>1</sup><sup>1</sup>1Besides the oblique step-lines, scratch-like steps extending along the $`x`$-direction can be seen in Figs. 5a-c. From qualitative observation by eye and with a low-magnification microscope, the following tendencies are recognized: (i) large (thick) steps are of the oblique type and small (thin) steps are of the scratch-like type; (ii) the critical height of steps becomes larger as $`V`$ increases. Two kinds of morphological transition occur on the fracture surfaces of gels: One is the roughening transition, which is related to the nucleation frequency of the steps over the whole of a crack front; the other is the oblique/scratch-like transition of each step-line. In a previous study Tanaka96 , where we used acrylamide gels of higher polymer concentration and lower cross-link density than those used in the present study, we confused these two transitions because in the gel used in the previous study the coexistence of the two kinds of steps occurs only in a relatively narrow range of $`V`$ ($`V\sim `$ 0.5 cm/s) and the roughening transition occurs in the same range. We will report on the details of the morphological transition elsewhere. (In a previous work Tanaka98 we clarified the structures on crack fronts which create the oblique step-lines, and classified the collision processes between the structures.) This result gives a new viewpoint on the general study of rough fracture surfaces.
Wallner Wallner reported similar oblique step-lines on fracture surfaces of glass (Wallner lines). The proposed mechanism Holloway is that the Wallner lines are created at the parts of a moving crack front where the stress field is disturbed by stress pulses nucleated when the crack front passes through irregular points of the material, i.e., the Wallner lines are loci of the intersections between the crack front and the fronts of stress pulses. This mechanism does not hold for the step-lines of gels, because the $`\pm `$45° oblique step-lines of gels are observed even at values of $`V`$ much slower than the sound velocity in the gels; in the fracture process in Fig. 5c, for example, the crack front ($`V=`$ 0.015cm/s) is almost stationary compared with the stress pulses ($`V_t`$=200cm/s, see Table 1), and the loci of the intersections are almost along the crack front itself, i.e., almost horizontal in Fig. 5c. This does not agree with the $`\pm `$45° obliquity of the step-lines. On the other hand, with regard to the geometrical aspect it is probable that the Wallner lines are created by structures on crack fronts similar to those observed in gels.
In summary, we studied the dependence of the fracture energy on the crack speed $`V`$ and on the cross-link density, taking account of the roughness of the fracture surfaces. The following features are found in the $`V`$ dependence of the fracture energy: (i) At a given value of $`V`$, the fracture energy decreases with the cross-link density. (ii) At fast values of $`V`$ ($`V>`$ 1cm/s), the fracture energy increases linearly with $`V`$, and at slow values of $`V`$ the fracture energy increases with $`V`$ at a larger rate than at fast values of $`V`$. These results indicate that dissipative effects dominate over the effects of the breakage of polymer chains in the fracture of gels.
The authors thank Professor Ken Sekimoto for leading them to the study of this field. They also thank Professor Fumihiko Tanaka and Professor Mitsugu Matsushita for their helpful comments. This work was partly supported by a Grant-in-Aid from the Ministry of Education, Science, Sports and Culture of Japan.
# HS 0822+3542 – a new nearby extremely metal-poor galaxy
## 1 Introduction
Since the Searle & Sargent (1972) paper identifying blue compact galaxies (BCGs), that is, low-mass galaxies showing emission line spectra characteristic of Hii regions, intense star formation (SF), and oxygen abundances of 1/50 – 1/3 solar<sup>1</sup><sup>1</sup>112+log(O/H) = 8.92 (Anders & Grevesse Anders89 (1989))., such objects have been considered as young galaxies undergoing one of their first star formation bursts. I Zw 18, a BCG with the lowest known oxygen abundance among galaxies (O/H $``$ 1/50 (O/H), Searle & Sargent Searle72 (1972); Izotov & Thuan IT99 (1999)), has been suggested as a candidate truly local young galaxy, experiencing its first short SF episode. The second candidate young galaxy, SBS 0335–052E, with an oxygen abundance of 1/41 (O/H) (Melnick et al. Melnick92 (1992); Izotov et al. 1997a ; Lipovetsky et al. Lipovetsky99 (1999)) was discovered 18 years later by Izotov et al. (Izotov90 (1990)). With only two probable examples, we must be extremely lucky to be witnessing local galaxy formation. The proximity of these probable young galaxies allows one to study their properties in detail and to set important constraints on models of galaxy formation. Such studies are important for understanding the nature of the very faint and compact probable primeval galaxies at high redshifts. Most such galaxies at $`z=3`$–5 were discovered only recently (e.g. Steidel et al. Steidel96 (1996); Dey et al. Dey98 (1998)), and it seems that the majority of them are already rather evolved systems. Moreover, the local candidate young galaxies are at least one order of magnitude less massive than the faintest candidate young galaxies at high redshifts, and represent the range of baryon mass (10<sup>8</sup>–10<sup>9</sup> $`M_{}`$) within which possibly most primeval galaxies have formed (e.g. Rees Rees88 (1988)).
Evidence for the existence of old low-mass stellar populations has been obtained over the last 25 years for most of the studied BCGs (Thuan Thuan83 (1983); Loose & Thuan Loose86 (1986)). Moreover, no conclusive answer has been reached yet about the youth of the few most metal-poor BCGs. However, some observational data have been collected lately which apparently support young ages for these BCGs. Among them we point out:
a) Extremely low abundances of heavy elements in Hii regions surrounding young clusters, consistent with theoretical expectations of “metal” yield during a first SF event ($`Z`$ $`<`$ 1/20 $`Z_{}`$) (e.g., Pilyugin Pil93 (1993));
b) Very blue colours outside the location of the current SF burst, consistent with a lack of stars older than 100 Myr (Hunter & Thronson Hunter95 (1995); Papaderos et al. Papa98 (1998)). While the recent analysis of HST data for I Zw 18 by Aloisi et al. (Aloisi99 (1999)) suggests an age of 1 Gyr for the underlying stellar population of the galaxy, Izotov et al. (Izotov2000 (2000)) argue that a self-consistent treatment of all data favours a significantly larger distance to I Zw 18 than adopted by Aloisi et al., and a 100 Myr stellar population;
c) A large amount of neutral gas, making up 99% of all baryonic (luminous) mass (van Zee et al. vanZee98 (1998); Pustilnik et al. Pus2000 (2000));
d) Practically zero metallicity of this Hi gas, e.g., (O/H) $`<`$ 3$`\times `$10<sup>-5</sup>(O/H), as reported for SBS 0335–052E (Thuan & Izotov TI97 (1997)). This emphasizes either an extremely slow evolution of these systems, or a very recent onset of metal production. The latter suggests that the neutral gas clouds in these galaxies are composed of pregalactic material not yet polluted by stellar nucleosynthesis products.
It was suggested recently by Izotov & Thuan (IT99 (1999)), from the analysis of carbon and nitrogen abundances, that several BCGs with O/H $`<`$ 1/20 (O/H) in Hii regions are good candidates for galaxies with a recent first SF episode. Until now, fewer than ten such galaxies with good abundance determinations are known. Even though the existence of truly young local galaxies is debatable (see, e.g., Kunth & Östlin Kunth99 (1999)), the importance of studies of extremely metal-poor galaxies is beyond doubt, since they best approximate the properties of primeval galaxies at large redshifts.
In this paper we describe the data obtained for the third most metal-deficient galaxy, HS 0822+3542, with O/H = 1/36 (O/H). This is one of the nearest and, at the same time, the dimmest of the known candidate young galaxies.
## 2 Observations and data reduction
A new extremely metal-poor BCG, HS 0822+3542, was discovered on April 5, 1998, during observations with the 6 m telescope of the Special Astrophysical Observatory (SAO) of the Russian Academy of Sciences (Pustilnik et al. 1999b ), in the framework of the Hamburg/SAO survey for emission-line galaxies (Ugryumov et al. Ugryumov99 (1999)). Its J2000 coordinates are: R.A. = 08<sup>h</sup>25<sup>m</sup>55$`\stackrel{s}{.}`$0, Dec. = +3532′31″. The main parameters of the galaxy are presented in Table 1. Here we present new optical spectroscopic and photometric, and Hi 21 cm radio observations to study the properties of this galaxy.
### 2.1 Long-slit spectroscopy
#### 2.1.1 Observations
Optical spectra of HS 0822+3542 were obtained with the 6 m telescope on April 6, 1998 with the spectrograph SP-124 equipped with a Photometrics CCD-detector ($`24\times 24\mu `$m pixel size) and operating at the Nasmyth-1 focus. The grating B0 with 300 grooves/mm provides a dispersion of 4.6 Å pixel<sup>-1</sup> and a spectral resolution of about 12 Å in first order. A long slit with a size of 2″$`\times `$40″ was used. The scale along the slit was 0$`\stackrel{}{.}`$4 pixel<sup>-1</sup>. The total spectral range covered was $`\lambda \lambda `$ 3700–8000 Å.
High S/N ratio long-slit spectroscopy was conducted with the 2.5 m Nordic Optical Telescope (NOT) on May 27 and 28, 1998. We used the spectrograph ALFOSC equipped with a Loral (W11-3AC) CCD, with a 1$`\stackrel{}{.}`$3$`\times `$400″ slit, and grisms #6 and #7 (110 Å/mm), which provide a spectral dispersion of 1.5 Å pixel<sup>-1</sup> and a resolution of about 8 Å (FWHM). The spectral range was $`\lambda \lambda `$ 3200–5500 Å for grism #6 and $`\lambda \lambda `$ 3800–6800 Å for grism #7. The spatial resolution was 0$`\stackrel{}{.}`$189 per pixel. The total exposure time was 60 min for grism #6, split into three 20 min exposures, and 40 min for grism #7, split into two 20 min exposures. The slit, centred on the brightest knot (see the $`R`$-band image of the galaxy in Fig. 1), was oriented in the N-S direction. Spectra of a He-Ne comparison lamp were obtained after each exposure for wavelength calibration, and three spectrophotometric standard stars, Feige 34, HZ 44 and BD+332642, were observed for flux calibration. The seeing during the spectral observations was $``$ 0$`\stackrel{}{.}`$8.
#### 2.1.2 Reduction of long-slit spectra
Data reduction was carried out using the MIDAS<sup>2</sup><sup>2</sup>2MIDAS is an acronym for the European Southern Observatory package — Munich Image Data Analysis System. software package (Grosbøl Grosbol89 (1989)). Procedures included bias subtraction, cosmic-ray removal and flat-field correction. The flat-field correction was produced with the normalization algorithm described by Shergin et al. (SKL96 (1996)). After the wavelength mapping and night sky subtraction, each 2D frame was corrected for atmospheric extinction and was flux calibrated. To derive the sensitivity curves, we used the spectral energy distributions of the standard stars from Bohlin (Bohlin96 (1996)). Average sensitivity curves for grisms were produced for each observing night. R.m.s. deviations between the average and individual sensitivity curves are $``$1.5%, with the maximum deviations of $``$4% in the spectral region 3800$`÷`$4000 Å.
The 2D flux-calibrated spectra were then corrected for atmospheric dispersion (see Kniazev et al. Kniazev2000 (2000)) and averaged. Finally, the 1D averaged spectrum was extracted from a 0$`\stackrel{}{.}`$8 region along the slit, where $`I`$($`\lambda `$4363 Å) $`>`$ 2$`\sigma `$ ($`\sigma `$ is the dispersion of a noise statistics around this line) (see Fig. 2).
The redshift and line fluxes were measured by applying Gaussian fitting. For H$`\alpha `$, \[Nii\] $`\lambda \lambda `$ 6548,6583 Å and \[Sii\] $`\lambda \lambda `$ 6716,6731 Å a deblending procedure was used, assuming Gaussian profiles with the same FWHM as for single lines. The errors of the line intensities in Table 2 take into account the Poisson noise statistics and the noise statistics in the continuum near each line, and include the uncertainties of the data reduction. These errors have been propagated to calculate the element abundances.
The observed emission line intensities $`F(\lambda )`$, and those corrected for interstellar extinction and underlying stellar absorption $`I(\lambda )`$, are presented in Table 2. All lines have been normalized to the H$`\beta `$ intensity. The H$`\beta `$ equivalent width $`EW`$(H$`\beta `$), the absorption equivalent widths $`EW`$(abs) of the Balmer lines, the H$`\beta `$ flux, and the extinction coefficient $`C`$(H$`\beta `$) (the sum of the internal extinction in HS 0822+3542 and the foreground extinction in the Milky Way) are also shown there.
For the simultaneous derivation of $`C`$(H$`\beta `$) and $`EW`$(abs), and to correct for extinction, we used a procedure described in detail in Izotov, Thuan & Lipovetsky (Izotov94 (1994)). The abundances of the ionized species and the total abundances of O, Ne, N, S, and He have been obtained following Izotov, Thuan & Lipovetsky (Izotov94 (1994), 1997b ) and Izotov & Thuan (IT99 (1999)).
### 2.2 CCD photometry
#### 2.2.1 Observations
CCD images in Bessel $`BVR`$ filters were obtained with the NOT and ALFOSC on 1998, May 28. The same 2k$`\times `$2k Loral (W11-3AC) CCD was used, with a plate scale of 0$`\stackrel{}{.}`$189$`\times `$0$`\stackrel{}{.}`$189 and a 6$`\stackrel{}{.}`$5$`\times `$6$`\stackrel{}{.}`$5 field of view. Exposures of 900 s in $`B`$ and 600 s in both $`V`$ and $`R`$ were obtained under photometric conditions but no photometric calibration was performed. The seeing FWHM was 1$`\stackrel{}{.}`$25. Dr. A. Kopylov (SAO RAS) kindly obtained short CCD images in $`BVR`$ with the 1 m telescope of SAO. These observations were used to calibrate the NOT data. Photometric calibration was provided by observations of the stars #4, #7 and #10 from the field of OJ 287 (Firucci & Tosti Firucci96 (1996); Neizvestny Neizvestny95 (1995)).
#### 2.2.2 Reduction of photometric data
All primary data reduction was done with MIDAS. The frames were corrected for bias, dark, and flat field in the same way as for reduction of the 2D frames of NOT long-slit spectra. Aperture photometry was performed on the standard star frames using the MAGNITUDE/CIRCLE task, with the same aperture for all stars. The instrumental magnitudes were transformed to the standard photometric system magnitudes via secondary local standards, calibrated with the 1 m telescope of SAO. The final zero-point uncertainties of the transformation were $`\sigma _B`$ = $`0\stackrel{m}{.}06`$ in $`B`$, $`\sigma _V`$ = $`0\stackrel{m}{.}05`$ in $`V`$, and $`\sigma _R`$ = $`0\stackrel{m}{.}08`$ in $`R`$.
To construct the sky background, we used dedicated adaptive-filtering software developed at the Astrophysical Institute of Potsdam (Lorenz et al. Lor93 (1993)).
The photometry of extended objects was carried out with the IRAF<sup>3</sup><sup>3</sup>3IRAF: the Image Reduction and Analysis Facility is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation (NSF). software package. Elliptical fitting was performed with the ELLIPSE task in the STSDAS package. To construct a surface brightness (SB) profile we used the equivalent radius, defined as the geometric mean $`R^{*}`$=$`\sqrt{ab}`$. The SB profiles were decomposed into two components: one with a Gaussian distribution in the central part, and the second being an exponential disc. The final function has the form:
$$I=I_{\mathrm{E},0}\mathrm{exp}\left(-\frac{R^{*}}{\alpha _\mathrm{E}}\right)+I_{\mathrm{G},0}\mathrm{exp}\left[-\mathrm{ln}2\left(2\frac{R^{*}}{\alpha _\mathrm{G}}\right)^2\right]$$
(1)
For the profile decomposition, the NFIT1D task of STSDAS was used, with weights inversely proportional to the accuracy of the surface brightness profile. The final photometric errors take into account the instrumental errors and the error of the transformation to the standard photometric system. To check the correctness of the disc parameters we separately fitted the outer part of the SB profile, corresponding to the region in Fig. 5 from $`R^{*}`$ = 3″ to 6″. The derived disc parameters were the same within the cited errors.
For consistency with previous works (e.g. Telles et al. Telles97 (1997), Papaderos et al. Papa96 (1996), Papa98 (1998), Doublier et al. 1999a ) the parameters of the disc have been obtained by again fitting the SB versus the equivalent radius, but we note that the obtained values should be taken with caution, because of ellipticity variations.
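A two-component fit of this kind is straightforward to reproduce. A minimal sketch of the decomposition of Eq. (1) with a generic least-squares routine; the profile values are synthetic, generated for illustration only (the actual fitting used the NFIT1D task):

```python
import numpy as np
from scipy.optimize import curve_fit

def sb_model(R, I_E0, a_E, I_G0, a_G):
    """Eq. (1): exponential disc + central Gaussian (a_G is the FWHM)."""
    disc  = I_E0 * np.exp(-R / a_E)
    gauss = I_G0 * np.exp(-np.log(2.0) * (2.0 * R / a_G) ** 2)
    return disc + gauss

# Synthetic profile (true parameters: 10, 1.4", 40, 1.6") with 3% noise
R = np.linspace(0.1, 6.0, 60)
I = sb_model(R, 10.0, 1.4, 40.0, 1.6)
I_obs = I * (1.0 + 0.03 * np.random.randn(R.size))

popt, _ = curve_fit(sb_model, R, I_obs, p0=[5.0, 1.0, 20.0, 1.0],
                    sigma=0.03 * I_obs)   # weights ~ 1/error
print("I_E0, alpha_E, I_G0, alpha_G =", np.round(popt, 2))
```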
### 2.3 Hi 21 cm observations and data reduction
Hi line observations were carried out in July 1998 and in February 1999 with the Nançay<sup>4</sup><sup>4</sup>4The Nançay Radioastronomy Station is part of the Paris Observatory and is operated by the Ministère de l’Education Nationale and Institut des Sciences de l’Univers of the Centre National de la Recherche Scientifique. 300m radio telescope (NRT). The NRT has a half-power beam width of 3$`\stackrel{}{.}`$7 (EW) $`\times `$ 22′ (NS) at the declination Dec. = 0.
Since HS 0822+3542 had a known optical redshift, we split the 1024-channel autocorrelator in two halves and used a dual-polarization receiver to increase the S/N ratio. Each correlator segment covered a 6.4 MHz bandwidth, corresponding to a 1350 km s<sup>-1</sup> velocity coverage, and was centred at the frequency corresponding to the optical redshift. The channel spacing was 2.6 km s<sup>-1</sup> before smoothing and the effective resolution after averaging pairs of adjacent channels and Hanning smoothing was 10.6 km s<sup>-1</sup>. The system temperature of the receiver was $``$ 40 K in the horizontal and vertical linear polarizations. The gain of the telescope was 1.1 K/Jy at the declination Dec. = 0. The observations were made in the standard total power (position switching) mode with 1-minute on-source and 1-minute off-source integrations.
The data were reduced using the NRT standard programs DAC and SIR, written by the telescope’s staff. Both H and V polarization spectra were calibrated and processed independently, and were finally averaged together. Error estimates were calculated following Schneider et al. (Schneider86 (1986)). With an integration time of 210 minutes, the r.m.s. noise is 1.4 mJy after smoothing. HS 0822+3542 is detected with S/N=11. The spectrum is presented in Fig. 3.
## 3 Results and discussion
The main parameters of HS 0822+3542, along with those of the two earlier known local candidate young galaxies, are presented in Table 1.
For the sake of homogeneity, the distances to all three galaxies are derived from their radial velocities. For the distance-dependent parameters we used the distances $`D_{\mathrm{Vir}}`$ derived from the quoted heliocentric Hi velocities $`V_{\mathrm{HI}}`$, accounting for the motion of the Galaxy relative to the centroid of the Local Group of 220 km s<sup>-1</sup> and for Virgocentric infall (Kraan-Korteweg Kraan86 (1986)), and assuming a Hubble constant $`H_0`$ = 75 km s<sup>-1</sup>Mpc<sup>-1</sup>. Note that HS 0822+3542 is the nearest galaxy among the extremely metal-deficient galaxies shown in Table 1. However, such distance estimates for the two galaxies with small radial velocities, HS 0822+3542 and I Zw 18, are rather uncertain. In particular, for I Zw 18 different distance determinations from the literature, based on different observational data, range from 10.9 Mpc to 20 Mpc. To resolve this disagreement, deep HST imaging of I Zw 18 is vital, in order to detect the tip of the red giant branch and to measure the distance directly. Unfortunately, such observations have not yet been done. The same applies to HS 0822+3542. Keeping in mind this possible source of uncertainty, which may change the integrated characteristics of HS 0822+3542 and I Zw 18 by a factor of 2–4, we describe below in more detail various parameters of HS 0822+3542 and compare them to those of other well-known young galaxy candidates.
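Since the observed fluxes are fixed, the distance-dependent entries simply rescale with the adopted distance: luminosities and gas masses as $`D^2`$, linear sizes as $`D`$. A minimal sketch of this rescaling; the revised distance of 20 Mpc is purely illustrative:

```python
def rescale(value, D_new, D_old, power):
    """Rescale a distance-dependent quantity (power = 2 for L_B and M_HI,
    power = 1 for linear sizes)."""
    return value * (D_new / D_old) ** power

D_old, D_new = 12.5, 20.0                 # Mpc; D_new is illustrative
M_HI = rescale(2.4e7, D_new, D_old, 2)    # HI mass scales as D^2
print(f"M_HI at {D_new} Mpc: {M_HI:.2g} Msun "
      f"(factor {(D_new / D_old) ** 2:.1f})")
```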
### 3.1 Chemical abundances
The results from the chemical abundance determination are presented in Table 3. For the sake of comparison, we also show the data for the other two low metallicity galaxies SBS 0335–052 and I Zw 18. A two-zone photoionized Hii region has been assumed; the electron temperature for the high-ionization region has been obtained from the \[Oiii\] $`\lambda `$ 4363/($`\lambda `$ 4959+5007) ratio using a five-level atom model (Izotov & Thuan IT99 (1999)) and for the low-ionization region by using the empirical relation between both electron temperatures (Izotov, Thuan & Lipovetsky 1997b ). The electron density was derived from the \[Sii\]$`\lambda `$ 6717,6731 ratio. Abundances have been calculated following Izotov, Thuan & Lipovetsky (Izotov94 (1994), 1997b ).
The oxygen abundance in HS 0822+3542 is slightly higher than in the other two galaxies. This galaxy is, therefore, one more object filling the gap between the metallicity of I Zw 18 and that of the bulk of BCGs. The comparison of the data on element abundances in HS 0822+3542 with the abundance ratios from Izotov & Thuan (IT99 (1999)) shows that all data for HS 0822+3542 agree with the derived average values for the sample of low-metallicity BCGs. The largest deviation occurs in the N/O ratio. But if we take into account the large uncertainty in the flux of the \[Nii\] $`\lambda `$6583 emission line blended with H$`\alpha `$, the N/O is consistent with the value expected for this kind of galaxy.
We thus conclude that the measured ratios of heavy element abundances to that of oxygen in HS 0822+3542 follow the same relation as in all metal-poor BCGs. In particular, nitrogen is likely synthesized in massive stars that also produce oxygen, neon and sulfur, as proposed by Izotov & Thuan (IT99 (1999)).
We note that the low measured value of $`C`$(H$`\beta `$)=0.005$`\pm `$0.04, translated into a 2$`\sigma `$ upper limit $`C`$(H$`\beta `$) $`<`$ 0.085, equivalent to $`E(B-V)`$ = 0.68 $`C`$(H$`\beta `$) $`<`$ 0.058, is consistent with $`E(B-V)`$ = 0.047$`\pm `$0.007 from Schlegel et al. (Schlegel98 (1998)), which was used to correct the measured colours and absolute magnitude. In any case, the net internal extinction A<sub>B</sub> in HS 0822+3542 seems to be very low, implying a low dust content, which is also consistent with the extremely low metallicity of the object.
### 3.2 Morphology and colours
The $`R`$-band image of the galaxy, with filamentary structures at the periphery of the low surface brightness region, is typical of dwarfs with strong SF activity. Curiously, the appearance of HS 0822+3542 is morphologically similar, on the same angular scales, to that of SBS 0335–052E (Melnick et al. Melnick92 (1992); Thuan et al. TIL97 (1997)), including some arcs and filaments. The integrated ($`B-V`$) and ($`V-R`$) colours of HS 0822+3542 are very blue ($`0\stackrel{m}{.}32`$ and $`0\stackrel{m}{.}17`$, respectively), similar to those of SBS 0335–052E.
Below, we describe the surface brightness (SB) distribution of HS 0822+3542. We show in Fig. 4 the SB profiles in $`B`$, $`V`$ and $`R`$-bands, which look very similar, with some deviations in $`R`$-band in the middle part of the profile. This is presumably due to additional H$`\alpha `$ emission from the filamentary structure (see Fig. 1). The $`B`$-band SB profile (Fig. 5) indicates the presence of two components: the central bright compact body and an exponential disc, dominating the light in the outer part of the profile (hereafter LSB component). The parameters of both components, derived from the fitting of the SB profiles in $`B`$, $`V`$ and $`R`$, are given in Table 4.
The corresponding exponential scale lengths are $`\alpha _\mathrm{E}^B`$ = 86$`\pm `$1 pc, $`\alpha _\mathrm{E}^V`$ = 86$`\pm `$2 pc and $`\alpha _\mathrm{E}^R`$ = 84$`\pm `$2 pc. A comparison with the same parameters derived for the LSB components of other BCGs (Papaderos et al. Papa96 (1996); Papaderos et al. Papa98 (1998); Doublier et al. 1999a ) shows that the scale length of HS 0822+3542 is smaller than that of any BCG studied in the cited works. Even the “ultracompact” dwarf galaxy POX 186 (Doublier et al. 1999b ) has a scale length more than twice as large (180 pc). However, HS 0822+3542 is not the only extreme case in its small disc “size”. At least two galaxies have comparable or smaller scale lengths: GR 8 (Mateo Mateo98 (1998)) and Tol 1116–325 (Telles et al. Telles97 (1997)).
Integrating the SB profile of the underlying exponential disc we obtain its total $`B`$ magnitude:
$$B_{\mathrm{disc}}=\mu _{\mathrm{E},0}5\mathrm{log}(\alpha _\mathrm{E})1.995,$$
(2)
where $`\alpha _\mathrm{E}`$ is in arcsec.
With the resulting $`B_{\mathrm{disc}}`$ = 18$`\stackrel{m}{.}`$22 we can also estimate the luminosity of the current SF burst and the corresponding brightening of the galaxy. The brightening is quite modest, about 30%. The light of the current burst corresponds to $`B_{\mathrm{burst}}`$ = 19$`\stackrel{m}{.}`$46 and a luminosity of $`M_B=`$–11$`\stackrel{m}{.}`$1.
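Equation (2) is just the integral of an exponential disc, whose total flux is $`2\pi I_{\mathrm{E},0}\alpha _\mathrm{E}^2`$, so the constant is 2.5 log(2$`\pi `$) ≈ 1.995. A minimal sketch verifying this and reproducing $`B_{\mathrm{disc}}`$; the central surface brightness of 20.95 mag arcsec<sup>-2</sup> used below is an assumed value consistent with Eq. (2) (the measured value is in Table 4, not reproduced here):

```python
import math

print(f"constant = {2.5 * math.log10(2.0 * math.pi):.3f}")  # -> 1.995

def disc_magnitude(mu_E0, alpha_E):
    """Eq. (2): total magnitude of an exponential disc.
    mu_E0: central SB (mag/arcsec^2); alpha_E: scale length (arcsec)."""
    return mu_E0 - 5.0 * math.log10(alpha_E) - 1.995

# With alpha_E = 1.40" and an assumed mu_E0 = 20.95 mag/arcsec^2:
print(f"B_disc = {disc_magnitude(20.95, 1.40):.2f}")        # -> 18.22
```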
Since the scale lengths of the underlying disc in all three bands are identical within small uncertainties (1–1.5%), we can assume as a first approximation that the scale length of the disc is the same in $`B`$, $`V`$ and $`R`$, equal to the average $`<\alpha _\mathrm{E}>`$ = 1.40$`\pm `$0.02 arcsec, corresponding to 85$`\pm `$1 pc. Hence, the underlying disc has no colour gradient, and its colours ($`B-V`$)<sub>disc</sub> and ($`V-R`$)<sub>disc</sub> can be approximated by the colours at its central SB
$$(B-V)_{\mathrm{disc}}=\mu _{\mathrm{E},0}^B-\mu _{\mathrm{E},0}^V=0\stackrel{m}{.}10\pm 0\stackrel{m}{.}06,$$
$$(V-R)_{\mathrm{disc}}=\mu _{\mathrm{E},0}^V-\mu _{\mathrm{E},0}^R=0\stackrel{m}{.}29\pm 0\stackrel{m}{.}09,$$
respectively. The disc colours can also be obtained from its integral $`B`$, $`V`$ and $`R`$ magnitudes (column 7 in Table 4):
$$(B-V)_{\mathrm{disc}}=0\stackrel{m}{.}12\pm 0\stackrel{m}{.}10,$$
$$(V-R)_{\mathrm{disc}}=0\stackrel{m}{.}22\pm 0\stackrel{m}{.}16,$$
consistent with the estimates above. The uncertainties of the latter colours are derived from the errors of the integrated disc magnitudes, via propagation of errors in equation (2). They are $`0\stackrel{m}{.}07`$, $`0\stackrel{m}{.}08`$ and $`0\stackrel{m}{.}14`$ for $`B`$, $`V`$ and $`R`$, respectively.
A close coincidence of the scale lengths of BCG LSB components in the different filters was found by Papaderos et al. (Papa98 (1998)) for SBS 0335–052E. The same correlation of disc scale lengths in different bands was shown to exist for a sample of 19 BCGs (Doublier et al. 1999a ). However, the colours $`(B-R)_{\mathrm{disc}}`$ for BCGs with small scale lengths from Doublier et al. are redder and lie in the range of 0$`\stackrel{m}{.}`$61–1$`\stackrel{m}{.}`$47, with an average $`(B-R)_{\mathrm{disc}}`$ = 1$`\stackrel{m}{.}`$11$`\pm `$0$`\stackrel{m}{.}`$30. The most similar to HS 0822+3542 in this colour (with $`(B-R)_{\mathrm{disc}}`$ = 0$`\stackrel{m}{.}`$61) is SBS 0940+544, with 12+log(O/H) = 7.43$`\pm `$0.01 (Izotov & Thuan IT99 (1999)).
In Fig. 6 we show the distributions of the observed ($`B-V`$) (filled squares) and ($`V-R`$) (open circles) colours, as functions of the effective radius. Mean disc colours are shown by filled and open rectangles to the right of the observed colour profiles. Note that the ($`V-R`$) colours are shifted by 1$`\stackrel{m}{.}`$0 in Fig. 6, to minimize confusion. Note also that ($`V-R`$) is even bluer for the outermost part of the disc ($`R^{*}>`$3″): 0$`\stackrel{m}{.}`$16$`\pm `$0$`\stackrel{m}{.}`$12.
The effective sizes of the central Gaussian component in the different bands also coincide within small uncertainties. We can therefore think of this component as a body with a homogeneous colour distribution. We note also that this compact Gaussian component is significantly larger than we would expect for a point-like source convolved with the point spread function (PSF). The FWHM of stellar images measured on the CCD frames near the galaxy is 1$`\stackrel{}{.}`$25, whereas it is $`\alpha _\mathrm{G}`$=1$`\stackrel{}{.}`$6 (Table 4) for the bright central region. The deconvolved FWHM for this component amounts to $``$ 1$`\stackrel{}{.}`$0. Comparing with the structure of star-forming regions in other BCGs, it is natural to assign the enhanced optical emission in the central part of HS 0822+3542 to a young massive-star cluster formed in the current SF burst and its associated Hii region. The characteristic linear radius of this complex is $``$ 30 pc, comparable to the size of the star cluster R136 in the LMC (Walborn Walborn91 (1991)). Accordingly, the absolute $`B`$ magnitude of this bright component, –11$`\stackrel{m}{.}`$1, is near the lower limit of the range found with HST for super-star clusters by O’Connel et al. (O'Connel94 (1994)) in two nearby star-bursting galaxies.
The total size of HS 0822+3542 out to the $`\mu _B`$=25 mag arcsec<sup>-2</sup> isophote can be approximated by an ellipse with major and minor axes 14$`\stackrel{}{.}`$8 and 7$`\stackrel{}{.}`$4, or 900 by 450 pc, respectively.
As we have shown above, the SB distribution of HS 0822+3542 can be fitted by two components. While the brighter and more compact central component seemingly corresponds to the complex of young massive stars formed during the current SF burst and its associated Hii region, the underlying exponential disc may consist of older stars. The latter could have been formed either during a previous, much earlier SF episode, or relatively recently, in a precursor of the current SF burst, which could be significantly displaced ($``$150–200 pc) from the underlying disc centre. If the current SF burst and the previous SF activity are causally connected, the propagation time of the SF wave from the underlying disc centre, with a typical velocity of 10 km s<sup>-1</sup> (e.g., Zenina et al. Zenina97 (1997)), would be only 15–20 Myr. In order to check this option one could search for the Hei absorption features of early B stars in the underlying disc.
To check for emission from older stellar populations, which may reside in the region outside the current SF burst, we compare the colours of underlying disc with those predicted for various models, as well as with similar parameters for other young galaxy candidates.
The very blue colours of the disc, after correcting for the extinction in the Galaxy, ($`B-V`$)<sub>0</sub> = $`0\stackrel{m}{.}05\pm 0\stackrel{m}{.}06`$ and ($`V-R`$)<sub>0</sub> = $`0\stackrel{m}{.}26\pm 0\stackrel{m}{.}09`$, are reasonably consistent with the predictions for an instantaneous SF burst with a Salpeter IMF, a metallicity of 1/20 $`Z_{}`$ and an age of $``$ 100 Myr (Leitherer et al. Leitherer99 (1999)). If prolonged star formation is assumed, then the age of the oldest stellar population in the extended disc can be larger than that of the instantaneous burst. The observed colours can also be explained by continuous SF with a constant star formation rate from 500 Myr to 20 Myr ago. However, older stellar populations with an age of 10 Gyr are excluded. The derived age is somewhat lower for the outermost part of the disc, with its bluer ($`V-R`$) colour compared to the average value. The integrated colours of ionized gas are about ($`B-R`$) = 0$`\stackrel{m}{.}`$6–0$`\stackrel{m}{.}`$8 (Izotov et al. 1997a ), which is redder than our observed values in the underlying disc. Therefore, if some gas emission contributes to the integrated disc colours, then the true colours of the stellar component should be even bluer than measured.
These colours are very similar to those of the underlying nebulosity in SBS 0335–052. The latter colours are shown to be well explained by the radiation of ionized gas and A stars with ages of no more than 100 Myr, created in the current SF episode. This SF episode is prolonged and may represent a propagating SF wave (Papaderos et al. Papa98 (1998)), similar to what is suggested for the other candidate young galaxies I Zw 18 (Izotov et al. Izotov2000 (2000)) and CG 389 (Thuan et al. 1999b ). This similarity of the LSB disc colours of HS 0822+3542 and SBS 0335–052E suggests that out to radial distances of $``$300 pc the contribution of stellar populations with ages larger than 100 Myr to the disc radiation is undetectable.
As a first approximation, the colours of the underlying disc do not contradict the possibility that the current burst is the first SF episode in this galaxy. However, since the emission of ionized gas can contribute significantly to the total radiation from the volume within 300–400 pc, further photometric and spectroscopic observations of HS 0822+3542 are required to account properly for the contribution of gaseous emission.
### 3.3 Current star formation rate
The star formation rate (SFR) of the current SF episode can be estimated from the total H$`\alpha `$ luminosity. The H$`\alpha `$ flux was measured within the slit, and was corrected for the missing light outside the slit. A correction factor of 2.33 was calculated, using the brightness profile along the slit and assuming circular symmetry. For the derived total flux $`F`$(H$`\alpha `$)=5.8$`\times 10^{14}`$ erg s<sup>-1</sup>cm<sup>-2</sup> we obtain a total H$`\alpha `$ luminosity of 10<sup>39</sup> erg s<sup>-1</sup>.
This H$`\alpha `$ luminosity corresponds to a current SFR of $``$ 0.007 $`M_{}`$ yr<sup>-1</sup> (Hunter & Gallagher Hunter86 (1986)), assuming a Salpeter IMF with a 0.1 $`M_{}`$ lower mass cutoff. The gaseous mass converted into stars during a 3 Myr burst is $``$ 2$`\times `$10<sup>4</sup> $`M_{}`$. The unknown contribution of the gaseous emission to the light from the LSB component further complicates the stellar mass estimate. If only stars with ages $``$ 100 Myr contribute to the LSB luminosity, then the total mass of stars in the underlying disc with $`M_{B,\mathrm{disc}}`$=–12$`\stackrel{m}{.}`$3 is 1.3$`\times `$10<sup>6</sup> $`M_{}`$ (from e.g., Leitherer et al. Leitherer99 (1999)). This is much smaller than the total neutral gas mass of $``$3.0$`\times `$10<sup>7</sup> $`M_{}`$, accounting for a helium mass fraction of 0.25.
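The chain from measured flux to SFR is a short calculation. A minimal sketch; the conversion factor 1.26$`\times `$10<sup>41</sup> erg s<sup>-1</sup> per $`M_{}`$ yr<sup>-1</sup> is the common Kennicutt-type Salpeter-IMF calibration, assumed here for illustration (the Hunter & Gallagher calibration used in the text gives the similar value of $``$0.007 $`M_{}`$ yr<sup>-1</sup>):

```python
import math

D_cm = 12.5 * 3.086e24                  # adopted distance, 12.5 Mpc in cm
F_Ha = 5.8e-14                          # total H-alpha flux, erg/s/cm^2
L_Ha = 4.0 * math.pi * D_cm**2 * F_Ha   # -> ~1e39 erg/s, as in the text

SFR = L_Ha / 1.26e41                    # assumed Salpeter-IMF calibration
print(f"L(Ha) = {L_Ha:.2g} erg/s, SFR = {SFR:.3f} Msun/yr")
```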
The total mass of ionized gas can be obtained from the average mass density inside the Hii region and its volume. An average mass density corresponding to the average electron density $`N_\mathrm{e}`$ of 1 cm<sup>-3</sup> within a volume with a diameter of 0.5 kpc yields a total ionized gas mass of 10<sup>6</sup> $`M_{}`$.
The above estimates show that baryonic matter in HS 0822+3542 is dominated by the gaseous component.
### 3.4 Hi and dynamical mass
The integrated Hi line flux, the characteristic widths of 21 cm line profile $`W_{50}`$ and $`W_{20}`$ (for a Hanning smoothing of 10.6 km s<sup>-1</sup>), and the derived Hi mass $`M_{\mathrm{HI}}`$ for HS 0822+3542 are presented in Table 1. The small Hi mass 2.4$`\times 10^7`$$`M_{}`$ of this galaxy is consistent with its low optical luminosity. The very narrow Hi profile is indicative of its very low amplitude of rotational velocity, which does not exceed 30 km s<sup>-1</sup>. It is difficult to assess the inclination angle correction, since the optical morphology can be unrelated to global properties of the associated Hi cloud, as exemplified by the case of SBS 0335–052 (Pustilnik et al. Pus2000 (2000)). The role of chaotic gas motions is, in general, more important in very low mass galaxies, where the amplitude of random velocities reaching ten km s<sup>-1</sup> or more can be commensurate with the rotational velocity.
The measured Hi mass and the profile width are in the range characteristic of very low mass galaxies. The mass-to-light parameter $`M`$(Hi)/$`L_B`$ = 1.40 $`M_{}`$/$`L_{}`$ is comparable to those of I Zw 18 and SBS 0335–052E. It is not as high as in some gas-rich dwarfs from van Zee et al. (vanZee97 (1997)), but those galaxies, although they have a few Hii regions, are in a relatively quiescent state. The extremely metal-poor BCGs discussed here experience significant luminosity enhancement due to very intense current and recent SF activity; this results in a significant decrease of their mass-to-light ratios relative to their non-active state.
A rough estimate can also be made of the dynamical mass of HS 0822+3542. From the width of the Hi profile at the 20% level, a maximum rotation velocity of 30 km s<sup>-1</sup> can be assumed. The extent of the Hi cloud associated with a BCG is normally many times larger than its optical size. The optical radius $`R_{25}`$ is the radius of a disc galaxy at the isophotal level $`\mu _B`$ = 25.0 mag arcsec<sup>-2</sup>. A conservative lower limit to the ratio of Hi-to-optical radii is 4 (see, e.g. Taylor et al. Taylor95 (1995); Chengalur et al. Chengalur95 (1995); Salzer et al. Salzer91 (1991); van Zee et al. vanZee98 (1998); Pustilnik et al. Pus2000 (2000)). By equating the gravitational and centrifugal forces at the edge of the Hi disc, an estimate of the dynamical mass inside 1.5 kpc is obtained; this is 3.4$`\times `$10<sup>8</sup> $`M_{}`$, one order of magnitude larger than the total visible mass of the galaxy $`M_{\mathrm{neutral}}`$ \+ $`M_{\mathrm{stars}}`$ \+ $`M_{\mathrm{HII}\mathrm{region}}`$.
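The estimate is simply $`M_{dyn}=v_{rot}^2R/G`$. A minimal sketch with the numbers quoted above:

```python
import math

G_cgs = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33            # g
v_rot = 30.0e5              # assumed rotation velocity, cm/s (30 km/s)
R_HI  = 1.5 * 3.086e21      # 1.5 kpc in cm

M_dyn = v_rot**2 * R_HI / G_cgs / M_sun
print(f"M_dyn ~ {M_dyn:.2g} Msun")  # ~3e8 Msun, matching the quoted
                                    # 3.4e8 to within rounding
```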
Even the rather improbable case of an Hi radius equal to the optical one results in a total dynamical mass $``$3 times higher than the visible mass. Hence, HS 0822+3542, like other extremely metal-deficient dwarf galaxies, is dynamically dominated by a dark matter halo, supporting the modern view of the primeval galaxy formation process (e.g. Rees Rees88 (1988)). In turn, the mass of its DM halo is one of the smallest known for galaxies.
## 4 A candidate young galaxy?
The properties of HS 0822+3542 presented above (extremely low abundance of heavy elements, very blue colour of the underlying nebulosity, and extremely small mass ratio of stars to neutral gas) suggest that this could be the nearest candidate young dwarf galaxy forming its first stellar generation. However, on the basis of the present data, we cannot exclude the presence of an underlying older stellar population originating from earlier SF episodes.
Similar studies of the two previously known young galaxy candidates, I Zw 18 and the pair SBS 0335–052E/0335–052W (Izotov et al. 1997a ; Thuan & Izotov TI97 (1997); Pustilnik et al. Pus97 (1997); Lipovetsky et al. Lipovetsky99 (1999)), have not been conclusive as to whether such young systems really exist.
While the chemical properties (see Table 3) of these candidates seem to follow a general trend and behave quite homogeneously, other global properties remain to be understood. From Table 1 it appears that they cover a broad range of neutral hydrogen masses and blue luminosities (a factor of $``$ 40–60). The same is true for their current SFRs and for the mass of stars formed in a single star formation episode.
We suggest that a simple linear scaling between several important parameters holds for forming galaxies, at least in the range of baryon masses of (0.3–20)$`\times `$10<sup>8</sup> $`M_{}`$ (see Table 1, and assuming that most of the baryons in these BCGs are in atomic hydrogen and helium). This is important both for the analysis of conditions capable of keeping pristine gas clouds stable for $``$ a Hubble time, and for the planning of further searches for such objects. In particular, such low-mass objects can represent a significant fraction of the Ly$`\alpha `$ absorbers at high redshifts.
One more indirect argument for the possible youth of the three BCGs considered here comes from calculations of the mass and heavy-element loss in dwarfs with active SF (Mac Low & Ferrara MacLow99 (1999)). These show that the rate of metal loss depends strongly on the baryon mass in the range of 10<sup>7</sup> to 10<sup>9</sup> $`M_{}`$. Therefore, if these extremely metal-poor galaxies with masses ranging from 3$`\times `$10<sup>7</sup> to 2.5$`\times `$10<sup>9</sup> $`M_{}`$ were not in their first SF episode, we would expect significant differences in their observational properties. For the three BCGs discussed here we do not find such differences.
From the surface density of already known objects with extremely low metallicity and small radial velocity, one can expect to find at least ten more such galaxies within 15 Mpc if the search is extended to the entire sky.
## 5 Conclusions
From the data and discussion above we reach the following conclusions:
1. HS 0822+3542 is a new nearby ($`D`$ = 12.5 Mpc) galaxy with oxygen abundance 12 + log(O/H) = 7.35. After I Zw 18 and SBS 0335–052 this is the third lowest metallicity object among Blue Compact Galaxies.
2. Its very low metallicity, very small stellar mass fraction (0.05 relative to the entire baryon mass) and the blue colours of the underlying disc \[($`B-V`$)<sub>0</sub> = 0$`\stackrel{m}{.}`$05, ($`V-R`$)<sub>0</sub> = 0$`\stackrel{m}{.}`$26\] imply that this is one of the few candidates to be a local young galaxy, forming its first generation of stars.
3. HS 0822+3542 is 50–60 times less luminous and massive than another candidate young galaxy SBS 0335–052. This implies a broad range of global parameters for the candidate young galaxies. A linear scaling between several important parameters of such galaxies probably exists, including parameters related to the SF burst.
4. The dynamical mass estimate using the width of the Hi profile and a typical Hi gas extent relative to the optical size, leads to the conclusion that HS 0822+3542 is dynamically dominated by a dark matter halo.
5. Long-slit spectra of higher S/N than presented here, and deep H$`\alpha `$ images, are needed to trace the extent of the ionized gas. Resolved Hi maps will be very helpful to study the dynamics of its ISM and the parameters of its DM halo.
###### Acknowledgements.
The authors thank A.I.Kopylov for $`BVR`$ calibration frames. This work was partly supported by the INTAS grant 96-0500. The SAO authors acknowledge partial support from the Russian Foundation for Basic Research through grant No. 97-2-16755 and from the Center for Cosmoparticle Physics “Cosmion”. Part of the data presented here were taken using ALFOSC, which is owned by the Instituto de Astrofisica de Andalucia (IAA) and operated at the Nordic Optical Telescope under an agreement between the IAA and the NBIfA of the Astronomical Observatory of Copenhagen. One of us (AK) acknowledges the support of the Junta de Andalucia during his stay at the IAA. The authors thank the anonymous referee for useful suggestions which allowed us to improve the presentation of several points. The authors acknowledge the use of the NASA/IPAC Extragalactic Database (NED) and the Lyon-Meudon Extragalactic Database (LEDA).
# 2.5–11 micron spectroscopy and imaging of AGNs: Based on observations with ISO, an ESA project with instruments funded by ESA member states (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA
## 1 Introduction
According to “unified models” of Active Galactic Nuclei (AGN), Seyfert 1 and Seyfert 2 galaxies (hereafter Sf1 and Sf2) are essentially the same objects viewed at different angles: Sf1s are observed close to face-on, such that we have a direct view of the Broad emission Line Region (BLR) and the accretion disk responsible for the strong UV-Optical-X-ray continuum, whereas Sf2s are seen at an inclination such that our view is blocked by an optically thick dusty torus which surrounds the disk and the BLR (e.g. Antonucci antonucci (1993)). This model makes specific predictions. In particular, the UV photons from the disk which are absorbed by the grains in the torus should be re-emitted as thermal radiation in the IR. Several arguments constrain the torus inner radius to be of the order of $``$ 1 pc, in which case the dust temperature should peak at about 700–1000 K and give rise to an emission “bump” between $``$ 2 and 15 $`\mu `$m (Pier & Krolik pier (1992)). The model also predicts that the silicate 9.7 $`\mu `$m feature should appear preferentially in absorption in Sf2s and in emission in Sf1s. In order to test these predictions and better constrain the model, we initiated a program of mid-IR (MIR) observations of a large sample of AGNs. Throughout this paper, we use $`\mathrm{H}_0=\mathrm{\hspace{0.17em}75}\mathrm{km}\mathrm{s}^{-1};\mathrm{q}_0=\mathrm{\hspace{0.17em}0}`$. Unless otherwise stated, all quoted uncertainties correspond to 1-$`\sigma `$ errors.
## 2 Observations
A sample of 57 AGNs and one non-active “normal” SB galaxy were observed with the ISOPHOT (Lemke et al. lemke (1996)) and ISOCAM (Cesarsky et al. 1996a ) instruments on board the Infrared Space Observatory (ISO; Kessler et al. kessler (1996)).
Table 1 lists all the sources successfully observed with ISO. Columns 1–3 give the most common name and equatorial coordinates, columns 4 & 5 the Seyfert type and the redshift, respectively, while columns 6–8 list the instrument, the corresponding exposure time and the start time of the observation. The redshifts and types are taken from the NED <sup>2</sup><sup>2</sup>2The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.. The sample is drawn from the CfA hard X-ray flux limited complete sample (Piccinotti et al. piccinotti (1982)) but lacks the most well known objects (e.g. NGC~4151), which were embargoed by ISO guaranteed time owners. On the other hand, the sample was enriched in bright Sf2s. We caution that our sample is therefore not “complete” in a statistical sense. It is about equally divided into Sf1s (28 sources, including 2 QSOs) and Sf2s (29), where we define Sf1s as all objects of type $`\le `$ 1.5 and Sf2s those whose type is $`>`$ 1.5. The mean and $`r.m.s.`$ redshifts are $`0.047\pm 0.083`$ and $`0.016\pm 0.013`$ for Sf1s and Sf2s, respectively. Excluding the two QSOs (HS~0624+6907 and H~1821+643), the mean Sf1 redshift becomes 0.024$`\pm `$0.015, not significantly different from that of Sf2s. At these mean redshifts, a 10 $`\mathrm{}`$ angular dimension projects onto a linear size at the source of 4.6 and 3.1 kpc, for Sf1 and Sf2 respectively.
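For reference, the angular-to-linear conversion can be sketched as follows (Python); it assumes the small-redshift approximation $`Dcz/\mathrm{H}_0`$ with the adopted $`\mathrm{H}_0`$ = 75 km s<sup>-1</sup> Mpc<sup>-1</sup>, applied to the QSO-excluded Sf1 mean and the Sf2 mean:

```python
import math

# Projected linear size subtended by a 10 arcsec angle at the sample's mean
# redshifts, in the small-z approximation D ~ cz/H0 with H0 = 75 km/s/Mpc.
# 0.024 is the QSO-excluded Sf1 mean redshift, 0.016 the Sf2 mean.
C_KMS, H0 = 3.0e5, 75.0
ARCSEC_RAD = math.pi / (180.0 * 3600.0)

def scale_kpc(z, theta_arcsec=10.0):
    d_kpc = C_KMS * z / H0 * 1e3       # distance in kpc
    return d_kpc * theta_arcsec * ARCSEC_RAD

print(scale_kpc(0.024), scale_kpc(0.016))   # ~4.6 and ~3.1 kpc
```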
For every object but two, the data-set consists of ISOCAM images obtained in staring mode through the LW2 and LW7 filters at 6.75 and 9.63 $`\mu `$m respectively, with a 3″ per pixel magnification, together with 2.5–11.8 $`\mu `$m spectra obtained immediately before with the ISOPHOT-S low resolution ($`3360\mathrm{km}\mathrm{s}^{-1}`$) spectrograph. The images consist of arrays of $`32\times \mathrm{\hspace{0.17em}32}`$ pixels (i.e. $`96\times \mathrm{\hspace{0.17em}96}`$″) with an effective resolution (FWHM) of 3.8″ and 4.5″ for the LW2 and LW7 filters, respectively. The exposure times per filter were always longer than 200 s, sufficiently long to ensure proper stabilization of the detectors. For the spectra, on-source measurements were alternated with sky measurements at a frequency of 1/256 Hz, with a chopper throw of $`300\mathrm{}`$. Through a common $`24\mathrm{}\times \mathrm{\hspace{0.17em}24}\mathrm{}`$ aperture, light is fed simultaneously to the short wavelength detector of the spectrograph (ISOPHOT-SS: 2.5–4.8 $`\mu `$m) and to the equivalent long wavelength channel (ISOPHOT-SL: 5.8–11.8 $`\mu `$m). There is a detector gap between 4.8 and 5.8 $`\mu `$m where no data can be recorded. IR~05189-2524 was observed twice with ISOCAM, while 3C~390.3 and Ark~564 were observed with ISOPHOT-S only.
## 3 Data reduction, calibration & analysis
### 3.1 ISOCAM data
The ISOCAM images were reduced and calibrated using standard procedures of the CAM Interactive Analysis (CIA; Ott ott (1997)) software package <sup>3</sup><sup>3</sup>3CIA is a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matiere, C.E.A., France., starting from the Edited Raw Data (ERD). The first few ($``$ 4) readouts of each frame were discarded so as to retain only those data for which the detector throughput had reached 90 % of its stabilization level. The remainder of the processing involved the usual steps, i.e. dark current subtraction, flat-fielding, removal of particle hits (“de-glitching”) and flux calibration. All sources are detected at a very high significance level. Fluxes were obtained by integrating all the emission in a circle of 3 pixels radius ($`9\mathrm{}`$) and subtracting the background emission summed over an external annulus centered on the source. These fluxes were further multiplied by 1.23 to account for the emission in the wings of the PSF. In the case of extended sources (see below), the flux was integrated over a circle of radius 4.5 pixels (13.5 $`\mathrm{}`$) with the same area as the ISOPHOT-S aperture. The resulting ISOCAM fluxes are listed in columns 2 and 4 of Table 2, where an “E” in the last column denotes extended sources. The flux accuracy is mainly limited by flat-fielding residuals and imperfect stabilization of the signal. It is typically $`\pm 5`$ % for fluxes greater than 200 mJy, $`\pm `$10 % for fluxes in the range 100–200 mJy and $`\pm `$15 % at lower intensity levels. This is confirmed by the two observations of the bright galaxy IR~05189-2524, for which the fluxes differ by only 5.5 % and 0.5 % at 6.75 $`\mu `$m and 9.63 $`\mu `$m, respectively (see Table 2).
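A minimal sketch of this photometry step is given below (Python/NumPy); the annulus radii and all variable names are our own illustrative choices, not conventions of the CIA package:

```python
import numpy as np

# Aperture photometry as described above: sum within a 3-pixel-radius circle,
# subtract a per-pixel background estimated in an outer annulus, and apply
# the 1.23 PSF aperture correction. `image` is a calibrated 32x32 map in
# mJy per pixel; the annulus radii are assumptions for illustration.
def aperture_flux(image, x0, y0, r_src=3.0, r_in=6.0, r_out=10.0):
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    in_src = r <= r_src
    in_bkg = (r > r_in) & (r <= r_out)
    bkg_per_pixel = image[in_bkg].mean()
    net = image[in_src].sum() - in_src.sum() * bkg_per_pixel
    return 1.23 * net
```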
### 3.2 ISOPHOT-S data
The ISOPHOT-S data were reduced with the PHOT Interactive Analysis (PIA; <sup>4</sup><sup>4</sup>4PIA is a joint development by the ESA Astrophysics Division and the ISOPHOT Consortium led by the ISOPHOT PI, D. Lemke, MPIA, Heidelberg. Gabriel gabriel (1998)) software package. However, because ISOPHOT-S was operating close to its sensitivity limit, special reduction and calibration procedures had to be applied. During the measurement, the chopper mirror switches periodically between source and background. After such a change of illumination, the photocurrent of the Si:Ga photoconductors immediately changes to an intermediate level, followed by a slow characteristic transition to the final level. At the fainter fluxes, a few Janskys and below, the time constant of this transition is extremely long. In our case of chopped-mode observations with a frequency of $`1/256`$ Hz, the final asymptotic value is never reached and only the initial steps towards the final value are observed. These are practically equal to the intermediate level.
This allows us to simplify the flux calibration procedure by determining a spectral response function for this particular mode and flux, thereby minimising possible systematic errors due to differences in instrument configuration between the observations of the calibrator and the galaxy. We derived this function from a chopped observation of the faint standard star HD 132142 (TDT 63600901), of similar brightness to our objects, whose flux ranges from 0.15 to 2.54 Jy. The measurement was performed with the same chopper frequency and readout-timing as the AGN observations. The $`S/N`$ of the ISOPHOT-S spectra was considerably enhanced by two additional measures: $`i`$) the 32-s integration ramps were divided into sub-ramps of 2 sec, to provide enough signals per chopper plateau for the statistical analysis and to keep the loss of integration time low when de-glitching, i.e. when removing ramps affected by cosmic ray hits; $`ii`$) after fitting slopes to all sub-ramps and removing outliers (de-glitching) with PIA, the mean signal (slope) for each pixel was determined separately for on- and off-source pointings by fitting gaussians to the signal histograms. This corrects for the remaining asymmetry in the signal distribution, i.e. the tail of the distribution towards higher values due to non-recognised glitches is ignored and the result is closer to the median. We did not use the median itself because the digitisation of the voltages leads to signal-quantisation effects at small fluxes. The difference of on- and off-source signals was then divided by our spectral response function to derive fluxes expressed in Janskys. Taking into account the accuracy of the model-SED of the stellar calibrator (Hammersley 1995, priv. comm., see also Hammersley hamm98 (1998)) and the reproducibility of the measurements (Schulz schulz99 (1999)), the flux calibration is accurate to within $`\pm 10`$ % (1-$`\sigma `$), except at wavelengths $`>`$10 $`\mu `$m, where the flux of HD 132142 is weakest and the noise in the calibration measurement dominates. This agrees also with the accuracies given in Klaas et al. (klaas (2000)). The individual spectra will be published in a separate paper (Schulz et al. schulz (2000)) and can be provided on request.
### 3.3 Comparison of ISOCAM & ISOPHOT-S fluxes
To check the reliability of our calibration, the ISOCAM and ISOPHOT-S results were compared as follows. The total ISOPHOT-S fluxes in the LW2 and LW7 bands were computed by integrating the signal over the nominal band-passes of the filters, 5.00–8.50 $`\mu `$m and 8.50–10.70 $`\mu `$m, respectively. The exact shape of the filter spectral response is somewhat uncertain, and the LW2 band-pass extends over the data-gap between the ISOPHOT-SS and ISOPHOT-SL detector arrays. The integration was performed assuming a simple rectangular profile for the filter and interpolating the power-law continuum (see next section) over the 4.8–5.8 $`\mu `$m data gap. The resulting ISOPHOT-S fluxes are listed in columns 3 & 5 of Table 2. The projected linear dimension in kiloparsecs subtended by a 10 $`\mathrm{}`$ angle is given for convenience in column 6. The ISOCAM and ISOPHOT-S fluxes are in good agreement, especially given the crudeness of the integration method and the assumption about the data gap. The mean ratios of the ISOPHOT-S to the ISOCAM nuclear flux are 0.87 ($`\pm `$ 0.15, r.m.s.) and 1.07 ($`\pm `$ 0.20) at 6.75 and 9.63 $`\mu `$m respectively. Neither ratio differs significantly from unity. Our crude interpolation method, which neglects emission above the continuum, is at least partly responsible for the fact that the 6.75 $`\mu `$m ratio is smaller than one (at the 0.9-$`\sigma `$ level only). Averaged over the whole sample, the mean relative difference between the ISOCAM and ISOPHOT-S fluxes amounts to 16 % for both filters. Selecting only those sources with a flux larger than 100 mJy, the difference decreases to 14 %, as expected. These figures are quite close to the quadratic sum of the uncertainties on the ISOCAM and ISOPHOT-S fluxes, thereby confirming the reliability of our calibration and error estimates.
### 3.4 Extended sources
The 14 extended sources are listed in Table 3, where columns 2–7 give the estimated nuclear flux, the flux in the ISOPHOT-S aperture and the total flux from the galaxy in the two ISOCAM filters. Column 8 lists the spatial extension of the source, obtained by averaging the FWHM of the LW2 and LW7 images, while comments in column 9 provide a brief morphological description of the object. In the case of NGC~5953, the galaxy is compact (4 pixels FWHM) with an approximately gaussian flux distribution and no well defined point-source; in this case, no nuclear flux can be derived. For the other sources, the nuclear flux was obtained by deconvolution. Apart from NGC~1097, which is clearly extended, all other sources are compact and smaller than the entrance aperture of the ISOPHOT-S spectrograph. Averaged over the whole sample, the ratio of the nuclear flux to the flux in the ISOPHOT-S aperture is 0.77 and 0.79 at 6.75 and 9.63 $`\mu `$m, respectively. This implies that even for these extended sources, the ISOPHOT-S spectrum is dominated by the nuclear emission, with a 20–25 % contribution from the underlying galaxy. The mean ratio of the ISOPHOT-S flux to the total flux from the galaxy is 0.75 at 6.75 $`\mu `$m and 0.71 at 9.63 $`\mu `$m. This further implies that the extended MIR emission is relatively weak compared to the flux from the central bulge and nucleus, and that the bulk of it is in any case recorded in the ISOPHOT-S spectrum. Hence, the redshift bias noted earlier should not have a significant impact on the ISOPHOT-S spectra.
### 3.5 Analysis
The typical spectrum of a Sf1 (Mrk~509) is shown in Fig. 1 and that of a Sf2 (NGC~5953) in Fig. 2. The MIR spectrum of a Sf2 is characterized by very strong emission features with well defined peaks at 6.2, 7.7 and 8.6 $`\mu `$m, usually ascribed to Polycyclic Aromatic Hydrocarbon (PAH) bands. The weaker 3.3 $`\mu `$m band is also detected in most sources and, in galaxies of adequate $`S/N`$ and redshift, the blue side of the strong 11.3 $`\mu `$m PAH feature also shows up as a sharp rise in flux toward the long wavelength end of the ISOPHOT-S array. Though much weaker, the PAH emission bands are also present in Sf1s.
As can be judged from Fig. 1, the continuum of a Sf1 is well approximated by a power-law ($`\mathrm{F}_\nu \propto \nu ^{-\alpha }`$). The continuum of a Sf2 is less well defined, but in the absence of a better prescription and for the sake of consistency, we also adopted a power-law functional form to fit the continuum of type 2 sources. The results of the fit are given in Table 4, where we list the spectral index and the continuum flux at a fiducial wavelength of 7 $`\mu `$m in the rest frame of the object. No correction for foreground reddening in the Milky Way was applied, as it is negligible at this wavelength. The error on the flux is estimated to be $`\pm `$10 % and the uncertainty on the spectral index $`\pm `$0.05 for Sf1s and $`\pm `$0.1 for Sf2s.
The fluxes and equivalent widths (EW) of the PAH emission bands were measured by integrating all the flux above the best fit power-law continuum in the following pre-defined wavelength ranges (rest frame of the source): 3.22–3.35 $`\mu `$m, 5.86–6.54 $`\mu `$m, 6.76–8.30 $`\mu `$m and 8.30–8.90 $`\mu `$m for the 3.3, 6.2, 7.7 and 8.6 $`\mu `$m PAH bands, respectively. In addition, the total PAH flux (excluding the weakest 3.3 $`\mu `$m feature) was computed by integration over the range 5.86–8.90 $`\mu `$m. The results are given in Table 4 and Table 5. The errors quoted represent the quadratic sum of the statistical uncertainties attached to each spectral data-point in the integration interval. The ISOCAM images of ESO~137-G34 show a star at a distance of 12 $`\mathrm{}`$ from the nucleus, with a flux of 124 and 81 mJy at 6.75 $`\mu `$m and 9.63 $`\mu `$m respectively. Emission from this star therefore contaminates the ISOPHOT-S spectrum. This contamination shows up as an excess of short wavelength emission which completely dominates the spectrum for $`\lambda \le \mathrm{\hspace{0.17em}4.5}\mu `$m. The parasitic spectrum was crudely estimated by fitting a straight line to the excess over the 2.5–4.5 $`\mu `$m range and removed from the ISOPHOT-S spectrum of ESO 137-G34. For this object, the continuum and line parameters are therefore subject to larger uncertainties than for the rest of the sample.
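In practice, the band measurement amounts to the following sketch (Python/NumPy); the inputs and the choice of the 7.7 $`\mu `$m window are illustrative, and the continuum follows the $`\mathrm{F}_\nu \propto \nu ^{-\alpha }`$ form fitted above:

```python
import numpy as np

# Band flux above the best-fit power-law continuum, integrated over a
# pre-defined rest-frame window (here the 6.76-8.30 micron window of the
# 7.7 micron band), plus the corresponding equivalent width. `wave` (micron,
# ascending) and `flux` (Jy) are illustrative inputs; `f7` and `alpha` are
# the fitted continuum flux at 7 micron and the spectral index.
def pah_band(wave, flux, f7, alpha, lo=6.76, hi=8.30):
    cont = f7 * (wave / 7.0) ** alpha   # F_nu ~ nu^-alpha, i.e. ~ lambda^alpha
    m = (wave >= lo) & (wave <= hi)
    band_flux = np.trapz(flux[m] - cont[m], wave[m])       # Jy micron
    ew = np.trapz((flux[m] - cont[m]) / cont[m], wave[m])  # micron
    return band_flux, ew
```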
## 4 The difference between Sf1 and Sf2
A comparison of Fig. 1 and Fig. 2 reveals that the MIR spectrum of a typical Sf1 is markedly different from that of a Sf2: while Sf1s have a strong continuum with only weak PAH emission, most Sf2s are characterized by a weak continuum but very strong PAH emission. This difference is confirmed by a detailed statistical analysis. In Sf1s, the average equivalent width of the strongest PAH band, at 7.7 $`\mu `$m, is $`\mathrm{EW}_{7.7}=\mathrm{\hspace{0.17em}0.53}\pm 0.47\mu \mathrm{m}`$, where the error refers to the $`r.m.s`$ dispersion about the mean. This is 5.4 times smaller than the average equivalent width in Sf2s, $`\mathrm{EW}_{7.7}=\mathrm{\hspace{0.17em}2.86}\pm 1.95\mu \mathrm{m}`$. Similarly, the equivalent width of the sum of the 3 strongest PAH features is $`\mathrm{EW}_{\mathrm{TOT}}=\mathrm{\hspace{0.17em}0.85}\pm 0.79\mu \mathrm{m}`$ in Sf1s, compared to $`\mathrm{EW}_{\mathrm{TOT}}=\mathrm{\hspace{0.17em}4.38}\pm 2.98\mu \mathrm{m}`$ in Sf2s. The mean equivalent widths of the two populations and their variances are statistically different at the $`10^{-7}`$ and $`2\times 10^{-14}`$ confidence levels, respectively.
The distribution of the $`7.7\mu `$m PAH band EW is shown in Fig. 3. It clearly illustrates that Sf1s and Sf2s have different EW distributions: Sf1s are confined to a small range of EW, with a maximum of 2.0 $`\mu `$m, whereas Sf2 EWs extend all the way up to a maximum of 7.2 $`\mu `$m. A two-sided Kolmogorov-Smirnov (KS) test confirms that the Sf1 and Sf2 EW distributions are statistically different at the $`4\times 10^{-8}`$ confidence level.
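For reproducibility, such a two-sample KS comparison can be written as follows (Python/SciPy); the arrays are placeholders, not the actual Table 4 values:

```python
from scipy.stats import ks_2samp

# Two-sample KS comparison as used above; the arrays stand in for the
# per-object 7.7 micron EWs of Table 4 (illustrative values only).
ew_sf1 = [0.2, 0.4, 0.5, 0.8, 1.1, 1.6]
ew_sf2 = [0.9, 1.5, 2.4, 3.1, 4.6, 6.8]
stat, p = ks_2samp(ew_sf1, ew_sf2)
print(f"KS statistic = {stat:.2f}, p = {p:.2g}")
```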
As can be seen from Fig. 4, however, the distribution of the 7.7 $`\mu `$m PAH luminosity is the same for Sf1s and Sf2s, at the 64 % confidence level (KS test). The mean ($`\pm r.m.s`$) 7.7 $`\mu `$m PAH luminosity of Sf1s is $`\mathrm{log}\mathrm{L}_{7.7}=\mathrm{\hspace{0.17em}42.44}\pm \mathrm{\hspace{0.17em}0.80}\mathrm{erg}\mathrm{s}^{-1}`$, not statistically different (at the 39 % confidence level) from that of Sf2s, $`\mathrm{log}\mathrm{L}_{7.7}=\mathrm{\hspace{0.17em}42.28}\pm 0.78\mathrm{erg}\mathrm{s}^{-1}`$.
On the other hand, the average 7 $`\mu `$m continuum luminosity of Sf2s, $`\mathrm{log}\nu \mathrm{L}_{\nu ,7}=42.84\pm 0.75`$, is nearly 8 times smaller than that of Sf1s, $`\mathrm{log}\nu \mathrm{L}_{\nu ,7}=43.73\pm 0.85`$. A KS test confirms that the luminosity distribution is also different for the two populations, at the $`10^{-4}`$ confidence level. The average continuum spectral index is however not statistically different in Sf1s ($`\alpha =0.84\pm 0.24`$) and Sf2s ($`\alpha =0.82\pm 0.37`$), and a KS test confirms that the distribution of indices is the same in the two populations at the 69 % confidence level.
It must be stressed that selection effects cannot account for the difference between Sf1s and Sf2s. First, the mean redshifts and projected linear sizes of the two populations are not statistically different if one excludes the two QSOs. Second, as noted in section 3.4, all sources are relatively compact and smaller than the entrance aperture of the PHT-S spectrograph, except NGC 1097. Third, the difference in PAH EW persists if one considers individual pairs of Sf1s and Sf2s with similar redshifts. For instance, the Sf1 NGC~4051 and the Sf2 NGC~5033 have very similar redshifts (0.00242 versus 0.00292, respectively), but the 7.7 $`\mu `$m PAH band EW of the latter (2.394 $`\mu `$m) is 5.6 times larger than that of the former (0.428 $`\mu `$m).
## 5 Implications for unified schemes
The simplest explanation of the above observational results is that the MIR continuum is depressed in Sf2s relative to Sf1s. Indeed, a depressed continuum accounts for both the larger PAH EW and the reduced continuum luminosity of type 2 AGNs. This interpretation is consistent with the weak anti-correlation that exists between the 7.7 $`\mu `$m PAH EW and the 7 $`\mu `$m continuum luminosity, significant at the $`8\times 10^{-5}`$ confidence level (Kendall rank order coefficient = -0.358). The most likely reason for the continuum depression is obscuration by dust, as postulated by unification schemes. The reddening law is essentially grey from 3 to 11 $`\mu `$m and does not alter significantly the shape of the continuum. Hence, the obscuration hypothesis is consistent with Sf1s and Sf2s having the same spectral index on average. On the other hand, we have seen that the PAH luminosity is the same in Sf1s and Sf2s. This further requires that the region responsible for the PAH emission be located outside the screen that absorbs the MIR (and optical-UV) continuum. In other words, the screen must be located in the immediate vicinity of the nucleus, such that it absorbs the 7 $`\mu `$m continuum but leaves the PAH emission unaffected. Again, this is consistent with unification schemes and with the molecular torus hypothesis. We have made here the implicit assumption that the MIR continuum observed in Sf1s is of nuclear origin. There is strong observational support for this hypothesis:
* Up to at least 3.4 $`\mu `$m, the near-IR continuum of AGNs is known to be variable, indicating a compact emission source (e.g. Neugebauer et al. neugebauer (1989)). Furthermore, in F~9 (Clavel Wamsteker & Glass clavel (1989)), GQ~COMAE (Sitko, Sitko & Siemiginowska sitko (1993)), NGC~1566 (Baribaud et al. baribaud (1992)) and NGC~3783 (Glass glass (1992)), the variations follow closely those of the optical and ultraviolet continuum with a delay of a few months to $``$ one year, which is commensurate with the photon travel time to the dust sublimation radius. Such observations are a strong indication that the 1.25–3.4 $`\mu `$m flux of radio-quiet AGNs originates from thermal emission by dust grains located in the immediate vicinity of the central engine, presumably at the torus inner edge.
* AGNs have warm 12 to 100 $`\mu `$m colors, clearly different from those of “normal” non-active galaxies (e.g. Miley Neugebauer & Soifer miley (1985)), which indicates that the 12 $`\mu `$m emission is associated with the nuclear activity.
In the following, we have therefore assumed that the MIR continuum of Sf1s originates from thermal emission by dust grains located at the inner edge of the molecular torus.
This leaves the origin of the PAH emission unclear. PAH emission is ubiquitous throughout the interstellar medium of our galaxy, in particular in star forming regions and reflection nebulae (Verstraete et al. verstraete (1996); Cesarsky et al. 1996b ; Cesarsky et al 1996c ) and in galactic cirrus (Mattila et al. mattila (1996)). PAH emission bands are also conspicuous in normal late type galaxies (Xu et al. xu (1998); Boulade et al. boulade (1996); Vigroux et al. vigroux (1996); Metcalfe et al. metcalfe (1996); Acosta-Pulido et al. acosta (1996)). Hence, the PAH emission bands seen in AGN spectra most probably arise in the general ISM of the underlying galaxies, more specifically in their bulges, given that nearly all sources have low redshifts and are unresolved at the 4–5″ resolution of ISOCAM.
To check this hypothesis, we overlay in Fig. 5 the typical Sf2 spectrum of NGC~5953 on top of the spectrum of the normal SB galaxy NGC~701 (scaled by a factor 0.5). The two spectra are virtually indistinguishable. Moreover, the 7.7 $`\mu `$m PAH EW in NGC 701, 5.88$`\pm 0.05\mu `$m, as well as the luminosity, $`\mathrm{log}\mathrm{L}_{7.7}=\mathrm{\hspace{0.17em}42.048}\pm 0.009\mathrm{erg}\mathrm{s}^{-1}`$, are well within the range spanned by Sf2s. This confirms that the PAH emission is not related to the activity in the nucleus. Note also that the continua of the two objects match perfectly. This further implies that, at least in NGC 5953, the nuclear continuum is completely extinguished and the faint residual we observe originates from outside the active nucleus. The origin of this faint continuum in normal galaxies is currently a matter of debate; however, the fact that it appears to correlate tightly with the PAH features possibly indicates a common origin (Xu et al. xu (1998)). Our results are at odds with those of Malkan, Gorjian & Tam (malkan (1998)), who conclude that Sf1s and Sf2s differ in galaxy type and infer that much of the extinction occurs in the ISM, at larger radii than the molecular torus. We note, however, that the statistical evidence for a difference in spectral types between the hosts of Sf1s and Sf2s in the Malkan et al. sample is marginal.
## 6 The obscuration of Seyfert 2 galaxies
Since the PAH emission is not obscured by the torus while the MIR continuum of Sf2s is, the PAH EW is a de-facto indicator of the nuclear extinction. According to unification schemes, the extinction through which we see the nucleus depends on the orientation of the torus with respect to our line of sight. One therefore expects a spread in Sf2 PAH EWs, as is indeed observed (Fig. 3). The small EW sources correspond to the low inclination case, where our line of sight intercepts only the upper layers of the torus, while the large EW Sf2s are the sources seen close to edge-on, where our line of sight to the nucleus intercepts the full extent of the torus. In such objects, like NGC 5953, the MIR nuclear emission is completely absorbed and we are left with the spectrum of a normal galactic bulge, such as that of NGC 701. According to unification schemes, the MIR continuum of Sf1s should not suffer from extinction, since we have a direct view of the torus inner edge. We can therefore use the ratio R of the average PAH EW in Sf1s and in Sf2s to estimate the average MIR extinction of type 2 AGNs. This ratio for the strongest PAH band at 7.7 $`\mu `$m is $`R=\mathrm{\hspace{0.17em}5.4}\pm 3.7`$, where the error quoted reflects the $`r.m.s.`$ dispersion of Sf2 EWs. This implies that the continuum of Sf2s suffers on average from $`1.83\pm 0.74`$ magnitudes of extinction at 7.7 $`\mu `$m. This translates into a visual extinction $`\mathrm{A}_\mathrm{v}=\mathrm{\hspace{0.17em}92}\pm 37`$ magnitudes (Rieke and Lebofsky rieke (1985)). For a normal gas to dust ratio, this corresponds to an average X-ray absorbing column $`\mathrm{N}_\mathrm{H}=\mathrm{\hspace{0.17em}2.0}\pm 0.8\times 10^{23}\mathrm{cm}^{-2}`$ (Gorenstein gorenstein (1975)). The latter is in good agreement with the mean Sf2 absorbing column as measured directly from X-ray data by Mulchaey et al. (mulchaey (1992)), $`\mathrm{N}_\mathrm{H}=\mathrm{\hspace{0.17em}1.6}_{-1.3}^{+8.6}\times 10^{23}\mathrm{cm}^{-2}`$, or Smith and Done (smith (1996)), $`\mathrm{N}_\mathrm{H}=\mathrm{\hspace{0.17em}1.0}\pm 1.3\times 10^{23}\mathrm{cm}^{-2}`$. This excellent agreement should be seen more as a consistency check of our assumptions and a validation of unified schemes than as an accurate determination of the torus optical depth. First, as mentioned earlier, it represents a mean value of the extinction averaged over the range of viewing angles, from grazing incidence to edge-on. Second, there is an intrinsic spread in the luminosity of the PAH emission, as illustrated by Fig. 4, which introduces uncertainties into this estimate. Third, in sources like NGC 5953, where the MIR continuum is totally absorbed, the PAH EW only provides a lower limit to the true extinction.
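The chain of conversions in this paragraph can be reproduced in a few lines (Python); the conversion factors below are those implied by the text's own numbers rather than independently derived:

```python
import math

# Extinction chain: EW ratio -> A(7.7um) -> A_V -> N_H. The factors are those
# implied by the quoted values: A(7.7um)/A_V ~ 0.02 (Rieke & Lebofsky 1985)
# and N_H/A_V ~ 2.2e21 cm^-2 per magnitude (Gorenstein 1975).
R_ew = 5.4                          # mean Sf1-to-Sf2 7.7um PAH EW ratio
a_77 = 2.5 * math.log10(R_ew)       # ~1.83 mag of extinction at 7.7 um
a_v = a_77 / 0.02                   # ~92 visual magnitudes
n_h = 2.2e21 * a_v                  # ~2.0e23 cm^-2
print(f"A_7.7 = {a_77:.2f} mag, A_V = {a_v:.0f} mag, N_H = {n_h:.1e} cm^-2")
```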
### 6.1 The highly obscured, large PAH EW Sf2s
From the 29 Sf2s in our sample, 4 have $`\mathrm{EW}_{7.7}\ge \mathrm{\hspace{0.17em}5.5}\mu \mathrm{m}`$, in the range of a normal galaxy. This suggests that about 14 % of Sf2s suffer from extinction in excess of 125 visual magnitudes, sufficient to block out the mid-IR continuum. These extreme Sf2s are presumably those where the torus symmetry axis lies in the plane of the sky. The 4 galaxies are IC~4397, ESO~137-G34, NGC~5728 and NGC 5953. If our conclusions are correct, such extreme Sf2s should be heavily absorbed in the X-rays, with hydrogen column densities in excess of $`3\times 10^{23}\mathrm{cm}^{-2}`$. Since for such cases the PAH EW ratio only provides a lower limit to the extinction, these galaxies could even be Compton thick (i.e. $`\mathrm{N}_\mathrm{H}\ge 10^{24}\mathrm{cm}^{-2}`$) and opaque to X-rays below $``$10 keV. In order to verify this prediction, we have searched the literature and the NED for X-ray data on these sources. IC 4397 and ESO 137-G34 have no entry in either the ROSAT database or the EINSTEIN catalog of IPC sources (Burstein et al. burstein (1997)), presumably because they are too faint to have been detected. NGC 5728 is not in the ROSAT database but appears in the IPC catalog with a 0.5–4.5 keV luminosity $`\mathrm{log}L_x=\mathrm{\hspace{0.17em}40.92}`$, in the range of luminous non-active galaxies. NGC 5953 coincides with the ROSAT HRI source 1RXH J153432.5+151137. Its 0.5–2 keV luminosity, $`\mathrm{log}L_x=\mathrm{\hspace{0.17em}39.4}`$, can be accounted for entirely by integrated stellar emission. It therefore appears that our predictions are borne out by existing X-ray data and that these 4 large PAH EW Sf2s are indeed X-ray faint and probably Compton thick. The next three largest PAH EW Sf2s are NGC~1667, Mrk~673 and NGC~5674. They have 7.7 $`\mu `$m PAH EWs of the order of 4.6 $`\mu `$m, smaller than that of the normal galaxy NGC 701, but still a factor of $``$ 10 larger than those of Sf1s. NGC 1667 was observed with ASCA by Turner et al. (1997a ), who report luminosities of $`0.8\times 10^{40}\mathrm{erg}\mathrm{s}^{-1}`$ and $`2.6\times 10^{40}\mathrm{erg}\mathrm{s}^{-1}`$ over the 2–10 keV and 0.5–2 keV ranges, respectively, well within the range of inactive galaxies. Furthermore, according to these authors,
68 % of the X-ray flux in this galaxy can be accounted for by a thermal component due to the integrated stellar emission from a starburst. Mrk 673 appears in Burstein et al. (burstein (1997)) with an upper limit $`\mathrm{log}\mathrm{L}_\mathrm{x}\mathrm{\hspace{0.17em}41.2}`$ to its 0.5–4.5 keV luminosity. Mrk 673 also coincides with the ROSAT PSPC source WGA J1417.3+2651 with a 0.5–2 keV luminosity $`\mathrm{log}\mathrm{L}_\mathrm{x}=40.8`$, again consistent with a mostly stellar origin. Finally, NGC 5674 may represent the first Sf2 where one starts seeing through the torus, since its X-ray absorbing column could be measured with Ginga, $`0.8\times 10^{23}\mathrm{cm}^2`$ (Smith & Done smith (1996)). To summarize, all large PAH EW Sf2s are faint and heavily absorbed in the X-rays, as predicted by unification schemes.
### 6.2 The low obscuration, small PAH EW Sf2s
Ten Sf2 galaxies have 7.7 $`\mu `$m PAH EW $`\le \mathrm{\hspace{0.17em}2.0}\mu `$m, in the range occupied by Sf1s (Fig. 3). The spectrum of Mrk 3 is shown in Fig. 6 as an example of such a small PAH EW Sf2. Among these ten Sf2s, four have been observed in spectropolarimetry and/or near IR spectroscopy (Heisler et al. heisler (1997); Young et al. young (1996); Veilleux Goodrich & Hill veilleux (1997)). All four galaxies display broad lines in polarized light. These are Mrk~3, NGC~7674, IRAS~05189-2524 and NGC~4388. Conversely, none of the three Sf2s with 7.7 $`\mu `$m PAH EW $`>\mathrm{\hspace{0.17em}1.6}\mu `$m for which spectropolarimetric or IR spectroscopic data exist (Mrk~266, NGC~5728, NGC~1097) exhibits “hidden” broad lines. This confirms the finding of Heisler et al. (heisler (1997)) that the Sf2s which have a “hidden” BLR (i.e. seen in spectropolarimetry or in direct IR spectroscopy) are those where our line-of-sight grazes the torus upper layer, such that we have a direct view of the reflecting mirror but not of the BLR. As discussed previously, the MIR continuum most likely originates from thermal emission by hot dust grains located on the inner wall of the torus. Hence, the fact that the “hidden” BLR Sf2s are the same sources which exhibit a Sf1-like MIR continuum further constrains the mirror and the torus inner wall to be in neighbouring regions. It is in fact conceivable that the mirror is the wall itself, or a wind of hot electrons boiled off the torus surface. It is interesting to note that these “hidden” BLR Sf2s remain heavily absorbed in the X-rays while their MIR continuum apparently does not suffer from a significant amount of extinction. For instance, Mrk 3 has an X-ray absorbing column $`\mathrm{N}_\mathrm{H}=\mathrm{\hspace{0.17em}10}^{24}\mathrm{cm}^{-2}`$ (Turner et al 1997b ), corresponding to a 7 $`\mu `$m extinction of 9 magnitudes, amply sufficient to block out the MIR continuum. Nevertheless, the MIR continuum of Mrk 3 (see Fig. 6) is hardly absorbed. This indicates that our line-of-sight to the X-ray source is different from our line-of-sight to the MIR source, the former intercepting a much larger fraction of the torus than the latter. As predicted by unified schemes, this confirms that the X-ray source (like the BLR and the disk) is embedded further down the throat of the torus than the mirror and the wall emitting the MIR continuum.
As a final remark, we note that the MIR spectral characteristics do not allow one to distinguish between type 1.8, 1.9 and type 2 objects. For instance, the sub-class of small PAH EW Sf2s includes sources such as Mrk~3 and NGC~4507, which are bona-fide type 2 Seyferts. Conversely, NGC~5674 and Mrk~334 are classified as type 1.9 and 1.8 Seyfert galaxies, whereas their 7.7 $`\mu `$m PAH band EWs are as large as 4.6 $`\mu `$m and 2.6 $`\mu `$m, respectively. Similarly, we do not find significant differences between Broad-Line Radio-Galaxies (BLRG) and genuine radio-quiet Sf1s. Indeed, 3C~382 and 3C~390.3 have 7.7 $`\mu `$m PAH EWs of 0.30 and 0.22 $`\mu `$m respectively, close to the Sf1 average ($`0.53\pm 0.47\mu `$m). Last, two sources in our sample qualify as “Narrow-Line Seyfert 1” (NLS1) galaxies, Ark~564 and Mrk~507 (Boller, Brandt & Fink boller (1996)). With 7.7 $`\mu `$m PAH EWs of 0.26 and 0.59 $`\mu `$m respectively, their MIR spectra are indistinguishable from those of “normal” Sf1s.
Our scheme seems to be supported by the few published ISO spectra of Sf2 galaxies. The Circinus galaxy displays broad $`\mathrm{H}_\alpha `$ emission (Oliva et al. oliva (1998)) in polarized light while its MIR spectrum reveals a strong continuum with relatively weak ($`\mathrm{EW}\mathrm{\hspace{0.17em}2}\mu \mathrm{m}`$) PAH emission bands (Moorwood et al. moorwood (1996)). NGC 1068, the prototype Sf2 galaxy with a “hidden BLR” also has weak PAH emission features (Genzel genzel (1998)). Finally, IRAS 05189-2524 (Watson et al. watson (1999)) has moderate strength PAH emission bands (EW $`\mathrm{\hspace{0.17em}1}\mu `$m) together with polarized broad-lines (Young et al. young (1996)).
## 7 The silicate 9.7 micron feature
In individual spectra, there are hints of a spectral curvature around 9.7 $`\mu `$m, the expected wavelength of the silicate dust feature. However, the local continuum in the spectral range $`\lambda \ge \mathrm{\hspace{0.17em}9}\mu `$m is difficult to define due to the proximity of the 8.6 $`\mu `$m and 11.3 $`\mu `$m PAH emission bands. Moreover, the $`S/N`$ ratio decreases rapidly as one approaches the long wavelength end of the ISOPHOT-S array. Finally, the silicate feature itself appears to have a complex profile, with a narrow absorption (possibly of galactic origin) superimposed on a broader emission line. Attempts at measuring its strength usually yield insignificant detections. To overcome this limitation and increase the signal-to-noise ratio, we have computed the average MIR spectrum of a Sf1 and a Sf2 galaxy separately. Fig. 7 shows the mean Sf1 spectrum obtained by normalizing and averaging the rest wavelength spectra of all 20 type $`\le \mathrm{\hspace{0.17em}1.5}`$ AGNs with a signal-to-noise ratio per pixel larger than 7, while Fig. 8 displays the average of the 23 Sf2 spectra with $`\frac{S}{N}\ge \mathrm{\hspace{0.17em}3}`$. In the mean Sf1 spectrum, the silicate 9.7 $`\mu `$m feature appears in emission with an equivalent width $`\mathrm{EW}_{9.7}=\mathrm{\hspace{0.17em}0.25}\pm 0.01\mu \mathrm{m}`$. This immediately rules out models with very large torus optical depths. In the model of Pier and Krolik (pier (1992)), for instance, the strength of the silicate feature is calculated as a function of inclination $`i`$ and of the vertical and radial Thomson optical depths, $`\tau _z`$ and $`\tau _r`$ respectively. Reading from their figure 8, models with $`\tau _z\gg \mathrm{\hspace{0.17em}1}`$ and/or $`\tau _r\gg \mathrm{\hspace{0.17em}1}`$ are ruled out, as they predict the silicate feature in absorption. For an average Sf1 inclination $`\mathrm{cos}i=\mathrm{\hspace{0.17em}0.8}`$, the best fit to $`\mathrm{EW}_{9.7}=\mathrm{\hspace{0.17em}0.254}\pm 0.008\mu \mathrm{m}`$ suggests $`\tau _r\simeq \mathrm{\hspace{0.17em}1}`$ and $`0.1\lesssim \tau _z\lesssim \mathrm{\hspace{0.17em}1}`$. A unit Thomson optical depth corresponds to a column density $`\mathrm{N}_\mathrm{H}\simeq \mathrm{\hspace{0.17em}10}^{24}\mathrm{cm}^{-2}`$. While these figures are somewhat model dependent, it is reassuring that they agree with our independent estimate of $`\mathrm{N}_\mathrm{H}`$ based on the PAH EW ratio. The PAH bands are so strong in Sf2s that placing the continuum at 9.7 $`\mu `$m becomes a subjective decision. The mean Sf2 spectrum (Fig. 8) shows a weak maximum at 9.7 $`\mu `$m and a shallow minimum near 10 $`\mu `$m. In the absence of longer wavelength data, one can only set a provisional upper limit of 0.32 $`\mu `$m to the silicate EW in Sf2s, whether in absorption or in emission. It must be emphasized that the limitation is not the S/N but the uncertainty in placing the local continuum.
## 8 Summary and conclusions
A sample of 57 AGNs and one normal SB galaxy (NGC 701) were observed with the ISOPHOT-S spectrometer and the ISOCAM imaging camera. The sample is about equally divided into Sf1s (28) and Sf2s (29), where we define Sf1s as all objects of type $`\le 1.5`$ and Sf2s those whose type is $`>1.5`$. The observations show that:
1. Forty-four of the 57 AGNs in the sample appear unresolved at the 4–5$`\mathrm{}`$ resolution of ISOCAM. Of the 13 resolved sources, 12 are sufficiently compact to ensure that all of the flux falls into the $`24\mathrm{}\times \mathrm{\hspace{0.17em}24}\mathrm{}`$ ISOPHOT-S spectrograph aperture. Moreover, even in these resolved sources, nuclear/bulge emission contributes at least three quarters of the light recorded with ISOPHOT-S.
2. The spectrum of Sf1s is characterized by a strong continuum and weak Polycyclic Aromatic Hydrocarbon (PAH) emission bands at 3.3, 6.2, 7.7 and 8.6 $`\mu `$m. The continuum is well described by a power-law of average index $`\alpha =0.84\pm 0.24`$.
3. In sharp contrast with Sf1s, Sf2s generally have a weak continuum with very strong PAH emission bands.
4. The distribution of PAH equivalent widths (EW) is statistically different in Sf1s and Sf2s. The average EW for the strongest band at 7.7 $`\mu `$m is $`0.53\pm 0.47\mu `$m in Sf1s versus $`2.86\pm 1.95\mu `$m in Sf2s. Moreover, the distribution of PAH EW in Sf1s is confined to values smaller than 2.0 $`\mu `$m whereas that of Sf2s extends from 0.24 $`\mu `$m up to 7.2 $`\mu `$m.
5. There are however no statistical differences in the PAH luminosity distribution of Sf1s and Sf2s.
6. The 7 $`\mu `$m continuum is on the average a factor $``$ 8 less luminous in Sf2s than in Sf1s.
7. The PAH emission is not related to the activity in the nucleus and originates in the interstellar medium of the underlying galactic bulge. The PAH EW can therefore be used as a nuclear reddening indicator.
8. The above results are consistent with unification schemes and imply that the MIR continuum of Sf2s suffers from an average extinction of $`92\pm 37`$ visual magnitudes. This corresponds to an average hydrogen absorbing column $`\mathrm{N}_\mathrm{H}=\mathrm{\hspace{0.17em}2.0}\pm 0.8\times 10^{23}\mathrm{cm}^{-2}`$, in good agreement with X-ray measurements. The large dispersion in the Sf2 EWs is consistent with the expected spread in viewing angles.
9. The spectrum of Sf2s whose 7.7 $`\mu `$m PAH band EW exceeds 5 $`\mu `$m is indistinguishable from that of a normal non-active galaxy, implying that the MIR continuum is completely obscured in these sources ($`\mathrm{A}_\mathrm{v}>\mathrm{\hspace{0.17em}125}`$ magnitudes). Without exception, these Sf2s are also heavily absorbed in the X-rays and probably “Compton thick”. These large PAH EW Sf2s are presumably those where the torus is seen edge-on.
10. Ten Sf2s have 7.7 $`\mu `$m PAH EW $`\le \mathrm{\hspace{0.17em}2.0}\mu `$m, in the range of Sf1s. Of these ten, four have been observed in spectropolarimetry and all four display “hidden” broad lines. Conversely, none of the three Sf2s with PAH EW $`>\mathrm{\hspace{0.17em}2}\mu `$m which have been observed in spectropolarimetry displays “hidden” broad lines. This confirms the finding of Heisler et al. (heisler (1997)) that the Sf2s with a “hidden” BLR are those for which our line-of-sight grazes the upper surface of the torus. In these sources, we have a direct view of both the reflecting mirror and the torus inner wall responsible for the MIR continuum. Thus, our observations strongly favour a model where the “mirror” and the torus inner wall are spatially co-located. It is in fact conceivable that the mirror is the torus inner wall itself, or a wind of hot electrons boiled off its surface by radiation pressure.
11. The silicate 9.7 $`\mu `$m feature appears weakly in emission in Sf1s. This implies that the torus cannot be extremely thick, and the average silicate EW ($`0.25\pm 0.01\mu `$m) suggests that the total hydrogen column integrated along the torus vertical axis lies in the range $`10^{23}\le \mathrm{N}_\mathrm{H}\le \mathrm{\hspace{0.17em}10}^{24}\mathrm{cm}^{-2}`$, consistent with our previous estimate based on the Sf2 PAH EW.
12. As far as their MIR properties are concerned, AGNs of intermediate types 1.8 and 1.9 are indistinguishable from genuine Sf2s, whereas Narrow Line Seyfert 1 (NLS1) galaxies and Broad-Line Radio-Galaxies (BLRG) behave as normal Sf1s.
The sketch outlined in this paper makes specific predictions. First, Sf2s which have 7.7 $`\mu `$m PAH EWs in excess of $`\mathrm{\hspace{0.17em}5}\mu `$m should never exhibit broad lines in spectropolarimetry. Second, these sources should always be heavily absorbed in the X-rays, possibly up to 10 keV. Third, Sf2s whose PAH EW $`\le \mathrm{\hspace{0.17em}2}\mu `$m should exhibit broad lines when observed in spectropolarimetry and/or direct IR spectroscopy. This last prediction seems to be borne out by the few existing ISO observations of Sf2s with a “hidden” BLR.
# Quantum chiral phases in frustrated easy-plane spin chains
## Abstract
The phase diagram of an antiferromagnetic spin-$`S`$ chain with XY-type anisotropy and a frustrating next-nearest-neighbor interaction is studied in the limit of large integer $`S`$ with the help of a field-theoretical approach. It is shown that the existence of gapless and gapped chiral phases found in recent numerical studies \[M.Kaburagi et al., J. Phys. Soc. Jpn. 68, 3185 (1999), T.Hikihara et al., J. Phys. Soc. Jpn. 69, 259 (2000)\] is not specific to $`S=1`$, but is rather a generic large-$`S`$ feature. Estimates for the corresponding transition boundaries are obtained, and a sketch of the typical phase diagram is presented. It is also shown that frustration stabilizes the Haldane phase against the variation of the anisotropy.
In the past few years, the problem of possible nontrivial ordering in frustrated quantum spin chains has attracted considerable attention . Nersesyan et al. predicted that in an anisotropic (easy-plane) antiferromagnetic $`S=\frac{1}{2}`$ chain with sufficiently strong frustrating next-nearest-neighbor (NNN) coupling, a new phase with broken parity appears, which is characterized by a nonzero value of the chirality
$$\kappa _n^z\equiv (𝐒_n\times 𝐒_{n+1})_z;$$
(1)
note that the definition (1) differs from the other, so-called scalar chirality $`\stackrel{~}{\kappa }\equiv 𝐒_{n-1}(𝐒_n\times 𝐒_{n+1})`$, which is often discussed in the context of isotropic spin chains . This prediction was made on the basis of the bosonization technique combined with a subsequent mean-field-type decoupling procedure. A similar conclusion can be reached by means of a mean-field decoupling of the quartic terms in the Jordan-Wigner transformed fermionic version of the Hamiltonian, in the spirit of Haldane's treatment of the spontaneously dimerized phase . Up to now, however, this prediction for $`S=\frac{1}{2}`$ has not been confirmed in numerical studies . On the other hand, two different types of chiral ordered phases, gapped and gapless, were found numerically in the $`S=1`$ easy-plane frustrated chain . At present, to our knowledge, there is no theoretical analysis addressing the problem of chiral ordered phases for the $`S\ge 1`$ case.
The aim of the present Letter is to study the generic large-$`S`$ behavior of antiferromagnetic easy-plane integer-$`S`$ chain with frustrating NNN interaction, described by the following Hamiltonian:
$$\widehat{H}=J\sum _n\{(𝐒_n𝐒_{n+1})_\mathrm{\Delta }+j(𝐒_n𝐒_{n+2})_\mathrm{\Delta }+D(S_n^z)^2\},$$
(2)
Here $`(𝐒_1𝐒_2)_\mathrm{\Delta }\equiv S_1^xS_2^x+S_1^yS_2^y+\mathrm{\Delta }S_1^zS_2^z`$, $`𝐒_n`$ denotes the spin-$`S`$ operator at the $`n`$-th site, the lattice spacing $`a`$ has been set to unity, $`J>0`$ is the nearest-neighbor exchange constant, $`j>0`$ is the relative strength of the NNN coupling, and $`0<\mathrm{\Delta }<1`$ and $`D>0`$ are respectively the dipolar (inter-ion) and the single-ion anisotropies.
We argue that the existence of the gapless and gapped chiral phases found in is not specific to $`S=1`$, but is rather a generic large-$`S`$ feature. Estimates for the corresponding transition boundaries are obtained, and a sketch of the typical phase diagram is presented. As a side result, we also show that the domain of stability of the Haldane phase against variations of the anisotropy grows when the frustrating coupling $`j`$ is increased, in accordance with the numerical results .
We use the well-known technique of spin coherent states , which effectively replaces the spin operators by classical vectors $`(S_n^+,S_n^z)=S(\mathrm{sin}\theta _ne^{i\phi _n},\mathrm{cos}\theta _n)`$ and incorporates the quantum features by means of the path integral over all space-time configurations of $`(\theta ,\phi )`$. The classical ground state of (2) is well known: the spins always lie in the easy plane $`(xy)`$, i.e. $`\theta =\frac{\pi }{2}`$; for $`j<\frac{1}{4}`$ the alignment of spins is antiferromagnetic, $`\phi _n=\phi _0+\pi n`$, while for $`j>\frac{1}{4}`$ a helical structure with incommensurate magnetic order develops, $`\phi _n=\phi _0\pm (\pi -\lambda _0)n`$, where $`\lambda _0=\mathrm{arccos}(1/4j)`$, and the $`\pm `$ signs above correspond to the two possible chiralities of the helix.
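This classical result is easy to verify numerically; below is a minimal sketch (Python), minimising the planar-spiral energy per spin $`e(q)=\mathrm{cos}q+j\mathrm{cos}2q`$ (in units of $`JS^2`$) for an illustrative $`j=0.5`$:

```python
import numpy as np

# Quick numerical check of the classical pitch: for a planar spiral
# phi_n = q*n, the energy per spin (in units of J S^2) is
# e(q) = cos(q) + j*cos(2q), minimised for j > 1/4 at
# q = pi - arccos(1/(4j)). The value j = 0.5 is illustrative.
j = 0.5
q = np.linspace(0.0, np.pi, 200001)
e = np.cos(q) + j * np.cos(2.0 * q)
q_numeric = q[np.argmin(e)]
q_analytic = np.pi - np.arccos(1.0 / (4.0 * j))
print(q_numeric, q_analytic)   # both ~2.0944 (= 2*pi/3)
```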
In one dimension the long-range helical ordering is impossible, since it would imply a spontaneous breaking of the continuous in-plane symmetry; in contrast, the existence of a finite chirality $`\kappa _n^z=\mathrm{sin}(\phi _{n+1}-\phi _n)`$ is not prohibited by the Mermin-Wagner theorem.
The classical isotropic system has, for $`j>\frac{1}{4}`$, three massless modes with wave vectors $`q=0`$, $`q=\pm \delta `$, where $`\delta \equiv \pi -\lambda _0`$ is the pitch of the helix. The effective field theory for the isotropic case is the so-called $`SO(3)`$ nonlinear sigma model, with the order parameter described by the local rotation matrix . The physics becomes simpler in the presence of anisotropy, since the modes with $`q=\pm \delta `$ acquire a finite mass. Our starting point will be the following ansatz for the angular variables $`\theta ,\phi `$:
$`\theta _n`$ $`=`$ $`\pi /2+p_n+(\xi _ne^{i\delta n}+\xi _n^{*}e^{-i\delta n})/2`$ (3)
$`\phi _n`$ $`=`$ $`\pi n+\psi _n+(w_ne^{i\delta n}+w_n^{*}e^{-i\delta n})/2.`$ (4)
We assume that the fluctuations $`p`$, $`\xi `$, $`w`$ are small and that they are smooth functions of $`n`$, slowly varying over the characteristic distance $`l_0=2\pi /\lambda _0`$; the same property is assumed for the function $`\lambda _n\equiv \psi _{n+1}-\psi _n`$, which can be viewed as a dual variable to $`\psi `$ .
After passing to the continuum in the effective Lagrangian $`L=\int \mathcal{L}dx=\mathrm{\hbar }S\sum _n(1-\mathrm{cos}\theta _n)\partial _t\phi _n-H(\theta ,\phi )`$ we average over $`l_0`$, making the oscillating terms disappear. The resulting expression for $`\mathcal{L}`$ is
$`\mathcal{L}`$ $`=`$ $`\mathrm{\hbar }S\left\{p(\partial _t\psi )(1-|\xi |^2/4)+[w^{*}(\partial _t\xi )+w(\partial _t\xi ^{*})]/4\right\}`$ (5)
$``$ $`-JS^2\left\{V[\lambda ](1-|\xi |^2/2)+A_0p^2+(A_1/2)|w|^2\right\}`$ (6)
$``$ $`-(JS^2/4)\left\{M|\xi |^2+F\mathrm{\Delta }|\partial _x\xi |^2\right\},`$ (7)
where the following notation has been used:
$`V[\lambda ]=j\mathrm{cos}2\lambda -\mathrm{cos}\lambda -U_0+(j/2)\mathrm{cos}2\lambda (\partial _x\lambda )^2,`$ (8)
$`U_0=j\mathrm{cos}2\lambda _0-\mathrm{cos}\lambda _0,F=\mathrm{cos}\lambda _0-4j\mathrm{cos}2\lambda _0,`$ (9)
$`A_0=D+\mathrm{\Delta }(1+j)-U_0,M=2[D-(1-\mathrm{\Delta })U_0],`$ (10)
$`A_1=\mathrm{cos}^2\lambda _0+j\mathrm{cos}^22\lambda _0-U_0.`$ (11)
Integrating out the “slave” fields $`p\approx (\mathrm{\hbar }/2JSA_0)\partial _t\psi `$, $`w\approx (\mathrm{\hbar }/2JSA_1)\partial _t\xi `$, and passing to the imaginary time $`y=ict`$, $`c=JS(2A_0)^{1/2}/\mathrm{\hbar }`$, we obtain the effective Euclidean action
$`𝒜_E`$ $`=`$ $`{\displaystyle \frac{1}{g_0}}{\displaystyle \int 𝑑x𝑑y\left\{\frac{1}{2}(\partial _y\psi )^2+V[\lambda ]\right\}\left(1-\frac{1}{2}|\xi |^2\right)}`$ (12)
$`+`$ $`{\displaystyle \frac{\stackrel{~}{E}}{4g_0}}{\displaystyle \int d^2X\left\{(\partial _\mu \xi ^{*})(\partial _\mu \xi )+m_0^2|\xi |^2\right\}},`$ (13)
where $`(X_1,X_2)=(x,ic^{\prime }t)`$, $`c^{\prime }=JS(A_1F\mathrm{\Delta })^{1/2}/\mathrm{\hbar }`$, $`\partial _\mu \equiv \partial /\partial X_\mu `$, and the constants $`g_0`$, $`\stackrel{~}{E}`$, $`m_0`$ are given by
$$g_0=\frac{\sqrt{2A_0}}{S},\stackrel{~}{E}=\left(\frac{A_0F\mathrm{\Delta }}{A_1}\right)^{1/2},m_0^2=\frac{M}{F\mathrm{\Delta }}.$$
(14)
Further, integrating out the massive $`\xi `$ yields the effective action for $`\psi `$ only, with a renormalized coupling $`g_{\mathrm{eff}}`$:
$`𝒜_E`$ $`=`$ $`{\displaystyle \frac{1}{g_{\mathrm{eff}}}}{\displaystyle \int 𝑑x𝑑y\left\{\frac{1}{2}(\partial _y\psi )^2+V[\lambda ]\right\}},`$ (15)
$`g_{\mathrm{eff}}`$ $`=`$ $`g_0/\left(1-{\displaystyle \frac{g_0}{2\pi \stackrel{~}{E}}}\mathrm{ln}(1+\mathrm{\Lambda }^2/m_0^2)\right),`$ (16)
where $`\mathrm{\Lambda }=\pi `$ is the lattice cutoff. A similar derivation may be carried out for $`j<1/4`$: starting from the ansatz of the type (3) with real $`\xi `$, $`w`$ and $`\delta =\pi `$, one arrives at the same result (15), but with $`\lambda _0`$ set to zero in all quantities defined in (8), (14).
We have mapped the initial quantum 1D model onto a 2D classical XY helimagnet at the effective “temperature” $`g_{\mathrm{eff}}`$, described by the effective action (15). The validity of this mapping is determined by the requirements $`g_0\ll 1`$, $`|w|\sim |\xi |\ll 1`$, $`p\ll 1`$, which translate into
$$S\gg (2A_0)^{1/2},\qquad \mathrm{\Lambda }e^{-\pi \stackrel{~}{E}/g_0}\ll m_0\ll Sg_0/\stackrel{~}{E}$$
(17)
The first inequality above means that we are not allowed to consider large $`j\gtrsim S^2/2`$, and the second one requires the anisotropy to be within a certain range. We will be mainly interested in the behavior of (15) for $`j`$ close to the Lifshitz point $`j_L=\frac{1}{4}`$, in which case the condition on the anisotropy transforms into
$$\zeta \pi ^2\epsilon e^{-2\pi S\sqrt{\zeta \epsilon }}\ll 3\mu /8\ll 1,$$
(18)
where $`\mu \equiv 1-\mathrm{\Delta }+4D/3`$, $`\epsilon \equiv |j-j_L|`$, and the constant $`\zeta =1`$ for $`j<j_L`$ and $`\zeta =2`$ for $`j>j_L`$, respectively.
The model (15) possesses two basic types of topological defects : (i) domain walls connecting regions of opposite chirality, and (ii) vortices, existing inside the domains with certain chirality and destroying the long-range helical magnetic order (only quasi-long-range helical order is possible at finite $`g_{\mathrm{eff}}`$). Thus one may expect two phase transitions: an Ising-type transition (“freezing” of the domain walls) which corresponds to the onset of the chiral order, and the Kosterlitz-Thouless (KT) transition (vortex unbinding) corresponding to the transition from the gapless chiral phase with algebraically decaying helical magnetic correlations $`\langle \mathrm{cos}\,\psi (x)\,\mathrm{cos}\,\psi (0)\rangle \propto (1/x)^{g_{\mathrm{eff}}/2\pi }`$ to the gapped chiral phase with only short-range helical order (but still with the long-range chiral order $`\langle \kappa ^z(x)\kappa ^z(0)\rangle \to \text{const}`$, $`x\to \infty `$). For the two transitions to be possible, one has to assume (later this assumption will be checked self-consistently) that the critical temperature of the Ising-type transition is higher than the corresponding temperature of the KT transition.
Inside the phase with broken chiral symmetry one can set $`\psi =\pm \lambda _0x+\varphi _\pm `$; then $`\lambda \simeq \pm \lambda _0+\partial _x\varphi _\pm `$ and $`V[\lambda ]\simeq \frac{1}{2}F(\partial _x\varphi _\pm )^2`$. One then obtains the following estimate for the KT temperature:
$$g_c^{KT}\simeq (\pi /2)\sqrt{F}.$$
(19)
The equation $`g_{\mathrm{eff}}=g_c^{KT}`$ determines the transition from the chiral gapless to the chiral gapped phase at $`j>j_L`$, as well as the transition from the non-chiral ($`\lambda _0=0`$) gapless XY phase to the non-chiral gapped Haldane phase at $`j<j_L`$. Note that (19) is still valid away from the Lifshitz point $`j=j_L`$, since the field $`\varphi `$ remains smooth far from the vortex core, and the KT transition temperature is determined by the logarithmic divergence of the free vortex energy at large distances.
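The estimate (19) can be read off by isotropizing the quadratic action (a one-line check, using the gradient term as reconstructed above): rescaling $`\tilde{x}=x/\sqrt{F}`$ gives

$$𝒜_\pm =\frac{\sqrt{F}}{g}\int d\tilde{x}\,dy\,\frac{1}{2}\left[(\partial _y\varphi _\pm )^2+(\partial _{\tilde{x}}\varphi _\pm )^2\right],$$

so the standard XY criterion for the effective temperature, $`g/\sqrt{F}=\pi /2`$, reproduces (19).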
In order to estimate the critical temperature of the Ising transition, let us first make some observations concerning the properties of chiral domain walls. The domain wall (DW) energy can be easily calculated in the vicinity of the Lifshitz point, where $`\lambda \ll 1`$, so that the potential $`V[\lambda ]\simeq (1/8)\{(\lambda ^2-\lambda _0^2)^2+(\partial _x\lambda )^2\}`$ takes the form of the $`\phi ^4`$ model, and one readily obtains the static DW solution $`\lambda =\lambda _0\mathrm{tanh}\left\{\lambda _0(x-x_{DW})\right\}`$ and the corresponding energy (per unit length in the $`y`$ direction)
$$E_{DW}\simeq \lambda _0^3/3,\qquad j-j_L\ll 1.$$
(20)
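For completeness, substituting the kink into the $`\phi ^4`$ form of the potential reproduces (20): the gradient and potential contributions are equal on the solution, and

$$E_{DW}=\int \frac{1}{8}\left[(\partial _x\lambda )^2+(\lambda ^2-\lambda _0^2)^2\right]dx=\frac{\lambda _0^4}{4}\int \mathrm{sech}^4\left\{\lambda _0(x-x_{DW})\right\}dx=\frac{\lambda _0^3}{3}.$$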
Further, it is easy to see that the chiral DW cannot move freely, since an infinitesimal displacement of the DW coordinate $`x_{DW}`$ would cause a global change of the phase $`\psi `$ at $`x\to \infty `$. The DW can only “jump” by integer multiples of $`\pi /\lambda _0`$; then the phase at infinity changes by integer multiples of $`2\pi `$. A jump by $`n\pi /\lambda _0`$ involves the formation of $`n`$ vortices bound on the DW; the elementary $`n=1`$ jump is schematically shown in Fig. 1. The energy per such a bound vortex can be estimated as
$$E_{bv}\simeq \pi \sqrt{F}\,\mathrm{ln}(\pi /\lambda _0),\qquad \lambda _0\ll 1.$$
(21)
Since the Ising transition is governed by the discrete fluctuations of the DW interface, it is natural to use the so-called Müller-Hartmann-Zittartz, or the “solid-on-solid” approximation . In this approach the transition temperature is determined by looking for the point where the free energy $`\sigma `$ of the DW interface becomes zero; a simple calculation yields the following equation for the critical coupling $`g=g_c^I`$:
$$\sigma =E_{DW}-\frac{g}{d_0}\mathrm{ln}\left\{1+d_0\left[\mathrm{coth}\frac{E_{bv}}{2g}-1\right]\right\}=0,$$
(22)
where $`d_0\simeq \pi /(\lambda _0\sqrt{F})`$ is the characteristic size of the bound vortex in the $`y`$ direction (in the derivation of (22) we have assumed that the distance along the $`y`$ axis between two successive “jumps” should be greater than $`d_0`$). This equation can be solved numerically, and for $`j\lesssim 0.26`$ the solution is well fitted by the function $`g_c^I\simeq 1.62\lambda _0+0.28\lambda _0^2`$; thus at $`j\to j_L`$ the Ising transition temperature is larger than the KT one, $`g_c^I>g_c^{KT}\simeq \frac{\pi }{2}\lambda _0`$, confirming the consistency of our assumption.
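The numerical solution quoted above is easy to reproduce. The sketch below is our own illustration (it assumes the continuum expressions $`E_{DW}=\lambda _0^3/3`$, $`E_{bv}=\pi \sqrt{F}\,\mathrm{ln}(\pi /\lambda _0)`$, $`d_0=\pi /(\lambda _0\sqrt{F})`$ with $`\mathrm{cos}\,\lambda _0=1/(4j)`$) and closely matches the fit $`g_c^I\simeq 1.62\lambda _0+0.28\lambda _0^2`$ for small $`\lambda _0`$:

```python
import numpy as np
from scipy.optimize import brentq

def g_c_ising(lam0):
    """Root of Eq. (22) for the Ising critical coupling (illustrative sketch)."""
    j = 1.0 / (4.0 * np.cos(lam0))                     # pitch condition cos(lam0) = 1/(4j)
    F = np.cos(lam0) - 4.0 * j * np.cos(2.0 * lam0)
    E_dw = lam0**3 / 3.0                               # domain-wall energy, Eq. (20)
    E_bv = np.pi * np.sqrt(F) * np.log(np.pi / lam0)   # bound-vortex energy, Eq. (21)
    d0 = np.pi / (lam0 * np.sqrt(F))                   # bound-vortex size along y

    def sigma(g):  # interface free energy of Eq. (22)
        return E_dw - (g / d0) * np.log1p(d0 * (1.0 / np.tanh(E_bv / (2.0 * g)) - 1.0))

    return brentq(sigma, 1e-6, 10.0)   # sigma > 0 at small g, < 0 at large g

for lam0 in (0.05, 0.1, 0.2):
    print(lam0, g_c_ising(lam0), 1.62 * lam0 + 0.28 * lam0**2)
```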
Away from the Lifshitz point the above discussion of the Ising transition is no longer valid, because the characteristic size of the bound vortex and the DW thickness become comparable with the lattice constant, and the continuum description breaks down. However, it is known that $`E_{DW}`$ saturates at $`C_{DW}\simeq 0.87`$ for $`j\gtrsim 0.8`$ ; one could also speculate that for $`j\to \infty `$ the energy of the bound vortex $`E_{bv}\simeq C_{bv}\sqrt{j}`$, where $`C_{bv}`$ is some constant, and then from (22) one obtains $`g_c^I\sim C_{bv}\sqrt{j}/\mathrm{ln}\,j`$ at $`j\to \infty `$. On the other hand, according to (19) $`g_c^{KT}\simeq \pi \sqrt{j}`$ in the same limit. Thus, one may expect that above a certain critical value of $`j`$ the Ising transition temperature $`g_c^I`$ becomes lower than $`g_c^{KT}`$, and the gapped chiral phase disappears.
The resulting conjectured phase diagram of the 2D helimagnet (15) is shown in Fig. 2. It should be mentioned that our picture of the transitions in the 2D XY helimagnet disagrees strongly with that presented in . In the latter work, using the arguments of , it was concluded that at low temperatures the vortices are bound by strings, which would suppress the KT transition and make the Ising transition occur first as the temperature is increased. However, the argument of is adequate only for systems with broken in-plane symmetry, which is not the case here. Another point is that in the description used in the fields $`\varphi ^\pm `$, measuring the deviations from the two different possible helix states with opposite chirality, are allowed to live and interact at the same space-time point, which, in our opinion, is rather unphysical.
The above picture of the transitions in the 2D XY helimagnet is now easily translated into the phase diagram of the frustrated spin chain, which is schematically shown in Fig. 3 for $`D=0`$. Very close to $`j_L`$, where $`m_0\gg \mathrm{\Lambda }`$, which in terms of $`\epsilon \equiv |j-j_L|`$ and $`\mu \equiv (1-\mathrm{\Delta })+\frac{4}{3}D`$ means $`\epsilon \ll 3\mu /(8\zeta \pi ^2)`$, the renormalization of the coupling constant is small, $`g_{\mathrm{eff}}\simeq g_0`$, and the transition boundaries are approximately given by
$$\epsilon _c^a=\frac{K_a}{\pi ^2S^2}\left(D+\frac{3+5\mathrm{\Delta }}{4}\right).$$
(23)
Here the coefficient $`K_{C:C}=1`$ for the transition between gapless and gapped chiral phases, $`K_{C:H}\simeq 0.94`$ for the transition from the chiral gapped to the Haldane phase, and $`K_{H:XY}=2`$ for the Haldane-XY transition. One can see that the slope of the transition lines in the vicinity of $`j_L`$ is very large (proportional to $`S^2`$), and for large $`S`$ the boundaries move closer and closer to the classical Lifshitz point $`j=j_L=\frac{1}{4}`$.
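These coefficients follow from equating $`g_0=\sqrt{2A_0}/S`$ with the corresponding critical coupling; as a consistency check under the conventions reconstructed above, using $`\lambda _0^2\simeq 8\epsilon `$ for $`j>j_L`$ and $`F\simeq 4\epsilon `$ for $`j<j_L`$,

$$\frac{\sqrt{2A_0}}{S}=\frac{\pi }{2}\sqrt{8\epsilon _c}\ \Rightarrow \ \epsilon _c=\frac{A_0}{\pi ^2S^2}\ (K_{C:C}=1),\qquad \frac{\sqrt{2A_0}}{S}=\pi \sqrt{\epsilon _c}\ \Rightarrow \ \epsilon _c=\frac{2A_0}{\pi ^2S^2}\ (K_{H:XY}=2),$$

with $`A_0=D+(3+5\mathrm{\Delta })/4`$ at the Lifshitz point; $`K_{C:H}\simeq 0.94`$ is obtained in the same way from the numerical fit for $`g_c^I`$.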
At larger deviations from $`j=j_L`$, when $`m_0\ll \mathrm{\Lambda }`$, one has the following equations for the phase boundaries:
$$\mu _c^a=\frac{8\zeta \pi ^2}{3}\epsilon e^{-2\pi S\sqrt{\zeta }\left(\sqrt{\epsilon }-\sqrt{\epsilon _0^a}\right)},\qquad \epsilon _0^a=\frac{2K_a}{\pi ^2S^2},$$
(24)
which are valid for $`\sqrt{\epsilon }-\sqrt{\epsilon _0^a}\gtrsim 1/S`$. One can see that the chiral gapped phase shrinks with increasing $`j`$. It is interesting to note that for $`j<j_L`$ the Haldane phase is stabilized by the frustration, in accordance with the numerical results . Further away from $`j_L`$, when $`\lambda _0`$ becomes of the order of $`1`$, the theory breaks down; however, from the above arguments concerning the behavior of $`g_c^I`$ we expect that the chiral gapped phase disappears above a certain critical value of $`j`$.
Certain limitations of the present theory should be mentioned. Our approach does not distinguish between integer and half-integer $`S`$, since we have integrated out the out-of-plane components, and the only remaining topological charge, in-plane vorticity, plays no role. The topological term present in the full theory of the unit vector field contains another quantum number, the so-called Pontryagin index; for $`j<j_L`$ this term is known to suppress the KT transition for half-integer $`S`$, preventing the appearance of the Haldane phase. At $`j>j_L`$ there is no topological term , and one may expect that the KT transition for $`j>j_L`$ survives also for half-integer $`S`$. However, this point is not so clear since the ground state of a half-integer spin chain at sufficiently strong frustration is spontaneously dimerized , and our approach does not allow one to capture this feature. Another limitation is that we cannot describe the hidden (string) order in any way, and thus it is not possible to analyze the coexistence of the string order and chirality in the gapped chiral phase observed in Refs. nor to study the transition to the so-called double Haldane phase characterized by the absence of the string order .
Acknowledgments.– I would like to thank B. A. Ivanov and H.-J. Mikeska for fruitful discussions; the hospitality of Hannover Institute for Theoretical Physics is gratefully acknowledged. This work was supported by the German Federal Ministry for Research and Technology (BMBFT) under the contract 03MI5HAN5.
# Generalized Optimal Current Patterns and Electrical Safety in EIT
## 1 Introduction
The problem of optimizing the drive patterns in EIT was first considered by Seagar, who calculated the optimal placing of a pair of point drive electrodes on a disk to maximize the voltage differences between the measurements of a homogeneous background and of an offset circular anomaly. Isaacson , and Gisser, Isaacson and Newell , argued that one should maximize the $`L^2`$ norm of the voltage difference between the measured and calculated voltages while constraining the $`L^2`$ norm of the current patterns in a multiple drive system. Later they used a constraint on the maximum dissipated power in the test object. Eyöboǧlu and Pilkington argued that medical safety legislation demanded that one restrict the maximum total current entering the body, and that if this constraint is used the distinguishability is maximized by pair drives. Cheney and Isaacson study a concentric anomaly in a disk, using the ’gap’ model for electrodes. They compare trigonometric, Walsh, and opposite and adjacent pair drives for this case, giving the dissipated power as well as the $`L^2`$ and power distinguishabilities. Köksal and Eyöboǧlu investigate the concentric and offset anomaly in a disk using continuum currents.
Yet another approach is to find a current pattern maximizing the voltage difference for a single differential voltage measurement.
## 2 Medical Electrical Safety Regulations
We will review the current safety regulations here, but notice that they were not designed with multiple drive EIT systems in mind and we hope to stimulate a debate about what would be appropriate safety standards.
For the purposes of this discussion the equipment current (“Earth Leakage Current” and “Enclosure Leakage Current”) will be ignored as the emphasis is on the patient currents. These will be assessed with the assumption that the equipment has been designed such that the applied parts, that is the electronic circuits and connections which are attached to the patient for the delivery of current and the measurement of voltage, are fully isolated from the protective earth (at least $`50M\mathrm{\Omega }`$).
IEC601 and the equivalent BS5724 specify a safe limit of 100 $`\mu `$A for current flow to protective earth (“Patient Leakage Current”) through electrodes attached to the skin surface (Type BF) of patients under normal conditions. This is designed to ensure that the equipment will not put the patient at risk even when malfunctioning. The standards also specify that the equipment should allow a return path to protective earth for less than 5 mA if some other equipment attached to the patient malfunctions and applies full mains voltage to the patient. Lower limits of 10 $`\mu `$A (normal) and 50 $`\mu `$A (mains applied to the patient) are set for internal connections, particularly to the heart (Type CF), but that is not at present an issue for EIT researchers.
The currents used in EIT flow between electrodes and are described in the standards as “Patient Auxiliary Currents” (PAC). The limit for any PAC is a function of frequency: 100 $`\mu `$A from 0.1 Hz to 1 kHz; then $`100f`$ $`\mu `$A from 1 kHz to 100 kHz, where $`f`$ is the frequency in kHz; then 10 mA above 100 kHz. The testing conditions for PAC cover 4 configurations; the worst case of each should be examined.
1. Normal conditions. The design of single or multiple current source tomographs should ensure that each current source is unable to apply more than the maximum values given.
2. The PAC should be measured between any single connection and all the other connections tied together. a) If the tomograph uses a single current source, then the situation is similar to normal conditions (above). b) If the tomograph uses multiple current sources, then as far as the patient is concerned the situation is the same as under normal conditions. The design of the sources should be such that they will not be harmed by this test.
3. The PAC should be measured when one or more electrodes are disconnected from the patient. This raises issues for multiple-source tomographs: a) If an isolated-earth electrode is used, then the current in it will be the sum of the currents which should have flowed in the disconnected electrodes; these could all be of the same polarity. The isolated-earth electrode should therefore include an over-current sensing circuit which will turn down/off all the current sources. b) If no isolated-earth electrode is used, then the situation is similar to normal conditions.
4. The PAC should be measured when the disconnected electrodes are connected to protective earth. This introduces no new constraints, given that the tomograph is fully isolated.
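For reference, the frequency-dependent PAC limit described above can be encoded directly (a straightforward transcription of the stated figures, in $`\mu `$A; the function name is ours):

```python
def pac_limit_uA(f_hz: float) -> float:
    """Patient Auxiliary Current limit in microamps, per the figures above."""
    f_khz = f_hz / 1e3
    if f_khz <= 1.0:
        return 100.0            # 100 uA from 0.1 Hz to 1 kHz
    if f_khz <= 100.0:
        return 100.0 * f_khz    # 100 f uA from 1 kHz to 100 kHz
    return 10_000.0             # 10 mA above 100 kHz

assert pac_limit_uA(50.0) == 100.0 and pac_limit_uA(10e3) == 1000.0
```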
## 3 Constrained Optimization
Let $`V=(V_1,\dots ,V_K)^T`$ be the vector of potentials measured on electrodes when a pattern of currents $`I=(I_1,\dots ,I_K)^T`$ is applied. These are related linearly by the transfer impedance matrix $`R`$: $`V=RI`$. For simplicity we will assume the same system of electrodes is used for current injection and voltage measurement. We will also assume that the conductivity is real and the currents in-phase to simplify the exposition. A model of the body is used with our present best estimate for the conductivity, and from this we calculate voltages $`V_\mathrm{c}`$ for the same current pattern. Our aim is to maximize the distinguishability $`\|V-V_\mathrm{c}\|_2=\|(R-R_\mathrm{c})I\|_2`$. The use of the $`L^2`$ norm here corresponds to the assumption that the noise on each measurement channel is independent and identically distributed. If there were no constraints on the currents the distinguishability would be unbounded.
The simplest idea is to maximize $`\|(R-R_\mathrm{c})I\|_2`$ subject to $`\|I\|_2\le M`$ for some fixed value of $`M`$. The solution of this problem is that $`I`$ is the eigenvector of $`R-R_\mathrm{c}`$ corresponding to the largest (in absolute value) eigenvalue. One problem is that the 2-norm of the current has no particular physical meaning. In a later paper it was proposed that the dissipated power be constrained, that is $`I\cdot V=I^TRI`$. The optimal current is the eigenvector of $`(R-R_\mathrm{c})R^{-1/2}`$. (The inverse implied in the expression $`R^{-1/2}`$ has to be understood in the generalized sense, that is one projects on to the space orthogonal to $`(1,\dots ,1)^T`$ and then calculates the matrix exponent $`-1/2`$.) In practical situations in medical EIT the total dissipated power is unlikely to be an active constraint, although local heating effects in areas of high current density may be an issue. Even in industrial applications of EIT, the limitations of voltages and currents handled by normal electronic devices mean that one is unlikely to see total power as a constraint. One exception might be in EIT applied to very small objects.
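To make the two eigenvector characterizations concrete, here is a minimal numerical sketch (our own illustration on a synthetic symmetric $`R`$, not code from any of the systems discussed; the projector $`P`$ restricts to currents summing to zero and implements the generalized inverse):

```python
import numpy as np

K = 16
rng = np.random.default_rng(0)
B = rng.standard_normal((K, K))
R = B @ B.T + K * np.eye(K)            # synthetic symmetric transfer impedance
Rc = R.copy(); Rc[:4, :4] *= 0.95      # "calculated" background-model impedance
P = np.eye(K) - np.ones((K, K)) / K    # projector onto currents summing to zero
D = P @ (R - Rc) @ P                   # voltage-difference map on that subspace

# L2-constrained optimum: top eigenvector of D^T D
I_l2 = np.linalg.eigh(D.T @ D)[1][:, -1]

# Power-constrained optimum: substitute I = Rp^(-1/2) J, Rp the projected R
lam, U = np.linalg.eigh(P @ R @ P)
keep = lam > 1e-9 * lam.max()          # drop the null direction (1,...,1)
Rp_ih = (U[:, keep] / np.sqrt(lam[keep])) @ U[:, keep].T   # generalized R^(-1/2)
I_pw = Rp_ih @ np.linalg.eigh(Rp_ih @ D.T @ D @ Rp_ih)[1][:, -1]
print(I_l2.round(3), I_pw.round(3), sep="\n")
```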
As we have seen, a reasonable interpretation of the safety regulations is to limit the current on each electrode to some safe level $`I_{\mathrm{max}}`$. We will refer to this as an $`L^{\infty }`$ constraint. This corresponds to a convex system of linear constraints $`-I_{\mathrm{max}}\le I_k\le I_{\mathrm{max}}`$. When we maximize the square of the distinguishability, which is a positive definite quadratic function of $`I`$, with respect to this set of constraints, it can be seen that the maximum must be a vertex of the convex polytope $`\{I:\mathrm{max}_k\{|I_k|\}=I_{\mathrm{max}},\sum _kI_k=0\}`$. For example, for an even number $`2n`$ of electrodes the $`{}^{2n}C_n`$ vertices are the currents with each $`I_k=\pm I_{\mathrm{max}}`$, and an equal number with each sign. For the circularly symmetric case these are the Walsh patterns referred to in .
If one wanted to be safe under the multiple fault condition that all the electrodes driving a current with the same sign became disconnected, and the safety mechanism on the isolated-earth failed, one would employ the $`L^1`$ constraint $`\sum _k|I_k|\le 2I_{\mathrm{max}}`$. Again this gives a convex feasible set, in this case a polyhedron with vertices $`I`$ such that all but two $`I_k`$ are zero, and those two are $`I_{\mathrm{max}}`$ and $`-I_{\mathrm{max}}`$. These are the pair drives as considered by Seagar. Pair drives were also considered by ,, for single circular anomalies. Notice that $`L^1`$ optimal currents will be pair drives for any two- or three-dimensional geometry and any conductivity distribution.
Another constraint which may be important in practice is that the current sources are only able to deliver a certain maximum voltage $`V_{\mathrm{max}}`$ close to their power supply voltage. If the EIT system is connected to a body with transfer impedance within its design specification, then the constraints $`-V_{\mathrm{max}}\le V_k\le V_{\mathrm{max}}`$ will not be active. If they do become active, then the additional linear constraints in $`I`$ space, $`-V_{\mathrm{max}}\le RI\le V_{\mathrm{max}}`$ (where any inverse $`R^{-1}`$ required is to be interpreted as the generalized inverse), will still result in a convex feasible region.
When any of the linear constraints is combined with quadratic constraints such as maximum power dissipation, the feasible set of currents is still convex, but its surface is no longer a polytope.
## 4 Numerical Results
Although we can easily find the vertices of the feasible region, there are too many for it to be wise to search exhaustively for a maximum of the distinguishability. For $`32`$ electrodes, for example, there are $`{}^{32}C_{16}>6\times 10^8`$. Instead we use a discrete steepest ascent search over the feasible vertices. That is, from a given vertex we calculate the objective function for all vertices obtained by changing a pair of signs, and move to whichever vertex has the greatest value of the objective function. For comparison we also calculated the $`L^2`$ optimal currents, the optimal currents for the power constraint, and the optimal pair drive ($`L^1`$ optimal).
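Our reading of that search, as a self-contained sketch (illustrative Python, not the authors' implementation; $`D`$ plays the role of $`R-R_\mathrm{c}`$ restricted to currents summing to zero, e.g. as in the earlier sketch):

```python
import numpy as np

def linf_optimal(D, I_max=1.0, seed=0):
    """Discrete steepest ascent over the vertices I_k = +/- I_max (equal counts)."""
    K = D.shape[1]
    s = np.array([1.0] * (K // 2) + [-1.0] * (K // 2))
    np.random.default_rng(seed).shuffle(s)
    I = I_max * s
    while True:
        best_v, best_I = np.linalg.norm(D @ I), None
        for i in range(K):           # examine all neighbours obtained by
            for j in range(K):       # swapping the signs of one +/- pair
                if I[i] > 0 > I[j]:
                    J = I.copy(); J[i], J[j] = -I_max, I_max
                    v = np.linalg.norm(D @ J)
                    if v > best_v:
                        best_v, best_I = v, J
        if best_I is None:           # no neighbour improves: local optimum
            return I
        I = best_I

D = np.random.default_rng(1).standard_normal((16, 16))
print(linf_optimal(D))
```

Each move keeps the currents summing to zero, and the loop terminates because the objective strictly increases over a finite vertex set.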
We used a circular disk for the forward problem, and the EIDORS Matlab toolbox for mesh generation and forward solution. The mesh and conductivity targets can be seen in Figure 3. Our results are interesting in that, for the cases we have studied so far, the $`L^{\infty }`$ optimal currents have only two sign changes. The distinguishabilities given in Table 1 should be read with caution, as it is somewhat unfair to compare, for example, power constrained with $`L^{\infty }`$ patterns: they are designed to optimise different criteria. However the contrast between pair drive and $`L^{\infty }`$ is worth noting, as the majority of existing EIT systems can only drive pairs of electrodes.
The greatest current densities occur at the contact points between the electrode boundaries and the skin. At each electrode this current density is determined mainly by the total electrode current, the contact impedance, and the skin conductivity just below the electrode. These factors dominate the current density near the electrode boundaries, and the other electrodes’ currents have a much smaller contribution to the maximum current densities.
## 5 Conclusions
If using optimal current patterns, one should be sure to use the right constraints. We suggest that in many situations the $`L^{\infty }`$ constraint may be the correct one. We have demonstrated that it is simple to compute these optimal patterns, and the instrumentation required to apply them is much simpler than for the $`L^2`$ or power norm patterns. While still requiring multiple current sources, they need only be able to switch between sinking and sourcing the same current.
# On threefolds covered by lines
## Introduction
Projective varieties containing “ many” linear spaces appear naturally in several occasions. For instance, consider the following examples which, by the way, motivated our interest in this topic.
The first example concerns varieties of $`4`$-secant lines of smooth threefolds in $`𝐏^5`$. The family of such lines has in general dimension four and the lines fill up the whole ambient space, but it can happen that they form a hypersurface.
A second example comes from the following recent theorem of Arrondo (see ), in some sense the analogue of the Severi theorem about the Veronese surface:
let $`Y`$ be a subvariety of dimension $`n`$ of the Grassmannian $`𝐆(1,2n+1)`$ of lines of $`𝐏^{2n+1}`$ and assume that $`Y`$ can be isomorphically projected into $`𝐆(1,n+1)`$. Then, if the lines parametrized by $`Y`$ fill up a variety of dimension $`n+1`$, $`Y`$ is isomorphic to the second Veronese image of $`𝐏^n`$.
If those lines generate a variety of lower dimension, nothing is known.
In both cases it would be very interesting to have a classification of such varieties. Moreover, these examples show that for such a classification it would be desirable to avoid any assumption concerning singularities.
The first general results about the classification of projective varieties containing a higher dimensional family of linear spaces were obtained by Beniamino Segre (). In particular, in the case of lines, he proved:
Let $`X\subset 𝐏^N`$ be an irreducible variety of dimension $`k`$, let $`\mathrm{\Sigma }\subset 𝐆(1,N)`$ be an irreducible component of maximal dimension of the variety of lines contained in $`X`$, such that the lines of $`\mathrm{\Sigma }`$ cover $`X`$. Then $`dim\mathrm{\Sigma }\le 2k-2`$. If equality holds, then $`X=𝐏^k`$. Moreover, if $`k\ge 2`$ and $`dim\mathrm{\Sigma }=2k-3`$, then $`X`$ is either a quadric or a scroll in $`𝐏^{k-1}`$’s over a curve.
The case of a family $`\mathrm{\Sigma }`$ of dimension $`2k-4`$ is treated in some papers by Togliatti (), Bompiani (), M. Baldassarri (), but their arguments are not easy to follow. Recently, varieties of dimension $`k\ge 3`$ with a family of lines of dimension $`2k-4`$ have been classified by Lanteri–Palleschi (), as a particular case of a more general classification theorem. Their starting point is a pair $`(X,L)`$ where $`L`$ is an ample divisor on $`X`$, which is assumed to be smooth or, more in general, normal and $`𝐐`$-Gorenstein. The assumptions on the singularities of $`X`$ are removed by Rogora in his thesis (), but he assumes $`k\ge 4`$ and codim $`X>2`$.
The aim of this paper is the classification of the varieties of dimension $`k`$ covered by the lines of a family of dimension $`2k-4`$, in the first non-trivial case $`k=3`$, i.e. threefolds covered by a family of lines of dimension $`2`$. So, we classify threefolds covered by “few” lines.
A first remark is that among these varieties there are threefolds which are birationally scrolls over a surface or ruled by smooth quadrics over a curve. The first ones come from general surfaces contained in $`𝐆(1,4)`$, while the second ones come from general curves contained in the Hilbert scheme of quadric surfaces in $`𝐏^n`$. Note that these “quadric bundles” are built from varieties of lower dimension having a higher dimensional family of lines.
So we have focused our attention on threefolds not of these two types.
Observe that, if $`X`$ is a threefold covered by the lines of a family of dimension two, then there is a fixed finite number $`\mu `$ of lines passing through any general point of $`X`$. In particular, having excluded scrolls, we have assumed $`\mu >1`$.
It is interesting to remark that the surfaces $`\mathrm{\Sigma }`$ in $`𝐆(1,4)`$ corresponding to threefolds with $`\mu >1`$, can be characterized by the property that the tangent space to $`𝐆(1,4)`$ at every point $`r`$ of $`\mathrm{\Sigma }`$ intersects (improperly) $`\mathrm{\Sigma }`$ along a curve. This follows from the fact that the points of $`𝐆(1,4)\cap T_r𝐆(1,4)`$ represent the lines meeting $`r`$.
Our point of view, which we have borrowed from the quoted paper of Mario Baldassarri, is the following. Since we do not care about singularities, we are free to project our threefolds birationally into $`𝐏^4`$, onto hypersurfaces of the same degree and with the same $`\mu `$. Hence, it is enough to classify hypersurfaces in $`𝐏^4`$ having a family of lines with the requested properties.
If $`X\subset 𝐏^4`$ is a hypersurface of degree $`n,`$ then the equation of $`X`$ is a global section $`G\in \mathrm{\Gamma }(𝐏^4,𝒪_{𝐏^4}(n)).`$ The section $`G`$ induces in a canonical way a global section $`s\in \mathrm{\Gamma }(𝐆(1,4),S^nQ),`$ where $`Q`$ is the universal quotient bundle on $`𝐆(1,4).`$ It is a standard fact that the points of the scheme of the zeros of the section $`s`$ of $`S^nQ`$ correspond exactly to the lines on $`X.`$ In this paper we will denote by $`\mathrm{\Sigma }`$ the Fano scheme of the lines on $`X`$, which is, by definition, the scheme of the zeros of the section $`s.`$
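For orientation, whenever $`\mathrm{\Sigma }`$ is non-empty this description bounds its dimension from below by the expected one (a standard count; here we use the rank-two convention for $`Q`$, so that $`S^nQ`$ has rank $`n+1`$):

$$dim\mathrm{\Sigma }\ge dim𝐆(1,4)-\mathrm{rank}(S^nQ)=6-(n+1)=5-n.$$

This equals $`2`$ precisely for $`n=3`$; for $`n\ge 4`$ a two-dimensional $`\mathrm{\Sigma }`$ exceeds the expected dimension, which is one reason why such hypersurfaces are special.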
In this paper we will study threefolds $`X`$ in $`𝐏^4`$ covered by lines such that $`\mathrm{\Sigma }`$ has dimension two.
The following theorem is the main result of the paper.
###### Theorem 0.1
Let $`X\subset 𝐏^4`$ be a projective, integral hypersurface over an algebraically closed field $`K`$ of characteristic zero, covered by lines. Let $`\mathrm{\Sigma }`$ denote the Fano scheme of the lines on $`X`$ just introduced. Assume that $`\mathrm{\Sigma }`$ is generically reduced, that $`\mu >1`$ and that $`X`$ is not birationally ruled by quadrics over a curve. Then one of the following happens:
1. $`X`$ is a cubic hypersurface with singular locus of dimension at most one; if $`X`$ is smooth, then $`\mathrm{\Sigma }`$ is irreducible and $`\mu =6`$;
2. $`X`$ is a projection of a complete intersection of two hyperquadrics in $`𝐏^5`$; in general, $`\mathrm{\Sigma }`$ is irreducible and $`\mu =4`$;
3. $`\mathrm{deg}X=5`$: $`X`$ is a projection of a section of $`𝐆(1,4)`$ with a $`𝐏^6`$, $`\mathrm{\Sigma }`$ is irreducible and $`\mu =3`$;
4. $`\mathrm{deg}X=6`$: $`X`$ is a projection of a hyperplane section of $`𝐏^2\times 𝐏^2`$, $`\mathrm{\Sigma }`$ has two irreducible components and $`\mu =2`$;
5. $`\mathrm{deg}X=6`$: $`X`$ is a projection of $`𝐏^1\times 𝐏^1\times 𝐏^1`$, $`\mathrm{\Sigma }`$ has at least three irreducible components and $`\mu \ge 3.`$
Note that these five cases are precisely the projections of Fano varieties with $`K_X=-L-L,`$ $`L`$ ample (). This list is the same as in the article of Baldassarri.
It is interesting to remark that the bound $`\mu =6`$ is attained only by cubic threefolds.
The assumption that $`\mathrm{\Sigma }`$ is generically reduced is necessary to make our method work. Note that this is a genericity assumption for $`X`$ (however our threefolds are not general, if the degree is $`>3`$; in fact, none of them is linearly normal in $`𝐏^4`$, so they have a big singular locus). This assumption is quite strong, because it implies in particular that the dual variety of $`X`$ is a hypersurface and that a general line on $`X`$ is never contained in a fixed tangent plane.
The paper is organized as follows.
In § 1 we prove that, under suitable conditions, on a general line of $`\mathrm{\Sigma }`$ there are $`n-3`$ singular points of $`X,`$ where $`n`$ is the degree of $`X,`$ and we derive from this many consequences we shall need in the paper. In particular, we will show that, if $`n\ge 5,`$ then the singular locus of $`X`$ is a surface and give an explicit lower bound for its degree (Theorem 1.11). Our main technical tool will be the family of planes containing a line of $`\mathrm{\Sigma }`$. We prove that the assumption that $`\mathrm{\Sigma }`$ is generically reduced implies that there is no fixed tangent plane to $`X`$ along a general line on $`X.`$ From this it follows readily that the dual variety of $`X`$ is a threefold (Theorem 1.6). In this section we also introduce the ruled surfaces $`\sigma (r)`$, generated by the lines on $`X`$ meeting a fixed line $`r`$.
§ 2 contains the proof of the bound $`\mu \le 6.`$ Moreover, if $`n>3`$ we prove that $`\mu \le 4.`$
§ 3 is devoted to the classification of threefolds with an irreducible family of lines with $`\mu >1`$. First of all, we check that, if $`deg(X)>3`$ and $`X`$ is not a quadric bundle, only two possibilities are allowed for $`\mu ,`$ i.e. $`\mu =3,4`$. The threefolds with these invariants are then classified, respectively in Propositions 3.2 and 3.3.
§ 4 contains the classification of threefolds with a reducible $`2`$-dimensional family of lines, such that all components of $`\mathrm{\Sigma }`$ have $`\mu _i=1`$.
It is a pleasure to thank E. Arrondo, J.M. Landsberg, R. Piene and K. Ranestad for several useful conversations about the content of the paper, as well as for constant encouragement.
In the paper we will use the following:
Notations, general assumptions and conventions
1. We will always work over an algebraically closed field $`K`$ of characteristic zero.
2. $`X\subset 𝐏^4`$ will be a projective, integral hypersurface, of degree $`n,`$ covered by lines.
3. We will denote by $`\mathrm{\Sigma }`$ the Fano scheme of the lines on $`X`$. (In particular, by the result of B. Segre quoted above, from $`dim(\mathrm{\Sigma })=2`$ it follows $`n\ge 3.`$)
4. Let $`\mu `$ be the number of lines of $`\mathrm{\Sigma }`$ passing through a general point of $`X`$. If $`\mathrm{\Sigma }`$ is reducible, with $`\mathrm{\Sigma }_1,\dots ,\mathrm{\Sigma }_s`$ as irreducible components of dimension $`2,`$ then we will denote by $`\mu _i`$ the number of lines of $`\mathrm{\Sigma }_i`$ passing through a general point of $`X.`$ Clearly $`\mu =\mu _1+\cdots +\mu _s`$. We assume $`\mu >1.`$
5. We will assume that $`X`$ is not birationally ruled by quadric surfaces over a curve.
6. For a “general line in $`\mathrm{\Sigma }`$” we mean any line which belongs to a subset $`S\subset \mathrm{\Sigma }`$ (never given explicitly), such that $`S`$ is Zariski dense in $`\mathrm{\Sigma }.`$ So “general line in $`\mathrm{\Sigma }`$” is meaningful also in the case of a reducible $`\mathrm{\Sigma }`$.
7. We will denote by the same letter both a line in $`𝐏^4`$ and the corresponding point of $`𝐆(1,4)`$. We hope that it will be always clear from the context which point of view is adopted.
8. For $`r`$ general in $`\mathrm{\Sigma }`$, the assumption $`\mu >1`$ ensures that the union of all the lines of $`\mathrm{\Sigma }`$ meeting $`r`$ is a surface $`\sigma (r)`$, which can also be seen as a curve inside $`𝐆(1,4).`$ As $`r`$ varies in $`\mathrm{\Sigma }`$, these curves describe an algebraic family in $`\mathrm{\Sigma }`$ of dimension $`2.`$ If $`\mathrm{\Sigma }`$ is reducible, with $`\mathrm{\Sigma }_1,\dots ,\mathrm{\Sigma }_s`$ as irreducible components of dimension $`2,`$ then the surfaces $`\sigma (r)`$ are unions $`\sigma _1(r)\cup \cdots \cup \sigma _s(r)`$, where $`\sigma _i(r)`$ is formed by the lines of $`\mathrm{\Sigma }_i`$ intersecting $`r.`$
## 1 Preliminary results
We consider the degree $`n`$ of $`X`$. For $`n=3`$, it is well known that all cubic hypersurfaces of $`𝐏^4`$ contain a family of lines of dimension at least $`2`$, and of dimension exactly $`2`$ if the singular locus of $`X`$ has codimension at least $`2`$. For $`n\ge 4`$, a general hypersurface of $`𝐏^4`$ of degree $`n`$ is not covered by lines.
The following theorem is the main technical result of the paper. Here the assumption that the irreducible components of dimension two of $`\mathrm{\Sigma }`$ are reduced is essential.
###### Theorem 1.1
If $`r`$ is a general line of an irreducible component $`\mathrm{\Sigma }_1`$ of $`\mathrm{\Sigma }`$ which is of dimension two and generically reduced, then $`r\cap Sing(X)`$ is a $`0`$-dimensional scheme of length $`n-3`$ (we will express this briefly by saying that “on $`r`$ there are exactly $`n-3`$ singular points of $`X`$”). In particular, if $`n\ge 4`$, then $`X`$ is singular.
Proof Let $`r`$ be a general line of $`\mathrm{\Sigma }_1`$ (in particular, $`\mathrm{\Sigma }_1`$ is the only component of $`\mathrm{\Sigma }`$ containing $`r`$), and let $`\pi `$ be a plane containing $`r`$. Then $`\pi \cap X`$ splits as a union $`r\cup C`$ where $`C`$ is a plane curve of degree $`n-1`$. So $`r\cap C`$ has length $`n-1`$ and it is formed by points that are singular for $`\pi \cap X`$, hence either tangency points of $`\pi `$ to $`X`$ or singular points of $`X`$. We will prove that, if $`\pi `$ is general among the planes containing $`r`$, then exactly $`n-3`$ of these points are singular for $`X`$. To this end, let us consider the family (possibly reducible) of planes $`\mathcal{F}=\{\pi \,|\,\pi \supset r,\ r\in \mathrm{\Sigma }\}`$; its dimension is $`4`$.
Claim. The general plane through $`r`$ cannot be tangent to $`X`$ in more than two points.
Proof of the Claim We have to prove that $`X`$ does not possess a $`4`$-dimensional family of $`k`$-tangent planes, with $`k>2.`$ Assume by contradiction that $`X`$ possesses such a family $`𝒢.`$ Let $`O`$ be a general point of $`𝐏^4`$, $`O\notin X`$. The projection $`p_O:X\to 𝐏^3`$, centered at $`O`$, is a covering of degree $`n`$, with branch locus a surface $`\rho `$ contained in $`𝐏^3`$. There is a $`2`$-dimensional subfamily $`𝒢^{\prime }`$ of $`𝒢`$ formed by the planes passing through $`O`$: they project to lines $`k`$-tangent to the surface $`\rho `$. Then $`\rho `$ satisfies the assumptions of the following lemma:
###### Lemma 1.2
Let $`S\subset 𝐏^3`$ be a reduced surface and assume that there exists an irreducible subvariety $`H\subset 𝐆(1,3),`$ with $`dim(H)\ge 2,`$ whose general point represents a line in $`𝐏^3`$ which is tangent to $`S`$ at $`k>2`$ distinct points. Then $`dim(H)=2`$ and $`H`$ is a plane parametrizing the lines contained in a fixed plane $`M\subset 𝐏^3,`$ which is tangent to $`S`$ along a curve.
Therefore there exists a plane $`\tau `$ tangent to $`\rho `$ along a curve of degree $`k`$. But $`\tau `$ is the projection of a $`3`$-space $`\alpha `$ passing through $`O`$, which must contain the planes of $`𝒢^{\prime }`$. So these planes are $`k`$-tangent also to $`X\cap \alpha `$, which is a surface of $`𝐏^3`$: this means that all planes tangent to $`X\cap \alpha `$ are $`k`$-tangent. Since $`X\cap \alpha `$ is not a plane, this is a contradiction.
Therefore, we have at least $`n-3`$ singular points of $`X`$ on $`r.`$ Assume there are $`n-2.`$
Let $`H\subset 𝐏^4`$ be a hyperplane containing $`r.`$ Let us denote by $`𝐆(1,H)\cong 𝐆(1,3)`$ the Schubert cycle in $`𝐆(1,4)`$ parametrizing lines contained in $`H.`$ Then, for general $`H`$ the intersection $`𝐆(1,H)\cap \mathrm{\Sigma }`$ is proper, namely it is purely $`0`$-dimensional. In fact, if infinitely many lines of $`\mathrm{\Sigma }`$ were contained in $`H,`$ then $`dim(\mathrm{\Sigma })\ge 3,`$ a contradiction. Moreover, since we assume that $`\mathrm{\Sigma }`$ is generically reduced, both $`\mathrm{\Sigma }`$ and $`𝐆(1,H)`$ are smooth at $`r.`$ We will show, now, that if $`r`$ contains $`n-2`$ singular points of $`X,`$ then $`\mathrm{\Sigma }`$ and $`𝐆(1,H)`$ do not intersect transversally at $`r,`$ and this will yield a contradiction. In fact, $`PGL(4)`$ acts transitively on $`𝐆(1,4),`$ and we can use Kleiman’s transversality theorem because we have assumed that our base field $`K`$ has characteristic zero.
Before we start, let us recall briefly, for the reader’s convenience, some basic facts about $`T_r𝐆(1,4).`$ Let $`\mathrm{\Lambda }\subset K^5`$ be the $`2`$-dimensional linear subspace corresponding to $`r,`$ i.e. $`r=𝐏(\mathrm{\Lambda }).`$ Then $`T_r𝐆(1,4)`$ can be identified with $`Hom_K(\mathrm{\Lambda },K^5/\mathrm{\Lambda }),`$ hence for a non zero $`\phi \in T_r𝐆(1,4)`$ we have $`rk\phi =1`$ or $`2.`$ In both cases we can associate to $`\phi `$ in a canonical way a double structure on $`r.`$ When $`rk\phi =1`$ this structure is obtained by doubling $`r`$ on the plane $`𝐏(\mathrm{\Lambda }\oplus Im(\phi ))`$, hence it has arithmetic genus zero (). When $`rk\phi =2`$ the doubling of $`r`$ is on a smooth quadric inside $`𝐏(\mathrm{\Lambda }\oplus Im(\phi ))\cong 𝐏^3,`$ and the arithmetic genus is $`-1.`$ In both cases we have $`r\subset 𝐏(\mathrm{\Lambda }\oplus Im(\phi ))`$ and $`\phi \in T_r𝐆(1,𝐏(\mathrm{\Lambda }\oplus Im(\phi ))).`$
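The genus count in the rank-two case is a standard adjunction computation (recorded here for the reader’s convenience): the doubled line is a divisor of type $`(2,0)`$ on the quadric $`𝐏^1\times 𝐏^1`$, whose canonical class has type $`(-2,-2)`$, so

$$p_a(2r)=1+\frac{1}{2}\,(2r)\cdot (2r+K)=1+\frac{1}{2}\,(2,0)\cdot (0,-2)=1-2=-1.$$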
To prove the non transversality of $`\mathrm{\Sigma }`$ and $`𝐆(1,H)`$ at $`r,`$ it is harmless to assume that $`H`$ is not tangent to $`X`$ at any smooth point of $`r.`$ Therefore, the singularities of the surface $`S:=X\cap H`$ on the line $`r`$ are exactly those points which are already singular for $`X.`$
To fix ideas, let $`r`$ be defined by the equations $`x_2=x_3=x_4=0,`$ $`H`$ defined by $`x_4=0,`$ and $`S`$ defined in $`H`$ by $`\overline{G}=0.`$ Then, the restriction to $`r`$ of the Gauss map of $`S`$ is given analytically as follows:
$$\alpha :P\mapsto [\overline{G}_{x_0}(P),\overline{G}_{x_1}(P),\overline{G}_{x_2}(P),\overline{G}_{x_3}(P)].$$
We can regard the $`\overline{G}_{x_i}(P)`$’s as polynomials of degree $`n-1`$ in the coordinates of $`P`$ on $`r.`$ Since we assume $`X`$ has $`n-2`$ singular points on $`r,`$ the four polynomials $`\overline{G}_{x_i}(P)`$ have a common factor of degree $`n-2.`$ Therefore, if we clean up this common factor, the above map can be represented analytically by polynomials of degree $`1.`$ Therefore, the double structure on $`r`$ defined by $`\alpha `$ has arithmetic genus $`-1,`$ and it arises from a non zero vector $`\phi \in T_r𝐆(1,H).`$
Now, for every $`P\in r`$ which is a smooth point for $`S`$ we have $`\alpha (P)=T_PS=T_PX\cap H,`$ and in particular we have $`\alpha (P)\subset T_PX.`$ This means that $`\phi `$ is also a tangent vector to the Fano scheme $`\mathrm{\Sigma }`$ of the lines on $`X`$ (see , pp. 209-210), i.e. $`\phi \in T_r\mathrm{\Sigma }`$. Since we assume that $`\mathrm{\Sigma }_1`$ is the only component of $`\mathrm{\Sigma }`$ containing $`r`$ and $`\mathrm{\Sigma }_1`$ is reduced at $`r,`$ by the usual criterion for multiplicity one, we conclude that $`𝐆(1,H)`$ and $`\mathrm{\Sigma }`$ are not transversal at $`r,`$ and the proof is complete (for general facts about intersection multiplicities the reader is referred to ).
Proof of Lemma 1.2
The lines in $`𝐏^3`$ which are tangent to $`S`$ are parametrized by a ruled threefold $`K\subset 𝐆(1,3)`$: any line on $`K`$ corresponds to the pencil of lines in $`𝐏^3`$ which are tangent to $`S`$ at a fixed smooth point. Then $`H\subset K.`$
$`H`$ is a surface: otherwise, a general point $`O\in 𝐏^3`$ would be contained in infinitely many lines of $`H`$; therefore, every tangent line to a general plane section $`C`$ of $`S`$ would be $`k`$-tangent to $`C,`$ with $`k>2,`$ a contradiction.
Let $`L\subset 𝐏^3`$ be a line corresponding to a smooth point of $`H;`$ then $`L`$ is tangent to $`S`$ at least at points $`P,Q,R`$. Since a general point of $`K`$ represents a line which is tangent to $`S`$ at a unique point, $`K`$ has three branches at $`L.`$ We denote by $`U_P,U_Q,U_R`$ the tangent spaces to these branches at $`L,`$ i.e. $`U_P\cup U_Q\cup U_R`$ is contained in the tangent cone to $`K`$ at $`L.`$ We have $`U_P\cap U_Q=T_LH.`$ The intersection of this plane with $`𝐆(1,3)`$ is the union of two lines. Then, a direct, cumbersome computation proves that these lines inside $`𝐆(1,3)`$ represent respectively the pencil of lines in $`T_PS`$ through $`Q`$ and the pencil of lines in $`T_QS`$ through $`P.`$
Claim. For a general point $`L\in H`$ we have $`T_LH\subset 𝐆(1,3).`$
It is sufficient to show that $`T_LH\cap 𝐆(1,3)`$ contains three distinct lines.
From $`U_P\cap U_Q\cap U_R=T_LH`$ we get that the two lines of $`T_LH\cap 𝐆(1,3)`$ are contained also in $`U_R.`$ If we translate all this into equations, an easy computation shows that $`T_PS=T_RS.`$ By symmetry we get $`T_PS=T_QS=T_RS.`$ Therefore, the three distinct lines in $`𝐆(1,3)`$ which correspond to the pencils in $`T_PS`$ of centres respectively $`P,Q,R`$ are contained in $`T_LH`$, and the claim is proved.
By continuity, all the tangent planes $`T_LH`$ belong to one and the same system of planes on $`𝐆(1,3).`$ Therefore, the tangent planes at two general points of $`H`$ meet, and either $`H`$ is a Veronese surface, or its linear span $`\langle H\rangle `$ is a $`𝐏^4.`$
The first case is impossible because the tangent planes to a Veronese surface fill up a cubic hypersurface in $`𝐏^5,`$ whereas $`𝐆(1,3)`$ is a quadric. On the other hand, the quadric hypersurface $`𝐆(1,3)\cap \langle H\rangle `$ in $`\langle H\rangle =𝐏^4`$ contains planes, hence it is singular. Therefore, the hyperplane $`\langle H\rangle `$ is tangent to $`𝐆(1,3)`$ at some point $`r`$ and all the lines of $`H`$ meet the fixed line $`r`$ in $`𝐏^3.`$ Were the lines of $`H`$ not lying on a unique plane through $`r,`$ then any plane $`N`$ through $`r`$ would contain infinitely many lines $`3`$-tangent to the plane section $`S\cap N`$ of $`S,`$ a contradiction.
In the statement of Theorem 1.1 we assume that an irreducible component of $`\mathrm{\Sigma }`$ is generically reduced. We now give a criterion that leads to an easy way to check in practice whether this hypothesis is satisfied.
We generalize a little and assume that an integral hypersurface $`X\subset 𝐏^N`$ is covered by lines and that the dimension of the Fano scheme $`\mathrm{\Sigma }`$ of lines on $`X`$ is $`N-2.`$ Let the line $`r`$ represent a general point of an irreducible component $`\mathrm{\Sigma }_1`$ of $`\mathrm{\Sigma },`$ of dimension $`N-2,`$ and let $`p`$ be a general point of $`r.`$
Let $`[x_0,\dots ,x_N]`$ be homogeneous coordinates in $`𝐏^N.`$ Assume that the line $`r\subset X`$ is defined by $`x_2=\cdots =x_N=0,`$ and that the point $`p`$ is $`[1,0,\dots ,0].`$ We will work on the affine chart $`p_{01}=1`$ of the Grassmannian $`𝐆(1,N).`$ Coordinates in this chart are $`p_{02},\dots ,p_{0N},p_{12},\dots ,p_{1N}`$ and the line $`r`$ is represented by the origin. It is easy to see that a line $`l`$ in this affine chart contains the point $`p`$ if and only if its coordinates satisfy the equations $`p_{12}=\cdots =p_{1N}=0.`$
Moreover, we will work on the affine chart $`x_0=1`$ of $`𝐏^N,`$ and we set $`y_i:=x_i/x_0`$ for $`i=1,\dots ,N.`$ Then $`p`$ is the origin.
Let $`G=G_1+G_2+\cdots +G_n=0`$ be the equation of $`X`$ in this chart, where the $`G_i`$ are the homogeneous components of $`G.`$ We can assume that the tangent space to $`X`$ at $`p`$ is defined by $`y_N=0`$, and we can consider $`y_1,\dots ,y_{N-1}`$ as homogeneous coordinates in $`𝐏(T_pX).`$ Then, the line $`r`$ is represented in $`𝐏(T_pX)`$ by the point $`[1,0,\dots ,0].`$ Finally, it is convenient to write $`G_i=F_i+y_NH_i,`$ where the $`F_i`$’s are polynomials in $`y_1,\dots ,y_{N-1}.`$
###### Proposition 1.3
Assume that a hypersurface $`X\subset 𝐏^N`$ is covered by lines and that the dimension of the Fano scheme $`\mathrm{\Sigma }`$ of lines on $`X`$ is $`N-2.`$ Let the line $`r`$ represent a general point of an irreducible component $`\mathrm{\Sigma }_1`$ of $`\mathrm{\Sigma },`$ of dimension $`N-2,`$ and let $`p`$ be a general point of $`r.`$ With the notations introduced above, $`\mathrm{\Sigma }_1`$ is reduced at $`r`$ if and only if the intersection of the hypersurfaces in $`𝐏(T_pX)`$ defined by $`F_i=0`$ for $`i=2,\dots ,n`$ is reduced at $`[1,0,\dots ,0].`$ Or, equivalently, if the $`(y_2,\dots ,y_{N-1})`$-primary component of the ideal $`(F_2,\dots ,F_n)\subset K[y_1,\dots ,y_{N-1}]`$ is $`(y_2,\dots ,y_{N-1})`$.
Proof Let $`s\subset 𝐏^N`$ be a line such that $`s\not\subset X`$ and $`p\in s.`$ Let $`A\subset 𝐆(1,N)`$ be the Schubert variety parametrizing the lines in $`𝐏^N`$ which intersect $`s.`$ The only singular point of $`A`$ is $`s.`$ In fact, it is easily seen that $`A`$ is the intersection of $`𝐆(1,N)`$ with the (projectivized) tangent space to $`𝐆(1,N)`$ at $`s.`$ In particular, the points of $`A`$ different from $`s`$ correspond exactly to the tangent vectors to $`𝐆(1,N)`$ at $`s`$ which are of rank $`1.`$ Then, by using the facts on tangent vectors to Grassmannians briefly recalled in the proof of Thm. 1.1, it is easily seen that $`A`$ is the affine cone inside $`T_s𝐆(1,N),`$ over a $`𝐏^1\times 𝐏^{N-2}\subset 𝐏(T_s𝐆(1,N)).`$ It is clear that $`\mathrm{\Sigma }_1`$ and $`A`$ intersect properly at $`r.`$
We claim that $`\mathrm{\Sigma }_1`$ is reduced at $`r`$ if and only if $`\mathrm{\Sigma }_1\cap A`$ is reduced at $`r`$. Assume that $`\mathrm{\Sigma }_1\cap A`$ is reduced at $`r.`$ Let $`𝒪`$ be the local ring of $`𝐆(1,N)`$ at $`r`$ and let $`I`$ and $`J`$ denote respectively the ideals of $`\mathrm{\Sigma }_1`$ and $`A`$ in $`𝒪`$. Then the Artinian ring $`𝒪/(I+J)`$ is reduced, i.e. it is a field, and we want to prove that $`𝒪/I`$ is reduced. The Cohen-Macaulay locus of $`\mathrm{\Sigma }_1`$ is certainly open and non empty. So, by genericity, we can assume that $`𝒪/I`$ is Cohen-Macaulay. We have $`dim(𝒪/I)=N-2=ht(J).`$ But $`J`$ is generated by a regular sequence of length $`N-2`$ since $`A`$ is smooth at $`r.`$ Therefore, the same is true for $`(J+I)/I,`$ since $`𝒪/I`$ is a Cohen-Macaulay ring. But $`𝒪/(I+J)`$ is a field, hence $`(J+I)/I`$ is the maximal ideal of $`𝒪/I`$. It follows that this last ring is a regular local ring.
Assume, conversely, that $`\mathrm{\Sigma }_1`$ is reduced at $`r`$. Then $`\mathrm{\Sigma }_1\cap A`$ is reduced at $`r`$ because $`r`$ is general and because of Kleiman’s criterion of transversality of the generic translate, already used in the proof of Thm. 1.1.
Denote by $`B`$ the Schubert cycle in $`𝐆(1,N)`$ parametrizing the lines in $`𝐏^N`$ through $`p.`$ A moment’s thought shows that the local rings at $`r`$ of $`\mathrm{\Sigma }_1\cap A`$ and $`\mathrm{\Sigma }_1\cap B`$ are the same. Then we are reduced to computing the ideal of $`\mathrm{\Sigma }_1\cap B`$ inside $`𝒪=𝒪_{𝐆(1,N),r}.`$
To do this, we substitute the parametric representation of a general line $`l`$ containing $`p,`$ namely $`y_1=t`$ and $`y_i=p_{0i}t`$ for $`i\ge 2`$ (where $`t`$ varies in the base field $`K`$), in all the equations $`G_i=0,`$ for $`i\ge 1.`$ From $`G_1(t,p_{02}t,\dots ,p_{0N}t)=0`$ we get simply $`p_{0N}=0.`$ Then, since the $`F_i`$ are homogeneous polynomials, the other generators for the ideal of $`\mathrm{\Sigma }_1\cap A`$ at $`r`$ are the $`F_i(1,p_{02},\dots ,p_{0,N-1})`$, $`i=2,\dots ,n.`$ An obvious change of variables completes the proof.
Example 1.4: Let $`X`$ be the variety of the secant lines of a rational normal quartic curve $`\mathrm{\Gamma }\subset 𝐏^4.`$ It is well known that the degree of $`X`$ is $`3.`$ On $`X`$ we have two families of lines of dimension two, each covering $`X.`$ We denote by $`\mathrm{\Sigma }_1`$ the family of the secant lines of $`\mathrm{\Gamma }.`$ By Terracini’s Lemma, these lines are also the fibres of the Gauss map. Hence $`dim(\check{X})=2`$ and for the family $`\mathrm{\Sigma }_1`$ we have $`\mu _1=1`$.
Since $`deg(X)=3,`$ the intersection of $`X`$ with its tangent space along $`r`$ is a cubic surface which is singular along $`r,`$ hence ruled. These new lines form the second family $`\mathrm{\Sigma }_2`$.
With a suitable choice of coordinates, a concrete case of such an $`X`$ is given by the equation:
$$y_4+y_1y_4-y_2^2-y_3^2-y_1y_2^2-2y_2y_3y_4-y_4^3=0,$$
and the line $`r`$ defined by $`y_2=y_3=y_4=0`$ is one of the secant lines of $`\mathrm{\Gamma }.`$ Now $`F_2=y_2^2+y_3^2`$ and $`F_3=y_1y_2^2`$ (up to sign). Then, the curves $`F_2=0`$ and $`F_3=0`$ do not intersect transversally at $`[1,0,0]`$, and $`\mathrm{\Sigma }_1`$ is not reduced at $`r.`$ In fact, on any line of $`\mathrm{\Sigma }_1`$ there are two points of $`Sing(X).`$ This shows that the hypothesis “$`\mathrm{\Sigma }_1`$ is generically reduced” in Theorem 1.1 is essential.
Note also that the curves $`F_2=0`$ and $`F_3=0`$ intersect outside $`[1,0,0]`$ transversally at two points. These points represent two lines on $`X`$ through $`p`$, which belong to $`\mathrm{\Sigma }_2`$. Therefore $`\mu _2=2.`$
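Explicitly (a routine verification of the statements above): over the algebraically closed field $`K`$ one factors $`F_2=(y_2+iy_3)(y_2-iy_3)`$ with $`i^2=-1`$, so

$$\{F_2=F_3=0\}=\{[1,0,0]\}\cup \{[0,\pm i,1]\},$$

where both curves are singular at $`[1,0,0]`$ (whence the non-reduced intersection there), while at $`[0,\pm i,1]`$ the branches $`y_2=\pm iy_3`$ of $`F_2=0`$ and the line $`y_1=0`$ inside $`F_3=0`$ have distinct tangents, so the intersection is transversal.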
The following proposition deals with a delicate point, namely the possibility for a general line $`r`$ of $`\mathrm{\Sigma }`$ to be contained in a plane which is tangent to $`X`$ at any point of $`r.`$
###### Proposition 1.5
Let $`X\subset 𝐏^4`$ be an irreducible hypersurface covered by the lines of a family of dimension $`2`$ such that $`\mathrm{\Sigma }`$ is generically reduced. Let $`r\in \mathrm{\Sigma }`$ be general. Then there is no plane containing $`r`$ which is tangent to $`X`$ at any general point of $`r`$.
Proof Assume by contradiction that there exists a plane $`M`$ such that $`M\subset T_qX`$ for every $`q\in r\cap X_{sm}.`$ We perform some local computations and we use the same notations as in Proposition 1.3. So, let $`𝐀^4`$ be an affine chart in $`𝐏^4,`$ with coordinates $`y_1,\dots ,y_4.`$ Assume that the origin is a general point $`p`$ of $`X,`$ and that $`T_pX`$ is defined by $`y_4=0.`$ Let $`r`$ and $`M`$ be defined respectively by $`y_2=y_3=y_4=0`$ and $`y_3=y_4=0.`$ Let $`G=G_1+G_2+\cdots +G_n=0`$ be the equation of $`X`$ in this chart. We write also $`G_i=F_i+y_4H_i,`$ where the $`F_i`$’s are homogeneous polynomials in $`y_1,y_2,y_3.`$ Since the line $`r`$ is represented in $`𝐏(T_pX)`$ by the point $`[1,0,0],`$ we have
$$F_i=y_1^{i-1}A_{i,1}(y_2,y_3)+y_1^{i-2}A_{i,2}(y_2,y_3)+\cdots +A_{i,i}(y_2,y_3),$$
where the $`A_{i,j}`$ are homogeneous polynomials of degree $`j,`$ or zero.
Now, if we move the origin of our system of coordinates to the point $`q\in r`$ by a change of coordinates of type $`Y_1=y_1-t`$ and $`Y_i=y_i`$ for $`i=2,3,4`$, $`t\in K`$ (hence $`q=(t,0,0,0)`$), then in the new system of coordinates $`X`$ is defined by the equation
$$\tilde{G}_t(Y_1,\dots ,Y_4)=G(Y_1+t,Y_2,Y_3,Y_4)=Y_4+\sum _{i=2}^{n}\{F_i(Y_1+t,Y_2,Y_3)+Y_4H_i(Y_1+t,Y_2,Y_3,Y_4)\}$$
$$=(1+f(t))Y_4+\sum _{i=2}^{n}t^{i-1}A_{i,1}(Y_2,Y_3)+\mathrm{H.O.T.},$$
where $`f(t)\in K.`$ Now, since $`M\subset T_qX`$ for every $`q\in r\cap X_{sm},`$ the above equation shows that necessarily the linear term of $`\tilde{G}_t`$ belongs to the ideal $`(Y_3,Y_4)`$ for every $`t\in K.`$ Therefore, the linear forms $`A_{i,1}(Y_2,Y_3)`$ are in the ideal $`(Y_3)`$ for every $`i\ge 2.`$ But in this case the curves in $`𝐏(T_pX)`$ defined by $`F_i=0`$ are either singular at $`[1,0,0]`$, or have tangent line $`y_3=0`$ at $`[1,0,0].`$ This contradicts Prop. 1.3, and the proof is complete.
From Proposition 1.5 we will deduce the following very useful corollaries.
Let $`\gamma :X\dashrightarrow \check{𝐏}^4`$ be the Gauss map, which is defined on the smooth locus $`X_{sm}`$ of $`X`$. The closure of the image is $`\check{X}`$, the dual variety of $`X.`$ If $`dim\check{X}<3`$, then the fibres of $`\gamma `$ are linear subvarieties of $`X,`$ and the tangent space to $`X`$ is constant along each fibre.
###### Corollary 1.6
Let $`X\subset 𝐏^4`$ be an irreducible hypersurface covered by the lines of a family of dimension $`2`$ such that $`\mathrm{\Sigma }`$ is generically reduced. Then the dual variety $`\check{X}`$ of $`X`$ is a hypersurface of $`\check{𝐏}^4.`$
Proof First of all, the dimension of $`\check{X}`$ must be at least $`2`$: otherwise $`X`$ would contain a $`1`$-dimensional family of planes, hence a $`3`$-dimensional family of lines, a contradiction. So assume by contradiction that $`dim(\check{X})=2.`$ But then along each fibre of the Gauss map there is even a fixed tangent hyperplane, contradicting Proposition 1.5.
###### Corollary 1.7
Let $`X\subset 𝐏^4`$ be an irreducible hypersurface covered by the lines of a family of dimension $`2`$ such that $`\mathrm{\Sigma }`$ is generically reduced. Let $`\mathrm{\Sigma }_1`$ be an irreducible component of $`\mathrm{\Sigma }`$ of dimension two, such that $`\mu _1>1`$. Let $`r\in \mathrm{\Sigma }_1`$ be general, and set $`\sigma _1(r)=\{r^{\prime }\in \mathrm{\Sigma }_1\,|\,r\cap r^{\prime }\ne \emptyset \}`$. Then $`r\notin \sigma _1(r)`$.
Proof Assume the contrary. Then, when $`r^{\prime }\in \sigma _1(r)`$ moves on $`\sigma _1(r)`$ to $`r,`$ the plane $`\langle r^{\prime },r\rangle `$ moves to a limit plane $`M.`$ The intersection $`X\cap M`$ is a curve which has the line $`r`$ as a “double component”; in particular, this curve is singular along $`r.`$
Then $`M\subset T_qX`$ for every $`q\in r\cap X_{sm}.`$ In fact, if $`M\not\subset T_qX,`$ then $`X\cap M`$ would be smooth at $`q,`$ a contradiction. But a plane tangent to $`X`$ at every general point of $`r`$ contradicts Proposition 1.5, and the proof is complete.
Let $`\mathcal{F}`$ be the $`4`$-dimensional family of planes introduced in the proof of Theorem 1.1. We will consider now its subfamily $`\mathcal{F}^{\prime }`$ of dimension $`3`$, formed by the planes generated by pairs of coplanar lines of $`\mathrm{\Sigma }`$.
###### Proposition 1.8
Let $`\pi `$ be a general plane of $`\mathcal{F}^{\prime }`$ generated by the lines $`r`$ and $`r^{\prime }`$ of $`\mathrm{\Sigma }`$. Then $`\pi `$ is tangent to $`X`$ at exactly $`3`$ points of $`r\cup r^{\prime }`$ (but maybe $`\pi `$ is tangent to $`X`$ elsewhere, outside $`r\cup r^{\prime }`$).
Proof By Theorem 1.1 there are two tangency points of $`\pi `$ to $`X`$ on $`r`$ and two on $`r^{\prime }`$. The point $`r\cap r^{\prime }`$ is singular for $`X\cap \pi `$, but it cannot be singular for $`X,`$ because, otherwise, letting $`r`$ and $`r^{\prime }`$ vary, every point of $`X`$ would be singular. So $`r\cap r^{\prime }`$ is a tangency point of $`\pi `$ to $`X`$. Hence, $`\pi `$ is tangent to $`X`$ at exactly three points lying on $`r`$ or $`r^{\prime }`$.
To prove the next proposition, and also in the sequel, we will need the following refined form of the connectedness principle of Zariski, due to A. Nobile ():
###### Lemma 1.9
Let $`f:X\to T`$ be a flat family of projective curves, parametrized by a quasi-projective smooth curve, such that the fibres $`X_t`$ are all reduced and $`X_t`$ is irreducible for $`t\ne 0`$. Assume that, for $`t\ne 0`$, $`X_t`$ has a fixed number $`d`$ of singular points $`P_1^t,\dots ,P_d^t`$ and that there exist $`d`$ sections $`s_j:T\to X`$ such that $`s_j(t)=P_j^t`$ if $`t\ne 0`$, that $`s_i(t)\ne s_j(t)`$ if $`i\ne j`$ and that $`\delta (X_t,P_j^t)`$ is constant (where $`\delta (X_t,P_j^t)`$ denotes the length of the quotient $`\overline{A}/A`$, $`A`$ being the local ring of $`X_t`$ at $`P_j^t`$ and $`\overline{A}`$ its normalization). If the singularities of $`X_0`$ are $`s_1(0),\dots ,s_d(0),Q_1,\dots ,Q_r`$, then $`X_0\setminus \{s_1(0),\dots ,s_d(0)\}`$ is connected.
###### Proposition 1.10
Let $`n\ge 4`$ and let $`\pi `$ be a general plane of an arbitrary irreducible component of $`\mathcal{F}^{\prime }`$. Then $`\pi `$ does not contain three lines of $`\mathrm{\Sigma }`$.
Proof Assume by contradiction that $`\pi `$ contains the lines $`r,r^{\prime },r^{\prime \prime }`$. Then the residual curve of $`r`$ in $`\pi \cap X`$ splits as $`r^{\prime }\cup r^{\prime \prime }\cup C`$. Hence, by Lemma 1.9, there is a new tangency point on $`r^{\prime }\cup r^{\prime \prime }`$, against Proposition 1.8.
Since our hypersurfaces $`X\subset 𝐏^4`$ contain “too many” lines if $`n\ge 4,`$ it is quite natural that they are far from general in the linear system of all hypersurfaces of $`𝐏^4`$ of a fixed degree $`n.`$ In fact, it will turn out that, if $`n\ge 4,`$ none of them is linearly normal. Hence their singular loci always have dimension $`2.`$ We will now prove this last property directly, under the more restrictive assumption that $`n\ge 5,`$ which is sufficient for our application of the theorem.
###### Theorem 1.11
Let $`X\subset 𝐏^4`$ be a hypersurface of degree $`n\ge 5,`$ covered by a family of lines $`\mathrm{\Sigma }`$ of dimension $`2`$, with $`\mu >1`$. Let $`\mathrm{\Delta }`$ denote the singular locus of $`X`$. Then $`\mathrm{\Delta }`$ is a surface. If $`X`$ is not birationally ruled by quadrics, then $`\mathrm{deg}(\mathrm{\Delta })\ge 2(n-3)`$.
Proof We assume by contradiction that $`\mathrm{\Delta }`$ is a curve. Then every point of $`\mathrm{\Delta }`$ belongs to infinitely many lines of $`\mathrm{\Sigma }`$. The curve $`\mathrm{\Delta }`$ is not a line, because every line of $`\mathrm{\Sigma }`$ meets $`\mathrm{\Delta }`$ in $`n-3`$ points, and $`n\ge 5.`$ If $`x\in X`$ is general, from $`\mu >1`$ it follows that through $`x`$ there are two secant lines of $`\mathrm{\Delta },`$ say $`r`$ and $`s.`$ By Terracini’s lemma the tangent space to $`X`$ must be constant along $`r`$ and also along $`s.`$ Therefore, the plane spanned by $`r`$ and $`s`$ is (contained in) a fibre of the Gauss map, hence it is contained in $`X.`$ So, through a general point of $`X`$ there is a plane on $`X,`$ a contradiction. This proves that $`\mathrm{\Delta }`$ is a surface.
To prove the assertion on the degree, we consider a general plane of $`^{}`$. If it intersects $`\mathrm{\Delta }`$ properly, then this intersection contains at least $`2(n-3)`$ points, and the claim follows. If the intersection is not proper, then $`\mathrm{\Delta }`$ contains a family of plane curves of dimension $`3,`$ hence it is a plane. Let $`H`$ be a hyperplane containing $`\mathrm{\Delta };`$ then $`X\cap H`$ splits as the union of $`\mathrm{\Delta }`$ with a surface $`S.`$ If $`P\in S`$ is general, there are at least two lines on $`X`$ passing through $`P.`$ Each of them meets $`\mathrm{\Delta },`$ hence is contained in $`H,`$ and therefore in $`S.`$ This shows that $`S`$ is a union of smooth quadrics.
We will give in the next proposition some generalities on the surfaces $`\sigma (r)`$.
###### Proposition 1.12
Let $`X\subset 𝐏^4`$ be a hypersurface of degree $`n`$ covered by the lines of the family $`\mathrm{\Sigma }`$ of dimension $`2`$, with $`\mu \ge 2`$. Let $`r`$ be a general line of $`\mathrm{\Sigma }`$ and $`\sigma (r)`$ be the union of the lines of $`\mathrm{\Sigma }`$ intersecting $`r`$. Then:
(i) $`\sigma (r)`$ is a ruled surface, having $`r`$ as a line of multiplicity $`\mu -1`$;
(ii) if the surfaces $`\sigma (r)`$ describe, as $`r`$ varies in $`\mathrm{\Sigma }`$, an algebraic family in $`X`$ of dimension $`<2`$, then $`X`$ is covered by a $`1`$-dimensional family of quadrics such that there is one and only one quadric of the family passing through any general point of $`X`$.
Proof The first assertion of $`(i)`$ is clear. To prove the second, it is enough to observe that exactly $`\mu -1`$ lines of $`\mathrm{\Sigma }`$, different from $`r`$, pass through a general point of $`r`$, and that these lines are separated by the blow-up of $`X`$ along $`r`$.
The assumption of $`(ii)`$ means that, for every $`r`$, the lines of $`\mathrm{\Sigma }`$ intersecting $`r`$ also intersect infinitely many other lines of the family, so $`\sigma (r)`$ is doubly ruled, hence it is a smooth quadric, or a finite union of smooth quadrics. In the second case, the algebraic family described by the surfaces $`\sigma (r)`$ has dimension two, so this case is excluded.
We will refer to threefolds $`X`$ as in $`(ii)`$ as “quadric bundles”.
In the following we will analyze the self–intersection of the curves $`\sigma (r)`$ on $`\mathrm{\Sigma }`$, assuming it is positive. If the family of these curves is one–dimensional, then the self–intersection is zero and $`X`$ is a quadric bundle. This is the reason why we exclude quadric bundles in our classification.
Our final task concerning the surfaces $`\sigma (r)`$ will be the determination of their degree. For this we need another proposition.
Let $`r`$ and $`r^{}`$ denote two general lines in the same irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }.`$ We will call $`\overline{\mu }_i`$ the number of lines of the whole family $`\mathrm{\Sigma }`$ intersecting both $`r`$ and $`r^{}.`$
Recall that, for every $`r\in \mathrm{\Sigma },`$ the curve $`\sigma (r)\subset \mathrm{\Sigma }`$ (we now switch our point of view) parametrizes the lines of $`\mathrm{\Sigma }`$ intersecting $`r.`$ If $`\mathrm{\Sigma }`$ is reducible, with $`\mathrm{\Sigma }_1,\dots ,\mathrm{\Sigma }_s`$ as irreducible components of dimension $`2,`$ then the curves $`\sigma (r)`$ are unions $`\sigma _1(r)\cup \dots \cup \sigma _s(r)`$, where $`\sigma _i(r)`$ is formed by the lines of $`\mathrm{\Sigma }_i`$ intersecting $`r.`$ Note that, if $`\mu _i=1`$ for some index $`i`$ and $`r\in \mathrm{\Sigma }_i`$, then $`\sigma _i(r)`$ is empty.
Then $`\overline{\mu }_i`$ is the intersection number $`\sigma (r)\cdot \sigma (r^{})`$ on (a normalization of) $`\mathrm{\Sigma }.`$
###### Proposition 1.13
Let $`X`$ be a threefold such that $`deg(X)>3.`$ Let $`r`$ and $`r^{}`$ be two general lines in the same irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }.`$ Then $`\overline{\mu }_i=\mu -2`$ (independent of $`i`$!)
Proof To evaluate $`\overline{\mu }=\sigma (r)\cdot \sigma (r^{})`$ we choose the lines $`r`$ and $`r^{}`$ so that they intersect at a point $`p,`$ smooth for $`X.`$ Since $`deg(X)>3,`$ by Proposition 1.10 we can also assume that $`r`$ and $`r^{}`$ are the only lines of $`\mathrm{\Sigma }`$ contained in the plane $`\langle r,r^{}\rangle `$, so that the lines intersecting both $`r`$ and $`r^{}`$ are those passing through $`p`$. The conclusion follows from Corollary 1.7.
###### Proposition 1.14
Assume $`\mathrm{deg}X\ge 4`$ and let $`r`$ be a general line on $`X.`$ Then $`\mathrm{deg}\sigma (r)=3\mu -4`$.
Proof Note first that $`\mathrm{deg}\sigma (r)`$ is equal to the degree of the curve obtained by intersecting $`\sigma (r)`$ with a hyperplane $`H`$. We can assume $`r\subset H;`$ then $`H\cap \sigma (r)`$ splits into the union of $`r`$ with $`m`$ other lines meeting $`r`$. Indeed, if $`P\in H\cap \sigma (r)`$ and $`P\notin r`$, there exists a line passing through $`P`$ and meeting $`r`$, which is necessarily contained in $`H`$. Moreover, $`\sigma (r)`$ and $`H`$ meet along $`r`$ with intersection multiplicity $`\mu -1`$ (Proposition 1.12). Therefore $`\mathrm{deg}\sigma (r)=\mu -1+m`$.
To compute $`m`$, the number of lines meeting $`r`$ and contained in a $`3`$-space $`H`$, we can assume that $`H`$ is tangent to $`X`$ at a point $`P`$ of $`r`$. In this case $`H`$ contains the $`\mu -1`$ lines through $`P`$ different from $`r`$. To control the other $`m-(\mu -1)`$ lines, we use the following degeneration argument.
Since $`H`$ is tangent to $`X`$ at $`P\in r,`$ the intersection multiplicity of $`\mathrm{\Sigma }`$ and $`𝐆(1,H)`$ at $`r`$ is $`2`$ (this will be proved in §2, Lemma 2.3). According to the so-called “dynamical interpretation of the multiplicity of intersection”, in any hyperplane $`H^{}`$ “close” to $`H`$ (if we are working over $`𝐂`$ this means: in a suitable neighbourhood of $`H`$ for the Euclidean topology of $`\stackrel{ˇ}{𝐏^4}`$) there are two distinct lines $`g,g^{}\in \mathrm{\Sigma }`$ which both have $`r`$ as limit position when $`H^{}`$ specializes to $`H.`$ Note that the lines $`g`$ and $`g^{}`$ are skew, because otherwise $`g\in \sigma (g^{}),`$ which becomes $`r\in \sigma (r)`$ when $`H^{}`$ specializes to $`H,`$ a contradiction with Prop. 1.7.
Therefore, we can choose a family of $`3`$-spaces $`H_t`$, parametrized by a smooth curve $`T`$, such that $`H_0=H`$ and, for general $`t`$, $`H_t`$ is generated by two skew lines $`r_t`$ and $`r_t^{}`$, both having $`r`$ as limit position for $`t=0`$. The lines in $`H`$ meeting $`r`$ come from lines in $`H_t`$ meeting either $`r_t`$ or $`r_t^{}`$. In other words, the intersections $`\sigma (r_t)\cap H_t`$ and $`\sigma (r_t^{})\cap H_t`$ both move to $`\sigma (r)\cap H`$. Therefore, to preserve the degree of these intersections, the remaining lines intersecting $`r`$ have to come from the $`\overline{\mu }`$ lines of $`H_t`$ meeting both $`r_t`$ and $`r_t^{}`$. Note that, if $`l`$ is one of these “remaining” lines, then the multiplicity of $`l`$ in $`\mathrm{\Sigma }\cap 𝐆(1,H)`$ is $`1.`$ In fact, otherwise, $`H`$ would be tangent to $`X`$ at some point of $`l;`$ but $`H`$ is already tangent to $`X`$ at $`P,`$ and $`P\notin l.`$ We can conclude by the previous proposition that $`m=\overline{\mu }+\mu -1=2\mu -3`$.
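Spelling out the count implicit in the last two steps (using $`\overline{\mu }=\mu -2`$ from Proposition 1.13; this only makes the arithmetic explicit):

$$\mathrm{deg}\sigma (r)=(\mu -1)+m=(\mu -1)+(\overline{\mu }+\mu -1)=3\mu -4.$$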
## 2 Bounds for $`\mu `$
It is well known that for a surface covered by the lines of a $`1`$-dimensional family, there are at most two lines through any general point. The following theorem is the analogue for threefolds.
###### Theorem 2.1
Let $`X\subset 𝐏^4`$ be a $`3`$-fold covered by lines. Assume that the Fano scheme $`\mathrm{\Sigma }`$ is generically reduced and of dimension $`2`$. Then $`\mu \le 6`$.
Proof It was already remarked in the Introduction that for the degree $`n`$ of $`X`$ we have $`n\ge 3`$. Let $`p`$ be a general point of $`X`$ and fix a system of affine coordinates $`y_1,\dots ,y_4`$ such that $`p=(0,\dots ,0)`$. Let $`G=G_1+\dots +G_n`$ be the equation of $`X.`$ As usual, we assume that $`T_pX`$ is defined by $`y_4=0,`$ and, moreover, we write $`G_i=F_i+y_4H_i,`$ for $`i\ge 2.`$
The polynomials $`F_2,\dots ,F_n`$ define (if not zero) curves in the plane $`𝐏(T_pX).`$ In particular, $`F_2=0`$ is a conic $`C_2,`$ whose points represent tangent lines to $`X`$ having at $`p`$ a contact of order $`>2`$, and $`F_3=0`$ is a cubic $`C_3;`$ the points of $`C_2\cap C_3`$ represent the tangent lines to $`X`$ having at $`p`$ a contact of order $`>3`$, and so on. Clearly the points of $`𝐏(T_pX)`$ corresponding to lines contained in $`X`$ are exactly those of $`C_2\cap C_3\cap \dots \cap C_n.`$
We have $`F_2\ne 0`$ at any general point of $`X`$ because, otherwise, $`X`$ would be a hyperplane of $`𝐏^4.`$ On the other hand, since $`deg(X)\ge 3,`$ at any general point of $`X`$ we also have that $`F_3`$ is not a multiple of $`F_2`$ (, Lemma (B.16)). In particular, we have $`F_3\ne 0,`$ and $`C_2`$ is not contained in $`C_3.`$
Now, $`dim(\stackrel{ˇ}{X})=3,`$ so $`C_2`$ is an irreducible conic (see or ). Since $`C_2`$ is irreducible and not contained in $`C_3,`$ Bézout's theorem gives $`\mu \le C_2\cdot C_3=2\cdot 3=6,`$ and we are done.
###### Remark 2.2
Actually, it is possible to give a proof of Theorem 2.1 which is independent of Theorem 1.6, hence of the assumption that $`\mathrm{\Sigma }`$ is generically reduced.
###### Lemma 2.3
For general $`H\in \stackrel{ˇ}{𝐏^4}`$ the intersection $`\mathrm{\Sigma }\cap 𝐆(1,H)`$ is proper. Moreover, if $`r\in \mathrm{\Sigma }`$ is general and $`r\subset H,`$ then the intersection multiplicity of $`\mathrm{\Sigma }`$ and $`𝐆(1,H)`$ at $`r`$ is always $`\le 2`$ and it is $`1`$ if and only if $`H`$ is not tangent to $`X`$ at any point of $`r\cap X_{sm}.`$
Proof The first part of the statement was already shown in the proof of Thm. 1.1.
Moreover, in the same proof we saw that, if $`H`$ is not tangent to $`X`$ at any point of $`r,`$ then $`T_r\mathrm{\Sigma }`$ and $`T_r𝐆(1,H)`$ are transversal inside $`T_r𝐆(1,4).`$ In fact, the GCD of the polynomials $`\overline{G}_{x_1}`$ in $`(\text{1})`$ has degree exactly $`n-3.`$ Hence the double structure on $`r`$ they define has arithmetic genus $`2`$ and does not represent any vector in $`T_r𝐆(1,H).`$ Therefore $`T_r\mathrm{\Sigma }\cap T_r𝐆(1,H)=(0)`$ and the intersection is transversal.
So we have proved that $`i(r)=1`$ if and only if $`H`$ is not tangent to $`X`$ at any point of $`r.`$ Hence, we assume now that $`H`$ is tangent to $`X`$ at some point of $`r.`$ To show that $`i(r)\le 2`$ we perform some local computations. Let $`[x_0,\dots ,x_4]`$ be a system of homogeneous coordinates in $`𝐏^4`$ such that the line $`r`$ is defined by the equations $`x_2=x_3=x_4=0.`$ Let $`H=T_PX,`$ where $`P=[0,1,0,0,0]`$ and $`H`$ is defined by $`x_4=0.`$ Let $`[p_{01},\dots ,p_{34}]`$ be the related Plücker coordinates. So $`r`$ has coordinates $`[1,0,\dots ,0],`$ hence $`p_{01}\ne 0.`$ We will restrict, from now on, to work in the affine chart $`U_{01}`$ of $`𝐆(1,4)`$ given by $`p_{01}\ne 0;`$ coordinates in this chart are $`p_{02},p_{03},p_{04},p_{12},p_{13}`$ and $`p_{14}`$. The equations of $`𝐆(1,H)`$ inside $`U_{01}`$ are $`p_{04}=p_{14}=0`$. Then the general point of a line of $`U_{01}\cap 𝐆(1,H)`$ is $`[s,1,p_{02}-sp_{12},p_{03}-sp_{13},0]`$.
In a suitable system of coordinates, the equation of $`X`$ is of the form:
$$F=x_2\mathrm{\Psi }x_0^2+x_3\mathrm{\Psi }x_0x_1+x_4\mathrm{\Psi }x_1^2+ax_2^2+bx_2x_3+cx_2x_4+\cdots +fx_4^2+\text{terms of degree }>2\text{ in }x_2,x_3,x_4\qquad (1)$$
where $`\mathrm{\Psi },a,\dots ,f\in K[x_0,x_1]`$ are forms of degree $`n-3`$ and $`n-2`$ respectively. Here we have used the condition $`r\subset X`$. Moreover the homogeneous part of degree $`1`$ in $`x_2,x_3,x_4`$ of $`F`$ can be normalized in this way because there is no fixed tangent plane to $`X`$ along $`r`$ ().
From $`P\notin Sing(X)`$ it follows that the coefficient of $`x_1^{n-3}`$ in the polynomial $`\mathrm{\Psi }`$ is not zero, and we can set $`\mathrm{\Psi }=x_1^{n-3}+\rho _1x_0x_1^{n-4}+\rho _2x_0^2x_1^{n-5}+\cdots `$. The point $`P`$ is $`(0,0,0,0)`$ in the affine chart $`x_1\ne 0`$, and if we dehomogenize $`F`$ w.r.t. $`x_1`$ we get
$${}_{}{}^{a}F=({}_{}{}^{a}F)_1+({}_{}{}^{a}F)_2+\cdots =x_4+x_0x_3+\rho _1x_0x_4+V(0,1,x_2,x_3,x_4)+\cdots \qquad (2)$$
where $`V:=ax_2^2+bx_2x_3+cx_2x_4+\cdots +fx_4^2`$.
The condition that the line lies in $`X`$ implies that $`F(s,1,p_{02}-sp_{12},p_{03}-sp_{13},0)`$ is identically zero as a polynomial in $`s`$. If we set $`F(s,1,p_{02}-sp_{12},p_{03}-sp_{13},0)=\alpha +\beta s+\gamma s^2+\delta s^3+\cdots `$, then we can compute $`\alpha ,\beta ,\gamma ,\delta `$ from (2), and we get:
$$\begin{array}{c}\alpha =\overline{a}p_{02}^2+\overline{b}p_{02}p_{03}+\overline{d}p_{03}^2\\ \\ \beta =p_{03}+\text{terms of degree }>1\text{ in }p_{12},p_{13},p_{02},p_{03}\\ \\ \gamma =-p_{13}+p_{02}+\rho _1p_{03}+\text{terms of degree }>1\text{ in }p_{12},p_{13},p_{02},p_{03}\\ \\ \delta =-p_{12}+\rho _1(p_{13}+p_{03})+\rho _2p_{03}+\text{terms of degree }>1\text{ in }p_{12},p_{13},p_{02},p_{03}\end{array}$$
where $`\overline{a},\overline{b},\overline{d}`$ are the constant terms of the polynomials $`a(x_0,1),b(x_0,1)`$ and $`d(x_0,1)`$ respectively. Note that $`\alpha ,\beta ,\gamma ,\delta `$ are some of the equations of $`𝐆(1,H)\cap \mathrm{\Sigma }.`$
By setting $`\beta =\gamma =\delta =0`$ we define, inside the four dimensional affine chart $`U_{01}\cap 𝐆(1,H),`$ a curve which is smooth at $`(0,0,0,0),`$ the point in $`𝐆(1,H)`$ which represents $`r.`$ The direction of the tangent line to this curve at $`r`$ is given by the vector $`(\rho _1,1,1,0).`$
Assume by contradiction that $`i(r)>2.`$ Then this vector annihilates $`\alpha `$, and $`\overline{a}=0.`$ It follows that $`x_0`$ divides $`a(x_0,x_1)`$, hence $`x_2^2a(x_0,1)`$ does not give any contribution to $`({}_{}{}^{a}F)_2`$. Therefore, the reduction modulo $`x_4`$ (the equation of $`T_PX`$ in $`𝐏^4`$) of the polynomial $`({}_{}{}^{a}F)_2`$ is $`x_3(x_0+\overline{b}x_2+\overline{d}x_3),`$ which is reducible. This is the equation of the conic $`C_2`$ embedded in $`𝐏(T_PX)`$. But, since the dual variety of $`X`$ has dimension $`3`$, by Theorem 1.6 this conic should be smooth (, ): a contradiction.
###### Theorem 2.4
Let $`X`$ be a threefold such that $`deg(X)>3.`$ Then $`\mu \le 4.`$
Proof We analyze in detail the case $`\mu =5`$. A similar proof can be given if $`\mu =6`$. For a different proof of this last case, see .
Let us recall that $`\overline{\mu }=3`$, so given $`r,r^{}\in \mathrm{\Sigma }`$ general and skew, there are three lines $`a,b,c\in \mathrm{\Sigma }`$ meeting both $`r`$ and $`r^{}`$.
The lines $`a,b,c`$ are pairwise skew, otherwise $`r,r^{}`$ would fail to be skew. Since $`\overline{\mu }=3,`$ there exists a third line in $`\mathrm{\Sigma }`$, besides $`r`$ and $`r^{}`$, meeting both $`a`$ and $`b.`$ The same conclusion holds for the pairs $`(a,c)`$ and $`(b,c).`$
Claim: If $`r`$ and $`r^{}`$ are general lines of $`\mathrm{\Sigma }`$, then the three lines of $`\mathrm{\Sigma }`$ constructed above starting from the pairs $`(a,b)`$, $`(a,c)`$ and $`(b,c)`$ are distinct.
Assume the contrary. Then there exists a unique line $`s\in \mathrm{\Sigma }`$, different from both $`r`$ and $`r^{}`$, which meets $`a,b,c`$. Note that all the six lines $`a`$, $`b`$, $`c`$, $`r`$, $`r^{}`$, $`s`$ are contained in the linear span of $`r`$ and $`r^{}`$.
We consider now a family of pairs of lines $`\{(r_t,r_t^{})\}`$ on $`X`$, parametrized by a smooth quasi–projective curve $`T`$, such that $`r_t`$ and $`r_t^{}`$ are disjoint for a general $`t\in T,`$ while for $`t=0`$ the lines $`r_0`$ and $`r_0^{}`$ meet at a point $`P`$, general on $`X`$. Therefore for $`t`$ general $`\alpha _t:=\langle r_t,r_t^{}\rangle `$ is a $`𝐏^3`$: we get a family of $`3`$-spaces whose limit position $`\alpha _0`$ is the tangent space $`T_PX`$.
We can assume that the plane of $`r_0`$ and $`r_0^{}`$ does not contain other lines of $`\mathrm{\Sigma }`$ (because $`n>3`$). For general $`t`$, we have three lines $`a_t`$, $`b_t`$, $`c_t`$, meeting $`r_t`$ and $`r_t^{}`$, and a third line $`s_t`$, meeting $`a_t`$, $`b_t`$ and $`c_t`$, which exists by assumption. For $`t=0`$, the lines $`a_0`$, $`b_0`$, $`c_0`$ still meet $`r_0`$ and $`r_0^{}`$, and $`s_0`$ meets $`a_0`$, $`b_0`$ and $`c_0`$. Hence $`a_0`$, $`b_0`$, $`c_0`$ pass through $`P`$. By Corollary 1.7, $`s_0`$ cannot coincide with $`a_0`$, $`b_0`$ or $`c_0`$, therefore by the assumption $`\overline{\mu }=3`$ and $`\mu =5`$, either $`s_0=r_0`$ or $`s_0=r_0^{}`$.
Assume $`s_0=r_0`$.
By Lemma 2.3, the intersection multiplicity of $`𝐆(1,\alpha _0)`$ and $`\mathrm{\Sigma }`$ is two at each of the five points corresponding to the lines $`r_0`$, $`r_0^{}`$, $`a_0`$, $`b_0`$, $`c_0`$, therefore, by the dynamical interpretation of the intersection multiplicity, there exist four more lines in $`\alpha _t`$ moving to $`r_0^{}`$, $`a_0`$, $`b_0`$, $`c_0`$ respectively. Let $`u_t`$ be a line of $`\alpha _t`$, having $`r_0^{}`$ as limit position: by Corollary 1.7 $`r_t^{}\cap u_t=\emptyset `$.
Let us assume that $`u_t\cap (a_t\cup b_t\cup c_t\cup r_t\cup s_t)=\emptyset `$. In this case, from $`\overline{\mu }=3`$, it follows that there exist six lines in $`\alpha _t`$, three of them meeting both $`s_t`$ and $`u_t`$, three meeting both $`r_t`$ and $`u_t`$.
The limit position of each of these six lines passes through $`P`$: but in this way we get too many lines passing through $`P`$ in $`T_PX`$, contradicting the “multiplicity two” statement of Lemma 2.3.
Therefore $`u_t`$ meets either $`r_t`$ (or, symmetrically, $`s_t`$) or $`a_t`$ (or, symmetrically, $`b_t`$ or $`c_t`$).
Case $`(i)`$: $`u_t\cap r_t\ne \emptyset `$.
In this case $`u_t\cap s_t=\emptyset `$, otherwise we would have four lines meeting both $`r_t`$ and $`s_t`$. Also $`u_t\cap a_t=\emptyset `$ (and analogously $`u_t\cap b_t=\emptyset `$ and $`u_t\cap c_t=\emptyset `$), otherwise the three lines $`r_t`$, $`a_t`$ and $`u_t`$ would be coplanar. Therefore there exist three lines meeting $`u_t`$ and $`s_t`$, two more lines meeting $`u_t`$ and $`a_t`$, two meeting $`u_t`$ and $`b_t`$, two meeting $`u_t`$ and $`c_t`$: summing up, we get nine new lines.
We get again a contradiction with Lemma 2.3, because we have found $`16`$ lines tending to lines of $`T_PX`$ passing through $`P`$. We conclude that $`u_t\cap r_t=\emptyset `$.
Case $`(ii)`$: $`u_t\cap a_t\ne \emptyset `$.
So, since $`\overline{\mu }=3`$, $`u_t\cap b_t=u_t\cap c_t=\emptyset `$. In this case, we can construct four new lines, two meeting $`s_t`$ and $`u_t`$ and two meeting $`r_t`$ and $`u_t`$. Summing up, we have $`11`$ lines moving to lines of $`T_PX`$ passing through $`P`$: this contradiction proves the Claim.
Hence, given $`r`$ and $`r^{}`$, general lines on $`X`$, there exist lines $`a`$, $`b`$ and $`c`$ meeting both of them, and pairwise distinct lines $`s_1`$, $`s_2`$, $`s_3`$ meeting $`a`$ and $`b`$, $`a`$ and $`c`$, $`b`$ and $`c`$ respectively. Moreover: $`s_i\cap s_j=\emptyset `$ for $`i\ne j`$; $`r\cap s_i=r^{}\cap s_i=\emptyset `$ for all $`i`$.
Using the assumption $`\overline{\mu }=3`$, we get the existence of six more lines: $`l`$ meeting $`r`$ and $`s_1`$, $`l^{}`$ meeting $`r^{}`$ and $`s_1`$; $`m`$ meeting $`r`$ and $`s_2`$, $`m^{}`$ meeting $`r^{}`$ and $`s_2`$; $`n`$ meeting $`r`$ and $`s_3`$, $`n^{}`$ meeting $`r^{}`$ and $`s_3`$. Altogether there is a configuration of $`14`$ lines obtained from $`r`$ and $`r^{}`$.
The first observation is that the $`s_i`$’s tend to lines through $`P`$, but $`s_1`$ tends neither to $`a_0`$ nor to $`b_0`$, because $`s_1`$ meets $`a`$ and $`b`$. Therefore there are three possibilities, that we examine separately:
(i) $`s_1\to r_0`$; in this case the lines tending to $`r_0`$ are only $`r`$ and $`s_1`$. Now we consider $`s_2`$: there are two subcases:

* $`s_2\to r_0^{}`$: hence $`s_3\to a_0`$. Since $`l^{}`$ meets both $`r^{}`$ and $`s_1`$, it moves either to $`b_0`$ or to $`c_0`$; similarly $`m^{}`$, which meets both $`r^{}`$ and $`s_2`$, moves either to $`b_0`$ or to $`c_0`$, and also $`n`$ does the same. This contradicts Lemma 2.3 and Corollary 1.7.

* $`s_2\to b_0`$: then we consider $`l^{}`$, which moves either to $`a_0`$ or to $`c_0`$. If $`l^{}\to a_0`$: then $`s_3`$, which meets $`b`$ and $`c`$, goes to $`r_0^{}`$; $`m^{}`$, which meets $`r^{}`$ and $`s_2`$, goes to $`c_0`$; $`n`$, which meets $`r`$ and $`s_3`$, could go to $`a_0`$ or to $`b_0`$ or to $`c_0`$: but all three cases are excluded by Lemma 2.3 and Corollary 1.7 again. If $`l^{}\to c_0`$, the conclusion is similar.

(ii) $`s_1\to r_0^{}`$; this case is analogous to case (i).

(iii) $`s_1\to c_0`$. We consider $`s_2`$: since it meets $`a`$ and $`c`$, it goes to $`b_0`$, or to $`r_0`$, or to $`r_0^{}`$. The last two possibilities are excluded as in (i) and (ii) for $`s_1`$, so $`s_2\to b_0`$ and finally $`s_3\to a_0`$. By considering the limit positions of $`l`$, $`l^{}`$, $`m`$, we find that also in this case the “multiplicity two” statement of Lemma 2.3 is violated.
This concludes the proof.
The statement of Theorem 0.1 shows that the families of lines in $`𝐏^4`$ we want to classify are characterized by the number $`s`$ of irreducible components $`\mathrm{\Sigma }_1`$, $`\dots `$, $`\mathrm{\Sigma }_s`$ of $`\mathrm{\Sigma }`$ and by the corresponding $`\mu _i`$’s. Therefore the proof can be organized according to the following two possibilities:
* there exists an irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }`$ with $`\mu _i>1`$;
* for every irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }`$, $`\mu _i=1`$.
By Theorem 2.1, there are only finitely many values of $`s`$ and $`\mu _i`$ to analyze. A posteriori, it will turn out that, actually, in the first case there do not exist other irreducible components of $`\mathrm{\Sigma }.`$
## 3 There exists an irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }`$ with $`\mu _i>1`$
Let $`\mathrm{\Sigma }_i`$ be an irreducible component of $`\mathrm{\Sigma }`$ of dimension $`2,`$ such that $`\mu _i>1.`$ In this section we will consider and use only the lines of $`\mathrm{\Sigma }_i`$, e.g. for constructing the surfaces $`\sigma (r)`$ and so on. So, for simplicity, we will denote $`\mathrm{\Sigma }_i`$ by $`\mathrm{\Sigma }`$ and $`\mu _i`$ by $`\mu .`$ Note that Proposition 1.13 is still true (with the same proof) even if we use in the statement our “$`\mu `$” and “$`\overline{\mu }`$” defined by using only the lines of $`\mathrm{\Sigma }_i.`$
###### Proposition 3.1
Assume that $`X`$ is not a quadric bundle and that $`deg(X)>3.`$ Then $`\mu >2.`$
Proof Since we assume that $`X`$ is not a quadric bundle, the dimension of $`\{\sigma (r)\}_{r\in \mathrm{\Sigma }}`$ is $`2`$ by Prop. 1.12. Then, through a general point of $`\mathrm{\Sigma }`$ there are infinitely many curves $`\sigma (r),`$ and by Proposition 1.13 we conclude
$$\mu -2=\sigma (r)^2>0.$$
Then, if we assume that $`deg(X)>3`$ and that $`X`$ is not a quadric bundle, by the above proposition and by Theorem 2.4, the only possibilities for $`\mu `$ are $`\mu =3,4.`$
The case $`\mu =3.`$
###### Proposition 3.2
Let $`X\subset 𝐏^4`$ be a hypersurface of degree $`>3,`$ containing an irreducible family of lines $`\mathrm{\Sigma }`$ with $`\mu =3.`$ Then $`X`$ has degree $`5,`$ sectional genus $`\pi =1`$ and it is a projection of a Fano threefold of $`𝐏^6`$ of the form $`𝐆(1,4)\cap 𝐏^6`$.
Proof The algebraic system of dimension two $`\{\sigma (g)\}_{g\in \mathrm{\Sigma }}`$ on the surface $`\mathrm{\Sigma }`$ is linear because there is exactly one curve of the system passing through two general points ($`\overline{\mu }=1`$). Also the self–intersection is equal to $`\overline{\mu }=1`$, therefore $`\{\sigma (g)\}`$ is a homaloidal net of rational curves, which defines a birational map $`f`$ from $`\mathrm{\Sigma }`$ to the plane, such that the curves of the net correspond to the lines of $`𝐏^2`$. The degree of the curves $`\sigma (g)`$ is $`5`$ by Proposition 1.14. So the birational inverse of $`f`$ is given by a linear system of plane curves of degree $`5`$. Hence we get immediately the weak bound $`\mathrm{deg}\mathrm{\Sigma }\le 25`$. Let $`\nu `$ denote the number of lines of $`\mathrm{\Sigma }`$ contained in a general $`3`$-plane: by Schubert calculus, $`\mathrm{deg}\mathrm{\Sigma }=\mu n+\nu `$. To evaluate $`\nu `$, we consider two general skew lines $`r`$, $`r^{}`$ on $`X`$, generating a $`3`$-space $`H`$. The lines $`r`$ and $`r^{}`$ have a common secant line $`l`$. The set–theoretical intersection $`\sigma (r)\cap H`$ is the union of $`r`$, $`l`$ and two more lines $`l_1`$, $`l_2`$ by Proposition 1.14. Similarly we get two new lines $`m_1`$, $`m_2`$ in $`\sigma (r^{})\cap H`$. The line $`l_1`$ (resp. $`l_2`$) cannot meet both $`m_1`$ and $`m_2`$ because $`\overline{\mu }=1`$, so there are two new lines in $`H.`$
So we have found at least $`9`$ lines in $`H`$, hence $`\nu \ge 9`$. The assumption $`\mu =3`$ together with $`\nu \ge 9`$ gives at once $`n\le 5`$.
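Indeed, combining $`\mathrm{deg}\mathrm{\Sigma }=\mu n+\nu `$ with the bound $`\mathrm{deg}\mathrm{\Sigma }\le 25`$ obtained above (this only makes the arithmetic explicit):

$$3n+9\le 3n+\nu =\mathrm{deg}\mathrm{\Sigma }\le 25,$$

whence $`3n\le 16`$, i.e. $`n\le 5`$.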
Let $`S`$ be a general hyperplane section of $`X`$. If $`n=4`$, then it is well known (see for example the classical book of Conforto ) that under our assumptions one of the following happens: $`S`$ is a ruled surface (in particular a cone) or a Steiner surface or a Del Pezzo surface with a double irreducible conic. None of these surfaces is a section of a threefold $`X`$ with the required properties. In the first case $`X`$ would have a family of lines of dimension $`3`$, in the second case $`X`$ would be a cone, in the third case $`\mu =4`$ (see and ). Therefore the degree of $`X`$ is exactly $`5.`$
We can now apply Theorem 1.11, which gives $`\mathrm{deg}\mathrm{\Delta }\ge 4`$ since $`n=5.`$ If $`\pi `$ denotes the sectional genus of $`X`$ (i.e. the geometric genus of a general plane section of $`X`$) we deduce $`\pi \le 2`$: a general plane section is a quintic curve whose singular points include the at least four points of $`\mathrm{\Delta }`$ it meets, so $`\pi \le 6-4=2.`$
To exclude $`\pi =2,`$ we show that there exist planes containing three lines of $`\mathrm{\Sigma }`$. Indeed let $`r`$ be a general line of $`\mathrm{\Sigma }`$. We fix in $`𝐏^4`$ a $`3`$-plane $`H`$ not containing $`r`$, intersecting $`r`$ at a point $`O`$. Let $`\gamma :=\sigma (r)\cap H`$ be a hyperplane section of $`\sigma (r)`$. By Proposition 1.14, $`\sigma (r)`$ has degree $`5`$, hence there exists a trisecant line $`t`$ passing through $`O`$ and meeting $`\gamma `$ again at two points $`P`$ and $`Q`$. Let $`M`$ be the plane generated by $`r`$ and $`t`$: it contains also the lines of $`\sigma (r)`$ passing through $`P`$ and $`Q`$, so $`M`$ contains three lines contained in $`X`$. Now we consider $`M\cap \mathrm{\Delta }`$. By Lemma 1.9, in $`M\cap X=r\cup r^{}\cup r^{\prime \prime }\cup C`$ there must be a “new” tangency point, hence $`\mathrm{\Delta }\cap M`$ contains at least five points. Therefore $`\mathrm{deg}\mathrm{\Delta }\ge 5`$ and $`\pi \le 1`$. If $`\pi =0`$, the curves obtained by intersecting $`S`$ with its tangent planes have a new singular point, so they split. Then, by the Kronecker–Castelnuovo theorem, $`S`$ is ruled, a contradiction. So we have $`\pi =1`$ and $`S`$ is a projection of a linearly normal Del Pezzo surface $`S^{}`$ of $`𝐏^5`$ of the same degree $`5`$ (see ), which is necessarily a linear section of $`𝐆(1,4)`$. This proves the theorem.
The case $`\mu =4.`$
###### Proposition 3.3
Let $`X\subset 𝐏^4`$ be a hypersurface of degree $`>3,`$ containing an irreducible family of lines $`\mathrm{\Sigma }`$ with $`\mu =4.`$ Then $`X`$ has degree $`4`$ and sectional genus $`\pi =1`$, hence it is a projection of a Del Pezzo threefold of $`𝐏^5,`$ complete intersection of two quadric hypersurfaces of $`𝐏^5`$.
Proof Let $`\overline{g}\in \mathrm{\Sigma }`$ be general and set $`\sigma :=\sigma (\overline{g}),`$ for simplicity. Let $`\gamma `$ denote a normalization of $`\sigma .`$ The proof of the proposition is based on the following two lemmas.
###### Lemma 3.4
The curve $`\gamma `$ is irreducible, hyperelliptic of genus $`2.`$ Hence $`\gamma `$ can be embedded into $`𝐏^3`$ as a smooth quintic.
Let $`S\subset 𝐆(1,3)`$ be the surface parametrizing the secant lines of $`\gamma .`$ Let $`r\subset 𝐏^3`$ be a fixed general secant line of $`\gamma ;`$ we will denote by $`A`$ and $`B`$ the points of $`r\cap \gamma .`$ The family of all secant lines of $`\gamma `$ that intersect $`r`$ has three irreducible components: the secant lines through $`A,`$ those through $`B`$ and “the other ones”. This last component is represented on $`S`$ by an irreducible curve that we will denote by $`I_r.`$
###### Lemma 3.5
There exists a birational map $`\tau :\mathrm{\Sigma }\dashrightarrow S`$ such that the image via $`\tau `$ of every curve $`\sigma (g)\subset \mathrm{\Sigma }`$ is the curve $`I_{\tau (g)}`$ on $`S`$ just introduced. If $`g,g^{}\in \mathrm{\Sigma }`$ are general, then $`g\cap g^{}\ne \emptyset `$ if and only if $`\tau (g)\cap \tau (g^{})\ne \emptyset .`$
We will prove now Proposition 3.3 assuming Lemmas 3.4 and 3.5.
Let $`p`$ be a general point of $`𝐏^3,`$ $`p\notin \gamma .`$ There are four secant lines $`l_1,\dots ,l_4`$ of $`\gamma `$ through $`p`$ and we can assume that $`l_i=\tau (g_i),`$ with $`g_i\in \mathrm{\Sigma },`$ $`i=1,\dots ,4.`$ By Lemma 3.5 we have $`g_i\cap g_j\ne \emptyset `$ for every $`i\ne j.`$
The first possibility is that, for a general $`p\in 𝐏^3,`$ the four lines $`g_1,\dots ,g_4`$ all lie in a plane $`M_p\subset 𝐏^4.`$ By Prop. 1.10 the family of such planes has dimension at most $`2`$ and, therefore, the same plane $`M_p`$ corresponds to infinitely many points of $`𝐏^3.`$ This implies that every plane $`M_p`$ contains infinitely many lines of $`\mathrm{\Sigma },`$ hence $`M_p\subset X.`$ Then $`X`$ contains at least a $`1`$-dimensional family of planes: a contradiction.
Therefore, for a general $`p\in 𝐏^3,`$ the four lines $`g_1,\dots ,g_4`$ all contain one fixed point $`P\in X,`$ and we get a rational map $`\alpha :𝐏^3\dashrightarrow X`$ by setting $`\alpha (p):=P.`$ This map is dominant because $`\tau :\mathrm{\Sigma }\dashrightarrow S`$ is birational, and it has degree $`1,`$ because $`\mu =4.`$ Hence $`X`$ is birational to $`𝐏^3`$ via $`\alpha .`$
Note that $`\alpha `$ is not regular at the points of $`\gamma ,`$ so $`\alpha `$ is defined by a linear system of surfaces $`F\subset 𝐏^3`$ of degree $`m,`$ all containing $`\gamma .`$ Let $`s`$ be the maximum integer such that these surfaces contain the $`s^{th}`$ infinitesimal neighbourhood of $`\gamma .`$ So $`F\in |mH-(s+1)\gamma |,`$ where $`H`$ is a plane divisor in $`𝐏^3.`$ We claim that $`s=0`$ and $`m=3.`$
The second part of the statement of Lemma 3.5 makes clear that any secant line of $`\gamma `$ is transformed by $`\alpha `$ into a line of $`\mathrm{\Sigma }.`$ Therefore we must have $`m=2(s+1)+1`$ (a secant line meets $`\gamma `$ at two points, which the surfaces $`F`$ absorb with multiplicity $`s+1`$ each, and the image of the line must again have degree $`1`$); if we intersect one of the surfaces $`F`$ with the unique quadric surface $`Q`$ containing $`\gamma ,`$ by Bezout and $`deg(\gamma )=5`$ we get
$$2m=2\left[2(s+1)+1\right]\ge 5(s+1),$$
i.e. $`4s+6\ge 5s+5`$, hence $`s\le 1.`$
If $`s=1`$ we get $`m=5`$ and the surfaces $`F`$ contain the first infinitesimal neighbourhood of $`\gamma .`$ Let $`I\subset K[x_0,\dots ,x_3]`$ denote the saturated ideal of $`\gamma .`$ Since $`\gamma \subset 𝐏^3`$ is arithmetically Cohen-Macaulay, the saturated ideal of the first infinitesimal neighbourhood of $`\gamma `$ is $`I^2`$ (, 2.3.7). Now, $`I`$ can be minimally generated by one polynomial $`q`$ of degree $`2`$ (the equation of $`Q`$) and two polynomials of degree $`3;`$ therefore, every homogeneous polynomial of degree $`5`$ in $`I^2`$ must contain $`q`$ as a factor. So the case $`s=1`$ is excluded.
Hence, the linear system defining $`\alpha `$ is a system of cubic surfaces of $`𝐏^3,`$ containing $`\gamma `$ with multiplicity $`1.`$ The linear system of all such surfaces defines a rational map $`𝐏^3\dashrightarrow 𝐏^5,`$ whose image is a Del Pezzo threefold, complete intersection of two quadric hypersurfaces of $`𝐏^5`$. This completes the proof of Proposition 3.3.
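As a sanity check on the target $`𝐏^5`$ (a standard count, not part of the original argument): since $`\gamma \subset 𝐏^3`$ is arithmetically Cohen–Macaulay, the restriction of cubics to $`\gamma `$ is surjective, and $`𝒪_\gamma (3)`$ is non special of degree $`15`$ on the genus $`2`$ curve $`\gamma `$, so

$$h^0(𝐏^3,\mathcal{I}_\gamma (3))=h^0(𝒪_{𝐏^3}(3))-h^0(𝒪_\gamma (3))=20-(15+1-2)=6,$$

i.e. the cubics through $`\gamma `$ do map $`𝐏^3`$ to a $`𝐏^5`$.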
Proof of Lemma 3.4 The proof is divided into several steps.
Step 1. There is a birational map $`\psi :\mathrm{\Sigma }\dashrightarrow \sigma ^{(2)},`$ where $`\sigma ^{(2)}`$ denotes the symmetric product of the curve $`\sigma `$ with itself.
On $`\mathrm{\Sigma }`$ there is the algebraic system of curves $`\{\sigma (g)\}_{g\in \mathrm{\Sigma }},`$ of dimension $`2.`$ Since $`\overline{\mu }=2`$, there are exactly $`2`$ curves of the system containing two fixed general points on $`\mathrm{\Sigma };`$ moreover $`\sigma (g)^2=2`$.
The map $`\psi `$ is defined as follows: let $`r`$ be a general line of $`\mathrm{\Sigma }`$; let $`a,b`$ be the two lines of $`\mathrm{\Sigma }`$ intersecting both $`r`$ and $`\overline{g}.`$ The corresponding points on $`\mathrm{\Sigma }`$ actually lie on $`\sigma .`$ We set $`\psi :r\mapsto (a,b);`$ it is easily seen that $`\psi `$ is birational. Note that the map $`\psi `$ depends on the choice of $`\overline{g}\in \mathrm{\Sigma }.`$
In particular, from $`\mathrm{\Sigma }`$ irreducible it follows that $`\sigma `$ is also irreducible.
Step 2. The characteristic series of the algebraic system $`\{\sigma (g)\}_g`$ on the curve $`\sigma `$ is a complete $`g_2^1.`$ Therefore also the algebraic system $`\{\sigma (g)\}_g`$ is complete.
From the fact that the dimension and the degree of the algebraic system $`\{\sigma (g)\}_g`$ are both $`2`$, it follows at once that the characteristic series has degree $`2`$ and dimension $`1,`$ i.e. it is a $`g_2^1.`$
Assume it is not complete; then $`\sigma `$ is necessarily a rational curve and the characteristic series generates a complete $`g_2^2`$. In this case $`\mathrm{\Sigma }`$ is a rational surface and we can embed $`\{\sigma (g)\}_g`$ into the complete linear system $`|\sigma (g)|`$ of dimension $`3`$. Let $`L`$ be the linear span of $`\{\sigma (g)\}_g`$ inside $`|\sigma (g)|.`$ Let $`\mathcal{L}`$ be the linear system of those ruled surfaces on $`X`$ which correspond to the curves of $`L.`$ Fix a general point $`P`$ of $`X`$ and denote by $`\mathcal{L}_P`$ the subsystem of surfaces of $`\mathcal{L}`$ containing $`P`$: $`\mathcal{L}_P`$ contains $`4`$ linearly independent surfaces, hence its dimension is at least $`3`$: a contradiction.
Step 3. Let $`\pi `$ denote the geometric genus of $`\sigma .`$ Then $`\pi \ge 2.`$
By the previous step we already know that $`\pi \ge 1;`$ assume $`\pi =1.`$ Then, by the well known fact that the irregularity of $`\sigma ^{(2)}`$ equals the (geometric) genus of $`\sigma ,`$ the irregularity of $`\mathrm{\Sigma }`$ is $`1.`$ But the surface $`\mathrm{\Sigma }`$, which parametrizes the curves of $`\{\sigma (g)\}_{g\in \mathrm{\Sigma }},`$ is therefore fibered by a $`1`$-dimensional family of lines, each line representing a linear pencil of curves $`\sigma (g);`$ from $`\sigma (g)^2=2`$ it follows that every such pencil has $`2`$ base points. This also means that on $`X`$ we have a $`1`$-dimensional family of linear pencils of elliptic ruled surfaces $`\sigma (g),`$ each pencil having exactly two base lines.
We fix one of these pencils $`\{\sigma (g_t)\}_{t\in 𝐏^1},`$ and we let $`r`$ and $`r^{}`$ denote the two base lines. Every surface of the pencil is of the type $`\sigma (g),`$ with $`g`$ intersecting both $`r`$ and $`r^{}.`$ Set
$$R:=\bigcup _{t\in 𝐏^1}g_t\subset X$$
We claim that, for general $`t,t^{}\in 𝐏^1,`$ the lines $`g_t`$ and $`g_t^{}`$ don't meet on $`r.`$ Indeed, if $`g_t\cap g_t^{}=P\in r`$, then also the fourth line of $`\mathrm{\Sigma }`$ through $`P`$ would be contained in $`\sigma (g_t)\cap \sigma (g_t^{}),`$ the base locus of the pencil: a contradiction.
So $`r`$ is a simple unisecant for $`R`$. Since $`\sigma (r)`$ is irreducible, from $`R\subset \sigma (r)`$ it follows that $`R=\sigma (r).`$ Then we have a contradiction because $`r`$ has multiplicity $`3`$ on $`\sigma (r)`$ by Proposition 1.12. Therefore, $`\sigma `$ is hyperelliptic of geometric genus $`\pi \ge 2`$.
To complete the proof of Lemma 3.4 it remains to show:
Step 4. The genus of $`\gamma `$ is $`2.`$ In particular, $`\gamma `$ is embedded in $`𝐏^3`$ with degree $`5.`$
Let $`p\in \overline{g}`$ be a general point, and let $`a,b,c\in \mathrm{\Sigma }`$ denote the lines through $`p,`$ different from $`\overline{g}.`$ Moreover, let $`d,e\in \sigma `$ be such that $`d+e\in g_2^1`$ on $`\gamma .`$ Then $`H:=a+b+c+d+e`$ is a positive divisor on $`\gamma ,`$ of degree $`5.`$ When $`p`$ varies on $`\overline{g},`$ the divisors on $`\gamma `$ of type $`a+b+c`$ are all linearly equivalent because they are parametrized by the rational variety $`\overline{g}.`$ We denote by $`𝒟`$ the pencil of such divisors. Since the two rational maps $`\gamma \to 𝐏^1`$ defined respectively by $`𝒟`$ and $`g_2^1`$ are clearly different, it is easily seen that $`dim|H|\ge 3.`$ Hence, by Clifford's theorem $`H`$ is non special. Since $`\pi \ge 2,`$ it then follows by Riemann–Roch (which gives $`dim|H|=5-\pi `$) that $`dim|H|=3,`$ and that $`\pi =2.`$ Then $`H`$ is also very ample on $`\gamma .`$
To prove Lemma 3.5 we need
###### Lemma 3.6
$`\{I_r\}_{r\in S}`$ is an algebraic system of curves on $`S`$ of dimension $`2,`$ degree $`2`$ and index $`2.`$
Proof Since $`deg(\gamma )=5`$ and $`\pi =2,`$ there are $`4`$ secant lines of $`\gamma `$ through a general point of $`𝐏^3,`$ and $`10`$ secant lines of $`\gamma `$ contained in a general plane of $`𝐏^3.`$ Therefore, the class of $`S`$ in the Chow group $`CH_2(𝐆(1,3))`$ is $`4\alpha +10\beta ,`$ with traditional notations. It follows that the degree of $`S\subset 𝐏^5`$ is $`14;`$ this means that there are $`14`$ secant lines of $`\gamma `$ intersecting two general lines $`r`$ and $`r^{}`$ in $`𝐏^3.`$
Assume, now, that $`r`$ and $`r^{}`$ are chords of $`\gamma ,`$ and set $`r\cap \gamma =\{A,B\},`$ $`r^{}\cap \gamma =\{C,D\}.`$ To compute $`I_r\cdot I_r^{}`$ we just have to compute the number of the spurious solutions among these $`14`$ secant lines. Let $`M`$ be the plane generated by $`r`$ and $`C;`$ besides $`A,B,C`$ the plane $`M`$ intersects $`\gamma `$ at the points $`P,Q.`$ Therefore, we have the $`4`$ secant lines $`AC,BC,PC,QC`$ on $`M.`$ By repeating this argument for the planes $`\langle r,D\rangle ,`$ $`\langle r^{},A\rangle ,`$ $`\langle r^{},B\rangle ,`$ we get $`16`$ spurious secant lines, $`4`$ of which have been counted twice. Hence, $`I_r\cdot I_r^{}=14-(16-4)=2.`$
It follows easily that the index of $`\{I_r\}_r`$ is also $`2.`$
Proof of Lemma 3.5 Let us remark first of all that the curves $`\gamma `$ and $`I_r`$ are birational. Indeed let $`r\cap \gamma =\{A,B\}.`$ If $`P\in \gamma ,`$ and $`P\notin r,`$ then the plane $`\langle r,P\rangle `$ intersects $`\gamma `$ at the points $`A,B,P,C,D.`$ We get a birational map $`f:\gamma \to I_r`$ by setting $`f:P\mapsto \overline{CD}.`$
We fix now a general secant line $`r`$ of $`\gamma .`$ Starting from the just constructed map $`f`$, we can also construct, in a canonical way, a map $`f^{(2)}:\gamma ^{(2)}\to I_r^{(2)},`$ which is again birational.
In the first step of the proof of Lemma 3.4 we have constructed a birational map $`\psi :\mathrm{\Sigma }\dashrightarrow \sigma ^{(2)}.`$ Since $`\gamma `$ and $`\sigma `$ are birational, we also get a map $`\phi :\mathrm{\Sigma }\dashrightarrow \gamma ^{(2)}.`$
Finally, the algebraic system $`\{I_r\}_{r\in S}`$ allows us to construct a birational map $`\chi :I_r^{(2)}\dashrightarrow S`$ as follows. Let $`a,b`$ be a general pair of secant lines of $`\gamma ,`$ and assume that each of them intersects $`r.`$ By Lemma 3.6 we have $`I_a\cdot I_b=2;`$ one of these intersections is $`r,`$ the other one is, by definition, $`\chi (a,b).`$
If we compose $`\phi `$, $`f^{(2)}`$ and $`\chi `$ we get the desired map $`\tau :\mathrm{\Sigma }\dashrightarrow S.`$
It remains to show that $`\tau (\sigma (g))=I_{\tau (g)}`$. Consider a curve $`\sigma (g)`$ such that $`g`$ intersects $`\overline{g}.`$ It is mapped by $`\phi `$ to the curve on $`\gamma ^{(2)}`$ formed by all the pairs of elements of $`\gamma `$ containing $`g`$. Therefore, $`f^{(2)}\circ \phi `$ sends $`\sigma (g)`$ to the curve on $`I_r^{(2)}`$ formed by all the pairs of elements of $`I_r`$ containing $`f(g)`$, and clearly $`\chi `$ maps this last curve to $`I_{\tau (g)}.`$
###### Remark 3.7
Note that, if $`X`$ is one of the threefolds found in this section with $`\mu =3,4`$, then the Fano scheme $`\mathrm{\Sigma }`$ of $`X`$ is actually irreducible.
## 4 Every irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }`$ has $`\mu _i=1`$
In this section we assume that the family of lines $`\mathrm{\Sigma }`$ on $`X`$ is reducible and that for every irreducible component $`\mathrm{\Sigma }_i`$ of $`\mathrm{\Sigma }`$ we have $`\mu _i=1`$.
Note that, from $`\mu _i=1`$ for all $`i`$ and from Theorem 2.1, it follows that $`s=\mu \le 6`$.
The case $`s=2`$.
###### Proposition 4.1
Let $`X\subset 𝐏^4`$ be a threefold containing two irreducible families of lines $`\mathrm{\Sigma }_i`$ ($`i=1,2`$) both with $`\mu _i=1.`$ Assume that $`X`$ is not a quadric bundle. Then $`X`$ is a threefold of degree $`6`$ with sectional genus $`\pi =1`$, projection of a Fano threefold of $`𝐏^7`$, hyperplane section of $`𝐏^2\times 𝐏^2`$ (see ).
Proof If $`g_1`$ is a fixed line of $`\mathrm{\Sigma }_1`$, then the lines of $`\mathrm{\Sigma }_2`$ meeting it generate the rational ruled surface $`\sigma _2(g_1)`$ having $`g_1`$ as simple unisecant. Hence $`\mathrm{\Sigma }_2`$ turns out to be a rational surface. Similarly for $`\mathrm{\Sigma }_1`$.
There are two possibilities regarding the algebraic system $`\{\sigma _2(g_1)\}_{g_1\in \mathrm{\Sigma }_1}`$, whose dimension is two (because $`X`$ is not a quadric bundle): either it is already linear, or it can be embedded in a larger linear system of curves in $`\mathrm{\Sigma }_2`$, which corresponds to a linear system of rational ruled surfaces on $`X`$. We will prove now that the second case can be excluded.
To this end, we reformulate the problem in a slightly different way. We consider the rational map $`\varphi :X\dashrightarrow 𝐏^r:=𝐏(H^0(\sigma _2(g_1))^{*})`$ associated to the complete linear system $`\sigma _2(g_1)`$. The map $`\varphi `$ sends a point $`p`$ to the subsystem formed by the ruled surfaces passing through $`p`$. From $`\mu _2=1`$, it follows that $`\varphi `$ contracts the lines of $`\mathrm{\Sigma }_2`$, which are therefore the fibres of $`\varphi `$. Hence $`\varphi (X)`$ is a surface $`S`$ of degree $`d=\sigma _2(g_1)^2`$. By an argument similar to that of Proposition 1.14, we have that $`\mathrm{deg}\sigma _2(g_1)=d+2`$.
The inverse images of the hyperplane sections of $`S`$ are the surfaces of $`\sigma _2(g_1)`$, so $`S`$ is a surface with rational hyperplane sections. We now replace $`S`$ with a general projection in $`𝐏^3`$, so we can apply the theorem of Kronecker–Castelnuovo and we get only three possibilities:
1. $`S=𝐏^2`$: in this case the considered algebraic system is already linear and $`d=1`$;
2. $`S`$ is a scroll and $`d>1`$;
3. $`S`$ is a Steiner surface, projection of a Veronese surface, with $`d=4`$.
We have to prove that only the first case happens. Assume by contradiction that $`S`$ is as in 2. or 3. Note that any section of $`S`$ with a tangent plane is reducible. If $`S`$ is a scroll, such a section is the union of a line $`l`$ with a plane curve $`C`$ of degree $`d-1`$. Let $`\pi `$ be the arithmetic genus of $`C`$. The following relation expresses the arithmetic genus of a reducible plane section of $`S`$: $`\pi +d-2=0`$, so $`d=2`$, $`\pi =0`$ and $`S`$ is a quadric. Moreover $`\mathrm{deg}(\sigma _2(g_1))=4`$, so a general ruled surface in the linear system $`\sigma _2(g_1)`$ is a scroll of type $`(1,3)`$ or $`(2,2)`$. The case $`(1,3)`$ is excluded because every surface of the system should have a unisecant line and our threefold $`X`$ contains a family of lines of dimension exactly $`2`$. So a general scroll of the system should be of type $`(2,2)`$, hence contain a $`1`$-dimensional family of conics. In this case $`X`$ contains a $`4`$-dimensional family of conics, and a general hyperplane section $`X\cap H`$ contains a $`2`$-dimensional family of conics. By the usual argument, $`X\cap H`$ is a quadric or a cubic scroll or a Steiner surface: all three possibilities are easily excluded.
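The relation $`\pi +d-2=0`$ used above is the standard count for the arithmetic genus of a nodal union (here the line $`l`$ meets $`C`$ in $`d-1`$ points; we only make the step explicit):

$$p_a(l\cup C)=p_a(l)+p_a(C)+\#(l\cap C)-1=0+\pi +(d-1)-1=\pi +d-2,$$

and this arithmetic genus vanishes because the hyperplane sections of $`S`$ are rational.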
We assume now that $`S`$ is a projection of a Veronese surface. In this case $`\mathrm{deg}\sigma _2(g_1)=6`$, so a general ruled surface in the linear system $`\sigma _2(g_1)`$ is a scroll of type $`(2,4)`$ or $`(3,3)`$. The reducible plane sections of $`S`$ are unions of conics and correspond to reducible ruled surfaces on $`X`$, unions of two scrolls of degree three. Necessarily they are both of type $`(1,2)`$ so each of them contains a family of conics of dimension $`2`$: we conclude as in the previous case.
So we have proved that for both systems of lines $`d=1`$, hence $`\mathrm{deg}\sigma _2(g_1)=\mathrm{deg}\sigma _1(g_2)=3`$. Also the curves in the Grassmannian $`𝐆(1,4)`$ corresponding to these ruled surfaces have degree $`3`$. So the surface $`\mathrm{\Sigma }_i`$ (for $`i=1,2`$) contains a linear system of dimension two of rational cubics, with self–intersection one: it defines a birational map from $`\mathrm{\Sigma }_i`$ to $`𝐏^2`$, whose inverse map is defined by a linear system of plane cubic curves. Hence $`\mathrm{deg}\mathrm{\Sigma }_i\le 9`$ and $`\mathrm{\Sigma }_i`$ has rational or elliptic hyperplane sections.
Moreover there is a natural birational map between plane sections of $`X`$ and some hyperplane sections of $`\mathrm{\Sigma }_i`$. Precisely, let $`H`$ be the singular hyperplane section of $`𝐆(1,4),`$ given by the lines meeting a plane $`\pi `$: then $`\mathrm{\Sigma }_i\cap H`$ represents the lines of $`\mathrm{\Sigma }_i`$ passing through the points of $`X\cap \pi `$. Since there is only one line of $`\mathrm{\Sigma }_i`$ through a general point of $`X`$, we get the required birational map between $`\mathrm{\Sigma }_i\cap H`$ and $`X\cap \pi `$.
We conclude that also the plane sections of $`X`$ are rational or elliptic curves. In particular a general hyperplane section of $`X`$ is a surface of $`𝐏^3`$ with the same property. The case of rational sections can be excluded using the Kronecker–Castelnuovo theorem as in Proposition 3.2. So a hyperplane section of $`X`$ is a Del Pezzo surface and $`X`$ is (a projection of) a Fano threefold. Looking at the list of Fano threefolds we get the proposition.
The case $`s>2`$.
If $`\mathrm{\Sigma }`$ has three or more components, a new situation can appear, precisely $`X`$ could be a quadric bundle in more than one way.
For example, if $`X=𝐏^1\times 𝐏^1\times 𝐏^1`$ (or one of its projections), $`\mathrm{\Sigma }`$ has three components with $`\mu _i=1`$, so that there are three lines passing through any point $`P`$ of $`X`$, one for each of the three systems. The lines of a system $`\mathrm{\Sigma }_i`$ meeting a fixed line of another system $`\mathrm{\Sigma }_j`$ fill up a smooth quadric, so the surfaces $`\sigma _i(g_j)`$ are all quadrics. Moreover the $`1`$-dimensional families $`\{\sigma _i(g_j)\}_{g_j\in \mathrm{\Sigma }_j}`$ and $`\{\sigma _j(g_i)\}_{g_i\in \mathrm{\Sigma }_i}`$ coincide. Hence there are three different structures of quadric bundle on $`X`$ giving rise to six families of conics in $`𝐆(1,4)`$.
Let $`X`$ be a threefold of $`𝐏^4`$ covered by $`s\ge 3`$ two-dimensional families of lines $`\mathrm{\Sigma }_i`$, $`i=1,\dots ,s`$. We distinguish the following two cases:
* there exists a pair of indices $`(\overline{ı},\overline{ȷ})`$ such that the family $`\{\sigma _{\overline{ı}}(g_{\overline{ȷ}})\}_{g_{\overline{ȷ}}\in \mathrm{\Sigma }_{\overline{ȷ}}}`$ has dimension two;
* for all $`(i,j)`$, $`dim\{\sigma _i(g_j)\}_{g_j\in \mathrm{\Sigma }_j}=1`$.
In the first case, we consider only the two components $`\mathrm{\Sigma }_{\overline{ȷ}}`$ and $`\mathrm{\Sigma }_{\overline{ı}}`$: we can argue on these components as we did in the case $`s=2`$, obtaining that $`X`$ has to be a projection of a Fano threefold. Since there are no Fano threefolds satisfying our assumption, we can exclude the first case.
Therefore, if $`s\ge 3`$, necessarily the surfaces $`\sigma _i(g_j)`$ are smooth quadrics for all pairs $`(i,j)`$. To get the classification, our strategy will be the usual one: to fix three of the families of lines and argue with them. Our result is:
###### Proposition 4.2
Let $`X`$ be a threefold of $`𝐏^4`$ containing three or more irreducible families of lines $`\mathrm{\Sigma }_i`$ all with $`\mu _i=1.`$ Then $`X`$ is a threefold of degree $`6`$ with sectional genus $`\pi =1`$, projection of $`𝐏^1\times 𝐏^1\times 𝐏^1`$.
Proof For every pair of indices $`(i,j)`$ and general $`g_j\in \mathrm{\Sigma }_j`$, the surface $`\sigma _i(g_j)`$ is a smooth quadric and it is clear that the linear systems $`\{\sigma _i(g_j)\}_{g_j\in \mathrm{\Sigma }_j}`$ and $`\{\sigma _j(g_i)\}_{g_i\in \mathrm{\Sigma }_i}`$ coincide: we call it $`\mathrm{\Sigma }_{ij}`$. We want to study the intersection of two quadrics belonging to two families of the form $`\mathrm{\Sigma }_{ik}`$ and $`\mathrm{\Sigma }_{jk}`$, $`i\ne j`$.
Let us remark first that, if $`g_j`$, $`g_k`$ are two general coplanar lines in $`\mathrm{\Sigma }_j`$, $`\mathrm{\Sigma }_k`$ respectively, then two cases are possible: either the plane $`\langle g_j,g_k\rangle `$ does contain a line of $`\mathrm{\Sigma }_i`$, or it does not. In the first case $`X`$ is a cubic (Prop. 1.10). So if $`\mathrm{deg}X>3`$ and $`p\in \sigma _j(g_k)`$, $`p\notin g_k`$, then $`p\notin \sigma _i(g_k)`$. This immediately implies that $`\sigma _j(g_k)\cap \sigma _i(g_k)=g_k`$. Let us consider now $`\sigma _j(g_k)\cap \sigma _i(g_k^{})`$: it can also be written as $`\sigma _k(g_j)\cap \sigma _k(g_i^t)`$ for a fixed $`g_j\in \mathrm{\Sigma }_j`$ and $`g_i^t`$ varying in a ruling of the second quadric. Now $`g_j`$ certainly meets all the quadrics of $`\mathrm{\Sigma }_{ik}`$ and is not contained in any of them, so there exists a $`\overline{t}`$ such that $`g_j`$ and $`g_i^{\overline{t}}`$ meet at a point $`q`$. Let $`\overline{g_k}`$ be the line of $`\mathrm{\Sigma }_k`$ through $`q.`$ Then:
$$\sigma _j(g_k)\cap \sigma _i(g_k^{})=\sigma _k(g_j)\cap \sigma _k(g_i^{\overline{t}})=\sigma _j(\overline{g_k})\cap \sigma _i(\overline{g_k}),$$
so we fall into the previous case. We conclude that two general quadrics of these families meet along a line of the family having the common index.
As a consequence, we have that through a general point $`p`$ of $`X`$ there pass one quadric of the family $`\mathrm{\Sigma }_{ij}`$ and one line of $`\mathrm{\Sigma }_k`$.
Now, we embed the $`𝐏^4`$ containing $`X`$ as a subspace of a $`𝐏^7,`$ and call $`Y\subset 𝐏^7`$ the image of the Segre embedding $`𝐏^1\times 𝐏^1\times 𝐏^1\hookrightarrow 𝐏^7.`$ If $`Q\subset X`$ is a fixed general quadric of the family $`\mathrm{\Sigma }_{12}`$, by acting on $`Y`$ with an element of the projective linear group, we can assume that $`Q\subset Y`$ as well. Let $`L\subset 𝐏^7`$ be a linear subspace of dimension $`5,`$ in “general position” with respect to $`X,`$ i.e. $`L\cap X`$ is a curve. Let $`\mathrm{\Sigma }_1^{},\mathrm{\Sigma }_2^{}`$ and $`\mathrm{\Sigma }_3^{}`$ denote the three families of lines on $`Y;`$ to fix ideas, assume that $`Q`$ contains lines of the families $`\mathrm{\Sigma }_1^{},\mathrm{\Sigma }_2^{}`$ on $`Y.`$
We define a rational map $`\alpha :X\backslash L\dashrightarrow Y`$ as follows. Let $`p\in X`$ be general; then, the line $`r\in \mathrm{\Sigma }_3`$, such that $`p\in r`$, intersects $`Q`$ at a single point $`p^{}.`$ Let $`r^{}\in \mathrm{\Sigma }_3^{}`$ be the line (on $`Y`$) containing $`p^{}.`$ Set $`\alpha (p):=\langle L,p\rangle \cap r^{}`$. It is clear that $`\alpha `$ is birational. Moreover, by considering the case of a hyperplane through $`L,`$ we see that $`\alpha `$ takes hyperplane sections of $`X`$ to hyperplane sections of $`Y.`$
There are suitable $`𝐏^3`$’s in $`𝐏^7,`$ let us call $`M`$ one of them, such that the restriction $`\beta :Y\backslash M\dashrightarrow 𝐏^3`$ of the projection $`𝐏^7\backslash M\dashrightarrow 𝐏^3`$ is birational. The inverse map $`\beta ^{-1}:𝐏^3\dashrightarrow Y`$ is defined by a linear system $`|3H_{𝐏^3}-l_1-l_2-l_3|,`$ where the $`l_i`$’s are three lines, pairwise skew.
Since $`\alpha `$ takes hyperplane sections of $`X`$ to hyperplane sections of $`Y,`$ the birational map $`(\beta \circ \alpha )^{-1}:𝐏^3\dashrightarrow X`$ is defined by a linear subsystem of $`|3H_{𝐏^3}-l_1-l_2-l_3|,`$ i.e. $`X`$ is a projection of $`Y=𝐏^1\times 𝐏^1\times 𝐏^1,`$ and the proof is complete.
Dipartimento di Scienze Matematiche
Università di Trieste
34127 – Trieste
Italia
Recent Theoretical Results for Nonequilibrium Deposition of Submicron Particles
Vladimir Privman
Department of Physics, Clarkson University, Potsdam, New York 13699–5820, USA
ABSTRACT
Selected theoretical developments in modeling of deposition of submicrometer size (submicron) particles on solid surfaces, with and without surface diffusion, of interest in colloid, polymer, and certain biological systems, are surveyed. We review deposition processes involving extended objects, with jamming and its interplay with in-surface diffusion yielding interesting dynamics of approach to the large-time state. Mean-field and low-density approximation schemes can be used in many instances for short and intermediate times, in large enough dimensions, and for particle sizes larger than a few lattice units. Random sequential adsorption models are appropriate for higher particle densities (larger times). Added diffusion allows formation of denser deposits and leads to power-law large-time behavior which, in one dimension (linear substrate, such as DNA), was related to diffusion-limited reactions, while in two dimensions (planar substrate), was associated with evolution of the domain-wall and defect network, reminiscent of equilibrium ordering processes.
This is a review article, to appear in The Journal of Adhesion (2000).
Keywords: adsorption, deposition, attachment, surface, interface, adhesion, colloid, protein, particle, interaction, dynamics, kinetics, submicron
1. Introduction
1.1. Surface Deposition of Submicron Particles
Surface deposition of submicron particles is of immense practical importance \[1-4\]. Typically, particles of this size, colloid, protein or other biological objects, are suspended in solution, without sedimentation due to gravity. In order to keep the suspension stable, one has to prevent aggregation (coagulation) that results in larger flocs, for which the gravitational pull is more pronounced. Stabilization by particle-particle electrostatic repulsion or by steric effects, etc., is usually effective for a sufficiently dilute suspension. But this means that even if a well-defined suspension of well-characterized particles is available, it cannot always be easily observed experimentally in the bulk for a wide range of particle interactions. For those interaction parameters for which the system is unstable with respect to coagulation, the time of observation will be limited by the coagulation process, which can be quite fast.
One can form a dense deposit slowly, if desired, on a surface. Indeed, particles can be deposited by diffusion, or more realistically by convective diffusion from a flowing suspension, on collector surfaces. The suspension itself need not be dense even though the on-surface deposit might be quite dense, depending on the particle-particle and particle-surface interactions. Dilution of the suspension generally prolongs an experiment aimed at reaching a certain surface coverage. Thus, surface deposition has been well established as an important tool to probe interactions of matter objects on the submicron scale \[1-4\].
1.2. Particle Jamming and Screening at Surfaces
Figure 1 illustrates possible configurations of particles at a surface. From left to right, we show particles deposited on the surface of a collector, then particles deposited on top of other particles. The latter is possible only in the absence of significant particle-particle repulsion. The two situations are termed monolayer and multilayer deposition even though the notion of a layer beyond the one exactly at the surface is only approximate. We next show two effects that play an important role in surface growth. The first is jamming: a particle marked by an open circle cannot fit in the lowest layer at the surface. A more realistic two-dimensional ($`2D`$) configuration is shown in the inset.
The second effect is screening: the surface position marked by the open circle is not reachable. Typically, in colloid deposition monolayer or few-layer deposits are formed and the dominant effect is jamming, as will be discussed later. Screening plays a dominant role in the deposition of multiple layers and, together with the transport mechanism, determines the morphology of the growing surface. In addition, the configuration on the surface depends on the transport mechanism of the particles to it and on the particle motion on the surface, as well as on possible detachment. Particle motion is typically negligible for colloidal particles but may be significant for proteins.
1.3. Role of Dimensionality and Relation to Other Systems
An important feature of surface deposition is that for all practical purposes it is essentially a $`2D`$ problem. As a result, any mean-field, rate-equation, effective-field, etc., approaches which are usually all related in that they ignore long-range correlations and fluctuation effects, may not be applicable. Indeed, it is known that as the dimensionality of a many-body interacting system decreases, fluctuations play a larger role. Dynamics of important physical, chemical, and biological processes \[6-7\] provides examples of strongly fluctuating systems in low dimensions, $`D=1`$ or 2. These processes include surface adsorption on planar substrates or on large collectors. The surface of the latter is semi-two-dimensional owing to their large size as compared to the size of the deposited particles.
The classical chemical reaction-diffusion kinetics corresponds to $`D=3`$. However, heterogeneous catalysis generated interest in $`D=2`$. For both deposition and reactions, some experimental results exist even in $`D=1`$ (see later). Finally, kinetics of ordering and phase separation, largely amenable to experimental probe in $`D=3`$ and $`2`$, attracted much recent theoretical effort in $`D=1,2`$.
Models in $`D=1`$, and sometimes in $`D=2`$, allow derivation of analytical results. Furthermore, it turns out that all three types of model (deposition-relaxation, reaction-diffusion, and phase separation) are interrelated in many, but not all, of their properties. This observation is by no means obvious. It is model-dependent and can be firmly established \[6-7\] only in low dimensions, mostly in $`D=1`$.
Such low-dimensional nonequilibrium models pose several interesting challenges, both theoretical and numerical. While many exact, asymptotic, and numerical results are already available in the literature \[6-7\], this field presently provides examples of properties which lack theoretical explanation even in $`1D`$. Numerical simulations are challenging and require large-scale computational effort even for $`1D`$ models. For the more experimentally relevant $`2D`$ cases, where analytical results are scarce, the difficulty of numerical simulations has been the limiting factor in the understanding of many open problems.
1.4. Outline of This Review
The purpose of this article is to provide an introduction to the field of nonequilibrium surface deposition models of extended particles. By “extended” we mean that the main particle-particle interaction effect will be jamming, i.e., mutual exclusion. No comprehensive survey of the literature is attempted. The relation of deposition to other low-dimensional models mentioned earlier will be referred to in detail only in a few cases. The specific models and examples selected for a more detailed exposition, i.e., models of deposition with diffusional relaxation, were biased by the author's own work.
The outline of the review is as follows. The rest of this introductory section is devoted to defining the specific topics of surface deposition to be surveyed. Section 2 describes the simplest models of random sequential adsorption. Section 3 is devoted to deposition with relaxation, with general remarks followed by definition of the simplest, $`1D`$ models of diffusional relaxation for which we present a more detailed description of various theoretical results. Multilayer deposition is also commented on in Section 3. More numerically-based $`2D`$ results for deposition with diffusional relaxation are surveyed in Section 4. Section 5 presents brief concluding remarks.
Surface deposition is a vast field of study. Our emphasis here will be on those deposition processes where the particles are “large” as compared to the underlying atomic and morphological structure of the substrate and as compared to the range of the particle-particle and particle-substrate interactions. Thus, colloids, for instance, involve particles of submicron to several-micron size. We note that 1$`\mu `$m$`=10000`$Å, whereas atomic dimensions are of order 1Å, while the range over which particle-surface and particle-particle interactions are significant as compared to $`kT`$ is typically of order 100Å or less. Extensive theoretical study of such systems is relatively recent, and it has been motivated by experiments where submicron-size colloid, polymer, and protein “particles” were the deposited objects \[1-4,8-18\].
Perhaps the simplest and the most studied model with particle exclusion is Random Sequential Adsorption (RSA). The RSA model, to be described in detail in Section 2, assumes that particle transport (incoming flux) onto the surface results in a uniform deposition attempt rate $`R`$ per unit time and area. In the simplest formulation, one assumes that only monolayer deposition is allowed. Within this monolayer deposit, each new arriving particle must either fit in an empty area allowed by the hard-core exclusion interaction with the particles deposited earlier, or the deposition attempt is rejected.
The basic RSA model will be described shortly, in Section 2. Recent work has been focused on its extensions to allow for particle relaxation by diffusion, see Sections 3 and 4, to include detachment processes, and to allow multilayer formation. The latter two extensions will be briefly surveyed in Section 3. Several other extensions will not be discussed \[1-4\].
2. Random Sequential Adsorption
2.1. The RSA Model
The irreversible Random Sequential Adsorption (RSA) process \[19-20\] models experiments of submicron particle deposition by assuming a planar $`2D`$ substrate and, in the simplest case, continuum (off-lattice) deposition of spherical particles. However, other RSA models have also received attention. In $`2D`$, noncircular cross-section shapes as well as various lattice-deposition models were considered \[19-20\]. Several experiments on polymers and attachment of fluorescent units on DNA molecules (the latter is usually accompanied by motion of these units on the DNA and detachment) suggest consideration of the lattice-substrate RSA processes in $`1D`$. RSA processes have also found applications in traffic problems and certain other fields. Our presentation in this section aims at defining some RSA models and outlining characteristic features of their dynamics.
Figure 2 illustrates the simplest possible monolayer lattice RSA model: irreversible deposition of dimers on the linear lattice. An arriving dimer will be deposited if the underlying pair of lattice sites are both empty. Otherwise, it is discarded, which is shown schematically by the two dimers above the surface layer. Their deposition on the surface is not possible unless detachment and/or motion of monomers or whole dimers clear the appropriate landing sites.
Let us consider the irreversible RSA without detachment or diffusion. The substrate is usually assumed to be empty initially, at $`t=0`$. In the course of time $`t`$, the coverage, $`\rho (t)`$, increases and builds up to order 1 on the time scales of order $`\left(RV\right)^{-1}`$, where $`R`$ was defined earlier as the deposition attempt rate per unit time and area of the surface, while $`V`$ is the particle $`D`$-dimensional “volume.” For deposition of spheres on a planar surface, $`V`$ is actually the cross-sectional area.
At large times the coverage approaches the jammed-state value at which only gaps smaller than the particle size are left in the monolayer. The resulting state is less dense than the fully ordered close-packed coverage. For the $`D=1`$ deposition shown in Figure 2 the fully ordered state would have $`\rho =1`$. The variation of the RSA coverage is illustrated by the lower curve in Figure 3.
At early times the monolayer deposit is not dense and the deposition events are largely uncorrelated. In this regime, mean-field like low-density approximation schemes are useful \[21-23\]. Deposition of $`k`$-mer particles on the linear lattice in $`1D`$ was in fact solved exactly for all times . In $`D=2`$, extensive numerical studies were reported \[23,25-36\] of the variation of coverage with time and large-time asymptotic behavior which will be discussed shortly. Some exact results for correlation properties are available in $`1D`$. Numerical results for correlation properties have been obtained in $`2D`$.
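The exactly known $`1D`$ dimer result provides a convenient benchmark. The sketch below (plain Python; lattice size and seed are arbitrary illustrative choices) uses the standard equivalence between irreversible RSA and a single random ordering of the deposition attempts, and reproduces the classic dimer jamming coverage $`1-e^{-2}\simeq 0.8647`$.

```python
import math
import random

def dimer_rsa_jamming(n_sites=200_000, seed=0):
    """Irreversible RSA of dimers on a 1D lattice of n_sites sites.

    Visiting each candidate landing position exactly once, in random
    order, yields the same jammed state as repeated uniform attempts,
    because a position blocked once can never become available again.
    """
    rng = random.Random(seed)
    occupied = [False] * n_sites
    positions = list(range(n_sites - 1))   # left ends of candidate dimers
    rng.shuffle(positions)
    for i in positions:
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return sum(occupied) / n_sites

print(dimer_rsa_jamming())        # ~0.8647 for a large lattice
print(1.0 - math.exp(-2.0))       # exact jamming coverage, 0.86466...
```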
2.2. The Large-Time Behavior in RSA
The large-time deposit has several characteristic properties. For lattice models, the approach to the jammed-state coverage is exponential \[36-38\]. This was shown to follow from the property that the final stages of deposition occur at a few sparse, well-separated surviving landing sites. Estimates of the decrease in their density at late stages suggest that
$$\rho (\infty )-\rho (t)\propto \mathrm{exp}\left(-R\ell ^Dt\right),$$
$`(1)`$
where $`\ell `$ is the lattice spacing and $`D`$ is the dimensionality of the substrate. The coefficient in Eq. (1) is of order $`\ell ^D/V`$ if the coverage is defined as the fraction of lattice units covered, i.e., the dimensionless fraction of area covered, also termed the coverage fraction, so that the coverage as a density of particles per unit volume would be $`V^{-1}\rho `$. The detailed behavior depends on the size and shape of the depositing particles as compared to the underlying lattice unit cells.
However, for continuum off-lattice deposition, formally obtained as the limit $`\ell \to 0`$, the approach to the jamming coverage is power-law. This interesting behavior \[37-38\] is due to the fact that for large times the remaining voids accessible to particle deposition can be of sizes arbitrarily close to those of the depositing particles. Such voids are thus reached with very low probability by the depositing particles, the flux of which is uniformly distributed. The resulting power-law behavior depends on the dimensionality and particle shape. For instance, for $`D`$-dimensional cubes of volume $`V`$,
$$\rho (\infty )-\rho (t)\propto \frac{\left[\mathrm{ln}(RVt)\right]^{D-1}}{RVt},$$
$`(2)`$
while for spherical particles,
$$\rho (\infty )-\rho (t)\propto (RVt)^{-1/D}.$$
$`(3)`$
For $`D>1`$, the expressions Eqs. (2-3), and similar relations for other particle shapes, are actually empirical asymptotic laws which have been verified, mostly for $`D=2`$, by extensive numerical simulations \[4,25-36\]. The most studied $`2D`$ geometries are circles (corresponding to the deposition of spheres on a planar substrate) and squares. The jamming coverages are
$$\rho _{\mathrm{squares}}(\infty )\simeq 0.5620\quad \mathrm{and}\quad \rho _{\mathrm{circles}}(\infty )\simeq 0.544\text{ to }0.550,$$
$`(4)`$
much lower than the close-packing values, 1 and $`\frac{\pi }{2\sqrt{3}}\simeq 0.907`$, respectively. For square particles, the crossover to continuum in the limit $`k\to \infty `$ and $`\ell \to 0`$, with fixed $`V^{1/D}=k\ell `$ in deposition of $`k\times k\times \mathrm{\cdots }\times k`$ lattice squares, has been investigated in some detail , both analytically (in any $`D`$) and numerically (in $`2D`$).
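For orientation, the $`1D`$ continuum counterpart of these jamming coverages, the classic “car parking” value $`\rho (\infty )\simeq 0.7476`$ (Rényi's constant), is easy to generate numerically. The sketch below (plain Python; system size and seed are arbitrary) exploits the fact that the first segment adsorbed in any gap is uniformly placed, after which the two flanking sub-gaps fill independently.

```python
import random

def parking_coverage(length=1_000_000.0, seed=1):
    """Jammed coverage for continuum RSA of unit segments on [0, length]."""
    rng = random.Random(seed)
    parked, gaps = 0, [length]
    while gaps:
        g = gaps.pop()
        if g >= 1.0:
            x = rng.uniform(0.0, g - 1.0)   # left end of the segment in this gap
            parked += 1
            gaps.append(x)                  # gap to the left of the new segment
            gaps.append(g - x - 1.0)        # gap to the right
    return parked / length

print(parking_coverage())   # ~0.7476, well below the close-packing value 1
```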
The correlations in the large-time jammed state are different from those of the equilibrium random gas of particles with density near $`\rho (\mathrm{})`$. In fact, the two-particle correlations in continuum deposition develop a weak singularity at contact, and correlations generally reflect the full irreversibility of the RSA process .
3. Deposition with Relaxation
3.1. Detachment and Diffusional Relaxation
Monolayer deposits may relax, i.e., explore more configurations, by particle motion on the surface, by particle detachment, as well as by motion and detachment of the constituent monomers or recombined units. In fact, detachment has been experimentally observed in deposition of colloid particles which were otherwise quite immobile on the surface . Theoretical interpretation of colloid particle detachment data has proved difficult, however, because the binding to the substrate, once a particle is deposited, can be different for different particles, whereas the transport to the substrate, i.e., the flux of the arriving particles in the deposition part of the process, typically by convective diffusion, is more uniform. Detachment also plays a role in deposition on DNA molecules .
Recently, more theoretically motivated studies of the detachment relaxation processes, in some instances with surface diffusion allowed as well, have led to interesting model studies \[40-46\]. These investigations did not always assume detachment of the original units. Models involving monomer recombination prior to detachment, of $`k`$-mers in $`D=1`$, have been mapped onto certain spin models, and symmetry relations were identified which allowed derivation of several exact and asymptotic results on the correlations and other properties \[40-46\]. We note that deposition and detachment combine to drive the dynamics into a steady state, rather than a jammed state as in ordinary RSA. These studies have been largely limited thus far to $`1D`$ models.
We now turn to particle motion on the surface, in a monolayer deposit, which was experimentally observed in deposition of proteins and also in deposition on DNA molecules . From now on, we consider diffusional relaxation, i.e., random hopping on the surface in the lattice case. The dimer deposition in $`1D`$, for instance, is shown in Figure 2. Hopping of dimer particles one site to the left or to the right is allowed only if the target site is not occupied. Such hopping can open a two-site gap to allow additional deposition. Thus, diffusional relaxation allows the deposition process to reach denser, in fact, close-packed configurations. Initially, for short times, when the empty area is plentiful, the effect of the in-surface particle motion will be small. However, for large times, the density will exceed that of the RSA process, as illustrated by the upper curve in Figure 3.
It is important to emphasize that deposition and diffusion are two independent processes going on at the same time. External particles arrive at the surface with a fixed rate per unit area. Those finding open landing sites are deposited; others are discarded. At the same time, internal particles, those already on the surface, attempt, with some rate, to hop to a nearby site. They actually move only if the target site is available.
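A minimal kinetic Monte Carlo sketch of this competition for the $`1D`$ dimer model is given below (plain Python; the system size, hopping rate, run time and seed are arbitrary illustrative assumptions). Deposition attempts arrive with unit rate per landing position, deposited dimers hop with attempt rate $`H/2`$ in each direction, and the late-time coverage clearly exceeds the RSA jamming value $`1-e^{-2}\simeq 0.865`$.

```python
import random

def dimers_with_diffusion(n=2000, hop=5.0, t_max=200.0, seed=2):
    """1D dimer deposition (unit rate per landing position, open boundaries)
    combined with hopping of whole dimers (attempt rate hop per dimer)."""
    rng = random.Random(seed)
    occ = [0] * n
    dimers = []                        # left-end positions of deposited dimers
    t = 0.0
    while t < t_max:
        dep_rate = float(n - 1)
        hop_rate = hop * len(dimers)
        total = dep_rate + hop_rate
        t += rng.expovariate(total)    # Gillespie-style time increment
        if rng.random() < dep_rate / total:
            i = rng.randrange(n - 1)   # deposition attempt at sites (i, i+1)
            if occ[i] == 0 and occ[i + 1] == 0:
                occ[i] = occ[i + 1] = 1
                dimers.append(i)
        else:
            j = rng.randrange(len(dimers))
            i = dimers[j]
            if rng.random() < 0.5:     # attempt a hop to the right
                if i + 2 < n and occ[i + 2] == 0:
                    occ[i], occ[i + 2], dimers[j] = 0, 1, i + 1
            else:                      # attempt a hop to the left
                if i - 1 >= 0 and occ[i - 1] == 0:
                    occ[i + 1], occ[i - 1], dimers[j] = 0, 1, i - 1
    return sum(occ) / n

print(dimers_with_diffusion())   # well above the RSA jamming value 0.8647
```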
3.2. One-Dimensional Models
Further investigation of this effect is much simpler in $`1D`$ than in $`2D`$. Let us therefore consider the $`1D`$ case first, postponing the discussion of $`2D`$ models to the next section. Specifically, consider deposition of $`k`$-mers of fixed length $`V`$. By keeping the length fixed, we can also naturally consider the continuum limit of no lattice by having the lattice spacing vanish as $`k\to \infty `$. This limit corresponds to continuum deposition if we take the underlying lattice spacing $`\ell =V/k`$. Since the deposition attempt rate $`R`$ was defined per unit area (unit length here), it has no significant $`k`$-dependence. However, the added diffusional hopping of $`k`$-mers on the $`1D`$ lattice, with the attempt rate to be denoted by $`H`$, and hard-core or similar particle interaction, must be $`k`$-dependent. Indeed, we consider each deposited $`k`$-mer particle as randomly and independently attempting to move one lattice spacing to the left or to the right with the rate $`H/2`$ per unit time. Particles cannot run over each other, so some sort of hard-core interaction must be assumed, i.e., in a dense state most hopping attempts will fail. However, if left alone, each particle would move diffusively on large time scales. In order to have the resulting diffusion constant $`𝒟`$ finite in the continuum limit $`k\to \infty `$, we must assume that
$$H\propto 𝒟/\ell ^2=𝒟k^2/V^2,$$
$`(5)`$
which is only valid in $`1D`$.
Each successful hopping of a particle results in motion of one empty lattice site. It is useful to reconsider the dynamics of particle hopping in terms of the dynamics of this rearrangement of empty area fragments \[48-50\]. Indeed, if several of these empty sites are combined to form large enough voids, deposition attempts can succeed in regions of particle density which would be jammed in the ordinary RSA. In terms of these new “diffuser particles” which are the empty lattice sites of the deposition problem, the process is in fact that of reaction-diffusion. Indeed, $`k`$ reactants (empty sites) must be brought together by diffusional hopping in order to have finite probability of their annihilation, i.e., disappearance of a group of consecutive nearest-neighbor empty sites due to successful deposition. Of course, the $`k`$-group can also be broken apart due to diffusion. Therefore, the $`k`$-reactant annihilation is not instantaneous in the reaction nomenclature. Such $`k`$-particle reactions are of interest on their own \[51-57\].
3.3. Beyond the Mean-Field Approximation
The simplest mean-field rate equation for annihilation of $`k`$ reactants describes the time dependence of the coverage, $`\rho (t)`$, in terms of the reactant density $`1-\rho `$, i.e., the density of the empty spaces,
$$\frac{d\rho }{dt}=-\mathrm{\Gamma }\frac{d(1-\rho )}{dt}=\mathrm{\Gamma }(1-\rho )^k,$$
$`(6)`$
where $`\mathrm{\Gamma }`$ is the effective rate constant. Note that we assume that the close-packing (dimensionless) coverage is 1 in $`1D`$. There are two problems with this approximation. Firstly, it turns out that for $`k=2`$ the mean-field approach breaks down. Diffusive-fluctuation arguments for non-mean-field behavior have been advanced for several chemical reactions \[51,53,58-59\]. In $`1D`$, several exact calculations support this conclusion \[60-66\]. The asymptotic large-time behavior turns out to be
$$1-\rho \propto 1/\sqrt{t}\qquad (k=2,D=1),$$
$`(7)`$
rather than the mean-field prediction $`1/t`$. The coefficient in Eq. (7) is expected to be universal, when expressed in an appropriate dimensionless form by introducing the single-reactant diffusion constant.
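The contrast with the mean-field prediction is easy to exhibit numerically. The sketch below (Python with SciPy; the choice $`\mathrm{\Gamma }=1`$ and the sampling times are arbitrary) integrates Eq. (6) for $`k=2`$, whose exact solution is $`1-\rho =1/(1+\mathrm{\Gamma }t)`$, i.e., a $`1/t`$ decay to be contrasted with the fluctuation-dominated $`1/\sqrt{t}`$ law of Eq. (7).

```python
from scipy.integrate import solve_ivp

# Mean-field Eq. (6) for k = 2 with Gamma = 1; exact solution: 1 - rho = 1/(1 + t).
sol = solve_ivp(lambda t, r: (1.0 - r[0]) ** 2, (0.0, 1e4), [0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in (1e2, 1e3, 1e4):
    print(t, 1.0 - sol.sol(t)[0], 1.0 / (1.0 + t))   # columns 2 and 3 agree
# Mean field thus gives 1 - rho ~ 1/t, while exact 1D results and
# simulations give the slower fluctuation-dominated decay ~ 1/sqrt(t).
```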
The power law Eq. (7) was confirmed by extensive numerical simulations of dimer deposition and by exact solution for one particular value of $`H`$ for a model with dimer dissociation. The latter work also yielded some exact results for correlations. Specifically, while the connected particle-particle correlations spread diffusively in space, their decay in time is nondiffusive . Series expansion studies of models of dimer deposition with diffusional hopping of the whole dimers or their dissociation into hopping monomers have confirmed the expected asymptotic behavior and also provided estimates of the coverage as a function of time .
The case $`k=3`$ is marginal with the mean-field power law modified by logarithmic terms. The latter were not observed in Monte Carlo studies of deposition . However, extensive results are available directly for three-body reactions \[53-56\], including verification of the logarithmic corrections to the mean-field behavior \[54-56\].
3.4. Continuum Limit of Off-Lattice Deposition
The second problem with the mean-field rate equation is identified when one attempts to use it in the continuum limit corresponding to off-lattice deposition, i.e., for $`k\to \infty `$. Note that Eq. (6) has no regular limit as $`k\to \infty `$. The mean-field approach is essentially the fast-diffusion approximation, assuming that diffusional relaxation is efficient enough to equilibrate nonuniform density fluctuations on time scales fast compared to the time scales of the deposition events. Thus, the mean-field results are formulated in terms of the uniform properties, such as the density. It turns out, however, that the simplest, $`k^{\mathrm{th}}`$-power of the reactant density form Eq. (6) is only appropriate for times $`t\gg e^{k-1}/(RV)`$.
This conclusion was reached by assuming the fast-diffusion, randomized hard-core reactant system form of the inter-reactant distribution function in $`1D`$. This approach, not detailed here, allows estimation of the limits of validity of the mean-field results, and it correctly suggests mean-field validity for $`k=4,5,\mathrm{\ldots }`$, with logarithmic corrections for $`k=3`$ and complete breakdown of the mean-field assumptions for $`k=2`$. This detailed analysis yields the modified mean-field relation
$$\frac{d\rho }{dt}=\frac{\gamma RV(1-\rho )^k}{\left(1-\rho +k^{-1}\rho \right)^{k-1}}\qquad (D=1),$$
$`(8)`$
where $`\gamma `$ is some effective dimensionless rate constant. This new expression applies uniformly as $`k\to \infty `$. Thus, the continuum deposition is also asymptotically mean-field, with the essentially-singular rate equation
$$\frac{d\rho }{dt}=\gamma (1-\rho )\mathrm{exp}[-\rho /(1-\rho )]\qquad (k=\infty ,D=1).$$
$`(9)`$
The approach to the full, saturation coverage for large times is extremely slow,
$$1-\rho (t)\propto \frac{1}{\mathrm{ln}\left(t\mathrm{ln}t\right)}\qquad (k=\infty ,D=1).$$
$`(10)`$
Similar predictions were also derived for $`k`$-particle chemical reactions .
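The extreme slowness of the approach in Eq. (10) can be checked by direct integration of the rate equation (9). The sketch below (Python with SciPy; $`\gamma =1`$ and the sampling times are arbitrary choices) prints $`1-\rho (t)`$ alongside $`1/\mathrm{ln}(t\mathrm{ln}t)`$; their ratio varies only slowly over several decades, consistent with the asymptotic law.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    r = y[0]
    return [(1.0 - r) * np.exp(-r / (1.0 - r))]   # Eq. (9) with gamma = 1

times = [1e3, 1e5, 1e7]
sol = solve_ivp(rhs, (0.0, times[-1]), [0.0], method='LSODA',
                rtol=1e-9, atol=1e-12, t_eval=times)
for t, r in zip(sol.t, sol.y[0]):
    print(t, 1.0 - r, 1.0 / np.log(t * np.log(t)))
```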
3.5. Comments on Multilayer Deposition
When particles are allowed to attach also on top of each other, with possibly some rearrangement processes allowed as well, multilayer deposits will be formed. It is important to note that the large-layer structure of the deposit and fluctuation properties of the growing surface will be determined by the transport mechanism of particles to the surface and by the allowed relaxations (rearrangements). Indeed, these two characteristics determine the screening properties of the multilayer formation process which in turn shape the deposit morphology, which can range from fractal to dense, and the roughening of the growing deposit surface. There is a large body of research studying such growth, with recent emphasis on the growing surface fluctuation properties.
However, the feature characteristic of the RSA process, i.e., the exclusion due to particle size, plays no role in determining the universal, large-scale properties of thick deposits and their surfaces. Indeed, RSA-like jamming will be important only for the detailed morphology of the first few layers in a multilayer deposit. However, it turns out that RSA-like approaches (with relaxation) can be useful in modeling granular compaction .
In view of the above remarks, multilayer deposition models involving jamming effects have been studied relatively less. They can be divided into two groups. Firstly, the structure of the deposit in the first few layers is of interest \[71-73\] because these layers retain memory of the surface. The variation of density and other correlation properties away from the wall has structure on the length scale of the particle size. These typically oscillatory features decay away with the distance from the wall. Numerical Monte Carlo simulation aspects of continuum multilayer deposition (ballistic deposition of $`3D`$ balls) were reviewed in . Secondly, few-layer deposition processes have been of interest in some experimental systems. Mean-field theories of multilayer deposition, with particle size and interactions accounted for, were formulated and used to fit such data \[15-16,75-76\].
4. Two-Dimensional Deposition with Diffusional Relaxation
4.1. Combined Effects of Jamming and Diffusion
We now turn to the $`2D`$ case of deposition of extended objects on planar surfaces, accompanied by diffusional relaxation, assuming monolayer deposits. We note that the available theoretical results are limited to a few studies \[34,77-79\]. They indicate a rich pattern of new effects as compared to $`1D`$. In fact, there exists extensive literature on deposition with diffusional relaxation in other models, in particular those where the jamming effect is not present or plays no significant role. These include deposition of monomer particles, usually of atomic dimensions, which align with the underlying lattice without jamming, as well as models where many layers are formed (mentioned in the preceding section).
The $`2D`$ deposition with relaxation of extended objects is of interest in certain experimental systems where the depositing objects are proteins . Here we focus on the combined effect of jamming and diffusion, and emphasize dynamics at large times. For early stages of the deposition process, low-density approximation schemes can be used. One such application was reported for continuum deposition of circles on a plane.
In order to identify new features characteristic of $`2D`$, let us consider deposition of $`2\times 2`$ squares on the square lattice. The particles are exactly aligned with the $`2\times 2`$ lattice sites as shown in Figure 4. Furthermore, we assume that the diffusional hopping is along the lattice directions $`\pm x`$ and $`\pm y`$, one lattice spacing at a time. In this model dense configurations involve domains of four phases as shown in Figure 4. As a result, immobile fragments of empty area can exist. Each such single-site vacancy (Figure 4) serves as a meeting point of four domain walls.
Here by “immobile” we mean that the vacancy cannot move due to local motion of the surrounding particles. For it to move, a larger empty-area fragment must first arrive, along one of the domain walls. One such larger empty void is shown in Figure 4. Note that it serves as a kink in the domain wall. Existence of locally immobile (“frozen”) vacancies suggests possible frozen glassy behavior with extremely slow relaxation, at least locally. The full characterization of the dynamics of this model requires further study. The first numerical results do provide some answers which will be reviewed shortly.
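A minimal kinetic Monte Carlo sketch of this model is given below (plain Python; the lattice size, rates, run time and seed are arbitrary illustrative assumptions, not the parameters of the published simulations). Squares of size $`2\times 2`$ deposit with rate $`R`$ per lattice site and hop by one lattice spacing with attempt rate $`H`$ per particle; at late times the coverage creeps upward very slowly, with locally frozen vacancies persisting.

```python
import random

def deposit_2x2(L=32, R=1.0, H=10.0, t_max=50.0, seed=3):
    """Deposition of 2x2 squares on an LxL periodic lattice (rate R per
    site) with nearest-neighbor hopping of whole squares (rate H per
    particle). Returns the coverage at time t_max."""
    rng = random.Random(seed)
    occ = [[0] * L for _ in range(L)]           # monomer-cell occupancy
    parts = []                                  # lower-left corners (x, y)

    def cells(x, y):
        return [((x + dx) % L, (y + dy) % L) for dx in (0, 1) for dy in (0, 1)]

    def is_free(cs, own=frozenset()):
        return all(occ[a][b] == 0 or (a, b) in own for a, b in cs)

    t = 0.0
    while t < t_max:
        dep_rate, hop_rate = R * L * L, H * len(parts)
        total = dep_rate + hop_rate
        t += rng.expovariate(total)             # Gillespie time increment
        if rng.random() < dep_rate / total:     # deposition attempt
            x, y = rng.randrange(L), rng.randrange(L)
            if is_free(cells(x, y)):
                for a, b in cells(x, y):
                    occ[a][b] = 1
                parts.append((x, y))
        else:                                   # hopping attempt
            j = rng.randrange(len(parts))
            x, y = parts[j]
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            nx, ny = (x + dx) % L, (y + dy) % L
            if is_free(cells(nx, ny), own=frozenset(cells(x, y))):
                for a, b in cells(x, y):
                    occ[a][b] = 0
                for a, b in cells(nx, ny):
                    occ[a][b] = 1
                parts[j] = (nx, ny)
    return 4.0 * len(parts) / (L * L)

print(deposit_2x2())   # coverage above the RSA-only value; growth is very slow
```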
4.2. Ordering by Shortening of Domain Walls
We first consider a simpler model depicted in Figure 5. In this model \[78-79\] the extended particles are squares of size $`\sqrt{2}\times \sqrt{2}`$. They are rotated 45° with respect to the underlying square lattice. Their diffusion, however, is along the vertical and horizontal lattice axes, by hopping one lattice spacing at a time. The equilibrium variant of this model (without deposition, with fixed particle density) is the well-studied hard-square model which, at large densities, phase separates into two distinct phases. These two phases also play a role in the late stages of RSA with diffusion. Indeed, at large densities the empty area is stored in domain walls separating ordered regions. One such domain wall is shown in Figure 5. Snapshots of actual Monte Carlo simulation results can be found in Refs. 78-79.
Figure 5 illustrates the process of ordering which essentially amounts to shortening of domain walls. In Figure 5, the domain wall gets shorter after the shaded particles diffusively rearrange to open up a deposition slot which can be covered by an arriving particle. Numerical simulations \[78-79\] find behavior reminiscent of the low-temperature equilibrium ordering processes \[83-85\] driven by diffusive evolution of the domain-wall structure. For instance, the remaining uncovered area vanishes according to
$$1-\rho (t)\propto \frac{1}{\sqrt{t}}.$$
$`(11)`$
This quantity, however, also measures the length of domain walls in the system (at large times). Thus, disregarding finite-size effects and assuming that the domain walls are not too convoluted (as confirmed by numerical simulations), we conclude that the power law Eq. (11) corresponds to typical domain sizes growing as $`\sqrt{t}`$, reminiscent of the equilibrium ordering processes of systems with nonconserved order parameter dynamics \[83-85\].
4.3. Numerical Results for Models with Frozen Vacancies
We now turn again to the $`2\times 2`$ model of Figure 4. The equilibrium variant of this model corresponds to hard squares with both nearest and next-nearest neighbor exclusion \[82,86-87\]. It has been studied in less detail than the two-phase hard-square model described in the preceding paragraphs. In fact, the equilibrium phase transition has not been fully classified (while it was Ising for the simpler model). The ordering at low temperatures and high densities was studied . However, many features noted, for instance the large entropy of the ordered arrangements, require further investigation. The dynamical variant (RSA with diffusion) of this model was studied numerically . The configuration of the single-site frozen (locally immobile) vacancies and the associated network of domain walls turn out to be boundary-condition sensitive. For periodic boundary conditions the density freezes at values $`1-\rho \propto L^{-1}`$, where $`L`$ is the linear system size.
Preliminary indications were found that the domain size and shape distributions in such a frozen state are nontrivial. Extrapolation to $`L\to \infty `$ indicates that the power-law behavior similar to Eq. (11) is nondiffusive: the exponent $`1/2`$ is replaced by $`\simeq 0.57`$. However, the density of the smallest mobile vacancies, i.e., dimer kinks in domain walls, one of which is illustrated in Figure 4, does decrease diffusively. Further studies are needed to fully clarify the ordering process associated with the approach to the full coverage as $`t\to \infty `$ and $`L\to \infty `$ in this model.
Even more complicated behaviors are possible when the depositing objects are not symmetric and can have several orientations as they reach the substrate. In addition to translational diffusion (hopping), one has to consider possible rotational motion. The square-lattice deposition of dimers, with hopping processes including one-lattice-spacing motion along the dimer axis and 90° rotations about a constituent monomer, was studied . The dimers were allowed to deposit vertically and horizontally. In this case, the full close-packed coverage is not achieved at all because the frozen vacancy sites can be embedded in, and move by diffusion in, extended structures of different topologies. These structures are probably less efficiently demolished by the motion of mobile vacancies than are the localized frozen vacancies in the model of Figure 4.
5. Conclusion
In summary, we reviewed theoretical developments in the description of deposition processes of extended objects, with jamming and diffusional relaxation. While significant progress has been achieved in $`1D`$, the $`2D`$ systems require further study. Most of these investigations will involve large-scale numerical simulations.
Other research directions that require further work include multilayer deposition and particle detachment, especially the theoretical description of the latter, including the description of the distribution of values/shapes of the primary minimum in the particle-surface interaction potential. This would make it possible to advance beyond the present theoretical trend of studying deposition as mainly the process of particle transport to the surface, with little or no role played by the details of the actual particle-surface and particle-particle double-layer and other interactions. Ultimately, we would like to interrelate the present deposition studies and approaches in the study of adhesion , typically of larger particles with sizes up to several microns, at surfaces.
REFERENCES
Particle Deposition at the Solid-Liquid Interface, edited by Tardos, Th.F., and Gregory, J., Colloids Surf., Vol. 39, No. 1/3, 30 August, 1989.
Advances in Particle Adhesion, edited by Rimai, D.S., and Sharpe, L.H. (Gordon and Breach Publishers, Amsterdam, 1996).
Particle Deposition & Aggregation. Measurement, Modeling and Simulation, Elimelech, M., Gregory, J., Jia, X., and Williams, R.A. (Butterworth-Heinemann Woburn, MA, 1995).
Adhesion of Submicron Particles on Solid Surfaces, edited by Privman, V., Colloids Surf. A (in print, 2000).
Levich, V.G., Physiochemical Hydrodynamics (Prentice-Hall, London, 1962).
Privman, V., Trends in Statistical Physics 1, 89 (1994).
Nonequilibrium Statistical Mechanics in One Dimension, edited by Privman, V. (Cambridge University Press, 1997).
Feder, J., and Giaever, I., J. Colloid Interface Sci. 78, 144 (1980).
Schmitt, A., Varoqui, R., Uniyal, S., Brash, J.L., and Pusiner, C., J. Colloid Interface Sci. 92, 25 (1983).
Onoda, G.Y., and Liniger, E.G., Phys. Rev. A33, 715 (1986).
Kallay, N., Tomić, M., Biškup, B., Kunjašić, I., and Matijević, E., Colloids Surf. 28, 185 (1987).
Aptel, J.D., Voegel, J.C., and Schmitt, A., Colloids Surf. 29, 359 (1988).
Adamczyk, Z., Colloids Surf. 39, 1 (1989).
Adamczyk, Z., Zembala, M., Siwek, B., and Warszyński, P., J. Colloid Interface Sci. 140, 123 (1990).
Ryde, N., Kihira, H., and Matijević, E., J. Colloid Interface Sci. 151, 421 (1992).
Song, L., and Elimelech, M., Colloids Surf. A73, 49 (1993).
Ramsden, J.J., J. Statist. Phys. 73, 853 (1993).
Murphy, C.J., Arkin, M.R., Jenkins, Y., Ghatlia, N.D., Bossmann, S.H., Turro, N.J., and Barton, J.K., Science 262, 1025 (1993).
Bartelt, M.C., and Privman, V., Internat. J. Mod. Phys. B5, 2883 (1991).
Evans, J.W., Rev. Mod. Phys. 65, 1281 (1993).
Widom, B., J. Chem. Phys. 58, 4043 (1973).
Schaaf, P., and Talbot, J., Phys. Rev. Lett. 62, 175 (1989).
Dickman, R., Wang, J.-S., and Jensen, I., J. Chem. Phys. 94, 8252 (1991).
Gonzalez, J.J., Hemmer, P.C., and Høye, J.S., Chem. Phys. 3, 228 (1974).
Feder, J., J. Theor. Biology 87, 237 (1980).
Tory, E.M., Jodrey, W.S., and Pickard, D.K., J. Theor. Biology 102, 439 (1983).
Hinrichsen, E.L., Feder, J., and Jøssang, T., J. Statist. Phys. 44, 793 (1986).
Burgos, E., and Bonadeo, H., J. Phys. A20, 1193 (1987).
Barker, G.C., and Grimson, M.J., J. Phys. A20, 2225 (1987).
Vigil, R.D., and Ziff, R.M., J. Chem. Phys. 91, 2599 (1989).
Talbot, J., Tarjus, G., and Schaaf, P., Phys. Rev. A40, 4808 (1989).
Vigil, R.D., and Ziff, R.M., J. Chem. Phys. 93, 8270 (1990).
Sherwood, J.D., J. Phys. A23, 2827 (1990).
Tarjus, G., Schaaf, P., and Talbot, J., J. Chem. Phys. 93, 8352 (1990).
Brosilow, B.J., Ziff, R.M., and Vigil, R.D., Phys. Rev. A43, 631 (1991).
Privman, V., Wang, J.-S., and Nielaba, P., Phys. Rev. B43, 3366 (1991).
Pomeau, Y., J. Phys. A13, L193 (1980).
Swendsen, R.H., Phys. Rev. A24, 504 (1981).
Kallay, N., Biškup, B., Tomić, M., and Matijević, E., J. Colloid Interface Sci. 114, 357 (1986).
Barma, M., Grynberg, M.D., and Stinchcombe, R.B., Phys. Rev. Lett. 70, 1033 (1993).
Stinchcombe, R.B., Grynberg, M.D., and Barma, M., Phys. Rev. E47, 4018 (1993).
Grynberg, M.D., Newman, T.J., and Stinchcombe, R.B., Phys. Rev. E50, 957 (1994).
Grynberg, M.D., and Stinchcombe, R.B., Phys. Rev. E49, R23 (1994).
Schütz, G.M., J. Statist. Phys. 79, 243 (1995).
Krapivsky, P.L., and Ben-Naim, E., J. Chem. Phys. 100, 6778 (1994).
Barma, M., and Dhar, D., Phys. Rev. Lett. 73, 2135 (1994).
Bossmann, S.H., and Schulman, L.S., in Nonequilibrium Statistical Mechanics in One Dimension, edited by Privman, V. (Cambridge University Press, 1997), p. 443.
Privman, V., and Barma, M., J. Chem. Phys. 97, 6714 (1992).
Nielaba, P., and Privman, V., Mod. Phys. Lett. B 6, 533 (1992).
Bonnier, B., and McCabe, J., Europhys. Lett. 25, 399 (1994).
Kang, K., Meakin, P., Oh, J.H., and Redner, S., J. Phys. A 17, L665 (1984).
Cornell, S., Droz, M., and Chopard, B., Phys. Rev. A44, 4826 (1991).
Privman, V., and Grynberg, M.D., J. Phys. A 25, 6575 (1992).
ben-Avraham, D., Phys. Rev. Lett. 71, 3733 (1993).
Krapivsky, P.L., Phys. Rev. E 49, 3223 (1994).
Lee, B.P., J. Phys. A 27, 2533 (1994).
Grynberg, M.D., Phys. Rev. E 57, 74 (1998).
Kang, K., and Redner, S., Phys. Rev. Lett. 52, 955 (1984).
Kang, K., and Redner, S., Phys. Rev. A32, 435 (1985).
Racz, Z., Phys. Rev. Lett. 55, 1707 (1985).
Bramson, M., and Lebowitz, J.L., Phys. Rev. Lett. 61, 2397 (1988).
Balding, D.J., and Green, N.J.B., Phys. Rev. A 40, 4585 (1989).
Amar, J.G., and Family, F., Phys. Rev. A 41, 3258 (1990).
ben-Avraham, D., Burschka, M.A., and Doering, C.R., J. Statist. Phys. 60, 695 (1990).
Bramson, M., and Lebowitz, J.L., J. Statist. Phys. 62, 297 (1991).
Privman, V., J. Statist. Phys. 69, 629 (1992).
Privman, V., and Nielaba, P., Europhys. Lett. 18, 673 (1992).
Grynberg, M.D., and Stinchcombe, R.B., Phys. Rev. Lett. 74, 1242 (1995).
Gan, C.K., and Wang, J.-S., Phys. Rev. E55, 107 (1997).
de Oliveira, M.J., and Petri, A., J. Phys. A31, L425 (1998).
Xiao, R.-F., Alexander, J.I.D., and Rosenberger, F., Phys. Rev. A45, R571 (1992).
Lubachevsky, B.D., Privman, V., and Roy, S.C., Phys. Rev. E47, 48 (1993).
Lubachevsky, B.D., Privman, V., and Roy, S.C., J. Comp. Phys. 126, 152 (1996).
Privman, V., Frisch, H.L., Ryde, N., and Matijević, E., J. Chem. Soc. Farad. Tran. 87, 1371 (1991).
Ryde, N., Kallay, N., and Matijević, E., J. Chem. Soc. Farad. Tran. 87, 1377 (1991).
Zelenev, A., Privman, V., and Matijević, E., Colloids Surf. A135, 1 (1998).
Wang, J.-S., Nielaba, P., and Privman, V., Physica A199, 527 (1993).
Wang, J.-S., Nielaba, P., and Privman, V., Mod. Phys. Lett. B7, 189 (1993).
James, E.W., Liu, D.-J., and Evans, J.W., in Ref. 4.
Grigera, S.A., Grigera, T.S., and Grigera, J.R., Phys. Lett A226, 124 (1997).
Venables, J.A., Spiller, G.D.T., and Hanbücken, M., Rept. Prog. Phys. 47, 399 (1984).
Runnels, L.K., in Phase Transitions and Critical Phenomena, Vol. 2, edited by Domb, C., and Green, M.S. (Academic, London, 1972), p. 305.
Gunton, J.D., San Miguel, M., and Sahni, P.S., in Phase Transitions and Critical Phenomena, Vol. 8, edited by Domb, C., and Lebowitz, J.L. (Academic, London, 1983), p. 267.
Mouritsen, O.G., in Kinetics of Ordering and Growth at Surfaces, edited by Lagally, M.G. (Plenum, NY, 1990), p. 1.
Sadiq, A., and Binder, K., J. Statist. Phys. 35, 517 (1984).
Binder, K., and Landau, D.P., Phys. Rev. B21, 1941 (1980).
Kinzel, W., and Schick, M., Phys. Rev. B24, 324 (1981).
Figure Captions
Figure 1: Possible configurations of particles at surfaces. From left to right, $`A`$ — particles deposited directly on the collector; $`B`$ — particles deposited on top of other particles. We next show an example of jamming, $`C`$ — a particle marked by an open circle cannot fit in the lowest layer at the surface. A top view of a more realistic two-dimensional ($`2D`$) surface configuration, $`D`$, is shown in the inset. The rightmost example, $`E`$, illustrates screening: the surface position marked by the open circle is not reachable.
Figure 2: Deposition of dimers on the $`1D`$ lattice. Only one of the three hatched dimers can deposit on the surface, which then becomes fully jammed in the interval shown.
Figure 3: Schematic variation of the coverage $`\rho (t)`$ with time for deposition without (lower curve) and with (upper curve) diffusional or other relaxation. The “ordered” density corresponds to close packing.
Figure 4: Fragment of a deposit configuration in the deposition of $`2\times 2`$ squares. Illustrated are one single-site frozen vacancy at which four domain walls meet (indicated by arrows), and one dimer vacancy which causes a kink in one of the domain walls.
Figure 5: Illustration of deposition of $`\sqrt{2}\times \sqrt{2}`$ particles on the square lattice. Diffusional motion during time interval from $`t_1`$ to $`t_2`$ can rearrange the empty area “stored” in the domain wall to open up a new landing site for deposition. This is illustrated by the shaded particles.
Ref. SISSA 5/2000/EP
February 2000
On the New Conditions for a Total Neutrino Conversion in a Medium
M. V. Chizhov
Centre for Space Research and Technologies, Faculty of Physics,
University of Sofia, 1164 Sofia, Bulgaria
E-mail: mih@phys.uni-sofia.bg
S. T. Petcov <sup>1</sup><sup>1</sup>1Also at: Institute of Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia, Bulgaria
Scuola Internazionale Superiore di Studi Avanzati, I-34014 Trieste, Italy, and
Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34014 Trieste, Italy
## Abstract
We show that the arguments forming the basis for the claim that the conditions for total neutrino conversion derived and studied in detail in “are just the conditions of the parametric resonance of neutrino oscillations supplemented by the requirement that the parametric enhancement be complete”, given in , have flaws which make the claim physically questionable. We show also that in the case of the transitions in the Earth of the Earth-core-crossing solar and atmospheric neutrinos, the peaks in the relevant transition probabilities $`P_{ab}`$, associated with the new conditions, $`\mathrm{max}P_{ab}=1`$, are of physical relevance, in contrast to what is suggested in . Actually, the enhancement of $`P_{ab}`$ in any region of the corresponding parameter space is essentially determined by these absolute maxima of $`P_{ab}`$. We comment on a few other aspects of the results derived in which have been misunderstood and/or misinterpreted in .
1. In a Comment on our results derived and discussed briefly in and in detail in and on the results derived by one of us in , Akhmedov and Smirnov claim , in particular, that “the conditions for total neutrino conversion studied by Chizhov and Petcov <sup>2</sup><sup>2</sup>2By using the verb “studied” in connection with our new conditions for a total neutrino conversion in a medium , the authors of suggest indirectly that these conditions were already considered in the literature before the publication of our papers. We would like to note that none of our new conditions for a total neutrino conversion in a medium were derived, postulated and/or discussed in some form in the literature on the subject published before the articles . are just the conditions of the parametric resonance of neutrino oscillations supplemented by the requirement that the parametric enhancement be complete.” We show below that the arguments forming the basis for these claims have flaws which, in our opinion, make the claims physically questionable. We show also that in the case of the transitions in the Earth of the Earth-core-crossing solar and atmospheric neutrinos the peaks in the relevant transition probabilities $`P_{\alpha \beta }`$, associated with the new conditions, $`\mathrm{max}P_{\alpha \beta }=1`$, are of physical relevance, in contrast to what is suggested in . Actually, the enhancement of $`P_{\alpha \beta }`$ in any region of the corresponding parameter space is essentially determined by these absolute maxima of $`P_{\alpha \beta }`$. We comment on a few other aspects of the results derived in which have been misunderstood and/or misinterpreted in .
The form of the new conditions for a total neutrino conversion in a medium consisting of two or three (nonperiodic) constant density layers, derived in , the region of the parameter space (i.e., the $`\mathrm{\Delta }m^2/E-\mathrm{sin}^22\theta `$ plane) where they can be realized, and the physical interpretation of the corresponding absolute maxima of the neutrino transition probabilities of interest as being caused by a maximal constructive interference between the amplitudes of the neutrino transitions in the (two) different constant density layers, found in our studies, made us conclude in that these new conditions differ from the conditions for parametric resonances in the neutrino transitions, discussed in the articles and possible in a medium with density varying periodically along the neutrino path. The Comment does not provide viable arguments against our conclusions.
2. Consider transitions of neutrinos crossing a medium consisting of i) two layers with different constant densities $`N_{1,2}`$ and widths $`L_{1,2}`$, or of ii) three layers of constant density, with the first and the third layers having identical densities $`N_1`$ and widths $`L_1`$, which differ from those of the second layer, $`N_2`$ and $`L_2`$ . Suppose the transitions are caused by two-neutrino mixing in vacuum with mixing angle $`\theta `$. Let us denote by $`\theta _i`$ and 2$`\varphi _i`$, $`i=1,2`$, the mixing angle in matter in the layer with density $`N_i`$ and the phase difference acquired by the two neutrino energy-eigenstates after neutrinos have crossed this layer. It proves convenient to introduce the quantities (see, e.g., ):
$$\mathrm{cos}\mathrm{\Phi }\equiv Y=c_1c_2-s_1s_2\mathrm{cos}(2\theta _2-2\theta _1),$$
(1)
$$𝐗^2=1-Y^2,$$
(2)
$$n_3\mathrm{sin}\mathrm{\Phi }\equiv X_3=-(s_1c_2\mathrm{cos}2\theta _1+c_1s_2\mathrm{cos}2\theta _2),$$
(3)
$`X_3`$ being the third component of the vector <sup>3</sup><sup>3</sup>3To facilitate the understanding of the main points of our criticism of , we use the same notations as in for most of the quantities discussed. $`𝐗=(X_1,X_2,X_3)`$, whose first two components are also given in terms of $`\theta _i`$ and $`\varphi _i`$ (see ). The probability of the transition $`\nu _a\to \nu _b`$ (i.e., $`\nu _e\to \nu _{\mu (\tau )}`$, $`\nu _\mu \to \nu _e`$, etc.) after neutrinos have crossed $`n`$ alternating layers with densities $`N_1`$ and $`N_2`$ is given according to by:
$$P(\nu _a\to \nu _b;nL)=\left(1-\frac{X_3^2}{𝐗^2}\right)\mathrm{sin}^2\mathrm{\Phi }_p=\frac{X_1^2+X_2^2}{X_1^2+X_2^2+X_3^2}\mathrm{sin}^2\mathrm{\Phi }_p,$$
(4)
where $`\mathrm{\Phi }_p=(n/2)\mathrm{\Phi }`$ if the number of layers $`n`$ is even, and
$$\mathrm{\Phi }_p=\frac{n-1}{2}\mathrm{\Phi }+\phi ,\qquad \phi =\mathrm{arcsin}\left(s_1\mathrm{sin}2\theta _1/\sqrt{1-X_3^2/|𝐗|^2}\right)$$
(5)
for an odd number of layers, with the first layer having density $`N_1`$.
The first thing to note is that the expression for $`\phi `$ in eq. (5), given in , is strictly speaking incorrect: it is valid only if $`Z\geq 0`$, where
$$Z=s_2\mathrm{sin}2\theta _2+s_1(\mathrm{sin}2\theta _1)Y.$$
(6)
The correct expression for $`\phi `$ for an arbitrary sign of $`Z`$ reads:
$$\phi =\mathrm{arctan}\left(s_1\mathrm{sin}(2\theta _1)|𝐗|/Z\right).$$
(7)
The authors of demonstrate the same imprecision in eq. (2) of their Comment: the functions $`\mathrm{arccos}Y`$ and $`\mathrm{arcsin}|𝐗|`$ have different defining regions and it is incorrect to write $`\mathrm{arccos}Y=\mathrm{arcsin}|𝐗|`$: this equality is obviously wrong when $`Y<0`$.
According to the authors of , “Eqs. (4) and (5) describe the parametric oscillations with the pre-sine factor in (4) and $`\mathrm{\Phi }_p`$ being the oscillation depth and phase.” and further “Parametric resonance occurs when the depth of the oscillations becomes equal to unity. The resonance condition is therefore $`X_3=0`$.”
In the two-layer case i) considered by us in , which is relevant for the present discussion, one has $`\mathrm{\Phi }_p=\mathrm{\Phi }`$ ($`n=2`$). This result and eqs. (1) and (2) imply that actually $`\mathrm{sin}^2\mathrm{\Phi }_p=X_1^2+X_2^2+X_3^2`$. Correspondingly, the “parametric-resonance” form in which the authors of cast the probability $`P(\nu _a\to \nu _b;nL)`$ in this case is artificial: the probability is given by
$$P(\nu _a\to \nu _b;2L)=X_1^2+X_2^2=1-Y^2-X_3^2,$$
(8)
where we have used eq. (2). Therefore any resonance interpretation of the probability $`P(\nu _a\to \nu _b;2L)`$ based solely on eq. (4) with $`n=2`$ seems to us physically unjustified. The new conditions for a total neutrino conversion in a medium follow in this case from the form (8) of $`P(\nu _a\to \nu _b;2L)`$ and read :
$$Y=0,X_3=0.$$
(9)
It should be clear from eqs. (8) and (9) that the condition $`X_3=0`$ alone does not ensure the existence even of a local maximum of the probability $`P(\nu _a\to \nu _b;2L)`$.
It is not guaranteed a priori that the equations in (9) have non-trivial solutions in general, and in the specific case of transitions of neutrinos crossing the Earth core and the Earth mantle on the way to the detector. As we have shown in , the solutions of the conditions (9) i) exist and are given by
$$\mathrm{solution}\ A^{(2)}:\{\begin{array}{c}\mathrm{tan}\varphi _1=\pm \sqrt{\frac{-\mathrm{cos}\left(2\theta _2\right)}{\mathrm{cos}\left(2\theta _1\right)\mathrm{cos}\left(2\theta _2-2\theta _1\right)}},\hfill \\ \mathrm{tan}\varphi _2=\pm \sqrt{\frac{-\mathrm{cos}\left(2\theta _1\right)}{\mathrm{cos}\left(2\theta _2\right)\mathrm{cos}\left(2\theta _2-2\theta _1\right)}},\hfill \end{array}$$
(10)
where the signs are correlated, and ii) they can be realized only in the region determined by the three inequalities:
$$\mathrm{region}\ A^{(2)}:\{\begin{array}{c}\mathrm{cos}2\theta _1\geq 0\hfill \\ \mathrm{cos}2\theta _2\leq 0\hfill \\ \mathrm{cos}(2\theta _2-2\theta _1)\geq 0.\hfill \end{array}$$
(11)
It was demonstrated in as well that the two conditions in eq. (9), or equivalently the solutions expressed by eqs. (10) and (11), are conditions for a maximal constructive interference between the amplitudes of the neutrino transitions in the two layers. Thus, a natural physical interpretation of the absolute maxima of $`P(\nu _a\to \nu _b;2L)`$ associated with the conditions (9) is that of constructive interference maxima.
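This can be verified numerically in a few lines. The sketch below (Python with NumPy; the mixing angles are arbitrary illustrative values inside region (11), not fitted Earth parameters) computes the phases from the solution (10), using the conventions of eqs. (1)-(3), and confirms that $`P(\nu _a\to \nu _b;2L)=1-Y^2-X_3^2`$ then equals unity.

```python
import numpy as np

# Illustrative mixing angles inside region (11):
# cos(2*th1) >= 0, cos(2*th2) <= 0, cos(2*th2 - 2*th1) >= 0.
th1, th2 = 0.35, 1.00
c1t, c2t = np.cos(2 * th1), np.cos(2 * th2)
cD = np.cos(2 * th2 - 2 * th1)

# Phases from the solution (10), taking the upper signs.
phi1 = np.arctan(np.sqrt(-c2t / (c1t * cD)))
phi2 = np.arctan(np.sqrt(-c1t / (c2t * cD)))

c1, s1 = np.cos(phi1), np.sin(phi1)
c2, s2 = np.cos(phi2), np.sin(phi2)
Y = c1 * c2 - s1 * s2 * cD                      # eq. (1)
X3 = -(s1 * c2 * c1t + c1 * s2 * c2t)           # eq. (3)
print(Y, X3, 1.0 - Y**2 - X3**2)                # -> 0, 0, 1 (total conversion)
```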
It should be clear from the above arguments that we do not see physical reasons to call $`X_3=0`$ in the case under discussion a “parametric resonance condition”. Using the trick of one can easily cast the probability of two-neutrino oscillations in vacuum and in matter with constant density, for example, in the “parametric-resonance” form (4). The analog of the condition $`X_3=0`$ reduces in these cases respectively to $`\mathrm{sin}^22\theta =1`$ and $`\mathrm{sin}^22\theta _m=1`$, $`\theta _m`$ being the mixing angle in matter. Thus, according to the terminology suggested in , one should call the condition $`\mathrm{sin}^22\theta =1`$ and the MSW resonance condition $`\mathrm{sin}^22\theta _m=1`$ “parametric-resonance conditions”. One can use such a terminology, of course, but this is not justified by the physics of the process and nobody uses it. The situation in the two-layer case discussed above is in essence the same.
Analogous results and conclusions are valid in the case of a medium with three layers (case ii)) considered in ($`n=3`$ in eqs. (4) and (5)). Using the correct expression for $`\phi `$, eq. (7), one finds again that the “parametric resonance” form in which the authors of write the probability of interest $`P(\nu _a\to \nu _b;3L)`$ is artificial: the probability has the same form as in eq. (8)
$$P(\nu _a\to \nu _b;3L)=1-\overline{Y}^2-\overline{X}_3^2,$$
(12)
where
$$\overline{Y}=-c_2+2c_1Y,$$
(13)
and
$$\overline{X}_3=-s_2\mathrm{cos}2\theta _2-2s_1\mathrm{cos}(2\theta _1)Y.$$
(14)
The conditions for a total neutrino conversion in this case read :
$$\overline{Y}=0,\overline{X}_3=0.$$
(15)
Their solutions in the case of the inequality $`N_1<N_2`$ (corresponding to the relation between the densities in the Earth mantle and core) were given in and have the form:
$$\mathrm{solution}\ A^{(3)}:\{\begin{array}{c}\mathrm{tan}\varphi _1=\pm \sqrt{\frac{-\mathrm{cos}2\theta _2}{\mathrm{cos}\left(2\theta _2-4\theta _1\right)}},\hfill \\ \mathrm{tan}\varphi _2=\pm \frac{\mathrm{cos}2\theta _1}{\sqrt{-\mathrm{cos}\left(2\theta _2\right)\mathrm{cos}\left(2\theta _2-4\theta _1\right)}},\hfill \end{array}$$
(16)
where the signs are again correlated. The solutions can only be realized in the region
$$\mathrm{region}\ A^{(3)}:\{\begin{array}{c}\mathrm{cos}(2\theta _2)\leq 0,\hfill \\ \mathrm{cos}(2\theta _2-4\theta _1)\geq 0.\hfill \end{array}$$
(17)
It is easy to show that for any number of layers $`n`$, the denominator in (4) is always canceled by $`\mathrm{sin}^2\mathrm{\Phi }_p`$ and $`P(\nu _a\to \nu _b;nL)`$ is just a polynomial without any resonance-like feature. Indeed, for even $`n`$, as can be shown, we have
$$\mathrm{sin}\mathrm{\Phi }_p=\mathrm{sin}\mathrm{\Phi }U_{n/2-1}(\mathrm{cos}\mathrm{\Phi }),$$
(18)
where $`U_n(x)`$ is the well-known Chebyshev polynomial of the second kind . In the case of an odd number of layers ($`n\geq 3`$) one finds
$$P(\nu _a\to \nu _b;nL)=\left[s_1\mathrm{sin}2\theta _1\mathrm{cos}\left(\frac{n-1}{2}\mathrm{\Phi }\right)+ZU_{{\scriptscriptstyle \frac{n-3}{2}}}(\mathrm{cos}\mathrm{\Phi })\right]^2.$$
(19)
Finally, similar considerations apply to the probability of the $`\nu _2\to \nu _e`$ transitions, $`\nu _2`$ being the heavier of the two vacuum mass-eigenstate neutrinos, in the three-layer case ii), $`P(\nu _2\to \nu _e;3L)`$. This probability can be used to account for the Earth matter effects in the transitions of solar neutrinos traversing the Earth: $`P(\nu _2\to \nu _e;3L)`$ corresponds to the case of solar neutrinos crossing the Earth mantle, the core and the mantle again on the way to the detector. As was shown in , the conditions for a total $`\nu _2\to \nu _e`$ conversion, $`\mathrm{max}P(\nu _2\to \nu _e;3L)=1`$, read:
$$\overline{Y}=0,\qquad \overline{X}_3^{\prime }=0,$$
(20)
where $`\overline{Y}`$ is given by eq. (13) and
$$\overline{X}_3^{\prime }=-s_2\mathrm{cos}(2\theta _2-\theta )-2s_1\mathrm{cos}(2\theta _1-\theta )Y.$$
(21)
The solutions of the conditions (20) providing the absolute maxima of $`P(\nu _2\to \nu _e;3L)`$ and the region where these solutions can take place were given in ; they can formally be obtained from eqs. (16) and (17) by replacing $`2\theta _1`$ and $`2\theta _2`$ with ($`2\theta _1-\theta `$) and ($`2\theta _2-\theta `$).
The three sets of two conditions, eqs. (9), (15) and (20), and/or their solutions (e.g., eqs. (10) and (16)), and/or the regions where the solutions can be realized (e.g., eqs. (11) and (17)), were not derived and/or discussed in any form in or in any other article on the subject of neutrino transitions in a medium published before . None of them follows from the conditions of enhancement of $`P(\nu _a\to \nu _b)`$ found in and thus they are not a particular case of the latter. That is the reason we used the term “new conditions for a total neutrino conversion in a medium” for them.
3. The authors of write further: “One well known realization of the parametric resonance condition”, i.e., of $`X_3=0`$, is <sup>4</sup><sup>4</sup>4We quote here only the references which, in our opinion, are relevant for the present discussion.$`c_1=c_2=0`$, or
$$2\varphi _1=\pi +2\pi k_1,2\varphi _2=\pi +2\pi k_2,k_1,k_2=0,1,2,\mathrm{},$$
(22)
independently of the mixing angles.” Contrary to what the authors of claim, the two conditions in eq. (22) were not given in the articles : what one finds in these articles at most is the condition $`2\varphi _1+2\varphi _2=2\pi +2\pi k`$ which is not equivalent to the two conditions in eq. (22). The two conditions in eq. (22) were discussed in detail for the 3-layer case in . Moreover, as we have shown in and would like to emphasize here again, the conditions $`c_1=0,c_2=0`$ by themself do not lead to a maximum of the neutrino transition probabilities of interest in the neutrino energy variable, unless a third nontrivial condition is fulfilled. This third condition has the following form for the $`\nu _a\nu _b`$ (i.e., $`\nu _e\nu _{\mu (\tau )}`$, $`\nu _\mu \nu _e`$, etc.) transitions in the two-layer (n = 2) and three-layer (n=3) medium cases i) and ii), respectively : $`\mathrm{cos}(2\theta _22\theta _1)=0`$ and $`\mathrm{cos}(2\theta _24\theta _1)=0`$. For the probability $`P(\nu _2\nu _e;3L)`$ it reads : $`\mathrm{cos}(2\theta _24\theta _1+\theta )=0`$. It is not difficult to convince oneself that the indicated sets of three conditions represent possible solutions respectively of (9), (15) and (20) . More generally, the condition $`X_3=0`$ alone does not guarantee the existence even of a local maximum of the neutrino transition probabilities of interest.
In what regards the article by Q.Y. Liu and A. Yu. Smirnov quoted in in connection with the conditions (22) (see ref. in ), these authors noticed that in the case of muon neutrinos crossing the Earth along the specific trajectory characterized by a Nadir angle $`h\simeq 28.4^{\circ }`$, and for $`\mathrm{sin}^22\theta \simeq 1`$ and $`\mathrm{\Delta }m^2/E\simeq (1-2)\times 10^{-4}\mathrm{eV}^2/\mathrm{GeV}`$, the $`\nu _\mu \to \nu _s`$ transition probability, $`\nu _s`$ being a sterile neutrino, is enhanced. The authors interpreted this enhancement as being due to the conditions $`2\varphi _j=\pi `$, $`j=1,2`$, which they claimed to be approximately satisfied. Actually, for the values of the parameters of the examples chosen by Q.Y. Liu and A. Yu. Smirnov to illustrate their conclusion one has $`2\varphi _1\simeq (0.6-0.9)\pi `$ and $`2\varphi _2\simeq (1.2-1.5)\pi `$. The indicated enhancement is due to the existence of a nearby total neutrino conversion point which for $`h\simeq 28.4^{\circ }`$ is located at $`\mathrm{sin}^22\theta \simeq 0.94`$ and $`\mathrm{\Delta }m^2/E\simeq 2.4\times 10^{-4}\mathrm{eV}^2/\mathrm{GeV}`$ and at which $`2\varphi _1\simeq 0.9\pi `$ and $`2\varphi _2\simeq 1.1\pi `$. We have also found in that for each given $`h<30^{\circ }`$ there are several total neutrino conversion points at large values of $`\mathrm{sin}^22\theta `$, at which the phases $`2\varphi _1`$ and $`2\varphi _2`$ are not necessarily equal to $`\pi `$ or to odd multiples of $`\pi `$ (see Table 3 in ). Thus, the explanation of the enhancement offered by the indicated authors is at best qualitative and incorrect in essence.
That $`X_3=0`$ alone is a condition for local maxima of the probability $`P(\nu _a\to \nu _b;nL)`$ was suggested in E. Kh. Akhmedov, Nucl. Phys. B538 (1999) 25 (to be quoted further as NP B538, 25), on the basis of the form of the probability in eq. (4). However, as we have already emphasized, the condition $`X_3=0`$ alone does not guarantee the existence even of a local maximum of the neutrino transition probabilities $`P(\nu _a\to \nu _b;2L)`$ and $`P(\nu _a\to \nu _b;3L)`$ . This should be clear from the “natural” expressions for the probabilities $`P(\nu _a\to \nu _b;2L)`$ and $`P(\nu _a\to \nu _b;3L)`$, given by eqs. (8) and (12). Of the three solutions for the extrema of $`P(\nu _a\to \nu _b;3L)`$ found in NP B538, 25, the solution $`c_1=0,c_2=0`$ was already discussed in detail in ref. (compare eqs. (11) - (16) and (24) in with conditions (1) on page 37 of NP B538, 25), while the other two correspond to MSW transitions <sup>5</sup><sup>5</sup>5Let us note that in what regards the cases of neutrino transitions studied in , the article NP B538, 25, contains a rather large number of incorrect statements and conclusions. Most of these statements are concentrated in Section 4 of NP B538, 25, where the author discusses the realistic case of transitions of neutrinos crossing the Earth core we were primarily interested in in . (they were also briefly discussed in ).
4. The authors of claim that “The existence of strong enhancement peaks in transition probability P rather than the condition P=1 is of physical relevance.”, although they do not give an example of a relevant strong enhancement peak (i.e., local maximum). As we have shown in (see also ), the solutions given by eqs. (10) and (16) and those for the probability $`P(\nu _2\to \nu _e;3L)`$ are realized in the transitions in the Earth of the Earth-core-crossing neutrinos (solar, atmospheric, accelerator) and lead to observable effects in these transitions. From the extensive numerical studies we have performed in the realistic cases of transitions of neutrinos in the Earth (e.g., neutrinos crossing the Earth core on the way to the detector) we do not have any evidence about the presence of significant local maxima in the neutrino transition probabilities of interest not related to the peaks of total neutrino conversion. Actually, our studies show that only the peaks of total neutrino conversion are dominating in $`P(\nu _a\to \nu _b;2L)`$, $`P(\nu _a\to \nu _b;3L)`$ and $`P(\nu _2\to \nu _e;3L)`$, and correspondingly determine the regions where these probabilities can be significant in the corresponding space of parameters. The peaks considered, e.g., in (see figs. 1 and 2) are points on the “ridges” formed by local maxima, e.g., in the energy variable at fixed values of the other parameters, leading to the peaks of total neutrino conversion, discovered by us (see figs. 6 - 9 and Tables 5 - 6 in ). As one can convince oneself using Figs. 6 - 9 from , all maxima in Figs. 4, 5, 10 - 13 and 15 in , including the relatively small local ones, are related to (and determined by) the presence of corresponding points (peaks) of total neutrino conversion.
5. The authors of studied the effects of small density perturbations on neutrino oscillations, while in we have investigated the different physical problem of large “perturbations” of density. In , neutrino oscillations in a medium consisting of an even number $`n`$ of alternating layers with densities $`N_1`$ and $`N_2`$ have been considered. However, i) the two layers were assumed to have equal widths, $`L_1=L_2`$, and ii) an enhancement of the neutrino transitions was found to take place for small vacuum mixing angles at densities $`N_{1,2}`$ much smaller than the MSW resonance density. The authors of were interested in, and found, the conditions for the classical parametric resonance in neutrino oscillations, which can take place after many periods of density modulation of the oscillations. From reading the articles it becomes clear that their authors had in mind astrophysical applications of their results (Footnote: in all the concrete examples and corresponding plots in , for instance, the physical quantities having a dimension of length are given in units of the solar radius and are comparable with it; the values of the densities used in the examples differ substantially from those met in the Earth - they are noticeably smaller than the Earth’s mantle and core densities.), and the results obtained in may indeed have astrophysical applications. Our studies were concerned primarily with the neutrino oscillations in the Earth. As a consequence, the conditions for the enhancement of $`P(\nu _a\rightarrow \nu _b)`$ obtained in differ from those found by us in . In the two- and three-layer medium cases i) and ii) discussed by us (e.g., in the case of the Earth), the enhancement found in , for example, is not realized.
To summarize, after studying ref. and the references quoted therein one can conclude that the new conditions for a total neutrino conversion in a medium found and studied in (i.e., the three sets of two conditions, eqs. (9), (15) and (20), and/or their solutions (e.g., eqs. (10) and (16)), and/or the regions where the solutions can be realized, eqs. (11) and (17)) were indeed new: they were not derived and/or discussed in any form in or in any other article on the subject of neutrino transitions in a medium published before . None of them follows from the conditions of enhancement of $`P(\nu _a\rightarrow \nu _b)`$ obtained in and thus they are not a particular case of the latter. In , in particular, the derivation of these conditions, first given in , is reproduced. Most importantly, the new conditions for a total neutrino conversion were shown in (see also ) to be realized in the transitions in the Earth of the Earth-core-crossing neutrinos (solar, atmospheric, accelerator) and to lead to observable effects in these transitions - contrary to the claims made in . As for the physical interpretation of the associated new effect of total neutrino conversion in the cases of the two-layer and three-layer media we have considered, we have proven that it is a maximal constructive interference effect. The interpretation of the effect based on the expression (4) for the neutrino transition probabilities, offered in , is, as we have pointed out, not convincing: expression (4), in particular, has an artificial resonance-like form. For the studies of the new effect of total neutrino conversion in the Earth, eqs. (8) and (12) represent one of the several possible natural expressions for the relevant neutrino transition probabilities. The rest is terminology.
# Black Hole Decay and Quantum Instantons
## Abstract
We study the analytic structure of the S-matrix which is obtained from the reduced Wheeler-DeWitt wave function describing spherically symmetric gravitational collapse of massless scalar fields. The complex simple poles in the S-matrix lead to wave functions that satisfy the same boundary condition as quasi-normal modes of a black hole, and correspond to the bounded states of the Euclidean Wheeler-DeWitt equation. These wave functions are interpreted as quantum instantons.
preprint: gr-qc/0003047
In the previous work we studied quantum mechanically the self-similar formation of a black hole by collapsing scalar fields and found the wave functions that give the correct semi-classical limit. The reduced Wheeler-DeWitt equation for gravity belongs to an exactly solvable Calogero-type system with an inverted potential, whose attractive inverse-square and repulsive square potential terms give rise to a potential barrier. The boundary condition for black hole formation was that the wave function has both an incoming and an outgoing flux at spatial infinity and only an incoming flux toward the black hole singularity.
Of particular interest is the subcritical case, in which a black hole can be formed through quantum tunneling. Due to the time-reversal symmetry, however, the subcritical wave function may be interpreted as describing the reverse process of black hole formation, that is, the decay of the black hole . Then the wave function for black hole decay should have a purely outgoing flux. This wave function is somewhat reminiscent of the gravitational wave from a perturbation of a black hole . Moreover, for a certain discrete spectrum of complex frequencies there occur quasi-normal modes that have both purely outgoing modes at spatial infinity and purely incoming ones at the horizon of the black hole .
In this paper we study the pole structure of the S-matrix which is obtained from the wave function for black hole formation. The boundary condition that the wave function should have a purely outgoing flux at spatial infinity and a purely incoming one at the classical apparent horizon leads to a discrete spectrum of complex parameters $`(c_0)`$. It is further shown that this wave function can be obtained through the analytical continuation of a bounded state of the corresponding Euclidean Wheeler-DeWitt equation. Just as quasi-normal modes of perturbations of a black hole can be interpreted as instantons , these exact wave functions of the quantum theory for black hole decay may be interpreted as quantum instantons.
The spherically symmetric geometry minimally coupled to a massless scalar field is described by a reduced action in $`(1+1)`$-dimensional spacetime; the Hilbert-Einstein action is
$$S=\frac{1}{16\pi }\int _Md^4x\sqrt{-g}\left[R-2\left(\nabla \varphi \right)^2\right]+\frac{1}{8\pi }\int _{\partial M}d^3xK\sqrt{h}.$$
(1)
The reduced action is
$$S_{sph}=\frac{1}{4}\int d^2x\sqrt{-\gamma }r^2\left[\left\{{}_{}{}^{(2)}R(\gamma )+\frac{2}{r^2}\left(\left(\nabla r\right)^2+1\right)\right\}-2\left(\nabla \varphi \right)^2\right],$$
(2)
where $`\gamma _{ab}`$ is the $`(1+1)`$-dimensional metric. The spherical spacetime metric is
$$ds^2=-2dudv+r^2d\mathrm{\Omega }_2^2,$$
(3)
where $`d\mathrm{\Omega }_2^2`$ is the usual spherical part of the metric, and $`u`$ and $`v`$ are null coordinates. The self-similarity condition is imposed such that
$$r=\sqrt{-uv}\,y(z),\varphi =\varphi (z),$$
(4)
where $`z=+v/(-u)=e^{2\tau }`$, and $`y`$ and $`\varphi `$ depend only on $`z`$. We introduce new coordinates $`(\omega ,\tau )`$
$$u=-\omega e^{-\tau },v=\omega e^\tau ,$$
(5)
to rewrite the metric as
$$ds^2=-2N^2(\tau )\omega ^2d\tau ^2+2d\omega ^2+\omega ^2y^2d\mathrm{\Omega }_2^2,$$
(6)
where $`N(\tau )`$ is the lapse function of the ADM formulation.
The classical solutions of the field equations were obtained by Roberts , and studied in connection with gravitational collapse by others . Classically, black hole formation is allowed only in the supercritical case ($`c_0>1`$), but even in the subcritical situation there is a quantum mechanical tunneling process that forms a black hole, whose probability has been calculated semiclassically .
In our previous work we quantized the system canonically with the ADM formulation to obtain the Wheeler-DeWitt equation for the quantum black hole formation
$$\left[\frac{\hbar ^2}{2K}\frac{\partial ^2}{\partial y^2}-\frac{\hbar m_P^2}{2Ky^2}\frac{\partial ^2}{\partial \varphi ^2}-K\left(1-\frac{y^2}{2}\right)\right]\mathrm{\Psi }(y,\varphi )=0,$$
(7)
where $`K/\hbar \equiv (m_P^2/\hbar ^2)(\omega _c^2/2)`$ plays the role of a cut-off parameter of the model, and we use a unit system with $`c=1`$. The wave function can be factorized into the scalar and gravitational parts,
$$\mathrm{\Psi }(y,\varphi )=\mathrm{exp}\left(\pm i\frac{Kc_0}{\hbar ^{1/2}m_P}\varphi \right)\psi (y).$$
(8)
Here the scalar field part is chosen to yield the classical momentum $`\pi _\varphi =\hbar Ky^2\dot{\varphi }/m_P^2N=\pm Kc_0`$, where $`c_0`$ is the dimensionless parameter determining the supercritical ($`c_0>1`$), the critical ($`c_0=1`$), and the subcritical ($`1>c_0>0`$) collapse.
Now the gravitational field equation of the Wheeler-DeWitt equation takes the form of a Schrödinger equation
$$\left[-\frac{\hbar ^2}{2K}\frac{d^2}{dy^2}+\frac{K}{2}\left(2-y^2-\frac{c_0^2}{y^2}\right)\right]\psi (y)=0.$$
(9)
The solution describing black hole formation was obtained in Ref. :
$$\psi _{BH}(y)=\left[\mathrm{exp}\left(-\frac{i}{2}\frac{K}{\hbar }y^2\right)\right]\left(\frac{K}{\hbar }y^2\right)^\mu M(a,b,i\frac{K}{\hbar }y^2),$$
(10)
where $`M`$ is the confluent hypergeometric function and
$$a=\frac{1}{2}-\frac{i}{2\hbar }(Q+K),b=1-\frac{i}{\hbar }Q,\mu =\frac{1}{4}-\frac{i}{2\hbar }Q$$
(11)
with
$$Q=\left(K^2c_0^2-\frac{\hbar ^2}{4}\right)^{1/2}.$$
(12)
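A quick numerical consistency check of Eqs. (9)-(12) is possible: the wave function (10) should annihilate the differential operator of Eq. (9). The sketch below is not from the original paper; it sets $`\hbar =1`$ and uses arbitrary illustrative values $`K=10`$, $`c_0=0.8`$, printing the relative residual of Eq. (9) at a few points.

```python
# Check that psi_BH of Eqs. (10)-(12) solves Eq. (9); hbar = 1 throughout.
# K and c0 are arbitrary illustrative values, not taken from the paper.
import mpmath as mp

mp.mp.dps = 30
K, c0 = mp.mpf(10), mp.mpf('0.8')
Q = mp.sqrt(K**2 * c0**2 - mp.mpf(1)/4)          # Eq. (12)
a = mp.mpf(1)/2 - 1j/2 * (Q + K)                 # Eq. (11)
b = 1 - 1j * Q
mu = mp.mpf(1)/4 - 1j/2 * Q

def psi(y):
    z = 1j * K * y**2
    return mp.exp(-z/2) * (K * y**2)**mu * mp.hyp1f1(a, b, z)

for y in (mp.mpf('0.5'), mp.mpf('1.0'), mp.mpf('2.0')):
    res = -mp.diff(psi, y, 2)/(2*K) + K/2 * (2 - y**2 - c0**2/y**2) * psi(y)
    print(float(y), abs(res)/abs(psi(y)))        # tiny relative residuals
```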
Using the asymptotic form at spatial infinity
$$\psi _{BH}(y)\approx \frac{\mathrm{\Gamma }(b)}{\mathrm{\Gamma }(b-a)}e^{i\pi a}\left(i\frac{K}{\hbar }y^2\right)^{\mu -a}e^{-(i/2)(K/\hbar )y^2}+\frac{\mathrm{\Gamma }(b)}{\mathrm{\Gamma }(a)}\left(i\frac{K}{\hbar }y^2\right)^{\mu +a-b}e^{(i/2)(K/\hbar )y^2},$$
(13)
we obtain the S-matrix component describing the reflection rate
$$S=\frac{\mathrm{\Gamma }(b-a)}{\mathrm{\Gamma }(a)}\frac{(iK/\hbar )^{2a-b}}{e^{i\pi a}}.$$
(14)
From the S-matrix follows the transmission rate for black hole formation
$`{\displaystyle \frac{j_{trans}}{j_{in}}}`$ $`=`$ $`1-|S|^2`$ (15)
$`=`$ $`1-{\displaystyle \frac{\mathrm{cosh}\frac{\pi }{2\hbar }(Q+K)}{\mathrm{cosh}\frac{\pi }{2\hbar }(Q-K)}}e^{-(\pi /\hbar )Q},`$ (16)
where $`\left|\mathrm{\Gamma }\left(\frac{1}{2}+ix\right)\right|^2=\frac{\pi }{\mathrm{cosh}(\pi x)}`$ is used. Equation (15) gives the probability of black hole formation for the supercritical, critical, and subcritical $`c_0`$-values.
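The transmission rate (15)-(16) is easy to evaluate numerically. The following sketch is purely illustrative ($`\hbar =1`$, arbitrary $`K=20`$); it works in log space to avoid overflowing the hyperbolic cosines, and displays the expected behavior: a formation probability close to 1 in the supercritical regime and an exponentially small tunneling probability in the subcritical one.

```python
# Black hole formation probability of Eqs. (15)-(16), hbar = 1.
import math

def log_cosh(x):
    ax = abs(x)
    return ax + math.log1p(math.exp(-2*ax)) - math.log(2)

def p_formation(c0, K):
    Q = math.sqrt(K**2 * c0**2 - 0.25)           # Eq. (12)
    log_s2 = (log_cosh(math.pi*(Q + K)/2)
              - log_cosh(math.pi*(Q - K)/2) - math.pi*Q)   # log |S|^2
    return 1.0 - math.exp(log_s2)

K = 20.0                                         # illustrative value
for c0 in (0.5, 0.8, 0.95, 1.0, 1.05, 1.2):
    print(c0, p_formation(c0, K))
```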
We now consider the analytic structure of the S-matrix: it is an analytic function of $`Q`$ and $`K`$ with simple poles which can be explicitly shown as
$$S=\sum _{N=0}^{\infty }\frac{1}{(Q-K)/\hbar +i(2N+1)}\left(\frac{2ie^{-(\pi /2\hbar )K-i(K/\hbar )\mathrm{ln}(K/\hbar )}}{N!\mathrm{\Gamma }(-N-i(K/\hbar ))}\right).$$
(17)
The poles reside in the unphysical region of the parameter space of $`Q`$ and $`K`$:
$$Q-K=-i\hbar (2N+1),(N=0,1,2,\dots ).$$
(18)
It should be remarked that these poles make the first term of Eq. (13) vanish since
$$b-a=-\frac{i}{2\hbar }(Q-K)+\frac{1}{2}=-N,(N=0,1,2,\dots ).$$
(19)
The second term of Eq. (13) has a purely outgoing flux at spatial infinity. The wave function near the apparent horizon, which can be obtained by the steepest descent method in the Appendix of Ref. and by taking the large $`(K/\hbar )`$-limit, leads to the flux
$$j_{AH}\propto A^2(y)\left\{\frac{1}{2}y(1-y^2)\left[\frac{(y^4+c^2-2y^2)^{1/2}+(y^4+c^{*2}-2y^2)^{1/2}}{\left((y^4+c^2-2y^2)(y^4+c^{*2}-2y^2)\right)^{1/2}}\right]-\frac{1}{2}(c_0^{*}+c_0)\frac{1}{y}\right\},$$
(20)
where $`A(y)`$ denotes an amplitude (a real function), $`c_0=1-i(\hbar /K)(2N+1)`$ from Eq. (18), and $`c=(c_0^2-\hbar ^2/4K^2)^{1/2}-i(\hbar /K)`$. At the apparent horizon $`y_{AH}=c_0/\sqrt{2}`$, the wave function has an incoming flux. Therefore, the poles are the outcome of the same boundary condition used to find quasi-normal modes of a black hole . Note that the wave function (10) also has a purely incoming flux toward the black hole singularity at $`y=0`$.
A few comments are in order. First, for physical processes of gravitational collapse there cannot be poles, because $`K`$ and $`c_0`$ are real-valued. In ordinary quantum mechanics the poles of the S-matrix occur at the bound states , and in relativistic scattering at the resonances or the Regge poles . Our case is analogous to a meta-stable quantum mechanical system whose poles are identified with quasi-stationary states that describe the decay of a particle through a potential barrier. Second, we calculated the quantum decay rate of a black hole as the reversed process of gravitational collapse through a barrier by quantum tunneling. This quantum decay process, first studied by Tomimatsu , is a distinctly different decay channel from the Hawking radiation process. It will be interesting to investigate both processes present in one model. Third, it should be pointed out that our discussion, based upon the similarity of the boundary condition on the wave function with that of quasi-normal modes, seems to have no physical content deeper than an analogy, because our model describes only the dynamical stage of gravitational collapse and its reversed process, rather than the quasi-stationary stage at late times.
Recalling that the poles in Eqs. (17) or (19) result from the potential barrier and that the exponential behavior of the Wheeler-DeWitt equation under a potential barrier describes a Euclidean geometry, we turn to the Euclidean theory of gravitational collapse. In the Euclidean theory the Wheeler-DeWitt equation has oscillatory wave functions and a well-defined semiclassical limit even under the potential barrier of the Lorentzian theory . The Euclidean geometry with the metric
$$ds_E^2=2N^2(\tau )\omega ^2d\tau _E^2+2d\omega ^2+\omega ^2y^2d\mathrm{\Omega }_2^2,$$
(21)
leads to the Wheeler-DeWitt equation
$$\left[-\frac{\hbar ^2}{2K}\frac{\partial ^2}{\partial y^2}+\frac{\hbar m_P^2}{2Ky^2}\frac{\partial ^2}{\partial \varphi ^2}-K\left(1-\frac{y^2}{2}\right)\right]\mathrm{\Psi }_E(y,\varphi )=0.$$
(22)
According to the transformation rule $`i\pi _\varphi \rightarrow \pi _{E,\varphi }`$ of the scalar field momenta between the Lorentzian and Euclidean geometries , the wave function has the form
$$\mathrm{\Psi }_E(y,\varphi )=\mathrm{exp}\left(-\frac{Kc_0}{\hbar ^{1/2}m_P}\varphi \right)\psi _E(y).$$
(23)
The Wheeler-DeWitt equation reduces to the gravitational field equation
$$\left[-\frac{\hbar ^2}{2K}\frac{d^2}{dy^2}+\frac{K}{2}\left(y^2+\frac{c_0^2}{y^2}-2\right)\right]\psi _E(y)=0.$$
(24)
Notice that this is a variant of Calogero models with the Calogero-Moser Hamiltonian , but the energy eigenvalue is fixed, and only a quantized $`c_0`$ is allowed. Since Eq. (24) can also be obtained from the Lorentzian equation (9) by letting
$$K=iK_E,$$
(25)
one may obtain, through the analytical continuation of Eq. (10), the solution to Eq. (24)
$$\psi _E(y)=\left[\mathrm{exp}\left(\frac{1}{2}\frac{K_E}{\hbar }y^2\right)\right]\left(\frac{K_E}{\hbar }y^2\right)^{\mu _E}M(a_E,b_E,-\frac{K_E}{\hbar }y^2),$$
(26)
where
$$a_E=\frac{1}{2}+\frac{1}{2\hbar }(Q_E+K_E),b_E=1+\frac{Q_E}{\hbar },\mu _E=\frac{1}{4}+\frac{1}{2\hbar }Q_E$$
(27)
with
$$Q_E=\left(K_E^2c_0^2+\frac{\hbar ^2}{4}\right)^{1/2}.$$
(28)
The asymptotic form of Eq. (26) leads to the bounded states only when
$$b_E-a_E=-N,(N=0,1,2,\dots ),$$
(29)
that is, the condition is satisfied
$$Q_E-K_E=-\hbar (2N+1).$$
(30)
The condition (30) is identical to the pole position of the S-matrix with $`K=iK_E`$ given in Eqs. (18) and (19).
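The discrete spectrum implied by Eq. (30) is easy to tabulate. The sketch below (not from the paper; $`\hbar =1`$, illustrative $`K_E=30`$) inverts Eq. (28) for the quantized $`c_0`$ values and compares them with the Bohr-Sommerfeld estimate $`c_0=1-(2N+1)/K_E`$ that follows from Eq. (31) below; the loop also exhibits the bound $`N<K_E/2`$ on normalizable solutions mentioned later in the text.

```python
# Quantized c0 from Eqs. (28) and (30), versus the semiclassical estimate.
import math

K_E = 30.0                      # illustrative; hbar = 1
N = 0
while True:
    Q_E = K_E - (2*N + 1)       # Eq. (30): Q_E - K_E = -(2N+1)
    if Q_E <= 0.5:              # need Q_E^2 > 1/4 for a real c0
        break
    c0_exact = math.sqrt(Q_E**2 - 0.25) / K_E    # inverted Eq. (28)
    c0_semi = 1.0 - (2*N + 1)/K_E                # Bohr-Sommerfeld, Eq. (31)
    print(N, round(c0_exact, 4), round(c0_semi, 4))
    N += 1
```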
A few remarks are in order. First, the quantum solution (26) is analogous to an instanton in the sense that it is a solution in the Euclidean sector, but it is not an instanton in the strict sense, because the Wheeler-DeWitt equation is already a quantum equation, not a classical one. The semiclassical result from the Bohr-Sommerfeld quantization rule
$$\frac{\pi }{2}\frac{K_E}{\hbar }(1-c_0)=\pi \left(N+\frac{1}{2}\right),(N=0,1,2,\dots ),$$
(31)
is the large $`(K_E/\hbar )`$-limit of the exact result (30). The instantons, through the left-hand side of Eq. (31), semiclassically provide the probability of the tunneling process . Second, the correspondence between the poles and the Euclidean polynomial solutions breaks down for large $`N`$. While the poles contribute for all $`N`$ without limit, the normalizable Euclidean solutions exist only for $`N<K_E/2\hbar `$. The polynomial solutions for large $`N`$ are well defined, but are not normalizable. We have not yet understood these nonnormalizable solutions. Finally, we consider the classical field equations corresponding to the poles of the S-matrix. In the Lorentzian geometry the relevant equations are
$$\frac{d\varphi }{d\tau }=\frac{c_0}{y^2},$$
(32)
$$\left(\frac{dy}{d\tau }\right)^2=K^2\left(-2+y^2+\frac{c_0^2}{y^2}\right),$$
(33)
where $`c_0\approx 1-i\hbar (2N+1)/K`$ for large $`K`$. The complex $`c_0`$ implies complex $`\frac{d\varphi }{d\tau }`$ and $`\frac{dy}{d\tau }`$, which may be imagined as a bound-state-like complex momentum in quantum mechanics and requires a complex spacetime metric. In the Euclidean geometry ($`K=iK_E`$) these classical equations are the same as the equations with quantized $`c_0`$ in the tunneling region in Ref. .
###### Acknowledgements.
We would like to express our appreciation for the warm hospitality of CTP of Seoul National University where this paper was completed. K.S.S. and J.H.Y. were supported in part by BK21 Project of Ministry of Education. D.B. was supported in part by KOSEF-98-07-02-07-01-5, S.P.K. by KOSEF-1999-2-112-003-5, S.K.K. by KRF-99-015-DI0021, and J.H.Y. by KOSEF-98-07-02-02-01-3.
# Random Networks of Spiking Neurons: Instability in the Xenopus Tadpole Moto-Neural Pattern
(6 August 1999)
## Abstract
A large network of integrate-and-fire neurons is studied analytically when the synaptic weights are independently randomly distributed according to a Gaussian distribution with arbitrary mean and variance. The relevant order parameters are identified, and it is shown that such a network is statistically equivalent to an ensemble of independent integrate-and-fire neurons with each input signal given by the sum of a self-interaction deterministic term and a Gaussian colored noise. The model is able to reproduce the quasi-synchronous oscillations, and the dropout of their frequency, of the central nervous system neurons of the swimming Xenopus tadpole. Predictions from the model are proposed for future experiments.
PACS numbers: 87.19.La, 05.45.-a, 87.18.Sn
In the recent past the study of neuronal dynamics has been a major topic, both for the importance it has in neuroscience and for the challenge it represents to modern nonlinear mathematics. Particular interest is evoked by recurrent networks of many neurons, since they are present in many neuronal systems, in the mammalian cortex as well as in the central nervous systems of small vertebrates. A specific example of the latter is the Xenopus tadpole embryo, in which the relationship between sensorial stimuli and the consequent behavior of the animal can be studied in detail at the neuronal level . Pools of reciprocally connected excitatory neurons, distributed bilaterally along the spinal cord, are the elements of the moto-neural circuits of the Xenopus tadpole that are mainly responsible for the generation of the swimming pattern, consisting of regular oscillations of the body of the tadpole on the horizontal plane. Each pool includes a few hundred excitatory neurons receiving inputs from the sensory system, and producing outputs that are sent to the other neurons of the same pool, to pools of inhibitory neurons that act on the symmetrical contra-lateral excitatory pool, and to neurons responsible for muscular activation . The only inhibitory signals to the neurons of the pool come from outside the pool, mainly caused by activity of the contra-lateral pool. The rhythm of the swim may not be due, or at least not only, to the reciprocal inhibition between the pools on the opposite sides of the spinal cord. Indeed, it has been shown that any excitatory pool is able to produce a series of oscillations even if the commissural connections between the two sides of the spinal system are removed and the inhibition is suppressed. The mechanism producing the oscillations in this isolated pool is not yet fully understood.
In order to investigate the basic mechanisms of the neuronal activity in recurrent networks, particularly in the case of the Xenopus, in the present work a large network of $`N`$ integrate-and-fire (IF) neurons with Gaussian synapses is studied. The membrane potentials are here referred to the reset value $`V_\mathrm{r}=0`$ and are measured in terms of the threshold potential $`V_{\mathrm{th}}=1`$. Time is measured in units of the membrane time constant $`\tau =1`$. The equation followed by the membrane potential of neuron $`i`$ between any two consecutive spikes is
$$\frac{dV_i}{dt}(t)=-V_i(t)+I+\sum _{j\ne i}w_{ij}\sum _{m=0}^{\infty }J(t-T_j^m),$$
(1)
where $`I`$ is the external current, and $`T_j^m`$ is the instant at which neuron $`j`$ fires its $`m`$-th spike. The coefficient $`w_{ij}`$ is the synaptic weight that multiplies the ionic current density due to the spikes arriving at neuron $`i`$ from neuron $`j`$. The weights $`\{w_{ij},i\ne j\}`$ are taken to be independently randomly distributed according to a Gaussian probability density function with mean and variance equal to $`\mu /N`$ and $`\sigma ^2/N`$, respectively. Thus, every $`w_{ij}`$ includes the ‘sign’ of the interaction: positive for an excitatory synapse and negative for an inhibitory one. The weights are quenched and are not required to be symmetric. The function $`J(t)`$ contains the details of the ionic current density through the synaptic channels, and its functional form is not relevant in the analytical part of the work. In the numerical study, it is taken as an $`\alpha `$-function that accounts for the dynamics of the synaptic channels: $`J(t)=\alpha ^2(t-\tau _a)\theta (t-\tau _a)\mathrm{exp}\{-\alpha (t-\tau _a)\}`$, where also the axonal delay time $`\tau _a`$, for simplicity taken to be the same for all the axonal projections, has been included to simplify the notations.
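A minimal direct simulation of Eq. (1) can be written as follows. This is an illustrative sketch, not the code used for the results of this letter, and all parameter values are arbitrary; the $`\alpha `$-function current is produced by the linear filter $`\ddot{s}=-2\alpha \dot{s}-\alpha ^2s+\alpha ^2\times `$ (delayed spike train), whose impulse response is exactly $`\alpha ^2te^{-\alpha t}`$.

```python
# Sketch of the network of Eq. (1): N leaky IF neurons, Gaussian weights
# with mean mu/N and variance sigma^2/N, alpha-function synapses, delay.
import numpy as np

rng = np.random.default_rng(0)
N, mu, sigma = 200, 0.9, 0.03          # illustrative values
alpha, tau_a = 5.0, 0.05               # synaptic rate and axonal delay
dt, steps = 0.001, 5000
delay = int(tau_a / dt)

W = rng.normal(mu/N, sigma/np.sqrt(N), (N, N))
np.fill_diagonal(W, 0.0)               # no self-interaction (j != i)
V = np.zeros(N)
s = np.zeros(N); x = np.zeros(N)       # filtered synaptic input and slope
spike_buf = np.zeros((delay + 1, N))   # circular buffer for the delay

for t in range(steps):
    I = 1.5 if t < 1000 else 0.0       # brief suprathreshold stimulation
    fired = spike_buf[t % (delay + 1)] # spikes emitted about tau_a ago
    drive = W @ fired / dt             # delta trains as densities
    x += dt * (-2*alpha*x - alpha**2 * s + alpha**2 * drive)
    s += dt * x
    V += dt * (-V + I + s)
    new = V >= 1.0                     # threshold V_th = 1
    V[new] = 0.0                       # reset V_r = 0
    spike_buf[t % (delay + 1)] = new.astype(float)
    if t % 500 == 0:
        print(t*dt, int(new.sum()))
```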
Let $`\underline{T}`$ be the matrix of the firing times of all the neurons. Any quantity relative to the network must be a function of $`\underline{T}`$, for example $`F(\underline{T})`$. Of particular interest are the quantities that do not depend on the specific realization of the network when the number of neurons is very large (thermodynamic limit). This property (self-averaging) allows one to substitute the calculation of $`F`$ for a specific realization, usually unfeasible, with the calculation of its average over all the possible realizations of the random network. Normally, one cannot prove a priori whether the considered quantity self-averages, and one has to calculate also the fluctuation of $`F`$ across the realizations.
The average of $`F`$ across the ensemble of networks is
$$\langle \langle F(\underline{T})\rangle \rangle =\int d\underline{w}\,𝒫(\underline{w})\int d\underline{T}\,F(\underline{T})\frac{1}{Z}\delta \left(\underline{G}(\underline{w},\underline{T})\right),$$
(2)
where the partition function $`Z=\int d\underline{T}\,\delta \left(\underline{G}(\underline{w},\underline{T})\right)`$ guarantees the correct normalization of the $`\delta `$-function. The deterministic dynamics of the system is represented by the matrix $`\underline{G}(\underline{w},\underline{T})`$, whose generic element gives a relation between two consecutive spikes of a neuron and the spikes previously fired by the other neurons (coming from the IF dynamics, as in ). For the application of interest in this letter, uniform initial conditions are adopted, according to which all the neurons are initially stably at the membrane rest potential. Thus, for any $`n`$ and $`i`$ one has:
$`G_i^n(\underline{w},\underline{T})`$ $`=`$ $`e^{T_i^{n+1}}-{\displaystyle \frac{I}{I-1}}e^{T_i^n}`$
$``$ $`-{\displaystyle \frac{1}{1-I}}{\displaystyle \int _{T_i^n}^{T_i^{n+1}}}dte^t{\displaystyle \sum _{j\ne i}}w_{ij}{\displaystyle \sum _m}J(t-T_j^m).`$
With simple appropriate modifications, the mathematical framework here developed can be applied also to many cases in which the neurons of the net are given different initial conditions and different external currents.
The quantities of interest in this paper can be obtained through the usual techniques based on derivatives of fictitious fields introduced at a later stage during the calculation of the ‘free-energy’ $`\mathrm{ln}Z`$, and thus the following analysis concentrates on that.
To calculate the free-energy, the replica trick is used:
$$\langle \langle \mathrm{ln}Z\rangle \rangle =\underset{r\rightarrow 0}{lim}\frac{1}{r}\mathrm{ln}\langle \langle Z^r\rangle \rangle ,$$
(4)
where the limit is obtained by first calculating the average for integer $`r`$ and then continuing $`\langle \langle Z^r\rangle \rangle `$ analytically to real exponents. Substituting the $`\delta `$-functions with their complex Fourier expansions, one has that
$$\langle \langle Z^r\rangle \rangle =\int d\underline{w}\,D\underline{T}\,Ds\,𝒫(\underline{w})\mathrm{exp}\left\{\sum _{i,n,\alpha }is_i^{n,\alpha }G_i^n(\underline{w},\underline{T}^\alpha )\right\},$$
(5)
where $`i`$, $`n`$, and $`\alpha `$ are indices of neuron, spike, and replica, respectively. Through functional integration over one-time and two-time auxiliary functions one can decouple the neurons. Due to the absence of dynamic noise and of any form of average over the initial conditions, the replicas are identical also in their kinematics, and this allows for an exact replica-symmetry assumption. Following the symmetry, after the saddle-point evaluation allowed by the extensiveness of the network ($`N\rightarrow \infty `$), only two order parameters remain in the formulas, both having a direct physical meaning as averages over the ensemble of networks. They are:
$$\nu (t)=\langle \langle \frac{1}{N}\sum _i\sum _mJ(t-T_i^m)\rangle \rangle $$
(6)
and
$$q(t_1,t_2)=\langle \langle \frac{1}{N}\sum _i\sum _mJ(t_1-T_i^m)\sum _nJ(t_2-T_i^n)\rangle \rangle .$$
(7)
Thus, the order parameters ‘filter’ the spike train $`\{T_i^n,n\}`$ fired by neuron $`i`$ through the current density function $`J(\cdot )`$. This suggests that the correct smoothing function for analysing the information conveyed by a spike train in real data may be the synaptic function $`J`$. It can be shown that the arithmetic averages over the neurons, inside the double angle brackets, have vanishing fluctuations in the thermodynamic limit of the mean-field theory (self-averaging); the order parameter physics then makes sense also for any single network out of the ensemble. At the level of the single network, the parameter $`\nu (t)`$ may be seen as an estimate of the average firing rate in the network if the firing rate is measured in a delayed back-time window about $`3/\alpha `$ long. Actually, $`\nu (t)`$ weights the preceding spikes according to their synaptic ‘efficacy’ at time $`t`$. The order parameter $`q(t_1,t_2)`$ is the average auto-correlation of the ‘filtered’ spike trains from the neurons of the network. Another two-time physical order parameter would appear if dynamic noise were present in the network. In fact, both their physical meaning and the analysis of the saddle-point equations show that the two two-time order parameters are equal to each other thanks to the absence of dynamic noise.
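The ‘filtering’ reading of $`\nu (t)`$ suggests a direct estimator from recorded spike trains. A minimal sketch (spike times and parameters purely illustrative):

```python
# Estimate nu(t) of Eq. (6) by filtering spike trains through J.
import numpy as np

alpha, tau_a = 5.0, 0.05

def J(t):
    u = t - tau_a
    return np.where(u > 0, alpha**2 * u * np.exp(-alpha*u), 0.0)

spikes = [np.array([0.10, 0.50, 0.90]),          # spike times per neuron
          np.array([0.12, 0.55])]
t_grid = np.linspace(0.0, 1.5, 301)
nu = sum(J(t_grid[:, None] - T).sum(axis=1) for T in spikes) / len(spikes)
print(nu.max())
```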
The analysis allows for the interpretation of the mean-field system in terms of an independent IF neuron with total input equal to $`I+\widehat{\tau }(t)+\mu \nu (t)`$, where $`\widehat{\tau }(t)`$ is a zero-mean Gaussian stochastic process with auto-correlation $`\sigma ^2q(t,t^{\prime })`$. Indicating the average over the realizations of the stochastic input with single angle brackets, the order parameters obey the equations
$$\nu (t)=\langle \sum _mJ(t-T^m)\rangle ,$$
(8)
and
$$q(t,t^{\prime })=\langle \sum _mJ(t-T^m)\sum _nJ(t^{\prime }-T^n)\rangle ,$$
(9)
where $`T^n`$ is the $`n`$-th spike time of the equivalent neuron, and the sequence of spiking times is determined by the IF dynamics of the equivalent neuron with input $`I+\widehat{\tau }(t)+\mu \nu (t)`$.
Solving this new set of equations analytically, as well as the original saddle-point equations, is a difficult task. However, simulating the equivalent neuron numerically is easy, and finite-size effects are no longer present, as noted by and , since the thermodynamic limit has already been performed. In order to simulate the equivalent system and obtain quantities of interest for the original network, use is made here of the technique introduced by , where the correlated stochastic input is constructed during the running of a rather large number of independent simulations of the equivalent neuron. In this method, the correlation function is estimated from sample averages, and from it the Gaussian colored signal is created. As suggested by , since each one-dimensional simulation is independent of the others, the error on the averages across a set of $`M`$ trials should scale as $`1/\sqrt{M}`$.
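A simpler, if more memory-hungry, alternative to that on-line construction is to factor the covariance matrix of the process on the simulation time grid and multiply white noise by its Cholesky factor. In the sketch below an illustrative stationary kernel stands in for the self-consistently estimated $`\sigma ^2q(t,t^{\prime })`$:

```python
# Draw one realization of the Gaussian colored input via Cholesky.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)
sigma, tau_c = 0.2, 0.1                          # illustrative values
q = np.exp(-np.abs(t[:, None] - t[None, :]) / tau_c)   # placeholder kernel
C = sigma**2 * q + 1e-10 * np.eye(len(t))        # jitter for stability
L = np.linalg.cholesky(C)
noise = L @ rng.standard_normal(len(t))          # one sample of tau_hat(t)
print(noise.std())
```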
An exhaustive analysis in the parameters’ space is being worked out and will be presented elsewhere, as well as an analysis of the possibility of substituting the averages over the realizations of the Gaussian noise in Eq. 8 and Eq. 9 with time averages over a single realization of the noise (ergodicity) that may be useful, e.g., in numerical computations. The rest of this letter concentrates on the particular range of parameters that is of interest for the study of the neural control of the swimming pattern in the simple vertebrate Xenopus tadpole.
It has been shown that any excitatory pool surgically isolated from the rest of the tadpole nervous system is able to perform several oscillation cycles after a brief external stimulation provided to a large number of neurons inside the pool. It is biologically plausible to assume that the synapses between neurons of the excitatory pool in the Xenopus spinal system have weights very similar to each other (in the model this means $`\mu >0`$ and $`\sigma \ll \mu `$). In the experiments on the tadpole considered here, a brief external input is provided simultaneously to a large number of neurons in a quiescent pool. Thus, the neurons start firing synchronously. The external stimulus is brief, and after its removal the network cannot remain stable. Simulations of the equivalent neuron show that the already present activity decays to zero if $`\mu `$ is not too large; if $`\mu `$ is above a certain bound, the activity in the network increases restlessly up to saturation. The reason for this instability is discussed later in this letter. For the moment, consider a case with $`\mu `$ below the critical bound. Figure 1 shows the time evolution of the order parameter $`\nu (t)`$ in a network with $`\mu =0.9`$ and $`\sigma ^2=0.001`$. At the beginning, all the neurons are in a quiescent state, and a constant uniform input current $`I`$ is applied (0-th time-step). After a while, the external source is abruptly removed, and the network is left to its own dynamical evolution, during which it exhibits a quasi-synchronous activity while the maximal magnitude of the order parameter $`\nu (t)`$ decays, corresponding to the evident decrease in spike frequency. The progressive loss of synchrony is evident as well (vertical bars; cf. figure caption). The frequency drop continues until the network rests in a quiescent state. Here a case of rather rapid dropout is shown, but a larger $`\mu `$ (still below the bound) can produce a slower dropout. The smoothness of the curve near its local minima is an effect of the small variance of the synaptic strengths, which perturbs the synchrony of the spikes, with a consequent smoothing of the average. In the complete absence of noise, that is, when all the synapses have equal weight, the curve of $`\nu (t)`$ presents sharp downward peaks at the local minima (discontinuities of the first time derivative) in delayed correspondence with the spike events. The decrease in synchrony may account for the progressive single-neuron spike dropout found in experiments, often interpreted as a spike or synaptic failure. Increasing $`\sigma `$ causes a quicker loss of synchrony. These results reproduce the experimental data of frequency dropout . A more quantitative comparison will be presented when appropriate experimental data are available. Meanwhile, direct simulations of the IF network have been carried out: the agreement between theoretical predictions and simulation outputs is very good already for networks with as few as 100 neurons. This also supports the use of the thermodynamic limit as an approximation of the real finite network.
The form of instability of self-sustained activity presented by the network in the region of the parameters’ space that mimics the tadpole biological values is due to the absence of any synaptic currents’ and channels’ reset after the generation of a spike, so that they affect the dynamics of the membrane independently of it. It is easily understood how this causes instability in the isolated network: Consider the case of null synaptic variance, for simplicity, and no external input, and assume that all the neurons fire simultaneously the first spike, as an initial condition, for example, so that there are no residual synaptic currents from previous spikes or external sources. Thus, after time $`\tau _a`$, any neuron of the network receives $`N`$ spikes and starts integrating the synaptic currents. With some analysis of Eq. 1, it is possible to show that the minimum synaptic strength $`\mu `$ that allows the membrane potential to reach the threshold depends on $`\alpha `$ as depicted in Fig. 2. If $`\mu `$ is below the curve, the neurons do not fire any other spike. On the contrary, if $`\mu `$ is above the curve the neurons fire at finite time from the arrival of the first spikes; after time $`\tau _a`$, every neuron receives the second wave of spikes, and the new synaptic currents are integrated together with the remains of the currents generated by the first wave, and the neuron fires again. This process is iterated, and the effect of the remaining currents is to decrease monotonically the inter-spike interval, thus driving the network to its maximum firing rate. Simulations of the equivalent neuron seem to indicate that the introduction of a small variance in the synaptic weights lowers the critical bound.
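The threshold condition behind Fig. 2 can be reproduced in a few lines: after a synchronous volley each neuron integrates $`dV/dt=-V+\mu J(t)`$ (zero variance, no external input), and the critical bound is the smallest $`\mu `$ for which $`V`$ reaches the threshold. A sketch with an illustrative $`\alpha `$:

```python
# Bisect for the minimum mu that lets V reach threshold after one volley.
import math

alpha, dt, T = 5.0, 1e-4, 10.0

def v_max(mu):
    V, vmax, t = 0.0, 0.0, 0.0
    while t < T:
        J = alpha**2 * t * math.exp(-alpha * t)  # volley arrives at t = 0
        V += dt * (-V + mu * J)
        vmax = max(vmax, V)
        t += dt
    return vmax

lo, hi = 0.1, 20.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if v_max(mid) < 1.0 else (lo, mid)
print("critical mu ~", round(hi, 3))
```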
To test the hypothesis about the network instability, numerical solutions have been found by introducing into the model a reset of the input currents to zero immediately after every generated spike. The calculations as presented here cannot include this effect, but for small values of $`\sigma ^2`$, as in the case of the tadpole modelling, the results are still valid for not too long times. Simulations with this proviso show that when $`\mu `$ is above the bound, the synaptic variance only destroys synchrony, while the network is in a stable firing state. When $`\mu `$ is below the critical bound, the network is not able to fire more than one spike after the removal of the external stimulus. One can also obtain stability at low firing rates by substituting for the synaptic $`\alpha `$-function a function that vanishes at finite time, without a hard reset of the currents. If two consecutive spikes do not come at intervals shorter than the time necessary for the synaptic current to vanish, residual currents do not accumulate and the network can fire stably for large $`\mu `$. This latter case would be easily and exactly reproducible in the mathematical framework developed here.
Since the synaptic currents in the Xenopus tadpole embryo seem to be well approximated by the $`\alpha `$-function , the results of the present work predict that the mean synaptic strength $`\mu `$ in the real network is below the critical bound, and the series of oscillations subsequent to the removal of the external stimulus is due to a ‘surfing’ on the remains of synaptic currents until their exhaustion. The present model also predicts a progressive loss of synchrony during the oscillatory transient, due to the small deviation of the synaptic strengths from the mean. A further implication of the present results is that some mechanisms proposed in the past as responsible for the oscillation dropout, like synaptic depression or habituation, are not needed.
If the predictions about mean synaptic strength, synchrony loss, and single-spike dropout are verified experimentally, the present model may be useful in understanding some neural mechanisms subserving the small vertebrate’s motion. With the aid of the mathematical results presented here, other, more general topics are under current study, like the possible presence of several attractors, chaos, and existence of a glassy phase in random networks.
I thank Professor Paul Bressloff for having brought the spiking neurons’ random networks to my attention and for useful discussions. Research supported by UK EPSRC Grant GR/K86220.
# Pre-Supernova Evolution of Massive Stars
## 1 Introduction
More than 20 years of radio observations of supernovae (SNe) have provided a wealth of evidence for the presence of substantial amounts of circumstellar material (CSM) surrounding the progenitors of SNe of type II and Ib/c (see Weiler et al., this Conference, and references therein). Also, the radio measurements indicate that $`(a)`$ the CSM density falls off like $`r^{-2}`$, suggesting a constant-velocity, steady wind, and that $`(b)`$ the density is so high as to require the ratio of the mass loss rate, $`\dot{M}`$, to the wind velocity, $`w`$, to be $`\dot{M}/w\gtrsim 10^{-7}`$ $`M_{\odot }yr^{-1}`$/$`kms^{-1}`$. These requirements are best satisfied by red supergiants (RSG), with original masses in the range 8-30 $`M_{\odot }`$, that indeed are the putative progenitors of SNII. Note that in the case of SNe Ib/c the stellar progenitor cannot provide such a dense CSM directly, and a wind from a binary companion must be invoked to explain the observations (Panagia and Laidler 1988, Boffi and Panagia 1996, 2000).
This scenario is able to account for the basic properties of all radio SNe. However, the evolution of SN 1993J indicated that the progenitor mass loss rate had declined by almost a factor of 10 in the last few thousand years before explosion (Van Dyk et al. 1994). In addition, there are SNe, such as SN 1979C (Montes et al. 2000), SN 1980K (Montes et al. 1998), and SN 1988Z (Lacey et al. 2000), that have displayed relatively sudden changes in their radio emission evolution about 10 years after explosion, which also cannot be explained in terms of a constant mass loss rate. Since a SN shock front, where the radio emission originates, is moving at about 10,000 $`kms^{-1}`$ and a RSG wind is typically expanding at 10 $`kms^{-1}`$, a sudden change in the CSM density about ten years after explosion implies a relatively quick change of the RSG mass loss rate about 10,000 years before it underwent the SN explosion. These findings are summarized in Figure 1 that, for several well studied RSNe, displays the mass loss rate implied by radio observations as a function of the look-back time, calculated simply as the actual time since explosion multiplied by a factor of 1000, which is the ratio of the SN shock velocity to the RSG wind velocity.
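The mapping behind Figure 1 is simple arithmetic; for example, with the shock and wind velocities quoted above:

```python
# Look-back time probed by the shock t_obs years after explosion.
v_shock, v_wind = 1.0e4, 10.0            # km/s, from the text
for t_obs in (1.0, 5.0, 10.0):           # years since explosion
    print(t_obs, (v_shock / v_wind) * t_obs, "yr before explosion")
```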
Additional evidence for enhanced mass loss from SNII progenitors over time intervals of several thousand years is provided also by the detection of relatively narrow emission lines, with typical widths of several hundred $`kms^{-1}`$, in the spectra of a number of SNII (e.g. SN 1978K: Ryder et al. 1993, Chugai, Danziger & Della Valle 1995, Chu et al. 1999; SN 1997ab: Salamanca et al. 1998; SN 1996L: Benetti et al. 1999), which indicate the presence of dense circumstellar shells ejected by the SN progenitors in addition to a more diffuse, steady wind activity.
We note that a time of about 10,000 years is a sizeable fraction of the time spent by a massive star in the RSG phases and implies a kind of variability which is not predicted by standard stellar evolution. In particular, a time scale of $`10^4`$ years is considerably shorter than the H and He burning phases but is much longer than any of the successive nuclear burning phases that a massive star goes through before core collapse (e.g. Chieffi et al. 1999). Therefore, some other phenomenon is to be sought to properly account for the observations.
Another problem which needs to be addressed is the actual rate of mass loss for red supergiants. The observational evidence is that mass loss rates in the range $`10^{-6}`$–$`10^{-4}`$ $`M_{\odot }yr^{-1}`$ are commonly found in RSGs, with a relatively steep increase in mass loss activity for the coolest stars (e.g. Reid, Tinney & Mould 1990, Feast 1991). On the other hand, there is no satisfactory theory to predict mass loss rates in these phases of stellar evolution, and current parametrizations fall short of describing the phenomenon in detail. For example, let us consider the classical formula by Reimers (1975),
$$log(\dot{M})=-12.6+log(\frac{LR}{GM})+log(\eta )$$
which can be rewritten as:
$$\dot{M}\propto \eta \frac{L^{1.5}}{MT_{eff}^2}$$
This formula was devised to dimensionally account for the mass loss from low-mass red giants, but has also been widely adopted for evolutionary track calculations. We see that the predicted mass loss rate varies rather slowly while a star moves from the blue to the red region (i.e., during H-shell burning and/or He-core burning) of the HR diagram, the main functional dependence being a 1.5 power of the luminosity. The corresponding mass loss rates, computed using the evolutionary tracks by Bono et al. (2000b) for stars in the mass range 10–20 $`M_{\odot }`$, are shown in Figures 2 and 3. It is apparent that not only are the rates not as high as suggested by spectroscopic observations of RSGs (this aspect alone could easily be “fixed” by increasing the efficiency factor $`\eta `$) but, more importantly, they vary very slowly with time and, therefore, cannot account for radio observations of SNe either.
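To make the point quantitative, one can evaluate the Reimers scaling along a redward path at constant luminosity. The sketch below is illustrative only; the constant $`4\times 10^{-13}`$ (solar units) is the usual Reimers calibration, and the radius follows from $`L=4\pi R^2\sigma T_{eff}^4`$. The resulting rates are of order $`10^{-6}M_{\odot }yr^{-1}`$ and change only by a factor of a few between 6000 K and 3500 K.

```python
# Reimers rate along a constant-luminosity redward displacement.
def mdot_reimers(L, M, Teff, eta=1.0):   # L, M in solar units
    R = L**0.5 * (5772.0 / Teff)**2      # solar radii, from L = 4 pi R^2 sigma T^4
    return 4.0e-13 * eta * L * R / M     # Msun per yr (Reimers calibration)

L, M = 5.0e4, 15.0                       # illustrative RSG values
for Teff in (6000.0, 5000.0, 4000.0, 3500.0):
    print(Teff, mdot_reimers(L, M, Teff))
```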
Other parametrizations of the mass loss rate in the HR diagram have been proposed by different authors (e.g. De Jager, Nieuwenhuijzen & van der Hucht 1988, Salasnich, Bressan and Chiosi 1999), but insofar as for RSGs the main dependence of $`\dot{M}`$ is a power of $`\sim 2`$ of the luminosity, they are all unable to reproduce appreciable mass loss variations over a timescale of roughly $`10^4`$ years.
Actually, one notices that for masses above 10 $`M_{\odot }`$ the last phases of the RSG evolution fall within the extrapolation of the Cepheid instability strip (see Figure 4), as calculated by Bono et al. (1996), and, therefore, one may expect that pulsational instabilities could represent the additional mechanism needed to trigger high mass loss rates. Indeed, the pioneering work of Heber et al. (1997), based on both linear and nonlinear pulsation models, demonstrated that RSG stars are pulsationally unstable. In particular, they found that, for periods approaching the Kelvin-Helmholtz time scale, these stars display large luminosity amplitudes, which could trigger a strong enhancement in their mass loss rate before they explode as supernovae. According to these authors this pulsation behaviour should take place during the last few $`10^4`$ yrs before the core collapse, due to the large increase in the luminosity-to-mass ratio experienced by RSG stars during these evolutionary phases.
However, the nonlinear calculations performed by Heber et al. (1997) were hampered by the fact that their hydrodynamic code could not properly handle pulsation destabilizations characterized both by small growth rates due to numerical damping, and by large pulsation amplitudes due to the formation and propagation of strong shock waves during the approach to limit cycle stability. Also, as Heber et al. (1997) pointed out, their main theoretical difficulty in dealing with the dynamical instabilities of RSG variables resided in the coupling between convection and pulsation. In fact, they constructed the linear models by assuming that the convective flux is frozen in, and the nonlinear ones by assuming that the convective flux is instantaneously adjusted. However, this treatment does not account for the driving and/or quenching effects caused by the interaction between pulsation and convection: this shortcoming may explain why their nonlinear models could not approach a stable limit cycle.
It is clear that a more general approach must be adopted to solve the problem. This motivated us to start a systematic study of the pulsational properties of massive stars. In the following we shall illustrate briefly the procedures adopted and the first results obtained (Section 2), and will present and discuss our findings on the mass loss rates in the late phases of the evolution of massive stars (Sections 3 and 4).
## 2 Theoretical Framework
The procedures employed to construct both linear and nonlinear models of high-mass radial variables have been described in detail in a number of papers (Bono & Stellingwerf 1994; Bono, Caputo, & Marconi 1998; Bono, Marconi & Stellingwerf 1999), so that a brief outline of the methods adopted will suffice here. In particular:
• We constructed a set of limiting-amplitude, nonlinear, convective models of Red Super Giant (RSG) variables during both the hydrogen and helium burning evolutionary phases.
• A Lagrangian one-dimensional hydrocode was used in which local conservation equations are simultaneously solved with a nonlocal, time-dependent convective transport equation (Stellingwerf 1982; Bono & Stellingwerf 1994; Bono et al. 1998).
• Nonlinear effects such as the coupling between convection and pulsation, convective overshooting, and superadiabatic gradients are fully taken into account.
• As for the opacity, which is a key ingredient for constructing stellar envelope models, we adopted OPAL opacities (Iglesias & Rogers 1996) for T $`>`$ 10,000 K and molecular opacities (Alexander & Ferguson 1994) for T $`<`$ 10,000 K. The method adopted for handling the opacity tables was discussed in Bono et al. (1996).
• Artificial viscosity was included following the prescriptions of Stellingwerf (1975).
• To provide accurate predictions of the limit cycle behaviour of these objects, the governing equations were integrated in time for a number of periods ranging from 1000 to 5000.
In order to make a detailed analysis of RSG pulsation properties during both the H and He burning phases, to be compared with the actual properties of real RSG stars, we constructed several sequences of models at fixed chemical composition (Y=0.28, Z=0.02) which cover a wide range of stellar masses, $`10\le M/M_{\odot }\le 20`$. Moreover, since we are interested in mapping the properties of RSG stars from H-shell burning up to central He exhaustion, both the luminosity and the effective temperature values were selected directly along the evolutionary tracks (see Figure 4). The evolutionary calculations were performed at fixed mass – i.e. no mass loss – and neglecting the effects of both convective core overshooting and rotation. Since the pulsational properties of a star depend mostly on the physical structure of the envelope regions in which H and He undergo partial ionization, i.e. well above the layers in which the nuclear burning takes place, the use of constant-mass evolutionary tracks for our study does not limit the qualitative value of our conclusions. However, self-consistent evolutionary models which properly include a parametrization of mass loss will eventually be needed for a full, quantitative description of the phenomenon (see Section 4).
The input physics and physical assumptions adopted for constructing the evolutionary and pulsational models will be described in detail in a forthcoming paper (Bono & Panagia 2000). Figure 5 shows the location in the HR diagram of both the evolutionary tracks and the pulsation models we constructed. The pulsation models characterized by different limiting-amplitude behaviour are plotted with different symbols. The squares refer to models which are pulsationally stable, i.e. those in which, after the initial perturbation, the radial motions decay and the structure approaches once again the static configuration. Filled circles and asterisks refer to models which show small ($`\mathrm{\Delta }logL<0.4`$) and large ($`\mathrm{\Delta }logL>0.4`$) pulsation amplitudes together with a periodic behaviour (stable limit cycle). Triangles denote the models which not only present large pulsation amplitudes but also aperiodic radial displacements (unstable limit cycle). The behaviour of the pulsation properties discloses several interesting features:
1) For effective temperatures lower than approximately 5100 K, high-mass models are, with few exceptions, pulsationally unstable in the fundamental mode during both the H-shell and He burning phases.
2) The pulsational behaviour is mainly governed by the effective temperature and, to a lesser extent, by the luminosity. In fact, the transition from small to large pulsation amplitudes ($`T_e\approx 3900`$ K) and from periodic to aperiodic behaviour ($`T_e\approx 3800`$ K) take place roughly at constant temperature.
3) Current models support the evidence suggested by Li & Gong (1994) on the basis of linear, nonadiabatic models that RSG variables are pulsating in the fundamental mode. In fact we find that throughout this region of the HR diagram the fundamental mode is pulsationally unstable, whereas the first overtone is stable.
4) The region in which the models attain small amplitudes is the natural extension of the classical Cepheid instability strip. This confirms the empirical evidence originally brought out by Eichendorf & Reipurth (1979) and more recently by Kienzle et al. (1998), as well as the theoretical prediction by Soukup & Cox (1996).
5) Interestingly enough, theoretical light curves of periodic large-amplitude models show the characteristic RV Tauri behaviour, i.e. alternating deep and shallow minima, observed in some RSG variables (Eichendorf & Reipurth 1979).
## 3 Pulsationally Induced Mass Loss
As discussed above, in the reddest part of their RSG evolution, massive stars are found to pulsate with large amplitudes in both luminosity and velocity. In these phases, the radial velocity near the stellar surface may reach values of 50 $`kms^{-1}`$ or higher, which may exceed the effective escape velocity, i.e., the one computed by including both the inward gravitational force and the outward radiation acceleration. Therefore, the outer layers may become unbound and be lost from the system, thus producing a high mass loss rate. As an illustration, Figure 6 shows the variation of luminosity, temperature and velocity for a 15 $`M_{\odot }`$ red supergiant in three distinct regimes: small amplitude (left panel), large amplitude (central panel), and aperiodic extreme pulsations (right panel).
In order to calculate pulsation induced mass loss, for each model we identify the outer layers for which
$$v_{rad}>v_{esc}+c_s$$
where $`v_{rad}`$ is the radial velocity of a given layer, $`v_{esc}`$ is the effective escape velocity, which includes the effects of radiation forces, and $`c_s`$ is the sound speed (Hill & Willson 1979). Those layers are effectively unbound and, therefore, are lost from the star. An example is shown in Figure 7, where we plot the actual velocity of the stellar envelope layers as a function of the stellar radius and compare it with the effective escape velocity. The mass present above the radius where the actual velocity exceeds the escape velocity represents the amount of mass which is lost in a pseudo-impulsive event.
The characteristic time between successive pseudo-impulsive events is the Kelvin-Helmholtz time, i.e.
$$\tau _{KH}\simeq \frac{GM^2}{RL}=63\left[\frac{M}{10M_{\odot }}\right]^2\left[\frac{R}{500R_{\odot }}\right]^{-1}\left[\frac{L}{10^5L_{\odot }}\right]^{-1}yrs$$
Thus, the mass loss rate is given by
$$\dot{M}=\frac{M(v>v_{esc}+c_s)}{\tau _{KH}}$$
An inspection of Figure 7 shows that layers as massive as $`10^{-2}`$ $`M_{\odot }`$ may become unbound and be lost from the stellar surface within time intervals of several tens of years, thus producing mass loss rates of the order of several $`10^{-4}`$ $`M_{\odot }yr^{-1}`$ or even higher.
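These numbers are easy to verify; a sketch in cgs units, using the illustrative unbound-shell mass of $`10^{-2}`$ $`M_{\odot }`$ quoted above:

```python
# Kelvin-Helmholtz time and the resulting pulsation-driven rate.
G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10
Lsun, yr = 3.828e33, 3.156e7

def tau_kh(M, R, L):                     # arguments in solar units
    return G * (M*Msun)**2 / ((R*Rsun) * (L*Lsun)) / yr   # years

t = tau_kh(10.0, 500.0, 1.0e5)
print("tau_KH =", round(t, 1), "yr")     # about 63 yr, as in the text
print("Mdot ~", 1.0e-2 / t, "Msun/yr")   # roughly 1.6e-4 Msun/yr
```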
Following this recipe, we have determined the mass loss rates for several values of the stellar mass and for a variety of locations in the HR diagram. As shown in Figure 8, we find that the pulsation induced mass loss can satisfactorily be represented as a power law function of the maximum expansion pulsational velocity (i.e. the maximum velocity that is attained by the outermost layers in a pulsational cycle), i.e.
$$log(\dot{M})=-7.24+1.97\times log(v_{max})$$
Since pulsation-induced mass loss is an additional mass loss mechanism that comes on top of more conventional radiation-pressure induced mass loss, the total mass loss rate is assumed to be the straight sum of the two rates.
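A sketch of the resulting total rate; the radiative term below is only an illustrative placeholder:

```python
# Pulsational rate from the fit above plus a steady radiative wind term.
import math

def mdot_puls(v_max):                    # v_max in km/s
    return 10.0**(-7.24 + 1.97 * math.log10(v_max))

mdot_rad = 1.0e-6                        # placeholder steady rate (Msun/yr)
for v in (10.0, 30.0, 50.0):
    print(v, mdot_puls(v) + mdot_rad, "Msun/yr")
```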
The main result is that, including the effects of pulsations, the predicted mass loss rate in the RSG phase is a strong function of both the luminosity and the temperature, in the sense that the mass loss process is strongly enhanced by pulsations when a star is moving toward cooler effective temperatures. Figure 9 illustrates this result and shows that it should now take only a small adjustment to fully reproduce the observations. Moreover, since a red supergiant may cross the instability strip rather quickly, its mass loss can change considerably on relatively short time scales, thus accounting for another observational fact. It is worth mentioning that Feast (1991) found, on the basis of IRAS data for 16 RSG variables in the LMC, a period-mass loss relation. This relation supports a similar behaviour, i.e. a steady increase in the mass loss when moving from short- to long-period variables.
## 4 Discussion and Conclusions
The computed mass loss rates for stars in the range 12-20 $`M_{\odot }`$, as a function of look-back time, are displayed in Figures 10-12 for short (0-30,000 years), medium (0-150,000 years) and long time-scales (0-1 Myrs), respectively. We see that the mass loss rates may be as high as almost $`10^{-3}`$ $`M_{\odot }yr^{-1}`$, i.e. similar to what is measured for extreme red supergiants, and may vary by an order of magnitude over relatively short times, say, 10,000 years or less. In other words, the predicted mass loss rates are able to account, at least qualitatively, for all of the features observed in radio supernovae. Moreover, since the predicted mass loss history is a critical function of how a massive star evolves within the pulsation instability strip, a comparison between observations and theory should lead to an accurate determination of the stellar progenitor mass. For example, the mass loss decline of a 20 $`M_{\odot }`$ star may be used to represent the apparent drop of emission of SN 1988Z about 9 years after explosion (cf. Figure 1). Similarly, the quick increase found for our 14 $`M_{\odot }`$ model closely resembles the behaviour observed for SN 1993J. Of course, detailed comparisons will be meaningful only when we have a fully self-consistent set of evolutionary tracks (see below).
The mid- and long-term behaviour of the mass loss rate as a function of look-back time is also interesting because it allows one to make predictions about the radio emission, as well as about any other phenomenon linked to the interaction of a SN shock front and/or ejecta with a dense circumstellar medium, such as relatively narrow optical emission lines and X-ray emission. As we can see in Figures 11 and 12, massive stars are expected to display rather sudden variations of their mass loss rates on all time scales, both because of pulsational instabilities, which arise when crossing the instability strip (e.g. the 12 $`M_{\odot }`$ star in the time range 20-60 $`\times 10^3`$ years), and because of the so-called blue loops (an effect clearly apparent at look-back times around 0.4–1 Myrs) that are determined by a combination of core He-burning and shell H-burning (e.g. Brocato & Castellani 1993, Langer & Maeder 1995). Because of these effects, one may expect that in some cases a SN may drop below the detection limit for a while but still have a renaissance, in the X-ray, optical and radio domains, several tens or hundreds of years later.
Also, we note in passing that our findings support the empirical evidence recently brought out by van Loon et al. (1999) on the basis of ISO data on RSG stars in the LMC. In fact, they found that the mass loss rates increase with increasing luminosities and decreasing effective temperatures, and range from $`10^{-6}`$ up to $`10^{-3}M_{\odot }yr^{-1}`$. A strong dependence of the mass loss rate on the effective temperature in tip-AGB stars was recently suggested by Schröder, Winters and Sedlmayer (1999) on the basis of theoretical evolutionary models which account for a carbon-rich wind driven by radiation pressure on dust.
Another interesting consequence of our results is that a more efficient mass loss in the RSG phase implies a lower mass cutoff to produce Wolf-Rayet stars and, therefore, one has to expect a more efficient mass return into the ISM than commonly adopted in galactic evolution calculations.
Still, there are improvements and refinements to apply to our models: the calculations presented here are not fully self-consistent, in that we adopted evolutionary tracks computed either with no mass loss whatsoever or with modest mass loss rates, and on them we performed our pulsational stability analysis and thus determined our new mass loss rates. Moreover, our models were constructed by adopting the diffusion approximation even in optically thin layers, and we therefore neglected the dust formation processes (Arndt et al. 1997). A macroscopic example of the shortcomings of our current approach is that if we integrate the mass loss rates over time, in many cases we find that the star loses a substantial fraction of its mass before reaching its evolutionary end. Although this is close to what one should expect on the basis of observations, it is definitely at variance with the assumptions that went into the adopted evolutionary model calculations.
It is clear that what we need to do now is to follow an iterative procedure in which we first use our present prescriptions to compute new evolutionary tracks, then repeat our pulsational stability analysis, then compute new mass loss rates, and iterate until adequate convergence is achieved. This work is in progress and will be presented in future papers. For the time being, our conclusions can be summarized as follows:
– We have defined a new theoretical scenario for pulsation induced mass loss in RSGs.
– RSGs are pulsationally unstable for a substantial portion of their lifetimes.
– Dynamical instabilities play a key role in driving mass loss.
– Bright, cool RSGs undergo mass loss at considerably higher rates than commonly adopted in stellar evolution.
– Comparisons of model predictions with observed CSM phenomena around SNII will provide valuable diagnostics about their progenitors and their evolutionary history.
– More efficient mass loss in the RSG phase implies a lower mass cutoff to produce Wolf-Rayet stars and a more efficient return of polluted material into the ISM, thus affecting the expected chemical evolution of galaxies.
# Effect of 𝛾-irradiation on superconducting transition temperature and resistive transition in polycrystalline YBa2Cu3O(7-δ)
## Abstract
A bulk polycrystalline sample of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>(7-δ)</sub> ($`\delta \approx 0.1`$) has been irradiated by $`\gamma `$-rays from a <sup>60</sup>Co source. Non-monotonic behavior of T<sub>c</sub> with increasing irradiation dose $`\mathrm{\Phi }`$ (up to about 220 MR) is observed: $`T_c`$ decreases at low doses ($`\mathrm{\Phi }\lesssim 50`$ MR) from its initial value ($`\approx 93`$ K) by about 2 K and then rises, forming a minimum. At higher doses ($`\mathrm{\Phi }\gtrsim 120`$ MR) T<sub>c</sub> goes down again. The temperature width of the resistive transition increases rather sharply with dose below 75 MR and drops somewhat at higher doses. The observed results are discussed taking into account the granular structure of the sample studied and the influence of $`\gamma `$-rays on intergrain Josephson coupling.
The influence of crystal-lattice disorder on superconductivity is one of the key points in understanding fundamental properties of high-$`T_c`$ superconductors (HTSCs). To a good approximation, the two main types of disorder, which are essential for superconductivity, can be distinguished . The first is microscopic disorder associated with perturbations of the crystal lattice on the atomic scale (e.g., impurities, vacancies). It is responsible for electron localization and other phenomena which can affect the superconducting order parameter. The second type of disorder is associated with structural inhomogeneity of superconductors (granular structure, phase separation etc.). The disorder scale in this case is far larger than interatomic distances, and, hence, this disorder is called macroscopic. The macroscopic disorder affects mainly the superconducting phase coherence. In experimental studies it is desirable to separate the effects of these types of disorder. Ignoring this point could lead to serious errors in interpretation of results.
The disordering of HTSCs can be produced with $`\gamma `$-rays. Attenuation lengths of $`\gamma `$-rays with energies of a few MeV are of the order of a few centimeters, which enables one to investigate bulk samples. The known works in this field mostly concern polycrystalline YBa<sub>2</sub>Cu<sub>3</sub>O<sub>(7-δ)</sub> (YBCO) (see and refs. therein). They are quite contradictory. In some of them no influence of $`\gamma `$-rays on $`T_c`$ and resistivity, $`\rho `$, was found up to a dose $`\mathrm{\Phi }\approx 1000`$ MR, whereas in others a marked decrease in $`T_c`$ and increase in $`\rho `$ were observed at quite low doses.
In this report, we present a study of the effect of $`\gamma `$-rays on $`T_c`$ and the superconducting transition in a bulk polycrystalline sample (with grains of about 12 $`\mu `$m) of YBCO with $`\delta \approx 0.1`$. Irradiation was accomplished with a <sup>60</sup>Co source at room temperature in air, up to a dose $`\mathrm{\Phi }\approx 220`$ MR. The temperature dependence $`\rho (T)`$ is found to be linear above $`T_c`$ up to 300 K, which is the usual behavior for optimally doped YBCO. We have defined the experimental $`T_c`$ to be the temperature at which the normal resistance is halved. Before irradiation, $`T_c`$ was about 93 K. The temperature $`T_{cz}`$, at which the resistance goes to zero, was used as a second characteristic of the resistive transition. The difference between $`T_c`$ and $`T_{cz}`$, $`\delta T_c=T_c-T_{cz}`$, is a quite definite measure of the width of the resistive transition. Sometimes the temperature $`T_{cb}`$ at the onset of the superconducting transition is used for the characterization of HTSCs, but this temperature can be evaluated with much less precision than $`T_c`$ or $`T_{cz}`$.
The changes in $`T_c`$ and $`T_{cz}`$ with $`\gamma `$-ray dose are shown in Fig. 1. It can be seen that $`T_c`$ decreases at low doses ($`\mathrm{\Phi }\lesssim 50`$ MR) by $`\approx 2`$ K and then rises again, forming a minimum. At higher doses ($`\mathrm{\Phi }\gtrsim 120`$ MR) $`T_c`$ clearly goes down again. The zero-resistance temperature $`T_{cz}`$ varies with $`\gamma `$-ray dose in nearly the same way as $`T_c`$, but with greater amplitude: the initial decrease is about 4 K. This means that $`\delta T_c`$ (which is a measure of sample inhomogeneity) increases with dose up to $`\mathrm{\Phi }\approx 75`$ MR. At higher doses $`\delta T_c`$ stops increasing and even drops somewhat. The effect of $`\gamma `$-rays on the resistivity was found only at temperatures close to $`T_c`$; no radiation effect on the resistivity was detected above 200 K.
Granularity of the sample should be taken into account in evaluating the results. The sample resistivity at room temperature ($`\approx 3.2`$ m$`\mathrm{\Omega }`$ cm) is larger than that of YBCO single crystals by at least a factor of 10. The increased value comes from grain boundaries, which can be poorly conducting or even dielectric in HTSCs. At the same time, the measured $`T_c`$ ($`\approx 93`$ K) corresponds to the highest $`T_c`$ in YBCO single crystals. This indicates the presence of optimal current-carrying chains of grains with strong Josephson coupling.
Our calculations of the cross sections for the displacement of lattice atoms in YBCO by $`\gamma `$-rays due to the Compton process have shown that at commonly used $`\gamma `$-ray doses (up to 1000 MR) one should not expect any detectable variations in $`\rho `$ and $`T_c`$ in homogeneous crystals of YBCO. The effects observed in this work (as well as in the previous studies ) are therefore undoubtedly connected with the influence of $`\gamma `$-rays on the grain-boundary regions. In HTSCs, these regions and their environment are strongly depleted of charge carriers and can thus be very sensitive to $`\gamma `$-ray or particle irradiation.
The initial decrease in $`T_c`$ and $`T_{cz}`$, combined with the simultaneous increase in the width of the resistive transition at low doses (Fig. 1), is quite expected for a percolating granular system. The optimal percolation current paths, which ensured the high $`T_c`$ value before irradiation, surely have some “weak” links. These are grain boundaries which are sufficiently depleted of charge carriers to be sensitive even to small radiation doses. Displacement of atoms from these areas can lead to carrier removal and, therefore, to deterioration of the Josephson coupling in the “weak” links. This can explain the observed decrease in $`T_c`$ and increase in $`\delta T_c`$ at low doses (Fig. 1).
Although the initial $`T_c`$ drop appears to be explicable, the overall non-monotonic picture in Fig. 1 is fairly surprising. To our knowledge, such behavior has not been reported previously. It is likely that a second, independent mechanism of $`\gamma `$-ray influence (maybe unrelated to radiation damage) operates concurrently with the one mentioned above. This mechanism enhances the Josephson coupling and causes the increase in $`T_c`$ at higher doses. It may be connected with the ionizing influence of $`\gamma `$-rays. We will consider this hypothesis thoroughly in an extended paper.
# The Numerical Renormalization Group Method for correlated electrons
## 0.1 The Numerical Renormalization Group and the Kondo problem
The application of renormalization group (RG) ideas in the physics of condensed matter has been strongly influenced by the work of Wilson . His ‘theory for critical phenomena in connection with phase transitions’ has been awarded the Nobel prize in physics in 1982 . This paper deals with one aspect in the work of Wilson: the numerical renormalization group (NRG) method for the investigation of the Kondo problem.
The history of the Kondo problem goes back to the 1930’s, when a resistance minimum was found at very low temperatures in seemingly pure metals . This minimum, and the strong increase of the resistance $`\rho (T)`$ on further lowering the temperature, was later found to be caused by magnetic impurities (such as iron). Kondo successfully explained the resistance minimum within a perturbative calculation for the $`s`$-$`d`$\- (or Kondo-) model , a model for magnetic impurities in metals. However, Kondo’s result implies a divergence of $`\rho (T)`$ for $`T\to 0`$, in contrast to the saturation found experimentally. It became clear that this shortcoming is due to the perturbative approach used by Kondo.
An important step towards a solution of this problem (the ‘Kondo problem’) has been the scaling approach by Anderson . By successively eliminating high energy states, Anderson showed that the coupling $`J`$ in the effective low energy model diverges. However, the derivation only holds within perturbation theory in $`J`$ and is therefore not necessarily valid in the large-$`J`$ limit. A diverging coupling between impurity and conduction electrons corresponds to a perfect screening of the impurity spin; the magnetic moment therefore vanishes for $`T\to 0`$ and the resistivity no longer diverges. This result has been finally verified by Wilson’s NRG, as will be discussed below.
In the following, some details of the NRG method are explained in the context of the single impurity Anderson model (Wilson originally set up the RG transformation for the Kondo model, but the details of the NRG are essentially the same for both models ). The Hamiltonian of this model is given by
$$H=\underset{\sigma }{\sum }\epsilon _\mathrm{f}f_\sigma ^{\dagger }f_\sigma +Uf_{\uparrow }^{\dagger }f_{\uparrow }f_{\downarrow }^{\dagger }f_{\downarrow }+\underset{k\sigma }{\sum }\epsilon _kc_{k\sigma }^{\dagger }c_{k\sigma }+\underset{k\sigma }{\sum }V\left(f_\sigma ^{\dagger }c_{k\sigma }+c_{k\sigma }^{\dagger }f_\sigma \right)$$
(0.1)
In the model (0.1), $`c_{k\sigma }^{(\dagger )}`$ denote annihilation (creation) operators for band states with spin $`\sigma `$ and energy $`\epsilon _k`$, and $`f_\sigma ^{(\dagger )}`$ those for impurity states with spin $`\sigma `$ and energy $`\epsilon _\mathrm{f}`$. The Coulomb interaction for two electrons at the impurity site is given by $`U`$, and the two subsystems are coupled via a hybridization $`V`$.
The first step in setting up the RG transformation is a logarithmic discretization of the conduction band (see Fig. 0.1): the continuous conduction band is divided into (infinitely many) intervals $`[\xi _{n+1},\xi _n]`$ and $`[-\xi _n,-\xi _{n+1}]`$ with $`\xi _n=D\mathrm{\Lambda }^{-n}`$ and $`n=0,1,2,\mathrm{}`$. $`D`$ is the half-bandwidth of the conduction band and $`\mathrm{\Lambda }`$ the NRG discretization parameter (typical values used in the calculations are $`\mathrm{\Lambda }=1.5,\mathrm{},2`$). The conduction band states in each interval are then replaced by a single state. Although this approximation by a discrete set of states involves some coarse graining at higher energies, it captures arbitrarily small energies near the Fermi level.
In a second step, the discrete model is mapped onto a semi-infinite chain described by the Hamiltonian (see also Fig. 2):
$$H=\underset{\sigma }{\sum }\epsilon _\mathrm{f}f_{-1\sigma }^{\dagger }f_{-1\sigma }+Uf_{-1\uparrow }^{\dagger }f_{-1\uparrow }f_{-1\downarrow }^{\dagger }f_{-1\downarrow }+\underset{\sigma }{\sum }\underset{n=-1}{\overset{\mathrm{}}{\sum }}\epsilon _n\left(f_{n\sigma }^{\dagger }f_{n+1\sigma }+f_{n+1\sigma }^{\dagger }f_{n\sigma }\right)$$
(0.2)
Here, the impurity operators are written as $`f_{-1\sigma }^{(\dagger )}`$ and the conduction band states as $`f_{n\sigma }^{(\dagger )}`$ with $`n=0,1,2,\mathrm{}`$. Due to the logarithmic discretization, the hopping matrix elements decrease as $`\epsilon _n\propto \mathrm{\Lambda }^{-n/2}`$. This can be easily understood by considering a discretized conduction band with a finite number of states $`M`$ (with $`M`$ even). The lowest energy scale is, according to Fig. 0.1, given by $`D\mathrm{\Lambda }^{-M/2}`$. This discrete model is mapped onto a semi-infinite chain with the same number of conduction electron degrees of freedom, $`M`$. The only way to generate the low energy scale $`D\mathrm{\Lambda }^{-M/2}`$ is then through the hopping matrix elements $`\epsilon _n`$, so they have to fall off by a factor of $`\sqrt{\mathrm{\Lambda }}`$ from site to site.
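As a quick numerical illustration of these scales (a sketch of our own; $`D`$ and $`\mathrm{\Lambda }`$ below are placeholder values, not parameters from the text), one can tabulate the discretization points and the corresponding hopping fall-off:

```python
import numpy as np

# Illustrative sketch: logarithmic discretization points xi_n = D * Lambda^-n
# and the Wilson-chain hopping scale eps_n ~ Lambda^(-n/2).
# D and Lam are placeholder values.
D, Lam = 1.0, 2.0

for n in range(10):
    xi = D * Lam**(-n)            # boundary of the interval [xi_{n+1}, xi_n]
    eps = D * Lam**(-n / 2.0)     # hopping falls off by sqrt(Lambda) per site
    print(f"n={n:2d}  xi_n={xi:.4e}  eps_n ~ {eps:.4e}")
```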
This means that, in going along the chain, the system evolves from high energies (given by $`D`$ and $`U`$) to arbitrarily low energies (given by $`D\mathrm{\Lambda }^{-M/2}`$). The renormalization group transformation is now set up in the following way.
We start with the solution of the isolated impurity, that is, the knowledge of all eigenstates, eigenenergies and matrix elements. The first step of the renormalization group transformation is to add the first conduction electron site, set up the Hamiltonian matrices for the enlarged Hilbert space, and obtain the new eigenstates, eigenenergies and matrix elements by diagonalizing these matrices. This procedure is then iterated. An obvious problem occurs after only a few steps of the iteration: the Hilbert space grows as $`4^N`$, which makes it impossible to keep all the states in the calculation. Wilson therefore devised a very simple truncation procedure in which only those states (typically a few hundred) with the lowest energies are kept. This truncation scheme is very successful but relies on the fact that the hopping matrix elements fall off exponentially; high energy states therefore do not change the very low frequency behaviour and can be neglected.
This procedure gives for each cluster a set of eigenenergies and matrix elements from which a number of physical properties can be derived (this will be illustrated for the calculation of the spectral function in the next section). The eigenenergies themselves show the essential physics of the Kondo problem: Fig. 3 shows the dependence of the lowest-lying energy levels on the length of the chain (the energies are scaled by a factor $`\mathrm{\Lambda }^{N/2}`$); a single-particle illustration of this rescaled level flow is sketched after the list below. The system first approaches an unstable fixed point at $`N\approx 10`$–$`20`$ (the Local Moment fixed point) and then flows to a stable fixed point for $`N>50`$ (the Strong Coupling fixed point). By analyzing the structure of the Strong Coupling fixed point and by calculating perturbative corrections about it, Wilson (for the Kondo model ) and Krishnamurthy, Wilkins and Wilson (for the single impurity Anderson model ) found that
* right at the fixed point, the impurity spin is completely screened;
* on approaching the fixed point, the thermodynamic properties are Fermi-liquid like; i.e. the magnetic susceptibility $`\chi (T)`$ approaches a constant value for $`T\to 0`$ and the specific heat $`C=\gamma T`$ is linear in $`T`$ for $`T\to 0`$; the ratio $`R=\chi /\gamma `$ is known as the Wilson ratio and takes the universal value $`R=2`$ in the Kondo model;
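The rescaled level flow can be made concrete with a small single-particle toy calculation (our own sketch, not part of Wilson’s many-body calculation; with no interaction there is no truncation, and the rescaling exponent is a convention choice for this sketch):

```python
import numpy as np

# Toy single-particle illustration of the rescaled level flow: diagonalize
# a Wilson chain with hoppings ~ Lambda^(-n/2) for growing length N and
# rescale the spectrum by Lambda^((N-2)/2). For fixed parity of N the
# rescaled levels approach fixed values, mimicking a fixed-point spectrum.
Lam = 2.0

for N in range(4, 29, 4):
    H = np.zeros((N, N))
    for n in range(N - 1):
        H[n, n + 1] = H[n + 1, n] = Lam**(-n / 2.0)
    E = np.sort(np.linalg.eigvalsh(H)) * Lam**((N - 2) / 2.0)
    pos = E[E > 0][:3]                 # lowest positive rescaled levels
    print(f"N={N:2d}  lowest levels: {np.round(pos, 4)}")
```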
## 0.2 Developments and Applications of the NRG method
The NRG approach described so far has two main advantages: it is non-perturbative and can deal with arbitrary values of $`U`$ (simply because the impurity part is diagonalized exactly), and it can describe the physics at arbitrarily low energies and temperatures (due to the logarithmic discretization). This is important in Wilson’s calculation for the Kondo problem, which indeed showed what had been anticipated by Anderson: the development of a ground state with a completely screened impurity (the Fermi-liquid or strong-coupling fixed point). The crossover to this fixed point occurs at the Kondo scale
$$k_\mathrm{B}T_\mathrm{K}=D\left(\frac{\mathrm{\Delta }}{2U}\right)^{1/2}\mathrm{exp}\left(-\frac{\pi U}{8\mathrm{\Delta }}\right).$$
(0.3)
(This form is valid in the particle-hole symmetric case $`\epsilon _\mathrm{f}=-U/2`$; $`\mathrm{\Delta }`$ is defined as $`\mathrm{\Delta }=\frac{1}{2}\pi V^2N(E_\mathrm{F})`$ with $`N(E_\mathrm{F})`$ the density of states of the conduction electrons at the Fermi level). A sufficiently large ratio $`U/\mathrm{\Delta }`$ can therefore generate arbitrarily low energy scales.
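For orientation, eq. (0.3) is easy to evaluate numerically; the following sketch (with made-up parameter values, not values quoted in the text) shows how rapidly $`T_\mathrm{K}`$ collapses as $`U/\mathrm{\Delta }`$ grows:

```python
import numpy as np

# Evaluate the Kondo scale of eq. (0.3):
#   k_B T_K = D * sqrt(Delta / 2U) * exp(-pi U / (8 Delta))
# D and Delta below are illustrative placeholder values.
D, Delta = 1.0, 0.1

for ratio in (10, 20, 40):
    U = ratio * Delta
    T_K = D * np.sqrt(Delta / (2 * U)) * np.exp(-np.pi * U / (8 * Delta))
    print(f"U/Delta={ratio:3d}  k_B T_K / D = {T_K:.3e}")
```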
On the other hand, the NRG method has one main drawback: it is only applicable to impurity-type models and therefore lacks the flexibility of, e.g., the quantum Monte Carlo method. A typical example where the NRG fails is the one-dimensional Hubbard model. This model is very similar to the semi-infinite chain model of eq. (0.2), but with constant hopping matrix elements between neighbouring sites and a Coulomb repulsion $`U`$ on each site. One might therefore expect a similar iterative diagonalization scheme as for the Hamiltonian (0.2) to work for the Hubbard model as well. However, the truncation scheme (keeping only the lowest-lying states) does not work for a model where the same energy scales ($`U`$ and the bandwidth) are added at each step of the RG procedure: the low energy spectrum of the cluster with one additional site now depends on states from the whole energy spectrum of the previous iteration. (A solution to this problem, i.e. finding a truncation scheme which gives an accurate description of the larger cluster, is the Density Matrix Renormalization Group method .)
There are, fortunately, a lot of interesting impurity models where the NRG can be applied and where it provided insights into a variety of physical problems. Non-Fermi liquid behaviour has been studied in the context of the Two-Channel-Kondo-Model and related models . The structure of the Non-Fermi liquid fixed point as well as its stability against various perturbations has been clarified using the NRG method.
Another example is the quantum phase transition in impurity models coupling to conduction electrons with a vanishing density of states at the Fermi level: $`\rho _c(\omega )\propto |\omega |^r`$. Here the NRG enables a non-perturbative investigation of both the strong-coupling and local moment phases as well as the quantum critical point separating these two .
Apart from applying the NRG to generalized impurity models, some important technical developments have been made during the past 10 - 15 years; most notably the calculation of dynamical properties, both at zero and finite temperatures .
Let us briefly discuss how to calculate the single-particle spectral function
$$A(\omega )=-\frac{1}{\pi }\mathrm{Im}G(\omega +i\delta ^+),\mathrm{with}G(z)=\langle \langle f_\sigma ,f_\sigma ^{\dagger }\rangle \rangle _z,$$
(0.4)
within the NRG approach. Due to the discreteness of the Hamiltonian, the spectral function $`A(\omega )`$ is given by a discrete set of $`\delta `$-peaks and the general expression for finite temperature reads:
$$A_N(\omega )=\frac{1}{Z_N}\underset{nm}{\sum }\left|\langle n\left|f_{-1\sigma }^{\dagger }\right|m\rangle \right|^2\delta \left(\omega -(E_n-E_m)\right)\left(e^{-\beta E_m}+e^{-\beta E_n}\right).$$
(0.5)
The index $`N`$ specifies the iteration number (the cluster size), and for each $`N`$ the spectral function is calculated from the matrix elements $`\langle n|f_{-1\sigma }^{\dagger }|m\rangle `$ and the eigenenergies $`E_n,E_m`$. $`Z_N`$ is the grand canonical partition function. Eq. (0.5) defines the spectral function for each cluster, and a typical result is shown in Fig. 0.4.
Here, the weight of the $`\delta `$-peaks in eq. (0.5) is represented by the height of the spikes. One can clearly see the typical three-peak structure in the result for the 14-site cluster: charge fluctuation peaks centered at $`\omega \approx \pm 0.7`$ ($`\omega \approx \pm U/2`$) and a quasiparticle peak at the Fermi level (here $`\omega =0`$). However, the resolution of the quasiparticle peak appears to be rather unsatisfactory: there is no information on the spectral density below $`|\omega |\approx 0.04`$. The advantage of the NRG approach (as compared to, e.g., the Exact Diagonalization technique) is that by successively increasing the length of the chain, one can extract the information on the spectral density down to arbitrarily low energy scales. This is seen in the results for the $`N=16`$ and $`N=18`$ clusters in Fig. 0.4. The necessary truncation of states, as described in the previous section, is also obvious from Fig. 0.4. There are no excitations for $`|\omega |>0.85`$ ($`|\omega |>0.45`$) in the $`N=16`$ ($`N=18`$) cluster, so the information on the charge fluctuation peaks is lost for the $`N=16`$ and larger clusters. In order to obtain the spectral density on all energy scales, the data from all cluster sizes have to be put together; each cluster size only provides the information on its relevant energy scale.
The resulting spectrum will still be discrete, of course, with the $`\delta `$-peaks getting closer and closer together for $`\omega \to 0`$. It is convenient (both for using the results in further calculations and for visualizing the distribution of spectral weight) to broaden the $`\delta `$-peaks in eq. (0.5) via
$$\delta (\omega -\omega _n)\to \frac{e^{-b^2/4}}{b\omega _n\sqrt{\pi }}\mathrm{exp}\left[-\frac{(\mathrm{ln}\omega -\mathrm{ln}\omega _n)^2}{b^2}\right]$$
(0.6)
The broadening function is a gaussian on a logarithmic scale with width $`b`$. In this way, the broadening takes into account the logarithmic distribution of the $`\delta `$-peaks.
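A minimal sketch of this broadening step (peak positions and weights below are invented for illustration; only the kernel of eq. (0.6) is taken from the text) might look as follows:

```python
import numpy as np

# Log-Gaussian broadening of eq. (0.6): each delta peak at omega_n with
# weight w_n is replaced by a Gaussian of width b on a logarithmic scale.
# The kernel conserves the weight of each peak (substitute u = ln(omega)).
def broaden(omega, peak_pos, peak_wgt, b=0.6):
    A = np.zeros_like(omega)
    for wn, w in zip(peak_pos, peak_wgt):
        A += (w * np.exp(-b**2 / 4.0) / (b * wn * np.sqrt(np.pi))
              * np.exp(-(np.log(omega) - np.log(wn))**2 / b**2))
    return A

omega = np.logspace(-6, 0, 500)        # positive frequencies only
peaks = np.logspace(-5, -1, 9)         # logarithmically spaced delta peaks
A = broaden(omega, peaks, np.ones(9))
print(f"total weight ~ {np.trapz(A, omega):.2f} (should be ~9)")
```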
Typical results for the spectral function of the single impurity Anderson model are shown in Fig. 0.5. The spectra clearly show the narrowing of the quasiparticle resonance on increasing the ratio $`U/\mathrm{\Delta }`$ – corresponding to the exponential dependence of the low energy scale $`T_\mathrm{K}`$ on $`U/\mathrm{\Delta }`$.
Let us now discuss another, very important development which made it possible to apply the NRG method also to lattice models of correlated electrons: the Dynamical Mean Field Theory (DMFT).
Metzner and Vollhardt showed that one can define a non-trivial limit of infinite spatial dimensions for lattice fermion models (such as the Hubbard model). In this limit, the self-energy becomes purely local, which allows the mapping of the lattice model onto an effective single impurity Anderson model. This impurity model has the same structure as in eq. (0.1), but the density of states of the conduction band has to be determined self-consistently and therefore acquires some frequency dependence. The NRG can nevertheless be applied to this case (for details see ). The first attempt to study the Hubbard model in this way is the work of Sakai and Kuramoto . The results obtained later by Bulla, Hewson and Pruschke and Bulla will be discussed in the following section.
## 0.3 NRG results for the Mott-Hubbard metal-insulator transition
The Mott-Hubbard metal-insulator transition is one of the most fascinating phenomena of strongly correlated electron systems. This transition from a paramagnetic metal to a paramagnetic insulator is found in various transition metal oxides, such as $`\mathrm{V}_2\mathrm{O}_3`$ doped with Cr . The mechanism driving the Mott-Hubbard transition is believed to be the local Coulomb repulsion $`U`$ between electrons on a same lattice site, although the details of the transition should also be influenced by lattice degrees of freedom. Therefore, the simplest model to investigate the correlation driven metal-insulator transition is the Hubbard model
$$H=-t\underset{<ij>\sigma }{\sum }(c_{i\sigma }^{\dagger }c_{j\sigma }+c_{j\sigma }^{\dagger }c_{i\sigma })+U\underset{i}{\sum }c_{i\uparrow }^{\dagger }c_{i\uparrow }c_{i\downarrow }^{\dagger }c_{i\downarrow },$$
(0.7)
where $`c_{i\sigma }^{\dagger }`$ ($`c_{i\sigma }`$) denote creation (annihilation) operators for a fermion with spin $`\sigma `$ on site $`i`$, $`t`$ is the hopping matrix element, and the sum $`_{<ij>}`$ is restricted to nearest neighbors. Despite its simple structure, the solution of this model turns out to be an extremely difficult many-body problem. The situation is particularly complicated near the metal-insulator transition, where $`U`$ and the bandwidth are roughly of the same order and perturbative schemes (in $`U`$ or $`t`$) are not applicable.
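As a minimal non-perturbative illustration of eq. (0.7) (our own toy example, not part of the reviewed work), the two-site Hubbard model at half filling can be diagonalized exactly in the $`S_z=0`$ sector; its ground-state energy interpolates smoothly between the band limit $`-2t`$ and the strong-coupling limit $`-4t^2/U`$:

```python
import numpy as np

# Exact diagonalization of the two-site Hubbard model, S_z = 0 sector.
# Basis (with a fixed fermion ordering): |ud,0>, |u,d>, |d,u>, |0,ud>.
# The exact ground state energy is U/2 - sqrt(U^2/4 + 4 t^2).
def two_site_hubbard(t, U):
    H = np.array([[U,  -t, +t, 0.0],
                  [-t,  0,  0, -t],
                  [+t,  0,  0, +t],
                  [0.0, -t, +t,  U]])
    return np.sort(np.linalg.eigvalsh(H))

for U in (0.0, 4.0, 16.0, 64.0):
    E0 = two_site_hubbard(1.0, U)[0]
    print(f"U={U:5.1f}  E0={E0:+.4f}")
```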
The DMFT has already been briefly described in section 2; this method has enabled a very detailed analysis of the phase diagram of the infinite-dimensional Hubbard model . The nature of the Mott transition, however, has been the subject of a lively debate over the past five years (see ). This debate focusses on the existence (or non-existence) of a hysteresis region at very low temperatures. In such a region, two stable solutions of the DMFT equations should exist: a metallic and an insulating one. This scenario has been proposed by Georges et al. based on calculations using the Iterated Perturbation Theory (IPT), Quantum Monte Carlo and Exact Diagonalization . The validity of this result has been questioned by various authors .
Let us now discuss the NRG results for the infinite-dimensional Hubbard model, first of all for $`T=0`$. The spectral function $`A(\omega )`$ for the Bethe lattice is shown in Fig. 0.6 for $`U=0.8U_\mathrm{c}`$, $`U=0.99U_\mathrm{c}`$ and $`U=1.1U_\mathrm{c}`$ ($`U_\mathrm{c}\approx 1.47W`$, $`W`$: bandwidth). In the metallic phase (for large enough values of $`U`$) the spectral function shows the typical three-peak structure with upper and lower Hubbard bands centered at $`\pm U/2`$ and a quasiparticle peak at the Fermi level. For $`U=0.99U_\mathrm{c}`$, the quasiparticle peak in both the Bethe and hypercubic lattices seems to be isolated (within the numerical accuracy) from the upper and lower Hubbard bands, similar to what has been observed in the IPT calculations for the Bethe lattice . Consequently, the gap appears to open discontinuously at the critical $`U`$ (whether the spectral weight between the Hubbard bands and the quasiparticle peak is exactly zero or very small but finite cannot be decided with the numerical approach used here).
The quasiparticle peak vanishes at $`U_\mathrm{c}\approx 1.47W`$, in excellent agreement with the result from the Projective Self-consistent Method (PSCM), $`U_\mathrm{c}\approx 1.46W`$. Coexistence of metallic and insulating solutions in an interval $`U_{\mathrm{c},1}<U<U_{\mathrm{c},2}`$ is also found within the NRG approach. Starting from $`U=0`$, the metal-to-insulator transition occurs at the critical $`U_{\mathrm{c},2}`$ with the vanishing of the quasiparticle peak. Starting from the insulating side, the insulator-to-metal transition happens at $`U_{\mathrm{c},1}<U_{\mathrm{c},2}`$ (the NRG and IPT give $`U_{\mathrm{c},1}\approx 1.25W`$ for the Bethe lattice).
The NRG method for the Hubbard model has only recently been generalized to finite temperatures . Preliminary results for the spectral function are shown in Fig. 0.7 for $`T=0.00625W`$ and increasing values of $`U`$. The upper critical $`U`$ is given by $`U_{\mathrm{c},2}\approx 1.24W`$, and the transition at $`U_{\mathrm{c},2}`$ is of first order, i.e. associated with a transfer of spectral weight. The ‘insulator’ for $`U>U_{\mathrm{c},2}`$ does not develop a full gap (this is only possible for $`U\to \mathrm{}`$ or $`T\to 0`$), but the corresponding transport properties in this temperature range will certainly be insulating-like.
For this temperature, the NRG again finds two stable solutions in an interval $`U_{\mathrm{c},1}(T)<U<U_{\mathrm{c},2}(T)`$: a metallic one, with a quasiparticle peak at the Fermi level and an ‘insulating’ one, with very small spectral weight at the Fermi level (not shown here). The exact shape of the hysteresis region has still to be determined and will be discussed elsewhere .
What we have seen in this section is that the NRG-method (together with the DMFT) can be applied to the infinite-dimensional Hubbard model and allows a non-perturbative calculation of dynamical properties. The calculations can be performed for arbitrary interaction strength and temperature, so that the phase diagram can be (in principle) determined in the full parameter space.
## 0.4 Further developments of the NRG method
As we have discussed in the previous sections, Wilson’s NRG can be applied to two different classes of problems: impurity models and lattice models (the latter ones, however, only within the DMFT).
Concerning impurity models, the NRG has provided important theoretical insight for a variety of problems and certainly will do so in the future. In the light of the increasing possibilities of experimental fabrication, new classes of impurity models are becoming of interest. The behaviour of electrons in quantum dots, for example, can be interpreted as that of an impurity in a conduction band (for an application of the NRG method to this problem, see ). Magnetic impurities can also serve as sensors, put into certain materials in a controlled way. Here one might think of impurities in a correlated host , or impurities in a superconducting or magnetic medium. A lot of theoretical work in applying the NRG method to these problems still needs to be done.
The second class of models are lattice models within the DMFT. Here, the NRG allows (at least in principle) the calculation of a large set of experimentally relevant quantities for a wide range of parameters (especially low temperatures and strong correlation) for a large class of models. Apart from the application to the Hubbard model which has been briefly discussed in section 3, the NRG has already been applied to the periodic Anderson model and to the problem of charge ordering in the extended Hubbard model . Future work will focus on generalizing the NRG method to magnetically ordered states and to systems with a coupling to (dynamical) phonons.
Of particular interest is the generalization of the NRG to multi-band models. In this way, the NRG could further extend the range of applicability of the LDA+DMFT approach . Here, the non-interacting electronic band structure as calculated by the local density approximation is taken as a starting point, with the missing correlations introduced via the DMFT. On a more fundamental level, the basic physics of multi-band models at low temperatures still needs to be clarified, and again the NRG is the obvious choice for investigating such models in the low-$`T`$ and intermediate-to-large-$`U`$ regime.
The author would like to thank T. Costi, D.E. Logan, A.C. Hewson, W. Hofstetter, M. Potthoff, Th. Pruschke, and D. Vollhardt for stimulating discussions and collaboration over the past few years. Part of this work was supported by the Deutsche Forschungsgemeinschaft, grant No. Bu965-1/1 and by the Sonderforschungsbereich 484.
Figure 1: Kink geometry. Broadly speaking, the screw dislocation represents the locus where planes of atoms form a helix. In copper it spreads out into a ribbon along the $`x`$-axis to lower its energy. The kink we study shifts the ribbon by one atom in the $`x`$ direction. More specifically, the $`𝐛=\frac{a}{2}[110]`$ dislocation on the left dissociates on the $`(1\overline{1}1)`$ plane into the Shockley partials $`\alpha C=\frac{a}{6}[121]`$ and $`D\alpha =\frac{a}{6}[21\overline{1}]`$, respectively. The kink is introduced with line vector $`\xi _{\text{kink}}=\frac{a}{4}[\overline{1}12]`$, and dissociates into a wide screw-like and a bulky edge-like kink located on the partial dislocations. The lighter atoms are on the stacking fault (hcp local environments) and the darker atoms are along the partial dislocations (neither hcp nor fcc).
# Calculation of quantum tunneling for a spatially extended defect: the dislocation kink in copper has a low effective mass
Tejs Vegge<sup>∗,†</sup>, James P. Sethna, Siew-Ann Cheong, K. W. Jacobsen, Christopher R. Myers, Daniel C. Ralph
Center for Atomic Scale Materials Physics (CAMP) and Department of Physics,
Building 307, Technical University of Denmark, DK-2800, Kgs. Lyngby, Denmark
Materials Research Department, Risø National Laboratory, DK-4000 Roskilde, Denmark
Laboratory of Atomic and Solid State Physics (LASSP), Clark Hall, Cornell University, Ithaca, NY 14853-2501, USA
Cornell Theory Center, Cornell University, Ithaca, NY 14853, USA
> > Several experiments indicate that there are atomic tunneling defects in plastically deformed metals. How this is possible has not been clear, given the large mass of the metal atoms. Using a classical molecular-dynamics calculation, we determine the structures, energy barriers, effective masses, and quantum tunneling rates for dislocation kinks and jogs in copper screw dislocations. We find that jogs are unlikely to tunnel, but the kinks should have large quantum fluctuations. The kink motion involves hundreds of atoms each shifting a tiny amount, leading to a small effective mass and tunneling barrier.
Tunneling of atoms is unusual. At root, the reason atoms don’t tunnel is that their tunneling barriers and distances are set by the much lighter electrons. The tunneling of a proton over a barrier one Rydberg high and one Bohr radius wide is suppressed by the exponential of $`\sqrt{2M_pR_ya_0^2}/\hbar =\sqrt{M_p/m_e}\approx 42.85`$: a factor of $`10^{-19}`$.
Nonetheless, atomic quantum tunneling dominates the low temperature properties of glasses and many doped crystals. In glasses, there are rare regions (one per $`10^5`$ or $`10^6`$ molecular units) where an atom or group of atoms has a double well with low enough barrier, tunneling distance, and asymmetry to be active. For certain dopants in crystals, off-center atoms and rotational modes of nearly spherical ionic molecules have unusually low barriers and tunneling distances. Quantitative modeling of these spatially localized tunneling defects has been frustrated by the demands for extremely accurate estimates of energy barriers, beyond even the best density functional electronic structure calculations available today. Also, although all detailed models of tunneling in glasses have basically involved one or a very few atoms, there has long been speculation that large numbers of atoms may be shifting during the tunneling process.
There is much evidence that quantum tunneling is important to the properties of undoped, plastically deformed metals. Quantum creep, glassy low-temperature behavior, and two-channel Kondo scaling seen in the voltage and temperature-dependent electrical conductivity in nanoconstrictions have been attributed to quantum tunneling associated with dislocations. It has never been clear how this can occur, given the large masses of the metal atoms involved.
We show here using a classical effective-medium interatomic potential that quantum fluctuations can indeed be important in the dynamics of one particular defect: a kink in the split-core screw dislocation in copper. The motion of the kink involves a concerted motion of hundreds of copper atoms, leading to a dramatic decrease in its effective mass. This delocalization perhaps lends support to ideas about collective centers in glasses. Also, because our important conclusions rest upon this delocalization, they are qualitatively much less sensitive to the accuracy of our potential than calculations for spatially localized tunneling defects. We assert that these kinks are likely the only candidate for quantum tunneling in pure fcc metals.
The kink simulation consists of two screw dislocations with opposite Burgers vectors b=$`\pm \frac{a}{2}\left[110\right]`$, allowing periodic boundary conditions and giving us the perfect translational invariance necessary to measure energy differences to the accuracy we need. The two dislocations are placed in different $`(1\overline{1}1)`$ planes separated by 20 $`(1\overline{1}1)`$ planes (4.4 nm), see Figure 1. The system is 86 planes wide (19.3 nm) in the two (non-orthogonal) directions, and extends 44.5 b (11.4 nm) along the dislocations. We introduce kinks or jogs on the dislocations by applying skew periodic boundary conditions to the system, i.e. we introduce a small mismatch in the dislocation cores at the interface to the next cell. The procedure also introduces a row of interstitial atoms between the kinked dislocations, which is subsequently removed from the system, leaving us with a total of 329 102 atoms. The kinks have a net line vector of $`\xi _{\text{kink}}=\frac{a}{4}[\overline{1}12]`$, with $`a`$ the lattice constant.
To show how unusual the properties of the kink are, we also study the properties of a dislocation jog. The jog simulation, and the associated energy barrier calculation, is similar and is described elsewhere. The elementary jog we study is introduced with a line vector oriented in the $`(\overline{1}11)`$ glide plane of the screw dislocation, $`\xi _{\text{jog}}=\frac{a}{4}[1\overline{1}2]`$, which then transforms into an obtuse lower energy configuration: $`\frac{a}{4}[1\overline{1}2]\frac{a}{2}[101]+\frac{a}{4}[\overline{11}0]`$. This jog is expected to be the most mobile of the jogs, second only to the kink in mobility.
We introduce the two kinked dislocations directly as Shockley partial dislocations, see figure 1, and relax using the MD-min algorithm, using Effective Medium Theory (EMT): a many-body classical potential, which is computationally almost as fast as a pair potential, while still describing the elastic properties well. The elastic constants of the potential are: $`C_{11}=176.2\text{GPa}`$, $`C_{12}=116.0\text{GPa}`$ and $`C_{44}=90.6\text{GPa}`$ with a Voigt average shear modulus of $`\mu =66\text{GPa}`$, and an intrinsic stacking fault energy of $`\gamma _\text{I}=31\text{mJ}/\text{m}^2`$.
We present three quantities for the kink and jog: the Peierls-like barrier for migration along the dislocation, the effective mass, and an upper bound for the WKB factor suppressing quantum tunneling through that barrier. Since the motion of these defects involves several atoms moving in a coordinated fashion, we use instantons: the appropriate generalization of WKB analysis to many-dimensional configuration spaces. An upper bound for the tunneling matrix element is given by the effective mass approximation,
$$\mathrm{\Delta }\approx \hbar \omega _0\mathrm{exp}\left(-\int \sqrt{2M^{*}(Q)V^{*}(Q)}\,dQ/\hbar \right),$$
(1)
where $`\omega _0`$ is an attempt frequency, $`V^{*}(Q)`$ is the energy of the defect at position $`Q`$ with the neighbors in their relaxed, minimum energy positions $`q_i(Q)`$, and
$$M^{*}(Q)=\underset{i}{\sum }M_i(dq_i/dQ)^2$$
(2)
is the effective mass of the defect, incorporating the kinetic energy of the surrounding atoms as they respond adiabatically to its motion. The effective mass approximation is usually excellent for atomic tunneling. The method is variational, so equations 1 and 2 remain upper bounds under other assumptions about the tunneling path $`q_i(Q)`$ (such as the straight-line path between the two minima described below for the kinks); a numerical sketch of this bookkeeping follows.
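A schematic sketch of eqs. (1)-(2) for a constant barrier (the displacement pattern below is fabricated solely to illustrate the bookkeeping; it is not the computed field):

```python
import numpy as np

# Sketch of eqs. (1)-(2): effective mass from a displacement pattern, and
# the WKB exponent for a constant barrier. All numbers are illustrative.
hbar = 1.0545718e-34          # J s
amu = 1.66053907e-27          # kg
eV = 1.602176634e-19          # J
M_Cu = 63.55 * amu

dq_dQ = np.full(200, 0.06)    # 200 atoms, each carrying a small fraction dq/dQ
M_eff = np.sum(M_Cu * dq_dQ**2)          # eq. (2)

V, Q = 0.015 * eV, 2.5e-10    # barrier height and tunneling distance
S = np.sqrt(2 * M_eff * V) * Q / hbar    # exponent of eq. (1), constant V*
print(f"M* = {M_eff/M_Cu:.2f} M_Cu,  exp(-S) = {np.exp(-S):.2e}")
```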
The difficulty of finding models for atomic tunneling is illustrated rather well by the properties of the jog we study. The barrier for migration was determined to be 15 meV: lower than for other jogs, or even than surface diffusion barriers calculated with the same potential. The effective mass for the jog, estimated by summing the squared displacements of the 200 atoms with the largest motion, is $`M_{\text{jog}}^{*}\approx 0.36M_{\text{Cu}}`$: the jog is spatially localized (it doesn’t dissociate into partials), with a few atoms in the core of the jog carrying most of the motion. The WKB tunneling matrix element for the jog to tunnel a distance $`Q=2.5`$ Å over a barrier $`V=0.015\text{eV}`$ is suppressed by a factor of roughly $`\mathrm{exp}(-\sqrt{2M_{\text{jog}}^{*}V}Q/\hbar )10^{-14}`$. Jogs don’t tunnel much.
For the kinks, we take a relaxed initial configuration and define a final configuration with the kink migrated by one lattice spacing along the dislocation. The final position for each atom is given by the position of the neighboring atom closest to the current position, minus the kink migration vector $`l_{\text{migr.}}=\frac{a}{2}\left[110\right]`$, which represents the net motion of the kink. This automatically gives the correct relaxed final position, which is otherwise difficult to locate given the extremely small barriers. The width of a kink is the traditional name for its extent along the axis of the screw dislocation. We can measure this width by looking at the net displacement of atoms between the initial and final configurations. We find that the displacement field is concentrated into two partial kinks, located on the partial dislocation cores, see Figure 2. These two partial kinks are quite wide (FWHM of 13 b and 21 b). They differ because the partials are of mixed edge and screw character; it is known that the kink which forces a mixed dislocation towards the screw direction will be wider and have higher energy. This is wider than the $`w<10𝐛`$ predicted for slip dislocations in close-packed materials by Hirth and Lothe, and by Seeger and Schiller, using line tension models .
Notice that the maximum net distance moved by an atom during the kink motion in Figure 2 is around 0.01 Å. Summing the squares of all the atomic motions and using equation 2, we find an effective mass $`M_{\text{kink}}^{*}\approx M_{\text{Cu}}/130`$ within the straight-line path approximation. This remarkably small mass can be attributed to three factors. (1) The mass is decreased because the screw dislocation is split into two partial dislocations . (2) The cores of the partial dislocations are spread transversally over $`W_T\approx 4d`$, figure 3; this factor seems to have been missed in continuum treatments. These first two factors each reduce the total distance moved by an atom as the kink passes from $`z=\mathrm{}`$ to $`+\mathrm{}`$. (3) The kink partials average $`W_L\approx 17𝐛`$ wide (above), so the total atomic motion is spread over around 17 kink migration hops. Thus when the kink moves by $`x`$, the atoms in two regions $`W_L`$ long and $`W_T`$ wide each move by $`x/(2W_LW_T)`$, reducing the effective mass by roughly a factor of $`2W_LW_T\approx 136`$.
Evaluating the energy at equally spaced atomic configurations linearly interpolated between the initial and final states (along the straight-line path) yields an upper bound to the kink-migration barrier of $`0.15\mu `$eV, figure 4. We attribute this extremely small barrier to the wide kink partials: we expect the barrier $`V`$ to scale exponentially with the ratio of the kink width $`W`$ to the (110) interplanar distance b: $`V\propto \mathrm{exp}(-W/𝐛)`$. If one thinks of the contribution to the energy of the $`n^{\mathrm{th}}`$ layer as some analytic function $`f(n/W+\delta )`$, then the barrier is given by the variation of the sum $`_{n=\mathrm{}}^{\mathrm{}}f(n/W+\delta )`$ with the position shift $`\delta `$. The difference between this sum and the ($`\delta `$-independent) integral is easily estimated by Fourier transforms, and is approximately $`2\stackrel{~}{f}(2\pi W/a)`$. The Fourier transform of an analytic function decays exponentially. One imagines this could be proven to all orders in perturbation theory.
This small barrier is not only negligible for thermal activation (two mK), but also for quantum tunneling. The WKB factor suppressing the tunneling would be $`\mathrm{exp}(-\sqrt{2M_{\text{kink}}^{*}V}Q/\hbar )=\mathrm{exp}(-0.0148)=0.985`$. Even at zero temperature, the kinks effectively act as free particles, as suggested in the literature (, among others).
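Plugging the quoted numbers into the WKB exponent provides a quick consistency check (the kink hop distance is taken here as $`Q𝐛2.56`$ Å, an assumption on our part):

```python
import numpy as np

# Check of the quoted WKB factors for jog and kink: S = sqrt(2 M* V) Q / hbar.
# The kink hop distance Q ~ b ~ 2.56 Angstrom is an assumed value.
hbar, amu, eV, ang = 1.0545718e-34, 1.66053907e-27, 1.602176634e-19, 1e-10
M_Cu = 63.55 * amu

def wkb_factor(M_eff, V_eV, Q_ang):
    S = np.sqrt(2 * M_eff * V_eV * eV) * Q_ang * ang / hbar
    return np.exp(-S)

print("jog :", wkb_factor(0.36 * M_Cu, 0.015, 2.5))      # ~1e-14, as quoted
print("kink:", wkb_factor(M_Cu / 130, 0.15e-6, 2.56))    # ~0.985, as quoted
```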
Our estimated kink migration barrier is thus $`10^5`$ times smaller than that for the most mobile of the jogs. How much can we trust our calculation of this remarkably small barrier? Schottky estimates, using a simple line-tension model, that the barrier would be $`3\times 10^{-5}`$ eV in fcc materials, using a Peierls stress $`\sigma _\text{P}=10^{-2}\mu `$ and a kink width $`w=10`$ b. This value is a factor of 200 higher than the barrier we find. On the other hand, both experiments and theoretical estimates predict $`\sigma _\text{P}5\times 10^{-6}\mu `$ for Cu , yielding barriers orders of magnitude lower than ours. The interatomic potentials we use do not take into account directional bonding. This is usually a good approximation for noble metals; however, small contributions from angular forces may change the kink width. The kink width is like an energy barrier, balancing different competing energies against one another: in analogy, we expect it to be accurate to within twenty or thirty percent. Our small value for the effective mass, dependent on the inverse cube of the spatial extent of the kink, is probably correct within a factor of two. The energy barrier is much more sensitive: if we take the total exponential suppression to be $`10^5`$ (using the jog as a “zero-length” defect), then each 20% change in the width would yield a factor of 10 change in the barrier height. The qualitative results of our calculation, namely that the barriers and effective masses are small, are robust not only to the use of an approximate classical potential, but may also apply to other noble metals and perhaps simple and late transition metals.
In summary, we have used an atomistic calculation with classical potentials to extract energy barriers and effective masses for the quantum tunneling of dislocation jogs and kinks in copper. For jogs, the atomic displacements during tunneling are primarily localized to a few atoms near the jog core, each moving a significant fraction of a lattice spacing. Consequently, the tunneling barrier and effective mass are relatively large, and tunneling is unlikely. However, the kinks in screw dislocations are much more extended: as a kink moves by one lattice spacing, hundreds of atoms shift their positions by less than 1% of a lattice spacing. Both the energy barrier and the effective mass are reduced, to the extent that tunneling should occur readily. Kinks are likely the only candidate for quantum tunneling in pure crystalline materials. They may explain measurements of quantum creep, glassy internal friction, and non-magnetic Kondo effects seen in plastically deformed metals.
Acknowledgments. Portions of this work were supported by the Digital Material project NSF #9873214 and the Danish Research Councils Grant #9501775, and was done in collaboration with the Engineering Science Center for Structural Characterization and Modeling of Materials in Risø. An equipment grant from Intel and the support of the Cornell Theory Center are also gratefully acknowledged. We had helpful conversations with Nicholas Bailey and Jakob Schiøtz.
# A First Look at the Nuclear Region of M31 with Chandra
## 1. Introduction
As our nearest Milky Way analog, M31 offers us a chance to study a galaxy like our own without the obscuring effects of living in the middle of the Galactic plane. For example, the nucleus of our Galaxy (Sgr A) is obscured by $`\sim 30`$ magnitudes of visual extinction (Morris and Serabyn 1996), while the nucleus of M31 likely suffers $`\lesssim 2`$ magnitudes of extinction (see Section 2.3.1). In addition, the study of x-ray binaries in the Galactic plane is hindered by reddening sometimes reaching $`>10`$ magnitudes, which can be compared to an average $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.22`$ magnitudes for globular clusters in M31 (Barmby et al. 2000).
Ground based measurements of the rotational velocity of stars near the core of M31 provide strong evidence of a central dark, compact object of mass $`3.0\times 10^7M_{}`$, presumably a black hole (Kormendy and Bender 1999 and refs therein). HST observations resolved the M31 nucleus into two components (P1 and P2) separated by $`0.5^{\prime \prime }`$ (Lauer et al. 1993). These observations support the model of the double nucleus of M31 as a torus of stars orbiting the core in a slightly eccentric orbit (Tremaine 1995). Post-COSTAR HST observations have shown that there is a group of partially resolved UV-bright stars between P1 and P2 at the position of the central black hole (Brown et al. 1998).
The first identification of an x-ray source with the M31 nucleus came with Einstein observations, which found a source within $`2.1^{\prime \prime }`$ of the nucleus with $`\mathrm{L}_\mathrm{x}=9.6\times 10^{37}`$ erg s<sup>-1</sup> (0.2-4.0 keV, Van Speybroeck et al. 1979). While this source was not variable in this first observation, subsequent Einstein observations showed the nucleus to be variable by factors of $`\sim 10`$ (Trinchieri and Fabbiano 1991) on timescales of 6 months. Published ROSAT observations show $`\mathrm{L}_\mathrm{x}=2.1\times 10^{37}`$ erg s<sup>-1</sup>, which is at the faint end of the Einstein range (Primini, Forman and Jones 1993).
Radio observations reveal a weak ($`\sim 30\mu \mathrm{Jy}`$) source at the core (Crane, Dickel and Cowan 1992). The luminosity at 3.6 cm is $`\sim 1/5`$ that of Sgr A, a puzzle given that the M31 nucleus is $`\sim 30`$ times more massive (Melia 1992). The correlation between the radio and x-ray properties of low-luminosity super-massive black holes (Yi and Boughn 1999) might be explained by an ADAF model, but M31 is an outlier in these correlations.
The point sources distributed throughout M31 are likely x-ray binaries and supernova remnants similar to those in our Galaxy. The fact that $`\sim 40`$% of these sources are variable is consistent with this hypothesis (Primini, Forman and Jones 1993). As in the Galaxy, some of these point sources are transient. Comparison of Einstein and ROSAT images shows that $`\sim 6`$% of the sources are transient (Primini, Forman and Jones 1993). A comparison of Einstein and EXOSAT observations allowed the discovery of two transients (White & Peacock 1988), and a study of the ROSAT archive allowed the discovery of a supersoft x-ray transient (White et al. 1995).
The sensitivity and high spatial resolution of Chandra (van Speybroeck et al. 1997, Weisskopf and O’Dell 1997) provide new insights into the x-ray properties of M31. A few of those new insights, concerning the nucleus and a new transient, are reported in this letter.
## 2. Observations
### 2.1. Chandra
Chandra was pointed at the nucleus of M31 for 17.5 ks on Oct 13, 1999. This pointing occurred immediately before Chandra operations paused for the passage through the Earth's radiation belts, and the radiation environment was already higher than average. This caused high counting rates in the ACIS-S3 chip, which saturated the telemetry and caused data dropouts. The S3 counting rate was used as an indicator of high background, and whenever it increased beyond 1.5 c s<sup>-1</sup> we rejected the data. Consequently we obtained 8.8 ks of active observing time.
The standard four ACIS-I (Garmire et al. 1992) chips were on; therefore a $`16^{}\times 16^{}`$ region of the center of M31 was covered. In this letter we concentrate on the observations of the central $`1^{}`$ only. The image of this nuclear region is shown in Figure 1.
Data were analyzed with a combination of the CXC CIAO V1.1 (Elvis et al. 2000), HEASARC XSPEC V10.0 (Arnaud 1996), and software written by Alexey Vikhlinin (Vikhlinin et al. 1998). Unless otherwise specified, all error regions herein are 68% confidence bounds and include a 20% uncertainty in the ACIS effective area below 0.27 keV. We note that this calibration uncertainty is $`<50`$% of the statistical uncertainties for the sources considered herein.
### 2.2. ROSAT
ROSAT imaged the central region of M31 six times from 1990 to 1996, with exposure times ranging from 5 ks to 84.5 ks (see Primini, Forman and Jones 1993 and Primini et al. 2000). The last of these exposures was 84.5 ks in 1996 January. The image of the nuclear region from this observation is shown in Figure 1 (top).
Figure 1: Top: The nuclear region of M31 as it appears in an 84.5 ks ROSAT HRI observation in January 1996. Bottom: The same as seen in an 8.8 ks Chandra ACIS-I observation on Oct 13, 1999. The cross-like shadow seen in the ACIS-I observation is due to the gaps between the 4 ACIS-I chips. These images are 4 arcmin on a side.
### 2.3. Data Analysis
The Nucleus: The central object seen with the ROSAT HRI is clearly resolved into 5 sources (Figure 2). The Chandra aspect solution is based on 5 stars from the Tycho (Hipparcos) catalog, so it has the potential to be good to a few tenths of an arc-sec (Aldcroft et al. 2000). Based on the aspect solution alone, we find that one of these five sources, CXO J004244.2+411608, is within $`1^{\prime \prime }`$ of the position of the radio nucleus (Crane et al. 1992). As an independent check on the aspect, we computed a plate solution for the x-ray image using the positions of 10 x-ray detected globular clusters from the Bologna catalog (Battistini et al. 1987). This solution has an uncertainty of $`0.7\mathrm{"}`$ rms in RA and Dec, and agrees (within the errors) with the Chandra aspect.
Figure 2: An enlargement of Figure 1(bottom), showing the nuclear region in detail. The circle surrounding the central sources is $`5^{\prime \prime }`$ in diameter, approximating the resolution of the ROSAT HRI. This image is 1 arcmin on a side.
In order to get a first look at the spectra of the point sources, we performed a wavelet deconvolution (Vikhlinin et al. 1998) of the image and found 121 point sources in the full $`16^{}\times 16^{}`$ FOV of ACIS-I (these sources will be discussed in a separate paper). We then computed the hardness ratio of the 79 sources with more than 20 counts. In the histogram of this ratio (Figure 3) the nuclear source is one of three outliers with extremely soft spectra. The fact that the nuclear spectrum is distinctly different from the mean may indicate that there is something fundamentally different about this source.
Figure 3: The hardness ratio for 79 sources with $`>20`$ total counts found in the ACIS-S image of M31. The nuclear source, CXO J004244.2+411608, has the third lowest hardness ratio, and is indicated by the “N”. The nearby transient is indicated by the “T”. The source $`1^{\prime \prime }`$ North of the nucleus is in the first bin below 0.0, the source $`1.5^{\prime \prime }`$ to the South of the nucleus is in the bin indicated by the “T”.
We extracted 100 counts from a $`\sim 3`$ square-arc-sec region surrounding the nucleus. In order to limit contamination from CXO J004244.2+411609, which is only $`1.0\mathrm{"}`$ to the North, we excluded photons more than $`0.5\mathrm{"}`$ to the North of the nuclear source. The resulting PHA spectrum was fit with XSPEC, after first binning the data such that each fitted bin had $`>10`$ counts. Gehrels weighting was used for the fits (Gehrels 1986). The fits were limited to the 0.2-1.5 keV region, as there were insufficient counts outside this region.
Simple models (power law, black-body, bremsstrahlung, with interstellar absorption) provide acceptably good fits to the data. The power law fits find a slope $`\alpha =5_{-2.4}^{+7}`$, and limit $`\mathrm{N}_\mathrm{H}=4_{-3.5}^{+9}\times 10^{21}`$ cm<sup>-2</sup>. In order to reduce the error range on the fitted slope we choose to limit the allowed range of absorption to that found for the nearby transient (below), i.e., to $`\mathrm{N}_\mathrm{H}=2.8\pm 1.0\times 10^{21}`$ cm<sup>-2</sup>. This then allows us to further restrict the slope (or temperature) of the spectrum to $`\alpha =4.5\pm 1.5`$, kT$`=0.15_{-0.03}^{+0.06}`$, or kT$`=0.43\pm 0.17`$ for power-law, black-body or bremsstrahlung fits (respectively).
The detected 0.3-7.0 keV flux, assuming the further restricted range of parameters for the power law model, is $`5.8_{-0.5}^{+0.9}\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, corresponding to an observed luminosity of $`3.9_{-0.3}^{+0.6}\times 10^{36}`$ erg s<sup>-1</sup> at 770 kpc (Stanek and Garnavich 1998). At the lowest $`\mathrm{N}_\mathrm{H}`$ and flattest $`\alpha `$ in this range, approximately 60% of the 0.3-7.0 keV flux is absorbed by the ISM, while at the highest $`\mathrm{N}_\mathrm{H}`$ and steepest $`\alpha `$, nearly 98% of the flux is absorbed. The corresponding emitted luminosity ranges from $`1.2\times 10^{37}`$ erg s<sup>-1</sup> to $`1.6\times 10^{38}`$ erg s<sup>-1</sup>, and has a nominal value at the best fit parameters of $`4.0\times 10^{37}`$ erg s<sup>-1</sup>.
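The flux-to-luminosity conversion quoted here is simple to reproduce; a minimal sketch, assuming only the adopted distance of 770 kpc (the small difference from the quoted $`3.9\times 10^{36}`$ erg s<sup>-1</sup> reflects the exact spectral parameters used in the text):

```python
import math

KPC_CM = 3.086e21            # cm per kiloparsec
d_cm = 770.0 * KPC_CM        # adopted M31 distance (Stanek & Garnavich 1998)

def flux_to_lum(flux_cgs):
    """Isotropic luminosity L = 4 pi d^2 F, in erg/s."""
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# Detected 0.3-7.0 keV flux of the nucleus quoted above:
print(f"L = {flux_to_lum(5.8e-14):.2e} erg/s")   # ~4e36 erg/s
```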
In order to test our assumption that the $`\mathrm{N}_\mathrm{H}`$ measured for the transient is appropriate to apply to the nucleus, we fit power law spectra to four other bright nearby sources. These sources are all further away from the nucleus, with distances ranging from $`30^{\prime \prime }`$ to $`2^{\prime }`$, and have between 237 and 823 detected counts. In every case the 90% confidence regions for $`\mathrm{N}_\mathrm{H}`$ overlap with the transient. Given that there is no evidence for large variations in $`\mathrm{N}_\mathrm{H}`$ in the region around the nucleus, it is reasonable to assume the nuclear $`\mathrm{N}_\mathrm{H}`$ is the same as that of the transient. Note that the galactic $`\mathrm{N}_\mathrm{H}7\times 10^{20}`$ cm<sup>-2</sup> in the direction of M31 (Dickey & Lockman 1990), so our results are consistent with additional local absorption within M31 itself. If the gas/dust ratio in M31 is similar to that in the Galaxy, the nuclear $`\mathrm{A}_\mathrm{V}=1.5\pm 0.6`$ (Predehl and Schmitt 1995).
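The $`\mathrm{N}_\mathrm{H}`$ to $`\mathrm{A}_\mathrm{V}`$ conversion is a one-liner; a sketch assuming the Predehl & Schmitt scaling of $`\mathrm{N}_\mathrm{H}1.79\times 10^{21}`$ cm<sup>-2</sup> per magnitude (rounding and the exact coefficient adopted in the text account for the last digit):

```python
# Predehl & Schmitt (1995): N_H ~= 1.79e21 cm^-2 per magnitude of A_V.
nh, dnh = 2.8e21, 1.0e21
av, dav = nh / 1.79e21, dnh / 1.79e21
print(f"A_V = {av:.1f} +/- {dav:.1f}")   # ~1.6 +/- 0.6, matching the text
```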
The Nearby Transient: We extracted 763 counts from a $`1^{\prime \prime }`$ radius circle at the position of CXO J004242.0+411608. These data were similarly grouped into bins with $`>10`$ counts, and fit to simple models with XSPEC. Chi-squared fitting with Gehrels weighting was used to find the minimum chi-squared spectral parameters. Power law, bremsstrahlung, and blackbody fits are all acceptable ($`\chi ^2/\nu <1.13`$ for 71 DOF), but the power law fits produce the lowest $`\chi ^2/\nu 0.56`$. Significant counts are seen out to 7.0 keV. The best fitting power law number slope is $`1.5\pm 0.3`$, with a best fit $`\mathrm{N}_\mathrm{H}=2.8\pm 1.0\times 10^{21}`$ cm<sup>-2</sup>.
Bremsstrahlung and black body fits formally allow $`\mathrm{N}_\mathrm{H}=0`$ cm<sup>-2</sup>, but as the Galactic value to M31 is $`\mathrm{N}_\mathrm{H}=7\times 10^{20}`$ cm<sup>-2</sup>, we restrict the fitting space to values larger than this. Bremsstrahlung fits are not able to set an upper limit to the temperature, but set a lower limit of kT$`>6`$ keV. Black body fits limit the temperature to kT$`=0.75\pm 0.25`$ keV. Assuming a power law model, the detected flux is $`7.4\pm 0.7\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, corresponding to an observed luminosity of $`5.1\pm 0.5\times 10^{37}`$ erg s<sup>-1</sup>, and an emitted luminosity of $`7.0\pm 0.8\times 10^{37}`$ erg s<sup>-1</sup> (0.3-7.0 keV). The hardness ratio is typical of other point sources (Figure 3).
We examined each of the 5 ROSAT HRI observations of the center of M31, and find that there is no source apparent at the position of this transient in any of these exposures. For the deepest (and last) observation, we find 78 counts in a $`7.5^{\prime \prime }`$ radius circle at the position of this transient, which is consistent with the background caused by the diffuse emission in M31 (Primini et al. 1993). From this we compute a 95% (2 $`\sigma `$) upper limit of 17.7 counts. Assuming the power law spectrum determined above for this source in outburst, and applying a small correction for the flux not contained in the $`7.5^{\prime \prime }`$ circle, this corresponds to an upper limit to the emitted luminosity of the source of $`3.0\times 10^{36}`$ erg s<sup>-1</sup> in the 0.3-7.0 keV band. Thus the transient brightened by at least a factor of $`\sim 20`$.
### 2.4. Discussion
The Nucleus: Several authors have previously noted the unusual x-ray and radio luminosity of the nucleus of M31 (Melia 1992, Yi and Boughn 1999). We note that the x-ray luminosity we find herein is substantially lower than that quoted in several recent papers comparing x-ray and radio luminosities of low luminosity super-massive black holes (e.g., Franceschini, Vercellone and Fabian 1998, Yi and Boughn 1999). At this revised luminosity the M31 nucleus appears to be even more of an outlier on the correlations between radio luminosity, x-ray luminosity, and black hole mass found for low luminosity super-massive black holes (Yi and Boughn 1999, Figures 4 & 5).
The unusual x-ray and radio luminosity has led to the suggestion that perhaps the source may not be associated with the central black hole, but is merely a chance co-incidence (van Speybroeck et al. 1979, Yi and Boughn 1999). The probability of a chance co-incidence depends upon what search region one uses, and a posteriori, it is hard to know what the relevant search region is. If we use the full ACIS FOV as the search region, then the chance of any one of the 121 detected sources being within $`1^{\prime \prime }`$ of the nucleus is $`4\times 10^{-4}`$. However, the surface density of sources increases towards the nucleus, so the chance probability may be higher than this. If one limits the search region to the $`\sim 25`$ square arc-sec area which contains the five sources ROSAT and Einstein were not able to resolve, the chance probability is $`\sim 20`$%. This is most likely an overestimate, as can be seen by carrying this argument to its extreme (and non-sensible) limit: if one limits the search region to the 1 square arc-sec region around the nucleus, the chance that the one source within that region is within $`1^{\prime \prime }`$ of the nucleus is 100%!
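A hedged sketch of the full-FOV estimate (the 121-source count and the $`16^{\prime }\times 16^{\prime }`$ field are from the text; the uniform-density assumption is ours, which is why the probability only rises for smaller, denser search regions):

```python
import math

n_src = 121                        # sources detected across the ACIS field
fov_arcsec2 = (16 * 60.0) ** 2     # 16' x 16' field of view, in arcsec^2
r_match = 1.0                      # matching radius (arcsec)

p_one = math.pi * r_match**2 / fov_arcsec2        # one source, one trial
p_any = 1.0 - (1.0 - p_one) ** n_src              # any of the 121 sources
print(f"chance coincidence over full FOV: {p_any:.1e}")   # ~4e-4
```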
While it may be unclear what the appropriate search region is, it seems clear that a chance alignment cannot be dismissed out of hand. This motivated us to search for other unusual characteristics of the central source, which led to the discovery that it has an unusually soft spectrum. We speculate that the unusual spectrum is due, at least in part, to the high mass of the nucleus, and that the unusual spectrum may provide a clue to the origin of the unusually weak radio emission.
However, because there are no observational precedents or strong theoretical arguments which would lead us to expect the spectrum of a $`10^7`$ M<sub>⊙</sub> low-luminosity black hole to be very soft, we cannot identify the unusual spectrum as a signature of the central black hole. Our identification is based solely on the positional co-incidence, and the unusual spectrum is left as a challenge to models.
While previous Einstein and ROSAT observations were unable to separate the nuclear source from the surrounding four sources, the fluxes indicate that the nucleus (or surrounding emission) is highly variable. In order to compare these fluxes to the Chandra flux, we assume the nuclear power law spectrum found above, and use the counting rates from the literature (van Speybroeck et al. 1979, Trinchieri & Fabbiano 1991, Primini et al. 1993) to calculate 0.2-4.0 keV detected fluxes. The uncertainty in the nuclear spectrum allows up to 40% uncertainty in the conversion from counting rate to flux. In order to make a fair comparison, Table 1 lists the summed flux from all 5 nuclear sources in the Chandra image.
Table 1: M31 Nuclear X-ray Flux
| Date | Observatory | Flux ($`10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>) |
| --- | --- | --- |
| 1979 Jan | Einstein | $`7.07\pm 0.06`$ |
| 1979 Aug | Einstein | $`0.60\pm 0.18`$ |
| 1980 Jan | Einstein | $`3.50\pm 0.64`$ |
| 1990 July | ROSAT | $`1.70\pm 0.12`$ |
| 1999 Oct | Chandra (sum of 5) | $`1.43\pm 0.15`$ |
Strong variability of unresolved sources is often cited as evidence for a small number of sources, simply because it is more likely that a single source varies than that a group of sources varies coherently. If we apply this argument to the M31 nucleus, it implies that one of these five sources (perhaps the nucleus itself?) is highly variable. It would then be appropriate to assume that the average flux of the surrounding four sources is roughly constant, and subtract this flux from the Einstein and ROSAT measurements in order to determine the flux of the nucleus alone. From the Chandra image, the flux from these four sources is $`0.85\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Subtracting this, we see that the lowest Einstein flux measurement is consistent with zero flux from the nucleus, and indicates a factor of $`\gtrsim 40`$ variability.
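The subtraction argument can be checked directly from Table 1; a small sketch (fluxes in units of $`10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>):

```python
# Summed nuclear-region fluxes from Table 1, plus the Chandra-resolved
# flux of the four non-nuclear sources quoted above.
summed = {"1979 Jan": 7.07, "1979 Aug": 0.60, "1980 Jan": 3.50,
          "1990 July": 1.70, "1999 Oct": 1.43}
four = 0.85
nucleus = {epoch: round(f - four, 2) for epoch, f in summed.items()}
print(nucleus)
# 1979 Aug comes out slightly negative, i.e. consistent with zero nuclear
# flux; 1979 Jan gives ~6.2, a swing of a factor >~40 given the errors.
```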
As an aside, we note that the detection of Sgr A\* with Chandra (Garmire 1999) does not necessarily rule out an M31-like spectrum. The much higher $`\mathrm{A}_\mathrm{V}\sim 30`$ to Sgr A\* would reduce the observed count rate from an M31-like spectrum by $`\sim 60`$ times, but the $`\sim 100`$ times smaller distance would more than make up for this.
Standard ADAF models are not able to explain the ratio of x-ray to radio luminosity of the nucleus (Yi and Boughn 1999). However, models including winds (Di Matteo et al. 1999) and/or convective flows (Narayan, Igumenshchev & Abramowicz 1999) may be able to explain this ratio. These models generally predict hard spectra in the x-ray region, so may not be able to explain the extremely soft spectrum reported herein (Quataert 2000, private communication). We note that the x-ray luminosity of M31 is several orders of magnitude below that typically considered in these models, implying that the models may not fully describe this parameter space.
The Nearby Transient: The nature of the bright transient is uncertain. By analogy to Milky Way sources, its transient nature and luminosity imply that it is either a massive X-ray binary, typically consisting of a Be-star and a pulsar, or an x-ray nova, often consisting of a late-type dwarf and a black hole (White, Nagase and Parmar 1995, Tanaka and Lewin 1995). The spectral slope of $`\alpha =1.5`$ is between the hard spectra typically seen in x-ray pulsars ($`0.0<\alpha <1.0`$, White, Nagase and Parmar 1995) and the softer spectra seen in x-ray novae in outburst ($`\alpha \sim 2.5`$, Ebisawa et al. 1994; Sobczak et al. 1999). At late times in the decay of an x-ray nova the spectrum often hardens to $`\alpha \sim 1.5`$, but this would imply that the peak outburst luminosity of this transient was $`\gtrsim 10^{39}`$ erg s<sup>-1</sup>.
The absorption of $`\mathrm{N}_\mathrm{H}=2.8\pm 1.0\times 10^{21}`$ cm<sup>-2</sup> is more typical of x-ray novae than Be-star pulsar systems, which often have $`\mathrm{N}_\mathrm{H}>10^{22}`$ cm<sup>-2</sup>. Perhaps the strongest argument in favor of an x-ray nova hypothesis is the location of the transient: stars in the inner bulge of M31 are likely old, disk/bulge population stars typical of those in x-ray novae, rather than the young, Be stars typically found in star forming regions and in Be-star pulsar systems.
We note that in either case the optical magnitude of the transient in outburst is likely to be V$`\sim 22`$, making the object visible with HST. An x-ray nova would be expected to show a large variation in V from quiescence to outburst, while a Be-star pulsar would show a more moderate variation. HST observations are underway in an attempt to clarify the nature of this transient.
We thank Pauline Barmby for providing results on M31 globular cluster reddenings and positions prior to publication, Eliot Quataert for comments on ADAF models, and the CXC team for help with ACIS data reductions. This work was supported in part by NASA Contract NAS8-39073.
## 1 Introduction
VLBI is sensitivity limited. Most sources that can be robustly detected by conventional self-calibration techniques have peak fluxes in excess of 10 mJy. The success of phase-referencing techniques, as applied to mJy and a few sub-mJy radio sources, is often limited (particularly in terms of image fidelity) to the brighter sources for which subsequent self-calibration (over much longer solution intervals) is then possible. So far, few attempts have been made to detect sub-mJy sources, despite the fact that with a coherent integration time of 24 hours, global VLBI arrays can routinely produce images with $`1\sigma `$ rms noise levels better than $`30\mu `$Jy/beam.
Nevertheless, the focus of VLBI over the last 3 decades (and in particular Space VLBI – SVLBI) has been directed towards the study of the brightest and most compact radio sources in the sky. At these flux levels ($`>10`$ mJy), the radio sky is virtually empty, with most radio sources associated with relatively distant AGN. As a result the overlap with other wave-bands is sometimes limited. In this paper, I suggest a new strategy for the routine detection and imaging of faint sub-mJy and $`\mu `$Jy radio sources. The strategy relies on a combination of in-beam phase-referencing (with obvious advantages for SVLBI but also VLBI generally - see Fomalont et al. 1999), wide-field VLBI imaging (see Garrett et al. 1999) and simultaneous correlation of multiple field centres. These techniques, together with the steeply rising radio source counts at $`\lambda `$cm wavelengths, should permit high resolution, VLBI investigations of the faint sub-mJy and microJy source populations to begin.
## 2 Towards Routine Imaging of Faint Radio Sources
It is a well known and auspicious fact that the radio source counts increase steeply as one goes to fainter flux levels. At $`\lambda `$18cm the source counts derived from WSRT observations of the Hubble Deep Field, HDF, (Garrett et al. 2000) imply source counts of up to $`40S_{\mu \mathrm{Jy}}^{-1}`$ per square arcmin. Thus within the central regions of the primary beam of a typical 25-m VLBI antenna, one can expect to find over $`100`$ potential target sources with $`S>120\mu `$Jy (the $`3\sigma `$ noise level routinely achieved in ground based VLBI images). If we extrapolate the preliminary results of Garrington, Garrett and Polatidis (1999), we can deduce that for every continuum VLBI observation conducted today, there are perhaps a dozen or so faint radio sources in the beam that might be compact enough to be detected and imaged, in addition to the brighter target source! This suggests a new strategy for the routine imaging of a large number of faint radio sources:
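A rough sketch of this counting argument, assuming the quoted HDF counts and an illustrative 10-arcmin "central region" of the 25-m primary beam (the exact area adopted is an assumption, not a number from the text):

```python
import math

def n_above(s_microjy):
    """Integral counts N(>S) ~ 40 S_uJy^-1 per square arcmin (HDF, 18 cm)."""
    return 40.0 / s_microjy

# Primary beam of a 25-m antenna at 18 cm:
fwhm = math.degrees(1.22 * 0.18 / 25.0) * 60.0    # ~30 arcmin FWHM
inner_r = 10.0                                    # assumed central radius (')
area = math.pi * inner_r**2
print(f"FWHM ~ {fwhm:.0f}',  N(>120 uJy) ~ {n_above(120.0) * area:.0f}")
# -> on the order of 100 potential targets, as quoted above
```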
(i) Reverse the traditional approach of selecting the target before the calibrator (see also Garrington, Garrett & Polatidis 1999). The field chosen should satisfy the following criteria: (a) it should be an area for which high quality optical/IR, and deep, sub-arcsec resolution radio data are available (several such fields are expected to become available over the next year) and (b) the same field should also contain a reasonably bright radio source (but not too bright) that can act as an in-beam (secondary) phase-calibrator.
(ii) Split the antenna primary beam into manageable $`4^{\prime }\times 4^{\prime }`$ patches (the size of these patches is currently determined by the limiting integration time and frequency resolution provided by current generation correlators, not to mention throughput, offline storage sizes and processing speed!). Each patch can be generated via simultaneous multi-field centre processing (currently being developed for the JIVE correlator, Pogrebenko 2000) or standard multiple-pass correlation, and can share the phase corrections provided by the in-beam phase-calibrator located close to the centre of the beam.
(iii) Divide each phase calibrated (but unaveraged) data patch into many small sub-fields of a few arcseconds across (small enough to employ 2-D FFTs and not so large that the image size becomes unmanageable - at least in terms of casual inspection by eye). FFT the data and produce a dirty map of the sub-fields of interest (a minimal sketch of this step is given below).
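A minimal numpy sketch of steps (ii) and (iii), not the JIVE implementation, with the phase-sign convention and the 0.5 mas cell size as assumptions; in practice the direct inversion loop below would be replaced by gridding and a 2-D FFT:

```python
import numpy as np

def shift_centre(vis, u, v, dl, dm):
    """Rotate visibility phases so the point (dl, dm) radians from the
    correlated phase centre becomes the centre of the new patch."""
    return vis * np.exp(2j * np.pi * (u * dl + v * dm))

def dirty_map(vis, u, v, npix=128, cell=2.4e-9):
    """Direct Fourier inversion of one tiny sub-field (cell ~ 0.5 mas)."""
    l = (np.arange(npix) - npix // 2) * cell
    L, M = np.meshgrid(l, l)
    img = np.zeros((npix, npix))
    for V, uu, vv in zip(vis, u, v):
        img += (V * np.exp(2j * np.pi * (uu * L + vv * M))).real
    return img / len(vis)

# Demo: one 1 Jy point source 30" east of the correlated centre.
rng = np.random.default_rng(0)
u, v = rng.uniform(-1.5e8, 1.5e8, (2, 500))      # baselines in wavelengths
dl = np.radians(30.0 / 3600.0)                   # 30 arcsec offset
vis = np.exp(-2j * np.pi * (u * dl))             # ideal visibilities
patch = dirty_map(shift_centre(vis, u, v, dl, 0.0), u, v)  # peaks at centre
```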
Some simple “proof-of-concept” tests have been conducted with a total of 10 minutes of Global VLBI $`\lambda `$6cm data (taken from the $`\lambda `$ 6cm Global VLBI Faint Source Survey of Garrington, Garrett & Polatidis 1999). Fig. 1 shows the clear VLBI detection of a known VLA FIRST source in a sub-field that is part of a patch of the primary beam located about 1 arcminute from the phase-centre and target source. Details of the processing requirements for this (and longer runs) are beyond the scope of this paper but they are not unreasonable. A more important limitation is the minimum integration time provided by today’s working correlators (these are currently inadequate to cope with SVLBI at perigee, using this particular strategy, but improvements can be expected over the next few years).
## 3 The Structure of Faint Radio Sources & SVLBI-2
Exceptionally deep radio observations of the HDF (Richards et al. 1999, Muxlow et al. 1999, Garrett et al. 2000) show that the bulk of the sub-mJy and $`\mu `$Jy source population have steep radio spectra and are for the most part identified with distant disk or irregular, interacting galaxies (often with ISO detections). This argues strongly that these faint sources are associated with very luminous Starburst galaxies. Nevertheless, a significant fraction (perhaps as much as 30%) are probably faint AGN, especially the brighter sub-mJy sources. Using the techniques described here, the brighter AGN could be reasonable targets for the next generation of SVLBI missions now planned. Indeed, SVLBI observations are probably crucial: from simple SSA theory faint sources are also expected to be small. In addition, emission from both compact AGN and larger-scale star-forming regions (principally young SNRs, relic SNR emission and ultra-compact HII regions) might not be uncommon in the same system. Even for relatively distant ($`\sim 350`$ Mpc) but ultra-luminous star-forming disk galaxies, hypernovae (such as the SNRs in Arp 220 and 41.95+575 in M82) might be detected, and more importantly resolved by SVLBI-2 missions. The prospects of detecting these faint, steep-spectrum radio sources with next generation SVLBI missions depend crucially on the availability of L or S-band receivers. The contribution future SVLBI-2 missions could make to unravelling the nature, structure and composition of the faint radio source population cannot, and should not, be underestimated.
## References
Fomalont, E.B., Goss, W.M., Beasley, A.J., Chatterjee, S. 1999, AJ, 117, 3025-3030.
Garrett, M.A., Porcas, R.W., Pedlar, A., Muxlow, T.W.B. & Garrington, S.T. 1999, NewAR 43, 519 (astro-ph/9906108).
Garrett, M.A., de Bruyn, A.G., Baan, W. & Schilizzi, R.T. 2000, IAU Symposium 199, submitted (astro-ph/0001523).
Garrington, S.T., Garrett, M.A. & Polatidis, A.G. 1999, NewAR 43, 629 (astro-ph/9906158).
Muxlow, T.W.B., Wilkinson, P.N., Richards, A.M.S., Kellermann, K.I., Richards, E.A., Garrett, M.A. 1999, NewAR 43, 629
Pogrebenko, S.V. 2000, EVN Document Series, in preparation.
Richards, E. A., Kellermann, K. I., Fomalont, E.B., Windhorst, R.A., Partridge, R.B. 1998, AJ, 116, 1039.
# Optimal local preparation of an arbitrary mixed state of two qubits. Closed expression for the single copy case.
## I Introduction.
Quantum correlations are important not only as a fundamental aspect of quantum theory but also as a resource in recent developments of quantum information theory . Consequently, such correlations are presently subject to several investigations, both at the theoretical and experimental levels.
Pure-state entanglement has already been extensively studied for bipartite systems. As proposed in the pioneering works of Bennett et al. , the paradigmatic setting assumes that the distant parties sharing the composite system are only allowed to perform local operations and communicate classically (LOCC). Within this restricted set of transformations they will try to manipulate optimally the non-local resources contained in an initial entangled state. This approach has been applied to two different, complementary contexts, for which all the relevant magnitudes have been successfully identified. And thus, explicit protocols for the optimal manipulation of pure-state entanglement are presently well-known.
The so-called finite regime is concerned with the manipulation of a composite system with a finite dimensional Hilbert space. Arguably the main step forward was attained by Nielsen , who reported the conditions for a pure state $`\psi `$ to be locally convertible into another pure state $`\varphi `$ deterministically. Subsequently, his result has been extended in several ways to the case when deterministic local transformations cannot achieve the target state $`\varphi `$. Thus optimal conclusive , probabilistic and approximate transformations are now well-known. Such efforts, which have also led to closed expressions for entanglement concentration and unveiled the surprising phenomenon of entanglement catalysis , show that during entanglement manipulation some of the non-local properties of the system are irreversibly lost, and that entanglement does not behave in this regime as an additive quantum resource.
Recall that a quantitative description of pure-state entanglement in a $`𝒞^M𝒞^N`$ bipartite system is given by a set of $`n=\mathrm{min}(M,N)`$ independent entanglement monotones introduced by the author in , in the sense that the local feasibility of a transformation is determined by the non-increasing character of all these functions. For instance, in a two-qubit system we have two monotones, namely
$`E_1(\psi )\equiv \lambda _1+\lambda _2=1,`$ (1)
$`E_2(\psi )\equiv \lambda _2,`$ (2)
where $`\lambda _1`$ and $`\lambda _2`$ ($`\lambda _1\ge \lambda _2`$) are the square of the two Schmidt coefficients of a pure state (or normalized vector) $`\psi 𝒞^2𝒞^2`$. The parties can then conclusively transform the state $`\psi `$ into another state $`\varphi `$ with an a priori probability of success $`p`$ iff
$$p\le \mathrm{min}\{\frac{E_1(\psi )}{E_1(\varphi )},\frac{E_2(\psi )}{E_2(\varphi )}\}=\mathrm{min}\{1,\frac{\lambda _2^\psi }{\lambda _2^\varphi }\}.$$
(3)
More generally, Jonathan and Plenio showed that the parties can probabilistically convert the initial state into one of the states $`\{\varphi _i\}`$, with a priori probability $`p_i`$ for outcome $`i`$, iff none of the monotones $`E_k`$ ($`k=1,\mathrm{},n`$) were increased on average during the transformation, i.e in the two-qubit case, iff $`\sum _ip_i\le 1`$ and
$$E_2(\psi )\ge \underset{i}{\sum }p_iE_2(\varphi _i).$$
(4)
On the other hand the asymptotic regime –which was actually the first exhaustively studied – assumes that the parties into play share an infinite number of copies of a given entangled state. It benefits from the possibility of using block-coding techniques and from the law of large numbers: it allows for some inaccuracy in the output of the transformation, which then becomes irrelevant in the limit of a large number of copies of the entangled state under manipulation. In this regime, the suitable one when dealing with asymptotic aspects of quantum information theory, the only relevant parameter is the entropy of entanglement $`ℰ(\psi )`$, an additive measure which for two-qubit states reads
$$ℰ(\psi )\equiv -\lambda _1\mathrm{log}_2\lambda _1-\lambda _2\mathrm{log}_2\lambda _2.$$
(5)
Its conservation dictates the ratio at which any two pure entangled states can be asymptotically converted into each other, and it implies that such conversions are fully reversible. Optimal local manipulation can be made asymptotically with a vanishing classical communication cost , which confirms entanglement as a truly inter-convertible resource.
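The pure-state quantities (1)-(5), and the conclusive conversion probability (3), are straightforward to evaluate; a minimal Python sketch (the example Schmidt spectra are arbitrary choices, not taken from the text):

```python
import numpy as np

def E2(lams):
    """Smallest squared Schmidt coefficient, eq. (2)."""
    return min(lams)

def entropy(lams):
    """Entropy of entanglement, eq. (5)."""
    return -sum(x * np.log2(x) for x in lams if x > 0)

def p_max(psi, phi):
    """Optimal conclusive conversion probability psi -> phi, eq. (3)."""
    return min(1.0, E2(psi) / E2(phi))

psi, phi = (0.8, 0.2), (0.5, 0.5)          # squared Schmidt coefficients
print(entropy(psi), p_max(psi, phi))       # 0.7219..., 0.4
```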
Despite its behavior being presently quite well understood, pure-state entanglement is just an idealization. In any realistic physical situation noise plays its role and the state of the system is unavoidably mixed. Thus, understanding also mixed-state entanglement is necessary in order to be able, in practice, to successfully exploit this quantum resource.
From a theoretical point of view, entangled mixed states turn out to be more difficult to deal with. To begin with, no practical criterion is known that tells us in general whether a mixed state is entangled or separable (unentangled) . Then, although it has been remarkably shown that some entangled mixed states can be asymptotically distilled into pure entangled states (that is, they contain distillable entanglement), also that others cannot (they only contain bound entanglement), it is again not even known how to identify in general these states. At a quantitative level, the amount of pure-state entanglement needed to prepare a given mixed state (or its entanglement of formation ) and that which can be distilled out of it (its distillable entanglement ) –both asymptotic measures– remain to be computed, a celebrated exception being Wootters’ closed expression for the entanglement of formation $`ℰ(\rho )`$ of an arbitrary state of two qubits<sup>*</sup>. (<sup>*</sup>Although the interpretation of Wootters’ expression as the entanglement of formation relies on the unproved assumption that such a measure of entanglement is additive.)
In this note we present the finite-regime analog of Wootters’ results, namely a closed expression for the minimal amount of pure-state entanglement required in order to create a single copy of an arbitrary mixed state of two qubits. This expression will allow us to determine, for instance, whether the parties can locally create a given mixed state $`\rho `$ starting from some pure entangled state $`\psi `$. Also, in those cases where this is not possible with certainty, we will be able to construct a local strategy which succeeds with the greatest a priori probability, and, more generally, we will be able to assess the feasibility by means of LOCC of the most general transformation starting from a pure state $`\psi `$, namely that producing one of the final states $`\{\rho _i\}`$ with corresponding a priori probabilities $`\{p_i\}`$.
It turns out that the parameter governing all the transformations above is an extension to mixed states of the entanglement monotone $`E_2`$. Recall that two-qubit pure states depend only on one independent non-local parameter (for instance the smallest Schmidt coefficient) and thus it is not surprising that $`E_2`$ suffices to quantify their entanglement. More remarkable is the fact that just one extension of the same parameter also rules the local preparation procedures for mixed states, the set of mixed states depending on $`9`$ non-local parameters .
## II Closed expression for $`E_2(\rho )`$ in a two-qubit system.
One of the main problems met while quantifying entanglement of mixed states in terms of pure-state entanglement comes from the fact that any mixed state $`\rho `$ admits many decompositions as a mixture of pure states, namely
$$\rho =\underset{k}{\sum }p_k|\psi _k\rangle \langle \psi _k|$$
(6)
for infinitely many pure-state ensembles $`\{\psi _k,p_k\}`$. Let us consider a generic entanglement monotone $`\mu (\psi )`$ for pure states (see for a general way to construct $`\mu (\psi )`$). It turns out to be often interesting to extend it to mixed states as a convex roof –which preserves its monotonicity under LOCC– by defining
$$\mu (\rho )\equiv \underset{\{\psi _k,p_k\}}{\mathrm{min}}\underset{k}{\sum }p_k\mu (\psi _k),$$
(7)
where the minimization needs to be performed over all pure-state ensembles $`\{\psi _k,p_k\}`$ realizing $`\rho `$ as in (6). This is typically a strenuous optimization problem that prevents from obtaining an analytical expression for any of such measures. The value of $`\mu (\rho )`$ must then rely on impractical numerical computations.
In spite of being a difficult task, Wootters did solve analytically this optimization for the particular case of $`\mu (\psi )=ℰ(\psi )`$ (entropy of entanglement) and a two-qubit system ($`=𝒞^2𝒞^2`$). Next we consider and solve, also for the two-qubit case, the choice $`\mu (\psi )=E_2(\psi )`$. We start by briefly rephrasing Wootters’ argumentation, of which we will be making a substantial use.
Wootters’ strategy consisted in
* introducing the so-called concurrence, defined for two-qubit pure states as $`C(\psi )\equiv 2\sqrt{(1-\lambda _2)\lambda _2}`$ and extended to mixed states as
$$C(\rho )\equiv \underset{\{\psi _k,p_k\}}{\mathrm{min}}\underset{k}{\sum }p_kC(\psi _k);$$
(8)
* computing $`C(\rho )`$ for any two-qubit mixed state;
* showing that the convex roof $`ℰ(\rho )`$ of the entropy of entanglement $`ℰ(\psi )`$ increases monotonically with $`𝒞(\rho )`$.
Let us recover the closed expression for the concurrence: denote by $`\overline{\rho }`$ the complex conjugation of an arbitrary two-qubit mixed state $`\rho `$ in the standard local basis $`\{|00,|01,|10,|11\}`$, and by $`\sigma _y`$ the matrix $`\left(\begin{array}{cc}0& -i\\ i& 0\end{array}\right)`$. The “spin-flipped” density matrix $`\stackrel{~}{\rho }`$ is defined as $`(\sigma _y\sigma _y)\overline{\rho }(\sigma _y\sigma _y)`$. Then Wootters proved that the concurrence of $`\rho `$ is given by
$$C(\rho )=\mathrm{max}\{v_1-v_2-v_3-v_4,0\},$$
(9)
$`v_i`$ being the square roots of the eigenvalues of $`\rho \stackrel{~}{\rho }`$, in decreasing order. Along the way he also proved that for any mixed state $`\rho `$ there is always an optimal pure-state ensemble with at most four states, and with all of them having the same concurrence, i.e.
$$\rho =\underset{k=1}{\overset{l\le 4}{\sum }}p_k|\varphi _k\rangle \langle \varphi _k|,\qquad C(\rho )=C(\varphi _k).$$
(10)
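Eq. (9), together with the spin-flip construction above, can be evaluated numerically in a few lines; a sketch (the Werner-state check at the end is our illustration, not an example from the text):

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
FLIP = np.kron(SY, SY)

def concurrence(rho):
    """Wootters' concurrence, eq. (9), for a two-qubit density matrix
    written in the standard basis {|00>,|01>,|10>,|11>}."""
    rho_tilde = FLIP @ rho.conj() @ FLIP
    v = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    v = np.sort(v)[::-1]
    return max(v[0] - v[1] - v[2] - v[3], 0.0)

# Check on a Werner state p|Phi+><Phi+| + (1-p) I/4, where C = (3p-1)/2:
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
p = 0.9
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
print(concurrence(rho))   # 0.85
```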
The closed expression (9) will be very useful for the purposes of this note, because, as we show next, the entanglement monotone $`E_2`$ for mixed states can also be expressed in terms of the concurrence. Explicitly,
Result:
$$E_2(\rho )=\frac{1-\sqrt{1-C(\rho )^2}}{2}.$$
(11)
Proof: It is essential to notice that the function $`E_2(\psi )=\lambda _2`$ is a convex, monotonically increasing function of $`C(\psi )`$ for pure states (which is exactly what happens also with the entropy of entanglement, the following argument being parallel to that implicit in ). Thus, we have that
$$E_2(\psi )=\frac{1-\sqrt{1-C(\psi )^2}}{2}.$$
(12)
We first want to see that the value of
$$E_2(\rho )\equiv \underset{\{\psi _k,p_k\}}{\mathrm{min}}\underset{k}{\sum }p_kE_2(\psi _k)$$
(13)
cannot be smaller than that of equation (11). But let us just suppose that this were not the case, i.e. that we could find a pure-state ensemble $`\{\dot{\psi }_k,\dot{p}_k\}`$ for $`\rho `$ such that
$$\underset{k}{\sum }\dot{p}_kE_2(\dot{\psi }_k)<\frac{1-\sqrt{1-𝒞(\rho )^2}}{2}.$$
(14)
Define the function $`f(x)\equiv 2\sqrt{(1-x)x}`$, which in the relevant interval $`x[0,1/2]`$ is a concave, monotonically increasing function. Then it would follow, by taking both sides of equation (14) as arguments of this function and using its concavity \[namely $`\sum _kp_kf(x_k)\le f(\sum _kp_kx_k)`$\], that $`\sum _k\dot{p}_k𝒞(\dot{\psi }_k)<𝒞(\rho )`$, which would be in contradiction with the definition (8). Finally, the four-term, pure-state ensemble (10) achieves the optimal value (11) for $`E_2`$. $`\mathrm{}`$
## III Optimal preparation of a two-qubit mixed state.
As we will explain now, the explicit expression (11) for $`E_2(\rho )`$ is useful because it tells us explicitly whether the entanglement of a pure state $`\psi `$ allows us to locally create the state $`\rho `$. We can then construct local preparation procedures for mixed states that require the minimal amount of entanglement.
In we discussed the necessary and sufficient conditions that make possible, by means of LOCC, a probabilistic transformation $`\psi \rightarrow \{\rho _i,p_i\}`$, namely that of the pure state $`\psi `$ into one of the mixed states $`\{\rho _i\}`$ with corresponding a priori probabilities $`\{p_i\}`$. Notice that this is the most general transformation a pure state can undertake. The existence of the inequivalent pure-state monotones $`E_2,\mathrm{},E_{\mathrm{min}(M,N)}`$ in a generic $`𝒞^M𝒞^N`$ system did not allow us to express such conditions in terms of their convex roof extensions $`E_2(\rho _i),\mathrm{},E_{\mathrm{min}(M,N)}(\rho _i)`$. However, for a $`𝒞^2𝒞^2`$ system, and actually also for any $`𝒞^2𝒞^N`$ system, we announce:
Theorem 1.a: The transformation $`\psi \rightarrow \rho `$ can be achieved by means of LOCC iff
$$E_2(\psi )\ge E_2(\rho ),$$
(15)
where $`E_2(\rho )`$ corresponds to the convex roof extension of the entanglement monotone $`E_2(\varphi )\equiv \lambda _2^\varphi `$ as in (13).
Notice that when condition (15) is fulfilled, then we can say that the pure state $`\psi `$ is more entangled than the mixed state $`\rho `$, this being an incomplete extension to mixed states of Nielsen’s partial order for pure states .
More generally, we can announce
Theorem 1.b: The transformation $`\psi \rightarrow \{\rho _i,p_i\}`$ can be achieved by means of LOCC iff
$$E_2(\psi )\ge \underset{i}{\sum }p_iE_2(\rho _i).$$
(16)
Corollary: The maximal a priori probability $`P(\psi \rightarrow \rho )`$ of successfully transforming $`\psi `$ into $`\rho `$ by means of LOCC is given by
$$P(\psi \rightarrow \rho )=\mathrm{min}\{\frac{E_1(\psi )}{E_1(\rho )},\frac{E_2(\psi )}{E_2(\rho )}\}=\mathrm{min}\{1,\frac{\lambda _2^\psi }{E_2(\rho )}\}.$$
(17)
We remark that the previous statements, when complemented with expression (11) for the two-qubit case, result in a complete and explicit account of what the set of transformations so-called LOCC can produce out of the initial pure state $`\psi `$.
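As a numerical illustration of the Corollary, eqs. (9), (11) and (17) can be chained together; the rank-2 target state and the value of $`\lambda _2^\psi `$ below are arbitrary choices for the example:

```python
import numpy as np

def E2_mixed(rho):
    """E_2(rho) via eqs. (9) and (11), same computation as sketched above."""
    sy2 = np.kron([[0, -1j], [1j, 0]], [[0, -1j], [1j, 0]])
    w = np.linalg.eigvals(rho @ sy2 @ rho.conj() @ sy2)
    v = np.sort(np.sqrt(np.abs(w.real)))[::-1]
    C = max(v[0] - v[1:].sum(), 0.0)
    return (1 - np.sqrt(1 - C**2)) / 2

# Target: an equal mixture of two partly entangled pure states.
a, b = np.sqrt(0.9), np.sqrt(0.1)
u = np.array([a, 0, 0, b]); w = np.array([b, 0, 0, a])
rho = 0.5 * np.outer(u, u) + 0.5 * np.outer(w, w)   # E_2(rho) = 0.1 here

lam2_psi = 0.05                                # E_2 of the initial pure state
print(min(1.0, lam2_psi / E2_mixed(rho)))      # eq. (17): 0.5
```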
Let us marginally note that had the parties started with a pure state $`\psi `$ from a $`𝒞^M𝒞^N`$ system, $`M,N>2`$, the previous results would still hold, but with $`E_2(\varphi )\equiv \sum _{i\ge 2}\lambda _i^\varphi =1-\lambda _1^\varphi `$ (here, as before, $`\lambda _i^\varphi `$ are the square of the Schmidt coefficients of $`\varphi `$, ordered decreasingly).
Now, the results above follow straightforwardly from the fact that the entanglement monotone $`E_2`$ does not increase on average under LOCC, and from the fact that for any transformation such that $`E_2`$ is not increased we can find an explicit local protocol realizing it. Indeed, to see the latter let us suppose, for instance, that Alice and Bob want to prepare locally the state $`\rho `$ from $`\psi `$ and that condition (15) is fulfilled. This means that we can think of $`\rho `$ as a probabilistic mixture $`\sum _kp_k|\psi _k\rangle \langle \psi _k|`$, where the pure-state ensemble satisfies $`E_2(\psi )\ge \sum _kp_kE_2(\psi _k)`$. That is, condition (4) is fulfilled and the probabilistic transformation $`\psi \rightarrow \{\psi _k,p_k\}`$ can be realized locally (see for an explicit protocol). After the probabilistic transformation the parties only need to discard the information concerning the index $`k`$ in order to obtain the state $`\rho `$. The generalization to a probabilistic transformation $`\psi \rightarrow \{\rho _i,p_i\}`$ is straightforward, as we only need to add an extra index $`i`$ to the previous protocol, which is not discarded in the final step.
As mentioned before, Wootters also proved that in the $`𝒞^2𝒞^2`$ case we can always find an optimal decomposition (10) for $`\rho `$ such that there are at most only four states, they being equally entangled, and thus also $`\lambda _2^{\varphi _k}=E_2(\rho )`$ for $`k=1,\mathrm{},l`$; $`l\le 4`$. This provides us with somewhat simple protocols. For instance, we can construct a local, optimal preparation procedure for $`\rho `$ by composing the following two steps: first, Alice and Bob transform $`\psi `$ into one of the states $`\varphi _k`$, say $`\varphi _1`$, following Nielsen’s deterministic protocol ; then they choose, randomly and with a priori probabilities $`\{p_k\}`$, to perform one of the bi-local unitary operations $`\{U_kV_k\}`$, where $`\varphi _k=U_kV_k\varphi _1`$. Finally, they discard the information concerning the index $`k`$. Similar procedures also apply for the probabilistic and conclusive transformations (cf. equations (16) and (17)).
## IV Conclusions
We have analyzed the problem of optimally preparing a two-qubit mixed state by means of LOCC, when the parties initially share an entangled pure state. We have presented necessary and sufficient conditions for such preparation to be possible in terms of the entanglement monotone $`E_2(\rho )`$, for which we have obtained a closed expression. These results highlight the role the quantity $`E_2`$ plays in $`𝒞^2𝒞^N`$ systems: not only does it determines whether a pure-state transformation can be performed locally, but it also provides the preparation cost for mixed states.
In view of the fact that almost no closed expression for mixed state entanglement is known, we expect our result to be also of practical interest as a handy tool for future related studies.
The author thanks Wolfgang Dür and Ignacio Cirac for their comments. He also acknowledges a CIRIT grant 1997FI-00068 PG (Autonomic Government of Catalunya) and a Marie Curie Fellowship HPMF-CT-1999-00200 (European Community).
# Planar Dirac Electron in Coulomb and Magnetic Fields
## I Introduction
Planar nonrelativistic electron systems in a uniform magnetic field are fundamental quantum systems which have provided insights into many novel phenomena, such as the quantum Hall effect and the theory of anyons, particles obeying fractional statistics . Planar electron systems with energy spectrum described by the Dirac Hamiltonian have also been studied as field-theoretical models for the quantum Hall effect and anyon theory . Related to these field-theoretical models are the recent interesting studies regarding the instability of the naive vacuum and spontaneous magnetization in (2+1)-dimensional quantum electrodynamics, which is induced by a bare Chern-Simons term . In view of these developments, it is essential to have a better understanding of the properties of planar Dirac particles in the presence of external electromagnetic fields.
In we studied exact solutions of the planar Dirac equation in the presence of a strong Coulomb field, and the stability of the Dirac vacuum in a regulated Coulomb field. Quite recently, there have appeared interesting studies on the quantum spectrum of a two-dimensional hydrogen atom in a homogeneous magnetic field . As is well known, hydrogen atom in a homogeneous magnetic field has attracted great interest in recent years because of its classical chaotic behavior and its rich quantum structures. The main result found in is that, unlike the three-dimensional case, the two-dimensional Schrödinger equation and the Klein-Gordon equation can be solved analytically for a denumerably infinite set of magnetic field strengths. The solutions cannot be expressed in terms of special functions (see also ).
In this paper we discuss the motion of Dirac electron in two spatial dimensions in the Coulomb and homogeneous magnetic fields, and try to obtain exact solutions of a particular form. As in the case of the two-dimensional Schrödinger and the Klein-Gordon equation, by imposing a sufficient condition that guarantees normalizability of the wavefunctions (see the paragraph after eq.(44)), we can obtain the exact energy levels for a denumerably infinite set of magnetic fields. In the Dirac case, however, not all values of the total angular momentum $`j`$ allow exact solutions with the form of wavefunctions we assumed here. Solutions for the nonrelativistic limit of the Dirac equation in 2+1 dimensions are briefly discussed by means of the method of factorization.
We emphasize that in this paper, by assuming an ansatz which guarantees normalizability of the wavefunction, only parts of the energy spectrum of the system are solved exactly. In particular, we do not obtain energy levels with magnitude below the mass value, which include the most interesting ground state solution. This is the same as in the Schrödinger and the Klein-Gordon case. All these three cases can therefore be considered as examples of the newly discovered quasi-exactly solvable models . In $`(3+1)`$ dimensions, no analytic solutions, even for parts of the spectrum, are possible so far.
## II Motion of Dirac electron in the Coulomb and magnetic fields
To describe an electron by the Dirac equation in 2+1 dimensions we need only three anticommuting $`\gamma ^\mu `$ matrices. Hence, the Dirac algebra
$`\{\gamma ^\mu ,\gamma ^\nu \}=2g^{\mu \nu },g^{\mu \nu }=\mathrm{diag}(1,-1,-1)`$ (1)
may be represented in terms of the Pauli matrices as $`\gamma ^0=\sigma _3`$, $`\gamma ^k=i\sigma _k`$, or equivalently, the matrices $`(\alpha _1,\alpha _2)=\gamma ^0(\gamma ^1,\gamma ^2)=(-\sigma _2,\sigma _1)`$ and $`\beta =\gamma ^0`$. Then the Dirac equation for an electron minimally coupled to an external electromagnetic field has the form (we set $`c=\mathrm{}=1`$)
$`(i\partial _t-H_D)\mathrm{\Psi }(t,𝐫)=0,`$ (2)
where
$`H_D=\alpha 𝐏+\beta m-eA^0\equiv \sigma _1P_2-\sigma _2P_1+\sigma _3m-eA^0`$ (3)
is the Dirac Hamiltonian, $`P_k=-i\partial _k+eA_k`$ is the operator of generalized momentum of the electron, $`A_\mu `$ the vector potential of the external electromagnetic field, $`m`$ the rest mass of the electron, and $`e(e>0)`$ is its electric charge. The Dirac wave function
$`\mathrm{\Psi }(t,𝐫)=\left(\begin{array}{c}\psi _1(t,𝐫)\\ \psi _2(t,𝐫)\end{array}\right)`$ (6)
is a two-component function (i.e. a $`2`$-spinor). Here $`\psi _1(t,𝐫)`$ and $`\psi _2(t,𝐫)`$ are the “large” and “small” components of the wave functions.
We shall solve for both positive and negative energy solutions of the Dirac equation (2) and (3) in an external Coulomb field and a constant homogeneous magnetic field $`B>0`$ along the $`z`$ direction:
$`A^0(r)=Ze/r(e>0),A_x=-By/2,A_y=Bx/2.`$ (7)
We assume the wave functions to have the form
$`\mathrm{\Psi }(t,𝐱)=\frac{1}{\sqrt{2\pi }}\mathrm{exp}(-iEt)\psi _l(r,\phi ),`$ (8)
where $`E`$ is the energy of the electron, and
$`\psi _l(r,\phi )=\left(\begin{array}{c}f(r)e^{il\phi }\\ g(r)e^{i(l+1)\phi }\end{array}\right)`$ (11)
with integral number $`l`$. The function $`\psi _l(r,\phi )`$ is an eigenfunction of the conserved total angular momentum $`J_z=L_z+S_z=-i\partial /\partial \phi +\sigma _3/2`$ with eigenvalue $`j=l+1/2`$. One can of course consider wavefunctions which are eigenfunctions of $`J_z`$ with eigenvalues $`l-1/2`$. These functions are of the forms of (8) with $`\psi _l`$ given by
$`\psi _l(r,\phi )=\left(\begin{array}{c}f(r)e^{i(l-1)\phi }\\ g(r)e^{il\phi }\end{array}\right).`$ (14)
But ansatz (14) is equivalent to ansatz (11) if one makes the change $`l\rightarrow l-1`$. It should be borne in mind that $`l`$ is not a good quantum number. This is evident from the fact that the two components of $`\psi _l`$ depend on the integer $`l`$ in an asymmetric way. Only the eigenvalues $`j`$ of the conserved total angular momentum $`J_z`$ are physically meaningful. For definiteness, in the rest of this paper, all statements and conclusions, whenever the angular momentum number $`l`$ is mentioned, are made with reference to ansatz (8) and (11).
Substituting (8) and (11) in (2), and taking into account the equations
$`P_x\pm iP_y=-ie^{\pm i\phi }\left(\frac{\partial }{\partial r}\pm \left(\frac{i}{r}\frac{\partial }{\partial \phi }-\frac{eBr}{2}\right)\right),`$ (15)
we obtain
$`\frac{df}{dr}-\left(\frac{l}{r}+\frac{eBr}{2}\right)f+\left(E+m+\frac{Z\alpha }{r}\right)g=0,`$ (16)
$`\frac{dg}{dr}+\left(\frac{1+l}{r}+\frac{eBr}{2}\right)g-\left(E-m+\frac{Z\alpha }{r}\right)f=0,`$ (17)
where $`\alpha \equiv e^2=1/137`$ is the fine structure constant. If we let
$`F(r)=\sqrt{r}f(r),G(r)=\sqrt{r}g(r),`$ (18)
eqs. (16) and (17) become:
$`\frac{dF}{dr}-\left(\frac{l+\frac{1}{2}}{r}+\frac{eBr}{2}\right)F+\left(E+m+\frac{Z\alpha }{r}\right)G=0,`$ (19)
$`\frac{dG}{dr}+\left(\frac{l+\frac{1}{2}}{r}+\frac{eBr}{2}\right)G-\left(E-m+\frac{Z\alpha }{r}\right)F=0.`$ (20)
By eliminating $`G`$ in (19) and $`F`$ in (20), one can obtain the decoupled second order differential equations for $`F`$ and $`G`$. At large distances, these equations have the asymptotic forms (neglecting $`r^{-2}`$ terms):
$`\frac{d^2F}{dr^2}+\left[E^2-m^2-eB(l+1)+\frac{2EZ\alpha }{r}-\frac{1}{4}(eBr)^2\right]F=0,`$ (21)
$`\frac{d^2G}{dr^2}+\left[E^2-m^2-eBl+\frac{2EZ\alpha }{r}-\frac{1}{4}(eBr)^2\right]G=0.`$ (22)
The last term in these two equations, which is proportional to $`r^2`$, may be viewed as the “effective confining potential”.
The exact solutions and the energy eigenvalues with $`0<E<m`$ corresponding to stationary states of the Dirac equation (17) with $`B=0`$ were found in . The electron energy spectrum in the Coulomb field has the form
$`E=m\left[1+\frac{(Z\alpha )^2}{(n_r+\sqrt{(l+1/2)^2-(Z\alpha )^2})^2}\right]^{-1/2},`$ (23)
where the values of the quantum number $`n_r`$ are: $`n_r=0,1,2,\mathrm{}`$, if $`l\ge 0`$, and $`n_r=1,2,3,\mathrm{}`$ if $`l<0`$. It is seen that
$`E_0=m\sqrt{1-(2Z\alpha )^2}`$ (24)
for $`l=n_r=0`$, and $`E_0`$ becomes zero at $`Z\alpha =1/2`$, whereas in three spatial dimensions $`E_0`$ equals zero at $`Z\alpha =1`$. Thus, in two spatial dimensions the expression for the electron ground state energy in the Coulomb field of a point-charge $`Ze`$ no longer has a physical meaning at $`Z\alpha =1/2`$. It is worth noting that the corresponding solution of the Dirac equation oscillates near the point $`r=0`$.
For weak magnetic field the wave functions and energy levels with $`E<m`$ can be found from (19) and (20) in the semiclassical approximation. We look for solutions of this system in the standard form
$`F(r)=A(r)\mathrm{exp}(iS(r)),G(r)=B(r)\mathrm{exp}(iS(r)).`$ (25)
Here $`A(r)`$ and $`B(r)`$ are slowly varying functions. Substituting (25) into (19) and (20), we arrive at an ordinary differential equation for $`S(r)`$ in the form
$`\left(\frac{dS}{dr}\right)^2\equiv Q=E^2-m^2-eB(l+1/2)+\frac{2EZ\alpha }{r}+\frac{(Z\alpha )^2-(l+1/2)^2}{r^2}-\frac{(eBr)^2}{4}.`$ (26)
The energy levels with $`E<m`$ are defined by the formula
$`\underset{r_{min}}{\overset{r_{max}}{\int }}\sqrt{Q}dr=\pi \left(-\sqrt{(l+1/2)^2-(Z\alpha )^2}+\frac{EZ\alpha }{\sqrt{|m^2+eB(l+1/2)-E^2|}}\right),`$ (27)
where $`r_{max}`$ and $`r_{min}`$ ($`r_{max}>r_{min}`$) are roots of equation $`Q=0`$. In obtaining (27), the term $`(eBr)^2`$ in $`Q`$ has been dropped. If we require the energy spectrum to reduce to (23) when $`B=0`$, we must equate the right-hand side of (27) to $`\pi n_r`$. As a result we obtain (for $`l0`$)
$`E=\left[m+\frac{eB}{2m}\left(l+\frac{1}{2}\right)\right]\left[1+\frac{(Z\alpha )^2}{(n_r+\sqrt{(l+1/2)^2-(Z\alpha )^2})^2}\right]^{-1/2}.`$ (28)
In the nonrelativistic approximation the energy spectrum takes the form
$`E_{non}=-\frac{(Z\alpha )^2m}{2(n_r+|l+1/2|)^2}+\frac{eB}{2m}\left(l+\frac{1}{2}\right).`$ (29)
Semiclassical motion of the electron in the magnetic and Coulomb fields can be characterized by means of the so-called “magnetic length” $`l_B=\sqrt{1/eB}`$ and the Bohr radius $`a_\mathrm{B}=1/Z\alpha m`$ of a hydrogen-like atom of charge $`Ze`$. When the magnetic field is weak so that $`l_B\gg a_\mathrm{B}`$, or equivalently, $`B\ll B_{cr}\equiv (Z\alpha )^2m^2/e`$, the energy spectrum is simply the spectrum of a hydrogen-like atom perturbed by a weak magnetic field. We obtain the Zeeman splitting of the atomic spectrum, depending linearly upon the magnetic field strength and the “magnetic quantum number” $`l+1/2`$.
In a strong magnetic field the asymptotic solutions of $`F(r)`$ and $`G(r)`$ have the forms $`\mathrm{exp}(-ar^2/2)`$ with $`a=eB/2`$ at large $`r`$, and $`r^\gamma `$ with
$`\gamma =\sqrt{(l+1/2)^2-(Z\alpha )^2}`$ (30)
for small $`r`$. One must have $`Z\alpha <1/2`$, otherwise the wave function will oscillate as $`r\rightarrow 0`$ when $`l=0`$ and $`l=-1`$. In this paper we shall look for solutions of $`F(r)`$ and $`G(r)`$ which can be expressed as a product of the asymptotic solutions (for small and large $`r`$) and a series in the form
$`F(r)=r^\gamma \mathrm{exp}(-ar^2/2)\underset{n=0}{\sum }\alpha _nr^n,`$ (31)
$`G(r)=r^\gamma \mathrm{exp}(-ar^2/2)\underset{n=0}{\sum }\beta _nr^n,`$ (32)
with $`\alpha _0\ne 0,\beta _0\ne 0`$. Substituting (31) and (32) into (19) and (20), we obtain
$`\left[\gamma -\left(l+\frac{1}{2}\right)\right]\alpha _0+Z\alpha \beta _0=0,`$ (33)
$`\left[\left(\gamma +1\right)-\left(l+\frac{1}{2}\right)\right]\alpha _1+Z\alpha \beta _1+\left(E+m\right)\beta _0=0,`$ (34)
$`\left[\left(n+\gamma \right)-\left(l+\frac{1}{2}\right)\right]\alpha _n+Z\alpha \beta _n+\left(E+m\right)\beta _{n-1}-2a\alpha _{n-2}=0\quad (n\ge 2)`$ (35)
from (19), and
$`\left(\gamma +l+\frac{1}{2}\right)\beta _0-Z\alpha \alpha _0=0,`$ (36)
$`\left(n+\gamma +l+\frac{1}{2}\right)\beta _n-Z\alpha \alpha _n-\left(E-m\right)\alpha _{n-1}=0\quad (n\ge 1)`$ (37)
from (20).
Eqs. (33) and (36) allow us to express $`\beta _0`$ in terms of $`\alpha _0`$ in two forms:
$`\beta _0=\frac{Z\alpha }{\gamma +l+\frac{1}{2}}\alpha _0`$ (38)
$`=-\frac{\gamma -l-\frac{1}{2}}{Z\alpha }\alpha _0,`$ (39)
which are equivalent in view of the fact that $`\gamma =\sqrt{(l+1/2)^2-(Z\alpha )^2}`$. Solving (34) and (37) with $`n=1`$ gives
$`\alpha _1=-\frac{\left(\gamma +l+\frac{1}{2}\right)(E-m)+\left(\gamma +l+\frac{3}{2}\right)(E+m)}{\left(2\gamma +1\right)\left(\gamma +l+\frac{1}{2}\right)}Z\alpha \alpha _0,`$ (40)
$`\beta _1=\frac{2\left(\gamma -l\right)E-m}{2\gamma +1}\alpha _0.`$ (41)
From (37) one sees that $`\beta _n`$ $`(n\ge 1)`$ are obtainable from $`\alpha _n`$ and $`\alpha _{n-1}`$. To determine the recursion relations for the $`\alpha _n`$, we simply eliminate $`\beta _n`$ and $`\beta _{n-1}`$ in (35) by means of (37). This leads to (for $`n\ge 2`$):
$`\left(n+\gamma +l-\frac{1}{2}\right)\left(n^2+2n\gamma \right)\alpha _n`$ (42)
$`+Z\alpha \left[\left(n+\gamma +l-\frac{1}{2}\right)(E-m)+\left(n+\gamma +l+\frac{1}{2}\right)(E+m)\right]\alpha _{n-1}`$ (43)
$`+\left(n+\gamma +l+\frac{1}{2}\right)\left[E^2-m^2-2a\left(n+\gamma +l-\frac{1}{2}\right)\right]\alpha _{n-2}=0.`$ (44)
Following , we impose the sufficient condition that the series parts of $`F(r)`$ and $`G(r)`$ should terminate appropriately in order to guarantee normalizability of the eigenfunctions. It follows from (44) that the solution of $`F(r)`$ becomes a polynomial of degree $`(n-1)`$ if the series given by (44) terminates at a certain $`n`$ when $`\alpha _n=\alpha _{n+1}=0`$, and $`\alpha _m=0`$ $`(m\ge n+2)`$ follow from (44). Then from (37) we have $`\beta _{n+1}=\beta _{n+2}=\mathrm{}=0`$. Thus in general the polynomial part of the function $`G(r)`$ is of one degree higher than that of $`F`$. Now suppose we have calculated $`\alpha _n`$ in terms of $`\alpha _0`$ ($`\alpha _0\ne 0`$) from (40) and (44) in the form:
$`\alpha _n=K(l,n,E,a,Z)\alpha _0.`$ (45)
Then two conditions that ensure $`\alpha _n=0`$ and $`\alpha _{n+1}=0`$ are
$`K(l,n,E,a,Z)=0`$ (46)
and
$`E^2-m^2=2a\left(n+\gamma +l+\frac{1}{2}\right),\quad n=1,2,\mathrm{}`$ (47)
Since the right-hand side of (47) is always non-negative<sup>*</sup>, we must have $`|E|\ge m`$ for the energy. (<sup>*</sup>For $`l\ge 0`$, this is obvious. For $`l\le -1`$, one has $`-1/2\le \gamma +l+\frac{1}{2}<0`$, recalling that $`Z\alpha <1/2`$, so that $`n+\gamma +l+\frac{1}{2}>0`$ for $`n\ge 1`$.) We note here that, similar to the Schrödinger and the Klein-Gordon case, the adopted ansatz guarantees the normalizability of the wavefunction, but does not provide energy levels with magnitudes below $`|E|=m`$.
For any integer $`n`$, eqs.(46) and (47) give us a certain number of pairs $`(E,a)`$ of energy $`E`$ and the corresponding magnetic field $`B`$ (or $`a`$) which would guarantee normalizability of the wave function. Thus only parts of the whole spectrum of the system are exactly solved. The system can therefore be considered as an example of the quasi-exactly solvable models defined in . In principle the possible values of $`E`$ and $`a`$ can be obtained by first expressing the $`a`$ (or $`E`$) in (46) in terms of $`E`$ ($`a`$) according to (47). This gives an algebraic equation in $`E`$ ($`a`$) which can be solved for real $`E`$ ($`a`$). The corresponding values of $`a`$ ($`E`$) are then obtained from (47). In practice the task could be tedious. We shall consider only the simplest cases below, namely, those with $`n=1,2`$ and $`3`$. In these cases, the solution of the pair ($`E,a`$) is unique for fixed $`Z`$ and $`l`$. In general, for $`n>3`$, there could exist several pairs of values ($`E,a`$) (cf. ). Unlike the non-relativistic case, here negative energy solutions are possible. As in the case of the (3+1)-dimensional Dirac equation , the unfilled negative energy solutions are interpreted as positrons with positive energies.
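The recursion (44), with $`\alpha _1`$ fixed by (40), makes conditions (46) and (47) easy to check numerically for candidate pairs $`(E,a)`$. A sketch in Python with $`m=1`$ (the helper function is ours, not from the paper):

```python
import numpy as np

def alpha_coeffs(E, a, l, Zalpha, m=1.0, nmax=6):
    """Coefficients alpha_n generated by eq. (44), starting from
    alpha_0 = 1 and alpha_1 from eq. (40); units with m = 1."""
    g = np.sqrt((l + 0.5)**2 - Zalpha**2)          # gamma, eq. (30)
    G = g + l + 0.5                                 # Gamma
    al = [1.0,
          -Zalpha * (G*(E - m) + (G + 1)*(E + m)) / ((2*g + 1) * G)]
    for n in range(2, nmax):
        c0 = (n + g + l - 0.5) * (n*n + 2*n*g)
        c1 = Zalpha * ((n + g + l - 0.5)*(E - m) + (n + g + l + 0.5)*(E + m))
        c2 = (n + g + l + 0.5) * (E*E - m*m - 2*a*(n + g + l - 0.5))
        al.append(-(c1 * al[n-1] + c2 * al[n-2]) / c0)
    return al
```

Scanning $`E`$, with $`a`$ tied to $`E`$ through (47), and looking for simultaneous zeros of $`\alpha _n`$ and $`\alpha _{n+1}`$ then reproduces condition (46) for the chosen $`n`$.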
We mention once again that all the exact solutions presented below, including the restrictions for the values of $`l`$ (or more appropriately, the values of the conserved total quantum number $`j=l+1/2`$), are obtained according to the ansatz (11), and (31) and (32) with polynomial parts. Exact solutions for the other parts of the energy spectrum, if at all possible, would require ansatz of different forms which are not known yet.
### 1 $`n=1`$.
In this case we have $`\alpha _0\ne 0`$ and $`\alpha _n=0`$ $`(n\ge 1)`$. From (40) one obtains the energies
$$E=-\frac{m}{2(\gamma +l+1)}.$$
(48)
Eq.(47) with $`n=1`$ then gives the corresponding values of the magnetic field $`a`$. This result shows that, with the ansatz assumed here, a solution with positive energy cannot be obtained with $`n=1`$. Furthermore, the previously mentioned requirement, here $`E\le -m`$, can only be met with $`l<0`$.
### 2 $`n=2`$.
We now consider the next case, in which $`\alpha _0,\alpha _1\ne 0`$, and $`\alpha _n=0`$ $`(n\ge 2)`$. This also implies $`\beta _n\ne 0`$ $`(n=0,1,2)`$ and $`\beta _n=0`$ $`(n\ge 3)`$. From (47), (44) and (40), we must solve the following set of coupled equations for the possible values of $`E`$ and $`a`$:
$`E^2-m^2=2a\left(2+\gamma +l+\frac{1}{2}\right),`$ (49)
$`Z\alpha \left[(\mathrm{\Gamma }+1)(E-m)+(\mathrm{\Gamma }+2)(E+m)\right]\alpha _1+2a\left(\mathrm{\Gamma }+2\right)\alpha _0=0,`$ (50)
$`(2\gamma +1)\mathrm{\Gamma }\alpha _1+Z\alpha \left[\mathrm{\Gamma }(E-m)+(\mathrm{\Gamma }+1)(E+m)\right]\alpha _0=0.`$ (51)
Here $`\mathrm{\Gamma }\equiv \gamma +l+1/2`$. From these equations one can check that $`E`$ satisfies the quadratic equation
$`\left[(2\mathrm{\Gamma }+1)(2\mathrm{\Gamma }+3)-\frac{2\gamma +1}{(Z\alpha )^2}\mathrm{\Gamma }\right]E^2+4m(\mathrm{\Gamma }+1)E+m^2\left[1+\frac{2\gamma +1}{(Z\alpha )^2}\mathrm{\Gamma }\right]=0.`$ (52)
This can be solved by the standard formula. One must be reminded of the constraint $`|E|\ge m`$. For $`l\ge 0`$, we can obtain analytic solutions with both positive and negative energies. But when $`l<0`$, analytic solutions can only be obtained for negative energy $`E\le -m`$. Furthermore, it can be checked that $`|E|`$ is a monotonic decreasing (increasing) function of $`|l|`$ ($`Z\alpha `$) at fixed $`Z\alpha `$ ($`l`$).
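For $`n=2`$ the procedure thus reduces to solving the quadratic (52) and reading off $`a`$ from (49); a minimal sketch (units with $`m=1`$; the function name is ours):

```python
import numpy as np

def n2_levels(l, Zalpha, m=1.0):
    """Solve the quadratic (52) for E, then eq. (49) for a = eB/2."""
    g = np.sqrt((l + 0.5)**2 - Zalpha**2)
    G = g + l + 0.5
    D = (2*g + 1) * G / Zalpha**2
    A = (2*G + 1) * (2*G + 3) - D          # coefficient of E^2 in (52)
    B = 4 * m * (G + 1)                    # coefficient of E
    C = m*m * (1 + D)                      # constant term
    roots = np.roots([A, B, C])
    out = []
    for E in roots[np.isreal(roots)].real:
        if abs(E) >= m:                    # physical branch, |E| >= m
            out.append((E, (E*E - m*m) / (2 * (G + 2))))   # eq. (49)
    return out

print(n2_levels(l=0, Zalpha=0.3))
```

For $`l=0`$ and $`Z\alpha =0.3`$ this yields one positive- and one negative-energy root, $`E3.04m`$ and $`E1.37m`$, each with its own matching field strength $`a`$.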
For $`Z\alpha \ll 1/2`$, i.e. for light hydrogen-like atoms, we can write down approximate expressions for the energy near the mass value, i.e. $`|E|\simeq m`$. We can obtain from (52) the approximate values of $`E`$:
$`E_+=m\left[1+\frac{2(Z\alpha )^2}{(2\gamma +1)\mathrm{\Gamma }}\left(\mathrm{\Gamma }+1\right)\left(\mathrm{\Gamma }+2\right)\right],\quad l\ge 0,`$ (53)
for positive energies, and
$`E_{-}=-m\left[1+\frac{2(Z\alpha )^2}{(2\gamma +1)}\left(\mathrm{\Gamma }+1\right)\right],\quad l\ge 0\text{ and }l<0,`$ (54)
for negative energies (in fact, it can be checked from (52) that for $`l<0`$, $`E`$ is always close to $`-m`$ for any $`Z\alpha <1/2`$).
When $`Z\alpha `$ is close to $`Z\alpha =1/2`$, we have $`|E|\gg m`$ for $`l\ge 0`$. In this case the energy $`E`$ can be approximated by:
$`E=\pm m\left[1-\left(Z\alpha \right)^2\frac{(2\mathrm{\Gamma }+1)(2\mathrm{\Gamma }+3)}{(2\gamma +1)\mathrm{\Gamma }}\right]^{-1/2}.`$ (55)
A consequence following from this formula is that, for each $`l\ge 0`$, there is a critical value of $`Z`$ beyond which a polynomial solution with $`n=2`$ is impossible. The critical value of $`Z`$ for each $`l`$ is found by setting the expression in the square root of (55) to zero. For $`l=0`$ and $`l=1`$, the critical values of $`Z`$ are $`Z\alpha =1/2.936`$ and $`1/2.316`$, respectively.
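These critical values follow from a one-dimensional root search on the bracket of (55); a short bisection check (our sketch, not from the paper):

```python
import numpy as np

def bracket(x, l):
    """1 - (Z alpha)^2 (2G+1)(2G+3)/((2g+1)G) from eq. (55); x = Z alpha."""
    g = np.sqrt((l + 0.5)**2 - x*x)
    G = g + l + 0.5
    return 1.0 - x*x * (2*G + 1) * (2*G + 3) / ((2*g + 1) * G)

def z_crit(l, lo=1e-6, hi=0.4999):
    for _ in range(60):                    # bisection: bracket changes sign
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bracket(mid, l) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for l in (0, 1):
    print(f"l={l}: (Z alpha)_crit ~ 1/{1/z_crit(l):.3f}")   # 1/2.936, 1/2.316
```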
In the non-relativistic limit (see Sect. III), it is the upper, or the large, component $`f(r)`$ of the Dirac wave function that reduces to the Schrödinger wave function. Hence, in order to compare with the results considered in , it would be appropriate to study the nodal structures of the function $`F(r)`$ for positive energy solutions in the limit $`Em`$. It is easy to see from (50) or (51) that in this limit, $`\alpha _0`$ and $`\alpha _0`$ have opposite signs. Thus $`F(r)`$ has only one node in this limit, which is the same as in the Schrödinger case.
### 3 $`n=3`$.
For the case of $`n=3`$, exact solution of (46) and (47) becomes much more tedious. Now the values of $`E`$ and $`a`$ are solved by the following coupled equations:
$`E^2m^2=2a\left(\mathrm{\Gamma }+3\right),`$ (56)
$`Z\alpha \left[(\mathrm{\Gamma }+2)(Em)+(\mathrm{\Gamma }+3)(E+m)\right]\alpha _2+2a\left(\mathrm{\Gamma }+3\right)\alpha _1=0,`$ (57)
$`4(\gamma +1)(\mathrm{\Gamma }+1)\alpha _2+Z\alpha \left[(\mathrm{\Gamma }+1)(Em)+(\mathrm{\Gamma }+2)(E+m)\right]\alpha _1+4a\left(\mathrm{\Gamma }+2\right)\alpha _0=0,`$ (58)
$`(2\gamma +1)\mathrm{\Gamma }\alpha _1+Z\alpha \left[\mathrm{\Gamma }(Em)+(\mathrm{\Gamma }+1)(E+m)\right]\alpha _0=0.`$ (59)
In place of (52) we now have a cubic equation for the energy $`E`$. We shall not attempt to solve it here. It turns out that the equation satisfied by $`E`$ can be reduced to quadratic ones without linear term in $`E`$ in the low energy ($`Em`$) and the high energy ($`Em`$) limit, which correspond to small and large $`Z`$, respectively. The results are
$`E_+=m\left[1{\displaystyle \frac{2\left(Z\alpha \right)^2(\mathrm{\Gamma }+1)(\mathrm{\Gamma }+2)(\mathrm{\Gamma }+3)}{(2\gamma +1)\mathrm{\Gamma }(\mathrm{\Gamma }+2)+2(\gamma +1)(\mathrm{\Gamma }+1)^2}}\right]^{1/2}`$ (60)
and
$`E_{}=m\left[1{\displaystyle \frac{2\left(Z\alpha \right)^2(\mathrm{\Gamma }+1)(\mathrm{\Gamma }+2)(\mathrm{\Gamma }+3)}{(2\gamma +1)(\mathrm{\Gamma }+2)^2+2(\gamma +1)(\mathrm{\Gamma }+1)(\mathrm{\Gamma }+3)}}\right]^{1/2}`$ (61)
for $`|E|m`$, and
$`E=\pm m\left[1{\displaystyle \frac{2\left(Z\alpha \right)^2(\mathrm{\Gamma }+1/2)(\mathrm{\Gamma }+3/2)(\mathrm{\Gamma }+5/2)(\mathrm{\Gamma }+3)}{(2\gamma +1)\mathrm{\Gamma }(\mathrm{\Gamma }+5/2)(\mathrm{\Gamma }+2)+2(\gamma +1)(\mathrm{\Gamma }+1/2)(\mathrm{\Gamma }+1)(\mathrm{\Gamma }+3)}}\right]^{1/2}`$ (62)
for $`|E|m`$. The corresponding values of the magnetic field are obtained by substituting (60), (61), or (62) into (56). For $`l=1`$, eq.(62) is real only for $`1/2.65<Z\alpha <1/2`$.
As in the $`n=2`$ case, we shall also investigate the nodal structures of the function $`F(r)`$ for positive energy solutions in the limit $`Em`$. The zeros of the polynomial part of $`F(r)`$ is given by
$$r_0=\frac{1}{2}\left[\left(\frac{\alpha _1}{\alpha _2}\right)\pm \sqrt{\left(\frac{\alpha _1}{\alpha _2}\right)^24\left(\frac{\alpha _0}{\alpha _2}\right)}\right].$$
(63)
Note that physical solutions of $`r_0`$, if exist, must be non-negative. In the limit $`Em`$, eq.(57) and (59) give approximately
$`{\displaystyle \frac{\alpha _1}{\alpha _2}}`$ $`=`$ $`{\displaystyle \frac{EZ\alpha }{a}},`$ (64)
$`{\displaystyle \frac{\alpha _0}{\alpha _2}}`$ $`=`$ $`{\displaystyle \frac{(2\gamma +1)\mathrm{\Gamma }}{2EZ\alpha (\mathrm{\Gamma }+1)}}{\displaystyle \frac{\alpha _1}{\alpha _2}}.`$ (65)
We see from (64) that $`\alpha _1/\alpha _2<0`$ in this limit.
For negative $`l<0`$, which implies $`1/2\mathrm{\Gamma }<0`$, we also have $`\alpha _0/\alpha _2<0`$. Eq.(63) then implies that there is only one positive zero of $`F(r)`$. Hence the wave function has only one node for $`l<0`$.
When $`l0`$, we have $`\mathrm{\Gamma }>0`$, and hence $`\alpha _0/\alpha _2>0`$. It can be checked from (56), (60), (64) and (65) that $`(\alpha _1/\alpha _2)^2>4(\alpha _0/\alpha _2)`$. Thus $`F(r)`$ has two positive zeros. This is also consistent with the results presented in for the Schrödinger case (see also the last part of the following Section).
## III Non-relativistic limit and method of factorization
The electron in 2+1 dimensions in the nonrelativistic approximation is described by one-component wave function. This can easily be shown in full analogy with the (3+1)-dimensional case. Let us represent $`\mathrm{\Psi }`$ in the form
$`\mathrm{\Psi }=\mathrm{exp}(imt)\left(\begin{array}{c}\psi \\ \chi \end{array}\right).`$ (68)
and substitute (68) into (2). This results in, to the first order in $`1/c`$, the following Schrödinger-type equation (instead of the Schrödinger-Pauli equation in 3+1 dimensions):
$`i{\displaystyle \frac{\psi }{t}}=\left({\displaystyle \frac{P_1^2+P_2^2}{2m}}+{\displaystyle \frac{eB}{2m}}{\displaystyle \frac{Ze^2}{r}}\right)\psi ,`$ (69)
where, as before, $`P_k=i_\mu +eA_\mu `$ denote the generalized momentum operators. The term $`eB/2m`$ in (69) indicates that the electron has gyromagnetic factor $`g=2`$ as in the $`(3+1)`$-dimensional case .
One can now proceed in the same manner as in the Dirac case to solve for the possible energies and magnetic fields. We shall not repeat it here. More simply, we make use of the fact that eq.(69) differs from the Schrödinger equation discussed in only by the positive spin correction term $`\omega _L=eB/2m`$, which is the Larmor frequency. We thus conclude that the denumerably infinite set of magnetic field strengths obtained in are still intact, but the corresponding values of the possible energies are all shifted by an amount $`\omega _L`$, i.e.
$`E=\omega _L(n+1+l+|l|).`$ (70)
Simply put, the quantum number $`n`$ in is changed to $`n+1`$.
Let us note here that the energies and magnetic fields in this case may also be found by means of a method closely resembling the method of factorization in nonrelativistic quantum mechanics. We shall discuss this method briefly below. Both the attractive and repulsive Coulomb interactions will be considered, since planar two electron systems in strong external homogeneous magnetic field (perpendicular to the plane in which the electrons is located) are also of considerable interest for the understanding of the fractional quantum Hall effect. Let us assume
$`\psi (t,𝐱)={\displaystyle \frac{1}{\sqrt{2\pi }}}\mathrm{exp}(iEt+il\phi )r^{|l|}\mathrm{exp}(ar^2/2)Q(r),`$ (71)
where $`Q`$ is a polynomial, and $`a=eB/2`$ as defined before. Substituting (71) into (69), we have
$`\left[{\displaystyle \frac{d^2}{dx^2}}+\left({\displaystyle \frac{2\gamma }{x}}x\right){\displaystyle \frac{d}{dx}}+\left(ϵ\pm {\displaystyle \frac{b}{x}}\right)\right]Q(x)=0,`$ (72)
Here $`x=r/l_B`$, $`l_B=1/\sqrt{eB}`$, $`\gamma =|l|+1/2`$, $`b=2m|Z|\alpha l_B=|Z|\alpha \sqrt{2m/\omega _L}`$, and $`ϵ=E/\omega _L(2+l+|l|)`$. The upper (lower) sign in (72) corresponds to the case of attractive (repulsive) Coulomb interaction. This will be assumed throughout the rest of the paper.
It is seen that the problem of finding spectrum for (72) is equivalent to determining the eigenvalues of the operator
$`H={\displaystyle \frac{d^2}{dx^2}}\left({\displaystyle \frac{2\gamma }{x}}x\right){\displaystyle \frac{d}{dx}}{\displaystyle \frac{b}{x}}.`$ (73)
We want to factorize the operator (73) in the form
$`H=a^+a+p,`$ (74)
where the quantum numbers $`p`$ are related to the eigenvalues of (72) by $`p=ϵ`$. The eigenfunctions of the operator $`H`$ at $`p=0`$ must satisfy the equation
$`a\psi =0.`$ (75)
Suppose polynomial solutions exist for (72), say $`Q=\underset{k=1}{\overset{s}{}}(xx_k)`$, where $`x_k`$ are the zeros of $`Q`$, and $`s`$ is the degree of $`Q`$. Then the operator $`a`$ must have the form
$`a={\displaystyle \frac{}{x}}{\displaystyle \underset{k=1}{\overset{s}{}}}{\displaystyle \frac{1}{xx_k}},`$ (76)
and the operator $`a^+`$ has the form
$`a^+={\displaystyle \frac{}{x}}{\displaystyle \frac{2\gamma }{x}}+x{\displaystyle \underset{k=1}{\overset{s}{}}}{\displaystyle \frac{1}{xx_k}}.`$ (77)
Substituting (76) and (77) into (74) and then comparing the result with (73), we obtain the following set of equations for the zeros $`x_k`$ (the so-called Bethe ansatz equations ):
$`{\displaystyle \frac{2\gamma }{x_k}}x_k2{\displaystyle \underset{jk}{\overset{s}{}}}{\displaystyle \frac{1}{x_jx_k}}=0,k=1,\mathrm{},s,`$ (78)
as well as the two relations:
$`\pm b=2\gamma {\displaystyle \underset{k=1}{\overset{s}{}}}x_k^1,s=p.`$ (79)
Summing all the $`s`$ equations in (78) enables us to rewrite the first relation in (79) as
$`\pm b={\displaystyle \underset{k=1}{\overset{s}{}}}x_k.`$ (80)
From these formulas we can find the simplest solutions as well as the values of energy and magnetic field strength. The second relation in (79) gives $`E=\omega _L(2+s+l+|l|)`$, which is the same as in (70) noting that $`n=s+1`$.
For $`s=1,2`$ the zeros $`x_k`$ and the values of the parameter $`b`$ for which solutions in terms of polynomial of the corresponding degrees exist can easily be found from (78) and (80) in the form
$`s=1,x_1=\pm \sqrt{2|l|+1},b=\sqrt{2|l|+1},`$ (81)
$`s=2,x_1=(2|l|+1)/x_2,x_2=\pm (1+\sqrt{4|l|+3})/\sqrt{2},b=\sqrt{2(4|l|+3)}.`$ (82)
From (82) and the definition of $`b`$ one has the corresponding values of magnetic field strengths
$`\omega _L=2m{\displaystyle \frac{(Z\alpha )^2}{2|l|+1}},s=1,`$ (83)
$`\omega _L=m{\displaystyle \frac{(Z\alpha )^2}{4|l|+3}},s=2,`$ (84)
as well as the energies
$`E_1={\displaystyle \frac{2m(Z\alpha )^2}{2(2|l|+1)}}(3+l+|l|),`$ (85)
$`E_2={\displaystyle \frac{m(Z\alpha )^2}{(4|l|+3)}}(4+l+|l|).`$ (86)
The corresponding polynomials are
$`Q_1=xx_1=xb,`$ (87)
$`Q_2={\displaystyle \underset{k=1}{\overset{2}{}}}(xx_k)=x^2bx+2|l|+1.`$ (88)
The wave functions are described by (71). For $`s=1,2`$ for the repulsive Coulomb field the wave functions do not have nodes (for $`|l|=0,1`$), i.e. the states described by them are ground states, while for the attractive Coulomb field the wave function for $`s=1`$ has one node (first excited state) and the wave function for $`s=2`$ has two nodes (second excited state).
## IV Conclusions
In this paper we consider solutions of the Dirac equation in two spatial dimensions in the Coulomb and homogeneous magnetic fields. It is shown by using semiclassical approximation that for weak magnetic fields all discrete energy eigenvalues are negative levels of a hydrogen-like atom perturbed by the magnetic field. For large magnetic fields, analytic solutions of the Dirac equation are possible for a denumerably infinite set of magnetic field strengths, if the two components of the wave function are assumed to have the forms (31) and (32) with terminating polynomial parts. Such forms will guarantee normalizability of the wave functions. We present the exact recursion relations that determine the coefficients of the series expansion for solutions of the Dirac equation, the possible energies and the magnetic fields. Exact and/or approximate expressions of the energy are explicitly given for the three simplest cases. For low positive energy solutions, we also investigate the nodal structures of the large components of the Dirac wave functions, and find that they are the same as in the Schrödinger case. We emphasize that, by assuming a sufficient condition on the wavefunction that guarantees normalizability, only parts of the energy spectrum of this system are exactly solved for. In this sense the system can be considered a quasi-exactly solvable model as defined in . As in the Schrödinger and the Klein-Gordon case, energy levels with magnitude below the mass value, which include the most interesting ground state solution, cannot be obtained by our ansatz. For the corresponding case in $`(3+1)`$-dimension, no analytic solutions, even for parts of the spectrum, are possible.
Acknowledgment
This work was supported in part by the Republic of China through Grant No. NSC 89-2112-M-032-004.
|
no-problem/0003/math0003066.html
|
ar5iv
|
text
|
# Generalized Jordanian 𝑅-matrices of Cremmer-Gervais type
## Introduction
Let $`𝔽`$ be an algebraically closed field of characteristic zero. The skew-symmetric solutions of the classical Yang-Baxter equation for a simple Lie algebra are classified by the quasi-Frobenius subalgebras; that is, pairs of the form $`(𝔣,\omega )`$ where $`𝔣`$ is a subalgebra and $`\omega :𝔣𝔣𝔽`$ is a nondegenerate 2-cocycle on $`𝔣`$. By a result of Drinfeld , the associated Lie bialgebras admit quantizations. This is done by twisting the enveloping algebra $`U(𝔤)[[h]]`$ by an appropriate Hopf algebra 2-cocycle. However neither construction lends itself easily to direct calculation and few explicit examples exist to illustrate this theory. The most well-known is the Jordanian quantum group associated to the classical $`r`$-matrix $`EH`$ inside $`𝔰𝔩(2)𝔰𝔩(2)`$. In , Gerstenhaber and Giaquinto constructed explicitly the $`r`$-matrix $`r_𝔭`$ associated to certain maximal parabolic subalgebras $`𝔭`$ of $`𝔰𝔩(n)`$. In particular for the parabolic subalgebra $`𝔭`$ generated by $`𝔟^+`$ and $`F_1,\mathrm{}F_{n2}`$, their construction yields
$$r_𝔭=n\underset{i<j}{}\underset{k=i}{\overset{j1}{}}E_{k,i}E_{i+jk1,j}+\underset{i,j}{}(j1)E_{j1,j}E_{i,i}$$
In , they raise the problem of quantizing this $`r`$-matrix, in the sense of constructing an invertible $`RM_n(𝔽)M_n(𝔽)𝔽[[h]]`$ satisfying the Yang-Baxter equation and of the form $`I+hr+O(h^2)`$. When $`n=2`$, the solution is the well-known Jordanian $`R`$-matrix. Gerstenhaber and Giaquinto construct a quantization of $`r_𝔭`$ in the $`n=3`$ case and verify the necessary relations by direct calculation. We give below the quantization of $`r_𝔭`$ in the general case. Moreover, we are able to give three separate constructions which emphasize the fundamental position occupied by this $`R`$-matrix.
In the first section we construct $`R`$ (somewhat indirectly) as an extreme degeneration of the Belavin $`R`$-matrix. We do this by following the construction by Shibukawa and Ueno of solutions of the Yang-Baxter equation for linear operators on meromorphic functions. In , they showed that from any solution of Riemann’s three-term equation, they could construct such a solution of the Yang-Baxter equation. These solutions occur in three types: elliptic, trigonometric and rational. Felder and Pasquier showed that in the elliptic case, these operators, after twisting and restricting to suitable finite dimensional subspaces, yield Belavin’s $`R`$-matrices. In the trigonometric case, the same procedure yields the affinization of the Cremmer-Gervais quantum groups; sending the spectral parameter to infinity then yields the Cremmer-Gervais $`R`$-matrices themselves. Repeating this procedure in the rational case yields the desired quantization of $`r_𝔭`$, which we shall denote $`R_𝔭`$.
In the second section we show that these $`R`$-matrices occur as boundary solutions of the modified quantum Yang-Baxter equation, in the sense of Gerstenhaber and Giaquinto . It was observed in that if $`𝔐`$ is the set of solutions of the modified classical Yang-Baxter equation, then $`𝔐`$ is a locally closed subset of $`(𝔤𝔤)`$ and $`\overline{𝔐}𝔐`$ consists of solutions to the classical Yang-Baxter equation. The element $`r_𝔭`$ was found to lie on the boundary of the orbit under the adjoint action of $`SL(n)`$ of the modified Cremmer-Gervais $`r`$-matrix. In , Gerstenhaber and Giaquinto began an investigation into the analogous notion of boundary solutions of the quantum Yang-Baxter equation. They conjectured that the boundary solutions to the classical Yang-Baxter equation described above should admit quantizations which would be on the boundary of the solutions of their modified quantum Yang-Baxter equation. They confirmed this conjecture for the Cremmer-Gervais $`r`$-matrix in the $`𝔰𝔩(3)`$ case using some explicit calculations. We prove the conjecture for the general Cremmer-Gervais $`r`$-matrix by verifying that the matrices $`R_𝔭`$ do indeed lie on the boundary of the set of solutions to the modified quantum Yang-Baxter equation.
In the third section we show that these matrices may also be constructed via a “Vertex-IRF” transformation from certain solutions of the dynamical Yang-Baxter equation given in . This construction is analogous to the original construction of the Cremmer-Gervais $`R`$-matrices given in .
The position of $`R_𝔭`$ with relation to other fundamental solutions of the YBE and DYBE can be summarized heuristically by the diagram below.
$$\begin{array}{ccccc}R_B& & & & R_F\\ & & & & & & \\ \widehat{R}_{CG}& & R_{CG}& & \widehat{R}_{GN}& & R_{GN}\\ & & & & & & & & \\ R_{B,r}& & R_𝔭& & R_{F,r}& & R_{GN,r}\end{array}$$
YBE DYBE
On the left hand side, $`R_B`$ is Belavin’s elliptic $`R`$-matrix; $`R_{CG}`$ the Cremmer-Gervais $`R`$-matrix; $`\widehat{R}_{CG}`$ is the affinization of $`R_{CG}`$ which is also the trigonometric degeneration of the Belavin $`R`$-matrix; $`R_{B,r}`$ is a rational degeneration of the Belavin $`R`$-matrix. The vertical arrows denote degeneration of the coefficient functions (from elliptic to trigonometric and from trigonometric to linear); the horizontal arrows denote the limit as the spectral parameter tends to infinity. On the right hand side, $`R_F`$ is Felder’s elliptic dynamical $`R`$-matrix; $`\widehat{R}_{GN}`$ and $`R_{F,r}`$ are trigonometric and rational degenerations; $`R_{GN}`$ is the Gervais-Neveu dynamical $`R`$-matrix and $`R_{GN,r}`$ is a rational degeneration of the Gervais-Neveu matrix given in . The passage between the two diagrams is performed by Vertex-IRF transformations. The relationships involved in the top two lines of this diagram are well-known . This paper is concerned with elucidating the position of $`R_𝔭`$ in this picture.
The authors would like to thank Tony Giaquinto for many helpful conversations concerning boundary solutions of the Yang-Baxter equation.
## 1. Construction of $`R_𝔭`$
### 1.1. The YBE for operators on function fields
Recall that if $`A`$ is an integral domain and $`\sigma `$ is an automorphism of $`A`$, then $`\sigma `$ extends naturally to the field of rational functions $`A(x)`$ by acting on the coefficients. Denote by $`𝔽(z_1,z_2)`$ the field of rational functions in the variables $`z_1`$ and $`z_2`$. Then for any $`\sigma \mathrm{Aut}𝔽(z_1,z_2)`$, and any $`i,j\{1,2,3\}`$, we may define $`\sigma _{ij}\mathrm{Aut}𝔽(z_1,z_2,z_3)`$ by realizing $`𝔽(z_1,z_2,z_3)`$ as $`𝔽(z_i,z_j)(z_k)`$. Set $`\mathrm{\Gamma }=\mathrm{Aut}𝔽(z_1,z_2)`$. Elements $`R=\alpha _i(z_1,z_2)\sigma _i`$ of the group algebra $`𝔽(z_1,z_2)[\mathrm{\Gamma }]`$ act as linear operators on $`𝔽(z_1,z_2)`$ and we may define in this way $`R_{ij}`$ as linear operators on $`𝔽(z_1,z_2,z_3)`$. Thus we may look for solutions of the Yang-Baxter equation $`R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12}`$ amongst such operators. Denote by $`P`$ the operator $`Pf(z_1,z_2)=f(z_2,z_1)`$.
###### Theorem 1.1.
The operator
$$R=\frac{\kappa }{z_1z_2}P+\left(1+\frac{\kappa }{z_1z_2}\right)I=I+\frac{\kappa }{z_1z_2}(IP)$$
satisfies the Yang-Baxter equation for any $`\kappa 𝔽`$.
###### Proof.
Consider an operator of the general form
$$R=\alpha (z_1z_2)P+\beta (z_1z_2)I$$
Then it is easily seen that $`R`$ satisfies the Yang-Baxter equation if and only if
$$\alpha (x)\alpha (y)=\alpha (xy)\alpha (y)+\alpha (x)\alpha (yx)$$
and
$$\alpha (x)\alpha (y)^2+\beta (y)\beta (y)\alpha (x+y)=\alpha (x)^2\alpha (y)+\beta (x)\beta (x)\alpha (x+y)$$
These equations are satisfied when $`\alpha (x)=\kappa /x`$ and $`\beta (x)=1\alpha (x)`$. Moreover these are essentially the only such solutions . ∎
In fact, (at least when $`𝔽`$ is the field of complex numbers) this operator is the limit as the spectral parameter tends to infinity of certain solutions of the Yang-Baxter equation with spectral parameter on meromorphic functions constructed by Shibukawa and Ueno. Recall that in , they showed that operators of the form
$$R(\lambda )=G(z_1z_2,\lambda )PG(z_1z_2,\kappa )I$$
satisfied the Yang-Baxter equation
$$R_{12}(\lambda _1)R_{13}(\lambda _1+\lambda _2)R_{23}(\lambda _2)=R_{23}(\lambda _2)R_{13}(\lambda _1+\lambda _2)R_{12}(\lambda _1)$$
for any $`\kappa 𝔽`$ if $`G`$ was of the form
$$G(z,\lambda )=\frac{\theta ^{}(0)\theta (\lambda +z)}{\theta (\lambda )\theta (z)}$$
and $`\theta `$ satisfied the equation
$`\theta (x+y)`$ $`\theta (xy)\theta (z+w)\theta (zw)+\theta (x+z)\theta (xz)\theta (w+y)\theta (wy)`$
$`+\theta (x+w)\theta (xw)\theta (y+z)\theta (yz)=0`$
The principal solution of this equation is $`\theta (z)=\theta _1(z)`$, the usual theta function (as defined in, say, ), along with the degenerations of the theta functions, $`\mathrm{sin}(z)`$ and $`z`$, as one or both of the periods tend to infinity. Felder and Pasquier showed that in the case where $`\theta `$ is a true theta function, these operators, when twisted and restricted to suitable subspaces, yield the Belavin $`R`$-matrices. When $`\theta `$ is trigonometric, the operator yields in a similar way the affinizations of the Cremmer-Gervais $`R`$-matrices . Letting the spectral parameter tend to infinity in a suitable way yields a constant solution of the YBE on the function field which again yields the usual Cremmer-Gervais $`R`$-matrices on restriction to finite dimensional subspaces. In the rational case, the same twisting and restriction procedure yields the desired quantization of $`r_𝔭`$.
When $`\theta (z)=z`$ we have $`G(z,\lambda )=1/\lambda +1/z`$. Sending $`\lambda `$ to infinity (and adjusting by a factor of $`\kappa `$), we obtain the solution of the Yang-Baxter equation given in the theorem above. Write $`R=I+\kappa r`$ where $`r=(IP)/(z_1z_2)`$. Then $`r`$ is a particularly interesting operator. It satisfies the classical Yang-Baxter equation, both forms of the quantum Yang-Baxter equation and has square zero. Its quantization is then just the exponential $`\mathrm{exp}\kappa r=I+\kappa r=R`$.
Let $`V_n`$ be the space of polynomials in $`z_1`$ of degree less than $`n`$. Then we may identify the space $`V_nV_n`$ with the subspace of $`𝔽(z_1,z_2)`$ consisting of polynomials of degree less than $`n`$ in both $`z_1`$ and $`z_2`$. Since $`Rz_1^iz_2^j=z_1^iz_2^j+\kappa (z_1^iz_2^jz_2^iz_1^j)/(z_1z_2)`$, $`R`$ restricts to an operator on $`V_nV_n`$. With respect to the natural basis, $`R`$ has the form
$$R(e_ie_j)=e_ie_j\kappa \underset{k}{}\eta (i,j,k)e_ke_{i+jk1}$$
where
$$\eta (i,j,k)=\{\begin{array}{cc}1\hfill & \text{ if }ik<j\hfill \\ 1\hfill & \text{ if }jk<i\hfill \\ 0\hfill & \text{ otherwise }\hfill \end{array}$$
We now apply a simple twist. Define the operator $`\stackrel{~}{F}_p`$ by $`\stackrel{~}{F}_pf(z_1,z_2)=f(z_1+p,z_2p)`$.
###### Lemma 1.2.
Let $`F=\stackrel{~}{F}_p`$. Then $`F`$ and the above $`R`$ satisfy:
1. $`F_{21}=F_{12}^1`$
2. $`F_{12}F_{13}F_{23}=F_{23}F_{13}F_{12}`$
3. $`R_{12}F_{23}F_{13}=F_{13}F_{23}R_{12}`$
4. $`R_{23}F_{12}F_{13}=F_{13}F_{12}R_{23}`$
Hence $`R_F=F_{21}^1RF_{12}`$ also satisfies the Yang-Baxter equation.
###### Proof.
The four relations are routine verifications. The fact that $`R_F`$ then satisfies the Yang-Baxter equation is a well-known fact about $`R`$-matrices extended to this slightly more general situation. ∎
Notice that $`F_{21}^1PF_{12}=P`$ and $`F_{21}^1F_{12}=F^2=\stackrel{~}{F}_{2p}`$. Taking $`p=h/2`$ yields
$$R_F=\stackrel{~}{F}_h+\frac{\kappa }{z_1z_2+h}(\stackrel{~}{F}_hP)$$
Notice that
$$R_Fz_1^iz_2^j=(z_1+h)^i(z_2h)^j+\kappa \frac{(z_1+h)^i(z_2h)^jz_2^iz_1^j}{z_1z_2+h}$$
and again $`R_F`$ restricts to an operator on $`V_nV_n`$.
###### Definition 1.3.
Let $`n`$ be a positive integer. Define
$$R_𝔭=\stackrel{~}{F}_h\frac{hn}{z_1z_2+h}(\stackrel{~}{F}_hP)$$
restricted to $`V_nV_n`$.
Putting all the above together yields the main result.
###### Theorem 1.4.
For any $`h𝔽`$ and positive integer $`n`$, $`R_𝔭`$ satisfies the Yang-Baxter equation.
### 1.2. Explicit form of $`R_𝔭`$
We now find an explicit formula for the matrix coefficients of $`R_𝔭`$ with respect to the natural basis.
Define the coefficients of $`R_𝔭`$ by $`R_𝔭z_1^iz_2^j=_{a,b}R_{ij}^{ab}z_1^az_2^b`$.
###### Proposition 1.5.
The coefficients of $`R_𝔭`$ are given by
$$R_{ij}^{ab}=(1)^{jb}\left[\left(\genfrac{}{}{0pt}{}{i}{a}\right)\left(\genfrac{}{}{0pt}{}{j}{b}\right)+n\underset{k}{}(1)^{ka}\left(\genfrac{}{}{0pt}{}{i}{k}\right)\left(\genfrac{}{}{0pt}{}{j+ka1}{b}\right)\eta (j,k,a)\right]h^{i+jab}$$
###### Proof.
Recall that
$$R_𝔭z_1^iz_2^j=(z_1+h)^i(z_2h)^jhn\frac{(z_1+h)^i(z_2h)^jz_2^iz_1^j}{z_1z_2+h}$$
For the second term we note that
$$\begin{array}{c}\frac{z_1^jz_2^i(z_1+h)^i(z_2h)^j}{z_1z_2+h}=\hfill \\ \hfill \underset{k,b,a}{}(1)^{j+kab}\left(\genfrac{}{}{0pt}{}{i}{k}\right)\left(\genfrac{}{}{0pt}{}{j+ka1}{b}\right)\eta (j,k,a)h^{i+jab1}z_1^az_2^b\end{array}$$
Combining this with the binomial expansion of the first term yields the assertion. ∎
The explicit form of this matrix in the case when $`n=3`$ can be found in \[13, Page 136\].
### 1.3. The semiclassical limit
The operator $`R_𝔭`$ is a polynomial function of the parameter $`h`$ of the form $`I+rh+O(h^2)`$. By working over a suitably extended field, we may assume that $`h`$ is a formal parameter. Hence $`r`$ satisfies the classical Yang-Baxter equation. We now verify that $`r`$ is the boundary solution $`r_𝔭`$ associated to the classical Cremmer-Gervais $`r`$-matrix found by Gerstenhaber and Giaquinto in .
Recall that their solution of the CYBE on the boundary of the component containing the modified Cremmer-Gervais $`r`$-matrix was (up to a scalar)
$$b_{CG}=n\underset{i<j}{}\underset{k=1}{\overset{ji}{}}E_{i,jk+1}E_{j,i+k}+\underset{i,j}{}(nj)E_{i,i}E_{j,j+1}.$$
(Here as usual we are taking the $`E_{ij}`$ to be the basis of $`\mathrm{End}V`$ defined by $`E_{ij}e_k=\delta _{jk}e_i`$ for a fixed basis $`\{e_1,\mathrm{},e_n\}`$ of $`V`$; we shall use the convention $`xy=xyyx`$). To pass from the $`b_{CG}`$ to our matrix $`r_𝔭`$, one applies the automorphism $`\varphi (E_{ij})=E_{n+1j,n+1i}`$. Thus our matrix is again a boundary solution but for a Cremmer-Gervais $`r`$-matrix associated to a different choice of parabolic subalgebras.
###### Theorem 1.6.
The operator $`R_𝔭`$ is of the form $`I+r_𝔭h+O(h^2)`$ where
$$r_𝔭z_1^iz_2^j=n\eta (i,j,k)z_1^kz_2^{i+jk1}+iz_1^{i1}z_2^jjz_1^iz_2^{j1}$$
In particular the matrix representation of $`r_𝔭`$ with respect to the usual basis is
$$n\underset{i<j}{}\underset{k=i}{\overset{j1}{}}E_{k,i}E_{i+jk1,j}+\underset{i,j}{}(j1)E_{j1,j}E_{i,i}.$$
###### Proof.
From Proposition 1.5, the coefficients $`r_{ij}^{ab}`$ are non-zero only when $`b=i+ja1`$ and in this case,
$`r_{ij}^{a,i+ja1}`$ $`={\displaystyle \frac{1}{h}}R_{ij}^{a,i+ja1}=(1)^{ai+1}\left({\displaystyle \genfrac{}{}{0pt}{}{i}{a}}\right)\left({\displaystyle \genfrac{}{}{0pt}{}{j}{ai+1}}\right)+n\eta (i,j,a)`$
$`=i\delta _{a,i1}j\delta _{a,i}+n\eta (i,j,a).`$
Hence
$$r_𝔭z_1^iz_2^j=n\eta (i,j,k)z_1^kz_2^{i+jk1}+iz_1^{i1}z_2^jjz_1^iz_2^{j1}.$$
Thus interpreting $`r_𝔭`$ as an operator on $`VV`$ we get
$$r_𝔭e_ie_j=n\eta (i,j,k)e_ke_{i+jk1}+(i1)e_{i1}e_j(j1)e_ie_{j1}.$$
In matrix form,
$$r_𝔭=n\underset{i<j}{}\underset{k=i}{\overset{j1}{}}E_{k,i}E_{i+jk1,j}+\underset{i,j}{}(j1)E_{j1,j}E_{i,i}.$$
## 2. Boundary solutions of the Yang-Baxter equation
### 2.1. The modified Yang-Baxter equation
In , Gerstenhaber and Giaquinto introduced the modified (quantum) Yang-Baxter equation (MQYBE). An operator $`R\mathrm{End}VV`$ is said to satisfy the MQYBE if
$$R_{12}R_{13}R_{23}R_{23}R_{13}R_{12}=\lambda (P_{123}R_{12}P_{213}R_{23})$$
for some nonzero $`\lambda `$ in $`𝔽`$. Here by $`P_{ijk}`$ we mean the permutation operator $`P_{ijk}(v_1v_2v_3)=v_{\sigma (1)}v_{\sigma (2)}v_{\sigma (3)}`$ where $`\sigma `$ is the permutation $`(ijk)`$.
Denote by $``$ the set of solutions of the YBE in $`\mathrm{End}VV`$ and by $`^{}`$ the set of solutions of the MQYBE. Then $`^{}`$ is a quasi-projective subvariety of $`(M_{n^2}(𝔽))`$ and $`\overline{^{}}^{}`$ is contained in $``$ . The elements of $`\overline{^{}}^{}`$ are naturally called boundary solutions of the YBE. Little is currently known about this set though we conjecture that it contains some interesting $`R`$-matrices closely related to the quantizations of Belavin-Drinfeld $`r`$-matrices . Let $`R`$ be a solution of the YBE for which $`PR`$ satisfies the Hecke equation $`(PRq)(PR+q^1)=0`$. Set $`\lambda =(1q^2)^2/(1+q^2)^2`$. Then $`Q=(2R+(q^1q)P)/(q+q^1)`$ is a unitary solution of the MQYBE. Roughly speaking what we expect to find is the following. If $`R`$ is a quantization (in the algebraic sense) of a Belavin-Drinfeld $`r`$-matrix on $`𝔰𝔩(n)`$, then on the boundary of the component of $`^{}`$ containing $`Q`$, we should find the quantization of the skew-symmetric $`r`$-matrix associated (in the sense of Stolin) with the parabolic subalgebra of $`𝔰𝔩(n)`$ associated to $`r`$. We prove this conjecture here for the most well-known example, the Cremmer-Gervais $`R`$-matrices.
If $`R\mathrm{End}(VV)\widehat{}𝔽[[h]]`$ satisfies the QYBE and is of the form $`I+hr+O(h^2)`$, then $`r`$ satisfies the classical Yang-Baxter equation and $`R`$ is said to be a quantization of $`r`$. The situation for the MQYBE is slightly more complicated and applies only to the $`𝔰𝔩(n)`$ case. Recall that the modified classical Yang-Baxter equation (MCYBE) for an element $`r𝔰𝔩(n)𝔰𝔩(n)`$ is the equation
$$[r_{12},r_{13}]+[r_{12},r_{23}]+[r_{13},r_{23}]=\mu \mathrm{\Omega }$$
where $`\mathrm{\Omega }`$ is the unique invariant element of $`^3𝔰𝔩(n)`$ (which in the standard representation is the operator $`P_{123}P_{213}`$). If $`R`$ is of the form $`I+hr+O(h^2)`$ and is a solution of the MQYBE then $`\lambda `$ is of the form $`\nu h^2+O(h^3)`$ for some scalar $`\nu `$. If $`\nu 0`$, then $`r`$ satisfies the MCYBE. In this case we say that $`R`$ is a quantization of $`r`$.
There is an analogous notion of boundary solution for the classical Yang-Baxter equation. In , Gerstenhaber and Giaquinto showed that the matrix $`b_{CG}`$ lies on the boundary of the component of the set of solutions to the MCYBE containing the modified Cremmer-Gervais classical $`r`$-matrix. They conjectured that its quantization should lie on the boundary of the component of $`^{}`$ containing the modified Cremmer-Gervais $`R`$-matrix and proved this in the case $`n=3`$ in . We prove now this conjecture in general by showing that $`R_𝔭`$ lies on the boundary of this component of $`^{}`$.
### 2.2. The Cremmer-Gervais solution of the MQYBE
Consider the linear operator on $`𝔽(z_1,z_2)`$
$$R=\frac{\widehat{q}pz_2}{pz_2z_1}P+\left(q\frac{\widehat{q}pz_2}{pz_2z_1}\right)F_p$$
where $`\widehat{q}=qq^1`$ and $`F_pf(z_1,z_2)=f(p^1z_1,pz_2)`$. When restricted to $`V_nV_n`$, the above operator becomes the usual 2-parameter Cremmer-Gervais $`R`$-matrix . When $`p^n=q^2`$, this is the original Cremmer-Gervais $`R`$-matrix which induces a quantization of $`SL(n)`$ .
If $`R`$ is any solution of the YBE for which $`PR`$ satisfies the Hecke equation $`(PRq)(PR+q^1)=0`$ then $`Q=(2R+(q^1q)P)/(q+q^1)`$ is a unitary solution of the MQYBE for $`\lambda =(1q^2)^2/(1+q^2)^2`$. Hence the operator $`Q_{p,q}=(2R\widehat{q}P)/(q+q^1)`$ satisfies the MQYBE. Explicitly,
$$Q_{p,q}=F_p\frac{\widehat{q}(z_2+p^1z_1)}{(q+q^1)(z_2p^1z_1)}(F_pP).$$
We call the corresponding matrices induced from these operators, the modified Cremmer-Gervais $`R`$-matrices.
### 2.3. Deformation to the boundary
Henceforth take $`q^2=p^n`$. Then the operator $`Q_{p,q}`$ becomes
$$Q_p=F_p\frac{(p^n1)(z_2+p^1z_1)}{(p^n+1)(z_2p^1z_1)}(F_pP)$$
This is the modified version of the one-parameter Cremmer-Gervais operator described above. Again $`Q_p`$ may be restricted to the subspace $`V_nV_n`$ where its action is given by
$$Q_pz_1^iz_2^j=p^{ji}z_1^iz_2^j\frac{(p^n1)}{(p^n+1)}[\eta (i,j,k)+\eta (i,j,k1)]p^{jk}z_1^kz_2^{i+jk}.$$
Fix $`h𝔽`$ and $`p𝔽^{}`$, define $`\stackrel{~}{F}_{p,h}`$ by $`\stackrel{~}{F}_{p,h}f(z_1,z_2)=f(p^1z_1+p^1h,pz_2h)`$. Define further,
$$\begin{array}{c}B_{p,h,n}=\stackrel{~}{F}_{p,h}\frac{(p^n1)(pz_2+z_1)}{(p^n+1)(pz_2z_1h)}(\stackrel{~}{F}_{p,h}P)\hfill \\ \hfill +\frac{h(p^n1)(p+1)}{(p^n+1)(p1)(pz_2z_1h)}(\stackrel{~}{F}_{p,h}P)\end{array}$$
Note that
$$B_{1,h,n}=\frac{hn}{(z_2z_1h)}(\stackrel{~}{F}_hP)+\stackrel{~}{F}_h$$
since $`\stackrel{~}{F}_h=\stackrel{~}{F}_{1,h}`$. This is the operator $`R_F`$ described above (with $`\kappa =hn`$) that restricts to $`R_𝔭`$ on finite dimensional subspaces.
###### Proposition 2.1.
For all $`h`$ and $`p1`$, $`B_{p,h,n}`$ is a solution of the MQYBE similar to $`Q_p`$.
###### Proof.
Define a shift operator $`\varphi _t:𝔽(z_1,z_2)𝔽(z_1,z_2)`$ by $`\varphi _tf(z_1,z_2)=f(z_1t,z_2t)`$ and let $`\varphi _t`$ act as usual on operators by conjugation. Then, if $`F_{p,t}=\varphi _tF_p`$,
$$\varphi _tQ_p=F_{p,t}\frac{(p^n1)(pz_2+z_1t(p+1))}{(p^n+1)(pz_2z_1t(p1))}(F_{p,t}P)$$
Choose $`t=h/(p1)`$. Then $`\varphi _tQ_p=B_{p,h,n}`$. This shows that $`B_{p,h,n}`$ is similar to $`Q_p`$ and hence satisfies the MQYBE when $`p1`$. ∎
Now the restriction of $`B_{p,h,n}`$ to $`V_nV_n`$ is a rational function of $`p`$ which belongs to $`^{}`$ and which for $`p=1`$ is $`R_𝔭`$. Thus $`R_𝔭`$ must be a “boundary solution” of the Yang-Baxter equation.
## 3. Vertex-IRF transformations and solutions of the dynamical YBE
The original construction of the Cremmer-Gervais $`R`$-matrices was by a generalised kind of change of basis (a “vertex-IRF transformation”) from the Gervais-Neveu solution of the constant dynamical Yang-Baxter equation. Given the above construction of $`R_𝔭`$ as a rational degeneration of the Cremmer-Gervais matrices, it is natural to expect that $`R_𝔭`$ should be connected in the same way with some kind of rational degeneration of the Gervais-Neveu matrices. In fact this is precisely what happens. The appropriate solutions to the constant dynamical Yang-Baxter equation (DYBE) were found by Etingof and Varchenko in . In classifying certain kinds of solutions to the constant DYBE, they found that all such solutions were equivalent to either a generalized form of the Gervais-Neveu matrix or to a rational version of this matrix. It turns out that $`R_𝔭`$ is connected via a vertex-IRF transformation with the simplest of this family of rational solutions to the constant DYBE.
Recall the framework for the dynamical Yang-Baxter equation given in . Let $`H`$ be a commutative cocommutative Hopf algebra. Let $`B`$ be an $`H`$-module algebra with structure map $`\sigma :HBB`$. Denote by $`𝒞`$ the category of right $`H`$-comodules. Define a new category $`𝒞_\sigma `$ whose objects are right $`H`$-comodules but whose morphisms are $`\mathrm{hom}_{𝒞_\sigma }(V,W)=\mathrm{hom}_H(V,WB)`$ where $`B`$ is given a trivial comodule structure. Composition of morphisms is given by the natural embedding of $`\mathrm{hom}_H(V,WB)`$ inside $`\mathrm{hom}_H(VB,WB)`$.
A tensor product $`\stackrel{~}{}:𝒞_\sigma \times 𝒞_\sigma 𝒞_\sigma `$ is defined on this category in the following way. For objects $`V`$ and $`W`$, $`V\stackrel{~}{}W`$ is the usual tensor product of $`H`$ comodules $`VW`$. In order to define the tensor product of two morphisms, define first for any $`H`$-comodule $`W`$, a linear twist map $`\tau :BWWB`$ by
$$\tau (bw)=w_{(0)}\sigma (w_{(1)}b).$$
where $`ww_{(0)}w_{(1)}`$ is the structure map of the comodule $`W`$. Then for any pair of morphisms $`f:VV^{}`$ and $`g:WW^{}`$, define
$$f\stackrel{~}{}g=(1m_B)(1\tau 1)(fg)$$
Etingof and Varchenko showed in that the bifunctor $`\stackrel{~}{}`$ makes $`𝒞_\sigma `$ into a tensor category. Let $`V𝒞_\sigma `$ For any $`R\mathrm{End}_{𝒞_\sigma }(V\stackrel{~}{}V)`$ we define elements of $`\mathrm{End}_{𝒞_\sigma }(V\stackrel{~}{}V\stackrel{~}{}V)`$, $`R_{12}=R\stackrel{~}{}1`$ and $`R_{23}=1\stackrel{~}{}R`$. Then $`R`$ is said to satisfy the $`\sigma `$-dynamical braid equation ($`\sigma `$-DBE) if $`R_{12}R_{23}R_{12}=R_{23}R_{12}R_{23}`$. If $`R`$ is a solution of the $`\sigma `$-DBE then $`RP`$ satisfies the $`\sigma `$-dynamical Yang-Baxter equation:
$$R_{12}R_{23}^{12}R_{12}^{123}=R_{23}R_{12}^{23}R_{23}^{132}$$
where for instance $`R_{12}^{132}=P_{132}R_{12}P_{123}`$.
A vertex-IRF transformation of a solution of the $`\sigma `$-DBE can then be defined \[15, Section 3.3\] as an invertible linear operator $`A:VVB`$ (that is, invertible in the sense of the composition of such operators defined above) such that the conjugate operator $`R^A=A_2^1A_1^1RA_1A_2`$ is a “scalar” operator in the sense that $`R^A(VV)VV𝔽`$. In this case $`R^A`$ satisfies the traditional braid equation \[15, Proposition 3.3\]. Thus a vertex-IRF transformation transforms a solution of the $`\sigma `$-DYBE to a solution of the usual YBE.
Let $`T`$ be the usual maximal torus of $`SL(n)`$. Let $`V`$ be the standard representation of $`SL(n)`$ considered as a comodule over $`H=𝔽[T]`$ which we may consider as the group algebra of the weight lattice $`P`$; i.e., $`H=𝔽[K_\lambda \lambda P]`$. Then $`V`$ has a basis $`\{e_i\}`$ of weight vectors with weights $`\nu _i`$. Denote the structure map by $`\rho :VV𝔽[T]`$. Then $`\rho (e_i)=e_iK_{\nu _i}`$.
Let $`S(𝔥^{})`$ be the symmetric algebra on $`𝔥^{}`$ and set $`B=\text{Frac}(S(𝔥^{}))`$. Define an action $`\sigma :HBB`$ by
$$\sigma (K_\lambda \nu )=\nu (\lambda ,\nu ).$$
Denote $`\sigma (K_\lambda b)`$ by $`b^\lambda `$. Recall that $`(\nu _i,\nu _j)=\delta _{ij}1/n`$. This fact will be used repeatedly in the calculations below.
Let $`R`$ be the matrix $`R_𝔭`$ defined in Section 1.1 with $`h=1/n`$, considered as an operator on the space $`VV`$ where $`V`$ has basis $`\{e_1,\mathrm{},e_n\}`$. Set $`\stackrel{~}{R}=RP`$ and let $`\stackrel{~}{R}_{ij}^{kl}`$ be the matrix coefficients of $`\stackrel{~}{R}`$ defined by $`\stackrel{~}{R}e_ie_j=_{k,l}\stackrel{~}{R}_{ij}^{kl}e_ke_l`$. From Definition 1.3 we have that for any $`z_1`$ and $`z_2`$,
$$\underset{k,l}{}\stackrel{~}{R}_{ij}^{kl}z_1^{k1}z_2^{l1}=\alpha (z_1z_2)z_1^{i1}z_2^{j1}+\beta (z_1z_2)(z_1+1/n)^{j1}(z_21/n)^{i1}.$$
where $`\alpha (x)=1/(x+1/n)`$ and $`\beta (x)=1\alpha (x)`$. Define the operator $`\mathrm{End}_{𝒞_\sigma }V\stackrel{~}{}V`$ by
$`(e_ie_j)`$ $`=e_ie_j\alpha (\nu _i^{\nu _j}\nu _j)+e_je_i\beta (\nu _i^{\nu _j}\nu _j)`$
$`=e_ie_j{\displaystyle \frac{1}{\nu _i\nu _j+\delta _{ij}}}+e_je_i\left(1{\displaystyle \frac{1}{\nu _i\nu _j+\delta _{ij}}}\right).`$
This is the solution of the DBE corresponding to the standard example of solution of the DYBE of the type given in \[7, Theorem 1.2\]. Finally define an operator $`A\mathrm{End}_{𝒞_\sigma }(V)`$ by $`A(e_i)=e_k\nu _k^{i1}`$.
###### Theorem 3.1.
$`^A=\stackrel{~}{R}`$
###### Proof.
We prove that $`A_1A_2=A_1A_2\stackrel{~}{R}`$. In matrix form this is equivalent to
$$\underset{c,d}{}_{cd}^{ms}(A_i^c)^{\nu _d}A_j^d=\underset{k,l}{}\stackrel{~}{R}_{ij}^{kl}(A_k^m)^{\nu _s}A_l^s.$$
Using the fact that $`\beta (\nu _m^{\nu _s}\nu _s)=0`$ when $`m=s`$
$`{\displaystyle \underset{k,l}{}}`$ $`\stackrel{~}{R}_{ij}^{kl}(A_k^m)^{\nu _s}A_l^s={\displaystyle \underset{k,l}{}}\stackrel{~}{R}_{ij}^{kl}(\nu _m^{\nu _s})^{k1}\nu _s^{l1}`$
$`=\alpha (\nu _m^{\nu _s}\nu _s)(\nu _m^{\nu _s})^{i1}\nu _s^{j1}+\beta (\nu _m^{\nu _s}\nu _s)(\nu _s{\displaystyle \frac{1}{n}})^{i1}(\nu _m^{\nu _s}+{\displaystyle \frac{1}{n}})^{j1}`$
$`=\alpha (\nu _m^{\nu _s}\nu _s)(\nu _m^{\nu _s})^{i1}\nu _s^{j1}+\beta (\nu _m^{\nu _s}\nu _s)(\nu _s^{\nu _m})^{i1}(\nu _m)^{j1}`$
$`={\displaystyle \underset{c,d}{}}_{cd}^{ms}(A_i^c)^{\nu _d}A_j^d`$
as required. ∎
|
no-problem/0003/astro-ph0003258.html
|
ar5iv
|
text
|
# GRB990510: on the possibility of a beamed X-ray afterglow
## 1 Introduction
Among the about 25 $`\gamma `$-ray bursts localized by the BeppoSAX Wide Field Cameras (WFCs), most of those followed-up with the Narrow Field Instruments (NFIs) onboard the same satellite have exhibited afterglows at X-ray energies (e.g., Costa et al. 1997), whereas less than half of them have exhibited afterglows in the optical, IR, and/or radio (e.g., van Paradijs et al. 1997; Frail et al. 1997). Most X-ray afterglows show a smooth power-law decay (with indices between $``$1.1 to $``$1.9), the exceptions being GRB970508 (Piro et al. 1998) and GRB970828 (Yoshida et al. 1999), which exhibit re-bursting events on time scales of a few hours and a day, respectively, superimposed on a power-law trend. The brightest X-ray afterglow so far, i.e. that of GRB990123, provided the first detection of hard X-ray (15–60 keV) afterglow emission (Heise et al. 2000).
Here we discuss BeppoSAX observations of the prompt $`\gamma `$-ray emission and the X-ray afterglow of GRB990510. On 1999 May 10 the BATSE experiment onboard the Compton Gamma Ray Observatory (CGRO) was triggered by GRB990510 at 8:49:06.29 UT (trigger 7560, see Kippen et al. 1999). The GRB was also detected by the BeppoSAX Gamma-Ray Burst Monitor (GRBM; Amati et al. 1999a) and WFC unit 2 (Dadina et al. 1999; Briggs et al. 2000), as well as by Ulysses (Hurley et al. 2000) and the Near Earth Asteroid Rendezvous (NEAR) spacecraft (Hurley 1999, private communication). In the WFC energy range (2–28 keV) the GRB had a duration of $``$80 s and reached a peak intensity of 4.3 Crab. The WFC error box was followed up in X-rays by the Narrow Field Instruments (NFIs) onboard BeppoSAX $``$8 hrs after the event and a strong decaying source was found (Piro et al. 1999b, Kuulkers et al. 1999). About 8.5 hr after the $`\gamma `$-ray/X-ray event the optical counterpart was found (Vreeswijk et al. 1999a) with a redshift of $`z>1.62`$ (Vreeswijk et al. 1999b). A linear polarization of 2% was measured (Covino et al. 1999; Wijers et al. 1999). Extended emission around the optical counterpart of GRB990510 has not been clearly detected, which indicates that a possible underlying host galaxy must be very faint (Israel et al. 1999; Fruchter et al. 1999b; Beuermann et al. 1999).
The light curve of the optical afterglow of GRB990510 does not follow a simple power-law decay, but showed smooth steepening after about one and a half day after the $`\gamma `$-ray burst (Harrison et al. 1999; Stanek et al. 1999; Israel et al. 1999). Traces of such a characteristic have also been found in the optical afterglow of GRB990123 (Kulkarni et al. 1999; Fruchter et al. 1999a) and the near-infrared afterglow of GRB990705 (Masetti et al. 2000). It has been regarded as the signature of a decreasing collimation in a relativistic flow (Sari et al. 1999; Rhoads 1999). Such behavior has never been observed in the X-ray afterglows of GRBs. The relatively large brightness of the GRB990510 X-ray afterglow allows an excellent opportunity to study the X-ray light curve in search of such a feature.
## 2 Observations
### 2.1 GRBM
The GRBM consists of the 4 anti-coincidence shields of the Phoswich Detection System (PDS; Frontera et al. 1997; Costa et al. 1998). The GRBM detector operates in the 40–700 keV energy band. The normal directions of two GRBM shields are co-aligned with the pointing direction of the WFCs. The on-axis effective area of the GRBM shields, averaged over the 40–700 keV band, is 420 cm<sup>2</sup>. The data from the GRBM include rates with 1 s time resolution and energy ranges of 40–700 keV and $`>`$100 keV, and average 240-channel spectra in the 40–700 keV band every 128 s (independently phased from GRB trigger times). For our spectral analysis we use data in the 70–650 keV band, since in this energy range the GRBM 240-channels response matrix is known with sufficient accuracy. For studying the GRB spectral evolution we use the 1 s ratemeters, and we check their consistency with the GRB time averaged spectra obtained from the 240 channel data (see Amati et al. 1999b).
GRB990510 was detected by the GRBM, but the instrument was not triggered to a GRB data acquisition mode, because a previous false event prevented this. Therefore, no high time resolution data have been acquired for this burst and the time resolution is limited to 1 s.
### 2.2 NFI
The NFI include two imaging and two non-imaging instruments. The imaging instruments are the Low-Energy Concentrator Spectrometer (LECS), sensitive from 0.1 to 10 keV (Parmar et al. 1997), and the Medium-Energy Concentrator Spectrometer (MECS), sensitive from 2 to 10 keV (Boella et al. 1997). They both have circular fields of view with diameters of 37$`\mathrm{}`$ and 56$`\mathrm{}`$, respectively. The non-imaging instruments are the Phoswich Detector System (PDS), sensitive from 13 to 300 keV (Frontera et al. 1997), and the Gas Scintillation Proportional Counter, sensitive from 4 to 120 keV (Manzo et al. 1997). In our analysis we used data from the LECS, MECS and PDS.
The 3$`\mathrm{}`$ radius WFC error box of GRB990510 was observed by the NFI from 8.0 to 44.3 hours after the BATSE trigger time, i.e. from MJD 51308.70–51310.22 (UT 1999 May 10.70–12.22). The total LECS, MECS and PDS on-source exposure times were 31.7, 67.9 and 41.5 ksec, respectively.
## 3 Data analysis
### 3.1 Prompt $`\gamma `$-ray emission
The GRBM light curve of the burst is shown in Fig. 1 (top). Two main pulses $``$40 s apart are observed. Between these pulses the GRB flux level is consistent with zero. The first pulse contains two sub-pulses with peak fluxes in the ratio 3:1. The second pulse consists of 5 sub-pulses with the first two having the highest peak flux and the following three being much weaker (by a factor of about 6). The entire GRB duration is 75 s. The GRB fluence in the 40–700 keV band is (1.9$`\pm `$0.2) $`\times 10^5`$ erg cm<sup>-2</sup>, while the peak flux reached in the same energy band is (2.4 $`\pm `$0.2) $`\times 10^6`$ erg cm<sup>-2</sup> s<sup>-1</sup> (all errors quoted in this paper are 1$`\sigma `$ uncertainties, unless noted otherwise.)
We performed a spectral analysis of the prompt emission of GRB990510 in the 70–650 keV band, by following approximately the same procedure used for, e.g., GRB970228 and GRB980329 (see Frontera et al. 1998; In ’t Zand et al. 1998). The average spectrum of the prompt emission of GRB990510 can be satisfactorily described ($`\chi ^2`$ = 6.84 for 9 d.o.f.) by a broken power-law, with a break energy $`E_{\mathrm{break}}=200\pm 27`$ keV and power-law indices before and after the break of $`1.36\pm 0.16`$ and $`2.34\pm 0.24`$, respectively. We note that a fit to the canonical $`\gamma `$-ray burst spectral model as introduced by Band et al. (1993) is also acceptable ($`\chi ^2`$ = 7.65 for 9 dof); however, we could not constrain the value of the power-law index below the peak energy $`E_p`$, $`\alpha _B`$ ($`\alpha _B=0.7\pm 0.8`$). The other Band parameters, i.e. the break energy or cut-off energy, $`E_0`$, and power-law index above $`E_p`$, $`\beta _\mathrm{B}`$, in this fit are $`184\pm 76`$ keV and $`2.68\pm 0.64`$, respectively.
We also studied the spectral evolution of the prompt emission by assuming that a power-law $`F(E)E^\mathrm{\Gamma }`$ connects the two energy bands 40–100 keV and 100–700 keV, and that the burst flux above 700 keV is negligible. The photon index $`\mathrm{\Gamma }`$, computed as a color index between the two energy ranges, is reported in the bottom panel of Fig. 1: the spectrum seems to slightly soften during the first main pulse. When the second main pulse starts, the spectrum is harder before it softens. The indices have typical values for GRBs.
### 3.2 X-ray afterglow
The combined image from MECS units 2 and 3 shows clearly the presence of a previously unknown bright source within the WFC error circle, formerly proposed as the X-ray counterpart of the GRB (Piro et al. 1999b, Kuulkers et al. 1999). It is elongated toward the NNW direction (see Fig. 2). This extension is likely due to the presence of an unresolved point source, partially contaminating the bright source. Another X-ray source is present $``$13 arcmin NNW of the bright source, and outside the WFC error circle of GRB990510. The LECS image, albeit less exposed than the MECS image, also shows a bright source, with a similar extension.
The proximity of the probable X-ray afterglow candidate to the weaker, contaminating source makes an accurate spatial analysis necessary, accounting for the extended tails of the point-spread functions of the LECS and MECS instruments, in order to separate the two point sources. We used different independent approaches to resolve this problem and performed fits of the resulting spectra. We here discuss one of these approaches, the maximum likelihood method, since the main results of this paper are obtained using this approach. The other two approaches serve as a consistency check of the former approach and are described in Appendix A. For all spectral fits, the background was evaluated from blank sky observations, after checking its stability with several source-free regions around the X-ray afterglow. The LECS spectrum was considered only in the 0.2–4 keV interval, due to calibration uncertainties above $``$4 keV. The MECS spectrum was considered in the 2–10 keV range.
In the maximum likelihood method one searches for single point sources on top of a background model (assumed to be flat, i.e. cosmic diffuse X-ray and particle-induced background). With this method, which allows a simultaneous analysis of several sources, one retrieves all photons from the point sources as detected by the instrument (for a more detailed description of the method we refer to Kuiper et al. 1998 and In ’t Zand et al. 2000).
This method shows that the LECS and MECS image can be satisfactorily described by three point sources on top of a flat background model. Their best-fit positions from the MECS measurements are given in Table 1, together with the source designations as given by Kuulkers et al. (1999). In Fig. 2, we show the maximum likelihood map using the MECS data (2–10 keV) sampled over the full duration of the observation. In this figure we also show the best derived WFC position of the prompt emission (Dadina et al. 1999), the Ulysses/GRBM triangulation annulus of the prompt emission (Hurley et al. 2000) and the position of the optical afterglow (Vreeswijk et al. 1999a). Since 1) 1SAX J1338.1$``$8030 is positionally consistent with the position of GRB990510 as derived by the WFC and the triangulation annulus, 2) the optical afterglow of GRB990510 lies within the confidence contours of this X-ray source, and 3) the X-ray emission of 1SAX J1338.1$``$8030 decayed during our observations (see below) we conclude that 1SAX J1338.1$``$8030 is the X-ray afterglow of GRB990510.
We note that no other source near 1SAX J1337.6$``$8027 has been reported previously at other wavelengths, while 1SAX J1336.0$``$8018 lies close (within $``$2$`\mathrm{}`$) to the radio source PMN J1335$``$8016 (Wright et al. 1994).
X-ray spectra of 1SAX J1338.1$``$8030 were generated in 9 energy bins, logarithmically distributed between 0.2 and 4.0 keV for the LECS data, and in 20 energy bands, logarithmically distributed between 1.6 and 10 keV for the MECS data. The best-fit value of the normalization between the LECS and MECS was found to be $``$0.7. This is within the range usually found (0.6–0.9, see e.g., Favata et al. 1997; Piro et al. 1999a). We therefore fixed this normalization to 0.7. For 1SAX J1337.6$``$8027 we performed spectral fits only using the MECS data and fixed the hydrogen column density, N<sub>H</sub>, to that found for 1SAX J1338.1$``$8030. The best-fit parameters of the mean spectra using the maximum likelihood method, and that of the other methods, are given in Table 2. We note that the best-fit values of N<sub>H</sub> are close to that derived for the mean Galactic value from the HI maps by Dickey & Lockman (1990) in the region of GRB990510, i.e. 0.94 $`\times `$ 10<sup>21</sup> atoms cm<sup>-2</sup>. We derive an unabsorbed average flux of $``$1.47 $`\times `$ 10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-1</sup> (2–10 keV) for 1SAX J1338.1$``$8030 during our observation.
To search for possible changes of the afterglow spectral shape during the observation, we logarithmically divided the whole MECS observation into three time bins so that in each time interval there were approximately equal amounts of counts. The spectra are at all times statistically well described by a single power-law, whose index does not change significantly during the afterglow decay (see Table 3).
It has been suggested that (red-shifted) iron K lines may be present in the spectra of X-ray afterglows (Piro et al. 1999a; Yoshida et al. 1999). Since the redshift to GRB990510 has been reported to be $`>`$1.62 (Vreeswijk et al. 1999b), one might expect such a line below 2.5 keV. We do not see, however, clear evidence for lines in this region, neither in the total spectrum nor in the three time intervals, with 90% confidence upper limits on the line intensity of typically 7 $`\times `$ 10<sup>-6</sup> photons s<sup>-1</sup> cm<sup>-1</sup> for the total averaged afterglow spectrum.
The X-ray afterglow of GRB990510 was not detected with the PDS instrument. By assuming a power-law spectrum with a photon index of $``$2.1, the 2$`\sigma `$ upper limits on the flux from the GRB990510 region are 2.6 $`\times `$ 10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-1</sup> and 5.0 $`\times `$ 10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-1</sup>, for the energy ranges 15–30 keV and 15–60 keV, respectively. This is consistent with that estimated from extrapolation of the LECS/MECS spectra.
We obtained light curves in the MECS 2–10 keV range of the individual sources in 15 temporal bins, logarithmically spaced in time since the BATSE trigger (Fig. 3). 1SAX J1338.1$``$8030 clearly fades during our observations. By fitting this decline with a power-law $`I`$(t) $`(\mathrm{t}\mathrm{t}_0)^{\alpha _X}`$, we obtain $`\alpha _X=1.42\pm 0.07`$ ($`\chi ^2`$/dof=10.2/13). The corresponding fit is shown in Fig. 4 (solid line). The count rate of 1SAX J1337.6$``$8027 is consistent with being constant ($`\chi ^2`$=13.4 for 14 d.o.f.) at $``$0.003 cts s<sup>-1</sup>.
## 4 Discussion
GRB990510 ranks among the top 25% of the brightest GRB observed by the GRBM, while it ranks among the top 4% (9%) of the BATSE burst flux (fluence) distribution (Kippen et al. 1999). The mean prompt $`\gamma `$-ray spectrum is well described by a broken power-law, with a break energy of $``$200 keV. The fluence, peak flux and spectrum as measured with the GRBM are comparable to those measured with BATSE. With the repeated pulses and ”hard-to-soft” spectral evolution, the $`\gamma `$-ray light curve and spectral behavior of GRB990510 are reminiscent of GRB970228 (Frontera et al. 1998).
The X-ray counterpart of GRB990510, 1SAX J1338.1$``$8030, is also very bright if compared with other GRB X-ray afterglows (see e.g. Piro 2000), and decays according to a typical power-law with index $``$1.42, which is consistent with that expected in relativistically expanding fireball models (e.g., Wijers, Rees & Mészáros 1997). However, the optical light curve smoothly steepens $``$1–2 days after the prompt $`\gamma `$-ray emission (Stanek et al. 1999; Harrison et al. 1999; Israel et al. 1999; see also Fig. 4). It was found that this steepening occurs at the same time in the different optical bands. To characterize its shape, the (V,R,I)-band data were simultaneously fitted by Harrison et al. (1999) with the following four-parameter function<sup>1</sup><sup>1</sup>1As noted by Harrison et al. (1999), the function which describes the optical (B,V,R,I) light curve by Stanek et al. (1999) and Israel et al. (1999) is different, leading to somewhat different values of the break time, i.e. $``$1.57 days.:
$$F_\nu (t)=f_{}(t/t_{})^{\alpha _1}[1\mathrm{exp}(J)]/J;J(t,t_{},\alpha _1,\alpha _2)=(t/t_{})^{(\alpha _1\alpha _2)},$$
(1)
with $`t_{}=1.20\pm 0.08`$ days, $`\alpha _1=0.82\pm 0.02`$, and $`\alpha _2=2.18\pm 0.05`$. In Fig. 4 we plot the optical R-band data taken in the same time span as the X-ray data, together with the above described function. It is clear that the optical data are not consistent with a power-law decay in that time span. We fitted the X-ray afterglow light curve with the same function as above, while fixing the decay indices to those derived in the optical. We find that the corresponding fits are bad, with $`\chi ^2`$ values of 24–27 for 14 d.o.f., depending on which parameter values we use among those reported by the different authors (Stanek et al. 1999, Harrison et al. 1999, Israel et al. 1999).
A steepening in the light curves can be expected in the fireball model when the cooling frequency moves towards lower frequencies in the observed frequency range. In that case the decay index $`\alpha `$ changes by 0.25 (Sari et al. 1998). However, the steepening of the optical decay is independent of wavelength (or achromatic) and the optical decay index $`\alpha `$ changes by $``$$``$1.36 (Harrison et al. 1999). We provide additional evidence against a changing cooling frequency. In that case one would expect the optical decay index to be similar to that in the X-ray band, in contrast to what is observed.
It has recently been realized that not all afterglow light curves are consistent with emission from expanding shells that are spherically symmetric, and that beaming may be important (i.e., jets; see e.g. Sari et al. 1999; Rhoads 1999). Such jets explain the presence of the steepening observed in the optical afterglow light curves of GRB990510 (e.g. Harrison et al. 1999). Sari et al. (1999) presented general expressions for the expected spectral and decay indices, appropriate for both spherical shell and jet evolutions shortly after the $`\gamma `$-ray event. Our observed X-ray spectral index of $`-1.03\pm 0.08`$ implies a value of the index $`p`$ of the electron energy distribution in the expanding material of $`p\simeq 2.1`$ in the case of fast cooling (i.e., when the cooling frequency is below the X-ray range). In the alternative case (i.e., the cooling frequency is above the X-ray range) we derive $`p\simeq 3.1`$. Harrison et al. (1999) found that the optical light curves can only imply $`p\simeq 2.1`$, where the cooling frequency is above the optical wavelength range. Therefore, we conclude that the cooling frequency is between the optical and X-ray wavelengths. Note that the cooling frequency stays constant for a spreading jet (Sari et al. 1999).
At early times after the burst the decay light curve of a collimated source is identical to that of a spherical one, since then only a small portion of the emitting surface is visible due to relativistic beaming (the opening angle then is $`1/\gamma `$, where $`\gamma `$ is the Lorentz factor). In that case the decay index, $`\alpha `$, is expected to be $`(3p-2)/4\simeq 1.1`$ in the case of fast cooling (i.e. steeper than in the optical: $`(3p-1)/4\simeq 1.3`$; Sari et al. 1999). As the fireball evolves, $`\gamma `$ decreases, and the beaming angle will eventually exceed the jet opening angle. At that time one will see a break in the light curve, with $`\alpha =p\simeq 2.1`$, while the optical and X-ray decay indices are similar after the break. Therefore, we fitted the X-ray afterglow light curve again, now fixing $`\alpha _1`$ and $`\alpha _2`$ to ∼1.1 and ∼2.1, respectively, and $`t_{*}`$ to that found in the optical. This leads to good fits with $`\chi ^2`$ values of about 12 for 14 d.o.f. The corresponding fit is also shown in Fig. 4 (dotted line) with extrapolations to the boundaries of the plot. This shows that the observed X-ray afterglow of GRB990510 is consistent with the jet interpretation. As evident from Fig. 4, X-ray observations of the very early afterglow, or of the period long after the break time, could have clearly discriminated whether the X-ray afterglow light curve is described by a single power law or follows the jet interpretation.
We conclude that, even though we could not distinguish a clear break in the X-ray light curve, the only explanation within the fireball model consistent with the X-ray and optical data is a jet evolution, where the cooling frequency lies between the optical and X-ray wavelengths. Future observations of afterglows at late times with the recently launched X-ray observatories (Chandra and XMM-Newton) may provide direct evidence of such a temporal X-ray feature.
The BeppoSAX mission is a joint Italian and Dutch program. We thank M.R. Daniele, S. Rebecchi (SDC, Telespazio, Rome), G. Scotti (SOC, Telespazio, Rome) and G. Gennaro (OCC, Telespazio, Rome), for their prompt help in coordinating the ToO observations and the preparation of the FOT. We made use of the SIMBAD astronomical database.
## Appendix A Other image analysis methods
### A.1 ‘SPEX’ method
One of the other methods to investigate source-rich regions is currently implemented in the X-ray spectral fitting code SPEX (Kaastra et al. 1996). This approach consists of simultaneously fitting the spectra from different detector sections, taking into account the spill-over of photons coming from one sky section into another detector section. For a detailed description of this method we refer to Vink et al. (2000) and Kaastra et al. (2000). The detector sections we used are two circles of 3 arcmin radius (which limits the effect of contamination due to source proximity), centered on the positions of 1SAX J1338.1−8030 and 1SAX J1337.6−8027 obtained from the maximum likelihood method. Note that in the case of the LECS a non-negligible fraction of photons will lie outside the extraction regions, especially at low energies (∼20% at ∼1 keV for a 3 arcmin extraction radius), due to its relatively large point spread function; this leads to some degradation in the sensitivity at these energies. The extraction of the spectra and the generation of response matrices take into account the characteristics of both the LECS and MECS. The spectral resolution is oversampled by the LECS and MECS energy channels; we therefore rebinned the spectra so that each resolved energy bin contains roughly three spectral channels.
The hydrogen absorption column density, N<sub>H</sub>, was forced to be the same for 1SAX J1338.1−8030 and 1SAX J1337.6−8027, since leaving both free led to unstable fits for 1SAX J1337.6−8027. The best-fit value of the normalization between the LECS and MECS was found to be ∼0.8, and we therefore fixed it to 0.8. The spectra of 1SAX J1338.1−8030 and 1SAX J1337.6−8027 are simultaneously well described by power-law models (subject to interstellar absorption). The best-fit parameters for both sources are given in Table 2.
### A.2 “Canonical” method
We also evaluated the spectral fitting results offered by the maximum likelihood method by following, for the spectral extraction, the classical method, which is strictly appropriate only for isolated point sources. We extracted the spectra within circles of radius 8 and 4 arcmin, for the LECS and MECS respectively, centered on the best-fit position of 1SAX J1338.1−8030 obtained with the maximum-likelihood method. The resulting spectra were also rebinned to bin sizes corresponding to roughly one third of the detector spectral FWHM resolution, with the additional constraint that each bin contained at least 20 counts.
Since the fitted centroids of 1SAX J1338.1−8030 and 1SAX J1337.6−8027 are only 3.8 arcmin apart, the resulting spectra clearly contain the summed contribution of the two sources. The results of the spectral fits are therefore expected to be reasonable only if the X-ray flux of 1SAX J1337.6−8027 is negligible with respect to that of 1SAX J1338.1−8030 and/or if 1SAX J1337.6−8027 does not vary over the observed time interval. Since the maximum likelihood method showed that the spectrum and emission level of 1SAX J1337.6−8027 do not significantly vary with time, we fixed the contribution from 1SAX J1337.6−8027 to that found by the SPEX method and fitted the afterglow spectrum leaving the hydrogen column density as a free parameter. The best-fit parameters are also reported in Table 2.
## 1 Introduction
It has been shown that for a dynamical relaxation process in which a system, evolving according to the dynamics of model A , is quenched from very high temperature to the critical temperature, a universal dynamical scaling behaviour emerges already within the short-time regime. This rather unexpected scaling seems to exist because spatial correlations and the correlation time diverge simultaneously as the system approaches the long-time regime at the critical temperature. For the short-time regime, the finite-size scaling form of the time evolution of the $`k`$-th moment of the magnetization is written as
$$M^{(k)}(t,ϵ,m_0)=b^{-k\beta /\nu }M^{(k)}(t/\tau (L),b^{1/\nu }ϵ,b^{x_0}m_0).$$
(1)
Here $`b`$ is the scale change, $`L`$ is the linear dimension of the system, $`\beta `$ and $`\nu `$ are the well known static critical exponents, $`\tau `$ is the autocorrelation time, and $`ϵ=(T-T_c)/T_c`$ is the reduced temperature. Short-time dynamic behaviour also requires a new independent critical exponent $`x_0`$, which is the scaling dimension of the initial magnetization $`m_0`$. It has been shown numerically that dynamic scaling exists even at the very early stages of the relaxation process.
Rigorous formulation of finite-size scaling for first-order phase transitions has resulted in a better understanding of the dynamics of first-order phase transitions. In this formalism it has been shown that the phase transition is governed by the surface tension between the ordered and disordered phases. The system tunnels between these two metastable states, and these transitions are observed during simulation studies of the long-term behaviour of the system. For finite systems undergoing first-order transitions, the autocorrelation time $`\tau `$ of the relaxation process has been calculated for cluster algorithms and is given as
$$\tau =L^{d/2}\mathrm{exp}(\sigma _{od}L^{d-1}),$$
(2)
where $`d`$ is the dimensionality of the system. This form of $`\tau `$ can be used to identify the order of the phase transition.
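For orientation, Eq. (2) can be inverted by a linear fit, since $`\mathrm{ln}(\tau L^{-d/2})=\sigma _{od}L^{d-1}+\mathrm{const}`$. A minimal sketch with made-up $`\tau (L)`$ values, not actual simulation output:

```python
import numpy as np

d = 2  # dimensionality of the lattice

# Placeholder autocorrelation times tau(L) for a set of linear sizes L:
L = np.array([42.0, 60.0, 72.0, 90.0, 96.0, 102.0])
tau = 3.0 * L**(d / 2) * np.exp(0.008 * L**(d - 1))  # synthetic, sigma_od = 0.008

# Linearize Eq. (2): ln(tau * L**(-d/2)) = sigma_od * L**(d-1) + const.
y = np.log(tau * L**(-d / 2))
sigma_od, const = np.polyfit(L**(d - 1), y, 1)
print(f"sigma_od = {sigma_od:.4f}")
```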
In a series of previous works , behavioural differences between first- and second-order phase transitions have been studied. In these works, an empirically distinct change in the time evolution of the observables in the initial stages of the simulation gave a clear indication that first- and second-order phase transitions are grouped into two different evolutionary processes. Since the short-time dynamic behaviour of second-order phase transitions is well understood in terms of the dynamic scaling formalism , scaling for first-order transitions may, in analogy with second-order transitions, be put on a more rigorous footing. The success of the finite-size scaling arguments and the explicit form of $`\tau `$ given in Eq.(2) led us to study the existence of short-time dynamic scaling in first-order phase transitions. In first-order phase transitions the singularities are governed by the volume of the system; hence the thermal and magnetic critical indices are replaced by the dimension of the system. Combining this information with Eq.(2), we have formulated the dynamic scaling form of various operators in analogy with Eq. (1). In this work, our aim is to show that a system exhibiting a first-order phase transition obeys well defined dynamic finite-size scaling rules during quenching from the disordered state to the infinite-lattice transition temperature. We have studied the short-time relaxation processes using the $`q=6`$ and $`7`$ state Potts models, which are known to exhibit first-order phase transitions. In this model we studied the time evolution of the order parameter, the largest cluster and the Binder cumulant .
## 2 Model and Method
The Hamiltonian of the $`q`$-state Potts model is given as
$$\beta H=-\underset{<ij>}{\sum }K\delta _{s_i,s_j}$$
(3)
where the spin $`s`$ can take the values $`1,\mathrm{},q`$, $`\beta =1/k_BT`$ is the inverse temperature, $`K=J/(k_BT)`$, $`\delta `$ is the Kronecker delta function, and the sum runs over all nearest-neighbour pairs on the two-dimensional lattice. In equilibrium the $`q`$-state Potts model is exactly solvable. The critical point is located at $`K_c=\mathrm{log}(1+\sqrt{q})`$. In principle, any type of dynamics can be given to the system to study the non-equilibrium time evolution. In this work, we used the nonconserved dynamics of model $`A`$ . In order to study dynamic scaling in systems exhibiting first-order phase transitions the following operators are considered:
1. Moments of the order parameter ($`M`$)
$$M^{(k)}=\left(\frac{q\rho ^\alpha -1}{q-1}\right)^k$$
(4)
$`\rho ^\alpha =N^\alpha /L^d`$, $`N^\alpha `$ being the number of spins with $`s=\alpha `$, $`L`$ the linear size and $`d`$ the dimensionality of the system.
2. Binder cumulant ($`B`$)
$$B=1-\frac{M^{(4)}}{3\left(M^{(2)}\right)^2}$$
(5)
3. Largest cluster ($`C_m`$)
$$C_m=\frac{1}{L^d}N_{C_m}$$
(6)
$`N_{C_m}`$ is the number of spins belonging to the largest cluster in each configuration, so $`C_m`$ gives the time evolution of the relative size of the largest cluster found in each configuration. This quantity scales like the susceptibility; hence, at a first-order phase transition, it grows like the volume. (A minimal numerical sketch of these observables is given below.)
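The sketch below is our own illustration of these observables, evaluated on a single given configuration (the actual simulations generate configurations with the Wolff algorithm); the majority species is taken as the $`\alpha `$ of Eq. (4), and geometric clusters of equal spins are used for Eq. (6).

```python
import numpy as np
from scipy.ndimage import label

def potts_observables(spins, q):
    """Measure M (Eq. 4, k=1) and C_m (Eq. 6) on one L x L configuration
    with spin values 1..q. The Binder cumulant of Eq. (5) then follows
    from sample averages of M**2 and M**4 over many configurations."""
    n_sites = spins.size
    counts = np.bincount(spins.ravel(), minlength=q + 1)[1:]
    m = (q * counts.max() / n_sites - 1.0) / (q - 1.0)
    # Largest geometric cluster of equal spins (periodic boundaries ignored):
    n_max = 0
    for alpha in range(1, q + 1):
        labels, n = label(spins == alpha)
        if n:
            n_max = max(n_max, np.bincount(labels.ravel())[1:].max())
    return m, n_max / n_sites

conf = np.random.default_rng(1).integers(1, 7, size=(42, 42))  # random q=6 state
print(potts_observables(conf, q=6))
```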
For first-order phase transitions, since the static critical exponents are replaced by the dimension of the system, rather than calculating the static critical indices one can test the validity of the dynamic scaling assumption at the initial stages of the simulation and obtain the surface tension as a result of the scaling. For computational simplicity, the initial magnetization $`m_0`$ is set to zero. For second-order phase transitions, the finite-size behaviour of the magnetization is given by Eq.(1). Here, $`\beta /\nu =d-Y_H`$. Since $`Y_H`$ and $`Y_T`$ are equal to the dimension of the system for first-order phase transitions, the order parameter ($`M^{(1)}`$), the Binder cumulant ($`B`$) and the largest cluster ($`C_m`$) scale according to
$$f_{L_1}(t/\tau (L_1),0,L_1)=f_{L_2}(t/\tau (L_2),0,L_2)$$
(7)
where $`\tau (L)`$ is the autocorrelation time of the lattice with linear size $`L`$. Application of this form to the data can demonstrate scaling for lattices of various sizes, as sketched below. In the following section we present our results.
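A hedged sketch of such a scaling collapse, using synthetic relaxation curves assumed to share a common master curve, is:

```python
import numpy as np

def tau_first_order(L, d=2, sigma_od=0.008):
    # Autocorrelation time of Eq. (2), with the prefactor set to 1.
    return L**(d / 2) * np.exp(sigma_od * L**(d - 1))

# Synthetic relaxation data assumed to follow a common master curve F(t/tau):
master = lambda s: 1.0 - 0.8 * np.exp(-s)
curves = {}
for L in (42, 72, 102):
    t = np.arange(1, 5001, dtype=float)
    s = t / tau_first_order(L)
    curves[L] = (s, master(s))

# Test Eq. (7): interpolate every size onto a common grid of s = t/tau(L)
# and check that the rescaled curves coincide.
s_grid = np.linspace(0.05, 2.0, 50)
collapsed = [np.interp(s_grid, s, f) for s, f in curves.values()]
spread = np.max(np.ptp(collapsed, axis=0))
print(f"maximum spread between rescaled curves: {spread:.2e}")
```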
## 3 Results and Discussions
Following the considerations presented in the previous section, we have studied the two-dimensional $`q=6`$ and $`7`$ state Potts models evolving in time according to the dynamics of model $`A`$ . Our main objective is to observe dynamic scaling; hence we have prepared lattices with vanishing order parameter, avoiding the complications of the extra parameter $`x_0`$. This is achieved for the $`q=6`$ and $`7`$ state Potts models by choosing the lattice sizes as integer multiples of $`q`$. Totally random initial configurations are quenched to the corresponding infinite-lattice transition temperature. Simulations are performed on $`6`$ different lattice sizes using the Wolff cluster update algorithm. For each $`q`$ and $`L`$ the averages are taken over $`10000`$ different samples. Errors are calculated by dividing the samples into ten subsamples. As the lattice size grows, the number of iterations needed for thermalization grows according to the growing tunneling time (Eq. 2). For $`q=7`$ and the larger lattices, up to $`30000`$ iterations are necessary for thermalization. The chosen lattice sizes are $`L=42,60,72,90,96,102`$ and $`L=35,49,63,77,91,105`$ for $`q=6`$ and $`q=7`$ respectively.
The two-dimensional $`q`$-state Potts model is known to undergo a first-order phase transition for $`q>4`$ . Even though the $`q=7`$ state Potts model exhibits strong first-order behaviour, the correlation length is about $`50`$ lattice sites. Hence for $`q=7`$ the largest lattices are expected to show good scaling behaviour without any need for correction-to-scaling terms. For smaller lattices, however, one needs to consider the corrections to scaling according to the finite-size scaling theory for first-order phase transitions. The general form of the corrections to scaling is a polynomial in $`\frac{1}{L^d}`$, which can be written as
$$A_L=A_0(1+\frac{A_1}{L^d}+\frac{A_2}{L^{2d}}+\cdots ).$$
(8)
This form indicates that all of the observables scale if one determines $`A_0`$ by fitting the correction-to-scaling terms . The corrections to scaling play an even more profound role for the $`q=6`$ state Potts model, where the correlation length is larger than even the largest lattice. The correction to scaling for each observable is obtained by fitting the averages, taken over $`10000`$ iterations after thermalization, to Eq. (8), and the expansion coefficients $`A_1,A_2`$,.. are calculated for $`q=6`$ and $`7`$.
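Since Eq. (8) is linear in the products $`A_0`$, $`A_0A_1`$, $`A_0A_2`$ once expanded, the coefficients can be obtained by linear least squares. A minimal sketch with invented coefficient values:

```python
import numpy as np

def fit_corrections(L, A_L, d=2, order=2):
    # Fit Eq. (8), A_L = A0*(1 + A1/L^d + A2/L^(2d) + ...), which is linear
    # in the expanded coefficients A0, A0*A1, A0*A2, ...
    x = 1.0 / L**d
    design = np.vander(x, order + 1, increasing=True)  # columns 1, x, x^2, ...
    coef, *_ = np.linalg.lstsq(design, A_L, rcond=None)
    A0 = coef[0]
    return A0, coef[1:] / A0  # (A0, [A1, A2, ...])

L = np.array([42.0, 60.0, 72.0, 90.0, 96.0, 102.0])
A_L = 0.5 * (1.0 + 30.0 / L**2 - 2.0e3 / L**4)  # synthetic thermal averages
A0, higher = fit_corrections(L, A_L)
print(f"A0 = {A0:.4f}, A1 = {higher[0]:.1f}, A2 = {higher[1]:.1f}")
```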
In Figure 1.a, the time evolution of the order parameter is plotted for $`q=6`$. As one can observe, for each lattice size, starting from a totally random configuration with $`m_0=0`$, the order parameter evolves to a plateau. For large enough lattices, since $`Y_H-d`$ vanishes for first-order phase transitions, one can expect the same long-term behaviour for all lattice sizes. In fact this is the case, within the error bars, for our two largest lattices. In order to see scaling for the smaller lattice sizes we have performed long runs after thermalization, and the correction-to-scaling terms (Eq. 8) are fitted to the order parameter values. In Figures 1.a and 1.b the raw data are presented for $`q=6`$ and $`7`$ respectively; Figures 1.c and 1.d show the scaled forms of these data. For $`q=7`$, the correction to scaling is almost negligible for lattices larger than $`L=65`$.
Similarly, for the averages of the maximum cluster, which is expected to grow like the volume, we have observed that the same scaling behaviour exists. Figures 2.a and 2.b show the Monte Carlo data for $`q=6`$ and $`q=7`$ respectively; Figures 2.c and 2.d are the scaled forms of these data.
The last quantity that we have observed is the Binder cumulant. The Binder cumulant is a scaling function and is also a ratio of two quantities of equal anomalous dimensionality. Hence, the correction-to-scaling terms are almost negligible even for very small lattices.
These scaling studies enable us to calculate the order-disorder surface tension $`2\sigma _{od}`$. The surface tensions $`2\sigma _{od}`$ of the $`q=6`$ and $`q=7`$ state Potts models are calculated from the autocorrelation times of the relaxation processes for the observables. In Table 1 we present the values of $`2\sigma _{od}`$ obtained from the relaxation of the three different quantities. Depending on the quantity, the value of the surface tension is observed to vary slightly. Nevertheless, the surface tension, within error bars, is $`0.008\pm 0.001`$ and $`0.017\pm 0.004`$ for $`q=6`$ and $`q=7`$ respectively. The error on the surface tension can be taken as the fluctuation of the values obtained using the different operators.
## 4 Conclusions
In conclusion, we have numerically simulated the dynamic relaxation process of the two-dimensional $`q=6`$ and $`7`$ state Potts models starting from random initial states with vanishing initial order parameter. In this preliminary work we have investigated the dynamical scaling properties of first-order phase transitions. This work is based on two well-established facts: the autocorrelation time of the critical relaxation in first-order phase transitions is given by instanton calculations , and all infinities of the thermodynamic quantities are governed by the volume of the system . Under these assumptions one may expect any thermodynamical quantity to exhibit dynamical scaling once the correction-to-scaling terms are taken into account.
We have demonstrated that for first-order phase transitions a universal scaling behaviour emerges already in the macroscopic short-time regime of the dynamical evolution. This scaling behaviour closely resembles the dynamic scaling which exists in second-order phase transitions. Furthermore, such a scaling opens new and alternative methods of calculating the surface tension, and it can be used to distinguish weak first-order phase transitions from second-order ones.
## Figure captions
Figure 1. (a) and (b) are the time evolution of the order parameter M, and (c) and (d) are their scaled forms, for the $`q=6`$ and $`7`$ state Potts models respectively. (The error bars are omitted from the scaled forms for clarity of the figures.)
Figure 2. Same as fig. 1 but plots are for the maximum clusters.
# Test of the Quantum Chaoticity Criterion for Diamagnetic Kepler Problem
## 1 Introduction
It has been believed for decades that the main feature of a classically chaotic system is the instability of its trajectories to minor variations of the initial conditions. Since the concept of a trajectory in phase space does not apply in quantum mechanics, the possibility of quantum chaos is still open to discussion. It was suggested to look instead for the “quantum signatures of classical chaos”. The only more or less generally accepted “signature” found up to now is the Wigner level repulsion. In this paper we proceed with the illustration of our alternative suggestions \[2-5\] concerning the definition of chaos for Hamiltonian quantum (and classical) systems.
According to the Liouville-Arnold theorem of classical mechanics, a Hamiltonian system with $`N`$ degrees of freedom is regular if it has $`M=N`$ independent global integrals of motion. If the number $`M`$ of global integrals becomes less than $`N`$, the system becomes chaotic. The well-known Noether theorem connects the existence of the global integrals of the system with the symmetries of its Hamiltonian. According to this theorem, breaking the symmetry of an initially regular system decreases the number of its independent global integrals of motion. Thus the system becomes chaotic only in the case of a symmetry-breaking which makes the number $`M`$ of global integrals less than $`N`$.
Our first (and major) suggestion is to generalize this definition of chaoticity to the case of quantum systems. Since the concept of symmetry (unlike that of the trajectory) is universal for both classical and quantum mechanics, this generalization seems to be quite straightforward - one should simply substitute the integrals of motion by the corresponding ’good’ quantum numbers resulting from the symmetries of the quantum Hamiltonian. This approach immediately allows one to treat the only generally accepted signature of quantum chaos - Wigner’s level repulsion - as a signature of the symmetry-breaking leading to chaos. Indeed, a general property of a highly symmetrical regular quantum system is the high degeneracy of its eigenstates. The immediate consequence of a perturbation breaking the original symmetry is the removal of this degeneracy (in other words, Wigner’s level repulsion). It is worthwhile to recall that Wigner’s level repulsion was first observed for the resonance states of the compound nucleus, whose only good quantum numbers are the energy and spin. This property comes from the fact that the symmetries of the nuclear mean field are destroyed by the pair-wise “residual” interactions \[2–5\].
Our second (rather technical) suggestion is to use the concept of the spreading width $`\mathrm{\Gamma }_{spr}`$ (and the related criterion $`\ae `$) as a sensitive measure of the symmetry-breaking of the Hamiltonian $`H_0`$ caused by the perturbation $`V`$. Indeed, consider the Hamiltonian $`H`$ of a non-integrable system as a sum:
$$H=H_0+V$$
(1)
of the highly symmetrical regular Hamiltonian $`H_0`$ (say, of non-interacting particles or quasi-particles in the spherically-symmetrical mean field):
$$H_0\varphi _k=ϵ_k\varphi _k$$
(2)
and of the perturbation $`V`$ which destroys the symmetries of $`H_0`$ (the pair-wise particle-particle forces in the nuclear case). Expand now the eigenstates $`\psi _i`$ of $`H`$ over the “regular” basis $`\varphi `$:
$$\psi _i=\underset{k}{\sum }c_i^k\varphi _k$$
(3)
and look for the probability $`P_k(E_i)=|c_i^k|^2`$ of finding the original “regular” component $`\varphi _k`$ in the different eigenstates $`\psi _i`$ (with eigenenergies $`E_i`$) of our nonintegrable system. It is obvious that for sufficiently small perturbations $`V`$ the probability $`P_k(E_i)`$ is centered around the “original” energy $`ϵ_k`$ and tends to saturate to unity over some characteristic energy interval $`\mathrm{\Gamma }_{spr}`$, which is called the “spreading width” of the initially unperturbed state $`\varphi _k`$. Various realistic models (see e.g. chapter 2 of ref.) give a Lorentzian shape for the energy dependence of the strength function:
$$S_k(E_i)=\frac{|c_i^k|^2}{D}\simeq \frac{1}{2\pi }\frac{\mathrm{\Gamma }_{spr}}{(E_i-ϵ_k)^2+\mathrm{\Gamma }_{spr}^2/4}$$
(4)
where $`D`$ is the average level spacing of the nonintegrable system. A slight generalization of the derivation given in ref. allows one to express the spreading width in terms of the “mean square root” matrix element $`\stackrel{~}{v}=\sqrt{<v^2>}`$ of the interaction $`V`$ mixing the basic states $`\varphi `$ (angular brackets imply averaging over all the basic components admixed by $`V`$ to a given one):
$$\mathrm{\Gamma }_{spr}\simeq \stackrel{~}{v}\sqrt{N_d}$$
(5)
Here $`N_d`$ stands for the degeneracy rank of the initial level $`ϵ_k`$.
Thus the system formally becomes nonintegrable as soon as $`\mathrm{\Gamma }_{spr}`$ deviates from zero. However, while the ratio
$$\ae =\frac{\mathrm{\Gamma }_{spr}}{D_0}$$
(6)
(where $`D_0`$ is the level spacing of the initial regular system) is smaller than unity, the traces of the initial good quantum numbers are quite obvious as isolated maxima of the strength function. We can easily distinguish between the maxima corresponding to the different values of the originally good quantum numbers. This is the analogue of the classical “weak chaos” governed by the KAM theorem. When $`\ae `$ exceeds unity these traces of regularity disappear, since it becomes impossible to distinguish between the successive maxima of the strength function corresponding to the different values of the original quantum numbers $`k`$. This situation is the quantum analogue of the smearing out and disappearance of the invariant tori. It means that we approach the domain of “global” or “hard” chaos.
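These statements can be visualized in a toy random-matrix realization of Eqs. (1)-(6): a regular $`H_0`$ with $`N_d`$-fold degenerate levels spaced by $`D_0`$, perturbed by a random symmetric $`V`$ of rms element $`\stackrel{~}{v}`$. The sketch below is an illustration under these assumptions, not a realistic nuclear calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regular Hamiltonian H0: n_lev levels spaced by D0, each N_d-fold degenerate.
n_lev, N_d, D0, v = 20, 10, 1.0, 0.05
eps = np.repeat(np.arange(n_lev) * D0, N_d)
H0 = np.diag(eps)

# Symmetry-breaking perturbation V: random real symmetric matrix, rms element v.
V = rng.normal(0.0, v, (eps.size, eps.size))
V = (V + V.T) / np.sqrt(2.0)

E, C = np.linalg.eigh(H0 + V)        # columns of C are the eigenstates psi_i

# Strength P_k(E_i) = |c_i^k|^2 of one mid-spectrum basis state phi_k:
k = eps.size // 2
P_k = C[k, :]**2
print(f"total strength of phi_k recovered: {P_k.sum():.3f}")

# Estimates of Eq. (5) and of the chaoticity criterion of Eq. (6):
gamma_spr = v * np.sqrt(N_d)
print(f"Gamma_spr ~ {gamma_spr:.3f},  ae ~ {gamma_spr / D0:.3f}")
```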
Fourier transforming Eq. (4), one can show (see e.g. ) that $`\mathrm{\Gamma }_{spr}/\hbar `$ defines the rate of decay of the “regular” states $`\varphi `$ resulting from the instability caused by the perturbation $`V`$. One can even form wave packets $`|A>`$ of the states $`\varphi _k`$ and analyze the recurrence probabilities $`P(t)=|<A(t)|A(0)>|^2`$. This analysis shows periodic recurrences with the classical period $`T`$, modulated exponentially by the factor $`\mathrm{exp}(-\mathrm{\Gamma }_{spr}t/\hbar )`$ arising from the above instability. Combining these results with the results of Heller’s wave-packet experiments (see e.g. or paragraph 15.6 of ref. ), one can show that the quantity $`\mathrm{\Gamma }_{spr}/\hbar `$ transforms in the classical limit into the Lyapunov exponent $`\mathrm{\Lambda }`$:
$$\frac{\mathrm{\Gamma }_{spr}}{\hbar }\rightarrow \mathrm{\Lambda }$$
(7)
The corresponding classical limit for the dimensionless chaoticity criterion is:
$$\ae \rightarrow \frac{\mathrm{\Lambda }T}{2\pi }=\frac{\chi }{2\pi }$$
(8)
where $`T`$ is the classical period and $`\chi `$ is the stability parameter of the classical monodromy matrix (see, e.g. ).
Thus the particular quantity $`\mathrm{\Gamma }_{spr}`$ and the parameter $`\ae `$ seem to be more accurate numerical measures of quantum chaoticity than the level distribution law - this is supported by the nuclear physics experience \[2-4\] and by their application to one of the most popular cases of transition from regularity to chaos in classical mechanics - the Henon-Heiles problem .
## 2 Diamagnetic Kepler Problem
Another very popular model for studies of the transition from regularity to chaos in classical mechanics is the non-relativistic hydrogen atom in a uniform magnetic field (see e.g. ) with the Hamiltonian:
$$H=p^2/2m-e^2/r+\omega l_z+\frac{1}{2}m\omega ^2(x^2+y^2)$$
(9)
Here the frequency $`\omega =eB/2mc`$ is half the cyclotron frequency and $`B`$ is the strength of the magnetic field acting along the z-axis. The dimensionless field strength parameter $`\gamma =\hbar \omega /\mathrm{Ry}`$ (here Ry denotes the Rydberg energy) is usually combined with the electron energy $`E`$ to produce the scaled energy $`ϵ=E\gamma ^{-2/3}`$. When the scaled energy varies from $`-\infty `$ (for $`B=0`$) to 0 (for $`B=\infty `$) the regular motion of the system becomes more and more chaotic. The fraction $`R`$ of available phase space covered by regular trajectories was calculated in refs. \[9 - 10\] as a function of the scaled energy for the case of $`l_z=0`$ (see Fig.1), showing the rapid chaotization of the system in the range $`-0.48\le ϵ\le -0.125`$.
We analyzed the quantum analogue of this system (with the Hamiltonian (9)) along the same lines as was done in for the quantum Henon-Heiles problem; namely, we traced the gradual destruction of the $`O(4)`$ symmetry, characteristic of the unperturbed motion in the Coulomb potential, by the external magnetic field B. In other words, we traced the disappearance of the “good” quantum numbers (integrals of motion) which characterize the regular motion in this potential. In order to do this, we diagonalized the Hamiltonian matrix (9) in parabolic coordinates on the basis of purely Coulomb wave functions $`\varphi _{n_1n_2m}`$, whose eigenvalues in the unperturbed case are defined by the principal quantum number n:
$$n=n_1+n_2+|m|+1$$
and are highly ($`n^2`$ times) degenerate. Diagonalizing the Hamiltonian matrix, we obtained the new eigenvalues $`E_i`$ and the eigenstates $`\psi _i`$ in terms of the expansion coefficients $`c_i^k`$ (see Eq. (3)). As a next step, we plotted the energy distribution of Eq. (4) for the squared coefficients of the $`n`$-th shell over the “new” eigenstates. Fig. 2 shows examples of these distributions for $`n=10`$, $`m=0`$ and the magnetic field $`\gamma `$ equal to $`4\times 10^{-4}`$, $`6\times 10^{-4}`$, $`8\times 10^{-4}`$ and $`12\times 10^{-4}`$, respectively. In order to increase the statistical accuracy, we performed an averaging over all the components of the basis with the same $`n`$ value, as is usually done in nuclear physics and as was done in the case of the quantum Henon-Heiles problem . Assuming now that the shape of these distributions is approximately Lorentzian, like that of the neutron strength function in nuclear physics, we define $`\mathrm{\Gamma }_{spr}`$ as the energy range around the maximum over which the sum of the squares of the coefficients $`\sum _i|c_i^k|^2`$ saturates to 0.5. The values of $`\mathrm{\Gamma }_{spr}`$ thus obtained were then divided by the level spacing $`D_0`$ between the adjacent maxima of the strength function to give the desired parameter $`\ae `$. The plot of this parameter versus the scaled energy $`ϵ`$ is given in Fig. 1.
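The 0.5-saturation prescription can be stated as a small routine. The sketch below is our paraphrase of the procedure, tested on an ideal Lorentzian of Eq. (4) rather than on the actual diagonalization output:

```python
import numpy as np

def spreading_width(E, P):
    # Smallest symmetric window around the maximum of the strength
    # distribution that contains half of the total strength.
    order = np.argsort(E)
    E, P = E[order], P[order] / P[order].sum()
    e0 = E[np.argmax(P)]
    for w in np.linspace(0.0, E.max() - E.min(), 2000):
        if P[np.abs(E - e0) <= w / 2].sum() >= 0.5:
            return w
    return E.max() - E.min()

E = np.linspace(-5.0, 5.0, 2001)
P = 0.2 / (2 * np.pi) / (E**2 + 0.2**2 / 4)   # Lorentzian, Gamma_spr = 0.2
D0 = 1.0                                      # spacing of adjacent maxima
gamma = spreading_width(E, P)
print(f"Gamma_spr = {gamma:.2f}, ae = {gamma / D0:.2f}")
```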
We see that our parameter reaches the critical value of $`\ae =1`$ at the critical scaled energy $`ϵ\simeq -0.45`$, in fairly good agreement with the classical critical value $`ϵ\simeq -0.48`$ of refs. . It is worthwhile to recall here that in previous studies of the quantum diamagnetic Kepler problem \[11 - 14\] the existence was pointed out of an approximately good quantum number $`K`$, corresponding to the eigenvalues of the operator $`\mathrm{\Sigma }`$ built as a combination of the Runge-Lenz vector $`A`$:
$$\mathrm{\Sigma }=4A^2-5A_z^2$$
(10)
The eigenstates of this operator are obtained by prediagonalization of the unperturbed Coulomb basis within a single manifold $`n`$ (which physically corresponds to values of our $`\ae \lesssim 1`$). The appreciable $`K`$-mixing (disappearance of the integral of motion $`\mathrm{\Sigma }`$) starts when $`\gamma ^2n^7\simeq 16`$. In our case of $`n=10`$ this corresponds to the scaled energy $`ϵ\simeq -0.45`$.
## 3 Conclusion
Thus we have confirmed once more the plausibility of the suggested approach to quantum chaoticity, based on its connection with the symmetry-breaking of the regular motion which makes the number $`M`$ of the system’s global integrals of motion less than the number $`N`$ of its degrees of freedom. We have also demonstrated that the spreading width $`\mathrm{\Gamma }_{spr}`$ and the dimensionless parameter $`\ae `$ may serve as a good quantitative criterion of quantum chaoticity. As in the case of the Henon-Heiles problem , the critical scaled energy value $`ϵ_c`$ at which the parameter $`\ae `$ reaches unity corresponds to the onset of “global” chaos in the classical phase portrait of the diamagnetic Kepler problem. Here, however, the origin of the approximate regularity of the perturbed system for $`ϵ\lesssim ϵ_c`$ is more evident. Although formally the external magnetic field makes the system nonintegrable by reducing the number of global integrals of motion to $`M=2`$ (energy and $`l_z`$), the third approximate integral of motion ($`\mathrm{\Sigma }`$) survives much longer, making the system practically regular.
We should add in conclusion that the importance of studying the particular example of the hydrogen atom in a uniform magnetic field has been stressed (see, e.g. ) because it “is not an abstract model system but a real physical system that can be and has been studied in the laboratory”. These studies were indeed started in 1986 (see ). One should point out, however, that the atomic nucleus is also “not an abstract model”, and its experimental and theoretical studies have been going on for more than half a century. As we have already mentioned, Wigner developed his random matrix approach in order to describe the experimentally observed properties of compound nuclear resonances. Since that time nuclear physics has accumulated a vast arsenal of theoretical methods which allow the Schrödinger equation to be solved in some effective manner, even when the system is not integrable and its behavior is chaotic by the criteria of level repulsion. As shown in \[2-4\], most of them are based on the smallness of the chaoticity parameter $`\ae `$, which seems to be the most important small parameter of nuclear physics.
# The role of Berry phase in the spectrum of order parameter dynamics: a new perspective on Haldane’s conjecture on antiferromagnetic spin chains
## Abstract
We formulate the dynamics of local order parameters by extending the recently developed adiabatic spinwave theory involving the Berry curvature, and derive a formula showing explicitly the role of the Berry phase in determining the spectral form of the low-lying collective modes. For antiferromagnetic spin chains, the Berry phase becomes a topological invariant known as the Chern number. Our theory predicts the existence of the Haldane gap for a topologically trivial ground state, and a linear dispersion of low-lying excitations for a non-trivial ground state.
Ever since Landau’s formulation of continuous phase transitions, the study of order parameters and their associated dynamics has occupied a central place in modern physics. The dynamics of the order parameter gives rise to collective excitations known as Goldstone modes, with important consequences for the thermal, mechanical, electrical or magnetic properties of physical systems. Also, the absence of symmetry breaking in lower-dimensional systems at finite temperatures can be understood as a result of thermal fluctuations of the low-lying modes of the order parameter dynamics. For many one-dimensional systems, there exist well-defined collective modes, such as the ‘spinwaves’ in the antiferromagnetic Heisenberg spin chain , even though the ground state is disordered. It is very tempting to regard these collective modes also as those of the order parameter dynamics, while the destruction of the long-range ordering in the ground state is attributed to their quantum fluctuations .
In this Letter, we formulate a theory of order parameter dynamics based on the local ordering in the system alone. We follow the approach of Ref. for symmetry breaking magnetic systems to derive the equations of motion and a formula for the collective excitation spectrum:
$$\hbar \omega =\frac{\mathrm{\Delta }E}{B},$$
(1)
where $`\mathrm{\Delta }E`$ is the energy increase from the ground state for a frozen configuration of the order parameter and $`B`$ is the Berry phase of the many-body wave function during a cycle of the collective motion. We apply our theory to the systems of antiferromagnetic spin chains, showing that the presence or absence of a Haldane gap is directly tied to a topological charge in the ground state.
Haldane conjectured that the excitations of an antiferromagnetic Heisenberg chain have a gap for spins of integer $`S`$ and are gapless for spins of half-integer values . This was based on a mapping to a nonlinear sigma-model in the large $`S`$ limit, where a topological action term is present for half-integer spins but not for integer spins. Without the topological term, the nonlinear sigma-model was known to be gapped, but a rather elaborate renormalization group analysis was needed to show that the presence of the topological term can render the excitations gapless . Haldane’s conjecture seems to be correct also for small spins, because it conforms with the exact solutions for the extreme cases of $`S=1/2`$ and $`S=1`$ and with numerical results. The success of Haldane’s conjecture is highly celebrated in the theoretical physics community, because it gives a prime example that topology can play such a decisive role in measurable effects.
Here we present a direct mechanism showing how topology works its way to determine the spectral form of the excitations. For the antiferromagnetic chains, we will show that the Berry phase for a mode of wave number $`k`$ can be written for small $`k`$ as
$$B=\frac{kL}{2\pi }Q+O(k^2),$$
(2)
where $`L`$ is the length of the chain and $`Q`$ is the topological charge defined as the Chern number of the wave function mapped to the order parameter configuration in the excitation mode. On the other hand, the energy increase in a frozen configuration of the order parameter should have the form
$$\mathrm{\Delta }E\propto Lk^2.$$
(3)
Therefore, depending on the presence or absence of this topological charge $`Q`$, the excitation spectrum is linearly dispersed for small $`k`$ or becomes gapped:
$$\hbar \omega \sim \{\begin{array}{cc}k,\hfill & \text{if }Q\ne 0\text{;}\hfill \\ \mathrm{\Delta },\hfill & \text{if }Q=0\text{,}\hfill \end{array}$$
(4)
where $`\mathrm{\Delta }`$ is a constant. In many one-dimensional antiferromagnetic models of integer spin, such as the AKLT model for $`S=1`$ and its SU$`(N)`$ generalization , the exactly soluble ground states are topologically trivial ($`Q=0`$). On the other hand, several spin half-integer models have been constructed with topologically non-trivial ground states, e.g., the resonating-valence-bond ground state with a twofold degeneracy in the spin-Peierls order . The Lieb-Schultz-Mattis theorem for spin-half and its generalization to arbitrary half-integer spins also indicate this non-triviality. In light of these facts and the general arguments given in Ref., we can thus conclude that our spectral formula is really consistent with Haldane’s conjecture .
The standard procedure for introducing the order parameter is to apply a weak external field that forces the system to order in a particular way, corresponding to a non-zero expectation value of the operator conjugate to the field, $`\phi _x^j=\langle \widehat{O}_x^j\rangle `$, where $`x`$ denotes the position and $`j`$ labels the internal components. If the ordering persists after the field is turned off, we say that there is a spontaneous symmetry breaking, and the nonzero expectation value is called the order parameter. Standard examples of order parameters include the magnetization field in magnetic materials, the condensate wave function for superfluids , the Ginzburg-Landau order parameters in superconductivity and many other condensed matter systems . To facilitate the discussion of its dynamics, we generalize the notion by defining the order parameter in any state simply as the expectation value of $`\widehat{O}_x^j`$ in that state. In this way, we can also talk about the order parameter even for systems without long-range order.
As long as there is strong local ordering, the low-lying excitations should be dominated by the order parameter dynamics in the following sense. Consider the set of constrained ground states, defined as the union of the lowest-energy states for each configuration of the order parameter. If an initial state prepared from this set evolves entirely within this set, then we have a closed dynamics of the order parameter, because such states are labeled uniquely by the order parameter configuration. We assume this is the case, which can be justified at least for long-wavelength deviations from the ground state configuration. We can then apply the time-dependent variational principle along the lines of Refs. to derive the equations of motion of the order parameter dynamics
$$\underset{j^{},x^{}}{\sum }\hbar \mathrm{\Omega }_{xx^{}}^{jj^{}}\dot{\phi }_{x^{}}^{j^{}}=\frac{\partial E}{\partial \phi _x^j},$$
(5)
which involve the energy $`E=\langle \psi |H|\psi \rangle `$ of the constrained ground state and the Berry curvature
$$\mathrm{\Omega }_{xx^{}}^{jj^{}}=\frac{\partial }{\partial \phi _x^j}\langle \psi |i\frac{\partial }{\partial \phi _{x^{}}^{j^{}}}|\psi \rangle -\frac{\partial }{\partial \phi _{x^{}}^{j^{}}}\langle \psi |i\frac{\partial }{\partial \phi _x^j}|\psi \rangle .$$
(6)
The spectral formula (1) can be derived directly from the equations of motion. In Ref. , the formula was obtained for the case of spinwaves by linearizing the equations of motion around the ground state for ferro-, ferri- and antiferromagnets. There it was also shown that this Berry phase is actually given by the reduction of the total magnetization from the ground state value due to the spinwave, thus proving and generalizing an earlier result of Ref. for the spinwave spectrum. This formula now serves as the basis for a number of successful first-principles calculations for ferromagnetic crystals , and similar work on other types of magnetic materials is expected in the near future. Exactly the same derivation of the spectral formula can be applied to the order parameter dynamics of any system with a symmetry-breaking ground state. The same reasoning should also give the Berry phase in terms of the deviation from the ground state expectation value of the generator of the collective motion.
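Numerically, the Berry curvature of Eq. (6) is conveniently evaluated from wave-function overlaps around a small plaquette in parameter space. The sketch below is our illustration on a two-level toy model, where the 'order parameter' is simply the direction of an applied field; it reproduces the known curvature $`-(1/2)\mathrm{sin}\theta `$ of a spin-1/2 coherent state in this convention:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]], complex)
SZ = np.array([[1, 0], [0, -1]], complex)

def ground_state(phi):
    # Constrained "ground state" of H(phi) = -n(phi).sigma, with the
    # direction n parametrized by phi = (theta, varphi).
    th, ph = phi
    n = (np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th))
    H = -(n[0] * SX + n[1] * SY + n[2] * SZ)
    return np.linalg.eigh(H)[1][:, 0]

def berry_curvature(phi, h=1e-3):
    # Berry phase around a small plaquette divided by its area, a
    # discretized version of Eq. (6); gauge invariant by construction.
    th, ph = phi
    corners = [(th, ph), (th + h, ph), (th + h, ph + h), (th, ph + h)]
    psi = [ground_state(c) for c in corners]
    loop = 1.0 + 0.0j
    for a in range(4):
        loop *= np.vdot(psi[a], psi[(a + 1) % 4])
    return -np.angle(loop) / h**2

print(berry_curvature((1.0, 0.3)), -0.5 * np.sin(1.0))  # should agree
```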
For a system with no spontaneous breaking of the symmetry, such as the antiferromagnetic chain, the spectral formula (1) still stands, as shown by the following arguments. We multiply both sides of (5) by $`dt\delta _E\phi _x^j`$ and sum over $`x`$ and $`j`$, i.e.
$$\underset{a,a^{}}{\sum }\hbar \mathrm{\Omega }_{aa^{}}\frac{\partial \phi ^{a^{}}}{\partial t}dt\delta _E\phi ^a=\underset{a}{\sum }\frac{\partial E}{\partial \phi ^a}dt\delta _E\phi ^a,$$
(7)
where the two labels have been condensed into one for simplicity ($`\phi ^a\equiv \phi _x^j`$), and $`\delta _E\phi ^a`$ is the variation in a direction perpendicular to the constant-energy trajectory of the order parameter. We then integrate (7) over the two-dimensional domain $`𝒟_\phi `$ consisting of a one-parameter family of trajectories ranging from the fixed point of the absolute ground state to a trajectory $`𝒞_\phi `$ of finite amplitude of the collective motion. In the harmonic regime, where the collective modes may be defined, we expect the time period $`T`$ to be a constant, so that the integration yields
$$\hbar \int _{𝒟_\phi }\underset{aa^{}}{\sum }\delta _t\phi ^a\delta _E\phi ^{a^{}}\mathrm{\Omega }_{aa^{}}=T\mathrm{\Delta }E,$$
(8)
where $`\mathrm{\Delta }E=E-E_0`$ is the energy increase from the ground state, and $`\delta _t\phi ^a`$ denotes the variation of the order parameter along the trajectory (i.e., $`\delta _t=dt\,\partial /\partial t`$). Because the time period $`T`$ is related to the frequency $`\omega `$ of the collective mode by $`T=\frac{2\pi }{\omega }`$, we arrive at the formula (1) with the Berry phase given by
$`B`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _{𝒟_\phi }}{\displaystyle \underset{aa^{}}{\sum }}\delta _t\phi ^a\delta _E\phi ^{a^{}}\mathrm{\Omega }_{aa^{}}`$ (9)
$`=`$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \oint _{𝒞_\phi }}{\displaystyle \underset{a}{\sum }}\delta _t\phi ^a\langle \psi |i{\displaystyle \frac{\partial }{\partial \phi ^a}}|\psi \rangle .`$ (10)
where the second equality results from the Stokes theorem.
To appreciate how the Berry phase determines the spectral form of the low-lying collective mode, we expand it in powers of the wave number $`k`$
$$B=B_0+B_1k+B_2k^2+\cdots .$$
(11)
For ferro- and ferrimagnets, we have $`B_0\ne 0`$ because the total magnetization reduction due to a spinwave is nonzero even in the limit of zero $`k`$. This yields a quadratic dispersion of the spectrum at small $`k`$ in light of Eq.(3). For lattices with antiferromagnetic ordering, we have $`B_0=0`$ due to the sublattice symmetry , while $`B_1\ne 0`$ follows from a careful analysis of the magnetization reduction in the presence of a spinwave of small but nonzero $`k`$. This reproduces the standard result that antiferromagnetic spinwaves have a linear dispersion at small $`k`$.
For antiferromagnetic spin chains, where the spin rotation symmetry cannot be spontaneously broken according to Coleman’s theorem, the results drawn from the total spin reduction may not be applicable. Fortunately, we have two observations that help to establish the topological interpretation of the Berry phase shown in (2). First, the total Berry phase due to a cyclic motion in the order parameter configuration space can be expressed as a sum of the Berry phases due to the cyclic motion of the local order parameter at each site. In other words, we may write the Berry phase (10) in the form
$$B=\underset{x}{\sum }\frac{1}{2\pi }\oint _{C_x}\delta _t\vec{\phi }_x\cdot \langle \psi |i\frac{\partial }{\partial \vec{\phi }_x}|\psi \rangle ,$$
(12)
where $`C_x`$ denotes the path of the local spin moment $`\vec{\phi }_x`$, i.e. the projection of the configurational path $`𝒞_\phi `$ onto site $`x`$. For each term in the sum, the constrained ground state $`|\psi \rangle `$ is evaluated with the spin moments on all sites except $`x`$ set to their true ground state value, i.e., zero. This observation may fail for itinerant spin systems such as the $`t`$-$`J`$ model, because the Berry curvature is known to have inter-site terms which prevent the resolution of the Berry phase into contributions from each site. However, we expect the observation to be true for localized spin systems such as the Heisenberg model.
A direct consequence of the above observation is that the total Berry phase (12) can be written as the number of the space periods, $`n=\frac{kL}{2\pi }`$, times the Berry phase in one period,
$$B=\frac{k}{2\pi }LB_\mathrm{p},$$
(13)
where $`L`$ is the size of the chain and $`B_p`$ is defined by (12) but with the sum over $`x`$ confined to one space period $`(0,\lambda )`$. Our second observation then shows that $`B_p`$ is proportional to a topological charge. We note that because of the local antiferromagnetic ordering, the directions of spin moments on neighboring sites tend to be opposite to each other, as is the sense of chirality of their motion. Therefore, the contributions to the Berry phase from neighboring sites almost cancel each other for long-wavelength collective modes. It is thus convenient to introduce the staggered order parameter $`\vec{m}_x=(-1)^x\vec{\phi }_x`$, so that the Berry phase per period becomes
$$B_p=\frac{1}{2\pi }\underset{x\in (0,\lambda )}{\sum }(-1)^x\oint _{C_x^{}}\delta _t\vec{m}_x\cdot \langle \psi |i\frac{\partial }{\partial \vec{m}_x}|\psi \rangle ,$$
(14)
where $`C_x^{}`$ denotes the orbit of $`\vec{m}_x`$. For small $`k`$, we may take the continuum limit by replacing the difference by a differential,
$`B_p={\displaystyle \frac{1}{2\pi }}{\displaystyle \underset{x\in (0,\lambda )}{\sum }}{\displaystyle \oint _{C_x^{}}}\delta _t\vec{m}_x\cdot {\displaystyle \underset{j}{\sum }}\delta _xm_x^j`$ (15)
$`\left[{\displaystyle \frac{\partial }{\partial m_x^j}}\langle \psi |i{\displaystyle \frac{\partial }{\partial \vec{m}_x}}|\psi \rangle -{\displaystyle \frac{\partial }{\partial \vec{m}_x}}\langle \psi |i{\displaystyle \frac{\partial }{\partial m_x^j}}|\psi \rangle \right],`$ (16)
where $`\delta _x`$ stands for $`dx\frac{\partial }{\partial x}`$ and the second term is an added zero term. Due to the spatial periodicity, the sum over $`x`$ corresponds to a closed-loop integral, so that (16) becomes an integral over the closed space-time torus $`T^2`$
$$B_p\equiv Q=\frac{1}{2\pi }\int _{T^2}\underset{j,j^{}}{\sum }\delta _xm_x^j\delta _tm_x^{j^{}}\mathrm{\Omega }_{jj^{}}(\vec{m}_x),$$
(17)
with the curvature
$$\mathrm{\Omega }_{jj^{}}(\vec{m})=\frac{\partial }{\partial m^j}\langle \psi |i\frac{\partial }{\partial m^{j^{}}}|\psi \rangle -\frac{\partial }{\partial m^{j^{}}}\langle \psi |i\frac{\partial }{\partial m^j}|\psi \rangle .$$
(18)
Thus, the Berry phase per spatial period of a collective excitation in an antiferromagnetic spin chain is in fact a topological invariant, the first Chern class for the mapping of the constrained ground state to the space-time structure of the local order parameter. This reduces to the standard semiclassical result of
$$Q=\frac{1}{2\pi }\int _{T^2}\delta _x\vec{m}\times \delta _t\vec{m}\cdot \vec{m}/|\vec{m}|^2,$$
(19)
if we take $`\mathrm{\Omega }_{jj^{}}=\sum _lϵ_{jj^{}l}m^l/|\vec{m}|^2`$. Our expression (17) is a generic and model-independent result. It has been expected that the topology of the ground states of spin chains is trivial for integer $`S`$ but non-trivial for half-integer $`S`$ . For an $`S=1`$ chain, the authors of Ref. provided an exact valence-bond-solid ground state which is topologically trivial . Read and Sachdev discussed the SU($`N`$) antiferromagnetic chains in the large $`N`$ limit . Using a trial ground state wave function, they showed that there is a spin-Peierls order parameter proportional to the topological charge of the ground state. They explicitly gave the dependence of the ground state energies on the topology of the state. They concluded that the valence-bond-solid ground state of the integer spin chains is topologically trivial and not degenerate, with a vanishing spin-Peierls order parameter; the resonating-valence-bond ground state for the half-integer chains, on the other hand, is topologically nontrivial and degenerate due to the different spin-Peierls order parameters. We can also see the topological property of the chains in the twist introduced in , which, in some sense, gave a finite-$`S`$ version of the topological term in the nonlinear sigma model .
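For concreteness, the Chern number of a given space-time texture can be estimated directly from a lattice version of Eq. (19). The sketch below is our own illustration; it uses the standard Pontryagin normalization $`1/4\pi `$ appropriate for a unit vector field (the $`1/2\pi `$ prefactor in Eq. (19) refers to the $`\mathrm{\Omega }`$ convention above) and evaluates a winding-number-one texture placed on the torus:

```python
import numpy as np

def topological_charge(m):
    # Lattice estimate of Q = (1/4pi) sum m_hat . (d_x m_hat x d_t m_hat)
    # for a unit-vector field m[x, t, 3] on a periodic space-time grid.
    # The finite-difference estimate is close to, but not exactly, an integer.
    mh = m / np.linalg.norm(m, axis=-1, keepdims=True)
    dx = np.roll(mh, -1, axis=0) - mh
    dt = np.roll(mh, -1, axis=1) - mh
    return np.sum(mh * np.cross(dx, dt)) / (4.0 * np.pi)

# A winding-number-one texture: m_hat covers the sphere once inside a disc
# and points to the north pole elsewhere, so it is periodic on the torus.
N = 96
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, T = np.meshgrid(x, x, indexing="ij")
u, v = X - np.pi, T - np.pi
r, R = np.hypot(u, v), 2.5
theta = np.where(r < R, np.pi * (1.0 - r / R), 0.0)
phi = np.arctan2(v, u)
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print(topological_charge(m))   # close to +/-1, sign set by orientation
```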
Our spectral formula Eq.(1) can also serve as the basis for numerical calculations of the excitation spectrum. A quantitative comparison of the numerical results with known theoretical and experimental results should constitute a stringent test of our theory. For example, in the spin-1/2 antiferromagnetic chain the spinwave speed is $`\pi /2`$ times larger than the semiclassical result , and it would be interesting to see whether a numerical calculation based on our formula gives the correct result.
In conclusion, we have formulated a theory of local order parameter dynamics and derived a formula for the spectral form of collective excitations in terms of the Berry phase. For antiferromagnetic spin chains, we have shown in a model-independent manner that the presence or absence of a gap is directly tied to the topological structure of the constrained ground state. For all known exact or model solutions of the spin chains, our result is consistent with Haldane’s conjecture. We also recognize that the topological consideration may not be valid for itinerant spin systems because of the non-vanishing inter-site Berry curvature.
The authors thank Wu-Ming Liu for the suggestion of this collaboration and discussions. They are grateful to Ping Ao, Hong Chen, Sui-Tat Chui, E. Fradkin and especially, Shao-Jing Qin and Zhao-Bin Su for useful discussions. QN thanks Institute of Theoretical Physics (Beijing) for the warm hospitality, where the work was initiated. This work was supported in part by the NSF (DMR 9614040, PHY 9722610) and NSF of China.
# Semi-Inclusive Λ and KS Production in p-Au Collisions at 17.5 GeV/c
## Abstract
The first detailed measurements of the centrality dependence of strangeness production in p-A collisions are presented. $`\mathrm{\Lambda }`$ and K<sub>S</sub> $`dn/dy`$ distributions from 17.5 GeV/c p-Au collisions are shown as a function of “grey” track multiplicity and the estimated number of collisions, $`\nu `$, made by the proton. The $`\nu `$ dependence of the $`\mathrm{\Lambda }`$ yield deviates from a scaling of p-p data by the number of participants, increasing faster than this scaling for $`\nu \lesssim 5`$ and saturating for larger $`\nu `$. A slower growth of the K<sub>S</sub> multiplicity with $`\nu `$ is observed, consistent with a weaker $`\nu `$ dependence of $`\mathrm{K}\overline{\mathrm{K}}`$ production than of $`YK`$ production.
Significant effort has been devoted in the last decade to measuring strange particle production in nucleus-nucleus (A-A) collisions, motivated by predictions that quark-gluon plasma (QGP) formation could enhance strangeness . Experiments at both the Brookhaven National Laboratory (BNL) AGS and the CERN SPS accelerators have reported large increases in relative strange particle yields in central light (Si, S) and heavy ion (Au, Pb) induced collisions compared to p-p collisions. However, we still cannot claim a true understanding of the physics of strangeness enhancement due to the complexity of the hadronic interactions underlying A-A collisions and the competing mechanisms proposed and/or used in models to explain the data.
The difficulty in interpreting the A-A data and the observation that the enhancement is already present in light-ion collisions suggest the use of p-A collisions to study this problem further. The simpler final state of p-A collisions may allow the production rate for strange particles to be directly connected with the scattering dynamics of the incoming proton. Previously published p-A data demonstrated an increase in the inclusive K$`/\pi `$ ratio with increasing A at the AGS, suggesting that the strangeness enhancement mechanism is already at work in p-A collisions. A more thorough analysis of inclusive data at higher energies suggested no overall strangeness enhancement but a possible modest enhancement in $`\mathrm{\Lambda }`$ production offset by a decrease in $`\overline{\mathrm{K}}`$ production. This result has led to claims that the observed strangeness enhancement in A-A collisions at SPS energies may result from QGP formation . However, extrapolations to central A-A data from inclusive p-A data are intrinsically flawed: Centrally selected A-A events necessarily involve more scatterings of the participant nucleons, and the dynamics of strangeness production may be quite sensitive to these additional scatters. A resolution of this problem requires a detailed measurement of the centrality dependence of strangeness production in p-A collisions.
In this paper we present the first such measurement, made by BNL experiment 910 at the AGS accelerator. The data consist of $`\mathrm{\Lambda }`$ and K<sub>S</sub> rapidity spectra and integrated yields obtained as a function of “grey” track multiplicity from p-Au collisions at a beam momentum of 17.5 GeV/c. E910’s nearly complete rapidity coverage allows us to accurately estimate the total $`\mathrm{\Lambda }`$ and K<sub>S</sub> multiplicities and study the variation of absolute yields with $`\nu (N_{\mathrm{grey}})`$, the estimated number of collisions of the proton in the Au nucleus. A common benchmark for evaluating strangeness enhancement in A-A collisions is the scaling of p-p data by the number of participants, $`N_{\mathrm{part}}`$, in the collision. For p-A collisions, an $`N_{\mathrm{part}}`$ scaling of p-p data would yield
$$N_{\mathrm{prod}}=\frac{1}{2}N_{\mathrm{prod}}^{\mathrm{pp}}(1+\nu ),$$
(1)
since there are two participants in p-p collisions. We compare our multiplicities to the yields expected from Eq. 1 to evaluate whether we see enhanced strange particle production using the same benchmark as in A-A collisions. We observe that for $`\mathrm{\Lambda }`$ production Eq. 1 is physically sensible since baryon number is conserved and $`\mathrm{B}\overline{\mathrm{B}}`$ processes are negligible at our energies. Deviations from Eq. 1 would imply contributions from target nucleons not directly struck by the projectile or changes in the probability for participants to fragment into $`\mathrm{\Lambda }`$’s. E910 was staged in the Multi-Particle Spectrometer (MPS) facility at the AGS. For these data, the Cherenkov-tagged secondary proton beam had a mean momentum of $`17.5\pm 0.1`$ GeV/c and a 1.5% momentum spread. The E910 spectrometer was previously described in ; the results presented here rely on the EOS time projection chamber and the beamline/trigger counters. A 3.9 $`\mathrm{g}/\mathrm{cm}^2`$ Au target was located 20.5 cm upstream of the active area of the TPC and was immediately followed by a $`10\mathrm{cm}\times 10\mathrm{cm}`$ two-layer scintillating-fiber hodoscope. This multiplicity detector provided two triggers: a minimum-bias (MB) trigger that required 2 hits on each layer, and a “central” trigger that required a total of 20 hits in the two layers and selected approximately the 25% most central events. Due to the low light yield of the fibers, the MB trigger suffered significant efficiency loss for low-multiplicity events with no highly ionizing tracks.
After finding and fitting the recorded pulses in the TPC we obtained typical resolutions of 0.7 mm (vertical) and 0.5 mm (horizontal) for the position measurements in each sample. The momentum resolution for particles bending in the 0.5 T magnetic field varied from 1.2% ($`p<`$ 2 GeV/c) to 5.5% ($`p\sim `$ 17 GeV/c). The $`dE/dx`$ measurement, obtained from a truncated mean of the TPC samples on each track, provided a resolution of $`\sigma /(dE/dx)=6\%`$ for typical track lengths. $`\mathrm{\Lambda }`$’s and K<sub>S</sub>’s were measured and identified through a combination of topological reconstruction and $`dE/dx`$ identification of the decay daughters. We paired and removed conversion electrons and positrons, the dominant source of background, with an efficiency of $`\sim 50\%`$. We further reduced the background from conversions and from false vertices by applying tighter geometric cuts to small opening-angle pairs. We attempted topological fits on $`+`$/$`-`$ track pairs satisfying the applied geometric and $`dE/dx`$ cuts, and accepted as $`\mathrm{V}_0`$ candidates those passing the applied $`\chi ^2`$ cuts with origin at least $`3.5`$ cm from the target. We used a combined likelihood from the single-track $`dE/dx`$’s and the hypothetical $`p\pi ^{-}`$ and $`\pi ^{+}\pi ^{-}`$ invariant masses ($`M_{\mathrm{inv}}`$) of the pairs to identify the decaying particle, obtaining the $`M_{\mathrm{inv}}`$ distributions shown in Fig. 1. We obtained mass resolutions of FWHM=4 MeV and 13 MeV for $`\mathrm{\Lambda }`$’s and K<sub>S</sub>’s, respectively, and corresponding S/B ratios of 35:1 and 30:1. The acceptances were calculated via GEANT simulations of the detector response to 20M pure $`\mathrm{\Lambda }`$ and 10M pure K<sub>S</sub> decays . We show in Fig. 1 the $`y`$–$`p_{\perp }`$ regions with $`>10\%`$ acceptance. For the centrality measurement, we identified as “grey” tracks protons and deuterons in the momentum ranges \[0.25,1.2\] GeV/c and \[0.5,2.4\] GeV/c, respectively. We obtain the multiplicity of grey tracks, $`N_{\mathrm{grey}}`$, within our geometric acceptance event-by-event and from this quantity estimate the mean number of collisions suffered by the beam proton, $`\overline{\nu }(N_{\mathrm{grey}})`$, using an established technique .
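The grey-track definition amounts to simple momentum windows applied after particle identification. A minimal sketch (the species labels and event structure are illustrative, not the E910 data format):

```python
# Momentum windows (GeV/c) used to tag "grey" protons and deuterons:
GREY_WINDOWS = {"proton": (0.25, 1.2), "deuteron": (0.5, 2.4)}

def n_grey(tracks):
    """Count grey tracks in one event, given (species, momentum) pairs
    from particle identification."""
    count = 0
    for species, p in tracks:
        window = GREY_WINDOWS.get(species)
        if window and window[0] <= p <= window[1]:
            count += 1
    return count

event = [("proton", 0.6), ("proton", 1.5), ("deuteron", 1.0), ("pion", 0.4)]
print(n_grey(event))  # -> 2
```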
The data presented in this paper resulted from a combined 4.65M triggers. We required a valid event to have at least one secondary charged particle in the final state and a $`\mathrm{\Sigma }p_{\perp }>85`$ MeV/c. In addition, we vetoed one- and two-track events containing a high-momentum positive track consistent with a quasi-elastically scattered proton. After applying quality and the above interaction cuts we obtained 2.97M events, 1.88M minimum-bias and 2.07M central. From these, we reconstructed a total of 156.8k $`\mathrm{\Lambda }`$’s and 76.8k K<sub>S</sub>’s. Using beam-triggered events we determined trigger efficiency corrections as a two-dimensional function of charged-particle multiplicity and $`N_{\mathrm{grey}}`$. The correction for multiplicity 1,2 events with $`N_{\mathrm{grey}}=0`$ is large ($`\sim 5.1`$) while the average correction for all interactions is $`\sim 1.1`$.
We calculated $`\mathrm{\Lambda }`$ and K<sub>S</sub> yields $`\mathrm{\Delta }N(m_{\perp },y,M_{\mathrm{inv}})`$ per event as a function of $`N_{\mathrm{grey}}`$ after subtracting the $`\mathrm{\Lambda }`$ and K<sub>S</sub> background in each bin. We corrected these for acceptance (A), trigger efficiency ($`\epsilon `$), and branching ratio (BR) to obtain an invariant differential yield,
$$\frac{d^2n_{\mathrm{\Lambda }/K_s}}{m_{\perp }dm_{\perp }dy}=\frac{1}{m_{\perp }}\frac{1}{A\epsilon \mathrm{BR}}\frac{\mathrm{\Delta }N_{\mathrm{\Lambda }/K_s}(m_{\perp },y)}{N_{\mathrm{evt}}\mathrm{\Delta }m_{\perp }\mathrm{\Delta }y}.$$
(2)
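A schematic implementation of Eq. 2 for a single $`(m_{\perp },y)`$ bin is given below; every bin value is a hypothetical placeholder, and only the $`\mathrm{\Lambda }p\pi ^{}`$ branching ratio (0.639) is a physical constant:

```python
def diff_yield(dN, N_evt, A, eff, BR, m_perp, dm_perp, dy):
    """Invariant differential yield per event, Eq. (2)."""
    return dN / (m_perp * A * eff * BR * N_evt * dm_perp * dy)

# hypothetical bin: 120 background-subtracted Lambdas at m_perp = 1.25 GeV/c^2
print(diff_yield(dN=120.0, N_evt=2.97e6, A=0.35, eff=0.9, BR=0.639,
                 m_perp=1.25, dm_perp=0.05, dy=0.2))
```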
For $`N_{\mathrm{grey}}<4`$, we used only MB triggers, but for larger $`N_{\mathrm{grey}}`$ we combined both triggers, weighting by the number of events in each sample. The $`m_{\perp }`$ spectra are uniformly well-described by exponential distributions except at the highest $`m_{\perp }`$ values ($`m_{\perp }-m>0.6`$ at mid-rapidity) where we do not have sufficient statistics to accurately determine the background subtraction. We fit the $`m_{\perp }`$ spectra excluding these points to the form,
$$\frac{1}{2\pi m_{\perp }}\frac{d^2n}{dm_{\perp }dy}=\frac{1}{2\pi (m_0+T)T}\frac{dn}{dy}e^{-(m_{\perp }-m_0)/T},$$
(3)
where $`dn/dy`$ is a direct parameter of the fit representing the integral of Eq. 3 over $`m_{\perp }`$. The fits give inverse slopes, $`T`$, that vary from 0.05 GeV/c at low and high $`y`$ to 0.14 GeV/c at mid-rapidity, consistent with proton spectra obtained from p-A collisions at a similar energy. For $`N_{\mathrm{grey}}`$ bins where we do not have enough statistics to perform the fits we directly sum over $`m_{\perp }`$ to obtain $`dn/dy`$. Fig. 2 shows the resulting $`dn/dy`$ distributions for a sub-set of the available $`N_{\mathrm{grey}}`$ bins. We estimate 90% CL point-to-point systematic errors in the $`\mathrm{\Lambda }`$ and K<sub>S</sub> $`dn/dy`$ measurements, including contributions from the fitting, to be $`<5\%`$ except for the lowest rapidity bin (20%), and estimate the normalization systematic error to be $`\pm 10\%`$.
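The normalization in Eq. 3 is chosen so that integrating the right-hand side over $`m_{\perp }`$ (from $`m_0`$ to infinity, with the $`2\pi m_{\perp }`$ measure) returns exactly $`dn/dy`$. A minimal fitting sketch, with synthetic points standing in for a measured spectrum, could read:

```python
import numpy as np
from scipy.optimize import curve_fit

m0 = 1.1157  # Lambda mass, GeV/c^2

def spectrum(m_perp, dndy, T):
    """Eq. (3); dndy is the m_perp integral by construction."""
    return dndy / (2*np.pi*(m0 + T)*T) * np.exp(-(m_perp - m0)/T)

m = np.linspace(m0 + 0.025, m0 + 0.55, 11)
y = spectrum(m, 0.02, 0.14) * np.random.normal(1.0, 0.05, m.size)
popt, _ = curve_fit(spectrum, m, y, p0=(0.01, 0.10))
print(popt)  # recovered (dn/dy, T)
```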
We observe that with increasing $`N_{\mathrm{grey}}`$, the $`\mathrm{\Lambda }`$ and K<sub>S</sub> yields decrease at high rapidity and increase at low rapidity. For $`\mathrm{\Lambda }`$’s, the decrease in yield at large rapidity is a direct consequence of the increased “stopping” of the projectile baryon resulting from the multiple interactions in the target nucleus. The strong increase in production of strange particles at low rapidity is qualitatively consistent with previously observed trends in secondary particle production in p-A collisions. We show in Fig. 3 the integrated yields that we have obtained by summing our measured $`dn/dy`$ values as a function of $`\nu `$. The error on the in-acceptance yield is dominated by and taken to be the same as our uncertainty in the absolute normalization, $`\pm 10\%`$ at $`90\%\mathrm{CL}`$. We show (solid line) in Fig. 3 the expectations from Eq. 1 with $`N_{pp}^\mathrm{\Lambda }=0.054\pm 0.002(syst)`$ and $`N_{pp}^{K_s}=0.035\pm 0.002(syst)`$ obtained by parameterizing the $`\sqrt{s}`$ dependence of $`\mathrm{\Lambda }`$ and K<sub>S</sub> multiplicities and interpolating to our energy. The $`\mathrm{\Lambda }`$ yields initially increase faster with $`\nu `$ than expected from the $`N_{\mathrm{part}}`$ scaling of p-p yields and then saturate and start to decrease. The K<sub>S</sub> yields behave similarly with a slower initial increase. The apparent decrease of the yields may result from the fact that we miss a larger fraction of the total $`\mathrm{\Lambda }`$ and K<sub>S</sub> yield with increasing $`N_{\mathrm{grey}}`$ or $`\nu `$ due to our low-rapidity cut-off. We have estimated the missing yield by fitting the $`dn/dy`$ distributions to gamma distributions as shown in Fig. 3 and extrapolating these into the unmeasured region to produce the estimated total yields shown in Fig. 3. The uncertainty in the total yield is largest for the larger $`N_{\mathrm{grey}}`$ or $`\nu `$ bins where the $`dn/dy`$ distribution peaks near the edge of our acceptance. We show in Fig. 3 $`90\%\mathrm{CL}`$ systematic errors on the yields with larger errors on the high side to account for the possibility that an unknown mechanism may produce a larger $`\mathrm{\Lambda }`$ yield below $`y=0`$ than we estimate. The resulting total $`\mathrm{\Lambda }`$ yields shown in Fig. 3 saturate at large $`\nu `$ and remain flat. We have fit the extrapolated $`\mathrm{\Lambda }`$ yields to an empirical function,
$$N_\mathrm{\Lambda }=N_{pp}^\mathrm{\Lambda }(1-e^{-\kappa \nu ^\alpha })/(1-e^{-\kappa }),$$
(4)
where $`N_{pp}`$ is the $`\mathrm{\Lambda }`$ multiplicity in p-p collisions. The obtained function with $`\kappa =0.299\pm 0.008`$ and $`\alpha =1.29\pm 0.03`$ describes both the initial rapid rise of the yield and the saturation at large $`\nu `$. To evaluate the significance of this fast initial increase in the $`\mathrm{\Lambda }`$ yield we plot in Fig. 3 the yield that would result from a “binary-collision” scaling of p-p data,
$$N^{\mathrm{BC}}(\nu )=N_{pp}\nu ,$$
(5)
which we view as the fastest plausible increase that could be expected from the multiple scattering of the incoming proton. The $`\mathrm{\Lambda }`$ yields are consistent with this “upper limit” for $`\nu \lesssim 3`$ indicating a rate of increase in $`\mathrm{\Lambda }`$ yield with $`\nu `$ in this region that is approximately twice that given by Eq. 1. We note that the systematic error on the $`\nu `$ scale of $`\pm 15\%`$ is small compared to this difference in slope.
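The three curves under comparison can be tabulated directly. Note that the $`N_{\mathrm{part}}`$-scaling form used below, $`N_{pp}(1+\nu )/2`$, is our reading of the participant counting described at the start of this excerpt (one projectile plus $`\nu `$ target participants), not a formula quoted from Eq. 1 itself:

```python
import numpy as np

N_pp = 0.054                    # p-p Lambda multiplicity interpolated to our energy
kappa, alpha = 0.299, 1.29      # fitted parameters of Eq. (4)

nu = np.arange(1, 9, dtype=float)
N_fit  = N_pp*(1 - np.exp(-kappa*nu**alpha))/(1 - np.exp(-kappa))  # Eq. (4)
N_part = N_pp*(1 + nu)/2        # assumed N_part scaling (Eq. 1)
N_bc   = N_pp*nu                # binary-collision limit, Eq. (5)
for row in zip(nu, N_part, N_bc, N_fit):
    print("nu=%.0f  part=%.3f  bc=%.3f  fit=%.3f" % row)
```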
We observe that the K<sub>S</sub> yield increases more slowly with $`\nu `$ than the $`\mathrm{\Lambda }`$ yield and appears to decrease slightly at large $`\nu `$ even after we have accounted for the missing yield. The difference in behavior between the $`\mathrm{\Lambda }`$ and K<sub>S</sub> yields may result from the mixture of K<sup>0</sup> and $`\overline{\mathrm{K}^0}`$ in the K<sub>S</sub> and the fact that these are produced through different processes. In p-p collisions at comparable energies, $`\frac{1}{2}`$ of the K<sub>S</sub> are produced as K<sup>0</sup>’s in association with hyperons, with the other half produced as $`\overline{\mathrm{K}^0}`$’s associated with kaons. If we assume that this proportion is not modified in p-A collisions and that the total hyperon yields increase in proportion to the $`\mathrm{\Lambda }`$ yield, we can estimate the $`\overline{\mathrm{K}^0}`$ component of the measured K<sub>S</sub> yields shown in Fig. 3. The $`\overline{\mathrm{K}^0}`$ component appears to increase by a factor of two at $`\nu =3`$, roughly consistent with the $`N_{\mathrm{part}}`$ scaling of p-p data shown in Fig. 3, before starting to decrease slowly with $`\nu `$. While our data suggest an increase in $`\mathrm{K}\overline{\mathrm{K}}`$ production with $`\nu `$, because of uncertainties in the above assumptions we cannot make a stronger statement. A forthcoming analysis of K<sup>-</sup> production will provide clearer insight into this problem.
In conclusion, we have reported on the first detailed investigation of the centrality dependence of strange particle production in p-A collisions using grey track multiplicity and the estimated number of collisions of the projectile nucleon to characterize centrality. We have measured $`dn/dy`$ distributions for $`\mathrm{\Lambda }`$ and K<sub>S</sub> that both show a strong backward shift with increasing $`N_{\mathrm{grey}}`$ and $`\nu `$. The estimated total $`\mathrm{\Lambda }`$ yields increase with $`\nu `$ at a rate approximately twice that expected from the $`N_{\mathrm{part}}`$ scaling of p-p data for $`\nu \lesssim 3`$ and saturate for $`\nu >5`$. As noted above, this violation of $`N_{\mathrm{part}}`$ scaling implies either that additional nucleons not directly struck by the projectile contribute to $`\mathrm{\Lambda }`$ production or that the probability for one or more of the participants to fragment into a $`\mathrm{\Lambda }`$ increases with $`\nu `$. We observe a slower but significant increase in K<sub>S</sub> multiplicity with $`\nu `$ that apparently results from different behavior of the K<sup>0</sup> and $`\overline{\mathrm{K}^0}`$ components of the K<sub>S</sub>. The observed increase in K<sub>S</sub> yield is large enough to allow for a statistically significant increase in $`\mathrm{K}\overline{\mathrm{K}}`$ production with $`\nu `$ for $`\nu \lesssim 3`$ using a reasonable extrapolation of p-p data.
We conclude that at AGS energies, p-A data show a clear violation of a simple $`N_{\mathrm{part}}`$ scaling of p-p data. This result has clear qualitative implications for use of such scaling for interpreting strangeness yields in A-A collisions. To quantitatively evaluate the potential implications of our results, we assume that the target contribution to the p-Au $`\mathrm{\Lambda }`$ yield grows as $`\nu N_{pp}/2`$ and attribute the remainder to the fragmentation of the projectile and/or energy deposition of the projectile in the nucleus. Then, our data show that the “projectile” contribution increases proportional to $`\nu `$ for $`\nu \lesssim 3`$ with a slope that is the same as for the target nucleons. In A-A collisions where both the projectile and target nucleons multiply scatter, this picture implies that the hyperon and associated kaon yields per participant would increase rapidly with the average number of scatters of the participants, $`\nu `$, for $`\nu \lesssim 3`$ giving a maximum possible increase in yield per participant of a factor $`\sim 3`$. This is precisely the behavior seen in the K<sup>+</sup> production in Si-Au and Au-Au collisions at the AGS and $`\mathrm{\Lambda }`$ production in Pb-Pb collisions at the SPS. In p-A collisions, the enhancement is more modest – a 50% increase over $`N_{\mathrm{part}}`$ scaling at $`\nu =3`$ – simply because the target nucleons scatter only once. As we have shown, however, this modest enhancement may have profound consequences for interpretation of the strangeness enhancement in nuclear collisions. We note that the above picture is consistent with the additive quark model used by Kadija et al. to explain the observed strangeness enhancement in light-ion collisions at the SPS. In particular, the increase in the projectile-like component up to $`\nu =3`$ is exactly what is expected from the additive quark model. Since the saturation of the $`\mathrm{\Lambda }`$ yield for $`\nu >5`$ is very likely due to the stopping of the incident baryon, we predict that at higher energies the $`\mathrm{\Lambda }`$ yield will continue to increase for $`\nu >5`$.
We wish to thank Dr. R. Hackenburg and the MPS staff, J. Scaduto and Dr. G. Bunce. This work has been supported by the U.S. Department of Energy under contracts with BNL (DE-AC02-98CH10886), Columbia (DE-FG02-86ER40281), ISU (DOE-FG02-92ER4069), KSU (DE-FG02-89ER40531), LBNL (DE-AC03-76F00098), LLNL (W-7405-ENG-48), ORNL (DE-AC05-96OR22464) and UT (DE-FG02-96ER40982) and the National Science Foundation under contract with FSU (PHY-9523974).
# Energy landscapes in random systems, driven interfaces and wetting
## Abstract
We discuss the zero-temperature susceptibility of elastic manifolds with quenched randomness. It diverges with system size due to low-lying local minima. The distribution of energy gaps is deduced to be constant in the limit of vanishing gaps by comparing numerics with a probabilistic argument. The typical manifold response arises from a level-crossing phenomenon and implies that wetting in random systems begins with a discrete transition. The associated “jump field” scales as $`h\sim L^{-5/3}`$ and $`L^{-2.2}`$ for (1+1) and (2+1) dimensional manifolds with random bond disorder.
PACS # 75.50.Lk, 05.70.Np, 68.45.Gd, 74.60.Ge
The physics of systems with quenched disorder is related to the energy landscape. The free energy is at low temperatures governed by zero temperature effects, which in turn are ruled by the scaling of the disorder-dependent contribution. Random magnets, such as spin glasses and random field systems, flux line lattices in superconductors, and granular materials are examples of physical systems in which frustration and disorder play an important role. Disorder may dominate also in non-equilibrium conditions, like driven systems (domain walls in magnets, flux lines in superconducting materials). In that case temperature-driven dynamics (creep, aging) and the external drive change the system from one metastable state to another.
A lot of information about energy landscapes is contained in how the number of local energy minima and the typical scale of their energy differences scale with system size, $`L`$. This can be interpreted in a geometric fashion in that one compares the energy difference of two states with their overlap in terms of the spin configuration (as for magnets). In spin glasses an intense debate still goes on: whether in the thermodynamic limit the thermodynamic state is trivial (“droplet” picture) or not (as in the “replica symmetry breaking” picture).
Consider now the problem of the energetics of $`D`$ dimensional elastic manifolds in random media, of which the best-known case is a directed polymer (DP) in a random medium with $`D=1`$, often called a ’baby spin-glass’. For these systems the interface energy is proportional to the area, and the sample-to-sample energy fluctuations scale with the exponent $`\theta `$ ($`\theta =1/3`$ for a DP in $`d=D+1=2`$ embedding dimensions). The geometry is often self-affine, characterized by a roughness exponent $`\zeta `$ (2/3 when $`d=2`$). In the simplest energy landscape the valleys and excitations are separated by energy gaps proportional to $`l^\theta `$ where $`l`$ is the length scale of the perturbation.
Here the susceptibility of elastic manifolds is studied in the presence of weak fields numerically and by scaling arguments. By investigating each sample separately, we explore the changes in the energy landscape with applied fields. These lead to discrete ’jumps’ in the physical configuration. As a consequence scaling arguments of wetting in random systems do not work in the limit of weak fields if the original interface-to-wall distance is much larger than the interface roughness. With pre-conditioned systems we obtain the detailed probability distribution of the energy differences (gaps) between local minima and the global one. We find that the average interface behavior can be explained with scaling arguments, but the susceptibility can not, and it is directly related to the exact properties of the gap distribution. Thus the detailed statistics of the landscape is important. This contradicts considerations for random systems that assume well-defined thermodynamic functions and scaling arguments with a single parameter ($`L^\theta `$). These findings agree with claims that the susceptibility of a DP to thermal perturbations or applied fields is anomalous. The reason is that the response to a very weak field, say applied locally at the end-point of a DP, is governed by rare samples. The disorder-averaged response differs from the typical one because the ground state can be almost degenerate with a local minimum. Likewise, numerical studies of $`d=(1+1)`$ DP susceptibility reveal aging phenomena reminiscent of real spin-glasses.
The continuum Hamiltonian for a $`D`$ dimensional elastic manifold ($`𝐱`$ is an internal coordinate and $`z`$ a (scalar) displacement) reads
$$\mathcal{H}=\int \mathrm{d}^D𝐱\left[\mathrm{\Gamma }\{\nabla z(𝐱)\}^2+V_r(𝐱,z)+h(z)\right],$$
(1)
with an elastic energy ($`\mathrm{\Gamma }`$ is the interface stiffness), and $`V_r`$ a random pinning energy (we use a random bond correlator, $`\langle V_r(𝐱,z)V_r(𝐱^{},z^{})\rangle =2𝒟\delta (𝐱-𝐱^{})\delta (z-z^{})`$). $`h(z)`$ couples the interface to an external perturbation, e.g. it describes a constant magnetic field $`H`$ in Ising magnets with antiperiodic boundary conditions.
The Hamiltonian (1) describes also complete wetting in a random system, where $`h(z)`$ equals the chemical potential difference of the wetting layer and the bulk phase. For $`h`$ non-negligible the wetting-inducing external potential competes with the tendency of the interface to win pinning energy. Assuming that these balance, the average interface-wall separation $`\langle z\rangle `$ becomes $`\langle z\rangle \sim h^{-\psi },\psi =\frac{1}{\tau +\kappa }`$ where $`\psi `$ is the depinning exponent. $`\tau `$ measures the scaling of the elastic and pinning energy and is given by $`\tau =2(1-\zeta )/\zeta `$, and $`\kappa `$ is the scaling exponent of the external field $`h(z)\sim z^\kappa `$ (here we use $`\kappa =1`$). For random bond systems $`\tau =1`$ in $`d=1+1`$ dimensions, and $`\tau \approx 2.9`$ in $`d=2+1`$ using the known bulk roughness exponent values $`2/3`$ and 0.41 in $`d=2`$ and 3, respectively. In $`d=2`$ numerical simulations in random Ising systems indicate, in agreement, $`\psi \approx 0.5`$.
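These exponent relations reduce to one-line arithmetic; a quick check with the quoted bulk roughness exponents (a bookkeeping aid, not part of the original analysis) reproduces the values cited above:

```python
# Depinning exponent psi = 1/(tau + kappa) with tau = 2(1 - zeta)/zeta, kappa = 1
kappa = 1.0
for d, zeta in [(2, 2/3.0), (3, 0.41)]:
    tau = 2*(1 - zeta)/zeta
    print("d=%d: tau=%.2f, psi=%.2f" % (d, tau, 1/(tau + kappa)))
# -> tau=1.00, psi=0.50 in d=2 and tau~2.88, psi~0.26 in d=3
```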
A network flow algorithm, invented by Goldberg and Tarjan, is used here for the numerical procedure. It solves the minimum-cut–maximum-flow problem, and produces in polynomial time the exact ground state energy and interface configuration given a sample ($`L\times L_z`$ or $`L\times L\times L_z`$) with fixed quenched disorder. $`L_z`$ is the $`z`$-directional system size. The algorithm is convenient when one makes systematic perturbations to the original problem $`(h=0)`$. Figure 1 illustrates the sample-to-sample behavior, as the external field $`h(z)`$ is switched on slowly (see Eq. (1)). At $`h=0`$ the interface is in the ground state. It has a mean wall distance $`\overline{z}_0`$ and a width $`w\sim L^\zeta `$ in a system of transverse size $`L_z`$. As the field is increased the interfaces move intermittently with jumps to positions ($`\overline{z}_1,\overline{z}_2,\mathrm{\dots },\overline{z}_n,\mathrm{\dots }`$). This corresponds to a first-order transition. Instead of finite-size excitations the first change in the interface configuration is a macroscopic jump with zero overlap between the old and new states. The first transition point defines a jump field $`h_1`$. It assumes the role of a latent heat, and corresponds to the landscape-dependent energy to move the interface.
The two possible mechanisms are compared in the inset of Fig. 1. Either the interface adjusts itself gradually by forming ’bubbles’ or local excitations, or it jumps completely (compare with the main figure). The scenarios are linked to the structure of the energy landscape. If the first excitation is localized and has the transverse spatial extension $`\mathrm{\Delta }`$ ($`l\sim \mathrm{\Delta }^{1/\zeta }`$), the energy cost scales with $`\mathrm{\Delta }^{a/\zeta }`$ and the energy win in the field scales with $`h_1\mathrm{\Delta }^{1+(d-1)/\zeta }`$. Assuming that $`a=\theta `$ the jump field $`h_1\sim \mathrm{\Delta }^{\overline{\alpha }}=\mathrm{\Delta }^{\theta /\zeta -1-(d-1)/\zeta }`$. The exponent is negative, and thus small excitations are the more expensive ones. Numerically, the fraction of jumps leading to a non-zero overlap with the ground state decreases towards zero slowly with $`L`$. Also, the scaling function of the interface jump lengths approaches a constant shape. The mean jump length ($`\mathrm{\Delta }z_1=\overline{z}_0-\overline{z}_1`$, $`\overline{z}_1<\overline{z}_0`$) scales extensively, $`\mathrm{\Delta }z_1\sim L_z`$, not with e.g. $`L^\zeta `$.
So for small fields $`h`$ and $`L^\zeta \ll L_z`$ the sample-to-sample fluctuations lead to a discrete (wetting) transition. The average behavior $`\langle z(h)\rangle `$ and typical interface behavior $`\overline{z}(h)`$ do not coincide, since the asymptotic $`h\to 0`$ limit is dominated by the near-degeneracy of the ground state. In the limit $`L^\zeta \ll L_z`$ there are many independent ’valleys’ in the energy landscape for directed surfaces. Each of these has an energy $`E_n`$ corresponding to a local minimum and their energy difference to the ground state (with $`E_0`$) is expected to scale as for two independent sets of disorder, that is $`E_n-E_0\sim L^\theta `$. This energy difference equated with the jump energy $`h_1L^D\mathrm{\Delta }z_1`$ leads (with the choice $`L_z=L`$) to the scaling
$$h_1\sim L^{\theta -d}=L^{-\alpha }.$$
(2)
The jump field exponents are $`\alpha =5/3`$ and $`\alpha \approx 2.18`$ in $`d=2`$ and $`d=3`$ random bond systems, respectively. In $`d=3`$ random field interfaces have $`\alpha =5/3`$ ($`\zeta =2/3`$ and $`\theta =2\zeta +D-2`$). It is assumed that $`\mathrm{\Delta }z_1\sim L`$, since the valley energies are independent, except for the bias caused by the field $`h`$. Figure 2 compares the exponent values to numerical data with only the non-overlapping jumps being considered (without this pruning the same exponent is obtained asymptotically). For $`D=1`$ $`\alpha `$ becomes $`1.62\pm 0.04`$, close to the scaling estimate of 5/3. The inset shows the disorder-averaged jump distance $`\mathrm{\Delta }z_1`$ vs. $`L`$ and shows that the interface response geometry scales linearly with $`L`$ (as discussed above). For $`D=2`$ random bond manifolds we obtain $`\alpha \approx 2.2`$, in reasonable agreement again. In the limit $`\langle z_n(h)\rangle \approx \overline{z}_n(h)\sim w\sim L^\zeta `$ (after $`n`$ jumps of sizes $`\mathrm{\Delta }z_n=\overline{z}_{n-1}-\overline{z}_n`$) the mean-field wetting theory applies, and indeed we obtain for the depinning exponent for $`d=2`$ $`\psi \approx 1/2`$, and for $`d=(2+1)`$ $`\psi \approx 0.26`$, in rough accordance with the Lipowsky-Fisher prediction. In $`d=(2+1)`$ there are deviations including a dewetting transition for weak disorder and the exponent converges very slowly ($`\overline{z}_0\approx w\approx L^\zeta `$ at $`L\approx 10^4`$ if $`L_z=50`$).
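The quoted exponents follow directly from the scaling relations above; a compact numerical summary (our own check) is:

```python
# Jump-field exponent alpha from h1 ~ L^(theta-d) = L^(-alpha), and the
# local-excitation exponent abar = theta/zeta - 1 - (d-1)/zeta (negative)
cases = {"RB d=2": (2, 2/3.0, 1/3.0),
         "RB d=3": (3, 0.41, 2*0.41 + 2 - 2),   # theta = 2*zeta + D - 2
         "RF d=3": (3, 2/3.0, 2*2/3.0 + 2 - 2)}
for name, (d, zeta, theta) in cases.items():
    print(name, round(d - theta, 2), round(theta/zeta - 1 - (d - 1)/zeta, 2))
# -> alpha = 1.67, 2.18, 1.67 for the three cases
```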
If the initial interface position is random, the jump statistics are an average over the initial number of available valleys (recall that the field breaks the up-and-down-symmetry, see Fig. 1). Thus we also consider the limit in which the initial position is set to be inside a fixed-size window, $`\overline{z}_0/L_z=\mathrm{const}`$. We expect that the number of local valleys in the landscape, accessible with $`h>0`$, has a well-defined average (in the grand-canonical sense), and that the relevant scaling parameter is $`L_z/L^\zeta `$. Figure 3 shows the scaling function of the probability distribution $`P(h_1)`$ obtained with this initial condition. We find the form $`P(h_1/\langle h_1\rangle )=A(L)f(h_1/\langle h_1\rangle )`$ where $`A`$ depends on the energy gap scale $`L^\theta `$ and $`f`$ is a scaling function with the limiting behaviors $`f(x)\to 1,x\to 0`$ and $`f(x)\sim \mathrm{exp}(-ax^\beta ),x>1,\beta \approx 1.3`$. The distribution is constant for small fields and has an almost exponential cut-off. The scaling properties imply in particular that the disorder-averaged susceptibility diverges. The change in magnetization is given by the number of interfaces that have moved times the mean distance $`\mathrm{\Delta }z_1`$. Thus the divergence is not $`\chi _{tot}\sim L^3`$. Figure 4 shows the average jump field in the fixed height ensemble with varying $`L_z`$ and constant $`L`$. We have fitted the data with $`h_1\sim L_z^{-\gamma }`$, and the best fit is obtained by the scaling exponent $`\gamma \approx 4/3`$.
Consider now the energy landscape for small $`h`$. It has $`k=1,\mathrm{\dots },N_z`$ associated minima ($`N_z\sim L_z/L^\zeta `$) with the energies $`E_k`$ picked out of an associated energy gap probability distribution $`\widehat{P}(\mathrm{\Delta }E_k)`$, where $`\mathrm{\Delta }E_k=E_k-E_0`$ and $`E_0`$ is the ground state energy. When $`h>0`$, all the local minima attain an energy of $`E_k-h\mathrm{\Delta }z_k`$ with respect to the reference state with $`\overline{z}_0`$ and $`E_0`$. Now we make the assumption, analogous to the Random Energy Model, that all the gap energies $`\mathrm{\Delta }E_k`$ are independent random variables. We can now simply compute the probability for the original ground state being stable for any $`h`$ (i.e. no jump has taken place) by the joint probability $`P_0`$ that all the $`E_k-h\mathrm{\Delta }z_k`$’s are still higher than the original one with the given $`h`$. $`-\partial P_0/\partial h`$ gives then the probability that this level crossing occurs at exactly $`h`$. By computing
$$-\frac{\partial P_0}{\partial h}=\mathrm{e}^{-\int _1^{N_z}\int _0^{kh/N_z}\widehat{P}(x)\,dx\,dk}\int _1^{N_z}\frac{(k/N_z)\,\widehat{P}(kh/N_z)}{1-\int _0^{kh/N_z}\widehat{P}(x)\,dx}\,dk$$
(3)
one can show that the only $`\widehat{P}`$ that reproduces the numerical $`P(h_1)`$ is a constant one, whereas all other functional forms of $`\widehat{P}`$ fail, see Fig. 3. This $`\widehat{P}`$ is in fact exactly the marginal one needed for the susceptibility per spin $`\chi =\mathrm{lim}_{h\to 0}\partial \overline{z}/\partial h`$ to diverge in the thermodynamic limit. In particular for a distribution $`P(h_1)`$ that vanishes in the zero field limit the susceptibility would stay finite. Using the obtained form for the probability distribution gives $`\chi \sim L^\theta \left(\frac{L_z}{L^\zeta }\right)^\gamma `$ where $`\gamma =1`$ and the factor $`L_z/L^\zeta `$ relates to the density of valleys. This slightly disagrees with the above result ($`\gamma \approx 4/3`$) since with $`L=\mathrm{const}`$ $`\chi \sim L_z^\gamma `$, $`\gamma =1`$. In the isotropic limit $`L\sim L_z`$ the extensive susceptibility simply reads $`\chi _{tot}=L^d\chi \sim L^{d+1+\theta -\zeta }=L^{2D+\zeta }`$. To conclude, $`\chi `$ (or $`\chi _{tot}`$) is determined by the exact low-energy properties of $`\widehat{P}`$, or by the rare events in the low $`\mathrm{\Delta }E`$ tail.
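The level-crossing construction is easy to simulate. The sketch below assumes, as in the text, independent gaps drawn from a constant distribution and jump distances growing linearly with the valley index; the resulting histogram of first-crossing fields is flat at small $`h_1`$, as in Fig. 3:

```python
import numpy as np

rng = np.random.default_rng(1)
N_z, E_max, n_samp = 50, 1.0, 100000
k = np.arange(1, N_z + 1)
dz = k / N_z                                      # relative jump distances
dE = rng.uniform(0.0, E_max, size=(n_samp, N_z))  # constant gap distribution
h1 = (dE / dz).min(axis=1)                        # first level crossing per sample
hist, _ = np.histogram(h1, bins=40, range=(0, 3*h1.mean()), density=True)
print(hist[:6])  # roughly constant P(h1) as h1 -> 0
```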
To summarize we have studied the coupling between the energy landscape structure and the response of interfaces, related for instance to complete wetting. A disorder averaging that reflects correctly the level-crossing character of the problem reveals that the wetting starts with a discrete transition. Thus the randomness of the energy landscape turns a second-order transition into a first-order one. The ’jump’ is associated with an effective specific heat, which can be understood in terms of scaling arguments. The susceptibility is governed by the infrequent cases with low-lying local minima, which allows us to derive a constant energy gap probability distribution. The results should be relevant for other problems like flux line lattices in superconducting materials with quenched randomness. It will also be of interest to see if the energetics and the geometrical character of the response can be coupled with arguments concerning the energy barriers in each specific configuration. This would allow one to understand the dynamics in the creep regime, when the interface moves between metastable states.
Phil Duxbury is acknowledged for a crucial suggestion, and Simone Artz, Martin Dubé, and Heiko Rieger for discussions. We thank the Academy of Finland for support.
# SUPERNOVAE
## 1 Introduction
As their name indicates, supernovae (SNe) are discovered in the sky as “new stars” (-novae) of exceptionally high brightness (super-). The fact that SNe are formidable explosions completely different from, and vastly more energetic than, classical novae (novae are produced by sudden nuclear ignition of a very thin layer of hydrogen near the surface of a degenerate star accreting matter from a binary companion) was first recognized by Baade and Zwicky (1934). They noticed that novae during explosion become no brighter than about 1 million times (i.e. 15 mag) what they are in a quiescent phase. Therefore, any historical event in our Galaxy that had reached a magnitude as bright as 0 or brighter but is not detectable at present had to belong to a separate class of intrinsically brighter objects. And indeed, the distribution of observed magnitudes of explosive events detected in galaxies of the Local Group indicated the presence of two peaks, one at the expected brightness of classical novae and another at luminosities more than a thousand times brighter.
Supernovae represent the explosive death of both low mass stars (type Ia) and moderate and high mass stars (types Ib/c and II). They are extremely bright, roughly $`10^9L_{\odot }`$ (in astronomy the symbol $`\odot `$ denotes the Sun; thus $`L_{\odot }=3.8\times 10^{33}\mathrm{erg}\ \mathrm{s}^{-1}`$ is the solar luminosity and $`M_{\odot }=2.0\times 10^{33}\mathrm{g}`$ is the solar mass), rivalling, for a few days, the combined light of the entire host galaxy. In all cases, a SN explosion injects highly metal-enriched material (at least 1 $`M_{\odot }`$) and a conspicuous amount of kinetic energy (about 10<sup>51</sup> ergs) into the surrounding medium (see Section 2.1). In addition, the blast waves from SN explosions produce powerful sources of radio and X-ray emission – supernova remnants – that can be seen and studied many thousands of years after the event (see Section 2.2). Therefore, it is clear that SN explosions are crucial events that determine most of the aspects of the evolution of galaxies, i.e. most of the visible Universe.
Some SNe in our Milky Way galaxy have been close enough to be visible to the naked eye, and records of their occurrence can be found in ancient annals. In particular, during the past 2000 years 9 such events have been recorded. A few of these events were very bright. The supernova of 1006 AD, for example, was about 1/10 as bright as the full moon! The last supernova to be seen in our Galaxy was discovered in 1604 by the famous astronomer Kepler. On the basis of these historical records one may infer that the average rate of SN explosions in the Galaxy is of the order of 5 per millennium. However, one has to allow for the fact that most SNe are either too far or are too obscured by dark dust clouds of the galactic disk to be visible. Actually, one can estimate that only about 10% have been close enough and bright enough to be detectable by the naked eye. Therefore, a more realistic SN explosion rate for our Galaxy is about one every twenty years (see also Section 2.3).
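The correction from the observed to the true Galactic rate is a one-line estimate (using the 10% visibility fraction quoted above):

```python
events, years, visible_fraction = 9, 2000.0, 0.10
rate = events / years / visible_fraction   # SNe per year in the whole Galaxy
print(1.0 / rate)                          # -> one SN every ~22 years
```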
Being so bright, SNe are ideal probes of the distant Universe. And indeed studies of SNIa up to redshifts $`\sim 1.2`$ have allowed us to explicitly measure both the local expansion rate of the Universe and other cosmological parameters (see Section 2.4). The brightest supernova discovered in the last three centuries is supernova 1987A in the Large Magellanic Cloud, a small satellite galaxy to the Milky Way. Section 3 is devoted to it.
## 2 Properties of Supernovae
### 2.1 Supernova Types
Morphologically, supernovae are divided into two main classes, Type I and Type II, according to the main criterion of whether their spectra (thus, their ejecta) contain Hydrogen (Type II) or no Hydrogen (Type I).
Type II SNe are produced by the core collapse of massive stars, say, more massive than 8 $`M_{\odot }`$ and at least as massive as 20 $`M_{\odot }`$ (SN 1987A) or even 30 or more $`M_{\odot }`$ (SN 1986J). Thus, the lifetime of a SNII progenitor is shorter than about 100 million years (and can be as short as a few million years). Therefore, SNII can be found only in galaxies that are either just formed or that have efficient, ongoing star formation, such as spiral and irregular galaxies.
The class of Type I supernovae has been recognized (e.g., Panagia 1985) to consist of two subclasses, Type Ia and Type Ib/c that, although sharing the common absence of Hydrogen, are widely apart in other properties and, especially, in their origins. The spectroscopic criterion to discern the two subclasses from each other is the presence (Ia) or absence (Ib/c; they are classified Ib if strong He lines are present in their spectra, and Ic otherwise) of a strong Si<sup>+</sup> 6150Å absorption feature which is prominent in their early epoch spectra. The astrophysical difference between Type Ia and Ib/c SNe is that the former are found in all types of galaxies, from ellipticals through spirals to irregulars, whereas the latter are found exclusively in spiral galaxies, mostly associated with spiral arms and frequently in the vicinities of large ionized nebulae (giant HII regions). These characteristics indicate that SNIb/c are the end result of a relatively young population of stars (ages less than 100 million years) while SNIa progenitors must be stellar systems that have considerably longer lifetimes, of the order of 10<sup>9</sup> years or more.
The progenitors of SNIa are believed to be stars that would not produce a SN explosion if they were single stars but that end up exploding because, after reaching the white dwarf stage, they accrete enough mass from a binary companion to exceed the Chandrasekhar mass, and ignite explosive nucleosynthesis in their cores. This “nuclear bomb” process is expected to disrupt the entire star while synthesizing about 0.6 $`M_{\odot }`$ of radioactive <sup>56</sup>Ni, which will power the SN optical light curves. SNIa are very luminous objects and form a quite homogeneous class of SNe, both in their maximum brightness and their time evolution. Thus, SNIa constitute ideal “standard candles” for distance determinations on cosmological scales (see Sect. 2.4).
Type Ib/c, on the other hand, must be significantly more massive because they are only found in spiral galaxies, and often associated with their spiral arms: this suggests progenitor masses in excess of 5 $`M_{\odot }`$. Therefore, either they represent the upper end of the SNIa class or they are a subclass of core collapse supernovae, possibly massive stars that occur in binary systems and are able to shed most of their outer H-rich layers before undergoing the explosion.
### 2.2 Radio Properties
A series of papers published over the past 18 years on radio supernovae (RSNe) has established the radio detection and/or radio evolution for 25 objects: 2 Type Ib supernovae, 5 Type Ic supernovae, and 18 Type II supernovae. A much larger list of almost 80 more SNe has low radio upper limits (e.g., Weiler et al. 1986, 1998). A summary of the radio information can be found at: http://rsd-www.nrl.navy.mil/7214/weiler/sne-home.html.
All known RSNe appear to share common properties of: 1) non-thermal synchrotron emission with high brightness temperature; 2) a decrease in absorption with time, resulting in a smooth, rapid turn-on first at shorter wavelengths and later at longer wavelengths; 3) a power-law decline of the flux density with time at each wavelength after maximum flux density (optical depth $`\approx 1`$) is reached at that wavelength; and 4) a final, asymptotic approach of spectral index $`\alpha `$ to an optically thin, non-thermal, constant negative value.
The current model for radio supernovae includes acceleration of relativistic electrons and compression of the magnetic field, necessary for synchrotron emission. These processes occur at the SN shock interface with a relatively high-density circumstellar medium (CSM) which has been ionized and heated by the initial UV/X-ray flash (Chevalier 1982a,b). This CSM, which is also the source of the initial absorption, is presumed to have been established by a constant mass-loss ($`\dot{M}`$) rate, constant velocity ($`w`$) wind (i.e., $`\rho \propto r^{-2}`$) from a red supergiant (RSG) progenitor or a binary companion.
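The turn-on/decline behavior summarized above is commonly encoded in the parameterization of Weiler et al. (1986): a synchrotron power law attenuated by external CSM absorption. The sketch below uses that functional form with purely illustrative constants:

```python
import numpy as np

def rsn_flux(t_days, nu_GHz, K1=4e3, alpha=-0.75, beta=-0.8, K2=1e5, delta=-2.5):
    """Flux density (arbitrary units): power-law decline times exp(-tau),
    with tau ~ nu^-2.1 so that longer wavelengths turn on later."""
    tau = K2 * (nu_GHz/5.0)**(-2.1) * t_days**delta
    return K1 * (nu_GHz/5.0)**alpha * t_days**beta * np.exp(-tau)

t = np.logspace(1.5, 4.0, 6)   # days since explosion
print(rsn_flux(t, 1.47))       # 20 cm: late turn-on
print(rsn_flux(t, 4.88))       # 6 cm: earlier turn-on
```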
In our extensive study of the radio emission from SNe, several effects have been noted: 1) Type Ia are not radio emitters to the detection limit of the VLA (the VLA is operated by the NRAO of the AUI under a cooperative agreement with the NSF); 2) Type Ib/c are radio luminous with steeper spectral indices and a fast turn-on/turn-off, usually peaking at 6 cm near or before optical maximum; and 3) Type II show a range of radio luminosities with flatter spectral indices and a relatively slow turn-on/turn-off. These results lead to the conclusion that most SNII progenitors were RSGs, SNIb/c result from the explosion of more compact stars, members of relatively massive binary systems, and SNIa progenitors had little or no appreciable mass loss before exploding, excluding scenarios that involve binary systems with red giant companions. In some individual cases, it has also been possible to detect thermal hydrogen along the line of sight (Montes, Weiler & Panagia 1997, Chu et al. 1999), to demonstrate binary properties of the stellar system, and to show clumpiness of the circumstellar material (e.g., Weiler, Sramek & Panagia 1990). More speculatively, it may be possible to provide distance estimates to radio supernovae (Weiler et al. 1998).
As an illustration we show the case of SN 1979C, which exploded in April 1979 in the spiral galaxy NGC 4321=M100. This supernova was first detected in the radio in early 1980 (Weiler et al. 1981) and is still bright enough to be accurately measured at different frequencies, thus offering a unique opportunity to do a very thorough study of its radio properties, the nature of the radio emission mechanisms and the late evolution of the SN progenitor. Figure 1 displays the time evolution of SN 1979C radio flux at two frequencies (1.47 and 4.88 GHz). One can recognize the “canonical” properties (non-thermal spectral index, flux peaking at later times for lower frequencies, asymptotic power-law decline) that allow one to estimate the circumstellar material distribution, corresponding to a constant velocity pre-SN wind with a mass-loss rate of $`2\times 10^{-4}`$ $`M_{\odot }`$/year and a probable 20 $`M_{\odot }`$ progenitor. In addition, the almost sinusoidal modulation of the light curves reveals the presence of a 5 $`M_{\odot }`$ binary companion in a slightly elliptical orbit (Weiler et al. 1992). And the marked jump up of the flux about ten years after the explosion (Montes et al. 2000) suggests that the progenitor had a rather sudden change in its mass loss rate about 10,000 years before exploding, possibly due to pulsational instability (Bono & Panagia 1999, in preparation).
### 2.3 Supernova Rates
Determining the rates of SN explosions in galaxies requires knowing how many SNe have exploded in a large number of galaxies over the period of time during which they were monitored. Although it sounds easy, this process is rather tricky because data collected from the literature usually do not report the control times over which the searches were conducted. On the other hand, more systematic searches that record all needed information have been started rather recently and the number of events thus recorded is rather limited, so that the statistics is still rather uncertain. In a recent study, Cappellaro et al. (1999) have thoroughly discussed this problem and, from the analysis of the combined data set available, have derived the most reliable SN rates for different types of galaxies. We have taken their rates and, for each galaxy class, we have renormalized them to the appropriate H-band ($`1.65\mu m`$) luminosity rather than the B-band ($`0.45\mu m`$) luminosity as done by Cappellaro et al. (1999). These new rates, displayed in Table 2, are essentially rates per unit galaxy mass because the H-band luminosity of a galaxy is roughly proportional to its mass. We see that SN rates closely reflect the star formation activity of the various classes, not only for type II and Ib/c SNe but also for SNIa. In particular, the rates for SNII-Ib/c are 3-4 times higher in late type spirals (Sbc-d) and irregulars than they are in early type spirals (S0-Sb): this is clear evidence that star formation is considerably more active in the former than it is in the latter group. Also, we notice that late type galaxies (i.e. the ones with most active star formation, Sbc through Irr) have SNIa rates which are 4-10 times higher than those of the earliest type galaxies (i.e. E-S0). This is a new result (Panagia 1999, in preparation) and implies that SNIa progenitors are intermediate mass stars (say, $`8>M/M_{\odot }>3`$) and that early type galaxies are likely to capture and accrete star forming galaxies on a time scale of one to a few billion years to replenish their reservoir of SNIa progenitors.
Recent estimates of the global history of star formation in the Universe were used by Madau, Della Valle & Panagia (1998) to compute the theoretical Type Ia and Type II SN rates as a function of cosmic time from the present epoch to high redshifts. They show that accurate measurements of the frequency of SN events already in the range $`0<z<1`$, and even more so at higher redshifts, will be valuable probes of the nature of Type Ia progenitors and the evolution of the stellar birthrate in the Universe.
### 2.4 Cosmological Applications
As mentioned before, SNIa are virtually ideal standard candles (e.g., Hamuy et al. 1996) to measure distances of truly distant galaxies, currently up to redshift around 1 and considerably more in the foreseeable future (for a review, see Macchetto and Panagia 1999). In particular, Hubble Space Telescope observations of Cepheids in parent galaxies of SNe Ia (an international project led by Allan Sandage) have led to very accurate determinations of their distances and the absolute magnitudes of SNIa at maximum light, i.e. $`M_B=-19.50\pm 0.06`$ and $`M_V=-19.49\pm 0.06`$ (e.g., Sandage et al. 1996, Saha et al. 1999). Using these calibrations it is possible to determine the distances of much more distant SNe Ia. A direct comparison with the Hubble diagram (i.e. a plot of the observed magnitudes of SNIa versus their cosmological velocities) of distant SNe Ia ($`30,000\ \mathrm{km\ s^{-1}}>v>3,000\ \mathrm{km\ s^{-1}}`$) gives a Hubble constant (i.e. the expansion rate of the local Universe) of $`H_0=60\pm 6\ \mathrm{km\ s^{-1}\ Mpc^{-1}}`$ (Saha et al. 1999). Studying more distant SNIa (i.e. $`z>0.1`$) it has been possible to extend our knowledge to other cosmological parameters. The preliminary results of two competing teams (Riess et al. 1998, Perlmutter et al. 1999) agree in indicating a non-empty inflationary Universe with parameters lying along the line $`0.8\mathrm{\Omega }_M-0.6\mathrm{\Omega }_\mathrm{\Lambda }=-0.2\pm 0.1`$. Correspondingly, the age of the Universe can be bracketed within the interval 12.3–15.3 Gyrs to a 99.7% confidence level (Perlmutter et al. 1999).
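The distance-ladder step condenses to a few lines: the calibrated absolute magnitude converts an apparent peak magnitude into a distance, and the recession velocity then gives $`H_0`$. The apparent magnitude and velocity below are invented for illustration only:

```python
M_V = -19.49                  # calibrated SNIa peak absolute magnitude
m_V, v = 16.5, 9000.0         # hypothetical peak magnitude and velocity (km/s)
d_Mpc = 10**((m_V - M_V + 5)/5) / 1e6   # distance modulus -> distance in Mpc
print(v / d_Mpc)              # H0 in km/s/Mpc, ~60 for these inputs
```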
## 3 Supernova 1987A in the Large Magellanic Cloud
### 3.1 The Early Story
Supernova 1987A was discovered on February 24, 1987 in the nearby, irregular galaxy, the Large Magellanic Cloud, which is located in the southern sky. SN 1987A is the first supernova to reach naked eye visibility after the one studied by Kepler in 1604 AD and is undoubtedly the supernova event best studied ever by the astronomers. Actually, despite the fact that SN 1987A has been more than hundred times fainter than its illustrious predecessors in the last millennium, it has been observed in such a detail and with such an accuracy that we can define this event as a first under many aspects (e.g. neutrino flux, identification of its progenitor, gamma ray flux) and in any case as the best of all. Reviews of both early and more recent observations and their implications can be found in Arnett et al. (1989) and Gilmozzi and Panagia (1999), respectively.
SN 1987A's early evolution has been highly unusual and completely at variance with the wisest expectations. It brightened much faster than any other known supernova: in about one day it jumped from 12th up to 5th magnitude at optical wavelengths, corresponding to an increase of about a factor of a thousand in luminosity. However, equally soon its rise leveled off and took a much slower pace, indicating that this supernova would never reach those peaks in luminosity that the astronomers were expecting. Similarly, in the ultraviolet, the flux initially was very high, even higher than in the optical. But since the very first observation, made with the International Ultraviolet Explorer (IUE in short) satellite less than fourteen hours after the discovery, the ultraviolet flux declined very quickly, by almost a factor of ten per day for several days. It looked as if it was going to be a quite disappointing event and, for sure, quite peculiar, thus not suited to provide any useful information about the other more common types of supernova explosions. But, fortunately, this proved not to be true and soon it became apparent that SN 1987A is the most valuable means to test our ideas and theories about the explosion of supernovae.
And even particle emission was directly measured from Earth: on February 23, around 7:36 Greenwich time, the neutrino telescope (”Kamiokande II”, a big cylindrical “tub” of water, 16 m in diameter and 17 m in height, containing about 3300 m<sup>3</sup> of water, located in the Kamioka mine in Japan, about 1000 m underground) recorded the arrival of 9 neutrinos within an interval of 2 seconds and 3 more 9 to 13 seconds after the first one. Simultaneously, the same event was revealed by the IMB detector (located in the Morton-Thiokol salt mine near Fairport, Ohio) and by the “Baksan” neutrino telescope (located in the North Caucasus Mountains, under Mount Andyrchi) which recorded 8 and 5 neutrinos, respectively, within few seconds from each other. This makes a total of 25 neutrinos from an explosion that allegedly produces 10 billions of billions of billions of billions of billions of billions of them! But a little more than two dozens neutrinos was more than enough to verify and confirm the theoretical predictions made for the core collapse of a massive star (e.g., Arnett et al. 1989 and references therein). This process was believed to be the cause of the explosion of massive stars at the end of their lives, and SN 1987A provided the experimental proof that the theoretical model was sound and correct, promoting it from a nice theory to the description of the truth.
### 3.2 SN 1987A Progenitor Star
From both the presence of hydrogen in the ejected matter and the conspicuous flux of neutrinos, it was clear that the star which had exploded was quite massive, about twenty times more than our Sun. And all of the disappointing peculiarities were due to the fact that just before the explosion the supernova progenitor was a blue supergiant star instead of being a red supergiant as common wisdom was predicting. There is no doubt about this explanation because SN 1987A is exactly at the same position as that of a well known blue supergiant, Sk $`-69^{\circ }`$ 202. And the IUE indicated that such a star was not shining any more after the explosion: the blue supergiant had gone BANG (Gilmozzi et al. 1987, Kirshner et al. 1987).
On the other hand, common wisdom cannot be wrong and it was not quite wrong, after all. At later times, in late May 1987, the IUE revealed the presence of emission lines of nitrogen, oxygen, carbon and helium in the ultraviolet spectrum. They kept increasing in intensity with time and proved to be quite narrow, indicating that the emitting matter was moving at much lower speeds (more than a factor of a hundred slower) than the supernova ejecta. The chemical abundances and the slow motion were clear signs that that was matter ejected by a red supergiant in the form of a gentle wind. But there was no such a star in sight just before the explosion. Therefore, the same star that exploded had also been a red supergiant, less than a hundred thousand years before the explosion itself: a short time in the history of the star but quite enough to make all the difference.
### 3.3 Explosive Nucleosynthesis
The optical flux reached a maximum around mid-May, 1987, and declined at a quick pace until the end of June, 1987, when rather abruptly it slowed down, setting at a much more gentle decline of about 1% a day (Pun et al. 1995). Such a decay has been followed since then quite regularly: a perfectly constant decay with a characteristic time of 114 days, just the same as that of the radioactive isotope of cobalt, <sup>56</sup>Co, while transforming into iron. This is the best evidence for the occurrence of nucleosynthesis during the very explosion: <sup>56</sup>Co is in fact the decay product of <sup>56</sup>Ni, and the latter can be formed at the high temperatures which occur after the core collapse of a massive star. So now, not only are we sure that such a process is operating in a supernova explosion, just as theorists predicted, but we can also determine the amount of nickel produced in the explosion, slightly less than 8/100 of a solar mass or, approximately, 1% of the mass of the stellar core before the explosion. And the hard X-ray emission detected since July 1987 and the subsequent detection of gamma-ray emission confirm the reality of this process and provide more details about its exact occurrence (e.g., Arnett et al. 1989 and references therein).
### 3.4 HST Observations
The Hubble Space Telescope was not in operation when the supernova exploded, but it did not miss its opportunity in due time and its first images, taken with the ESA-FOC on August 23 and 24, 1990, revealed the inner circumstellar ring in all its “glory” and detail (cf. Jakobsen et al. 1991), showing that, despite spherical aberration, HST was not a complete disaster, after all.
Since those early times, Hubble has kept an attentive eye on SN 1987A, obtaining both imaging and spectrographic observations (e.g., Fig. 2) at least once a year, accumulating valuable data and revealing quite a number of interesting results (see Gilmozzi & Panagia 1999), such as:
- The sequence of images obtained over more than 8 years has allowed us to measure the expansion of the supernova material directly: this is the first time such a measurement has ever been possible, and it has allowed us to identify the correct models for understanding the explosion phenomenon (Pun et al. 1999, in preparation).
- The origin and the nature of the beautiful circumstellar rings are still partly a mystery. They have been measured to expand rather slowly, about 10-20 $`\mathrm{km\ s^{-1}}`$, i.e. 100-2000 times slower than the SN ejecta, and to be highly N rich: both these aspects indicate that the rings were expelled from the progenitor star when it was a red supergiant, about 20,000 years before the explosion (Panagia et al. 1996). However, one would have expected such a star to eject material in a more regular fashion, just pushing away material gently in all directions rather than puffing rings like a pipe smoker. Another puzzle is that the star was observed to be a “blue” supergiant in the years before the explosion, and not a red supergiant anymore. This forces one to admit that the star had a rather fast evolution, which was not predicted by “standard” stellar evolution theory, and still is hard to understand fully.
- The highest velocity material expelled in the SN 1987A explosion has been detected for the first time by the Space Telescope Imaging Spectrograph (STIS) (e.g., Sonneborn et al. 1998). The spectrograph has found the first direct evidence for material from SN 1987A colliding with its inner circumstellar ring. The fastest debris, moving at 15,000 $`\mathrm{km\ s^{-1}}`$, are now colliding with the slower moving gas of the inner circumstellar ring (Fig. 2).
In less than a decade the full force of the supernova fast material will hit the inner ring, heating and exciting its gas and producing a new series of cosmic fireworks that will offer a spectacular view for several years. This is going to be the “beginning of the end” because in about another century most, if not all, the material in the rings will be swept away and disappear, losing their identities and merging into the interstellar medium of the Large Magellanic Cloud. This is not a complete loss, however, because by studying this destructive process, we will be able to probe the ring material with a detail and an accuracy which are not possible with current observations.
# Meissner - London state in superconductors of rectangular cross-section in perpendicular magnetic field
## Abstract
The distribution of magnetic induction in the Meissner state with finite London penetration depth is analyzed for platelet samples of rectangular cross-section in a perpendicular magnetic field. The exact 2D numerical solution of the London equation is extended analytically to the realistic 3D case. Data obtained on Nb cylinders and foils as well as single crystals of YBCO and BSCCO are in good agreement with the model. The results are particularly relevant for magnetic susceptibility, rf and microwave resonator measurements of the magnetic penetration depth in high-$`T_c`$ superconductors.
The temperature and field dependencies of the magnetic penetration depth yield basic information about the microscopic pairing state of a superconductor as well as vortex static and dynamic behavior. Since most high-$`T_c`$ superconductors are highly anisotropic, a measurement in which the applied magnetic field lies at an arbitrary angle relative to the conducting planes yields a Meissner response arising from both in-plane and inter-plane supercurrents. The corresponding penetration depths $`\lambda _{ab}`$ and $`\lambda _c`$ can differ widely in their magnitude and temperature dependence and it is desirable to separate the two contributions to the measured penetration depth. To study $`\lambda _{ab}`$ one must resort to a configuration in which the applied field is normal to the conducting planes so as to generate only in-plane supercurrents. Unfortunately, the London equations in this geometry cannot be solved analytically, making it difficult to reliably relate the experimental response (typically a frequency shift or change in magnetic susceptibility) to changes in $`\lambda _{ab}`$. Exact analytical solutions are known only for special geometries: an infinite bar or cylinder in longitudinal field, a cylinder in perpendicular field, a sphere, or a thin film. These solutions are not practical since most high-$`T_c`$ superconducting crystals are thin plates with aspect ratios typically ranging from 1 to 30. Brandt developed a general numerical method to calculate magnetic susceptibility for plates and discs but this method is difficult to apply in practice and the solutions are limited to two dimensions.
In this paper we describe the numerical solution of the London equations in two dimensions for long slabs in a perpendicular field. The results are then extended analytically to three dimensions. We first compare our calculations in the limit of $`\lambda =0`$ with SQUID measurements on cylindrical Nb samples of differing aspect ratio. We then compare our calculations for finite $`\lambda `$ with data from Nb foils and platelets of both BSCCO and YBCO high-$`T_c`$ superconductors, obtained by using an rf LC resonator. Using numerical results and analytical approximations we derive a formula which can be used to interpret frequency shift data obtained from rf and microwave resonator experiments as well as sensitive magnetic susceptibility measurements.
Consider an isotropic superconducting slab of width $`2w`$ in the $`x`$-direction, thickness $`2d`$ in the $`y`$-direction, and infinite in the $`z`$ direction. A uniform magnetic field $`H_0`$ is applied along the $`y`$-direction. In this $`2D`$ geometry the vector potential is $`𝐀=\{0,0,A\}`$, so that the magnetic field has only two components $`𝐇=\{\partial A/\partial y,-\partial A/\partial x,0\}`$ and the London equation takes the form: $`\mathrm{\Delta }A-\lambda ^{-2}A=0`$. Outside the sample $`\mathrm{\Delta }A=-4\pi j/c=0`$ and $`\partial A/\partial n`$ is continuous along the sample boundary. Here $`n`$ is the direction normal to the sample surface. A numerical solution of this equation was obtained using the finite-element method on a triangular adaptive mesh with a Gauss-Newton iteration scheme. The boundary conditions were chosen to obtain a constant magnetic field far from the sample, i.e., $`A(x,y)=H_0x`$ for $`y\gg d`$ and $`x\gg w`$.
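A much cruder finite-difference relaxation (not the adaptive finite-element scheme used here) already reproduces the qualitative field distribution. The sketch below assumes a square uniform grid, $`H_0=1`$, and Jacobi iteration:

```python
import numpy as np

L_box, w, d, lam, n = 4.0, 1.0, 0.2, 0.5, 201
x = np.linspace(-L_box, L_box, n)
h = x[1] - x[0]
A = np.tile(x, (n, 1))                                   # initial guess A = H0*x
inside = (np.abs(x)[None, :] <= w) & (np.abs(x)[:, None] <= d)

for _ in range(30000):
    nb = (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
          np.roll(A, 1, 1) + np.roll(A, -1, 1))
    A_new = np.where(inside, nb/(4 + h**2/lam**2), nb/4)  # London / Laplace
    A_new[0, :] = A_new[-1, :] = x                        # fixed far field
    A_new[:, 0], A_new[:, -1] = x[0], x[-1]
    if np.max(np.abs(A_new - A)) < 1e-9:
        break
    A = A_new

Hy = np.gradient(A, h, axis=1)   # field component along y, up to sign convention
```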
Figure 1 presents the distribution of the magnetic field in and around the sample with $`w/d=5`$ and $`\lambda /d=0.5`$. The black color on a gray scale image corresponds to $`\left|𝐁\right|=0`$. The left half of the sample shows contour lines of the vector potential. Figure 2 shows profiles of the y-component of the magnetic field at different distances $`y`$ from the sample middle plane.
The inset shows the corresponding profiles of the vector potential, normalized by its value $`A^0(x=w)`$ in the absence of a sample (a uniform-field curve $`A^0=x`$ is shown by the dotted line). Using the London relation $`4\pi \lambda ^2j=-cA`$ and the definition of the magnetic moment $`M=(2c)^{-1}\int 𝐫\times 𝐣\,d^3r`$ we calculate numerically the susceptibility per unit volume (unit of surface cross-section in 2D case):
$$4\pi \chi =\frac{1}{dw\lambda ^2H_0}\int _0^d dy\int _0^wA(x,y)\,x\,dx$$
(1)
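Given a solution on a grid, Eq. (1) can be discretized directly; a sketch compatible with the solver above (all function and variable names are ours, $`H_0=1`$ units):

```python
def chi_4pi(x, y, A, inside, lam, w, d):
    """Discretized Eq. (1): 4*pi*chi from the quadrant x, y >= 0."""
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    quad = inside & (X >= 0.0) & (Y >= 0.0)
    return (X[quad] * A[quad]).sum() * h * h / (d * w * lam * lam)
```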
It is easy to check that for an infinite slab of width $`2w`$ in parallel field, where $`A=-\lambda H_0\mathrm{sinh}(x/\lambda )/\mathrm{cosh}(w/\lambda )`$, Eq.(1) results in a known expression similar to Eq.(4) below (with $`N=0`$ and $`R=w`$). In finite geometry there will be a contribution to the total susceptibility from the currents flowing on the top and bottom surfaces. These currents are due to the shielding of the in-plane component of the magnetic field, $`H_x=\partial A/\partial y`$, which appears due to demagnetization. Figure 3 shows profiles of $`H_x`$ on the sample surface, at $`y=d`$, calculated for three different samples, with $`w/d=`$ 8, 5, and 2.5. An analytical form for the surface magnetic field is known only for elliptical samples. We find, however, that it can be mapped onto the flat surface, so that the distribution of $`H_x`$ is given by:
$$H_x=\frac{H_0r}{\sqrt{a^2-r^2}}$$
(2)
where $`r\equiv x/w`$ and $`a^2=1+(2d/w)^2`$. This equation is similar to that obtained for ideal Meissner screening. Solid lines in Fig. 3 are the fits to Eq.(2). The agreement between the numerical and analytical results is apparent.
Next we find a simple analytical approximation to the exact numerical results by calculating the ratio of the volume penetrated by the magnetic field to the total sample volume. This procedure automatically takes into account demagnetization and the non-uniform distribution of the magnetic field along the sample top and bottom faces. The exact calculation requires knowledge of $`A(x,y)`$ inside the sample or of $`𝐇(x,y)`$ in a screened volume outside, proportional to $`w^2`$. The penetrated volume is:
$$V_p=\int _S\frac{\lambda \left|H_s\right|}{H_0}\,ds$$
(3)
where the integration is conducted over the sample surface in the $`3D`$ case or over the sample cross-section perimeter in the $`2D`$ case. Using Eq.(2) for the magnetic field on the top and bottom surfaces and assuming $`H_s=H_0/(1-N)`$ on the sides we obtain:
$$4\pi \chi =-\frac{1}{1-N}\left[1-\frac{\lambda }{R}\mathrm{tanh}\left(\frac{R}{\lambda }\right)\right]$$
(4)
Here $`N`$ is an effective demagnetization factor and $`R`$ is the effective dimension. Both depend on the dimensionality of the problem. As mentioned earlier, Eq.(4) is similar to the well-known solution for the infinite slab of width $`2w`$ in parallel field. In that case $`R=w`$ and the effective demagnetizing factor $`N=0`$. In a 3D case (a $`2w\times 2w`$ slab, infinite in the $`z`$ direction), $`R=w/2`$ and $`N=0`$. The $`\mathrm{tanh}(R/\lambda )`$ term in Eq. (4) was inserted to ensure the correct limit at $`\lambda \to \mathrm{\infty }`$. This correction becomes relevant at $`\lambda /R\gtrsim 0.4`$, which is realized only above $`T/T_c\approx 0.9`$ for typical high-$`T_c`$ samples.
For the actual geometry studied here, both $`R`$ and $`N`$ depend upon the aspect ratio $`w/d`$. Unlike the case of an elliptical cross-section, the magnetic field is not constant within the sample, so there is no true demagnetizing factor for a slab. However, $`N`$ can still be defined in the limit $`\lambda \to 0`$, through the relation $`4\pi M/V_s=-H/\left(1-N\right)`$. We find numerically that in the 2D case, for not too large aspect ratios $`w/d`$, $`1/(1-N)\approx 1+w/d`$. Calculating the expelled volume as described above, the effective dimension $`R`$ is given by:
$$R_{2D}=\frac{w}{1+\mathrm{arcsin}(a^{-1})}$$
(5)
In the thin limit, $`d\ll w`$ ($`a\to 1`$), we obtain $`R_{2D}\approx 0.39w`$.
The natural extension of this approach to the 3D disk of radius $`w`$ and thickness $`2d`$ leads to $`1/(1-N)\approx 1+w/2d`$ and
$$R_{3D}=\frac{w}{2\left(1+\left[1+\left(\frac{2d}{w}\right)^2\right]\mathrm{arctan}\left(\frac{w}{2d}\right)-\frac{2d}{w}\right)}$$
(6)
In the thin limit, $`R_{3D}\approx 0.2w`$. Eq.(6) was derived for a disk, but the more experimentally relevant geometry is a rectangular slab. There is no analytical solution for the slab. However, $`a^2=1+(2d/w)^2`$ is relatively insensitive to $`w`$ in the thin limit, and so we approximate $`w`$ for a slab by the geometric mean of its two lateral dimensions. The validity of this approach will be determined shortly.
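Collecting Eqs. (4)–(6) together with the demagnetization estimates quoted above gives a closed-form approximation that is trivial to evaluate; a sketch:

```python
import numpy as np

def chi_4pi_approx(lam, w, d, dim=3):
    """Approximate 4*pi*chi from Eqs. (4)-(6): dim=2 for the long slab,
    dim=3 for a disk of radius w and thickness 2d."""
    if dim == 2:
        a = np.sqrt(1.0 + (2.0 * d / w) ** 2)
        R = w / (1.0 + np.arcsin(1.0 / a))                   # Eq. (5)
        inv_1mN = 1.0 + w / d                                # 1/(1-N)
    else:
        t = 2.0 * d / w
        R = w / (2.0 * (1.0 + (1.0 + t * t) * np.arctan(1.0 / t) - t))  # Eq. (6)
        inv_1mN = 1.0 + w / (2.0 * d)
    return -inv_1mN * (1.0 - (lam / R) * np.tanh(R / lam))   # Eq. (4)
```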
To verify Eqs.(4) and (5) we calculated $`\chi \left(\lambda \right)`$ numerically. The result is shown in Fig. 4 by symbols. The solid line is a fit to Eq.(4) with $`N=0.86`$ and $`R/w=0.36`$. The effective dimension calculated using Eq. (5) gives $`R/w=0.39`$, and the corresponding susceptibility curve is shown as a dotted line. The calculated effective demagnetization factor is $`N=0.84`$. It is seen that our approximations are reasonably good. It should be borne in mind that these are all $`2D`$ results - the sample extends to infinity in the $`z`$-direction. Demagnetizing effects are significantly larger in two dimensions than in three, owing to the much slower decay of fields as one moves away from the sample (compare a $`3D`$ sphere, $`N=1/3`$, with a cylinder in perpendicular field, $`N=1/2`$). Therefore we expect our approximations to work better in three dimensions.
In the 3D case the validity of our results can be verified experimentally by independently measuring the demagnetization factor as a function of the aspect ratio and the magnetic susceptibility for a finite London penetration depth $`\lambda `$. To achieve the first goal, we measured niobium cylinders of radius $`w`$ and length $`2d`$ using a Quantum Design MPMS-5 SQUID magnetometer. Sample dimensions were typically of the order of millimeters, which allows us to disregard the London penetration depth of Nb (about 500 Å). The initial susceptibility obtained from the magnetization loops at $`T=8`$ K is shown in Fig. 5. The solid line is a plot of $`1+w/2d`$ (not a fit), and for aspect ratios up to $`w/d=10`$ the agreement is excellent.
To test our result for $`R`$ (Eq.6) in actual samples we need the magnetic penetration depth. It is common to measure changes in the penetration depth by using the frequency shift of a microwave cavity or an LC resonator. In these techniques, the relative frequency shift $`\left(f-f_0\right)/f_0`$ due to a superconducting sample is proportional to $`H^{-2}\int 𝐌_{ac}𝐇\,dV`$, which in turn is proportional to the sample linear magnetic susceptibility ($`𝐌_{ac}`$ is the ac component of the total magnetic moment, $`𝐇`$ is the external magnetic field and $`f_0`$ is the resonance frequency in the absence of a sample). Using Eq.(4) and Eq.(6) we obtain for $`\lambda \ll R`$:
$$\frac{\mathrm{\Delta }f}{f_0}=\frac{V_s}{2V_0\left(1-N\right)}\left(1-\frac{\lambda }{R}\right)$$
(7)
where $`V_s`$ is the sample volume and $`V_0`$ is the effective coil volume. The apparatus- and sample-dependent constant $`\mathrm{\Delta }f_0\equiv V_sf_0/(2V_0\left(1-N\right))`$ is measured directly by removing the sample from the coil. Thus, the change in $`\lambda `$ with respect to its value at low temperature is
$$\mathrm{\Delta }\lambda =-\delta f\frac{R}{\mathrm{\Delta }f_0}$$
(8)
where $`\mathrm{\Delta }\lambda \equiv \lambda \left(T\right)-\lambda \left(T_{\mathrm{min}}\right)`$ and $`\delta f\equiv \mathrm{\Delta }f(T)-\mathrm{\Delta }f(T_{min})`$.
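Applied pointwise to a temperature sweep, Eqs. (7)–(8) amount to a one-line conversion; a sketch (the sign convention follows the equations as written here):

```python
import numpy as np

def delta_lambda(delta_f, delta_f0, R):
    """Eq. (8): convert the measured resonator shifts delta_f(T) into the
    change of penetration depth; an increase of lambda lowers the shift."""
    return -np.asarray(delta_f) * R / delta_f0
```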
We used an rf tunnel-diode resonator to measure $`\delta f`$ in Nb foils, YBCO and BSCCO single crystals. Combining $`\delta f`$ with an independent measurement of $`\mathrm{\Delta }\lambda (T)`$ and a measured value for $`\mathrm{\Delta }f_0`$, we then arrived at an experimental determination of the effective dimension $`R`$. For the Nb and YBCO samples, $`\mathrm{\Delta }\lambda (T)`$ was obtained using the demagnetization-free orientation (rf magnetic field along the sample $`ab`$ plane), where $`R=w`$ and $`1/(1-N)=1`$. In BSCCO, the large anisotropy prohibits using this method and we used reported values of $`d\lambda /dT\approx `$ 10 Å/K. Figure 6 summarizes our experimental results. The upper line represents the “infinite slab” model, where $`R=w/2`$, whereas the lower solid line is $`R=0.2w`$, obtained in the thin limit of Eq. (6). Symbols show the experimental data obtained on the different samples indicated on the plot. In three samples, YBCO1 (w/d = 57), Nb1 (w/d = 29) and Nb2 (w/d = 15), $`R`$ agrees with Eq.(6) to better than 5%. The standard result, $`R=w/2`$, is too large by a factor of 2.5. Both YBCO2 and BSCCO give $`R`$ roughly 20% smaller than predicted. For the BSCCO data, it is possible that a sample tilt combined with the very large anisotropy of $`\lambda `$ produces an additional contribution from $`\lambda _c`$. If the c-axis is tilted by an angle $`\theta `$ away from the field direction, the frequency shift is given by
$`{\displaystyle \frac{\mathrm{\Delta }f}{f_0}}={\displaystyle \frac{V_s}{2V_0\left(1-N\right)}}\left(1-{\displaystyle \frac{\lambda _{ab}}{R}}\right)\mathrm{cos}^2\left(\theta \right)+`$ (9)
$`{\displaystyle \frac{V_s}{2V_0}}\left(1-\left[{\displaystyle \frac{\lambda _{ab}}{d}}+{\displaystyle \frac{\lambda _c}{w}}\right]\right)\mathrm{sin}^2\left(\theta \right)`$ (10)
The importance of the tilt depends upon the relative changes in $`\lambda _{ab}`$ and $`\lambda _c`$ with temperature. From Eq.(10) we obtain for the relative contribution to the frequency shift:
$$\frac{\delta f\left(\theta \right)}{\delta f\left(\theta =0\right)}\approx 1+\frac{2}{5}\mathrm{tan}^2\left(\theta \right)\left(1+\frac{d}{w}\frac{\mathrm{\Delta }\lambda _c}{\mathrm{\Delta }\lambda _{ab}}\right)$$
(11)
where we used the previous estimates of $`N`$ and $`R`$. For BSCCO we take $`d\lambda _c/dT\approx `$ 170 Å/K and $`d\lambda _{ab}/dT\approx `$ 10 Å/K, so Eq. (11) reduces to $`1+\mathrm{tan}^2\left(\theta \right)`$. We then find that for a sample tilt to produce an additional 20% frequency shift a misalignment of $`\theta \approx 20^{\circ }`$ would be required. Our estimated misalignment was a factor of 10 smaller than this, so the discrepancy between the measured and predicted $`R`$ was not due to tilt. Both the BSCCO and the YBCO2 samples were more rectangular than square, and our use of the geometric mean for $`w`$ could be the source of the error.
In conclusion, we have solved numerically the London equations for samples of rectangular cross-section in a perpendicular magnetic field. We obtained approximate formulae to estimate the finite-$`\lambda `$ magnetic susceptibility of platelet samples (the typical shape of high-$`T_c`$ superconducting crystals).
We thank M. V. Indenbom, E. H. Brandt, and J. R. Clem for useful discussions. This work was supported by Science and Technology Center for Superconductivity Grant No. NSF-DMR 91-20000. FMAM gratefully acknowledges Brazilian agencies FAPESP and CNPq for financial support.
# Electronic and Structural Properties of Carbon Nano-Horns
## Abstract
We use parametrized linear combination of atomic orbitals calculations to determine the stability, optimum geometry and electronic properties of nanometer-sized capped graphitic cones, called “nano-horns”. Different nano-horn morphologies are considered, which differ in the relative location of the five terminating pentagons. Simulated scanning tunneling microscopy images of the various structures at different bias voltages reflect a net electron transfer towards the pentagon vertex sites. We find that the local density of states at the tip, observable by scanning tunneling spectroscopy, can be used to discriminate between different tip structures. Our molecular dynamics simulations indicate that disintegration of nano-horns at high temperatures starts in the highest-strain region near the tip.
Since their first discovery, carbon nanotubes have drawn the attention of both scientists and engineers due to the large number of interesting new phenomena they exhibit, and due to their potential use in nanoscale devices: quantum wires, nonlinear electronic elements, transistors, molecular memory devices, and electron field emitters. Even though nanotubes have not yet found commercially viable applications, projections indicate that this should occur in the very near future, with the advent of molecular electronics and further miniaturization of micro-electromechanical devices (MEMS). Among the most unique features of nanotubes are their electronic properties. It has been predicted that single-wall carbon nanotubes can be either metallic or semiconducting, depending on their diameter and chirality. Recently, the correlation between the chirality and conducting behavior of nanotubes has been confirmed by high resolution scanning tunneling microscopy (STM) studies.
Even though these studies have demonstrated that atomic resolution can be achieved, the precise determination of the atomic configuration, characterized by the chiral vector, diameter, distortion, and position of atomic defects, remains very difficult for nanotubes. Much of the difficulty arises from the fact that the electronic states at the Fermi level are only indirectly related to the atomic positions. Theoretical modeling of STM images has been found crucial to correctly interpret experimental data for graphite, and has recently been applied to carbon nanotubes. As an alternative technique, scanning tunneling spectroscopy combined with modeling has been used to investigate the effect of the terminating cap on the electronic structure of nanotubes.
Among the more unusual systems that have been synthesized in the past few years are cone-shaped graphitic carbon structures. Whereas similar structures have been observed previously near the end of multi-wall nanotubes, it is only recently that an unusually high production rate of up to $`10`$ g/h has been achieved for single-walled cone-shaped structures, called “nano-horns”, using the $`\mathrm{CO}_2`$ laser ablation technique at room temperature in the absence of a metal catalyst. These conical nano-horns have the unique opening angle of $`20^{\circ }`$.
We consider a microscopic understanding of the electronic and structural properties of nano-horns a crucial prerequisite for understanding the role of terminating caps in the physical behavior of contacts between nanotube-based nano-devices. So far, neither nano-horns nor other cone-shaped structures have been investigated theoretically. In the following, we study the structural stability of the various tip morphologies, and the inter-relationship between the atomic arrangement and the electronic structure at the terminating cap, as well as the disintegration behavior of nano-horns at high temperatures.
Cones can be formed by cutting a wedge from planar graphite and connecting the exposed edges in a seamless manner. The opening angle of the wedge, called the disclination angle, is $`n(\pi /3)`$, with $`0\le n\le 6`$. This disclination angle is related to the opening angle of the cone by $`\theta =2\mathrm{sin}^{-1}(1-n/6)`$. Two-dimensional planar structures (e.g. a graphene sheet) are associated with $`n=0`$, and one-dimensional cylindrical structures, such as the nanotubes, are described by $`n=6`$. All other possible graphitic cone structures with $`0<n<6`$ have been observed in a sample generated by pyrolysis of hydrocarbons. According to Euler’s rule, the terminating cap of a cone with the disclination angle $`n(\pi /3)`$ contains $`n`$ pentagon(s) that substitute for the hexagonal rings of planar graphite.
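The correspondence between the number of pentagons $`n`$ and the cone opening angle can be tabulated directly; a short check:

```python
import numpy as np

# Opening angle theta = 2*arcsin(1 - n/6) for n pentagons in the cap
for n in range(7):
    theta = 2.0 * np.degrees(np.arcsin(1.0 - n / 6.0))
    print(f"n = {n}: opening angle = {theta:.1f} deg")
# n = 5 gives ~19.2 deg, i.e. the ~20 deg opening angle of nano-horns
```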
The observed cone opening angle of $`20^{\circ }`$, corresponding to a $`5\pi /3`$ disclination, implies that all nano-horns contain exactly five pentagons near the tip. We classify the structure of nano-horns by distinguishing the relative positions of the carbon pentagons at the apex, which determine the morphology of the terminating cap. Our study will focus on the influence of the relative position of these five pentagons on the properties of nano-horns.
The cap morphologies investigated in this study are presented in Fig. 1. Nano-horns with all five pentagons at the “shoulder” of the cone, yielding a blunt tip, are shown in Figs. 1(a)–(c). Nano-horns with a pentagon at the apex of the tip, surrounded by the other four pentagons at the shoulder, are shown in Figs. 1(d)–(f). Note that the cone angle of each nano-horn is $`20^{\circ }`$, even though the size of the terminating cap varies with the relative position of the pentagons.
To determine the structural and electronic properties of carbon nano-horns, we used the parametrized linear combination of atomic orbitals (LCAO) technique with parameters determined by ab initio calculations for simpler structures. This approach has been found useful to describe minute electronic structure and total energy differences for systems whose unit cells are too large to handle accurately by ab initio techniques. Some of the problems tackled successfully by this technique are the electronic structure and superconducting properties of the doped $`\mathrm{C}_{60}`$ solid, the opening of pseudo-gaps near the Fermi level in a (10,10) nanotube rope and a (5,5)@(10,10) double-wall nanotube, as well as fractional quantum conductance in nanotubes. This technique, combined with the recursion technique to achieve an $`O(N)`$ scaling, can determine very efficiently the forces on individual atoms, and has previously been used with success to describe the disintegration dynamics of fullerenes, the growth of multi-wall nanotubes and the dynamics of a “bucky-shuttle”.
To investigate the structural stability and electronic properties of carbon nano-horns, we first optimized the structures with the various cap morphologies shown in Fig. 1. For the sake of an easier interpretation of our results, we distinguish the $`N_{\mathrm{cap}}\approx 40`$–$`50`$ atoms at the terminating cap from those within the cone-shaped mantle, which is terminated by $`N_{\mathrm{edge}}`$ atoms at the other end. We associate the tip region of a hypothetically infinite nano-horn with all the sites excluding the edge. Structural details and the results of our stability calculations are presented in Table I. These results indicate that atoms in nano-horns are only $`\sim 0.1`$ eV less stable than in graphite. The relative differences in $`<E_{\mathrm{coh},\mathrm{tot}}>`$ reflect the strain energy changes induced by the different pentagon arrangements. To minimize the effect of under-coordinated atoms at the edge on the relative stabilities, we excluded the edge atoms from the average when calculating $`<E_{\mathrm{coh},\mathrm{tip}}>`$. Since our results for $`<E_{\mathrm{coh},\mathrm{tip}}>`$ and $`<E_{\mathrm{coh},\mathrm{tot}}>`$ follow the same trends, we believe that the effect of edge atoms on the physical properties can be neglected for structures containing hundreds of atoms. Even though the energy differences may appear minute on a per-atom basis, they translate into a few electron-volts when related to the entire structure. Our results suggest that the under-coordinated edge atoms are all less stable than the cone mantle atoms by $`\sim 0.5`$ eV. Also, atoms in pentagons are less stable than those in hexagons by $`\sim 0.1`$ eV, resulting in an energy penalty of $`\sim 0.5`$ eV to create a pentagon if the strain energy induced by bending the lattice could be ignored.
When comparing the stabilities of the tip regions, described by $`<E_{\mathrm{coh},\mathrm{tip}}>`$, we found no large difference between blunt tips that have all the pentagons distributed along the cylinder mantle and pointed tips containing a pentagon at the apex. We found the structure shown in Fig. 1(c) to be more stable than the other blunt structures with no pentagon at the apex. Similarly, the structure shown in Fig. 1(e) is the most stable among the pointed tips containing a pentagon at the apex. Equilibrium carbon-carbon bond lengths in the cap region are $`d_{CC}=1.43`$–$`1.44`$ Å at the pentagonal sites and $`d_{CC}=1.39`$ Å at the hexagonal sites, as compared to $`d_{CC}=1.41`$–$`1.42`$ Å in the mantle. This implies that the “single bonds” found in pentagons should be weaker than the “double bonds” connecting hexagonal sites, thus confirming our results in Table I and the analogous behavior in the $`\mathrm{C}_{60}`$ molecule.
Since pentagon sites are defects in an all-hexagon structure, they may carry a net charge. To characterize the nature of the defect states associated with these sites, we calculated the electronic structure at the tip of the nano-horns. The charge density associated with states near $`E_F`$, corresponding to the local density of states at that particular position and energy, is proportional to the current observed in STM experiments. To compute the local charge density associated with a given eigenstate, we projected this state onto a local atomic basis. The projection coefficients were used in conjunction with real-space atomic wave functions from density functional calculations to determine the charge density corresponding to a particular level or the total charge density. To mimic a large structure, we convoluted the discrete level spectrum by a Gaussian with a full-width at half-maximum of $`0.3`$ eV. Using this convoluted spectrum, we also determined the charge density associated with particular energy intervals corresponding to STM data for a given bias voltage.
In Fig. 2, we present such simulated STM images for the nano-horns represented in Figs. 1(c) and 1(d). We show the charge density associated with occupied states within a narrow energy interval of $`0.2`$ eV below the Fermi level as three-dimensional charge density contours, for the density value of $`\rho =1.35\times 10^{-3}`$ electrons/$`\mathrm{\AA }^3`$. Very similar results to those presented in Fig. 2 were obtained at a higher bias voltage of $`0.4`$ eV. As seen in Fig. 2, $`pp\pi `$ interactions dominate the spectrum near $`E_F`$. These images also show a net excess charge on the pentagonal sites as compared to the hexagonal sites. This extra negative charge at the apex should make pointed nano-horn structures with a pentagon at the apex better candidates for field emitters than structures with no pentagon at the apex and a relatively blunt tip.
It has been shown previously that theoretical modeling of STM images is essential for the correct interpretation of experimental data. Atomically resolved STM images, however, are very hard to obtain, especially near the terminating caps of tubes and cones, due to the large surface curvature that cannot be probed efficiently using current cone-shaped STM tips. A better way to identify the tip structure may consist of scanning tunneling spectroscopy (STS) measurements in the vicinity of the tip. This approach is based on the fact that in STS experiments, the normalized conductance $`(V/I)(dI/dV)`$ is proportional to the local density of states which, in turn, is structure sensitive. We have calculated the local density of states at the terminating cap of the tips for the different nano-horn structures shown in Fig. 1. Our results are shown in Fig. 3, convoluted using a Gaussian with a full-width at half-maximum of $`0.3`$ eV.
To investigate the effect of pentagonal sites on the electronic structure at the tip, we first calculated the local density of states only at the $`25`$ atoms contained in the five terminating pentagons. The corresponding densities of states, shown by the dashed curve in Fig. 3, are found to vary significantly from structure to structure near the Fermi level. Thus, a comparison between the densities of states at $`E\approx E_F`$ should offer a new way to discriminate between the various tip morphologies. For an easy comparison with experiments, we also calculated the local density of states in the entire terminating cap, including all five pentagons and consisting of $`N\approx 40`$–$`50`$ atoms, depending on the structure. The corresponding density of states, given by the solid line in Fig. 3, is vertically displaced for easier comparison. Our results show that the densities of states, both normalized per atom, are very similar. Thus, we conclude that the pentagonal sites determine all essential features of the electronic structure near the Fermi level at the tip.
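The Gaussian convolution used for these spectra is straightforward; a minimal sketch (energies in eV; the names are ours):

```python
import numpy as np

def broadened_dos(levels, grid, fwhm=0.3):
    """Convolute a discrete level spectrum with a Gaussian of the given
    full-width at half-maximum to mimic the spectrum of a large structure."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    e = np.asarray(grid)[:, None] - np.asarray(levels)[None, :]
    return np.exp(-e * e / (2.0 * sigma * sigma)).sum(axis=1) \
        / (sigma * np.sqrt(2.0 * np.pi))
```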
Next, we have studied the heat resilience as well as the decay mechanism of nano-horns at extremely high temperatures using molecular dynamics simulations. In our canonical molecular dynamics simulations, we keep the structure at a constant temperature using a Nosé-Hoover thermostat, and use a fifth-order Runge-Kutta interpolation scheme to integrate the equations of motion, with a time step of $`\mathrm{\Delta }t=5\times 10^{-16}`$ s. We found the system to remain structurally intact within the temperature range from $`T=2,000`$ to $`4,000`$ K. Then, we heated up the system gradually from $`T=4,000`$ K to $`5,000`$ K within 4,000 time steps, corresponding to a time interval of $`2`$ ps. Our molecular dynamics simulations show that nano-horn structures are extremely heat resilient up to $`T\approx 4,500`$ K. At higher temperatures, we find these structures to disintegrate preferentially in the vicinity of the pentagon sites. A simultaneous disintegration of the nano-horn structures at the exposed edge, which also occurs in our simulations, is ignored as an artifact of finite-size systems. The preferential disintegration in the higher-strain region near the pentagon sites, associated with a large local curvature, is one reason for the observation that nano-horn tips are opened easily at high temperatures in the presence of oxygen.
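For orientation only, a crude explicit-Euler version of a single Nosé-Hoover thermostat step is sketched below; the actual simulations use a fifth-order Runge-Kutta scheme, and the thermostat mass $`Q`$ and units here are placeholders:

```python
import numpy as np

def nose_hoover_step(x, v, zeta, force, m, T, Q, dt, kB=1.0):
    """One crude explicit-Euler Nose-Hoover step: the friction-like
    variable zeta steers the kinetic energy toward the target set by T."""
    a = force(x) / m - zeta * v
    x = x + v * dt
    v = v + a * dt
    # zeta couples to the difference between the instantaneous kinetic
    # energy and its canonical average, N_dof * kB * T / 2
    zeta = zeta + dt * ((m * (v * v).sum() - x.size * kB * T) / Q)
    return x, v, zeta
```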
In summary, we used parametrized linear combination of atomic orbitals calculations to determine the stability, optimum geometry and electronic properties of nanometer-sized capped graphitic cones, called nano-horns. We considered different nano-horn morphologies that differ in the relative location of the five terminating pentagons. We found a net electron transfer to the pentagonal sites of the cap. This extra charge is seen in simulated scanning tunneling microscopy images of the various structures at different bias voltages. We found that the local density of states at the tip, observable by scanning tunneling spectroscopy, can be used to discriminate between different tip structures. Our molecular dynamics simulations indicate that disintegration of nano-horns at high temperatures starts in the highest-strain region near the tip.
We thank S. Iijima for fruitful discussions on carbon nano-horns. We acknowledge financial support by the Office of Naval Research and DARPA under Grant Number N00014-99-1-0252.
# Geometric phase in phasing of antenna arrays
## Abstract
The response of a pair of differently polarized antennas is determined by their polarization states and by a phase between them, which has a geometric part that becomes discontinuous at singular points in the parameter space. Some consequences are described.
Raman Research Institute, Bangalore 560 080, India. email: bhandari@rri.ernet.in
Submitted for Proceedings, IAU Symposium 199: The Universe at Low Radio Frequencies, Pune, India, Nov. 30–Dec. 4, 1999.
Introduction : The geometric phase (also popularly known as Berry’s phase) finds some of its most easily visualizable manifestations in the physics of polarized light. In 1956, Pancharatnam defined the ‘in-phase’ condition for two different, non-orthogonal polarization states to be one for which their interference yields maximum intensity, and discovered that under a cycle of transformations of the polarization state along a closed geodesic polygon on the Poincaré sphere (PS) the beam acquires a phase equal to half the solid angle subtended by the polygon at the centre. Further work by the present author has shown that the above geometric phase exhibits measurable jumps at singular points in the parameter space, such that a circuit around the singularity results in a measurable phase shift equal to $`2n\pi `$. The flat behaviour of the phase near a singularity has been used in adaptive optics to make a spatial light modulator for pure intensity modulation, keeping the phase constant. In arrays phased by geometric phase shifters, phase singularities lead to the possibility of an array looking in two different directions at two different wavelengths.
A pair of antennas with different polarization : Take two identical elliptically polarized antennas, in phase with each other, so that their resultant intensity response is maximum ‘on-axis’. Now rotate one of the antennas with respect to the other by an angle $`\varphi /2`$ (figure 1). The two will no longer be in phase, in that their combined response will not be maximum ‘on-axis’. The phase difference $`\psi `$ between them (Pancharatnam), given by $`\mathrm{tan}\psi =\mathrm{cos}\theta \mathrm{tan}(\varphi /2)`$, is shown in figure 1 for a few values of the polar angle $`\theta `$ of the states on the PS. Note that the phase shift (i) is of magnitude $`\pi `$ for a $`2\pi `$ rotation on the PS and (ii) jumps through $`\pm \pi `$ near $`\theta =90^{\circ }`$, $`\varphi =180^{\circ }`$ (as happens for a $`2\pi `$ rotation in real space of particles with odd half-integer spin, verified in analogous polarization experiments in an optical interferometer). We note that a similar behaviour is implicit in the special case $`\theta _1=\theta _2`$, Q=V=U=0 of equation (8) in Morris, Radhakrishnan & Seielstad (this equation has been re-derived by Weiler and Nityananda).
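The branch structure of $`\psi `$ is easy to visualize numerically; tracking the quadrant with atan2 makes the $`\pm \pi `$ jump across $`\theta =90^{\circ }`$ explicit:

```python
import numpy as np

def pancharatnam_phase(theta, phi):
    """psi defined by tan(psi) = cos(theta)*tan(phi/2), branch-tracked."""
    return np.arctan2(np.cos(theta) * np.sin(phi / 2.0), np.cos(phi / 2.0))

phi = np.linspace(0.0, 2.0 * np.pi, 721)
for th in (60.0, 89.0, 91.0):
    psi = pancharatnam_phase(np.radians(th), phi)
    # a full 2*pi rotation ends near +180 or -180 deg, flipping at theta = 90
    print(th, np.degrees(psi[-1]))
```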
Interference nulls for non-orthogonal states: When radiation from a partially polarized source with degree of polarization $`p`$ and eigenpolarizations $`P`$ and $`\stackrel{~}{P}`$ is picked up by two antennas tuned to polarizations $`A_1`$ and $`A_2`$, then for every polarization state $`A_1`$ there is a state $`A_2`$, not orthogonal to $`A_1`$, such that the correlation of the two outputs is zero. A simple way to prove this curious result is to consider a superposition of two interference patterns: (i) due to a fraction $`(1+p)/2`$ of the radiation in state $`P`$ and (ii) due to a fraction $`(1-p)/2`$ in state $`\stackrel{~}{P}`$, with a phase difference of magnitude $`\pi `$, which is a geometric phase due to the surface enclosed by the closed geodesic curve $`P`$ $`A_1`$ $`\stackrel{~}{P}`$ $`A_2`$ $`P`$ on the PS (a hemisphere).
# Probing spin-charge separation using spin transport
## 1 Introduction
The normal state properties of the high temperature superconductors are anomalous in the context of Fermi liquid theory. Two examples come from charge dynamics: the electrical resistivity and the inverse Hall angle have linear and quadratic temperature dependences, respectively. For the optimally doped cuprates, such temperature dependences occur all the way up to temperatures as high as about 1000 K. These anomalous properties suggest an unusual excitation spectrum. Precisely what the elementary excitations are, however, remains an open question.
One particular debate is whether or not spin-charge separation exists. Since the initial suggestion, considerable efforts have been devoted to interpreting the existing experimental data using spin-charge-separation based pictures. On the other hand, these data have also been analyzed without invoking spin-charge separation. Readers are referred to these proceedings for a snapshot of this continuing debate.
Theoretically, it is still not certain how spin-charge separation arises in specific models in two dimensions. The theoretical challenge is how to study in a controlled fashion the non-perturbative effects of electron-electron interactions, as it is known that the Fermi liquid theory is stable when interactions are treated perturbatively. The situation is different from one dimension, where the phenomenon of spin-charge separation was first discovered theoretically, as well as the opposite limit of large dimensions where metallic states with spin-charge separation have also been shown to occur in some specific models.
Instead of discussing these microscopic issues, here we address a phenomenological question: What would be (unambiguous) experimental signatures of spin-charge separation? In the remainder of this paper, we elaborate on what constitutes a signature of spin-charge separation, review a specific proposal of using spin transport as such a probe, and discuss the prospect of experimentally measuring spin transport as well as the status of some on-going spin injection experiments in the cuprates.
## 2 What is needed to probe spin-charge separation
Spin-charge separation is defined in terms of the excitations of a many-electron system. In essence, it says that A) there are two types of elementary excitations and B) the quantum numbers are such that one elementary excitation has spin $`\frac{1}{2}\hbar `$ and charge $`0`$ (“spinon”) while the other has spin $`0`$ and charge $`e`$ (“holon”). More precisely, imagine we have solved all the many-body eigenstates of the $`10^{23}`$ or so electrons. Consider the many-body excited states whose energies are not very far above the ground state energy. Spin-charge separation describes the situation when it is necessary to introduce an elementary object with spin $`\frac{1}{2}\hbar `$ and charge $`0`$, and another with spin $`0`$ and charge $`e`$, to reproduce the wavefunctions of these low-lying many-body excited states from the ground state wavefunction $`|\mathrm{gs}>`$. Namely, the many-body excited states have the general form $`[A_{\mathrm{holon}}^{\dagger }]^l[A_{\mathrm{holon}}]^l[A_{\mathrm{spinon}}^{\dagger }]^m[A_{\mathrm{spinon}}]^n|\mathrm{gs}>`$, where $`A_{\mathrm{holon}}^{\dagger }`$ ($`A_{\mathrm{holon}}`$) and $`A_{\mathrm{spinon}}^{\dagger }`$ ($`A_{\mathrm{spinon}}`$) create (annihilate) a holon and a spinon, respectively. In particular, the many-body states in the purely spin and charge sectors, $`|\mathrm{excited}\mathrm{states}\mathrm{I}>`$ and $`|\mathrm{excited}\mathrm{states}\mathrm{II}>`$, have to be separately constructed from the ground state,
$`|\mathrm{excited}\mathrm{states}\mathrm{I}>\sim [A_{\mathrm{spinon}}^{\dagger }]^n[A_{\mathrm{spinon}}]^m|\mathrm{gs}>`$
$`|\mathrm{excited}\mathrm{states}\mathrm{II}>\sim [A_{\mathrm{holon}}^{\dagger }]^n[A_{\mathrm{holon}}]^n|\mathrm{gs}>`$ (1)
This definition parallels that for a Fermi liquid, where only a single species of elementary excitations is needed. Introducing $`A_{\mathrm{qp}}^{\dagger }`$ ($`A_{\mathrm{qp}}`$), which creates (annihilates) a quasiparticle with both spin $`\frac{1}{2}\hbar `$ and charge $`e`$, we can write for all the low-lying many-body excited states,
$`|\mathrm{excited}\mathrm{states}>\sim [A_{\mathrm{qp}}^{\dagger }]^n[A_{\mathrm{qp}}]^n|\mathrm{gs}>`$
$`\mathrm{for}\mathrm{a}\mathrm{Fermi}\mathrm{liquid}`$ (2)
In a Fermi liquid, Landau parameters are also needed to specify the residual interactions between these quasiparticles (as well as to create collective excitations out of the quasiparticles). Similarly, in a spin-charge separated metal, there are also parameters characterizing the residual interactions among the spinons and holons.
Following the above definition, we can now specify the basic elements of a signature of spin-charge separation. It should not only show the existence of two kinds of elementary excitations, but also provide the quantum numbers of these excitations. Namely, we need to know that one type of elementary excitations carry spin but no charge, while the other carry charge but no spin.
We comment in passing on angle-resolved photoemission spectroscopy (ARPES), which has been extensively discussed in the literature in the context of spin-charge separation. In ARPES, a physical electron - containing both spin $`\frac{1}{2}\hbar `$ and charge $`e`$, of course - is ejected. One doesn’t know a priori whether any ARPES peak results from a) a convolution involving a coherent spinon, a coherent holon or both; b) a convolution involving other objects of exotic quantum numbers; or c) simply a quasiparticle-like excitation. From this perspective, ARPES does not directly tell the quantum numbers of the elementary excitations and, hence, does not directly probe spin-charge separation. Further discussions along this line can be found elsewhere.
## 3 Probing spin-charge separation using spin transport
Such a proposal was made a few years ago. The basic idea is as follows. Consider a spin current, which will be generated by accelerating spins, and a charge current, generated by accelerating charges. We can infer that spin-charge separation exists if the carriers of the two currents are two separate excitations. Similarly, we can infer the absence of spin-charge separation if the carriers of the two currents actually correspond to the same excitation. Our proposal is that we can determine which is the case by comparing the temperature dependence of the spin resistivity with that of the electrical resistivity.
To see this, we first note that the spin resistivity can be defined in parallel to electrical resistivity. An electrical current, $`J`$, is established in the steady state when charges are accelerated by an electric field, $`E`$. The electrical resistivity is of course defined by the linear-response ratio
$`\rho =E/J`$ (3)
Similarly, a spin current, $`J_M`$, will be established when spins are accelerated by a magnetic field gradient, $`(\nabla H)`$. The ratio
$`\rho _{spin}=(\nabla H)/J_M`$ (4)
defines the spin resistivity. In a metal, the electrical resistivity $`\rho `$ is proportional to the transport relaxation rate $`1/\tau _{tr}`$, which is the decay rate of the charge current. Similarly, $`\rho _{spin}`$ is proportional to the spin transport relaxation rate $`1/\tau _{tr,spin}`$, the decay rate of the spin current.
In a spin-charge separated metal, spin current and charge current are carried by different elementary excitations. It then follows that the scattering processes and the corresponding scattering phase space are in general different for the decay of the spin current and the decay of the charge current. Therefore, the two resistivities will have different temperature dependences. We have calculated spin resistivities in models for spin-charge separation, with results which indeed follow the above general conclusions. One model is the Luttinger liquid, in which $`\rho _{spin}\propto T^{\alpha _{spin}}`$ and $`\rho \propto T^{\alpha _{charge}}`$, where the difference in the powers $`\alpha _{spin}-\alpha _{charge}`$ is non-zero and interaction dependent (for the one-dimensional Hubbard model $`0<\alpha _{spin}-\alpha _{charge}<2`$). The other model is the U(1) gauge theory of the t-J model, in which $`\rho _{spin}\propto T^{4/3}`$ while $`\rho \propto T`$.
Consider next the case without spin-charge separation. Here the same quasiparticle-like excitations carry both the spin current and the charge current. Any scattering process which causes the decay of one current will necessarily also lead to the decay of the other current. The two resistivities will then have the same temperature dependences, under three conditions; all these conditions are satisfied by at least the optimally doped cuprates. The conditions are a) spin fluctuations are not dominated by the ferromagnetic component; b) the Fermi surface is large; and c) inelastic scattering dominates over elastic scattering. Conditions a) and b) have to do with the fact that in establishing a spin current the spin-up and spin-down excitations move in opposite directions. This leads to a matrix element for the decay of the spin current that is slightly different from its counterpart for the decay of the charge current. Conditions a) and b) guarantee that this difference in the matrix elements leads only to a difference in the numerical prefactors of the two resistivities, but not in their temperature dependences. Condition c) has to do with the possible fluctuating conductivities coming from fluctuating collective modes. In the clean limit specified by condition c), any fluctuating conductivity is better thought of as a drag contribution; a very general gauge invariance argument exists which guarantees that its temperature dependence is the same as that of the corresponding Boltzmann contribution. Unlike a) and b), which are necessary, condition c) is sufficient but in most cases not necessary. From neutron scattering results, we know that condition a) is satisfied for all cuprates. Condition b) is satisfied at least for the optimally doped cuprates, as can be inferred from the ARPES results. Finally, from charge transport data we can infer that condition c) is satisfied for most cuprates.
In short, the spin resistivity can be used to test spin-charge separation. Over the temperature range where the electrical resistivity $`\rho `$ is linear in temperature, a non-linear temperature dependence of spin resistivity $`\rho _{spin}`$ would imply spin-charge separation while a linear temperature dependence of $`\rho _{spin}`$ signals the absence of spin-charge separation.
## 4 Experimental measurement of spin resistivity
While it is not easy to measure spin currents directly, the linear-response spin resistivity can be related to the spin diffusion constant $`D_s`$ through the Einstein relation:
$`\rho _{spin}=1/(\chi _sD_s)`$ (5)
where $`\chi _s`$ is the uniform spin susceptibility. There are a number of experimental techniques that can in principle be used to measure spin diffusion. One feasible means for the cuprates seems to be the spin-injection-detection technique.
The illustrative set-up can be found in the references. Basically, a cuprate is in contact with either one or two ferromagnetic metal(s) (FM1 and, in the latter case, FM2 as well). An electrical current (I) is applied across the FM1-cuprate interface, injecting spins into the cuprate. In a steady state, the spatial distribution of the injected magnetization depends on the spin diffusion of the cuprate. The injected magnetization can be detected either by measuring the voltage induced across the cuprate-FM2 interface, or by some alternative means.
To assess the feasibility of this kind of experiment, we need to estimate the spin diffusion length of the cuprates. For the normal state, we can relate the relaxation time for the total spin, $`T_1`$, to the relaxation time for the spin current, $`\tau _{tr,spin}`$, as follows,
$`1/T_1\sim \lambda _{so}^2(1/\tau _{tr,spin})`$ (6)
where $`\lambda _{so}`$ is the dimensionless spin-orbit coupling constant, which for the cuprates is of the order of $`0.1`$. Note that this relationship is generally valid independent of the precise interactions responsible for the decay of the spin current and of the total spin; $`\lambda _{so}`$ measures the fraction of the scattering processes in which the total spin is lost. Combining Eq. (6) with the defining expression for the spin diffusion length, $`\delta _{spin}=\sqrt{2D_sT_1}`$, we derive the following relationship between the spin diffusion length and the spin transport mean free path,
$`\delta _{spin}\sim (1/\lambda _{so})l_{tr,spin}`$ (7)
From the above we have estimated the spin diffusion length in the normal state to be in the range of a thousand $`\AA `$ to a micron, long enough for spin-injection-detection experiments.
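A back-of-the-envelope version of this estimate, with the mean-free-path range as our illustrative assumption:

```python
# Eq. (7): delta_spin ~ l_tr_spin / lambda_so, with lambda_so ~ 0.1
lambda_so = 0.1
for l_tr_nm in (10.0, 100.0):   # assumed spin transport mean free paths (nm)
    print(f"l_tr = {l_tr_nm:.0f} nm -> delta_spin ~ {l_tr_nm / lambda_so:.0f} nm")
# roughly 0.1-1 micron, consistent with the range quoted above
```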
The recent spin injection experiments in the cuprates became possible when perovskite manganites were used as the ferromagnetic layer(s). Presumably such manganite-cuprate heterostructures have much cleaner interfaces, since the two materials are structurally similar. So far the experiments are restricted to the superconducting cuprates. They have raised many interesting theoretical questions of their own, such as the physics of Andreev reflection and the combined effects of Andreev reflection and single-particle transport on spin-injection characteristics involving a ferromagnetic metal and a $`d`$-wave superconductor.
From the perspective of probing spin-charge separation, it is necessary to extend these experiments to the normal state and to carry out quantitative spin transport measurements. Note that only the linear response regime is needed for this purpose. As discussed in the previous section, what is needed is a plot of spin resistivity as a function of temperature. There are at least four possible ways to experimentally implement such a plot. a) The most rigorous is to extract the spin diffusion constant $`D_s`$ from experiments, which, through Eq. (5), provides the spin resistivity directly; b) Another possibility is to extract the spin diffusion length $`\delta _{spin}`$ from experiments. Eq. (7) implies that $`\delta _{spin}`$ is proportional to $`l_{tr,spin}`$ which, in turn, is proportional to the spin conductivity. The temperature dependence of $`\frac{1}{\delta _{spin}}`$ is then the same as that of $`\rho _{spin}`$; c) Along a similar line, it was shown that the temperature dependence of $`\rho _{spin}`$ can be directly inferred from the spin-dependent voltage $`V_s`$ in a spin-injection-detection experiment. When the sample thickness $`d`$ is small compared to $`\delta _{spin}`$, $`\rho _{spin}`$ is proportional to $`I/(\chi _sV_s)`$, where $`I`$ is the injection current. In the opposite limit of $`d\gg \delta _{spin}`$, $`\rho _{spin}`$ has the same temperature dependence as $`\mathrm{ln}(I/\chi _sV_s)`$; d) Finally, from Eq. (6), $`1/T_1`$ is proportional to the spin transport relaxation rate $`1/\tau _{tr,spin}`$, which in turn is proportional to the spin resistivity. The temperature dependence of $`1/T_1`$ is then the same as that of $`\rho _{spin}`$. There are experimental techniques - such as ESR - which can measure $`1/T_1`$ (or $`1/T_2`$) directly. Following Eq. (6) we estimate the ESR linewidth to be in the range of $`3`$–$`30`$ GHz.
This work has been supported by NSF Grant No. DMR-9712626, Robert A. Welch Foundation, Research Corporation, and Sloan Foundation.
# Casimir and van der Waals force between two plates or a sphere (lens) above a plate made of real metals
## I INTRODUCTION
Recently considerable attention has been focussed on the van der Waals and Casimir forces acting between macroscopic bodies. As for the van der Waals force, interest in it has quickened owing to its application in atomic force microscopy (see, e.g., the monographs and references therein). Interest in the Casimir force was rekindled after the new experiments where it was measured more precisely in the case of metallic test bodies.
It is common knowledge that both forces are connected with the existence of zero-point vacuum oscillations of the electromagnetic field. For closely spaced macroscopic bodies the virtual photon emitted by an atom of one body reaches an atom of the second body during its lifetime. The correlated oscillations of the instantaneous induced dipole moments of those atoms give rise to the non-retarded van der Waals force. The Casimir force arises when the distance between two bodies is so large that the virtual photon emitted by an atom of one body cannot reach the second body during its lifetime. Nevertheless, the correlation of the quantized electromagnetic field in a vacuum state is not equal to zero at two points where the atoms belonging to different bodies are situated. Hence the non-zero correlations of the induced atomic dipole moments arise once more, resulting in the Casimir force (which is also known as the retarded van der Waals force).
As has been shown, the corrections to the Casimir force due to the finite conductivity of the metal and surface roughness play an important role in the proper interpretation of the measurement data. Temperature corrections are negligible in the corresponding measurement range (the data of one experiment do not support the presence of finite conductivity, surface roughness and temperature corrections, which is in disagreement with the theoretically estimated values of these corrections in its measurement range). The values of the finite conductivity corrections to the Casimir force were found by the use of a perturbation expansion in the relative penetration depth of electromagnetic zero-point oscillations into the metal, which starts from the general Lifshitz formula \[9–11\]. The parameter of this expansion is $`\lambda _p/(2\pi a)`$, where $`\lambda _p`$ is the effective plasma wavelength of the electrons and $`a`$ is the distance between the interacting bodies. Note that the coefficient of the first-order correction was obtained first, and that of the second-order one later, for the configuration of two plane parallel plates. These results were subsequently modified for the configuration of a spherical lens above a plate. To do this the proximity force theorem was applied. The coefficients of the third- and fourth-order terms of that expansion were first obtained recently for both configurations.
In applications to atomic force microscopy and the van der Waals force, the Lifshitz formula and plasma model were used for different configurations of a tip above a plate. The density-functional theory along with the plasma model has also been used in the calculation of the van der Waals force. A more complicated analytical representation of the dielectric permittivity (a Drude model with approximate account of absorption bands) was used to calculate, via the Lifshitz formula, the van der Waals force between objects covered with a chromium layer.
The parameters of the plasma and Drude models (plasma wavelength, electronic relaxation frequency) are not known very precisely. Because of this, the attempt was undertaken to apply the Lifshitz formalism numerically to gold, copper, and aluminum. The tabulated data for the frequency-dependent complex refractive index of these metals were used together with the dispersion relation to calculate the values of the dielectric permittivity on the imaginary frequency axis. Thereupon the Casimir force was calculated for configurations of two plates and a spherical lens above a plate in a distance range from 0.05 $`\mu \text{m}`$ to 2.5 $`\mu \text{m}`$. The same computation, based on the Lifshitz formalism and tabulated optical data for the dielectric permittivity, was repeated in a distance range from 0.1 $`\mu \text{m}`$ to 10 $`\mu \text{m}`$. The two sets of results are in disagreement. Note that the higher-order perturbative calculations, in their application range, are in agreement with the former computation but disagree with the latter.
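The dispersion relation in question is the standard Kramers-Kronig form giving the permittivity on the imaginary frequency axis from the measured $`\mathrm{Im}\,\epsilon (\omega )`$; a minimal quadrature sketch over tabulated data (array names are ours):

```python
import numpy as np

def eps_imaginary_axis(omega, im_eps, xi):
    """eps(i*xi) = 1 + (2/pi) * Int_0^inf  w*Im eps(w)/(w^2 + xi^2) dw,
    evaluated by the trapezoidal rule over tabulated optical data."""
    omega, im_eps = np.asarray(omega), np.asarray(im_eps)
    return 1.0 + (2.0 / np.pi) * np.trapz(omega * im_eps /
                                          (omega**2 + xi**2), omega)
```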
In this paper we present a brief derivation of the van der Waals and Casimir energy density and force between two parallel metallic plates, or a plate and a sphere, covered by thin layers of another metal (the configuration used in the experiments). Two plates of sufficient thickness can be modelled by two semi-spaces with some gap between them. The case of multilayered plane walls was considered earlier. In contrast to previous work, where the removal of the infinities of the zero-point energy was not considered, we present explicitly the details of the regularization procedure and its physical justification. We next perform an independent computation using tabulated optical data for the frequency-dependent complex refractive index of aluminum and gold, with the goal of resolving the disagreement between earlier results. Our results turn out to be in agreement with one of the earlier computations to within a computational error of less than 1%. Also, the influence of thin covering metallic layers on the Casimir force is determined. The range of applicability of, and the exceptions to, using the bulk-metal optical data for the dielectric permittivity of the thin metallic layers is discussed. For smaller distances the intermediate (transition) region between the Casimir and van der Waals forces is examined. It is shown that the transition region is very wide, ranging from several nanometers to hundreds of nanometers. The pure van der Waals regime for aluminum and gold is restricted to separations in the interval from 0.5 nm to (2–4) nm only. More exact values of the Hamaker constant for aluminum and gold are determined with the use of the obtained computational data.
The paper is organized as follows. In Sec. II the general formalism is briefly presented giving the Casimir and van der Waals forces including the effect of covering layers on the surface of interacting bodies (two plates or a sphere above a plate). In Sec. III the influence of finite conductivity of the metal onto the Casimir force is reexamined. Sec. IV contains the calculation of the Casimir force between the aluminum surfaces covered by the thin gold layers. In Sec. V the van der Waals force is calculated in both configurations and the transition region to the Casimir is examined. Sec. VI contains determination of the Hamaker constant values for aluminum and gold. In Sec. VII we present conclusions and discussion, in particular, of possible applications of the obtained results in experimental investigations of the Casimir force and for obtaining stronger constraints on the constants of hypothetical long-range interactions.
## II THE VAN DER WAALS AND CASIMIR FORCE BETWEEN LAYERED SURFACES: GENERAL FORMALISM
We consider first two semi-spaces bounded by ($`x,y`$) planes and filled with a material having a frequency-dependent dielectric permittivity $`\epsilon _2(\omega )`$. Let the planes bounding the semi-spaces be covered by layers of thickness $`d`$ made of another material with a dielectric permittivity $`\epsilon _1(\omega )`$. The magnetic permeabilities of both materials are taken to be equal to unity. The region of thickness $`a`$ between the layers (see Fig. 1) is empty space. According to the surface-mode approach, the van der Waals and Casimir forces for the configuration under consideration can be found by consideration of the surface modes for which $`\text{div}𝑬=0`$, $`\text{curl}𝑬\ne 0`$. The infinite zero-point energy of the electromagnetic field, dependent on $`a`$ and $`d`$, is given by
$$E(a,d)=\frac{\hbar }{2}\sum _{𝒌,n}\left(\omega _{𝒌,n}^{(1)}+\omega _{𝒌,n}^{(2)}\right).$$
(1)
Here $`\omega _{𝒌,n}^{(1,2)}`$ are the proper frequencies of the surface modes with two different polarizations of the electric field (parallel and perpendicular to the plane formed by $`𝒌`$ and $`z`$ axis correspondingly), $`𝒌`$ is the two-dimensional propagation vector in the $`xy`$-plane.
For the vacuum energy density per unit area of the bounding planes (which is also infinite) one obtains from (1)
$$\mathcal{E}(a,d)=\frac{E(a,d)}{L^2}=\frac{\hbar }{4\pi }\int _0^{\mathrm{\infty }}k\,dk\sum _n\left(\omega _{𝒌,n}^{(1)}+\omega _{𝒌,n}^{(2)}\right),$$
(2)
where $`L`$ is the side-length of the bounding plane.
The frequencies of the surface modes $`\omega _{𝒌,n}^{(1,2)}`$ are found from the boundary conditions for the electric field and magnetic induction imposed at the points $`z=-\frac{a}{2}-d`$, $`-\frac{a}{2}`$, $`\frac{a}{2}`$, and $`\frac{a}{2}+d`$ . These boundary conditions for each polarization lead to a system of eight linear homogeneous equations. The requirements that these equations have non-trivial solutions are
$`\mathrm{\Delta }^{(1)}\left(\omega _{𝒌,n}^{(1)}\right)\equiv e^{-R_2(a+2d)}\left\{\left(r_{10}^{+}r_{12}^{+}e^{R_1d}-r_{10}^{-}r_{12}^{-}e^{-R_1d}\right)^2e^{R_0a}-\left(r_{10}^{-}r_{12}^{+}e^{R_1d}-r_{10}^{+}r_{12}^{-}e^{-R_1d}\right)^2e^{-R_0a}\right\}=0,`$ (3)
$`\mathrm{\Delta }^{(2)}\left(\omega _{𝒌,n}^{(2)}\right)\equiv e^{-R_2(a+2d)}\left\{\left(q_{10}^{+}q_{12}^{+}e^{R_1d}-q_{10}^{-}q_{12}^{-}e^{-R_1d}\right)^2e^{R_0a}-\left(q_{10}^{-}q_{12}^{+}e^{R_1d}-q_{10}^{+}q_{12}^{-}e^{-R_1d}\right)^2e^{-R_0a}\right\}=0.`$ (4)
Here the following notations are introduced
$$r_{\alpha \beta }^{\pm }=R_\alpha \epsilon _\beta \pm R_\beta \epsilon _\alpha ,\quad q_{\alpha \beta }^{\pm }=R_\alpha \pm R_\beta ,\quad R_\alpha ^2=k^2-\epsilon _\alpha \frac{\omega ^2}{c^2},\quad \epsilon _0=1,\quad \alpha =0,1,2.$$
(5)
Note that to obtain Eqs. (3) we set the determinants of the linear system of equations equal to zero and do not perform any additional transformations. This is the reason why (3) does not coincide with the corresponding equations found in the literature, where some transformations were used which are not equivalent in the limit $`|\omega |\to \mathrm{\infty }`$ (see below).
Summation in (2) over the solutions of (3) can be performed with the help of the argument principle, which has been applied for this purpose before. According to this principle
$$\sum _n\omega _{𝒌,n}^{(1,2)}=\frac{1}{2\pi i}\left[\int _{-i\mathrm{\infty }}^{i\mathrm{\infty }}\omega \,d\mathrm{ln}\mathrm{\Delta }^{(1,2)}(\omega )+\int _{C_+}\omega \,d\mathrm{ln}\mathrm{\Delta }^{(1,2)}(\omega )\right],$$
(6)
where $`C_+`$ is a semicircle of infinite radius in the right half of the complex $`\omega `$-plane with center at the origin. Notice that the functions $`\mathrm{\Delta }^{(1,2)}(\omega )`$, defined in (3), have no poles. For this reason the sum over their poles is absent from (6).
The second integral on the right-hand side of (6) is easily calculated under the natural supposition that
$$\underset{\omega \to \mathrm{\infty }}{\mathrm{lim}}\epsilon _\alpha (\omega )=1,\qquad \underset{\omega \to \mathrm{\infty }}{\mathrm{lim}}\frac{d\epsilon _\alpha (\omega )}{d\omega }=0$$
(7)
along any radial direction in the complex $`\omega `$-plane. The result is infinite and does not depend on $`a`$:
$$\int _{C_+}\omega \,d\mathrm{ln}\mathrm{\Delta }^{(1,2)}(\omega )=4\int _{C_+}d\omega .$$
(8)
Now we introduce in (6), (8) a new variable $`\xi `$ defined by $`\omega =i\xi `$. The result is
$$\sum _n\omega _{𝒌,n}^{(1,2)}=-\frac{1}{2\pi }\int _{-\infty }^{\infty }\xi \,d\mathrm{ln}\mathrm{\Delta }^{(1,2)}(i\xi )+\frac{2}{\pi }\int _{C_+}d\xi ,$$
(9)
where both contributions on the right-hand side diverge. To remove the divergences we use the regularization procedure which goes back to the original Casimir paper (see also ). The idea of this procedure is that the regularized physical vacuum energy density vanishes for infinitely separated interacting bodies. From Eqs. (3), (9) it follows that
$$\lim _{a\to \infty }\sum _n\omega _{𝒌,n}^{(1,2)}=-\frac{1}{2\pi }\int _{-\infty }^{\infty }\xi \,d\mathrm{ln}\mathrm{\Delta }_{\infty }^{(1,2)}(i\xi )+\frac{2}{\pi }\int _{C_+}d\xi ,$$
(10)
where the asymptotic behavior of $`\mathrm{\Delta }^{(1,2)}`$ as $`a\to \infty `$ is given by
$$\mathrm{\Delta }_{\infty }^{(1)}=e^{(R_0-R_2)a-2R_2d}\left(r_{10}^{+}r_{12}^{+}e^{R_1d}-r_{10}^{-}r_{12}^{-}e^{-R_1d}\right)^2,\qquad \mathrm{\Delta }_{\infty }^{(2)}=e^{(R_0-R_2)a-2R_2d}\left(q_{10}^{+}q_{12}^{+}e^{R_1d}-q_{10}^{-}q_{12}^{-}e^{-R_1d}\right)^2.$$
(11)
Now the regularized physical quantities are found with the help of (9)–(11)
$$\left(\sum _n\omega _{𝒌,n}^{(1,2)}\right)_{reg}\equiv \sum _n\omega _{𝒌,n}^{(1,2)}-\lim _{a\to \infty }\sum _n\omega _{𝒌,n}^{(1,2)}=-\frac{1}{2\pi }\int _{-\infty }^{\infty }\xi \,d\mathrm{ln}\frac{\mathrm{\Delta }^{(1,2)}(i\xi )}{\mathrm{\Delta }_{\infty }^{(1,2)}(i\xi )}.$$
(12)
They can be transformed to a more convenient form with the help of integration by parts
$$\left(\sum _n\omega _{𝒌,n}^{(1,2)}\right)_{reg}=\frac{1}{2\pi }\int _{-\infty }^{\infty }d\xi \,\mathrm{ln}\frac{\mathrm{\Delta }^{(1,2)}(i\xi )}{\mathrm{\Delta }_{\infty }^{(1,2)}(i\xi )},$$
(13)
where the term outside the integral vanishes.
To obtain the physical, regularized Casimir energy density one should substitute the regularized quantities (13) into (2) instead of (9) with the result
$$\mathcal{E}_{reg}(a,d)=\frac{\hbar }{4\pi ^2}\int _0^{\infty }k\,dk\int _0^{\infty }d\xi \left[\mathrm{ln}Q_1(i\xi )+\mathrm{ln}Q_2(i\xi )\right],$$
(14)
where
$`Q_1(i\xi )\equiv {\displaystyle \frac{\mathrm{\Delta }^{(1)}(i\xi )}{\mathrm{\Delta }_{\infty }^{(1)}(i\xi )}}=1-\left({\displaystyle \frac{r_{10}^{-}r_{12}^{+}e^{R_1d}-r_{10}^{+}r_{12}^{-}e^{-R_1d}}{r_{10}^{+}r_{12}^{+}e^{R_1d}-r_{10}^{-}r_{12}^{-}e^{-R_1d}}}\right)^2e^{-2R_0a},`$ (15)
$`Q_2(i\xi )\equiv {\displaystyle \frac{\mathrm{\Delta }^{(2)}(i\xi )}{\mathrm{\Delta }_{\infty }^{(2)}(i\xi )}}=1-\left({\displaystyle \frac{q_{10}^{-}q_{12}^{+}e^{R_1d}-q_{10}^{+}q_{12}^{-}e^{-R_1d}}{q_{10}^{+}q_{12}^{+}e^{R_1d}-q_{10}^{-}q_{12}^{-}e^{-R_1d}}}\right)^2e^{-2R_0a}.`$ (16)
In deriving (14), the fact that $`Q_{1,2}`$ are even functions of $`\xi `$ has been taken into account.
For the convenience of numerical calculations below we introduce the new variable $`p`$ instead of $`k`$ defined by
$$k^2=\frac{\xi ^2}{c^2}(p^2-1).$$
(17)
In terms of $`p,\xi `$ the Casimir energy density (14) takes the form
$$_{reg}(a,d)=\frac{\mathrm{}}{4\pi ^2c^2}\underset{1}{\overset{\mathrm{}}{}}p𝑑p\underset{0}{\overset{\mathrm{}}{}}\xi ^2𝑑\xi \left[\mathrm{ln}Q_1(i\xi )+\mathrm{ln}Q_2(i\xi )\right],$$
(18)
where a more detailed representation for the functions $`Q_{1,2}`$ from (15), (16) is
$`Q_1(i\xi )=1-\left[{\displaystyle \frac{(K_1-\epsilon _1p)(\epsilon _2K_1+\epsilon _1K_2)-(K_1+\epsilon _1p)(\epsilon _2K_1-\epsilon _1K_2)e^{-2\frac{\xi }{c}K_1d}}{(K_1+\epsilon _1p)(\epsilon _2K_1+\epsilon _1K_2)-(K_1-\epsilon _1p)(\epsilon _2K_1-\epsilon _1K_2)e^{-2\frac{\xi }{c}K_1d}}}\right]^2e^{-2\frac{\xi }{c}pa},`$ (19)
$`Q_2(i\xi )=1-\left[{\displaystyle \frac{(K_1-p)(K_1+K_2)-(K_1+p)(K_1-K_2)e^{-2\frac{\xi }{c}K_1d}}{(K_1+p)(K_1+K_2)-(K_1-p)(K_1-K_2)e^{-2\frac{\xi }{c}K_1d}}}\right]^2e^{-2\frac{\xi }{c}pa}.`$ (20)
Here all permittivities depend on $`i\xi `$ and
$$K_\alpha =K_\alpha (i\xi )\equiv \sqrt{p^2-1+\epsilon _\alpha (i\xi )}=\frac{c}{\xi }R_\alpha (i\xi ),\qquad \alpha =1,\,2.$$
(21)
For $`\alpha =0`$ one has $`p=cR_0/\xi `$ which is equivalent to (17).
Notice that the expressions (14), (18) give finite values of the Casimir energy density, a quantity which is in less common use than the force. Thus in no finite expression for the energy density is presented for two semi-spaces. In the omission of infinities is performed implicitly: instead of Eqs. (3), (4), the result of their division by the terms containing $`\mathrm{exp}(R_0a)`$ was presented. The coefficient of $`\mathrm{exp}(R_0a)`$, however, turns into infinity on $`C_+`$; in other words, Eqs. (3), (4) are divided by infinity. As a result the integral along $`C_+`$ is equal to zero in and the quantity (2) would seem to be finite. Fortunately, this implicit division is equivalent to the regularization procedure explicitly presented above. That is why the final results obtained in are indeed correct. In the energy density is not considered at all.
From (18) it is easy to obtain the Casimir force per unit area acting between semi-spaces covered with layers
$$F_{ss}(a,d)=-\frac{\partial \mathcal{E}_{reg}(a,d)}{\partial a}=-\frac{\hbar }{2\pi ^2c^3}\int _1^{\infty }p^2\,dp\int _0^{\infty }\xi ^3\,d\xi \left[\frac{1-Q_1(i\xi )}{Q_1(i\xi )}+\frac{1-Q_2(i\xi )}{Q_2(i\xi )}\right].$$
(22)
This expression coincides with the Lifshitz result \[9–11\] for the force per unit area between semi-spaces with a dielectric permittivity $`\epsilon _2`$ when the covering layers are absent. To obtain this limiting case from (22) one should put $`d=0`$ and $`\epsilon _1=\epsilon _2`$:
$$F_{ss}(a)=-\frac{\hbar }{2\pi ^2c^3}\int _1^{\infty }p^2\,dp\int _0^{\infty }\xi ^3\,d\xi \left\{\left[\left(\frac{K_2+\epsilon _2p}{K_2-\epsilon _2p}\right)^2e^{2\frac{\xi }{c}pa}-1\right]^{-1}+\left[\left(\frac{K_2+p}{K_2-p}\right)^2e^{2\frac{\xi }{c}pa}-1\right]^{-1}\right\}.$$
(23)
The corresponding quantity for the energy density follows from (18)
$$\mathcal{E}_{reg}(a)=\frac{\hbar }{4\pi ^2c^2}\int _1^{\infty }p\,dp\int _0^{\infty }\xi ^2\,d\xi \left\{\mathrm{ln}\left[1-\left(\frac{K_2-\epsilon _2p}{K_2+\epsilon _2p}\right)^2e^{-2\frac{\xi }{c}pa}\right]+\mathrm{ln}\left[1-\left(\frac{K_2-p}{K_2+p}\right)^2e^{-2\frac{\xi }{c}pa}\right]\right\}.$$
(24)
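Equations (23), (24) lend themselves to direct numerical evaluation. Below is a minimal sketch (Python with NumPy/SciPy; added here for illustration and not part of the original computations) of the force (23), using the free-electron plasma model for $`\epsilon (i\xi )`$ (cf. Eq. (28) below with $`\gamma =0`$) in place of the tabulated optical data; the integration cutoffs and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import dblquad

hbar = 1.0545718e-34   # J s
c = 2.99792458e8       # m / s

def eps_plasma(xi, omega_p):
    """Plasma-model permittivity along the imaginary axis (Eq. (28), gamma = 0)."""
    return 1.0 + (omega_p / xi) ** 2

def force_ss(a, omega_p):
    """Casimir force per unit area between two semi-spaces, Eq. (23).

    The substitution x = 2 xi p a / c maps the exponential cutoff onto a
    fixed range; in the ideal-metal limit the integral reproduces Eq. (26).
    """
    def integrand(x, p):
        xi = x * c / (2.0 * p * a)
        eps = eps_plasma(xi, omega_p)
        K = np.sqrt(p * p - 1.0 + eps)
        ex = np.exp(x)
        term1 = 1.0 / (((K + eps * p) / (K - eps * p)) ** 2 * ex - 1.0)
        term2 = 1.0 / (((K + p) / (K - p)) ** 2 * ex - 1.0)
        return x ** 3 / p ** 2 * (term1 + term2)

    # p in [1, 200] and x in (0, 60] are illustrative cutoffs
    val, _ = dblquad(integrand, 1.0, 200.0, 1.0e-8, 60.0)
    return -hbar * c / (32.0 * np.pi ** 2 * a ** 4) * val

a = 0.3e-6                                    # separation: 300 nm
omega_p = 9.0 * 1.602e-19 / hbar              # ~9 eV (Au) in rad / s
F0 = -np.pi ** 2 / 240.0 * hbar * c / a ** 4  # ideal-metal value, Eq. (26)
print(force_ss(a, omega_p) / F0)              # correction factor, cf. Fig. 3(a)
```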
Another possibility to obtain the force between semi-spaces (but with a permittivity $`\epsilon _1`$) is to consider the limit $`d\to \infty `$ in (22). In this limit we obtain once more the results (23), (24), with $`K_2`$, $`\epsilon _2`$ replaced by $`K_1`$, $`\epsilon _1`$. Note also that we do not take into account the effect of nonzero temperature, which is negligible for $`a\ll \hbar c/(k_BT)`$.
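For orientation, the thermal scale $`\hbar c/(k_BT)`$ entering this criterion is easily evaluated (a quick check in Python, added here for illustration; SI units):

```python
hbar, c, k_B = 1.0545718e-34, 2.99792458e8, 1.380649e-23  # SI units
T = 300.0                                                 # room temperature, K
print(hbar * c / (k_B * T))   # ~ 7.6e-6 m, i.e. about 7.6 micrometers
```

At room temperature this gives about 7.6 $`\mu `$m, so the zero-temperature expressions are adequate well below a few micrometers, consistent with the temperature-correction estimates quoted in Sec. III.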
An independent expression for the physical energy density is especially important because it makes it possible to obtain an approximate value of the force for the configuration of a sphere (or a spherical lens) above a semi-space. Both bodies can be covered by layers of another material. According to the proximity force theorem this force is
$$F_{sl}(a,d)=2\pi R\,\mathcal{E}_{reg}(a,d)=\frac{\hbar R}{2\pi c^2}\int _1^{\infty }p\,dp\int _0^{\infty }\xi ^2\,d\xi \left[\mathrm{ln}Q_1(i\xi )+\mathrm{ln}Q_2(i\xi )\right],$$
(25)
where $`R`$ is the sphere radius and $`Q_{1,2}`$ are defined in (19), (20). In the absence of layers, $`\mathcal{E}_{reg}(a,d)`$ should be replaced by $`\mathcal{E}_{reg}(a)`$ from (24).
Although the expression (25) is not exact, it makes it possible to calculate the force with very high accuracy. As was shown in (see also ), the proximity force theorem is equivalent to additive summation of interatomic van der Waals and Casimir force potentials with a subsequent normalization of the interaction constant. As was shown in , the accuracy of this method is very high (the relative error of the obtained results is less than 0.01%) if the configuration is close to that of two semi-spaces, which is the case for a sphere (lens) of large radius $`R\gg a`$ above a semi-space.
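In numerical work the proximity force theorem is a one-line wrapper around any routine for the energy density; a sketch (Python, added for illustration; the `energy_density` callable is an assumed input, e.g. an implementation of Eq. (18) or (24)):

```python
import math

def force_sphere_plate(energy_density, a, R):
    """Eq. (25): F_sl(a) = 2 pi R * E_reg(a), valid for R >> a."""
    return 2.0 * math.pi * R * energy_density(a)
```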
In the following Sections the above general results will be used for computation of the Casimir and van der Waals forces acting between real metals.
## III THE INFLUENCE OF FINITE CONDUCTIVITY ON THE CASIMIR FORCE
Let us first consider semi-spaces made of aluminum or gold. Aluminum-covered interacting bodies (a plate and a lens) were used in the experiments because of the high reflectivity of $`Al`$ for wavelengths (plate–sphere separations) larger than 100 nm. The thickness of the $`Al`$ covering layer was 300 nm, significantly greater than the effective penetration depth of the electromagnetic zero-point oscillations into $`Al`$, which is $`\delta _0=\lambda _p/(2\pi )\approx 17`$ nm (see Introduction). That is why the $`Al`$ layer can be considered as infinitely thick and modelled by a semi-space. In the experiment the test bodies were covered by a 500 nm $`Au`$ layer, which also can be considered as infinitely thick. In and , the $`Al`$ surfaces were covered, respectively, by $`d<20`$ nm and $`d=8`$ nm sputtered $`Au/Pd`$ layers to reduce the oxidation processes in $`Al`$ and the effect of any associated electrostatic charges. The influence of such additional thin layers on the Casimir force is discussed in Sec. IV.
The force per unit area for the configuration of two semi-spaces and the force for a sphere above a semi-space are given by Eq. (23) and Eqs. (24), (25), respectively. For distances $`a`$ much larger than the characteristic absorption wavelength $`\lambda _0`$ of the semi-space material, Eqs. (23), (24) lead to the following results in the case of an ideal metal ($`\epsilon _2\to \infty `$):
$$F_{ss}^{(0)}(a)=-\frac{\pi ^2}{240}\frac{\hbar c}{a^4},\qquad F_{sl}^{(0)}(a)=-\frac{\pi ^3}{360}R\frac{\hbar c}{a^3}.$$
(26)
To calculate numerically the corrections to (26) due to the finite conductivity of a metal, we use the tabulated data for the complex index of refraction $`n+ik`$ as a function of frequency . The values of the dielectric permittivity along the imaginary axis can be expressed through Im$`\epsilon (\omega )=2nk`$ with the help of the dispersion relation
$$\epsilon (i\xi )=1+\frac{2}{\pi }\int _0^{\infty }\frac{\omega \,\text{Im}\,\epsilon (\omega )}{\omega ^2+\xi ^2}\,d\omega .$$
(27)
Here the complete tabulated refractive indices, extending from 0.04 eV to 10000 eV for $`Al`$ and from 0.1 eV to 10000 eV for $`Au`$ , are used to calculate Im$`\epsilon (\omega )`$. For frequencies below 0.04 eV in the case of $`Al`$ and below 0.1 eV in the case of $`Au`$, the table values of can be extrapolated using the free-electron Drude model. In this case the dielectric permittivity along the imaginary axis is represented as
$$\epsilon _\alpha (i\xi )=1+\frac{\omega _{p\alpha }^2}{\xi (\xi +\gamma )},$$
(28)
where $`\omega _{p\alpha }=(2\pi c)/\lambda _{p\alpha }`$ is the plasma frequency and $`\gamma `$ is the relaxation frequency. The values $`\omega _p=12.5`$ eV and $`\gamma =0.063`$ eV were used for $`Al`$, based on the last results in Table XI on p. 394 of . In the case of $`Au`$ the analysis is not as straightforward, but proceeding in the manner outlined in we obtain $`\omega _p=9.0`$ eV and $`\gamma =0.035`$ eV. While the values of $`\omega _p`$ and $`\gamma `$ based on the optical data of various sources might differ slightly, we have found the resulting numerically computed Casimir forces to differ by less than 1%. In fact, if for $`Al`$ the values $`\omega _p=11.5`$ eV and $`\gamma =0.05`$ eV as in are used, the differences are extremely small: of the values tabulated below, only the force for a sphere above a semi-space at 0.5 $`\mu `$m separation is increased by 0.1%, which on round-off to the second significant figure leads to an increase of 1%. The results of numerical integration of Eq. (27) for $`Al`$ (solid curve) and $`Au`$ (dashed curve) are presented in Fig. 2 on a logarithmic scale. As is seen from Fig. 2, the dielectric permittivity along the imaginary axis decreases monotonically with increasing frequency (in distinction to Im$`\epsilon (\omega )`$, which possesses peaks corresponding to inter-band absorption).
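As an illustration of this extrapolation-plus-dispersion-integral procedure, here is a minimal sketch (Python with NumPy/SciPy, added for illustration; the table arrays are assumed inputs):

```python
import numpy as np
from scipy.integrate import simpson

def eps_imag_axis(xi, omega_tab, im_eps_tab, omega_p, gamma):
    """Permittivity along the imaginary axis via the dispersion relation (27).

    omega_tab, im_eps_tab: tabulated Im eps(omega) = 2 n k, in the same
    (frequency) units as xi, omega_p and gamma. Below omega_tab[0] the
    table is extended with the Drude form (28), whose imaginary part is
    Im eps = omega_p**2 * gamma / (omega * (omega**2 + gamma**2)).
    """
    omega_low = np.logspace(np.log10(omega_tab[0]) - 4.0,
                            np.log10(omega_tab[0]), 200, endpoint=False)
    im_low = omega_p ** 2 * gamma / (omega_low * (omega_low ** 2 + gamma ** 2))
    omega = np.concatenate([omega_low, omega_tab])
    im_eps = np.concatenate([im_low, im_eps_tab])
    # truncating the integral at the top of the table (10^4 eV) changes the
    # resulting force by well under 1%, as noted in the text
    return 1.0 + 2.0 / np.pi * simpson(omega * im_eps / (omega ** 2 + xi ** 2),
                                       x=omega)

# e.g. for Au one would use omega_p = 9.0 eV, gamma = 0.035 eV and a
# handbook table of n + i k from 0.1 eV to 10^4 eV
```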
The obtained values of the dielectric permittivity along the imaginary axis were substituted into Eqs. (23) and (25) (with account of (24)) to calculate the Casimir force acting between real metals in the configurations of two semi-spaces (ss) and a sphere (lens) above a semi-space (sl). Numerical integration was done from an upper limit of $`10^4`$ eV down to a lower limit of $`10^{-6}`$ eV; changes in either limit by a factor of 10 lead to changes of less than 0.25% in the Casimir force. If the trapezoidal rule is used in the numerical integration of Eq. (27), the corresponding Casimir force decreases by less than 0.5%. The results are presented in Fig. 3(a) (two semi-spaces) and in Fig. 3(b) (a sphere above a semi-space) by the solid lines 1 (test bodies made of aluminum) and 2 (made of gold). On the vertical axis the relative force $`F_{ss}/F_{ss}^{(0)}`$ is plotted in Fig. 3(a) and $`F_{sl}/F_{sl}^{(0)}`$ in Fig. 3(b); these quantities give the correction factors to the Casimir force due to the effect of finite conductivity. On the horizontal axis the space separation is plotted in the range 0.1–1 $`\mu \text{m}`$. We do not present results for larger distances because there the temperature corrections to the Casimir force become significant: at room temperature they contribute only 2.6% of $`F_{sl}^{(0)}`$ at $`a=1\mu `$m, but 47% at $`a=3\mu `$m and 129% at $`a=5\mu `$m . It is seen that the relative force for $`Al`$ is larger than for $`Au`$ at the same separations, as expected from the better reflectivity of $`Al`$.
It is interesting to compare the obtained results with those of Refs. and , where similar computations were performed (in the analytical expressions equivalent to Eqs. (23) and (24) were used; in , however, the energy density between plates was obtained by numerical integration of the force, which can lead to some additional error). The results for several values of the distance between the test bodies are presented in Table 1.
As is seen from Table 1, our computational results (column 6) are in agreement with (column 5) up to 0.01. At the same time the results of (column 4) for $`Au`$ are in disagreement with both and this paper. The results for $`Al`$ are presented in at $`a=0.1\mu `$m only. Note that the results at $`a=3\mu `$m (the last four lines of Table 1) are valid only at zero temperature; they do not take into account temperature corrections, which are significant at such separations. Also the results of for $`Cu`$-covered bodies are in disagreement with . We do not consider $`Cu`$ here because the outer surfaces in the recent experiments were covered by thick layers of $`Au`$ and $`Al`$ . The hypothesis of that the $`Au`$ film of 0.5 $`\mu \text{m}`$ thickness could significantly diffuse into the $`Cu`$ layer of the same thickness at room temperature seems unlikely. In any case it is not needed, because the dielectric permittivities of $`Au`$ and $`Cu`$ along the imaginary axis are almost the same and consequently lead to almost the same Casimir force.
The computational results obtained here are in good agreement with analytical perturbation expansions of the Casimir force in powers of the relative penetration depth $`\delta _0=\lambda _p/(2\pi )`$ of the electromagnetic zero-point oscillations into the metal. The representation (28) with $`\gamma =0`$ is applicable for wavelengths (space separations) larger than $`\lambda _{p\alpha }`$ (the corrections due to relaxation processes are small for distances $`a\lesssim 5\mu `$m). It can be substituted into Eqs. (23), (24) to get the perturbation expansion. According to the results of Ref. , the relative Casimir force with finite conductivity corrections up to the 4th power is
$$\frac{F_{ss}(a)}{F_{ss}^{(0)}(a)}=1-\frac{16}{3}\frac{\delta _0}{a}+24\frac{\delta _0^2}{a^2}-\frac{640}{7}\left(1-\frac{\pi ^2}{210}\right)\frac{\delta _0^3}{a^3}+\frac{2800}{9}\left(1-\frac{163\pi ^2}{7350}\right)\frac{\delta _0^4}{a^4}$$
(29)
for two semi-spaces and
$$\frac{F_{sl}(a)}{F_{sl}^{(0)}(a)}=1-4\frac{\delta _0}{a}+\frac{72}{5}\frac{\delta _0^2}{a^2}-\frac{320}{7}\left(1-\frac{\pi ^2}{210}\right)\frac{\delta _0^3}{a^3}+\frac{400}{3}\left(1-\frac{163\pi ^2}{7350}\right)\frac{\delta _0^4}{a^4}$$
(30)
for a sphere (lens) above a semi-space.
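The two expansions are straightforward to evaluate directly; a short sketch (Python, added for illustration) is:

```python
import numpy as np

def correction_ss(a, lambda_p):
    """Relative Casimir force for two semi-spaces, perturbation result (29)."""
    d = lambda_p / (2.0 * np.pi * a)   # delta_0 / a
    return (1.0 - 16.0 / 3.0 * d + 24.0 * d ** 2
            - 640.0 / 7.0 * (1.0 - np.pi ** 2 / 210.0) * d ** 3
            + 2800.0 / 9.0 * (1.0 - 163.0 * np.pi ** 2 / 7350.0) * d ** 4)

def correction_sl(a, lambda_p):
    """Relative Casimir force for a sphere (lens) above a semi-space, Eq. (30)."""
    d = lambda_p / (2.0 * np.pi * a)
    return (1.0 - 4.0 * d + 72.0 / 5.0 * d ** 2
            - 320.0 / 7.0 * (1.0 - np.pi ** 2 / 210.0) * d ** 3
            + 400.0 / 3.0 * (1.0 - 163.0 * np.pi ** 2 / 7350.0) * d ** 4)

# e.g. Al (lambda_p = 107 nm) at a = 500 nm gives roughly 0.88
print(correction_sl(500e-9, 107e-9))
```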
In Fig. 3(a) (two semi-spaces) the dashed line 1 represents the results obtained by (29) for $`Al`$ with $`\lambda _p=107`$ nm (corresponding to $`\omega _p=11.5`$ eV), and the dashed line 2 those obtained by (29) for $`Au`$ with $`\lambda _p=136`$ nm ($`\omega _p=9`$ eV) . In Fig. 3(b) the dashed lines 1 and 2 represent the perturbation results obtained for $`Al`$ and $`Au`$ by (30) for a lens above a semi-space. As is seen from the last column of Table 1, the perturbation results are in good agreement (up to 0.01) with the computations for all distances larger than $`\lambda _p`$. Only at $`a=0.1\mu `$m for $`Au`$ are there larger deviations, because $`\lambda _{p1}\equiv \lambda _p^{Au}>0.1\mu `$m.
## IV THE CASIMIR FORCE BETWEEN LAYERED SURFACES
In this Section we consider the influence of thin outer metallic layers on the value of the Casimir force. Let the semi-space made of $`Al`$ ($`\epsilon _2`$) be covered by $`Au`$ ($`\epsilon _1`$) layers as shown in Fig. 1. For the configuration of a sphere above a plate such covering, made of $`Au/Pd`$, was used in the experiments with different values of the layer thickness $`d`$. In this case the Casimir force is given by Eqs. (22), (25), where the quantities $`Q_{1,2}(i\xi )`$ are expressed by Eqs. (20), (21). The computational results for $`\epsilon _\alpha (i\xi )`$ were obtained in the previous Section by Eq. (27). Substituting them into (22), (25) and performing the numerical integration in the same way as above, one obtains the Casimir force including the effect of the covering layers. The computational results for the configuration of two semi-spaces are shown in Fig. 4(a). Here the solid lines represent once more the Casimir force between semi-spaces of pure $`Al`$ and $`Au`$, respectively; the dashed and dotted lines are for $`Au`$ layers of thickness $`d=20`$ nm and $`d=30`$ nm covering $`Al`$. When the layers are present, the space separation $`a`$ is measured from their outer surfaces, in accordance with Eqs. (22), (25). In Fig. 4(b) the analogous results, with the same notations, are presented for the configuration of a sphere (lens) above a semi-space.
As is seen from Fig. 4, the $`Au`$ layer of $`d=20`$ nm thickness significantly decreases the relative Casimir force between $`Al`$ surfaces. With this layer the force approaches the value for pure $`Au`$ semi-spaces. For a thicker $`Au`$ layer of $`d=30`$ nm the relative Casimir force is scarcely affected by the underlying $`Al`$. For example, at a space separation $`a=300`$ nm in the configuration of two semi-spaces we have $`F_{ss}/F_{ss}^{(0)}=0.773`$ for pure $`Al`$, $`F_{ss}/F_{ss}^{(0)}=0.727`$ for $`Al`$ with a 20 nm $`Au`$ layer, $`F_{ss}/F_{ss}^{(0)}=0.723`$ for $`Al`$ with a 30 nm $`Au`$ layer, and $`F_{ss}/F_{ss}^{(0)}=0.720`$ for pure $`Au`$. In the same way, for the configuration of a sphere above a semi-space the results are: $`F_{sl}/F_{sl}^{(0)}=0.817`$ (pure $`Al`$), 0.780 ($`Al`$ with a 20 nm $`Au`$ layer), 0.776 ($`Al`$ with a 30 nm $`Au`$ layer), 0.774 (pure $`Au`$). Both limiting cases, $`d\to \infty `$ and $`d\to 0`$, were considered, and the results were shown to coincide with those of Sec. III.
Let us now discuss the range of application of the obtained results in the case of covering layers. First, from a theoretical standpoint, the main question concerns the layer thicknesses down to which the obtained formulas (22), (25) and the above computations can be applied. In the derivation of Sec. II spatial dispersion is neglected and, as a consequence, the dielectric permittivities $`\epsilon _\alpha `$ depend only on $`\omega `$, not on the wave vector $`𝒌`$. In other words, the field of vacuum oscillations is considered as time-dependent but spatially homogeneous. Besides the skin depth $`\delta _0`$, the main parameters of our problem are the velocity $`v_F`$ of the electrons on the Fermi surface, the characteristic frequency $`\omega `$ of the oscillating field, and the mean free path $`l`$ of the electrons. For the considered region of high frequencies (micrometer distances between the test bodies) the following conditions hold:
$$\frac{v_F}{\omega }<\delta _0\ll l.$$
(31)
Note that the quantity $`v_F/\omega `$ on the left-hand side of Eq. (31) is the distance travelled by an electron during one period of the field, so that the first inequality is equivalent to the assumption of spatial homogeneity of the oscillating field. Usually the corresponding frequencies start from the far infrared part of the spectrum, which corresponds to space separations $`a\lesssim 100\mu `$m . The region of high frequencies is bounded by the short-wave optical or near-ultraviolet parts of the spectrum, which correspond to surface separations of several hundred nanometers. For smaller distances, absorption bands, the photoelectric effect and other physical phenomena should be taken into account. For these phenomena the general Eqs. (22), (25) are still valid, however, if one substitutes the experimental tabulated data for the dielectric permittivity along the imaginary axis incorporating all these phenomena.
Now let us include one more physical parameter, the thickness $`d`$ of the additional ($`Au`$) covering layer. It is evident that Eqs. (22), (25) are applicable only for layers of such thickness that
$$\frac{v_F}{\omega }<d.$$
(32)
Otherwise an electron leaves the thin layer during one period of the oscillating field and the approximation of spatial homogeneity is not valid. If $`d`$ is so small that the inequality (32) is violated, spatial dispersion should be taken into account, which means that the dielectric permittivity depends not only on the frequency but also on the wave vector: $`\epsilon _1=\epsilon _1(\omega ,𝒌)`$. Thus, if (32) is violated, the situation is analogous to the anomalous skin effect, where only spatial dispersion is important and the following inequalities hold:
$$\delta _0(\omega )<\frac{v_F}{\omega },\delta _0(\omega )<l.$$
(33)
In our case, however, the role of $`\delta _0`$ is played by the layer thickness $`d`$ (the influence of nonlocality effects on the van der Waals force is discussed in ).
From (31), (32) it follows that for pure $`Au`$ layers ($`\lambda _p\approx 136`$ nm) spatial dispersion can be neglected only if $`d\gtrsim (25\text{–}30)`$ nm. For thinner layers a more general theory taking nonlocal effects into account should be developed to calculate the Casimir force; for such thin layers the bulk tabulated data for the dielectric permittivity, depending only on frequency, cannot be used (see the experimental investigation demonstrating that for $`Au`$ the bulk values of the dielectric constants can only be obtained from films whose thickness is about 30 nm or more). That is why the dashed lines in Fig. 4 ($`d=20`$ nm layers) are subject to corrections due to the influence of spatial dispersion, whereas the solid lines represent the final result. From an experimental standpoint, thin layers of the order of a few nm grown by evaporation or sputtering techniques are highly porous. This is particularly so in the case of sputtered coatings, as shown in . The nature of the porosity is a function of the material and the underlying substrate. Thus the theory presented here, which uses the bulk tabulated data for $`\epsilon _1`$, cannot be applied to calculate the influence of the thin covering layers of $`d<20`$ nm and $`d=8`$ nm on the Casimir force. The measured high transparency of such layers at the characteristic frequencies corresponds to a larger change of the force than follows from Eqs. (22), (25), in agreement with the above qualitative analysis.
The role of spatial dispersion was also neglected in , where an attempt was made to describe theoretically the influence of thin metallic covering layers on the Casimir force in the experiments ; there the bulk material properties were also used for the $`Au/Pd`$ films. As shown in , the resistivity of sputtered $`Au/Pd`$ films even of 60 nm thickness is extremely high, of order 2000 ohm$``$cm. In it was concluded that the maximum possible theoretical values of the force including the covering layers are significantly smaller than the measured ones. The data of are, however, consistent with a theory neglecting the influence of the layers. In the surface separations are calculated from the $`Al`$ surfaces. Including the thickness of the covering layers reduces the distance between the outer surfaces, which is then smaller than the distance between the $`Al`$ surfaces. Thus, contrary to , the theoretical value of the force should increase when the presence of the layers is included. The error made in can be traced to the following. The authors of changed the data of “by shifting all the points to larger separations on $`2h=16`$nm” (where $`h=8`$ nm is the layer thickness in ) instead of shifting them to smaller separations by 16 nm as based on . If the correct shift is done, the theoretical values of the force including the effect of the covering layers are not smaller than the experimental values. Hence the conclusion of about the probable influence of new hypothetical attractions based on the experiments is unsubstantiated.
## V THE VAN DER WAALS FORCE AND INTERMEDIATE REGION
As is seen from Figs. 3, 4, at room temperature the Casimir force does not follow its ideal field-theoretical expressions (26). For space separations below $`a=1\mu `$m the corrections due to the finite conductivity of the metal are rather large (at $`a=1\mu `$m they are around 7–9% for a lens above a semi-space and 10–12% for two semi-spaces; at $`a=0.1\mu `$m, around 38–44% (sl) and 45–52% (ss)). For $`a>1\mu `$m the temperature corrections increase very quickly (see Sec. III). Actually, the range presented in Figs. 3, 4 is the beginning of the transition, with decreasing $`a`$, from the Casimir force to the van der Waals force. Our aim is to investigate this intermediate region in more detail for smaller $`a`$ and to find the values of $`a`$ where the pure (non-retarded) van der Waals regime starts. To do this, for the case when no additional covering layers are present, we numerically evaluate the integrals in Eqs. (23)–(25) for $`a<100`$ nm.
The computational results, obtained by the same procedures as in Sec. III, are presented in Fig. 5(a) for two semi-spaces and in Fig. 5(b) for a sphere above a semi-space. In both figures the solid line represents the results for aluminum test bodies and the dashed line for gold ones. The absolute values of the van der Waals force and the surface separation $`a`$ are plotted along the vertical and horizontal axes on a logarithmic scale. The asymptotic expressions in the limit $`a\ll \lambda _0`$, following from Eqs. (23)–(25) respectively, are
$$F_{ss}^{(0)}(a)=-\frac{H}{6\pi a^3},\qquad F_{sl}^{(0)}(a)=-\frac{HR}{6a^2}.$$
(34)
Here it is important to note that the Hamaker constant $`H`$ depends on the material properties of the boundaries and is a priori unknown. This is in contrast to the ideal Casimir force limit of Eq. (26) (obtained for $`a\gg \lambda _0`$), which is material independent and is only a function of $`\hbar `$ and $`c`$. Thus it is not reasonable to express the van der Waals force as a ratio relative to Eq. (34). The asymptotic behavior (34) will be used below to determine the value of $`H`$.
The computations were performed with a step $`\mathrm{\Delta }a=5`$ nm in the interval 10 nm $`\le a\le `$ 100 nm, $`\mathrm{\Delta }a=1`$ nm in the interval 4 nm $`\le a\le `$ 10 nm, $`\mathrm{\Delta }a=0.2`$ nm in the interval 2 nm $`\le a\le `$ 4 nm, and $`\mathrm{\Delta }a=0.1`$ nm for 0.5 nm $`\le a\le `$ 2 nm. At $`a=100`$ nm the force values coincide with those in Fig. 3. For $`a<0.5`$ nm the repulsive exchange forces dominate. As is seen from Fig. 5, for both configurations and for the two metals under consideration ($`Al`$ and $`Au`$) the range of the purely van der Waals force described by Eqs. (34) turns out to be extremely narrow: it extends only from 0.5 nm to 2–4 nm. For larger distances the transition from the force–distance dependence $`a^{-3}`$ to the dependence $`a^{-4}`$ begins (for two semi-spaces), and from the dependence $`a^{-2}`$ to $`a^{-3}`$ (for a lens above a semi-space). This conclusion is in qualitative agreement with the results of , where the van der Waals force between a metallic sample and the metallic tip of an atomic force microscope was calculated (our choice of a sphere is formally equivalent to the paraboloidal tip considered in ). The calculation in was performed by numerical integration of a Lifshitz-type equation for the force with the permittivity of the metal given by the plasma model \[Eq. (28) with $`\gamma =0`$\]. Strictly speaking, the plasma model is not applicable for $`a\ll \lambda _0`$ (see Sec. III). That is why we have used the optical tabulated data for the complex refractive index in our computations. However, the correct conclusion about the extremely narrow distance range of the purely van der Waals region for metals is obtainable by using the plasma model to represent the dielectric properties. Note that for dielectric test bodies the pure van der Waals regime extends to larger distances. For example, in the configuration of two crossed mica cylinders (which is formally equivalent to a sphere above a semi-space) the van der Waals regime extends from 1.4 nm to 12 nm, as was experimentally shown in .
## VI DETERMINATION OF HAMAKER CONSTANTS FOR $`Al`$ AND $`Au`$
The results of the previous section make it possible to determine the values of the Hamaker constant $`H`$ in Eq. (34) for aluminum and gold. Let us start with the configuration of two semi-spaces. As is seen from the computational results presented in Fig. 5(a) (solid curve), the asymptotic regime for $`Al`$ extends from $`a=0.5`$ nm to $`a=4`$ nm. We use the narrower interval 0.5 nm–2 nm for the determination of $`n`$ and $`H`$. The power index $`n`$ of the force–distance relation given by the first formula of Eq. (34) is equal to $`n=3.02\pm 0.01`$ in this interval. To obtain this value, the slopes between adjacent points, i.e. (0.5–0.6) nm, (0.6–0.7) nm, etc., were calculated, and then the average and the standard deviation were found. The corresponding mean value of the Hamaker constant is
$$H_{ss}^{Al}=(3.67\pm 0.02)\times 10^{-19}\,\text{J}.$$
(35)
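The slope-averaging procedure just described is easy to automate; a minimal sketch (Python, added for illustration; the force values on the 0.5 nm–2 nm grid are assumed inputs, and the function name is hypothetical):

```python
import numpy as np

def power_index_and_hamaker(a, F, geometry="ss", R=None):
    """Mean power index n and Hamaker constant H from computed forces |F(a)|.

    Local slopes d ln|F| / d ln a between adjacent grid points give n;
    H then follows from Eq. (34) at each point. Returns the means and
    standard deviations of both quantities.
    """
    a = np.asarray(a, dtype=float)
    F = np.abs(np.asarray(F, dtype=float))
    n = -np.diff(np.log(F)) / np.diff(np.log(a))
    if geometry == "ss":            # |F| = H / (6 pi a^3), two semi-spaces
        H = 6.0 * np.pi * a ** 3 * F
    else:                           # |F| = H R / (6 a^2), sphere above a plane
        H = 6.0 * a ** 2 * F / R
    return n.mean(), n.std(), H.mean(), H.std()
```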
Turning to the computational results for $`Au`$ (dashed curve of Fig. 5(a)), we find the asymptotic regime in the narrower interval 0.5 nm–2 nm, with the power index $`n=3.04\pm 0.02`$. The mean value of the Hamaker constant turns out to be
$$H_{ss}^{Au}=(4.49\pm 0.07)\times 10^{-19}\,\text{J}.$$
(36)
For the configuration of a sphere (lens) above a semi-space the results are presented in Fig. 5(b) (solid curve for $`Al`$ and dashed curve for $`Au`$). In both cases the asymptotic region extends only from $`a=0.5`$ nm to $`a=2`$ nm, with mean values of the power index in the second formula of Eq. (34) of $`n=2.04\pm 0.02`$ ($`Al`$) and $`n=2.08\pm 0.03`$ ($`Au`$). The corresponding mean values of the Hamaker constant are
$$H_{sl}^{Al}=(3.60\pm 0.06)\times 10^{-19}\,\text{J},\qquad H_{sl}^{Au}=(4.31\pm 0.14)\times 10^{-19}\,\text{J}.$$
(37)
It is seen that in the case of $`Au`$, in the configuration of a sphere above a semi-space, the behavior of the force shows less precise agreement with the second formula of Eq. (34).
The above results, obtained independently for the two configurations, make it possible to derive new values of the Hamaker constant for $`Al`$ and $`Au`$. Taking into account the value (35) and the first expression from (37), we get
$$H^{Al}=(3.6\pm 0.1)\times 10^{-19}\,\text{J}.$$
(38)
The absolute error here was chosen so as to cover both permitted intervals in (35) and (37).
For $`Au`$ the tolerances of the second value from (37) are two times wider than the permitted interval from (36). That is why the most probable final value of the Hamaker constant for gold can be estimated as
$$H^{Au}=(4.4\pm 0.2)\times 10^{-19}\,\text{J}.$$
(39)
The lower accuracy compared with (38) is explained by the extremely narrow region of the pure van der Waals force law for gold. These values of $`H`$ for gold are compatible with those obtained previously; for example, in , values between $`(2\text{–}4)\times 10^{-19}`$ J were obtained using different procedures.
## VII CONCLUSIONS AND DISCUSSION
In the above, general expressions were obtained for both the Casimir energy density and the force in the configuration of two plates (semi-spaces) with different separations between them. The case where the surfaces are covered by thin layers made of another material was also considered. Additional clarifications of the regularization procedure, important for obtaining a finite physical value of the energy density, were given. The latter quantity is essential for obtaining the Casimir force in the configuration of a sphere (lens) above a plate (semi-space), which was used in the recent experiments . For this configuration the general expression for the Casimir force, with account of the layers covering the lens and the semi-space, was obtained by use of the proximity force theorem.
The Casimir force was recalculated for $`Al`$ and $`Au`$ test bodies in the configurations of two semi-spaces and a sphere (lens) above a semi-space. The disagreement between the results of and was resolved in favor of . Additionally, the computational results were compared with the perturbation expansion up to the fourth order in powers of the relative penetration depth of the electromagnetic zero-point oscillations into the metal. The perturbation results are in agreement with and with our computations for space separations larger than the plasma wavelength of the metal under study (not much larger, as would be expected from general considerations). We have performed the first computations of the Casimir force between $`Al`$ test bodies covered by thin $`Au`$ layers. A monotonic decrease of the correction factor to the Casimir force with increasing layer thickness was observed. The qualitative analysis leads to the conclusion that the layer thickness should be large enough to allow neglect of the spatial dispersion of the dielectric permittivity and the use of the bulk optical tabulated data for the complex refractive index. For $`Au`$ layers the minimal thickness allowing such an approximation was estimated as $`d=30`$ nm, in agreement with the experimental evidence of . For smaller layer thicknesses the bulk optical tabulated data cannot be used; in this case the calculation of the Casimir force would require a direct measurement of the complex refractive index of the particular film (not only its frequency dependence but also its dependence on the wave vector).
The van der Waals force was calculated between $`Al`$ and $`Au`$ test bodies in the configurations of two semi-spaces and a sphere (lens) above a semi-space. The computations were performed starting from the same general expressions as in the case of the Casimir force and using the same numerical procedure and optical tabulated data. The extremely narrow region where the pure non-retarded van der Waals power-law force acts was noted: it extends only from $`a=0.5`$ nm to $`a=(2\text{–}4)`$ nm. For larger distances a wide transition region starts, where the non-retarded van der Waals force described by Eq. (34) gradually transforms into the retarded van der Waals (Casimir) force of Eq. (26) as the space separation approaches $`a=1\mu `$m. The ideal values of the Casimir force given by Eq. (26) are never achieved at room temperature (at $`a=1\mu `$m because of the finite conductivity of the metal, while for larger distances the temperature corrections make a strong contribution). Using the asymptotic region of the pure non-retarded van der Waals force, new values of the Hamaker constant for $`Al`$ and $`Au`$ were obtained. For $`Al`$ the reported accuracy corresponds to a relative error of 2.8%, and for $`Au`$ it is around 4.5%.
The obtained results do not exhaust all the problems connected with the role of the finite conductivity of the metal in precision measurements of the Casimir force. The main problem to be solved is the investigation of corrections to the force due to thin covering layers. This would demand theoretical work on the generalization of the Lifshitz formalism to the case when spatial dispersion can be important in addition to the frequency dependence. Also, new measurements of the complex refractive index are needed for the layers under consideration. Moreover, the finite conductivity corrections to the Casimir force should be considered together with the corrections due to surface roughness (see, e.g., , where the non-additivity of the two factors is demonstrated) and the corrections due to finite temperature. This combined research is necessary for both applied and fundamental applications of the Casimir effect. It is known that measurements of the Casimir force make it possible to obtain strong constraints on the constants of long-range interactions and light elementary particles predicted by unified gauge theories, supersymmetry and supergravity . Such information is unique and cannot be obtained even by means of the most powerful modern accelerators. In Ref. the constraints on Yukawa-type hypothetical interactions were strengthened by up to 30 times in some distance range on the basis of the Casimir force measurements of Ref. . The increased precision of the Casimir force measurements in made it possible to strengthen the constraints on Yukawa-type interactions at smaller distances by up to 140 times . It is highly probable that new measurements of the Casimir force with increased accuracy will serve as an important alternative source of information about elementary particles and fundamental interactions.
## ACKNOWLEDGMENTS
G.L.K. and V.M.M. are grateful to the Department of Physics of the Federal University of Paraiba, where this work was partly done, for their hospitality.
List of captions
| FIG.1. | | The configuration of two semi-spaces with a dielectric permittivity $`\epsilon _2(\omega )`$ covered by layers of thickness $`d`$ with a permittivity $`\epsilon _1(\omega )`$. The space separation between the layers is $`a`$. |
| --- | --- | --- |
| FIG.2. | | The dielectric permittivity as a function of imaginary frequency for $`Al`$ (solid line) and $`Au`$ (dashed line). |
| FIG.3. | | The correction factor to the Casimir force due to the finite conductivity of the metal as a function of the surface separation. The solid lines 1 and 2 represent the computational results for $`Al`$ and $`Au`$, respectively, in the configuration of two semi-spaces (a) and for a sphere (lens) above a semi-space (b). The dashed lines 1 and 2 represent the perturbation correction factor up to the 4th order for $`Al`$ and $`Au`$, respectively. |
| FIG.4. | | The correction factor to the Casimir force due to finite conductivity of the metal as a function of the surface separation for $`Al`$ test bodies covered by thin layers of $`Au`$. The dashed lines represent the results for a layer thickness $`d=20`$nm and the dotted lines for $`d=30`$nm. The case of the configuration of two semi-spaces is shown in (a) and for a sphere (lens) above a semi-space is shown in (b). The solid lines represent the results for pure $`Al`$ and $`Au`$ test bodies respectively. |
| FIG.5. | | The absolute value of the van der Waals force as a function of surface separation is shown on a logarithmic scale. The solid lines represent the results for $`Al`$ and the dashed lines represent the case of $`Au`$. The configuration of two semi-spaces is shown in (a) and that for a sphere (lens) above a semi-space is shown in (b). |
# Type Ia Supernovae: Progenitors and Evolution with Redshift
## I Introduction
Type Ia supernovae (SNe Ia) are good distance indicators, and provide a promising tool for determining cosmological parameters (e.g., bra98 ). SNe Ia have been discovered up to $`z1.32`$ gil99 . Both the Supernova Cosmology Project per97 ; per99 and the High-z Supernova Search Team gar98 ; rie98 have suggested a statistically significant value for the cosmological constant.
However, SNe Ia are not perfect standard candles, but show some intrinsic variations in brightness. When determining the absolute peak luminosity of high-redshift SNe Ia, therefore, these analyses have taken advantage of the empirical relation existing between the peak brightness and the light curve shape (LCS). Since this relation has been obtained from nearby SNe Ia only phi93 ; ham95 ; rie95 , it is important to examine whether it depends systematically on environmental properties such as metallicity and age of the progenitor system.
High-redshift supernovae provide very useful information, not only for determining cosmological parameters but also for constraining the star formation history in the universe. They have given the SN Ia rate at $`z\sim 0.5`$ pai99 and will provide the SN Ia rate history over $`0<z<1`$. With the Next Generation Space Telescope, both SNe Ia and SNe II will be observed out to $`z\sim 4`$. It is useful to provide a prediction of cosmic supernova rates to constrain the age and metallicity effects of the SN Ia progenitors.
SNe Ia have been widely believed to be a thermonuclear explosion of a mass-accreting white dwarf (WD) (e.g., nom97a for a review). However, the immediate progenitor binary systems have not been clearly identified yet bra95 . In order to address the above questions regarding the nature of high-redshift SNe Ia, we need to identify the progenitors systems and examine the “evolutionary” effects (or environmental effects) on those systems.
In §2, we summarize the progenitors’ evolution where the strong wind from accreting WDs plays a key role hac96 ; hac99a ; hac99b . In §3, we address the issue of whether a difference in the environmental properties is at the basis of the observed range of peak brightness ume99b . In §4, we make a prediction of the cosmic supernova rate history as a composite of the different types of galaxies kob00 .
## II Evolution of progenitor systems
There exist two models proposed as progenitors of SNe Ia: 1) the Chandrasekhar mass model, in which a mass-accreting carbon-oxygen (C+O) WD grows in mass up to the critical mass $`M_{\mathrm{Ia}}\simeq 1.37`$–$`1.38M_\odot `$ near the Chandrasekhar mass and explodes as an SN Ia (e.g., nom84 ; nom94 ), and 2) the sub-Chandrasekhar mass model, in which an accreted layer of helium atop a C+O WD ignites off-center for a WD mass well below the Chandrasekhar mass (e.g., arn96 ). The early-time spectra of the majority of SNe Ia are in excellent agreement with the synthetic spectra of the Chandrasekhar mass models, while the spectra of the sub-Chandrasekhar mass models are too blue to be comparable with the observations hof96 ; nug97 .
For the evolution of accreting WDs toward the Chandrasekhar mass, two scenarios have been proposed: 1) a double degenerate (DD) scenario, i.e., merging of double C+O WDs with a combined mass surpassing the Chandrasekhar mass limit ibe84 ; web84 , and 2) a single degenerate (SD) scenario, i.e., accretion of hydrogen-rich matter via mass transfer from a binary companion (e.g., nom82 ; nom94 ). The issue of DD vs. SD is still debated (e.g., bra95 ), although theoretical modeling has indicated that the merging of WDs leads to the accretion-induced collapse rather than SN Ia explosion sai85 ; sai98 ; seg97 .
In the SD Chandrasekhar mass model for SNe Ia, a WD explodes as an SN Ia only when its mass accretion rate ($`\dot{M}`$) lies in a certain narrow range (e.g., nom82 ; nom91 ). In particular, if $`\dot{M}`$ exceeds the critical rate $`\dot{M}_\mathrm{b}`$, the accreted matter extends to form a common envelope nom79 . This difficulty has been overcome by the WD wind model (see below). For the actual binary systems in which the WD mass ($`M_{\mathrm{WD}}`$) grows to $`M_{\mathrm{Ia}}`$, the following two systems are appropriate. One consists of a mass-accreting WD and a lobe-filling, more massive, slightly evolved main-sequence or sub-giant star (hereafter “WD+MS system”). The other consists of a WD and a lobe-filling, less massive red giant (hereafter “WD+RG system”).
### II.1 White dwarf winds
Optically thick WD winds are driven when the accretion rate $`\dot{M}`$ exceeds the critical rate $`\dot{M}_\mathrm{b}`$. Here $`\dot{M}_\mathrm{b}`$ is the rate at which steady burning can process the accreted hydrogen into helium, $`\dot{M}_\mathrm{b}\simeq 0.75\times 10^{-6}\left(\frac{M_{\mathrm{WD}}}{M_\odot }-0.40\right)M_\odot \,\mathrm{yr}^{-1}`$.
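For orientation, this linear relation is trivial to evaluate numerically; a one-function sketch (Python, added for illustration):

```python
def m_dot_b(m_wd):
    """Critical accretion rate (M_sun per yr) above which the optically
    thick wind blows, from the relation above; m_wd in solar masses."""
    return 0.75e-6 * (m_wd - 0.40)

# e.g. m_dot_b(1.0) = 4.5e-7 M_sun/yr, m_dot_b(1.3) = 6.75e-7 M_sun/yr
```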
With such rapid accretion, the WD envelope expands to $`R_{\mathrm{ph}}\sim 0.1R_\odot `$ and the photospheric temperature decreases below $`\mathrm{log}T_{\mathrm{ph}}\sim 5.5`$. Around this temperature, the shoulder of the strong peak of the OPAL Fe opacity igl93 drives the radiation-driven wind hac96 ; hac99b . The ratio $`v_{\mathrm{ph}}/v_{\mathrm{esc}}`$ of the photospheric velocity to the escape velocity at the photosphere depends on the mass transfer rate and $`M_{\mathrm{WD}}`$ (see Fig. 6 in hac99b ). We call the wind strong when $`v_{\mathrm{ph}}>v_{\mathrm{esc}}`$. When the wind is strong, $`v_{\mathrm{ph}}\sim 1000`$ km s<sup>-1</sup>, much faster than the orbital velocity.
If the wind is sufficiently strong, the WD can avoid the formation of a common envelope, and steady hydrogen burning increases its mass continuously at a rate $`\dot{M}_\mathrm{b}`$ by blowing the extra mass away in a wind. When the mass transfer rate decreases below this critical value, the optically thick winds stop. If the mass transfer rate further decreases below $`\sim 0.5\,\dot{M}_\mathrm{b}`$, hydrogen shell burning becomes unstable, triggering very weak shell flashes that still burn a large fraction of the accreted hydrogen.
The steady hydrogen shell burning converts hydrogen into helium atop the C+O core and increases the mass of the helium layer gradually. When its mass reaches a certain value, weak helium shell flashes occur. Then a part of the envelope mass is blown off but a large fraction of He can be burned to C+O kat99h to increase the WD mass. In this way, strong winds from the accreting WD play a key role to increase the WD mass to $`M_{\mathrm{Ia}}`$.
### II.2 WD+RG system
This is a symbiotic binary system consisting of a WD and a low-mass red giant (RG). A full evolutionary path of the WD+RG system from the zero-age main-sequence stage to the SN Ia explosion is described in hac99b ; tut77 . The occurrence frequency of SNe Ia through this channel is much larger than in the earlier scenario because of the following two evolutionary processes, which had not been considered before.
(1) Because of the AGB wind, the WD+RG close binary can form from a wide binary even with an initial separation as large as $`a_i\lesssim 40,000R_\odot `$. Our earlier estimate hac96 was constrained by $`a_i\lesssim 1,500R_\odot `$.
(2) When the RG fills its inner critical Roche lobe, the WD undergoes rapid mass accretion and blows a strong optically thick wind. Our earlier analysis has shown that the mass transfer is stabilized by this wind only when the mass ratio of RG/WD is smaller than 1.15. Our new finding is that the WD wind can strip mass from the RG envelope, which could be efficient enough to stabilize the mass transfer even if the RG/WD mass ratio exceeds 1.15. If this mass-stripping effect is strong enough, though its efficiency $`\eta _{\mathrm{eff}}`$ is subject to uncertainties, the symbiotic channel can produce SNe Ia for a much (ten times or more) wider range of the binary parameters than our earlier estimation.
With the above two new effects (1) and (2), the WD+RG (symbiotic) channel can account for the inferred rate of SNe Ia in our Galaxy. The immediate progenitor binaries in this symbiotic channel to SNe Ia may be observed as symbiotic stars, luminous supersoft X-ray sources, or recurrent novae like T CrB or RS Oph, depending on the wind status.
### II.3 WD+MS system
In this scenario, a C+O WD originates not from an AGB star with a C+O core, but from a red-giant star with a helium core of $`0.8`$–$`2.0M_\odot `$. The helium star, which is formed after the first common-envelope evolution, evolves to form a C+O WD of $`0.8`$–$`1.1M_\odot `$, transferring a part of the helium envelope onto the secondary main-sequence star. A full evolutionary path of the WD+MS system from the zero-age main-sequence stage to the SN Ia explosion is described in hac99a .
This evolutionary path provides a much wider channel to SNe Ia than previous scenarios. A part of the progenitor systems are identified as the luminous supersoft X-ray sources heu92 during steady H-burning (but without wind to avoid extinction), or the recurrent novae like U Sco if H-burning is weakly unstable. Actually these objects are characterized by the accretion of helium-rich matter.
### II.4 Realization frequency
For an immediate progenitor system WD+RG of SNe Ia, we consider a close binary initially consisting of a C+O WD with $`M_{\mathrm{WD},0}=0.6`$–$`1.2M_\odot `$ and a low-mass red-giant star with $`M_{\mathrm{RG},0}=0.7`$–$`3.0M_\odot `$ having a helium core of $`M_{\mathrm{He},0}=0.2`$–$`0.46M_\odot `$. The initial state of these immediate progenitors is specified by three parameters, i.e., $`M_{\mathrm{WD},0}`$, $`M_{\mathrm{RG},0}=M_{\mathrm{d},0}`$, and the initial orbital period $`P_0`$ ($`M_{\mathrm{He},0}`$ is determined if $`P_0`$ is given).
We follow the binary evolution of these systems and obtain the parameter range(s) which can produce an SN Ia. In Figure 1, the region enclosed by the thin solid line produces SNe Ia for several cases of the initial WD mass, $`M_{\mathrm{WD},0}=0.75`$–$`1.1M_\odot `$. For smaller $`M_{\mathrm{WD},0}`$ the wind is weaker, so the SN Ia region is smaller. The regions for $`M_{\mathrm{WD},0}=0.6M_\odot `$ and $`0.7M_\odot `$ vanish for both the WD+MS and WD+RG systems.
Outside this region, the outcome of the evolution at the end of the calculations is not an SN Ia but one of the following: (i) formation of a common envelope for too large $`M_\mathrm{d}`$ or $`P_0`$, where the mass transfer is unstable from its beginning; (ii) novae or strong hydrogen shell flashes for too small $`M_{\mathrm{d},0}`$, where the mass transfer rate falls below $`10^{-7}`$ $`M_\odot `$ yr<sup>-1</sup>; (iii) helium core flash of the red-giant component for too long $`P_0`$, where a central helium core flash ignites, i.e., the helium core mass of the red giant reaches $`0.46M_\odot `$; (iv) accretion-induced collapse for $`M_{\mathrm{WD},0}>1.2M_\odot `$, where the central density of the WD reaches $`10^{10}`$ g cm<sup>-3</sup> before the heating wave from the hydrogen-burning layer reaches the center, so that the WD collapses due to electron capture without exploding as an SN Ia nom91 .
It is clear that the new region of the WD+RG system is not limited by the condition $`q<1.15`$, and is thus ten times or more wider than the region of the model of hac96 (depending on the stripping efficiency $`\eta _{\mathrm{eff}}`$).
The WD+MS progenitor system can also be specified by three initial parameters: the initial C+O WD mass $`M_{\mathrm{WD},0}`$, the mass donor's initial mass $`M_{\mathrm{d},0}`$, and the orbital period $`P_0`$. For $`M_{\mathrm{WD},0}=1.0M_\odot `$, the region producing an SN Ia is bounded by $`M_{\mathrm{d},0}=1.8`$–$`3.2M_\odot `$ and $`P_0=0.5`$–$`5`$ d, as shown by the solid line in Figure 1. The upper and lower bounds are determined by common-envelope formation (i) and nova-like explosions (ii), respectively, as above. The left and right bounds are determined by the minimum and maximum radii of the donor star during its main-sequence phase hac99a .
We estimate the rate of SNe Ia originating from these channels in our Galaxy by using equation (1) of ibe84 . The realization frequencies of SNe Ia through the WD+RG and WD+MS channels are estimated as $`\sim 0.0017`$ yr<sup>-1</sup> (WD+RG) and $`\sim 0.001`$ yr<sup>-1</sup> (WD+MS). The total SN Ia rate of the WD+MS/WD+RG systems is then $`\sim 0.003`$ yr<sup>-1</sup>, which is close to the rate inferred for our Galaxy.
### II.5 Low metallicity inhibition of type Ia supernovae
The optically thick winds are driven by a strong peak of the OPAL opacity at $`\mathrm{log}T(\mathrm{K})\simeq 5.2`$ (e.g., igl93 ). Since the opacity peak is due to iron lines, the wind velocity $`v_\mathrm{w}`$ depends on the iron abundance \[Fe/H\] (kob98 ; hac00 ), i.e., $`v_\mathrm{w}`$ is higher for larger \[Fe/H\]. The metallicity effect on SNe Ia is clearly demonstrated by the size of the regions producing SNe Ia in the diagram of the initial orbital period versus the initial mass of the companion star (see Fig. 3). The SN Ia regions are much smaller for lower metallicity because the wind becomes weaker.
The wind velocity depends also on the luminosity $`L`$ of the WD. A more massive WD has a higher $`L`$, thus blowing higher-velocity winds (hac99b ). In order for the wind velocity to exceed the escape velocity of the WD near the photosphere, the WD mass should be larger than a certain critical mass for a given \[Fe/H\]. This implies that the initial mass of the WD, $`M_{\mathrm{WD},0}`$, should already exceed that critical mass in order for the WD mass to grow to the Ch mass. This critical mass is larger for smaller \[Fe/H\], reaching $`1.1M_\odot `$ at \[Fe/H\] $`=-1.1`$ (Fig. 3). Here we should note that the relative number of WDs with $`M_{\mathrm{WD},0}\gtrsim 1.1M_\odot `$ is quite small in close binary systems (ume99a ), and that for $`M_{\mathrm{WD},0}\gtrsim 1.2M_\odot `$ the accretion leads to collapse rather than to an SN Ia (nom91 ). Therefore, no SN Ia occurs at \[Fe/H\] $`\lesssim -1.1`$ in our model.
It is possible to test the metallicity effects on SNe Ia with the chemical evolution of galaxies.
In the one-zone uniform model for the chemical evolution of the solar neighborhood, the heavy elements in the metal-poor stars originate from the mixture of the SN II ejecta of various progenitor masses. The abundances averaged over the progenitor masses of SNe II predict \[O/Fe\] $`\simeq 0.45`$ (e.g., tsu95 ; nom97c ). Later, SNe Ia start ejecting mostly Fe, so that \[O/Fe\] decreases to $`\sim 0`$ around \[Fe/H\] $`\sim 0`$. The low-metallicity inhibition of SNe Ia predicts that the decrease in \[O/Fe\] starts at \[Fe/H\] $`\simeq -1`$. Such an evolution of \[O/Fe\] explains the observations well (kob98 ).
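The trend described in this paragraph can be visualized with a toy parameterization (Python, purely illustrative and not the chemical-evolution calculation of kob98 ; the plateau value, the break at \[Fe/H\] $`=-1`$, and the linear decline shape are assumptions):

```python
import numpy as np

def o_fe_toy(fe_h, fe_h_break=-1.0, plateau=0.45):
    """Toy [O/Fe] vs [Fe/H]: SN II plateau below the break (where SNe Ia
    switch on in our model), then a linear decline to 0 at [Fe/H] = 0."""
    fe_h = np.asarray(fe_h, dtype=float)
    decline = plateau * fe_h / fe_h_break          # ramp from plateau to 0
    return np.where(fe_h < fe_h_break, plateau,
                    np.clip(decline, 0.0, plateau))
```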
However, we should note that some anomalous stars have \[O/Fe\] $`\sim 0`$ at \[Fe/H\] $`\lesssim -1`$. The presence of such stars is not in conflict with our SN Ia models, but can be understood as follows. The formation of such anomalous stars (and the diversity of \[O/Fe\] in general) indicates that the interstellar material was not uniformly mixed but contaminated by only a few SN II ejecta (or even a single SN II). This is because the timescale of mixing was longer than the time difference between the supernova event and the next-generation star formation. The iron and oxygen abundances produced by a single SN II vary depending on the mass, energy, mass cut, and metallicity of the progenitor. Relatively smaller-mass SNe II ($`13`$–$`15M_\odot `$) and higher explosion energies tend to produce \[O/Fe\] $`\sim 0`$ (nom97c ; ume00 ). Those metal-poor stars with \[O/Fe\] $`\sim 0`$ may have been born from interstellar medium polluted by such SNe II.
The metallicity effect on SNe Ia can also be checked with the metallicity of the host galaxies of nearby SNe Ia. There is no evidence that SNe Ia have occurred in galaxies with a metallicity of \[Fe/H\] $`\lesssim -1`$, although host galaxies are detected for only one third of SNe Ia and the estimated metallicities of the host galaxies are uncertain. Three SNe Ia have been observed in low-metallicity dwarf galaxies: SN 1895B and SN 1972E in NGC 5253, and SN 1937C in IC 4182. The metallicities of these galaxies are estimated to be \[O/H\] $`=-0.25`$ and $`-0.35`$, respectively koc97 . If \[O/Fe\] $`\sim 0`$ as in the Magellanic Clouds, then \[Fe/H\] $`\simeq -0.25`$ and $`-0.35`$, which are not so small. Even if these galaxies have an extreme SN II-like abundance pattern with \[O/Fe\] $`\simeq 0.45`$, \[Fe/H\] $`\simeq -0.7`$ and $`-0.8`$ (being higher than $`-1`$), respectively. Since these host galaxies are blue ($`B-V=0.44`$ for NGC 5253 and $`B-V=0.37`$ for IC 4182 according to the RC3 catalog), the MS+WD systems are the dominant progenitors of the present SNe Ia. The rate of SNe Ia originating from the MS+WD systems is not so sensitive to the metallicity as long as \[Fe/H\] $`>-1`$ (hac00 ). Even if \[Fe/H\] $`\simeq -0.7`$ in such blue galaxies, therefore, the SN Ia rate is predicted to be similar to that in more metal-rich galaxies.
## III The origin of diversity of SNe Ia and environmental effects
There are observational indications that SNe Ia are affected by their environment. The most luminous SNe Ia seem to occur only in spiral galaxies, while both spiral and elliptical galaxies are hosts to dimmer SNe Ia. Thus the mean peak brightness is dimmer in ellipticals than in spiral galaxies ham96 . The SN Ia rate per unit luminosity at the present epoch is almost twice as high in spirals as in ellipticals cap97 . Moreover, wan97 ; rie99 found that the variation of the peak brightness for SNe Ia located in the outer regions of galaxies is smaller.
hof98 ; hof99 examined how the initial composition of the WD (metallicity and the C/O ratio) affects the observed properties of SNe Ia. ume99a obtained the C/O ratio as a function of the main-sequence mass and metallicity of the WD progenitors. ume99b suggested that the variation of the C/O ratio is the main cause of the variation in SN Ia brightness, with a larger C/O ratio yielding brighter SNe Ia. We will show that the C/O ratio indeed depends on environmental properties, such as the metallicity and age of the companion of the WD, and that our model can explain most of the observational trends discussed above. We then make some predictions about the brightness of SNe Ia at higher redshift.
### III.1 C/O ratio in WD progenitors
In this section we discuss how the C/O ratio in the WD depends on the metallicity and age of the binary system. The C/O ratio in C+O WDs depends primarily on the main-sequence mass of the WD progenitor and on metallicity.
We calculated the evolution of intermediate-mass ($`3`$–$`9M_{\odot }`$) stars for metallicities $`Z=0.001`$–$`0.03`$. In the ranges of stellar masses and $`Z`$ considered in this paper, the most important metallicity effect is that the radiative opacity is smaller for lower $`Z`$. Therefore, a star with lower $`Z`$ is brighter, and thus has a shorter lifetime, than a star of the same mass but higher $`Z`$. In this sense, the effect of reducing metallicity for these stars is almost equivalent to that of increasing the stellar mass.
For stars with larger masses and/or smaller $`Z`$, the luminosity is higher at the same evolutionary phase. With a higher nuclear energy generation rate, these stars have larger convective cores during H and He burning, thus forming larger He and C-O cores.
As seen in Figure 4, the central part of these stars is oxygen-rich. The C/O ratio is nearly constant in the innermost region, which was a convective core during He burning. Outside this homogeneous region, where the C–O layer grows due to He shell burning, the C/O ratio increases up to C/O $`\gtrsim 1`$; thus the oxygen-rich core is surrounded by a shell with C/O $`\gtrsim 1`$. In fact, this is a generic feature of all the models we calculated. The C/O ratio in the shell is C/O $`\simeq 1`$ for stars as massive as $`7M_{\odot }`$, and C/O $`>1`$ for less massive stars.
When a progenitor reaches the critical mass for the SN Ia explosion, the central core is convective up to around $`1.1M_{\odot }`$. Hence the relevant C/O ratio is between the central value before convective mixing and the total C/O of the whole WD. Using the results from the C6 model nom84 , we assume that the convective region is $`1.14M_{\odot }`$ and, for simplicity, that C/O = 1 outside the C–O core at the end of the second dredge-up. We then obtain the C/O ratio of the inner part of the SN Ia progenitors (Fig. 5).
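The averaging just described is simple enough to sketch. Below is a minimal illustration (not the authors' code): the tabulated core profile `m_grid`, `xc_profile` and the function name are hypothetical placeholders, and the accreted layer outside the original C+O core is taken to have C/O = 1, i.e. $`X`$(C) = 0.5 for a pure C+O mixture, as assumed above.

```python
import numpy as np

def averaged_xc(m_grid, xc_profile, m_co, m_conv=1.14):
    """Average X(C) over the convectively mixed central m_conv (in Msun).

    m_grid     : increasing mass coordinates (Msun) of the progenitor profile
    xc_profile : carbon mass fraction X(C) at each mass coordinate
    m_co       : C+O core mass at the end of the second dredge-up (Msun)
    """
    # sample the enclosed mass uniformly, so a plain mean is mass-weighted
    m = np.linspace(0.0, m_conv, 2000)
    # inside the original core: evolutionary profile;
    # outside it: accreted layer with C/O = 1, i.e. X(C) = 0.5
    xc = np.where(m < m_co, np.interp(m, m_grid, xc_profile), 0.5)
    return xc.mean()
```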
From this figure we find the following interesting trends. First, while the central C/O is a complicated function of stellar mass ume99a , the C/O ratio in the core before the SN Ia explosion is, as shown here, a monotonically decreasing function of mass. The central C/O ratio at the end of the second dredge-up decreases with mass for $`M_{\mathrm{ms}}\gtrsim 5M_{\odot }`$, while the ratio increases with mass for $`M_{\mathrm{ms}}\lesssim 4M_{\odot }`$; however, the convective core mass during He burning is smaller for a less massive star, and the C/O ratio during shell He burning is larger for a smaller C+O core. Hence, when the C/O ratio is averaged over the central $`1.1M_{\odot }`$, it decreases with mass. Second, as shown in ume99a , although the C/O ratio is a complicated function of metallicity and mass, the metallicity dependence converges remarkably when the ratio is viewed as a function of the C+O core mass ($`M_{\mathrm{CO}}`$) instead of the initial main-sequence mass.
According to the evolutionary calculations for $`3`$–$`9M_{\odot }`$ stars by ume99a , the C/O ratio and its distribution are determined in the following evolutionary stages of the close binary.
(1) At the end of central He burning in the $`3`$–$`9M_{\odot }`$ primary star, C/O $`<1`$ in the convective core. The mass of the core is larger for more massive stars.
(2) After central He exhaustion, the outer C+O layer grows via He shell burning, where C/O $`\gtrsim 1`$ (ume99a ).
(3a) If the primary star becomes a red giant (case C evolution; e.g., van94 ), it then undergoes the second dredge-up, forming a thin He layer, and enters the AGB phase. The C+O core mass, $`M_{\mathrm{CO}}`$, at this phase is larger for more massive stars. For a larger $`M_{\mathrm{CO}}`$ the total carbon mass fraction is smaller.
(3b) When it enters the AGB phase, the star greatly expands and is assumed here to undergo Roche lobe overflow (or a super-wind phase) and to form a C+O WD. Thus the initial mass of the WD, $`M_{\mathrm{WD},0}`$, in the close binary at the beginning of mass accretion is approximately equal to $`M_{\mathrm{CO}}`$.
(4a) If the primary star becomes a He star (case BB evolution), the second dredge-up in (3a) corresponds to the expansion of the He envelope.
(4b) The ensuing Roche lobe overflow again leads to a WD of mass $`M_{\mathrm{WD},0}`$ = $`M_{\mathrm{CO}}`$.
(5) After the onset of mass accretion, the WD mass grows through steady H burning and weak He shell flashes, as described in the WD wind model. The composition of the growing C+O layer is assumed to be C/O=1.
(6) The WD grows in mass and ignites carbon when its mass reaches $`M_{\mathrm{Ia}}=1.367M_{\odot }`$, as in model C6 of nom84 . Because of the strong electron degeneracy, carbon burning is unstable and grows into a deflagration at a central temperature of $`8\times 10^8`$ K and a central density of $`1.47\times 10^9`$ g cm<sup>-3</sup>. At this stage, the convective core extends to $`M_r=1.14M_{\odot }`$ and the material is mixed almost uniformly, as in the C6 model.
In Figure 5 we show the carbon mass fraction $`X`$(C) in the convective core of this pre-explosive WD, as a function of metallicity ($`Z`$) and of the initial mass of the WD before the onset of mass accretion, $`M_{\mathrm{CO}}`$. Figure 5 reveals that: 1) $`X`$(C) is smaller for larger $`M_{\mathrm{CO}}\simeq M_{\mathrm{WD},0}`$. 2) The dependence of $`X`$(C) on metallicity is small when plotted against $`M_{\mathrm{CO}}`$, even though the relation between $`M_{\mathrm{CO}}`$ and the initial stellar mass depends sensitively on $`Z`$ (ume99a ).
### III.2 Brightness of SNe Ia and the C/O ratio
In the Chandrasekhar mass models for SNe Ia, the brightness of an SN Ia is determined mainly by the mass of <sup>56</sup>Ni synthesized ($`M_{\mathrm{Ni56}}`$). Observational data suggest that $`M_{\mathrm{Ni56}}`$ for most SNe Ia lies in the range $`M_{\mathrm{Ni56}}\sim 0.4`$–$`0.8M_{\odot }`$ (e.g., maz98 ). This range of $`M_{\mathrm{Ni56}}`$ can result from differences in the C/O ratio of the progenitor WD as follows.
In the deflagration model, a larger C/O ratio leads to the production of more nuclear energy and greater buoyancy force, thus leading to faster flame propagation. The faster propagation of the convective deflagration wave results in a larger $`M_{\mathrm{Ni56}}`$. For example, a variation of the propagation speed by 15% in the W6–W8 models results in $`M_{\mathrm{Ni56}}`$ values ranging between 0.5 and $`0.7M_{\odot }`$ (nom84 ), which could explain the observations.
In the delayed detonation model, $`M_{\mathrm{Ni56}}`$ is predominantly determined by the deflagration-to-detonation-transition (DDT) density $`\rho _{\mathrm{DDT}}`$, at which the initially subsonic deflagration turns into a supersonic detonation kho91 . As discussed in ume99b , $`\rho _{\mathrm{DDT}}`$ could be very sensitive to $`X`$(C), and a larger $`X`$(C) is likely to result in a larger $`\rho _{\mathrm{DDT}}`$ and $`M_{\mathrm{Ni56}}`$.
Here we postulate that $`M_{\mathrm{Ni56}}`$, and consequently the brightness of an SN Ia, increases as the progenitor's C/O ratio increases (and thus as $`M_{\mathrm{WD},0}`$ decreases). As illustrated in Figure 5, the range $`M_{\mathrm{Ni56}}\sim 0.5`$–$`0.8M_{\odot }`$ results from an $`X`$(C) range of $`\sim 0.35`$–$`0.5`$, which is the range of $`X`$(C) values of our progenitor models. The $`X`$(C)–$`M_{\mathrm{Ni56}}`$–$`M_{\mathrm{WD},0}`$ relation we adopt is still only a working hypothesis, which needs to be confirmed by studies of the turbulent flame during the explosion (e.g., nie95 ).
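Because the relation is only posited to be monotonic, any quantitative form of the $`X`$(C)–$`M_{\mathrm{Ni56}}`$ map is an assumption. The sketch below linearly interpolates between the end points quoted above purely for illustration; the linear form and the function name are ours, not the paper's.

```python
def ni56_mass(xc, xc_range=(0.35, 0.50), ni_range=(0.5, 0.8)):
    """Illustrative M(Ni56) [Msun] from X(C), clipped to the quoted ranges."""
    frac = (xc - xc_range[0]) / (xc_range[1] - xc_range[0])
    frac = min(max(frac, 0.0), 1.0)     # clip outside the modeled X(C) range
    return ni_range[0] + frac * (ni_range[1] - ni_range[0])
```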
### III.3 Metallicity and age effects
#### III.3.1 Metallicity effects on the minimum $`M_{\mathrm{WD},0}`$
As mentioned in §2.5, $`M_\mathrm{w}`$ is the metallicity-dependent minimum $`M_{\mathrm{WD},0}`$ for a WD to become an SN Ia (the strong wind condition in Fig. 5). The upper bound, $`M_{\mathrm{WD},0}\simeq 1.07M_{\odot }`$, is imposed by the condition that carbon should not ignite, and it is almost independent of metallicity. As shown in Figure 5, the range of $`M_{\mathrm{CO}}\simeq M_{\mathrm{WD},0}`$ can be converted into a range of $`X`$(C). From this we find the following metallicity dependence of $`X`$(C):
(1) The upper bound of $`X`$(C) is determined by the lower limit on $`M_{\mathrm{CO}}`$ imposed by the metallicity-dependent conditions for a strong wind; e.g., $`X`$(C) $`\lesssim 0.51`$, 0.46, and 0.41 for $`Z`$ = 0.02, 0.01, and 0.004, respectively.
(2) On the other hand, the lower bound, $`X`$(C) $`\simeq 0.35`$–$`0.33`$, does not depend much on $`Z`$, since it is imposed by the maximum $`M_{\mathrm{CO}}`$.
(3) Assuming the relation between $`M_{\mathrm{Ni56}}`$ and $`X`$(C) given in Figure 5, our model predicts the absence of brighter SNe Ia in lower metallicity environments.
#### III.3.2 Age effects on the minimum $`M_{\mathrm{WD},0}`$
In our model, the age of the progenitor system also constrains the range of $`X`$(C) in SNe Ia. In the SD scenario, the lifetime of the binary system is essentially the main-sequence lifetime of the companion star, which depends on its initial mass $`M_2`$. hac99a ; hac99b obtained a constraint on $`M_2`$ by calculating the evolution of accreting WDs for a set of initial masses of the WD ($`M_{\mathrm{WD},0}\simeq M_{\mathrm{CO}}`$) and of the companion ($`M_2`$), and of the initial binary period ($`P_0`$). In order for the WD mass to reach $`M_{\mathrm{Ia}}`$, the donor star should transfer enough material at the appropriate accretion rates. The successful donors are divided into two categories: one is composed of slightly evolved main-sequence stars with $`M_2\simeq 1.7`$–$`3.6M_{\odot }`$ (for $`Z=0.02`$), and the other of red-giant stars with $`M_2\simeq 0.8`$–$`3.1M_{\odot }`$ (for $`Z=0.02`$) (Fig. 1).
If the progenitor system is older than 2 Gyr, it must be a system with a donor star of $`M_2<1.7M_{\odot }`$ on the red-giant branch, since systems with $`M_2>1.7M_{\odot }`$ become SNe Ia in less than 2 Gyr. Likewise, for a given age of the progenitor system, $`M_2`$ must be smaller than a limiting mass. This constraint on $`M_2`$ can be translated into a minimum $`M_{\mathrm{CO}}`$ for a given age, as follows: for a smaller $`M_2`$, i.e., for an older system, the total mass that can be transferred from the donor to the WD is smaller. In order for $`M_{\mathrm{WD}}`$ to reach $`M_{\mathrm{Ia}}`$, therefore, the initial mass of the WD, $`M_{\mathrm{WD},0}\simeq M_{\mathrm{CO}}`$, must be larger. This implies that older systems have a larger minimum $`M_{\mathrm{CO}}`$, as indicated in Figure 5. Using the $`X`$(C)–$`M_{\mathrm{CO}}`$ and $`M_{\mathrm{Ni56}}`$–$`X`$(C) relations (Fig. 5), we conclude that WDs in older progenitor systems have a smaller $`X`$(C), and thus produce dimmer SNe Ia.
### III.4 Comparison with observations
The first observational indication that can be compared with our model is the possible dependence of the SN brightness on the morphology of the host galaxies. ham96 found that the most luminous SNe Ia occur in spiral galaxies, while both spiral and elliptical galaxies are hosts to dimmer SNe Ia. Hence, the mean peak brightness is lower in elliptical than in spiral galaxies.
In our model, this property is simply understood as an effect of the different companion ages. In spiral galaxies, star formation occurs continuously up to the present time; hence, both WD+MS and WD+RG systems can produce SNe Ia. In elliptical galaxies, on the other hand, star formation ended long ago, typically more than 10 Gyr ago; hence, WD+MS systems can no longer produce SNe Ia. In Figure 6 we show the expected frequency of SNe Ia for a galaxy of mass $`2\times 10^{11}M_{\odot }`$ for the WD+MS and WD+RG systems separately, as a function of $`M_{\mathrm{CO}}`$. Here we use the results of hac99b ; hac99a , and the $`M_{\mathrm{CO}}`$–$`X`$(C) and $`M_{\mathrm{Ni56}}`$–$`X`$(C) relations given in Figure 5. Since a WD with smaller $`M_{\mathrm{CO}}`$ is assumed to produce a brighter SN Ia (larger $`M_{\mathrm{Ni56}}`$), our model predicts that dimmer SNe Ia occur in both spirals and ellipticals, while brighter ones occur only in spirals. The mean brightness is smaller for ellipticals, and the total SN Ia rate per unit luminosity is larger in spirals than in ellipticals. These properties are consistent with observations.
The second observational suggestion concerns the radial distribution of SNe Ia in galaxies. wan97 ; rie98 found that the variation of the peak brightness for SNe Ia located in the outer regions of galaxies is smaller. This behavior can be understood as an effect of metallicity. As shown in Figure 5, even when the progenitor age is the same, the minimum $`M_{\mathrm{CO}}`$ is larger for smaller metallicity because of the metallicity dependence of the WD winds. Therefore, our model predicts that the maximum brightness of SNe Ia decreases as metallicity decreases. Since the outer regions of galaxies are thought to have lower metallicities than the inner regions (zar94 ; kob99 ), our model is consistent with observations. wan97 also claimed that SNe Ia may be deficient in the bulges of spiral galaxies. This can be explained by the age effect, because the bulge consists of old-population stars.
### III.5 Evolution of SNe Ia at high redshift
We have suggested that $`X`$(C) is very likely the quantity that causes the diversity in $`M_{\mathrm{Ni56}}`$, and thus in the brightness of SNe Ia. We have then shown that our model predicts that the brightness of SNe Ia depends on the environment, in a way that is qualitatively consistent with the observations. Further studies of the propagation of the turbulent flame and of the DDT are necessary in order to actually prove that $`X`$(C) is the key parameter.
Our model predicts that when the progenitors belong to an old population, or to a low-metallicity environment, the number of very bright SNe Ia is small, so that the variation in brightness is also smaller, as shown in Figure 7. In spiral galaxies, the metallicity is significantly smaller at redshifts $`z\gtrsim 1`$, and thus both the mean brightness of SNe Ia and its range tend to be smaller (Fig. 7). At $`z\gtrsim 2`$, SNe Ia would not occur in spirals at all because the metallicity is too low. In elliptical galaxies, on the other hand, the metallicity at redshifts $`z\sim 1`$–$`3`$ is not very different from the present value. However, the age of the galaxies at $`z\sim 1`$ is only about 5 Gyr, so that the mean brightness of SNe Ia and its range tend to be larger at $`z\gtrsim 1`$ than in present-day ellipticals because of the age effect.
We note that the variation of $`X`$(C) is larger in metal-rich nearby spirals than in high-redshift galaxies. Therefore, if $`X`$(C) is the main parameter responsible for the diversity of SNe Ia, and if the LCS (light curve shape) method is confirmed by the nearby SN Ia data, then the LCS method can also be used to determine the absolute magnitudes of high-redshift SNe Ia.
### III.6 Possible evolutionary effects
In the above subsections, we considered the metallicity effects only on the C/O ratio; these merely shift the main-sequence mass–$`M_{\mathrm{WD},0}`$ relation, and thus produce no important evolutionary effect. However, some other metallicity effects could give rise to an evolution of SNe Ia between high and low redshifts (i.e., between low and high metallicities).
Here we point out just one possible metallicity effect on the carbon ignition density in the accreting WD. The ignition density is determined by the competition between the compressional heating due to accretion and the neutrino cooling. The neutrino emission is enhanced by the local Urca shell process of, e.g., the <sup>21</sup>Ne–<sup>21</sup>F pair (pac73 ). (Note that this is different from the convective Urca neutrino process.) For higher metallicity, the abundance of <sup>21</sup>Ne is larger, so the cooling is stronger. This could delay the carbon ignition until a higher central density is reached (nom97d ).
Since a WD with a higher central density has a larger binding energy, the kinetic energy of the SN Ia tends to be smaller if the same amount of <sup>56</sup>Ni is produced. This might cause a systematically slower light curve evolution in higher-metallicity environments. The carbon ignition process, including these metallicity effects as well as the convective Urca neutrino process, needs to be studied further (see also iwa99 for nucleosynthesis constraints on the ignition density).
## IV Cosmic supernova rates
Attempts have been made to predict the cosmic supernova rates as a function of redshift by using the observed cosmic star formation rate (SFR) (rui98 ; sad98 ; yun98 ). The observed cosmic SFR shows a peak at $`z\sim 1.4`$ and a sharp decrease toward the present (mad96 ). However, the UV luminosities from which the SFRs are derived may be affected by dust extinction (pet98 ). Recent updates of the cosmic SFR suggest that the peak lies around $`z\sim 3`$.
kob98 predicted that the cosmic SN Ia rate drops at $`z\sim 1`$–$`2`$ due to the metallicity dependence of the SN Ia rate. Their finding that the occurrence of SNe Ia depends on the metallicity of the progenitor systems implies that the SN Ia rate strongly depends on the history of star formation and metal enrichment. The universe is composed of different morphological types of galaxies, and therefore the cosmic SFR is a sum of the SFRs of the different types. As each morphological type has a unique star formation history, we should decompose the cosmic SFR into the SFRs belonging to each type of galaxy and calculate the SN Ia rate for each type.
Here we first construct detailed evolution models for the different types of galaxies that are compatible with stringent observational constraints, and apply them to reproduce the cosmic SFR in two different environments, i.e., clusters and the field. Second, with these self-consistent galaxy models, we calculate the SN rate history for each type of galaxy and predict the cosmic supernova rates as a function of redshift.
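A deliberately crude sketch of this bookkeeping follows. The real rates come from the binary population calculations of hac99a ; hac99b ; here the boxcar delay window (shortest delay of $`\sim 0.5`$ Gyr, quoted below), the \[Fe/H\] $`>-1`$ cut, and the per-unit-mass efficiency `eff` are illustrative assumptions of ours.

```python
import numpy as np

def snia_rate_one_type(t_grid, sfr, feh, t_delay_min=0.5, eff=1.0e-3):
    """SN Ia rate history for one morphological type.

    t_grid : cosmic times (Gyr); sfr, feh : SFR and gas [Fe/H] on t_grid.
    Stars formed at time t' contribute SNe Ia only after at least
    t_delay_min has elapsed, and only if [Fe/H] at formation exceeded -1.
    """
    rate = np.zeros_like(t_grid)
    for i, t in enumerate(t_grid):
        ok = (t_grid <= t - t_delay_min) & (feh > -1.0)
        rate[i] = eff * np.trapz(np.where(ok, sfr, 0.0), t_grid)
    return rate

def cosmic_snia_rate(rates, weights):
    """Mass-weighted sum over galaxy types (weights: relative contributions)."""
    return sum(w * r for w, r in zip(weights, rates))
```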
### IV.1 In Clusters
Galaxies that are responsible for the cosmic SFR have different timescales for heavy-element enrichment, and the occurrence of supernovae depends on the metallicity therein. Therefore we calculate the cosmic supernova rate by summing up the supernova rates in spirals (S0a–Sa, Sab–Sb, Sbc–Sc, and Scd–Sd) and ellipticals, weighted by their relative mass contributions. The relative mass contribution is obtained from the observed relative luminosity proportions and the calculated B-band mass-to-light ratios (kob00 ). The photometric evolution is calculated with the spectral synthesis population database of kod97 . We adopt $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_0=0.2`$, $`\lambda _0=0`$, and a galactic age of $`t_{\mathrm{age}}=15`$ Gyr.
First, we predict the cosmic supernova rates using galaxy models constructed to meet the observational constraints on cluster galaxies. We assume that elliptical galaxies are formed by a single star burst and stop star formation at $`t\sim 1`$ Gyr due to a supernova-driven galactic wind, while spiral galaxies form by relatively continuous star formation. The infall rates and SFRs are given by kob00 . These models are constructed to meet the latest observational constraints, such as the present gas fractions and colors for spirals, and the mean stellar metallicity and the color evolution from the present to $`z\sim 1`$ for ellipticals (see kob00 for the figures).
The synthesized cosmic SFR has an excess at $`z\gtrsim 3`$ due to the early star burst in ellipticals, and a shallower slope from the present to the peak at $`z\sim 1.4`$, compared with Madau's plot (mad96 ). Figure 8 shows the cosmic supernova rates in cluster galaxies. The SN Ia rate in spirals drops at $`z\sim 1.9`$ because of the low-metallicity inhibition of SNe Ia. We can test the metallicity effect by finding this drop of the SN Ia rate in spirals, if high-redshift SNe Ia at $`z\gtrsim 1.5`$ and their host galaxies are observed with the Next Generation Space Telescope. In ellipticals, the chemical enrichment takes place so early that the metallicity is large enough to produce SNe Ia at $`z\gtrsim 2`$. The two peaks of the SN Ia rate at $`z\sim 2.6`$ and $`z\sim 1.6`$ come from the MS+WD and the RG+WD systems, respectively. The SN Ia rate in ellipticals decreases at $`z\gtrsim 2.6`$, which is determined by the shortest SN Ia lifetime of $`\sim 0.5`$ Gyr. Thus, the total SN Ia rate decreases at the same redshift as in ellipticals, i.e., $`z\sim 2.6`$. (Note that the decrease of the SN Ia rate at $`z\sim 1.6`$ disappears if we adopt $`z_\mathrm{f}\sim 3`$, because the peak from the MS+WD systems moves to lower redshifts.)
### IV.2 In Field
We also predict the cosmic supernova rates assuming that the formation of ellipticals in the field took place over a wide range of redshifts, as imprinted in the observed spectra of ellipticals in the Hubble Deep Field (fra98 ). The adopted SFRs are the same as in the case of cluster galaxies, except that the formation epochs $`z_\mathrm{f}`$ of ellipticals are distributed as $`\mathrm{exp}(-((z_\mathrm{f}-2)/2)^2)`$ over the range $`0z_\mathrm{f}5`$.
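The weighting just described can be sketched as follows; `sfr_single_burst` is a hypothetical stand-in for the single-burst elliptical model of kob00 , and the grid resolution is arbitrary.

```python
import numpy as np

z_f = np.linspace(0.0, 5.0, 51)                 # formation epochs
w = np.exp(-((z_f - 2.0) / 2.0) ** 2)           # Gaussian weight from the text
w /= w.sum()                                    # normalize the distribution

def field_elliptical_sfr(z, sfr_single_burst):
    """sfr_single_burst(z, zf): SFR at redshift z of a burst formed at zf."""
    return sum(wi * sfr_single_burst(z, zfi) for wi, zfi in zip(w, z_f))
```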
The synthesized cosmic SFR has a broad peak around $`z\sim 3`$, which is in good agreement with recent sub-mm observations (hug98 ). Figure 9 shows the cosmic supernova rates in field galaxies. As in Figure 8, the SN Ia rate in spirals drops at $`z\sim 1.9`$. The averaged SN Ia rate in ellipticals decreases at $`z\sim 2.2`$ as a result of the $`\sim 0.5`$ Gyr delay relative to the decrease in the SFR at $`z\gtrsim 3`$. The total SN Ia rate then decreases gradually from $`z\sim 2`$ to $`z\sim 3`$.
The rate of SNe II in ellipticals follows the SFR without time delay. It is therefore possible to observe SNe II in ellipticals around $`z\sim 1`$. The difference in the SN II and SN Ia rates between cluster and field ellipticals reflects the difference in the galaxy formation histories in the two environments.
### IV.3 Summary
(1) In the cluster environment, the predicted cosmic supernova rates suggest that SNe Ia can be observed in ellipticals even at high redshifts, because the chemical enrichment takes place so early that the metallicity is large enough to produce SNe Ia at $`z\gtrsim 2.5`$. In spirals the SN Ia rate drops at $`z\sim 2`$ because of the low-metallicity inhibition of SNe Ia.
(2) In the field environment, ellipticals are assumed to form over a wide range of redshifts, $`1\lesssim z\lesssim 4`$. The SN Ia rate is expected to be significantly lower beyond $`z\sim 2`$, because the SN Ia rate drops at $`z\sim 2`$ in spirals and decreases gradually from $`z\sim 2`$ in ellipticals.
# Topological Defects in Size-Dispersed Solids
## Abstract
We study the behavior of the topological defects in the inherent structures of a two-dimensional binary Lennard-Jones system as the size dispersity varies. We find that topological defects arising from the particle size dispersity are responsible for destabilizing the solid as follows: (i) for particle density $`\rho \lesssim 0.9`$, the solid melts through intermediate states of decreasing hexatic order arising from the proliferation of unbound dislocations; (ii) for $`\rho >0.9`$, the dislocations form grain boundaries, dividing the system into micro-crystallites and destroying the translational and orientational order.
Topological defects play a crucial role in the melting of a solid, especially in two dimensions (2D). In 2D these defects, present as bound pairs in the low-temperature solid phase, are believed to unbind and destroy the crystalline order as the temperature is raised, causing melting . Like temperature, size dispersity (inhomogeneity in particle size) disfavors crystalline order, and can even convert a solid into a liquid . Here we study the defect morphology in the inherent structures of a 2D Lennard-Jones system with a bimodal distribution of particle sizes.
We simulate $`N=10^4`$ particles interacting via a truncated "shifted-force" Lennard-Jones pair potential in 2D. We choose half of the particles to be smaller than the rest, and define the size dispersity $`\mathrm{\Delta }`$ as the ratio of the difference in their sizes to the mean size. We start by placing the particles randomly on the sites of a triangular lattice, embedded in a rectangular box of edges $`L_x`$ and $`L_y`$ with aspect ratio $`L_x/L_y=\sqrt{3}/2`$ (to accommodate the close-packed hexagonal structure without distortion). We apply periodic boundary conditions and use the velocity Verlet method to integrate Newton's equations of motion. Units are set by choosing the mass of the particles and the Lennard-Jones (LJ) energy and length scales to be unity. The density $`\rho `$ is the ratio of the area occupied by the particles to the box area.
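For concreteness, here is a minimal sketch of a shifted-force LJ pair interaction. The cutoff `rc = 2.5` is our assumption (the paper does not quote one), and in the binary mixture `sigma` would be set per particle pair; the point of the shifted-force form is that both the potential and the force vanish smoothly at the cutoff.

```python
def lj(r, sigma=1.0, eps=1.0):
    """Plain Lennard-Jones potential and force magnitude (-dV/dr)."""
    sr6 = (sigma / r) ** 6
    v = 4.0 * eps * (sr6 ** 2 - sr6)
    f = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r
    return v, f

def shifted_force_lj(r, rc=2.5, sigma=1.0, eps=1.0):
    """Truncated shifted-force LJ: V and F both go to zero at r = rc."""
    if r >= rc:
        return 0.0, 0.0
    v, f = lj(r, sigma, eps)
    vc, fc = lj(rc, sigma, eps)
    return v - vc + (r - rc) * fc, f - fc
```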
We equilibrate a state, defined by $`(\rho ,\mathrm{\Delta })`$, at a constant temperature $`T=1`$ using Berendsen's thermostat; at this temperature the 2D solid can form for sufficiently low dispersity. We then run the simulation at constant energy until the temperature $`T`$, the pressure $`P`$, and the energy $`E`$ stabilize with less than 1% fluctuation (typically for $`2\times 10^5`$ time steps, with a time step of 0.01 in LJ units). We also check that the average particle displacement is at least a few times the average particle size. To obtain higher density states, we increase $`\rho `$ in steps of 0.01 from 0.85 (a liquid state) to 1.05 by gradually compressing the box, keeping the aspect ratio fixed, and equilibrating the system at each of these densities.
We perform the defect analysis on 100 equilibrated configurations for each state point ($`\rho ,\mathrm{\Delta }`$). Clear identification of geometrical defects is difficult in simulations due to the presence of many "virtual defects," which arise from vibrational excitations. To overcome this difficulty, we analyze the inherent structure of each configuration , obtained by removing the vibrational excitations—or, equivalently, by (locally) minimizing the potential energy using the conjugate gradient method.
In order to find the defects, we construct a Voronoi cell around each particle, thereby uniquely defining its nearest neighbors. The ordered close-packed structure of the 2D solid is hexagonal, so each particle $`i`$ at position $`\text{r}_i`$ has $`n_i=6`$ neighbors. A defect arises when a particle has $`n_i\ne 6`$, which generates a "topological charge" $`q_i\equiv n_i-6`$ . Defects which are nearest neighbors are grouped into defect clusters with total charge $`Q\equiv \sum _iq_i`$ and total dipole moment $`\text{P}\equiv \sum _i\text{r}_iq_i`$, where the sum is over all defects $`i`$ in the cluster. Figure 1 shows an example of defects in an otherwise ideal triangular lattice.
We find that the defects fall into three categories: (i) Monopoles. Clusters with $`Q\ne 0`$. The simplest case is a disclination, a size-one cluster (Fig. 1a). (ii) Dipoles. Clusters with $`Q=0`$ but $`\text{P}\ne 0`$. The simplest case is a dislocation, a size-two cluster (Figs. 1a and 1b), composed of a "bound pair" of neighboring defects of opposite charge. (iii) Blobs. Clusters with $`Q=\text{P}=0`$. The most common case is a quadrupole (Fig. 1a) made of a "bound pair" of neighboring dislocations with oppositely oriented dipole moments (see, e.g., ).
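These definitions translate directly into a few lines of analysis code. The sketch below is our illustration (the function name is hypothetical); it assumes the Voronoi construction has already yielded each defect's position and coordination number, and that clusters of neighboring defects have been identified.

```python
import numpy as np

def classify_cluster(positions, coordinations):
    """positions: (k, 2) array r_i of the k defects in one cluster;
    coordinations: length-k array of Voronoi neighbor counts n_i."""
    q = np.asarray(coordinations) - 6                     # charges q_i = n_i - 6
    Q = q.sum()                                           # total charge
    P = (np.asarray(positions) * q[:, None]).sum(axis=0)  # dipole moment
    if Q != 0:
        return "monopole"
    if np.any(np.abs(P) > 1e-9):                          # Q = 0 but P != 0
        return "dipole"
    return "blob"                                         # Q = P = 0
```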
Figure 2 shows typical snapshots of the inherent structure at $`\rho =0.9`$ for three values of $`\mathrm{\Delta }`$. The corresponding phase points ($`\rho ,\mathrm{\Delta }`$) represent the solid, hexatic and liquid phases . In the low-dispersity solid phase \[Figs. 2a and 2b\], defects occur mostly in the form of blobs (quadrupoles). Such defects, carrying neither charge nor dipole moment, cause little distortion in the nearby order and so are energetically inexpensive. We find that the defects in the inherent structure of the solid phase aggregate to form domains separated by nearly defect-free regions, suggesting an effective attraction between defects. This effective attraction may arise from packing constraints, since in dense packing the formation of a local large-amplitude defect is improbable. As $`\mathrm{\Delta }`$ increases, more defects in the form of dipoles are created.
At large defect density, free dislocations appear; Figure 2c shows their presence in the hexatic phase. A dislocation destroys the long-range translational order, as it introduces an extra half row (Fig. 1b) that can only terminate in another dislocation with an equal but opposite dipole moment. Translational order is destroyed over the range of separation of the dislocation pair . Dislocations, however, retain orientational order. We find in configurations such as Fig. 2c that 2D translational order is lost (due to the abundance of dipoles), whereas orientational order shows an algebraic decay—the characteristic features of a hexatic phase . The system breaks up into crystalline patches of finite length $`\xi _t`$ (the translational correlation length) which are shifted but not rotated with respect to one another, so the range of orientational correlation $`\xi _6`$ far exceeds $`\xi _t`$. On further increasing $`\mathrm{\Delta }`$, more defects are created: many monopoles appear, which destroy the orientational order, and the system melts \[Fig. 2d\] (see ).
In the hexatic phase, we find a steep increase in the number of dipoles (Fig. 3a) between $`\mathrm{\Delta }_{\text{SH}}`$ (the solid–hexatic transition value of $`\mathrm{\Delta }`$) and $`\mathrm{\Delta }_{\text{HL}}`$ (the hexatic–liquid transition value of $`\mathrm{\Delta }`$). We also detect a gentler increase in the number of monopoles as the hexatic–liquid transition at $`\mathrm{\Delta }=\mathrm{\Delta }_{\text{HL}}`$ is approached. KTHNY theory predicts that in the liquid phase the orientational correlation function $`C_6(r)`$ decays exponentially, $`C_6(r)\sim e^{-r/\xi _6}`$, with an orientational correlation length $`\xi _6`$ that diverges as the liquid–hexatic transition is approached . Figure 3b reveals a rapid increase of $`\xi _6`$ as the liquid–hexatic phase boundary is approached from the liquid side.
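To illustrate how these quantities are obtained, the sketch below (our construction, with hypothetical helper names) computes the standard bond-orientational order parameter $`\psi _6`$ per particle from its Voronoi neighbors, and extracts $`\xi _6`$ from an exponential fit to a measured $`C_6(r)`$.

```python
import numpy as np

def psi6(pos, neighbors):
    """pos: (N, 2) positions; neighbors[i]: Voronoi neighbor indices of i."""
    out = np.empty(len(pos), dtype=complex)
    for i, nbrs in enumerate(neighbors):
        d = pos[nbrs] - pos[i]
        theta = np.arctan2(d[:, 1], d[:, 0])   # bond angles to neighbors
        out[i] = np.exp(6j * theta).mean()     # local hexatic order
    return out

def xi6_from_c6(r, c6):
    """Fit C_6(r) ~ exp(-r/xi_6): slope of log C_6 vs r is -1/xi_6."""
    slope = np.polyfit(r, np.log(c6), 1)[0]
    return -1.0 / slope
```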
Figures 4b–d show typical snapshots of the inherent structure at $`\rho =1.0`$ for $`\mathrm{\Delta }=0.04`$, 0.1 and 0.12. A first-order solid–liquid transition is found at $`\mathrm{\Delta }_{\text{SL}}\simeq 0.1`$ , the solid–liquid transition value of $`\mathrm{\Delta }`$. Our defect analysis shows that for $`\mathrm{\Delta }<\mathrm{\Delta }_{\text{SL}}`$ there are free dislocations, and near the transition these defects line up in "strings" to form long chains of large-angle grain boundaries (Fig. 4c). As $`\mathrm{\Delta }\to \mathrm{\Delta }_{\text{SL}}`$, these chains percolate, fragmenting the system into micro-crystallites rotated with respect to each other. Thus the grain boundaries simultaneously destroy both the translational and the rotational order. In the theory of grain-boundary-induced melting , the Landau free-energy expansion yields a first-order solid–liquid transition with no hexatic phase. We find that the formation of defects and the proliferation of grain boundaries occur abruptly at $`\mathrm{\Delta }_{\text{SL}}\simeq 0.1`$. Within the resolution of our simulations, we do not see any hexatic phase .
In summary, we have seen that size dispersity induces topological defects which in turn destroy crystalline order, and that the mechanism of dispersity-induced melting displays surprising parallels with the mechanism proposed for temperature-induced melting. Depending on the value of $`\rho `$, dispersity-induced melting can be either a first-order transition or a continuous transition (with an intervening hexatic phase), and the defect morphologies display completely different behavior in the two cases.
# How native state topology affects the folding of Dihydrofolate Reductase and Interleukin-1𝛽
## I Introduction
Explaining how proteins self-assemble into well defined structures is a longstanding challenge. Energy landscape theory and the funnel concept have provided the theoretical framework necessary for improving our understanding of this problem — efficient folding sequences minimize frustration. Frustration may arise from the inability to satisfy all native interactions and from strong non-native contacts which can create conformational traps. The difficulty of minimizing energetic frustration by sequence design, however, is also dependent on the choice of folding motif. Some folding motifs are easier to design than others , suggesting the possibility that evolution not only selected sequences with sufficiently small energetic frustration but also selected more easily designable native structures. To address this difference in foldability, we have introduced the concept of “topological frustration” — even when sequences have been designed with minimal energetic frustration, variations in the degree of nativeness of contacts in the transition state ensemble (TSE) are observed because of asymmetries imposed by the chosen final structure.
Recent theoretical and experimental evidence suggests that proteins, especially small fast-folding (sub-millisecond) proteins, have sequences with a sufficiently reduced level of energetic frustration that the global characteristics of the heterogeneity observed in the TSE are strongly influenced by the native state topology. We have shown that the overall structure of the TSE for Chymotrypsin Inhibitor 2 (CI2) and for the SH3 domain of the src tyrosine-protein kinase can be obtained by using simplified models constructed from sequences that have almost no energetic frustration (Gō–like potentials). These models drastically reduce the energetic frustration and energetic heterogeneity of native contacts, leaving the topology as the primary source of the residual frustration. Topological effects, however, go beyond affecting the structure of the TSE. The overall structure of the intermediate state ensembles populated during the folding of proteins such as Barnase, Ribonuclease H and CheY has also been successfully determined using a similar model . It is interesting to notice that although these models, since they consider totally unfrustrated sequences, may not reproduce the precise energetics of the real proteins, such as the values of the barrier heights and the stability of the intermediates, they are able to determine the general structure of these ensembles. Therefore, the fact that these almost energetically unfrustrated models reproduce most of the major features of the TSEs of these proteins indicates that real protein sequences are sufficiently well designed (i.e., with reduced energetic frustration) that much of the heterogeneity observed in the TSEs and intermediates has a strong topological dependence.
Do these conclusions hold for larger and slower-folding proteins with more complex folding kinetics than two-state folders such as CI2 and SH3? The success obtained with the Barnase, Ribonuclease H and CheY intermediates already provides some encouragement — topology appears to be important in determining on-pathway folding intermediates. In this paper this approach is extended to a pair of larger proteins: Dihydrofolate Reductase (DHFR) and Interleukin-1$`\beta `$ (IL-1$`\beta `$). The synoptic analysis of these two proteins is particularly interesting because they have comparable size (slightly over 150 amino acids) but different native structures, folding mechanisms and functions: DHFR is a two-domain $`\alpha `$/$`\beta `$ enzyme that maintains pools of tetrahydrofolate used in nucleotide metabolism, while IL-1$`\beta `$ is a single-domain all-$`\beta `$ cytokine with no catalytic activity of its own, which elicits a biological response by binding to its receptor.
## II Numerical procedures
The energetically unfrustrated models of DHFR and IL-1$`\beta `$ are constructed using a Gō–like Hamiltonian . A Gō–like potential takes into account only native interactions, and each of these interactions enters the energy balance with the same weight. Residues in the proteins are represented as single beads centered at their C–$`\alpha `$ positions. Adjacent beads are strung together into a polymer chain by means of bond and angle interactions, while the geometry of the native state is encoded in the dihedral angle potential and a non-local bead–bead potential.
A detailed description of this energy function can be found elsewhere . The local (torsion) and non-local terms have been adjusted so that the stabilization energy residing in the tertiary contacts is approximately twice as large as the torsional contribution. This balance among the energy terms is optimal for the folding of our Gō–like protein models . Solvent mediation and side chain effects are already included in these effective energy functions; entropy changes are therefore associated with the configurational entropy of the chain. The native contact map of a protein is derived with the CSU software, based upon the approach developed in ref. . Native contacts between pairs of residues $`(i,j)`$ with $`j<i+4`$ are discarded from the native map, as any three or four subsequent residues already interact through the angle and dihedral terms. A contact between two residues $`(i,j)`$ is considered formed if the distance between the $`C_\alpha `$s is shorter than $`\gamma `$ times their native distance $`\sigma _{ij}`$. It has been shown that the results are not strongly dependent on the choice made for the cut-off factor $`\gamma `$. In this work we use $`\gamma =1.2`$.
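The contact bookkeeping just described reduces to a few lines. The sketch below is ours, with hypothetical helper names; it assumes the CSU contact list and the $`C_\alpha `$ coordinates are available as arrays.

```python
import numpy as np

GAMMA = 1.2  # cut-off factor used in this work

def native_contacts(native_xyz, csu_pairs):
    """Keep CSU contacts (i, j) with j >= i + 4; store native distances."""
    return {(i, j): np.linalg.norm(native_xyz[i] - native_xyz[j])
            for (i, j) in csu_pairs if j >= i + 4}

def formed(xyz, contacts):
    """Native contacts formed in conformation xyz: r_ij < GAMMA * sigma_ij."""
    return {(i, j) for (i, j), sig in contacts.items()
            if np.linalg.norm(xyz[i] - xyz[j]) < GAMMA * sig}
```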
For both protein models (DHFR and IL-1$`\beta `$), folding and unfolding simulations have been performed at several temperatures around the folding temperature, and the results from the different simulations have been combined using the WHAM algorithm . Several very different initial unfolded structures for the folding simulations were selected from high-temperature unfolding simulations. In order to have appropriate statistics, we made sure that for every transition state ensemble or intermediate we sampled about 500 uncorrelated conformations (thermally weighted). For smaller proteins such as SH3 and CI2 (which have about 1/3 of the tertiary contacts of DHFR and IL-1$`\beta `$), we have determined that about 200 uncorrelated conformations in the transition state ensemble are necessary to keep the error on the estimates of contact probabilities (or $`\mathrm{\Phi }`$ values) at $`\pm 0.05`$ .
## III Comparing simulations and experiments for Dihydrofolate Reductase and Interleukin–1$`\beta `$
Dihydrofolate Reductase and Interleukin-1$`\beta `$ not only have dissimilar native folds (the 162 residues of DHFR arrange themselves into 8 $`\beta `$-strands and 4 $`\alpha `$-helices, grouped together in the folded state as detailed in Fig. 2 (d), while IL-1$`\beta `$ is a 153-residue, all-$`\beta `$ protein composed of 12 $`\beta `$ strands packed together as shown in Fig. 4 (c)-(d)), but the nature of the intermediate states populated during the folding event is also remarkably different. To explore the connection between the protein topology and the nature of the intermediates, we used an energetically minimally frustrated $`C_\alpha `$ model for these two proteins, with a potential energy function defined by considering only the native local and non-local interactions as attractive (see Numerical Procedures for details). This is a very simplified potential that retains only information about the native fold — energetic frustration is almost fully removed. Notice that although the real amino-acid sequence is not included in this model, the chosen potential is like a "perfect" sequence for the target structure, without the energetic frustration of real sequences (since this potential includes attractive native tertiary contacts, it implicitly incorporates hydrophobic interactions). Therefore, this model provides us with the perfect computational tool to investigate how much of the structural heterogeneity observed in the folding mechanism can be inferred from knowledge of the native structure alone, without contributions from energetic frustration.
Early work suggests that proteins (at least small fast-folding proteins) have sufficiently reduced energetic frustration that they possess a funnel-like energy landscape, with a solvent-averaged potential strongly correlated with the degree of nativeness (but with some roughness due to the residual frustration). In this situation, the folding dynamics can be described as the diffusion of an ensemble of protein configurations over a low-dimensional free energy surface, defined in terms of the reaction coordinate $`Q`$, where $`Q`$ represents the fraction of native contacts formed in a conformation ($`Q=0`$ in the fully unfolded state and $`Q=1`$ in the folded state) . The ensemble of intermediates observed in this free energy profile is expected to mimic the real kinetic intermediates.
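Given the contact helpers sketched above, the reaction coordinate itself is a one-liner (again an illustration, not the authors' code):

```python
def reaction_coordinate_q(xyz, contacts):
    """Fraction of the native contact list formed in conformation xyz."""
    return len(formed(xyz, contacts)) / len(contacts)
```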
Fig. 1 shows a comparison between the folding mechanisms obtained from our simulations for the minimally frustrated analogues of DHFR (panels (a) and (c)) and IL-1$`\beta `$ (panels (b) and (d)). The different nature of the folding intermediates of the two proteins and of their native ensembles emerging from these data is in substantial agreement with the experimental observations, with the adenine-binding domain of DHFR being folded in the main intermediate in the simulation, and the central $`\beta `$ strands of IL-1$`\beta `$ being formed early in this single-domain protein. The absolute values of the free energy barriers resulting from simulations may not necessarily agree with the experimental ones, because we are dealing with unfrustrated designed sequences. Thus, quantitative predictions that depend on barrier heights and the stability of the intermediate ensembles (e.g., folding times, rate-determining barriers and lifetimes of intermediates) are not possible for this kind of model. However, we show that topology is sufficient to correctly identify the positions of the transition state and intermediate states. A more detailed description follows.
### A Dihydrofolate Reductase
The folding process emerging from the dynamics of the Gō–like analogue of DHFR (as summarized in Fig. 1 (a) and (c)) is interestingly peculiar and consistent with the experimentally proposed folding mechanism (see Fig. 3 (d)). Refolding initiates with a barrierless collapse to a quasi-stable species ($`Q=0.2`$), which corresponds to the formation of a burst-phase intermediate, $`I_{BP}`$, with little stability but some protection from $`H`$-exchange across the central $`\beta `$ sheet . This initial collapse is followed by production of the main intermediate $`I_{HF}`$ (Highly Fluorescent), which is described in the mechanism of Fig. 3 (d) as the collection of intermediates $`I_1`$–$`I_4`$. $`I_1`$–$`I_4`$ are structurally similar to each other, but are differentiated experimentally by the rate at which they proceed towards the native protein. Finally, after overcoming a second barrier, the protein visits an ensemble of native structures with different energies. The experimentally determined folding mechanism of DHFR shows transient kinetic control in the formation of native conformers ($`N_4`$ dominant), which is later overridden by thermodynamic considerations ($`N_2`$ dominant) at final equilibrium . This latter finding is consistent with the nature of the folding ensemble determined by the simulations: as shown in Fig. 3 (b), a set of structures close to the native state ($`Q`$ around 0.7-0.8) is transiently populated beside the fully folded state ($`Q=1`$). Since the main intermediate $`I_{HF}`$ has recently been characterized by experimental studies , we take our analysis a step further by comparing the average structure of the $`I_{HF}`$ ensemble from our simulations to the one experimentally determined. For this purpose we compute the formation probability $`Q_{ij}(Q)`$ of each native DHFR contact, involving residues $`(i,j)`$, at different stages of the folding process, by averaging the number of times the contact occurs over the set of structures present in a selected range of $`Q`$. As detailed in Fig. 2, the central result of this analysis is that the main intermediate $`I_{HF}`$ is characterized by largely different degrees of formation in different parts of the protein: domain 1 (i.e., interactions among strands 2-5 and helices 2-3) appears to be formed with probability greater than 0.7, while domain 2 (i.e., interactions among strands 6-8, helix 1 and helix 4) is almost nonexistent.
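A sketch of this averaging, building on the helpers above (names hypothetical): conformations are binned by $`Q`$, and $`Q_{ij}(Q)`$ is the fraction of conformations in the bin in which contact $`(i,j)`$ is formed.

```python
from collections import defaultdict

def contact_probabilities(snapshots, contacts, q_lo, q_hi):
    """snapshots: conformations (xyz arrays), e.g. the ~500 thermally
    weighted structures sampled for an ensemble."""
    counts = defaultdict(int)
    n = 0
    for xyz in snapshots:
        if q_lo <= reaction_coordinate_q(xyz, contacts) < q_hi:
            n += 1
            for pair in formed(xyz, contacts):
                counts[pair] += 1
    return {pair: counts[pair] / n for pair in contacts} if n else {}
```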
The formation of domains 1 and 2 during the folding event is seen more closely in Fig. 3 (panels (a) and (c)), where the RMS distance of the parts of the protein constituting each domain from the corresponding native structure is shown for a typical folding simulation. Indeed, the two domains fold in noticeably different ways: in the stable intermediate $`I_{HF}`$, domain 1 is closer than 5 Å (RMS) to its native structure, while domain 2 is highly variable (RMS distance greater than 15 Å from its native structure). Still, in agreement with hydrogen exchange studies , some protection across domains is expected from our simulations, and complete protection from exchange is expected only after the formation of the fully folded protein. A combination of fluorescence, CD, mutagenic and new drug binding studies on DHFR indeed demonstrates that domain 1 is largely folded, with specific tertiary contacts formed, and that this collection of intermediates is obligatory in the folding route .
### B Interleukin–1$`\beta `$
Supported by recent experiments, Heidary et al. have proposed a kinetic mechanism for the folding of IL-1$`\beta `$ that requires the presence of a well-defined on-pathway intermediate species. The structural details of this species were determined from NMR and hydrogen exchange techniques . We have compared these experimental data with our simulations for the IL-1$`\beta `$ Gō–like analogue (Fig. 1 (b) and (d)). The folding picture emerging from these numerical studies differs substantially from that observed for DHFR (see panels (a) and (c) of Fig. 1). An intermediate state is populated for $`Q`$ around 0.55, followed by a rate-limiting barrier (around $`Q=0.7`$), after which the system proceeds to the well-defined native state.
Is the theoretical intermediate similar to the one observed experimentally? Using the same procedure employed for DHFR, a comparison between the average structure of the IL-1$`\beta `$ intermediate ensemble and the one emerging from experimental studies is shown in Fig. 4. These results indicate that the calculated intermediate has residues 40-105 (strands 4-8) folded into a native-like topology, but with the interactions between strands 5 and 8 not fully completed. Experimental results confirm that strands 6-8 are well folded in the intermediate state and that strands 4-5 are partially formed. However, the results of experiments and theory differ in the region between residues 110-125, where hydrogen exchange shows early protection while theory predicts late contact formation. This region contains 4 aromatic groups, PHE 112, PHE 117, TYR 120 and TRP 121, which may be sequestered from solvent due to clustering of these residues and removal from unfavorable solvent interactions. This effect would not be fully accounted for in our model, where all native interactions are considered energetically equivalent and large stabilizing interactions are not differentiated. Thus, energetics may favor early formation of the structure corresponding to residues 105-125, while topological considerations favor the formation of strands 4-8.
## IV Conclusions
Theoretical and experimental studies of protein folding at times appear to be at odds. Theoretical analyses of simple model systems often predict a large number of routes to the native protein, whereas experimental work on larger systems indicates that folding proceeds through a limited number of intermediate species. Although in the eyes of some these two descriptions are inconsistent with each other, this is clearly not true: the large number of routes may or may not lead to the production of on-route kinetic intermediate ensembles, depending on the outcome of the competition between configurational entropy and the effective folding energy. In this study, we show that productive intermediate species are produced by simplified protein models with funnel-like landscapes, based on purely topological considerations, and the results are in good agreement with the available experimental data. The fact that these simplified, minimally frustrated models for DHFR and IL-1$`\beta `$ can predict the overall features of the experimentally measured folding intermediates and transition states of these two proteins, which have completely different folding mechanisms and functions, supports our general picture that real proteins have a substantially reduced level of energetic frustration and that a large component of the heterogeneity observed during the folding event is topologically determined. Such observations lead us to propose that success in designing sequences that fold to a particular shape is constrained by topological effects. What is more challenging are the consequences of this conclusion — are these topological constraints something that only has to be tolerated during the folding event, or are they actually used by biology to help function? Here we speculate only in the context of these two examples, but this question really should be addressed more generally in the future.
## V acknowledgments
This work has been supported by the NSF (Grant # 96-03839), the La Jolla Interfaces in Science program (sponsored by the Burroughs Wellcome Fund), and the NIH (Grant # GM54038). We warmly thank Angel García for many fruitful discussions. One of us (C.C.) expresses her gratitude to Giovanni Fossati for his suggestions and helpful discussions.
FIGURE CAPTIONS
Fig. 1 (a) RMS distances between the DHFR native structure and several computationally determined structures at different values of the reaction coordinate $`Q`$ for an unfolding simulation at a temperature slightly above the folding temperature ($`T=1.01T_f`$), and (c) free energy $`F(Q)`$ of the DHFR Gō–like model as a function of $`Q`$ around the folding temperature. The folding temperature $`T_f`$ is estimated as the temperature where a sharp peak appears in the specific heat plotted as a function of the temperature (data not shown). Both temperatures and free energies are presented in units of $`T_f`$. Notice that the thermal fluctuations around the lowest energy state (i.e. $`Q=1`$, by construction of the model) account for motions around the free energy minimum. Therefore, the folded state ensemble has a minimum close to $`Q=1`$, at $`Q\simeq 0.9`$, but not exactly at $`Q=1`$; at $`Q=1`$ the structure would be frozen in the native configuration. A similar remark applies to the IL-1$`\beta `$ free energy profile shown in panel (d). The energy of a configuration, as quantified by the color scale at the top of the figure, is here defined as the bare value of the effective potential function in that configuration (i.e. no configurational entropy is accounted for in the energy). Differences between energy and free energy (at finite temperature) are due to the configurational entropy contribution to the free energy. In panel (c) a main intermediate ensemble $`I_{HF}`$ emerges in the folding process as a local minimum at $`Q`$ around 0.4 after overcoming the first barrier. Indeed, this local minimum corresponds to a populated region in panel (a) (after the scarcely populated barrier around $`Q`$ = 0.3) with energy significantly lower than in the unfolded state. This main intermediate then evolves toward a set of structures close to the native state (located between $`Q=0.7`$ and $`Q=0.8`$) that eventually interconvert into the fully folded state. A transient set of structures, close to the native state, is also apparent from Fig. 3 (b). The folding scheme resulting from these simulations is consistent with the sketch of Fig. 3 (d), proposed from the experimental data .
(b) RMS distances between the native structure and several computationally determined structures at different values of the reaction coordinate $`Q`$ for a folding simulation of the Gō–like model of IL-1$`\beta `$. The simulation is performed at a temperature close to the folding temperature ($`T=0.99T_f`$). (d) Free energy $`F(Q)`$ as a function of $`Q`$ around the folding temperature. The folding temperature is estimated from the sharp peak in the specific heat curve as a function of the temperature (data not shown). An intermediate ensemble is populated during the folding event; it is identified by the broad local minimum in the free energy profile (around $`Q`$ = 0.55) and the corresponding populated region in panel (b) (with energy significantly lower than in the unfolded state). These results are consistent with the kinetic mechanism for the folding of IL-1$`\beta `$ proposed by Heidary et al. . A set of structures close to the native conformation is transiently populated for $`Q`$ between 0.75 and 0.8 (see panel (b) and the corresponding "flat" region of the free energy in panel (d)). This fact could be interpreted as the presence of an additional intermediate state close to the native state. Experimentally, the possibility that another partially unfolded form could be populated during the folding process is currently under investigation. Several constant-temperature simulations (both folding and unfolding) of the two protein models were performed and combined to generate the free energy plots.
Fig. 2 The probability $`Q_{ij}(Q)`$ of the native DHFR contacts to be formed, as resulting from the simulations at different stages of the folding process: (a) at an early stage ($`Q=0.1\pm 0.05`$), (b) at the main intermediate, located in the interval $`Q=0.4\pm 0.05`$ (see panels (a) and (c) of Fig. 1), and (c) at a late stage of the folding process ($`Q=0.7\pm 0.05`$). In a topologically and energetically perfectly smooth funnel-like energy landscape, at any value of $`Q`$ during folding, any contact $`(i,j)`$ should have a probability $`Q_{ij}(Q)`$ of being formed equal to $`Q`$ . By computing $`Q_{ij}(Q)`$ for each contact over different windows of the reaction coordinate $`Q`$, we can quantify the deviations from this smooth funnel behavior and locate the early and late contacts along the folding process. It is worth noticing that any deviation from the "perfectly smooth" behavior is mainly due to topological constraints, since energetic frustration has been mostly removed from the system. Different colors in the contact maps indicate different probability values from 0 to 1, as quantified by the color scale at the top. The preference for forming local rather than non-local structure in the almost unfolded state (a) is due to the smaller conformational entropy loss in forming local contacts than in pinching off longer loops . The most interesting result is that domain 1, identified by the interactions among strands 2-5 and helices 2-3, is substantially formed at the intermediate $`I_{HF}`$ (probabilities for individual contacts greater than 0.7), while domain 2 (i.e., interactions among strand 1, strands 6-8, helix 1 and helix 4) is largely unformed (contact probabilities between 0 and 0.4). Helix 1 and helix 4 are largely formed, but their interactions with the remainder of the protein are loose (probabilities less than 0.4). Overall, this description of the structure of the main intermediate $`I_{HF}`$ – domain 1 almost formed and domain 2 largely unformed – is in agreement with the experimentally observed structure of $`I_{HF}`$. Moreover, the latest events in the folding process (panel (c)) appear to be the formation of interactions between strands 7-8 and the remainder of the protein. This again has been experimentally determined. Panel (d) illustrates the regions of the native structure that simulations and experiments agree to indicate as formed at the intermediate $`I_{HF}`$.
Fig. 3 The RMS distances between the regions of the DHFR structure identified as (a) domain 1 and (c) domain 2 and their corresponding native configurations are plotted versus the reaction coordinate $`Q`$. Domain 1 collapses to a structure close to its native conformation (RMS distance less than 5 Å) in the early stages of folding, leading to the formation of the main intermediate $`I_{HF}`$ (located at $`Q`$ around 0.4), whereas domain 2 remains largely unfolded (RMS distance larger than 15 Å). In the interval of $`Q`$ from 0.6 to 0.8 there are several possible structures. Consistent with the multi-channel folding model proposed from experimental evidence, panels (a) and (c) suggest several possible routes from $`I_{HF}`$ to the folded state. In panel (b) the fraction of native contacts formed, $`Q`$, is plotted versus the simulation time for a region of our simulations where the transition from the folded to the unfolded state is observed (at a simulation temperature slightly higher than the folding temperature, $`T=1.01T_f`$). A set of structures close to the native state ($`Q`$ around 0.7) is transiently populated. Different colors represent different energies of a configuration (quantified by the top energy scale), as in panels (a) and (c). (d) The kinetic mechanism for the folding of DHFR, proposed on the basis of experimental results. Experimentally, a first step of folding is detected as a very rapid collapse of the unfolded form to the burst–phase intermediate ($`I_{BP}`$), which has a significant content of secondary structure. The folded state is reached through four different channels, all involving the formation of the main intermediate $`I_{HF}`$. $`I_{HF}`$ is represented as a set of structures $`I_1`$–$`I_4`$, structurally similar to each other but proceeding towards the native state at different rates. These intermediate structures evolve to the native forms $`N_1`$–$`N_4`$ via slow-folding reactions.
Fig. 4 Probability of contact formation for the native contacts, as obtained during a typical folding simulation of IL-1$`\beta `$ (data shown are obtained at $`T=0.99T_f`$) at different stages of the folding: (a) in a range of $`Q`$ between 0.3 and 0.4, which corresponds to the early stage of folding leading to the formation of the intermediate ensemble; and (b) at the intermediate ($`Q`$ between 0.45 and 0.55). At the intermediate, the interactions involving strands 4 to 8 are almost completely formed; interactions among strands 1-3 are mostly formed, but interactions between them and the rest of the protein are loose. Contacts involving strands 9-12 appear weakened, and the interactions between the N terminus (residues 1-40) and the C terminus (residues 110-153) are completely unformed. Experimental results confirm that strands 6-8 are well folded in the intermediate state and that strands 4-5 are partially formed. Panels (c) and (d) show the regions of the IL-1$`\beta `$ native structure formed at the intermediate, as resulting from simulations (c) and experiments (d). The small difference between simulations and experiments (contact formation in the region between residues 110-125) may be due to energetic considerations that are not taken into account in the model, as discussed in the text. In agreement with experimental results, the formation of contacts between the N and C termini is not accomplished until the late stage of folding – these contacts are still unformed for $`Q`$ between 0.6 and 0.7 (data not shown).
# Like-Charge Attraction through Hydrodynamic Interaction
## Abstract
We demonstrate that the attractive interaction measured between like-charged colloidal spheres near a wall can be accounted for by a nonequilibrium hydrodynamic effect. We present both analytical results and Brownian dynamics simulations which quantitatively capture the one-wall experiments of Larsen and Grier (Nature 385, 230, 1997).
Colloidal spheres provide a simple model system for understanding the interactions of charged objects in a salt solution. Hence, it came as a great surprise when it was observed that two like-charged spheres can attract each other when confined by walls. Since both the charge densities and sizes of the spheres in question are in the range of large proteins, it would be expected that a change in sign of this interaction would have important implications for biological systems. Theorems by Sader and Chan, and by Neu, demonstrate that under very general conditions the Poisson-Boltzmann equation for the potential between like-charged spheres in a salt solution will not admit attractive interactions. Explanations for the observed attraction have thus focused exclusively on deviations from the classical Derjaguin, Landau, Verwey and Overbeek (DLVO) theory.
Herein, we propose that the attractive interaction of two like-charged colloidal spheres measured in the presence of a single wall can arise from a non-equilibrium hydrodynamic effect. The idea is that the relative motion of two spheres depends both on the forces acting between them and on their hydrodynamic coupling. In a bulk solution, far from solid boundaries, an external force acting on two identical spheres cannot change their relative positions. This is a consequence of the kinematic reversibility of Stokes flow and of the symmetries inherent in the problem.
However, these symmetries are broken in confined geometries, where the hydrodynamic effect of boundaries is important. In this situation, relative motion between the particles could stem from either an interparticle force, or from a hydrodynamic coupling caused by forces acting on each of the particles individually. In a typical experiment with charged colloidal spheres, the charge density on the walls of the cell is of order the charge density on the spheres. We demonstrate that the hydrodynamic coupling between two spheres caused by their repulsion from a wall leads to motion which, if interpreted as an equilibrium property, is consistent with an effective potential between the spheres with an attractive well. Our calculations quantitatively reproduce the experimental measurements of these potentials.
The response of a particle to an external force is significantly changed near a wall because the flow field must vanish identically on the wall. For point forces, Lorentz determined this wall-corrected flow field , which Blake later expressed using the method of image forces , analogous to image charges used in electrostatics. Images of the appropriate strength on the opposite side of the wall exactly cancel out the fluid flow on the wall. When two particles are pushed away from a wall, the flow field from one particle’s image tends to pull the other particle towards it, and vice versa (Fig. 1). This decreases the distance between the particles.
The attractive interaction between two charged spheres in the presence of a wall can now be understood with a simple picture. When the spheres are sufficiently close to the wall, they are electrostatically repelled from it. The net force on each sphere thus includes both their mutual electrostatic repulsion and their repulsion from the wall. How the spheres respond depends on their hydrodynamic mobility: when the spheres are close together (Fig. 2a), their mutual repulsion overwhelms any hydrodynamic coupling, and the spheres will separate as expected for like-charged bodies. However, when they are beyond some critical separation (Fig. 2b), the hydrodynamic coupling due to the wall force overcomes the electrostatic repulsion, so that the particles move together as they move away from the wall.
Although this decrease in mutual separation is a non-equilibrium kinetic effect, it could be interpreted as the result of an attractive equilibrium pair-potential. This is most clearly understood without Brownian motion. Two particles initially located a distance $`r`$ apart move because of both interparticle forces and the repulsive force from the wall. The response of these two particles to forces $`𝐅_1`$ and $`𝐅_2`$ is expressed by the hydrodynamic mobility tensor $`𝐛(𝐗_1,𝐗_2)`$, defined by
$$𝐯=𝐛(𝐗_1,𝐗_2)𝐅,$$
(1)
where $`𝐯=(\dot{𝐗}_1,\dot{𝐗}_2)`$ are the particle velocities and $`𝐅=(𝐅_1,𝐅_2)`$ are the forces on the particles. Thus, the distance between the spheres (measured in the plane parallel to the walls) will change by an amount $`\mathrm{\Delta }r=\mathrm{\Delta }x_2\mathrm{\Delta }x_1`$ in a small time $`\mathrm{\Delta }t`$, where we denote the $`x`$-direction to be along the line connecting the spheres, and the $`z`$-direction to be perpendicular to the wall. Utilizing symmetries of the mobility tensor, it is straightforward to show that $`\mathrm{\Delta }r`$ will be
$$\mathrm{\Delta }r=\left\{2(b_{X_2X_2}-b_{X_2X_1})|F_p|+2b_{X_2Z_1}F_w\right\}\mathrm{\Delta }t,$$
(2)
where $`F_p`$ and $`F_w`$ are respectively the repulsive electrostatic sphere-sphere and sphere-wall forces. The tensor component $`b_{X_2Z_1}`$ refers to the $`x`$-motion of particle 2 due to a force in the $`z`$-direction on particle 1, and so on.
If this system were assumed to be in equilibrium, then the relative motion would be interpreted as the result of an effective potential, with an effective force $`F_{eff}=-\partial _rU_{eff}`$ obeying
$$\mathrm{\Delta }r=\left\{2(b_{X_2X_2}-b_{X_2X_1})|F_{eff}|\right\}\mathrm{\Delta }t,$$
(3)
so that one would determine this effective potential to be given by
$$U_{eff}(r,h)=U_p(r)-F_w\int _{\infty }^{r}\frac{b_{X_2Z_1}(r^{\prime },h)}{b_{X_2X_2}(h)-b_{X_2X_1}(r^{\prime },h)}\,dr^{\prime },$$
(4)
where $`U_p(r)`$ is the interparticle thermodynamic pair potential, $`r`$ is the separation between particles (the primed variable is the integration dummy), and $`h`$ is their distance from the wall.
In order to compare our results with experiments, we determine the hydrodynamic mobilities in the point-force limit, using Blake’s solution . We use the DLVO potential for the electrostatic interaction of two spheres in the form presented by Larsen and Grier ,
$$\frac{U_{DLVO}}{k_BT}=Z^2\lambda _B\left(\frac{e^{\kappa a}}{1+\kappa a}\right)^2\frac{e^{-\kappa r}}{r},$$
(5)
where $`a`$ and $`Z`$ are respectively the radius and effective charge of each sphere, the Bjerrum length $`\lambda _B=e^2/\epsilon k_BT`$, and the Debye-Hückel screening length $`\kappa ^{-1}=(4\pi n\lambda _B)^{-1/2},`$ with a concentration $`n`$ of simple ions in the solution. This formula is obtained using effective point charges in a linear superposition approximation. To determine the repulsive electrostatic force between each sphere and the wall, we used the same effective point-charge approach to obtain
$$\frac{U_{wall}}{k_BT}=Z\sigma _g\lambda _B\frac{e^{\kappa a}}{\kappa (1+\kappa a)}e^{-\kappa h},$$
(6)
where $`\sigma _g`$ is the effective charge density on the glass wall. We note that while the functional form of this equation is correct, it is not clear that the effective charges in equations (5) and (6) will be exactly the same, as geometric factors buried in each effective charge will vary from situation to situation. A more reliable description of sphere-sphere and wall-sphere interactions will be necessary for quantitative comparisons with independently measured charge densities.
Using all of Larsen and Grier’s experimental parameters as inputs to the theory, we numerically integrate (4) to obtain this apparent effective potential. The only necessary parameter not given is the surface charge density of the glass walls $`\sigma _g`$, which we take to be $`\sigma _g=5\sigma _p`$, with $`\sigma _p`$ the charge density of the spheres, consistent with Kepler and Fraden’s measurements. Fig. 3 shows this effective potential for various sphere-wall separations. The hydrodynamic coupling of collective motion away from the wall with relative motion in the plane of the wall leads to an attractive component. It is important to emphasize that this hydrodynamic coupling is a kinematic effect and has no thermodynamic significance: all forces acting on the spheres are purely repulsive.
We note as well that a simple approximate expression exists for the hydrodynamic term in the effective potential (4), since $`b_{X_2X_2}(h)/b_{X_2X_1}(r,h)\sim O(h/a)\gg 1.`$ Approximating the denominator in the integrand as simply $`b_{X_2X_2}`$, we explicitly evaluate the integral to give
$$U_{eff}(r,h)=U_p(r)-\frac{F_w}{1-\frac{9a}{16h}}\frac{3h^3a}{(4h^2+r^2)^{3/2}}.$$
(7)
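To make the formula concrete, Eqs. (5)-(7) can be combined into a few lines of Python; the parameter values below are illustrative placeholders rather than the fitted experimental numbers:

```python
import numpy as np

# illustrative parameters (lengths in microns, energies in units of k_B T)
a = 0.33              # sphere radius
Z = 2.0e4             # effective sphere charge (placeholder)
kappa = 1.0 / 0.275   # inverse Debye-Hueckel screening length
lam_B = 7.14e-4       # Bjerrum length in water at room temperature
sigma_g = 5 * 2.0e3   # effective wall charge density, taken as 5*sigma_p

def u_dlvo(r):
    """Screened-Coulomb pair potential of Eq. (5), in units of k_B T."""
    return Z**2 * lam_B * (np.exp(kappa * a) / (1 + kappa * a))**2 \
        * np.exp(-kappa * r) / r

def f_wall(h):
    """Sphere-wall repulsion magnitude, -dU_wall/dh from Eq. (6)."""
    return Z * sigma_g * lam_B * np.exp(kappa * a) / (1 + kappa * a) \
        * np.exp(-kappa * h)

def u_eff(r, h):
    """Apparent pair potential, closed-form estimate of Eq. (7)."""
    hydro = f_wall(h) / (1 - 9 * a / (16 * h)) \
        * 3 * h**3 * a / (4 * h**2 + r**2)**1.5
    return u_dlvo(r) - hydro

r = np.linspace(2 * a + 0.05, 5.0, 400)
print(u_eff(r, h=2.5).min())    # depth of the apparent attractive well
```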
As a complement to this analytic approach, we simulate the dynamics of this system, using (5) and (6) for the sphere-sphere and wall-sphere forces, respectively. We account for Brownian motion of the particles in the standard Stokes-Einstein fashion, whereby the diffusion tensor is proportional to the mobility tensor, $`𝐃=k_BT𝐛`$ . Using all experimental parameters and $`\sigma _g=5\sigma _p`$ as explained above, we performed a computer version of Larsen and Grier’s experiment, and analyzed the resulting data using their methods. Our results suggest that this approach includes all of the essential ingredients necessary for quantitatively understanding their observations.
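A generic form of such a Brownian-dynamics update can be sketched as follows (an Ermak-McCammon-style step, not the authors’ code; for wall-dependent mobilities the spurious-drift term proportional to $`k_BT`$ times the divergence of the mobility must be retained):

```python
import numpy as np

def brownian_step(x, force, mobility, dt, kT=1.0, eps=1e-6, rng=np.random):
    """One Brownian-dynamics step with a configuration-dependent mobility.

    x        : (n,) stacked particle coordinates
    force    : callable, F(x) -> (n,)
    mobility : callable, b(x) -> (n, n) symmetric positive-definite tensor
    """
    b = mobility(x)
    drift = b @ force(x)
    # spurious-drift term kT * div(b), needed when b depends on position
    # (as it does near a wall); estimated here by finite differences
    div_b = np.zeros_like(x)
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        div_b += (mobility(x + dx)[:, j] - b[:, j]) / eps
    # random displacement with covariance 2 kT b dt, since D = kT b
    noise = np.linalg.cholesky(2.0 * kT * dt * b) @ rng.standard_normal(len(x))
    return x + (drift + kT * div_b) * dt + noise
```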
In Fig. 4, we present simulations for the two cases presented by Larsen and Grier: the first with the spheres $`2.5`$ microns from the wall, so that they interact significantly with the charge double layer of the wall, and the second starting $`9.5`$ microns from the wall, well outside of the wall’s charge double layer.
Our theoretical picture agrees quantitatively with the measured data. Moreover, there are many consequences of the theory that can be tested experimentally: (1) effective kinetic potentials can be predicted for different sets of conditions and quantitatively compared with experiments; (2) the hydrodynamic mechanism requires a net drift of the particles away from the wall, which could be independently measured; (3) finally, the theory provides a simple explanation for the observation that the attraction disappears when the salt concentration is increased. While this at first seems counterintuitive (the particles appear mutually attractive only when they are mutually repulsive), it follows naturally here: added salt screens the sphere-wall repulsion, and without that repulsion the wall-driven hydrodynamic coupling, and hence the apparent attraction, vanishes.
Several pieces of experimental evidence have been collected which seemed to suggest the existence of an attractive minimum in the thermodynamic pair potential of like-charged colloidal particles in confined geometries. Besides the one wall experiment under discussion, attractive pair potentials have been observed for two spheres trapped between two walls, and for a suspension of spheres trapped between two walls. In addition, it has been shown that metastable colloidal crystals take orders of magnitude longer to melt than would be expected without a thermodynamic attraction. Similarly, voids in colloidal crystals take much longer to close than expected. It is not clear how the theory presented here will bear upon these experiments.
The theory presented in this paper offers a non-equilibrium hydrodynamic explanation for the attractive potential in the single-wall experiments without invoking a novel thermodynamic attraction. We have found quantitative agreement with experimental results when the effective wall charge density is chosen to be $`\sigma _g=5\sigma _p`$, which is in the ballpark of measured estimates. Without a quantitative measurement of this parameter, this work does not strictly rule out the possibility that a novel attraction exists. This situation can be definitively resolved by more quantitative comparisons with experiments.
Acknowledgments: We are indebted to D. Grier and E. Dufresne for introducing us to their experiments, and for a stimulating collaboration. Useful discussions with J. Crocker, H. Stone, and D. Weitz are gratefully acknowledged. This research was supported by the Mathematical Sciences Division of the National Science Foundation, the A.P. Sloan Foundation, and the NDSEG Fellowship Program (TS).
# Physics of Grain Alignment
## 1. Introduction
Magnetic fields are extremely important for star formation, galactic feedback processes, etc., and polarized radiation arising from absorption and emission by aligned grains provides an important means for studying magnetic field topology. However, the interpretation of polarimetry data requires a clear understanding of the processes of grain alignment, and the naive rule of thumb that dust grains are aligned everywhere, with longer axes perpendicular to the magnetic field, may be misleading (see Goodman et al. 1995, Rao et al. 1998).
Physics of grain alignment is deep and exciting. It is enough to say that its study resulted in the discovery of a few new solid state effects. However, let us start by recalling a few simple facts. Grain alignment in the interstellar medium always happens with respect to the magnetic field. It is the fast Larmor precession of grains that makes the magnetic field the reference axis. Note that grains may align with their longer axes perpendicular or parallel to the magnetic field direction. Similarly, magnetic fields may change their configuration and orientation in space (e.g. due to Alfven waves), but if the time for such a change is much longer than the Larmor period, the alignment of grains with respect to the field lines persists as a consequence of the preservation of the adiabatic invariant.
The alignment of the grain axes is described by the Rayleigh reduction factor:
$$R\equiv \langle G(\mathrm{cos}^2\theta )\,G(\mathrm{cos}^2\beta )\rangle $$
(1)
where angular brackets denote ensemble averaging, $`G(x)\equiv 3/2(x-1/3)`$, $`\theta `$ is the angle between the axis of the largest moment of inertia (henceforth the axis of maximal inertia) and the magnetic field $`𝐁`$, while $`\beta `$ is the angle between the angular momentum $`𝐉`$ and $`𝐁`$. One may see (e.g. Hildebrand 1988) that $`R`$ is directly related to the degree of polarization. To characterize the alignment of $`𝐉`$ in grain axes and with respect to the magnetic field, the measures $`Q_X\equiv \langle G(\mathrm{cos}^2\theta )\rangle `$ and $`Q_J\equiv \langle G(\mathrm{cos}^2\beta )\rangle `$ are used. Unfortunately, these statistics are not independent and therefore $`R`$ is not equal to $`Q_JQ_X`$ (see Roberge & Lazarian 1999). This considerably complicates the treatment of grain alignment.
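The fact that $`R`$ does not factorize into $`Q_JQ_X`$ is easy to illustrate numerically; a toy sketch with an artificially correlated joint distribution of the two angles:

```python
import numpy as np

def G(x):
    return 1.5 * (x - 1.0 / 3.0)

rng = np.random.default_rng(0)
n = 200_000
# toy ensemble in which the two angles are artificially correlated
cos_beta = 1.0 - 0.3 * rng.random(n)**2
cos_theta = np.clip(cos_beta - 0.2 * rng.random(n), -1.0, 1.0)

R = np.mean(G(cos_theta**2) * G(cos_beta**2))
QX = np.mean(G(cos_theta**2))
QJ = np.mean(G(cos_beta**2))
print(R, QJ * QX)   # the two differ whenever theta and beta are correlated
```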
This review attempts to cover the recent advances in our understanding of grain alignment and places them in the context of the earlier works done by giants of the caliber of E. Purcell and L. Spitzer. Several times the problem of grain alignment seemed to be solved and theorists were satisfied; however, the accumulation of new observational facts and deeper insights into grain physics caused changes of paradigm. Thus in what follows we describe three periods of grain alignment theory. The interested reader can find a more detailed treatment of various aspects of grain alignment in earlier reviews (e.g. Hildebrand 1988, Roberge 1996, Lazarian, Goodman & Myers 1997).
## 2. Evolution of Grain Alignment Ideas
Foundations.
The first stage of alignment theory development started directly after the discovery of starlight polarization by Hiltner (1949) and Hall (1949). Nearly simultaneously, Davis & Greenstein (1951) and Gold (1951) proposed their scenarios of alignment.
Paramagnetic Alignment: Davis-Greenstein Process
Davis-Greenstein mechanism (henceforth D-G mechanism) is based on the paramagnetic dissipation experienced by a rotating grain. Paramagnetic materials contain unpaired electrons which get oriented by the interstellar magnetic field $`𝐁`$. The orientation of spins causes grain magnetization, and the latter varies as the vector of magnetization rotates in grain body coordinates. This causes paramagnetic losses at the expense of the grain rotational energy. Note that if the grain angular velocity $`\omega `$ is parallel to $`𝐁`$, the grain magnetization does not change with time and therefore no dissipation takes place. Thus the paramagnetic dissipation acts to decrease the component of $`\omega `$ perpendicular to $`𝐁`$, and one may expect that eventually grains will tend to rotate with $`\omega \parallel 𝐁`$, provided that the relaxation time $`t_{DG}`$ is much shorter than $`t_{gas}`$, the time of randomization through chaotic gaseous bombardment. In practice, the last condition is difficult to satisfy. For $`10^{-5}`$ cm grains in the diffuse medium $`t_{DG}`$ is of the order of $`7\times 10^{13}a_{(5)}^2B_{(5)}^{-2}`$ s, while $`t_{gas}`$ is $`3\times 10^{12}n_{(20)}^{-1}T_{(2)}^{-1/2}a_{(5)}`$ s (see table 2 in Lazarian & Draine 1997) if the magnetic field is $`5\times 10^{-6}`$ G and the temperature and density of the gas are $`100`$ K and $`20`$ cm<sup>-3</sup>, respectively. However, in view of the uncertainties in the interstellar parameters, the D-G theory initially looked adequate.
Mechanical Alignment: Gold Process
Gold mechanism is a process of mechanical alignment of grains. Consider a needle-like grain interacting with a stream of atoms. Assuming that collisions are inelastic, it is easy to see that every bombarding atom deposits angular momentum $`\delta 𝐉=m_{atom}𝐫\times 𝐯_{atom}`$ with the grain, which is directed perpendicular to both the needle axis $`𝐫`$ and the velocity of atoms $`𝐯_{atom}`$. It is obvious that the resulting grain angular momenta will lie in the plane perpendicular to the direction of the stream. It is also easy to see that this type of alignment will be efficient only if the flow is supersonic (otherwise grains will see atoms coming not from one direction but from a wide cone of directions (see Lazarian 1997a), and the efficiency of alignment will decrease). Thus the main issue with the Gold mechanism is to provide a supersonic drift of gas and grains. Gold originally proposed collisions between clouds as the means of enabling this drift, but later papers (Davis 1955) showed that the process could align grains over limited patches of interstellar space only, and thus it cannot account for the ubiquitous grain alignment in the diffuse medium.
Quantitative Treatment and Enhanced Magnetism
The first detailed analytical treatment of the problem of D-G alignment was given by Jones & Spitzer (1967), who described the alignment of $`𝐉`$ using a Fokker-Planck equation. This approach made it possible to account for magnetization fluctuations within the grain material and thus provided a more accurate picture of the $`𝐉`$ alignment. $`Q_X`$ was assumed to follow a Maxwellian distribution, although the authors noted that this might not be correct. The first numerical treatment of D-G alignment was presented by Purcell (1969). By that time it had become clear that the D-G mechanism is too weak to explain the observed grain alignment. However, Jones & Spitzer (1967) noticed that if interstellar grains contain superparamagnetic, ferro- or ferrimagnetic (henceforth SFM) inclusions, $`t_{DG}`$ may be reduced by orders of magnitude. Since $`10\%`$ of the atoms in interstellar dust are iron, the formation of magnetic clusters in grains was not far-fetched (see Spitzer & Tukey 1951, Martin 1995), and therefore the idea was widely accepted. Indeed, with enhanced magnetic susceptibility the D-G mechanism was able to solve all the contemporary problems of alignment. The conclusive work at this stage was the paper by Purcell & Spitzer (1971), where various models of grain alignment, including, for instance, the model of cosmic-ray alignment by Salpeter & Wickramasinche (1969) and photon alignment by Harwit (1970), were quantitatively discussed, and the D-G model with enhanced magnetism was endorsed. It is this stage of development that is widely reflected in many textbooks.
Facing Complexity
Barnett Effect and Fast Larmor Precession
It was realized by Martin (1971) that rotating charged grains will develop a magnetic moment, and the interaction of this moment with the interstellar magnetic field will result in grain precession. The characteristic time for the precession was found to be comparable with $`t_{gas}`$. However, soon a process that renders a much larger magnetic moment was discovered (Dolginov & Mytrophanov 1976). This process is the Barnett effect, which is the converse of the Einstein-de Haas effect. In the Einstein-de Haas effect a paramagnetic body starts rotating during remagnetization, as its flipping electrons transfer the angular momentum (associated with their spins) to the lattice; in the Barnett effect the rotating body shares its angular momentum with the electron subsystem, causing magnetization. The magnetization is directed along the grain angular velocity, and the value of the Barnett-induced magnetic moment is $`\mu \simeq 10^{-19}\omega _{(5)}`$ erg gauss<sup>-1</sup> (where $`\omega _{(5)}\equiv \omega /10^5\mathrm{s}^{-1}`$). Therefore the Larmor precession has a period $`t_{Lar}\simeq 3\times 10^6B_{(5)}^{-1}`$ s, and the magnetic field defines the axis of alignment, as we explained in section 1.
Suprathermal Paramagnetic Alignment: Purcell Mechanism
The next step was taken by Purcell (1975, 1979), who discovered that grains can rotate much faster than was previously thought. He noted that variations of the photoelectric yield, of the H<sub>2</sub> formation efficiency, and of the accommodation coefficient over the grain surface would result in uncompensated torques acting upon a grain. The H<sub>2</sub> formation on the grain surface clearly illustrates the process: if H<sub>2</sub> formation takes place only over particular catalytic sites, these sites act as miniature rocket engines spinning up the grain. Under such uncompensated torques the grain will spin up to velocities much higher than Brownian, which Purcell termed “suprathermal”. Purcell also noticed that for suprathermally rotating grains internal relaxation will bring $`𝐉`$ parallel to the axis of maximal inertia (i.e. $`Q_X=1`$). Indeed, for an oblate spheroidal grain with angular momentum $`𝐉`$ the energy can be written
$$E(\theta )=\frac{J^2}{2I_{max}}\left(1+\mathrm{sin}^2\theta (h-1)\right)$$
(2)
where $`h=I_{max}/I_{\perp }`$ is the ratio of the maximal to minimal moments of inertia. Internal forces cannot change the angular momentum, but it is evident from Eq. (2) that the energy can be decreased by aligning the axis of maximal inertia along $`𝐉`$, i.e. by decreasing $`\theta `$. Purcell (1979) discusses two possible causes of internal dissipation: the first is the well known inelastic relaxation; the second is the mechanism that he discovered and termed “Barnett relaxation”. This process may be easily understood. A freely rotating grain preserves the direction of $`𝐉`$, while the angular velocity precesses about $`𝐉`$ in grain body axes. We learned earlier that the Barnett effect results in a magnetization vector parallel to $`\omega `$. As a result, the Barnett magnetization will precess in body axes and cause paramagnetic relaxation. The “Barnett equivalent magnetic field”, i.e. the equivalent external magnetic field that would cause the same magnetization of the grain material, is $`H_{BE}=5.6\times 10^{-3}\omega _{(5)}`$ G, which is much larger than the interstellar magnetic field. Therefore the Barnett relaxation happens on the time scale $`t_{Bar}\simeq 4\times 10^7\omega _{(5)}^{-2}`$ s, i.e. essentially instantly compared to $`t_{gas}`$ and $`t_{DG}`$.
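The quoted field strength follows directly from the electron gyromagnetic ratio; a quick numerical check (assuming $`\gamma _e\simeq 1.76\times 10^7`$ rad s<sup>-1</sup> G<sup>-1</sup>):

```python
gamma_e = 1.76e7        # electron gyromagnetic ratio [rad s^-1 G^-1]
omega = 1.0e5           # grain angular velocity [rad s^-1]

H_BE = omega / gamma_e  # Barnett equivalent field [G]
print(f"H_BE = {H_BE:.1e} G")   # ~5.7e-3 G vs ~5e-6 G interstellar field
```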
Theory of Crossovers
If $`Q_X=1`$ and suprathermally rotating grains are immune to randomization by gaseous bombardment, will paramagnetic grains be perfectly aligned, with $`R=1`$? This question was addressed by Spitzer & McGlynn (1979) (henceforth SM79), who observed that adsorption of heavy elements on a grain should result in the resurfacing phenomenon that, e.g., should remove early sites of H<sub>2</sub> formation and create new ones. As a result, H<sub>2</sub> torques will occasionally change their direction and spin the grain down. SM79 showed that in the absence of random torques the spinning-down grain will flip over, preserving the direction of its original angular momentum. However, in the presence of random torques this direction will be altered, with the maximal deviation inflicted over a short period of time just before and after the flip, i.e. during the time when the value of the grain angular momentum is minimal. The actual value of the angular momentum during this critical period depends on the ability of $`𝐉`$ to deviate from the axis of maximal inertia. SM79 observed that as the Barnett relaxation couples $`𝐉`$ with the axis of maximal inertia, it makes the randomization of grains during a crossover nearly complete. With the resurfacing time $`t_{res}`$ estimated by SM79 to be of the order of $`t_{gas}`$, the gain in alignment efficiency was insufficient to reconcile theory and observations unless the grains had SFM inclusions.
Radiative Torques
While the introduction of the concept of suprathermality by Purcell changed the way researchers thought of grain dynamics, the introduction of radiative torques passed essentially unnoticed. Dolginov (1972) argued that quartz grains may be spun up due to their specific rotation of polarization, while later Dolginov & Mytrophanov (1976) discovered that an irregular grain shape may allow grains to scatter left- and right-hand polarized light differentially, thus spinning up helical grains through the scattering of photons. They stressed that the most efficient spin-up is expected when the grain size is comparable with the wavelength, and estimated the torque efficiency for particular helical grain shapes, but failed to provide estimates of the relative efficiency of the mechanism under standard interstellar conditions. In any case, this ingenious idea was not appreciated for another 20 years.
Observational tests: Serkowski Law
All in all, by the end of the seventies the following alignment mechanisms were known: 1. paramagnetic (a. with SFM inclusions, b. with suprathermal rotation), 2. mechanical, 3. radiative torques. The third was ignored, and the second was believed to be suppressed for suprathermally rotating grains, which left the two modifications of the paramagnetic mechanism as competing alternatives. Mathis (1986) noticed that the interstellar polarization-wavelength dependence known as the Serkowski law (Serkowski et al. 1975) can be explained if grains larger than $`10^{-5}`$ cm are aligned, while smaller grains are not. To account for this behavior Mathis (1986) stressed that SFM inclusions have a better chance to be in larger rather than smaller grains. The success in fitting observational data persuaded researchers that the problem of grain alignment was solved at last.
New Developments
Optical and near-infrared observations by Goodman et al. (1992, 1995) showed that the polarization efficiency may drop within dark clouds, while far-infrared observations by Hildebrand et al. (1984, 1990) revealed aligned grains within star-forming dark clouds. This renewed interest in the grain alignment problem.
New Life of Radiative Torques
Probably the most dramatic change of the picture was the unexpected advent of radiative torques. Before Bruce Draine realized that the torques can be treated with the versatile discrete dipole approximation (DDA) code, their role was unclear. For instance, earlier difficulties associated with the analytical approach to the problem were discussed in Lazarian (1995a). However, very soon after that, Draine (1996) modified the DDA code to calculate the torques acting on grains of arbitrary shape. The magnitudes of the torques were found to be substantial, and the torques present for grains of various irregular shapes. After that it became impossible to ignore them. Being related to the grain shape rather than to its surface, these torques are long-lived, i.e. $`t_{spinup}\gg t_{gas}`$, which allowed Draine & Weingartner (1996) to conclude that in the presence of isotropic radiation the radiative torques can support fast grain rotation long enough for paramagnetic torques to align grains (and without any SFM inclusions). However, the important question was what would happen in the presence of anisotropic radiation. Indeed, in the presence of such radiation the torques change as the grain aligns, and this may result in a spin-down. Moreover, an anisotropic flux of radiation deposits angular momentum, which is likely to overwhelm the rather weak paramagnetic torques. These sorts of questions were addressed by Draine & Weingartner (1997), and it was found that for most of the grain shapes tried the torques tend to align $`𝐉`$ along the magnetic field. The reason for this is yet unclear, and some caution is needed, as the existing treatment ignores the dynamics of crossovers, which is very important for the alignment of suprathermally rotating grains. Nevertheless, radiative torques are extremely appealing, as their predictions are consistent with observational data (see Lazarian, Goodman & Myers 1997, Hildebrand et al. 1999).
New Elements of Crossovers
Another unexpected development was a substantial change in the picture of crossovers. As we pointed out earlier, Purcell’s discovery of fast internal dissipation resulted in the notion that $`𝐉`$ should always stay along the axis of maximal inertia as long as $`t_{dis}\ll t_{gas}`$. Calculations in SM79 were based on this notion. However, this perfect coupling was questioned in Lazarian (1994) (henceforth L94), where it was shown that thermal fluctuations within the grain material partially randomize the distribution of grain axes with respect to $`𝐉`$. The process was quantified in Lazarian & Roberge (1997) (henceforth LR97), where the distribution of $`\theta `$ for a freely rotating grain was defined through the Boltzmann distribution $`\mathrm{exp}(-E(\theta )/kT_{grain})`$, where the energy $`E(\theta )`$ is given by Eq. (2). This finding changed the understanding of crossovers substantially. First of all, Lazarian & Draine (1997) (henceforth LD97) observed that thermal fluctuations partially decouple $`𝐉`$ from the axis of maximal inertia, and therefore the value of the angular momentum at the moment of a flip is substantially larger than SM79 assumed. Thus the randomization during a crossover is reduced, and LD97 obtained nearly perfect alignment for interstellar grains rotating suprathermally, provided that the grains were larger than a certain critical size $`a_c`$. The latter size was found by equating the time of the crossover and the time of internal dissipation $`t_{dis}`$. For $`a<a_c`$ Lazarian & Draine (1999a) found new physical effects, which they termed “thermal flipping” and “thermal trapping”. Thermal flipping takes place when the time of the crossover becomes larger than $`t_{dis}`$: in this situation thermal fluctuations enable flipovers. However, being random, thermal fluctuations are likely to produce not a single flipover but multiple ones. As the grain flips back and forth, the regular (e.g. H<sub>2</sub>) torques average out and the grain can spend a long time rotating with thermal velocity, i.e. being “thermally trapped”. The paramagnetic alignment of grains rotating with thermal velocities is small (see above), and therefore grains with $`a<a_c`$ are expected to be only marginally aligned. The picture of preferential alignment of large grains, as we know, corresponds to the Serkowski law, and therefore the real issue is to find the value of $`a_c`$. The Barnett relaxation (a study by Lazarian & Efroimsky (1999) corrected the earlier estimate by Purcell (1979), but left the conclusion about the dominance of Barnett relaxation, and therefore the value of $`a_c`$, intact) provides a comforting value of $`a_c10^{-5}`$ cm. However, in a recent paper Lazarian & Draine (1999b) reported a new solid state effect that they termed “nuclear relaxation”. This is an analog of the Barnett relaxation effect involving nuclei. Similarly to unpaired electrons, nuclei tend to get oriented in a rotating body. However, the nuclear analog of the “Barnett equivalent” magnetic field is much larger, and Lazarian & Draine (1999b) concluded that the nuclear relaxation can be a million times faster than the Barnett relaxation. If this is true, $`a_c`$ becomes of the order of $`10^{-4}`$ cm, which means that the majority of interstellar grains undergo constant flipping and rotate essentially thermally in spite of the presence of uncompensated Purcell torques.
The radiative torques are not fixed in body coordinates and it is likely that they can provide a means for suprathermal rotation for grains that are larger than the wavelength of the incoming radiation. Naturally, it is of utmost importance to incorporate the theory of crossovers into the existing codes and this work is under way.
New Ideas and Quantitative Theories
Interest in grain alignment resulted in a search for new mechanisms. For instance, Sorrell (1995a,b) proposed a mechanism of grain spin-up due to interaction with cosmic rays that locally heat grains and cause the evaporation of adsorbed H<sub>2</sub> molecules. However, detailed calculations in Lazarian & Roberge (1997b) showed that the efficiency of the torques had been overestimated; the observations (Chrysostomou et al. 1996) did not confirm Sorrell’s predictions either. A more promising idea, that ambipolar diffusion can align interstellar grains, was put forward in Roberge & Hanany (1990) (calculations are done in Roberge et al. 1995). Within this mechanism, ambipolar drift provides the supersonic velocities necessary for mechanical alignment. Independently, L94 proposed a mechanism of mechanical grain alignment using Alfven waves. Unlike ambipolar diffusion, this mechanism operates even in ideal MHD and relies only on the difference in inertia between atoms and grains. Interest in mechanical processes received an additional boost when it was shown that suprathermally rotating grains can be aligned mechanically (Lazarian 1995b, Lazarian & Efroimsky 1996). Once it was realized that thermally rotating grains do not have $`𝐉`$ tightly coupled to the axis of maximal inertia (L94), and the effect was quantified (LR97), it became possible to formulate quantitative theories of the Gold (Lazarian 1997a) and Davis-Greenstein (Lazarian 1997b, Roberge & Lazarian 1999) alignments. Together with a better understanding of grain superparamagnetism (Draine & Lazarian 1998) and of the resurfacing of grains (Lazarian 1995c), these developments increased the predictive power of grain alignment theory.
Alignment of PAH
All the studies above dealt with classical “large” grains. What about very small (e.g. $`a<10^{-7}`$ cm) grains? Can they be aligned? The answer to this question became acute after Draine & Lazarian (1998) explained the anomalous galactic emission in the range 10–100 GHz as arising from rapidly (but thermally!) spinning tiny grains. This rotational dipole emission will be polarized if the grains are aligned. Lazarian & Draine (2000) (henceforth LD00) found that the generally accepted picture of D-G relaxation is incorrect when applied to such rapidly rotating ($`\omega >10^8`$ s<sup>-1</sup>) particles. Indeed, the D-G mechanism assumes that the relaxation rate is the same whether the grain rotates in a stationary magnetic field or the magnetic field rotates around a stationary grain. However, as the grain rotates, it gets magnetized via the Barnett effect, and the relaxation rate within a magnetized body differs from that in an unmagnetized body. A non-trivial finding in LD00 was that the Barnett magnetization provides the optimal conditions for paramagnetic relaxation, which enables grain alignment at frequencies for which the D-G process is quenched (see Draine 1996). LD00 termed the process “resonance relaxation”, to distinguish it from the D-G process, and calculated the expected alignment for grains of different sizes. Will this alignment be seen through the infrared emission of small, transiently heated grains (e.g. PAHs)? The answer is probably negative. The trouble is that the internal alignment of $`𝐉`$ and the axis of maximal inertia is essentially destroyed if a grain is heated to high temperatures (LR97). Therefore, even if the $`𝐉`$ vectors are well aligned, the grain axes, and therefore the direction of polarization of the emitted infrared photons, will be substantially randomized.
## 3. Summary and Future work
Let us summarize what we have learned about the dynamics of grain alignment. For a $`10^{-5}`$ cm grain in the cold diffuse interstellar medium the fastest motion is the grain rotation, which happens on a time scale of less than $`10^{-4}`$ s. The grain tumbling and the precession of the angular velocity about $`𝐉`$ happen on approximately the same time scale. The alignment of $`𝐉`$ with the axis of maximal inertia happens in a matter of hours due to the very efficient nuclear relaxation. On a time scale of days $`𝐉`$ precesses about $`𝐁`$ due to the grain magnetic moment (Dolginov & Mytrophanov 1976), while the gaseous damping takes $`t_{gas}\sim 10^5`$ years. An alignment mechanism is efficient if the alignment time is a fraction of $`t_{gas}`$ for thermally rotating grains, but it may be many $`t_{gas}`$ if grains rotate suprathermally. In the latter case the dynamics of crossovers is all-important.
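This hierarchy is easy to tabulate; a sketch using the order-of-magnitude scalings quoted in this review for fiducial diffuse-medium parameters (the nuclear-relaxation entry simply applies the quoted factor of $`10^6`$ to the Barnett time):

```python
# characteristic timescales for a grain in the cold diffuse medium,
# using the order-of-magnitude scalings quoted in this review
a5, B5, n20, T2, w5 = 1.0, 1.0, 1.0, 1.0, 1.0   # a/1e-5 cm, B/1e-5 G, ...

timescales = {
    "rotation period [s]":          1e-4 / w5,
    "nuclear relaxation t_nuc [s]": 4e7 / w5**2 / 1e6,  # ~1e6 x faster than Barnett
    "Barnett relaxation t_Bar [s]": 4e7 / w5**2,
    "Larmor precession t_Lar [s]":  3e6 / B5,
    "gas damping t_gas [s]":        3e12 * a5 / (n20 * T2**0.5),
    "Davis-Greenstein t_DG [s]":    7e13 * a5**2 / B5**2,
}
for name, t in sorted(timescales.items(), key=lambda kv: kv[1]):
    print(f"{name:32s} {t:9.1e}")
```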
At the moment radiative torques look like the most promising means of aligning dust. Due to thermal trapping, the Purcell alignment is suppressed. The superparamagnetic hypothesis looks OK (see Goodman & Whittet 1995), but the mechanism faces the problem of driving grain rotation. The same thermal trapping makes grain alignment less efficient in molecular clouds, where the grain rotational temperature approaches the body temperature. It is likely that radiative torques are still required to drive grain rotation.
The most challenging problem right now is to understand the radiative torque mechanism. For this purpose it is necessary to describe crossovers induced by radiative torques and to include the recently discovered flipovers in the existing codes. It also looks necessary to understand why grains align (not always, but very frequently) $`𝐉`$ with $`𝐁`$ when subjected to anisotropic radiation. My experiments with slightly irregular grains (using the code kindly provided to me by Bruce Draine) interacting with anisotropic monochromatic radiation made me believe that it is possible to get a theoretical insight into the underlying physics. However, whatever the theory says, observational tests are necessary. Inversion of the polarimetric data (see Kim & Martin 1995) allows one to find, for different environments, the critical grain size above which grains are aligned. Comparing this size with the predictions calculated for radiative torques should enable testing of the mechanism.
Whatever the success of the radiative torques, it is necessary to proceed with the further development of alternative alignment mechanisms. Some of them, e.g. mechanical alignment, are suspected to cause alignment at least in some regions (see Rao et al. 1998). Ward-Thompson et al. (2000) reported 850 $`\mu `$m polarization from dense pre-stellar cores, where radiative torques should be inefficient. Could grains larger than $`a_c`$, aligned via the modified Purcell mechanism (LD97), be responsible? Or should we appeal to Alfven waves or ambipolar diffusion? Further research will provide the answer. In general, the variety of astrophysical conditions allows various mechanisms (see Lazarian, Goodman & Myers 1997) to have their niche. A clear understanding of grain alignment will make polarimetry much more informative. Although so far grain alignment theory has been applied only to interstellar environments, it is clear that its potential is great for circumstellar and interplanetary studies (see Lazarian 2000).
## References
Chrysostomou, A., Hough, J.H., Whittet, D.C.B., Aitken, D.K., Roche, P.F., & Lazarian, A. 1996, ApJ, 465, L61
Davis, L. 1955, Vistas in Astronomy, ed. A.Beer, 1, 336
Davis, L. & Greenstein, J.L., 1951, ApJ, 114, 206
Dolginov A.Z. 1972, Ap&SS, 16, 337
Dolginov A.Z. & Mytrophanov, I.G. 1976, Ap&SS, 43, 291
Draine, B.T. 1996, in Polarimetry of the Interstellar Medium, eds Roberge W.G. and Whittet, D.C.B., A.S.P. 97. 16
Draine, B.T. & Weingartner, J.C. 1996, ApJ, 470, 551.
1997, ApJ, 480, 633
Draine, B.T. & Lazarian A. 1998a, ApJ, 494, L19
1998b, ApJ, 508, 157
1999, ApJ, 512, 740
Gold, T. 1951, Nature, 169, 322
Goodman, A.A., Jones, T.J., Lada, E.A., & Myers P.C. 1992, ApJ, 399, 108
1995, ApJ, 448, 748
Goodman, A.A., & Whittet, D.C.B. 1995, ApJ, 455, L181
Hall, J.S. 1949, Science, 109, 166
Harwit, M. 1970, Nature, 226, 61
Hildebrand, R.H. 1988, QJRAS, 29, 327
Hildebrand, R.H., & Dragovan, M. & Novak, G. 1984, ApJ, 284, L51
Hildebrand, R.H., Gonatas, D.P., Platt, S.R., Wu, X.D., Davidson, J.A., & Werner, M.W. 1990, ApJ, 362, 114
Hildebrand, R. H., Dotson, J. L., Dowell, C. D., Schleuning, D. A., Vaillancourt, J. E. 1999, ApJ, 516, 834
Hiltner, W.A. 1949, ApJ, 109, 471
Jones, R.V., & Spitzer, L.,Jr, 1967, ApJ, 147, 943
Kim, S.-H., & Martin, P., G. 1995, ApJ, 444, 293
Lazarian, A. 1994, MNRAS, 268, 713, (L94)
1995a, ApJ, 453, 229
1995b, MNRAS, 277, 1235
1995c, MNRAS, 274, 679
1997a, ApJ, 483, 296
1997b, MNRAS, 288, 609
2000, Icarus, submitted
Lazarian, A., & Efroimsky, M. 1996, ApJ, 466, 274
1999, MNRAS, 303, 673
Lazarian, A., & Draine, B.T., 1997, ApJ, 487, 248
1999a, ApJ, 516, L37
1999b, ApJ, 520, L67
2000, ApJ, submitted
Lazarian, A., & Roberge, W.G., 1997a ApJ, 484, 230, (LR97)
1997b, MNRAS, 287, 941
Martin, P.G. 1971, MNRAS, 153, 279
1995, ApJ, 445, L63
Mathis, J.S. 1986, ApJ, 308, 281
Purcell, E.M. 1969, Physica, 41, 100
1975, in Dusty Universe, eds. G.B. Field & A.G.W. Cameron, New York, Neal Watson, p. 155
1979, ApJ, 231, 404
Purcell, E.M., & Spitzer, L., Jr 1971, ApJ, 167, 31
Rao, R, Crutcher, R.M., Plambeck, R.L., Wright, M.C.H. 1998, ApJ, 502, L75
Roberge, W.G. 1996 in Polarimetry of the Interstellar Medium, eds, Roberge W.G. and Whittet, D.C.B., A.S.P. Vol. 97, p. 401
Roberge, W.G., & Hanany, S. 1990, B.A.A.S., 22, 862
Roberge, W.G., DeGraff, T.A., & Flaherty, J.E., 1993, ApJ, 418, 287
Roberge, W.G., & Lazarian, A. 1999, MNRAS, 305, 615
Salpeter, E.E., & Wickramasinche, N.C. 1969, Nature, 222, 442
Serkowski, K., Mathewson, D.S. & Ford, V.L. 1975, ApJ, 196, 261
Sorrell, W.H. 1995a,b, MNRAS, 273, 169 and 187
Spitzer, L., Jr, & Tukey, J.W. 1951, ApJ, 114, 187
Spitzer, L.,Jr & McGlynn T.A. 1979, ApJ, 231, 417, (SM79)
Ward-Thompson, D., Kirk, J.M., Crutcher, R.M., Greaves, J.S., Holland, W.S., & Andre, P. 2000, ApJ, submitted
# Controlling anomalous stresses in soft field-responsive systems.
## I Introduction
Field-responsive systems constitute a class of soft condensed matter systems undergoing significant responses, leading to important macroscopic changes, upon application of an external field. This peculiar characteristic has been used in many applications and may become useful in the implementation of different devices. Electro- and magneto-rheological fluids, ferrofluids and magnetic holes are typical examples of field-responsive systems which have been the subject of many recent investigations.
These systems consist essentially of two phases: one is a dispersion of *smart* active units, whereas the other is a liquid or, more generally, a soft phase practically inactive to the action of the field. The mechanical response of the units to the applied field depends on their nature. If the particles bear permanent dipoles, they induce stresses in the liquid phase during their reorientation process even in the single-particle domain. When the dipoles are induced, their dipolar moments are always collinear with the field. Therefore in this case the only way to induce mechanical responses is through the formation of assemblies of particles, which occurs at higher concentrations, when dipolar interactions start to play a significant role. The elementary assembled unit exhibiting mechanical response is a bounded pair of induced dipoles (a dimer).
Our purpose in this paper is to show that stresses induced by these field-responsive elementary units, the bounded dimers, exhibit a multiplicity of regimes emerging from the nonlinear nature of the dynamics, not observed in other field-responsive phases analyzed up to now. The stresses are anomalous, as they do not necessarily vary monotonically with the characteristic parameters, and reversible, as their appearance is not subject to intrinsic structural changes of the system. This peculiar property has an important consequence: it can be used to control the induction of stresses in the solvent phase.
We have organized the paper in the following way. In Section II we introduce the model describing the dynamics of the system. Section III is devoted to analyzing the stresses generated by the particles, whereas in Section IV we discuss the rheology of the suspension. Finally, the last section is intended as a summary of our main results.
## II The model
To illustrate this phenomenon, we consider a 2D model in which the dynamics of the orientation $`\phi `$ of the bounded dimer captures the two basic ingredients present in experimental situations, namely a pure rotation caused by the applied field and a term breaking the symmetry of the dynamics which originates from the presence of a shear flow:
$$\dot{\phi }=A(t)\mathrm{sin}(2(\alpha (t)-\phi ))-b\mathrm{sin}^2\phi $$
(1)
where the overdot denotes the total time derivative. Here we have considered the general case in which the pure rotation is modulated at the frequency $`w_h`$ according to
$$A(t)=w_c(cos^2(w_ht)+r^2sin^2(w_ht))$$
(2)
where $`w_c`$ is a characteristic frequency and $`r`$ denotes the degree of polarization of the field, ranging from $`r=1`$, corresponding to circular polarization, to $`r=0`$, holding for linear polarization; $`\alpha (t)`$ is a time-dependent phase and $`b`$ is the shear rate.
Physical realizations of this model are, in general, the 2D dynamics of a bounded pair of spherical induced dipoles in the presence of a shear flow with velocity profile $`by\widehat{𝐱}`$ and of an external rotating field with frequency $`\omega _h`$ and components $`H_x\mathrm{cos}\left(\omega _ht\right)\widehat{𝐱}`$ and $`H_y\mathrm{sin}\left(\omega _ht\right)\widehat{𝐲}`$. The equation of motion of the rotating dimer, Eq. (1), then emerges from balancing the hydrodynamic torque
$$T_{hy}=-6\pi \eta _0(b\mathrm{sin}^2\phi +\dot{\phi })$$
(3)
and the external field torque
$$T_M=6\pi \eta _0A(t)\mathrm{sin}(2(\alpha (t)-\phi ))$$
(4)
arising from the energy of dipolar interaction
$$U(\phi )=\frac{M_V^2(1-3\mathrm{cos}^2(\alpha (t)-\phi ))}{d^3}$$
(5)
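As a short consistency check, differentiating this energy reproduces the angular dependence of Eq. (4),

$$T_M=-\frac{\partial U}{\partial \phi }=\frac{3M_V^2}{d^3}\mathrm{sin}(2(\alpha (t)-\phi )),$$

and matching the amplitudes for $`r=1`$ yields the identification of $`w_c`$ quoted below.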
In the previous equations, $`\eta _0`$ is the viscosity of the liquid phase, $`\alpha (t)=\mathrm{arctan}(r\mathrm{tan}(w_ht))`$ is the direction of the field, and $`M_V=V\chi _{eff}H`$ is the induced moment, with $`\chi _{eff}`$ the effective susceptibility and $`V`$ the volume of the sphere of diameter $`d`$. Within this context, the characteristic frequency can be identified as $`w_c=\frac{\chi _{eff}^2H_x^2V^2}{2\pi \eta _0d^3}`$, and $`r=H_y/H_x`$. The model, for a magnetic rotor in the absence of the symmetry-breaking term, has been discussed by Skjeltorp et al. in the context of the nonlinear dynamics of a bounded pair of magnetic holes. For the particular case $`w_c=\frac{mH_x}{6\pi \eta _0}`$ and $`b=0`$, Eq. (1) also describes the 2D dynamics of a ferrofluid particle with magnetic moment $`m`$ in a static magnetic field and a vorticity field $`w_h\widehat{𝐳}`$, in the absence of noise.
The motion of the pair induces stresses in the whole system emerging from the conversion of field torque into tensions in the fluid. In this sense, this process can be viewed as a mechanism of transduction of field energy into stresses whose efficiency is determined by the dynamics. The induced stress is simply the averaged density of hydrodynamic torque:
$$S=6\eta _0c\left\langle \dot{\phi }+b\mathrm{sin}^2\phi \right\rangle $$
(6)
where $`c`$ represents the volume fraction of dimers.
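A minimal numerical sketch of Eqs. (1), (2) and (6) (fourth-order Runge-Kutta, with time measured in units of $`w_c^{-1}`$ so that the arguments play the role of the scaled quantities introduced in the next section; the step size and averaging window are arbitrary choices):

```python
import numpy as np

def phi_dot(phi, t, wh, r, b):
    """Scaled Eq. (1); time in units of 1/w_c, so A(t) -> cos^2 + r^2 sin^2."""
    A = np.cos(wh * t)**2 + r**2 * np.sin(wh * t)**2
    alpha = np.arctan2(r * np.sin(wh * t), np.cos(wh * t))  # branch-safe phase
    return A * np.sin(2.0 * (alpha - phi)) - b * np.sin(phi)**2

def scaled_stress(wh, r=1.0, b=0.0, dt=2e-3, t_max=200.0, t_skip=50.0):
    """Time average of (phi_dot + b sin^2 phi), i.e. S' of Eq. (6) up to the
    prefactor 6*eta_0*c."""
    phi, t, acc, n = 0.0, 0.0, 0.0, 0
    while t < t_max:
        k1 = phi_dot(phi, t, wh, r, b)
        if t > t_skip:                       # discard the initial transient
            acc += k1 + b * np.sin(phi)**2
            n += 1
        k2 = phi_dot(phi + 0.5 * dt * k1, t + 0.5 * dt, wh, r, b)
        k3 = phi_dot(phi + 0.5 * dt * k2, t + 0.5 * dt, wh, r, b)
        k4 = phi_dot(phi + dt * k3, t + dt, wh, r, b)
        phi += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return acc / n

print(scaled_stress(wh=0.5))   # phase-locked regime: S' ~ w_h' for |w_h'| < 1
```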
## III Dynamics and stresses
### A Circular polarization
In order to elucidate the main features of this model, we have solved Eq. (1) numerically. In Figure 1a we have depicted the stress as a function of the frequency of the field for different values of the shear rate, corresponding to the case of circular polarization ($`r=1`$). Since the stress is a homogeneous function, $`S(\lambda w_c,\lambda b,\lambda w_h)=\lambda S(w_c,b,w_h)`$, its behavior can be analyzed in terms of the scaled quantities $`b^{}=b/w_c`$, $`w_h^{}=w_h/w_c`$ and $`S^{}(w_h^{},b^{})=S/w_c`$.
In the absence of shear flow ($`b^{}=0`$), the interplay between hydrodynamic and field effects gives rise to two basic dynamic regimes determined by the value of $`\left|w_h^{}\right|`$. When this frequency is smaller than the threshold $`\left|w_h^{}\right|=1`$, the dimer follows the field with a fixed phase lag and the same angular velocity, performing uniform rotations. At frequencies higher than the threshold value, the system is no longer able to follow the field and undergoes periodic rotations with stops and backward motions (“jerky” oscillations). These two modes of motion are manifested in two different regimes of the stress in Fig. 1a: a linear regime, for $`\left|w_h^{}\right|<1`$, in which the scaled stress is just the frequency of the field, and a monotonous decay regime for $`\left|w_h^{}\right|>1`$, where the modulus of the stress decreases due to the jerky oscillations. During backward rotations field energy is wasted inducing tensions of the “wrong” sign. When these become as important as the forward rotations, which occurs at high frequencies, the net transduction of energy, and consequently the induced stress, is practically nonexistent.
The presence of a shear flow completely modifies the dynamical response, leading to the appearance of a richer phenomenology. The role played by the flow is manifold. On one hand, it breaks the symmetry of the dynamics by fixing a direction of rotation, which implies that the property $`S^{}(-w_h^{})=-S^{}(w_h^{})`$, which holds in the absence of shear, is no longer valid. On the other hand, the regimes in which one of the two competing rotational mechanisms, related to the field and to the flow, dominates are intrinsically different. Finally, the presence of the new time scale $`b^{-1}`$ is responsible for the appearance of new *synchronization* mechanisms.
For $`b^{}<1`$ the strength of the field dominates, and the behavior of the stress is similar to that of the case $`b^{}=0`$ but shifted in frequency by an amount $`b^{}/2`$, as one can notice in Fig. 1a. In Fig. 1b we have represented some snapshots of the dynamical modes of rotation corresponding to the different regimes of the stress. Upon increasing $`w_h^{}`$ we can generate the sequence of modes: jerky (first two snapshots), uniform, jerky, and localized oscillations. We have found a dynamic transition, from *jerky* to *localized oscillations*, at a characteristic positive frequency which depends on $`b^{}`$, with its subsequent macroscopic consequences for the stress. Moreover, the competition between flow and field breaks the symmetry of the stress and leads to a decrease of the stress peak at positive frequencies.
In the opposite case, when $`b^{}>1`$, the effects of the flow dominate; in this situation, even at frequencies near zero, the rotation imposed by the field is very different from the one dictated by the shear. As we can see in Fig. 1a, the positive peak has definitively disappeared, and the behavior of the stress is characterized by the development of small *multi-resonances* followed by linear increases and decreases of the stress with slopes -1, -2, etc. The origin of this behavior is the synchronization of the field and the shear, exciting mode-locks of the pair with frequency ratios $`S^{}:w_h^{}`$ of 1:1, 2:1, etc. It is important to highlight that, for every value of the shear rate, the stress curves overlap at high and moderate frequencies.
As an illustrative example, in Fig. 1c we have depicted some snapshots of the dynamics for $`b^{}=8`$, corresponding to the different regimes of the stress; upon increasing $`w_h^{}`$ one obtains the sequence of modes jerky, uniform, jerky (3rd and 4th snapshots), and localized oscillations.
It is worth pointing out that the frequency of the negative stress minimum, where the transition between the linear and the jerky oscillation regimes occurs, follows a power law in the shear rate: $`w_{\mathrm{min}}^{}\sim (b^{}+1)^{0.45}`$, as shown in Fig. 2.
### B Elliptical polarization
Even more interesting is the case of elliptical polarization. In Fig. 3a we have depicted the stress against frequency for $`r=0.5`$ and different values of the shear rate. In the absence of shear, the dynamics basically exhibits three different modes as we increase $`\left|w_h^{}\right|`$: i) a *phase-locked* mode, where the system performs uniform rotations, modulated by the term $`A(t)`$, with average frequency $`w_h^{}`$; ii) a *modulated “jerky”* oscillation mode, above a critical frequency; iii) and *localized oscillations*, with null average velocity, above another characteristic frequency. These modes are responsible for three different behaviors of the stress observed in Fig. 3a: a linear regime near $`\left|w_h^{}\right|=0`$, a decay of the modulus due to jerky oscillations, and a non-stress zone when the net rotation vanishes, respectively.
The introduction of the flow changes these regimes significantly. For very low values of $`b^{}`$, the modes of rotation are only slightly modified, as shown in Fig. 3b; the curve is simply shifted by $`b^{}/2`$ in frequency; and the scaled stress when localization appears is no longer zero but saturates at approximately $`b^{}`$, remaining positive even for $`w_h^{}<0`$.
At a critical value of the shear rate, the positive stress maximum disappears, as shown in Fig. 3a for $`b^{}=0.5`$. Above this value of the shear rate, *multi-resonances* develop near $`w_h^{}\approx 0`$. The critical frequency denoting the transition from uniform to jerky oscillations, corresponding to the position of the minimum of the stress, is shifted following a power law with an exponent near 0.5. Additionally, the jerky-oscillation mode persists over a wider range of negative frequencies, which in turn causes the negative stress region to persist at moderate to high frequencies. The dynamic transition from jerky to localized oscillations at negative frequencies is the signature of a change in the sign of the stress.
## IV Rheology
When represented as a function of the shear rate, the stress exhibits a wide variety of anomalous behaviors. This feature contrasts with the monotonic behavior observed in systems inert to the applied field . The existence of such a rich phenomenology is manifested in Figs. 4 and 5. Their most salient feature is that, by fixing a proper value of the frequency of the field, we can control and induce drastic changes in the mechanical response of the system.
For certain values of the shear rate the induced stress increases steeply. Consequently, the system exhibits a *multi-resonant response*, as can be seen for $`w_h^{}=0.5`$ in Fig. 4. These resonances originate from the synchronization of the field with the hydrodynamic response of the system, which enhances the induction of stresses.
For a wide range of field frequencies and shear rates, the response of the system to variations of the shear rate is inhibited (a *no-response* or *“blockade”* regime, corresponding to the flat curves in Fig. 4).
There also exists a regime where the energy transduced from the field enhances the rotation of the pair in the shear, leading to a reduction of the apparent viscosity of the fluid. This phenomenon has been referred to as the *negative viscosity effect*.
Finally, monotonic *shear thickening* and *shear thinning* regimes, or combinations of both, also appear, as shown in Fig. 4.
It is worth pointing out that the stress curves are quite self-similar, as Fig. 6 shows. Therefore, we can tune the regime we are interested in by properly modifying one of the parameters of the problem. Moreover, the existence of this scaling invariance ensures the accessibility of these regimes over the whole range of parameter values.
## V Discussion and conclusions
In this paper, we have shown the possibility of generating stresses of very different nature in assemblies of pairs of induced dipoles. The implementation of a model which mimics the dynamics of the field-responding unit leads to the appearance of a rich variety of nonlinear stress regimes involving multi-resonances, shear-thickening and thinning, negative viscosity, or blockades.
This multiplicity of intrinsically different behaviors, together with the reversible nature of the transition mechanisms, can be utilized to control the induction of stresses in the inactive phase. A broad field of applications of this phenomenon is then opened. The importance of the control of the stress lies in the fact that stress itself may induce significant modifications in soft condensed matter phases. To mention just a few examples, stresses may induce structural transitions in surfactant solutions or gelation; they can also modify the orientation of surfactant phases, liquid crystals or polymers. Moreover, alterations in the distribution of stresses may lead to important changes in the rheological properties of the system.
In the cases we have analyzed, possible noise sources have not been considered. Whereas the absence of noise constitutes a good approximation for large particles, such as magnetic holes, smaller particles, such as those in a ferrofluid, are affected by Brownian torques. In the first case, the model we have proposed through Eq. 1 suffices to describe the dynamics of the suspended phase. For the ferrofluid, however, the model must and can easily be generalized to include noise sources.
Our findings may open new perspectives for research in these systems offering some insight into the mesoscopic mechanisms controlling macroscopic nonlinear behaviors.
## VI Acknowledgments
We would like to thank T. Alarcón for valuable discussions. This work has been supported by DGICYT of the Spanish Government under grant PB98-1258, and by the INCO-COPERNICUS program of the European Commission under Contract IC15-CT96-0719. D. Reguera wishes to thank Generalitat de Catalunya for financial support.
# A Rapid X-ray Flare from Markarian 501
## 1 Introduction
BL Lacertae objects (BL Lacs) are members of the blazar class of active galactic nuclei (AGN). Blazars exhibit rapid, large amplitude variability at all wavelengths, high optical and radio polarization, apparent superluminal motion, and in some cases, gamma-ray emission. All of these observational properties lead to the broadly held belief that blazars are AGN with jets oriented nearly along our line of sight. The broadband spectral energy distribution of blazars, when plotted as $`\nu `$F<sub>ν</sub> versus frequency, shows a double peaked shape, with a smooth extension from radio to between IR and X-ray frequencies (depending on the specific blazar type), followed by a distribution that typically starts in the X-ray band and can peak in the gamma-ray band, at energies as high as several hundred GeV. The low energy part is believed to be incoherent synchrotron radiation from a relativistic electron-positron plasma in the blazar jet. The origin of the high energy emission is still a matter of considerable debate (e.g., Buckley 1998; Mannheim 1998).
Markarian 501 (Mrk 501) is one of the closest BL Lacs ($`z=0.034`$) known and it is one of the brightest in the X-ray band. Because its peak spectral power output occurs at UV/X-ray energies, Mrk 501 is classified as a high frequency peaked BL Lac. It is one of only six BL Lacs reported as sources of very high energy (VHE, E $`>`$ 250 GeV) gamma rays and, along with Mrk 421, one of only two that have been confirmed as VHE sources (for a review, see Catanese & Weekes 1999). Mrk 501 has also been detected as a source of gamma rays at GeV energies with the Energetic Gamma Ray Experiment Telescope (EGRET; Kataoka et al. 1999) and at a few hundred keV by the Oriented Scintillation Spectrometer Experiment (OSSE; Catanese et al. 1997) on the Compton Gamma-Ray Observatory.
Just as all other blazars, Mrk 501 exhibits rapid, large amplitude variability over a wide range of wavelengths. In X-rays, variations of 30% to 300% were observed on time scales of days in 1997 during a high emission state (Pian et al., 1998; Lamer & Wagner, 1998; Catanese, 1999) but no sub-day scale flares were seen. The fastest variation observed in X-rays was a flux increase of approximately 20% in about 12 hours seen with EXOSAT in 1986 (Giommi et al., 1990). Spectral variability in X-rays from Mrk 501 has been both moderate, with changes in the spectral index of $`\sim `$0.1-0.3 on several day scales (e.g., Pian et al. 1998), and rapid, with spectral variations of $`\sim `$0.5 on 2-3 day time-scales, observed in 1998 June (Sambruna et al., 2000). The Whipple Observatory has observed VHE gamma-ray variations spanning a factor of $`>`$70 in flux in four years of observations and has observed significant variability with time-scales as short as 2 hours (Quinn et al., 1999). Similar variability ranges are observed by other VHE telescopes (Hayashida et al., 1998; Aharonian et al., 1999; Djannati-Ataï et al., 1999). In the R-band, Miller et al. (1999) reported the detection of a flare in which the flux increased by 4% (from $`V_R=13.90`$ to 13.80) in 15 minutes with a decay to the previous level in the same amount of time.
Multi-wavelength observations have revealed correlations between VHE gamma rays and X-rays in this object (Catanese et al., 1997) and in 1997, the synchrotron spectrum of Mrk 501 was observed to extend up to approximately 100 keV (Catanese et al., 1997; Pian et al., 1998; Catanese, 1999), the highest seen in any blazar and a 50-fold increase over what was observed only one year before (Kataoka et al., 1999). This behavior has established Mrk 501 as the prototype for a subset of BL Lacs that exhibit large shifts in the peak of their synchrotron spectra during flares.
In this paper, we report on observations of Mrk 501 taken with the Rossi X-ray Timing Explorer (RXTE) in 1998 May as part of a multi-wavelength campaign. The full multi-wavelength results will be reported in a future work. Here, we concentrate on the observation of a very rapid flare and discuss its implications.
## 2 Observations and Analysis
RXTE consists of the Proportional Counter Array (PCA; Jahoda et al. 1996) which is sensitive to 2-60 keV photons, the High Energy X-ray Transient Experiment (HEXTE; Rothschild et al. 1998) which is sensitive to 15-150 keV photons, and the All-Sky Monitor (ASM; Levine et al. 1996) which is sensitive to 2-12 keV photons. Here we report on the results of the PCA observations. The count rate for HEXTE was too low to obtain significant count rate variations on the time scales observed. There are no observations with the ASM during the period of the flare. Also, because the X-ray flare occurred at approximately 18 hours Universal Coordinated Time (UTC), there are no overlapping TeV observations from Whipple, HEGRA, CAT, or the Telescope Array.
The observations of Mrk 501 occurred between 1998 May 15 and 29. They consisted of roughly four 2-3 ks pointings per day for the full two weeks. After screening for good time intervals, as described below, the data set consists of 114 ks of observations. Each observation resulted in a very significant detection of Mrk 501 that allowed spectra to be resolved and short-term variability to be investigated.
We used FTOOLS v4.2 to analyze these data. The background was estimated using the weak source background models (appropriate for sources with count rates $`<`$40 counts/s/PCU) and the latest response matrices obtained from the RXTE Guest Observer Facility (GOF) web site (http://heasarc.gsfc.nasa.gov/docs/xte/xte_1st.html). Good time intervals were selected from the standard 2 data files using the screening criteria recommended by the RXTE GOF. Finally, only proportional counter units (PCUs) 0, 1, and 2 were active for the vast majority of the observations presented here, so we only use those PCUs in this work. For spectra and all other light curves, we used all three layers. Spectral fits were performed using XSPEC 10.0. We use the Galactic hydrogen column density of $`1.73\times 10^{20}`$ cm<sup>-2</sup> (Elvis, Wilkes & Lockman, 1989) to model the effect of photoelectric absorption, which is negligible for the energy range covered by the PCA for this object located far from the Galactic plane.
## 3 Lightcurves
The 2-10 keV lightcurve for the 1998 May observations is shown in Figure 1. The data points are shown in 1024-second bins. The count rate varies by approximately a factor of 1.8, from 21 cts/s to 38 cts/s. By comparison, RXTE observations of Mrk 501 in 1997 April exhibited 2-10 keV lightcurve count rates of 80 cts/s to 160 cts/s (Catanese, 1999). Thus, Mrk 501 was in a much lower emission state during these observations than in 1997, though the flux was relatively high by historical standards. The light curve begins with a 20% drop over two days, followed by a 50% rise over three days, and ends with a gradual decline in flux of 65% over the remaining nine days of the observation. During the decline, small amplitude, day-scale flares are evident at MJD 50955 and 50960.
Most notably, toward the end of MJD 50958 (indicated by the two vertical lines in Fig. 1), the flux is much higher than the surrounding observations, and is actually the highest count rate in this light curve. For the remainder of the paper we concentrate on this flare observation. Detailed analysis of the entire 1998 data set will be presented in a future paper. This flare occurs during what is otherwise a somewhat unremarkable observation period. There is no evidence of significant short term variability within any other single observation nor is the variability seen on day scales in the other observations of such large amplitude. The largest variation on any other day is a 30% increase of the flux between MJD 50951 and 50952. Our comparisons of background files with the PCA background model and investigations of the data quality monitors (e.g., electron activation) confirm that this flare is not a spurious effect.
In Figure 2, we show the 2-10 keV and 10-15 keV light curves for the observation indicated by the vertical lines in Figure 1. The observation was taken on May 25 between 17.9 and 18.6 hours Universal Coordinated Time (UTC). The data are shown in 96-second bins. During this observation, Mrk 501 exhibited low flux, very steady emission for about half of the observation. After this, the count rate increased from approximately 26 cts/s to 41 cts/s in approximately 200 seconds, corresponding to a brightness increase of 13%/minute. This is followed by a steady decrease in the count rate to approximately 30 cts/s in approximately 580 seconds, a decline rate of 3%/minute. Though not quite as well measured due to lower statistics, the flare is clearly evident in the 10-15 keV light curve, with a similar rise and fall time-scale. The 10-15 keV flux variation rate is 18%/minute and 3%/minute for the rise and fall of the flare, respectively.
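(The percent-per-minute figures quoted here are consistent with e-folding rates, $`r=\mathrm{ln}(F_2/F_1)/\mathrm{\Delta }t`$: for the 2-10 keV band, $`\mathrm{ln}(41/26)/(200\text{s})\simeq 0.14\text{min}^{-1}`$ for the rise and $`\mathrm{ln}(41/30)/(580\text{s})\simeq 0.03\text{min}^{-1}`$ for the decay, matching the 13%/minute and 3%/minute values.)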
Observations were also taken with RXTE approximately 5.5 hours before and 5.5 hours after this observation. Neither showed any significant variability. The observation before the flare had an average 2-10 keV count rate of 24.5 cts/s and the one after the flare had an average count rate of 27 cts/s. Both are consistent with little or no change in flux compared to that seen at the start and end of the flare observation.
## 4 Spectra
The average spectrum for the observations between May 15 and 29 is best fit by a broken power law model with a photon spectral index of 1.92$`\pm `$0.01 up to a break at 6.2$`\pm `$0.3 keV, above which the spectral index is 2.07$`\pm `$0.01. The 2-10 keV flux during this period is $`(1.12\pm 0.02)\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. The spectrum extends to at least 40 keV with no evidence of another break. This spectrum connects smoothly with OSSE measurements during this period (Buckley, 1999). Observations taken with Beppo SAX on April 28 and 29 and May 1 (Pian et al., 1999) indicate a somewhat harder spectrum with peak power output at $`\sim `$20 keV. The X-ray flux during the Beppo SAX observation is $`\sim `$50% higher than in the RXTE observations reported here, so the spectral shift is consistent with the previously observed tendency of Mrk 501 to increase the energy at which the spectral energy distribution peaks as its flux increases (e.g., Pian et al. 1998).
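(For concreteness, the broken power law photon spectrum quoted above can be written down directly; in the sketch below the indices and break energy default to the best-fit values, while the normalization is arbitrary and not taken from the fits.)

```python
import numpy as np

def broken_powerlaw(E, norm=1.0, gamma1=1.92, gamma2=2.07, E_break=6.2):
    """Photon flux density N(E) with the indices quoted in the text;
    continuity is enforced at the break energy (E in keV)."""
    E = np.asarray(E, dtype=float)
    low = norm * E**(-gamma1)
    high = norm * E_break**(gamma2 - gamma1) * E**(-gamma2)
    return np.where(E <= E_break, low, high)
```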
Spectral analysis of the flare observation reveals a rapid change during the course of the flare. We fit the spectrum from 3-15 keV where there are sufficient photon statistics and no problems with the PCA response function. The average spectrum for the observation is well-fit by a simple power law with a photon spectral index of $`\mathrm{\Gamma }=1.95\pm 0.03`$. To investigate the evolution of the spectrum during the course of the flare, we break up the data into three regions: before the flare, during the rise, and during the decay. All are well-fit by simple power laws and a summary of those fits is given in Table 1. The photon spectral index is $`\mathrm{\Gamma }=2.02\pm 0.03`$ before the flare, $`2.04\pm 0.11`$ during the rise of the flare, and $`1.87\pm 0.04`$ during the decay of the flare. The spectrum during the flare follows this power law at least out to 30 keV, indicating a large shift in the location of the peak power output during the flare. Dividing the decay of the flare into two parts does not reveal significantly different spectra than the average spectrum for the entire decay region.
The observation taken approximately 5.5 hours before the flare indicates a spectral index of $`2.02\pm 0.02`$ and a 2-10 keV flux of $`(0.98\pm 0.04)\times 10^{10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, consistent with the observations just prior to the flare. The observation taken approximately 5.5 hours after the flare observation indicates a spectral index of $`2.08\pm 0.04`$ and a 2-10 keV flux of $`(1.10\pm 0.08)\times 10^{10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, indicating a significant softening of the spectrum following the flare.
## 5 Discussion
The variation in the spectral index during the course of the flare can provide insights into the dominant flaring timescales and acceleration process. As discussed by Kirk & Mastichiadis (1999), for a flare in which the variability and acceleration time-scales are much less than the cooling time-scale a plot of the spectral index versus flux should follow a clockwise pattern, i.e., the harder energies vary first. For a flare where the variability, acceleration, and cooling time-scales are similar, the spectral index versus flux diagram should move in a counter-clockwise direction, i.e., the softer energies vary first because the number of particles changes due to the acceleration process which proceeds from low energy to high energy. Clockwise patterns are most commonly observed in the TeV sources Mrk 421 (e.g., Takahashi et al. 1996) and PKS 2155-304 (e.g., Kataoka et al. 2000) but counter-clockwise patterns have been recently observed from these objects (Fossati et al., 2000; Sambruna, 2000).
Because the data do not have sufficient statistics to plot spectral index versus flux on such short time scales, we instead plot the hardness ratio (10-15 keV count rate/2-10 keV count rate) versus flux for the flare on May 25 in Figure 3. The numbers in the plot indicate the development of the hardness ratio in time during the flare. The large cluster of filled circles represents the observations before the onset of the flare. The filled triangles represent the rising part of the flare. The point indicated by the tail of the arrow marked with the “1” is the last low flux point before the flare starts. The filled squares are data taken during the decay of the flare. During the rise of the flare, the hardness ratio increases steadily. During the decay of the flare, there is a slight trend for the hardness ratio to increase even further. Thus, these observations are consistent with a counter-clockwise pattern. A clockwise pattern seems precluded by the significantly harder spectrum during the decay of the flare than the rise (see Table 1) but other patterns in the hardness ratio versus flux diagram cannot be ruled out given the statistical errors in this observation. The counter-clockwise pattern and the large shift in the synchrotron peak imply that the acceleration process dominates the development of the flare, accelerating a fresh population of high energy electrons which causes the flare.
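(The sense of such a loop can also be quantified directly from the binned data, for instance via the signed area of the closed track in the flux-hardness plane; the sketch below is a generic utility along these lines, not part of our analysis pipeline.)

```python
import numpy as np

def loop_sense(flux, hardness):
    """Signed area of the closed track in the (flux, hardness) plane,
    via the shoelace formula: > 0 counter-clockwise, < 0 clockwise."""
    x = np.append(flux, flux[0])        # close the loop
    y = np.append(hardness, hardness[0])
    return 0.5 * np.sum(x[:-1] * y[1:] - x[1:] * y[:-1])
```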
In summary, RXTE observations of Mrk 501 in 1998 May have revealed a flux state which was approximately one-fourth as strong as observed in 1997, and an average spectrum with peak power output at approximately 6 keV. This is a decrease of more than a factor of 15 from the 100 keV peak seen in 1997. During these observations, a very rapid flare was observed in which the location of peak power output increased from $`\sim `$3 keV to $`\gtrsim `$30 keV. This large shift in peak power output energy is similar to the behavior of Mrk 501 in 1997 and in 1998 June. The evolution of the hardness ratio of the flare is consistent with the flare development being dominated by the acceleration process but the lack of simultaneous multi-wavelength observations prohibits further detailed testing of emission models.
Though rapid variations have been seen from Mrk 501 at other wavelengths before, it was generally regarded as a more slowly varying object than the other TeV sources, Mrk 421 and PKS 2155-304. Thus, one could conduct less dense observations of Mrk 501 and still sample the variations with adequate coverage to resolve the shape of the variations and the correlations between wavelengths. These observations show that very dense multi-wavelength observations are required for Mrk 501 as well since it can vary on time-scales comparable to the fastest seen in the other TeV sources. They also indicate that, as has been seen in Mrk 421 (Gaidos et al., 1996), these very rapid flares can occur when the source is not in a particularly high emission state, so dense observations must be used regardless of the flux level observed from these objects. Multi-wavelength observations of such rapid flares will provide stringent tests of emission models for these TeV sources and may lead to a better understanding of the acceleration process that occurs in their jets.
The authors wish to thank K. Jahoda and T. Jaffe for advice about the RXTE analysis. MC wishes to thank T. Weekes, D. Carter-Lewis, J. Finley, J. Buckley, C. Dermer, N. Johnson, and F. Krennrich for their support of these observations. MC acknowledges grant support from NASA and the U. S. Department of Energy. RMS acknowledges support from NASA contract NAS–38252.
# 1 Problem
## 1 Problem
A variant on the electro- or magnetostatic boundary value problem arises in accelerator physics, where a specified field, say $`𝐁(0,0,z)`$, is desired along the $`z`$ axis. In general there exist static fields $`𝐁(x,y,z)`$ that reduce to the desired field on the axis, but the “boundary condition” $`𝐁(0,0,z)`$ is not sufficient to insure a unique solution.
For example, find a field $`𝐁(x,y,z)`$ that reduces to
$$𝐁(0,0,z)=B_0\mathrm{cos}kz\widehat{𝐱}+B_0\mathrm{sin}kz\widehat{𝐲}$$
(1)
on the $`z`$ axis. In this, the magnetic field rotates around the $`z`$ axis as $`z`$ advances.
The use of rectangular or cylindrical coordinates leads “naturally” to different forms for B. One 3-dimensional field extension of (1) is the so-called helical wiggler , which obeys the auxiliary requirement that the field at $`z+\delta `$ be the same as the field at $`z`$, but rotated by angle $`k\delta `$.
## 2 Solution
### 2.1 Solution in Rectangular Coordinates
We first seek a solution in rectangular coordinates, and expect that separation of variables will apply. Thus, we consider the form
$`B_x`$ $`=`$ $`f(x)g(y)\mathrm{cos}kz,`$ (2)
$`B_y`$ $`=`$ $`F(x)G(y)\mathrm{sin}kz,`$ (3)
$`B_z`$ $`=`$ $`A(x)B(y)C(z).`$ (4)
Then
$$∇⋅𝐁=0=f^{\prime }g\mathrm{cos}kz+FG^{\prime }\mathrm{sin}kz+ABC^{\prime },$$
(5)
where the prime indicates differentiation of a function with respect to its argument. Equation (5) can be integrated to give
$$ABC=-\frac{f^{\prime }g}{k}\mathrm{sin}kz+\frac{FG^{\prime }}{k}\mathrm{cos}kz.$$
(6)
The $`z`$ component of $`∇\times 𝐁=0`$ tells us that
$$\frac{\partial B_x}{\partial y}=fg^{\prime }\mathrm{cos}kz=\frac{\partial B_y}{\partial x}=F^{\prime }G\mathrm{sin}kz,$$
(7)
which implies that $`g`$ and $`F`$ are constant, say 1. Likewise,
$$\frac{\partial B_x}{\partial z}=-fk\mathrm{sin}kz=\frac{\partial B_z}{\partial x}=A^{\prime }BC=-\frac{f^{\prime \prime }}{k}\mathrm{sin}kz,$$
(8)
using (6-7). Thus, $`f^{\prime \prime }-k^2f=0`$, so
$$f=f_1e^{kx}+f_2e^{-kx}.$$
(9)
Finally,
$$\frac{\partial B_y}{\partial z}=Gk\mathrm{cos}kz=\frac{\partial B_z}{\partial y}=AB^{\prime }C=\frac{G^{\prime \prime }}{k}\mathrm{cos}kz,$$
(10)
so
$$G=G_1e^{ky}+G_2e^{-ky}.$$
(11)
The “boundary conditions” $`f(0)=B_0=G(0)`$ are satisfied by
$$f=B_0\mathrm{cosh}kx,G=B_0\mathrm{cosh}ky,$$
(12)
which together with (6) leads to the solution
$`B_x`$ $`=`$ $`B_0\mathrm{cosh}kx\mathrm{cos}kz,`$ (13)
$`B_y`$ $`=`$ $`B_0\mathrm{cosh}ky\mathrm{sin}kz,`$ (14)
$`B_z`$ $`=`$ $`-B_0\mathrm{sinh}kx\mathrm{sin}kz+B_0\mathrm{sinh}ky\mathrm{cos}kz,`$ (15)
This satisfies the last “boundary condition” that $`B_z(0,0,z)=0`$.
However, this solution does not have helical symmetry.
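(As an aside, not part of the original solution, the algebra above is easy to verify symbolically; the following sketch checks that the solution (13)-(15) is both divergence-free and curl-free.)

```python
import sympy as sp

x, y, z, k, B0 = sp.symbols('x y z k B_0', real=True)
Bx = B0*sp.cosh(k*x)*sp.cos(k*z)
By = B0*sp.cosh(k*y)*sp.sin(k*z)
Bz = -B0*sp.sinh(k*x)*sp.sin(k*z) + B0*sp.sinh(k*y)*sp.cos(k*z)

div = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)
curl = [sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y)]
print(sp.simplify(div), [sp.simplify(c) for c in curl])   # 0 [0, 0, 0]
```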
### 2.2 Solution in Cylindrical Coordinates
Suppose instead, we look for a solution in cylindrical coordinates $`(r,\theta ,z)`$. We again expect separation of variables, but we seek to enforce the helical symmetry that the field at $`z+\delta `$ be the same as the field at $`z`$, but rotated by angle $`k\delta `$. This symmetry implies that the argument $`kz`$ should be replaced by $`kz-\theta `$, and that the field has no other $`\theta `$ dependence.
We begin constructing our solution with the hypothesis that
$`B_r`$ $`=`$ $`F(r)\mathrm{cos}(kz-\theta ),`$ (16)
$`B_\theta `$ $`=`$ $`G(r)\mathrm{sin}(kz-\theta ).`$ (17)
To satisfy the condition (1) on the $`z`$ axis, we first transform this to rectangular components,
$`B_x`$ $`=`$ $`F(r)\mathrm{cos}(kz-\theta )\mathrm{cos}\theta -G(r)\mathrm{sin}(kz-\theta )\mathrm{sin}\theta ,`$ (18)
$`B_y`$ $`=`$ $`F(r)\mathrm{cos}(kz-\theta )\mathrm{sin}\theta +G(r)\mathrm{sin}(kz-\theta )\mathrm{cos}\theta ,`$ (19)
from which we learn that the “boundary conditions” on $`F`$ and $`G`$ are
$$F(0)=G(0)=B_0.$$
(20)
A suitable form for $`B_z`$ can be obtained from $`(∇\times 𝐁)_r=0`$:
$$\frac{1}{r}\frac{\partial B_z}{\partial \theta }=\frac{\partial B_\theta }{\partial z}=kG\mathrm{cos}(kz-\theta ),$$
(21)
so
$$B_z=-krG\mathrm{sin}(kz-\theta ),$$
(22)
which vanishes on the $`z`$ axis as desired.
From either $`(∇\times 𝐁)_\theta =0`$ or $`(∇\times 𝐁)_z=0`$ we find that
$$F=\frac{d(rG)}{dr}.$$
(23)
Then, $`∇⋅𝐁=0`$ leads to
$$(kr)^2\frac{d^2(krG)}{d(kr)^2}+kr\frac{d(krG)}{d(kr)}-[1+(kr)^2](krG)=0.$$
(24)
This is the differential equation for the modified Bessel function of order 1 . Hence,
$`G=C{\displaystyle \frac{I_1(kr)}{kr}}={\displaystyle \frac{C}{2}}\left[1+{\displaystyle \frac{(kr)^2}{8}}+\mathrm{}\right],`$ (25)
$`F=C{\displaystyle \frac{dI_1}{d(kr)}}=C\left(I_0-{\displaystyle \frac{I_1}{kr}}\right)={\displaystyle \frac{C}{2}}\left[1+{\displaystyle \frac{3(kr)^2}{8}}+\mathrm{}\right].`$ (26)
The “boundary conditions” (20) require that $`C=2B_0`$, so our second solution is
$`B_r`$ $`=`$ $`2B_0\left(I_0(kr)-{\displaystyle \frac{I_1(kr)}{kr}}\right)\mathrm{cos}(kz-\theta ),`$ (27)
$`B_\theta `$ $`=`$ $`2B_0{\displaystyle \frac{I_1}{kr}}\mathrm{sin}(kz-\theta ),`$ (28)
$`B_z`$ $`=`$ $`-2B_0I_1\mathrm{sin}(kz-\theta ),`$ (29)
which is the form discussed in .
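For numerical evaluation of Eqs. (27)-(29), standard modified-Bessel routines suffice; the sketch below (restricted to $`r>0`$ to avoid the removable $`0/0`$ on the axis; function names are ours) also confirms that the field reduces to Eq. (1) near the axis.

```python
import numpy as np
from scipy.special import iv   # modified Bessel function I_n(x)

def helical_B(r, theta, z, k=1.0, B0=1.0):
    """Cylindrical components (B_r, B_theta, B_z) of Eqs. (27)-(29); r > 0."""
    kr = k * r
    phase = k*z - theta
    Br = 2*B0*(iv(0, kr) - iv(1, kr)/kr)*np.cos(phase)
    Bt = 2*B0*(iv(1, kr)/kr)*np.sin(phase)
    Bz = -2*B0*iv(1, kr)*np.sin(phase)
    return Br, Bt, Bz

# near the axis the rectangular components reproduce Eq. (1):
r, th, z = 1e-6, 0.3, 0.7
Br, Bt, Bz = helical_B(r, th, z)
print(Br*np.cos(th) - Bt*np.sin(th), np.cos(z))   # B_x ~ B0 cos kz
print(Br*np.sin(th) + Bt*np.cos(th), np.sin(z))   # B_y ~ B0 sin kz
```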
# Short Coherence Length Superconductivity: A Generalization of BCS Theory for the Underdoped Cuprates
## 1 Introduction and Formalism
While two main alternatives have been suggested for addressing the cuprate pseudogap (the phase fluctuation approach and the “nodal” $`d`$-wave quasi-particle picture), this paper presents a third alternative which is based on the presumption that one should investigate small excursions from BCS without abandoning it altogether. This philosophy rests on the observation that (i) $`\xi `$ is short so that the mean field theoretic approach of BCS should not be expected to be correct in detail and (ii) on Uemura’s observation that the cuprate superconductors belong to a large class of “exotic” materials. While the possible role of stripes, spin-charge separation and $`d`$-wave pairing has received much attention in the cuprate literature, here we argue that Uemura’s famous plot suggests that one formulate a theory of superconductivity around a more universal approach without invoking specific materials-dependent features.
In this paper we present an extension of BCS theory, capable of describing short $`\xi `$ superconductors, which is intermediate between a BCS and a Bose-Einstein condensation (BEC) picture (the “BCS–BEC crossover scenario”). At the heart of our approach is the distinction between the fermionic excitation gap $`\mathrm{\Delta }`$ and the superconducting order parameter $`\mathrm{\Delta }_{sc}`$, which distinction holds for $`T\ne 0`$, both above and below $`T_c`$. The difference parameter (or pseudogap energy scale) $`\mathrm{\Delta }_{pg}^2=\mathrm{\Delta }^2-\mathrm{\Delta }_{sc}^2`$ can be associated with low energy, weakly damped, pair excitations of finite momentum.
It is essential to note that our generalization of BCS theory is based on the particular ground state of the crossover problem: the Leggett state ($`\mathrm{\Psi }=\mathrm{\Pi }_𝐤(u_𝐤+v_𝐤c_{𝐤↑}^{\dagger }c_{-𝐤↓}^{\dagger })|0`$), which has a BCS-like character but should be thought of as applicable to arbitrary values of the attractive coupling constant $`g`$. Here $`u_𝐤,v_𝐤`$ are given by the usual BCS-like expressions in terms of the fermionic dispersion $`E_𝐤`$. The variational conditions which ensue from Leggett’s calculations are the same as those written below in Eqs. (1) and (2), for $`T=0`$. We have shown, based on the work of Kadanoff and Martin (KM), that these self consistent equations can be derived from a Green’s function decoupling scheme. We stress here the three important but subtle observations of KM: (i) to obtain BCS-like theories, it is sufficient to truncate the system of equations so that only one and two particle propagators enter and (ii) the correlation functions associated with this KM scheme – at the level of the gap equations and thermodynamics – are not the same correlation functions which enter into the electrodynamical response. (iii) The pair susceptibility $`\chi `$ which appears in the T-matrix, or pair propagator $`𝒯=g/(1+g\chi )`$, is of the form $`\chi =GG_0`$; this differs from other T-matrix schemes (where, frequently, $`\chi =GG`$), but is required to obtain the BCS-like gap equations and their thermodynamics. It should also be noted that this ground state has 100% condensation for all $`g`$, unlike that of the non-ideal Bose gas.
Without any detailed calculations, we can anticipate the form of the generalized BCS theory, which applies below $`T_c`$. As expected by a natural extension of the Leggett ground state variational conditions (and, as is consistent with BCS theory) we have
$`1+g{\displaystyle \underset{𝐤}{\sum }}{\displaystyle \frac{1-2f(E_𝐤)}{2E_𝐤}}\phi _𝐤^2`$ $`=`$ $`0,`$ (1)
$`{\displaystyle \underset{𝐤}{\sum }}\left[1-{\displaystyle \frac{ϵ_𝐤}{E_𝐤}}+{\displaystyle \frac{2ϵ_𝐤}{E_𝐤}}f(E_𝐤)\right]`$ $`=`$ $`n`$ (2)
Here $`ϵ_𝐤`$ is the bare fermion dispersion, measured from the chemical potential $`\mu `$, $`E_𝐤=\sqrt{ϵ_𝐤^2+\mathrm{\Delta }^2\phi _𝐤^2}`$ is the quasi-particle dispersion and $`\phi _𝐤`$ represents the symmetry of the pairing state. The new physics of the pseudogap phase is embodied in a third equation which represents the difference of the two energy gap parameters in terms of the number of excited pairs (with Bose distribution $`b(\mathrm{\Omega }_q)`$), as
$$\mathrm{\Delta }^2-\mathrm{\Delta }_{sc}^2=\mathrm{\Delta }_{pg}^2=a_0\underset{𝐪}{\sum }b(\mathrm{\Omega }_𝐪),$$
(3)
where the coefficient of proportionality, $`a_0`$, and the pair excitation energy, $`\mathrm{\Omega }_𝐪`$, can both be determined from the above microscopic theory. These calculations indicate that (at small, but non-zero ($`q,\mathrm{\Omega }`$)) the pair propagator can be approximated by
$$𝒯\approx a_0/[\mathrm{\Omega }-\mathrm{\Omega }_q+\mu _{pair}+i\mathrm{\Gamma }_q]$$
(4)
for the purposes of calculating the self energy $`\mathrm{\Sigma }`$ of the fermions (which self energy enters into the resulting gap equations and thermodynamics). One can interpret this theoretical approach as follows. Here we go beyond BCS to include self energy $`\mathrm{\Sigma }`$ effects associated with the incoherent, finite $`𝐪`$ pair excitations. Whereas, BCS is a mean field treatment of the particles, the present approach should be thought of as a mean field treatment of the pairs. This represents the next level in a hierarchy of (superconducting) mean field theories, which hierarchy is similar to that encountered in magnetic problems. This mean field approximation of the pairs is associated with the truncation of the equations of motion, so that pair-pair interactions are not directly present. Residual inter-boson interactions arise only indirectly via the self energy of the fermions. As a consequence, both above and below $`T_c`$, the pair dispersion is quadratic $`\mathrm{\Omega }_q=q^2/M_{pair}`$, as in a quasi-ideal Bose gas. In this way, as in the KM paper, the pair correlation function is to be distinguished from that associated with the collective modes of the order parameter.
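To make the structure of Eq. (1) concrete in the simplest limit, the sketch below solves the ordinary BCS gap equation (i.e., Eq. (1) with $`\mathrm{\Delta }_{pg}=0`$ and $`\mu `$ held fixed) for an s-wave gap with a constant density of states $`N(0)`$ up to a cutoff $`\omega _D`$. The coupling `lam` $`=|g|N(0)`$ and all numerical values are illustrative choices, not parameters of the full crossover theory.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def gap_residual(Delta, T, lam=0.3, wD=1.0):
    """lam * Int_0^wD tanh(E/2T)/E de - 1, with E = sqrt(e^2 + Delta^2).
    A root in Delta solves the s-wave BCS gap equation at temperature T."""
    f = lambda e: np.tanh(np.sqrt(e**2 + Delta**2)/(2*T)) / np.sqrt(e**2 + Delta**2)
    return lam * quad(f, 0.0, wD)[0] - 1.0

def gap(T, lam=0.3, wD=1.0):
    """Delta(T); returns 0 above Tc, where no nontrivial root exists."""
    if gap_residual(1e-12, T, lam, wD) < 0.0:
        return 0.0
    return brentq(gap_residual, 1e-12, 10*wD, args=(T, lam, wD))

print(gap(0.001))   # ~ 2 wD exp(-1/lam) in weak coupling
```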
## 2 Results
The results summarized here represent those deduced from solving the coupled equations for the pair propagator $`𝒯`$ and its single particle analogue ($`G`$), along with the fermionic number constraint.
Pseudogap onset: In our earliest work we found that the temperature $`T^{}`$ at which the Fermi liquid state first breaks down corresponds to the onset of a splitting of the single peaked (broadened Fermi liquid-like) electronic spectral function into two peaks separated by a (pseudo)gap. This pseudogap state is characterized by the presence of metastable pairs or “resonances” which effectively reduce the single particle density of states. Near, but above $`T_c`$, the latter takes the form of a broadened BCS-like structure. Moreover, between $`T_c`$ and $`T^{}`$, the wave-vector dependence of $`\mathrm{\Sigma }`$ (and of the pseudogap) departs from that of a strict $`\phi _𝐤`$ symmetry. Note, also, that in the present approach we always have $`T^{}\ge T_c`$.
Superconducting transition: As the temperature decreases from $`T^{}`$, the density of meta-stable pairs with momentum $`q`$ continues to increase, until at $`T=T_c`$, a macroscopic fraction with $`q=0`$ undergoes Bose condensation. At this temperature, the pair propagator or T-matrix diverges as expected from the Thouless criterion, so that both $`\mu _{pair}`$ and $`\mathrm{\Gamma }_{q=0}`$ vanish. Moreover, near $`T_c`$, the inverse lifetime of the $`q\ne 0`$ pairs, as well as that associated with fermion states $`\gamma `$ becomes very small. This is an important set of observations. Once the vicinity of $`T_c`$ is reached, the interaction between the long wavelength (soft) bosons and the (gapped) fermions becomes weak. This same behavior continues to hold below $`T_c`$ so that to leading order, lifetime effects can be dropped in Eqs. (1)-(3).
The dependence of $`T_c`$ on $`g`$ which results from Eqs (1)-(3) is highly non-monotonic, even for the $`s`$-wave case. At low $`g`$, the curve for $`T_c`$ vs $`g`$ follows the BCS result until $`\mathrm{\Delta }_{pg}`$ becomes sufficiently large; then $`T_c`$ decreases, with $`g`$, thereby reflecting the difficulty of forming a superconducting state in the presence of a fermionic gap. Once $`\mu `$ becomes negative, $`T_c`$ then increases with increasing $`g`$ approaching a (mass) renormalized, but otherwise ideal BEC limit. For the $`d`$-wave case, the situation is even more complex. As a result of the extended size of the $`d`$-wave pairs, the effects of the Pauli principle repulsion are more extreme, and $`T_c`$ vanishes well before the system reaches the $`\mu <0`$ limit (except in the limit of unphysically small densities). In strictly $`2d`$, we find $`T_c`$ is always zero.
Behavior of the superconducting gap equations: Equations (1)-(3) were solved to yield the results plotted in Figure 1, for the case of weak, moderate and large $`g`$ . Also indicated is a “cartoon” characterizing the excitations of the condensate at each value of $`g`$. Above $`T_c`$, where the computations are more complicated we used a simple extrapolation procedure to facilitate the numerics. As can be seen from the Figure and from Eq. (3), as the number of excited pair states increases, so does the difference between the fermionic excitation gap and the order parameter. These figures represent a natural generalization of BCS theory.
Thermodynamical consequences; mostly $`C_v`$: The pair excitations shown schematically in Figure 1 necessarily lead to corrections to the quasi-particle-derived thermodynamics of BCS theory, via new low $`T`$ power laws. Indeed, it would be difficult to imagine how fermionic quasi-particles could be relevant to the thermodynamics of the BEC limit. That these excited pair states have a nearly ideal Bose gas character is a consequence of the BCS-like, Leggett ground state; they arise from a mean field treatment of the pair propagator. One might imagine an improved approach to the strong coupling limit (which includes direct pair-pair interactions) would lead to results which are more analogous to those of the non-ideal Bose gas. It should, however, be noted that the cuprates are in the clear fermionic regime where $`\mu `$ is essentially $`E_F`$, as is consistent with our calculations. Moreover, for the $`d`$-wave case, our analysis shows that the BEC limit is virtually inaccessible, and concerns about inaccuracies of the KM approach, in this limit, appear to be less relevant. What is most important is to properly capture the physics of the BCS regime, so that small excursions from it can be considered in a controlled manner. As a result of these pair excitations, in a quasi-2d system such as the cuprates we find $`C_v=\alpha T^2+\gamma ^{}T`$. At this time, a low $`T`$, linear specific heat contribution is well documented in the cuprates, although it is not known whether it is intrinsic or extrinsic. Elsewhere in this journal, we compare our quantitative calculations for $`C_v`$ with experiment.
We have not yet completed a full calculation of the thermodynamical properties such as $`C_v`$ from above to below $`T_c`$. Nevertheless, it can be anticipated on physical grounds that $`C_v/T`$ starts to decrease once $`T^{}`$ is reached and that the overall behavior of that contribution to $`C_v`$ which is associated with the fermionic degrees of freedom, is rather similar to that of a much-broadened BCS theory above $`T_c`$. The density of states is depleted precisely at $`E_F`$, once the temperature passes below $`T_c`$. This long range order-induced depletion, is, thereby, reflected in thermodynamical properties, such as $`C_v`$ jumps which become progressively weaker the stronger the coupling $`g`$.
Behavior of the penetration depth: We find that the penetration depth at low $`T`$ behaves as $`\lambda =\lambda _0+AT+BT^{3/2}`$. Here the second and third terms represent respectively the fermionic quasi-particle and the bosonic contributions. This additional $`T^{3/2}`$ bosonic contribution may be difficult to distinguish from the “dirty” $`d`$-wave $`T^2`$ term, which was previously invoked in fits to the data, at the lowest $`T`$. A quantitative discussion of this term is given elsewhere in this journal, along with comparisons to the data. \[The hole concentration ($`x`$) dependence must also be addressed, as will be summarized below\] It should be stressed that this bosonic term is responsible for the quasi-universal scaling of the penetration depth, which we have reported. Indeed, our calculations indicate that there are systematic deviations from perfect scaling, and the general trends for these deviations appear to have been observed by the Cambridge group.
The present approach should be contrasted with the “$`d`$-wave nodal quasiparticle” picture (in its Fermi liquid rendition) where Landau parameters $`F_{1s}`$ are thought to be important below $`T_c`$. In the present context, these $`F_{1s}`$ effects are not as primary as is incorporating the new (non Fermi-liquid) physics of the pseudogap state. Ultimately Landau effects can be added here, as elsewhere, for detailed fits to data.
Phase Diagram: In order to incorporate the Mott insulator constraint, the band-width or alternatively $`n/m^{}`$ must be fit to the $`x`$-dependence of the zero temperature penetration depth. In the absence of any detailed information about the source of the pairing attraction $`g`$ (which presumably derives from Coulomb interactions in some direct or indirect form, in the $`d`$-wave channel), we assume that $`g`$ is $`x`$-independent and fit its ratio to the “bare bandwidth” via one adjustable parameter, which is chosen to optimize agreement with the entire phase diagram. With this transcription, one can see that the ratio of $`g`$ to the effective Fermi energy must increase as the insulator is approached, so that the bosonic degrees of freedom become more evident with underdoping. This prescription provides a quite good fit to $`T^{},T_c,\mathrm{\Delta }(0)`$ over the entire range of $`x`$. It also can be used as a “tool” for addressing the $`x`$ and $`T`$ dependence of a wide collection of experimental data, including the critical current $`I_c`$, Knight shift and NMR relaxation rate. All of these can be written in terms of the contributions from “three fluids”: the condensate (via $`\mathrm{\Delta }_{sc}`$), the fermionic excitations (via $`\mathrm{\Delta }`$), and the pair excitations (via $`\mathrm{\Omega }_q`$, or $`\mathrm{\Delta }_{pg}`$).
Neutron scattering and relation to condensation energy: Elsewhere, and in this journal we have shown that the incommensurate and commensurate peaks in the neutron cross section can be interpreted as reflecting the $`d`$-wave superconducting gap. Moreover, above $`T_c`$, the pseudogap will lead to a residue of these peaks, albeit broadened. Within the present picture the neutron resonance (commensurate structure) and the behavior of the specific heat with its discontinuity at and around $`T_c`$ are both consequences of the same underlying pseudogap physics, but not directly of each other. Moreover, in contrast to BCS theory, where one can extrapolate the zero field normal state to estimate the condensation energy, here we find that because of the non-Fermi liquid nature of the normal state (and its instability at $`T_c`$) such extrapolations are problematical. This (along with other features, such as coherent and incoherent contributions to ARPES and tunneling, and general finite magnetic field effects on the pseudogap) will be discussed in more detail in future work.
## 3 Acknowledgements
This work was supported by the NSF under awards No. DMR-91-20000 (through STCS) and No. DMR-9808595 (through MRSEC).
# CURRENT ISSUES FOR INFLATION
### Most inflation models create a lot of gravitinos
I will focus on papers that appeared in 1999, building on a fairly comprehensive review of earlier work, and starting with gravitino creation.<sup>1</sup> (<sup>1</sup>Updated version of a talk given at COSMO99 International Workshop on Particle Physics and the Early Universe, 27 September–2 October 1999, Trieste, Italy.) Gravitinos are created at reheating by thermal collisions . If the gravitino mass $`m_{3/2}`$ is of order $`100\text{GeV}`$, as in gravity-mediated models of SUSY breaking, these gravitinos upset nucleosynthesis unless $`\gamma T_\mathrm{R}\lesssim 10^9\text{GeV}`$, where $`T_\mathrm{R}`$ is the reheat temperature, and $`\gamma ^{-1}`$ is the increase in entropy per comoving volume (if any) between reheating and nucleosynthesis. If instead $`m_{3/2}\sim 100\text{keV}`$, as in typical gauge-mediated models of SUSY breaking, the gravitino is stable and will overclose the Universe unless $`\gamma T_\mathrm{R}\lesssim 10^4\text{GeV}`$. Only if $`m_{3/2}\gtrsim 60\text{TeV}`$, as might be the case in anomaly-mediated models of SUSY breaking, are the gravitinos from thermal collisions completely harmless.
Gravitinos will also be created after inflation , by the amplification of the vacuum fluctuation. The evolution equations for the helicity $`1/2`$ and $`3/2`$ mode functions, required to calculate this second effect, have been presented only this year. A suitably chosen helicity $`3/2`$ mode function satisfies the Dirac equation in curved spacetime, with mass $`m_{3/2}(t)`$ (the gravitino mass in the background of the time-dependent scalar field(s) which dominate the Universe after inflation). This implies that helicity $`3/2`$ gravitinos created from the vacuum are cosmologically insignificant, compared with those created from particle collisions .
The situation for helicity $`1/2`$ is more complicated, because this state mixes with the fermions involved in SUSY breaking (the super-Higgs effect). So far, the evolution equation for the mode function has been presented only for the simplest possible case, that the only relevant superfield is a single chiral superfield. Using this idealized equation, its authors estimated (see also ) that gravitinos created just after inflation have, at nucleosynthesis, the abundance
$$\frac{n}{s}\sim 10^{-2}\frac{\gamma T_\mathrm{R}M^3}{V}.$$
(1)
The abundance is specified by the ratio of $`n`$, the gravitino number density, and $`s`$, the entropy density. It is determined by $`V`$, the potential at the end of inflation, and $`M`$, the mass of the oscillating field which is responsible for the energy density just after inflation.
Entropy increase can come from a late-decaying particle, with or without thermal inflation . If there is no thermal inflation, the requirement that final reheating occurs before nucleosynthesis gives
$$\gamma T_\mathrm{R}\gtrsim 10\text{MeV}.$$
(2)
One bout of thermal inflation typically multiplies $`\gamma `$ by a factor of order $`e^{-10}`$ to $`10^{-15}`$.
Eq. (1) is not the end of the story. Rather, close examination of the idealized mode function equation reveals that gravitino creation continues until $`H`$ falls below the true gravitino mass $`m_{3/2}`$. This increases the abundance to<sup>2</sup> (<sup>2</sup>This late-time creation occurs only when SUSY is broken in the vacuum, leading to a nonzero value for $`m_{3/2}`$. It occurs because global supersymmetry then ceases to be a good approximation, every time the potential dips through zero. The models considered in have unbroken SUSY in the vacuum, so that global SUSY is a good approximation at all times, and helicity $`1/2`$ gravitino production becomes the same as Goldstino production. As is the case for any spin $`1/2`$ particle, the production of the Goldstino ceases soon after inflation ends.)
$$\frac{n}{s}\sim 10^{-2}\frac{\gamma T_\mathrm{R}M^3}{M_\mathrm{S}^4},$$
(3)
where $`M_\mathrm{S}=\sqrt{M_\mathrm{P}m_{3/2}}`$ is the intermediate scale. (The energy density is of order $`M_\mathrm{S}^4`$ when $`H\sim m_{3/2}`$.)
The idealized mode function equation, used to obtain the above results, assumes that the superfield responsible for SUSY breaking in the vacuum is the same as the superfield(s) describing inflation. This will presumably not be the case in reality. On the other hand, the non-adiabaticity responsible for gravitino creation, present in the idealized case that these two superfields are identical, is unlikely to disappear just because they are different. Therefore, Eq. (3) should provide a reasonable estimate of the gravitino abundance if reheating takes place after the epoch $`H\sim m_{3/2}`$.<sup>3</sup> (<sup>3</sup>Just after inflation ends, a significant fraction of the energy of the oscillating field may be drained off by preheating, into marginally relativistic bosons and/or fermions. If this occurs, the idealized model will certainly be invalidated for a while, but because the new energy redshifts, and is anyhow never completely dominant, the idealized model is likely to become reasonable again after a few Hubble times. If so, it will survive until reheating, defined as the epoch when practically all of the oscillating energy is converted into thermalized radiation.) If, in contrast, reheating occurs earlier, gravitino creation will certainly stop then because there is no coherently oscillating field, and the abundance will be
$$\frac{n}{s}\sim 10^{-2}\gamma \left(\frac{M}{T_\mathrm{R}}\right)^3(T_\mathrm{R}>M_\mathrm{S}).$$
(4)
Combining Eqs. (3) and (4), we see that the maximal abundance occurs if $`T_\mathrm{R}\sim M_\mathrm{S}`$, with smaller abundance if we either decrease or increase $`T_\mathrm{R}`$.
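In code form, the estimates (3)-(4) amount to the following small calculator (a direct transcription with the quoted $`10^{-2}`$ prefactor; the example values echo the gravity-mediated case discussed below):

```python
def gravitino_abundance(gamma, T_R, M, M_S):
    """n/s from Eqs. (3)-(4): Eq. (3) if reheating occurs after H ~ m_3/2
    (T_R < M_S), Eq. (4) if it occurs earlier (all energies in GeV)."""
    if T_R < M_S:
        return 1e-2 * gamma * T_R * M**3 / M_S**4
    return 1e-2 * gamma * (M / T_R)**3

# gravity-mediated example, saturating the nucleosynthesis bound n/s ~ 1e-13:
print(gravitino_abundance(gamma=1e-11, T_R=1e10, M=1e10, M_S=1e10))
```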
In typical models of inflation and reheating, these gravitino abundances are huge compared with the abundance from thermal collisions, and lead to far stronger constraints on $`T_\mathrm{R}`$ and $`\gamma `$. Consider first the case of gravity-mediated supersymmetry breaking, corresponding to $`m_{3/2}\sim 100\text{GeV}`$ and $`M_\mathrm{S}\sim 10^{10}\text{GeV}`$. Then, nucleosynthesis requires $`n/s\lesssim 10^{-13}`$, and
$`\gamma `$ $`\lesssim `$ $`10^{-11}\left({\displaystyle \frac{10^{10}\text{GeV}}{T_\mathrm{R}}}\right)\left({\displaystyle \frac{10^{10}\text{GeV}}{M}}\right)^3(T_\mathrm{R}\lesssim 10^{10}\text{GeV})`$ (5)
$`\gamma `$ $`\lesssim `$ $`10^{-11}\left({\displaystyle \frac{T_\mathrm{R}}{M}}\right)^3(10^{10}\text{GeV}\lesssim T_\mathrm{R}).`$ (6)
Alternatively, consider the case of gauge-mediated SUSY breaking, with the favoured values $`m_{3/2}\sim 100\text{keV}`$ and $`M_S\sim 10^7\text{GeV}`$. Then the gravitino is stable, and the requirement that it should not overclose the Universe gives $`n/s\lesssim 10^{-5}`$, and
$`\gamma `$ $`\lesssim `$ $`10^{-3}\left({\displaystyle \frac{10^7\text{GeV}}{T_\mathrm{R}}}\right)\left({\displaystyle \frac{10^7\text{GeV}}{M}}\right)^3(T_\mathrm{R}\lesssim 10^7\text{GeV})`$ (7)
$`\gamma `$ $`\lesssim `$ $`10^{-3}\left({\displaystyle \frac{T_\mathrm{R}}{M}}\right)^3(10^7\text{GeV}\lesssim T_\mathrm{R}).`$ (8)
These constraints are very strong in most models of inflation. For instance, the popular $`D`$-term inflation model (and other models) requires $`V^{1/4}\sim M\sim 10^{15}\text{GeV}`$. Then, Eqs. (2), (5) and (6) require at least one bout of thermal inflation if SUSY-breaking is gravity-mediated. If instead it is gauge-mediated, Eqs. (2), (7) and (8) require $`T_\mathrm{R}>10^{11}\text{GeV}`$, and again entropy production (though not necessarily thermal inflation). The only popular models where the constraints are completely ineffective are those with soft supersymmetry breaking during inflation, leading to $`M`$ perhaps of order $`m_{3/2}`$. Such models include modular inflation , and hybrid inflation with soft supersymmetry breaking (using a tree-level or loop-corrected potential).
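(The gauge-mediated number follows directly: combining Eq. (2) with Eq. (8) gives $`10^{-2}\text{GeV}\lesssim \gamma T_\mathrm{R}\lesssim 10^{-3}T_\mathrm{R}^4/M^3`$, so that $`T_\mathrm{R}\gtrsim (10\text{GeV}M^3)^{1/4}\approx 3\times 10^{11}\text{GeV}`$ for $`M\sim 10^{15}\text{GeV}`$.)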
### What sort of field is the inflaton?
The rest of this review deals with various issues in inflation model-building. At the most primitive level, a model of inflation is simply a specification of the form of the potential, but one normally requires also that the form of the potential looks reasonable in the context of particle physics. In particular, one might be concerned if the field values are big compared with the ultra-violet cutoff $`\mathrm{\Lambda }_{\mathrm{UV}}<M_\mathrm{f}<M_\mathrm{P}`$.<sup>4</sup> (<sup>4</sup>The fundamental quantum gravity scale $`M_\mathrm{f}`$ is less than the 4-dimensional Planck scale $`M_\mathrm{P}`$ if there are large extra dimensions.) We are, of course, talking about the values of the canonically-normalized fields, with the origin at a fixed point of the symmetries. However, string theory gives us different kinds of scalar field. There are, indeed, the ordinary fields (matter fields) whose values should be small compared with $`\mathrm{\Lambda }_{\mathrm{UV}}`$, if the form of the potential is to be under control. Most models of inflation have been built with such fields in mind, though all too often one notices at the end of the calculation that the magnitude of the inflaton field is at the Planck scale or bigger.
On the other hand, there are also moduli, which determine things like the gauge couplings and the size of extra dimensions. String theory can give guidance about the form of their potential at field values of order $`M_\mathrm{P}`$, even if $`M_\mathrm{f}`$ is much less than $`M_\mathrm{P}`$ owing to the presence of large extra dimensions. It is marginally flat enough to support inflation , a detailed investigation being necessary to see whether viable inflation occurs in a given model . Yet more exotic fields might be contemplated. For instance, it has been suggested that the inflaton corresponds to the distance between $`D`$-branes, which are coincident now but were separated at early times. The canonically normalized inflaton field is $`\varphi \sim M_\mathrm{f}^2r`$, where $`r`$ is the distance between the branes and $`M_\mathrm{f}`$ is the fundamental quantum gravity scale. The regime $`r\gtrsim M_\mathrm{f}^{-1}`$ presumably required by quantum gravity now corresponds to $`\varphi \gtrsim M_\mathrm{f}`$. (From this viewpoint it is not clear how to justify also the regime $`\varphi \ll M_\mathrm{f}`$, invoked in .) At present, we do not know which type of model Nature has chosen. On the other hand, future measurements of the spectral index will confirm or rule out most of the forms of the inflationary potential, that are natural in the context of matter fields .
### Hybrid inflation needs fairly large field values
During hybrid inflation, the slowly-rolling inflaton field $`\varphi `$ couples to a second field $`\chi `$, holding the latter at the origin during inflation. Ignoring loop corrections, both $`\varphi `$ during inflation, and the vev $`\chi `$ can be taken to be very small on the Planck scale. Somewhat remarkably, it has been shown recently that this is no longer the case with the loop correction included. For instance, it is found that in the usual case that the hybrid inflation is supposed to give the primordial curvature perturbation, $`\varphi `$ during inflation and/or $`\chi `$ must be at least $`10^9\text{GeV}`$. While far below the Planck scale, this number is far above the electroweak scale. This means that hybrid inflation, with matter fields, cannot work in the context of TeV-scale quantum gravity. Also, if $`\chi `$ is identified with an electroweak Higgs field, $`\varphi `$ has to be bigger than $`M_\mathrm{P}`$, even if the curvature perturbation comes from an earlier era of inflation. This second result calls into question the viability of an otherwise attractive model of electroweak baryogenesis.
### Extra dimensions
The growth industry this year has been the possibility that we live on a three-dimensional brane, with $`n\ge 1`$ large extra dimensions. I will confine my remarks to the case $`n>1`$, because the situation for the case $`n=1`$ is changing too rapidly to say anything useful.
It is assumed that Einstein gravity holds in the $`4+n`$ dimensions with some Planck scale $`M_\mathrm{f}`$. To avoid obvious conflict with collider experiments one needs at least $`M_\mathrm{f}\sim \text{TeV}`$, and this extreme case is the one that has received the most attention. With $`n>1`$, and the extra dimensions stabilized, Einstein gravity holds in our 4-dimensional spacetime on scales bigger than the radius $`R`$ of the extra dimensions. The 4-dimensional Planck scale $`M_\mathrm{P}`$ is given by $`M_\mathrm{P}^2\sim R^nM_\mathrm{f}^{2+n}`$. The thickness of our brane is presumably of order $`M_\mathrm{f}^{-1}`$. Then, in the regime where the $`4+n`$ dimensional energy density is much less than $`M_\mathrm{f}^{4+n}`$ (ie., well below the quantum gravity scale) the energy density on our brane is much less than $`M_\mathrm{f}^4`$ . Assuming that the extra dimensions are stabilized, the Hubble parameter in this regime is given by $`3H^2=\rho /M_\mathrm{P}^2\ll R^{-2}`$. We learn that, well below the quantum gravity regime, Einstein gravity will correctly describe the evolution of the Robertson-Walker Universe, through the usual Friedmann equation .
While cosmological scales are leaving the horizon during inflation, the extra dimensions must indeed be stabilized, since significant variation would spoil the observed scale independence of the spectrum of the primordial curvature perturbation. The simplest hypothesis is that they remain stabilized thereafter, so that they have their present value while cosmological scales leave the horizon. In that case, the mass of the inflaton during inflation (not necessarily in the vacuum) must be tiny, $`m_\varphi \lesssim M_\mathrm{f}^2/M_\mathrm{P}`$. This mass presumably requires protection from supersymmetry , but sufficient protection is problematic because the inflaton has to communicate with the visible sector so as to reheat, while in that sector the chiral supermultiplets have TeV mass splitting. Leaving aside that problem, new as opposed to hybrid inflation may be quite viable . Another proposal is to use the field corresponding to the distance between D-branes, though this does not seem to give a viable curvature perturbation.
An alternative is to assume that, while cosmological scales are leaving the horizon and the curvature perturbation is generated, the extra dimensions are stabilized with sizes much smaller than at present. One still needs a second, short period of inflation to get rid of the dangerous cosmological relics (moduli) associated with the oscillation of the extra dimension about its present value. (Indeed, it has been shown that when entropy production finally ends, the moduli must have their present size, with an accuracy $`\sim 10^{-14}(T_\mathrm{R}/10\text{MeV})^{3/2}`$.) This late inflation might be thermal or slow-roll , thermal having the advantage that it allows a bigger inflaton mass (though one that will still require protection from supersymmetry ).
### Some other recent work
Many other papers on inflation have appeared in 1999. Some of them address the problem of keeping the inflationary potential flat, in the face of supergravity corrections. For instance, one work presents a no-scale type model, while several others pursue the paradigm of assisted inflation. There has been further consideration of hybrid inflation with a running mass . Finally, a completely new paradigm of inflation has been proposed , in which the coefficient of the kinetic term of the inflaton passes through zero.
# Making use of geometrical invariants in black hole collisions.
## I Introduction
Einstein’s theory of gravity demands the equivalence of all coordinate representations of gravitational dynamics. This coordinate gauge invariance makes general relativity very simple and beautiful because there are no special families of observers to be considered. On the other hand, though, it can also make it difficult to distinguish whether differences observed in two spacetime representations are true physical (geometrical) differences or gauge differences. A natural way to limit confusion between gauge and physical differences is to work, whenever possible, with geometrically defined scalars, which are invariant under (passive) coordinate transformations. Geometric curvature invariants have a long productive history in the classification and distinction of exact analytic solutions of Einstein’s equations, particularly for the algebraic Petrov classification and characterization of curvature singularities . To a lesser extent, curvature invariants have also been applied in the field of numerical relativity, primarily for code testing when evolving exact solutions numerically. Here gauge invariant methods have the distinct advantage that they can be applied to evolutions using numerically generated coordinates which are not understood analytically. What seems to have received less attention is the application of curvature invariants in a region of spacetime which is perturbatively close to an exact solution of Einstein’s equations.
Black hole perturbation theory has recently generated much interest as a model for the late stages of a black hole collision spacetime . When two black holes are close to each other one can simply treat the problem as a single distorted black hole that ‘rings down’ into its final equilibrium state. Perturbative calculations applied in this final regime have become an important tool in the verification and interpretation of numerically generated results . More ambitiously, the perturbative approach can be used in conjunction with full scale 3D numerical relativity simulations to directly “take over” and continue a previously computed numerical black hole spacetime. It is in the context of such an approach to black hole collisions that the authors expect to make special use of curvature invariants.
In setting up such a perturbative approach one encounters a family of Cauchy data sets which encode distorted black holes, such as a family of slices from the late-time region of a numerically calculated black hole coalescence. We would expect, on physical grounds, that for some of the initial data sets, perturbation theory can provide an accurate model of the corresponding future-evolved spacetime. We would like to have a working criterion for when we can expect perturbation theory to be effective, based only on numerical data.
Motivated by this purpose we introduce an invariant quantity $`𝒮`$, which geometrically measures local deviation from algebraic speciality, and which we expect to be a very useful tool for numerical and perturbative work involving near-stationary regions of black hole spacetimes. Other indicators of the potential success of perturbation theory, like for instance the size of the distortions of the apparent horizon, have been previously applied to numerical results , but these require establishing a coordinate system by an intuitively reasonable method and, in most cases, are computationally expensive and only applicable when perturbations are reasonably small. Our invariant index is simple, elegant and has none of these shortcomings. In fact, it is not at all limited to perturbation studies, but can be applied directly to full 3D numerical evolutions to directly explore the transition from the nonlinear to the linear regime, or for invariant interpretation of numerical spacetimes. In all the well-known examples of axisymmetric black hole collisions that we studied, $`𝒮`$ has already proven to be very useful.
## II The speciality index
Curvature invariants are part of the standard analysis of exact solutions of Einstein’s equations. From the Weyl tensor, $`C_{abcd}`$, which carries information about the gravitational fields in the spacetime, one can algebraically derive two complex curvature invariants usually called $`I`$ and $`J`$. These are essentially the square and cube of the self-dual part, $`\stackrel{~}{C}_{abcd}=C_{abcd}+(i/2)ϵ_{abmn}C_{cd}^{mn}`$, of the Weyl tensor:
$$I=\stackrel{~}{C}_{abcd}\stackrel{~}{C}^{abcd}\quad \mathrm{and}\quad J=\stackrel{~}{C}_{abcd}\stackrel{~}{C}_{mn}^{cd}\stackrel{~}{C}^{mnab}.$$
(1)
These scalars are useful in the algebraic classification of exact solutions. The different algebraic Petrov types are distinguished by the degeneracies among the (up to) four principal null directions (PNDs) associated pointwise with the Weyl tensor. Type I is the algebraically general case with four distinct principal null directions. The other types II, III, D, and N have at least two coincident PNDs and are referred to as algebraically special. A notable characteristic common to all stationary isolated black hole solutions of general relativity is that they are all algebraically special, of Type D, with two pairs of coincident PNDs at each point. We contend however that for interesting cases involving nontrivial dynamics, perturbed black hole spacetimes are generically not algebraically special, but rather of the algebraically general type I.
Significantly, the invariants $`I`$ and $`J`$ satisfy the relation $`I^3=27J^2`$ if and only if the Weyl tensor is algebraically special . Since the Weyl tensor in a perturbed black hole spacetime is not expected to be algebraically special, we expect $`I^3\ne 27J^2`$. Our proposal, then, is to use this relation to construct an invariant index for algebraic speciality as a local measure of the size of the distortions from some background black hole. This violation can in general be quantified by considering the following speciality index
$$𝒮=\frac{27J^2}{I^3}.$$
(2)
For the unperturbed algebraically special background Kerr spacetime $`𝒮=1`$. In the perturbed spacetime we generically expect $`𝒮=1+\mathrm{\Delta }𝒮`$, and the size of the deviation $`\mathrm{\Delta }𝒮\ne 0`$ can be used as a guide to predicting the effectiveness of black hole perturbation theory.
The theory of perturbations on a background Kerr spacetime was worked out first by Teukolsky and has been extensively studied by many authors . In this context it is natural to use the Newman-Penrose decomposition of the Weyl tensor into five complex quantities, $`\psi _0`$, $`\psi _1`$, $`\psi _2`$, $`\psi _3`$, and $`\psi _4`$, defined with respect to some choice of a null tetrad basis. In terms of the Weyl components, for an arbitrary tetrad choice:
$`I`$ $`=`$ $`3\psi _2^2-4\psi _1\psi _3+\psi _4\psi _0,`$ (3)
$`J`$ $`=`$ $`-\psi _2^3+\psi _0\psi _4\psi _2+2\psi _1\psi _3\psi _2-\psi _4\psi _1^2-\psi _0\psi _3^2.`$ (4)
For any Type D spacetime such as Kerr, a tetrad can be conveniently chosen such that only $`\psi _2`$ is non-vanishing. By expressing $`𝒮`$ with respect to any perturbation of such a tetrad we find that
$$𝒮=1-3ϵ^2\frac{\psi _0^{(1)}\psi _4^{(1)}}{(\psi _2^{(0)})^2}+𝒪(ϵ^3),$$
(5)
where $`ϵ`$ is a perturbation parameter, and the superscripts $`(0)`$ and $`(1)`$ stand respectively for background and first order pieces of the perturbed Weyl scalars. Thus, the lowest order term in the deviation is second order in the perturbation parameter $`ϵ`$. In the perturbative context this means that, when the speciality index is significantly different from unity, one can see that a potentially second order quantity has become significant, and the first order theory should no longer be trusted. For the case of Schwarzschild, Eq. (5) can be reexpressed in terms of the gauge invariant Moncrief functions.
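In practice $`𝒮`$ is evaluated pointwise from the five complex Newman-Penrose scalars. The following minimal sketch (our own illustration, not the authors’ code; the numerical values of the scalars are arbitrary placeholders) computes $`𝒮`$ from Eqs. (2)–(4) and checks the expansion (5) for a perturbation with only $`\psi _0^{(1)}`$ and $`\psi _4^{(1)}`$ switched on:

```python
import numpy as np

def speciality_index(psi0, psi1, psi2, psi3, psi4):
    """Curvature invariants I, J and the speciality index S = 27 J^2 / I^3."""
    I = 3.0*psi2**2 - 4.0*psi1*psi3 + psi4*psi0
    J = -psi2**3 + psi0*psi4*psi2 + 2.0*psi1*psi3*psi2 - psi4*psi1**2 - psi0*psi3**2
    return 27.0*J**2 / I**3

psi2_0 = -1.0 + 0.3j                                  # placeholder background value
print(speciality_index(0.0, 0.0, psi2_0, 0.0, 0.0))   # exactly 1 for Kerr-like data

eps, psi0_1, psi4_1 = 1e-3, 0.7 - 0.2j, 0.4 + 0.5j    # placeholder perturbation
S = speciality_index(eps*psi0_1, 0.0, psi2_0, 0.0, eps*psi4_1)
print(S, 1.0 - 3.0*eps**2*psi0_1*psi4_1/psi2_0**2)    # agree to O(eps^3)
```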
## III Example applications
Since $`𝒮=1`$ in the background, a reasonable criterion for when we can expect perturbation theory to be applicable is that $`𝒮`$ should differ from one by no more than a “factor of two”. In regions where this condition is violated we can expect significant violations of the perturbative dynamics. As test cases, to verify our interpretation of $`𝒮`$ we have considered some well-studied cases of axisymmetric black hole collisions. We find it useful to consider the location of such violating regions with respect to the background black hole horizon and the perturbative “potential barrier.” As is well-known, perturbative black hole dynamics are governed by a wave equation with a potential concentrated in the vicinity of $`r=2M`$ in the isotropic coordinates used for our examples. The potential has the effect of largely preventing waves from crossing this region.
The examples we consider here all correspond to even-parity modes implying that $`𝒮`$ is real, so we leave out reference to its imaginary component in the following discussion. We first consider the case of two initially resting equal-mass black holes. This initial configuration is represented by the equal-mass time-symmetric Misner datasets parameterized by $`\mu _0`$ as a measure of the initial separation. In this case, Schwarzschild black hole perturbation theory has been shown to provide a very good estimate of the total radiated energy for cases with $`\mu _0<1.8`$ even though the black holes share a common apparent horizon only when $`\mu _0<1.36`$. Comparisons with numerical calculations and second order calculations have demonstrated that the linear perturbation approach overestimates the radiation energy by only a factor of two up to $`\mu _0=1.8`$ but beyond that the differences grow quickly. Second order perturbations have been applied in this case as a useful tool for assessing the domain of validity of perturbation theory. We can obtain similar conclusions by applying our speciality index test. Fig. 1 shows the initial values of $`𝒮`$ along the equator for Misner data, at several initial separations. In isotropic coordinates, as used in previous studies, the horizon of the background black hole is located at $`r=0.5M`$ and the location of the perturbative “potential barrier” is near $`r=2.0M`$. First consider the case $`\mu _0=1.2`$. While there is a small region near the horizon where the criterion fails, this region is well inside the potential barrier which should prevent any local errors in the perturbative dynamics from having a significant effect on the outgoing radiation. In the marginal $`\mu _0=1.8`$ case, the error in $`𝒮`$ begins to be significant near the potential barrier, and the radiation should be somewhat affected. For larger values of $`\mu _0`$ the violation is significant in the vicinity of the potential barrier itself, invalidating the perturbation dynamics. The sudden drop in the value of $`𝒮`$ in these cases makes our interpretations insensitive to the choice of a “factor of two” cutoff.
Another well-studied case is the collision of black holes with non-vanishing initial linear momentum $`P`$. Here again, first order perturbation theory has been very successful, unexpectedly providing a good estimate of the radiation energies even for large values of $`P`$. We consider configurations corresponding to a fixed $`\mu _0=1.5`$, for various values of $`P`$. The corresponding initial values of $`𝒮`$ are shown in Fig. 2. The presence of momentum in the initial slice introduces qualitatively different features to the initial values of $`𝒮`$, which now exhibit a region of $`𝒮>1`$ that falls off more slowly at large $`r`$. The question in this case is not the location of this region, but rather the magnitude of the violation. The violation seems to grow quadratically with $`P`$ and reaches a factor of two just after $`P=M`$, suggesting that perturbation theory should be successful up to $`P\sim M`$. The prediction of our prescription is consistent with the 10% accuracy of the radiation energies shown in Ref. for $`P<M`$, but does not explain the mere doubling of this discrepancy out to $`P=3M`$.
In the above analysis we have considered the location of the $`𝒮\ne 1`$ regions in relation to the local properties of the “background” black hole. The specification of the background metric in this standard treatment is not itself gauge independent though. We can evaluate the applicability of a perturbative treatment in a gauge invariant way by utilizing a gauge-invariant specification of the background. A simple way to do this is to specify the background Schwarzschild radial coordinate by $`r_{Schw}^6=3M^2/I`$. The locations of the horizon and potential barrier are then found with respect to this coordinate. A two dimensional representation of the results for Misner initial data is given in Fig. 3. These plots show three curves, representing the locations of the background horizon, the potential barrier, and the $`𝒮=0.5`$ surfaces in a quadrant of the $`xz`$-plane. The qualitative features observed in the preceding interpretation of $`𝒮`$ for the Misner problem are reproduced precisely in this more complete, fully invariant perspective.
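Operationally, this gauge-invariant background specification amounts to evaluating $`I`$ pointwise and inverting $`r_{Schw}^6=3M^2/I`$. A small sketch (our own illustration, assuming the standard result that for Schwarzschild $`\psi _2=-M/r^3`$ in a principal tetrad, so $`I=3M^2/r^6`$):

```python
import numpy as np

M = 1.0
def I_schwarzschild(r):            # psi2 = -M/r^3  =>  I = 3 psi2^2 = 3 M^2 / r^6
    return 3.0*M**2 / r**6

def r_invariant(I):                # invert r_Schw^6 = 3 M^2 / I
    return (3.0*M**2 / np.abs(I))**(1.0/6.0)

r = np.linspace(2.0, 10.0, 5)
print(r_invariant(I_schwarzschild(r)))   # recovers r, however the grid is labeled
```

In a numerical spacetime one would apply `r_invariant` to the measured $`I`$, then mark the surfaces $`r_{Schw}=2M`$ (horizon) and $`r_{Schw}\approx 3M`$ (potential barrier) for comparison with the $`𝒮=0.5`$ surfaces.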
## IV Discussion
We have identified a gauge invariant quantity which provides a particularly interesting local reduction of the geometric data in a (numerical) black hole spacetime. We foresee three areas of application where $`𝒮`$ should be a useful quantity: perturbation studies, which we have discussed in detail, numerical spacetime interpretation, and numerical code testing. In the context of perturbation theory we have demonstrated that $`𝒮`$ provides an invariant criterion for predicting when perturbation theory might provide a reliable approximation for part of a black hole spacetime. This method has the advantage over other estimates such as apparent horizon formation that it is genuinely gauge invariant. In practice it can serve as a simple alternative to second order perturbation theory, but our prediction does not provide such a direct validation of a perturbative calculation. Knowing that $`𝒮\approx 1`$ does not, for example, identify the appropriate background spacetime, a vital step in any application, but it is a very useful predictor of when perturbation theory may be a useful alternative to numerical simulation in a generic black hole spacetime. The quantity $`𝒮`$ itself is not restricted to perturbation studies but should also be useful in the general interpretation of numerical spacetimes. As an example, looking at $`𝒮`$ in numerical simulations, Misner data evolve after a short time to qualitatively resemble the black hole data with inward momentum discussed above. After longer evolutions in this family $`𝒮`$ tends to approach unity in the horizon/potential region with evidence of radiation moving away in both directions. In this context we also note that the presence of an isolated horizon, a recent construct of growing theoretical interest in black hole spacetimes, implies locally that $`𝒮=1`$. Lastly, because $`𝒮`$ is an invariant with often predictable behavior, it can be very useful in numerical code testing. In Kerr spacetimes for example $`𝒮=1`$ exactly in any coordinates. Also in typical cases of Bowen-York binary black hole data that we have looked at, $`𝒮`$ falls off quickly toward unity away from the black holes. In light of its simplicity and straightforward significance, we expect $`𝒮`$ to become a standard, very useful tool for analyzing numerically generated spacetimes and interpreting their physical content.
###### Acknowledgements.
We thank our colleagues at AEI, Richard Price and Jorge Pullin for their support and helpful suggestions. This work was supported by AEI. M. C. holds a Marie-Curie Fellowship (HPMF-CT-1999-00334). All our numerical computations have been performed with a full 3D code, Cactus, on an 8 GB SGI Origin 2000 with 32 processors at AEI.
# Deformations and dilations of chaotic billiards, dissipation rate, and quasi-orthogonality of the boundary wavefunctions
## Abstract
We consider chaotic billiards in $`d`$ dimensions, and study the matrix elements $`M_{nm}`$ corresponding to general deformations of the boundary. We analyze the dependence of $`|M_{nm}|^2`$ on $`\omega =(E_n-E_m)/\hbar `$ using semiclassical considerations. This relates to an estimate of the energy dissipation rate when the deformation is periodic at frequency $`\omega `$. We show that for dilations and translations of the boundary, $`|M_{nm}|^2`$ vanishes like $`\omega ^4`$ as $`\omega \to 0`$, for rotations like $`\omega ^2`$, whereas for generic deformations it goes to a constant. Such special cases lead to quasi-orthogonality of the eigenstates on the boundary.
Chaotic cavities (billiards) in $`d`$ dimensions are prototype systems for the study of classical chaos and its fingerprints on the properties of the quantum-mechanical eigenstates. As the properties of static billiards are beginning to be understood, questions naturally arise about deformations and their time dependence. It is perhaps not widely appreciated that certain deformations are very special, and that there is a close connection between the quantum and classical mechanics of such deformations in the case of ergodic systems. In this paper, which takes a fresh approach to these issues, we explore a special class of deformations which do not ‘heat’ in the limit of small frequencies. We also establish a rather surprising relationship to a very successful numerical technique for finding billiard eigenfunctions.
We start with the one-particle Hamiltonian $`\mathcal{H}_0(𝐫,𝐩)=𝐩^2/2m+V(𝐫)`$, where $`m`$ is the particle mass, $`𝐫`$ is the position of the particle inside the cavity and $`𝐩`$ is the conjugate momentum. We take the limit $`V(𝐫)\to \mathrm{}`$ outside the cavity, with $`V=0`$ inside, corresponding to Dirichlet boundary conditions. In this limit, the Hamiltonian is completely defined by the boundary shape. The volume of the cavity we call $`𝖵\sim 𝖫^d`$. Upon quantization a second length scale $`\lambda _\text{B}\equiv 2\pi /k`$ appears, where $`k`$ is the wavenumber. For simple geometries the typical time between collisions with the walls is $`\tau _{\text{col}}\sim 𝖫/v`$, where $`v`$ is the particle speed. The energy is $`E=\frac{1}{2}mv^2`$. Upon quantization the eigenenergies are $`E_n=(\hbar k_n)^2/2m`$.
A powerful tool for the classical analysis is known as the ‘Poincaré section’. Rather than following trajectories in the full $`(𝐫,𝐩)`$ phase-space, it is much more efficient to record only successive collisions with the boundary. This way we can deal with a canonical transformation (map) which is defined on a $`2(d-1)`$ dimensional phase space. A similar idea is used in quantum mechanics: by Green’s theorem it is clear that all the information about an eigenstate $`\psi (𝐫)`$ is contained in the boundary normal derivative function $`\phi (𝐬)\equiv 𝐧\cdot \nabla \psi `$, where $`𝐬`$ is a $`(d-1)`$ dimensional coordinate on the boundary, and $`𝐧(𝐬)`$ the outward unit normal vector.
However, unlike the classical case, the reduction to the boundary is not satisfactory. One cannot define an associated Hilbert space that consists of the boundary functions. In particular, the orthogonality relation $`\langle \psi _n|\psi _m\rangle =\delta _{nm}`$ does not have an exact analog on the boundary. Still, the boundary functions ‘live’ in an effective Hilbert space of dimension $`\sim (𝖫/\lambda _\text{B})^{d-1}`$, and it has been realized that the following quasi-orthogonality relation holds. Define an inner product
$`M_{nm}\equiv {\displaystyle \frac{1}{2k^2}}{\displaystyle \oint \phi _n(𝐬)\phi _m(𝐬)(𝐧\cdot 𝐃)𝑑𝐬}`$ (1)
where $`𝐃(𝐬)=𝐫(𝐬)`$ is the displacement field corresponding to dilation (about an arbitrary origin), and $`k_n\approx k_m\approx k`$. It is well known that the normalization condition $`\langle \psi _n|\psi _n\rangle =1`$ implies $`M_{nn}=1`$. We give a proof of this exact result in the Appendix. On the other hand the off-diagonal elements are only approximately zero .
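As a concrete check of the diagonal identity, Eq. (1) can be evaluated for a billiard with known eigenfunctions. The sketch below is our own illustration (the rectangle is integrable rather than chaotic, so it probes only the normalization $`M_{nn}=1`$, not the quasi-orthogonality of off-diagonal elements); it uses the eigenstates $`\psi (x,y)=(2/\sqrt{ab})\mathrm{sin}(n_1\pi x/a)\mathrm{sin}(n_2\pi y/b)`$ of an $`a\times b`$ rectangle with the dilation field $`𝐃=𝐫`$ about the corner at the origin:

```python
import numpy as np

a, b = 1.0, 1.3
def M_diag(n1, n2, N=20000):
    """Evaluate Eq. (1) with D = r for a rectangle eigenstate; should give 1."""
    k2 = (n1*np.pi/a)**2 + (n2*np.pi/b)**2
    x = (np.arange(N) + 0.5) * a / N      # midpoint grids along the two far sides
    y = (np.arange(N) + 0.5) * b / N
    # Only the sides x = a and y = b contribute: there n.r = a and n.r = b,
    # while on the two sides through the origin n.r = 0.
    phi_xa = (2/np.sqrt(a*b)) * (n1*np.pi/a) * np.sin(n2*np.pi*y/b)  # |dpsi/dn| on x = a
    phi_yb = (2/np.sqrt(a*b)) * (n2*np.pi/b) * np.sin(n1*np.pi*x/a)  # |dpsi/dn| on y = b
    integral = a * np.sum(phi_xa**2) * (b/N) + b * np.sum(phi_yb**2) * (a/N)
    return integral / (2.0 * k2)

print(M_diag(3, 5))    # -> 1.0000... for any (n1, n2)
```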
The main purpose of this Letter is to study the band profile of the matrix $`M_{nm}`$ for a general displacement field $`𝐃(𝐬)`$. In particular we want to understand why for special choices of $`𝐃(𝐬)`$, notably dilations, we have quasi-orthogonality. Later we will explain that $`M_{nm}`$ can be interpreted as the matrix element of a perturbation $`\delta \mathcal{H}`$ associated with a deformation of the boundary, such that $`(𝐧\cdot 𝐃)\delta x`$ is the normal displacement of a wall element, given a control parameter $`\delta x`$. In the following two paragraphs we explain the main motivations for our study.
The matrix elements $`|M_{nm}|^2`$ determine the rate of irreversible energy absorption by the particle (i.e. dissipation) due to external driving. Here ‘external driving’ means time-dependent deformation of the boundary. Having exceptionally small $`|M_{nm}|^2`$ for special choices of $`𝐃(𝐬)`$, such as dilations, translations and rotations, implies exceptionally small dissipation rate (‘non-heating’ effect). This observation goes against the naive kinetic picture that the rate of heating should not depend on how we ‘shake’ the boundary. The special nature of translations and rotations for $`\omega =0`$ has been recognized in the context of nuclear dissipation . Our present approach allows us to analyze the non-heating effect present for dilations as well, and provide the form of the low-frequency response of the system in all three cases (dilations, translations and rotations).
There is another good motivation to study this issue. Recently, a powerful technique for finding clusters of billiard eigenstates and eigenenergies has been found by Vergini and Saraceno , with a speed typically $`10^3`$ times greater than previous methods. This efficiency relies on the above quasi-orthogonality relation, the associated numerical error being given by the deviation of $`M_{nm}`$ from $`\delta _{nm}`$. Those authors tried to establish quasi-orthogonality using the identity $`M_{nm}=\delta _{nm}+[(k_m^2-k_n^2)/2k^2]B_{nm}`$, with $`B_{nm}\equiv \langle \psi _n|𝐫\cdot \nabla |\psi _m\rangle `$, and by assuming that $`|B_{nm}|\sim O(1)`$. However, a naive random wave argument would predict $`|B_{nm}|\sim O(𝖫/\lambda _\text{B})^{(d-1)/2}`$.
Fig. 2 displays the band profile $`|M_{nm}|^2`$ for three choices of the displacement field $`𝐃(𝐬)`$. The band profile can be regarded as either a function of $`\kappa =k_n-k_m`$, or equivalently of $`\omega =(E_n-E_m)/\hbar `$, related via $`\omega =v\kappa `$. The three band profiles differ in their peak structure, but also in their $`\omega \to 0`$ limits: notably for dilations $`|M_{nm}|^2`$ vanishes in this limit. Our aim is to understand the overall $`\omega `$ dependence, and the small $`\omega `$ behavior in particular. For the calculation of band profile we used all 451 eigenstates of the 2D quarter-stadium (see Fig. 1) lying between $`398<k<402`$, found using the method of Vergini and Saraceno . For this particular chaotic shape a remarkably good basis set (size of order $`𝖫/\lambda _\text{B}`$) of real and evanescent plane waves has been devised , which allows the tension error (defined as the boundary integral of $`\psi ^2`$) to be typically $`3\times 10^{-11}`$ in our calculation (maximum $`2\times 10^{-10}`$ for any state). The resulting errors in $`\phi `$ manifest themselves only when $`|M_{nm}|^2`$ reaches its lowest reliable value $`\sim 10^{-10}`$, visible as bottoming-out in the leftmost point of the inset of Fig. 2.
In order to understand the quantum-mechanical band profile, we can first assume that the eigenstates look like uncorrelated random waves. A lengthy but straightforward calculation leads to the result
$`|M_{nm}|^2\approx {\displaystyle \frac{2\langle |\mathrm{cos}(\theta )|^3\rangle }{\mathrm{\Omega }_d}}{\displaystyle \frac{\lambda _\text{B}^{d-1}}{𝖵^2}}{\displaystyle \oint (𝐧\cdot 𝐃)^2𝑑𝐬},`$ (2)
where the geometric factors for $`d=2,3,\mathrm{}`$ are $`\mathrm{\Omega }_d=2\pi ,4\pi \mathrm{}`$ and $`\langle |\mathrm{cos}(\theta )|^3\rangle =4/(3\pi ),1/4,\mathrm{}`$. If the displacement field is normalized such that $`|𝐃|\sim 𝖫`$, then we get $`|M_{nm}|^2\sim (\lambda _\text{B}/𝖫)^{d-1}`$. Note that the above result implies that $`|M_{nm}|^2`$ is independent of $`\omega `$.
To go beyond the random-wave estimate (2), we adopt a more physically appealing point of view. We include a parametric deformation of the billiard shape via the Hamiltonian $`\mathcal{H}(𝐫,𝐩;x)=𝐩^2/2m+V\text{(}𝐫-x𝐃(𝐫)\text{)}`$, where $`x`$ controls the deformation. Note that the displacement field $`𝐃`$ is regarded as a function of $`𝐫`$. The normal displacement of a wall element is $`(𝐧\cdot 𝐃)x`$. The position of a particle in the vicinity of a wall element is conveniently described by $`Q=(𝐬,z)`$, where $`𝐬`$ is a surface coordinate and $`z`$ is a perpendicular ‘radial’ coordinate. We set $`V(𝐫)=V_0`$ outside the undeformed billiard; later we take the limit $`V_0\to \mathrm{}`$. We have $`\partial \mathcal{H}/\partial x=-[𝐧(𝐬)\cdot 𝐃(𝐬)]V_0\delta (z)`$. The logarithmic derivative with respect to $`z`$ of an eigenfunction on the boundary is $`\phi (𝐬)/\psi (𝐬)`$. For $`z>0`$ the wavefunction $`\psi (𝐫)`$ is a decaying exponential. Hence the logarithmic derivative of the wavefunction on the boundary should be equal to $`-\sqrt{2mV_0}/\hbar `$. Consequently one obtains $`(\partial \mathcal{H}/\partial x)_{nm}=-[(\hbar k)^2/m]M_{nm}`$. Thus the band profile of $`M_{nm}`$ is equal (up to a factor) to the band profile of the perturbation $`\delta \mathcal{H}`$ due to a deformation of the boundary. See also .
We can now use semiclassical considerations . The application to the cavity example has been introduced in . Here we summarize the recipe. First one should generate a very long (ergodic) classical trajectory, and define for it the fluctuating quantity $`\mathcal{F}(t)=-\partial \mathcal{H}(𝐫,𝐩;x)/\partial x|_{x=0}`$, where the time-dependence of $`\mathcal{F}`$ is due to the trajectory $`\text{(}𝐫(t),𝐩(t)\text{)}`$. Hence
$`\mathcal{F}(t)={\displaystyle \underset{\text{col}}{\sum }}2mv\mathrm{cos}(\theta _{\text{col}})D_{\text{col}}\delta (t-t_{\text{col}})`$ (3)
where $`t_{\text{col}}`$ is the time of a collision, $`D_{\text{col}}`$ stands for $`𝐧\cdot 𝐃`$ at the point of the collision, and $`v\mathrm{cos}(\theta _{\text{col}})`$ is the normal component of the particle’s collision velocity. If the deformation is volume-preserving then $`\langle \mathcal{F}(t)\rangle =0`$, otherwise it is convenient to subtract the (constant) average value. Now one can calculate the correlation function $`C(\tau )`$ of the fluctuating quantity $`\mathcal{F}(t)`$, and its Fourier transform $`\stackrel{~}{C}(\omega )\equiv \int C(\tau )\mathrm{exp}(i\omega \tau )𝑑\tau `$. The semiclassical estimate for the matrix element is
$`\left|\left({\displaystyle \frac{\partial \mathcal{H}}{\partial x}}\right)_{nm}\right|^2\approx {\displaystyle \frac{\mathrm{\Delta }}{2\pi \hbar }}\stackrel{~}{C}\left({\displaystyle \frac{E_n-E_m}{\hbar }}\right)`$ (4)
where $`\mathrm{\Delta }`$ is the mean level spacing. In practice it is convenient, without loss of generality, to work with units such that in (3) the time $`t`$ is measured in units of length, and we make the replacements $`m\to 1`$ and $`v\to 1`$. Then (4) can be cast into the form $`|M_{nm}|^2\approx (\mathrm{\Delta }_k/2\pi )\stackrel{~}{C}(\kappa )`$ where $`\mathrm{\Delta }_k`$ is the mean level spacing in $`k`$.
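The recipe is straightforward to implement. The following toy sketch (our own illustration, not the authors’ code) estimates $`\stackrel{~}{C}(\omega )`$ for the impulse train of Eq. (3) as a finite-time periodogram, $`\stackrel{~}{C}(\omega )\approx (1/T)|\sum _{\text{col}}w_{\text{col}}\mathrm{e}^{i\omega t_{\text{col}}}|^2`$ with $`w_{\text{col}}=2mv\mathrm{cos}(\theta _{\text{col}})D_{\text{col}}`$, using synthetic collision data; for a real billiard the triples $`(t_{\text{col}},\theta _{\text{col}},D_{\text{col}})`$ would come from a simulated trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
m = v = 1.0                                   # the units adopted above

# Synthetic collision sequence (a crude stand-in for a simulated trajectory):
# unit mean spacing between collisions, random impact angles, uncorrelated D.
n_col = 20000
t_col = np.cumsum(rng.exponential(1.0, n_col))
cos_th = np.sqrt(rng.uniform(0.0, 1.0, n_col))     # toy |cos(theta)| distribution
D_col = rng.normal(0.0, 1.0, n_col)                # "generic" deformation profile

w = 2.0*m*v*cos_th*D_col                           # impulse weights of Eq. (3)
w -= w.mean()                                      # subtract the average, as in the text
T = t_col[-1]

omega = np.linspace(0.05, 10.0, 200)
C = np.array([np.abs(np.sum(w*np.exp(1j*om*t_col)))**2/T for om in omega])
print(C.mean(), C.std())   # roughly flat ("white noise"): no correlations built in
```

Because the synthetic impulses here are uncorrelated, the resulting spectrum is flat; the correlations present in a real trajectory are what generate the structure and the low-frequency suppressions discussed below.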
Fig. 2 shows the excellent agreement between the actual band profile and that predicted by Eq.(4) for generic deformations and dilation. Note that there were no fitted parameters in this match. In all estimations of $`\stackrel{~}{C}(\omega )`$ we have used single trajectories of $`10^6`$ consecutive collisions.
Understanding the band profile of $`|M_{nm}|^2`$ has now been reduced to a matter of finding a classical theory for $`\stackrel{~}{C}(\omega )`$. If we assume that Eq. (3) is a train of uncorrelated impulses, then its power spectrum would be that of white noise, namely $`\stackrel{~}{C}(\omega )\approx \text{const}`$. A straightforward calculation then leads to the random wave result (2) already presented. However, in reality there are correlations in this train, and therefore we should expect $`\stackrel{~}{C}(\omega )`$ to have some structure on the frequency scale $`\omega \sim 1/\tau _{\text{col}}`$. Looking at Fig. 2 we see that the white noise expectation is reasonably satisfied for one of the ‘generic’ deformations (G), but not in the other two cases (D, Gp). We also see non-universal peaks at $`\omega \sim 1/\tau _{\text{col}}\sim 1`$. We now explain that for $`\omega \ll 1/\tau _{\text{col}}`$ there is total failure of the white noise result for dilations, as well as for translations and rotations, and discuss further complications that may arise if the billiard system is not strongly chaotic.
In Fig. 3 we display $`\stackrel{~}{C}(\omega )`$ for a different billiard shape, a generalized Sinai billiard (Fig. 1), chosen because it does not suffer from the non-generic marginally stable orbits found in the quarter-stadium. Here we see very convincing evidence that for small frequencies we have $`\stackrel{~}{C}(\omega )\approx \text{const}`$ for generic deformation, while $`\stackrel{~}{C}(\omega )\propto \omega ^4`$ for dilation and translation and $`\stackrel{~}{C}(\omega )\propto \omega ^2`$ for rotation. Thus the white noise expectation is indeed satisfied in the $`\omega \ll 1/\tau _{\text{col}}`$ regime for generic deformations, but fails for dilations, translations and rotations, for which $`\stackrel{~}{C}(\omega )\to 0`$ as $`\omega \to 0`$. This property is known (in the context of eigenvalue spectra) as ‘rigidity’ . It implies that the train of impulses is strongly correlated, a result which at first sight seems inconsistent with the assumption of chaotic motion. We will explain that there is no inconsistency here.
The quantity $`\mathcal{F}(t)=-\partial \mathcal{H}/\partial x`$ is related to $`\dot{𝐩}=-\partial \mathcal{H}/\partial 𝐫=-\nabla V`$, the instantaneous force on the particle, by $`\mathcal{F}(t)=-𝐃(𝐫)\cdot \dot{𝐩}`$. For translations we have $`𝐃=\stackrel{}{𝐞}`$, where $`\stackrel{}{𝐞}`$ is a constant vector that defines a direction in space. We can write $`\mathcal{F}(t)=-(d/dt)^2𝒢(t)`$ where $`𝒢(t)=m\stackrel{}{𝐞}\cdot 𝐫`$. A similar relation holds for dilation $`𝐃=𝐫`$ with $`𝒢(t)=\frac{1}{2}m𝐫^2`$. It follows that $`\stackrel{~}{C}(\omega )=\omega ^4\stackrel{~}{C}_G(\omega )`$, where $`\stackrel{~}{C}_G(\omega )`$ is the power spectrum of $`𝒢(t)`$. Assuming that $`𝒢(t)`$, unlike $`\mathcal{F}(t)`$, is a generic fluctuating quantity that looks like white noise, it follows that $`\stackrel{~}{C}(\omega )`$ is generically characterized by $`\omega ^4`$ behavior for either translations or dilations. For rotations we have $`𝐃=\stackrel{}{𝐞}\times 𝐫`$, and we can write $`\mathcal{F}(t)=-(d/dt)𝒢(t)`$, where $`𝒢(t)=\stackrel{}{𝐞}\cdot (𝐫\times 𝐩)`$ is a projection of the particle’s angular momentum vector. Consequently $`\stackrel{~}{C}(\omega )=\omega ^2\stackrel{~}{C}_G(\omega )`$, and we expect $`\stackrel{~}{C}(\omega )`$ to be generically characterized by $`\omega ^2`$ behavior in the case of rotations.
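The spectral relations above are just the statement that differentiating a signal once or twice multiplies its power spectrum by $`\omega ^2`$ or $`\omega ^4`$. A minimal numerical check (our own, with a synthetic bounded signal standing in for $`𝒢(t)`$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt = 2**16, 0.01
G = np.cumsum(rng.normal(size=N))
G -= np.linspace(G[0], G[-1], N)              # pin the ends to reduce leakage
F = np.gradient(np.gradient(G, dt), dt)       # F(t) = (d/dt)^2 G(t); sign irrelevant here

omega = 2*np.pi*np.fft.rfftfreq(N, dt)
CG = np.abs(np.fft.rfft(G))**2                # power spectrum of G
CF = np.abs(np.fft.rfft(F))**2                # power spectrum of F
sl = slice(10, 1000)                          # stay well inside the spectral range
print(np.median(CF[sl]/(omega[sl]**4*CG[sl])))   # ~ 1, i.e. C(omega) = omega^4 C_G(omega)
```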
In the preceding discussion we have assumed that generic fluctuating quantities such as $`𝐫^2`$ and $`\stackrel{}{𝐞}\cdot 𝐫`$ and $`\stackrel{}{𝐞}\cdot (𝐫\times 𝐩)`$, as well as $`\mathcal{F}(t)`$ for any generic deformation, have a white noise power spectrum as $`\omega \to 0`$. Obviously, this ‘white noise assumption’ should be verified for any particular example. If the motion is not strongly chaotic, meaning that $`C(\tau )`$ decays like a power law (say $`1/\tau ^{1-\gamma }`$ with $`0<\gamma <1`$) rather than an exponential, then the universal behavior is modified: we may have $`\omega ^{-\gamma }`$ behavior for small frequencies. For a generic system, for instance the generalized Sinai billiard, we do not have this complication. The stadium example on the other hand is non-generic: the trajectory can remain in the marginally stable ‘bouncing ball’ orbit (between the top and bottom edges) for long times, with a probability scaling as a power law in time. Depending on the choice of $`𝐃(𝐫)`$ this may manifest itself in $`C(\tau )`$. For example, in Fig. 2 the deformation Gp involves a parallel displacement of the upper edge, and the resulting sensitivity to the bouncing ball orbit leads to a large enhancement of the fluctuations intensity $`\stackrel{~}{C}(\omega =0)`$, and is suggestive of singular $`\omega ^{-\gamma }`$ behavior for small $`\omega `$.
Finally, consider the time-dependent problem which is described by the Hamiltonian $`\mathcal{H}\text{(}𝐫,𝐩;x(t)\text{)}`$. It is well known that under quite general circumstances the dissipation is ohmic ($`\propto \dot{x}^2`$). See and references therein. If $`x(t)=A\mathrm{sin}(\omega t)`$, linear response theory gives the long-time heating rate $`d\langle \mathcal{H}\rangle /dt=\mu \frac{1}{2}(\omega A)^2`$. The dissipation coefficient $`\mu `$ is determined by the matrix elements of (4), \[which up to a factor equals $`|M_{nm}|^2`$\], and therefore $`\mu `$ is proportional to $`\stackrel{~}{C}(\omega )`$. Our results imply that $`\mu `$ vanishes in the limit $`\omega \to 0`$ for translations. One should not be surprised , since this follows from Galilean invariance: One can view the limit $`\omega \to 0`$ as corresponding to the special case of constant $`\dot{x}`$. For constant nonzero $`\dot{x}`$ the particle(s) in the cavity accommodate their motion to the reference frame of the cavity, and there is no dissipation. A similar argument holds for rotations. On the other hand it is somewhat surprising that the same conclusion holds for dilations (the only other shape-preserving deformation) as well. This observation, as far as we know, has not been introduced previously in the literature.
Appendix: There exist a couple of lengthy vector-identity proofs of the normalization $`M_{nn}=1`$ for the dilation case $`𝐃=𝐫`$, for $`d=2`$. Here we present a physically illuminating alternative that works for arbitrary $`d`$. We use a phase-space-preserving definition of the dilation operator $`U(\alpha )\equiv \mathrm{exp}(-i\alpha G/\hbar )`$. It is generated by the hermitian operator $`G=\frac{1}{2}(𝐫\cdot 𝐩+𝐩\cdot 𝐫)`$. Applying this dilation on wavefunctions gives the expansion:
$`U(\alpha )\psi (𝐫)\approx \psi (𝐫)-\alpha ((d/2)\psi +𝐫\cdot \nabla \psi )+𝒪(\alpha ^2)`$ (5)
The operator also has the effect $`U^{\mathrm{\dagger }}𝐫U=\text{e}^\alpha 𝐫`$ and $`U^{\mathrm{\dagger }}𝐩U=\text{e}^{-\alpha }𝐩`$. Consider now any Hamiltonian $`\mathcal{H}_0=𝐩^2/(2m)+V(𝐫)`$. Defining the parameter-dependent version $`\mathcal{H}(𝐫,𝐩;\alpha )=U(\alpha )\mathcal{H}_0(𝐫,𝐩)U(\alpha )^{\mathrm{\dagger }}`$, it is straightforward to obtain
$`{\displaystyle \frac{\partial \mathcal{H}}{\partial \alpha }}|_{\alpha =0}={\displaystyle \frac{𝐩^2}{m}}-𝐫\cdot \nabla V,`$ (6)
whose matrix elements in the case of the billiard potential are $`(\partial \mathcal{H}/\partial \alpha )_{nm}=((\hbar k)^2/m)[\delta _{nm}-M_{nm}]`$. Thus the non-diagonal terms are the same as those of the deformation $`𝐃=𝐫`$. The diagonal elements can be calculated directly by taking the limit $`\alpha \to 0`$ of the expression $`\left(\langle U\psi |\mathcal{H}_0|U\psi \rangle -\langle \psi |\mathcal{H}_0|\psi \rangle \right)/\alpha `$. Using (5) and the fact that $`\langle \psi |𝐫\cdot \nabla |\psi \rangle =-d/2`$ one can easily show that the result equals zero. From here it follows that $`M_{nn}=1`$.
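The identity $`\langle \psi |𝐫\cdot \nabla |\psi \rangle =-d/2`$ used in the last step follows from integration by parts with Dirichlet boundary conditions, and is easy to confirm numerically. A minimal sketch (our own check, using an analytic eigenstate of the unit square, $`d=2`$):

```python
import numpy as np

N = 800
x = (np.arange(N) + 0.5) / N                  # midpoint grid on the unit square
X, Y = np.meshgrid(x, x, indexing="ij")
n1, n2 = 3, 2
psi = 2.0 * np.sin(n1*np.pi*X) * np.sin(n2*np.pi*Y)   # normalized box eigenstate

# r . grad(psi), written out analytically to avoid boundary differencing errors
r_grad = X * 2.0*n1*np.pi*np.cos(n1*np.pi*X)*np.sin(n2*np.pi*Y) \
       + Y * 2.0*n2*np.pi*np.sin(n1*np.pi*X)*np.cos(n2*np.pi*Y)

print(np.mean(psi * r_grad))                  # -> -1.0, i.e. -d/2 for d = 2
```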
We gratefully thank Eduardo Vergini and Mike Haggerty for important discussions. This work was supported by ITAMP and the National Science Foundation.
# Polymer Principles of Protein Calorimetric Two-State Cooperativity
## Abstract
The experimental calorimetric two-state criterion requires the van’t Hoff enthalpy $`\mathrm{\Delta }H_{\mathrm{vH}}`$ around the folding/unfolding transition midpoint to be equal or very close to the calorimetric enthalpy $`\mathrm{\Delta }H_{\mathrm{cal}}`$ of the entire transition. We use an analytical model with experimental parameters from chymotrypsin inhibitor 2 to elucidate the relationship among several different van’t Hoff enthalpies used in calorimetric analyses. Under reasonable assumptions, the implications of these $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s being approximately equal to $`\mathrm{\Delta }H_{\mathrm{cal}}`$ are equivalent: Enthalpic variations among denatured conformations in real proteins are much narrower than some previous lattice-model estimates, suggesting that the energy landscape theory “folding to glass transition temperature ratio” $`T_\mathrm{f}/T_\mathrm{g}`$ may exceed 6.0 for real calorimetrically two-state proteins. Several popular three-dimensional lattice protein models, with different numbers of residue types in their alphabets, are found to fall short of the high experimental standard for being calorimetrically two-state. Some models postulate a multiple-conformation native state with substantial pre-denaturational energetic fluctuations well below the unfolding transition temperature and/or predict a significant post-denaturational continuous conformational expansion of the denatured ensemble at temperatures well above the transition point. These scenarios either disagree with experiments on protein size and dynamics, or are inconsistent with conventional interpretation of calorimetric data. However, when empirical linear baseline subtractions are employed, the resulting $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s for some models can be increased to values closer to unity; and baseline subtractions are found to correspond roughly to an operational definition of native-state conformational diversity. These results necessitate a re-assessment of theoretical models and experimental interpretations.
Key words: calorimetry; van’t Hoff enthalpy; lattice models; radius of gyration; baseline subtraction; native state definition
Introduction
In recent years, protein folding has been investigated extensively by statistical mechanical modeling (see reviews in Refs. 1–14, Refs. 15–23, and references therein). The relevance of these models to the basic understanding of microscopic energetics is premised on the tenet that macroscopic properties of a system are consequences of the properties of its microscopic constituent parts. It follows that insight and rationalization can be gained by constructing models and testing whether the presumed microscopic interactions are effective in reproducing experimental macroscopic behaviors.<sup>24</sup> High-resolution force-field potentials have been used to study protein folding<sup>25</sup> and unfolding.<sup>26-28</sup> Obviously, atomistic models are indispensable for structural details. But at present it is not computationally feasible to use them to model thermodynamics and kinetics at millisecond or longer time scales. Also, it remains an open question whether empirical force fields would ultimately be adequate for predicting dynamics over long simulations.<sup>29</sup> Currently, a significant fraction of thermodynamics and kinetics data of proteins can only be addressed by complementary approaches, mainly via polymer models with highly simplified representations of the geometry and interactions of the polypeptide chain.<sup>1-4,15,30</sup> Aside from their computational tractability, it is hoped that these simplified models may lead to the development of novel, (as-yet-undiscovered<sup>31</sup>) concepts. Such “mesoscopic” organizing principles<sup>31</sup> may be needed to bridge our understanding over gaps of many orders of magnitude in time and length scales separating the fundamental constituent atomic processes and the global features of a bio-macromolecule.
Simple self-contained polymer models can be used to explore microscopic energetics of proteins.
How do simple polymer protein models contribute to our physical understanding of proteins? Typically, the ingredients of such a model are (i) a conformational space that accounts for chain connectivity and excluded volume, and is sufficiently simple to be enumerated exhaustively<sup>4,32</sup> or sampled extensively,<sup>3,7,11</sup> and (ii) a set of rules (a potential function) that describes the “microscopic” interactions among the constituent parts of the chain. The most important feature of such a model is the conceptual clarity it offers because it is self-contained. This means that all properties and predictions of the model are derived solely from the postulated elementary microscopic ingredients. In particular, conformational ensembles are determined by applying the model potential function (ii) to ascertain the energetic favorability of every conformation in the model conformational space (i). Most recent lattice protein models belong to this category. However, some protein models are not self-contained in this sense. In some thermodynamic treatments<sup>33-36</sup>, for example, the unfolded or denatured state of a protein is postulated to contain only random-coil-like conformations, but with no specification as to what microscopic interactions are responsible for such a remarkable property in discriminating against compact nonnative conformations. (See discussions in Ref. 23.) As such, non-self-contained models involve either unspecified or unjustified mechanisms that are not explicitly considered as parts of their microscopic potential functions. Therefore, their explanatory power is limited because they cannot make a full logical connection between the macroscopic properties they predict and the microscopic interactions they explicitly consider, though they can provide important insight and be very useful in other respects.
Self-contained simple polymer models of proteins help frame our discourse in terms of basic physical interactions. They sharpen our focus on whether certain global properties can or cannot arise from the microscopic interactions presumed by a model. In these models, however, the necessity to simplify implies that one has to rely to a degree on intuitive judgement in the design of appropriate model representations to capture polypeptide properties. In principle, many simple models can give similar results. A successful prediction can therefore be fortuitous. It follows that the ability to reproduce a protein property is necessary but not sufficient for the validity of the presumed microscopic features of a model. On the other hand, if properties of a model are in disagreement with experimental data, it is a clear indication of deficiencies. Since simple models appear to enjoy a high degree of latitude in their design, it might be expected that reproducing general, “generic”<sup>37</sup> properties of proteins would be straightforward. This is not the case. To the contrary, using simple models with physically plausible interactions to reproduce several thermodynamic<sup>23,38</sup> and kinetic<sup>19,39</sup> properties of proteins has been shown to be not trivial and requires in-depth analyses. This may be a blessing in disguise, because it means that a lot can be learnt about microscopic protein energetics from generic protein properties by using the latter as restrictive experimental constraints on models, to provide insight into what forms of microscopic interaction are more likely to be proteinlike.
The calorimetric criterion for thermodynamic two-state cooperativity requires a narrow denatured-state enthalpy distribution.
One generic protein property that apparently has not been fully appreciated by modelers is the calorimetric two-state behavior of many small single-domain proteins,<sup>40,41</sup> which requires the van’t Hoff enthalpy $`\mathrm{\Delta }H_{\mathrm{vH}}`$ around the folding/unfolding transition midpoint to be equal or very close to the calorimetric enthalpy $`\mathrm{\Delta }H_{\mathrm{cal}}`$ of the entire transition. Thermodynamic properties of several simple polymer models have recently been compared with this experimental criterion for two-state cooperativity.<sup>22,23</sup> One of us<sup>23</sup> argued that, under reasonable assumptions, the calorimetric two-state condition requires the average enthalpy difference between the denatured and native ensembles around the heat denaturation midpoint not to further increase appreciably as the temperature is raised to complete the unfolding process. From analyses of analytic as well as two-dimensional lattice models, this is found to imply that the enthalpy distribution among the denatured ensemble of conformations has to be narrow in comparison with the average enthalpy difference between the native state and the denatured state.<sup>23</sup> In the present study, we provide further support for this view by determining systematically the effects of using several slightly different common definitions of van’t Hoff enthalpy for the calorimetric two-state criterion.
A number of two-dimensional lattice protein models have been evaluated against the calorimetric criterion.<sup>23</sup> Interestingly and unexpectedly, both a Gō<sup>15,19</sup> and a Gō-like HP+ (Ref. 19) model are found to be far away from being calorimetrically two-state. Apparently, insofar as the underlying chain model is highly flexible, even for these models with native-specific pairwise additive contact interactions (these interaction schemes are sometimes referred to as being “nearly maximally unfrustrated”<sup>42,43</sup>), the denatured enthalpy distributions in these two-dimensional models are still too broad to satisfy the calorimetric two-state standard. Based on these results, it has been suggested that a cooperative interplay between local and nonlocal interactions in proteins may be necessary to give rise to calorimetrically two-state behaviors.<sup>23</sup> In the present work, we evaluate six three-dimensional lattice protein models. These include two-<sup>44</sup> and three-letter<sup>45</sup> models, a Gō model,<sup>46</sup> a “solvation” model<sup>47</sup> and 20-letter models with<sup>48</sup> and without<sup>49</sup> sidechains. Their thermodynamics are checked against the calorimetric criterion. We also evaluate the physical pictures of native and denatured states offered by some of these models in light of other experimental measurements on protein folding/denaturation transitions.
Results and Discussion
Overview of an analytical treatment.
To provide a basic theoretical underpinning, we first re-examine several definitions of van’t Hoff enthalpy ($`\mathrm{\Delta }H_{\mathrm{vH}}`$’s) in the protein folding literature, and the consequences of using different $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s in the calorimetric two-state criterion $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$. The main result of this section, to be demonstrated below, is that under reasonable, minimal assumptions regarding protein conformational properties, calorimetric two-state criteria using several commonly employed $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s imply essentially equivalent requirements on a protein’s density of states.<sup>23</sup> We approach this by comparing the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ values using different $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s computed for a series of analytical models with a wide range of thermodynamic cooperativities.
We begin by recalling a few basic relations. As discussed in detail previously,<sup>23</sup> the main thermodynamic quantities of interest for the issues at hand are the excess enthalpy and heat capacity. Experimentally, raw calorimetric data consists of heat capacity scans over a range of temperatures, from which an excess enthalpy
$$\mathrm{\Delta }H(T)=\langle H(T)\rangle -H_\mathrm{N}$$
$`(1)`$
as a function of absolute temperature $`T`$ can be obtained by standard baseline subtraction and numerical integration techniques.<sup>41</sup> Here $`H`$ is the enthalpy of the entire “excess” system,<sup>23,41</sup> $`H_\mathrm{N}`$ is the enthalpy of the native state, and $`\langle \cdots \rangle `$ denotes Boltzmann averaging. In general, the native enthalpy $`H_\mathrm{N}`$ should be replaced by a Boltzmann average $`\langle H_\mathrm{N}(T)\rangle `$ over conformational variations in the native state. (See discussions below on 20-letter models with and without sidechains.) Here we adopt as a working assumption that the native state becomes effectively a single conformation with a single temperature-independent enthalpy value after proper baseline subtractions.<sup>23</sup> The calorimetric enthalpy $`\mathrm{\Delta }H_{\mathrm{cal}}`$ $`=`$ $`\mathrm{\Delta }H(T_1)`$ at a sufficiently high temperature $`T_1`$ at which the heat denaturation process is completed ($`T_1`$ may be formally taken to be $`\mathrm{\infty }`$ in model considerations).<sup>23</sup> The expression for the excess heat capacity function
$$C_P=\frac{\partial \mathrm{\Delta }H(T)}{\partial T}=\frac{\langle H^2(T)\rangle -\langle H(T)\rangle ^2}{k_BT^2},$$
$`(2)`$
follows from standard statistical mechanics,<sup>23</sup> where $`k_B`$ is Boltzmann’s constant. Equation (2) corresponds to $`\mathrm{\Delta }C_P`$ in the calorimetric literature ($`\mathrm{\Delta }C_{P,\mathrm{tr}}`$ in Ref. 41). We drop the symbol $`\mathrm{\Delta }`$ here for the excess heat capacity as in Ref. 23 to simplify notation.
Several different definitions of $`\mathrm{\Delta }H_{\mathrm{vH}}`$ have been put forth in the protein calorimetric literature.<sup>22,23,40,41,50,51</sup> In general, their values can be very different. This raises the possibility of complications in comparison between theory and experiment. In Ref. 23, one of us noted that while different $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s may be different when the transition is far from being calorimetrically two state — i.e., two-state as defined by the condition $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$ using any one of the $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s — a semi-quantitative argument can infer that for proteins which can be fully denatured by heat, $`\mathrm{\Delta }H_{\mathrm{vH}}\approx \mathrm{\Delta }H_{\mathrm{cal}}`$ for one $`\mathrm{\Delta }H_{\mathrm{vH}}`$ would imply that the same approximate equality also holds for other $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s. Here we substantiate this inference by quantitatively analyzing a class of models for protein densities of states.
Definitions of protein folding van’t Hoff enthalpies.
In general, a temperature-dependent van’t Hoff enthalpy is given by
$$\mathrm{\Delta }H_{\mathrm{vH}}(T)=k_BT^2\frac{d\mathrm{ln}K^{\mathrm{eff}}}{dT}=k_BT^2\frac{1}{\theta (1-\theta )}\frac{d\theta }{dT},$$
$`(3)`$
where $`K^{\mathrm{eff}}`$ is the apparent<sup>52,53</sup> or effective<sup>22,51</sup> equilibrium constant of the system, and $`\theta `$ $`=`$ $`\theta (T)`$ is a two-state progress parameter for tracking the transition process; $`K^{\mathrm{eff}}=\theta /(1-\theta )`$ and $`\theta `$ takes values from zero (at low temperatures in the present cases) to unity (at high temperatures). For heat denaturation of proteins, $`\theta =0`$ and $`\theta =1`$ correspond respectively to the completely native (fully folded) and fully denatured (unfolded) states.<sup>*</sup><sup>*</sup>*$`\theta `$ is equivalent to Lumry et al.’s<sup>52</sup> ($`[\alpha (T)-\alpha _\mathrm{A}(T)]`$ $`/`$ $`[\alpha _\mathrm{B}(T)-\alpha _\mathrm{A}(T)]`$, where $`\alpha `$ is an observable \[their Eq. (4)\]). Therefore, at the midpoint temperature $`T_{\mathrm{midpoint}}`$ of the parameter $`\theta `$, i.e., when $`\theta (T=T_{\mathrm{midpoint}})=1/2`$,
$$\mathrm{\Delta }H_{\mathrm{vH}}=4k_BT_{\mathrm{midpoint}}^2\frac{d\theta }{dT}|_{T=T_{\mathrm{midpoint}}}.$$
$`(4)`$
As in Ref. 23, and as is customary in the calorimetric literature, $`\mathrm{\Delta }H_{\mathrm{vH}}`$ is understood to be evaluated at a certain midpoint temperature when its $`T`$ dependence is not shown explicitly.
It follows that different choices of $`\theta `$ would result in different van’t Hoff enthalpies and different midpoint temperatures. The theoretical population-based $`\mathrm{\Delta }H_{\mathrm{vH}}`$ in Ref. 23 corresponds to $`\theta `$ = \[D\] — the denatured fraction of the total population, and a midpoint temperature $`T_{1/2}`$ at which one half of the chain population is denatured. Here we use $`\kappa _0`$ to denote the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratio of this population-based van’t Hoff enthalpy to the calorimetric enthalpy. Experimentally, the heat absorbed by the system is often used to quantitate the degree of progress of the transition process under a two-state assumption by setting $`\theta =\mathrm{\Delta }H/\mathrm{\Delta }H_{\mathrm{cal}}`$, with a corresponding midpoint temperature $`T_d`$ at which one half of the total calorimetric heat ($`\mathrm{\Delta }H_{\mathrm{cal}}/2`$) has been absorbed (Ref. 51). This leads to a van’t Hoff enthalpy which is proportional to the excess specific heat at $`T_d`$ (see below).
On the other hand, a “square-root” van’t Hoff enthalpy formula has also been used by Privalov and coworkers<sup>40,50</sup> to analyze experimental data. It takes the form
$$\mathrm{\Delta }H_{\mathrm{vH}}=2T_{\mathrm{midpoint}}\sqrt{k_BC_P(T_{\mathrm{midpoint}})}.$$
$`(5)`$
Apparently, this corresponds to setting $`\theta (T)`$ $`=`$ $`\mathrm{\Delta }H(T)/\mathrm{\Delta }H_{\mathrm{vH}}`$, and assuming that it is a valid progress parameter. Equation (5) is used in conjunction with either the peak temperature $`T_{\mathrm{max}}`$ of $`C_P`$ (Ref. 40) or $`T_d`$ (Ref. 50) as midpoint temperatures at which $`\theta =1/2`$ is presumably a good approximation (see also Ref. 23). To ascertain the effects of different $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s on the calorimetric criterion, we compare the population-based $`\kappa _0`$ defined above with the following possible van’t Hoff to calorimetric enthalpy ratios using different midpoint temperatures for the square-root formula<sup>23,40,50</sup>:
$`\kappa _1`$ $`=`$ $`2T_{1/2}\sqrt{k_BC_P(T_{1/2})}/\mathrm{\Delta }H_{\mathrm{cal}},`$
$`\kappa _2`$ $`=`$ $`2T_{\mathrm{max}}\sqrt{k_BC_P(T_{\mathrm{max}})}/\mathrm{\Delta }H_{\mathrm{cal}},`$ (6)
$`\kappa _3`$ $`=`$ $`2T_d\sqrt{k_BC_P(T_d)}/\mathrm{\Delta }H_{\mathrm{cal}}.`$
Finally, it is not difficult to see that the van’t Hoff to calorimetric enthalpy ratio for $`\theta =\mathrm{\Delta }H/\mathrm{\Delta }H_{\mathrm{cal}}`$ above is given<sup>51</sup> by $`(\kappa _3)^2`$. So we also consider $`(\kappa _1)^2`$, $`(\kappa _2)^2`$, and $`(\kappa _3)^2`$ as possible van’t Hoff to calorimetric enthalpy ratios. The definitions and usage of these quantities are summarized in Table I.
Despite their different definitions, several van’t Hoff enthalpies give essentially the same calorimetric two-state criterion.
We now compute these different van’t Hoff to calorimetric enthalpy ratios for a class of models that intuitively capture the most basic features of protein energetics, which are an essentially unique native state as the lowest (ground) enthalpic state of the system, and a huge number of unfolded (denatured) conformations with higher enthalpies. For this purpose, we use simple random-energy-like models with Gaussian enthalpy distributions for the denatured states. Their (continuum) densities of states $`\mathrm{g}(H)`$ are given by<sup>23</sup>
$$\mathrm{g}(H)=\delta (H)+\theta (H)\frac{\mathrm{g}_\mathrm{D}}{\sqrt{2\pi }\sigma _H}\mathrm{e}^{-(H-H_\mathrm{D})^2/(2\sigma _H^2)},$$
$`(7)`$
where $`\delta (H)`$ is the Dirac delta function, the native enthalpy $`H_\mathrm{N}=0`$, the step function $`\theta (H)=1`$ for $`H\ge 0`$, and $`\theta (H)=0`$ for $`H<0`$. $`\mathrm{g}_\mathrm{D}`$ ($`\gg 1`$) and $`H_\mathrm{D}`$ are respectively the total number and average enthalpy of the denatured conformations, whereas the standard deviation $`\sigma _H`$ specifies the width of the enthalpy distribution among them (Figure 1); see Ref. 23 for details. The corresponding partition function $`Q=Q_\mathrm{N}+Q_\mathrm{D}`$, whose native part $`Q_\mathrm{N}=1`$ is the statistical weight of the native state, and the denatured part
$$Q_\mathrm{D}(T)=\frac{\mathrm{g}_\mathrm{D}}{\sqrt{2\pi }\sigma _H}\int _0^{\mathrm{}}𝑑H\mathrm{e}^{-(H-H_\mathrm{D})^2/(2\sigma _H^2)}\mathrm{e}^{-H/(k_BT)}.$$
$`(8)`$
Hence \[D\] = $`Q_\mathrm{D}/Q`$. We perform numerical integrations over $`H`$ to obtain thermodynamic averages such as native and denatured populations \[Eq. (8)\], average enthalpy, and heat capacity as functions of temperature, from which the midpoint temperatures and $`\kappa `$’s defined above are determined. To simplify these calculations, rather than integrating through $`H\to +\mathrm{}`$, we use a high-$`H`$ cutoff that sets $`\mathrm{g}(H)=0`$ for $`H>4H_\mathrm{D}`$ in Eq. (7). The special case of a strictly two-state model (corresponding to $`\sigma _H\to 0`$) is discussed in the Appendix.
For the class of models we study, we fix both the average enthalpy ($`H_\mathrm{D}`$) and entropy (parametrized by $`\mathrm{g}_\mathrm{D}`$) of the denatured state. This leads<sup>23</sup> to an essentially constant $`\mathrm{\Delta }H_{\mathrm{cal}}`$ $`=`$ $`H_\mathrm{D}`$. Only the denatured enthalpy distribution width $`\sigma _H`$ is varied. Here we use $`H_\mathrm{D}/k_B=3\times 10^4`$ K (equivalent to $`H_\mathrm{D}=60.0`$ kcal mol<sup>-1</sup>), and $`\mathrm{g}_\mathrm{D}=5.68\times 10^{38}`$ (Figure 1). These values are the same as those used in our previous study.<sup>23</sup> They correspond approximately to the experimental data obtained by Jackson et al.<sup>54</sup> for the Ile$`\to `$Val76 mutant of chymotrypsin inhibitor 2 (CI2; see Fig. 3 of Ref. 54). Hence we believe that realistic protein energetics can be explored using this class of models.
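To make the numerical procedure concrete, the following sketch (our own illustration; the value $`\sigma _H/k_B=500`$ K is an arbitrary choice within the cooperative regime) integrates Eq. (8) on a grid, locates $`T_{1/2}`$ by bisection on \[D\] = 1/2, and evaluates the population-based $`\kappa _0`$ via Eq. (4) together with the square-root-formula $`\kappa _1`$ of Eq. (6):

```python
import numpy as np

kB = 1.0                                   # temperatures in kelvin, enthalpies in units of k_B
H_D, g_D, sigma_H = 3.0e4, 5.68e38, 500.0  # parameters above; sigma_H is our own choice
nH = 400001
H = np.linspace(0.0, 4.0*H_D, nH)          # cutoff g(H) = 0 for H > 4 H_D, as in the text
dH = H[1] - H[0]

def thermo(T):
    w = g_D/(np.sqrt(2*np.pi)*sigma_H) * np.exp(-(H - H_D)**2/(2*sigma_H**2) - H/(kB*T))
    QD = np.sum(w)*dH                      # Eq. (8)
    Q = 1.0 + QD                           # native state: weight 1 at H = 0
    avgH = np.sum(H*w)*dH/Q
    avgH2 = np.sum(H**2*w)*dH/Q
    return QD/Q, avgH, (avgH2 - avgH**2)/(kB*T**2)   # [D], <H>, C_P

lo, hi = 250.0, 450.0                      # bisect for the population midpoint [D] = 1/2
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if thermo(mid)[0] < 0.5 else (lo, mid)
T_half = 0.5*(lo + hi)

dT = 0.01                                  # population-based van't Hoff enthalpy, Eq. (4)
dDdT = (thermo(T_half + dT)[0] - thermo(T_half - dT)[0])/(2*dT)
dH_cal = H_D                               # essentially exact for these parameters
kappa0 = 4*kB*T_half**2*dDdT/dH_cal
kappa1 = 2*T_half*np.sqrt(kB*thermo(T_half)[2])/dH_cal
print(T_half - 273.15, kappa0, kappa1)     # midpoint near 65 C; both kappa's close to 1
```

With these parameters the midpoint comes out near the experimental CI2 transition temperature, and narrowing $`\sigma _H`$ pushes both $`\kappa `$’s toward unity, in line with Figure 2.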
Figure 2 shows how the model midpoint temperatures and thermodynamic cooperativity vary with $`\sigma _H`$. The calorimetric two-state criterion allows for some tolerance. This is because even small single-domain proteins deviate slightly from a strictly two-state description,<sup>33</sup> with $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ slightly less than unity. So we do not have to require model $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ to be exactly equal to unity. Nonetheless, it is also clear that the experimental observation of $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$ imposes severe constraints on enthalpy distributions in proteins. Experimentally, $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}=0.96`$ is reported by Fersht and coworkers<sup>54</sup> for CI2; other calorimetrically two-state proteins have similar $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s (Ref. 33). For the present models, if the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s are to be $`\geq 0.96`$, it is required that $`\sigma _H\leq 775`$ (Figure 2b, in units of $`k_B`$). This means a very narrow denatured enthalpy distribution, as the standard deviation $`\sigma _H`$ has to be less than or equal to $`775/(3\times 10^4)\approx 1/40`$ of the average enthalpic separation between the native and the denatured states, $`\mathrm{\Delta }H_{\mathrm{cal}}`$ (see Figure 1). Within this class of models, thermodynamic stability correlates with cooperativity (Figure 2a). For $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$, the folding transition temperature $`\approx 65^{\circ }`$C corresponds to that observed experimentally.<sup>54</sup> However, stability decreases as the denatured enthalpy distribution widens. The transition temperature falls below $`0^{\circ }`$C when $`\sigma _H`$ exceeds $`\approx 1/17`$ of $`\mathrm{\Delta }H_{\mathrm{cal}}`$.
Figure 2a shows the relation among the three midpoint temperatures. They are essentially identical when the model protein is highly cooperative (small $`\sigma _H`$). The difference between $`T_d`$ and the other two temperatures increases as cooperativity diminishes. This is because when the enthalpy distribution in the denatured state is wide (large $`\sigma _H`$), there are more low-lying nonnative enthalpies, which tend to lower the overall average enthalpy. As a result, more than half of the chain population has to be denatured to achieve an average enthalpy of $`\mathrm{\Delta }H_{\mathrm{cal}}/2`$ (hence a temperature higher than $`T_{1/2}`$ is required), whereas this is not so when the denatured enthalpy distribution is narrower (smaller $`\sigma _H`$). This accounts for the differences among the three $`\kappa `$’s \[Eq. (6)\] and $`(\kappa )^2`$’s in Figures 2c and d. For real two-state proteins, $`T_d`$ can differ from $`T_{\mathrm{max}}`$ by $`\approx 1^{\circ }`$C (Ref. 50). On the other hand, $`T_{\mathrm{max}}`$ is practically identical to $`T_{1/2}`$ over a much wider range of cooperativity for these models. It appears that $`T_{\mathrm{max}}\approx T_{1/2}`$ is a consequence of $`\mathrm{g}_\mathrm{D}\gg 1`$. Model proteins with less conformational freedom<sup>23</sup> than those considered in Figures 1 and 2 have non-negligible differences between $`T_{1/2}`$ and $`T_{\mathrm{max}}`$ (see Appendix and discussions of three-dimensional lattice models below).
Figures 2c and d compare the population-based<sup>23</sup> $`\kappa _0=\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ with the experimental formulas and their variations. For this class of models, $`\kappa _0=\kappa _1=\kappa _2`$ holds almost exactly. Owing to the behavior of $`T_d`$ discussed above, $`\kappa _3`$ deviates from the other three $`\kappa `$’s when the model is not cooperative, but all four $`\kappa `$’s are practically identical if their values are $`\gtrsim 0.9`$. When the enthalpy ratios $`\kappa `$’s are less than one, the square-root ($`\kappa `$) formulas of Eq. (6) naturally give larger van’t Hoff to calorimetric enthalpy ratios than the $`(\kappa )^2`$ formulas. The latter equate $`\mathrm{\Delta }H_{\mathrm{vH}}`$ with $`4k_BT_{\mathrm{midpoint}}^2C_P(T_{\mathrm{midpoint}})/\mathrm{\Delta }H_{\mathrm{cal}}`$ (Refs. 22, 40, 41, 51). However, when any one of the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s equals unity, all of the other $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s also equal unity.
These observations suggest that the following general conclusion should be valid: insofar as a protein can be fully denatured by heat<sup>23</sup> (as these models can), which implies that it has a sufficiently high denatured-state entropy relative to the native state (a condition that should be satisfied by all proteins because of their polymeric nature), all of the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s considered in this paper provide essentially the same calorimetric two-state condition, and thus impose the same requirement on the density of states of the protein.
Recently, Zhou et al.<sup>22</sup> used a homopolymer tetramer model to show that it is possible to have $`(\kappa _3)^2>1`$, and that the deviation from the calorimetric criterion is not simply related to the population with intermediate enthalpies. Remarkably, the thermodynamic properties of their continuum tetramer model are very similar<sup>23</sup> to those of a lattice tetramer toy model introduced previously by Dill et al.<sup>4</sup> Since the ground-state populations of these small systems are substantial<sup>23</sup> even under athermal conditions ($`T=\infty `$), they cannot be fully “denatured.” Hence this interesting and important observation of Zhou et al. is not inconsistent with our general conclusion regarding proteins. The present study does not address the application of van’t Hoff analysis to chemical reactions in solutions<sup>55</sup> because of fundamental differences between chemical reactions and the conformational transitions of the polymeric systems treated here.
Calorimetric two-state cooperativity implies a very low “glass transition” temperature for the folding of two-state proteins.
The above thermodynamic results are relevant to folding kinetics, especially landscape theories that utilize the spin-glass approach put forth in the seminal work of Bryngelson and Wolynes.<sup>56,57</sup> It has been argued, and has been generally accepted, that in order for a protein to fold in a kinetically efficient manner, its folding transition temperature $`T_\mathrm{f}`$ must be significantly greater than a glass temperature $`T_\mathrm{g}`$ that characterizes the onset of sluggish folding kinetics as the temperature is lowered<sup>58</sup> (reviewed in Refs. 3, 4). Subsequently, based on a series of insightful studies by Onuchic, Wolynes and coworkers,<sup>45,59,60</sup> it has been further argued that a “law of corresponding states”<sup>6,59,60</sup> can be used to predict the ratio $`T_\mathrm{f}/T_\mathrm{g}`$ for real proteins from simulations of a 27mer 3-letter code (3LC) model protein configured on three-dimensional cubic lattices<sup>45,59</sup> (see discussion below). This approach provided an estimate of $`T_\mathrm{f}/T_\mathrm{g}=1.6`$ for small $`\alpha `$-helical proteins.<sup>6,42,43,59</sup> More recently, Onuchic et al.<sup>9</sup> considered the thermodynamics of a Gaussian random energy model similar to the one employed here and derived the relation $`T_\mathrm{f}/T_\mathrm{g}=(H_\mathrm{D}/\sigma _H)\sqrt{2/\mathrm{ln}\mathrm{g}_\mathrm{D}}`$ (in the present notation). Solvent-mediated (effective) intraprotein interactions can have enthalpic as well as entropic contributions; indeed, heat-induced conformational changes would be impossible if these interactions contained no enthalpic part. The interaction energy $`E`$ was taken to be purely enthalpic in Onuchic et al.’s random-energy treatment of temperature dependences that leads to Eq. (12) in Ref. 9.
The estimate $`T_\mathrm{f}/T_\mathrm{g}\approx 1.6`$ was based on kinetic simulations. As such, it may be viewed as a lower bound for a protein to satisfy a certain requirement for foldability. A previous random-energy-model analysis already suggested that a higher thermodynamic $`T_\mathrm{f}/T_\mathrm{g}`$ ratio may be needed to satisfy the additional constraint imposed by calorimetric two-state cooperativity.<sup>23</sup> Figure 2b shows calorimetric cooperativity as a function of $`T_\mathrm{f}/T_\mathrm{g}`$ (the horizontal axis is marked by the inverse of this ratio, $`T_\mathrm{g}/T_\mathrm{f}`$, obtained by applying Eq. (12) of Onuchic et al.<sup>9</sup>). Using realistic protein parameters,<sup>23,54</sup> Figure 2b shows that in the context of the present random-energy-model analysis, a protein’s $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}>0.96`$ (Ref. 54) requires $`T_\mathrm{f}/T_\mathrm{g}>5.8`$; $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}>0.99`$ implies $`T_\mathrm{f}/T_\mathrm{g}>10.0`$; and $`T_\mathrm{f}/T_\mathrm{g}\approx 1.6`$ would imply that the protein is not calorimetrically two-state, with $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}<0.2`$.
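To see where the $`T_\mathrm{f}/T_\mathrm{g}>5.8`$ figure comes from, one can simply insert the model parameters used above ($`H_\mathrm{D}/k_B=3\times 10^4`$ and $`\mathrm{g}_\mathrm{D}=5.68\times 10^{38}`$, so that $`\mathrm{ln}\mathrm{g}_\mathrm{D}\approx 89.2`$) together with the cooperativity requirement $`\sigma _H\leq 775`$ into the relation of Onuchic et al. quoted above:
$$\frac{T_\mathrm{f}}{T_\mathrm{g}}=\frac{H_\mathrm{D}}{\sigma _H}\sqrt{\frac{2}{\mathrm{ln}\mathrm{g}_\mathrm{D}}}\geq \frac{3\times 10^4}{775}\sqrt{\frac{2}{89.2}}\approx 38.7\times 0.150\approx 5.8.$$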
Therefore, combining our results with Onuchic et al.’s analysis<sup>9</sup> leads us to conclude that for proteins that are calorimetrically two-state, $`T_\mathrm{f}/T_\mathrm{g}`$ should be higher than the earlier estimate of $`\approx 1.6`$, and may well exceed $`\approx 6.0`$. In that case, even for a hypothetical highly stable two-state protein with $`T_\mathrm{f}\approx 100^{\circ }`$C (373.15 K), $`T_\mathrm{g}`$ is still very low, at $`\approx 62`$ K. This folding glass transition temperature is a theoretical construct for quantitating a “rugged” landscape’s impediment to the kinetics of folding from the denatured to the native state. The physics it describes is different from the “glass transition” of native proteins observed experimentally at $`\approx 200`$ K (see, for example, Ref. 61), though it has been suggested<sup>59</sup> that the two phenomena might be related. The present calorimetric estimate of $`T_\mathrm{g}\approx 62`$ K is much lower than the temperatures at which folding actually takes place. While the idealized enthalpy distribution of a random-energy model without explicit chain representation might have underestimated the chance of having low-enthalpy kinetic traps, such traps should nevertheless be improbable given this extremely low estimate for $`T_\mathrm{g}`$. Therefore, our results suggest that in general kinetic traps should have at most minimal effects on the folding of real calorimetrically two-state proteins of sizes comparable to CI2.<sup>19,37,42,43</sup> This view is apparently supported by recent folding experiments on proteins with no kinetic intermediates.<sup>62-67</sup> In this perspective, it would be particularly revealing to elucidate the relationship between multi-phasic kinetics and calorimetric cooperativity for real proteins that do fold with kinetic intermediates (see, for example, Refs. 68–70, and theoretical perspectives in Refs. 3–8, and 11).
Lattice protein models: Why compare them against the calorimetric two-state criterion?
We now turn to protein models with explicit chain representations. Recent years have seen sustained efforts in using highly simplified lattice models to understand general properties of proteins. Lattice protein models were pioneered by Gō and coworkers.<sup>15</sup> Gō models assume that only those contact interactions that occur in the native conformation can be favorable, whereas all nonnative interactions are neutral. This approach to modeling may be characterized as teleological, because the native conformation is hardwired explicitly into the model potential function. Much useful insight has been gained from this methodology. But it is important to realize that a Gō model leaves open the question of what physical interactions can conspire to produce the remarkable molecular recognition effect it assumes.
An essential difference between Gō models and models introduced in the past decade, beginning with the simplest 2-letter HP potential,<sup>30,32</sup> is that many of the more recent models have adopted microscopic interaction schemes that are independent of any particular native conformation. Therefore, these models offer the possibility of better exploring the physico-chemical bases of protein folding. While much has been learned (see, for example, Refs. 1–9, 11, 12), the goal of using these models to elucidate general protein properties has not been fully realized. One of the most generic thermodynamic properties of many small single-domain proteins is their calorimetric two-state cooperativity. However, no three-dimensional lattice model has been evaluated against the calorimetric two-state criterion. We do so here for six representative models. This was motivated by a previous study of two-dimensional models,<sup>23</sup> which led us to suspect that designing a physically plausible three-dimensional interaction scheme to reproduce calorimetric two-state behavior might be non-trivial, and that other deficiencies of lattice models in describing real two-state protein properties<sup>37</sup> might be intimately related to their lack of calorimetric two-state cooperativity.
We take this as the first step in an endeavor to build simple tractable self-contained models to capture more proteinlike features. It is hoped that once models are required to better conform to the calorimetric two-state criterion, mechanisms for other two-state proteinlike properties would either be apparent or become more easily decipherable. From this vantage point, the substantial amount of lattice model data accumulated over the years constitutes a valuable repository of information. By applying appropriate experimental tests on these models for their similarities with and their differences from real protein behaviors, one would gain new insight into what novel energetic ingredients might be necessary for building better models.
We consider six models,<sup>44-49</sup> as shown in Figure 3. We choose to analyze these models in depth because they are representative and instructive, covering a variety of approaches and assumptions employed in recent efforts to model proteins as chains configured on three-dimensional simple cubic lattices. Some models in Figure 3 have been studied extensively and have contributed significantly to advances in theoretical understanding. All these models are based upon additive pairwise nearest-lattice-neighbor contact energies. As described in the original references,<sup>44-49</sup> the contact energies are all assumed to be temperature independent. We therefore refer to these energies as enthalpies, as in Ref. 23, to conform to the terminology of the experimental calorimetric literature.
Lattice simulation methods.
Using the model potential functions described in their respective original studies,<sup>44-49</sup> thermodynamic quantities of these models were computed using standard Metropolis Monte Carlo (MC) histogram techniques.<sup>71,72</sup> The chain move set we used consists of end, corner, and crankshaft moves, as described by Socci and Onuchic,<sup>44</sup> with additional sidechain moves for the 20-letter sidechain model (Figure 3f).<sup>48</sup> Each histogram was computed using a total of $`4.5\times 10^8`$ attempted moves, whereby data were collected after allowing for an initial equilibrating run of $`5\times 10^7`$ attempted moves. Every attempted move is counted as elapsed MC time in computing Boltzmann averages, whether it is accepted or rejected; and if rejected, regardless of whether the rejection is caused by excluded volume violation or by the stochastic Metropolis algorithm for an attempted move that involves a finite increase in energy (enthalpy). The simulation temperatures are given in the captions for Figures 4–9. In one case (the Gō model in Figure 7), we also performed several independent MC simulations at different temperatures to confirm the MC histogram results. Our sampling of the densities of states should be adequate since we obtained essentially the same midpoint temperatures as the original studies for all six models. For the 20-letter model, the temperature at which the Boltzmann average $`\langle \mathbf{Q}\rangle `$ of the number of native contacts $`\mathbf{Q}`$ equals one half of the total number $`\mathbf{Q}_\mathrm{N}`$ of native contacts was reported to be $`0.272`$ in Ref. 49 (note that this $`\mathbf{Q}`$ is different from the symbol $`Q`$ for the partition function), whereas the present simulation gives $`0.279`$. The discrepancy is not big. However, it is not clear whether it merely reflects numerical uncertainties or is related to a possible systematic deviation from the correct Boltzmann distribution in previous simulations in which attempted moves rejected by excluded volume violations were not counted as elapsed MC time (page 185 of Ref. 49, page 1617 of Ref. 73), as has been noted recently (Ref. 47).
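For readers unfamiliar with the histogram technique, the following single-histogram Python fragment shows schematically how enthalpies sampled at a simulation temperature $`T_0`$ are reweighted to a nearby temperature to yield $`C_P`$; it is a hedged sketch under our own naming conventions, whereas production calculations combine multiple histograms as described in Refs. 71 and 72.

import numpy as np

def reweighted_Cp(energies, T0, T_new, kB=1.0):
    """Heat capacity at T_new estimated from enthalpies sampled at T0."""
    E = np.asarray(energies, dtype=float)
    # Per-sample reweighting factors exp[-E (1/T_new - 1/T0) / kB],
    # stabilized by subtracting the largest exponent before exponentiating.
    x = -E * (1.0 / T_new - 1.0 / T0) / kB
    w = np.exp(x - x.max())
    E_avg = np.average(E, weights=w)
    E2_avg = np.average(E * E, weights=w)
    return (E2_avg - E_avg**2) / (kB * T_new**2)  # fluctuation formula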
Thermodynamic functions relevant to calorimetric considerations are plotted in Figures 4–9. In these figures, $`T_{1/2}`$ is the temperature at which the chain population \[N\] in the single lowest-enthalpy conformation equals $`1/2`$. This single-lattice-conformation definition of the model native state and the corresponding identification of $`T_{1/2}`$ with the folding transition temperature coincide with the original formulations in four of the models.<sup>44-47</sup> However, a multiple-lattice-conformation native state containing other conformations in addition to the lowest-enthalpy conformation was advocated by the authors of the two 20-letter models.<sup>48,49</sup> Hence, according to their definitions, the “native” populations in their models<sup>48,49</sup> are larger than \[N\] in Figures 6 and 9. We will give more detailed consideration to the issue of native state definition below.
Evaluating lattice protein models against the calorimetric two-state criterion.
A First Step: Modeling Heat Capacity Functions With No Baseline Subtractions
We first apply the model heat capacity and enthalpy functions in Figures 4–9 directly to the relation<sup>23</sup> $`\kappa _0=\langle \mathrm{\Delta }H(T_{1/2})\rangle _\mathrm{D}/\mathrm{\Delta }H_{\mathrm{cal}}`$ and Eq. (6) above to compute the various $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratios in Table II. This is equivalent to assuming that for each model (as for the random-energy models above), the entire model $`C_P`$ function is directly comparable to the “transition” part of an experimental excess heat capacity function,<sup>41</sup> the analyses of which have led to the calorimetric two-state condition $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$ for many small proteins. Experimentally, the transition part of the excess heat capacity is obtained by performing baseline subtractions on the raw data.<sup>23,41</sup> The exercise we now undertake is a necessary and instructive starting point that involves minimal assumptions,<sup>23</sup> as it does not entail performing any baseline subtraction on model results. After a basic perspective has been gained, we will discuss in a later section the feasibility and appropriateness of applying baseline subtractions to model specific heat functions.
In addition to the $`C_P`$ functions, the upper panels of Figures 4–9 also show the heat capacity contributions $`(C_P)_\mathrm{D}[\mathrm{D}]`$ from thermal transitions among nonnative (in these cases, non-ground-state) conformations.<sup>23</sup> When a large fraction of $`C_P`$ arises from transitions among nonnative conformations instead of transitions between native (N) and nonnative (D) conformations, significant deviations from calorimetric two-state behavior by the $`\kappa _0\approx 1`$ standard are expected<sup>23</sup> (Table II). This is because a large $`(C_P)_\mathrm{D}[\mathrm{D}]`$ contribution means that even after passing the denaturation transition midpoint (when \[D\]$`>1/2`$), the average denatured enthalpy will continue to rise substantially as the temperature is raised further (see the lower panel of Figure 4, for example), as denatured chains are propelled to populate conformations at higher and higher enthalpies. Table II summarizes the six models’ conformity to calorimetric two-state criteria based on the different $`\mathrm{\Delta }H_{\mathrm{vH}}`$’s. Calorimetric cooperativities measured by common experimental $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ formulas such as $`(\kappa _2)^2`$ and $`(\kappa _3)^2`$ (see Table I) can readily be calculated from Table II.
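In terms of quantities computed from a model partition function, the plotted contribution is simply the fluctuation heat capacity of the denatured ensemble weighted by its population. A minimal sketch, assuming (as in Ref. 23) that $`(C_P)_\mathrm{D}`$ is the heat capacity computed from $`Q_\mathrm{D}`$ alone:

def denatured_Cp_contribution(H_avg_D, H2_avg_D, pop_D, T, kB=1.0):
    """(C_P)_D [D]: contribution from transitions among denatured conformations.

    H_avg_D, H2_avg_D : <H>_D and <H^2>_D averaged over the denatured
                        ensemble only (i.e., computed from Q_D)
    pop_D             : denatured population [D]
    """
    Cp_D = (H2_avg_D - H_avg_D**2) / (kB * T**2)  # fluctuation formula
    return Cp_D * pop_D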
None of the Models Tested Meets the Calorimetric Two-State Standard
Table II shows that none of the six models tested by the present method meets the experimental calorimetric two-state standard. Among them, the Gō model appears to be the most cooperative, with $`\kappa _0=0.54`$ and $`\kappa _2\approx \kappa _3=0.87`$. If the common experimental formulas $`(\kappa _2)^2`$ (Ref. 41) and $`(\kappa _3)^2`$ (Ref. 51) for the van’t Hoff to calorimetric enthalpy ratio are used, this translates into $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 0.75`$ for this particular Gō model. This is still low when compared with the experimental values of $`\approx 0.96`$ (Ref. 54) for calorimetrically two-state proteins. For five small compact globular proteins (ribonuclease A, lysozyme, $`\alpha `$-chymotrypsin, cytochrome $`c`$, and metmyoglobin), Privalov<sup>51</sup> reported an average $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}=(\kappa _3)^2=0.96\pm 0.03`$.
Different Calorimetric Criteria are Related to Definitions of the Native State — 20-Letter Models
For the models tested, the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ values ($`\kappa `$’s) vary considerably depending on which definition of the van’t Hoff enthalpy is used (Table II). The variation is mildest for the 2- and 3-letter models, for which the population-based $`\kappa _0`$ is almost identical to one of the experimental square-root formulas, $`\kappa _3`$. And while the $`\kappa _2`$’s differ from the $`\kappa _3`$’s for these two models, they are only 27–38% larger than $`\kappa _0`$. For the other four models, the difference between $`\kappa _0`$ and the experimental formulas $`\kappa _2`$ or $`\kappa _3`$ is larger: $`\kappa _3`$ is $`1.6`$–$`1.8`$ times $`\kappa _0`$ for the Gō and modified HP models, whereas $`\kappa _3`$ is $`\approx 7`$ times bigger than $`\kappa _0`$ for the two 20-letter models. For the latter four models, however, $`\kappa _2`$ is virtually identical to $`\kappa _3`$.
The differences among $`\kappa `$’s are often related to differences in the midpoint temperatures used to define them. For the 2- and 3-letter models (Figures 4 and 5), the temperature $`T_{1/2}`$ for the population-based $`\kappa _0`$ (and $`\kappa _1`$) is well within the peak region of the specific heat capacity function and quite close to the temperature $`T_{\mathrm{max}}`$ for $`\kappa _2`$. This accounts for the relatively small differences among $`\kappa _0`$, $`\kappa _1`$, and $`\kappa _2`$ in these models. The difference between $`\kappa _0`$ and $`\kappa _2`$ is larger for the Gō and modified HP models, but $`T_{1/2}`$ still lies within the peak region of the $`C_P`$ function and not that far away from $`T_{\mathrm{max}}`$ (Figures 7 and 8). The difference between $`\kappa _0`$ and $`\kappa _2`$ is much larger for the two 20-letter models. In these constructs, $`T_{1/2}`$ is well outside the peak region of $`C_P`$ ($`T_{1/2}\ll T_{\mathrm{max}}`$, see Figures 6 and 9). On the other hand, $`T_{\mathrm{max}}\approx T_d`$ for the Gō model and the 20-letter model without sidechains (Figures 6 and 7); hence they have $`\kappa _2\approx \kappa _3`$.
The large temperature differences between $`T_{1/2}`$ and $`T_{\mathrm{max}}`$ in Figures 6 and 9 highlight one peculiar feature of the two 20-letter models that is qualitatively different from the other four models. For both of them, the population \[N\] of the single ground-state conformation is below 10% at $`T_{\mathrm{max}}`$, whereas the $`C_P`$ at $`T_{1/2}`$ (when \[N\] = 1/2) is very low. This feature is intimately related to the rationale for adopting a multiple-lattice-conformation native state in these models.<sup>48,49</sup> In physical terms, it means that $`C_P`$ is dominated at low temperatures by transitions between the single ground-state conformation and other conformations with very low (close to ground-state) enthalpies; most of these conformations belong to these models’ multiple-conformation native state as defined by their authors<sup>48,49</sup> (see below). When the temperature is raised, population in the single ground-state conformation continues to decrease as more of it is transferred to other low-enthalpy conformations. Therefore, when the temperature reaches $`T_{\mathrm{max}}`$, contributions to the peak value of $`C_P`$ are dominated by transitions between the group of low-enthalpy conformations as a whole and the large number of high-enthalpy conformations. By that time the population \[N\] in the single ground-state conformation has become quite insignificant. This is the basic reason why $`\kappa _2\gg \kappa _0\approx \kappa _1`$ for these two models (Table II).
Model Heat Capacity Functions can be Compared Directly with Experiments — Gō and 20-Letter Mainchain Models are More Cooperative
By considering random-energy models, we have argued above that all common calorimetric criteria using different $`\kappa `$’s are essentially equivalent when $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$ and the native state is represented by a single enthalpy value in an effective density of states that describes the transition part of an experimental excess heat capacity function after proper baseline subtractions.<sup>23</sup> The behavior of the two 20-letter models prompts us to ask a more general question: Which $`\kappa `$ computed from a model would be most relevant for comparing theory with experiment when $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ deviates significantly from unity and the native state of the chain model may have multiple enthalpy levels?
From an operational standpoint, among the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s considered, $`\kappa _2`$, $`\kappa _3`$, $`(\kappa _2)^2`$, and $`(\kappa _3)^2`$ are most directly related to experiments. This is because they can be determined by analyzing the model $`C_P`$ function alone (which corresponds to an experimental calorimetric scan) without involving an a priori definition of the “native state” (whereas such a definition is needed to determine $`T_{1/2}`$ for $`\kappa _0`$ and $`\kappa _1`$). It is also prudent to not commit prematurely to a general single-lattice-conformation definition of the native state.
By this operational standard, the 20-letter model without sidechains is the second most cooperative after the Gō model, with $`\kappa _2\approx \kappa _3=0.66`$. On the other hand, the 2-letter, 3-letter, and modified HP models are far from being calorimetrically two-state by all standards considered here: none of their $`\kappa `$’s exceeds $`0.46`$; in fact they are often much lower (Table II). In these models, at any one of the transition midpoints, the average enthalpic difference $`\langle \mathrm{\Delta }H(T)\rangle _\mathrm{D}`$ between the denatured state and the single native conformation is low relative to $`\mathrm{\Delta }H_{\mathrm{cal}}`$ (lower panels of Figures 4, 5, and 8).
2- and 3-Letter Models are Less Cooperative — “Variable Two-State” Does Not Equal “Calorimetrically Two-State”
For the 2-letter model in Figures 3a and 4, a previous study has shown that its denatured enthalpy distribution is a broad shifting peak whose center position moves continuously to higher values as the temperature is increased (for example, the peak is at $`H\approx -64`$ at $`T=1.26`$ whereas it is at $`H\approx -16`$ at $`T=5.00`$; see Fig. 5 of Ref. 72; $`H`$ is equivalent to their $`E`$). Therefore, this 2-letter example corresponds to the “variable two-state” case of Dill and Shortle (Fig. 1B of Ref. 74) with heat (increasing temperature) as the “denaturing agent.” The observation here implies that the variable two-state scenario can differ substantially from a calorimetric two-state transition if it entails significant post-denaturational shifting of the enthalpy distribution among the denatured conformations. The present calorimetric analysis agrees with previous assessments<sup>75</sup> that the 3-letter model is more cooperative (has larger $`\kappa `$’s, Table II) than the 2-letter model, though both are far from being calorimetrically two-state. We will consider the 3-letter model in more detail below. The modified HP model in Figures 3e and 8 was motivated by considerations of hydration effects. Its potential function is based on two residue types (H and P), with novel features<sup>47</sup> such that it effectively interpolates between the standard HP potential<sup>30,32</sup> (when chain conformations are open) and the “AB” potential<sup>76-78</sup> (when chain conformations are compact). In the AB potential, like residues attract and unlike residues repel. Repulsive interactions<sup>19,77</sup> of the AB type facilitate sequence design and enhance kinetic foldability in this modified model relative to the standard HP model,<sup>47</sup> though they are insufficient for calorimetric two-state cooperativity (Table II). It is interesting to note that the spatial organization of residues in the native conformation of this modified HP model (Figure 3e) is dictated mainly by the AB potential. Consequently, the two types of residues are segregated to opposite sides of the structure to minimize contact, rather than organizing into a hydrophobic (H) core surrounded by polar (P) residues as in typical HP ground-state conformations.<sup>79</sup>
Short 20-letter Sidechain Models are not Calorimetrically Cooperative
We have also calculated Klimov and Thirumalai’s<sup>48</sup> cooperativity parameter $`\mathrm{\Omega }_c`$ by extending the MC histogram technique to compute the temperature dependence of their structural overlap function $`\chi `$ (Refs. 23, 48). The results are included in Table II. While $`\mathrm{\Omega }_c`$ is basically a measure of the sharpness of a transition and does not always correlate with the degree of conformity to calorimetric two-state cooperativity,<sup>23</sup> for these six models the rank ordering of the three most cooperative models by $`\kappa _2`$ coincides with their rank ordering by $`\mathrm{\Omega }_c`$. This suggests that $`\mathrm{\Omega }_c`$ may correlate reasonably well with calorimetric cooperativity if the conformational entropies of the chain models in question are similar. The calorimetric cooperativity as measured by $`\kappa _2`$ and $`\kappa _3`$ of the 15mer 20-letter sidechain model of Klimov and Thirumalai<sup>48</sup> is low (Figures 3f and 9, their “sequence A”), and is comparable to that of the 2-letter, 3-letter, and the modified HP model. Remarkably, by the $`\mathrm{\Omega }_c`$ measure, it is by far the least cooperative among the six models. We have also completed the same analysis for their other sidechain model, “sequence B.” The results are similar ($`\kappa _2=0.25`$, other data not shown). The low levels of calorimetric cooperativity in these sidechain models may be a consequence of the shortness of the chains, as it has been observed that models with sidechains on average have higher $`\mathrm{\Omega }_c`$’s than non-sidechain models with the same number of mainchain monomers.<sup>48</sup> Nonetheless, the present results mean that how sidechains may enhance thermodynamic cooperativity in longer chain models is a question that remains to be ascertained.
The Enthalpy Distribution of the Gō Model is Trimodal
We now take a closer look, as an example, at how the underlying enthalpy distribution of the Gō model (Figures 3d, 7) gives rise to its relatively high cooperativity by the calorimetric criterion. Figure 10 shows that the Gō model enthalpy distribution is very different from that of models with much lower cooperativities, such as the 2-letter model of Socci and Onuchic.<sup>44</sup> The enthalpy distribution of the 2-letter model in Figures 3a and 4 is bimodal: the lower mode peaks at the ground-state native enthalpy ($`-84`$) and encompasses enthalpies $`<-77`$, whereas the higher mode has a shifting peak, corresponding to a temperature-dependent variable enthalpy distribution in the denatured ensemble (Fig. 5 of Ref. 72; see above). In contrast, the denatured enthalpy distribution of the Gō model consists of two widely separated peaks (Figure 10): the lower one is at $`H=-54`$ and the higher one is around $`H=-6`$ to $`-4`$. Together with the native population at $`H=-57`$, these give rise to a trimodal distribution of enthalpy. (The native peak is not shown in Figure 10.)
The data in Figures 7 and 10 imply that the heat denaturation of the Gō model takes place in the following manner. At low temperatures, $`T<0.5`$ for example, $`>95\%`$ of the chain population is in the single native conformation (Figure 3d). As the temperature is raised to $`T=0.65`$–$`0.70`$, a fraction of the native population is transferred to a group of low-enthalpy conformations with $`H`$’s around $`-54`$ (Figure 10). There is an enthalpy (energy) gap of 3 units between the ground state and the lowest-enthalpy ($`H=-54`$) nonnative conformations. Using MC histogram techniques, we estimated that there are $`\approx 10^5`$ nonnative conformations with $`H<-44`$. (For this Gō model, the number of native contacts $`\mathbf{Q}=-H`$.) The heat capacity associated with these initial thermal transitions is small in comparison with the heat absorption peak because of the relatively narrow enthalpy differences between the native and the low-enthalpy nonnative conformations. As the temperature continues to increase to $`T_{1/2}=0.75`$, chains start to unfold substantially, and a concentration of population at very high enthalpies ($`H\approx -6`$) begins to develop. This temperature coincides with the sharp peak of the heat capacity function (Figure 7, upper panel), which reflects the large-enthalpy thermal transitions from both the single ground-state conformation ($`H=-57`$) and the low-enthalpy nonnative conformations ($`H=-54`$ to $`-40`$) to the large number of high-enthalpy conformations around $`H\approx -6`$. There are non-vanishing chain populations at enthalpy levels intermediate between the two nonnative peaks, but they are not appreciable at any temperature. When the temperature is raised further to $`T=0.83`$–$`0.95`$, the population in the single ground-state and the low-enthalpy nonnative conformations greatly diminishes and practically all the chains have enthalpies above $`H=-16`$.
Why is the Gō Model More Cooperative Than Others?
Several features of this process contribute to the Gō model’s relatively high cooperativity. First, unlike the 2-letter model discussed above, the population peak of high-enthalpy conformations is quite insensitive to temperature: it shifts by merely $`\approx 2`$ enthalpy units, from $`H\approx -6`$ to $`-4`$, when the temperature is increased from $`T=0.75`$ to $`0.95`$ (Figure 10). Second, unlike the 20-letter models, whose single ground-state conformational populations become $`<0.1`$ when the temperature is raised to $`T_{\mathrm{max}}`$ (Figures 6, 9; see above), the population of the single-conformation Gō-model ground state remains substantial ($`\approx 0.3`$) at the peak of the heat capacity function. In fact, all three transition midpoint temperatures are well within the peak region of $`C_P`$ for the Gō model. And among the models tested, it is the one with both $`T_{1/2}`$ and $`T_d`$ closest to $`T_{\mathrm{max}}`$, within 1.4% and 0.4%, respectively (Figure 7, upper panel).
These observations rationalize certain differences in cooperativity between models. For instance, the Gō model is more cooperative than the 2-letter model in Figure 4 by all $`\kappa `$ measures in Table II. This is because the Gō model’s bimodal distribution of nonnative enthalpies (i.e., the denatured part of an overall trimodal distribution) implies that a larger variance in $`H`$ is possible, hence a higher peak value for $`C_P`$ \[Eq. (2)\], and therefore a larger $`\kappa _2`$, than for the 2-letter model with its single shifting broad distribution of denatured enthalpies. The bimodal denatured enthalpy distribution of the Gō model also means that the average denatured enthalpy near $`T_{1/2}`$ should be approximately one half of the entire range of possible enthalpy variations; hence $`\kappa _0`$ should be $`\approx 0.5`$ (Table II indeed gives $`\kappa _0=0.54`$). This is higher than the $`\kappa _0`$ of the 2-letter model because the latter’s denatured state is dominated by low-enthalpy conformations at its $`T_{1/2}`$. The Gō model is also more cooperative than the 20-letter model in Figure 6. For the $`\kappa _2`$ measure, this is because at $`T_{\mathrm{max}}`$ the Gō model has $`\approx 3`$ times as much chain population \[N\] in its single ground-state conformation as the 20-letter model. The highly specific, teleological interactions of the Gō model also lead to much smaller probabilities for intermediate enthalpies. These factors translate into the possibility of a larger variance in the enthalpy distribution, thus a higher peak $`C_P`$ value, and hence a higher $`\kappa _2`$ for the Gō model than for the 20-letter model.
Summary of Analysis With No Baseline Subtractions
The analysis above has shown that none of the models tested is calorimetrically two-state, though there are wide variations in how far they deviate from being so. For models with relatively high cooperativities such as the 36mer 20-letter model and the 48mer Gō model, this conclusion is still somewhat tentative because baseline subtraction schemes<sup>22,23</sup> are yet to be explored (see below). These schemes can lead to effectively higher $`\kappa `$’s (Ref. 22). However, for models that deviate far from $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}\approx 1`$ for all van’t Hoff enthalpies considered, in particular the modified HP and short sidechain models, the analysis carried out so far is already quite sufficient to establish that they are not good thermodynamic models for real calorimetrically two-state proteins.
It is noteworthy that the present three-dimensional 48mer Gō model is significantly more cooperative by the $`\kappa _2`$ criterion ($`=0.87`$) than a two-dimensional 18mer Gō model studied previously ($`\kappa _2=0.64`$).<sup>23</sup> Apparently, the longer chain length, the ability to form a three-dimensional core, and even the particular fold topology of the present Gō model might have contributed to its higher calorimetric cooperativity. These factors need to be better elucidated. As we have emphasized, the interactions in Gō models are highly artificial, as they are not based explicitly on a set of plausible microscopic physical interactions. But Gō model results are nonetheless instructive, as they may highlight intrinsic limitations to what can be achieved by contact interactions. At least in the context of an underlying flexible polymer model, the above observations on all six models suggest that there always exist conformations with enthalpies (energies) close to the ground state, even when the conformational distribution is governed by the highly specific Gō potential. This raises the question of whether it is natural to group them together with the ground-state conformation<sup>46</sup> to define a multiple-lattice-conformation native state, as advocated by the authors of the 20-letter models.<sup>48,49</sup> As will be seen below, this is a substantive physical question, not merely an issue of semantics. In fact, it is directly relevant to gaining a better physical understanding of baseline subtraction and to devising more appropriate means of comparing model predictions with calorimetric experiments.
Effects of discarding a part of model specific heat capacity to mimic experimental baseline subtractions.
Physical Meaning of Baseline Subtractions
As a first approximation, we have so far assumed, as in a previous study,<sup>23</sup> that the heat capacity functions predicted by simple lattice protein models are directly comparable to the standard “transition part” of experimental excess heat capacity functions. The latter are obtained from calorimetric data by subtracting a sigmoidal weighted baseline after first subtracting the buffer baseline.<sup>23,36,41</sup> This follows from the conventional experimental interpretation<sup>33,36,51</sup> that only the peak region of $`C_P`$ involves appreciable heat capacity contributions from thermal transitions between conformations that are both structurally and enthalpically significantly different from one another. In this conventional view, by subtracting the baselines, the heat capacity contributions discarded are essentially only those from solvation effects and small-amplitude motions of the protein, i.e., contributions that are regarded as unimportant in accounting for significant conformational changes. This assumption also underlies the standard empirical approach of using temperature-independent solvent-accessible surface areas for both the folded and the unfolded states of a protein in thermodynamic analyses of calorimetric data.<sup>33,36</sup> However, this picture does not correspond exactly to the properties of polymer protein models, which invariably predict a non-negligible heat capacity contribution from conformational transitions well above the peak $`C_P`$ transition region, though the amount of this contribution varies from model to model (see below).
There are other reasons to believe that the real physical situation may be more complex than the picture implied by our first approximation and the conventional empirical interpretation of calorimetric data. Bond vector motions measured by NMR spin relaxation indicate that protein backbone fluctuations contribute 8–14 cal mol<sup>-1</sup>K<sup>-1</sup> per residue,<sup>80,81</sup> and thus account for $`\approx 20\%`$ of the heat capacity of an unfolded protein. On the other hand, similar measurements on the folded state of two proteins suggest that backbone fluctuations on average contribute only 0.5 cal mol<sup>-1</sup>K<sup>-1</sup> per residue, and account for $`\approx 1\%`$ of the heat capacity of the native state. While the connection between NMR-measured bond vector motions and conformational diversity remains to be better elucidated, the huge difference in heat capacity contributions from backbone motions between the folded and unfolded states strongly suggests that the possibility of enthalpic transitions between structurally dissimilar conformations in the denatured ensemble cannot be neglected, and that conventional baseline subtractions might have discarded heat capacity contributions from these transitions.
More recently, a molecular dynamics simulation study using implicit solvent interactions also suggests that in addition to differences in solvation effects, there are significant heat capacity contributions to the difference between native and denatured baselines from noncovalent intraprotein interactions.<sup>38</sup> While the heat capacity contributions from model vibrational motions of the covalent bonds<sup>82</sup> are essentially the same in the native and the denatured states, these simulations suggest that noncovalent interactions change more with temperature in nonnative conformations than in the native state.<sup>38</sup> Owing to limited sampling, large numerical uncertainties were reported in this molecular dynamics study. Nonetheless, its prediction that on average non-solvation intraprotein interactions account for $`\approx 71\%`$ of the heat capacity difference between native and denatured baselines (Table 2 of Ref. 38) appears to be consistent with the NMR experiments described above: If we perform a rough estimate based on cytochrome $`c`$ data (Ref. 33), and take $`16`$–$`23`$ cal mol<sup>-1</sup>K<sup>-1</sup> per residue to be a typical native–denatured baseline difference, the NMR results<sup>81</sup> suggest that $`\approx 50`$–$`70\%`$ of this difference may originate from the difference in backbone motions in the native vs. the denatured state, which is in the same range as the average molecular dynamics result.
From a polymer perspective, it is also intuitive to expect non-vanishing heat capacity contributions from thermal transitions between conformations at different enthalpic levels with significant structural differences even at temperatures above the peak $`C_P`$ region. Given the immense diversity in conformational structures, it is physically quite inconceivable how enthalpic diversity in the denatured ensemble can be entirely eliminated such that it behaves as if all conformations occupy only a single enthalpy level, which would have meant that all intraprotein solvent-mediated interactions in the denatured ensemble were exclusively entropic.
Among the lattice protein models evaluated here, in which we have taken all interactions to be enthalpic for simplicity, even the heat capacity function of the Gō model, with its relatively high calorimetric cooperativity, has a long high-temperature tail (Figure 7, upper panel). This indicates that for this model, contributions to $`C_P`$ from conformational transitions remain non-negligible at high temperatures. A relatively long (native) tail extending to temperatures far below the peak $`C_P`$ region is also present for the two 20-letter models (Figures 6, 9, upper panels). On the other hand, in conventional analyses of calorimetric data, no such long tails are ever present in the transition part of the excess heat capacity function obtained from baseline subtractions.<sup>36,41,50,51,54</sup> Even in calorimetric analyses of non-cooperative nonprotein homopolymers,<sup>83</sup> their existence is routinely precluded by empirical baseline subtraction techniques. This mismatch between theoretical predictions and standard transition excess heat capacities necessitates a closer examination of the correspondence between the physical pictures emerging from polymer protein models and the conventional interpretation of calorimetric experiments.
Applying Baseline Subtractions to Model Heat Capacities Can Result in Higher Predicted Calorimetric Cooperativities
We now explore the effects of using an ad hoc empirical procedure, similar to what has been carried out on experimental calorimetric data, to eliminate both the native and denatured tails in the model $`C_P`$ functions plotted in the upper panels of Figures 4–9. Physically, this exercise was motivated by our recognition, based on the evidence above, that conventional calorimetric baseline analyses might have subtracted out “tail” contributions that are relevant for the evaluation of polymer model predictions. Hence, as an effort to put theoretical predictions on the same footing as the (no-tail) experimental transition excess heat capacities, we now perform baseline subtractions on model data to eliminate their tail contributions. We do expect, nonetheless, that the corresponding “tail” contributions in real experimental data are only a minor part of the heat capacity contributions discarded by conventional baseline subtraction on calorimetric measurements. There are reasons to expect that the conventional interpretation is at least partially correct in that a majority of the contributions subtracted by standard baseline analyses are indeed heat capacity contributions from solvation effects and small-amplitude protein motions. For instance, the molecular dynamics simulation discussed above estimated<sup>38</sup> that only $`\approx 11\%`$ of the native-state heat capacity came from non-covalent interactions.
Following standard experimental procedures<sup>50,51</sup> (see also Ref. 22), baselines are constructed as plausible linear extrapolations from the low-temperature and high-temperature parts of the $`C_P`$ function to its peak region; they are referred to as the native (low temperature) and denatured (high temperature) baselines. These constructions are depicted in Figure 11 and the upper panels of Figures 12 and 13 for the six lattice protein models we have been considering. More details are described in the caption for Figure 11. Baseline subtraction has two opposite effects on the predicted calorimetric cooperativity. On one hand, it decreases the value of the calorimetric enthalpy, because some areas under the $`C_P`$ curve are excluded from the integration for $`\mathrm{\Delta }H_{\mathrm{cal}}`$. This tends to increase the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratio. On the other hand, it decreases the effective peak value of the heat capacity. This tends to decrease the $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratio, as $`\mathrm{\Delta }H_{\mathrm{vH}}`$ is proportional to the effective peak $`C_P`$ value or its square root. Here we define an effective post-baseline-subtraction $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratio by substituting the new effective peak heat capacity and effective calorimetric enthalpy into the expression for $`\kappa _2`$ in Eq. (6):
$$\kappa _2\rightarrow \kappa _2^{(\mathrm{s})}\equiv \frac{2T_{\mathrm{max}}\sqrt{k_BC_{P,\mathrm{max}}^{(\mathrm{s})}}}{\mathrm{\Delta }H_{\mathrm{cal}}^{(\mathrm{s})}}.$$
$`(9)`$
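A schematic Python implementation of this subtraction is sketched below; the fitting windows on the two tails, the straight-line fits, and the use of the unsubtracted peak position are our own simplified stand-ins for the graphical constructions described in the caption for Figure 11.

import numpy as np

def kappa2_subtracted(T, Cp, win_nat, win_den, kB=1.0):
    """Effective kappa_2^(s) after linear baseline subtraction, Eq. (9).

    win_nat, win_den : (T_lo, T_hi) windows on the native (low temperature)
                       and denatured (high temperature) tails used to fit
                       the two straight baselines.
    """
    i_max = np.argmax(Cp)
    nat = (T >= win_nat[0]) & (T <= win_nat[1])
    den = (T >= win_den[0]) & (T <= win_den[1])
    base_nat = np.poly1d(np.polyfit(T[nat], Cp[nat], 1))
    base_den = np.poly1d(np.polyfit(T[den], Cp[den], 1))
    # Extrapolate each straight baseline to the peak; subtract on its own side.
    baseline = np.where(T <= T[i_max], base_nat(T), base_den(T))
    Cp_s = np.clip(Cp - baseline, 0.0, None)               # transition part
    dHcal_s = np.sum(0.5 * (Cp_s[1:] + Cp_s[:-1]) * np.diff(T))
    return 2.0 * T[i_max] * np.sqrt(kB * Cp_s[i_max]) / dHcal_s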
Table III shows that for all six models, baseline subtraction leads to an increase in apparent (effective) calorimetric cooperativity. However, both the modified HP model ($`\kappa _2^{(\mathrm{s})}=0.41`$) and the short 20-letter sidechain model ($`\kappa _2^{(\mathrm{s})}=0.54`$) remain very far from being calorimetrically two-state, despite some improvement. On the other hand, the effective calorimetric cooperativities of the 2- and 3-letter models increase dramatically (from $`\kappa _2=0.36`$ and $`0.46`$ to $`\kappa _2^{(\mathrm{s})}>0.94`$) after large areas (thick denatured tails) under their $`C_P`$ functions have been subtracted out (Figure 11a and upper panel of Figure 12). Remarkably, the Gō model’s $`\kappa _2^{(\mathrm{s})}`$ of $`1.00`$ now meets the experimental standard. The 36mer 20-letter model’s $`\kappa _2^{(\mathrm{s})}`$ also rises above $`0.94`$ (upper panel of Figure 13). We will use the 27mer 3-letter and the 36mer 20-letter models to discuss the physical implications of these enhancements of apparent calorimetric cooperativity by baseline subtractions.
Nonlinear “Formal Two-State” Baselines and Multiple-Conformation Native States
Recently, Zhou et al. made the pertinent observation that any density of states can be formally decomposed into two arbitrary “states,” and that its thermal behavior can be made to satisfy the calorimetric two-state criterion if one is willing to introduce (non-standard) nonlinear baseline subtractions.<sup>22</sup> To gain further insight into the physical meaning of baseline subtractions, we found it instructive to contrast and compare the present empirical analysis with their construction. Here is a brief summary of their formulation (in our notation). Any partition function $`Q`$ can be written as a sum of a pair of partition functions for two “states,” denoted here by “0” and “1”; viz., $`Q(T)=Q_0(T)+Q_1(T)`$. Let $`(C_P)_0`$ and $`(C_P)_1`$ be the individual heat capacities of the two states, computed from $`Q_0`$ and $`Q_1`$ respectively, and let $`T_m`$ be the midpoint temperature at which the populations of the two states are equal, i.e., $`Q_0(T_m)=Q_1(T_m)`$. Zhou et al.’s baselines are defined by the individual heat capacities: $`(C_P)_0(T)`$ for $`T<T_m`$ and $`(C_P)_1(T)`$ for $`T>T_m`$. Naturally, a calorimetric enthalpy $`\mathrm{\Delta }_0^1H_{\mathrm{cal}}`$ is defined to be the area between the $`C_P`$ curve and this baseline, and a midpoint heat capacity value $`\mathrm{\Delta }_0^1C_P(T_m)\equiv C_P(T_m)-[(C_P)_0(T_m)+(C_P)_1(T_m)]/2`$. A population-based van’t Hoff enthalpy $`\mathrm{\Delta }_0^1H_{\mathrm{vH}}(T)`$ is then computed using Eq. (3) above with $`\theta =Q_1(T)/Q(T)`$. Zhou et al. showed that in general $`\mathrm{\Delta }_0^1H_{\mathrm{vH}}(T_m)=\mathrm{\Delta }_0^1H_{\mathrm{cal}}=4k_BT_m^2\mathrm{\Delta }_0^1C_P(T_m)/\mathrm{\Delta }_0^1H_{\mathrm{cal}}`$ \[Eqs. (3), (4), (12) and (15) of Ref. 22\]. This identity, which corresponds to $`\kappa _0=(\kappa _1)^2=1`$ if $`T_{1/2}`$ is formally replaced by $`T_m`$ \[see Eq. (6)\], means that the calorimetric two-state condition is always satisfied with this particular choice of baselines.
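Continuing the Python fragments above, the nonlinear baseline itself is straightforward to evaluate once the individual heat capacities of the two formal states are available from $`Q_0`$ and $`Q_1`$; the function below is a minimal sketch in our own notation.

import numpy as np

def formal_two_state_baseline(T, Cp0, Cp1, T_m):
    """Nonlinear baseline of Zhou et al. (Ref. 22).

    Cp0, Cp1 : arrays of the individual heat capacities (C_P)_0 and (C_P)_1,
               each computed from its own partition function on the grid T
    T_m      : midpoint temperature at which Q_0(T_m) = Q_1(T_m)
    """
    return np.where(T < T_m, Cp0, Cp1)  # (C_P)_0 below T_m, (C_P)_1 above

Subtracting this baseline from the full $`C_P`$ and applying the definitions above reproduces, by construction, $`\kappa _0=(\kappa _1)^2=1`$.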
We have computed baselines for the six models according to this recipe<sup>22</sup> and included them as dotted curves in Figure 11 and the upper panels of Figures 12 and 13. (In the discussion below, they are referred to simply as “nonlinear baselines.”) For models that assume a single-conformation native state,<sup>44-47</sup> $`Q_0=Q_\mathrm{N}`$ and $`Q_1=Q_\mathrm{D}`$. For the two 20-letter models, $`Q_0`$ is constructed as the partition function for the multiple-conformation native state defined by the original authors,<sup>48,49</sup> while $`Q_1`$ accounts for the rest of the conformations. These nonlinear “formal two-state” baselines are conceptually enlightening (see below); however, it is our view that they should not be used directly to evaluate protein models. The first reason is logical: since by construction they always lead to perfect agreement with the calorimetric two-state condition, using them on model systems would abolish the substantive physical question of whether polymer protein models conform to the experimental calorimetric requirements. Second, and more importantly, such baselines have not been used by experimentalists to analyze calorimetric data. For all cases studied here, these nonlinear baselines invariably subtract more from the peak $`C_P`$ region than conventional linear or weighted baselines (Figures 11–13). This means that using these nonlinear baselines on model $`C_P`$ functions would most likely lead to an effective heat capacity function that does not physically match the experimental transition excess heat capacity function,<sup>41</sup> and thus would make it extremely difficult to conduct meaningful comparisons between theory and experiment.<sup>23</sup>
Much insight can be gained, however, by comparing the nonlinear baselines with the ad hoc empirical linear baselines we used. As the nonlinear baselines of Zhou et al.<sup>22</sup> are guaranteed to produce perfect (apparent) calorimetrically two-state behaviors, it is not unreasonable to expect that if the linear baselines are close to the nonlinear baselines, the apparent calorimetric cooperativity predicted by the linear baselines would be high, and vice versa. This appears to hold for five out of our six cases: Relatively high apparent calorimetric cooperativities resulted from linear baseline subtractions for the 2-letter, 3-letter, and 36mer 20-letter models (Table III); and as expected their linear and nonlinear baselines are quite close (Figure 11a, upper panels of Figures 12 and 13). On the other hand, the nonlinear baselines are very far away from the empirical linear baselines used for the modified HP and the 15mer sidechain models. Not surprisingly, their apparent calorimetric cooperativities remain low even after linear baseline subtractions (Figures 11c and d and Table III).
The only exception is the Gō model (Figure 11b), for which the nonlinear denatured baseline amounts to a dominant contribution to the overall heat capacity and is very far from the empirical linear denatured baseline. Yet the Gō model is the most cooperative among the models we evaluated, especially after linear baseline subtractions (Table III). The reason for this behavior is that we have taken the denatured state of this model to be the ensemble that encompasses all non-ground-state conformations. And since the enthalpy distribution among these nonnative conformations is bimodal (Figure 10), the nonlinear denatured baseline, which is the denatured heat capacity $`(C_P)_1=(C_P)_\mathrm{D}`$, involves large thermal transitions between the two denatured peaks. This accounts for its high magnitude. In addition, owing to the adoption of a single-conformation native state, there is no nonlinear native baseline in the present consideration of this Gō model. On the other hand, if a multiple-conformation native state were adopted to incorporate the low-enthalpy conformations that are now being classified as denatured, it would result in nonlinear baselines for both the alternately defined native and denatured states. Adoption of such a multiple-conformation native state would eliminate contributions to $`(C_P)_1`$ from large thermal transitions between the two enthalpy peaks in Figure 10, and hence yield a nonlinear denatured baseline with much reduced magnitude. It is expected that the nonlinear baselines would then be much closer to the empirical linear baselines used in our analysis, and would give rise to a situation much more similar to the 36mer 20-letter case, to be discussed below.
For the 36mer 20-letter model (Figure 13), the (low temperature) nonlinear native baseline derived from a multiple-conformation definition<sup>49</sup> of the native state is almost identical to the empirical linear native baseline. By construction, a nonlinear native baseline accounts for the heat capacity contribution from thermal transitions among the multiple conformations of the native state. Therefore, when an empirical linear native baseline essentially overlaps a particular nonlinear native baseline, and we use the empirical linear baseline for subtraction, we are effectively (empirically) adopting the multiple-conformation native state that underlies the construction of the given nonlinear native baseline. More generally, when empirical linear baselines for both the native and denatured states overlap significantly with their nonlinear counterparts for a particular formal two-state definition,<sup>22,49</sup> and $`T_{\mathrm{max}}\approx T_m`$, as in this particular 20-letter case (Figure 13, upper panel), the empirical linear baseline subtraction scheme may be viewed as an empirical (approximate) adoption of the given formal two-state definition for the native and denatured states. Hence, it follows from the “formal two-state” consideration<sup>22</sup> that such an empirical subtraction would lead to closer conformity to the calorimetric two-state criterion as observed here.
The 3-Letter (3LC) Model Predicts Significant Post-Denaturational Chain Expansion — Comparison with SAXS Experiments
We now broaden our attention to other thermodynamic properties. Obviously, adherence to the calorimetric two-state criterion is only one of many physical properties of real two-state proteins. Therefore, to ascertain whether a model with high apparent calorimetric cooperativity is adequate for generic properties of real two-state proteins, we should also subject its other properties to further experimental evaluation. In this spirit, we now consider the 3-letter model in more detail. This model uses a single-conformation native state,<sup>45</sup> and its apparent calorimetric cooperativity is quite high after empirical baseline subtractions, $`\kappa _2^{(\mathrm{s})}=`$ $`0.952`$. Its behavior is expected to be representative of lattice protein models that are based on additive pairwise contact energies and have small numbers of monomer (residue) types in their alphabets. For instance, in many respects the properties of the 3-letter model are similar to the 2-letter model, which also attains a high apparent calorimetric cooperativity after baseline subtractions (Table III). As discussed above, the 3-letter model is instrumental in Onuchic et al.’s $`T_\mathrm{f}/T_\mathrm{g}`$ $`=`$ $`1.6`$ estimate for small $`\alpha `$-helical proteins.<sup>59</sup>
One thermodynamic property accessible to experimental determination is the dimension of a protein, measured by its root-mean-square radius of gyration $`R_g`$ as a function of temperature. Using the MC histogram method, we have computed this function for the 3-letter model (Figure 12, lower panel). It shows a very gradual post-denaturational increase (for $`T>T_{1/2},T_{\mathrm{max}}\approx 1.5`$): Average $`R_g`$ is $`30\%`$ larger at higher temperatures than its value at the high-temperature edge ($`T\approx 1.8`$) of the peak $`C_P`$ transition region.
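The MC histogram method used here and throughout is, in essence, single-histogram reweighting: Boltzmann averages at temperatures near the simulation temperature are estimated by re-weighting the sampled conformations. Below is a minimal sketch of the idea, not the authors’ actual implementation; the function and variable names are ours.

```python
import numpy as np

def reweighted_average(obs, H, T_sim, T_new, kB=1.0):
    """Single-histogram reweighting: estimate the Boltzmann average
    of `obs` at T_new from conformations sampled at T_sim.

    obs, H: per-sample arrays of the observable (e.g. the squared
    radius of gyration) and the enthalpy.  Each sample is re-weighted
    by exp[-H (1/T_new - 1/T_sim) / kB]; the maximum log-weight is
    subtracted before exponentiating to avoid numerical overflow.
    """
    dlogw = -(1.0 / (kB * T_new) - 1.0 / (kB * T_sim)) * H
    w = np.exp(dlogw - dlogw.max())
    return np.sum(w * obs) / np.sum(w)

# e.g. sweep temperatures from one run at T_sim = 1.5 (cf. Figure 5):
# Rg = [np.sqrt(reweighted_average(rg2, H, 1.5, T)) for T in temps]
```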
It appears that this prediction is significantly different from experimental observations. Sosnick and Trewhella<sup>84</sup> have used small-angle X-ray scattering (SAXS) to monitor the temperature dependence of $`R_g`$ of ribonuclease A, one of the first few proteins shown to be calorimetrically two-state.<sup>50</sup> They observed no systematic post-denaturational increase of $`R_g`$ under both reducing (no disulfide bonds) and non-reducing conditions. Under reducing conditions (which more closely correspond to the present lattice chains without crosslinks), the transition temperature is $`51^{\circ}`$C. Sosnick and Trewhella observed no continuous chain expansion at temperatures higher than the relatively narrow transition region at $`45`$ – 54°C. Indeed, there was even a slight decrease in $`R_g`$ when the temperature reached 74°C. More recently, Hagihara et al.<sup>85</sup> used solution X-ray scattering to show that the temperature dependence of $`R_g`$ during heat denaturation of ribonuclease A and cytochrome $`c`$ can be well approximated by a strictly two-state model. Plaxco et al.<sup>64</sup> used SAXS to monitor the dependence of $`R_g`$ of protein L on guanidine hydrochloride concentration. They also did not observe any trend of post-denaturational expansion.
The significant post-denaturational chain expansion predicted by the 3-letter model is directly related to a substantial heat-induced shifting of its denatured enthalpy distribution, as evident from its thick high-temperature $`C_P`$ tail. This behavior is similar to that noted above for the 2-letter model. The discrepancy between this 3-letter model’s $`R_g`$ prediction and experiment<sup>§</sup><sup>§</sup>§ Our conclusion here is based on the fact that the 3-letter model $`R_g`$ continues to increase as the temperature is raised above the peak $`C_P`$ transition region, and that this behavior is not observed in experiments. Following this logic, if the subtraction scheme in Figure 12a is used to ensure high calorimetric cooperativity, there should be no appreciable increase in model $`R_g`$ for $`T>2.2`$ if the prediction is to be consistent with experiment. But this is not the case (Figure 12b). We believe this reflects the main physical difference between this model and experimental observation. We note, however, that a direct mapping of temperatures between the 3-letter model results and experiment is not possible because they are systems of very different sizes. For instance, the peak $`C_P`$ transition regions for real proteins cover a range of 10 – 20 degrees (Refs. 50, 84). However, if we choose an energy unit to equate the 3-letter model $`T_{\mathrm{max}}\approx 1.51`$ with the ribonuclease A midpoint temperature of 51°C, the model peak $`C_P`$ transition region would translate into a temperature range of $`\sim 130`$ degrees. suggests that, in spite of its relatively high apparent calorimetric cooperativity after empirical baseline subtractions, it suffers from essential deficiencies as a model for real two-state proteins because of the broad and shifting enthalpy distribution among its denatured conformations. Incorporation of empirical baseline subtractions does not change our previous conclusion that additive hydrophobic interactions are insufficient for calorimetric two-state cooperativity.<sup>23</sup> For the two-dimensional HP, Gō and HP+ models analyzed in Ref. 23, application of empirical baseline subtractions similar to the one used here is not sufficient for bringing their apparent van’t Hoff to calorimetric enthalpy ratios close to unity. However, baseline subtractions are able to bring the two new models with cooperative interactions introduced in Ref. 23 much closer to apparent calorimetric two-state behavior: After subtraction, $`\kappa _2^{(\mathrm{s})}=`$ $`0.90`$ for the new cooperative model with pure enthalpic interactions, and $`\kappa _2^{(\mathrm{s})}=`$ $`0.97`$ for the model with entropic HH interaction in Ref. 23. The present consideration of the 3-letter model also generalizes the previous observation that HP-like nonspecific pairwise additive interactions are insufficient to account for certain generic thermodynamic properties of real two-state proteins. This observation is consistent with the proposal above that the ratio $`T_\mathrm{f}/T_\mathrm{g}`$ $`=`$ $`1.6`$ deduced from the 3-letter model is most likely an underestimate for real two-state proteins.
Multiple-Conformation Native State and Non-Native Contacts in the 20-Letter Model
Finally, we examine in more detail the thermodynamics of the 36mer 20-letter model (Figures 13–15). This model has an apparent calorimetric cooperativity ($`\kappa _2^{(\mathrm{s})}=0.943`$) similar to that of the 2- and 3-letter models (Table III). Its model potential is the basis of a large body of interesting work,<sup>11</sup> and is expected to be representative of lattice protein models that are based on additive pairwise contact energies, with a large but finite number of monomer types in its alphabet, and a substantial fraction of the contact interactions being repulsive.<sup>19,72,77</sup> Here it also serves to exemplify models with a multiple-lattice-conformation native state. Note, however, that a single-conformation native state with a single ground-state energy $`E_\mathrm{N}`$ was used by the author of Ref. 11 to define the folding transition temperature $`T_\mathrm{f}`$ for a different lattice model in Ref. 76.
The lower panel of Figure 13 shows how the folding/denaturation transition of this model chain is tracked by different thermodynamic order parameters, which may correspond to different experimental probes. The population \[N\] of the single ground-state conformation begins to drop rapidly well below the $`C_P`$ peak temperature $`T_{\mathrm{max}}`$, whereas $`T_{\mathrm{max}}`$ essentially coincides with the midpoint temperatures for all other probes shown. This is consistent with the observation<sup>19</sup> that in general the midpoint temperature for \[N\] is lower than that for $`𝐐`$. The measure $`P(𝐐>20)`$ shows the sharpest transition, as it is a binary “formal two-state” order parameter for which a chain conformation can take only one of two values: either it is native (has $`𝐐>20`$), or not ($`𝐐\le 20`$).<sup>\**</sup><sup>\**</sup>\** Using the MC histogram technique, we estimated that there are $`4.4\times 10^9`$ different conformations in this 20-letter sequence’s $`𝐐>20`$ native state. This is $`>10^4`$ times more than the $`10^5`$ low-enthalpy conformations in the Gō model (see above), even though a 48mer Gō model’s total number of conformations is $`(4.68)^{(48-36)}=1.1\times 10^8`$ times that of this 36mer model.<sup>86</sup> This shows that if a multiple-conformation native state were to be defined for the Gō model, its conformational diversity would be much smaller than the one in this 20-letter model. The order parameter $`𝐐/𝐐_\mathrm{N}`$ shows a broader transition because there are 40 possible Q values for this 36mer chain. For this model, the temperature dependence of $`\chi `$ correlates almost perfectly with that of $`𝐐`$ (see inset in the upper panel of Figure 13). These observations illustrate that the sharpness of a transition<sup>48</sup> can vary significantly depending on the probe (order parameter), whereas the calorimetric criterion is a more fundamental measure of cooperativity<sup>33</sup> because it directly probes the underlying density of enthalpic states.<sup>23</sup>
This 20-letter model is a better mimic of real two-state proteins than the 3-letter model in certain respects. For instance, its $`R_g`$ shows no significant post-denaturational expansion and therefore enjoys better agreement with the SAXS experiments discussed above (Figure 15, lower panel). We now briefly touch on two issues that are likely to be relevant in future assessments of the 20-letter model’s conformity to experimental two-state behavior. (i) Structural diversity of the native state: The 20-letter model allows for significant conformational variation (Figure 14). For this particular sequence, this leads to the prediction that the native state has a higher heat capacity contribution from main-chain-like motions than the fully unfolded state, as is evident from the higher $`C_P`$ value in the native tail region than in the denatured tail region (Figure 13, upper panel).<sup>††</sup><sup>††</sup>††A recent Gō-like continuum three-helix bundle model also predicts a higher heat capacity for the native state than the denatured state.<sup>22</sup> However, this does not appear to agree with the NMR experiments discussed above.<sup>81</sup> (ii) The prevalence of nonnative contacts: For this model, the number of nonnative contacts undergoes a sharp transition near the heat absorption peak (Figure 15, upper panel). The average number is $`>3`$ at $`T_{\mathrm{max}}`$, reaches a peak of $`6`$ at a temperature slightly higher than $`T_{\mathrm{max}}`$, then settles down gradually at a relatively high average number of $`4.5`$ for the high-temperature unfolded state. Recent NMR experiments show that nonnative interactions can exist in the compact denatured states of some proteins,<sup>87,88</sup> but this phenomenon is not universal.<sup>89</sup> If the prevalence of nonnative contacts is not a generic property of denatured states of real two-state proteins, it would be important to ascertain whether the high number of nonnative contacts observed in this particular sequence reflects a general feature of its underlying 20-letter contact potential.
Concluding Remarks
We have examined the implications of calorimetric two-state cooperativity and other experimentally determined thermodynamic properties for a protein’s density of enthalpic states.<sup>23,90</sup> In general, they require a narrow enthalpy distribution among the denatured conformations, as has been recently proposed.<sup>23</sup> Energy landscape theory<sup>9</sup> has allowed us to make a connection between calorimetric two-state cooperativity and folding kinetics. Using an analytical random-energy model, we showed that the folding landscape parameter $`T_\mathrm{f}/T_\mathrm{g}\approx 6.0`$, which is significantly higher than a previous estimate of $`1.6`$ for small ($`\sim 60`$-residue) $`\alpha `$-helical proteins.<sup>59</sup> Experimental observations of single-exponential folding without kinetic trapping for a number of small single-domain proteins 50–80 residues long with no disulfide bonds<sup>62-67,91-93</sup> are consistent with either $`T_\mathrm{f}/T_\mathrm{g}\approx 1.6`$ or $`\approx 6.0`$. This is because for proteins with $`T_\mathrm{f}<100^{\circ}`$C, both ratios imply a $`T_\mathrm{g}`$ far lower than any temperature at which folding kinetic experiments have been conducted ($`T_\mathrm{g}<233`$ K or $`<62`$ K). In general, the present random-energy-model results also imply that folding of all calorimetric two-state proteins should not be affected by kinetic traps. However, this does not appear to agree with experiment. Notable counter-examples include the calorimetrically two-state<sup>33</sup> lysozyme<sup>94,95</sup> and cytochrome $`c`$.<sup>96</sup> This underscores an intrinsic limitation of the random-energy-model method because it is not a chain-based approach and does not address sequence-specific properties.
We have evaluated six lattice protein models against the calorimetric two-state criterion. The initial stage of our analysis treated the native state as a single lattice conformation. This was based on the assumption made in conventional analyses of calorimetric data, which identify the native state with the structure deposited in the Protein Data Bank.<sup>33,36</sup> Therefore, as in a previous investigation,<sup>23</sup> we first evaluated $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratios directly from the model $`C_P`$ functions, without any baseline subtractions (i.e., the baseline was first taken to be simply the $`C_P=0`$ axis). In this evaluation, none of the models came close to meeting the calorimetric two-state standard. This is consistent with our previous conclusion, based on two-dimensional models, that when the native state is considered to consist of a single conformation, pairwise additive contact interactions are insufficient for calorimetric two-state cooperativity.<sup>23</sup>
However, based on both theoretical and experimental considerations, principally data from NMR bond vector motion measurements,<sup>81</sup> we have come to believe that it would be profitable to explore using empirical linear (nonzero) baselines to subtract out “tail contributions” from model $`C_P`$ functions so as to compare them on a more equal footing with experimental transition excess heat capacity functions. We have therefore taken the second step of incorporating empirical baseline subtractions in our model evaluation. Analysis of a 20-letter lattice model indicates that subtracting a nonzero native baseline amounts to a re-definition of the native state. Physically, the empirical subtraction operation is roughly equivalent to (i) classifying more conformations as native, (ii) including their contributions in the thermodynamic properties of a multiple-conformation native state, and (iii) excluding thermal transitions among these multiple native conformations from contributing to the subtracted heat capacity function.
After baseline subtractions, a Gō model meets the calorimetric two-state standard. However, while the teleological Gō potential is extremely useful for posing “what if” questions,<sup>43,46</sup> whether and how it can be rationalized in terms of physically plausible interactions remains to be clarified. Among models with a finite alphabet of residue types, the apparent $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratio for the 36mer 20-letter model is relatively high after empirical baseline subtraction, though it still falls short of meeting the high experimental standard for two-state cooperativity. (Its $`(\kappa _2^{(\mathrm{s})})^2=0.89`$, whereas the corresponding ratio for real two-state proteins is $`\approx 0.96`$.) Other models with smaller alphabets or shorter chain lengths either have low $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratios or exhibit significant post-denaturational chain expansions that appear to contradict X-ray scattering experiments.<sup>84,85</sup> This suggests that a relatively high level of interaction heterogeneity — as characterized by a larger alphabet<sup>11,97-99</sup> and the presence of repulsive interactions<sup>19,72,77</sup> — is necessary for more proteinlike thermodynamic cooperativity.
The low-temperature tails in the $`C_P`$ functions of the 36mer 20-letter and the Gō models before baseline subtractions are direct consequences of the low-enthalpy conformational diversity embodied in the multiple-conformation native state of the 36mer 20-letter model, and the existence in the Gō model of $`10^5`$ conformations with enthalpies very close to its ground state. This suggests that, for flexible heteropolymer models that achieve high apparent calorimetric cooperativity with only pairwise additive contact interactions, the native state effectively defined by an empirical native baseline would inevitably involve significant conformational fluctuation (as modeled here by different discrete lattice conformations). If one assumes that this model prediction captures at least partially the properties of real proteins, this would imply that the a posteriori experimental calorimetric “native state” defined operationally by empirical baseline subtractions may involve significant conformational diversity, and therefore may be qualitatively different from the a priori single-conformation native state used in conventional interpretation.<sup>33,36</sup>
One of the main goals of this study was to ascertain the degree to which proteinlike thermodynamic cooperativity can be achieved by simple models, especially the question of whether pairwise additive contact interactions are sufficient. This is part of an effort to delineate the extent to which existing simple protein models capture generic protein properties.<sup>37</sup> This issue is also relevant to a related question regarding the sufficiency of contact interactions for protein structure prediction.<sup>100</sup> Our analysis of the 36mer 20-letter model is particularly instructive. Its apparent calorimetric cooperativity is relatively high after empirical baseline subtractions. However, how well its predicted native conformational diversity matches that in real proteins remains to be further investigated, especially in view of the apparent discrepancy between NMR main-chain bond vector motion measurements and the relative magnitudes of the native and unfolded heat capacities in this model.
Conventional interpretation of calorimetric data has been premised on a single-conformation, X-ray crystal-structure-like native state. The present analysis suggests a new perspective that involves a higher degree of conformational heterogeneity, namely (i) the possibility of a multiple-conformation native state, and (ii) the possibility that conventional baseline subtractions could have masked a non-negligible post-denaturational change in chain dimension driven by thermal transitions among denatured conformations at different enthalpic levels. In this alternate scenario, the relationship between calorimetric two-state cooperativity and a protein’s underlying enthalpic density of states becomes more complex. Nonetheless, if one characterizes the thermodynamics of real two-state proteins by both the calorimetric two-state criterion and the experimental observation<sup>84,85</sup> that no significant post-denaturational chain expansion took place, one central aspect of the physical picture<sup>23</sup> remains essentially the same: For thermodynamically two-state proteins, there is no significant post-denaturational shifting of the enthalpy distribution among the conformations of the denatured state relative to the average enthalpy of the (multiple-conformation) native state. On the other hand, a corresponding pre-denaturational shifting (i.e., under native conditions) does not contradict the experimental observations. This is consistent with the multiple-state picture<sup>101,102</sup> emerging from native-state hydrogen exchange experiments,<sup>103,104</sup> as has been discussed.<sup>23</sup> However, it is noteworthy that the baseline analysis in the present work does raise the possibility that parts of the structural fluctuation revealed by native-state hydrogen exchange can in principle correspond to conformational diversities that have been operationally absorbed into the baseline-defined calorimetric native state.
Acknowledgments
We thank Yawen Bai, Wayne Bolen, Julie Forman-Kay, Ernesto Freire, Roxana Georgescu, Lewis Kay, Ed Lattman, Themis Lazaridis, Kip Murphy, Kevin Plaxco, Nick Socci, Tobin Sosnick, María-Luisa Tasayco, and Dev Thirumalai for helpful discussions. We thank Julie Forman-Kay, Lewis Kay and José Onuchic for their critical reading of the manuscript and very helpful comments. This work was supported by grant MT-15323 to H.S.C. from the Medical Research Council of Canada.
Appendix
Statistical mechanics of a strictly two-state model.
Here we describe the basic thermodynamics of a strictly two-state model, which may be viewed as the $`\sigma _H\to 0`$ limit of the random-energy model given by Eq. (7) above. The simplicity of this extreme case makes it useful for further elucidating the relationship among different midpoint temperatures and van’t Hoff enthalpies in the analysis of calorimetric cooperativity. The strictly two-state model is given by the partition function
$$Q(T)=1+g_\mathrm{D}\mathrm{e}^{-H_\mathrm{D}/(k_BT)},$$
$`(\mathrm{A1})`$
where $`\mathrm{g}_\mathrm{D}`$ in Eq. (7) is re-written as $`g_\mathrm{D}`$ to highlight that we now consider a discrete rather than a continuous density of states.<sup>23</sup> For this model, $`\mathrm{\Delta }H_{\mathrm{cal}}=H_\mathrm{D}`$; and the average enthalpy
$$H(T)=\frac{g_\mathrm{D}H_\mathrm{D}\mathrm{e}^{-H_\mathrm{D}/(k_BT)}}{1+g_\mathrm{D}\mathrm{e}^{-H_\mathrm{D}/(k_BT)}}.$$
$`(\mathrm{A2})`$
It follows that the specific heat capacity
$$C_P=\frac{\partial H(T)}{\partial T}=\frac{H_{\mathrm{D}}^{2}}{k_BT^2}\frac{g_\mathrm{D}\mathrm{e}^{-H_\mathrm{D}/(k_BT)}}{(1+g_\mathrm{D}\mathrm{e}^{-H_\mathrm{D}/(k_BT)})^2}.$$
$`(\mathrm{A3})`$
This functional form gives a single maximum value for $`C_P`$ at a certain $`T=T_{\mathrm{max}}`$. The relation between $`T_{\mathrm{max}}`$ and the population midpoint temperature
$$T_{1/2}=\frac{H_\mathrm{D}}{k_B\mathrm{ln}g_\mathrm{D}}$$
$`(\mathrm{A4})`$
may be determined as follows. First, we note that the slope of the specific heat function at the population midpoint
$$\frac{dC_P}{dT}\bigg|_{T=T_{1/2}}=-\frac{H_{\mathrm{D}}^{2}}{2k_B(T_{1/2})^3}<0.$$
$`(\mathrm{A5})`$
This establishes $`T_{1/2}>T_{\mathrm{max}}`$ for a strictly two-state model. We then seek a good estimate of $`T_{\mathrm{max}}`$ by attempting an approximate solution to the $`dC_P/dT=0`$ condition — which is equivalent to the equation
$$g_\mathrm{D}\mathrm{e}^{-\xi }=\frac{\xi -2}{\xi +2},$$
$`(\mathrm{A6})`$
where $`\xi =H_\mathrm{D}/(k_BT_{\mathrm{max}})`$. For $`\mathrm{ln}g_\mathrm{D}\gg 1`$, which is a reasonable assumption for proteins, as discussed in the text,
$$T_{\mathrm{max}}\approx \frac{H_\mathrm{D}}{k_B[\mathrm{ln}g_\mathrm{D}+4/(2+\mathrm{ln}g_\mathrm{D})]}<T_{1/2}.$$
$`(\mathrm{A7})`$
The last inequality follows from Eq. (A4) for $`T_{1/2}`$, and confirms the conclusion we have drawn from Eq. (A5). Finally, since by Eqs. (A2) and (A4) $`H(T_{1/2})`$ $`=`$ $`\mathrm{\Delta }H_{\mathrm{cal}}/2`$, we have $`T_d=T_{1/2}`$. Therefore, for a strictly two-state model,
$$T_d=T_{1/2}>T_{\mathrm{max}}.$$
$`(\mathrm{A8})`$
We now turn to the various van’t Hoff to calorimetric enthalpy ratios considered in the text \[Eq. (6)\]. Obviously, by definition $`\kappa _0=1`$ for the strictly two-state model. Moreover, by Eqs. (A3) and (A4),
$$2T_{1/2}\sqrt{k_BC_P(T_{1/2})}=H_\mathrm{D}=\mathrm{\Delta }H_{\mathrm{cal}}.$$
$`(\mathrm{A9})`$
Hence $`\kappa _1=\kappa _3=1`$ as well, because $`T_{1/2}=T_d`$. On the other hand,
$$\kappa _2=2T_{\mathrm{max}}\sqrt{k_BC_P(T_{\mathrm{max}})}/H_\mathrm{D}=\sqrt{1-4(k_BT_{\mathrm{max}}/H_\mathrm{D})^2}<1.$$
$`(\mathrm{A10})`$
However, for proteinlike systems, $`H_\mathrm{D}\gg k_BT`$ is expected for any $`T`$ between 0 and 100°C, hence $`T_{1/2}=T_{\mathrm{max}}`$ and $`\kappa _2=1`$ are very good approximations. For instance, if we use the parameters in the text for $`H_\mathrm{D}`$ and $`g_\mathrm{D}`$, which were motivated by experimental data on CI2 (Fig. 3 of Ref. 54), we get $`T_{1/2}=336.190`$ K, whereas $`T_{\mathrm{max}}=336.025`$ K is only $`0.17^{\circ}`$C lower, and $`\kappa _2=0.9997`$. Therefore, for a strictly two-state model with these proteinlike parameters, practically all three midpoint temperatures are identical, and all $`\kappa `$’s are equal to one.
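The numbers quoted in this paragraph follow directly from Eqs. (A4), (A7), and (A10); a few lines of Python suffice to verify them (enthalpy in units of $`k_B`$ Kelvin, so $`k_B=1`$):

```python
import numpy as np

H_D = 3.0e4    # denatured enthalpy, units of k_B Kelvin (text, Fig. 1)
g_D = 5.68e38  # denatured-state degeneracy (text, Fig. 1)

ln_g = np.log(g_D)
T_half = H_D / ln_g                               # Eq. (A4)
T_max = H_D / (ln_g + 4.0 / (2.0 + ln_g))         # Eq. (A7)
kappa2 = np.sqrt(1.0 - 4.0 * (T_max / H_D) ** 2)  # Eq. (A10)

print(T_half, T_max, kappa2)  # 336.19 K, 336.02 K, 0.9997
```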
References
Table I
| $`T_{\mathrm{midpoint}}`$ | $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ | references | $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ | references |
| --- | --- | --- | --- | --- |
| $`T_{1/2}`$ | $`\kappa _0`$ | Ref. 23, Eq. (4) | | |
| | $`\theta =`$ \[D\] | | | |
| $`T_{1/2}`$ | $`\kappa _1`$ | Ref. 23 | $`(\kappa _1)^2`$ | Ref. 23 |
| $`T_{\mathrm{max}}`$ | $`\kappa _2`$ | Ref. 40, Eq. (39) | $`(\kappa _2)^2`$ | Ref. 40, Eq. (38) |
| | | | | Ref. 41, Eq. (21) |
| $`T_d`$ | $`\kappa _3`$ | Ref. 50, Eq. (7) | $`(\kappa _3)^2`$ | Ref. 51, Eq. (11) |
| | | | $`\theta =\mathrm{\Delta }H/\mathrm{\Delta }H_{\mathrm{cal}}`$ | Ref. 22, Eq. (22) |
Table I. Different definitions in the literature for $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$, the van’t Hoff to calorimetric enthalpy ratio. $`T_{\mathrm{midpoint}}`$ is the midpoint temperature of the given definition(s); see Eq. (6) in the text. Equation numbers in the table are those in the example reference(s) in which a given formula is used or proposed. $`\theta `$’s are shown only for $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$’s that follow directly from Eq. (4). Note that $`\kappa _0`$, $`\kappa _2`$, $`(\kappa _2)^2`$, and $`(\kappa _3)^2`$ are equal, respectively, to the expressions “$`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$,” “$`\mathrm{\Delta }H_{\mathrm{vH}}^{\mathrm{exp}}/\mathrm{\Delta }H_{\mathrm{cal}}`$,” “$`\mathrm{\Delta }H_{\mathrm{vH}}^{\mathrm{exp}(\mathrm{a})}/\mathrm{\Delta }H_{\mathrm{cal}}`$,” and “$`\mathrm{\Delta }H_{\mathrm{vH}}^{\mathrm{exp}(\mathrm{a})}/\mathrm{\Delta }H_{\mathrm{cal}}`$” in Ref. 23.
Table II
| Model | $`\mathrm{\Delta }H_{\mathrm{cal}}`$ | $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ | | | | $`\mathrm{\Omega }_c`$ |
| --- | --- | --- | --- | --- | --- | --- |
| | | $`\kappa _0`$ | $`\kappa _1`$ | $`\kappa _2`$ | $`\kappa _3`$ | |
| (a) 2-letter (27mer) | $`68.5`$ | $`0.26`$ | $`0.32`$ | $`0.36`$ | $`0.24`$ | $`11.2`$ |
| (b) 3-letter (27mer) | $`73.9`$ | $`0.36`$ | $`0.43`$ | $`0.46`$ | $`0.31`$ | $`20.7`$ |
| (c) 20-letter (36mer) | $`15.0`$ | $`0.10`$ | $`0.12`$ | $`0.67`$ | $`0.66`$ | $`38.9`$ |
| (d) Gō (48mer) | $`55.2`$ | $`0.54`$ | $`0.78`$ | $`0.87`$ | $`0.87`$ | $`192`$ |
| (e) Modified “HP” (36mer) | $`35.1`$ | $`0.17`$ | $`0.23`$ | $`0.33`$ | $`0.31`$ | $`12.4`$ |
| (f) Sidechain (15mer) | $`11.6`$ | $`0.05`$ | $`0.07`$ | $`0.38`$ | $`0.36`$ | $`5.69`$ |
Table II. Calorimetric cooperativity of the lattice protein models in Figure 3. Thermodynamic quantities are deduced from Figures 4–9: $`\kappa _0`$ involves the population-based van’t Hoff enthalpy,<sup>23</sup> which can be readily read off from the $`\mathrm{\Delta }H_\mathrm{D}`$ curves. $`\kappa _1`$, $`\kappa _2`$, and $`\kappa _3`$ \[Eq. (6)\] are deduced from the $`C_P`$ functions, and $`\mathrm{\Delta }H_{\mathrm{cal}}`$ is obtained by numerical integration of $`C_P`$ over $`T`$. The Klimov-Thirumalai<sup>48</sup> cooperativity parameter $`\mathrm{\Omega }_c`$ is calculated for these models and included for comparison; the present $`\mathrm{\Omega }_c=5.69`$ is slightly different from the value $`5.32`$ reported by Klimov and Thirumalai.<sup>48</sup>
Table III
| Model | $`T_{\mathrm{max}}`$ | $`C_{P,\mathrm{max}}`$ | $`C_{P,\mathrm{max}}^{(\mathrm{s})}`$ | $`\mathrm{\Delta }H_{\mathrm{vH}}^{(\mathrm{s})}`$ | $`\mathrm{\Delta }H_{\mathrm{cal}}^{(\mathrm{s})}`$ | $`\kappa _2^{(\mathrm{s})}`$ |
| --- | --- | --- | --- | --- | --- | --- |
| (a) 2-letter (27mer) | $`1.35`$ | $`80.6`$ | $`69.5`$ | $`22.6`$ | $`24.2`$ | $`0.932`$ |
| (b) 3-letter (27mer) | $`1.56`$ | $`117`$ | $`105`$ | $`32.0`$ | $`33.6`$ | $`0.952`$ |
| (c) 20-letter (36mer) | $`0.282`$ | $`316`$ | $`294`$ | $`9.66`$ | $`10.3`$ | $`0.943`$ |
| (d) Gō (48mer) | $`0.764`$ | $`986`$ | $`965`$ | $`47.5`$ | $`47.3`$ | $`1.00`$ |
| (e) Modified “HP” (36mer) | $`0.558`$ | $`107`$ | $`102`$ | $`11.3`$ | $`27.8`$ | $`0.406`$ |
| (f) Sidechain (15mer) | $`0.268`$ | $`66.4`$ | $`59.9`$ | $`4.14`$ | $`7.75`$ | $`0.535`$ |
Table III. Effects of baseline subtractions on the predicted calorimetric cooperativities of the six lattice protein models considered in this work: The effective van’t Hoff to calorimetric enthalpy ratio $`\kappa _2^{(\mathrm{s})}`$ (right column) is equal to $`\mathrm{\Delta }H_{\mathrm{vH}}^{(\mathrm{s})}/\mathrm{\Delta }H_{\mathrm{cal}}^{(\mathrm{s})}`$ \[Eq. (9)\]. The definitions of all quantities tabulated and methods to determine them are described in the text, Figure 11, and upper panels of Figures 12 and 13.
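As a consistency check on Table III, the last three columns are related by $`\mathrm{\Delta }H_{\mathrm{vH}}^{(\mathrm{s})}=2T_{\mathrm{max}}\sqrt{k_BC_{P,\mathrm{max}}^{(\mathrm{s})}}`$ and $`\kappa _2^{(\mathrm{s})}=\mathrm{\Delta }H_{\mathrm{vH}}^{(\mathrm{s})}/\mathrm{\Delta }H_{\mathrm{cal}}^{(\mathrm{s})}`$ in model units with $`k_B=1`$. A short script reproducing the tabulated ratios from the other entries (to the rounding of the table):

```python
import math

# (model, T_max, C_P,max^(s), dH_cal^(s)) taken from Table III
rows = [
    ("2-letter (27mer)",    1.35,   69.5, 24.2),
    ("3-letter (27mer)",    1.56,  105.0, 33.6),
    ("20-letter (36mer)",   0.282, 294.0, 10.3),
    ("Go (48mer)",          0.764, 965.0, 47.3),
    ("modified HP (36mer)", 0.558, 102.0, 27.8),
    ("sidechain (15mer)",   0.268,  59.9,  7.75),
]

for name, t_max, cp_max, dh_cal in rows:
    dh_vh = 2.0 * t_max * math.sqrt(cp_max)  # van't Hoff enthalpy, k_B = 1
    print(f"{name}: dH_vH = {dh_vh:.3g}, kappa2_s = {dh_vh / dh_cal:.3f}")
```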
Figure Captions
Fig. 1 Densities of states $`g(H)`$ of random energy models. Each parabolic curve is $`\mathrm{ln}g(H)`$ from Eq. (7) with $`H_\mathrm{D}=3\times 10^4`$ (vertical dashed lines), $`\mathrm{g}_\mathrm{D}=5.68\times 10^{38}`$, as described in the text, and $`H`$ is in units of $`k_B`$. The $`\kappa _0`$ values of these curves, $`0.6`$, $`0.80`$, $`0.95`$, and $`0.98`$, quantify the different degrees of cooperativity of four models given here as examples, with standard deviations of denatured enthalpy $`\sigma _H=`$ $`1800`$, $`1350`$, $`700`$, and $`440k_B`$ respectively. $`\kappa _0`$’s are the population-based<sup>23</sup> $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ ratios. The horizontal dashed line highlights the fact that for these models it is possible for $`g(H)<1`$; and the dot indicates that their unique native (N) states have zero enthalpy \[$`\delta `$-function in Eq. (7)\]. Note that the logarithmic scale along the vertical axis implies that a 0.693 decrease in $`\mathrm{ln}g`$ is equivalent to halving the value of $`g`$ itself. Hence the distribution of $`g`$ is much sharper than this logarithmic plot might have otherwise conveyed.
Fig. 2 Relationship among different calorimetric two-state criteria in the random energy models defined by Eq. (7). See text and Table I for definitions and references. Left column: (a) Midpoint transition temperatures and (b) van’t Hoff to calorimetric enthalpy ratios, as functions of the standard deviation $`\sigma _H`$ of denatured enthalpy distribution. (b) shows $`\kappa `$’s vs. $`\sigma _H`$ times a constant, so that the horizontal scale corresponds to Onuchic et al.’s expression<sup>9</sup> for $`T_\mathrm{g}/T_\mathrm{f}`$. We note that $`\kappa _0`$ in (b) is well approximated by Eq. (13) of Ref. 23. Right column: Experimental formulas for $`\mathrm{\Delta }H_{\mathrm{vH}}/\mathrm{\Delta }H_{\mathrm{cal}}`$ vs. the population-based $`\kappa _0`$ used in our theoretical analyses.
Fig. 3 Recent three-dimensional cubic lattice protein models considered in this paper for their conformities to the calorimetric two-state criterion. Monomers (residues) are numbered from one end of the chain to the other; monomer 1 corresponds to the leftmost letter of a sequence. Each model protein chain is shown in its unique native or ground-state (lowest-enthalpy) structure. The corresponding sequence is also included, except for the Gō model in (d), as the interactions of a Gō model are determined solely by the ground-state conformation it presumes. (a) A 2-letter model of Socci and Onuchic (sequence 002 in Table 1 of Ref. 44). (b) A 3-letter model of Socci et al. (sequence in Fig. 3 of Ref. 45). (c) A 20-letter model of Gutin et al. (sequence in Fig. 1 of Ref. 49). (d) A Gō model of Pande and Rokhsar (structure in Fig. 1 of Ref. 46). (e) A modified HP “solvation” model of Sorenson and Head-Gordon (sequence 6 in Table 1 of Ref. 47). Filled and open circles represent the H and P monomers, respectively, in this modified HP model. (f) A 20-letter sidechain model of Klimov and Thirumalai (sequence A in Fig. 1 of Ref. 48). Here the main-chain monomers are numbered, and sidechains are represented by grey circles.
Fig. 4 Thermodynamic cooperativity of the 2-letter model in Fig. 3a. Results are obtained by the Monte Carlo (MC) histogram technique using simulation at $`T=1.5`$. \[N\] and \[D\] are respectively the fractional native and denatured population, \[N\] $`+`$ \[D\] $`=1`$. In this figure and subsequent Figs. 5–9, the native state of each model is taken to be only its single ground-state (lowest $`H`$) conformation, and the denatured state consists of all other conformations.<sup>23</sup> The vertical lines give the midpoint temperatures. From left to right, they are $`T_{1/2}`$ when \[N\] $`=`$ \[D\] $`=1/2`$ (dashed line), $`T_{\mathrm{max}}`$, and $`T_d`$ (solid lines). In all six models studied here (Figs. 4–9), $`T_{1/2}<T_{\mathrm{max}}<T_d`$. Upper panel: the specific heat capacity $`C_P`$ is defined by Eq. (2) in the text; $`(C_P)_\mathrm{D}`$ is the specific heat capacity of the denatured ensemble, obtained by replacing the Boltzmann averages $`\mathrm{}`$ in Eq. (2) over the full ensemble by averages $`\mathrm{}_\mathrm{D}`$ over the denatured (nonnative) ensemble.<sup>23</sup> Lower panel: The excess heat function $`\mathrm{\Delta }H`$ (solid curve increasing with $`T`$) is given by Eq. (1) in the text, $`\mathrm{\Delta }H_\mathrm{D}`$ (dashed curve) is the corresponding average over the denatured ensemble,<sup>23</sup> both are normalized by (in units of) $`\mathrm{\Delta }H_{\mathrm{cal}}`$ obtained by numerical integration of the entire area under the $`C_P`$ curve, part of which is shown in the upper panel. Our results for $`C_P`$ and $`\mathrm{\Delta }H`$ are numerically consistent with the $`C_V`$ and $`E`$ functions in Figs. 10 and 9 of the original study.<sup>72</sup>
Fig. 5 Same as Fig. 4, but for the 3-letter model in Fig. 3b; obtained by the MC histogram technique from simulation at $`T=1.5`$.
Fig. 6 Same as Fig. 4, but for the 20-letter model in Fig. 3c; obtained by the MC histogram technique from simulation at $`T=0.27`$.
Fig. 7 Same as Fig. 4, but for the Gō model in Fig. 3d; all continuous curves are obtained by the MC histogram technique from simulation at $`T=0.75`$. For this model, $`T_{\mathrm{max}}`$ ($`0.764`$) is almost equal to $`T_d`$ ($`0.767`$). Black dots in the lower panel are fractional native populations \[N\] at six different temperatures computed by direct MC simulations, showing good agreement with results from the histogram method.
Fig. 8 Same as Fig. 4, but for the modified HP “solvation” model in Fig. 3e; obtained by the MC histogram technique from simulation at $`T=0.6`$. Our simulated $`C_P`$ function (upper panel) is consistent with the original simulation ($`C_V`$ of sequence 6 in Fig. 8 of Ref. 47).
Fig. 9 Same as Fig. 4, but for the 20-letter sidechain model in Fig. 3f; obtained by the MC histogram technique from simulation at $`T=0.25`$. The $`C_P`$ function in the upper panel is consistent with the original heat capacity simulation ($`C_V`$ in Fig. 2c of Ref. 48). Our results are also consistent with the thermodynamics properties $`\chi `$, $`\mathrm{\Delta }\chi `$, and $`P_{\mathrm{NBA}}`$ given by Klimov and Thirumalai<sup>48</sup> in their Fig. 2 (data not shown).
Fig. 10 Distributions of denatured (nonnative) enthalpy $`H`$ of the 48mer Gō model in Figs. 3d and 7 at different temperatures $`T`$, obtained by direct MC simulations (same temperatures as the black dots in Fig. 7). The native enthalpy is $`-57`$. The total area under a distribution curve is proportional to the fractional denatured population \[D\] at the given temperature.
Fig. 11 Exploring effects of baseline subtractions on predicted calorimetric cooperativity. Ad hoc baseline subtractions are applied to the heat capacity functions of the 2-letter (a), Gō (b), modified HP (c), and 20-letter sidechain (d) models. The model heat capacities ($`C_P`$’s) are the same as those presented in Figures 4 and 7–9. In each plot, the shaded area is subtracted from the original (pre-subtraction) $`\mathrm{\Delta }H_{\mathrm{cal}}`$ to yield a new effective calorimetric enthalpy $`\mathrm{\Delta }H_{\mathrm{cal}}^{(\mathrm{s})}`$ ($`<\mathrm{\Delta }H_{\mathrm{cal}}`$). Native and denatured baselines with non-zero slopes are constructed for (b) and (d). Denatured baselines with negative slopes are provided for (a) and (c), but their native baselines are assumed to have zero slope (i.e., no new native baseline) because the significant curvatures of their $`C_P`$ functions at low temperatures do not appear to warrant linear positive-slope extrapolations. Solid vertical lines mark the temperature $`T_{\mathrm{max}}`$ at the peak of heat capacity functions; the black dot marks the arithmetic mean of the values of native and denatured baselines at $`T_{\mathrm{max}}`$. Following standard experimental calorimetric baseline procedures<sup>50,51</sup> (see also Ref. 22), the new effective heat capacity peak value $`C_{P,\mathrm{max}}^{(\mathrm{s})}`$ is given by the vertical measure between the black dot and the pre-subtraction $`C_{P,\mathrm{max}}=C_P(T_{\mathrm{max}})`$. The quantities $`C_{P,\mathrm{max}}^{(\mathrm{s})}`$ and $`\mathrm{\Delta }H_{\mathrm{cal}}^{(\mathrm{s})}`$ are then used to compute the new effective van’t Hoff to calorimetric enthalpy ratios $`\kappa _2^{(\mathrm{s})}`$ in Table III. Included for comparison are nonlinear “formal two-state” baselines (dotted curves) constructed using the method of Zhou et al.<sup>22</sup> Nonlinear baselines correspond to heat capacity functions $`(C_P)_0`$ and $`(C_P)_1`$ of the native and denatured ensembles respectively. No native nonlinear baseline is provided for (a) – (c) because each of their native states is taken to have only a single conformation, as in the original analyses.<sup>44,46,47</sup> Hence $`(C_P)_0=0`$ and $`(C_P)_1=(C_P)_\mathrm{D}`$ for (a) – (c). On the other hand, for the 20-letter sidechain model in (d), the nonlinear native baseline is calculated<sup>22</sup> from a multiple-conformation native state defined by the original authors.<sup>48</sup> Vertical dashed lines mark the temperature $`T_m`$. For (a) – (c), $`T_m=T_{1/2}`$; for (d), $`T_m`$ is the temperature at which one half of the chain population is in the multiple-conformation native state (“native basin of attraction”) defined in Ref. 48. See the text for further details.
Fig. 12 Thermodynamic/calorimetric cooperativity of a 3-letter model. (a) Same as Figure 11a, but for the 3-letter model of Socci et al.<sup>45</sup> in Figures 3b and 5. (b) Root-mean-square radius of gyration $`R_g`$ of this 3-letter chain model vs. temperature. (Square root of the Boltzmann average of square radius of gyration of the chains.) $`R_g`$ continues to increase substantially as temperature is raised well above the transition region (vertical dashed and solid lines).
Fig. 13 Thermodynamic/calorimetric cooperativity of a 20-letter model. Upper panel: Same as Figure 11d, but for the 20-letter model of Gutin et al.<sup>49</sup> in Figures 3c and 6. As in Figure 11d, the vertical dashed line marks the temperature $`T_m`$ at which one half of the chain population is in the multiple-conformation native state defined by the original authors as the ensemble of conformations that have more than 20 contacts that also occur in the ground-state conformation ($`𝐐>20`$, Q is referred to as the number of native contacts).<sup>49</sup> For this 36mer model, the total number $`𝐐_\mathrm{N}`$ of native contacts equals 40. The corresponding native and denatured nonlinear baselines are calculated using the method of Zhou et al.<sup>22</sup> Lower panel: Folding/denaturation transition tracked by different order parameters. \[N\] is the fractional chain population in the single-conformation ground state; $`𝐐/𝐐_\mathrm{N}`$ is the normalized Boltzmann-averaged number of native contacts; $`P(𝐐>20)`$ is the fractional population in the multiple-conformation native state; and $`\chi `$ is the Boltzmann average of the overlap function $`\chi `$ of Thirumalai and coworkers,<sup>48</sup> which is a useful measure of the structural similarity between any given conformation and the ground-state conformation. The single ground-state conformation has $`𝐐/𝐐_\mathrm{N}=1`$ and $`\chi =0`$. The inset in the upper panel shows the relation between $`𝐐/𝐐_\mathrm{N}`$ and $`\chi `$. While each $`𝐐`$ is consistent with many values of $`\chi `$, and vice versa (scatter plot), for this model the correlation between their Boltzmann averages at different temperatures is almost perfect (curve in inset with slope $`-1`$).
Fig. 14 Conformational diversity in the multiple-lattice-conformation native state of the 20-letter model in Figures 3c, 6 and 13. In each conformation, the directionality of the sequence is indicated by the filled circle, which marks the position of monomer 1 in Figure 3c. The three rows show example non-ground-state conformations (from top to bottom) with number of native contacts $`𝐐=34`$, $`35`$, and $`36`$ respectively. These Q values are close to the average $`𝐐`$ of the multiple-conformation native state at the midpoint temperatures $`T_m`$ and $`T_{\mathrm{max}}`$ (vertical dashed and solid lines in Figure 13).<sup>49</sup>
Fig. 15 Effects of the folding/denaturation transition on conformational properties of the 20-letter model in Figures 3c, 6 and 13. The dashed lines on the left mark $`T_{1/2}`$ at which the fractional population \[N\] of the single ground-state conformation equals $`1/2`$, the dashed lines on the right mark $`T_m`$ $``$ $`T_{\mathrm{max}}`$ (see Figure 13). Upper panel: Boltzmann-averaged number of nonnative contacts (i.e., contacts that do not belong to the single ground-state conformation) vs. temperature. Lower panel: Root-mean-square radius of gyration vs. temperature. (Same as Figure 12b, but now for the 20-letter model.)
# Using Slitless Spectroscopy to study the Kinematics of the Planetary Nebula Population in M94
## 1 Introduction
The outer kinematics of galaxies have played a crucial role in our understanding of their structure. The dark matter halos are most important there, so that conclusions about their shape, mass and extent may be drawn that are less dependent on assumed mass-to-light ratios of the observed stars. Most of the angular momentum resides at large radii, and relaxation times are longest there, possibly enabling echoes of the formation process to be observed directly. However, the required observations are rather difficult. The integrated stellar light of a galaxy rapidly becomes too weak at large radii to do spectroscopy. In the case of elliptical galaxies, the old stellar populations have now in a few cases been probed as far as two effective radii (e.g., Carollo et al. 1995; Gerhard et al. 1997).
Some tracers, such as globular clusters and H i emission, can be observed at larger radii, but neither provides a reliable tracer of the kinematics of the relaxed, old stellar population. Moreover, systems like S0s and ellipticals generally lack an extensive gaseous disk. Fortunately an alternative tracer of the kinematics out to large radii has been identified by Hui et al. (1993), who showed that the radial velocities of a galaxy’s planetary nebula (PN) population constitute a suitable diagnostic. Planetary nebulae (PNe) appear in the post-main-sequence phase of stars in the range 0.8–8 M<sub>☉</sub>. Fortunately, in all but the very youngest of systems the PN population is strongly correlated with the older, and therefore dynamically relaxed, population of low-mass stars. This statement is true not only because of the statistics of stellar formation and evolution, but also because the PN lifetime is itself a strongly decreasing function of progenitor mass (Vassiliadis & Wood 1994).
PNe emit almost all of their light in a few bright emission lines, particularly the \[O iii\] line at 5007Å. There is evidence that the PN \[O iii\] luminosity function is essentially constant with galaxy type and metallicity (Jacoby et al. 1992), so that the observed bright-end cutoff magnitude $`M^{}`$ (Ciardullo et al. 1989) represents a ‘standard candle’ with which distances can be determined. At a distance of 10 Mpc, this cutoff corresponds to a flux of $`2.0\times 10^{16}`$ erg cm<sup>-2</sup>s<sup>-1</sup>, making PNe within one dex of this limit easily detectable in one night with a 4-m telescope. As a rule of thumb, approximately 100 such PNe are found to be present per $`10^9`$ L<sub>☉</sub> of B-band luminosity (Hui 1993), so they are seen in sufficient number to study the kinematics of the stellar population of the host galaxy.
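These two rules of thumb, the bright-end cutoff flux and the PN count per unit luminosity, lend themselves to a quick feasibility estimate for any target. A minimal sketch, in which the example distance and B-band luminosity are hypothetical values of our own and not taken from this paper:

```python
F_CUTOFF_10MPC = 2.0e-16  # erg cm^-2 s^-1: [O III] flux at M* for d = 10 Mpc
PNE_PER_1E9_LB = 100.0    # PNe in the top ~1 dex of the luminosity function

def cutoff_flux(d_mpc):
    """[O III] flux of the PNLF bright-end cutoff at distance d (Mpc),
    assuming simple inverse-square dimming of the standard candle."""
    return F_CUTOFF_10MPC * (10.0 / d_mpc) ** 2

def expected_pne(l_b_solar):
    """Rough number of PNe within one dex of the cutoff."""
    return PNE_PER_1E9_LB * l_b_solar / 1.0e9

# hypothetical target: d = 6 Mpc, L_B = 1e10 L_sun
print(cutoff_flux(6.0), expected_pne(1.0e10))  # ~5.6e-16 cgs, ~1000 PNe
```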
The usual approach in making such kinematic studies has been to identify the PN population by narrow-band imaging and then to re-observe the detected PNe spectroscopically to obtain radial velocities. However, other strategies exist that avoid the need for several observing runs. For example, Tremblay et al. (1995) used Fabry-Perot measurements in a PN kinematics study of the SB0 galaxy NGC 3384. In this paper, we describe a novel alternative, based on slitless spectroscopy, and discuss its application to the Sab galaxy M94.
## 2 Detection and Kinematics of PNe through Slitless Spectroscopy
Our method for obtaining the kinematics of PNe is outlined in Figure 1. The galaxy under study is imaged through narrow-band filters around the two strongest emission lines in a typical PN spectrum, H$`\alpha `$ and \[O iii\]. The H$`\alpha `$ image is recorded directly, but the \[O iii\] light is dispersed. Comparison of the dispersed and undispersed images then allows the kinematics of the PNe to be measured, without prior knowledge of the location of the PNe. Two modes of analysis are possible:
Dispersed/Undispersed Imaging (DUI): In the dispersed blue arm the PNe will be visible through their \[O iii\] emission as point sources, displaced from their ‘true’ positions by an amount related to their radial velocity, against a background of dispersed galactic light. The red arm will detect the PNe through their H$`\alpha `$ emission, along with any other objects with line or continuum emission in the pass band of the filter. Assuming that the PNe can be unambiguously identified in the H$`\alpha `$ image, their positions in the \[O iii\] image will give the radial velocities. We chose to disperse the blue rather than the red light since PNe have a higher flux at \[O iii\] than at H$`\alpha `$, and gratings are less efficient than mirrors.
Counter-Dispersed Imaging (CDI): The method of counter-dispersed imaging was described in an earlier paper (Douglas & Taylor 1999). In this mode, pairs of dispersed \[O iii\] images are made with the entire spectrograph rotated by 180 degrees between exposures. The difference in the position of a given PN in the two dispersed images again reflects its radial velocity. In this case the undispersed H$`\alpha `$ image can be used as a consistency check on the derived positions of the PNe.
We have implemented our method on the ISIS medium-dispersion spectrograph at the Cassegrain (f/10.94) focus of the 4.2m William Herschel Telescope. The slit unit was removed during the observations, and the \[O iii\] and H$`\alpha `$ light paths were separated with a dichroic before being passed through appropriate narrow-band filters. The filters ($`\lambda `$5026/47 and $`\lambda `$6581/50) were custom-made for this project in order to exploit the full $`4\times 1`$ arcmin field of the instrument in slitless mode, and to give adequate velocity coverage. We used a 1200 g/mm (first order) grating in the blue arm. Both arms contained 1024<sup>2</sup>-pixel Tek CCD detectors. Thus, we obtained dispersed images in the blue (calibration showed the dispersion to be about 24 km s<sup>-1</sup> per pixel) and simultaneous direct images in the red. DUI observations are accomplished in a single exposure; CDI requires two exposures.
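Given this measured dispersion, converting image positions to radial velocities is a one-line operation in either mode. The sketch below encodes the geometry described above; the zero points are placeholders that must be fixed by external calibration (see Sections 4.1 and 5), and the function names are ours:

```python
DISPERSION = 24.0  # km/s per pixel, as measured for the blue arm

def v_dui(x_oiii, x_pred, v_zero=0.0):
    """DUI mode: x_oiii is the dispersed [O III] x position; x_pred is
    where the red-arm H-alpha detection would map to for zero radial
    velocity.  v_zero absorbs any flexure/zero-point offset."""
    return v_zero + (x_oiii - x_pred) * DISPERSION

def v_cdi(x1, x2, v_zero=0.0):
    """CDI mode: x1, x2 are the x positions of the same PN in the two
    counter-dispersed frames (frame 2 transformed to frame 1
    coordinates).  The velocity shift enters with opposite signs, so
    the pair separation is twice the shift."""
    return v_zero + 0.5 * (x1 - x2) * DISPERSION
```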
The overall efficiency of this setup, including telescope, instrument, filter and CCD, was found from observations of a standard star (Feige 34) to be 14% in the blue and 20% in the red at an air mass of 0. We therefore expected to detect $`2.7`$ \[O iii\] photons per second from the brightest PN when viewed at 6 Mpc. Dark sky (V=21.4) would produce $`\sim `$2 counts per arcsec<sup>2</sup> per second, and the background light of the galaxy 1–5 counts per arcsec<sup>2</sup>. A reasonable goal is to obtain a 4$`\sigma `$ detection of the PN population over the top decade (2.5 mag) of the luminosity function. This requires $`\sim `$4 hours of integration if the seeing conditions are of the order of one arcsec. The required integration time is approximately proportional to the square of the seeing.
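A back-of-envelope version of this exposure-time estimate, for a background-limited point source, is sketched below. The aperture area and the fraction of the PN flux captured within it are our assumptions, not values from the text; with the stated signal and background rates they happen to reproduce the quoted figure of roughly 4 hours:

```python
import math

S_BRIGHT = 2.7           # detected [O III] photons/s, brightest PN at 6 Mpc
B_SKY, B_GAL = 2.0, 3.0  # sky and galaxy background, counts/arcsec^2/s
SEEING = 1.0             # arcsec (FWHM)

def hours_to_detect(snr=4.0, dex_below_cutoff=1.0, aper_frac=0.5):
    """Background-limited exposure time for a point-source PN.
    aper_frac (flux captured in the aperture) is an assumption."""
    signal = aper_frac * S_BRIGHT * 10.0 ** (-dex_below_cutoff)
    area = math.pi * SEEING ** 2           # aperture area, arcsec^2
    background = (B_SKY + B_GAL) * area    # total background rate
    # S/N = signal*t / sqrt(background*t)  =>  t = snr^2 background / signal^2
    return snr ** 2 * background / signal ** 2 / 3600.0

print(f"{hours_to_detect():.1f} h")  # ~3.8 h; note t scales as SEEING**2
```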
## 3 Observations
The observations were carried out on 1997 April 11 and 12. For this pilot project NGC 4736 (M94) was chosen. It has a large angular size and is at a distance of 6 Mpc (Bosma et al. 1977). The galaxy shows some peculiar morphological features, most notably an inner and an outer optical ring with radii of 1 and 5.5 arcmin respectively. The stellar light is too faint for direct optical spectroscopy outside 1 arcmin, and our goal was to measure PN kinematics to three times this radius. Key parameters of NGC 4736 are listed in Table 1, and the observing log is given in Table 2.
Two fields were observed, 3 arcmin west of centre on the major axis, and 3 arcmin north of centre on the minor axis. The major axis was observed at two orientations (allowing CDI mode) while the minor axis was observed in one orientation only. We took the major axis position angle to be 90 degrees, as would seem appropriate from the relevant isophotes (see Figure 3). Total integration times were 6.0 hrs on the Western field (4.6 hrs in one orientation, and 1.4 hrs with the spectrograph rotated by 180 degrees), and 3 hrs on the Northern field. The observing conditions were close to photometric with seeing, as judged from stellar images in the red arm, less than 1.1″ at all times. We also observed a flux standard star for photometry and a Galactic PN as a radial velocity reference. A second star was observed as a spectral reference.
The custom-made \[O iii\] and H$`\alpha `$ filters had central wavelengths and peak transmissions of 5026Å/0.823 and 6581Å/0.915, respectively. The nominal FWHM was 47Å and 50Å, while the effective photometric bandwidth was evaluated graphically and found to be 38.7Å and 45.7Å, respectively.
## 4 Data Reduction
### 4.1 Calibration
The dispersion in the blue arm was measured by inserting a slit and using an arc lamp, and found to be 0.3992Å/pixel. The spectrum of the star HD66637 was then observed through the same slit and wavelength-calibrated. Subsequent observations of the same star at numerous positions in the field (after removal of the slit) established that the dispersion could be taken as constant over the field. The combination of these observations with the undispersed red arm positions of HD66637 gave an unambiguous solution for the transformation between objects in the red (direct image) and blue (dispersed image). (Note that with this technique the radial velocity of the star does not enter into the calculation.)
To check the zero point of the velocity scale, we moved the telescope from the reference star to the Galactic planetary nebula PN 49.3+88.1, for which the heliocentric radial velocity is listed as $`-141`$ km s<sup>-1</sup> (Schneider et al. 1983). In ten pointings over the field of the spectrograph we measured $`-136.0\pm 3.2`$ km s<sup>-1</sup>, in agreement with the calculated observatory frame redshift of $`-133`$ km s<sup>-1</sup>. Unfortunately we discovered later that at certain telescope orientations the flexure is large enough to cause significantly larger errors. However, such flexure only introduces an offset in absolute velocity, and does not compromise our ability to study a galaxy’s internal kinematics. In order to derive an absolute calibration for the velocity scale, we later obtained a long slit observation of two of the objects detected in this analysis (see § 5).
The spectrograph field of view with the slit unit removed consists of an approximately unvignetted area of about 4′$`\times `$ 1′, but we obtained useful data outside this region. Correcting the observed fluxes for the vignetting is only straightforward for the red (undispersed) arm. In the blue arm the correction is complicated by the fact that the image is dispersed. The sky flat measured in the blue arm was found to closely approximate the aperture function in the red arm, shifted by a small number of pixels, transformed to blue arm coordinates and then convolved with the filter profile at the appropriate dispersion.
We therefore carried out the complete analysis after correcting only for the pixel-to-pixel variation of the CCD responses, determined in the usual way. Once the PNe were identified it was possible to determine their wavelength and their positions in the aperture prior to being shifted by the spectrograph, so that the \[O iii\] magnitudes could then be corrected analytically both for the filter response, which was fitted with a polynomial, and for vignetting.
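A compact sketch of the blue-arm flat model described above, in which the “transformed to blue arm coordinates” step is approximated by a simple pixel shift (the function name and this simplification are ours):

```python
import numpy as np

def blue_flat_model(aperture_red, filter_profile, shift_pix):
    """Model the blue-arm sky flat as the red-arm aperture (vignetting)
    function, shifted by a few pixels and convolved with the [O III]
    filter transmission sampled at the blue-arm dispersion.

    aperture_red: 1-D cut along the dispersion axis; filter_profile:
    transmission curve on a grid of 0.3992 A per pixel.
    """
    shifted = np.roll(aperture_red, shift_pix)       # crude coordinate shift
    kernel = filter_profile / filter_profile.sum()   # normalised filter
    return np.convolve(shifted, kernel, mode="same")
```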
### 4.2 Object identification
Scripts based on IRAF procedures were used for all of the data reduction.
Due to spectrograph flexure, individual exposures needed to be aligned before being added. This was accomplished by a simple shift (at least one PN was visible in each individual exposure). The images were then combined by computing a weighted and scaled median. A spatial median filter was applied to the combined frame, and the result subtracted from the original image to yield a field of unresolved objects against a background with a mean of zero.
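In outline, this combination step can be expressed as follows, with numpy/scipy stand-ins for the IRAF tasks; a plain median is used here in place of IRAF’s weighted, scaled median, so this is an approximation of the actual procedure:

```python
import numpy as np
from scipy.ndimage import shift, median_filter

def stack_and_flatten(frames, offsets, scales, box=25):
    """Align exposures, median-combine them, and subtract a spatial
    median filter, leaving point sources on a zero-mean background.

    frames: list of 2-D arrays; offsets: per-frame (dy, dx) shifts
    measured from a PN visible in each exposure; scales: per-frame
    normalisations (e.g. relative exposure times).
    """
    aligned = [shift(f, o) / s for f, o, s in zip(frames, offsets, scales)]
    stacked = np.median(aligned, axis=0)       # robust to cosmic rays
    smooth = median_filter(stacked, size=box)  # galaxy + sky model
    return stacked - smooth
```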
The PNe and any other point sources in the red and blue images were extracted by two methods:
(A) Blinking and hand-tagging, followed by a PSF-fitting step to evaluate the shape and size parameters. For DUI mode observing it was usually found to be easier to search the range of possible (red) coordinates corresponding to an object seen in the blue (see Fig. 2).
(B) An automated procedure based on object lists generated with DAOPHOT. A 2D Gaussian fit to each detected image was used to select PN candidates. The FWHM of a candidate was required to be within a small range of that of the seeing disk (PNe are unresolved) and the axial ratio close to the value 1.28 expected from our instrumental configuration (the ellipticity arises from the anamorphic effect of the grating). The object lists were then correlated to search for potential PN image pairs.
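For the CDI data, this cross-correlation of the two candidate lists reduces to a simple geometric pairing once the second frame is transformed into the coordinates of the first: a PN appears at nearly the same $`y`$ in both frames, displaced by equal and opposite amounts in $`x`$. A sketch, with tolerances that are our guesses rather than the values actually used:

```python
import numpy as np

def match_pairs(objs1, objs2, y_tol=1.5, x_max=60.0):
    """Pair point-source detections from two counter-dispersed frames
    (frame 2 already transformed to frame 1 coordinates).

    objs1, objs2: (N, 2) arrays of (x, y) centroids.  A valid pair has
    nearly equal y and an x separation within the range allowed by the
    filter bandpass (x_max pixels, an assumed value).  Returns one row
    per pair: (x0, y0, dx) = undispersed position and half-separation.
    """
    pairs = []
    for x1, y1 in objs1:
        close = (np.abs(objs2[:, 1] - y1) < y_tol) & \
                (np.abs(objs2[:, 0] - x1) < x_max)
        for x2, y2 in objs2[close]:
            pairs.append((0.5 * (x1 + x2), 0.5 * (y1 + y2), 0.5 * (x1 - x2)))
    return np.array(pairs)
```

The half-separation `dx` of each pair then converts directly to a radial velocity via the dispersion, as in the `v_cdi` sketch above.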
### 4.3 False detections
As well as PNe, our observations will also detect other objects having a predominantly line-emission spectrum near the \[O iii\] line. In the case of a bright spiral galaxy like M94 it is obvious that H ii regions will mimic PNe. As they belong to a younger population than the PNe, their inclusion in the analysis would lead to an underestimate of the velocity dispersion. Although it is tempting to use the \[O iii\]/H$`\alpha `$ line ratio as a discriminant (it tends to be larger in PNe) this is not sharp enough to eliminate the H ii regions without eliminating a fair fraction of the PNe. Unless a better discriminant can be found, this source of contamination may ultimately restrict the viability of our technique to the elliptical and S0 galaxies for which it was devised. However in the particular case of M94 the problem is manageable because:
(i) M94 is sufficiently close that a large fraction of bright H ii regions would be spatially extended and thus eliminated by one of the object selection criteria; (ii) the H ii regions in M94 are mostly confined to features associated with spiral arms - one of these passes through the eastern extremity of our major axis field and all objects there were ignored. The rest of the field, and the minor axis field, are devoid of recognised H ii agglomerations; (iii) the total number of unresolved objects detected in \[O iii\] is close to the number expected for PNe on the basis of their luminosity specific density as seen in other galaxies.
None of these factors eliminates the problem entirely, but their combined effect is such that we feel that contamination is negligible. Even if 10% of the PNe had been misidentified the effect on the calculated velocity dispersion would be at most 5%, well below other sources of error.
High redshift galaxies, in which the Ly$`\alpha `$ line is shifted into the \[O iii\] passband, form another potential source of contamination when long integrations are made of extended halos (Freeman et al. 1999). This is also not a significant issue in the present case.
### 4.4 Comparing Spectral Modes
We identified PNe in NGC 4736 along the minor axis with DUI mode observations and along the major axis using CDI (with DUI mode data being redundant). Although CDI requires two distinct integrations, we found that CDI yields a higher number of detected PNe per unit integration time. For visual identification using blinking, CDI is considerably easier since the two images have the same plate scale and similar properties with respect to sky noise and confusion. Therefore to illustrate the numerical superiority of using two dispersed \[O iii\] images we rely on the automated search results for the major axis observations only. 36 PNe were identified in this field from matching 2.5$`\sigma `$ sources in the counterdispersed \[O iii\] images. The limiting factor here was the shorter integration time with one of the two spectrograph orientations: had both integration times been equal we would presumably have found yet more PNe. By comparison, only 24 PNe were detected from the DUI mode analysis of the same field. Therefore, we conclude that a significant number of PNe are too faint in H$`\alpha `$ for the DUI mode to detect them.
## 5 Long-slit data
It has been mentioned that instrument flexure, particularly between sets of observations in CDI mode, can lead to an uncertainty in the absolute velocity scale. To remove this uncertainty we attempted to obtain the velocity of at least one object in the major axis field via the William Herschel Telescope service data program. As the PNe are faint, with effective V-band magnitude of around 25, the slit had to be positioned ‘blind’ on the basis of the position computed from the dispersed data. Fortunately the astrometry does not depend on velocity, but only on correct identification of PN pairs, and on the centroiding of the dispersed images of stars in the field.
We requested a spectrum using ISIS in long-slit mode and with the slit at a position and PA chosen such as to fall across two objects. This provided a good test of the astrometry. The service observation was attempted on 1999 July 28. Both objects were acquired, and their separation along the slit agreed with that calculated. One of the objects, suspected of being an H ii region, was confirmed as such. The radial velocities obtained had an internal error of about 10 km s<sup>-1</sup>, as judged from the values obtained from different lines, and were used to calibrate the major axis data (Table 3).
## 6 Results
The PNe identified along the major axis are listed in Table 3 and those along the minor axis in Table 4. Their positions are also shown in Fig 3. As suggested by the successful long-slit experiment, the positional uncertainty is of the order of 1-2 arcsec. The internal error in the velocities is approximately 10 km s<sup>-1</sup>. The minor axis velocities have an offset that has not been determined, but the values as presented have an average velocity near systemic, as would be expected. The derivation of velocities from DUI mode data, as was used for the minor axis, is in fact much less sensitive to flexure. However, in order to reduce the number of candidate objects in the red image, only radial velocities between 200 and 500 km s<sup>-1</sup> were searched for, so this table has to be used with caution.
### 6.1 Luminosity function
We placed a premium on detecting as many PNe as possible, even in the partially vignetted region of the instrument. Considerable corrections have been applied, and the magnitudes should therefore only be taken as indicative. The luminosity function of the objects detected is presented in Fig 4. The bright-end cutoff for the assumed distance of 6 Mpc ($`m^{*}`$ = 24.4) is indicated. At the faint end the luminosity function is, of course, significantly incomplete while at the bright end some objects are brighter than the cutoff. The latter are probably H ii regions and have therefore not been included in the analysis of the kinematics.
The number of PNe found in the major axis field (CDI mode) is in rough agreement with predictions. From the basic data on NGC 4736 compiled by Mulder (1995) ($`m_B=8.58`$, $`D=6.0`$ Mpc) we have $`L_B=2.07\times 10^{10}L_{\odot }`$, and on the basis of the results of Hui et al (1993) the expected number of PNe in the top decade of the PNLF in M94 would therefore be around 2000. Mulder also found the galaxy to be fairly well-fitted by an exponential disk with scale length $`h=`$57″. The region we examined includes 0.029 of the light of such an exponential, which should therefore include 59 PNe in the brightest decade. This number compares well with the 53 actually found, though the agreement may be somewhat fortuitous given the incompleteness at the faint end.
### 6.2 Rotation curve
In Fig 5 the line-of-sight velocities of the 53 objects in the major axis field are plotted as a function of radius, after subtraction of the systemic velocity (Table 1). Flat rotation is seen out to the last measured point at almost 5 scale lengths. For comparison we overplot the H i/CO rotation curve of Sofue (1997), projected into the plane of the sky. For objects near a distance of 1 arcmin along the major axis the mean velocity is 98 km s<sup>-1</sup>, in agreement with the projected gas rotation velocity of 103 km s<sup>-1</sup> at that point. The uncorrected minor axis data yield a mean velocity of 329 km s<sup>-1</sup>, consistent with the systemic velocity.
### 6.3 Velocity dispersion
Generally, the vertical structure in disks of spiral galaxies is reasonably well described by an isothermal sheet approximation (van der Kruit & Searle 1982; Bottema 1993). In this model, the vertical velocity dispersion is found to follow an exponential decline with radius with scale length twice that of the surface density. With the additional assumption that the dispersion ellipsoid has constant axis ratios throughout the disk, one finds that the line-of-sight velocity dispersion follows the same decline, independent of the galaxy’s inclination. We tested this using the major axis data. Figure 6 shows the velocity dispersion in bins of distance along the major axis. Seven objects were eliminated from the kinematic analysis as their brightnesses suggested that they are H ii regions. The curve is the least-squares exponential fit, yielding central velocity dispersion 111 km s<sup>-1</sup> and scale length $`h_\sigma `$= 130 arcsec. These are close to the published value for the central stellar velocity dispersion ($`120\pm 15`$ km s<sup>-1</sup>) obtained from absorption-line spectra (Mulder and van Driel, 1993) and to twice the photometric scale length ($`2h=114`$ arcsec), suggesting that the isothermal sheet approximation is reasonable.
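To make the expected scaling explicit (a sketch of the standard argument, assuming a constant scale height $`z_0`$ as well as constant axis ratios of the velocity ellipsoid): for a self-gravitating isothermal sheet $`\sigma _z^2(R)\propto \Sigma (R)z_0`$, so an exponential disk $`\Sigma (R)=\Sigma _0e^{-R/h}`$ implies

$$\sigma _{los}(R)\propto \sigma _z(R)=\sigma _z(0)e^{-R/2h},$$

i.e. an exponential decline of the line-of-sight dispersion with scale length $`2h`$.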
### 6.4 Combined Kinematic Model
The binning in Fig 6 effectively assumes that the PNe all lie close to the major axis. In fact, they are located up to one arcminute from the axis, at azimuths up to 45°, so a more sophisticated approach is required. We therefore projected the PNe on to a thin disk of fixed inclination (35°), giving $`r,\varphi `$ coordinates. The nebulae’s line-of-sight velocities can then be compared with a model consisting of a three-dimensional isothermal sheet with a flat rotation curve. This model has five parameters, namely the three components of the central velocity dispersion $`\sigma _z,\sigma _\varphi ,\sigma _R`$, the scale length $`h_\sigma `$, and the rotation amplitude. A maximum likelihood method was then used to fit the model. We added twelve PNe from the minor axis field (Table 4) to help constrain the fit (two were excluded from the fit as probable H ii regions).
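For reference, the projection geometry we assume is the usual one (a sketch; $`v_{sys}`$ denotes the systemic velocity): for a thin disk seen at inclination $`i`$, a tracer at in-plane azimuth $`\varphi `$ measured from the major axis has

$$v_{los}=v_{sys}+v_c\mathrm{sin}i\mathrm{cos}\varphi ,\hspace{1em}\sigma _{los}^2=(\sigma _R^2\mathrm{sin}^2\varphi +\sigma _\varphi ^2\mathrm{cos}^2\varphi )\mathrm{sin}^2i+\sigma _z^2\mathrm{cos}^2i,$$

so that points near the major axis ($`\varphi \approx 0`$°) mainly constrain $`\sigma _\varphi `$ and $`\sigma _z`$, while points near the minor axis ($`\varphi \approx 90`$°) mainly constrain $`\sigma _R`$ and $`\sigma _z`$.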
In practice the data were not adequate to constrain all five parameters. Using the canonical relationship $`\sigma _\varphi ^2=\sigma _R^2/2`$ from the epicyclic approximation (Binney & Merrifield 1998, eq. 11.18) and allowing $`\sigma _z/\sigma _R`$ to vary over the range 0.2 to 2.0, we found a robust maximum likelihood solution with scale length $`h_\sigma =144\pm 30`$ arcsec, central velocity dispersion $`\sigma _{los}=120\pm 30`$ km s<sup>-1</sup>, and circular rotation speed $`v_c=177\pm 11`$ km s<sup>-1</sup>. These results are consistent with the H i rotation speed at 1 arcmin radius (180 km s<sup>-1</sup>) and again with twice the photometric scale length.
It was not possible to constrain the shape of the velocity ellipsoid with these data – such an analysis would require a more complete azimuthal coverage of the galaxy. However if we assume $`0.5<\sigma _z/\sigma _R<1.0`$ then we infer $`75<\sigma _R<110`$ km s<sup>-1</sup> at one photometric scale length, consistent with the trend between rotation speed and disk velocity dispersion found by Bottema (1993).
Thus far we have ignored measurement error in the velocities, which will tend to increase the measured velocity dispersion. This turns out to be a small effect: allowing for a 1$`\sigma `$ error of 10 km s<sup>-1</sup>, the fitted dispersion becomes approximately 3% smaller and the scale length is unchanged.
## 7 Conclusions
In this paper we have demonstrated how the kinematics of the PN population in a galaxy can be measured by slitless spectroscopy through narrow-band filters with a dual-beam spectrograph. We compared two possible modes: dispersed/undispersed imaging, in which a dispersed \[O iii\] image is compared to an undispersed H$`\alpha `$ image; and counterdispersed imaging, in which two \[O iii\] images, dispersed in opposite directions, are analysed. It turns out that the latter method is more effective: evidently the H$`\alpha `$ fluxes of faint PNe are not reliably high enough to allow both spectral lines to be used.
Our pilot experiment was performed on the large nearby Sab galaxy M94. It has revealed a PN population in the disk whose rotation curve remains flat, and whose velocity dispersion declines exponentially with radius, consistent with the predictions of a simple isothermal sheet model. PNe were detected out to five exponential scale lengths, well beyond the reach of kinematic measurements based on integrated-light absorption-line spectroscopy. The number of PNe detected was consistent with expectations.
The present experiment was limited to two fields in this large galaxy. Complete coverage of the galaxy should yield around 2000 PNe, and would allow a detailed kinematic model to be fitted, including a determination of the axis ratio of the velocity ellipsoid following the technique of Gerssen et al. (1997). Obtaining such data for a small sample of nearby galaxies in just a few nights of 4-m telescope time is a practical proposition.
## 8 Acknowledgements
The WHT is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. We wish to acknowledge the help and support of the ING staff. We are also grateful for some excellent additional data provided by ING astronomers in service mode. The IRAF data reduction package is written and supported by the IRAF programming group at the National Optical Astronomy Observatories (NOAO) in Tucson, Arizona. We thank the referee Dr R. Ciardullo for comments which led to the addition of §4.3.
# On Urabe’s criteria of isochronicity

Preprint CAMTP/98-7
July 1998
Revised December 1998
Marko Robnik<sup>1</sup><sup>1</sup>1e–mail: robnik@uni-mb.si and Valery G. Romanovski†<sup>2</sup><sup>2</sup>2e–mail: math@micro.rei.minsk.by
Center for Applied Mathematics and Theoretical Physics,
University of Maribor, Krekova 2, SI-2000 Maribor, Slovenia
†Belarusian State University of Informatics and Radioelectronics
P. Brovka 6, Minsk 220027, Belarus
Abstract. We give a short proof of Urabe’s criteria for the isochronicity of periodic solutions of the equation $`\ddot{x}+g(x)=0.`$ We show that apart from the harmonic oscillator there exists a large family of isochronous potentials, all of which must be non-polynomial and not symmetric (i.e. not even functions of the coordinate $`x`$).
PACS numbers: 46.10.+z, 95.10.Ce
AMS classification scheme numbers: 34C05
Published in Journal of Physics A: Mathematical and General
Vol. 32 (1999) 1279-1283
We consider a system of differential equations of the form
$$\begin{array}{c}\dot{x}=y,\hfill \\ \dot{y}=-g(x),\hfill \end{array}$$
(1)
where we suppose
$$g(x)\in C(a,b),xg(x)>0\text{ for }x\neq 0,g(0)=0\text{ and }g^{\prime }(0)=k\neq 0.$$
(2)
Denoting
$$U(x)=\int _0^xg(s)ds$$
we obtain the first integral in the form ”kinetic energy+potential energy”, i.e. in the form
$$H(x,y)\stackrel{def}{=}\frac{y^2}{2}+U(x)=E,$$
(3)
such that $`H(x,y)`$ is the Hamiltonian and (1) are the Hamilton equations of the motion of our system .
It is well known that any solution near the origin oscillates around $`x=0,y=0`$ with a bounded period, i.e. system (1) has a center in the origin. The problem then arises to determine whether the period of oscillations is constant for all solutions near the origin. A center with such a property is called $`isochronous`$. At present the problem of isochronicity is of renewed interest (see, for example, for current references).
It was shown in that if $`g(x)`$ is a polynomial, then system (1) cannot have an isochronous center, except when $`g(x)`$ is linear $`g(x)=kx`$, in which case $`k=(2\pi /\tau )^2`$, where $`\tau `$ is the period of oscillations. If $`g(x)`$ is not exactly linear, then still the period of oscillations infinitesimally close to the origin is also equal to $`\tau `$.
In the present Letter we give a simple short proof of the following criteria of Urabe for the isochronicity of the center of system (1).
###### Theorem 1
When $`g(x)`$ is continuous, the necessary and sufficient condition that $`g(x)\in C^1(a,b)`$ and system (1) has an isochronous center in the origin, is that, in the neighbourhood of $`x=y=0`$ by the transformation
$$\frac{1}{2}X^2=U(x),$$
(4)
where $`X/x>0`$ for $`x\neq 0,`$ $`g(x)`$ is expressed as
$$g(x)=g[x(X)]\stackrel{def}{=}h(X)=\frac{2\pi }{\tau }\frac{X}{1+S(X)},$$
(5)
where $`S(X)`$ is an arbitrary continuous odd function and $`\tau `$ is the period of the oscillations.
In , Urabe first proved the criteria in the case when $`g(x)`$ is an analytic function. For a function $`g(x)\in C^1`$ he obtained more complicated criteria, with the function $`h(X)`$ of the form
$$h(X)=\frac{2\pi }{\tau }\frac{X}{1+S(X)+R(X)},$$
where $`S(X)`$ is an odd and $`R(X)`$ is an even continuous function (see ). Then in he showed that if $`g(x)\in C^1(a,b)`$ then necessarily $`R(X)\equiv 0.`$
Note that in the statement of the theorem Urabe demands the additional property
$$S(0)=0,XS(X)\in C^1,$$
but every continuous odd function has the property $`S(0)=0`$, and the second one is not essential for our proof. We have also required $`g(x)`$ to be smooth in a neighbourhood of $`x=0`$ (as in the original work by Urabe ), but in fact it is sufficient for our reasoning if $`g(x)`$ is continuous in a neighbourhood of the origin and differentiable at $`x=0`$.
Our proof of Theorem 1 is based on the following criteria, which apparently appeared for the first time in Landau and Pyatigorsky and which were later rederived by Keller (who also considered some related problems, in particular the case of a non-monotonic potential). For the convenience of the reader we present the criteria here together with a proof, which follows the books .
###### Theorem 2
When $`g(x)`$ is continuous and the conditions (2) hold, system (1) has an isochronous center with period $`\tau `$ at the origin if and only if
$$x_2(U)-x_1(U)=\frac{\sqrt{2}\tau }{\pi }\sqrt{U},$$
(6)
for $`U\in (0,U_0)`$, where $`x_1(U)`$ is the inverse function to $`U(x)`$ for $`x\in (a,0)`$ and $`x_2(U)`$ is the inverse function to $`U(x)`$ for $`x\in (0,b)`$.
Proof. First we note that due to (2) the functions $`x_1(U),x_2(U)`$ are defined and $`x_1(U),x_2(U)\in C^1(0,U_0)`$ for some $`U_0>0`$. Denote by $`T(E)`$ the period of the orbit of (1) corresponding to the value of energy $`E`$. Then we have
$$T(E)=\sqrt{2}\int _0^E\left[\frac{dx_2(U)}{dU}-\frac{dx_1(U)}{dU}\right]\frac{dU}{\sqrt{E-U}}.$$
(7)
Dividing both sides of this equation by $`\sqrt{\alpha -E}`$, where $`\alpha `$ is a parameter, integrating with respect to $`E`$ from 0 to $`\alpha `$ and putting U in place of $`\alpha `$ (see for detail) one gets
$$x_2(U)-x_1(U)=\frac{1}{\sqrt{2}\pi }\int _0^U\frac{T(E)dE}{\sqrt{U-E}}.$$
In the case when $`T(E)\equiv \tau `$ this yields (6).
To prove that (6) is the sufficient condition of isochronicity we note that (6) implies
$$x_2^{\prime }(U)-x_1^{\prime }(U)=\frac{\sqrt{2}\tau }{2\pi \sqrt{U}}.$$
Substituting this expression into (7) and integrating we get $`T(E)\equiv \tau `$. □
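As a quick illustration, for the harmonic potential $`U(x)=\frac{2\pi ^2}{\tau ^2}x^2`$ the inverse branches are $`x_{1,2}(U)=\mp \frac{\tau }{\sqrt{2}\pi }\sqrt{U}`$, so that

$$x_2(U)-x_1(U)=\frac{2\tau }{\sqrt{2}\pi }\sqrt{U}=\frac{\sqrt{2}\tau }{\pi }\sqrt{U},$$

and (6) holds for every $`U`$: the harmonic oscillator is isochronous with period $`\tau `$, as expected.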
As an immediate consequence we get the following proposition proved earlier in .
###### Corollary 1
If $`g(x)C^1(a,b)`$ is an odd function, then the origin is an isochronous center iff $`g(x)=(2\pi /\tau )^2x.`$
In other words, if the potential (energy) $`U(x)`$ is an even function of position $`x`$ then the only isochronous system is the harmonic oscillator given above.
Proof of Theorem 1. Let us suppose that system (1) has an isochronous center. Then due to Theorem 2 the relation (6) holds and we get
$$x_2(U)-\frac{\sqrt{2}\tau }{2\pi }\sqrt{U}=x_1(U)+\frac{\sqrt{2}\tau }{2\pi }\sqrt{U}\stackrel{def}{=}f(U).$$
Therefore
$`x_2^{\prime }(U)={\displaystyle \frac{\tau }{2\sqrt{2}\pi \sqrt{U}}}+f^{\prime }(U),`$ (8)
$`x_1^{\prime }(U)={\displaystyle -\frac{\tau }{2\sqrt{2}\pi \sqrt{U}}}+f^{\prime }(U).`$ (9)
Taking the derivative of both sides of (6) with respect to $`x`$ we get for $`x<0`$
$$x_2^{\prime }(U)U^{\prime }-1=\frac{\sqrt{2}\tau }{2\pi \sqrt{U}}U^{\prime }.$$
Therefore, using (8) we obtain
$$U^{\prime }=-\frac{2\pi }{\tau }\frac{\sqrt{2U}}{1-\frac{2\pi }{\tau }\sqrt{2U}f^{\prime }(U)}.$$
(10)
Similarly, for $`x>0`$ we get from (9)
$$U^{\prime }=\frac{2\pi }{\tau }\frac{\sqrt{2U}}{1+\frac{2\pi }{\tau }\sqrt{2U}f^{\prime }(U)}.$$
(11)
Therefore function $`g(x)`$ can be expressed in the form (5).
Now it remains to show that
$$S(X)=\frac{2\pi }{\tau }Xf^{\prime }(X^2/2)$$
is a continuous function. Obviously, this is true for $`X\neq 0`$.
For $`X=x=0`$ the situation is as follows. First note that (2) and (6) yield
$$U=\frac{2\pi ^2}{\tau ^2}x^2+o(x^2).$$
Then for $`x,X>0`$ from (11) we get
$$S(X)=\frac{2\pi }{\tau }\sqrt{2U}f^{\prime }(U)=\frac{\frac{2\pi }{\tau }\sqrt{2U}}{U^{\prime }}-1=\frac{x\sqrt{1+o(1)}}{x+o(x)}-1.$$
Therefore
$$\underset{X\to 0+}{lim}S(X)=0.$$
For $`x,X<0`$ (10) yields
$$S(X)=-\frac{2\pi }{\tau }\sqrt{2U}f^{\prime }(U)=-\frac{\frac{2\pi }{\tau }\sqrt{2U}}{U^{\prime }}-1=-\frac{|x|\sqrt{1+o(1)}}{x+o(x)}-1.$$
It means $`\underset{X\to 0-}{lim}S(X)=0`$ and, hence, $`S(X)`$ is continuous at zero.
Let us prove that (5) is also the sufficient condition of isochronicity. For $`x>0`$ we can write (5) in the form
$$\frac{dU}{dx}=\frac{2\pi }{\tau }\frac{X}{1+S(X)}=\frac{2\pi }{\tau }\frac{\sqrt{2U}}{1+S(\sqrt{2U})}.$$
Integrating this equation we get
$$x_2(U)=\frac{\tau }{2\pi }(\sqrt{2U}+\int _0^{\sqrt{2U}}S(z)dz).$$
Similarly, for $`x<0`$ we obtain
$$x_1(U)=-\frac{\tau }{2\pi }(\sqrt{2U}-\int _0^{\sqrt{2U}}S(z)dz).$$
Due to the condition of the theorem $`S(z)`$ is a continuous function, and, hence, the integral is convergent. Therefore (6) holds, i.e. the system has an isochronous center in the origin. □
In conclusion, we have proven that the Hamiltonian (3) has an isochronous center iff the condition (6) is satisfied. In the case of a symmetric potential $`U(x)`$ (an even function of $`x`$) the only solution is the harmonic oscillator. If $`U(x)`$ is not symmetric (even), other solutions might be possible. However, for any polynomial $`U(x)`$ (and $`g(x)=U^{\prime }(x)`$), the harmonic potential is still the only solution . Thus, other nontrivial isochronous potentials can be constructed by taking an analytic but non-polynomial and non-even function $`U(x)`$, in agreement with Urabe’s criteria (5) of Theorem 1, which we have shown to be equivalent to (6). These criteria still allow for a quite large family of isochronous potentials $`U(x)`$ and we can construct such potentials analytically. Indeed, differentiating both sides of the equality (4) and taking into account (5) we get in the case of an isochronous center
$$X\frac{dX}{dx}=g(x)=\frac{2\pi }{\tau }\frac{X}{1+S(X)}.$$
Hence we obtain the following formula, which first appeared in
$$x=\frac{\tau }{2\pi }\int _0^X\left(1+S(u)\right)du.$$
(12)
This formula together with (5) is a tool to construct isochronous potentials. Taking $`S(X)=X`$ Urabe got
$$g(x)=\frac{2\pi }{\tau }[1-(1+\frac{4\pi }{\tau }x)^{-\frac{1}{2}}],$$
hence, the corresponding isochronous potential is
$$U(x)=1+\frac{2\pi }{\tau }x-\sqrt{1+\frac{4\pi }{\tau }x}.$$
(13)
where $`-\frac{\tau }{4\pi }<x<\frac{3\tau }{4\pi }`$, i.e. the potential is an analytic function defined on a finite segment of the real axis. Here, in the calculation, we have chosen the (negative) sign such that $`g(x=0)=0`$ is obeyed.
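One can verify (13) directly: $`U(0)=1+0-1=0`$ and

$$U^{\prime }(x)=\frac{2\pi }{\tau }-\frac{2\pi }{\tau }\left(1+\frac{4\pi }{\tau }x\right)^{-\frac{1}{2}}=g(x).$$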
Let now
$$S(X)=\frac{2}{\pi }\mathrm{arctg}X.$$
Then (12) yields
$$x=\frac{\tau }{2\pi }X+\frac{\tau }{\pi ^2}X\mathrm{arctg}X-\frac{\tau }{2\pi ^2}\mathrm{log}(X^2+1).$$
Obviously, $`x(X)`$ is strictly increasing on $`𝐑`$ and $`x(0)=0,x(𝐑)=𝐑.`$ Therefore,
$$g(x)=\frac{2\pi }{\tau }\frac{X(x)}{1+\frac{2}{\pi }\mathrm{arctg}(X(x))}$$
is defined for all $`x\in 𝐑`$, positive for $`x>0`$ and negative for $`x<0.`$ Hence, the corresponding potential $`U(x)`$ is an analytic function defined on the whole real axis with only one minimum, in the origin. One can construct this potential at least in the form of a power series. However, the potential is not an entire function. As we have mentioned above, it was shown in (in fact, it is an immediate consequence of formula (6)) that the only polynomial isochronous potential is the quadratic one. We also see that there are analytic potentials defined on the whole real axis. Thus the question naturally arises whether there are isochronous potentials defined by entire functions. Another still open and interesting question is the investigation of the isochronicity property of non-monotonic potentials.
The second author thanks the Śniadecki Foundation (Poland) and the Foundation of Fundamental Research of the Republic of Belarus for their support of the research, and Professor M. Robnik for the kind invitation to visit CAMTP and for hospitality.
# Proving Failure of Queries for Definite Logic Programs Using XSB-Prolog
## 1 Introduction
In , methods are studied for proving that a query for a definite logic program fails. The general idea underlying all methods is the generation of a finite model of the definite program in which the query is false. However the approach developed in is quite different from that used in general purpose model generators for first order logic such as FINDER , SEM , and FMC<sub>ATINF</sub> . Whereas the latter systems search for a model in the space of interpretations, the former searches in the smaller space of pre-interpretations and applies a top-down proof procedure using tabulation to verify whether the query is false in the least model of the Horn theory based on the candidate pre-interpretation. Experiments in , an extended version of , show that the abductive procedure of extended with intelligent backtracking outperforms FINDER and FMC<sub>ATINF</sub> on problems where there are a large number of different interpretations for a given pre-interpretation. The difference is not only in the number of backtracks, but also, for some problems, in time, and this notwithstanding that the former is implemented as a straightforward meta-interpreter in Prolog while the latter are sophisticated implementations in a lower-level language.
The current paper describes how the meta-interpreter can be replaced by a more direct implementation in XSB-Prolog which relies on the XSB system to perform the tabulation. This is not a straightforward task because of the intelligent backtracking and because the meta-interpreter does not follow the standard depth-first left-to-right search strategy but uses heuristics to direct the search towards early failures and selects the pre-interpretation on the fly, as components are needed by the proof procedure. To exploit the tabling system underlying XSB, one has to stick to the depth-first left-to-right execution order and one should not modify the program by creating new components of the pre-interpretation while evaluating a call to a tabled predicate.
The random selection of an initial pre-interpretation, combined with the loss of control over the search results in a system which has to explore a substantially larger part of the search space than the original system. The paper introduces two innovations to compensate for this. Firstly, it uses a variant of intelligent backtracking which is much less dependent on the random initial order of the choice points. Secondly, it introduces a more accurate failure analysis, so that smaller conflict sets are obtained and that the intelligent backtracking selects its targets with more accuracy.
The motivation for this research is in the world of planning. Planners are typically programs which search in an infinite space of candidate plans for a plan satisfying all requirements. The planner searches forever (until some resource is exhausted) when no candidate plan satisfies all requirements. Hence it is useful to have methods to show that the problem has no solution. It turns out that our approach outperforms first order model generators on planning problems.
In the next section we recall some basic notions about the semantics of definite logic programs and then in Section 3 we describe our approach in more detail; in Section 4 we show the results of testing our system on different problems. The comparison includes not only the model generator FINDER, as in , and FMC<sub>ATINF</sub>, as in , but also SEM .
## 2 Preliminaries
Now we will recall some basic definitions about semantics of definite programs. Most of them are taken from .
A pre-interpretation $`J`$ of a program $`P`$ consists of a domain $`D=\{d_1,\mathrm{},d_m\}`$<sup>1</sup><sup>1</sup>1We will consider only domains with finite size. and for each $`n`$-ary function symbol $`f`$ in $`P`$ a mapping $`f_J`$ from $`D^n`$ to $`D`$. Following the literature on model generators, a term of the form $`f(d_1,\mathrm{},d_n)`$ where $`d_1,\mathrm{},d_n\in D`$ is called a cell. Given a program $`P`$ and domain size $`m`$, the set of all cells is fixed. A pair $`c,v`$ where $`c`$ is a cell and $`v\in D`$ is the mapping of that cell is called a component and $`v`$ the value of the component. A set of components defines a pre-interpretation if there is exactly one component $`c,v`$ for each cell.
A variable assignment $`V`$ wrt. expression $`E`$ and pre-interpretation $`J`$ consists of an assignment of an element in the domain $`D`$ for each variable in $`E`$. A term assignment wrt. $`J`$ and $`V`$ is defined as follows: each variable is given its assignment according to $`V`$; each constant is given its assignment according to $`J`$; if $`d_1,\mathrm{},d_n`$ are the term assignments of $`t_1,\mathrm{},t_n`$ then the assignment of $`f(t_1,\mathrm{},t_n)`$ is the value of the cell $`f(d_1,\mathrm{},d_n)`$.
An interpretation $`I`$ based on a pre-interpretation $`J`$ consists of a mapping $`p_I`$ from $`D^n`$ to $`\{false,true\}`$ for every $`n`$-ary predicate $`p`$ in $`P`$. An interpretation $`I`$ is often defined as the set of atoms $`p(d_1,\mathrm{},d_n)`$ for which $`p(d_1,\mathrm{},d_n)`$ is mapped to true. An interpretation $`M`$ is a model of a program $`P`$ iff all clauses in $`P`$ are true in $`M`$. For a definite program, the intersection of two models is also a model hence a definite program always has a unique least model. As a consequence, if a conjunction of atoms is false in some model then it is also false in the least model of a definite program.
Throughout the paper we will use the following simple example about even and odd numbers to show the different concepts and program transformations.
```
even(zero).
even(s(X)) :- odd(X).
odd(s(X)) :- even(X).
```
Consider a query ?- even(X),odd(X). For simplicity of the presentation we will add to the program the definite clause
```
even_odd :- even(X),odd(X).
```
and consider the query ?- even\_odd. It cannot succeed as ?- even\_odd is not a logical consequence of the program. The SLD proof procedure does not terminate. This is still the case when extended with tabulation as in XSB-Prolog.
We choose a domain with two elements $`D=\{0,1\}`$ and consider the pre-interpretation $`J=\{zero_J=0,s_J(0)=1,s_J(1)=0\}`$. The least model of the definite program is $`\{even(0),odd(1)\}`$ and the atom even\_odd is false in this model.
## 3 The Method
Figure 1 shows the general architecture of the system. The input consists of a definite program $`P`$, a query ?-$`Q`$ and domain size $`m`$. First the program and the query are transformed to $`P^t`$ and ?-$`Q^t`$. The transformation replaces all functional symbols with calls to predicates defining the components of the pre-interpretation and allows the program to collect the components which were used during the evaluation of the query. Also an initial pre-interpretation $`J`$ is constructed for the given domain size $`m`$. Then the query ?-$`Q^t`$ is evaluated wrt. the program $`P^t`$ and the current pre-interpretation $`J`$. If the query succeeds then it also returns a set of components $`CS`$ which are necessary for the success of the proof. Then, based on $`CS`$, the pre-interpretation is modified and the query is run again. If we have exhausted all possible pre-interpretations for the given domain size then we can eventually increase it and run the system again. If the query ?-$`Q^t`$ fails then $`Q^t`$ is false in the least model based on the pre-interpretation $`J`$ and we can conclude that the original query ?-$`Q`$ cannot succeed.
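Schematically, this outer loop can be pictured as the following XSB-Prolog driver (an illustrative sketch only; the predicate names transform/4, initial\_pre\_interpretation/2, evaluate/4 and revise/3 are hypothetical stand-ins, not the actual implementation):

```
%% Illustrative sketch of the outer loop; all predicate names are
%% hypothetical stand-ins for the components described above.
prove_failure(Program, Query, DomainSize) :-
        transform(Program, Query, ProgramT, QueryT),
        initial_pre_interpretation(DomainSize, J),
        adjust(ProgramT, QueryT, J).

adjust(ProgramT, QueryT, J) :-
        ( evaluate(QueryT, ProgramT, J, ConflictSet) ->
            % the query succeeded: J cannot prove failure, so revise
            % at least one component occurring in the conflict set
            revise(J, ConflictSet, NewJ),
            adjust(ProgramT, QueryT, NewJ)
        ;   % the query failed: it is false in the least model based on J
            write(failure_proven(J)), nl
        ).
```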
### 3.1 Basic Transformation
To evaluate the query in the least model based on a pre-interpretation $`J`$, we use a variant of the abstract compilation approach to program analysis used by Codish and Demoen in . The pre-interpretation $`J`$ of a $`n`$-ary function $`f`$ is represented by a set of facts $`p_f(d_1,\mathrm{},d_n,v)`$; one fact for each cell $`f(d_1,\mathrm{},d_n)`$. In the source program, non variable terms are represented by their pre-interpretation. This is achieved by replacing a term $`f(t_1,\mathrm{},t_n)`$ by a fresh variable $`X`$ and introducing a call $`p_f(t_1,\mathrm{},t_n,X)`$. This transformation is repeated for the non variable terms in $`t_1,\mathrm{},t_n`$ until all functions are eliminated. Codish and Demoen evaluate the resulting DATALOG program bottom up, obtaining the least model which expresses declarative properties of the program. In , one also transforms the query and using a top-down procedure with tabulation checks whether it fails. Experience showed that one typically ends up with computing the whole model of the predicates reachable from the query. So the meta-interpreter used there tables only the most general call for each predicate. As we want direct execution under XSB, our transformation has to take care that a program predicate is only called with all variables free and different, so that XSB tables only the most general call. To achieve this, a predicate $`p_f(\mathrm{})`$ which is added to compute a term $`t`$ in a call is inserted after the call and a predicate which is added to compute a term in the head is inserted at the end of the clause. Finally, when a call to a program predicate contains a variable $`X`$ which already occurs to the left of its position in the clause, then it is replaced by a fresh variable $`Y`$ and an equality $`X=Y`$ is inserted after the call. The calls to the pre-interpretation are not tabled, and a call $`p_f(g(\mathrm{}),\mathrm{})`$ is transformed in $`p_g(\mathrm{},X),p_f(X,\mathrm{})`$. This gives less branching than when $`p_g(\mathrm{})`$ is added after $`p_f(\mathrm{})`$. For our example this gives the following code:
```
even(X) :- p_zero(X).
even(Y) :- odd(X),p_s(X,Y).
odd(Y) :- even(X),p_s(X,Y).
even_odd :- even(X),odd(X1),X1=X.
p_zero(0).
p_s(0,1).
p_s(1,0).
```
In , values are assigned to the cells of the pre-interpretation in an abductive way, as needed by the heuristic search for a proof of the query. When a proof is found, standard backtracking occurs: the last assigned value is modified. To have direct execution under XSB, the pre-interpretation has to be fixed in advance. Obviously, it is not feasible to enumerate all possible pre-interpretations until one is found for which the query fails. The search has to be guided by the proof found so far. Failure analysis and intelligent backtracking have to be incorporated to obtain a usable system.
### 3.2 Failure Analysis
#### 3.2.1 Elementary Failure Analysis.
As the goal is to find a pre-interpretation for which the query fails, failure occurs when the query succeeds. In the more general setting of first order model generation, failure occurs when some formula gets the wrong truth value. The FINDER and FMC<sub>ATINF</sub> systems keep track of which cells are used in evaluating a formula and when the formula receives the wrong truth value, the set of cells used in evaluating it is used to direct the backtracking. In , the meta-interpreter is extended with such a failure analysis and intelligent backtracking is used to guide the search. This substantially improved the performance of the system. Incorporating these features in the current approach, which relies on direct execution with XSB of the transformed query, requires special care. First let us formalize the notion of conflict set (refutation in first order model generators ).
###### Definition 1 (Conflict set)
A conflict set $`CS`$ of a definite program $`P`$ and query $`Q`$ is a finite set of components such that for any pre-interpretation $`J`$ for which $`CSJ`$ follows that $`Q`$ is true in any model of $`P`$ based on $`J`$.
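For instance, for the transformed even/odd program of Section 3.1, {p\_zero(0), p\_s(0,0)} is a conflict set: any pre-interpretation containing these two components makes $`even(0)`$ and $`odd(0)`$ true in its least model, and hence even\_odd true in every model based on it, whatever value is assigned to the remaining cell $`s(1)`$.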
The idea is that any pre-interpretation $`J`$ which has the same values for all components from the conflict set $`CS`$ can not be extended to an interpretation in which the query fails. Hence any candidate pre-interpretation must differ from $`CS`$ in the value of at least one component. Exploiting conflict sets requires first to compute them. This can be done by adding to the program predicates an extra argument which is used to collect the components used for solving a call to this predicate. For example a call even(X) is replaced by even(X,CS) and the answer even(0) becomes even(0,\[p\_zero(0)\]). However there is a potential problem. Also even(0,\[p\_zero(0),p\_s(0,1),p\_s(1,0)\]) is an answer. Previously, the tabling system did not recognize it as a new answer and did not use it to solve calls to even/1. But as the value of the added second argument differs from that in the first answer, XSB will also use it to solve calls to $`even/2`$ and it will obtain a third answer. Fortunately, if the list of used components is reduced to some canonical form, then the third answer will be identical to the second and the evaluation will terminate. However, this repetition of answers with different lists of components can substantially increase the cost of the query evaluation. Fortunately the XSB system has built-in predicates to inspect and modify the tables so we can control this behavior. The idea is to replace a clause
```
p(X,CS) :- Body.
```
with a clause
```
p(X,CS) :- Body,check_return(p(X,CS)).
```
When the body of the clause succeeds, XSB will process the answer $`p(X,CS)`$ (add it to the table for the call to $`p/2`$ if it is new). Remember, that as the transformed program makes only most general calls there is only one table associated with each predicate. Using the built-ins, the predicate $`check\mathrm{\_}return/1`$ looks up the previous answers in the table for $`p/2`$ and compares them with the candidate answer $`p(X,CS)`$. If there is no other answer with the same $`X`$ then $`check\mathrm{\_}return/1`$ and thus $`p/2`$ simply succeed. The interesting case is when the table already holds an answer $`p(X,CS_{old})`$ with a different conflict set $`CS_{old}`$ (if $`CS_{old}=CS`$ then XSB will recognize it is a duplicate answer). Then several strategies are possible for check\_return/1:
* The simplest approach is to let check\_return/1 fail when the table already holds an answer with the same $`X`$.
* An alternative approach is to check whether the new conflict set $`CS`$ is “better” than $`CS_{old}`$. Then the old answer is removed from the table and check\_return/1 succeeds. Otherwise check\_return/1 fails. (A sketch of this strategy is given after this list.)
* Finally, but more expensive for the overall query evaluation, one could allow several answers, only rejecting/removing redundant ones ($`p(X,CS_1)`$ is redundant wrt. $`p(X,CS_2)`$ if $`CS_1CS_2`$).
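As announced, the sketch below illustrates the second strategy: it keeps, for every tuple of problem arguments, only the answer with the shortest conflict set. It is only a sketch: it assumes that XSB’s table-inspection builtins get\_calls/3, get\_returns/2 and delete\_return/2 behave as described in the XSB manual, and the helper predicate shorter/2 is ours:

```
%% Sketch of check_return/1 for the "prefer the shorter conflict set"
%% strategy (illustrative; relies on XSB's table inspection builtins).
:- import get_calls/3, get_returns/2, delete_return/2 from tables.

check_return(Answer) :-
        Answer =.. [Pred|Args],
        append(ProblemArgs, [NewCS], Args),     % last argument: conflict set
        length(Args, N),
        functor(General, Pred, N),              % most general call p(_,...,_)
        ( get_calls(General, Subgoal, Return),
          get_returns(Subgoal, Return),         % enumerate the old answers
          General =.. [Pred|OldArgs],
          append(ProblemArgs, [OldCS], OldArgs) % same problem arguments?
        ->  ( shorter(NewCS, OldCS) ->
                delete_return(Subgoal, Return)  % new answer replaces old one
            ;   fail                            % keep the old answer
            )
        ;   true                                % first such answer: accept it
        ).

shorter(A, B) :- length(A, LA), length(B, LB), LA < LB.
```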
#### 3.2.2 Advanced Failure Analysis.
A conflict set can be called minimal if it has no subset which is a conflict set. Obviously it is not feasible to compute minimal conflict sets. However, simply collecting the components used in a proof can be a large overestimation. For example, in our planning problems, a three argument predicate is used: one argument is the initial state, one argument is the final state and one argument is the description of the derived plan. The pre-interpretation of the terms representing the plan is completely irrelevant for the failure of the query. However the components used to compute it will be part of the conflict set.
To see how to refine our failure analysis, let us reconsider how answers are obtained. Using a slightly different notation, the base case of the $`even/1`$ predicate can be written as:
```
even(X) :- X=0_J.
```
This represents the basic answer, parameterized by the pre-interpretation $`J`$. Now consider the definition of the $`odd/1`$ predicate:
```
odd(X) :- even(Y),X=s_J(Y).
```
An answer of $`odd/1`$ is obtained by performing resolution with the basic answer for $`even/1`$, yielding:
```
odd(X) :- Y=X1,X1=0_J,X=s_J(Y).
```
This can be generalized: answers for a predicate $`p/n`$ are of the form
$$p(X_1,\mathrm{},X_n)\leftarrow X_1=t_{1_J},\mathrm{},X_n=t_{n_J},Eqs$$
with $`Eqs`$ a set of equations involving $`X_1,\mathrm{},X_n`$ and some local variables $`Y_1,\mathrm{},Y_n`$. Under the elementary failure analysis the answer is $`p(t_{1_J},\mathrm{},t_{n_J})`$ and the associated conflict set is the set of components used in computing $`t_{1_J},\mathrm{},t_{n_J}`$ and the terms of $`Eqs`$.
The basis for the advanced failure analysis is the observation that the answer clauses can be simplified while preserving the solution they represent. Terms form equivalence classes under a pre-interpretation. Members of an equivalence class can be represented by the domain element which is their pre-interpretation and equalities between terms modulo equivalence class can be simplified using three of the four Martelli-Montanari simplification rules:
* $`p(t_{1_J},\mathrm{},t_{n_J})\leftarrow X=X,Eqs`$ is equivalent to
$`p(t_{1_J},\mathrm{},t_{n_J})\leftarrow Eqs`$ (remove)
* $`p(t_{1_J},\mathrm{},t_{n_J})\leftarrow t_J=X,Eqs`$ is equivalent to
$`p(t_{1_J},\mathrm{},t_{n_J})\leftarrow X=t_J,Eqs`$ (switch)
* $`p(t_{1_J},\mathrm{},t_{n_J})\leftarrow X=t_J,Eqs`$ is equivalent to
$`p(t_{1_J},\mathrm{},t_{n_J})\{X/t_J\}\leftarrow Eqs\{X/t_J\}`$ (substitute)
Note that $`f_J(t_{1_J},\mathrm{},t_{n_J})=g_J(s_{1_J},\mathrm{},s_{m_J})`$ is not equivalent to $`false`$ and that $`f_J(t_{1_J},\mathrm{},t_{n_J})=f_J(s_{1_J},\mathrm{},s_{n_J})`$ is not equivalent to $`t_{1_J}=s_{1_J},\mathrm{},t_{n_J}=s_{n_J}`$, hence peel is not allowed.
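For instance, the answer clause for odd/1 derived above simplifies in three substitute steps:

```
odd(X) :- Y=X1, X1=0_J, X=s_J(Y).   % the answer clause derived above
odd(X) :- X1=0_J, X=s_J(X1).        % substitute {Y/X1}
odd(X) :- X=s_J(0_J).               % substitute {X1/0_J}
odd(s_J(0_J)).                      % substitute {X/s_J(0_J)}
```

Here $`Eqs`$ ends up empty, so the real conflict set of this answer is empty; the components interpreting $`zero`$ and $`s`$ are only needed to compute the head value and, as explained below, are attached to that value rather than to the conflict set.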
So an answer can be simplified to a form
$$p(t_{1_J},\mathrm{},t_{n_J})\leftarrow Eqs$$
where $`Eqs`$ contains equations between non-variable terms and some of the $`t_{i_J}`$ in the head can be variables. The pre-interpretations in the terms of $`Eqs`$ decide whether $`Eqs`$ is interpreted as true or false, hence the components used in interpreting the terms in $`Eqs`$ form the real conflict set of the answer. However also the components used to interpret the terms $`t_{i_J}`$ of the head are important. When the answer is used to solve a call, they become part of new equations. Hence, with each variable we should associate a set holding the components used in evaluating the term the variable is bound to and with each answer we should associate the “real” conflict set. Moreover, the execution of the equalities $`X=Y`$ has to be monitored. When one of $`X`$ or $`Y`$ is free then unification can be performed, otherwise if $`X`$ and $`Y`$ have the same interpretation then the sets of components associated with $`X`$ and $`Y`$ have to be added to the conflict set of the answer (as before the equality fails when $`X`$ and $`Y`$ have a different interpretation). Note that our transformation is such that calls have fresh variables as arguments, so the equality between an argument of a call and an argument of an answer always involves a free variable and is correctly handled by standard unification. A final point is that the body of the compiled clause has to be carefully ordered: equalities on predicate calls involving a variable $`X`$ should precede the interpretation of a term containing $`X`$, e.g. $`p(X),Y=f_J(X)`$ is a correct ordering: first the call $`p/1`$ binds $`X`$ to a domain element and also returns the set of components $`CS_X`$ used in computing that domain element. Then $`Y`$ is bound to a domain element and the set of components used in computing it is $`\{f_J(X)\}\cup CS_X`$. Taking the above into account, the code for our example is as follows:
```
even(X,[]) :- comp(p_zero,[],X), check_return(even(X,[])).
even(X,CS) :- odd(Y,CS),comp(p_s,[Y],X), check_return(even(X,CS)).
odd(X,CS) :- even(Y,CS),comp(p_s,[Y],X), check_return(odd(X,CS)).
even_odd(CS) :-
even(X,EvenCS),odd(Y,OddCS),
merge(EvenCS,OddCS,CS1),unify(X,Y,CS1,CS),
check_return(even_odd(CS)).
```
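To see the instrumented program at work, consider the candidate pre-interpretation p\_zero(0), p\_s(0,0), p\_s(1,1). Then even/2 returns the answer value 0 annotated with the component set \[p\_zero(0)\], odd/2 returns the value 0 annotated with \[p\_s(0,0),p\_zero(0)\], and unify/4 succeeds, adding both sets to the conflict set of the clause: even\_odd(CS) succeeds with CS consisting of p\_zero(0) and p\_s(0,0). Backtracking therefore knows it must modify one of these two components, while the cell $`s(1)`$ never enters a conflict.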
Calls to the pre-interpretation are made through an intermediate predicate comp/3 defined below. The call to combine\_arg\_cs/3 collects the conflict sets associated with the ground arguments of the function to be interpreted (none if the argument is a free variable) in ArgsCS and merge/3 extends ArgsCS with Comp, the consulted component of the pre-interpretation, to obtain the final conflict set ResCS.
```
comp(F,Args,R-ResCS) :-
combine_arg_cs(Args,RealArgs,ArgsCS),
append([F|RealArgs],[R],C),Comp =.. C,
call(Comp),
merge([Comp],ArgsCS,ResCS).
combine_arg_cs([],[],[]).
combine_arg_cs([A-[]|T],[A|T1],RestCS) :- !,
combine_arg_cs(T,T1,RestCS).
combine_arg_cs([A-ACS|T],[A|T1],OutCS) :-
combine_arg_cs(T,T1,RestCS),
merge(ACS,RestCS,OutCS).
```
The merge/3 predicate makes the union of two sets (represented as lists) and places the result in a canonical form; unify/4 is used to monitor the unification process and can be defined by the following Prolog code:
```
unify(X,Y,S,S) :- (var(X);var(Y)), !, X=Y.
unify(X-Sx,X-Sy,Sin,Sout) :- merge(Sx,Sy,S), merge(S,Sin,Sout).
```
The first two arguments are the terms to be unified, the third is the current conflict set of the clause and the last argument is the new conflict set of the clause. The first clause handles the case that one is a free variable: unification is performed and the conflict set of the clause remains the same. The second clause handles the case that both arguments $`X`$ and $`Y`$ are bound to the same domain element. The set of components used in evaluating the first argument $`(Sx)`$ and in evaluating the second argument $`(Sy)`$ are added to $`Sin`$ yielding $`Sout`$.
### 3.3 Intelligent Backtracking
Under standard backtracking, candidate pre-interpretations are enumerated according to some fixed total ordering $`c_1,c_2,\mathrm{},c_n`$ of the cells. When some partial solution $`c_1=d_1^1,c_2=d_2^1,\mathrm{},c_m=d_m^1`$ is rejected then the value assignment $`d_m^1`$ for the last cell $`c_m`$ is modified. If no other value is left, then $`c_{m-1}`$ is modified (and all domain elements become again available for $`c_m`$). The simplest use of conflict sets is based on the observation that no extension of the conflict set can be a solution, so the last cell of the conflict set according to the total order is selected and the assignment to this cell is modified. However also secondary conflict sets can be derived . Assume, due to different conflicts, all values for some cell $`c_n`$ have been rejected. With $`\{c_{i,1},\mathrm{},c_{i,k_i},c_n\}`$ the conflict set which led to the rejection of $`d_i`$ we can formalize the knowledge in the conflict sets as:
$`c_{1,1}=d_{1,1}\wedge \mathrm{}\wedge c_{1,k_1}=d_{1,k_1}\wedge c_n=d_1\to false`$
$`\mathrm{}`$
$`c_{m,1}=d_{m,1}\wedge \mathrm{}\wedge c_{m,k_m}=d_{m,k_m}\wedge c_n=d_m\to false.`$
As we have that cell $`c_n`$ must be assigned some domain element, we have $`c_n=d_1\vee \mathrm{}\vee c_n=d_m`$. Applying hyper-resolution , one can infer
$`c_{1,1}=d_{1,1}\wedge \mathrm{}\wedge c_{1,k_1}=d_{1,k_1}\wedge `$
$`\mathrm{}`$
$`\wedge c_{m,1}=d_{m,1}\wedge \mathrm{}\wedge c_{m,k_m}=d_{m,k_m}\to false`$
which says that $`\{c_{1,1},\mathrm{},c_{1,k_1},\mathrm{},c_{m,1},\mathrm{},c_{m,k_m}\}`$ is also a conflict set.
At the implementation level, an accumulated conflict set is associated with each cell and initialized as empty. When a conflict $`\{c_1,\mathrm{},c_{n1},c_n\}`$ is derived with $`c_n`$ its last cell, then $`\{c_1,\mathrm{},c_{n1}\}`$ is added to the accumulated conflict set of $`c_n`$. Once all assignments to a cell are exhausted, its associated conflict set holds the secondary conflict which can be used to direct further backtracking. This is the approach taken in where it worked quite well, as the initial order was carefully chosen. In the current implementation, where the initial order over the cells is random, the system had to do much more search before finding a solution. Hence we adopted a variant of intelligent backtracking mentioned in which leaves the cells unordered until they participate in a conflict. Under this approach, cells are split over two sets, a set with a total order (initially empty) and a set which is unordered. When a conflict is found, the cells from it which are in the unordered set (if any) are moved to the end of the ordered set. Then the last cell of the conflict set is chosen as target of the backtracking. Cells which are after the target in the total order return to the unordered set. This approach resulted in substantially better results.
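A sketch of the target selection in this variant is given below (the predicate names are illustrative, not the actual code); the ordered cells are kept in a list, and every cell not in that list is implicitly unordered:

```
%% Sketch: choose the backtrack target for a conflict set Conflict,
%% given the currently ordered prefix Ordered0 of cells.
select_target(Conflict, Ordered0, NewOrdered, Target) :-
        % conflict cells that are still unordered move to the end
        % of the ordered sequence
        findall(C, (member(C, Conflict), \+ member(C, Ordered0)), New),
        append(Ordered0, New, Ordered1),
        % the target is the last cell of the conflict in the total order
        reverse(Ordered1, Reversed),
        member(Target, Reversed),
        member(Target, Conflict), !,
        % cells ordered after the target become unordered again
        prefix_upto(Ordered1, Target, NewOrdered).

prefix_upto([Target|_], Target, [Target]) :- !.
prefix_upto([Cell|Cells], Target, [Cell|Rest]) :-
        prefix_upto(Cells, Target, Rest).
```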
### 3.4 Dealing with Equational Problems
There exist many problems which contain only one predicate, the equality predicate $`eq/2`$. They consist of a number of facts $`eq(t_{i_1},t_{i_2})`$ for $`i=1,\mathrm{},m`$ and a number of denials $`\leftarrow eq(s_{j_1},s_{j_2})`$ for $`j=1,\mathrm{},n`$. To solve such problems, one has to add to the program the axioms of the equality theory for reflexivity, symmetry, transitivity and function substitution; the latter consists of an axiom
$$f(X_1,\mathrm{},X_n)=f(Y_1,\mathrm{},Y_n)\leftarrow X_1=Y_1\wedge \mathrm{}\wedge X_n=Y_n.$$
for each functor $`f/n`$. The least model of the standard equality theory is the identity relation over the domain of the interpretation, hence the search space can be reduced by restricting the interpretation of $`eq/2`$ to the identity relation.
In the abductive system of , this is achieved by initializing the interpretation of $`eq/2`$ as identity, and removing the standard equality theory (only the problem specific facts and denials remain). Backtracking is initiated as soon as either one of the denials $`eq(s_{j_1},s_{j_2})`$ evaluates to true or one of the facts $`eq(t_{i_1},t_{i_2})`$ results in an answer which is not in the identity relation.
With direct execution under XSB, a slightly different approach is required. Unification reduces to the identity relation, hence after compiling the terms, the call to $`eq/2`$ can be done by unifying the compiled terms. However, the problem is that all facts and denials need to be activated. Therefore a new predicate $`p/0`$ is introduced and defined as follows:
$`p\leftarrow \neg eq(t_{i_1},t_{i_2}).`$ $`i=1,\mathrm{},m`$
$`p\leftarrow eq(s_{j_1},s_{j_2}).`$ $`j=1,\mathrm{},n`$
Proving failure of the query $`p`$ yields the desired pre-interpretation. Indeed $`p`$ is equivalent to
$$p\leftrightarrow \underset{1\le i\le m}{\bigvee }\neg eq(t_{i_1},t_{i_2})\vee \underset{1\le j\le n}{\bigvee }eq(s_{j_1},s_{j_2}).$$
Hence $`p`$ fails if the right-hand side is false, i.e. if
$$\underset{1\le i\le m}{\bigwedge }eq(t_{i_1},t_{i_2})\wedge \underset{1\le j\le n}{\bigwedge }\neg eq(s_{j_1},s_{j_2})$$
is true. $`eq(t_{i_1},t_{i_2})`$ is equivalent to the fact $`eq(t_{i_1},t_{i_2})`$ and $`\neg eq(s_{j_1},s_{j_2})`$ is equivalent to the denial $`\leftarrow eq(s_{j_1},s_{j_2})`$. Thus $`p`$ fails if the conjunction of the original facts and denials is true under the chosen pre-interpretation. Compilation of terms is as described in Section 3.1, i.e. a call $`eq(s_{j_1},s_{j_2})`$ is replaced by a call $`X_{j_1}=X_{j_2}`$ preceded by the code computing the pre-interpretation of $`s_{j_1}`$ and $`s_{j_2}`$. A call $`\neg eq(t_{i_1},t_{i_2})`$ is handled in a similar way; the built-in $`\backslash =`$ (not unifiable) can be used instead of not equal. However, special care is required to ensure the arguments are ground in case $`t_{i_1}`$ or $`t_{i_2}`$ is a variable. Whereas the compilation leaves such variables intact, here such a variable has to be mapped to a domain element (the mapping introduces a backtrack point).
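As a (hypothetical) toy instance, a fact $`eq(g(g(a)),a)`$ together with a denial $`\leftarrow eq(g(a),a)`$ would compile, leaving out the failure analysis for readability, into:

```
p :- p_a(X), p_g(X,X1), p_g(X1,X2), X2 \= X.  % from the fact eq(g(g(a)),a)
p :- p_a(Y), p_g(Y,Y1), Y1 = Y.               % from the denial <- eq(g(a),a)
```

For the pre-interpretation p\_a(0), p\_g(0,1), p\_g(1,0) both clauses fail, hence p fails: $`g(g(a))=a`$ holds while $`g(a)=a`$ does not, as required.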
As in Section 3.2, conflict sets can be associated with terms for the task of advanced failure analysis. Hence a call $`\neg eq(t_{i_1},t_{i_2})`$ is transformed into the sequence $`interpret(t_{i_1},X_{i_1}),interpret(t_{i_2},X_{i_2}),disunify(X_{i_1},X_{i_2},S_{in},S_{out})`$ where $`interpret/2`$ is an abbreviation for the sequence of calls computing the pre-interpretation of the term and the associated conflict set and $`disunify/4`$ is defined as
```
disunify(X-Sx,Y-Sy,Sin,Sout) :-
X\=Y,merge(Sx,Sy,S), merge(S,Sin,Sout).
```
## 4 Experiments
### 4.1 The Problems
We tested our system with a large number of different problems. Below we give a short description for each one of them and for some of them the source code is given in Appendix 0.A.
#### 4.1.1 List Manipulation.
The appendlast problem uses the standard definition of the predicates append and last and the following query:
```
appendlast :- append(X, [a], Xs),last(Xs, b).
```
The reverselast problem is similar to the appendlast problem but uses the version of the predicate reverse with accumulator:
```
reverselast:- reverse(L, R, [a]), last(R, b).
```
The nreverselast problem uses the “naive” definition of reverse:
```
nreverselast :- reverse([a|X], R), last(R, b).
```
#### 4.1.2 Multisets.
The multiset?o problems are programs to check the equivalence of two multisets, using a binary operator “o” to represent them. multiset3o is a problem which has a solution, thus failure cannot be proven for it.
#### 4.1.3 Planning in the Blocks-World.
These are simple problems for planning in the blocks-world. The theory for the blockpair problems has, besides the usual actions of the blocks-world, an action to add or remove a pair of blocks. In the blockzero problems, the extra action is to create a new block named $`s(X)`$ on top of a clear block $`X`$.
The queries ending in “o” use multisets based on the function o/2 and those ending in “l” use a standard list representation. Those problems which have the number 2 in their name do not collect the plan and those having 3 store the plan in the second argument. blockzero2ls<sup>2</sup><sup>2</sup>2corresponds to blocksol in and is a problem which has a solution.
#### 4.1.4 TPTP-Problems.
The rest of the examples are taken from the TPTP problem library . In Table 1 the TPTP names for each of them are given in brackets. All these problems are equational problems and are transformed in the way described in Section 3.4.
The tba problem is to prove an independence of one axiom for ternary boolean algebra.
The grp problem is to prove that some axiom is not a single axiom for group theory.
The cl3 problem is from the domain of combinatory logic and the goal is to find a set of combinators which satisfy axioms $`S`$ and $`W`$ and do not satisfy the weak fixed point property.
Table 1 gives some details about the properties of the problems. The column #pred shows the number of predicates. The column size dom gives the domain size for which the query has been evaluated (which is, for the failing queries, the minimum domain size for which a model proving failure exists). The column size pre gives the number of cells in the pre-interpretation and the next column #pre gives the number of all possible pre-interpretations for the given domain size. The column size int gives the number of atoms to be assigned a truth value in an interpretation and the last column #int/pre gives the number of different interpretations for a fixed pre-interpretation. For the TPTP problems this value is 1 because they have only one predicate for which the interpretation is known to be identity.
### 4.2 Results
The results with FMC<sub>ATINF</sub> were taken from or were sent to us by its author, who was using a SUN 4 ELC machine. All other systems were run on a SUN Sparc Ultra-2 computer. The system AB is the abductive system described in , however running under (the slower) XSB-Prolog instead of Master Prolog for a fair comparison. We used FINDER version 3.0.2 and SEM version 1.7, which are well known model generators implemented in C.
The system naive results from the direct translation of the system AB to XSB: it uses the same failure analysis, it starts from a random total order over the cells of the pre-interpretation and it uses the simplest variant of check\_return which sticks to the first answer whatever the associated conflict set is. For the TPTP problems the standard equality axioms were used.
The systems single CS and best CS use a more sophisticated version of check\_return which prefers the answer with the shorter conflict set, advanced failure analysis and the more sophisticated version of intelligent backtracking which leaves elements unordered until they participate in a conflict set. The system single CS uses the first answer to the top level query to direct the backtracking. The system best CS computes all answers to the top level query and then selects from them the conflict set which will add the fewest number of cells to the ordered sequence. Both systems use the technique described in Section 3.4 on the TPTP problems.
Table 2 gives the times obtained by the different systems. The time is in seconds unless followed by H, then it is in hours. A “-” means the example was not run. A “$`>n`$” means the system had still no solution after time $`n`$.
Table 3 shows the number of generated and tested pre-interpretations (number of backtracks). For the SEM system, we have modified the source code to report exactly this number. For the FINDER system we report the sum of the number of bad candidates tested and other backtracks. Also in this table “-” means not run, “$`>n`$” means already $`n`$ backtracks when interrupted. For the system best CS we give an additional column total which shows the total number of conflict sets obtained as “answers” to the query (divided by the number of backtracks, this gives the average number of conflict sets obtained when running the query).
### 4.3 Discussion
Comparing the systems naive and AB, we see that the straightforward transfer of AB to XSB results in a much worse behavior. Hence the heuristics used by AB to control the search have a big impact.
The effect of the advanced failure analysis is not reported separately. Its impact is only visible in the block\*3? problems which compute, for the failure analysis, an irrelevant output argument. The advanced failure analysis makes these problems behave as well as the corresponding block\*2? problems. Note that the AB system as well as all first order model generators behave much worse on the 3-argument problems than on the corresponding 2-argument problems. As computing some output is a natural feature of a logic program, the advanced failure analysis is an important asset of our system.
Adding the more sophisticated backtracking which does not fix the order of the cells in advance yields a substantial improvement on most problems. The system single CS, which sticks everywhere to the first conflict set, is often the fastest, although it often needs more backtracks than best CS. It fails only on nreverselast, which uses a 5-element domain and has a very large search space. However, on the equality problems it becomes obvious that a good choice of conflict set is essential for solving such problems. In the number of backtracks, best CS compares quite well with AB. Only on blockzero2ls does it need many more backtracks, while it needs far fewer on nreverselast. Perhaps on blockzero2ls, which has no solution, it suffers from the less optimal ordering because the search space has to be searched exhaustively.
Among the model generators, FINDER and SEM perform reasonably well in terms of time and also in number of backtracks. However, the results for FINDER were obtained only after a fine tuning of the different parameters and of the representation of the problems (see ). The system also uses intelligent backtracking for deriving secondary conflict sets and some other forms of failure analysis. It has a smaller number of backtracks on the more complex planning problems than SEM. The system SEM is the fastest in raw speed and is not so sensitive to the problem representation. Of the model generators, the system FMC<sub>ATINF</sub> is the weakest on the class of problems we consider. This result contrasts with the results in , where it is the best on several problems.
Compared with our system, the model generators have to backtrack much more on the planning problems and the other logic programs, where they have to explore the full space of interpretations while we look only for the least model of the program for a given pre-interpretation (the extra cost of evaluating the query in the least model is more than compensated for by the exponentially smaller search space). On the TPTP problems our system does worse, which suggests that there is further room for making better use of the information in conflict sets.
## 5 Conclusion
In this paper we presented a method for proving failure of queries for definite logic programs based on direct execution of the abstracted program in XSB-Prolog, a standard top-down proof procedure with tabulation.
By using a better form of intelligent backtracking (proposed in ) which does not fix the enumeration order in advance and an improved failure analysis, we were able to compensate for the loss of flexibility which results from the direct execution of the abstracted program.
This way of intelligent backtracking could also be interesting for other systems, e.g. FMC<sub>ATINF</sub>, for which Peltier reports that it is quite sensitive to the initial enumeration order.
While the differences in speed with the AB system are modest, the approach is still very interesting, as the depth-first left-to-right execution results in much better memory management, so that larger problems can be tackled. The meta-interpreter of the AB system keeps track of the whole top-down proof tree in evaluating the query, which leads to very large memory consumption.
Interesting future work is to further investigate some control issues. One could explore whether there is a good compromise between computing only one solution to the query and computing all solutions. One could try to further improve the backtracking by developing some heuristics which order a group of new elements when they are inserted in the ordered sequence.
## Acknowledgements
We want to thank Kostis Sagonas for his help with the XSB system. Maurice Bruynooghe is supported by FWO-Vlaanderen. Nikolay Pelov is supported by the GOA project LP+.
## Appendix A Code for Some of the Problems
### A.1 Multiset
```
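% Top-level queries; the task is to prove that each of them fails.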
multiset1o :- sameMultiSet(a, X), sameMultiSet(X, b).
multiset2o :- sameMultiSet(o(a,o(a,emptyMultiSet)),o(X,o(emptyMultiSet,b))).
multiset3o :- sameMultiSet(o(a,o(a,o(emptyMultiSet,b))),
o(o(a,b),o(a,emptyMultiSet))).
sameMultiSet(X, X).
sameMultiSet(o(X, Y), o(X, Z)):- sameMultiSet(Y, Z).
sameMultiSet(o(o(X, Y), Z), U):- sameMultiSet(o(X, o(Y, Z)), U).
sameMultiSet(U, o(o(X, Y), Z)):- sameMultiSet(U, o(X, o(Y, Z))).
sameMultiSet(o(emptyMultiSet, X), Y) :- sameMultiSet(X, Y).
sameMultiSet(X, o(emptyMultiSet, Y)) :- sameMultiSet(X, Y).
sameMultiSet(o(X, Y), Z) :- sameMultiSet(o(Y, X), Z).
```
### A.2 Planning Problems
Blocks are identified by integers represented as terms with the constant $`0`$ and the function $`s/1`$. The $`actionZero/3`$ predicate gives the possible actions and the $`causesZero/3`$ predicate tries to find a plan. In both predicates the first argument is the initial state, the last argument is the final state and the plan is collected in the second argument.
```
blockzero3o :-
causesZero(o(o(on(s(s(0)), s(0)), cl(s(s(0)))), em), Plan,
o(on(s(0), 0), Z)).
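% causesZero(InitialState, Plan, GoalState): search for a plan leading
% from the initial state to the goal state.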
causesZero(I1, void, I2):-
sameMultiSet(I1, I2).
causesZero(I, plan(A, P), G):-
actionZero(C, A, E),
sameMultiSet(o(C, Z), I),
causesZero(o(E, Z), P, G).
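% actionZero(StateBefore, Action, StateAfter): the possible actions with
% their pre- and post-states.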
actionZero(holds(V), put_down(V),
o(table(V), o(clear(V), nul))).
actionZero(o(clear(V), o(table(V), nul)), pick_up(V),
holds(V)).
actionZero(o(holds(V), clear(W)), stack(V, W),
o(on(V,W), o(clear(V), nul))).
actionZero(o(clear(V), o(on(V, W), nul)), unstack(V),
o(holds(V), clear(W))).
actionZero(o(on(X, Y), o(clear(X), nul)), generate_block,
o(on(s(X), X), o(on(X, Y), o(clear(s(X)), nul)))).
```
# Synchrotron and Compton Components and their Variability in BL Lac Objects
## 1. Introduction
BL Lacertae objects are extreme extragalactic sources characterized by the emission of strong and rapidly variable nonthermal radiation over the entire electromagnetic spectrum. Synchrotron emission followed by inverse Compton scattering in a relativistic beaming scenario is generally thought to be the mechanism powering these objects (e.g. Kollgaard 1994; Urry & Padovani 1995). BL Lacs can be divided into different subclasses depending on their Spectral Energy Distribution (SED), namely LBL for objects with the synchrotron emission peaking at $`\nu _{peak}\sim 10^{13-14}\,Hz`$, intermediate objects ($`\nu _{peak}\sim 10^{15-16}\,Hz`$) and HBL or high energy peaked BL Lacs with $`\nu _{peak}\sim 10^{17-18}\,Hz`$ (Padovani & Giommi 1995). The wide X-ray band pass of the BeppoSAX satellite (Boella et al. 1997) is well suited for the detailed spectral study of all types of BL Lacs. In fact, direct measurements of the Compton part of the spectrum have been obtained for a number of LBLs (e.g. Padovani et al. 1999), and the very variable tail of the synchrotron component has been studied in several HBLs (e.g. Pian et al. 1998, Wolter et al. 1998, Giommi, Padovani & Perlman 1999, Chiappetti et al. 1999). In the case of the two intermediate BL Lacs S5 0716+714 and ON 231, BeppoSAX was able for the first time to detect both spectral components within a single instrument (Giommi et al. 1999, Tagliaferri et al. 1999).
The BeppoSAX archive at the Science Data Center (SDC, Giommi & Fiore 1998) presently includes over 100 observations of 56 distinct BL Lacs, about half of which are already publicly available. We have started a project to construct the SED of a large number of all types of BL Lacs by combining a) public BeppoSAX data (0.1-200 keV); b) simultaneous optical and radio data when these are available from monitoring campaigns, or from the University of Michigan Radio Astronomy Observatory (UMRAO) on-line data base (Aller et al. 1999); and c) non-simultaneous photometric data from NED. Here we present the first results of this project.
## 2. Spectral Energy Distributions and variability
The SEDs that we have assembled are shown in figure 1 for LBLs and intermediate objects, and in figure 2 for HBL BL Lacs. The X-ray part of the plots has been constructed using data from the LECS, MECS and PDS instruments of the BeppoSAX satellite. The cleaned and calibrated data files have been taken from the SDC on-line archive and have been analyzed using the XSPEC package. Unfolded spectral data have been corrected for low energy absorption assuming $`N_H`$ equal to the Galactic value. Nearly simultaneous data are plotted with the same symbols used for the X-ray data. Optical monitoring observations are available for S5 0716+714 (Giommi et al. 1999), and ON 231 (Tagliaferri et al. 1999). Nearly simultaneous radio data from the UMRAO database are available for several objects. All other (non-simultaneous) data are plotted as small open circles and are from the photometric data points provided by NED. Strong variability at several frequencies is apparent from Figures 1 and 2. In particular, quite spectacular spectral changes are concentrated at or just after the synchrotron peak. BeppoSAX observations of intermediate BL Lacs clearly show that the soft X-ray synchrotron radiation varies in a different way from the harder Compton components (Giommi et al. 1999, Tagliaferri et al. 1999). The SEDs shown here indicate that the variability of the Compton component may be correlated with radio flux and not with the optical and soft X-ray synchrotron emission (see figure 1).
## 3. Ultra High Energy Synchrotron Peaked BL Lacs (UHBLs) ?
Figures 1 and 2 clearly show that the peak frequency of the synchrotron emission ranges from around $`10^{13}Hz`$ for OJ 287 to well above $`10^{19}Hz`$ for 1ES1426+428. Ghisellini (1999) argued that this trend could continue to much higher energies. We have thus been searching for BL Lacs with Ultra High synchrotron peak energy (UHBLs). We have selected candidate UHBLs from the sample of extreme BL Lacs of the ”Sedentary Multifrequency Survey” (Giommi, Menna & Padovani 1999) by looking for objects within the error circle of unidentified sources in the third EGRET catalog. One such object is 1RXS J23511.1-14033; its finding chart is shown in figure 3 (left). The SED of 1RXS J23511.1-14033, on the right part of figure 3, indicates that the synchrotron emission could reach the gamma ray band. A first BeppoSAX pointing of this object unfortunately gave inconclusive results since the observation had to be split into three short exposures and the spectrum appears to be variable. Details will be published elsewhere. A second UHBL candidate will be observed by BeppoSAX in a few months. If these observations confirm the hypothesis that UHBLs exist, this type of source could be the long-sought counterpart of many of the still unidentified high galactic latitude EGRET sources.
## REFERENCES
Aller M.F., Aller H.D., Hughes P.A., & Latimer G.E., 1999, ApJ, 512, 601
Boella G. et al. 1997 A&AS, 122, 299
Chiappetti,L., et al. 1999, ApJ 521, 552
Ghisellini, G., 1999, Proc 3rd Integral Workshop, Taormina, astro-ph/9812419
Giommi P.,& Fiore F. 1998, in Proc. 5th Workshop on Data Analysis in Astronomy, World Scientific, Singapore, p. 93
Giommi, P., Padovani, P. & Perlman, E. 1999, MNRAS in press, astro-ph/9907377
Giommi, P. et al. 1999, A&A, in press, astro-ph/9909241
Giommi, P., Menna, M.T., & Padovani, P. 1999, MNRAS in press, astro-ph/9907014
Kollgaard R.I., 1994 Vistas in Astronomy, 38, 29
Padovani, P. & Giommi, P. 1995, ApJ, 444, 567
Padovani, P. et al. 1999, in preparation
Pian, E., et al. 1998, ApJ, 492, L17
Tagliaferri, G., et al. 1999, A&A, submitted
Urry, C.M., & Padovani, P., 1995, PASP, 107, 803
Wolter, A. et al. 1998 A&A 335, 899
# Local and Nonlocal Properties of Werner States
## Abstract
We consider a special kind of mixed states – a Werner derivative, which is the state transformed by unitary – local or nonlocal – operations from a Werner state. We show the following. (i) The amount of entanglement of Werner derivatives cannot exceed that of the original Werner state. (ii) Although it is generally possible to increase the entanglement of a single copy of a Werner derivative by LQCC, the maximal possible entanglement cannot exceed the entanglement of the original Werner state. The extractable entanglement of Werner derivatives is limited by the entanglement of the original Werner state.
Quantum entanglement plays an essential role in various types of quantum information processing, including quantum teleportation , superdense coding , quantum cryptographic key distribution , and quantum computation . Since the best performance of such tasks requires maximally entangled states (Bell singlet states), one of the most important entanglement manipulations is the entanglement purification or distillation , namely, the process extracting maximally entangled states from input states. Most protocols for entanglement purification (distillation) proposed so far utilize collective operations on many copies of a given state $`\rho `$. Strictly speaking, these protocols rely only on the properties of $`\rho ^{\otimes N}`$ with large $`N`$ and have no direct relevance to the intrinsic properties of the individual state $`\rho `$. In fact, it has been shown that there exist no purification protocols utilizing local quantum operations and classical communications (LQCC) producing a pure singlet from a single copy of a given mixed state of two qubits . If many copies of a given mixed state are not available but only a single one, the only task we can do by LQCC is to enhance the amount of entanglement to some extent. However, there exist entangled mixed states for which even such a restricted task is not successful . Therefore, it is of fundamental importance to clarify the limit of entanglement manipulations of a single copy of a given mixed state for deeper understanding of the nature of mixed state entanglement.
We consider in this paper a special kind of mixed states – a Werner derivative, which refers to the state transformed by unitary – local or nonlocal – operations from a Werner state . We show the following. (i) The amount of entanglement of Werner derivatives cannot exceed that of the original Werner state. (ii) Although it is generally possible to increase the entanglement of a single copy of a Werner derivative by LQCC, the maximal possible entanglement cannot exceed the entanglement of the original Werner density matrix. The extractable entanglement of Werner derivatives is limited by the entanglement of the original Werner state. Here, the extractable entanglement of a given state $`\rho `$ is referred to as the maximal possible entanglement obtained by LQCC applied to a single copy of $`\rho `$ (single-state LQCC) . The first point (i) is the direct consequence of the results presented in our recent work , that is, a Werner state belongs to a set of maximally entangled mixed states, in which the amount of entanglement cannot be increased by applying any unitary operations. The second point (ii) is our main result.
The degree of entanglement of mixed states of two qubits is customarily measured by the entanglement of formation (EOF) . The EOF for a two-party pure state is defined as the von Neumann entropy of the reduced density matrix associated with one of the parties. The EOF of a bipartite mixed state is defined as $`E_F(\rho )=\mathrm{min}\sum _ip_iE_F\left(|\psi _i\rangle \langle \psi _i|\right)`$, where the minimum is taken over all possible decompositions of $`\rho `$ into pure states, $`\rho =\sum _ip_i|\psi _i\rangle \langle \psi _i|`$. In $`2\times 2`$ systems the closed form for EOF is known ;
$$E_F(\rho )=H\left(\frac{1+\sqrt{1-C^2}}{2}\right),$$
(1)
with $`H(x)=-x\mathrm{log}_2x-(1-x)\mathrm{log}_2(1-x)`$. The nonnegative real number $`C=\mathrm{max}\{0,\lambda _1-\lambda _2-\lambda _3-\lambda _4\}`$ is called a concurrence, where $`\lambda _i`$ are the square roots of eigenvalues of the positive matrix $`\rho \stackrel{~}{\rho }`$ in descending order. The spin-flipped density matrix $`\stackrel{~}{\rho }`$ is defined as $`\stackrel{~}{\rho }=(\sigma _2\otimes \sigma _2)\rho ^{\ast }(\sigma _2\otimes \sigma _2)`$, where the asterisk denotes complex conjugation in the standard basis $`\{|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle \}`$ and $`\sigma _i`$, $`i=1,2,3`$, are the usual Pauli matrices. Since $`E_F`$ is a monotonic function of $`C`$ and $`C`$ ranges from zero to one, the concurrence $`C`$ is also a measure of entanglement.
Before verifying our main result, we first show that a Werner state belongs to a family of maximally entangled mixed states. Although the argument based on the convexity of concurrence is presented in Ref. , we follow here the direct calculation for later convenience. A Werner state in $`2\times 2`$ systems takes the following form,
$$\rho _W=\frac{1-F}{3}𝐈_4+\frac{4F-1}{3}|\mathrm{\Psi }^{-}\rangle \langle \mathrm{\Psi }^{-}|,$$
(2)
where $`𝐈_n`$ denotes the $`n\times n`$ identity matrix and $`|\mathrm{\Psi }^{-}\rangle =\left(|01\rangle -|10\rangle \right)/\sqrt{2}`$ the singlet state. The Werner state $`\rho _W`$ is characterized by a single real parameter $`F`$ called fidelity. This quantity measures the overlap of the Werner state with a Bell state. The concurrence of $`\rho _W`$ is simply given by $`C(\rho _W)=\mathrm{max}\{0,2F-1\}`$; for $`F\le 1/2`$ the Werner state is unentangled, while for $`1/2<F\le 1`$ it is entangled. We assume $`1/2<F\le 1`$ so that $`C(\rho _W)=2F-1`$ in the following.
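As an aside, the concurrence formula above is easy to check numerically. The following sketch (ours, not part of the original paper; Python with NumPy, basis ordered as $`|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle `$) computes $`C(\rho _W)`$ from the definition and reproduces $`\mathrm{max}\{0,2F-1\}`$:

```
import numpy as np

def concurrence(rho):
    # Wootters concurrence: C = max(0, l1 - l2 - l3 - l4), the l_i being the
    # square roots of the eigenvalues of rho*rho_tilde in decreasing order
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Werner state (2); basis ordering |00>, |01>, |10>, |11>
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
for F in [0.4, 0.6, 0.8, 1.0]:
    rho_W = (1 - F) / 3 * np.eye(4) + (4 * F - 1) / 3 * np.outer(singlet, singlet)
    print(F, concurrence(rho_W), max(0.0, 2 * F - 1))   # the two values agree
```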
The nonlocal unitary transformation, $`U\in U(4)`$, brings $`\rho _W`$ to a new density matrix of the form,
$$\rho =\frac{1-F}{3}𝐈_4+\frac{4F-1}{3}|\psi \rangle \langle \psi |,$$
(3)
where $`|\psi \rangle =U|\mathrm{\Psi }^{-}\rangle `$. Because $`U`$ preserves the rank of states, $`|\psi \rangle `$ is still a pure (rank one) state vector but is generally less entangled and it can be written in a Schmidt decomposed form, $`|\psi \rangle =\sqrt{a}|00\rangle +\sqrt{1-a}|11\rangle `$ with $`1/2\le a\le 1`$. The nonlocal unitary transformation $`U`$ is thus parametrized by a single real number $`a`$. The Peres-Horodecki criterion (the partial transposition test) tells us that the Werner derivative described by Eq. (3) is entangled if and only if
$$\frac{1}{2}\le a<\frac{1}{2}\left(1+\frac{\sqrt{3(4F^2-1)}}{4F-1}\right).$$
(4)
The range of the parameter $`a`$ is assumed to be limited by the above inequalities so that $`\rho `$ is always entangled. The square roots of the eigenvalues of $`\rho \stackrel{~}{\rho }`$ are calculated as
$$\lambda _1=\frac{(4F-1)G_+}{3},$$
(5)
$$\lambda _2=\frac{(4F-1)G_{-}}{3},$$
(6)
and
$$\lambda _3=\lambda _4=\frac{1-F}{3},$$
(7)
which are sorted in decreasing order. In Eqs. (5) and (6),
$$G_\pm =\left[2a(1-a)+G\pm 2\sqrt{a(1-a)(a(1-a)+G)}\right]^{\frac{1}{2}},$$
(8)
with $`G=3F(1-F)/(4F-1)^2`$. The concurrence of $`\rho `$, $`C(\rho )=\lambda _1-\lambda _2-\lambda _3-\lambda _4`$, is given by
$$C(\rho )=\frac{4F-1}{3}\left(G_+-G_{-}\right)-\frac{2}{3}(1-F).$$
(9)
The problem is to find the maximal value of $`C(\rho )`$. We have that
$$\frac{d}{da}C(\rho )=\frac{1}{6}\frac{(4F-1)(1-2a)}{\sqrt{a(1-a)(a(1-a)+G)}}\left(G_++G_{-}\right),$$
(10)
which is clearly nonpositive for $`a\ge 1/2`$. It follows that the maximal $`C(\rho )`$ is achieved only for $`a=1/2`$. The maximal value of the concurrence is calculated as $`2F-1`$. Therefore, the EOF of Werner derivatives $`E_F(\rho )`$ cannot exceed the EOF of the original Werner state $`E_F(\rho _W)`$; a Werner state is indeed a member of a set of maximally entangled mixed states.
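Both the entanglement window (4) and the location of the maximal concurrence can be cross-checked numerically. The sketch below (ours; the fidelity $`F=0.8`$ is an arbitrary choice) builds the Werner derivative (3), tests the partial transposition criterion, and evaluates the concurrence along a scan of $`a`$:

```
import numpy as np

def concurrence(rho):
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(
        np.linalg.eigvals(rho @ flip @ rho.conj() @ flip).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def min_pt_eig(rho):
    # smallest eigenvalue of the partial transpose over the second qubit
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(r).min()

F = 0.8                                                         # arbitrary fidelity
a_max = 0.5 * (1 + np.sqrt(3 * (4 * F**2 - 1)) / (4 * F - 1))   # bound (4)
for a in np.linspace(0.5, 0.999, 6):
    psi = np.array([np.sqrt(a), 0.0, 0.0, np.sqrt(1 - a)])
    rho = (1 - F) / 3 * np.eye(4) + (4 * F - 1) / 3 * np.outer(psi, psi)
    print(f"a={a:.3f}  a<a_max: {a < a_max}  NPT: {min_pt_eig(rho) < 0}"
          f"  C={concurrence(rho):.4f}")
# The NPT flag tracks the window (4); C is largest at a = 1/2, where C = 2F-1.
```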
Now let us turn to the proof of our main result. The Werner derivative $`\rho `$ given by Eq. (3) can be also written as
$$\rho =\frac{1}{4}𝐈_4+\frac{4F-1}{12}\left[(2a-1)\left(𝐈_2\otimes \sigma _3+\sigma _3\otimes 𝐈_2\right)+2\sqrt{a(1-a)}\left(\sigma _1\otimes \sigma _1+\sigma _2\otimes \sigma _2\right)+\sigma _3\otimes \sigma _3\right].$$
(11)
Since the coefficient vectors of $`𝐈_2\otimes 𝝈 `$ or $`𝝈 \otimes 𝐈_2`$ are nonzero $`\left[𝝈 =(\sigma _1,\sigma _2,\sigma _3)\right]`$, it is possible to increase the EOF of $`\rho `$ by a single-state LQCC . As shown below, however, the maximum EOF thus obtained is still less than or equal to the EOF of the original Werner state. According to Theorem 3 in Ref. , there exists a single-state LQCC mapping $`\rho `$ to a Bell diagonal state $`\rho ^{\prime }`$ with maximal possible EOF of the form,
$$\rho ^{\prime }=\frac{1}{4}\left(𝐈_4+\sum _{i=1}^3r_i\sigma _i\otimes \sigma _i\right),$$
(12)
with $`r_1\le r_2\le r_3\le 0`$. The square roots of the eigenvalues of $`\rho ^{\prime }\stackrel{~}{\rho ^{\prime }}`$ in descending order are $`\lambda _1^{\prime }=(1-r_1-r_2-r_3)/4`$, $`\lambda _2^{\prime }=(1-r_1+r_2+r_3)/4`$, $`\lambda _3^{\prime }=(1+r_1-r_2+r_3)/4`$, and $`\lambda _4^{\prime }=(1+r_1+r_2-r_3)/4`$. Since the ratios $`\lambda _i^{\prime }/\lambda _j^{\prime }`$ are invariant under LQCC, $`\lambda _i^{\prime }/\lambda _4^{\prime }=\lambda _i/\lambda _4`$ $`(i=1,2,3)`$, where $`\lambda _i`$ are given by Eqs. (5), (6), and (7). Therefore, the concurrence of $`\rho ^{\prime }`$, $`C(\rho ^{\prime })=\lambda _1^{\prime }-\lambda _2^{\prime }-\lambda _3^{\prime }-\lambda _4^{\prime }`$, can be expressed in terms of $`\lambda _i`$ as follows,
$$C(\rho ^{\prime })=\frac{\lambda _1-\lambda _2-\lambda _3-\lambda _4}{\lambda _1+\lambda _2+\lambda _3+\lambda _4}.$$
(13)
Inserting the explicit forms of $`\lambda _i`$ into this equation, we obtain
$$C(\rho ^{\prime })-C(\rho _W)=2\frac{(1-F)G_+-FG_{-}-2(1-F)/(4F-1)}{G_++G_{-}+2(1-F)/(4F-1)}.$$
(14)
The denominator of the right hand side of this equation is strictly positive so that it suffices to verify the numerator is less than or equal to zero in order to show $`C(\rho ^{\prime })\le C(\rho _W)`$. We have that
$$\frac{d}{da}\left[(1-F)G_+-FG_{-}\right]=\frac{1}{2}\frac{1-2a}{\sqrt{a(1-a)(a(1-a)+G)}}\left[(1-F)G_++FG_{-}\right],$$
(15)
which is clearly nonpositive for $`a\ge 1/2`$. It follows that the maximal value of $`\left[(1-F)G_+-FG_{-}\right]`$ is achieved for $`a=1/2`$ and it turns out to be $`2(1-F)/(4F-1)`$. Therefore, the numerator in the right hand side of Eq. (14) is less than or equal to zero. Hence $`C(\rho ^{\prime })\le C(\rho _W)`$ so that $`E_F(\rho ^{\prime })\le E_F(\rho _W)`$. It should be noted that the unitary transformation with $`a=1/2`$ is just a local unitary transformation; $`|0\rangle _A\to |0\rangle _A`$, $`|1\rangle _A\to |1\rangle _A`$, $`|0\rangle _B\to -|1\rangle _B`$, and $`|1\rangle _B\to |0\rangle _B`$ such that $`|\mathrm{\Psi }^{-}\rangle _{AB}=\left(|01\rangle _{AB}-|10\rangle _{AB}\right)/\sqrt{2}\to \left(|00\rangle _{AB}+|11\rangle _{AB}\right)/\sqrt{2}`$. The state $`\rho `$ is, therefore, equivalent to $`\rho _W`$ up to local unitary transformations and the present result is reduced to that of Ref. . It implies the following. If we bring a Werner state $`\rho _W`$ to one of the Werner derivatives $`\rho `$ by essentially nonlocal unitary transformations, the extractable entanglement of $`\rho `$ is strictly below the EOF of the original Werner state. Our main result can also be stated in other words: the EOF of a Werner state cannot be increased by a single-state LQCC followed by nonlocal unitary transformations. This property is unique to Werner states, as shown below. If another state $`\rho `$ which does not belong to the family of Werner states had the property stated above, it would have to be one of the maximally entangled mixed states; otherwise $`E_F(\rho )`$ could be increased by a nonlocal unitary transformation. The maximally entangled mixed states take the following form ,
$$\rho =p_1|\mathrm{\Psi }^{-}\rangle \langle \mathrm{\Psi }^{-}|+p_2|00\rangle \langle 00|+p_3|\mathrm{\Psi }^+\rangle \langle \mathrm{\Psi }^+|+p_4|11\rangle \langle 11|,$$
(16)
where $`|\mathrm{\Psi }^+\rangle =\left(|01\rangle +|10\rangle \right)/\sqrt{2}`$ and $`p_i`$ are the eigenvalues of $`\rho `$ in decreasing order $`(p_1\ge p_2\ge p_3\ge p_4\ge 0)`$. The state $`\rho `$ can be also written as
$$\rho =\frac{1}{4}\left[𝐈_4+(p_2-p_4)\left(𝐈_2\otimes \sigma _3+\sigma _3\otimes 𝐈_2\right)-(p_1-p_3)\left(\sigma _1\otimes \sigma _1+\sigma _2\otimes \sigma _2\right)-(p_1-p_2+p_3-p_4)\sigma _3\otimes \sigma _3\right].$$
(17)
If $`p_2\ne p_4`$, the coefficient vectors of $`𝐈_2\otimes 𝝈 `$ or $`𝝈 \otimes 𝐈_2`$ are nonzero and the EOF of $`\rho `$ can be increased further by a single-state LQCC, which contradicts the assumed property of $`\rho `$. Therefore, the equality $`p_2=p_4`$ must hold, which implies $`p_2=p_3=p_4=(1-p_1)/3`$. It follows that the state $`\rho `$ takes the form,
$$\rho =\frac{1-p_1}{3}𝐈_4+\frac{4p_1-1}{3}|\mathrm{\Psi }^{-}\rangle \langle \mathrm{\Psi }^{-}|.$$
(18)
Hence, $`\rho `$ must be a Werner state.
Finally, we mention that Eq. (13) gives the general expression for the extractable entanglement of a given entangled state $`\rho `$ of two qubits. It has the form of the concurrence of $`\rho `$, $`C(\rho )=\lambda _1-\lambda _2-\lambda _3-\lambda _4`$, modified by an enhancement factor $`(\lambda _1+\lambda _2+\lambda _3+\lambda _4)^{-1}`$. For Bell diagonal states, including Werner states, $`\rho =\stackrel{~}{\rho }`$ so that the square roots of the eigenvalues of $`\rho \stackrel{~}{\rho }`$ are the same as the eigenvalues of $`\rho `$. Therefore, $`\lambda _1+\lambda _2+\lambda _3+\lambda _4=1`$ and the enhancement factor is one. It follows directly that we cannot extract higher EOF from a Bell diagonal state by a single-state LQCC, as argued in Ref. . For pure states of rank one, $`C(\rho ^{\prime })=1`$, which indicates that it is always possible to extract a Bell singlet state, as expected.
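As a closing numerical illustration (ours; again with an arbitrarily chosen fidelity), the closed forms (5)-(8) can be used to tabulate the extractable entanglement (13) of the Werner derivatives and to verify that it exceeds $`C(\rho )`$ but never $`C(\rho _W)=2F-1`$:

```
import numpy as np

def lambdas(F, a):
    # square roots of the eigenvalues of rho*rho_tilde, Eqs. (5)-(8)
    G = 3 * F * (1 - F) / (4 * F - 1) ** 2
    s = a * (1 - a)
    Gp = np.sqrt(2 * s + G + 2 * np.sqrt(s * (s + G)))
    Gm = np.sqrt(2 * s + G - 2 * np.sqrt(s * (s + G)))
    return (4 * F - 1) * Gp / 3, (4 * F - 1) * Gm / 3, (1 - F) / 3, (1 - F) / 3

F = 0.8                                                   # arbitrary fidelity
for a in np.linspace(0.5, 0.95, 10):
    l1, l2, l3, l4 = lambdas(F, a)
    C_rho = max(0.0, l1 - l2 - l3 - l4)                   # Eq. (9)
    C_lqcc = (l1 - l2 - l3 - l4) / (l1 + l2 + l3 + l4)    # Eq. (13)
    print(f"a={a:.3f}  C={C_rho:.4f}  C'={C_lqcc:.4f}  C_W={2 * F - 1:.4f}")
# C' >= C (single-state LQCC can raise the EOF), yet C' never exceeds 2F - 1.
```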
In summary, combining the present results with previously obtained ones , the following peculiar property of a Werner state of two qubits has been revealed: its EOF cannot be increased (i) by LQCC, (ii) by nonlocal unitary transformations, and (iii) by LQCC followed by nonlocal unitary transformations. We hope that the results presented in this paper will lead to a proper classification of entangled mixed states.
# Effect of memory and dynamical chaos in long Josephson junctions
## Abstract
A long Josephson junction in a constant external magnetic field and in the presence of a dc bias current is investigated. It is shown that the system, simulated by the sine–Gordon equation, “remembers” a rapidly damping initial perturbation, and the final asymptotic states are determined precisely by this perturbation. Numerical solution of the boundary sine–Gordon problem and calculation of Lyapunov indices show that this system has a memory even when it is in a state of dynamical chaos, i.e., dynamical chaos does not destroy initial information having the character of a rapidly damping perturbation.
PACS number(s): 74.50+r, 05.45.+b
Dynamical chaos is one of the most interesting phenomena in the theory of Josephson junctions.<sup>1-10</sup> This phenomenon is not only of theoretical importance but also of practical importance, because many devices are based on Josephson junctions, in particular, superconducting quantum interference devices (SQUID’s).<sup>11</sup> Dynamical chaos in these devices is another source of noise. Furthermore, a long Josephson junction (LJJ) serves as a very good system for studying nonlinear phenomena such as the excitation of fluxons and antifluxons, their propagation, interaction, scattering, and breakup. Investigations of the last few years have shown that a LJJ exhibits deeper characteristics than had seemed. Even in the simplest case, when a bias current and an external oscillating field are absent, the presence of only a constant external magnetic field leads to a most interesting phenomenon connected with the selection of the solution of the stationary Ferrell-Prange equation. The fact is that this equation does not provide only one solution for given boundary conditions; the number of these solutions increases with the strength of the external magnetic field and the total length of the junction.<sup>11</sup>
Recently we have shown<sup>12</sup> that the selection of a solution is carried out by the form of a small and rapidly damping (in time) initial perturbation in the nonstationary sine-Gordon equation and, what is more surprising, an asymptotic solution of this equation coincides with one of the stable solutions of the stationary Ferrell-Prange equation. Two circumstances are remarkable here: (1) A small perturbation strongly influences the evolution of the system as $`t\to \mathrm{\infty }`$; in a sense it defines the character of the asymptotic solutions. (2) One can say that in spite of the fact that a small perturbation is a rapidly damping one, the stable asymptotic solution “remembers” the initial perturbation. In other words, the nonlinear system, i.e., a LJJ, described by the sine-Gordon equation shows an effect of memory. However, in Ref. 12, the LJJ was studied solely under the influence of an external constant magnetic field. Therefore, it is of interest to investigate the LJJ from the point of view of the effect of memory not only in the presence of an external constant magnetic field but also under the influence of a dc bias current through the junction causing an excitation of dynamical chaos. How will the effect of memory be shown in the presence of a dc bias current? Will this effect take place in the states of dynamical chaos at all? Below we will try to give answers to these questions.
We write down the sine-Gordon equation in the presence of a dc bias current in a LJJ in the form
$$\phi _{tt}(x,t)+2\gamma \phi _t(x,t)-\phi _{xx}(x,t)=-\mathrm{sin}\phi (x,t)+\beta ,$$
(1)
where $`\phi (x,t)`$ is the Josephson phase variable, $`x`$ is the distance along the junction normalized to the Josephson penetration length $`\lambda _J`$,
$$\lambda _J=\left(\frac{c\mathrm{\Phi }_0}{8\pi ^2j_cd}\right)^{1/2},$$
$`\mathrm{\Phi }_0`$ is the flux quantum, $`j_c`$ is the critical current density of the Josephson junction, $`d=2\lambda _L+b`$, $`\lambda _L`$ is the London penetration length, $`b`$ is the thickness of the dielectric barrier, $`t`$ is the time normalized to the inverse of the Josephson plasma frequency $`\omega _J`$,
$$\omega _J=\left(\frac{2\pi cj_c}{C\mathrm{\Phi }_0}\right)^{1/2},$$
$`C`$ is the junction capacitance per unit area, $`\gamma `$ is the dissipative coefficient per unit area, and $`\beta `$ is the dc bias current density normalized to $`j_c`$.
We write down the boundary condition for Eq. (1) in the form
$$\begin{array}{c}\frac{\partial \phi (x,t)}{\partial x}|_{x=0}\equiv H(0,t)=\frac{\partial \phi (x,t)}{\partial x}|_{x=L}\equiv H(L,t)\\ =H_0\left(1-ae^{-t/t_0}\mathrm{cos}t\right),\end{array}$$
(2)
where $`L`$ is the total length of the junction normalized to $`\lambda _J`$, $`H_0`$ is the external constant magnetic field perpendicular to the junction, normalized (as well as $`H(0,t)`$ and $`H(L,t)`$) to $`\frac{\mathrm{\Phi }_0}{2\pi \lambda _Jd}`$, $`a`$ is the controlling (perturbation) parameter characterizing the rapidly damping (in time) perturbation, and $`t_0`$ is the characteristic time of this damping perturbation normalized to $`\omega _J^{-1}`$.
Eq. (1) with boundary condition (2) is solved numerically. In contrast to the case $`\beta =0`$ considered by us earlier<sup>12</sup>, the picture of the magnetic field evolution in the junction turns out to be more complicated for $`\beta \ne 0`$, as will be shown below. Physically this is connected with the fact that the energy balance in the junction in the presence of a bias current is such that the energy brought into the system by this current can make up for the energy loss due to dissipation or exceed it. One can therefore expect that at small values of $`\beta `$ the state of the junction will differ little from the stationary one described by the Ferrell-Prange equation. At sufficiently large values of $`\beta `$, asymptotic regular (periodic) solutions, representing waves — fluxons and antifluxons — moving along the junction and interacting among themselves and with the junction boundaries, and also nonregular solutions, representing dynamical chaos, will take place<sup>1</sup>. Our calculations showed that if the asymptotic state at $`a=0`$ (further we shall call the state at $`a=0`$ the “starting” state) is regular, then upon introduction at the initial moment of a rapidly damping perturbation defined by the controlling parameter $`a`$, just as in the stationary case examined in Ref. 12, the selection of the asymptotic solution for a given set of parameters $`H_0`$, $`\gamma `$, $`L`$, and $`\beta `$ is determined by this parameter $`a`$; i.e., the system “remembers” the form of the rapidly damping perturbation and chooses the way of further evolution in accordance with it. (It is noteworthy that the effect of memory discussed here happens in a dissipative system and so it is not connected with the reproduction of a signal as takes place, for example, in collisionless plasma in the plasma echo effect<sup>14</sup>.) Furthermore, our calculations show that if the “starting” state is regular (periodic), stationary states can also arise upon introduction of a perturbation ($`a\ne 0`$), which is surprising in itself. However, the most remarkable fact is that the system is very sensitive to a rapidly damping perturbation in the dynamical chaos conditions as well; i.e., the system has memory in this case too.
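To make the numerical setup concrete, a minimal explicit finite-difference integration of Eqs. (1) and (2) can be sketched as follows (this is our illustration, not the code used in the paper; the grid, time step, zero initial state, and the diagnostic printed at the end are our own choices):

```
import numpy as np

# Sketch (ours): explicit finite-difference integration of Eqs. (1)-(2).
L, H0, gamma, beta = 5.0, 1.25, 0.26, 0.44
a, t0 = 0.285, 5.0                 # perturbation amplitude and damping time
Nx = 101
dx = L / (Nx - 1)
dt = 0.5 * dx

def lap(f, bc):
    # f_xx with Neumann boundary condition f_x = bc at both ends (ghost points)
    p = np.concatenate(([f[1] - 2 * dx * bc], f, [f[-2] + 2 * dx * bc]))
    return (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2

phi = np.zeros(Nx)
V = np.zeros(Nx)
t = 0.0
while t < 2000.0:
    H = H0 * (1.0 - a * np.exp(-t / t0) * np.cos(t))     # boundary field (2)
    V += dt * (-2 * gamma * V + lap(phi, H) - np.sin(phi) + beta)
    phi += dt * V
    t += dt

# rough diagnostic of the selected asymptotic state: trapped flux in units 2*pi
print("a =", a, " fluxons ~", (phi[-1] - phi[0]) / (2 * np.pi))
```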
For the quantitative description of different characteristic states we used Lyapunov indices. We write down Eq. (1) in the form
$$\{\begin{array}{ccc}\phi _t& =& V,\\ V_t& =& -2\gamma V+\phi _{xx}-\mathrm{sin}\phi +\beta ,\end{array}$$
(3)
or, that is the same, in the form
$$z_t=F\left(z\right),$$
(4)
where $`z`$ is the vector with the components $`\phi `$ and $`V`$, $`z\equiv \left(\begin{array}{c}\phi \\ V\end{array}\right)`$, and $`F(z)`$ is defined as follows:
$$F\left(z\right)\equiv F(\phi ,V)=\left(\begin{array}{c}V\hfill \\ -2\gamma V+\phi _{xx}-\mathrm{sin}\phi +\beta \hfill \end{array}\right).$$
Let $`z(t)`$ be a solution of Eq. (4). Then we can write down the equation for variations:
$$\begin{array}{ccc}w_t\hfill & =\hfill & \frac{\partial F\left(z\left(t\right)\right)}{\partial z}w,\hfill \\ w\hfill & \equiv \hfill & \left(\begin{array}{c}w_1\\ w_2\end{array}\right).\hfill \end{array}$$
(5)
We define the Lyapunov index (LI) as
$$\lambda =\underset{t\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{t}\mathrm{ln}\frac{\|w\left(t\right)\|}{\|w\left(0\right)\|},$$
(6)
where $`\|w\|`$ is the vector norm, which we define as the Euclidean norm
$$\|w\|^2=\int _0^L\left(w_1^2+w_2^2\right)dx.$$
(7)
Depending on direction of the initial vector $`w(0)`$, different LI’s will exist and their number will be infinite. The definition of LI (6) for system (3) is a natural generalization of a LI for finite-dimensional dynamic systems.<sup>13</sup>
The maximum LI plays a very important role because it is precisely this maximum that determines the motion character — exponential growth, decay, or zero change — for the majority of the trajectories of the system. A set of initial data $`w(0)`$ for which formula (6) gives LI’s differing from the maximum one is negligibly small and by numerical calculations this formula gives the maximum LI as a rule.
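A way to estimate the maximum LI for the discretized system (3) is sketched below (ours, not the code used in the paper). Instead of evolving a single tangent vector over the whole run, it renormalizes $`(w_1,w_2)`$ at every step and accumulates the logarithms of the growth factors, which is equivalent to definition (6) but avoids numerical overflow:

```
import numpy as np

# Sketch (ours): maximal Lyapunov index for the discretized system (3),
# with constant boundary field H = H0 (the a = 0 "starting" case of Fig. 1).
L, H0, gamma, beta = 5.0, 1.25, 0.26, 0.50
Nx = 101
dx = L / (Nx - 1)
dt = 0.5 * dx

def lap(f, bc):
    # f_xx with Neumann boundary condition f_x = bc at both ends (ghost points)
    p = np.concatenate(([f[1] - 2 * dx * bc], f, [f[-2] + 2 * dx * bc]))
    return (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2

phi = np.zeros(Nx)
V = np.zeros(Nx)
w1 = np.random.default_rng(0).standard_normal(Nx)
w2 = np.zeros(Nx)

t, log_growth, t_start, t_end = 0.0, 0.0, 500.0, 4000.0
while t < t_end:
    # variational equations (5): w1_t = w2, w2_t = -2*gamma*w2 + w1_xx - cos(phi)*w1
    w2 += dt * (-2 * gamma * w2 + lap(w1, 0.0) - np.cos(phi) * w1)
    w1 += dt * w2
    # system (3)
    V += dt * (-2 * gamma * V + lap(phi, H0) - np.sin(phi) + beta)
    phi += dt * V
    t += dt
    norm = np.sqrt(dx * np.sum(w1**2 + w2**2))   # Euclidean norm (7)
    if t > t_start:                              # discard the transient
        log_growth += np.log(norm)
    w1 /= norm                                   # renormalize each step
    w2 /= norm

print("lambda ~", log_growth / (t_end - t_start))
```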
The results of a numerical solution of Eq. (1) with boundary conditions (2) and of the calculation of the LI (6) showed that there exist three forms of characteristic states of the system, for which the maximum LI can be as follows: (1) $`\lambda >0`$, (2) $`\lambda <0`$, and (3) $`\lambda \simeq 0`$. The states with $`\lambda >0`$ represent the dynamical chaos states (Fig. 1), the states with $`\lambda <0`$ represent the stable stationary states (Fig. 2), and the states with $`\lambda \simeq 0`$ represent the regular (periodic) states (Fig. 3). All states in Figs. 1–3 that are shown as illustration are “starting” ones ($`a=0`$) and they are calculated with identical values of the parameters $`H_0=1.25`$, $`L=5`$, and $`\gamma =0.26`$, but with different values of $`\beta `$. For the chaotic state in Fig. 1, $`\beta =0.50`$; for the stationary one in Fig. 2, $`\beta =0.427`$; and for the regular one in Fig. 3, $`\beta =0.60`$. In Figs. 1–3 the dependences on time of the potential $`\phi _t`$ (potentials are normalized to the value $`V_c\equiv \frac{\hbar \omega _p}{2e}`$) and the calculated values of the LI $`\lambda `$ corresponding to them are shown. We note that the “starting” chaotic state, represented in Fig. 1, is the same as in Ref. 1.
Let us examine a specific set of parameters, i.e., a definite point in parameter space, corresponding to the chaotic “starting” state: $`H_0=1.25`$, $`L=5`$, $`\gamma =0.26`$, $`\beta =0.44`$, and $`a=0`$. If we now introduce the rapidly damping in time perturbation determined by the controlling parameter $`a`$, the system does not remain in the previous chaotic state, as the calculations show, but wanders between all three forms of states: chaotic, stationary, and regular, when this parameter changes. In the calculations the following hierarchy of times was used: $`t_0\ll \tau _r\ll T`$, where $`\tau _r`$ is the characteristic time of relaxation processes ($`\tau _r`$ is the time of relaxation to asymptotic states) and $`T`$ is the time of observation. The values of the characteristic times in our calculations were as follows: $`t_0=5`$, $`\tau _r\simeq 60`$, $`T=2000`$. At first sight, one might have expected that initial perturbations, damping on a time of about $`t_0`$, would be forgotten on the time interval $`T`$ and would have no influence on the evolution over this large time interval. \[We notice that $`\tau _r`$ is not equal to $`\gamma ^{-1}`$; in fact $`\tau _r\gg \gamma ^{-1}`$ in our case. This is connected with the system’s nonlinearity. The value of $`\tau _r`$ is determined from the numerical calculation of our problem (1), (2).\] In Fig. 4 are shown the results of the calculation of the LI’s for the values of the parameters $`H_0`$, $`L`$, $`\gamma `$, and $`\beta `$ mentioned above and for specific values of the parameter $`a`$. As we can see from Fig. 4, three typical clusters of states take place: a cluster of chaotic states ch (in Fig. 4 the following values of the parameter $`a`$ correspond to them: $`a`$ = 0, 0.175, 0.180, 0.280), a cluster of regular states r ($`a`$ = 0.290, 0.300, 0.320), and a cluster of stationary states s ($`a`$ = 0.190, 0.195, 0.285). For the cluster ch the values of the LI are $`\lambda \simeq 5\times 10^{-2}`$; for the cluster r, $`\lambda \simeq 10^{-3}`$; and for the cluster s, $`\lambda \simeq -10^{-1}`$.
In Fig. 5 the potentials on the junction $`\phi _t`$ are shown as functions of time for three values of the parameter $`a`$ differing from each other by 0.005 and belonging to the cluster ch, $`a=0.280`$ (Fig. 5a); to the cluster s, $`a=0.285`$ (Fig. 5b); and to the cluster r, $`a=0.290`$ (Fig. 5c); their LI’s are represented in Fig. 4. Thus, a small change of the parameter $`a`$ leads to a transition between all three characteristic states of the system.
In Table I transitions between chaotic ch, stationary s, and regular r states are represented for a change of the parameter $`a`$ from 4.000 to 4.155, with the remaining parameters fixed as indicated above.
Table I. States of a LJJ
| 4.000 | 4.005 | 4.010 | 4.015 | 4.020 | 4.025 | 4.030 | 4.035 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| r | r | r | r | r | r | r | ch |
| 4.040 | 4.045 | 4.050 | 4.055 | 4.060 | 4.065 | 4.070 | 4.075 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ch | ch | ch | s | s | s | s | s |
| 4.080 | 4.085 | 4.090 | 4.095 | 4.100 | 4.105 | 4.110 | 4.115 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ch | ch | ch | ch | ch | ch | ch | ch |
| 4.120 | 4.125 | 4.130 | 4.135 | 4.140 | 4.145 | 4.150 | 4.155 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| s | s | r | r | r | r | r | r |
We note that the transitions between the states reduced in Table I, stipulated by a change of the perturbation parameter $`a`$, correspond to the “starting” state of chaos ($`a=0`$). Thus, for the given values of $`H_0`$, $`L`$, $`\gamma `$, and $`\beta `$ a final asymptotic state is determined by the parameter $`a`$, independently of whether the “starting” state is chaotic or not. Such asymptotic behavior of the system shows that dynamical chaos differs essentially from statistical chaos, under which any perturbation damps rapidly and the system relaxes to its final state (for example, to the state of thermodynamical equilibrium) completely “having forgotten” the initial perturbation; i.e., the final state does not depend on this perturbation. As we see, a system in the state of dynamical chaos, in contrast to a system in the state of statistical chaos, “remembers” the initial perturbation, and, in a sense, final states and transitions between them are defined by this very initial perturbation. This allows us to recognize this memory effect in the system described by the sine-Gordon equation with dissipation in the presence of an external magnetic field and a bias current. Thus, the dynamical chaos originating in a nonlinear system does not destroy initial information; i.e., the nonlinear system has a memory in the states of dynamical chaos as well.
<sup>1</sup>W.J. Yeh, O.G. Symko, and D.J. Zheng, Phys. Rev. B 42, 4080(1990).
<sup>2</sup>L.E. Guerrero and M. Ostavio, Physica B 165-166, 1657(1990).
<sup>3</sup>L.E. Guerrero and M. Ostavio, Physica B 165-166, 1659(1990).
<sup>4</sup>M. Cirillo and N.F. Pedersen, Phys. Lett. A 90, 150(1982).
<sup>5</sup>N. Gronbech-Jensen, P.S. Lomdahl, and M.R. Samuelsen, Phys. Rev. B 43, 12799(1991).
<sup>6</sup>N. Gronbech-Jensen, Phys. Rev. B 45, 7315(1992).
<sup>7</sup>S. Rajasekar and M. Lakshmanan, Physica A 167, 793(1990).
<sup>8</sup>S. Rajasekar and M. Lakshmanan, Phys. Lett. A 147, 264(1990).
<sup>9</sup>E.F. Eriksen and J.B. Hansen, Phys. Rev. B 41, 4189(1990).
<sup>10</sup>X. Yao, J.Z. Wu, and C.S. Ting, Phys. Rev. B 42, 244(1990).
<sup>11</sup>A. Barone and G.Paterno,Physics and Applications of the Josephson Effect (Wiley-Interscience,New-York, 1982).
<sup>12</sup>K.N. Yugay, N.V. Blinov, and I.V. Shirokov, Phys. Rev. B 49, 12036 (1994).
<sup>13</sup>A.J. Lichtenberg and M.A. Lieberman, Regular and Stochastic Motion (Springer-Verlag, New-York, 1983).
<sup>14</sup>F.F. Chen, Introduction to Plasma Physics and Controlled Fusion (Plenum Press, New-York, 1984).
Figure captions
1. The potential $`\phi _t`$ (a) and the Lyapunov index (b) in a chaotic state at $`\beta =0.50`$. The values of the other parameters are $`H_0=1.25`$, $`L=5`$, $`\gamma =0.26`$, $`a=0`$.
2. The potential $`\phi _t`$ (a) and the Lyapunov index (b) in a stationary state at $`\beta =0.427`$. Other parameters are the same as those in Fig. 1.
3. The potential $`\phi _t`$ (a) and the Lyapunov index (b) in a regular state at $`\beta =0.60`$. Other parameters are the same as those in Fig. 1.
4. The Lyapunov indices $`\lambda `$ vs parameter $`a`$: ch is the cluster of chaotic states ($`a=`$0, 0.175, 0.180, 0.280), r is the cluster of regular states ($`a=`$0.290, 0.300, 0.320), and s is the cluster of stationary states ($`a=`$0.190, 0.195, 0.285). The values of the other parameters are the same: $`H_0=1.25`$, $`L=5`$, $`\gamma =0.26`$, $`\beta =0.44`$.
5. The potential $`\phi _t`$ vs $`t`$ belonging to the cluster ch, $`a=0.280`$ (a), to the cluster s, $`a=0.285`$ (b), and to the cluster r, $`a=0.290`$ (c). Other parameters are the same as those in Fig. 4.
# Mean field and Monte Carlo studies of the magnetization-reversal transition in the Ising model
## 1 Introduction
The study of the response of pure Ising systems under the action of a time-dependent external magnetic field has been of recent interest in statistical physics . A whole class of dynamic phase transitions emerged from the study of such driven spin systems under different time dependences of the driving field. A mean field study was initially proposed by Tome and Oliveira where the time dependence of the external perturbation was periodic. Subsequently, through extensive Monte Carlo studies, the existence of a dynamic phase transition under periodic magnetic field was established and properly characterized . Later, efforts were made to investigate the response of such systems under magnetic fields which are of the form of a ‘pulse’ or in other words applied for a finite duration of time. All the studies with pulsed fields were made on a system below its static critical temperature $`T_c^0`$, where the equilibrium state has got a prevalent order along a particular direction. The pulse is called ‘positive’ when it is applied along the direction of the prevalent order and ‘negative’ when applied in opposition. The results for the positive pulse case were analyzed by extending appropriately the finite size scaling technique to this finite time window case, and it did not involve any new phase transition or introduce any new thermodynamic scale . However, a negative field competes with the existing order and, depending on the strength $`h_p`$ and duration $`\mathrm{\Delta }t`$ of the pulse, the system may show a transition from one ordered state with equilibrium magnetization $`+m_0`$ (say) to the other equivalent ordered state with equilibrium magnetization $`-m_0`$ . This transition is called here the “magnetization-reversal” transition. It may be noted that a magnetization-reversal phenomenon trivially occurs in the limit $`\mathrm{\Delta }t\to \mathrm{\infty }`$ for any non-vanishing value of $`h_p`$ at any $`T<T_c^0`$. However, this is a limiting case of the transition, which is studied here only for finite $`\mathrm{\Delta }t`$. In our studies the magnetization-reversal need not occur during the presence of the external field. In fact, it will be shown later that the closer one approaches the threshold value $`h_p^c`$ of the pulse strength, the longer is the time taken by the system, after the field is withdrawn, to relax to the final ordered state. We report here in detail the various results obtained for this dynamic magnetization-reversal transition in the pure Ising model in two and three dimensions.
The model we studied here is the Ising model with nearest neighbour interaction under a time dependent external magnetic field, described by the Hamiltonian
$$H=-\frac{1}{2}\sum _{\left[ij\right]}J_{ij}S_iS_j-\sum _ih_i(t)S_i,$$
(1)
where $`J_{ij}`$ is the cooperative interaction between the spins at sites $`i`$ and $`j`$ respectively and each nearest-neighbour pair, denoted by $`\left[ij\right]`$, is counted twice in the summation. We consider the system at temperatures only below its static critical temperature ($`T<T_c^0`$). The external field is applied after the system is brought to equilibrium characterized by an equilibrium magnetization $`m_0(T)`$. The field is uniform in space ($`h_i(t)=h(t)\text{ for all }i`$) and its time dependence is given by
$$\begin{array}{ccccc}h(t)& =& -h_p& ,& \text{for }t_0\le t\le t_0+\mathrm{\Delta }t\hfill \\ & =& 0& ,& \text{otherwise}.\hfill \end{array}$$
(2)
Typical responses of the time dependent magnetization $`m(t)`$ under different $`h(t)`$ are shown in figure 1. As mentioned before, for an appropriate combination of $`h_p`$ and $`\mathrm{\Delta }t`$, the magnetization-reversal transition occurs when the system makes a transition from one ordered state to another. This transition can be observed at any dimension $`d`$ greater than unity for systems with short range interactions. This is because one has to work at temperatures $`T<T_c^0`$ where, in the absence of a symmetry breaking field, the free energy landscape has got two equivalent minima at magnetizations $`m=\pm m_0`$. A phase boundary in the $`h_p`$-$`\mathrm{\Delta }t`$ plane gives the minimal combination of the two parameters at a particular temperature $`T`$($`<T_c^0`$) required to bring about the transition.
A full numerical solution as well as an analytical treatment in the linear limit of the dynamic mean field equation of motion shows the existence of length and time scale divergences at the transition phase boundary . The divergence of length and time scales is also observed in a Monte Carlo (MC) simulation study of the Ising model with nearest neighbour interaction evolving under a negative pulse through single spin flip Glauber dynamics . The phase diagram for the transition was obtained for both MF and MC studies. While the phase boundaries for the two cases are qualitatively of similar nature, there exists a major difference which can be accounted for by considering the presence of fluctuations in the simulations. In the MC study, there exist two distinct time scales in the problem: (i) the nucleation time $`\tau _N`$ is the time taken by the system to leave the metastable state under the influence of the external magnetic field and (ii) the relaxation time $`\tau _R`$ is the time taken by the system to reach the final equilibrium state after the external field is withdrawn. While $`\tau _N`$ is controlled by the strength $`h_p`$ of the external pulse and is bounded by its duration $`\mathrm{\Delta }t`$ which is finite, $`\tau _R`$ is the time scale that diverges at the magnetization-reversal phase boundary. According to the classical nucleation theory (CNT) , there can be two distinct mechanisms for the growth of domains or droplets depending on the strength of the external field. Under the influence of weaker external magnetic fields, only a single droplet grows to span the entire system and this is called the single-droplet (SD) or the nucleation regime. On the other hand, under stronger magnetic fields, many small droplets can grow simultaneously and eventually coalesce to form a system spanning droplet. This is called the multi-droplet (MD) or the coalescence regime. The crossover from SD to MD regime takes place at the dynamic spinodal field or $`h_{DSP}(L,T)`$ which is a function of system size $`L`$ and temperature $`T`$. The nucleation time $`\tau _N`$ changes abruptly as one crosses over from SD to MD regime even along the same phase boundary. The nature of the transition too changes from a continuous one in the MD regime to a discontinuous nature in the SD regime. All our simulation observations for the dynamic phase boundary compare well with those suggested by the CNT. The investigations about the relaxation time $`\tau _R`$ and the correlation length $`\xi `$ are also discussed here. The application of scaling theory in the MD regime gives the estimates of the critical exponents for this dynamic transition. The organization of the paper is as follows: We discuss the MF results in the next section and the MC results for square and simple cubic lattices in section 3. A brief summary and concluding remarks are given in section 4.
## 2 Mean field study
The master equation for a system of $`N`$ Ising spins in contact with a heat bath evolving under Glauber single spin flip dynamics can be written as
$`{\displaystyle \frac{d}{dt}}P(S_1,\mathrm{\dots },S_N;t)`$ $`=`$ $`-{\displaystyle \sum _j}W_j(S_j)P(S_1,\mathrm{\dots },S_N;t)`$ (3)
$`+`$ $`{\displaystyle \sum _j}W_j(-S_j)P(S_1,\mathrm{\dots },-S_j,\mathrm{\dots },S_N;t),`$
where $`P(S_1,\mathrm{\dots },S_N;t)`$ is the probability to find the spins in the configuration $`(S_1,\mathrm{\dots },S_N)`$ at time $`t`$ and $`W_j(S_j)`$ is the probability of flipping of the $`j`$th spin. Satisfying the condition of detailed balance one can write the transition probability as
$$W_j(S_j)=\frac{1}{2\lambda }\left[1-S_j\mathrm{tanh}\left(\frac{\sum _iJ_{ij}S_i(t)+h_j}{T}\right)\right],$$
(4)
where $`\lambda `$ is a temperature dependent constant. Defining the spin expectation value as
$$m_i=\langle S_i\rangle =\sum _{\{S\}}S_iP(S_1,\mathrm{\dots },S_N;t),$$
(5)
where the summation is carried over all possible spin configurations, one can write
$$\lambda \frac{dm_i}{dt}=-m_i+\left\langle \mathrm{tanh}\left(\frac{\sum _jJ_{ij}S_j+h_i}{T}\right)\right\rangle .$$
(6)
Under the mean field approximation (6) can be written after a Fourier transform as
$$\lambda \frac{dm_q(t)}{dt}=-m_q(t)+\mathrm{tanh}\left(\frac{J(q)m_q(t)+h_q(t)}{T}\right),$$
(7)
where $`J(q)`$ is the Fourier transform of $`J_{ij}`$. Equation (7) is not analytically tractable and one can only look for solutions in the small $`m_q`$ limit where terms linear in $`m_q`$ are dominant. The linearized equation of motion, therefore, can be written as
$$\frac{dm_q(t)}{dt}=\lambda ^{-1}\left[\left(K(q)-1\right)m_q(t)+\frac{h_q(t)}{T}\right],$$
(8)
where $`K(q)=J(q)/T`$. When we are concerned only with the homogeneous magnetization, we consider the $`q=0`$ mode of the equation and writing $`m_{q=0}=m`$ and $`h_{q=0}=h`$, we get
$$\frac{dm}{dt}=\lambda ^{-1}\left[\left(K(0)-1\right)m(t)+\frac{h(t)}{T}\right].$$
(9)
In the mean field approximation $`K(0)=T_c^{MF}/T`$ with $`T_c^{MF}=J(0)`$ and, for small $`q`$, $`K(q)\simeq K(0)\left(1-q^2\right)`$. Differentiating (7) with respect to the external field, we get the rate equation for the dynamic susceptibility $`\chi _q(t)`$ as
$$\lambda \frac{d\chi _q(t)}{dt}=-\chi _q(t)+\left(\frac{J(q)\chi _q(t)+1}{T}\right)\text{ sech}^2\left[\frac{J(q)m_q(t)+h_q(t)}{T}\right],$$
(10)
which in the linear limit can be written as
$$\frac{d\chi _q(t)}{dt}=\lambda ^{-1}\left[\left(K(q)-1\right)\chi _q(t)+\frac{1}{T}\right].$$
(11)
Before we proceed with the solutions of these dynamical equations, we divide the entire time zone into three different regimes: (I) $`0<t<t_0`$, where $`h(t)=0`$; (II) $`t_0\le t\le t_0+\mathrm{\Delta }t`$, where $`h(t)=-h_p`$; and (III) $`t_0+\mathrm{\Delta }t<t<\mathrm{\infty }`$, where $`h(t)=0`$ again. We note that (9) can be readily solved separately for the three regions as the boundary conditions are exactly known. In region I, $`dm/dt=0`$ and the solution of the linearized (9) becomes trivial. We, therefore, use the solution of (7) in region I ($`m_0=\mathrm{tanh}\left(m_0T_c^{MF}/T\right)`$) as the initial value of $`m`$ for region II. Integrating (9) in region II, we then get
$$m(t)=\frac{h_p}{\mathrm{\Delta }T}+\left(m_0-\frac{h_p}{\mathrm{\Delta }T}\right)\mathrm{exp}\left[b\mathrm{\Delta }T\left(t-t_0\right)\right],$$
(12)
where $`b=1/\lambda T`$ and $`\mathrm{\Delta }T=T_c^{MF}-T`$. It is to be noted that in order to justify the validity of the linearization of (7) one must keep the factor inside the exponential of (12) small. This restricts the linear theory to temperatures close to $`T_c^{MF}`$ and to small values of $`\mathrm{\Delta }t`$. Writing $`m_w\equiv m(t_0+\mathrm{\Delta }t)`$, we get from (12)
$$m_w=\frac{h_p}{\mathrm{\Delta }T}+\left(m_0-\frac{h_p}{\mathrm{\Delta }T}\right)e^{b\mathrm{\Delta }T\mathrm{\Delta }t}.$$
(13)
It is to be noted here that in the absence of fluctuations, the sign of $`m_w(h_p,\mathrm{\Delta }t)`$ solely decides which of the two final equilibrium states will be chosen by the system after the withdrawal of the pulse. At $`t=t_0+\mathrm{\Delta }t`$, if $`m_w>0`$, the system goes back to the $`+m_0`$ state and if $`m_w<0`$, the magnetization-reversal transition occurs and the system eventually chooses the $`-m_0`$ state (see figure 1). Thus, setting $`m_w=0`$, we obtain the threshold value of the pulse strength at the mean field phase boundary for this dynamic phase transition. At any $`T`$, combinations of $`h_p`$ and $`\mathrm{\Delta }t`$ below the phase boundary cannot induce the magnetization-reversal transition, while those above it can induce the transition. From (13) therefore we can write the equation of the mean field phase boundary for the magnetization-reversal transition as
$$h_p^c(\mathrm{\Delta }t,T)=\frac{\mathrm{\Delta }T\,m_0}{1-e^{-b\mathrm{\Delta }T\mathrm{\Delta }t}}.$$
(14)
Figure 2 shows phase boundaries at different $`T`$ obtained from (14) and compares those to the phase boundaries obtained from the numerical solution of the full dynamical equation (7). The phase boundaries obtained under linear approximation match quite well with those obtained numerically for small values of $`\mathrm{\Delta }t`$ and at temperatures close to $`T_c^{MF}`$, which is the domain of validity of the linearized theory as discussed before. In region III, we again have $`h(t)=0`$ and solution of (9) leads to
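The threshold can also be bracketed directly from the $`q=0`$ mode of the full equation (7). The sketch below (ours; units with $`\lambda =1`$, $`T_c^{MF}=1`$, and arbitrarily chosen $`T`$, $`t_0`$ and $`\mathrm{\Delta }t`$) locates $`h_p^c`$ by the sign of the final magnetization and compares it with the linearized result (14):

```
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Sketch (ours): threshold h_p^c from the q = 0 mode of the full Eq. (7),
# in units lambda = 1, T_c^MF = 1; T, t0 and the pulse width are our choices.
Tc, T, lam = 1.0, 0.9, 1.0
t0, dt_pulse = 10.0, 5.0
b, dT = 1.0 / (lam * T), Tc - T

m0 = 0.5                        # equilibrium magnetization: m0 = tanh(m0*Tc/T)
for _ in range(200):
    m0 = np.tanh(m0 * Tc / T)

def m_final(hp):
    def rhs(t, m):
        h = -hp if t0 <= t <= t0 + dt_pulse else 0.0
        return [(-m[0] + np.tanh((Tc * m[0] + h) / T)) / lam]
    sol = solve_ivp(rhs, (0.0, 500.0), [m0], max_step=0.05)
    return sol.y[0, -1]          # ~ +m0 (no reversal) or ~ -m0 (reversal)

hp_c = brentq(m_final, 1e-4, 1.0, xtol=1e-6)
hp_c_lin = dT * m0 / (1.0 - np.exp(-b * dT * dt_pulse))   # Eq. (14)
print("numerical h_p^c =", hp_c, "  linearized Eq. (14):", hp_c_lin)
```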
$$m(t)=m_w\mathrm{exp}\left[b\mathrm{\Delta }T\left\{t-\left(t_0+\mathrm{\Delta }t\right)\right\}\right].$$
(15)
We define the relaxation time $`\tau _R^{MF}`$, measured from $`t=t_0+\mathrm{\Delta }t`$, as the time required to reach the final equilibrium state characterized by magnetization $`\pm m_0`$ in region III (see figure 1). From (15) therefore we can write
$`\tau _R^{MF}`$ $`=`$ $`{\displaystyle \frac{1}{b\mathrm{\Delta }T}}\mathrm{ln}\left({\displaystyle \frac{m_0}{\left|m_w\right|}}\right)`$ (16)
$`\simeq `$ $`-\left({\displaystyle \frac{T}{T_c^{MF}-T}}\right)\mathrm{ln}\left|m_w\right|.`$
A point to note is that $`m(t)`$ in (15) grows exponentially with $`t`$ and therefore, in order to confine ourselves to the linear regime of $`m(t)`$, $`m_0`$ must be small ($`T`$ close to $`T_c^{MF}`$) and $`t\lesssim \tau _R^{MF}`$. The factor $`\left(T_c^{MF}-T\right)^{-1}`$ gives the usual critical slowing down for the static transition at $`T=T_c^{MF}`$. However, even for $`T\ne T_c^{MF}`$, $`\tau _R^{MF}`$ diverges at the magnetization-reversal phase boundary where $`m_w`$ vanishes. Figure 3 shows the divergence of $`\tau _R^{MF}`$ against $`m_w`$ as obtained from the numerical solution of the full mean field equation of motion (7) and compares it with that obtained from (16).
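The logarithmic divergence in (16) can be seen directly from the full equation as well. In the sketch below (ours, with the same hypothetical units and parameters as in the previous sketch), the pulse strength approaches the numerically found threshold from above and the measured relaxation time grows as $`\mathrm{ln}(1/|m_w|)`$:

```
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Sketch (ours): relaxation time grows as ln(1/|m_w|) when h_p -> h_p^c.
Tc, T, lam, t0, dt_pulse = 1.0, 0.9, 1.0, 10.0, 5.0
m0 = 0.5
for _ in range(200):
    m0 = np.tanh(m0 * Tc / T)

def trajectory(hp):
    def rhs(t, m):
        h = -hp if t0 <= t <= t0 + dt_pulse else 0.0
        return [(-m[0] + np.tanh((Tc * m[0] + h) / T)) / lam]
    return solve_ivp(rhs, (0.0, 800.0), [m0], max_step=0.05, dense_output=True)

hp_c = brentq(lambda hp: trajectory(hp).y[0, -1], 1e-4, 1.0, xtol=1e-8)
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    sol = trajectory(hp_c * (1 + eps))
    m_w = sol.sol(t0 + dt_pulse)[0]
    ts = np.linspace(t0 + dt_pulse, 800.0, 20000)
    ms = sol.sol(ts)[0]
    # tau_R: first time after the pulse at which |m| reaches 0.99*m0
    # (assumes the relaxation completes within the integration window)
    tau = ts[np.argmax(np.abs(ms) > 0.99 * m0)] - (t0 + dt_pulse)
    print(f"eps={eps:.0e}  m_w={m_w:+.5f}  tau_R={tau:.1f}")
```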
Solving for $`\chi _q(t)`$ is more difficult, as not all the boundary conditions are directly known. However, $`\chi _q(t)`$ can be expressed in terms of $`m(t)`$, and the solution of the resulting equation then acquires its $`t`$ dependence through $`m(t)`$, which we have already solved for. Dividing (10) by (7) we get
$$\frac{d\chi _q(t)}{dm(t)}=\frac{-\chi _q(t)+\left(\frac{J(q)\chi _q(t)+1}{T}\right)\text{ sech}^2\left[\frac{J(q)m_q(t)+h_q(t)}{T}\right]}{-m_q(t)+\mathrm{tanh}\left(\frac{J(q)m_q(t)+h_q(t)}{T}\right)},$$
(17)
which can be rewritten in the linear limit as
$$\frac{d\chi _q}{\chi _q+\mathrm{\Gamma }}=a_q\frac{dm}{m+h(t)/\Delta T},$$
(18)
where $`\mathrm{\Gamma }=1/\left[T\left(K(q)-1\right)\right]`$ and $`a_q=\left(K(q)-1\right)/\left(K(0)-1\right)\approx 1-q^2/\Delta T`$ for small $`q`$.
In region II, the solution of (18) can be written as
$$\chi _q(t)=-\mathrm{\Gamma }+\left(\chi _q^s+\mathrm{\Gamma }\right)\left[\frac{m(t)-h_p/\Delta T}{m_0-h_p/\Delta T}\right]^{a_q},$$
(19)
where $`\chi _q^s`$ is the equilibrium value of the susceptibility in region I. Solving (18) in region III with the initial boundary condition $`m\left(t_0+\mathrm{\Delta }t\right)=m_w`$, we get
$`\chi _q(t)`$ $`=`$ $`-\mathrm{\Gamma }+\left(\chi _q\left(t_0+\mathrm{\Delta }t\right)+\mathrm{\Gamma }\right)\left({\displaystyle \frac{m(t)}{m_w}}\right)^{a_q}`$ (20)
$`=`$ $`-\mathrm{\Gamma }+\left(\chi _q^s+\mathrm{\Gamma }\right)\left({\displaystyle \frac{m(t)}{m_w}}\right)^{a_q}e^{b\Delta T\Delta t\,a_q},`$
where use has been made of (19) and (13). The dominant $`q`$ dependence of $`\chi _q(t)`$ comes from $`\left(1/m_w\right)^{a_q}`$ as $`m_w\to 0`$, i.e., as one approaches the phase boundary. The singular part of the dynamic susceptibility can then be written as
$$\chi _q(t)\sim \left(\chi _q^s+\mathrm{\Gamma }\right)\exp \left[-q^2\left(\xi ^{MF}\right)^2\right],$$
(21)
where for small values of $`m_w`$ the correlation length $`\xi ^{MF}`$ is given by
$$\xi ^{MF}\equiv \xi ^{MF}\left(m_w\right)=\left[\frac{T_c}{\Delta T}\ln \left(\frac{1}{\left|m_w\right|}\right)\right]^{\frac{1}{2}}.$$
(22)
Thus the length scale also diverges at the magnetization-reversal phase boundary, and this can be demonstrated even within the linearized mean field equation of motion. Equations (16) and (22) can now be used to establish the following relation between the diverging time and length scales:
$$\tau _R^{MF}\sim \frac{T}{T_c}\left(\xi ^{MF}\right)^2,$$
(23)
which leads to a dynamical critical exponent $`z=2`$. It may be noted that these divergences of $`\tau _R^{MF}`$ and $`\xi ^{MF}`$ occur for any $`T<T_c^{MF}`$; the dynamic relaxation time and correlation length defined for the magnetization-reversal transition exist only for $`T<T_c^{MF}`$.
It may further be noted from (21) that $`\chi _q(t)\to 0`$ as $`\xi ^{MF}\to \infty `$, so that $`\chi _q`$ has a minimum at the phase boundary. The absence of any divergence in this susceptibility is due to the fact that at $`t=t_0+\mathrm{\Delta }t`$ there remains no contribution of $`m_w`$ to $`\chi _q(t)`$, as is evident from (20). However, numerical solution of (17) for the $`q=0`$ mode shows a clear singularity of the homogeneous susceptibility $`\chi _0`$ at the magnetization-reversal phase boundary ($`m_w=0`$), as depicted in figure 4. One can also obtain a numerical estimate of $`\xi ^{MF}`$ by solving (17) for different values of $`q`$. Figure 5 shows plots of $`\chi _q(t)`$ against $`m_w`$ for different values of $`q`$. The inset of figure 5 shows the variation of $`\left(\xi ^{MF}\right)^2`$ with $`\ln (1/\left|m_w\right|)`$, where $`\xi ^{MF}`$ was obtained by fitting the data of figure 5 with straight lines. The inset clearly shows that for small values of $`m_w`$ the linear approximation agrees quite well with the numerical results.
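The fitting procedure behind the inset of figure 5 can be sketched as follows, here applied to the linear-theory singular factor of (20)–(21) (with $`T_c^{MF}=1`$ assumed):

```python
# Sketch: extract xi_MF by fitting ln(chi_q) against q^2, as in figure 5's
# inset, using the singular factor (1/|m_w|)^{a_q} of the linearized theory.
import numpy as np

T = 0.9
DT = 1.0 - T
q = np.linspace(0.02, 0.2, 10)
a_q = 1.0 - q**2 / DT                        # small-q form quoted below eq. (18)

for m_w in (1e-2, 1e-4, 1e-6):
    ln_chi = a_q * np.log(1.0 / abs(m_w))    # singular part of ln(chi_q) only
    slope = np.polyfit(q**2, ln_chi, 1)[0]   # slope = -(xi_MF)^2
    print(m_w, np.sqrt(-slope), np.sqrt(np.log(1.0 / abs(m_w)) / DT))  # eq. (22)
```

The fitted $`\xi ^{MF}`$ reproduces the prediction of (22), growing as $`[\ln (1/|m_w|)]^{1/2}`$.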
## 3 Monte Carlo Study
We now study the transition using Monte Carlo simulation with single-spin-flip Glauber dynamics. Working at a temperature below the static critical temperature ($`T_c^0\approx 2.27`$ and $`\approx 4.51`$, in units of the nearest neighbour interaction strength $`J`$, for square and simple cubic lattices respectively), the system is prepared by evolving the initial state (say, with all spins up) under Glauber dynamics at temperature $`T`$. The evolution time $`t_0`$ is taken to be sufficiently larger than the static relaxation time at $`T`$, to ensure that the system reaches an equilibrium state with magnetization $`m_0`$ before the external magnetic field is applied at time $`t=t_0`$. The magnetization $`m(t)`$ starts decreasing from its initial value $`m_0`$ under the effect of the competing field during the period $`t_0\le t\le t_0+\mathrm{\Delta }t`$, and it assumes the value $`m_w`$ at $`t=t_0+\mathrm{\Delta }t`$. Due to the presence of fluctuations, $`m_w<0`$ does not necessarily lead to a magnetization reversal, whereas even for $`m_w>0`$ fluctuations can give rise to a reversal. This is in contrast with the mean field case, where, in the absence of fluctuations, the sign of $`m_w`$ alone determines the final state. In the MC study, however, the final state is determined on average by the sign of $`m_w`$ (see figure 1). The magnetization-reversal transition phase boundary therefore again corresponds to $`m_w=0`$.
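A minimal sketch of this protocol is given below. Lattice size, temperature and pulse parameters are illustrative, not the ones used for the production data; the acceptance rule is the standard Glauber rate.

```python
# Sketch: pulsed-field protocol with single-spin-flip Glauber dynamics on an
# L x L square lattice with periodic boundaries.  Time is measured in sweeps.
import numpy as np

def glauber_sweep(s, T, h, rng):
    L = s.shape[0]
    for _ in range(s.size):
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * (nb + h)              # energy cost of flipping s[i, j]
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
            s[i, j] = -s[i, j]

def m_after_pulse(L, T, hp, dt, t0=200, seed=0):
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)                 # all spins up
    for _ in range(t0):                            # region I: h = 0, equilibrate
        glauber_sweep(s, T, 0.0, rng)
    for _ in range(dt):                            # region II: competing pulse
        glauber_sweep(s, T, -hp, rng)
    return s.mean()                                # m_w

print(m_after_pulse(L=32, T=1.5, hp=1.0, dt=5))
```

Averaging $`m_w`$ over many such runs, and scanning $`h_p`$ at fixed $`\mathrm{\Delta }t`$ for the zero of $`m_w`$, traces out the phase boundary.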
Figure 6 shows phase boundaries at different temperatures for square and simple cubic lattices. The data points for $`d=2`$ are averaged over 500 different Monte Carlo runs (MCR) and those for $`d=3`$ over 150 MCR. A qualitative difference between the MF and the MC phase boundaries may be noted here. In the former, even for $`\mathrm{\Delta }t\to \infty `$, $`h_p`$ must exceed the non-zero coercive field to bring about the transition, owing to the absence of fluctuations, and the phase boundaries therefore flatten at larger values of $`\mathrm{\Delta }t`$. In real systems, however, fluctuations are present, and even a pulse of infinitesimal strength, if applied for a long enough time, can bring about the transition. This is evident from the asymptotic behaviour of the phase boundaries at large values of $`\mathrm{\Delta }t`$.
It is instructive to look at the classical theory of nucleation to understand the nature of the MC phase diagram of the magnetization-reversal transition. A typical configuration of a ferromagnet below its static critical temperature $`T_c^0`$ consists of droplets, or domains, of spins oriented in the same direction in a sea of oppositely oriented spins. According to CNT, the equilibrium number of droplets consisting of $`s`$ spins is given by $`n_s=N\exp \left(-ϵ_s/T\right)`$, where $`ϵ_s`$ is the free energy of formation of a droplet containing $`s`$ spins and $`N`$ is a normalization constant. In the presence of a negative external magnetic field $`h`$, the free energy can be written as $`ϵ_s=2hs+\sigma s^{(d-1)/d}`$, where the droplet is assumed to be spherical and $`\sigma (T)`$ is the temperature dependent surface tension. Droplets of size greater than a critical value $`s_c`$ are favoured to grow, where $`s_c=\left[\sigma (d-1)/(2d\left|h\right|)\right]^d`$ is obtained by maximizing $`ϵ_s`$. The number of supercritical droplets is therefore given by $`n_{s_c}=N\exp \left[-\mathrm{\Lambda }_d\sigma ^d\left|h\right|^{1-d}/T\right]`$, where $`\mathrm{\Lambda }_d`$ is a constant depending on dimension only. In the SD regime, where a single supercritical droplet grows to engulf the whole system, the nucleation time is inversely proportional to the nucleation rate $`I`$. According to the Becker–Döring theory, $`I`$ is proportional to $`n_{s_c}`$ and therefore one can write
$$\tau _N^{SD}\sim I^{-1}\sim \exp \left[\frac{\mathrm{\Lambda }_d\sigma ^d}{T\left|h\right|^{d-1}}\right].$$
However, in the MD regime the nucleation mechanism is different: many supercritical droplets grow simultaneously and eventually coalesce to create a system spanning droplet. The radius of a supercritical droplet grows linearly with time $`t`$, so its volume grows as $`s\sim t^d`$. For a steady rate of nucleation, the rate of change of the magnetization is proportional to $`It^d`$. For a finite change $`\mathrm{\Delta }m`$ of the magnetization during the nucleation time $`\tau _N^{MD}`$, one can write
$$\mathrm{\Delta }m\sim \int _0^{\tau _N^{MD}}It^d\,dt\sim I\left(\tau _N^{MD}\right)^{d+1}.$$
Therefore, in the MD regime one can write
$$\tau _N^{MD}\sim I^{-1/(d+1)}\sim \exp \left[\frac{\mathrm{\Lambda }_d\sigma ^d}{T(d+1)\left|h\right|^{d-1}}\right].$$
During the time $`t_0\le t\le t_0+\mathrm{\Delta }t`$, when the external field remains ‘on’, the only relevant time scale in the system is the nucleation time. The magnetization-reversal phase boundary gives the threshold value $`h_p^c`$ of the pulse strength which, within the time $`\mathrm{\Delta }t`$, brings the system from an equilibrium state with magnetization $`+m_0`$ to a non-equilibrium state with magnetization $`m_w=0^-`$, so that eventually the system evolves to the equilibrium state with magnetization $`-m_0`$. The field driven nucleation takes place for $`t_0\le t\le t_0+\mathrm{\Delta }t`$, and equating the above nucleation times with $`\mathrm{\Delta }t`$ one therefore gets, for the magnetization-reversal phase boundary,
$$\begin{array}{cccccc}\ln \left(\mathrm{\Delta }t\right)& =& c_1& +& C\left[h_p^c\right]^{1-d},& \text{in the SD regime}\hfill \\ & =& c_2& +& C\left[h_p^c\right]^{1-d}/(d+1),& \text{in the MD regime}\hfill \end{array}$$
(24)
where $`C=\mathrm{\Lambda }_d\sigma ^d/T`$ and $`c_1`$, $`c_2`$ are constants. A plot of $`\ln (\mathrm{\Delta }t)`$ against $`\left[h_p^c\right]^{1-d}`$ should therefore show two different slopes corresponding to the two regimes. Figure 7 shows these plots, and they indeed have two distinct slopes for both $`d=2`$ (figure 7(a)) and $`d=3`$ (figure 7(c)) at sufficiently high temperatures, where both regimes are present. The ratio $`R`$ of the slopes corresponding to the two regimes takes values close to $`3`$ for $`d=2`$ and close to $`4`$ for $`d=3`$, as suggested by (24). The value of $`h_{DSP}`$ is obtained from the point of intersection of the straight lines fitted to the two regimes. At lower temperatures, however, the MD region is absent and the phase diagram is marked by a single slope, as shown in figures 7(b) and 7(d).
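The two-slope fit behind figure 7 can be sketched as follows. The arrays are illustrative stand-ins for measured $`(h_p^c,\mathrm{\Delta }t)`$ pairs in $`d=2`$, not the paper's data:

```python
# Sketch: locate h_DSP by fitting straight lines to ln(dt) vs (h_p^c)^(1-d)
# separately on the strong-field (MD) and weak-field (SD) branches, then
# intersecting them, as in figure 7.
import numpy as np

d = 2
hp_c = np.array([1.4, 1.2, 1.0, 0.9, 0.8, 0.7, 0.6, 0.55])
dt   = np.array([2.5, 2.8, 3.3, 3.7, 5.8, 9.8, 20.0, 32.0])
x, y = hp_c ** (1 - d), np.log(dt)

k = 4                                      # split index: MD branch | SD branch
aMD, bMD = np.polyfit(x[:k], y[:k], 1)
aSD, bSD = np.polyfit(x[k:], y[k:], 1)
x_cross = (bSD - bMD) / (aMD - aSD)
print("slope ratio R =", aSD / aMD)        # ~ d + 1 = 3 for these toy data
print("h_DSP =", x_cross ** (1.0 / (1 - d)))
```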
Once the pulse is withdrawn, the system relaxes to one of the two equilibrium states. The closer the system is left to the phase boundary ($`m_w\to 0`$), the larger is the relaxation time $`\tau _R`$. However, unlike in the mean field case, the MC relaxation time falls off exponentially with $`\left|m_w\right|`$ away from the phase boundary. Figure 8 shows the growth of $`\tau _R`$ as $`m_w\to 0`$ at a particular $`T`$ and for a particular $`\mathrm{\Delta }t`$. The typical number of MCR used to obtain the data is $`400`$ for $`L=40`$ and $`25`$ for $`L=400`$. The best fit through the data points gives the relaxation behaviour
$$\tau _R\sim \kappa (T,L)\,e^{-\mu (T)\left|m_w\right|},$$
(25)
where $`\kappa (T,L)`$ is a constant depending on temperature and system size, and $`\mu (T)`$ is a constant depending on temperature only. It may be noted from (25) that $`\tau _R\to \kappa (T,L)`$ as $`m_w\to 0`$. The true divergence of the relaxation time at the phase boundary (where $`m_w=0`$) therefore depends on the behaviour of $`\kappa (T,L)`$. The inset of figure 8 shows the sharp growth of $`\kappa (T,L)`$ with the system size. The relaxation time $`\tau _R`$ therefore diverges in the thermodynamic limit ($`L\to \infty `$) through the constant $`\kappa `$. It may be noted that this divergence of $`\tau _R`$ at the dynamic magnetization-reversal phase boundary occurs even at temperatures far below the static critical temperature $`T_c^0`$.
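Extracting $`\mu (T)`$ and $`\kappa (T,L)`$ from measured relaxation times amounts to a straight-line fit on a semi-log scale. A sketch with placeholder data at one $`(T,L)`$:

```python
# Sketch: fit relaxation times to eq. (25) -- ln(tau_R) vs |m_w| is linear,
# with slope -mu(T) and intercept ln(kappa(T, L)).  Arrays are placeholders
# for MC measurements, not the paper's data.
import numpy as np

abs_mw = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.30])
tau_R  = np.array([950., 820., 640., 500., 390., 240.])

slope, intercept = np.polyfit(abs_mw, np.log(tau_R), 1)
print("mu(T) =", -slope, " kappa(T, L) =", np.exp(intercept))
```

Repeating the fit for several $`L`$ gives the growth of $`\kappa (T,L)`$ shown in the inset of figure 8.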
According to CNT, $`s_c\sim \left|h_p\right|^{-d}`$; therefore, at any fixed $`T`$, stronger fields allow many critical droplets to form, and the system goes over to the MD regime. A weaker field, on the other hand, rules out the possibility of more than one critical droplet, and the system goes over to the SD regime. Figure 9 shows snapshots of the spin configurations at different times in both the SD and the MD regimes. The snapshots at $`t=t_0+\mathrm{\Delta }t`$ correspond to $`m_w\sim O\left(10^{-2}\right)`$. In figure 9(a) $`h_p<h_{DSP}`$ and a single large droplet is formed, whereas in (b) $`h_p>h_{DSP}`$ and many droplets are seen to form. It may be noticed from figure 9 that at $`t=t_0+\mathrm{\Delta }t`$ the boundaries of the droplets are flat, with very few kinks on them. The probability of growth of a droplet along a flat boundary is very small (only $`25\%`$ in the case of a square lattice) and hence domain wall movement practically stops immediately after the withdrawal of the field. This restricts further nucleation. It is then left to very large fluctuations to resume the domain wall movement, and a long time is required for the system to come out of the metastable state and subsequently reach the final equilibrium state. Thus the effect of the pulse is to initiate the nucleation process, and the threshold value of the pulse strength is such that within the pulse duration it leaves the system with droplets almost without any kinks. This observation explains the sharp growth of the relaxation time at the phase boundary.
The growth of a length scale at the transition phase boundary can be qualitatively shown from the distribution of domains of reversed spins. We define a pseudo-correlation length $`\stackrel{~}{\xi }`$ as
$$\stackrel{~}{\xi }^2=\frac{\sum _sR_s^2\,s^2\,n_s}{\sum _ss^2\,n_s},$$
(26)
where the radius of gyration $`R_s`$ is defined by $`R_s^2=\sum _{i=1}^s\left|r_i-r_0\right|^2/s`$, with $`r_i`$ denoting the position vector of the $`i`$th spin of the domain and $`r_0=\sum _{i=1}^sr_i/s`$ the centre of mass of the domain. As the transition phase boundary is approached, $`\stackrel{~}{\xi }`$ is observed to grow with the system size, as shown in figure 10. The typical number of MCR used for obtaining the data is $`10`$ for $`L=1000`$ and $`2000`$ for $`L=50`$. This indicates the divergence of a length scale at the phase boundary in the thermodynamic limit. It should be noted, however, that $`\stackrel{~}{\xi }`$ is not exactly the correlation length of the system. An estimate of the power law growth of the actual correlation length $`\xi `$, as the phase boundary is approached in the MD region, is obtained from the finite size scaling study discussed later in this section.
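A sketch of how (26) can be evaluated from a single spin configuration follows. It uses `scipy.ndimage` for the domain labelling (4-neighbour connectivity; periodic boundaries are ignored here for simplicity, an assumption rather than the procedure used for figure 10):

```python
# Sketch: pseudo-correlation length of eq. (26) from one spin configuration.
import numpy as np
from scipy import ndimage

def xi_tilde(spins):
    labels, ndom = ndimage.label(spins == -1)      # domains of reversed spins
    num = den = 0.0
    for k in range(1, ndom + 1):
        pts = np.argwhere(labels == k).astype(float)
        s = len(pts)
        r0 = pts.mean(axis=0)                      # centre of mass of the domain
        Rs2 = ((pts - r0) ** 2).sum(axis=1).mean() # radius of gyration squared
        num += Rs2 * s * s
        den += s * s
    return np.sqrt(num / den) if den > 0 else 0.0

rng = np.random.default_rng(1)
spins = np.where(rng.random((100, 100)) < 0.1, -1, 1)  # toy configuration
print("xi_tilde =", xi_tilde(spins))
```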
The order of the magnetization-reversal transition changes with temperature and with $`\mathrm{\Delta }t`$, even along the same phase boundary. The transition is discontinuous all along the low $`T`$ phase boundary, whereas at higher values of $`T`$ the nature of the transition changes from continuous to discontinuous as one moves towards higher values of $`\mathrm{\Delta }t`$. For $`h_p^c(T)<h_{DSP}(T)`$ the system is brought to the SD regime, where the transition is observed to be discontinuous; a continuous transition is observed for $`h_p^c(T)>h_{DSP}(T)`$, when the system goes over to the MD regime. One can look at the probability distribution $`P(m_w)`$ of $`m_w`$ to determine the order of the phase transition. Figure 11 shows the variation of $`P(m_w)`$ as the phase boundary corresponding to a particular temperature is crossed at two different positions (different $`\mathrm{\Delta }t`$). The data are averaged over $`500`$ MCR. The existence of a single peak in (a), which shifts its position continuously from $`+1`$ to $`-1`$ as the phase boundary is crossed, indicates the continuous nature of the transition. In (b), however, two peaks of comparable strength exist simultaneously at positions close to $`\pm m_0`$. The system can thus reside in either of the two phases, which is a clear indication of a discontinuous phase transition. On phase boundaries corresponding to higher temperatures the crossover from the discontinuous transition to the continuous one is not very sharp, and there exists a region around $`h_p^c=h_{DSP}`$ on the phase boundary over which the nature of the transition cannot be determined with certainty. This is evident from figure 7, where the data points near the tricritical point do not fit the slope of either of the straight lines corresponding to the two regimes.
In the region where the transition is continuous in nature one can expect scaling arguments to hold. We assume power law behaviour in this regime both for $`m_w`$
$$m_w\sim \left|h_p-h_p^c(\mathrm{\Delta }t,T)\right|^\beta $$
(27)
and for the correlation length
$$\xi \sim \left|h_p-h_p^c(\mathrm{\Delta }t,T)\right|^{-\nu }.$$
(28)
For a finite size system, $`h_p^c`$ is a function of the system size $`L`$. Assuming that at the phase boundary $`\xi `$ can at most reach a value equal to $`L`$, one can write the finite size scaling form of $`m_w`$ as
$$m_w\sim L^{-\beta /\nu }f\left[\left(h_p-h_p^c(\mathrm{\Delta }t,T,L)\right)L^{1/\nu }\right],$$
(29)
where $`f(x)\sim x^\beta `$ as $`x\to \infty `$. A plot of $`m_wL^{\beta /\nu }`$ against $`\left(h_p-h_p^c(\mathrm{\Delta }t,T,L)\right)L^{1/\nu }`$ shows a good collapse of the data corresponding to $`L=50`$, $`100`$, $`200`$, $`400`$ and $`800`$ for $`d=2`$ and $`L=10`$, $`20`$, $`40`$, $`80`$ and $`120`$ for $`d=3`$, as shown in figure 12. The typical number of MCR used to obtain the data is $`5120`$ for $`L=50`$ in $`d=2`$ and $`10000`$ for $`L=10`$ in $`d=3`$. The values of the critical exponents obtained from the data collapse are $`\beta =0.85\pm 0.05`$ and $`\nu =1.5\pm 0.5`$ in $`d=3`$, and $`\beta =1.00\pm 0.05`$ and $`\nu =2.0\pm 0.5`$ in $`d=2`$, where $`h_p^c(\mathrm{\Delta }t,T)`$ was determined to an accuracy of $`O\left(10^{-3}\right)`$. All attempts to fit similar data obtained in the SD regime to the above finite size scaling form failed.
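The rescaling that produces the collapse of figure 12 is mechanical once $`\beta `$, $`\nu `$ and $`h_p^c(L)`$ are chosen. A sketch with placeholder data (the dictionary entries stand in for MC measurements):

```python
# Sketch: finite-size-scaling collapse of eq. (29).  With the right exponents,
# m_w * L^(beta/nu) vs (h_p - h_p^c(L)) * L^(1/nu) falls on one curve for all L.
import numpy as np

beta, nu = 1.00, 2.0                 # the d = 2 estimates quoted above
data = {                             # L -> (h_p values, measured m_w, h_p^c(L))
    50:  (np.array([0.90, 0.95, 1.00, 1.05]),
          np.array([0.30, 0.12, -0.10, -0.28]), 1.000),
    100: (np.array([0.94, 0.97, 1.00, 1.03]),
          np.array([0.25, 0.10, -0.08, -0.24]), 1.000),
}
for L, (hp, mw, hpc) in data.items():
    x = (hp - hpc) * L ** (1.0 / nu)
    y = mw * L ** (beta / nu)
    print(L, np.round(x, 3), np.round(y, 3))   # overlapping (x, y) => collapse
```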
The accuracy with which $`h_p^c(\mathrm{\Delta }t,T)`$ is measured is crucial for obtaining the critical exponents through finite size scaling. The cumulant method introduced by Binder et al. is one of the most reliable methods that can be employed to obtain the value of $`h_p^c`$. The fourth order cumulant is defined as
$$g(L)=\frac{1}{2}\left[3-\frac{\langle m_w^4\rangle }{\langle m_w^2\rangle ^2}\right],$$
(30)
where $`\langle m_w^n\rangle =\int m_w^nP(m_w)\,dm_w`$. The quantity $`g(L)`$ is dimensionless; it equals unity when the distribution of $`m_w`$ is sharply peaked at nonzero values, while $`g(L)\to 0`$ for a Gaussian distribution of $`m_w`$ around $`0`$, as expected on the phase boundary. Figure 13 shows a plot of $`g(L)`$ against $`h_p`$ at fixed $`\mathrm{\Delta }t`$ and $`T`$; the pulse strength corresponding to the point of intersection of the different curves gives $`h_p^c(\mathrm{\Delta }t,T)`$, assuming the scaling form $`g\sim g\left[L/\xi \right]`$ with $`\xi `$ given by (28). The typical number of MCR used to obtain the data is $`50000`$ for $`L=50`$ and $`2500`$ for $`L=800`$. It is to be noted that none of the curves touches the abscissa, which corresponds to $`m_w=0`$ and is numerically unattainable; the closer one gets to $`m_w=0`$, the better the accuracy of the measurement of $`h_p^c`$. In principle the minima of $`g(L)`$ corresponding to different $`L`$ should occur at the same position (at $`h_p=h_p^c`$). The shift in the position of the minima of $`g(L)`$ in figure 13 is caused by the large fluctuations involved in measuring the higher moments of $`m_w`$. However, this estimate of $`h_p^c`$, when used in the scaling fit of (29), did not significantly improve the estimates of the critical exponents $`\beta `$ and $`\nu `$.
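The two limits of $`g(L)`$ quoted above are easy to verify on toy distributions (a sketch exercising only the formula, not the MC data):

```python
# Sketch: fourth-order cumulant of eq. (30) from samples of m_w.
# A Gaussian centred at 0 gives g ~ 0; a sharply double-peaked
# distribution gives g ~ 1, bracketing the behaviour in figure 13.
import numpy as np

def g_cumulant(mw):
    return 0.5 * (3.0 - np.mean(mw**4) / np.mean(mw**2) ** 2)

rng = np.random.default_rng(2)
print(g_cumulant(rng.normal(0.0, 0.1, 100000)))     # ~ 0
print(g_cumulant(rng.choice([-0.8, 0.8], 100000)))  # ~ 1
```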
## 4 Summary and Conclusions
In this paper we have discussed in detail almost all the studies made so far on the dynamic magnetization-reversal transition in the Ising model under a finite-duration external magnetic field competing with the existing order for $`T<T_c^0`$. Any combination of the pulse strength and duration above the phase boundary in the $`h_p`$–$`\mathrm{\Delta }t`$ plane leads to a transition from one ordered phase to the other, equivalent one. We solved numerically the mean field equation of motion for the magnetization to obtain the MF phase boundary, at which the susceptibility and the relaxation time are observed to diverge. The divergence of both the time ($`\tau _R^{MF}`$) and the length scale ($`\xi ^{MF}`$) at the MF phase boundary follows even from the analytic solution of the MF equation of motion under a linear approximation. Under this approximation, the dynamical critical exponent was found to have the value $`2`$: $`\tau _R^{MF}\sim \left(\xi ^{MF}\right)^2\sim \ln (1/\left|m_w\right|)`$, where $`m_w(h_p,\mathrm{\Delta }t,T)=0`$ gives the phase boundary. The same transition has been studied using Monte Carlo simulations in both two and three dimensions. The phase diagram obtained is fully consistent with classical nucleation theory. The nucleation process is initiated by the external magnetic field and, depending on the strength of the field, the system nucleates either through the growth of a single droplet or through the growth and subsequent coalescence of many droplets. For $`h_p>h_{DSP}`$ the system belongs to the multi-droplet regime and the transition is continuous in nature, whereas for $`h_p<h_{DSP}`$ the system goes over to the single-droplet regime, where the transition is discontinuous. Assuming power law behaviour of both $`m_w`$ and $`\xi `$ in the multi-droplet regime, finite size scaling fits give estimates of the critical exponents $`\beta `$ and $`\nu `$ for both $`d=2`$ and $`3`$. Unlike in the MF case, where the relaxation time $`\tau _R^{MF}`$ shows a logarithmic divergence, $`\tau _R`$ in the MC studies falls off exponentially away from $`m_w=0`$, and the divergence of $`\tau _R`$ comes through the growth of the prefactor $`\kappa `$ in (25) with the system size.
The symmetry breaking transition of dynamic hysteresis in pure Ising systems under oscillating external fields, where the $`m`$–$`h`$ loop becomes asymmetric because the magnetization $`m(t)`$ fails to follow even the phase, or sign, of the rapidly changing field $`h(t)`$, leads to another dynamic transition. That transition has been studied employing finite size scaling theory, and the estimates of the critical exponents appear to be consistent with the static Ising universality class. Although both that transition and the one discussed in this paper occur because the system fails to get out of the ‘free energy well’ of the existing order, for want of a proper combination of pulse strength and duration, they belong to different universality classes.
Acknowledgements
We are grateful to M. Acharyya, D. Chowdhury, C. Dasgupta, B. Duenweg, D. Stauffer and R. B. Stinchcombe for their useful comments and suggestions.
Figure Captions
Figure 1. Typical time variations of the response magnetization $`m(t)`$ for two different field pulses $`h(t)`$ with the same $`\mathrm{\Delta }t`$ and $`T`$. The quantities used to characterize the response magnetization for both pulses are indicated.
Figure 2. MF phase boundaries for three different temperatures. The solid lines are obtained from the numerical solution of (7) and the dotted lines give the corresponding analytical estimates in the linear limit.
Figure 3. Logarithmic divergence of $`\tau _R^{MF}`$ across the phase boundary for $`T/T_c=0.9`$. The data points shown by circles are obtained from the solution of (7) and the solid line corresponds to the solution of the linearized MF equation.
Figure 4. Divergence of $`\chi _{q=0}`$ across the phase boundary obtained from the numerical solution of (17).
Figure 5. Plot of $`\chi _q`$ against $`m_w`$ for different values of $`q`$. The inset shows the linear variation of $`\left(\xi ^{MF}\right)^2`$ with $`\ln (1/\left|m_w\right|)`$. The data points for $`\xi ^{MF}`$ in the inset are obtained from the slopes of the best fitted straight lines through plots of $`\ln \chi _q`$ against $`q^2`$ for different values of $`m_w`$.
Figure 6. Phase boundaries obtained from the MC study for (a) square lattice with $`L=100`$ and (b) simple cubic lattice with $`L=50`$.
Figure 7. Plot of $`\ln \mathrm{\Delta }t`$ against $`\left(h_p\right)^{1-d}`$ along the MC phase boundary. (a) $`T/T_c=0.31`$ and (b) $`T/T_c=0.09`$ for the square lattice, and (c) $`T/T_c=0.67`$ and (d) $`T/T_c=0.11`$ for the simple cubic lattice. The slope ratio $`R\approx 3.27`$ in (a) and $`\approx 3.97`$ in (c).
Figure 8. MC results for the divergence of $`\tau _R`$ for $`L=40`$, $`50`$, $`100`$, $`200`$ and $`400`$. The best fitted straight lines are guides to the eye. The inset shows the variation with $`L`$ of the peak height $`\kappa `$ in the prefactor of $`\tau _R`$ in (25).
Figure 9. Snapshots of spin configurations in a $`100\times 100`$ square lattice at different stages ($`t=t_0`$, $`t_1`$ and $`t_0+\mathrm{\Delta }t`$ ) of nucleation, where $`t_0<t_1<t_0+\mathrm{\Delta }t`$. The dots correspond to $`+1`$ spin state. (a) $`h_p=0.55,`$ $`\mathrm{\Delta }t=300`$ at $`T/T_c=0.44`$ (SD regime) and (b) $`h_p=0.52`$, $`\mathrm{\Delta }t=9`$ at $`T/T_c=0.88`$ (MD regime).
Figure 10. Variation of $`\stackrel{~}{\xi }`$ with $`L`$ for $`L=50`$, $`100`$, $`200`$, $`400`$, $`800`$ and $`1000`$ for MC study on a square lattice.
Figure 11. Plot of $`P(m_w)`$ against $`m_w`$ as one crosses the phase boundary for the MC study on a $`100\times 100`$ square lattice in (a) MD regime and (b) SD regime.
Figure 12. Finite size scaling fits : (a) for $`d=2`$ at $`T/T_c=0.88`$ and (b) for $`d=3`$ at $`T/T_c=0.67`$.
Figure 13. Plot of $`g(L)`$ against $`h_p`$ for $`L=50`$, $`100`$, $`200`$, $`400`$ and $`800`$ in the MD regime, for the MC study on the square lattice at $`T=2.0`$ and $`\mathrm{\Delta }t=5`$.
SUPERMASSIVE BLACK HOLES IN INACTIVE GALAXIES¹
¹ To appear in Encyclopedia of Astronomy and Astrophysics
John Kormendy, Department of Astronomy, RLM 15.308, University of Texas, Austin, TX 78712-1083
and Luis C. Ho, Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101-1292
1. INTRODUCTION
Several billion years after the Big Bang, the Universe went through a “quasar era” when high-energy active galactic nuclei (AGNs) were more than 10,000 times as numerous as they are now. Quasars must then have been standard equipment in most large galaxies. Since that time, AGNs have been dying out. Now quasars are exceedingly rare, and even medium-luminosity AGNs such as Seyfert galaxies are uncommon. The only activity that still occurs in many galaxies is weak. A paradigm for what powers this activity is well established through the observations and theoretical arguments that are outlined in the previous article. AGN engines are believed to be supermassive black holes (BHs) that accrete gas and stars and so transform gravitational potential energy into radiation. Expected BH masses are $`M_\bullet \sim 10^6`$ – $`10^{9.5}`$ $`M_{\odot }`$. A wide array of phenomena can be understood within this picture. But the subject has had an outstanding problem: there was no dynamical evidence that BHs exist. The search for BHs has therefore become one of the hottest topics in extragalactic astronomy.
Since most quasars have switched off, dim or dead engines – starving black holes – should be hiding in many nearby galaxies. This means that the BH search need not be confined to the active galaxies that motivated it. In fact, definitive conclusions are much more likely if we observe objects in which we do not, as Alan Dressler has said, “have a searchlight in our eyes.” Also, it was necessary to start with the nearest galaxies, because only then could we see close enough to the center so that the BH dominates the dynamics. Since AGNs are rare, nearby galaxies are not particularly active. For these reasons, it is no surprise that the search first succeeded in nearby, inactive galaxies.
This article discusses stellar dynamical evidence for BHs in inactive and weakly active galaxies. Stellar motions are a particularly reliable way to measure masses, because stars cannot be pushed around by nongravitational forces. The price is extra complication in the analysis: the dynamics are collisionless, so random velocities can be different in different directions. This is impossible in a collisional gas. As we shall see, much effort has gone into making sure that unrecognized velocity anisotropy does not lead to systematic errors in mass measurements.
Dynamical evidence for central dark objects has been published for 17 galaxies. With the Hubble Space Telescope (HST) pursuing the search, the number of detections is growing rapidly. Already we can ask demographic questions. Two main results have emerged. First, the numbers and masses of central dark objects are broadly consistent with predictions based on quasar energetics. Second, the central dark mass correlates with the mass of the elliptical-galaxy-like “bulge” component of galaxies. What is less secure is the conclusion that the central dark objects must be BHs and not (for example) dense clusters of brown dwarf stars or stellar remnants. Rigorous arguments against such alternatives are available for only two galaxies. Nevertheless, these two objects and the evidence for dark masses at the centers of almost all galaxies that have been observed are taken as strong evidence that the AGN paradigm is essentially correct.
2. DEAD QUASAR ENGINES IN NEARBY GALAXIES
The qualitative discussion of the previous section can be turned into a quantitative estimate for $`M_\bullet `$ as follows. The quasar population produces an integrated comoving energy density of
$$u=\int _0^{\infty }\int _0^{\infty }\Phi (L,z)\,L\,dL\,\frac{dt}{dz}\,dz=1.3\times 10^{-15}\ \mathrm{erg\ cm^{-3}},$$
$`(1)`$
where $`\Phi (L,z)`$ is the comoving density of quasars of luminosity $`L`$ at redshift $`z`$ and $`t`$ is cosmic time. For a radiative energy conversion efficiency of $`ϵ`$, the equivalent present-day mass density is $`\rho _u=u/(ϵc^2)=2.2\times 10^4\,ϵ^{-1}`$ $`M_{\odot }`$ Mpc⁻³. Comparison of $`\rho _u`$ with the overall galaxy luminosity density, $`\rho _g\approx 1.4\times 10^8\,h\,L_{\odot }`$ Mpc⁻³, where the Hubble constant is $`H_0`$ = 100 $`h`$ km s⁻¹ Mpc⁻¹, implies that a typical nearby bright galaxy (luminosity $`L^{*}\approx 10^{10}\,h^{-2}\,L_{\odot }`$) should contain a dead quasar of mass $`M_\bullet \sim 1.6\times 10^6\,ϵ^{-1}h^{-3}`$ $`M_{\odot }`$. Accretion onto a BH is expected to produce energy with an efficiency of $`ϵ\approx 0.1`$, and the best estimate of $`h`$ is 0.71 $`\pm `$ 0.06. Therefore the typical BH should have a mass of $`\sim 10^{7.7}`$ $`M_{\odot }`$. BHs in dwarf ellipticals should have masses of $`\sim 10^6`$ $`M_{\odot }`$.
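The final number follows directly from the quantities quoted above; a minimal sketch of the arithmetic (values exactly as in the text):

```python
# Sketch: mass of the dead quasar expected in a typical L* galaxy,
# M_bh ~ 1.6e6 * eps^-1 * h^-3 solar masses, with eps = 0.1 and h = 0.71.
import math

eps, h = 0.1, 0.71
M_bh = 1.6e6 / eps / h**3
print(f"M_bh ~ 10^{math.log10(M_bh):.1f} M_sun")   # ~ 10^7.7
```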
In fact, the brightest quasars must have had much higher masses. A BH cannot accrete arbitrarily large amounts of mass to produce arbitrarily high luminosities. For a given $`M_\bullet `$, there is a maximum accretion rate above which the radiation pressure from the resulting high luminosity blows away the accreting matter. This “Eddington limit” is discussed in the preceding article. Eddington luminosities of $`L\approx 10^{47}`$ erg s⁻¹ $`\approx 10^{14}`$ $`L_{\odot }`$ require BHs of mass $`M_\bullet \gtrsim 10^9`$ $`M_{\odot }`$. These arguments define the parameter range of interest: $`M_\bullet \sim 10^6`$ to $`10^{9.5}`$ $`M_{\odot }`$. The highest-mass BHs are likely to be rare, but low-mass objects should be ubiquitous. Are they?
3. STELLAR DYNAMICAL SEARCHES FOR CENTRAL DARK OBJECTS
The answer appears to be “yes”. The majority of detections on which this conclusion is based are stellar-dynamical. However, finding BHs is not equally easy in all galaxies. This results in important selection effects that need to be understood for demographic studies. Therefore we begin with a discussion of techniques. We then give three examples that highlight important aspects of the search. NGC 3115 is a particularly clean detection that illustrates the historical development of the search. M 31 is one of the nearest galaxies and contains a new astrophysical phenomenon connected with BHs. Finally, the strongest case that the central mass is a BH and not a dark cluster of stars or stellar remnants is the one in our own Galaxy.
3.1 Stellar Dynamical Mass Measurement
Dynamical mass measurement is conceptually simple. If random motions are small, as they are in a gas, then the mass $`M(r)`$ within radius $`r`$ is $`M(r)=V^2r/G`$. Here $`V`$ is the rotation velocity and $`G`$ is the gravitational constant. In stellar systems, some dynamical support comes from random motions, so $`M(r)`$ depends also on the velocity dispersion $`\sigma `$. The measurement technique is best described in the idealized case of spherical symmetry and a velocity ellipsoid that points at the center. Then the first velocity moment of the collisionless Boltzmann equation gives
$$M(r)=\frac{V^2r}{G}+\frac{\sigma _r^2\,r}{G}\left[-\frac{d\ln \nu }{d\ln r}-\frac{d\ln \sigma _r^2}{d\ln r}-\left(1-\frac{\sigma _\theta ^2}{\sigma _r^2}\right)-\left(1-\frac{\sigma _\varphi ^2}{\sigma _r^2}\right)\right].$$
$`(2)`$
Here $`\sigma _r`$, $`\sigma _\theta `$, and $`\sigma _\varphi `$ are the radial and azimuthal components of the velocity dispersion. The density $`\nu `$ is not the total mass density $`\rho `$; it is the density of the luminous tracer population whose kinematics we measure. We never see $`\rho `$, because the stars that contribute most of the light contribute almost none of the mass. Therefore we assume that $`\nu (r)\propto `$ volume brightness. All quantities in Equation 2 are unprojected. We observe brightnesses and velocities after projection and blurring by a point-spread function (PSF). Information is lost in both processes. Several techniques have been developed to derive unprojected quantities that agree with the observations after projection and PSF convolution. From these, we derive the mass distribution $`M(r)`$ and compare it to the light distribution $`L(r)`$. If $`M/L(r)`$ rises rapidly as $`r\to 0`$, then we have found a central dark object.
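Once deprojected profiles are in hand, Equation 2 is mechanical to apply. A minimal sketch (assuming $`\nu `$, $`V`$ and the dispersion components are already given as smooth functions of $`r`$; the deprojection itself, which the text describes as the hard part, is not attempted here):

```python
# Sketch: spherical Jeans mass estimator of eq. (2) on deprojected profiles.
import numpy as np

G = 4.301e-6                               # G in kpc (km/s)^2 / M_sun

def jeans_mass(r, V, sig_r, sig_th, sig_ph, nu):
    dln_nu = np.gradient(np.log(nu), np.log(r))
    dln_s2 = np.gradient(np.log(sig_r**2), np.log(r))
    bracket = (-dln_nu - dln_s2
               - (1.0 - sig_th**2 / sig_r**2)
               - (1.0 - sig_ph**2 / sig_r**2))
    return V**2 * r / G + sig_r**2 * r / G * bracket

# Toy check: non-rotating, isotropic, flat dispersion, nu ~ r^-2
r = np.logspace(-2, 1, 50)                 # kpc
sig = 200.0 * np.ones_like(r)              # km/s
M = jeans_mass(r, np.zeros_like(r), sig, sig, sig, r**-2.0)
print(M[0], 2 * sig[0]**2 * r[0] / G)      # both give 2*sigma^2*r/G
```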
There is one tricky problem with this analysis, and it follows directly from Equation 2. Rotation and random motions contribute similarly to $`M(r)`$, but the $`\sigma ^2r/G`$ term is multiplied by a factor that depends on the velocity anisotropy and that can be less than 1. Galaxy formation can easily produce a radial velocity dispersion $`\sigma _r`$ that is larger than the azimuthal components $`\sigma _\theta `$ and $`\sigma _\varphi `$. Then the third and fourth terms inside the brackets in Equation 2 are negative; they can be as small as $`-1`$ each. In fact, they can largely cancel the first two terms, because the second term cannot be larger than $`+1`$, and the first is $`\approx 1`$ in many galaxies. This explains why ad hoc anisotropic models have been so successful in explaining the kinematics of giant ellipticals without BHs. But how anisotropic are the galaxies?
Much effort has gone into finding the answer. The most powerful technique is to construct self-consistent dynamical models in which the density distribution is the linear combination $`\rho =\mathrm{\Sigma }N_i\rho _i`$ of the density distributions $`\rho _i`$ of the individual orbits that are allowed by the gravitational potential. First the potential is estimated from the light distribution. Orbits of various energies and angular momenta are then calculated to construct a library of time-averaged density distributions $`\rho _i`$. Finally, orbit occupation numbers $`N_i`$ are derived so that the projected and PSF-convolved model agrees with the observed kinematics. Some authors also maximize $`-\mathrm{\Sigma }N_i\mathrm{ln}N_i`$, which is analogous to an entropy. These procedures allow the stellar distribution function to be as anisotropic as it likes in order (e.g.) to try to explain the observations without a BH. In the end, such models show that real galaxies are not extremely anisotropic. That is, they do not take advantage of all the degrees of freedom that the physics would allow. However, this is not something that one could take for granted. Because the degree of anisotropy depends on galaxy luminosity, almost all BH detections in bulges and low-luminosity ellipticals (which are nearly isotropic) are based on stellar dynamics, and almost all BH detections in giant ellipticals (which are more anisotropic) are based on gas dynamics.
3.2 NGC 3115: $`M_\bullet \approx 10^{9.0\pm 0.3}`$ $`M_{\odot }`$
One of the best stellar-dynamical BH cases is the prototypical S0 galaxy NGC 3115 (Fig. 1). It is especially suitable for the BH search because it is very symmetrical and almost exactly edge-on. NGC 3115 provides a good illustration of how the BH search makes progress. Unlike some discoveries, finding a supermassive BH is rarely a unique event. Rather, an initial dynamical case for a central dark object gets stronger as observations improve. Eventually, the case becomes definitive. This has happened in NGC 3115 through the study of the central star cluster – a tiny, dense cusp of stars like those expected around a BH (Figure 1). Later, still better observations may accomplish the next step, which is to strengthen astrophysical constraints enough so that all plausible BH alternatives (clusters of dark stars) are eliminated. This has happened for our Galaxy (§ 3.4) but not yet for NGC 3115.
The kinematics of NGC 3115 show the signature of a central dark object (Fig. 2). The original detection was based on the blue crosses. Already at resolution $`\sigma _{*}`$ = 0″.44, the central kinematic gradients are steep. The apparent central dispersion, $`\sigma \approx 300`$ km s⁻¹, is much higher than normal for a galaxy of absolute magnitude $`M_B=-20.0`$. Therefore, isotropic dynamical models imply that NGC 3115 contains a dark mass $`M_\bullet \approx 10^{9\pm 0.3}`$ $`M_{\odot }`$. Maximally anisotropic models allow smaller masses, $`M_\bullet \approx 10^8`$ $`M_{\odot }`$, but isotropy is more likely given the rapid rotation.
Since that time, two generations of improved observations have become available. The green points in Figure 2 were obtained with the Subarcsecond Imaging Spectrograph (SIS) and the Canada-France-Hawaii Telescope (CFHT). This instrument incorporates tip-tilt optics to improve the atmospheric PSF. The observations with the HST Faint Object Spectrograph (FOS) have still higher resolution. If the BH detection is correct, then the apparent rotation and dispersion profiles should look steeper when they are observed at higher resolution. This is exactly what is observed. If the original dynamical models are “reobserved” at the improved resolution, the ones that agree with the new data have $`M_\bullet `$ = (1 to 2) $`\times 10^9`$ $`M_{\odot }`$.
Figure 1. HST WFPC2 images of NGC 3115. The left panel shows a color image made from 1050 s $`V`$- and $`I`$-band images. The right panel shows a model of the nuclear disk. The center panel shows the difference; it emphasizes the compact nuclear star cluster. Brightness is proportional to the square root of intensity. All panels are 11″.6 square. \[This figure is taken from Kormendy et al. 1996, Astrophys. J. Lett., 459, L57.\]
Finally, a definitive detection is provided by the HST observations of the nuclear star cluster. Its true velocity dispersion is underestimated in Figure 2, because the projected value includes bulge light from in front of and behind the center. When this light is subtracted, the velocity dispersion of the nuclear cluster proves to be $`\sigma =600\pm 37`$ km s⁻¹. This is the highest dispersion measured in any galactic center. The velocity of escape from the nucleus would be much smaller, $`V_{\mathrm{esc}}\approx 352`$ km s⁻¹, if it consisted only of stars. Without extra mass to bind it, the cluster would fling itself apart in $`\sim 2\times 10^4`$ yr. Independent of any velocity anisotropy, the nucleus must contain an unseen object of mass $`M_\bullet \sim 10^9`$ $`M_{\odot }`$. This is consistent with the modeling results. The dark object is more than 25 times as massive as the visible star cluster. We know of no way to make a star cluster that is so nearly dark, especially not without overenriching the visible stars with heavy elements. The most plausible explanation is a BH. This would easily have been massive enough to power a quasar.
Figure 2. Rotation velocities (lower panel) and velocity dispersions (upper panel) along the major axis of NGC 3115, as observed at three different spatial resolutions. The resolution $`\sigma _{*}`$ is the Gaussian dispersion radius of the PSF; in the case of the HST observations, this is negligible compared to the aperture size of 0″.21. \[This figure is adapted from Kormendy et al. 1996, Astrophys. J. Lett., 459, L57.\]
3.3 M 31: $`M_\bullet \approx 3\times 10^7`$ $`M_{\odot }`$
M 31 is the highest-luminosity galaxy in the Local Group. At a distance of 0.77 Mpc, it is the nearest giant galaxy outside our own. It can therefore be studied in unusual detail.
M 31 contains the nearest example of a nuclear star cluster embedded in a normal bulge. When examined with HST, the nucleus appears double (Figure 3). This is very surprising. At a separation of $`2r`$ = 0″.49 = 1.7 pc, a relative velocity of 200 km s⁻¹ implies a circular orbit period of 50,000 yr. If the nucleus consisted of two star clusters in orbit around each other, as Fig. 3 might suggest, then dynamical friction would make them merge within a few orbital times. Therefore it is unlikely that the simplest possible explanation is correct: we are not observing the last stages of the digestion of an accreted companion galaxy.
The nucleus rotates rapidly and has a steep velocity dispersion gradient (Figure 3). Dynamical analysis shows that M 31 contains a central dark mass $`M_\bullet \approx 3\times 10^7`$ $`M_{\odot }`$. The possible effects of velocity anisotropy have been checked and provide no escape. Furthermore, the asymmetry provides an almost independent check of the BH mass, as follows.
The top panel of Figure 3 shows the HST image at the same scale as and registered in position with the kinematics. It shows that the dispersion peak is approximately centered on the fainter nucleus. In fact, it is centered almost exactly on a cluster of blue stars that is embedded in this nucleus. This suggests that the BH is in the blue cluster. This hypothesis can be tested by finding the center of mass of the asymmetric distribution of starlight plus a dark object in the blue cluster. The mass-to-light ratio of the stars is provided by dynamical models of the bulge at larger radii. If the galaxy is in equilibrium, then the center of mass should coincide with the center of the bulge. It does, provided that $`M_\bullet \approx 3\times 10^7`$ $`M_{\odot }`$. Remarkably, the same BH mass explains the kinematics and the asymmetry of the nucleus.
An explanation of the mysterious double nucleus has been proposed by Scott Tremaine. He suggests that both nuclei are part of a single eccentric disk of stars. The brighter nucleus is farther from the barycenter; it results from the lingering of stars near the apocenters of very elongated orbits. The fainter nucleus is produced by an increase in disk density toward the center. The model depends on the presence of a BH to make the potential almost Keplerian; then the alignment of orbits in the eccentric disk may be maintained by the disk’s self-gravity. Tremaine’s model was developed to explain the photometric and kinematic asymmetries as seen at resolution $`\sigma _{*}`$ ≈ 0″.5. It is also consistent with the data in Figure 3 ($`\sigma _{*}`$ ≈ 0″.27). The high velocity dispersion near the BH, the low dispersion in the offcenter nucleus, and especially the asymmetric rotation curve are signatures of the eccentric, aligned orbits.
Most recently, spectroscopy of M 31 has been obtained with the HST Faint Object Camera. This improves the spatial resolution by an additional factor of $`\sim `$ 5. At this resolution, there is a 0″.25 wide region centered on the faint nucleus in which the velocity dispersion is $`440\pm 70`$ km s⁻¹. This is further confirmation of the existence and location of the BH.
Figure 3. (top) HST WFPC2 color image of M 31 constructed from $`I`$-, $`V`$- and 3000 Å-band, PSF-deconvolved images obtained by Lauer et al. (1998, Astron. J., 116, 2263). The scale is 0″.0228 pixel⁻¹. (bottom and middle) Rotation curve $`V(r)`$ and velocity dispersion profile $`\sigma (r)`$ of the nucleus with the foreground bulge light subtracted. The symmetry point of the rotation curve and the sharp dispersion peak suggest that the BH is in the blue star cluster embedded in the left brightness peak. \[This figure is adapted from Kormendy & Bender 1999, Astrophys. J., 522, 772.\]
We do not know whether the double nucleus is the cause or an effect of the offcenter BH. However, offcenter BHs are an inevitable consequence of hierarchical structure formation and galaxy mergers. If most large galaxies contain BHs, then mergers produce binary BHs and, in three-body encounters, BH ejections with recoil. How much offset we see, and indeed whether we see two BHs or one or none at all, depend on the relative rates of mergers, dynamical friction, and binary orbit decay. Offcenter BHs may have much to tell us about these and other processes. Already there is evidence in NGC 4486B for a second double nucleus containing a BH.
3.4 Our Galaxy: $`M_\bullet =(2.9\pm 0.4)\times 10^6`$ $`M_{\odot }`$
Our Galaxy has long been known to contain the exceedingly compact radio source Sgr A\*. Interferometry gives its diameter as 63 $`r_s`$ by less than 17 $`r_s`$, where $`r_s=0.06`$ AU = $`8.6\times 10^{11}`$ cm is the Schwarzschild radius of a $`2.9\times 10^6`$ $`M_{\odot }`$ BH. It is easy to be impressed by the small size. But as an AGN, Sgr A\* is feeble: its radio luminosity is only $`\sim 10^{34}`$ erg s⁻¹ $`\approx 10^{0.4}`$ $`L_{\odot }`$. The infrared and high-energy luminosities are higher, but there is no compelling need for a BH on energetic grounds. To find out whether the Galaxy contains a BH, we need dynamical evidence.
Getting it has not been easy. Our Galactic disk, which we see in the sky as the Milky Way, contains enough dust to block all but $`\sim 10^{-14}`$ of the optical light from the Galactic center. Measurements of the region around Sgr A\* had to await the development of infrared detectors. Much of the infrared radiation is in turn absorbed by the Earth’s atmosphere, but there is a useful transmission window at 2.2 $`\mu `$m wavelength. Here the extinction toward the Galactic center is a factor of $`\sim `$ 20. This is large but manageable. Early infrared measurements showed a rotation velocity of $`V\approx 100`$ km s⁻¹ and a small rise in velocity dispersion to $`\approx 120`$ km s⁻¹ at the center. These were best fit with a BH of mass $`M_\bullet \sim 10^6`$ $`M_{\odot }`$, but the evidence was not very strong. Since then, a series of spectacular technical advances have made it possible to probe closer and closer to the center. As a result, the strongest case for a BH in any galaxy is now our own.
Most remarkably, two independent groups led by Reinhard Genzel and Andrea Ghez have used speckle imaging to measure proper motions – the velocity components perpendicular to the line of sight – in a cluster of stars at radii $`r\lesssim `$ 0″.5 $`\approx `$ 0.02 pc from Sgr A\* (Figure 4). When combined with complementary measurements at larger radii, the result is that the one-dimensional velocity dispersion increases smoothly to $`420\pm 60`$ km s⁻¹ at $`r\approx 0.01`$ pc. Stars at this radius revolve around the Galactic center in a human lifetime! The mass $`M(r)`$ inside radius $`r`$ is shown in Figure 5. Outside a few pc, the mass distribution is dominated by stars, but as $`r\to 0`$, $`M(r)`$ flattens to a constant, $`M_\bullet =(2.9\pm 0.4)\times 10^6\,M_{\odot }`$. Velocity anisotropy is not an uncertainty; it is measured directly and found to be small. The largest dark cluster that is consistent with these data would have a central density of $`4\times 10^{12}`$ $`M_{\odot }`$ pc⁻³. This is inconsistent with astrophysical constraints (§ 5). Therefore, if the dark object is not a BH, the alternative would have to be comparably exotic. It is prudent to note that rigorous proof of a BH requires that we spatially resolve relativistic velocities near the Schwarzschild radius. This is not yet feasible. But the case for a BH in our own Galaxy is now very compelling.
Figure 4. Images of the star cluster surrounding Sgr A\* (green cross) at the epochs indicated. The arrows in the left frame show approximately where the stars have moved in the right frame. Star S1 has a total proper motion of $`\sim 1600`$ km s⁻¹. \[This figure is updated from Eckart & Genzel 1997, M. N. R. A. S., 284, 576 and was kindly provided by A. Eckart.\]
Figure 5. Mass distribution implied by proper motion and radial velocity measurements (blue points and curve). Long dashes (green) show the mass distribution of the stars if the infrared mass-to-light ratio is 2. The red curve represents the stars plus a point mass $`M_\bullet =2.9\times 10^6`$ $`M_{\odot }`$. Short green dashes provide an estimate of how non-pointlike the dark mass could be: its $`\chi ^2`$ value is 1 $`\sigma `$ worse than that of the solid curve. This dark cluster has a core radius of 0.0042 pc and a central density of $`4\times 10^{12}`$ $`M_{\odot }`$ pc⁻³. \[This figure is updated from Genzel et al. 1997, M. N. R. A. S., 291, 219 and was kindly provided by R. Genzel.\]
4. BH DEMOGRAPHICS
The census of BH candidates as of January 2000 is given in Table 1. The table is divided into three groups – detections based on stellar dynamics, on ionized gas dynamics, and on maser disk dynamics (top to bottom). The rate of discovery is accelerating as HST pursues the search. However, we already have candidates that span the range of predicted masses and that occur in essentially every type of galaxy that is expected to contain a BH. Host galaxies include giant AGN ellipticals (the middle group), Seyfert galaxies (NGC 1068), normal spirals with moderately active nuclei (e. g., NGC 4594 and NGC 4258), galaxies with exceedingly weak nuclear activity (our Galaxy and M 31), and completely inactive galaxies (M 32 and NGC 3115).
| Table 1. Census of Black Hole Candidates | | | | | |
| --- | --- | --- | --- | --- | --- |
| Galaxy | Type | $`D`$ (Mpc) | $`M_{B,\mathrm{bulge}}`$ | $`M_\bullet `$ ($`M_{\odot }`$) | log $`M_\bullet /M_{\mathrm{bulge}}`$ |
| Galaxy | Sbc | 0.0085 | $`-17.65`$ | $`3\times 10^6`$ | $`-3.62`$ |
| M 31 | Sb | 0.70 | $`-18.82`$ | $`3\times 10^7`$ | $`-3.31`$ |
| M 32 | E | 0.70 | $`-15.51`$ | $`3\times 10^6`$ | $`-2.27`$ |
| NGC 3115 | S0/ | 8.4 | $`-19.90`$ | $`1\times 10^9`$ | $`-1.92`$ |
| NGC 4594 | Sa/ | 9.2 | $`-21.21`$ | $`1\times 10^9`$ | $`-2.69`$ |
| NGC 3377 | E | 9.9 | $`-18.80`$ | $`8\times 10^7`$ | $`-2.24`$ |
| NGC 3379 | E | 9.9 | $`-19.79`$ | $`1\times 10^8`$ | $`-2.96`$ |
| NGC 4342 | S0 | 15.3 | $`-17.04`$ | $`3\times 10^8`$ | $`-1.64`$ |
| NGC 4486B | E | 15.3 | $`-16.66`$ | $`6\times 10^8`$ | $`-1.03`$ |
| M 87 | E | 15.3 | $`-21.42`$ | $`3\times 10^9`$ | $`-2.32`$ |
| NGC 4374 | E | 15.3 | $`-20.96`$ | $`1\times 10^9`$ | $`-2.53`$ |
| NGC 4261 | E | 29.0 | $`-20.89`$ | $`5\times 10^8`$ | $`-2.92`$ |
| NGC 7052 | E | 59.0 | $`-21.31`$ | $`3\times 10^8`$ | $`-3.31`$ |
| NGC 6251 | E | 106.0 | $`-21.81`$ | $`6\times 10^8`$ | $`-3.18`$ |
| NGC 4945 | Scd/ | 3.7 | $`-15.1`$ | $`1\times 10^6`$ | … |
| NGC 4258 | Sbc | 7.5 | $`-17.3`$ | $`4\times 10^7`$ | $`-2.05`$ |
| NGC 1068 | Sb | 15.0 | $`-18.8`$ | $`1\times 10^7`$ | … |
Notes to Table 1: Column 1: galaxy name; column 2: Hubble type (“/” means that the galaxy is edge-on); column 3: distance, based on a Hubble constant of 80 km s⁻¹ Mpc⁻¹; column 4: absolute $`B`$-band magnitude of the bulge component of the galaxy; column 5: BH mass based on isotropic models; column 6: ratio of BH mass to bulge mass. The mass in stars is calculated from the luminosity via the mass-to-light ratio measured at large radii.
However, no complete sample has been studied at high resolution. The detections in Table 1, together with low-resolution studies of larger samples of galaxies, support the hypothesis that BHs live in virtually every galaxy with a substantial bulge component. The total mass in detected remnants is consistent with predictions based on AGN energetics, within the rather large estimated errors in both quantities.
The main new demographic result is an apparent correlation between BH mass and the luminosity of the bulge part of the galaxy. This is shown in Figure 6. Note that the correlation is not with the total luminosity: if the disk is included, the correlation is considerably worse. Whether the correlation is real or not is still being tested. The concern is selection effects. High-mass BHs in small galaxies are easy to see, so their scarcity is real. But low-mass BHs can hide in giant galaxies, so the correlation may be only the upper envelope of a distribution that extends to smaller $`M_\bullet `$. If it is real, then the correlation implies that BH formation or feeding is connected with the mass of the high-density, elliptical-galaxy-like part of the galaxy. With the possible exception of NGC 4945 (a late-type galaxy for which the existence and luminosity of a bulge are uncertain), BHs have been found only in the presence of a bulge. However, the limits on $`M_\bullet `$ in bulgeless galaxies like M 33 are still consistent with the correlation. Current searches concentrate on the question of whether small BHs – ones that lie significantly below the apparent correlation – can be found or excluded.
BH mass fractions are listed in Table 1 for cases in which the mass-to-light ratio of the stars has been measured. The median BH mass fraction is 0.29 %. The quartiles are 0.07 % and 0.9 %.
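These statistics follow directly from the last column of Table 1. A sketch (the percentile convention of `numpy` may differ slightly from the one used for the quoted quartiles):

```python
# Sketch: median and quartiles of the BH-to-bulge mass fraction from the
# log(M_bh/M_bulge) column of Table 1 (the two entries without a measured
# bulge mass-to-light ratio are omitted).
import numpy as np

log_frac = np.array([-3.62, -3.31, -2.27, -1.92, -2.69, -2.24, -2.96,
                     -1.64, -1.03, -2.32, -2.53, -2.92, -3.31, -3.18, -2.05])
q1, med, q3 = np.percentile(10.0**log_frac, [25, 50, 75])
print(f"median = {med:.2%}, quartiles = {q1:.2%} and {q3:.2%}")
```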
Figure 6. Correlation of BH mass with the absolute magnitude of the bulge component of the host galaxy. Since $`M/L`$ varies little from bulge to bulge, this implies a correlation between BH mass and bulge mass. Blue filled circles indicate $`M_\bullet `$ measurements based on stellar dynamics, green diamonds are based on ionized gas dynamics, and red squares are based on maser disk dynamics. It is reassuring that all three techniques are consistent with the same correlation.
5. ARE THEY REALLY BLACK HOLES?
The discovery of dark objects with masses $`M_\bullet \sim 10^6`$ to $`10^{9.5}`$ $`M_{\odot }`$ in galactic nuclei is secure. But are they BHs? Proof requires measurement of relativistic velocities near the Schwarzschild radius, $`r_s\approx 2\,[M_\bullet /(10^8\,M_{\odot })]`$ AU. Even for M 31, $`r_s\approx 8\times 10^{-7}`$ arcsec. HST spectroscopic resolution is only $`\approx `$ 0″.1. The conclusion that we are finding BHs is based on physical arguments showing that BH alternatives fail to explain the masses and high densities of galactic nuclei.
The most plausible BH alternatives are clusters of dark objects produced by ordinary stellar evolution. These come in two varieties, failed stars and dead stars. Failed stars have masses $`m_{*}\lesssim 0.08`$ $`M_{\odot }`$. They never get hot enough for the fusion reactions that power stars, i.e., the conversion of hydrogen to helium. They have a brief phase of modest brightness while they live off of gravitational potential energy, but after this, they could be used to make dark clusters. They are called brown dwarf stars, and they include planetary mass objects. Alternatively, a dark cluster could be made of stellar remnants – white dwarfs, which have typical masses of 0.6 $`M_{\odot }`$; neutron stars, which typically have masses of $`\approx `$ 1.4 $`M_{\odot }`$; and black holes with masses of several $`M_{\odot }`$. Galactic bulges are believed to form in violent starbursts, so massive stars that turn quickly into dark remnants would be no surprise. It is not clear how one could make dark clusters with the required masses and sizes, especially not without polluting the remaining stars with more metals than we see. But in the absence of direct proof that the dark objects in galactic nuclei are BHs, it is important to examine the alternatives.
However, dynamical measurements tell us more than the mass of a potential BH. They also constrain the maximum radius inside which the dark stuff must live. Its minimum density must therefore be high, and this rules out the above BH alternatives in our Galaxy and in NGC 4258. High-mass remnants such as white dwarfs, neutron stars, and stellar BHs would be relatively few in number. The dynamical evolution of star clusters is relatively well understood; in the above galaxies, a sparse cluster of stellar remnants would evaporate completely in $`\lesssim 10^8`$ yr. Low-mass objects such as brown dwarfs would be so numerous that collision times would be short. Stars generally merge when they collide. A dark cluster of low-mass objects would therefore become luminous, because the brown dwarfs would turn into stars.
More exotic BH alternatives are not ruled out by such arguments. For example, the dark matter that makes up galactic halos and that accounts for most of the mass of the Universe may in part be elementary particles that are cold enough to cluster easily. It is not out of the question that a cluster of these could explain the dark objects in galaxy centers without getting into trouble with any astrophysical constraints. So the BH case is not rigorously proved. What makes it compelling is the combination of dynamical evidence and the evidence from AGN observations. This is discussed in the previous article.
For many years, AGN observations were decoupled from the dynamical evidence for BHs. This is no longer the case. Dynamical BH detections are routine. The search itself is no longer the main preoccupation; we can concentrate on physical questions. New technical developments such as better X-ray satellites ensure that progress on BH astrophysics will continue to accelerate.
6. SUGGESTIONS FOR FURTHER READING
$``$ The search for BHs is reviewed in the following papers:
Kormendy, J., & Richstone, D. Ann. Rev. Astr. Astrophys. 33, 581 (1995)
Richstone, D., et al. Nature 395, A14 (1998)
$``$ In the following papers, quasar energetics are used to predict the masses of dead AGN engines:
Sołtan, A. M.N.R.A.S. 200, 115 (1982)
Chokshi, A., & Turner, E. L. M.N.R.A.S. 259, 421 (1992)
$``$ Dynamical models of galaxies as linear combinations of individual orbits are discussed in
Schwarzschild, M. Astrophys. J. 232, 236 (1979)
Richstone, D. O., & Tremaine, S. Astrophys. J. 327, 82 (1988)
van der Marel, R. P., Cretton, N., de Zeeuw, P. T., & Rix, H.-W. Astrophys. J. 493, 613 (1998)
Gebhardt, K., et al. Astron. J. 119, 1157 (2000)
$``$ The BH detection in NGC 3115 is discussed in
Kormendy, J., & Richstone, D. Astrophys. J. 393, 559 (1992)
Kormendy, J., et al. Astrophys. J. Lett. 459, L57 (1996)
$``$ The BH detection in M 31 is discussed in
Dressler, A., & Richstone, D. O. Astrophys. J. 324, 701 (1988)
Kormendy, J. Astrophys. J. 325, 128 (1988)
$``$ Tremaine’s model for the double nucleus of M 31 and new evidence for that model are in
Tremaine, S. Astron. J. 110, 628 (1995)
Kormendy, J., & Bender, R. Astrophys. J. 522, 772 (1999)
$``$ HST spectroscopy of the double nucleus of M 31 is presented in
Statler, T. S., King, I. R., Crane, P., & Jedrzejewski, R. I. Astron. J. 117, 894 (1999)
$``$ The following are thorough reviews of the Galactic center:
Genzel, R., Hollenbach, D., & Townes, C. H. Rep. Prog. Phys. 57, 417 (1994)
Morris, M., & Serabyn, E. Ann. Rev. Astr. Astrophys. 34, 645 (1996)
$``$ The latest measurement of the size of the Galactic center radio source is by
Lo, K. Y., Shen, Z.-Q., Zhao, J.-H., & Ho, P. T. P. Astrophys. J. Lett. 508, L61 (1998)
$``$ The remarkable proper motion measurements of stars near Sgr A\* and resulting conclusions about the Galactic center BH are presented in
Genzel, R., Eckart, A., Ott, T., & Eisenhauer, F. M.N.R.A.S. 291, 219 (1997)
Ghez, A. M., Klein, B. L., Morris, M., & Becklin, E. E. Astrophys. J. 509, 678 (1998)
$``$ Arguments against compact dark star clusters in NGC 4258 and the Galaxy are presented in
Maoz, E. Astrophys. J. Lett. 494, L181 (1998)
# Spectral Equivalence of Bosons and Fermions in One-Dimensional Harmonic Potentials
by
M. Crescimanno
Physics Department
Berea College
Berea, KY 40404
and
A. S. Landsberg
W. M. Keck Science Center
The Claremont Colleges
Claremont, CA 91711
February, 2000
ABSTRACT: Recently, Schmidt and Schnack (Physica A 260, 479 (1998)), following earlier references, reiterate that the specific heat of $`N`$ non-interacting bosons in a one-dimensional harmonic well equals that of $`N`$ non-interacting fermions in the same potential. We show that this peculiar relationship between heat capacities results from a more dramatic equivalence between bose and fermi systems. Namely, we prove that the excitation spectra of such bose and fermi systems are identical. Two complementary proofs of this equivalence are provided, one based on an analysis of the dynamical symmetry group of the $`N`$-body system, the other on a combinatoric analysis.
I. Introduction: With the advent of dilute atomic BEC<sup>1–3</sup> and, recently, nearly degenerate dilute atomic fermi gases<sup>4,5</sup>, there is renewed interest in understanding aspects of quantum many-body theory in inhomogeneous (in particular, harmonically trapped) systems. Since trapped, cooled atoms have properties that are, in principle, controllable to a degree unavailable in other systems (e.g., clusters and nuclei), they present new opportunities to study quantum mechanics and many-body theory.
Although the ultra-cold dilute atomic gas systems are large compared to the relevant coherence lengths, they are not homogeneous, since they are generally trapped in a (nearly) harmonic potential. In many of these systems the interparticle forces are significant. In this note, however, we ignore the interactions between atoms, our motivation being to understand better the thermodynamic properties of $`N`$ trapped non-interacting bosons and fermions. Recent work<sup>6,7</sup> describes strange relations between the equilibrium thermodynamics of these two systems. It was shown, for example, that the heat capacity (as a function of temperature) of $`N`$ noninteracting bosons in a one-dimensional harmonic potential is the same as that of $`N`$ noninteracting fermions in an identical potential. The respective partition functions for these systems are likewise closely related (see Ref. 7).
These “coincidences” provide hints that a deeper underlying connection exists between bose and fermi gases in a harmonic well. In particular, the heat capacity and partition function, as functions of the inverse temperature $`\beta `$, can be thought of as an “imaginary time” continuation of a Fourier transform of the spectrum. The fact that the heat capacities are the same for all temperatures suggests that there should be a state-for-state, level-for-level correspondence between these non-interacting many-body bosonic and fermionic systems. We show that this is indeed the case, and below describe two independent proofs of the spectral equivalence of excitations in these systems. The first argument relies on the dynamical symmetry group of these systems, while the other is based on combinatoric methods.
II. Dynamical symmetry group approach: Consider $`N`$ non-interacting bosons or fermions in a harmonic well. In either case, the difference between any two energy levels is an integer multiple of $`\mathrm{}\omega `$. Consequently, to analyze the spectral properties of each system it suffices to study the multiplicity per (total energy) level. Below we will show that the entire spectra of the bosonic and fermionic systems are isomorphic up to an overall energy shift.
Classically, a system of $`N`$ non-interacting particles in a 1-d harmonic potential is identical to that of a single particle in an $`N`$-dimensional isotropic harmonic potential. The system thus has an obvious spatial $`O(N)`$ symmetry we call “angular momentum”. On closer inspection, however, the system possesses a much larger dynamical symmetry group. Orbits in the $`N`$-dimensional isotropic harmonic potential do not precess. In analogy with the Kepler problem, we say that there is a conserved Runge-Lenz vector (which may be thought of as the axis of the orbit in configuration space), and we thus expect the symmetry group to be enlarged.
Since we will be interested in the quantization of the system, we describe the dynamical symmetry enlargement through the operators of the associated quantum theory. To simplify notation, take $`\mathrm{}\omega =1`$ throughout. Label the raising and lowering operators for the bosonic theory $`a_i^{\dagger },a_i`$ with $`i=1,\dots ,N`$. The canonical commutation relations (for the bosonic case) are $`[a_i,a_j^{\dagger }]=\delta _{ij}`$. The many-body hamiltonian operator of this noninteracting system is $`H=\sum _ia_i^{\dagger }a_i+ϵ`$, where $`ϵ`$ is an overall constant.
We call the space of the eigenvalues of the $`a_i^{\dagger }a_i`$ the state space. Equivalently, the state space is the integer lattice in the $`(+,\dots ,+)`$ quadrant of $`N`$-dimensional Euclidean space. The state space is not the Fock space, but it is a useful auxiliary space from which we will construct the Fock space, and so we discuss its properties. Let $`e_i`$ be orthonormal unit basis vectors in this Euclidean space associated with the eigenvalues of $`a_i^{\dagger }a_i`$. We name several distinguished vectors in this space, namely the level vector $`k=\sum _ie_i`$ and the root vectors $`l_i=e_i-e_{i+1}`$ for $`i=1,\dots ,N-1`$. We also define a spanning set of weight vectors $`r_i`$ via $`(r_i,l_j)=\delta _{i,j}`$ together with $`(r_i,k)=0`$.
Note that the operators associated with the $`l_i`$, namely $`a_i^{\dagger }a_i-a_{i+1}^{\dagger }a_{i+1}`$, are independent, and commute with each other (being all diagonal) and with the hamiltonian. The hamiltonian corresponds to the level vector. Furthermore, to each pair of particles $`l\ne j`$ there is an associated $`su(2)`$ subalgebra generated by $`\{a_l^{\dagger }a_l-a_j^{\dagger }a_j,a_l^{\dagger }a_j+a_j^{\dagger }a_l,i(a_l^{\dagger }a_j-a_j^{\dagger }a_l)\}`$. Application of the second or third operator in this $`su(2)`$ subalgebra shifts the first operator’s eigenvalue by a combination of root vectors. Finally, note that the matrix of inner products $`M_{ij}=(l_i,l_j)`$ of the root vectors is exactly the Cartan matrix of $`su(N)`$. Thus, we have identified the dynamical symmetry group of this system, generated by the (trace-free part of the) products $`a_i^{\dagger }a_j`$, to be $`su(N)`$.
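This last identification is easy to check numerically: the Gram matrix of the root vectors has 2 on the diagonal and −1 on the first off-diagonals, which is the $`su(N)`$ Cartan matrix. A minimal sketch (N = 5 is an arbitrary choice):

```python
import numpy as np

N = 5                                         # arbitrary illustrative size
e = np.eye(N)                                 # orthonormal basis e_i
l = [e[i] - e[i + 1] for i in range(N - 1)]   # root vectors l_i = e_i - e_{i+1}
M = np.array([[li @ lj for lj in l] for li in l])

cartan = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
assert np.array_equal(M, cartan)              # Gram matrix = su(N) Cartan matrix
```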
We now construct the Fock space for both fermions and bosons from the state space by realizing the respective anti-symmetrizations and symmetrizations of the multi-particle Fock states as linear combinations of states in the state space that lie on the same Weyl group orbit. We make this correspondence precise with the following observations. Each state in the state space can be thought of as a particular product of single-particle states; its coordinates (which are integers) are simply the harmonic oscillator levels of the individual particles. Constructing the multi-particle state associated with that product of single-particle states consists of combining all the states obtained from permutations of the single-particle labels. The permutation group $`S_N`$ is generated by primitive transpositions $`(\dots ,n_i,n_{i+1},\dots )\to (\dots ,n_{i+1},n_i,\dots )`$. Each of these primitive transpositions acts as a Weyl reflection (acting on all the roots) about the hyperplane perpendicular to the root $`l_i`$.
Thus, the Weyl group $`𝒲`$ of the symmetry algebra $`su(N)`$ is exactly the group of permutations of the single-particle states that make up the many-body state. Each element of the Weyl group preserves the level $`k`$. We specify a many-body state through an assignment of a highest weight vector $`r`$ and a level $`s`$ (a natural number) for which $`r+\frac{sk}{N}`$ is a vector in the $`(+,\dots ,+)`$ quadrant (boundaries included). Explicitly, in terms of the vectors in the state space, the bosonic many-body Fock space has the basis $`\mathrm{\Psi }_{r,s}^{\mathrm{boson}}`$
$$\mathrm{\Psi }_{r,s}^{\mathrm{boson}}=\frac{1}{\sqrt{N!}}\sum _{\sigma \in 𝒲}|\frac{s}{N}k+\sigma r>$$
whereas the basis of the fermionic many-body Fock space is
$$\mathrm{\Psi }_{r,s}^{\mathrm{fermion}}=\frac{1}{\sqrt{N!}}\sum _{\sigma \in 𝒲}(-1)^{sgn(\sigma )}|\frac{s}{N}k+\sigma r>$$
where $`(-1)^{sgn(\sigma )}`$ is $`+1`$ if $`\sigma `$ is an even permutation and $`-1`$ if it is an odd permutation. Note that, according to this definition, only $`r`$ vectors from the interior of the Weyl chamber are associated with a fermionic many-body state.
Succinctly stated, the multi-particle permutation symmetry of quantum mechanics maps the single-particle states of state space into the highest weight space of the symmetry algebra $`su(N)`$. For bosons, the map covers the entire Weyl chamber (including the lattice points in the bounding hyperplanes) at each level. For fermions, the map covers only the interior lattice points of the Weyl chamber. Additionally, due to the constraint that $`r+\frac{sk}{N}`$ lie in the $`(+,\dots ,+)`$ quadrant, at each level there are of course only a finite number of highest weight candidates.
The vector $`\rho =\frac{1}{2}\sum _{\alpha >0}\alpha `$ (half the sum of the positive roots) translates the vacuum of the bosonic Fock space to that of the fermionic Fock space at each level. Note also that $`\rho `$ is thus orthogonal to the level vector $`k`$. It can be combined with the level vector to constitute a one-to-one map between the spectra of the $`N`$-boson and $`N`$-fermion systems. Translation by the vector $`\mathrm{\Gamma }=(0,1,2,\dots ,N-1)`$ is precisely that map, with $`\mathrm{\Gamma }=\rho +\frac{N-1}{2}k`$. Note further that $`\mathrm{\Gamma }`$ has level $`\mathrm{\Gamma }\cdot k=N(N-1)/2`$, which is precisely the ground-state energy shift between the bosonic and fermionic systems. Geometrically, $`\mathrm{\Gamma }`$ is the smallest lattice vector that translates the lattice points in the bounding hyperplanes entirely into (a subset of) the interior of the Weyl chamber at each level.
III. Combinatoric approach: The spectral equivalence of one-dimensional, noninteracting, harmonically-trapped bosonic and fermionic gases can also be understood through a straightforward combinatoric argument.
In a system of $`N`$ noninteracting particles (bosons or fermions) in a harmonic well, let the energy level of the $`i`$th particle be specified by the integer $`e_i`$, with $`E=\sum _{i=1}^Ne_i`$ the total energy of the system. (Note: in writing the energy $`e_i`$ as an integer, we are, as before, setting $`\mathrm{}\omega =1`$, and for notational convenience are ignoring the constant $`1/2`$ associated with the single-particle ground-state energy.) Clearly there are many different microconfigurations possessing the same total energy $`E`$; we let $`G_N(E)`$ denote the multiplicity of states with fixed energy $`E`$. We will show that the multiplicity functions for bosons and fermions are equivalent. More precisely, we show that $`G_N^{boson}(E)=G_N^{fermion}(E+N(N-1)/2)`$, indicating that the multiplicities for the bose and fermi cases are identical provided each is measured relative to its respective ground-state energy (i.e., $`0`$ for bosons and $`N(N-1)/2`$ for fermions). This is sufficient to establish the equivalence of the excitation spectra.
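Before setting up the counting argument, the claimed identity is straightforward to confirm by brute-force enumeration for small systems. The sketch below counts unordered occupation configurations directly; N = 4 and the energy range are arbitrary test choices.

```python
from functools import lru_cache

def multiplicity(N, E, fermi=False):
    """Count N-particle states of total energy E (hbar*omega = 1).
    Occupations are enumerated in nondecreasing (bose) or strictly
    increasing (fermi) order, so each unordered configuration is
    counted exactly once."""
    step = 1 if fermi else 0

    @lru_cache(maxsize=None)
    def count(n, e, emin):
        if n == 0:
            return 1 if e == 0 else 0
        return sum(count(n - 1, e - e1, e1 + step)
                   for e1 in range(emin, e + 1))

    return count(N, E, 0)

N = 4
shift = N * (N - 1) // 2        # fermionic ground-state energy
for E in range(12):
    assert multiplicity(N, E) == multiplicity(N, E + shift, fermi=True)
print("excitation spectra agree for N =", N)
```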
We begin with the bose case. We imagine ordering the $`N`$ particles from lowest energy to highest, $`(e_1,e_2,\dots ,e_N)`$. The energy of the lowest-energy particle ($`e_1`$) can range from zero up to a maximum value of $`[E/N]`$, where the brackets $`[\,]`$ denote the integer part of the enclosed expression. (It is readily seen that if the energy of the lowest-energy particle were to exceed this maximum value, then the sum of the energies of the $`N`$ individual particles would exceed the total specified energy $`E`$ of the system.)
For a fixed $`e_1`$, the remaining energy $`E-e_1`$ must be divided up among $`N-1`$ particles. So the possible values of $`e_2`$, which represents the lowest energy among the remaining $`(N-1)`$ particles, can range from $`e_1`$ to $`\left[\frac{E-e_1}{N-1}\right]`$. (As before, it is clear that if $`e_2`$ went outside this range, then the sum of the energies of the $`N-1`$ particles would exceed the prescribed value $`E-e_1`$.)
Proceeding in this fashion, we see that
$$G_N^{boson}(E)=\sum _{e_1=0}^{[E/N]}\sum _{e_2=e_1}^{\left[\frac{E-e_1}{N-1}\right]}\cdots \sum _{e_{N-1}=e_{N-2}}^{\left[\frac{E-e_1-e_2-\cdots -e_{N-2}}{2}\right]}1.$$
A similar argument is used to construct the multiplicity function for the fermionic case. The fundamental distinction stems from the additional constraint that two fermions cannot occupy the same energy orbital, which in turn modifies the lower and upper bounds in the above summations, as we now describe. Consider first the lower bounds. From the exclusion principle, it immediately follows that the lower (fermionic) bounds must take the form $`e_i=e_{i-1}+1`$. The upper limits are found by noting that for a system of $`N`$ fermions with total energy $`E`$, the energy of the lowest-energy fermion cannot exceed $`\left[\frac{E-\frac{N(N-1)}{2}}{N}\right]`$, as a straightforward calculation reveals. Consequently, we find
$$G_N^{fermion}(E)=\sum _{e_1=0}^{\left[\frac{E-\frac{N(N-1)}{2}}{N}\right]}\sum _{e_2=e_1+1}^{\left[\frac{E-e_1-\frac{(N-1)(N-2)}{2}}{N-1}\right]}\cdots \sum _{e_{N-1}=e_{N-2}+1}^{\left[\frac{E-e_1-e_2-\cdots -e_{N-2}-\frac{(2)(1)}{2}}{2}\right]}1.$$
Expressed in this manner, the equivalence of $`G_N^{boson}(E)`$ and $`G_N^{fermion}(E+N(N-1)/2)`$ is revealed through the following key coordinate transformation: in the fermionic summations above, introduce new coordinates $`\widehat{e}_i=e_i-i+1`$. We claim that this transforms the fermionic sum into the corresponding bose sum. (Note: in the context of the preceding analysis, this coordinate change serves to relate the interior lattice points of the Weyl chamber (fermionic case) to the entire Weyl chamber (bose case); that is, it is simply the translation by the vector $`\mathrm{\Gamma }`$.) To see that this transformation achieves the desired result, first observe that under it the lower bounds in the fermionic summations $`(e_{i+1}=e_i+1)`$ become $`(\widehat{e}_{i+1}=\widehat{e}_i)`$, just as in the bose case. Meanwhile, it is not difficult to verify that the upper limits in the fermionic summations
$$e_{i+1}=\left[\frac{E-e_1-e_2-\cdots -e_i-\frac{(N-i)(N-i-1)}{2}}{N-i}\right]$$
now take the form
$$\widehat{e}_{i+1}=\left[\frac{E-\frac{N(N-1)}{2}-\widehat{e}_1-\widehat{e}_2-\cdots -\widehat{e}_i}{N-i}\right],$$
which, again, is the same as for the bosonic case (once we shift by the fermionic ground-state energy, $`E\to E+N(N-1)/2`$).
This equivalence between the bosonic and fermionic multiplicity functions proves that the excitation spectrum of one-dimensional harmonically trapped $`N`$ non-interacting bosons is identical to that of $`N`$ non-interacting fermions.
IV. Remarks and Conclusion: Although the excitation spectra of the fermi and bose systems are identical, these systems are not related by an obvious supersymmetry. There may, however, exist a connection associated with the fermionic representation of affine Lie algebra characters, as described in Refs. 9–11. Lastly, we observe that the recent work of Schmidt and Schnack<sup>6,7</sup> indicates that the specific heats of similar bose and fermi systems in higher spatial dimensions (specifically, odd dimensions) might also be equivalent, just as for the one-dimensional case considered here. However, preliminary work suggests that spectral equivalence does not persist in higher (odd) dimensions.
V. Acknowledgments: This research was supported in part by Research Corporation Cottrell Science Award #CC3943 and in part by the National Science Foundation under grants PHY 94-07194 and EPS-9874764.
Bibliography
1. M. H. Anderson, J. R. Ensher, M. R. Mathews, C. E. Weiman and E. A. Cornell, Science 269, 198 (1995).
2. K. B. Davis et al., Phys. Rev. Lett. 75, 3969 (1995).
3. C. C. Bradley, C. A. Sackett and R. G. Hulet, (to be published).
4. B. DeMarco and D. S. Jin, Phys. Rev. A 58, R4267 (1998).
5. B. DeMarco, J. L. Bohn, J. P. Burke, M. Holland, and D. S. Jin, Phys. Rev. Lett. 82, 4208 (1999), cond-mat/9812350.
6. H.-J. Schmidt and J. Schnack, “Thermodynamic fermion-boson symmetry in harmonic oscillator potentials,” cond-mat/9810036.
7. H.-J. Schmidt and J. Schnack, Physica A 260, 479 (1998), cond-mat/9803151.
8. J. E. Humphreys, “Introduction to Lie Algebras and Representation Theory,” Springer-Verlag, New York, 1972
9. R. Kedem, T. R. Klassen, B. M. McCoy, and E. Melzer, Phys. Lett. B307 (1993) 68-76, hep-th/9301046
10. E. Melzer, Int. J. Mod. Phys. A9 (1994), 1115-1136, hep-th/9305114
11. E. Bauer and D. Gepner, Phys. Lett. B372 (1996) 231-235, hep-th/9502118
# Variable Word Rate N-Grams
## 1 Introduction
In both spoken and written language, word occurrences are not random but vary greatly from document to document. Indeed, the field of information retrieval (IR) relies on the degree of departure from randomness as a discriminative indicator. IR systems are typically based on unigram statistics (often referred to as a “bag-of-words” model), coupled with sophisticated term weighting schemes and similarity measures. In an attempt to mathematically realise the intuition that an occurrence of a certain word may increase the chance that the same word is observed later, several probabilistic models of word occurrence have been proposed. Much of this work has evolved around the use of (a mixture of) the Poisson distribution. Recently, Church and Gale have demonstrated that a continuous mixture of Poisson distributions can produce accurate estimates of variable word rate. Lowe has introduced a beta-binomial mixture model which was applied to topic tracking and detection.
Although a constant word rate is an unlikely premise, it is nevertheless adopted in many areas, including $`n`$-gram language modelling. In order to address the problem of variable word rate, several adaptive language modelling approaches have been proposed, with a moderate degree of success. Typically, some notion of “topic” is inferred from the text according to the “bag-of-words” model. Information from different language model statistics (e.g., a general model and/or models specific to each topic) is then combined using methods such as mixture modelling or maximum entropy. The *dynamic cache model* is a related approach, based on the observation that recently appearing words are more likely to re-appear than is predicted by a static $`n`$-gram model. It blends cached unigram statistics for recent words with the baseline $`n`$-grams using an interpolation scheme.
Theoretically, it should not be necessary to rely on an *ad hoc* device such as a cache in order to model variable word occurrences. All the parameters of a language model may be completely determined according to a probabilistic model of word rate, such as a Poisson mixture.
In this paper, we outline the theoretical background for modelling variable word rates and, using transcripts of spoken data, illustrate the key observation that word rates are not static. The constant word rate assumption is then eliminated, and we introduce a variable word rate $`n`$-gram language model. An approach to estimating relative frequencies using prior information of word occurrences is presented. It is integrated with standard $`n`$-gram modelling, which naturally incorporates the discounting and smoothing schemes needed for practical use. On the DARPA/NIST Hub–4E North American Broadcast News task, the approach demonstrates perplexity reductions of up to 10%.
## 2 Modelling Variable Word Rates
In this section, we illustrate how the assumption of a constant word rate fails to capture the statistics of word occurrence in spoken (or written) documents. We show that the word rate is variable and may be modelled using a Poisson distribution or a continuous mixture of Poissons.
### 2.1 Poisson Model
The Poisson distribution is one of the most commonly observed distributions in both natural and social environments. It is fundamental to queueing theory: under certain conditions, the number of occurrences of a certain event during a given period, or in a specified region of space, follows a Poisson distribution (a *Poisson process*).
Under a Poisson process, the word rate is no longer uniform. First, we loosely define a document as a unit of spoken (or written) data of a certain length that contains some topic(s), or content(s). We consider a model in which a word occurs at random in a fixed-length document. For a set of documents, we assume that each document produces this word independently and that the underlying process is a Poisson process with a single parameter $`\lambda >0`$.
Formally, a Poisson distribution is a discrete distribution (of a random variable $`X`$) which is defined for $`x=0,1,\mathrm{}`$ such that
$`\theta ^{\left[p\right]}\left(x\right)=𝒫\left(X=x;\lambda \right)={\displaystyle \frac{e^{-\lambda }\lambda ^x}{x!}}`$ (1)
whose expectation and variance are given by $`E\left[X\right]=\lambda `$ and $`V\left[X\right]=\lambda `$, respectively.
### 2.2 Poisson Mixture — Negative Binomial Model
A less constrained model of variable word rate is offered by a mixture of Poissons, rather than a single Poisson.
Suppose the parameter $`\lambda `$ of the pdf (1) is distributed according to some density $`\varphi \left(\lambda \right)`$; then we define a continuous mixture of Poisson distributions by
$`\theta \left(x\right)={\displaystyle \int _0^{\mathrm{\infty }}}\theta ^{\left[p\right]}\left(x\right)\varphi \left(\lambda \right)d\lambda .`$ (2)
In particular, if $`\varphi \left(\lambda \right)`$ is a gamma distribution, i.e.,
$`\varphi \left(\lambda \right)=𝒢(\lambda ;\alpha ,\beta )={\displaystyle \frac{\lambda ^{\alpha -1}e^{-\lambda /\beta }}{\beta ^\alpha \mathrm{\Gamma }\left(\alpha \right)}}`$ (3)
for $`\alpha >0`$ and $`\beta >0`$, then the integral (2) is reduced to a discrete distribution for $`x=0,1,\mathrm{}`$ such that
$`\theta ^{\left[nb\right]}\left(x\right)=𝒩\left(X=x;\alpha ,\beta \right)=\binom{\alpha +x-1}{x}{\displaystyle \frac{\beta ^x}{\left(1+\beta \right)^{\alpha +x}}}.`$ (6)
This $`\theta ^{\left[nb\right]}\left(x\right)`$ is a negative binomial distribution<sup>1</sup>, and its expectation and variance are respectively given by $`E\left[X\right]=\alpha \beta `$ and $`V\left[X\right]=\alpha \beta \left(\beta +1\right)`$. <sup>1</sup>Let $`\varphi (\lambda )`$ be $`𝒢(\lambda ;\alpha ,\beta )`$ in (2): the integration is straightforward using the definition of the gamma function, $`\mathrm{\Gamma }(\alpha )=\int _0^{\mathrm{\infty }}t^{\alpha -1}e^{-t}dt`$, and the recursion $`\mathrm{\Gamma }(\alpha +1)=\alpha \mathrm{\Gamma }(\alpha )`$. The resultant pdf (6) has a slightly unconventional form in comparison to that in most standard textbooks (e.g.), but is identical upon setting a new parameter $`\gamma ={\displaystyle \frac{1}{1+\beta }}`$ with $`0<\gamma <1`$.
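The reduction of the gamma-mixed Poisson (2)–(3) to the negative binomial (6) can be checked numerically, e.g. with scipy, whose negative binomial uses the parameter $`\gamma =1/(1+\beta )`$ mentioned in the footnote. The values of $`\alpha `$ and $`\beta `$ below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, beta = 1.5, 4.0          # arbitrary illustrative parameters

def nb_pmf(x):
    # eq. (6), via scipy's negative binomial with p = 1/(1 + beta)
    return stats.nbinom.pmf(x, alpha, 1.0 / (1.0 + beta))

def poisson_gamma_pmf(x):
    # eq. (2): Poisson mixed over the Gamma(alpha, beta) density of eq. (3)
    integrand = lambda lam: stats.poisson.pmf(x, lam) * \
                            stats.gamma.pdf(lam, a=alpha, scale=beta)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for x in range(8):
    assert abs(nb_pmf(x) - poisson_gamma_pmf(x)) < 1e-8
```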
### 2.3 Word Occurrences in Documents
The histograms in figure 1 show the number of word (unigram) occurrences in spoken news broadcasts, taken from transcripts of the Hub–4E Broadcast News acoustic training data (1996–97). These transcripts were separated into documents according to section markers, and those with less than 100 words were removed, resulting in 2583 documents containing slightly less than 1.3 million words in total. In the following, the number of word occurrences was normalised to 1000-word documents.
‘FOR’ and ‘YOU’ appeared approximately the same number of times across all the transcripts. Under a constant word rate assumption, they would have been assigned a probability of around 0.0086. However, their occurrence rates varied from document to document; about 11% and 33% of all documents did not contain ‘FOR’ and ‘YOU’ (respectively), while 1% and 3% contained these words more than 30 times. This seems to indicate that the occurrence of ‘FOR’ is less dependent on the content of the document. A negative binomial distribution was used to model the variable word rate in each case (the solid line in figure 1).
The negative binomial seems to model word occurrence rate relatively well for most vocabulary items, regardless of frequency. Figure 1 illustrates this for one of the most frequent words ‘OF’ (probability of 0.023 according to the constant word rate assumption) and the less frequently occurring ‘CHURCH’ (less than 0.00029). In particular, ‘CHURCH’ appeared only in 93 out of 2583 documents, but 28 of them contained more than 10 instances, suggesting strong correlation with document content.
We also collected statistics of bigrams appearing in the Broadcast News transcripts. Figure 2 shows histograms and their negative binomial fits for the bigrams ‘FOR YOU’ and ‘OF CHURCH’. Although very sparse (they appeared in 127 and 6 documents, respectively), these statistics suggest that variable bigram rates can also be modelled using a continuous mixture of Poissons.
## 3 Variable Word Rate Language Models
Taking word occurrence rate into account changes a probabilistic language model from a situation akin to playing a lottery, to something closer to betting on a horse race: the odds for a certain word improve if it has come up in the past. In this section, we eliminate the constant word rate assumption and present a variable word rate $`n`$-gram language model.
### 3.1 Relative Frequencies with Prior Word Occurrences
Let $`f\left(wn_w\right)`$ denote a relative frequency after we observe $`n_w`$ occurrences of word $`w`$. It is calculated by
$`f\left(w\mid n_w\right)={\displaystyle \frac{1}{N}}{\displaystyle \frac{m_w-{\displaystyle \sum _{j=0}^{n_w-1}}j\theta _w\left(j\right)}{1-{\displaystyle \sum _{j=0}^{n_w-1}}\theta _w\left(j\right)}}.`$ (7)
The function is defined for $`n_w=0,1,\dots ,N`$, where $`N`$ is a fixed document length (e.g., $`N`$ is normalised to 1000 in figures 1 and 2). $`\theta _w\left(j\right)`$ is the occurrence rate for word $`w`$ in an $`N`$-word document (e.g., Poisson, negative binomial), satisfying
$$\sum _{j=0}^Nj\theta _w\left(j\right)=m_w,\qquad \sum _{j=0}^N\theta _w\left(j\right)=1.$$
In particular,
$`f\left(w\mid 0\right)={\displaystyle \frac{m_w}{N}},`$ (8)
which corresponds to the case with no prior information about word occurrence. Under the conventional constant word rate assumption, this $`f\left(w\mid 0\right)`$ is never modified, regardless of any word occurrences. Further, function (7) matches our intuition: the value of $`f\left(w\mid n_w\right)`$ increases monotonically as the number of observations $`n_w`$ accumulates (easy to verify), and it reaches unity when $`n_w=N`$.
The characteristics of function (7) are illustrated in figure 3. The right-hand figure shows relative frequencies for ‘OF’ and ‘CHURCH’ after a certain number of previous observations of each word. It indicates that the first few instances of the frequent word (‘OF’) do not modify its relative frequency very much, whereas the first few instances of the less common word (‘CHURCH’) have a substantial effect on its relative frequency. As the number of observations increases, the former is caught up by the latter.
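A minimal sketch of (7) under a single-Poisson occurrence model $`\theta _w`$ reproduces this behavior; the per-1000-word means below (23 for ‘OF’, 0.29 for ‘CHURCH’) are taken from the unigram probabilities quoted in section 2.3.

```python
from scipy import stats

def rel_freq(m_w, n_w, N=1000):
    """Eq. (7): relative frequency of word w after n_w prior observations,
    with a single-Poisson occurrence model of mean m_w per N-word document."""
    theta = lambda j: stats.poisson.pmf(j, m_w)
    num = m_w - sum(j * theta(j) for j in range(n_w))
    den = 1.0 - sum(theta(j) for j in range(n_w))
    return num / (den * N)

# 'OF' (m ~ 23 per 1000 words) barely moves after a few sightings,
# while 'CHURCH' (m ~ 0.29) jumps sharply -- cf. figure 3.
for n in (0, 1, 5, 10):
    print(n, rel_freq(23.0, n), rel_freq(0.29, n))
```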
Finally, in order to convert this relative frequency model into a probabilistic model of language, normalisation is required. This is achieved by dividing $`f\left(w\mid n_w\right)`$ by $`\sum _{w\in 𝒱}f\left(w\mid n_w\right)`$, where $`𝒱`$ denotes the vocabulary. Variable relative frequencies for bigrams can be calculated in a similar fashion.
### 3.2 Discounting and Smoothing Techniques
For any practical application, smoothing of the probability estimates is essential to avoid zero probabilities for events that were not observed in the training data. Let $`\left(w|v\right)`$ denote a bigram entry (a word $`v`$ followed by $`w`$) in the model. Further, $`f\left(w|v\mid n_{w|v}\right)`$ denotes a relative frequency after we observe $`n_{w|v}`$ occurrences of the bigram. A bigram probability $`p\left(w|v\mid n_{w|v}\right)`$ may be smoothed with a unigram probability $`p\left(w\mid n_w\right)`$. Using the interpolation method:
$`p\left(w|v\mid n_{w|v}\right)=\widehat{f}\left(w|v\mid n_{w|v}\right)+\left\{1-\alpha \left(v\right)\right\}p\left(w\mid n_w\right)`$ (9)
where $`\widehat{f}\left(w|v\mid n_{w|v}\right)`$ denotes a “discounted” relative frequency (described later) and
$`\alpha \left(v\right)={\displaystyle \sum _{w:\left(w|v\right)}}\widehat{f}\left(w|v\mid n_{w|v}\right)`$ (10)
is a non-zero probability estimate (i.e., the probability that a bigram entry $`\left(w|v\right)`$ exists in the model). Alternatively, back-off smoothing may be applied:
$`p\left(w|v\mid n_{w|v}\right)=\{\begin{array}{cc}\widehat{f}\left(w|v\mid n_{w|v}\right)\hfill & \text{if }\left(w|v\right)\text{ exists},\hfill \\ \beta \left(v\right)p\left(w\mid n_w\right)\hfill & \text{otherwise}.\hfill \end{array}`$ (13)
In (13), $`\beta \left(v\right)`$ is a back-off factor and is calculated by
$`\beta \left(v\right)={\displaystyle \frac{1-\alpha \left(v\right)}{1-{\displaystyle \sum _{w:\left(w|v\right)}}\widehat{f}\left(w\mid n_w\right)}}.`$ (14)
A unigram probability $`p\left(w\mid n_w\right)`$ can be obtained similarly by smoothing with some constant value.
Finally, a number of standard discounting methods exist for constant word rate models (see, e.g.). Analogous discounting functions for variable word rate models may be
$`\widehat{f}_{abs}\left(w|v\mid n_{w|v}\right)=f\left(w|v\mid n_{w|v}\right)-{\displaystyle \frac{c}{N}}`$ (15)
for the absolute discounting, and
$`\widehat{f}_{gt}\left(w|v\mid n_{w|v}\right)=df\left(w|v\mid n_{w|v}\right)`$ (16)
for the Good-Turing discounting. Discounting factors ($`c`$ and $`d`$) may be obtained from the zero-prior-information case — i.e., from the $`f\left(w|v\mid 0\right)`$ values of all bigrams in the model; for details of either scheme, see the standard references.
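A compact sketch of the back-off scheme (13)–(14) is given below. The input dictionaries — discounted bigram and unigram relative frequencies, and smoothed unigram probabilities — are hypothetical, assumed to have been precomputed via (7) together with (15) or (16).

```python
from collections import defaultdict

def make_backoff_bigram(fhat_bi, fhat_uni, p_uni):
    """fhat_bi: (v, w) -> discounted bigram relative frequency (model entries);
    fhat_uni: w -> discounted unigram relative frequency;
    p_uni:    w -> smoothed unigram probability."""
    alpha = defaultdict(float)    # eq. (10): mass retained by seen bigrams
    seen = defaultdict(float)     # sum of fhat_uni over words seen after v
    for (v, w), f in fhat_bi.items():
        alpha[v] += f
        seen[v] += fhat_uni[w]

    def p(w, v):
        if (v, w) in fhat_bi:                        # eq. (13), first branch
            return fhat_bi[(v, w)]
        beta = (1.0 - alpha[v]) / (1.0 - seen[v])    # eq. (14)
        return beta * p_uni[w]

    return p
```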
### 3.3 Language Model Perplexities
As noted in section 2, we extracted 2583 documents from the transcripts of the Broadcast News acoustic training data, each with a minimum of 100 words. A vocabulary of 19 885 words was selected and 390 000 bigrams were counted. In these experiments, the absolute discounting scheme (15) was applied, followed by interpolation smoothing (9). Figure 4 shows perplexities for the reference (key) transcription of the 1997 Hub–4E evaluation data, containing three hours of speech and approximately 32 000 words. Using conventional modelling with a constant word rate assumption, unigram and bigram perplexities were 936.5 and 237.9, respectively.
For the variable word rate models, the Poisson distribution was adopted for its computational simplicity. The number of word occurrences was normalised to an $`N`$-word document, with $`N`$ between 200 and 50 000, and the model parameters were modified ‘on-line’ during the perplexity calculation. For each occurrence of a word (bigram) in the evaluation data, a histogram of the past $`N`$ words (bigrams) was collected and the relative frequencies were modified according to the Poisson estimates (with appropriate normalisation applied), then discounted and smoothed.
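The ‘on-line’ evaluation loop can be sketched as follows for the unigram case, reusing rel_freq from the sketch in section 3.1; a closed vocabulary is assumed, and discounting and smoothing are omitted for brevity.

```python
from collections import deque, Counter
import math

def adapted_unigram_perplexity(tokens, m, N=500):
    """tokens: evaluation text; m: word -> expected count per N words
    (training estimate).  At each position, eq. (7) is evaluated from
    counts in the trailing N-word window (via rel_freq above), normalised
    over the vocabulary, and the next token is scored."""
    window, counts = deque(), Counter()
    logprob = 0.0
    for t in tokens:
        f = {w: rel_freq(m_w, counts[w], N) for w, m_w in m.items()}
        z = sum(f.values())                 # normalise to a distribution
        logprob += math.log(f[t] / z)
        window.append(t)
        counts[t] += 1
        if len(window) > N:                 # slide the N-word window
            counts[window.popleft()] -= 1
    return math.exp(-logprob / len(tokens))
```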
As figure 4 indicates, the variable word rate models reduced perplexity relative to the constant word rate models. A unigram perplexity of 843.4 (a 10% reduction) was achieved when $`N=500`$, and a bigram perplexity of 219.0 (an 8% reduction) when $`N=\mathrm{50\hspace{0.17em}000}`$. The difference was predictable, because bigrams are orders of magnitude sparser than unigrams.
## 4 Conclusion
In this paper, we have presented a variable word/$`n`$-gram rate language model, based upon an approach to estimating relative frequencies using prior information of word occurrences. Poisson and negative binomial models were used to approximate word occurrences in documents of fixed length. On the Broadcast News task, the approach demonstrated a perplexity reduction of up to 10%, indicating its potential, although the technique is still immature. Because of the data sparsity problem, it is not clear whether the approach can be applied to the language model components of current state-of-the-art speech recognition systems, which typically use 3/4-grams. However, we believe this technique does have application to problems in the area of information extraction. In particular, we are planning to apply these methods to the named entity annotation task, along with further theoretical development.
# Smoke Signals From IRC +10216. I. Milliarcsecond Proper Motions of the Dust
## 1. Introduction
The extreme carbon star IRC +10216 is a classic example of a red giant caught in the act of evolving into a planetary nebula. Its relative proximity, high infrared luminosity, and abundance of molecules in its dense outflow have resulted in a barrage of observations by astronomers working across the spectrum, but particularly in the infrared and millimeter/sub-millimeter. Despite all this attention, a good model of what is happening in the innermost regions, where the stellar outflow is born and accelerated, is still sorely lacking.
Numerous studies of molecular lines in the outer envelope (e.g. Bieging & Tafalla (1993)) have revealed a spherically expanding outflow, a finding which was beautifully confirmed with deep $`B`$ and $`V`$ band images of the dust shell in ambient scattered galactic light (Mauron & Huggins (1999)). However, this spherical symmetry, a characteristic of most red giant winds, will likely be broken as IRC +10216 evolves into a planetary nebula, most of which are elongated or bipolar (e.g. Zuckerman & Aller (1986)). The pronounced asymmetry in the innermost regions of the envelope of IRC +10216 reported by numerous high-resolution imaging experiments (most recently Weigelt et al. (1997); Danchi et al. (1998); Weigelt et al. (1998); Haniff & Buscher (1998)) and also by polarization studies (Trammell, Dinerstein & Goodrich (1994); Kastner & Weintraub (1994)) suggests that this aspherical flow has already set in, probably within the last few hundred years. With a privileged vantage onto such a brief yet important period in the evolution of a low- to intermediate-mass star (initial mass $`\sim 3`$–$`5M_{\odot }`$; Guelin et al. (1995)), high resolution observations are crucial in distinguishing between the many competing models for the physical mechanisms underlying the onset of asymmetry in the birth of a planetary nebula.
In this paper, we present a 7-epoch diffraction-limited imaging study of the inner dust shell of IRC +10216 in the near-infrared K band. Although some interpretation of the morphology of the images is given, full radiative transfer modelling is beyond the scope of this report and will be presented in a second paper. Instead, we emphasize here the detection and measurement of the motion of features presumably embedded in the outflow. Although proper motions have been reported for near-infrared images of dusty Wolf-Rayet shells (Tuthill, Monnier & Danchi (1999); Monnier, Tuthill & Danchi (1999)), the winds around Asymptotic Giant Branch (AGB) stars are two orders of magnitude slower, requiring longer time baselines and extremely high fidelity mapping schemes.
## 2. Observations and Results
### 2.1. Observations
Diffraction-limited images of IRC +10216 were obtained at seven separate epochs with the Keck I telescope, with dates and other observing details given in Table 1. Observations used the technique of aperture masking interferometry, by which starlight from the primary mirror is selectively blocked, with only a few regions of the pupil allowed to contribute to the final image. For observations of bright compact objects, of which IRC +10216 is a prime example, the methods of sparse-pupil interferometry have been shown to be fully competitive with, or superior to, other techniques such as speckle interferometry. Statistical methods based on the maximum-entropy technique (Sivia (1987)) have been used to recover maps from the complex visibility data; however alternate methods such as the CLEAN algorithm (Högbom (1974)) produced similar results. Data reduction and analysis procedures were also tested by observing a number of test objects, such as known binary stars, on each night. A detailed description of the Keck aperture masking experiment covering the observational techniques, data reduction, and image reconstruction can be found in Tuthill et al. (2000).
At each epoch in Table 1, a number of observations of IRC +10216 were made, often with quite distinct experimental setups. Three different aperture masks were used over the course of the project, with the non-redundant Golay-type masks passing only a few percent, and the partially-redundant annulus mask passing around ten percent of the unobstructed pupil. Three different filters were also employed at various epochs, with the bandpass characteristics given in Table 1. This diversity of observing parameters, while in part representing experimental evolution, allowed us to tailor the observations to conditions and specific requirements on a given night. However, no systematic differences were found when comparing maps from annulus and Golay data, and more significantly, from any of the different filters used (filters differ in bandwidth, but have similar center frequencies; c.f. Table 1). Thus for the remainder of this work, all maps taken at one observing epoch are treated as measuring the same quantities.
Two limitations of the interferometrically reconstructed maps should be mentioned. Firstly, the absolute photometry, or surface brightness scale in the maps, is difficult to calibrate with any great accuracy. For this reason, images shown have fluxes scaled relative to the peak intensity in each map, and discussion will refer to these relative fluxes. Secondly, the closure phase method does not deliver absolute positional information, and the center of each map has been chosen to be the location of the brightest pixel.
### 2.2. Morphology of IRC +10216
Figure 1 shows maps of IRC +10216 from data spanning a period from 1997 January to 1999 April. In this figure, maps shown are the noise-weighted average over all image reconstructions from a given epoch (or, in the case of Jan/Feb 99, pair of epochs). Simple inspection shows that the basic morphology of the inner nebula surrounding IRC +10216 has been very well established in the K band. From the most prominent structures down to relatively minor features at only a few percent of the peak surface brightness, a high degree of consistency is found in the sequence of images. Furthermore, the early epoch images shown in Figure 1 are also in excellent agreement with recently published near-infrared images of Weigelt et al. (1998) and Haniff & Buscher (1998) (We refer collectively to these two papers as “WHB” hereafter).
In order to proceed with a quantitative analysis of the maps, a simple descriptive model has been used to identify and label features. WHB have labelled compact features A through D, with Weigelt et al. (1998) adding E and F which seem much less distinct. With the considerably higher resolution available from the Keck, these compact knots have in most cases been resolved into more complicated structures, and therefore a different approach is taken in describing the images, which we relate to the earlier schemes.
A skeleton diagram of our model, based on analysis of images from Figure 1, is given in Figure 2. The location of the brightest feature in all maps, the relatively compact core, appears as a + surrounded by rings showing its approximate extent (Feature A of WHB). This core appears somewhat offset to the North-West from the center of an elongated, roughly elliptical region (Features E & F of Weigelt et al. (1998)). We refer to the Core and these immediate surroundings as the Southern Component in Figure 2. The next most prominent structure in the maps is the linear extension to the North and North-East, which we have split into two; the shorter North Arm (WHB Feature C) and the more prominent, elongated, North-East Arm (WHB Feature B). The important North-East Arm we have modelled as a linear ridge of emission. Displaced to the East between these other features is a dimmer structure, consisting of multiple peaks we have labelled the Eastern Complex (WHB Feature D). Although signal-to-noise was not uniformly high enough to be assured of high-fidelity reconstructions of the Eastern Complex, for the purposes of characterizing the structure, we have labelled the most southerly portion Cloud EC1 (usually a little brighter and appearing like a ring or three peaks in a triangle), while in the more northern section we have been content to simply tag the location of the two most prominent peaks.
### 2.3. Proper Motions and Modelling
Having established in the previous section a framework for describing the appearance of the maps of IRC +10216, we now proceed to examine the data for changes over the seven epochs. The easiest thing to look for is a change in the relative positions of the components. This was done with the use of a computer program which found the best-fit location of model components describing four features: the North-East Arm, EC1, and two minor peaks in the Eastern Complex. As our maps have no associated astrometry, the registration between separate images is unknown and we have used the bright compact core as the fixed point against which to measure any motions. Note that fits were not made directly to the averaged images presented in Figure 1, but to the full 7-epoch dataset with multiple separate images at each epoch (comprising a total of 25 maps). This allowed errors to be determined on the locations of the features from the apparent spread of values over different maps.
All four features tracked were found to exhibit widening separations from the core as a function of time. This can be verified by visual inspection of the images of Figure 1 where the Northward motion of the North-East Arm is readily apparent without any assistance from computer model-fits. Motions of the North-East Arm and elements in the Eastern Complex including EC1 are shown in Figure 2. To avoid the confusion of seven sets of features on one plot, the motion is shown averaged into four time-intervals (labelled t1 – t4), as described in the caption.
In addition to the motions of certain features, other changes are apparent with time in Figure 1, at a level of significance well above the level of noise in the maps. Two of these in particular are worth highlighting. The North-East Arm which in 1997 January ends in a bright, fairly compact knot, evolves through a stage where it is of fairly uniform brightness along its length (1998), and in the final measurements (1999) the extreme end of the arm has dimmed considerably. The second interesting change concerns the central core, which in 1997 January appears fairly circular and compact, but by 1999 April clearly exhibits an extension to the South-East. Further discussion of these points is given in Section 3 below.
### 2.4. Outflow Velocities
Having established the presence of motion between the components, it is possible to characterize the apparent outflow of material by watching the features presumably embedded within it. We restrict our attention to the North-East Arm and EC1, as the more minor features are too near the map noise level to make useful tag points for the flow. The first important question to be addressed is the assumed origin of the flow. WHB present arguments that their component A, the compact core, be identified as the star itself. Certainly, the features identified as moving do appear to be moving away from a point to the South, and in our subsequent analysis we have assumed radial divergence from the compact core. However, there is no guarantee that any of the map features must be the stellar disk, which may lie behind some optically thick region, as discussed further below.
It was fairly straightforward to project the motion of EC1 onto a vector beginning at the origin (the brightest pixel in the core; see Figure 2). Displacement along this vector with time then gives a velocity. Motion of the North-East Arm, however, is not so easy to quantify. The minimum possible apparent velocity consistent with the time-sequence of images would have material in the arm moving to the North-West, perpendicular to its length. While giving a lower bound on the possible velocity, this motion is not consistent with spherical outflow from our defined origin, and furthermore does not really describe the data as the arm would also need to grow in length. Instead, we have labelled points NE1 & NE2 near the beginning and end of the arm respectively, and we measure the radial motion of these points from the origin. Since the Arm has been fit as a linear ridge, two points are sufficient to completely determine its motion. Thus our three trace points in the flow are EC1 and the two points NE1 & NE2 on the North-East Arm, all of which are assumed to be radially diverging from the Core.
Plotted in Figure 3 are the displacements from the origin of each trace point over the seven epochs. For the well-determined points NE1 & NE2, lying on the high signal-to-noise North-East Arm, the data describe a very well defined linear relationship with time, which is echoed, with more scatter, by the EC1 data. Velocities may be computed from the slope of a least-squares fit to the data, with the resulting apparent outflow rates given in Figure 3.
### 2.5. Outflow Velocity Dispersion
The most direct way to observe an accelerating body is to follow its change of speed with time. Unfortunately, the error bars are too great, and the time-span of observation too short to make this strategy worthwhile for data in Figure 3. A uniform radial acceleration of 3.4 mas.yr<sup>-2</sup> (see below) has been over-plotted (dotted line) illustrating the difficulty of measuring such a small curvature of the velocity law. However, some information on the flow dynamics can be inferred from examination of the velocity field derived from the maps. Specifically, the greater the distance from the origin, the higher the outflow speed. Three possible explanations for this finding are described below.
The acceleration of material over the region sampled by the measurements may have been detected. This acceleration can be easily visualized from the motion of the North-East Arm: the ridge-lines of best fit are not parallel, causing the Arm to tilt Northward with time. In order to do this, the far end of the arm (NE2) must be moving faster than the near end (NE1). The three flow-velocity data points from Figure 3 are consistent with a uniform radial acceleration of $`3.4\pm 0.5`$ mas.yr<sup>-2</sup> starting at a radius of $`80\pm 20`$ mas. As the principal acceleration mechanism is expected to be radiation pressure from the central star, a uniform acceleration law would not be expected. However, in a region as clearly anisotropic as the inner dust shell of IRC +10216, flows may be complex, and more realistic models will require greater efforts both in modelling and in the recovery of longer time-baseline data.
A second possible explanation for the observed velocity field arises from the ambiguities associated with the projection of three-dimensional structure onto the plane of the sky. Such projected motions and velocities may give a misleading view of the true flow structure. To take an extreme example, the North-East Arm may, in reality, be a fragment of a circular arc, all of which is at a uniform distance from the star and moving at a uniform velocity. However, when viewed from a relatively acute projected angle, apparent differences in separation and velocity will be recorded. For a roughly spherical distribution of clumps at a uniform flow speed, projection effects alone would result in slower apparent speeds being observed closer to the star, mimicking an acceleration. The uniform ‘acceleration’ law given above is appropriate for such a scenario, where the velocity will be proportional to the displacement from the origin.
Finally, a third possibility which presents itself is that faster material is found further out (e.g., a velocity law $`V\propto R`$), but without acceleration taking place over the sampled region. The most obvious origin for such a flow would be a stellar eruption ejecting material with a range of velocities at some point in the past, allowing faster clumps to move further from the star. Under this scenario, back-projection of the flow through time, based on the NE1 & NE2 velocities, implies that the North-East Arm was created in an event around JD 2449000 (about 1993 January), originating from a point some 90 mas from the core. Ideally, the assumption of a single expulsion event could be tested with three or more clumps by seeing if they appear to be diverging from a single point in space and time. Unfortunately, the errors on the EC1 flow are too large to offer a meaningful constraint, although a common origin for NE1, NE2 & EC1 does fall within the errors.
In summary, although acceleration stands as a strong candidate, the ambiguities of interpretation make it impossible to claim a clear detection. Indeed, some combination of all three effects discussed above may be needed to account for the true angular velocity field. This highlights the need for high quality imaging over more extended periods to follow clumps from birth to dispersal in the extended shell. Despite the uncertainties, a plane-of-the-sky assumption represents a simplest case and should give, at the least, valid lower bounds to the velocity determinations discussed here.
## 3. Discussion
Taking the Southern Component and the North-East Arm as the dominant bright structures in our images, the bipolarity at position angle $`\sim 20^{\circ }`$ reported throughout the literature is confirmed here. However, at very high resolution, complex and clumpy structures are revealed which do not conform immediately to any simple axially symmetric model. Perhaps one of the most striking features of the maps of Figure 1 is the Dark Lane (see Figure 2), around which all the major structures are distributed. At all epochs, the flux level in this hole, despite its close proximity to all the brightest knots of emission, is consistent with a surface brightness of $`\lesssim 1`$–2% of the peak: similar to the noise level in the maps. That such a dark region could exist in the heart of the nebula argues for the presence of considerable material in the line-of-sight, leading to high levels of obscuration. If this is the case, the interpretation of the maps becomes more difficult. A bright knot might be either a hot clump of dust, or a “window” in the dust allowing a glimpse of the hot inner regions.
The modelling of the dust shell is currently a work in progress, with the most promising interpretation being that the dark central band is a dusty torus or equatorial density enhancement, which we see tilted towards us in the South, revealing the inner hot regions (Southern Component) and possibly the star (Core). Clumpy features embedded in the outflow, such as the North-East Arm and Eastern Complex, might have their origins in enhanced dust formation occurring above slowly-evolving massive convective features (Weigelt et al. (1998)) or magnetic spots (Soker & Clayton (1999)) on the stellar surface. Interestingly, none of the changes observed over the seven epochs, which cover more than one pulsational cycle ($`\sim 638`$ days), gave clear evidence for new dust nucleation (discounting the elongation of the Core, discussed below). This is in accord with model dust shells (e.g. Winters, Fleischer, Gauger & Sedlmayr (1995)), which show new layers of dust forming on timescales longer than one pulsation period.
Until further modelling can be completed, we confine further discussion here to the motion of material in the outflow. For features such as the North-East Arm, it seems most likely that motions detected are simply the displacement of emitting material, and not some more exotic scenario such as the motion of a viewing hole in the dust, or a warm spot caused by a ‘searchlight beam’ where the star shines through a moving window in the dust. The North-East Arm does exhibit some common-sense characteristics reinforcing this view, such as the flux from the bright knot at its extreme end, initially bright, fading as it moves outwards and presumably further from the central star (see Section 2.3).
Perhaps the greatest uncertainties affecting the interpretation are the questions as to the origin and direction of the flow, and the three-dimensional structure of the nebula. Although Figure 2 does appear to show that features are moving away from a point in the general location of the Southern Component, it was not possible to pin this down precisely. Worse, it was not possible to tell if the Southern Component and Core were moving also: it may be the case that all features are diverging from the star which lies obscured behind the dark central band. A number of previous authors (WHB, Kastner & Weintraub (1994)) have claimed that the central star is visible in near-infrared. The compact feature we have labelled the Core has an approximate angular size of $`50`$ mas – close to the expected angular diameter of the stellar photosphere (Danchi et al. (1994); Monnier (1999)). However the progressive elongation of the Core through time (see Section 2.3; Figure 1) is not easy to reconcile with this view. It is possible that the South-East extension we see is simply the newest condensation of dust moving from the star, however for this to be the case the dust must be forming extremely close ($`\stackrel{<}{_{}}2R_{}`$) to the photosphere. The inner radius of the dust shell has been recently estimated at $`4.5R_{}`$ (Groenewegen (1997)) and $`6.8R_{}`$ (Monnier (1999)). It may also be possible that a binary companion is playing some role in this inner distortion, although it is highly unlikely that direct light from a main-sequence star could have been seen. In this work, the assumption has been made that the Core does mark a fixed location associated with the star, with extra structure in the Southern Component arising from emission and/or partial obscuration from the inner boundary of the dust shell. Alternate scenarios, in which the Core may also be moving, lead to modification of the derived velocities by up to a factor of 2.
The gas outflow velocity at large distances from the star is well established from CO line profile studies to be 14.5 km.sec<sup>-1</sup>; however, as calculated by Groenewegen (1997), the dust will drift through the gas at 3 km.sec<sup>-1</sup>, resulting in a dust outflow speed of 17.5 km.sec<sup>-1</sup>. Combining this with our maximum angular velocity of 25.5 mas.yr<sup>-1</sup> from Figure 3 yields a distance estimate of 145 pc to IRC +10216. This is in accord with modern estimates lying in the range 110 – 170 pc (Winters, Dominik & Sedlmayr (1994); Le Bertre (1997); Groenewegen, Van Der Veen & Matthews (1998)). However, there are too many uncertainties involved to place high confidence in this result. In addition to the geometric ambiguities already mentioned, it is unclear if the dust clump followed here (NE2) has finished accelerating and is at its terminal velocity. Furthermore, the value of V measured in the spherical molecular shell may not be a good measure of the inner dust motions. It is difficult to imagine that any dramatic change from spherical outflow to an equatorial disk and bipolar lobes in IRC +10216 did not also entail changes in the velocity structure.
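As a consistency check on the numbers quoted above, the standard relation between transverse velocity, proper motion, and distance (the factor 4.74 converts 1 AU yr<sup>-1</sup> to km.sec<sup>-1</sup>) reproduces this estimate:

$$d\,[\mathrm{pc}]=\frac{v_t\,[\mathrm{km\,s^{-1}}]}{4.74\,\mu \,[\mathrm{arcsec\,yr^{-1}}]}=\frac{17.5}{4.74\times 0.0255}\approx 145\,\mathrm{pc}.$$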
Taking a distance of 145 pc, proper motions of 11.5, 17.8 & 25.5 mas.yr<sup>-1</sup> (Figure 3) imply outflow velocities of 7.9, 12.2 & 17.5 km.sec<sup>-1</sup> for points NE1, EC1, and NE2 respectively. As computed above, the velocity structure is consistent with an apparent uniform acceleration of $`3.4\pm 0.5`$ mas.yr<sup>-2</sup> from rest at $`80\pm 20`$ mas. However, the projection of a three-dimensional motion onto the plane of the sky, and unknowns associated with the initial conditions of each clump, may also account for the range of observed flow speeds. Models of radiatively-driven dust acceleration predict a more complicated velocity law dependent upon many properties of the star and the outflowing gas and dust (Kwok (1975); Papoular & Pegourie (1986)). However, detailed comparison with such predictions is premature until the velocity of a single feature can be shown to be changing over time.
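If the apparent uniform-acceleration law quoted above were taken at face value, the three proper motions would imply angular separations (our illustrative arithmetic, not measurements from this work) of

$$r=r_0+\frac{v^2}{2a}:\qquad r_{\mathrm{NE1}}\approx 99\,\mathrm{mas},\quad r_{\mathrm{EC1}}\approx 127\,\mathrm{mas},\quad r_{\mathrm{NE2}}\approx 176\,\mathrm{mas},$$

subject, of course, to the projection and initial-condition caveats just mentioned.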
## 4. Conclusions
Diffraction-limited images recovered using interferometric techniques from a multi-epoch study spanning more than two years at the Keck I telescope are presented. Taken in the near-infrared K Band, the maps have revealed an asymmetric and clumpy structure at angular resolutions exceeding the expected diameter of the stellar photosphere ($`\sim `$50 mas). The most likely morphology for the circumstellar environment is an optically thick circumstellar disk or torus, possibly tilted towards the line of sight in the South revealing the hot inner cavity and emission from the stellar photosphere. The angular separations of clumps of material thought to be in the Northern bipolar lobe have been followed over time, revealing increasing separation from the compact core to the South. Outflow velocities derived from this motion are consistent with estimates of the radial outflow velocity (from CO measurements) and the expected distance. Clumps at greater distances from the Core were found to show increasing velocities, which may be taken as evidence for acceleration in the inner regions; the effects of geometrical projection; or the result of a past event which ejected material with a range of velocities. In addition to the changing separations of components, the appearance of the inner nebula was found to be evolving in other ways, however none were interpreted as evidence for new dust condensation over the pulsation cycle. Further modelling of this system is currently underway, and will be presented in a subsequent paper.
We would like to thank the referee, Matt Bobrowsky, for helpful suggestions and Devinder Sivia for the maximum-entropy mapping program “VLBMEM”. Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. This work is a part of a long-standing interferometry program at U.C. Berkeley, supported by the National Science Foundation (Grants AST-9315485 and AST-9731625), by the Office of Naval Research (OCNR N00014-89-J-1583), and by the France-Berkeley Fund.
# On an asymptotic behavior of elements of order $`p`$ in irreducible representations of the classical algebraic groups with large enough highest weights
## 1 Notation and preliminary comments
Throughout the article for a semisimple algebraic group $`S`$ the symbols $`Irr(S)`$, $`𝐗(S)`$, and $`𝐗^+(S)`$ mean the same as the similar ones for $`G`$ introduced earlier; $`R(S)`$ is the set of roots of $`S`$, $`\langle S_1,\ldots ,S_j\rangle `$ is the subgroup in $`S`$ generated by subgroups $`S_1,\ldots ,S_j`$; $`Irr_p(S)\subset Irr(S)`$ is the set of $`p`$-restricted representations, i.e. irreducible representations with $`p`$-restricted highest weights; $`𝐗(\phi )`$ ($`𝐗(M)`$) is the set of weights of a representation $`\phi `$ (a module $`M`$); $`dimM`$ is the dimension of $`M`$; $`M(\omega )`$ is the irreducible $`S`$-module with highest weight $`\omega `$; $`L`$ is the Lie algebra of $`G`$; $`R=R(G)`$, $`R^+\subset R`$ is the set of positive roots; $`Irr_p=Irr_p(G)`$; $`𝒳_\beta \subset G`$ and $`X_\beta \in L`$ are the root subgroup and the root element associated with $`\beta \in R`$, $`𝒳_{\pm i}=𝒳_{\pm \alpha _i}`$, and $`X_{\pm i}=X_{\pm \alpha _i}`$. Set $`H(\beta _1,\ldots ,\beta _j)=\langle 𝒳_{\beta _1},𝒳_{-\beta _1},\ldots ,𝒳_{\beta _j},𝒳_{-\beta _j}\rangle `$. For $`\omega \in 𝐗(S)`$ and $`\alpha \in R(S)`$ denote by $`\langle \omega ,\alpha \rangle `$ the value of the weight $`\omega `$ on the root $`\alpha `$. For an $`S`$-module $`M`$ and a unipotent element $`x\in S`$ define $`k_M(x)`$ similarly to $`k_\phi (x)`$. If $`|x|=p`$, then $`n_\phi (x)`$ is the number of Jordan blocks of size $`p`$ of the matrix $`\phi (x)`$ for a representation $`\phi `$ of $`S`$ and $`n_M(x)`$ denotes the same number for a module $`M`$ affording $`\phi `$.
An element $`x\in G`$ of order $`p`$ can be embedded into a closed connected subgroup $`\mathrm{\Gamma }`$ of type $`A_1`$ whose labelled diagram coincides with $`\mathrm{\Delta }_x`$ (see \[6, Theorem 4.2\]). Set $`𝐗_1=𝐗(A_1(K))`$ (the simply connected group of this type) and identify $`𝐗_1`$ with $`\mathbb{Z}`$ mapping $`a\omega _1\in 𝐗_1`$ into $`a`$. Then $`𝐗(\mathrm{\Gamma })`$ can be identified with a subset of $`\mathbb{Z}`$. The canonical homomorphism $`\tau _x`$ can be obtained as the restriction of weights from a maximal torus $`T\subset G`$ to a maximal torus $`T_1\subset \mathrm{\Gamma }`$ such that $`T_1\subset T`$. From now on we fix the tori $`T`$ and $`T_1`$, and all weights and roots of $`G`$ and $`\mathrm{\Gamma }`$ are considered with respect to $`T`$ and $`T_1`$. Throughout the text $`\epsilon _i`$ with $`1\le i\le r+1`$ for $`G=A_r(K)`$ and $`1\le i\le r`$ otherwise are weights of the standard realization of $`G`$ labelled as in \[3, ch. VIII, §13\]. Set $`e_i=\tau _x(\epsilon _i)`$. One can choose $`\mathrm{\Gamma }`$, $`T`$ and $`T_1`$ such that the restriction to $`\mathrm{\Gamma }`$ of the natural representation of $`G`$ is a direct sum of irreducible components with $`p`$-restricted highest weights (see comments in \[14, Section 3\]); $`e_i\ge e_j`$ for $`i<j`$; $`e_i\ge 0`$ if $`G=A_r(K)`$ and $`i\le (r+1)/2`$; and $`e_i\ge 0`$ for all $`i\le r`$ if $`G\ne A_r(K)`$. If $`H\subset G`$ is a semisimple subgroup generated by some root subgroups, then $`T_H=T\cap H`$ is a maximal torus in $`H`$. If $`T_1\subset T_H`$, we denote by the same symbol $`\tau _x`$ the homomorphism $`𝐗(H)\to \mathbb{Z}`$ determined by restricting weights from $`T_H`$ to $`T_1`$. This causes no confusion. If an element $`v`$ of some $`G`$-module is an eigenvector for $`T`$, we denote its weights with respect to $`T`$, $`T_H`$, and $`T_1`$ by $`\omega (v)`$, $`\omega _H(v)`$, and $`\omega _\mathrm{\Gamma }(v)`$. In what follows $`x`$ is conjugate to an element of $`G_m`$, $`|x|=p`$, $`m`$ and $`r-m`$ are such as in the assertion of Theorem 1, and $`\delta _i=\tau _x(\alpha _i)`$, $`1\le i\le r`$.
###### Lemma 3
Set $`l=[(m+2)/2]`$ for $`G=A_r(K)`$ and $`l=m`$ otherwise. Then $`c_x=\sum _{i=1}^l\delta _i`$ and $`c_x\le p-1`$.
Proof. Put $`k=l-1`$ for $`G=A_r(K)`$, $`m=2t`$, and $`k=l`$ for $`G=A_r(K)`$, $`m=2t+1`$. Our assumptions on $`e_i`$, $`r-m`$, and $`x`$ imply that $`e_i=0`$ for $`k<i<r+2-k`$ if $`G=A_r(K)`$ and $`e_i=0`$ for $`i>m`$ otherwise; notice that $`e_{k+1}=e_{k+2}=0`$ for $`G=A_r(K)`$. Now it follows from the definition of $`c_x`$ and the formulae in \[3, ch. VIII, §13\] that $`c_x=\sum _{i=1}^l\delta _i=e_1-e_{l+1}=e_1`$. As $`e_1`$ is a weight of a $`p`$-restricted $`\mathrm{\Gamma }`$-module, we have $`e_1<p`$. This yields the lemma.
Proof of Theorem 1. Set $`\omega =\omega (\phi )`$ and let $`\omega =\sum _{i=1}^ra_i\omega _i`$. It is clear that $`\omega \ne 0`$ as $`\tau _x(\omega )\ne 0`$. Define subgroups $`H_1`$ and $`H_2\subset G`$ as follows. For $`G=A_r(K)`$ set $`u=r-t+2`$ if $`m=2t`$ and $`r-t+1`$ if $`m=2t+1`$, $`\beta =\epsilon _{t+1}-\epsilon _u`$,
$$H_1=H(\alpha _1,\ldots ,\alpha _t,\beta ,\alpha _u,\ldots ,\alpha _r),\qquad H_2=H(\alpha _{t+2},\ldots ,\alpha _{u-2})$$
(we have $`H_1=H(\alpha _1,\epsilon _2-\epsilon _{r+1})`$ for $`m=2`$ and $`H_1=H(\epsilon _1-\epsilon _{r+1})`$ for $`m=1`$). For $`G=B_r(K)`$, $`C_r(K)`$, or $`D_r(K)`$ put $`\beta =\epsilon _m`$, $`2\epsilon _m`$, or $`\epsilon _{m-1}+\epsilon _m`$, respectively, and
$$H_1=H(\alpha _1,\ldots ,\alpha _{m-1},\beta ).$$
Next, set
$$H_2=H(\alpha _{m+1},\ldots ,\alpha _{r-1},\epsilon _{r-1}+\epsilon _r)$$
for $`G=B_r(K)`$ and
$$H_2=H(\alpha _{m+1},\ldots ,\alpha _r)$$
for $`G=C_r(K)`$ or $`D_r(K)`$ (here $`H_1=H(\beta )`$ for $`G=C_r(K)`$ and $`m=1`$). One easily observes that the sets of roots in brackets used to define $`H_1`$ and $`H_2`$ yield bases of the systems $`R(H_1)`$ and $`R(H_2)`$, respectively. Denote these bases by $`\mathcal{B}_i`$. In all cases $`H_1`$ is conjugate to $`G_m`$ in $`G`$. We have $`H_2\cong A_{r-m-1}(K)`$, $`D_{r-m}(K)`$, $`C_{r-m}(K)`$, or $`D_{r-m}(K)`$ for $`G=A_r(K)`$, $`B_r(K)`$, $`C_r(K)`$, or $`D_r(K)`$, respectively. It is clear that the subgroups $`H_1`$ and $`H_2`$ commute. Set $`H=H_1H_2`$. Let $`U_i=\langle 𝒳_\gamma \mid \gamma \in R^+,𝒳_\gamma \subset H_i\rangle `$, $`i=1,2`$, and $`U=U_1U_2`$. It is not difficult to conclude that $`U_i`$ is a maximal unipotent subgroup in $`H_i`$ and $`U`$ is such a subgroup in $`H`$. We can assume that $`x\in U_1`$, $`\mathrm{\Gamma }\subset H_1`$ and $`T_1\subset T_{H_1}`$. We shall write a weight $`\mu \in 𝐗(H)`$ in the form $`(\mu _1,\mu _2)`$ where $`\mu _i\in 𝐗(H_i)`$ is the restriction of $`\mu `$ to $`T_{H_i}`$. Set $`M=M(\omega )`$.
It is clear that $`n_V(x)=dim(x-1)^{p-1}V`$ for each $`H`$-module $`V`$. Taking this into account, it is not difficult to conclude the following. If $`0\subset W_1\subset \ldots \subset W_t=V`$ is a filtration of $`V`$, $`F_i=W_i/W_{i-1}`$, $`1\le i\le t`$, and $`n_{F_i}(x)=n_i`$, then
$$n_V(x)\ge \sum _{i=1}^{t}n_i.$$
(1)
First suppose that $`\phi \in Irr_p`$. Since passing to the dual representation does not influence the Jordan form of $`\phi (x)`$, one can assume that $`a_i\ne 0`$ for some $`i\le (r+1)/2`$ if $`G=A_r(K)`$. Since for $`p`$-large representations the estimates of \[12, Theorem 1.1\] hold, we also assume that $`\phi `$ is not $`p`$-large. Hence $`\langle \mu ,\alpha \rangle <p`$ for all $`\mu \in 𝐗(\phi )`$ and long roots $`\alpha `$ (for all $`\alpha `$ if $`G=A_r(K)`$ or $`D_r(K)`$). By the formulae for the maximal roots of the classical groups in \[2, Tables 1-4\], this forces that
$$\begin{array}{cc}\hfill a_1+\cdots +a_r<p& \text{for }G=A_r(K)\text{ or }C_r(K),\hfill \\ \hfill a_1+2a_2+\cdots +2a_{r-1}+a_r<p& \text{for }G=B_r(K),\hfill \\ \hfill a_1+2a_2+\cdots +2a_{r-2}+a_{r-1}+a_r<p& \text{for }G=D_r(K).\hfill \end{array}$$
(2)
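As a quick check of the second line of (2) (standard root-system arithmetic, not an addition to the argument), the maximal root of $`B_r`$ is the long root $`\tilde{\alpha }=\epsilon _1+\epsilon _2`$, and evaluating $`\omega =\sum a_i\omega _i`$ on it gives

$$\tilde{\alpha }=\alpha _1+2\alpha _2+\cdots +2\alpha _r,\qquad \langle \omega ,\tilde{\alpha }\rangle =a_1+2a_2+\cdots +2a_{r-1}+a_r<p,$$

the coefficient of $`a_r`$ dropping to 1 because $`\alpha _r`$ is short.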
Now we proceed to construct two composition factors $`M_1`$ and $`M_2`$ of the restriction $`M|H`$ such that $`n_{M_1}(x)\ge d(r-m)`$ and $`n_{M_2}(x)>0`$. This will be done for almost all $`\omega `$. In exceptional cases we shall find one factor $`M_1`$ such that $`n_{M_1}(x)>d(r-m)`$. By (1), this would yield the assertion of the theorem.
Let $`v\in M`$ be a nonzero highest weight vector. Put $`\mu _i=\omega _{H_i}(v)`$. The vector $`v`$ generates an indecomposable $`H`$-module $`V_1`$ with highest weight $`\mu =(\mu _1,\mu _2)`$. Using (2), one can deduce that $`\langle \mu _1,\beta \rangle <p`$ for all $`\beta \in \mathcal{B}_1`$. Here for $`G=B_r(K)`$ we take into account that $`m>1`$. Hence $`\mu _1`$ is $`p`$-restricted. Now assume that either $`G\ne B_r(K)`$, or $`a_i\ne 0`$ for some $`i<r`$. For such representations we construct another weight vector $`w\in M`$ that is fixed by $`U`$. Set $`l=t+1`$ for $`G=A_r(K)`$, $`m=2t`$; otherwise take $`l`$ as in Lemma 3. First suppose that $`a_j\ne 0`$ for some $`j\le l`$ (Case 1). Choose maximal such $`j`$ and put $`w=X_{-l}\cdots X_{-(j+1)}X_{-j}v`$. Now let $`a_j=0`$ for all $`j\le l`$ (Case 2). Our assumptions on $`a_i`$ imply that $`a_i\ne 0`$ for some $`i>l`$; furthermore, one can take $`i\le (r+1)/2`$ for $`G=A_r(K)`$ and $`i<r`$ for $`G=B_r(K)`$. Choose minimal such $`i`$ and set $`w=X_{-l}\cdots X_{-(i-1)}X_{-i}v`$ if $`G\ne D_r(K)`$ or $`i<r`$ and $`w=X_{-l}\cdots X_{-(r-3)}X_{-(r-2)}X_{-r}v`$ for $`G=D_r(K)`$ and $`i=r`$. It follows from \[12, Lemma 2.1(iii) and Lemma 2.9\] that in all cases $`w\ne 0`$. Using \[10, Lemma 72\] and analyzing the roots in $`\mathcal{B}_1`$ and $`\mathcal{B}_2`$ and the weight system $`𝐗(\phi )`$, we get that $`U`$ fixes $`w`$ in all situations. Here it is essential that the case $`G=B_r(K)`$ with $`\omega =a_r\omega _r`$ is excluded. In the latter case we cannot assert that $`𝒳_\beta `$ fixes $`w`$. Set $`\lambda _i=\omega _{H_i}(w)`$, $`i=1,2`$. Now it is clear that $`w`$ generates an indecomposable $`H`$-module $`V_2`$ with highest weight $`\lambda =(\lambda _1,\lambda _2)`$. We claim that $`\lambda _1`$ is $`p`$-restricted. Write down all the situations where $`\langle \lambda _1,\gamma \rangle \ne \langle \mu _1,\gamma \rangle `$ for some $`\gamma \in \mathcal{B}_1`$. We have $`\langle \lambda _1,\beta \rangle =\langle \mu _1,\beta \rangle -1`$ in Case 1 if $`j=l`$ and $`G\ne B_r(K)`$ or $`j=l-1`$ and $`G=D_r(K)`$ and in Case 2 for $`G\ne B_r(K)`$ and all $`i`$; and $`\langle \lambda _1,\beta \rangle =\langle \mu _1,\beta \rangle -2`$ for $`G=B_r(K)`$ both in Case 1 with $`j=l`$ and in Case 2. In Case 1 we also have $`\langle \lambda _1,\alpha _{j-1}\rangle =\langle \mu _1,\alpha _{j-1}\rangle +1`$ if $`j>1`$ and $`\langle \lambda _1,\alpha _j\rangle =\langle \mu _1,\alpha _j\rangle -1`$ if $`j<l`$. In Case 2 one gets $`\langle \lambda _1,\alpha _{l-1}\rangle =\langle \mu _1,\alpha _{l-1}\rangle +1`$ if $`l>1`$. In all other situations we have $`\langle \lambda _1,\gamma \rangle =\langle \mu _1,\gamma \rangle `$. Now apply (2) to conclude that $`\lambda _1`$ is $`p`$-restricted.
Set $`M_1=M(\mu )`$, $`M_2=M(\lambda )`$, $`M_1^j=M(\mu _j)`$, and $`M_2^j=M(\lambda _j)`$, $`j=1,2`$. Obviously, $`M_i`$ is a composition factor of $`V_i`$. It is well known that $`M_i=M_i^1\otimes M_i^2`$. It is clear that $`\tau _x(\mu _1)=\tau _x(\omega )\ge p`$. Since $`x\in H_1`$, we have $`\delta _i=0`$ if $`\alpha _i\in \mathcal{B}_2`$. So by Lemma 3,
$$\tau _x(\lambda _1)=\tau _x(\omega (w))\ge \tau _x(\omega )-\sum _{i=1}^{l}\delta _i=\tau _x(\omega )-c_x\ge p-1.$$
It follows from \[11, Theorem 1.1, Lemma 2.5, and Proposition 2.12\] that $`k_{M_i^1}(x)=p`$. Hence $`n_{M_i}(x)\ge dimM_i^2`$. One easily observes that $`M_1^2`$ and $`M_2^2`$ cannot both be trivial $`H_2`$-modules. Our assumptions on $`r-m`$ and \[5, Proposition 5.4.13\] imply that the dimension of a nontrivial irreducible $`H_2`$-module is at least $`d(r-m)`$. In the exceptional case where $`G=B_r(K)`$ and $`\omega =a_r\omega _r`$ we need to evaluate $`dimM_1^2`$. First let $`a_r>1`$. As above, $`X_{-r}v\ne 0`$. This implies that $`𝐗(M_1^2)`$ contains a dominant weight $`\mu _2-\alpha _r`$ and $`dimM_1^2`$ is greater than the size of the orbit of $`\mu _2`$ under the action of the Weyl group of $`H_2`$. The latter is equal to $`2^{r-m-1}\ge d(r-m)`$ for our values of $`r-m`$. By (1), this yields the assertion of the theorem for almost all $`\phi \in \mathrm{Irr}_p`$. It remains to consider the case where $`G=B_r(K)`$ and $`\omega =\omega _r`$. It is well known that then the restriction $`M|H_1`$ is a direct sum of $`2^{r-m}`$ $`H_1`$-modules $`N=M(\omega _m)`$. Since $`k_M(x)=p`$, we get $`k_N(x)=p`$ and $`n_M(x)\ge 2^{r-m}>d(r-m)`$.
Now suppose that $`\phi \in Irr\backslash Irr_p`$. By the Steinberg tensor product theorem \[9, Theorem 1.1\], $`\phi `$ can be represented in the form $`\otimes _{j=1}^s\phi _jFr^j`$ where $`Fr`$ is the Frobenius morphism of $`G`$ associated with raising elements of $`K`$ to the $`p`$th power and all $`\phi _j\in Irr_p`$. It is clear that the morphism $`Fr`$ does not influence the Jordan form of $`\phi (x)`$. Hence one can assume that $`\phi =\psi \otimes \theta `$ where $`\theta =\phi _jFr^j`$ for some $`j`$ and both $`\psi `$ and $`\theta `$ are nontrivial. Set $`a=\sigma _x(\omega (\psi ))`$, $`\nu =\omega (\phi _j)`$, $`b=\tau _x(\nu )`$ and define by $`\mu `$ the restriction of $`\nu `$ to $`T_H`$. Now it follows from the definitions of $`\sigma _x`$ and $`\tau _x`$ that $`\sigma _x(\omega )=a+b`$. By \[11, Theorem 1.1, Lemma 2.5 and Proposition 2.12\], $`k_\psi (x)=\mathrm{min}\{a+1,p\}`$ and $`k_\theta (x)=\mathrm{min}\{b+1,p\}`$. First suppose that $`a`$ or $`b\ge p-1`$. Set $`\rho =\psi `$ if $`a\ge p-1`$ and $`\rho =\theta `$ otherwise and denote by $`\pi `$ the remaining representation from the pair $`(\psi ,\theta )`$. Then $`k_\rho (x)=p`$ and \[4, ch. VIII, Lemma 2.2\] implies that $`n_\phi (x)\ge dim\pi `$. Let $`d(r)`$ be the value of $`d(r-m)`$ if one formally sets $`m=0`$. Then by \[5, Proposition 5.4.13\], $`dim\pi \ge d(r)>d(r-m)`$ which settles the case under consideration.
Now assume that both $`a`$ and $`b<p-1`$. Then $`k_\psi (x)=a+1`$ and $`k_\theta (x)=b+1`$. Since $`\sigma _x(\omega )\ge p-1+c_x`$, we have $`b>c_x`$. Arguing as for $`p`$-restricted $`\phi `$, we can and shall suppose that $`\langle \nu ,\alpha _i\rangle \ne 0`$ for some $`i\le (r+1)/2`$ if $`G=A_r(K)`$. Put $`M^{}=M(\nu )`$ and construct the composition factors $`M_i`$, $`i=1,2`$, of the restriction $`M^{}|H`$ as for $`p`$-restricted $`M`$ before. Transfer the notation $`\mu _1`$, $`\lambda _1`$, and $`M_j^i`$, $`i,j=1,2`$, to $`M^{}`$. Again we have the exceptional case $`G=B_r(K)`$ and $`\nu =a_r\omega _r`$ where we do not construct $`M_2`$ and consider $`M_1`$ only. Obviously, $`\tau _x(\mu _1)=b`$. As before, we deduce that $`\tau _x(\lambda _1)\ge b-c_x`$. By \[11, Theorem 1.1, Lemma 2.5, and Proposition 2.12\], $`k_{M_1}(x)=b+1`$ and $`k_{M_2}(x)\ge b+1-c_x`$. Let $`n_i`$ be the number of Jordan blocks of the maximal size in the canonical form of $`x`$ as an element of $`\mathrm{End}M_i`$, $`i=1,2`$. Looking at the realizations of $`M_i`$ as tensor products, one easily observes that $`n_i\ge dimM_i^2`$. Set $`F_1=M(\omega (\psi ))\otimes M_1`$, $`F_2=M(\omega (\psi ))\otimes M_2`$ and consider $`F_i`$ as $`H`$-modules in the natural way. In the general case the $`H`$-module $`M`$ has a filtration two of whose quotients are isomorphic to $`F_1`$ and $`F_2`$, respectively. In the exceptional case $`F_1`$ is a quotient of a submodule in $`M`$. Observe that $`a+k_{M_2}(x)\ge p`$. Using \[4, ch. VIII, Theorem 2.7\] that describes the canonical Jordan form of a tensor product of unipotent blocks, we obtain that $`k_{F_i}(x)=p`$ and $`n_{F_i}(x)\ge dimM_i^2`$. As for $`p`$-restricted $`M`$, we show that $`n_1\ge 2^{r-m}`$ if $`G=B_r(K)`$ and $`\nu =\omega _r`$ and conclude that $`dimM_1^2+dimM_2^2>d(r-m)`$ in the general case and $`dimM_1^2>d(r-m)`$ in the exceptional cases with $`a_r>1`$. Now (1) completes the proof.
Proof of Proposition 2. Let $`a`$, $`x`$, $`m`$, and $`c`$ be such as in the assertion of the proposition. Assume that $`p<ac\le p+c-1`$. Therefore we have $`(a-1)c\le p-1`$. Set $`M=M(\omega )`$ and denote by $`M_t`$ the weight subspace of weight $`t`$ in the $`\mathrm{\Gamma }`$-module $`M`$. It is clear that the Weyl group of $`\mathrm{\Gamma }`$ interchanges $`M_t`$ and $`M_{-t}`$; hence $`dimM_t=dimM_{-t}`$. Put $`e=(a-1)c`$, $`V_1=\oplus _{t>e}M_t`$, $`V_2=\oplus _{t<-e}M_t`$, and $`V=M_e`$. Set $`f=[(m+1)/2]`$ for $`G=A_r(K)`$, $`f=m`$ for $`G=B_r(K)`$ or $`C_r(K)`$, and $`f=m-1`$ for $`G=D_r(K)`$. Let $`v\in M`$ be a nonzero highest weight vector and put $`w=X_{-f}\cdots X_{-2}X_{-1}v`$. By \[12, Lemma 2.9\], $`w\ne 0`$. We need a subgroup $`S`$ which can be defined as follows. Put $`I=\{i\mid 1\le i\le r,\delta _i=0\}`$ and $`S=\langle 𝒳_i,𝒳_{-i}\mid i\in I\rangle `$. The canonical Jordan forms of $`x`$ in the standard realizations of $`G_m`$ and $`G`$ are well known. We have $`|x|=p`$ since the dimension of the first realization is at most $`p`$ due to our assumptions. Taking into account these Jordan forms, one easily obtains the values of $`\delta _i`$, $`1\le i\le r`$, and using Lemma 3, deduces the following facts: $`I=\{i\mid f+1\le i\le r-f\}`$ for $`G=A_r(K)`$ and $`m=2f`$, $`I=\{i\mid f+1\le i\le r\}`$ for $`G=B_r(K)`$ and $`D_r(K)`$, and $`S=H_2`$ in all other cases where $`H_2`$ is the subgroup defined in the proof of Theorem 1; $`c_x=\sum _{i=1}^f\delta _i=c`$, $`\tau _x(\omega )=ac`$; and $`w\in V`$. Next, observe that $`S\cong A_{r-m}(K)`$ for $`G=A_r(K)`$ and $`m=2f`$, $`S\cong B_{r-m}(K)`$ for $`G=B_r(K)`$, and $`S\cong D_{r-m+1}(K)`$ for $`G=D_r(K)`$. Our construction of the vector $`w`$ shows that $`𝒳_i`$ fixes $`w`$ if $`i\in I`$. This forces that $`w`$ generates an indecomposable $`S`$-module $`M_S`$ with highest weight $`\omega _S(w)`$. Then one immediately concludes that $`M_S\cong M(\omega _1)`$. This yields that $`dimM_S=r-m+1=d(r-m)+1`$ for $`G=A_r(K)`$ and $`m=2f`$, $`dimM_S=2(r-m)+1=d(r-m)+1`$ for $`G=B_r(K)`$, $`dimM_S=2(r-m+1)=d(r-m)+2`$ for $`G=D_r(K)`$, and $`dimM_S=d(r-m)`$ otherwise. It is clear that $`M_S\subset V`$. Denote by $`𝐗_f\subset 𝐗(M)`$ the subset of weights of the form $`\omega -\sum _{i=1}^fb_i\alpha _i`$ and by $`M_A`$ the irreducible $`A_f(K)`$-module with highest weight $`a\omega _1`$. By Smith’s theorem , for each $`\mu \in 𝐗_f`$ the dimension of the weight subspace $`M_\mu \subset M`$ coincides with that of the weight subspace in $`M_A`$ whose weight differs from $`a\omega _1`$ by the same linear combination of the simple roots. Hence $`dimM_\mu `$ does not depend upon $`r`$. Set $`W=\oplus _{\mu \in 𝐗_f}M_\mu `$. Since $`M`$ is an irreducible $`L`$-module and $`p>2`$, observe that $`M`$ is a linear span of vectors of the form $`X_{-i_s}\cdots X_{-i_2}X_{-i_1}v`$. Now, analyzing the weight structure of $`M`$, we conclude that $`V_1\subset W`$ and $`V=(V\cap W)\oplus M_S`$. This implies that $`dimV_1`$ ($`=dimV_2`$) and $`dim(V\cap W)`$ do not depend upon $`r`$.
It follows from \[10, Lemma 72\] that
$$(x-1)^{p-1}M_t\subset \oplus _{i\ge t+2p-2}M_i.$$
(3)
Let $`M_t\not\subset V_2`$. Then $`t\ge -e`$. Obviously, $`e<p-1`$ if $`ac<p-1+c_x`$ and $`e=p-1`$ for $`ac=p-1+c_x`$. Thus (3) implies that
$$(x-1)^{p-1}M_t\subset \oplus _{i>p-1}M_i\subset V_1$$
in the first case and
$$(x-1)^{p-1}M_t\subset \oplus _{i\ge p-1}M_i\subset V_1\oplus V$$
in the second case. This forces that $`n_M(x)\le dimV_2+dimV_1=2dimV_1`$ in the first case and $`n_M(x)\le 2dimV_1+dim(V\cap W)+dimM_S`$ in the second case. We have seen before that $`dimM_S=d(r-m)+u`$ with $`u=0`$, $`1`$, or $`2`$. Hence one can take $`N_G(a,m,p)=2dimV_1`$ and $`Q_G(a,m,p)=2dimV_1+dim(V\cap W)+u`$ to complete the proof.
###### Remark 4
For $`G=A_r(K)`$ or $`C_r(K)`$ we could give a shorter proof of Proposition 2 using the realization of $`\phi `$ in the $`a`$th symmetric power of the standard module (see \[7, 1.14 and 8.13\]), but we need the proof above for $`B_r(K)`$ and $`D_r(K)`$.
This research has been supported by the Institute of Mathematics of the National Academy of Sciences of Belarus in the framework of the State program “Mathematical structures” and by the Belarus Basic Research Foundation, Project F 98-180.
# Topological Evolution of Dynamical Networks: Global Criticality from Local Dynamics
## Abstract
We evolve the topology of an asymmetrically connected threshold network by a simple local rewiring rule: quiet nodes grow links, active nodes lose links. This leads to convergence of the average connectivity of the network towards the critical value $`K_c=2`$ in the limit of large system size $`N`$. How this principle could generate self-organization in natural complex systems is discussed for two examples: neural networks and regulatory networks in the genome.
PACS numbers: 05.65.+b, 64.60.Cn, 87.16.Yc, 87.23.-n
Networks of many interacting units occur in diverse areas as, for example, gene regulation, neural networks, food webs in ecology, species relationships in biological evolution, economic interactions, and the organization of the internet. For studying statistical mechanics properties of such complex systems, discrete dynamical networks provide a simple testbed for effects of globally interacting information transfer in network structures.
One example is the threshold network with sparse asymmetric connections. Networks of this kind were first studied as diluted, non-symmetric spin glasses and diluted, asymmetric neural networks . For the study of topological questions in networks, a version with discrete connections $`c_{ij}=\pm 1`$ is convenient and will be considered here. It is a subset of Boolean networks with similar dynamical properties. Random realizations of these networks exhibit complex non-Hamiltonian dynamics including transients and limit cycles . In particular, a phase transition is observed at a critical average connectivity $`K_c`$ with lengths of transients and attractors (limit cycles) diverging exponentially with system size for an average connectivity larger than $`K_c`$. A theoretical analysis is limited by the non-Hamiltonian character of the asymmetric interactions, such that standard tools of statistical mechanics do not apply . However, combinatorial as well as numerical methods provide a quite detailed picture about their dynamical properties and correspondence with Boolean Networks .
While basic dynamical properties of interaction networks with fixed architecture have been studied with such models, the origin of specific structural properties of networks in natural systems is often unknown. For example, the observed average connectivity in a nervous structure or in a biological genome is hard to explain in a framework of networks with a static architecture. For the case of regulation networks in the genome, Kauffman postulated that gene regulatory networks may exhibit properties of dynamical networks near criticality . However, this postulate does not provide a mechanism able to generate an average connectivity near the critical point. An interesting question is whether connectivity may be driven towards a critical point by some dynamical mechanism. In the following we will sketch such an approach in a setting of an explicit evolution of the connectivity of networks.
Network models of evolving topology, in general, have been studied with respect to critical properties earlier in other areas, e.g., in models of macro-evolution . Network evolution with a focus on gene regulation has been studied first for Boolean networks in , observing self-organization in network evolution, and later for threshold networks in . Combining the evolution of Boolean networks with game theoretical interactions is used to model networks in economy .
In a recent paper Christensen et al. introduce a static network with evolving topology of undirected links that explicitly evolves towards a critical connectivity in the largest cluster of the network. In particular they observe for a neighborhood-oriented rewiring rule that the connectivity of the largest cluster evolves towards the critical $`K_c=2`$ of a marginally connected network. Motivated by this work we here consider the topological evolution of threshold networks with asymmetric links to study how local rules may affect global connectivity of a network, including the entire set of clusters of the network. In the remainder of this Letter we define a threshold network model with a local, topology-evolving rule. Then numerical results are presented that indicate an evolution of topology towards a critical connectivity in the limit of large system size. Finally, we discuss these results with respect to other mechanisms of self-organization and point to possible links with interaction networks in natural systems.
Let us consider a network of $`N`$ randomly interconnected binary elements with states $`\sigma _i=\pm 1`$. For each site $`i`$, its state at time $`t+1`$ is a function of the inputs it receives from other elements at time $`t`$:
$`\sigma _i(t+1)=\text{sgn}\left(f_i(t)\right)`$ (1)
with
$`f_i(t)=\sum _{j=1}^{N}c_{ij}\sigma _j(t)+h.`$ (2)
The interaction weights $`c_{ij}`$ take discrete values $`c_{ij}=\pm 1`$, with $`c_{ij}=0`$ if site $`i`$ does not receive any input from element $`j`$. In the following, the threshold parameter $`h`$ is set to zero. The dynamics of the network states is generated by iterating this rule starting from a random initial condition, eventually reaching a periodic attractor (limit cycle or fixed point).
Then we apply the following local rewiring rule to a randomly selected node $`i`$ of the network:
If node $`i`$ does not change its state during the attractor, it receives a new non-zero link $`c_{ij}`$ from a random node $`j`$. If it changes its state at least once during the attractor, it loses one of its non-zero links $`c_{ij}`$.
Iterating this process leads to a self-organization of the average connectivity of the network.
To be more specific, let us now describe one of several possible realizations of such an algorithm in detail. We define the average activity $`A(i)`$ of a site $`i`$
$`A(i)=\frac{1}{T_2-T_1}\sum _{t=T_1}^{T_2}\sigma _i(t)`$ (3)
where the sum is taken over the dynamical attractor of the network defined by $`T_1`$ and $`T_2`$. For practical purposes, if the attractor is not reached after $`T_{max}`$ updates, $`A(i)`$ is measured over the last $`T_{max}/2`$ updates. This avoids exponential slowing down by long attractor periods for an average connectivity $`K>2`$. The algorithm is then defined as follows:
(1) Choose a random network with an average connectivity $`K_{ini}`$.
(2) Choose a random initial state vector $`\stackrel{}{\sigma }(0)=`$ $`(\sigma _1(0),\mathrm{},\sigma _N(0))`$.
(3) Calculate the new system states $`\stackrel{}{\sigma }(t),t=1,\mathrm{},T`$ according to eqn. (2), using parallel update of the $`N`$ sites.
(4) Once a previous state reappears (a dynamical attractor is reached) or otherwise after $`T_{max}`$ updates the simulation is stopped. Then change the topology of the network according to the following local rewiring rule:
(5) A site $`i`$ is chosen at random and its average activity $`A(i)`$ is determined.
(6) If $`|A(i)|=1`$, $`i`$ receives a new link $`c_{ij}`$ from a site $`j`$ selected at random, choosing $`c_{ij}=+1`$ or $`1`$ with equal probability. If $`|A(i)|<1`$, one of the existing non-zero links of site $`i`$ is set to zero.
(7) Finally, one non-zero entry of the connectivity-matrix is selected at random and its sign reversed.
(8) Go to step number 2 and iterate (a minimal code sketch of this loop is given below).
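The following is our illustrative sketch of steps 1–8, not the authors’ code; parameter choices such as $`N=64`$ are hypothetical, and the tie $`f_i=0`$ is broken towards $`+1`$ by convention:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_topology(N=64, K_ini=4.0, T_max=1000, steps=5000):
    # Step 1: random couplings c_ij in {-1, 0, +1} with mean connectivity K_ini.
    c = np.where(rng.random((N, N)) < K_ini / N,
                 rng.choice([-1, 1], size=(N, N)), 0)
    for _ in range(steps):
        sigma = rng.choice([-1, 1], size=N)            # step 2
        seen, history = {}, []
        for t in range(T_max):                          # steps 3-4
            key = sigma.tobytes()
            if key in seen:                             # attractor reached
                history = history[seen[key]:]
                break
            seen[key] = t
            history.append(sigma.copy())
            sigma = np.where(c @ sigma >= 0, 1, -1)     # Eqs. (1)-(2), h = 0
        else:
            history = history[T_max // 2:]              # no attractor: last T_max/2
        i = rng.integers(N)                             # step 5
        A = np.mean([s[i] for s in history])            # average activity, Eq. (3)
        if abs(A) == 1.0:                               # step 6: frozen site gains a link
            c[i, rng.integers(N)] = rng.choice([-1, 1])
        else:                                           # active site loses a link
            nz = np.flatnonzero(c[i])
            if nz.size:
                c[i, rng.choice(nz)] = 0
        nz = np.argwhere(c != 0)                        # step 7: flip one link's sign
        if len(nz):
            k, l = nz[rng.integers(len(nz))]
            c[k, l] *= -1
    return np.count_nonzero(c) / N                      # evolved connectivity K_ev

print(evolve_topology())
```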
The fluctuations introduced in step 7 as random sign reversals are motivated by structurally neutral noise often observed in natural systems. Omitting this step does not change the basic behavior of the algorithm; however, the distribution of the number of inputs per node then evolves away from a Poissonian, thereby increasing the fraction of nodes with many inputs. The resulting dynamics only differs from the original algorithm in a slightly larger connectivity $`K_{ev}`$ of the evolved networks. This effect vanishes as $`1/N`$ with increasing system size.
The typical picture arising from the model as defined above is shown in Fig. 1 for a system of size $`N=1024`$.
Independent of the initial connectivity, the system evolves towards a statistically stationary state with an average connectivity $`K_{ev}(N=1024)=2.55\pm 0.04`$. With varying system size we find that with increasing $`N`$ the average connectivity converges towards $`K_c`$ (which, for threshold $`h=0`$ as considered here, is found at $`K_c=2`$), see Fig. 2.
One observes the scaling relationship
$`K_{ev}(N)-2=cN^{-\delta }`$ (4)
with $`c=12.4\pm 0.5`$ and $`\delta =0.47\pm 0.01`$. Thus, in the large system size limit $`N\to \infty `$ the networks evolve towards the critical connectivity $`K_c=2`$.
The self-organization towards criticality observed in this model is different from currently known mechanisms exhibiting the amazingly general phenomenon of self-organized criticality (SOC) . Our model introduces a (new, and interestingly different) type of mechanism by which a system self-organizes towards criticality, here $`K\to K_c`$. This class of mechanisms lifts the notions of SOC to a new level. In particular, it exhibits considerable robustness against noise in the system. The main mechanism here is based on a topological phase transition in dynamical networks. To see this, consider the statistical properties of the average activity $`A(i)`$ of a site $`i`$ for a random network. It is closely related to the frozen component $`C(K)`$ of the network, defined as the fraction of nodes that do not change their state along the attractor. The average activity $`A(i)`$ of a frozen site $`i`$ thus obeys $`|A(i)|=1`$. In the limit of large $`N`$, $`C(K)`$ undergoes a transition at $`K_c`$ vanishing for larger $`K`$. With respect to the average activity of a node, $`C(K)`$ equals the probability that a random site $`i`$ in the network has $`|A(i)|=1`$. Note that this is the quantity which is checked stochastically by the local update rule in the above algorithm. The frozen component $`C(K,N)`$ is shown for random networks of two different system sizes $`N`$ in Fig. 3.
One finds that $`C(K,N)`$ can be approximated by
$`C(K,N)=\frac{1}{2}\left\{1-\mathrm{tanh}[\alpha (N)(K-K_0(N))]\right\}.`$ (5)
This describes the transition of $`C(K,N)`$ at an average connectivity $`K_0(N)`$ which depends only on the system size $`N`$.
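Fitting this sigmoidal form to measured frozen components is a standard two-parameter regression; a minimal sketch (with synthetic data standing in for the simulation results, which are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def frozen_component(K, alpha, K0):
    # Sigmoidal transition of the frozen component, Eq. (5)
    return 0.5 * (1.0 - np.tanh(alpha * (K - K0)))

# K_data, C_data: assumed arrays of connectivities and measured C(K, N)
K_data = np.linspace(1.0, 4.0, 16)
C_data = frozen_component(K_data, 2.4, 2.3) + 0.02 * np.random.randn(16)

(alpha_fit, K0_fit), _ = curve_fit(frozen_component, K_data, C_data, p0=(1.0, 2.0))
print(alpha_fit, K0_fit)
```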
One finds for the finite size scaling of $`K_0(N)`$ that
$`K_0(N)-2=aN^{-\beta }`$ (6)
with $`a=3.30\pm 0.17`$ and $`\beta =0.34\pm 0.01`$ (see Fig. 4), whereas the parameter $`\alpha `$ scales with system size as
$`\alpha (N)=bN^\gamma `$ (7)
with $`b=0.14\pm 0.016`$ and $`\gamma =0.41\pm 0.01`$. Thus we see that in the thermodynamic limit $`N\to \infty `$ the transition from the frozen to the chaotic phase becomes a sharp step function at $`K_0(\infty )=K_c`$. These considerations apply well to the evolving networks in the rewiring algorithm.
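Equations (5)-(7) together imply that the width of the transition region shrinks algebraically with system size; evaluating with the fitted central values (our arithmetic, for orientation only),

$$\mathrm{\Delta }K\sim \alpha (N)^{-1}=b^{-1}N^{-\gamma },\qquad \mathrm{\Delta }K(N=1024)\approx (0.14\times 1024^{0.41})^{-1}\approx 0.4.$$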
In addition to the rewiring algorithm as described in this Letter, we tested a number of different versions of the model. Including the transient in the measurement of the average activity $`A(i)`$ results in a similar overall behavior (where we allowed a few time steps for the transient to decouple from initial conditions). Another version succeeds using the correlation between two sites instead of $`A(i)`$ as a mutation criterion (this rule could be called “anti-Hebbian” as in the context of neural network learning). In addition, this version was further changed allowing different locations of mutated links, both, between the tested sites or just at one of the nodes. Some of these versions will be discussed in detail in a separate article . All these different realizations exhibit the same basic behavior as found for the model above. Thus, the mechanism proposed in this Letter exhibits considerable robustness.
An interesting question is whether a comparable mechanism may occur in natural complex systems, in particular, whether it could lead to observable consequences that cannot be explained otherwise.
One example where such mechanisms could occur is the regulation of connectivity density in neural systems. Activity-dependent attachment of synapses to a neuron is well known experimentally, for example in the form of the gating of synaptic changes by activity correlation between neurons . Such local attachment rules could provide a sufficient basis for a collective organization to occur as described in this Letter. For symmetric neural networks similar rules have been discussed, e.g., in the context of “Hebbian unlearning” suppressing spurious memories . In the here studied asymmetric networks, however, such rules appear to generate a completely new form of self-organization dynamics. As a consequence, an emerging average connectivity $`K_{ev}`$ could be stabilized to a specific value mostly determined by local properties of the dynamical elements of the system. It would be interesting to discuss whether synaptic density in biological systems could be regulated by such mechanisms.
Another biological observable of interest is the connectivity of gene-gene interactions in the expression of the genome as first studied by Kauffman . Whether this observable results from any such mechanism clearly is an open question. However, one may discuss whether biological evolution exerts selection pressure on the single-gene level that results in a selection rule similar to our algorithm: e.g., pressure for a frozen, practically non-functional regulatory gene to acquire a new function (gain a link), as well as for a highly active gene to reduce functionality (lose a link). First experimental estimates for the global observable of genome connectivity are available for E. coli with a value in the range 2–3 . While it is clearly too early to speculate about the mechanisms of global genome organization, it is interesting to note that the robust self-organizing algorithm presented here provides a mechanism that in principle predicts a value in this range.
To summarize, we study topological evolution of asymmetric dynamical networks on the basis of a local rewiring rule. We observe a network evolution with the average connectivity $`K`$ of the network evolving towards the critical connectivity $`K_c`$ without tuning. In the limit of large system size $`N`$ this convergence becomes exact. It is well conceivable that this form of global evolution of a network structure towards criticality might be found in natural complex systems.
# Ortho-Para Conversion in CH<sub>3</sub>F. Self-Consistent Theoretical Model

Presented at the VI International Symposium on Magnetic Field and Spin Effects in Chemistry and Related Phenomena, Emmetten, Switzerland, August 21-26, 1999.
## I Introduction
The study of nuclear spin isomers of molecules was started by the discovery of the ortho and para hydrogen in the late 1920s . It became clear already at that time that many other symmetrical molecules should have nuclear spin isomers too. Nevertheless, their investigation has been postponed by almost 60 years. The reason for this delay was severe difficulties in the enrichment of spin isomers. The situation is improving now (see the review in Ref. ) but yet we are at the very early stage of this research: in addition to the well-known spin isomers of H<sub>2</sub> only a few molecules have been investigated so far. Among them, the CH<sub>3</sub>F nuclear spin isomers occupy a special place being the most studied and understood.
The conversion of CH<sub>3</sub>F nuclear spin isomers has been explained in the framework of quantum relaxation , which is based on the intramolecular ortho-para state mixing and on the interruption of this mixing by collisions. This mechanism of spin conversion has a few striking features. The nuclear spin states of CH<sub>3</sub>F appeared to be extremely stable, surviving $`10^9-10^{10}`$ collisions. Each of these collisions changes the energy of the molecule by $`10-100`$ cm<sup>-1</sup> and shuffles the molecular rotational state substantially. Nevertheless, the model predicts that the spin conversion is governed by tiny intramolecular interactions having the energy $`10^{-6}`$ cm<sup>-1</sup>.
Under these circumstances, the validity of the proposed theoretical model should be checked with great care. This is especially important because the CH<sub>3</sub>F case gives us the first evidence of a new mechanism behind the nuclear spin conversion in molecules. Hydrogen spin conversion, which is the only other comprehensively studied case, is due to a completely different process based on direct collisional transitions between the ortho and para states of H<sub>2</sub>.
Presently there is a substantial amount of experimental data on CH<sub>3</sub>F isomer conversion (see and references therein). Theory and experiment on the CH<sub>3</sub>F isomer conversion were compared in a number of papers, but these comparisons never aimed at determining a complete set of parameters necessary for a quantitative description of the process. The purpose of the present paper is to construct such a self-consistent theoretical model of the CH<sub>3</sub>F isomer conversion.
## II Quantum relaxation
The CH<sub>3</sub>F molecule is a symmetric top having the C<sub>3v</sub> symmetry. The total spin of the three hydrogen nuclei in the molecule can be equal to $`I=3/2`$ (ortho isomers), or $`I=1/2`$ (para isomers). The values of the molecular angular momentum projection on the molecular symmetry axis ($`K`$) are specific for these spin isomers. Only $`K`$ divisible by 3 are allowed for the ortho isomers. All other $`K`$ are allowed for the para isomers. Consequently, the rotational states of CH<sub>3</sub>F form two subspaces which are shown in Fig. 1 for the particular case of the <sup>13</sup>CH<sub>3</sub>F molecules.
Let us briefly recall the physical picture of the CH<sub>3</sub>F spin conversion by quantum relaxation. Suppose that a test molecule was placed initially in the ortho subspace of the molecular states (Fig. 1). Due to collisions in the bulk the test molecule will undergo fast rotational relaxation inside the ortho subspace. This running up and down along the ortho ladder proceeds until the molecule reaches the ortho state $`m`$ which is mixed with the para state $`n`$ by the intramolecular perturbation $`\widehat{V}`$. Then, during the free flight just after this collision, the perturbation $`\widehat{V}`$ mixes the para state $`n`$ with the ortho state $`m`$. Consequently, the next collision is able to move the molecule to other para states and thus to localize it inside the para subspace. Such mechanism of spin isomer conversion was proposed in the theoretical paper .
The quantum relaxation of spin isomers can be quantitatively described in the framework of the kinetic equation for density matrix . Let us consider first a free molecule which is not subjected to an external field. One needs to split the molecular Hamiltonian into two parts
$$\widehat{H}=\widehat{H}_0+\hbar \widehat{V},$$
(1)
where the main part of the Hamiltonian, $`\widehat{H}_0`$, has pure ortho and para states as its eigenstates; the perturbation $`\widehat{V}`$ mixes the ortho and para states. In first-order perturbation theory the nuclear spin conversion rate, $`\gamma `$, is given by
$$\gamma =\sum _{\alpha ^{}\in p,\alpha \in o}\frac{2\mathrm{\Gamma }_{\alpha ^{}\alpha }|V_{\alpha ^{}\alpha }|^2}{\mathrm{\Gamma }_{\alpha ^{}\alpha }^2+\omega _{\alpha ^{}\alpha }^2}\left(W_p(\alpha ^{})+W_o(\alpha )\right),$$
(2)
where $`\mathrm{\Gamma }_{\alpha ^{}\alpha }`$ is the decay rate of the off-diagonal density matrix element $`\rho _{\alpha ^{}\alpha }`$ ($`\alpha ^{}\in `$ para; $`\alpha \in `$ ortho); $`\hbar \omega _{\alpha ^{}\alpha }`$ is the energy gap between the states $`\alpha ^{}`$ and $`\alpha `$; $`W_p(\alpha ^{})`$ and $`W_o(\alpha )`$ are the Boltzmann factors of the corresponding states. The parameters $`\mathrm{\Gamma }_{\alpha ^{}\alpha }`$, $`V_{\alpha ^{}\alpha }`$, and $`\omega _{\alpha ^{}\alpha }`$ are crucial for the quantitative theoretical description of the <sup>13</sup>CH<sub>3</sub>F spin isomer conversion.
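Numerically, Eq. (2) is a sum of Lorentzian-weighted terms over the mixed level pairs; a minimal sketch (the input numbers below are made up for illustration and are not the paper’s parameters):

```python
import numpy as np

def conversion_rate(V, Gamma, omega, W_para, W_ortho):
    """Eq. (2): sum of Lorentzian-weighted ortho-para mixing terms.

    V, Gamma, omega: arrays over mixed ortho-para level pairs (mixing
    matrix elements, decoherence rates, level gaps, all in angular-
    frequency units); W_para, W_ortho: Boltzmann factors of the states.
    """
    return np.sum(2.0 * Gamma * np.abs(V) ** 2 / (Gamma ** 2 + omega ** 2)
                  * (W_para + W_ortho))

# Illustrative numbers only: one dominant near-degenerate pair.
print(conversion_rate(np.array([4.3e4]),             # |V| ~ 2*pi*7 kHz
                      np.array([1.9e8]),             # Gamma at 1 Torr, s^-1
                      np.array([2 * np.pi * 130.99e6]),
                      np.array([1e-2]), np.array([1e-2])))
```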
All previous comparisons between the experiment on the CH<sub>3</sub>F spin conversion and the theory were performed using “total” rates of conversion which summarize all contributions to the rate from many ortho-para level pairs. The “total” rate is just a single number and obviously cannot provide unambiguous determination of all parameters which are present in the expression (2). One may combine the experimental data on “total” rates with theoretical calculations of some parameters, but this is not easy. In this case one has to perform extensive calculations of the intramolecular ortho-para state mixing. It is even more difficult to calculate the decoherence rates $`\mathrm{\Gamma }_{\alpha ^{}\alpha }`$. Consequently, development of a self-consistent model of the nuclear spin conversion in which all parameters are unambiguously determined should be based on a different approach.
## III Level–crossing resonances
The theoretical model of spin conversion predicts strong dependence of the conversion rate, $`\gamma `$, on the level spacing $`\omega _{\alpha ^{}\alpha }`$ (see Eq. (2)). This can be used to single out the contribution to the conversion from each level pair, which should substantially simplify the quantitative comparison between theory and experiment. It was proposed in and performed in to use the Stark effect for crossing the ortho and para states of CH<sub>3</sub>F. These crossings result in a sharp increase of the conversion rate $`\gamma `$, giving the conversion spectra when the electric field is varied. The experimental data are presented in Fig. 2. It is evident that such a spectrum contains much more information than the ”total” conversion rate, which is just a single number.
Comparison of the conversion spectrum in Fig. 2 with the theory needs a modification of the model in order to incorporate the Stark effect. A homogeneous electric field partially lifts the degeneracy of the $`\alpha `$-states of CH<sub>3</sub>F (see Appendix). The new states, the $`\mu `$-basis, can be found in a standard way :
$$|\mu >=|\beta ,\xi >|\sigma _F>|\sigma _C>;\qquad \xi =0,1.$$
(3)
Because the electric field in the experiment is relatively small, it is sufficient to consider only matrix elements of the Stark perturbation diagonal in the angular momentum $`J`$ when calculating the $`\mu `$-states. The energies of the $`\mu `$-states are given by the expression
$$E(\mu )=E_{free}(J,K)+(-1)^\xi \frac{K|M|}{J(J+1)}|d|\mathcal{E},$$
(4)
where $`E_{free}(J,K)`$ is the energy of the free molecule; $`d`$ is the molecular permanent electric dipole moment; $`\mathcal{E}`$ is the electric field strength. The new states are still degenerate with respect to the spin projections $`\sigma `$, $`\sigma _F`$, and $`\sigma _C`$, and to the sign of $`M`$. An account of the Stark effect in the spin conversion model is straightforward: Eq. (2) should be rewritten in the $`\mu `$-basis with the level energies determined by Eq. (4).
## IV Fitting of the experimental data
Nuclear spin conversion in <sup>13</sup>CH<sub>3</sub>F at zero electric field is governed almost completely by mixing of only two level pairs ($`J^{}`$=11, $`K^{}`$=1)–($`J`$=9, $`K`$=3) and (21,1)–(20,3) . The spectrum presented in Fig. 2 is produced by crossings of the $`M`$-sublevels of the para (11,1) and ortho (9,3) states. This pair of states is mixed by the spin-spin interaction between the molecular nuclei . There is no contribution to the mixing of this level pair from the spin-rotation interaction because of the selection rule for the spin-rotation interaction, $`|\mathrm{\Delta }J|\le 1`$ . This is fortunate because the spin-spin interaction can be calculated rather accurately. Contrary to that, the spin-rotation interaction in CH<sub>3</sub>F is known only approximately. For more details on the spin-rotation contribution to the CH<sub>3</sub>F spin conversion see Refs.
The second pair of ortho-para states, (21,1)–(20,3), which is also important for the spin conversion in <sup>13</sup>CH<sub>3</sub>F at zero electric field, is mixed by both the spin-spin and spin-rotation interactions. The magnitude of the latter is presently unknown. Nevertheless, it does not complicate the fitting procedure because in the vicinity of the (11,1)–(9,3) resonances presented in Fig. 2, the (21,1)–(20,3) pair gives a very small and almost constant contribution.
Let us now find an analytical expression for modelling the experimental data. We start by analyzing the contribution to the conversion rate produced by the level pair (11,1)–(9,3), which will be denoted as $`\gamma _a(\mathcal{E})`$. This contribution can be obtained using the results of Refs. :
$`\gamma _a(\mathcal{E})=\sum _{M^{}\in p;M\in o}\frac{2\mathrm{\Gamma }|V_{M^{}M}|^2}{\mathrm{\Gamma }^2+\omega _{M^{}M}^2(\mathcal{E})}\left(W_p(\mu ^{})+W_o(\mu )\right);`$ (5)
$`|V_{M^{}M}|^2=(2J+1)(2J^{}+1)\left(\begin{array}{ccc}J^{}& J& 2\\ -K^{}& K& K^{}-K\end{array}\right)^2\left(\begin{array}{ccc}J^{}& J& 2\\ -M^{}& M& M^{}-M\end{array}\right)^2𝒯^2.`$ (10)
Here $`V_{M^{}M}\equiv <\mu ^{}|V|\mu >`$ are the matrix elements of the perturbation $`\widehat{V}`$ in which only $`M`$-indexes were shown explicitly; (:::) stands for the 3j-symbol; $`𝒯`$ is the magnitude of the spin-spin interaction. Note that the selection rules for the ortho-para state mixing by the spin-spin interaction result from Eq. (10): $`|\mathrm{\Delta }K|,|\mathrm{\Delta }J|,|\mathrm{\Delta }M|\le 2`$. In the fitting procedure $`𝒯`$ will be considered as an adjustable parameter. In Eq. (10) we have assumed all $`\mathrm{\Gamma }_{M^{}M}`$ to be equal: $`\mathrm{\Gamma }_{M^{}M}\equiv \mathrm{\Gamma }`$. This property of $`\mathrm{\Gamma }`$ is a consequence of the spherical symmetry of the media. The decoherence decay rate $`\mathrm{\Gamma }`$ is another unknown parameter which needs to be determined.
The spacing between the $`M^{}`$ and $`M`$ states in an electric field follows directly from Eq. (4):
$$\omega _{M^{}M}(\mathcal{E})=\omega _0+\left(\frac{K^{}|M^{}|}{J^{}(J^{}+1)}-\frac{K|M|}{J(J+1)}\right)|d|\mathcal{E},$$
(11)
where $`\omega _0`$ is the gap between the states ($`J^{}`$,$`K^{}`$) and ($`J`$,$`K`$) at zero electric field. We have considered in Eq. (11) only pairs of states which have $`\xi ^{}=\xi `$. They are the only pairs which contribute to the spectrum in the electric field range of Fig. 2. The level spacing $`\omega _0`$ will be considered as an adjustable parameter in the fitting. The dipole moment of <sup>13</sup>CH<sub>3</sub>F in the ground state, which is necessary for the calculation of $`\omega _{M^{}M}(\mathcal{E})`$, was determined very accurately from the laser Stark spectroscopy of <sup>13</sup>CH<sub>3</sub>F and was found to be $`d=1.8579\pm 0.0006`$ D .
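For orientation, Eq. (11) fixes the electric fields at which individual $`M^{}`$–$`M`$ sublevel pairs cross ($`\omega _{M^{}M}=0`$). A small script of the following kind locates such crossings; the unit conversion 1 D·(V/cm) $`\approx `$ 0.5034 MHz and the resulting field values are our own illustrative arithmetic, not numbers quoted in this paper:

```python
# Crossing fields of the (J'=11, K'=1) para and (J=9, K=3) ortho sublevels,
# from omega_0 + [K'|M'|/J'(J'+1) - K|M|/J(J+1)] * d * E = 0  (Eq. 11).
OMEGA0_MHZ = 130.99          # zero-field gap
D_DEBYE = 1.8579             # permanent dipole moment
MHZ_PER_D_V_CM = 0.5034      # 1 Debye * (V/cm) expressed in MHz

def crossing_field(M_para, M_ortho, Jp=11, Kp=1, J=9, K=3):
    slope = (Kp * abs(M_para) / (Jp * (Jp + 1))
             - K * abs(M_ortho) / (J * (J + 1))) * D_DEBYE * MHZ_PER_D_V_CM
    return None if slope >= 0 else -OMEGA0_MHZ / slope   # V/cm

for Mp, M in [(9, 9), (11, 9), (7, 8)]:                  # |Delta M| <= 2
    print(Mp, M, crossing_field(Mp, M))
```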
At zero electric field the level pair (21,1)–(20,3) contributes nearly 30% to the total conversion rate . At electric fields where $`\gamma _a(\mathcal{E})`$ has peaks, this contribution is of the order of 10<sup>-2</sup> in comparison with $`\gamma _a(\mathcal{E})`$. The first crossing of the pair (21,1)-(20,3) occurs at $`4000`$ V/cm, thus having its peaks far away from the electric field range of Fig. 2. In the electric field range of Fig. 2 (1–1200 V/cm) the contribution from the pair (21,1)-(20,3) is changing by 10% only. Consequently, in the fitting procedure the (21,1)-(20,3) contribution is assumed to be constant. This quantity will be denoted as $`\gamma _b`$.
To summarize, the function which will be used to model the experimental data is
$$\gamma (\mathcal{E})=\gamma _a(\mathcal{E})+\gamma _b.$$
(12)
This function contains adjustable parameters $`𝒯`$, $`\mathrm{\Gamma }`$, $`\omega _0`$, and $`\gamma _b`$.
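Schematically, the fit described below is a standard nonlinear least-squares problem over these four parameters. A toy sketch (our illustration: a single Lorentzian term stands in for the full sum of Eqs. (5)-(12), and the synthetic data merely exercise the machinery):

```python
import numpy as np
from scipy.optimize import least_squares

def gamma_model(E, T, Gamma, omega0, gamma_b, slope=-0.2168e6):
    # Toy single-crossing stand-in for Eq. (12): one Lorentzian whose gap
    # tunes linearly with the field E (slope in Hz per V/cm).
    omega = 2 * np.pi * (omega0 + slope * E)
    return 2 * Gamma * T**2 / (Gamma**2 + omega**2) + gamma_b

def residuals(p, E, g, sigma_rel=0.07):
    return (gamma_model(E, *p) - g) / (sigma_rel * g)   # 7% point errors

E = np.linspace(1, 1200, 60)                             # V/cm
g = gamma_model(E, 4.4e5, 1.9e8, 130.99e6, 2e-3)         # synthetic "data"
fit = least_squares(residuals, x0=(4e5, 2e8, 1.30e8, 1e-3), args=(E, g))
print(fit.x)
```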
The result of the least-squares fit is shown in Fig. 2 by the solid line. The error of the individual experimental points in Fig. 2 was estimated as 7%. The values of the parameters are given in Table 1, where one standard deviation of statistical error is indicated.
The electric field in the Stark cell was determined in the experiment by measuring the voltage applied to the electrodes and assuming the distance between them equal to 4.18 mm, which is the spacer thickness. It was found out after the experiment was performed that the thickness of the glue used to attach the Stark electrodes was not negligible. The updated spacing between the electrodes in the Stark cell is $`l=4.22\pm 0.02`$ mm. This correction of the spacing gives a 1% systematic decrease of the experimental electric field values given in . This shift is taken into account in Fig. 2.
## V Theoretical estimation of the parameters
Let us compare the parameters obtained in the previous section with their theoretical estimates. We start from the analysis of the level spacing $`\omega _0`$. The best sets of the ground state molecular parameters of <sup>13</sup>CH<sub>3</sub>F are given in Ref. . The spacing between the levels (11,1) and (9,3) is presented in Table 1, where the set having the most accurate molecular parameter $`A_0`$ was used. The theoretical value appears to be close to the experimental one obtained from the spin conversion spectra. The difference between them is
$$\omega _0(exp)-\omega _0(theor)=1.0\pm 0.3\;\text{MHz},$$
(13)
which is less than 1% in comparison with $`\omega _0`$ itself.
Next, we calculate the parameter $`𝒯`$ which characterizes the spin-spin mixing of the level pair (11,1)–(9,3) in <sup>13</sup>CH<sub>3</sub>F. The spin-spin interaction between the two magnetic dipoles $`𝐦_1`$ and $`𝐦_2`$ separated by the distance $`𝐫`$ has the form :
$`\hbar \widehat{V}_{12}=P_{12}\,\widehat{𝐈}^{(1)}\widehat{𝐈}^{(2)}\text{:}\,\text{T}^{(12)},`$ (14)
$`T_{ij}^{(12)}=\delta _{ij}-3n_in_j;\qquad P_{12}=m_1m_2/r^3I^{(1)}I^{(2)},`$ (15)
where $`\widehat{𝐈}^{(1)}`$ and $`\widehat{𝐈}^{(2)}`$ are the spin operators of the particles 1 and 2, respectively; n is the unit vector directed along r; $`i`$ and $`j`$ are the Cartesian indexes.
For the spin-spin mixing of the ortho and para states in <sup>13</sup>CH<sub>3</sub>F one has to take into account the interaction between the three hydrogen nuclei ($`\widehat{V}_{HH}`$), between the three hydrogen and fluorine nuclei ($`\widehat{V}_{HF}`$), and between the three hydrogen and carbon nuclei ($`\widehat{V}_{HC}`$). Thus the total spin-spin interaction responsible for the mixing in <sup>13</sup>CH<sub>3</sub>F is
$$\widehat{V}_{SS}=\widehat{V}_{HH}+\widehat{V}_{HF}+\widehat{V}_{HC}.$$
(16)
The complete expressions for all components of $`\widehat{V}_{SS}`$ can be written by using Eq. (15) for the spin-spin interaction between two particles. For example, for $`\widehat{V}_{HF}`$ one has
$$\widehat{V}_{HF}=P_{HF}\sum _n\widehat{𝐈}^{(n)}\widehat{𝐈}^F\text{:}\,𝐓^{nF};\qquad n=1,2,3.$$
(17)
Here $`P_{HF}`$ is the scaling factor analogous to $`P_{12}`$ in Eq. (15); $`n`$ refers to the hydrogen nuclei in the molecule.
$`𝒯`$ can be calculated in a way similar to that used previously . It gives
$$|𝒯|^2=3|P_{HH}𝒯_{2,2}^{(12)}|^2+2|P_{HF}𝒯_{2,2}^{1F}|^2+2|P_{HC}𝒯_{2,2}^{1C}|^2.$$
(18)
Here $`𝒯_{2,2}^{1q}`$ is the spherical component of the second rank tensor T<sup>1q</sup> calculated in the molecular system of coordinates. The superscripts $`1q`$ indicate the interacting particles: 1 refers to the hydrogen nucleus H<sup>(1)</sup> and $`q`$ refers to the nucleus of H<sup>(2)</sup>, or F, or C.
The calculation of $`𝒯`$ needs the knowledge of the molecular structure. We used the ground state structure of <sup>13</sup>CH<sub>3</sub>F determined in : $`r_{CF}=1.390(1)`$ Å, $`r_{CH}=1.098(1)`$ Å, and $`\beta (FCH)=108.7^{\circ }(2)`$. The numbers in parentheses represent the error bars in units of the last digit. By using these parameters one can obtain the value of $`𝒯`$, which is given in Table 1. The difference between the experimental and theoretical values of $`𝒯`$ is equal to
$$𝒯_{exp}-𝒯_{theor}=5.1\pm 0.5\;\text{kHz}.$$
(19)
## VI Discussion
The small difference between the experimental and theoretical values of $`\omega _0`$ unambiguously confirms that the mixed ortho-para level pair (9,3)–(11,1) was determined correctly. From the spectroscopic data one can conclude that there are no other ortho-para level pairs which can mimic the level spacing $`\omega _0=130.99`$ MHz. This remains true even if one takes into account all ortho-para level pairs, ignoring the restrictions imposed by the selection rules for the ortho-para state mixing.
The difference between the experimental and theoretical values of the level spacing at zero electric field, $`\omega _0`$, is only $`1.0\pm 0.3`$ MHz. The main error in the theoretical value of $`\omega _0`$ is caused by the error in the molecular parameter $`A_0`$. It gives nearly half of the error indicated in Table 1. On the other hand, the $`J`$ and $`K`$ dependences of the molecular electric dipole moment are too small to affect our determination of the theoretical value of $`\omega _0`$. It is possible that the experimental value of $`\omega _0`$ is affected by the pressure shift, whose magnitude we presently do not know. Further investigations can make the frequency gap between the states (9,3) and (11,1) more precise.
The difference between the experimental and theoretical values of $`𝒯`$ is rather small ($`\sim `$7%) but well outside the statistical error. This difference may originate from our method of calculating $`𝒯_{theor}`$, in which we used the molecular structure (bond lengths and angles) averaged over the ground state molecular vibration. A more correct procedure would be to average an exact expression for $`𝒯`$ over the molecular vibration. This requires rather extensive calculations.
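For scale, combining the two numbers just quoted (our arithmetic) gives the magnitude of the mixing itself:

$$𝒯\approx \frac{5.1\;\text{kHz}}{0.07}\approx 70\;\text{kHz}.$$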
There are a few contributions to the systematic error of $`𝒯_{exp}`$. The response time of the setup used to measure the concentration of ortho molecules ($`\simeq 1`$ sec) was not taken into account in the processing of the experimental data. This gives a $`\simeq 2\%`$ systematic decrease in the value of $`𝒯_{exp}`$. Another few percent of systematic error may appear due to the procedure employed in to determine the conversion rate inside the Stark cell. This procedure relies on the ratio of the Stark cell volume to the volume outside the electric field. Taking these circumstances into account, we estimate that a difference of up to $`\simeq 10\%`$ between the experimental and theoretical values of $`𝒯`$ can be explained by the systematic errors. Despite this difference, it is rather safe to conclude that our analysis has proven that the levels (9,3) and (11,1) are indeed mixed by the spin-spin interaction between the molecular nuclei. It is impressive that the level-crossing spectrum in the <sup>13</sup>CH<sub>3</sub>F isomer conversion has allowed the hyperfine spin-spin coupling to be measured with a statistical error of only 0.5 kHz.
Comparison between the measured spectrum and the model supports our choice of $`\mathrm{\Gamma }_{M^{}M}`$ independent of $`M`$ and $`M^{}`$. The independence of this parameter of $`M`$ is a direct consequence of the spatial isotropy of the medium. The independence of $`M^{}M`$ is more intricate and will be discussed in more detail elsewhere.
The value of $`\mathrm{\Gamma }`$ obtained from the fitting procedure, $`\mathrm{\Gamma }=(1.9\pm 0.1)\times 10^8`$ s<sup>-1</sup>/Torr, is close to the level population decay rate $`1.0\times 10^8`$ s<sup>-1</sup>/Torr measured in Ref. for the state ($`J`$=5, $`K`$=3) of <sup>13</sup>CH<sub>3</sub>F. The factor 2 difference is not surprising: $`\mathrm{\Gamma }`$ refers to the decay rate of the off-diagonal density matrix element $`\rho _{\mu ^{}\mu }`$ between the states (11,1) and (9,3), which should differ from the population decay rate. In addition, the rotational quantum numbers in the two cases are different.
The column designated $`\gamma (0)`$ in Table 1 gives the rates at zero electric field. The “theoretical value” is the magnitude of $`\gamma (0)`$ given by the solid line in Fig. 2. The theoretical value agrees well with the experimental one from Ref. . Finally, we would like to mention that our analysis of the spin conversion spectrum has allowed us to disentangle for the first time the contributions to the conversion rate which arise from the mixing of the two level pairs (9,3)–(11,1) and (20,3)–(21,1).
## VII Conclusions
We have performed the first quantitative comparison of the level-crossing spectrum of the nuclear spin conversion in <sup>13</sup>CH<sub>3</sub>F with the theoretical model. This approach has allowed us to single out the contribution to the spin conversion caused by the mixing of one particular pair of ortho-para rotational states of the molecule and has confirmed unambiguously that the mechanism of the intramolecular state mixing is the spin-spin interaction between the molecular nuclei.
All important parameters of the theoretical model which describe the nuclear spin conversion in <sup>13</sup>CH<sub>3</sub>F due to the spin-spin mixing of the ortho-para level pair (9,3)–(11,1) have been determined quantitatively. These parameters are the decoherence rate, $`\mathrm{\Gamma }`$, the spin-spin mixing strength, $`𝒯`$, the level spacing, $`\omega _0`$, and the separate contributions to the conversion rate from the two level pairs (9,3)–(11,1) and (20,3)–(21,1). While the decoherence rate $`\mathrm{\Gamma }`$ is difficult to estimate on the basis of independent information, the experimental values for the spin-spin mixing, $`𝒯`$, and the level spacing, $`\omega _0`$, are found to be close to their theoretical values. These results prove that the nuclear spin conversion in <sup>13</sup>CH<sub>3</sub>F molecules is indeed governed by quantum relaxation.
## Acknowledgments
This work was made possible by financial support from the Russian Foundation for Basic Research (RFBR), grant No. 98–03–33124a, and the Région Nord Pas de Calais, France.
## VIII Appendix
The CH<sub>3</sub>F quantum states in the ground electronic and vibrational state can be classified as follows . CH<sub>3</sub>F is a rigid symmetric top, but it is more transparent to take molecular inversion into account and classify the states in the D<sub>3h</sub> symmetry group. First, one has to introduce an additional (molecular) system of coordinates whose orientation is defined by the numbered hydrogen nuclei, with the $`z`$-axis directed along the molecular symmetry axis.
Next, one introduces the states
$$|\beta >\equiv |J,K,M>|I,\sigma ,K>;K\ge 0,$$
(20)
which are invariant under cyclic permutation of the three hydrogen nuclei: $`P_{123}|\beta >=|\beta >`$. In Eq. (20), $`|J,K,M>`$ are the familiar rotational states of a symmetric top, characterized by the angular momentum ($`J`$), its projection ($`K`$) on the $`z`$-axis of the molecular system of coordinates and its projection ($`M`$) on the laboratory quantization axis $`Z`$. $`I`$ and $`\sigma `$ are the total spin of the three hydrogen nuclei and its projection on the $`Z`$-axis, respectively. The explicit expression for the spin states $`|I,\sigma ,K>`$ is given in .
Permutation of any two hydrogen nuclei in CH<sub>3</sub>F inverts the $`z`$-axis of the molecular system of coordinates. Consequently, the action of such an operation ($`P_{23}`$, for instance) on the molecular states reads $`P_{23}|\beta >=|\overline{\beta }>`$, where $`\overline{\beta }`$ denotes the set $`\{J,K,M,I,\sigma \}`$ with the sign of $`K`$ reversed. Note that the complete set of molecular states comprises both the $`\beta `$ and $`\overline{\beta }`$ sets.
Using the states $`|\beta >`$ and $`|\overline{\beta }>`$ one can construct the states which have the proper symmetry with respect to the permutation of any two hydrogen nuclei:
$$|\beta ,\kappa >=\frac{1}{\sqrt{2}}\left[1+(-1)^\kappa P_{23}\right]|\beta >;\kappa =0,1.$$
(21)
The action of the permutation of two hydrogen nuclei on the state $`|\beta ,\kappa >`$ is defined by the rule $`P_{23}|\beta ,\kappa >=(-1)^\kappa |\beta ,\kappa >`$, and by similar relations for the permutations of the other two pairs of hydrogen nuclei.
In the next step, one has to take into account the symmetric ($`|s=1>`$) and antisymmetric ($`|s=0>`$) inversion states. The action of the permutation of the two hydrogen nuclei on these states, for example $`P_{23}`$, reads
$$P_{23}|s=0>=-|s=0>;P_{23}|s=1>=|s=1>.$$
(22)
Evidently, the cyclic permutation of the three hydrogen nuclei of the molecule does not change the inversion states.
The total spin-rotation states of CH<sub>3</sub>F should be antisymmetric under permutation of any two hydrogen nuclei, because protons are fermions. Consequently, the only allowed states of CH<sub>3</sub>F are $`|\beta ,\kappa =s>|s>`$.
Finally, the description of the CH<sub>3</sub>F states should be completed by adding the spin states of the fluorine and carbon (<sup>13</sup>C) nuclei, both having spin 1/2:
$$|\alpha >=|\beta ,\kappa =s>|s>|\sigma _F>|\sigma _C>,$$
(23)
where $`\sigma _F`$ and $`\sigma _C`$ are the $`Z`$-projections of the F and <sup>13</sup>C nuclear spins, respectively. In the following, we will denote the states (23) of a free molecule as the $`\alpha `$-basis. For rigid symmetric tops such as CH<sub>3</sub>F, the states $`|\alpha >`$ are degenerate in the quantum numbers $`s`$, $`M`$, $`\sigma `$, $`\sigma _F`$, $`\sigma _C`$.
Table 1. Experimental and theoretical parameters of the nuclear spin conversion in <sup>13</sup>CH<sub>3</sub>F by quantum relaxation.
| | $`\omega _0/2\pi `$ | $`𝒯`$ | $`\mathrm{\Gamma }`$ | $`\gamma (0)`$ | $`\gamma _b`$ |
| --- | --- | --- | --- | --- | --- |
| | (MHz) | (kHz) | (10<sup>8</sup> s<sup>-1</sup>/Torr) | (10<sup>-3</sup> s<sup>-1</sup>/Torr) | (10<sup>-3</sup> s<sup>-1</sup>/Torr) |
| Experiment | 132.06 $`\pm `$ 0.27 | 64.1 $`\pm `$ 0.5 | 1.9 $`\pm `$ 0.1 | $`12.2\pm 0.6^{(4)}`$ | 4.6 $`\pm `$ 0.7 |
| Theory | $`130.99\pm 0.15^{(1)}`$ | $`69.2\pm 0.2^{(2)}`$ | 1.0<sup>(3)</sup> | $`12.04\pm 0.5^{(5)}`$ | |
| Difference | 1.0 $`\pm `$ 0.3 | -5.1 $`\pm `$ 0.5 | | 0.15 $`\pm `$ 0.8 | |
<sup>(1)</sup>Calculated using the molecular parameters from Ref. , (Table 1, column 2).
<sup>(2)</sup>Calculated using the molecular structure determined in Ref. .
<sup>(3)</sup>The level population decay rate from Ref. .
<sup>(4)</sup>Experimental value from Ref.
<sup>(5)</sup>Zero-field value predicted by the theoretical curve in Fig. 2.
# Proximity effect in planar TiN-Silicon junctions
## 1 Introduction
When a diffusive normal metal (or doped semiconductor) (N) is electrically connected to a superconductor (S), electron-electron correlations (i.e. a finite pair amplitude) are induced in the normal metal through Andreev reflections: this is known as the proximity effect. The subgap conductance is a sensitive measurement of these correlations, and much recent work has been devoted to this subject, especially with the development of nanotechnologies. In the original version of the proximity effect , the strength of the induced correlations depends on the barrier transmittance $`\mathrm{\Gamma }`$ at the interface and their extension is limited either by the thermal length $`L_T=\sqrt{\frac{\hbar D}{2\pi k_BT}}`$ or by $`L_V=\sqrt{\frac{\hbar D}{eV}}`$ in a non-equilibrium situation (D is the diffusion constant of the normal metal); the phase breaking length $`L_\phi `$ and the geometry of the sample at small distances were not taken into account.
The emergence of mesoscopic physics has highlighted the role of these effects, both when the contact between the superconductor and the metal is very good (high transmittance $`\mathrm{\Gamma }\simeq 1`$) and in the opposite case (tunnel junctions). In highly transmissive contacts, when $`L_T`$ becomes larger than the typical sample length $`L`$ (supposed smaller than $`L_\phi `$), the re-entrance phenomenon takes place , which cancels out the standard proximity effect: instead of a steady increase of the subgap conductance as T decreases, the conductance has a maximum at $`T_0`$ such that $`L_{T_0}\simeq L`$ and decreases at lower temperature, theoretically recovering its normal value at $`T=0`$ K (without interactions in the normal metal) . On the other hand, when the barrier is rather strong (low transmittance $`\mathrm{\Gamma }\ll 1`$), the low temperature subgap conductance is small since the dominating Andreev reflection is a two-particle process which scales as $`\mathrm{\Gamma }^2`$. The subgap conductance for any value of $`0\le \mathrm{\Gamma }\le 1`$ is described by the BTK theory , without taking into account coherence effects (ballistic normal metal). Coherence effects are also observed experimentally for $`\mathrm{\Gamma }\ll 1`$: when the Andreev reflected hole that traces back the incoming electron path is coherently backscattered towards the interface, the amplitudes for successive Andreev reflections add constructively. The subgap conductance is greatly enhanced, as if the retroreflected hole did not feel the barrier (“reflectionless tunneling”) . The conductance shows a peak at low energies, whose amplitude depends on the balance between the tunnel transparency and the rate of coherent backscattering. For a diffusive normal metal, the peak in the conductance increases when the coherent resistance increases and is maximum when the coherent normal resistance equals the barrier resistance. Eventually, if the coherent normal resistance increases above the barrier resistance, the situation evolves generically to the first case (good NS contact) and the conductance shows a peak at finite energy .
So far, reflectionless tunneling has essentially been observed in Superconductor/Semiconductor junctions (S/Sm) (the only exception being the NS-SQUID devices by Pothier et al. ). In these systems, the low transparency of the interface is due to a Schottky barrier and to the mismatch of the Fermi velocities. The transparency of these junctions is intermediate (typically $`\mathrm{\Gamma }_{S/Sm}\sim 10^{-3}`$) between a very good interface (transmission coefficient $`\mathrm{\Gamma }\simeq 1`$) and an oxide barrier ($`\mathrm{\Gamma }\sim 10^{-6}`$). In semiconductors, annealing and surface cleaning processes, or even the absorption of dopants by the electrode, increase the sheet resistance of the semiconductor below the interface overlap. As reflectionless tunneling is a balance between the electron probabilities of crossing the barrier and backscattering to the interface, S/Sm systems are subject to this effect. But the superconducting material can also be weakened near the interface, and consequently the BCS density of states of the superconductor can be smoothed by pair-breaking processes or the creation of states below the gap. Zero-bias anomalies are always associated with such smooth conductance-voltage characteristics . Because this effect is very sensitive to the microscopic parameters near the interface as well as to the energy of carriers, one can take advantage of its observation both to characterize the S/Sm contact and to investigate the thermalization of electrons in the normal part. The latter point has been the focus of much recent work on out-of-equilibrium mesoscopic normal conductors . The problem of carrier thermalization is even more crucial in S/N or S/N/S junctions because of the Andreev thermal resistance at the interface . It is also of practical importance for superconducting bolometers or Josephson field effect transistors.
In this article, we report the observation of zero-bias anomalies in titanium nitride/heavily doped silicon junctions at very low temperature (down to 30 mK). In the first part, the samples are characterized with various measurements (contact resistance, sheet resistance, weak localization). In the second part, the differential resistance of the junctions is measured as a function of temperature, voltage and magnetic field. The third part is devoted to the quantitative comparison between the observed zero bias anomaly and the theory for a planar SIN junction. A good agreement is obtained for the temperature behavior, allowing us to extract the depairing rate and the barrier transmittance of the junctions. However, discussion of the voltage response of the junctions leads us to consider the effective temperature of the carriers in the silicon, which is well above the phonon bath temperature. This overheating effect is discussed in the fourth part.
## 2 Samples and materials characterisation
The samples are fabricated from a TiN(1000 Å)/Si $`n^{++}`$ (P doped over $`d_{Si:P}`$=0.6 $`\mu `$m) bilayer deposited on an 8-inch Si wafer. First, the wafer is oxidized over 13 nm. Phosphorus is then implanted at 15 keV, $`2\times 10^{15}\mathrm{cm}^{-2}`$, followed by a recrystallization heat treatment at 650°C and a 30 minute activation/diffusion treatment at 1050°C with oxygen. The substrate is deoxidized and a 10 nm Ti and 100 nm TiN bilayer is deposited. Optical lithography defines a Transverse Length Method (TLM) pattern with various distances $`L=1,2,5,10,20,50,100,200`$ and $`500\mu m`$ between large TiN pads (typically 1000$`\times `$1000 $`\mu m^2`$) (see bottom insert of figure 1). The TiN/Ti is etched and a final heat treatment at 720°C in N<sub>2</sub> atmosphere during 20 seconds provides TiN densification and forms a 40 nm thick TiSi<sub>2</sub> layer. Nevertheless, the zero bias anomaly reported in this work appears rather insensitive to this heat treatment. On some samples, another etching is performed to define two TiN finger pads with lateral dimension w=10 $`\mu `$m facing each other and connected to larger TiN reservoirs.
Although titanium nitride has been used for many years in microelectronics as a diffusion barrier, ohmic contact and gate electrode in field effect transistors , its superconducting properties have been thoroughly studied only recently . Its transition temperature is T<sub>c</sub>=4.6 K for our samples. STM measurements give a gap of 250$`\mu `$V at T=1.4K . The relation between the superconducting gap and the transition temperature departs from the BCS theory probably because titanium nitride is a granular superconductor. Its room temperature resistivity is 85$`\mu \mathrm{\Omega }`$.cm.
Doped Si:P has been studied for a long time because of its great importance in microelectronics. Alexander et al. evaluated the Mott-transition donor concentration $`n_c=3\times 10^{18}\mathrm{cm}^{-3}`$ and the concentration at which the Fermi level of the electron system passes into the conduction band of the host crystal, $`n_{cb}=2\times 10^{19}\mathrm{cm}^{-3}`$. Heslinga et al. studied the inelastic lifetime in heavily doped Si:P. They found $`1/\tau _{in}(\mathrm{s}^{-1})=1.1\times 10^9T(K)^{2.2}`$ at $`n=2\times 10^{19}\mathrm{cm}^{-3}`$ from T=1.2 to 4 K. In our samples, doped silicon Si:P forms the normal part of the junction with a donor concentration $`n_e`$=2.10<sup>19</sup> cm<sup>-3</sup> over a depth of P implantation $`d_{Si:P}`$=0.6 $`\mu `$m. From the TLM geometry, we estimate the sheet resistance of the doped silicon and the resistance $`R_{NN}`$ of the interfaces at T=4 K:
$$R=\frac{R_\square }{N_\square }+2R_{NN}$$
(1)
where $`R_\square `$ is the sheet resistance of the doped silicon, $`N_\square =w/L`$ the number of squares of silicon in parallel between the two SIN contacts, L’ and w the length and width of the TiN overlap and L the distance between the two TiN electrodes (see inset of figure 1). Following Giaever’s calculation , $`R_{NN}=\sqrt{R_bR_\square ^{}}/w`$ is the normal state resistance per contact (determined at temperatures just below $`T_c`$, to eliminate the resistance of the TiN pads), with $`R_b`$ the barrier resistance expressed in $`\mathrm{\Omega }.\mu m^2`$ and $`R_\square ^{}`$ the sheet normal resistance below the overlap (which may differ from the value $`R_\square `$ of the bare film) . From equation 1, we see that the resistance is linear with the distance $`L`$. This is what we observe at small $`L`$ in figure 1. For $`L>50\mu m`$, dispersion of the current lines on the sides of the normal part of the S/N/S system becomes noticeable (the TiN pads are on top of an infinite silicon layer). Therefore, we can recover the experimental curve by adding in this case two squares of silicon to $`N_\square `$.
The linear fit at small lengths (using $`w=1000\mu m`$) gives at 4K (see inset figure 1):
$$R(\mathrm{\Omega })=0.024L(\mu m)+0.156$$
(2)
We deduce a sheet resistance of the bulk silicon (between the interfaces) of $`R_\square =24\mathrm{\Omega }`$. The resistivity at 4 K is then $`\rho `$=14.4 $`\mu \mathrm{\Omega }`$.m, in very good agreement with reference . The normal resistance of each interface is $`R_{NN}=7.8\times 10^{-2}\mathrm{\Omega }`$ for $`w`$=1000 $`\mu `$m.
Given the effective masses of silicon, $`m^{}=0.321m_e`$ (with $`m_t^{}=0.19m_e`$ and $`m_l^{}=0.916m_e`$), and the valley degeneracy N=6, we use a free-electron model to calculate the parameters of the doped bare silicon:
$`k_F`$ $`=`$ $`\left({\displaystyle \frac{\pi ^2n_e}{2}}\right)^{1/3}=4.62\times 10^8\mathrm{m}^{-1}\text{ and }\lambda _F=13.6nm`$ (3)
$`\ell _e`$ $`=`$ $`{\displaystyle \frac{\hbar k_F}{\rho e^2n_e}}=6.6nm\text{ and }k_F\ell _e=3`$ (4)
$`D`$ $`=`$ $`{\displaystyle \frac{1}{3}}v_F\ell _e=3.67\times 10^{-4}\mathrm{m}^2\mathrm{s}^{-1}`$ (5)
Those values ensure that the doped silicon is in the metallic regime, since $`k_F\ell _e>1`$ and $`n_e>n_c`$.
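These estimates are easy to reproduce; a minimal sketch in SI units, with the constants and inputs as quoted above:

```python
# Cross-check of the free-electron estimates (n_e, resistivity and DOS
# effective mass m* = 0.321 m_e, valley degeneracy N = 6).
import numpy as np

hbar, e, m_e = 1.0546e-34, 1.6022e-19, 9.109e-31
n_e = 2e25              # m^-3 (2x10^19 cm^-3)
rho = 14.4e-6           # Ohm.m (from the TLM sheet resistance, d = 0.6 um)
m_star = 0.321 * m_e

k_F = (np.pi**2 * n_e / 2) ** (1 / 3)    # 6 valleys: n = 6 k_F^3 / (3 pi^2)
lam_F = 2 * np.pi / k_F
l_e = hbar * k_F / (rho * e**2 * n_e)    # Drude mean free path
v_F = hbar * k_F / m_star
D = v_F * l_e / 3

print(f"k_F = {k_F:.3g} 1/m, lambda_F = {lam_F*1e9:.1f} nm")
print(f"l_e = {l_e*1e9:.1f} nm, k_F*l_e = {k_F*l_e:.1f}")
print(f"D = {D:.3g} m^2/s")              # ~3.7e-4 m^2/s
```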
Finally, we measured the magnetoresistance of the bulk silicon using a long and wide bar ($`20mm\times 1mm`$) of Si $`n^{++}`$, at various temperatures (figure 2). We fitted the experimental curves with the 2D and 3D theories of weak localization :
$`\sigma `$ $`=`$ $`\sigma _{Boltzmann}+\mathrm{\Delta }\sigma `$ (6)
$`\mathrm{\Delta }\sigma `$ $`=`$ $`{\displaystyle \frac{e^2}{2\pi ^2\hbar }}f_2(2{\displaystyle \frac{L_\phi ^2}{L_H^2}})\text{ at 2D }(L_\phi \gg d_{Si:P})`$ (7)
$`=`$ $`{\displaystyle \frac{e^2}{2\pi ^2\hbar L_H}}f_3(2{\displaystyle \frac{L_\phi ^2}{L_H^2}})\text{ at 3D }(L_\phi \ll d_{Si:P})`$ (8)
with
$`f_2(x)`$ $`=`$ $`\mathrm{ln}(x)+\psi ({\displaystyle \frac{1}{2}}+{\displaystyle \frac{1}{x}})`$
$`f_3(x)`$ $`=`$ $`{\displaystyle \underset{n=0}{\overset{\infty }{\sum }}}\left(2\left(\sqrt{n+1+{\displaystyle \frac{1}{x}}}-\sqrt{n+{\displaystyle \frac{1}{x}}}\right)-{\displaystyle \frac{1}{\sqrt{n+\frac{1}{2}+\frac{1}{x}}}}\right)`$
$`L_H=\sqrt{\hbar /(2eH)}`$ and $`L_\phi `$ are the magnetic and the phase-breaking lengths and $`d_{Si:P}`$ the depth of doped silicon. $`\psi (x)`$ is the digamma function. We deduced $`\tau _\phi =L_\phi ^2/D`$ for various temperatures (see table 1). As we were not able to fit the experimental curve at T=700 mK, we conclude that the cross-over between the 2D and 3D regimes lies in this temperature range (corresponding to $`L_\phi \simeq d_{Si:P}`$). For lower temperatures, we used the 2D theory and for T=1.4 K, the 3D theory. At T=1.4 K, we measured $`\tau _\phi =0.48ns`$, in good agreement with Heslinga et al. who obtained $`\tau _\phi =0.58ns`$ at T=1.2 K. At lower temperature, these results can be compared to the expression given by Altshuler et al. , where only $`R_\square `$ enters as a parameter:
$$\frac{\hbar }{\tau _\varphi (2D)}=k_BT\frac{e^2}{h}R_\square \mathrm{ln}\left(\frac{h}{2e^2R_\square }\right)$$
(9)
which gives $`\tau _\phi =1.3ns`$ at 1 K (see table 1), in relatively good agreement with our experimental results.
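A minimal numerical evaluation of Eq. (9) (SI constants, $`R_\square =24\mathrm{\Omega }`$):

```python
# Sketch: Altshuler et al. 2D dephasing estimate, Eq. (9), at T = 1 K.
import numpy as np

hbar, h, e, k_B = 1.0546e-34, 6.626e-34, 1.6022e-19, 1.381e-23
R_sq, T = 24.0, 1.0

rate = k_B * T / hbar * (e**2 / h) * R_sq * np.log(h / (2 * e**2 * R_sq))
print(f"tau_phi(1 K) = {1 / rate * 1e9:.2f} ns")   # ~1.3 ns
```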
As the distances between the TiN electrodes are larger than or of the order of the phase-breaking length for all studied samples, as far as coherence is concerned one can divide the samples into three parts: the silicon resistance and the two interfaces. No coherence effect should link the two interfaces, and they can be studied as two N/S systems in series. This is confirmed in the next section.
## 3 Zero bias anomaly in TiN/Si planar junctions
We measured the sample resistance versus temperature (down to $`30mK`$) by a standard four-probe lock-in technique ($`I_{ac}=10`$ nA at 180 Hz). We studied samples of different lengths (from L=1 $`\mu `$m to 500 $`\mu `$m) and they all show the same behaviour at low temperature (see inset of figure 3). Just above 4 K, we observe a step in the resistance due to the superconducting transition of the TiN electrodes. Independently, by measurements on a long and wide bar of doped silicon, we have checked that the silicon resistance does not depend (or only slightly) on temperature and voltage (see figure 2 and part 2). Since we know the normal resistance from TLM measurements, we can plot the resistance per contact: $`R_c(T,V)=\frac{1}{2}(R_{total}(T,V)-R_{Si})`$ where $`R_{Si}`$ is the resistance of the silicon between the two contacts. So we are able to plot the resistance per contact $`R_c`$ as a function of the voltage drop at the interface $`V_i`$. The voltage drop at each interface is $`V_i=\frac{1}{2}(V_{total}-R_{Si}I)`$ where $`I`$ is the applied DC current.
In the rest of the paper and in figure 3, we plot $`R_c`$. Below T=4.2 K, the resistance increases as expected for an SIN junction. Anticipating the discussion in section 4, its temperature dependence is well fitted within the BTK model above $`T\simeq 400`$ mK. The resistance increases only by a factor of 6 between 4 K and 400 mK, indicating a strong departure from the sharp BCS density of states. This is taken into account via a high damping factor $`\mathrm{\Gamma }_S`$ in the superconductor ($`\mathrm{\Gamma }_S/\mathrm{\Delta }=0.11`$ in the BTK adjustment) while the barrier is rather opaque (transparency $`3.4\times 10^{-2}`$). At 250 mK, it shows a maximum and then decreases. This cannot be explained as a precursor of a Josephson current, since this effect is observed for distances between superconducting electrodes much larger than the coherence length $`L_T=\sqrt{\hbar D/(2\pi k_BT)}=120nm`$ at 30 mK (see figure 3). We interpret this decrease of the resistance at low temperature as due to reflectionless tunneling .
We also measured the differential resistance versus DC voltage at different temperatures (see figure 4) by applying DC and ac currents and recording both the DC and ac voltages.
Except at low voltages and low temperatures, the $`R_c(V)`$ characteristics show the same smooth behavior for all samples: as the voltage is increased, the resistance steadily decreases and reaches its normal value above the gap voltage; there is no shoulder around $`eV=\mathrm{\Delta }`$. No shoulder is observed at the gap voltage either for previously studied Sm/S contacts exhibiting reflectionless tunneling . At low temperature, the resistance shows a dip (ZBA) consistent with the reflectionless tunneling regime. When the temperature is increased (see figure 4), the ZBA amplitude shrinks and disappears at 250 mK. Moreover, as a function of voltage, the ZBA disappears on the same energy scale as it does as a function of temperature, i.e. roughly 20 $`\mu V`$.
Finally, we measured the differential resistance versus DC voltage for various magnetic fields (see figure 5). Two behaviors can be distinguished. When the magnetic field is perpendicular to the S/N interface (along $`y`$, see figure 1), the resistance decreases at very small field and the ZBA is destroyed at 30 G. On the contrary, for a weak applied magnetic field parallel to the interface (along $`z`$), the resistance background is unchanged at small field and only the ZBA diminishes. Then, above 200 G, the overall resistance decreases. The ZBA is divided by a factor of two for $`H_c^z\simeq 200G`$ and completely disappears at 400 G.
## 4 Proximity effect in planar SIN junctions
Many theoretical works have been completed since Kastalsky’s experiments . Volkov considered planar SIN junctions and, using a theory based on the Usadel equation, calculated the conductance-voltage characteristics of this system at arbitrary voltage and temperature. Assuming $`ϵ_b\ll \gamma _{in}`$, with $`ϵ_b`$ the barrier energy (see below) and $`\gamma _{in}`$ the depairing rate, he obtains in the case of long junctions ($`L\gg L_b`$):
$$G_{NS}=G_{NN}\frac{\int 𝑑ϵ\frac{1}{4}A(ϵ)f_{N,z}(ϵ,V)}{2\sqrt{\int _0^{|V|}𝑑V_1\int 𝑑ϵ\frac{1}{4}A(ϵ)f_{N,z}(ϵ,V_1)}}$$
(10)
with
$$\frac{1}{4}A(ϵ)=Re\frac{ϵ_b\gamma _{in}}{ϵ^2+\gamma _{in}^2}\frac{\mathrm{\Delta }^2}{\mathrm{\Delta }^2-(ϵ+i\mathrm{\Gamma }_s)^2}\theta (\mathrm{\Delta }-|ϵ|)$$
$$+Re\sqrt{\frac{(ϵ+i\mathrm{\Gamma }_s)^2}{(ϵ+i\mathrm{\Gamma }_s)^2-\mathrm{\Delta }^2}}\theta (|ϵ|-\mathrm{\Delta })$$
(11)
$$\gamma _{in}=\frac{\hbar }{\tau _{in}}$$
(12)
$$ϵ_b=\frac{\hbar D}{L_b^2}\text{ , }L_b=\sqrt{\frac{2R_b}{R_\square }}$$
(13)
with $`\mathrm{\Delta }`$ the superconducting gap, $`\mathrm{\Gamma }_s`$ a small damping in the superconductor, $`\tau _{in}`$ the inelastic time, $`L_b`$ the barrier length, $`R_b`$ the tunnel barrier resistance and $`R_\square `$ the sheet resistance of the normal part. $`f_{N,z}=\frac{1}{2}[\mathrm{tanh}(\frac{ϵ+eV}{2k_BT})-\mathrm{tanh}(\frac{ϵ-eV}{2k_BT})]`$ is the equilibrium distribution function.
The first term in $`\frac{1}{4}A(ϵ)`$ (equation 11) describes the pair amplitude in the normal metal. It is non-zero (although small) in spite of the null value of the electron-electron interaction potential in this region: this is the proximity effect. As $`ϵ_b\rightarrow 0`$ (the barrier becomes more and more opaque), the proximity effect is reduced and the pair amplitude vanishes. In the same way, this pair amplitude is zero when there are no coherence effects ($`\gamma _{in}\rightarrow +\infty `$). The second term is the normalized BCS superconducting density of states, which leads to the usual conductance-voltage characteristics (tunnel Hamiltonian approach). At zero temperature and voltage, $`G_{NS}=G_{NN}\sqrt{ϵ_b/\gamma _{in}}`$ as long as $`ϵ_b\ll \gamma _{in}\ll \mathrm{\Delta }`$. The width of the peak is roughly proportional to $`\gamma _{in}`$.
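To make Eqs. (10)–(13) concrete, the sketch below evaluates the normalized conductance by direct quadrature, in units of $`\mathrm{\Delta }`$. The value of $`ϵ_b`$ is our own illustrative assumption (the theory only requires $`ϵ_b\ll \gamma _{in}`$); in the limit of zero temperature and voltage the routine approaches $`G_{NS}/G_{NN}=\sqrt{ϵ_b/\gamma _{in}}`$:

```python
# Sketch: subgap conductance from Volkov's Eqs. (10)-(11), energies in
# units of Delta. gam_in and Gamma_s are the fitted values quoted below;
# eps_b is an assumed small barrier energy, eps_b << gamma_in.
import numpy as np

Delta, Gamma_s = 1.0, 0.12
gam_in = 0.33                # ~70 ueV / 0.21 meV, middle of the fitted range
eps_b = 0.1 * gam_in         # illustrative assumption

def A4(eps):                 # (1/4) A(eps), Eq. (11)
    z = eps + 1j * Gamma_s
    sub = np.real(eps_b * gam_in / (eps**2 + gam_in**2)
                  * Delta**2 / (Delta**2 - z**2)) * (np.abs(eps) < Delta)
    above = np.real(np.sqrt(z**2 / (z**2 - Delta**2))) * (np.abs(eps) >= Delta)
    return sub + above

def f_Nz(eps, V, T):         # equilibrium distribution function of Eq. (10)
    return 0.5 * (np.tanh((eps + V) / (2 * T)) - np.tanh((eps - V) / (2 * T)))

def G_ratio(V, T, n=4001):   # G_NS / G_NN, Eq. (10)
    eps = np.linspace(-5.0, 5.0, n)
    A = A4(eps)
    num = np.trapz(A * f_Nz(eps, V, T), eps)
    V1 = np.linspace(1e-6, abs(V), 201)
    den = 2.0 * np.sqrt(np.trapz([np.trapz(A * f_Nz(eps, v, T), eps)
                                  for v in V1], V1))
    return num / den

print(G_ratio(V=0.01, T=0.01))   # -> ~sqrt(eps_b/gam_in) ~ 0.32
```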
As we said in the introduction, S/Sm junctions are intermediate between good metallic junctions and oxide tunnel barriers. We choose to treat our junctions within Volkov’s SIN model, taking into account that the absolute contact resistances are high and that estimates for the Schottky barrier give poor interface transparencies. We will show that applying Volkov’s model forces us to attribute a much larger sheet resistance to the silicon layer below the overlap ($`R_\square ^{}`$=315 $`\mathrm{\Omega }`$) than to the bare silicon ($`R_\square `$=24 $`\mathrm{\Omega }`$). The depairing rate below the interface is also found to be much higher than the phase breaking rate of the bare film. Within this theory, we stress that it is impossible to consistently describe the ZBA keeping the bare-film silicon parameters unchanged below the overlap. This illustrates how the full quantitative analysis of the ZBA provides distinctive information about the microscopic parameters of S/Sm contacts that is otherwise inaccessible.
Figure 3 shows the experimental data for the zero bias resistance versus temperature and the best adjustment of Volkov’s model. The parameters are the following: titanium nitride superconducting gap $`\mathrm{\Delta }`$=0.21 meV, damping factor $`\mathrm{\Gamma }_S/\mathrm{\Delta }`$=0.12, normal resistance $`R_{NN}`$=7.45 $`\mathrm{\Omega }`$, barrier transparency per channel $`\mathrm{\Gamma }=3.4\times 10^{-2}`$, and finally the pair breaking energy $`\gamma _{in}=60`$–$`85\mu eV`$.
There are two separate sets of parameters in this list: $`\mathrm{\Delta }`$, $`\mathrm{\Gamma }_S`$, $`R_{NN}`$ are determined by the high temperature range (and also independently by the large voltage range of the differential resistance curves, see later on); $`\gamma _{in}`$ and $`\mathrm{\Gamma }`$ are fixed by the low temperature range of the curve. In principle, $`\mathrm{\Gamma }`$ also enters as a parameter in the high temperature range, but due to the large value of $`\mathrm{\Gamma }_S`$, any relatively small value of $`\mathrm{\Gamma }`$ satisfies the adjustment.
In the high temperature range (above 500 mK), we also fit the resistance versus temperature curves with the BTK model , using the same parameter values as for the Volkov adjustment (see figure 3). Because in this range of energy the phase coherence is likely negligible, we find that both formalisms describe the experimental data equally well. Below 500 mK, coherence effects appear and the BTK model is no longer valid.
We now comment on the various parameters: the damping factor $`\mathrm{\Gamma }_S`$ is large, probably because disorder weakens the superconductivity at the interface. In particular, a TEM image of the contacts indicates that a 40 nm thick TiSi<sub>2</sub> layer is produced at the interface by the thermal treatment . From the transparency, we estimate the barrier resistance $`R_b=\frac{h}{2e^2}\frac{(\lambda _F/2)^2}{\mathrm{\Gamma }}\simeq 18\mathrm{\Omega }.\mu m^2`$. From the absolute value of $`R_{NN}`$, we can deduce the sheet resistance of the silicon underneath the interface: $`R_\square ^{}=\frac{R_{NN}^2w^2}{R_b}`$=315 $`\mathrm{\Omega }`$. This value is 15 times larger than the bulk value obtained with the TLM measurements ($`R_\square `$=24 $`\mathrm{\Omega }`$). Note that without the determination of $`\mathrm{\Gamma }`$ by the Volkov fit of the ZBA, another set of $`\mathrm{\Gamma }`$–$`R_\square ^{}`$ parameters could explain the absolute value of $`R_{NN}`$, namely for instance $`R_\square ^{}=R_\square =24\mathrm{\Omega }`$ and $`\mathrm{\Gamma }=\frac{h}{2e^2}\frac{(\lambda _F/2)^2}{R_b}=2.35\times 10^{-3}`$ (with $`\lambda _F=13.6nm`$). This set would give a good BTK fit above $`T=500mK`$, but this $`\mathrm{\Gamma }`$ is too small to give enough Andreev reflection to produce the ZBA. The ZBA is explained only if both the normal sheet resistance of the silicon layer under the overlap and the transparency of the Schottky barrier are large enough. As said previously and also in reference , the semiconductor layer under the overlap could have a much larger sheet resistance than the native layer and be close to the metal/insulator transition ($`k_F\ell _e\simeq 1`$). In such a case, the phase breaking length should also strongly decrease in the vicinity of the overlap as compared to the bare film.
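The arithmetic of this paragraph can be checked in a few lines (illustrative sketch; the $`w=10\mu m`$ finger geometry is assumed):

```python
# Cross-check of the interface parameters: R_b from the fitted
# transparency, then the sheet resistance R_sq' under the overlap.
h_over_2e2 = 12906.4          # Ohm (h/2e^2)
lam_F = 13.6e-3               # um
Gamma = 3.4e-2                # fitted barrier transparency
R_NN, w = 7.45, 10.0          # Ohm, um (finger geometry, assumed)

R_b = h_over_2e2 * (lam_F / 2)**2 / Gamma    # ~18 Ohm.um^2
R_sq_prime = (R_NN * w)**2 / R_b             # ~315 Ohm
print(f"R_b = {R_b:.1f} Ohm.um^2, R_sq' = {R_sq_prime:.0f} Ohm")
```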
Moreover, we obtain $`\gamma _{in}=60`$–$`85\mu eV`$, which leads to $`\tau _{in}=\hbar /\gamma _{in}`$=8–11 ps. This value, which is an upper limit for $`\tau _\phi `$, is smaller by two orders of magnitude than $`\tau _\phi \simeq 2.7ns`$ ($`L_\phi `$=1 $`\mu `$m at 240 mK) estimated from magnetoresistance measurements in the bulk silicon layer. Interestingly, such small $`\tau _{in}`$ values have been reported in tunnel experiments in copper wires or in gold wires from the analysis of the Josephson effect in SNS junctions , and the discrepancy with phase breaking times measured by weak localization was also noted. We do not know if this is coincidental, or if local measurements of $`\tau _{in}`$ near an interface generally lead to such underestimations as compared to bulk measurements.
In summary, the precise analysis of the reflectionless tunneling effect leads us to deduce that the ZBA is not due to coherent backscattering over long distances (about $`L_\phi `$=1$`\mu `$m) into the bulk silicon, but to coherent backscattering in a short disordered layer.
Finally, we discuss the effect of a magnetic field applied parallel or perpendicular to the junction. First, we measured the resistance-voltage characteristics at 30 mK for various magnetic fields parallel to the interface (along $`z`$, see figure 1). We note that the zero bias anomaly is divided by a factor of two for an applied field $`H_c^z\simeq 200G`$ and completely disappears at 400 G. We can estimate the field necessary to put one flux quantum $`\mathrm{\Phi }_0=h/e`$ in a square of size $`L_{in}^2`$. Following Marmorkos et al. , this gives the critical field needed to destroy the zero bias anomaly. Under the interface, we deduce from the fit of our experimental curves $`\tau _{in}`$=8–11 ps, so $`L_{in}=\sqrt{D\tau _{in}}`$=54–64 nm with $`D=3.67\times 10^{-4}\mathrm{m}^2\mathrm{s}^{-1}`$. Consequently, $`H_c=\mathrm{\Phi }_0/L_{in}^2\simeq 1`$ T, which is much higher than the observed values of the field. Volkov et al. proposed another mechanism for the destruction of the coherence of the Andreev pairs. The magnetic field leads to a screening current, which gives a dependence of the phase of the order parameter on the coordinate $`z`$. This situation is comparable to Andreev interferometers : if the two electrons encounter a superconducting phase varying over $`\pi `$, destructive interference occurs and the ZBA disappears. Then, the depairing depends on the magnetic field through:
$$\gamma _{in}(H)/\mathrm{\Delta }=\gamma _{in}(0)/\mathrm{\Delta }+\frac{\hbar D}{\mathrm{\Delta }}(\pi H\lambda /\mathrm{\Phi }_0)^2$$
(14)
with $`\lambda `$ the London penetration depth of the superconductor. Using Volkov’s theory (see equation 10), we calculate the value of the depairing rate at zero voltage for various fields and obtain the fitting curve $`\gamma _{in}(H)/\mathrm{\Delta }=0.405+4.5\times 10^{-7}H(G)^2`$ (see inset of figure 5), which leads to $`\lambda =790nm`$. This value seems rather high compared to $`\lambda \simeq 200nm`$ for NbN. To calculate this penetration depth, we take the diffusion coefficient of the bulk silicon (see equation 5), which may be lower under the interface because of the disorder induced during annealing. The penetration depth may then be overestimated. Nevertheless, this mechanism may cause the destruction of the interference in our sample, since it is not sensitive to the decoherence induced by flux through electron-hole trajectories in the normal part (the corresponding critical field is too high).
But great care should be taken with the orientation of the magnetic field. We also measured the resistance-voltage characteristics at 30 mK for various magnetic fields perpendicular to the interface (along $`y`$). The overall subgap resistance decreases for very small fields. Since titanium nitride is a type II superconductor, we attribute this effect to the appearance of vortices: the total resistance decreases because the junction normal resistance due to the vortices is less than the superconducting junction resistance. Secondly, the zero bias anomaly is divided by a factor of two for a weak applied field $`H_c^y\simeq 30G`$ (see figure 5). We can estimate the demagnetization factor under the interface. According to Zeldov et al. , the field under the interface is given by $`H_{interface}\simeq \sqrt{w/d_{TiN}}H_{applied}\simeq 10H_{applied}`$, with $`d_{TiN}=100nm`$ the thickness of the TiN film and $`w=10\mu m`$. This value agrees with the ratio of characteristic fields $`H_c^z/H_c^y\simeq 7`$. Consequently, the ZBA for a magnetic field applied perpendicular to the interface disappears at weak field because of the demagnetization factor of the superconducting film.
The vortices may also explain the decrease of the resistance background around 200 G when the magnetic field is parallel to the interface: if the field is not strictly parallel to the interface, some vortices may appear due to a small perpendicular component of the field.
## 5 Electron heating and effective temperature
Neither Volkov’s theory nor the BTK theory is able to fit our experimental resistance-voltage curves if we assume an equilibrium Fermi distribution at the base temperature T<sub>0</sub>. Paradoxically, we note that the maximum of the differential resistance occurs precisely at a voltage $`V_0\simeq 20\mu V`$ such that $`eV_0\simeq k_BT_0`$, where $`T_0\simeq 250mK`$ is the temperature at which the zero bias resistance is maximum. Such a coincidence also occurs in the context of the re-entrance effect and has been observed in most of the reflectionless tunneling experiments . But this is not predicted by the theoretical models. From the model, we expect the voltage to lead to a much smaller suppression of the ZBA than seen in the experimental data (see figure 4). A way to restore good agreement between the model and the experimental voltage characteristics is to take the effective temperature of the carriers as an adjustable parameter . At a fixed voltage, we find the elevated temperature such that the model gives precisely the measured resistance value. In this way, we construct the variation of the effective temperature $`T_{eff}`$ as a function of the applied voltage (figure 6). We attribute this temperature to the carriers in the doped silicon film between the two TiN electrodes.
What happens at very low temperature when a finite voltage is applied to the TiN/Si n<sup>++</sup>/TiN sample? Some Joule power is dissipated in the silicon part, and it is evacuated only with great difficulty, either into the phonon bath or into the contacts, because the electron-phonon coupling is very small at low temperature and the Andreev thermal resistance at each N-S interface is very large . Consequently the electrons inside the silicon are overheated. Overheating has two causes. First, the Andreev thermal resistance, which depends exponentially on the effective temperature, is mainly responsible for the elevated temperature at low voltage, and one can neglect the gradient of temperature in the silicon between the superconducting contacts. Secondly, at high voltage, this gradient is no longer negligible and is given by the Wiedemann-Franz law. We suppose that the inelastic electron-electron scattering length is short compared to the length of the sample (1–20 $`\mu `$m), so as to have a quasi-equilibrium Fermi distribution with an effective temperature.
At low voltage, below 20 $`\mu `$V, the electronic temperature increases very rapidly up to 320 mK. Hoss et al. observed a very similar behavior in $`1\mu m`$ long Nb/Au/Nb samples at low currents. To understand these results, we used a BTK-based model of dissipated power . The power dissipated through an NS interface is given by:
$$P(T_e,V_i)=\frac{2G_{NN}}{e^2}\int _{-\infty }^{+\infty }𝑑ϵϵ\left[f\left(\frac{ϵ-eV_i}{k_BT_e}\right)-f\left(\frac{ϵ}{k_BT_0}\right)\right]\left[1-A(ϵ)-B(ϵ)\right]$$
(15)
with $`T_e`$ and $`T_0`$ the electronic and phonon temperatures, $`V_i`$ the voltage drop at the interface, A and B the Andreev and normal reflection probabilities , and $`G_{NN}=1/R_{NN}`$ the normal conductance. We fit the curve with the parameters used earlier in the BTK fit: $`\mathrm{\Delta }=0.21meV`$, $`\mathrm{\Gamma }_S/\mathrm{\Delta }=0.14`$, $`Z=5.3`$ (which gives a barrier transparency of $`3.4\times 10^{-2}`$), $`R_{NN}=7.45\mathrm{\Omega }`$ and $`T_0=30mK`$. We assume that all the electric power is dissipated through the two NS interfaces, i.e. that the electron-phonon length at this temperature exceeds the distance between the two superconducting interfaces. Then, by equating $`P(T_e,V_i)=V_{total}I`$, with $`V_{total}`$ the total voltage and $`I`$ the current across the sample, we obtain $`T_e(V_i)`$.
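A minimal sketch of this heat balance follows. It uses the standard BTK probabilities without the damping $`\mathrm{\Gamma }_S`$ (a simplification with respect to the fit actually used) and an assumed dissipated power, so the output is purely illustrative:

```python
# Sketch: heat flux through one N/S interface, Eq. (15), with standard
# BTK A and B (no Gamma_S damping -- a simplification), and the electron
# temperature solving 2*P(T_e, V_i) = P_joule for an assumed Joule power.
import numpy as np
from scipy.optimize import brentq

e, k_B = 1.602e-19, 1.381e-23
Delta = 0.21e-3 * e                        # J
Z, G_NN, T0 = 5.3, 1 / 7.45, 0.03          # -, S, K

def AB(E):
    E = abs(E)
    if E < Delta:                          # sub-gap: A + B = 1, no heat flow
        A = Delta**2 / (E**2 + (Delta**2 - E**2) * (1 + 2 * Z**2)**2)
        return A, 1.0 - A
    u2 = 0.5 * (1 + np.sqrt(E**2 - Delta**2) / E)
    v2 = 1.0 - u2
    g2 = (u2 + Z**2 * (u2 - v2))**2
    return u2 * v2 / g2, (u2 - v2)**2 * Z**2 * (1 + Z**2) / g2

def fermi(x):
    return 1.0 / (1.0 + np.exp(np.clip(x, -60, 60)))

def P(Te, Vi, n=4001):                     # heat flux through one interface, W
    E = np.linspace(-8 * Delta, 8 * Delta, n)
    kern = np.array([1.0 - sum(AB(Ei)) for Ei in E])
    occ = fermi((E - e * Vi) / (k_B * Te)) - fermi(E / (k_B * T0))
    return (2 * G_NN / e**2) * np.trapz(E * occ * kern, E)

P_joule = 1e-13                            # assumed dissipated power, W
Te = brentq(lambda T: 2 * P(T, 20e-6) - P_joule, T0 + 1e-4, 2.0)
print(f"T_e = {Te * 1e3:.0f} mK")
```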
These parameters describe the experimental points of figure 6 at low voltages. Since the thermal resistance of the N/S interface decreases exponentially with temperature, at larger effective temperatures the heat is rapidly evacuated into the superconducting electrodes and a thermal equilibrium is reached, leading to a saturation of the effective electronic temperature (supposed constant in the silicon) at around 300 mK (solid symbols in figure 6).
At higher voltages, the effective temperature continues to increase and our hypothesis of a constant effective temperature in the silicon should fail. The simplest analysis is to apply the Wiedemann-Franz law globally to the whole TiN/Si n<sup>++</sup>/TiN sample. The 1$`\times `$10 $`\mu m^2`$ sample follows a Wiedemann-Franz law (interacting hot electron regime) $`T_e(V)=\sqrt{T_{Si}^2+\frac{1}{4}\frac{3}{\pi ^2}(\frac{eV}{k_B})^2}`$, with $`T_{Si}=320mK`$ the temperature of the electrons at the interface and V the total voltage applied to the SNS system (see figure 6). In the 20$`\times `$10 $`\mu m^2`$ sample, the effective temperature is lower than the Wiedemann-Franz prediction. Heslinga et al. give an approximation $`L_{eph}\simeq 2\mu m`$ at 320 mK. This value is intermediate between the lengths of our two samples and explains their different behaviors. In the 20$`\times `$10 $`\mu m^2`$ sample, as the length of the sample matches the electron-phonon length, electrons can interact with phonons and be cooled below the Wiedemann-Franz temperature, whereas in the 1$`\times `$10 $`\mu m^2`$ sample, electrons are only heated by electron-electron interaction and the Wiedemann-Franz law is valid.
Finally, we test our estimate of the effective temperature using the BTK model. In figure 4 (bottom), we observe much better agreement with the data if we introduce the effective temperature given by the Wiedemann-Franz law than if we use the base phonon temperature. For consistency, all the parameters except the temperature are obtained from the BTK fit of $`R(V=0)`$ versus temperature.
## 6 Conclusion
In conclusion, we measured the differential resistance versus temperature, applied voltage and magnetic field of semiconductor/superconductor junctions. TiN/Si n<sup>++</sup> heterostructures proved to be a new and interesting tool for the study of the proximity effect. We observed a zero-bias anomaly due to reflectionless tunneling. By comparing our results to a proximity effect theory , we find that the sheet resistance of the silicon underneath the interface is much larger than the bulk value and that the quasiparticle lifetime is much shorter than in bulk silicon. We explain these discrepancies by disorder induced by annealing during the process. Interestingly, the quasiparticle lifetime is comparable to the electron-electron interaction time deduced from tunnel experiments in copper wires or in gold wires from the analysis of the Josephson effect in SNS junctions . At very low temperature and finite voltage, the effective electronic temperature in the silicon is much higher than the phonon temperature. This holds for separations between superconducting contacts as large as $`20\mu m`$. The effective temperature increases very rapidly at low voltage and then follows a Wiedemann-Franz law. This behavior is well explained by a model of heat dissipation through an N/S interface.
This rapid rise of the carrier temperature with the injected power is one of the interesting properties of this system. It could be used to make a bolometer, by measuring directly the zero bias conductance as a function of the power absorbed in the silicon .
## 7 Acknowledgments
We would like to thank MM. Deleonibus and Demolliens (CEA/Leti) for providing the TiN/Si n<sup>++</sup> bilayers and the use of the PLATO facilities.
# Planar cracks in the fuse model
## 1 Introduction
Understanding the mechanisms of crack propagation is an important issue in mechanics, with potential applications in geophysics and materials science. Experiments have shown that in several materials under different loading conditions, the crack front tends to roughen and can often be described by self-affine scaling man . In particular, the out of plane roughness exponent is found to display universal values for a wide variety of materials bouch . Interesting experiments have been recently performed on PMMA, and the in plane roughness of a planar crack was observed to scale with an exponent $`\zeta =0.63\pm 0.03`$ schmit1 .
While the experimental characterization of crack roughness is quite advanced and the numerical results very accurate, theoretical understanding and numerical models are still unsatisfactory. The simplest theoretical approach to the problem identifies the crack front with a deformable line pushed by the external stress through a random toughness landscape. The deviations of the crack front from a flat line are opposed by the elastic stress field, through the stress intensity factor gao . In certain conditions, the problem can be directly related to models and theories of interface depinning in random media and the roughness exponent computed by numerical simulations and renormalization group calculations natt ; nf . Unfortunately, the agreement between this theoretical approach and experiments is quite poor. For the out of plane roughness the theory predicts only a logarithmic roughness in mode I ram1 , while the experimental results give $`\zeta _{\perp }=0.5`$ at small length scales and $`\zeta _{\perp }=0.8`$ at larger length scales bouch . For planar cracks, simulations predict $`\zeta =0.35`$ schmit2 and RG gives $`\zeta =1/3`$ ram2 , both quite far from the experimental result. The inclusion of more details in the model, such as elastodynamic effects, does not lead to better results ram2 .
A different approach to crack propagation in disordered media considers the problem from the point of view of lattice models hr . The elastic medium is replaced by a network of bonds obeying discretized equations until the stress reaches a failure threshold. The disorder in the medium can be simulated by a distribution of thresholds or by bond dilution. Models of this kind have been widely used in the past to investigate several features of fracture of disordered media, such as the failure stress fuse ; duxbury ; fuse2 ; fuse3 , fractal properties hr ; th and avalanches hh ; th ; zrsv ; gcalda ; zvs . The out of plane roughness exponent has been simulated in two dimensions, resulting in $`\zeta _{\perp }\simeq 0.7`$ HHR ; CCG , and in three dimensions, where $`\zeta _{\perp }\simeq 0.4`$–$`0.5`$ RSAD ; BH-98 ; pcp . The last result is in good agreement with experimental results, if we identify the small length scales with the quasistatic regime used in simulations. The advantage of lattice models over interface models is that the former allow for the nucleation of microcracks ahead of the main crack. While it is well known experimentally that microcracks do nucleate, their effect on the roughness exponent has never been studied.
In this paper we present numerical simulations of a planar crack using the random fuse model fuse . We employ a quasi two-dimensional geometry, considering two horizontal plates separated by a network of vertical bonds. A similar setup was used in a spring model nak , but the roughness was studied only in the high velocity regime of crack motion. The experiments of Ref. schmit1 were instead performed at low velocity, so that a quasistatic model seems more appropriate.
We find that the two dimensional geometry introduces a characteristic length limiting the crack roughness. In addition, crack nucleation does not appear to change the behavior of the system qualitatively. For length scales smaller than the characteristic length, the crack is not self-affine, but possibly self-similar. We study the damage zone close to the crack and find that several of its features can be described by gradient percolation gradper .
## 2 Model
In the random fuse model, each bond of the lattice represents a fuse that burns when its current overcomes a threshold fuse ; duxbury ; fuse2 ; fuse3 . The currents flowing in the lattice are obtained by solving the Kirchhoff equations with appropriate boundary conditions nota\_kir . In this paper, we consider two horizontal tilted square lattices of resistors connected by vertical fuses (see Fig. 1). The conductivity of the horizontal resistors is chosen to be unity, while the vertical fuses have conductivity $`\sigma `$. A voltage drop $`\mathrm{\Delta }V`$ is imposed between the first horizontal rows of the plates. To simulate the propagation of a planar crack, we allow for failures of vertical bonds only and assign to each of them a random threshold $`j_i^c`$, uniformly distributed in the interval $`[1:2]`$. When the current in a bond $`i`$ overcomes its random threshold, the bond is removed from the lattice and the currents in the lattice are recomputed, until all the currents are below threshold. The voltage drop is then increased until the weakest bond reaches its threshold.
The quasistatic dynamics we use should correspond to the small constant displacement rate imposed at the boundary of the crack in the experiments schmit1 . In order to avoid spurious boundary effects, we start with a preexisting crack occupying the first half of the lattice (see Fig. 1) and employ periodic boundary conditions in the direction parallel to the crack. In addition, once an entire row of fuses has failed, we shift the lattice backwards one step in the direction perpendicular to the crack, to keep the crack always in the middle of the lattice.
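A minimal sketch of this quasistatic rule is given below; `solve_currents` stands for the Kirchhoff solve (a sparse linear system for the node voltages) and is assumed given, and the row-shifting bookkeeping is omitted. Since the network is linear, the currents computed at unit voltage drop fix the breaking voltage exactly:

```python
# Sketch (illustrative) of the quasistatic update rule described above.
# solve_currents(bonds) is assumed given: it returns the current j_i in
# each vertical fuse for a unit applied voltage drop.
import numpy as np

rng = np.random.default_rng(0)

def quasistatic_fracture(bonds, solve_currents, n_steps):
    jc = rng.uniform(1.0, 2.0, size=bonds.size)   # thresholds in [1:2]
    V = 0.0                                       # applied voltage drop
    for _ in range(n_steps):
        j = solve_currents(bonds)                 # currents at unit voltage
        over = bonds & (V * j > jc)
        if over.any():                            # avalanche at fixed V:
            bonds[np.argmax(np.where(over, V * j / jc, 0.0))] = False
        else:                                     # raise V to the weakest bond
            ratio = np.where(bonds, j / jc, 0.0)
            if ratio.max() <= 0.0:
                break                             # plates disconnected
            V = 1.0 / ratio.max()
            bonds[np.argmax(ratio)] = False
    return bonds, V
```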
Before discussing the numerical results, we present some analytical considerations which will guide the simulations.
## 3 Characteristic length
Here we investigate the model introduced in the preceding section in some particular configuration. We first analyze the case of a perfectly straight planar crack and study the current decay in front of it. In this condition, the system is symmetric in the direction parallel to the crack and we can thus reduce it to one dimension.
We consider an infinite ladder composed of vertical bonds of resistance $`r\equiv 1/\sigma `$ connected by unitary horizontal resistances. Since the ladder is infinite, we can add one additional step without changing the end to end resistance $`R`$:
$$R=2+1/(1/r+1/R)=2+\frac{rR}{r+R}.$$
(1)
Solving Eq. (1) we obtain the total resistance
$$R=\sqrt{1+2r}+1.$$
(2)
The fraction of current $`j`$ flowing through the first ladder step is such that $`rj=R(1-j)`$, which implies
$$j=\frac{R}{r+R}=\frac{\sqrt{1+2r}+1}{r+1+\sqrt{1+2r}}=\frac{\sqrt{1+2r}-1}{r}.$$
(3)
The current flowing in the second ladder step is then $`(1-j)j`$ and similarly in the $`n`$th step it is given by $`j_n=(1-j)^{n-1}j`$, thus scaling as $`j_n\sim \mathrm{exp}(-n/\xi )`$ where
$$\xi =-\frac{1}{\mathrm{log}(1-j)}=-\frac{1}{\mathrm{log}\left(1-\sigma (\sqrt{1+2/\sigma }-1)\right)}.$$
(4)
Thus the current in front of the crack decays exponentially with a characteristic length $`\xi \simeq 1/\sqrt{2\sigma }`$ for $`\sigma \ll 1`$. A similar result could have been anticipated from the structure of the Kirchhoff equations, which read
$$\underset{nn}{\sum }(V_{i+nn}-V_i)+\sigma (V_i^{}-V_i)=0$$
(5)
where the sum runs over the nearest neighbors of node $`i`$ and $`V_i^{}`$ is the voltage of the corresponding node in the opposite plate. By symmetry we can choose $`V_i^{}=-V_i`$ and solve the equations only for one of the plates. Eq. (5) then represents a discretization of a Laplace equation with a “mass term”
$$\nabla ^2V-\xi ^{-2}V=0,$$
(6)
where $`\xi =1/\sqrt{2\sigma }`$.
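This is easy to check numerically: solving the discrete equations for a finite ladder (one plate, using the symmetry $`V_i^{}=-V_i`$ of Eq. (5)) and fitting the decay of $`V_n`$ recovers Eq. (4); a minimal sketch:

```python
# Numerical check of the current decay ahead of a straight crack: solve
# V_{n-1} + V_{n+1} - (2 + 2*sigma) V_n = 0 with V_0 = 1, V_N = 0, and
# compare the fitted decay length with Eq. (4) and with 1/sqrt(2*sigma).
import numpy as np

sigma, N = 0.01, 400
M = (np.diag(np.full(N - 1, -(2 + 2 * sigma)))
     + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1))
b = np.zeros(N - 1)
b[0] = -1.0                                 # boundary condition V_0 = 1
V = np.linalg.solve(M, b)

n = np.arange(40, 120)                      # fit well inside the ladder
xi_fit = -1.0 / np.polyfit(n, np.log(V[n]), 1)[0]
j = sigma * (np.sqrt(1 + 2 / sigma) - 1)    # Eq. (3), with r = 1/sigma
xi_th = -1.0 / np.log(1 - j)                # Eq. (4)
print(xi_fit, xi_th, 1 / np.sqrt(2 * sigma))   # all ~7.1 for sigma = 0.01
```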
The continuum limit can be used to understand how current is transferred after a single failure. We define $`G(x-x^{})`$ as the difference in the currents at $`x`$ before and after a bond at $`x^{}`$ has failed. The function $`G`$ is the analog of the “stress Green function” used in interface models of crack propagation ram1 ; schmit2 ; ram2 . In fact, the equation of motion in these models is written as
$$\frac{\partial h(x,t)}{\partial t}=F+\int 𝑑yG(x-y)(h(y,t)-h(x,t))+\eta (x,h),$$
(7)
where $`h`$ indicates the position of the crack, $`F`$ is proportional to the external stress, $`\eta `$ to the random toughness of the material, and for a planar crack in three dimensions $`G(x)\sim 1/|x|^2`$. A renormalization group analysis shows that the roughness of the interface crucially depends on the decay of $`G`$ natt ; nf . If $`G`$ decays more slowly than $`|x|^{-1}`$ for $`x\rightarrow \infty `$, the interface is not rough on large length scales.
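For illustration, a minimal overdamped Euler integration of Eq. (7) with the $`1/x^2`$ kernel can be written as follows; the lattice size, drive $`F`$, time step and frozen noise table are arbitrary choices of ours, so this is a sketch of the depinning framework rather than of the fuse model itself:

```python
# Sketch: Euler integration of Eq. (7) for a crack line h(x,t) with the
# long-range kernel G(x) ~ 1/x^2, periodic boundaries, and a frozen
# random toughness eta(x,h). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, steps, dt, F, M = 256, 2000, 0.05, 0.6, 4096

d = np.minimum(np.arange(L), L - np.arange(L))   # periodic distance
G = 1.0 / np.maximum(d, 1) ** 2
G[0] = 0.0                                       # no self-interaction
eta = rng.normal(size=(L, M))                    # toughness landscape

h = np.zeros(L)
x = np.arange(L)
for _ in range(steps):
    # sum_y G(x-y) (h(y)-h(x)), evaluated as a circular convolution
    pull = np.real(np.fft.ifft(np.fft.fft(G) * np.fft.fft(h))) - G.sum() * h
    h += dt * (F + pull + eta[x, h.astype(int) % M])

print(h.std())                                   # interface width after `steps`
```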
In order to compute the function $`G`$, we solve Eq. (6) with the appropriate boundary conditions. Note that by definition $`G(x)`$ is proportional to the difference of the voltages at $`x`$ before and after removing a fuse. Since Eq. (6) is linear, the difference of the voltages still satisfies the equation at all points except $`x=0`$. This condition can also be expressed in terms of the current $`J\propto \partial V/\partial y`$ cond , which should be continuous everywhere apart from $`x=0`$.
Let’s consider a planar crack along the $`x`$ direction and identify two domains: 1) the domain where fuses are present ($`y>0`$) labeled A 2) the domain where all fuses are burnt out ($`y<0`$) labeled B. Thus the equation to solve in domain A is
$$\nabla ^2V=\xi ^{-2}V$$
(8)
and in B $`\nabla ^2V=0`$.
Taking the Fourier transform along $`x`$, and calling $`k`$ the conjugate variable to $`x`$, we can write in domain A
$$\partial _y^2\stackrel{~}{V}=(k^2+\xi ^{-2})\stackrel{~}{V}.$$
(9)
Integrating the equation, setting $`V\rightarrow 0`$ at infinity, we obtain
$$\stackrel{~}{V}(k,y)=\stackrel{~}{V}(k,0)\mathrm{exp}(-y/\ell )$$
(10)
where $`1/\ell =\sqrt{k^2+\xi ^{-2}}`$. A similar calculation allows one to obtain $`\stackrel{~}{V}(k,y)`$ in domain B.
The currents normal to the crack in the two domains are given by
$$\stackrel{~}{J}_A(k)=\sqrt{k^2+\xi ^{-2}}\stackrel{~}{V}(k,0)$$
(11)
and
$$\stackrel{~}{J}_B(k)=|k|\stackrel{~}{V}(k,0),$$
(12)
where $`\stackrel{~}{V}`$ is the same for the two domains. If one bond is removed at $`x=0`$ along the interface, the continuity of the current implies $`J_A+J_B\propto \delta (x)`$ and in Fourier space
$$\stackrel{~}{J}_A(k)+\stackrel{~}{J}_B(k)\propto 1$$
(13)
and hence
$$\stackrel{~}{V}\propto \frac{1}{|k|+\sqrt{k^2+\xi ^{-2}}}.$$
(14)
The Fourier transform of the function $`G`$ is simply proportional to $`\stackrel{~}{V}`$, and therefore at short distances ($`k\xi \gg 1`$) $`\stackrel{~}{V}\sim 1/|k|`$, or $`G\sim \mathrm{log}(x)`$, while at long distances ($`k\xi \ll 1`$) $`G\sim \mathrm{exp}(-|x|/\xi )`$.
We test the asymptotic behavior predicted above by estimating $`G`$ from numerical simulations. The results for a lattice of size $`L=128`$ are in good agreement with the analytical predictions, as shown in Fig. 3. It is interesting to remark that the roughness of the crack is limited by $`\xi `$, but even in the limit $`\xi \rightarrow \infty `$ we do not expect a self-affine crack, since $`G`$ decays more slowly than $`1/|x|`$. In the next section, we will show numerically that damage nucleation does not alter this conclusion.
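The two regimes can also be checked by inverting Eq. (14) numerically on a periodic chain (a minimal sketch):

```python
# Sketch: invert Eq. (14) on a periodic chain to inspect the short-
# distance (quasi-logarithmic) and long-distance (fast decay on the
# scale xi) regimes of G(x).
import numpy as np

L, xi = 4096, 32.0
k = 2 * np.pi * np.fft.fftfreq(L)
Vk = 1.0 / (np.abs(k) + np.sqrt(k ** 2 + xi ** -2))
G = np.real(np.fft.ifft(Vk))

for x in (2, 8, 32, 128, 512):
    print(x, G[x])   # slow (log-like) variation for x << xi, fast beyond
```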
## 4 Crack roughness: simulations
In order to analyze the effect of crack nucleation ahead of the main crack, we first simulate the model confining the ruptures to the crack surface. In this way, our model reduces to a connected interface moving in a random medium with an effective stiffness given by the solution of the Kirchhoff equations. The results are then compared with simulations of the unrestricted model, where ruptures can occur anywhere in the lattice. In both cases the crack width increases with time up to a crossover time at which it saturates. In Fig. 4 we compare the damage structure in the saturated regime for the two growth rules. The height-height correlation function $`C(x)\equiv \langle (h(x)-h(0))^2\rangle `$, where $`\langle \cdots \rangle `$ denotes the average over different realizations of the disorder, is shown in Fig. 5. From these figures it is apparent that the structure of the crack is similar in the two cases. The only difference lies in the higher saturation width observed when microcracks are allowed to nucleate ahead of the main crack.
Next, we analyze the behavior of the crack as a function of $`\sigma `$, which should set the value of the characteristic length to $`\xi \simeq 1/\sqrt{2\sigma }`$. In this study we restrict our attention to the general model with crack nucleation. We compute the global width $`W=(\langle h^2\rangle -\langle h\rangle ^2)^{1/2}`$, averaging over several realizations of the disorder (typically 10), as a function of time for different values of $`\xi `$. Fig. 6 shows that $`W`$ increases linearly in time until saturation. The global width in the saturated regime scales as $`\xi ^\zeta `$, with $`\zeta \simeq 0.75`$, as shown in Fig. 7. Due to the limited scaling range, we could not obtain a more reliable estimate of the exponent value.
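For concreteness, the estimators used here can be sketched in a few lines (the numerical values below are hypothetical placeholders, not our measured data):

```python
# Sketch: global width W = (<h^2> - <h>^2)^(1/2) and a power-law fit of the
# saturated width against xi; the xi and W_sat values are hypothetical.
import numpy as np

def global_width(h):
    return np.sqrt(np.mean(h**2) - np.mean(h)**2)

xi = np.array([4.0, 8.0, 16.0, 32.0])     # hypothetical screening lengths
W_sat = np.array([1.9, 3.2, 5.6, 9.4])    # hypothetical saturated widths
zeta, log_amp = np.polyfit(np.log(xi), np.log(W_sat), 1)
print("effective roughness exponent zeta ~", round(zeta, 2))
```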
## 5 Mean-field approach
The long-range nature of the Green function suggests that a mean-field approach could be suitable. We outline here the spirit of such an approach, through the determination of the density of burnt fuses ahead of the crack front. First, we note that the mean profile is expected to be translationally invariant along the $`x`$ axis, and thus the problem reduces to a one dimensional geometry. As argued earlier, the voltage $`V(y)`$ should obey the following differential equation, in the continuum limit:
$$\frac{\partial ^2V}{\partial y^2}=2\sigma (y)V(y)$$
(15)
where $`\sigma (y)`$ is the ($`x`$-averaged) conductivity at position $`y`$. The latter can be written as $`(1-D(y))\sigma _0`$, where $`D(y)`$ is the “damage”, i.e. the fraction of burnt fuses. This fraction is a known function of the current density in the mean-field approach. Namely, the vertical current going through intact fuses is $`j(y)=2\sigma _0V(y)=V(y)/\xi ^2`$. It is remarkable that $`D(y)`$ drops out of this equation: the damage reduces the current density flowing at a given position by a factor $`(1-D(y))`$ as compared to the intact state, but the same current density flows through a reduced number of intact fuses, and is thus multiplied by $`1/(1-D(y))`$. In conclusion, the two factors cancel out.
The proportion of fuses which may support the current without burning is given by the cumulative distribution of threshold currents: $`P(j)=\int _j^{\mathrm{\infty }}p(j^{\prime })dj^{\prime }`$, which implies $`D(y)=1-P(j(y))`$. The voltage profile along the $`y`$ axis is thus given by
$$\xi ^2\frac{\partial ^2V}{\partial y^2}=P(V(y)/\xi ^2)V(y)$$
(16)
This equation can be rewritten in terms of the rescaled coordinate $`s=y/\xi `$ and current $`j=V/\xi ^2`$; in our case, since $`P(j)=2-j`$ for $`1<j<2`$, we obtain
$$j^{\prime \prime }(s)=j(s)\left(2-j(s)\right).$$
(17)
Notice that Eq. (17) is valid only for $`1<j<2`$, while for $`j<1`$ the equation becomes $`j^{\prime \prime }=j`$ and for $`j>2`$ we have $`j^{\prime \prime }=0`$. At infinity $`j<1`$, so the current is given by $`j=e^{-(s-s_0)}`$ for $`s>s_0`$, and the boundary conditions for Eq. (17) at $`s=s_0`$ are therefore $`j(s_0)=-j^{\prime }(s_0)=1`$. With these boundary conditions, Eq. (17) cannot be solved explicitly, so we resort to numerical integration. From the solution of Eq. (17) we obtain the damage profile and compare it to numerical simulations (see Fig. 8). The remarkable agreement between the mean-field solution and the simulations, with a damage profile which is a single function of $`y/\xi `$, implies that the fracture front should be described by gradient percolation (in fact the gradient is non-linear) [gradper]. From this observation we can extract the scaling of the front width with the gradient $`g`$ (here $`g\sim 1/\xi `$) as $`W\sim g^{-\nu /(1+\nu )}\sim \xi ^{\nu /(1+\nu )}`$, where $`\nu `$ is the percolation correlation length critical exponent, $`\nu =4/3`$, or $`W\sim \xi ^{0.57}`$, reasonably consistent with our data.
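A minimal integration sketch (ours; the integration range and step are arbitrary) of the boundary value problem just described:

```python
# Sketch: integrate the mean-field profile j'' = j(2 - j), starting from
# j(s0) = 1, j'(s0) = -1 and marching towards the crack (decreasing s).
# The damage follows as D = 1 - P(j) = j - 1 while 1 < j < 2.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, y):                 # y = [j, dj/ds]
    j, jp = y
    return [jp, j * (2.0 - j)]

def hits_two(s, y):            # stop where j = 2: all fuses burnt (P = 0)
    return y[0] - 2.0
hits_two.terminal = True

sol = solve_ivp(rhs, (0.0, -10.0), [1.0, -1.0], events=hits_two,
                max_step=0.01)
s, j = sol.t, sol.y[0]
D = j - 1.0                    # damage profile in the region 1 < j < 2
print("crack-side edge of the damage zone at s =", s[-1])
```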
## 6 Conclusions
In this paper, we have studied the propagation of planar cracks in the random fuse model. This model allows us to investigate the effect on the crack front roughness of the microcracks nucleating ahead of the main crack. The study was restricted to a quasi two dimensional geometry and could apply to cases in which the material is very thin in the direction perpendicular to the crack plane [plate].
In two dimensions, the geometry of the lattice induces a characteristic length $`\xi `$ limiting the roughness, and microcrack nucleation does not appear to be relevant. In addition, for length scales smaller than $`\xi `$ the Green function decays very slowly, suggesting the validity of a mean-field approach. We study the problem numerically, computing the scaling of the crack width with time and $`\xi `$, and analyze the damage ahead of the crack. The results suggest an interpretation in terms of gradient percolation [gradper], as is also indicated by mean-field theory. The limited range of system sizes accessible to simulations does not allow for a definite confirmation of these results.
The present analysis does not resolve the issue of the origin of the value of the roughness exponent for planar cracks in heterogeneous media. While microcrack nucleation is irrelevant in the present context, three dimensional simulations are needed to understand whether this is true in general. In principle, one could still expect that microcrack nucleation in three dimensions would change the exponent of the interface model ($`\zeta =1/3`$), but the present results do not lead to such a conclusion.
## Acknowledgment
S. Z. acknowledges financial support from EC TMR Research Network under contract ERBFMRXCT960062. We thank A. Baldassarri, M. Barthelemy, J. R. Rice, and A. Vespignani for useful discussions.
# Existence, uniqueness and other properties of the BCT (minimal strain lapse and shift) gauge
## I The BCT gauge
Cauchy data for general relativity consist of a three-metric $`g_{ab}`$ and extrinsic curvature $`K_{ab}`$ specified on a three-manifold (“slice”) $`\mathrm{\Sigma }`$. The Cauchy data determine the four-dimensional spacetime (locally) as a geometric object, but without fixing a coordinate system. When the spacetime is computed numerically as a sequence of spacelike slices $`\mathrm{\Sigma }(t)`$, the coordinates may be fixed incrementally by specifying a lapse $`\alpha `$ and shift vector $`\beta ^a`$ on each new slice – a gauge choice. In one class of gauge choices, $`\alpha `$ and $`\beta ^a`$ on a slice are determined by $`g_{ab}`$ and $`K_{ab}`$ on the same slice, i.e.,
$$(\mathrm{\Sigma },g_{ab},K_{ab})\to (\alpha ,\beta ^a).$$
(1)
Recently, Brady, Creighton and Thorne have proposed a new gauge choice of this class in which $`\alpha `$ and $`\beta ^a`$ are determined as a solution of the coupled equations
$$K^{ab}F_{ab}=0,D^aF_{ab}=0,$$
(2)
where $`D_a`$ is the covariant derivative with respect to the three-metric $`g_{ab}`$, and
$$F_{ab}(\alpha ,\beta )\equiv L_{ab}(\beta )-2\alpha K_{ab},\qquad L_{ab}(\beta )\equiv D_a\beta _b+D_b\beta _a.$$
(3)
These two equations arise when one varies the action principle
$$I=\int F^{ab}F_{ab}\sqrt{g}d^3x$$
(4)
with respect to $`\alpha `$ and $`\beta ^b`$, respectively. In a four-dimensional context, $`F_{ab}`$ is the time derivative of the three-metric induced on the time slicing with lapse $`\alpha `$ and shift $`\beta ^a`$:
$$\dot{g}_{ab}\equiv \mathcal{L}_tg_{ab}=F_{ab}(\alpha ,\beta ),\qquad t^a\equiv \alpha n^a+\beta ^a,$$
(5)
where $`n^a`$ is the unit normal vector of the slice $`\mathrm{\Sigma }`$ when it is embedded into a spacetime. Equations (2) are therefore also called the minimal strain lapse equation and minimal strain shift equation, where $`I`$ is the “strain” that is being extremized.
The motivation for considering these equations is that a good gauge choice should have the property (among others) of being compatible with approximate Killing vectors, in the sense that if an approximate Killing vector exists, the spacetime metric in that gauge should evolve as slowly as possible. The inspiral phase of a binary black hole system in a spherical orbit, for example, has an approximate Killing vector, and one would like to be able to find (corotating) coordinates in which the spacetime metric evolves on the timescale in which the orbit shrinks through the emission of gravitational waves, rather than on the much shorter orbital period timescale.
The first of the two equations (2) can be solved algebraically for $`\alpha `$,
$$\alpha =\frac{K^{cd}L_{cd}(\beta )}{2K^{ef}K_{ef}},$$
(6)
and the result substituted into the second equation. One obtains a linear second-order differential equation for the vector $`\beta `$ alone:
$$D^aH_{ab}(\beta )=0,\qquad H_{ab}(\beta )\equiv L_{ab}(\beta )-\frac{K^{cd}L_{cd}(\beta )}{K^{ef}K_{ef}}K_{ab}.$$
(7)
We shall call this single vector equation the BCT equation, and in the following we shall consider the lapse $`\alpha `$ as a dependent quantity determined by (6). Equation (7) can be obtained from the action principle
$$J=\int H^{ab}H_{ab}\sqrt{g}d^3x,$$
(8)
which is obtained by substituting the lapse (6) into the action principle $`I`$.
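As a quick sanity check (ours, not from the paper), the algebra connecting Eqs. (3), (6) and (7) can be verified numerically at a point, in coordinates where $`g_{ab}=\delta _{ab}`$, using random symmetric tensors:

```python
# Sketch: verify that the lapse of Eq. (6) solves K^{ab} F_{ab} = 0 and
# that H_{ab} of Eq. (7) satisfies K^{ab} H_{ab} = 0 identically.
# Indices are raised/lowered trivially (metric = identity at the point).
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(3, 3)); K = 0.5 * (K + K.T)   # symmetric K_ab
L = rng.normal(size=(3, 3)); L = 0.5 * (L + L.T)   # symmetric L_ab(beta)

alpha = np.tensordot(K, L) / (2.0 * np.tensordot(K, K))   # Eq. (6)
F = L - 2.0 * alpha * K                                    # Eq. (3)
H = L - (np.tensordot(K, L) / np.tensordot(K, K)) * K      # Eq. (7)

print(np.tensordot(K, F))   # ~ 0: minimal strain lapse equation holds
print(np.tensordot(K, H))   # ~ 0, identically
```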
## II Existence and uniqueness
If one is to use the BCT gauge, it is important to know the answer to the following question: For which choices of data $`(U,g_{ab},K_{ab})`$ on a region $`U`$ of $`\mathrm{\Sigma }`$ with boundary $`\partial U`$ and for which sets of boundary conditions for $`\beta `$ can one solve the BCT equation (7)? In this brief note, we show that for some choices of data there is a unique solution with specified Dirichlet boundary conditions, while for others, there are many solutions with those boundary conditions.
The issues of existence and uniqueness of the BCT gauge have been previously considered by Gonçalves . Gonçalves shows that the differential operator defined by (7) is strongly elliptic if and only if $`K_a{}^b`$, considered as a map on the tangent space of $`\mathrm{\Sigma }`$, has at most one vanishing eigenvalue. (Let the principal part of the differential operator acting on $`\beta ^a`$ be $`M^a{}_b{}^{cd}D_cD_d\beta ^b`$. The operator is then defined to be strongly elliptic if $`M^a{}_b{}^{cd}`$ is positive definite with respect both to the two indices that slot into derivatives and the index pair that shuffles the vector index on $`\beta `$.) Generally, operator ellipticity – strong or otherwise – is not enough to determine whether a boundary value problem admits a solution. However, using the fact that the equation of interest (7) is of divergence form, Gonçalves does obtain local results. In this paper, using the Fredholm alternative, we obtain stronger, global, results.
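To make the criterion concrete, one can test the principal symbol numerically. The symbol below is our own flat-space, constant-coefficient reading of Eq. (7); its factors are assumptions of this sketch rather than statements from the reference.

```python
# Sketch (assumptions: flat metric, constant coefficients): probe positive
# definiteness of the principal symbol of Eq. (7),
#   M(k)_{bd} = k^2 delta_{bd} + k_b k_d - (2 / K:K) (K k)_b (K k)_d ,
# over many unit vectors k, for a given extrinsic curvature K.
import numpy as np

def min_symbol_eig(K, n_dirs=2000, seed=1):
    rng = np.random.default_rng(seed)
    KK = np.tensordot(K, K)
    worst = np.inf
    for _ in range(n_dirs):
        k = rng.normal(size=3); k /= np.linalg.norm(k)
        u = K @ k
        M = np.eye(3) + np.outer(k, k) - (2.0 / KK) * np.outer(u, u)
        worst = min(worst, np.linalg.eigvalsh(M).min())
    return worst

print(min_symbol_eig(np.diag([1.0, 0.5, 0.2])))  # no zero eigenvalues of K
print(min_symbol_eig(np.diag([1.0, 0.5, 0.0])))  # one zero eigenvalue
print(min_symbol_eig(np.diag([1.0, 0.0, 0.0])))  # two zeros: degenerates
```

In these trials the minimum symbol eigenvalue stays positive for at most one vanishing eigenvalue of $`K_a{}^b`$ and approaches zero for two, consistent with the criterion quoted above.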
Before deriving the condition for existence of the BCT gauge, we wish to turn the boundary value problem for Equation (7) into one with homogeneous boundary conditions. Let us write Equation (7) in operator form as $`O(\beta )=0`$, and the corresponding Dirichlet problem as
$$O(\beta )=0,\qquad \beta |_{\partial U}=f$$
(9)
for some given continuous vector-valued function $`f`$. Presuming that the region $`U`$ and its boundary are well-behaved, we may always extend $`f`$ to a function $`F:U\to R^3`$ which is $`C^2`$ on $`U`$ and continuous on $`U\cup \partial U`$. Then if we find a solution $`\xi `$ of the boundary value problem
$$O(\xi )=-O(F),\qquad \xi |_{\partial U}=0,$$
(10)
and if we set $`\beta =\xi +F`$, we have $`\beta `$ satisfying the boundary value problem (9). We may now focus on the discussion of the boundary value problem (10) (for arbitrary $`F`$).
To be able to use Fredholm ideas to study the boundary value problem (10), one first needs to establish that (10) defines a Fredholm map. As shown in propositions 11.10 and 11.16 of , this fact follows from the strong ellipticity of $`O`$. We also need the following:
Lemma: For $`\beta `$ satisfying the Dirichlet condition $`\beta |_{\partial U}=0`$, $`O`$ is a self-adjoint operator.
Proof: We consider the quantity
$$\int _U\gamma \cdot O(\beta )\equiv \int _U\gamma ^aD^bH_{ab}(\beta )$$
(11)
with $`H_{ab}`$ from Equation (7). Integrating by parts, and using $`\gamma |_{\partial U}=0`$, we obtain
$$\int _U\gamma \cdot O(\beta )=-\int _UD^b\gamma ^aH_{ab}(\beta )=-\int _UH^{ab}(\gamma )D_a\beta _b.$$
(12)
Integrating by parts again, and using $`\beta |_{\partial U}=0`$, we have
$$\int _U\gamma \cdot O(\beta )=\int _UD_aH^{ab}(\gamma )\beta _b=\int _U\beta \cdot O(\gamma ).$$
(13)
Hence $`O`$ is self-adjoint on functions $`\beta `$ that vanish on the boundary. (As the boundary term in the integration by parts is of the form $`\oint _{\partial U}s^a\beta ^bH_{ab}(\gamma )`$, where $`s^a`$ is the unit normal vector to $`\partial U`$, the operator $`O`$ is also self-adjoint on the space of functions $`\gamma `$ that satisfy $`s^aH_{ab}(\gamma )=0`$ on the boundary. This boundary condition is the mathematical analog of the Neumann boundary condition for the Laplace equation, but it contains additional terms that make its physical meaning unclear. Therefore we do not consider it here.)
Since $`O`$ is self-adjoint, and since it defines a Fredholm map, the Fredholm Alternative specifies a clear condition for existence of solutions. Stated for the operator $`O`$, one has
Proposition (Fredholm Alternative): Fix $`(U,g_{ab},K_{ab})`$. The Dirichlet boundary value problem (10) has a unique solution for every choice of $`O(F)`$ if and only if the only vector function satisfying
$$O(\xi )=0,\qquad \xi |_{\partial U}=0$$
(14)
is $`\xi ^a=0`$. (That is, the kernel of $`O`$ is empty.) For a given $`O(F)`$, the boundary value problem (10) has a solution so long as $`O(F)`$ satisfies
$$\int _U\xi \cdot O(F)=0$$
(15)
for every $`\xi `$ which satisfies (14). (The kernel of $`O`$ is orthogonal to the source.)
We base our existence results for the BCT gauge on the Fredholm Alternative. So we are led to consider solutions $`\xi `$ of (14), or elements of the kernel of $`O`$. Given such a solution, we consider the quantity
$$\int _U\xi \cdot O(\xi )=-\frac{1}{2}\int _UL^{ab}(\xi )H_{ab}(\xi )=-\frac{1}{2}\int _UH^{ab}(\xi )H_{ab}(\xi ),$$
(16)
where the first equality follows from integration by parts ($`\xi `$ vanishes on $`\partial U`$) and the second equality follows from $`K^{ab}H_{ab}=0`$. $`\xi `$ therefore satisfies (14) if and only if
$$H_{ab}(\xi )=0,\qquad \xi |_{\partial U}=0.$$
(17)
Thus the kernel of $`O`$ consists of solutions of this boundary value problem.
For a given set of initial data $`(U,g_{ab},K_{ab})`$, the Fredholm Alternative asks that we determine the kernel of $`O`$. If the kernel is empty, then the Dirichlet boundary value problem (10), and hence (9), admits a unique solution.
If the kernel is not empty, with non-trivial elements $`\xi `$, then to determine the solubility of the boundary value problem (10) we need to consider $`_U\xi O(F)`$. Integrating by parts, we have
$$\int _U\xi \cdot O(F)=-\frac{1}{2}\int _UL(\xi )H(F)=-\frac{1}{2}\int _UH(\xi )L(F),$$
(18)
but $`H(\xi )=0`$ by assumption. So this always vanishes, and a solution always exists in this case, too. It is determined, however, only up to the addition of any element of the kernel.
We conclude that a solution of the BCT gauge exists whenever $`O`$ is strongly elliptic, but if the kernel (17) is not empty, the solution is not unique.
## III Examples for uniqueness and non-uniqueness
It is useful to note that there are choices of data $`(U,g_{ab},K_{ab})`$ for which each of these two cases holds. For the first case, where the kernel is empty, we consider data with $`K_{ab}=\rho g_{ab}`$, with $`\rho `$ nowhere vanishing. The kernel equation (17) then becomes
$$D_a\xi _b+D_b\xi _a-\frac{2}{3}g_{ab}D^c\xi _c=0.$$
(19)
This is the equation for a conformal Killing vector. If we choose a metric $`g_{ab}`$ that does not admit a conformal Killing vector that vanishes on the boundary, we have constructed data that give rise to an empty kernel, and therefore a unique BCT gauge for a given choice of the boundary data.
For the other case, we consider a slice through a spacetime that has a timelike Killing vector. We choose the lapse and shift so that $`t^a`$ is the Killing vector. The time derivative of the three-metric then vanishes, and consequently $`F_{ab}(\alpha ,\beta )=0`$. From this it follows that $`H_{ab}(\beta )=0`$. In order to obtain $`\beta |_{\partial U}=0`$, we choose the slice so that it is normal to the Killing vector on $`\partial U`$ but not in the interior of $`U`$. If in such a situation one tries to solve the BCT equation with a boundary where the slice is approximately normal to the Killing vector, the numerical problem might become badly conditioned.
As a concrete example, we consider a spherically symmetric slice in flat spacetime. Let $`(t,r,\theta ,\phi )`$ be the standard spherical coordinates on Minkowski spacetime, and let the slice be given by $`t=T(r)`$. As coordinates intrinsic to the slice we use $`(r,\theta ,\phi )`$, induced by the spacetime coordinates of the same name. The normal vector to the slice, the induced 3-metric, and the extrinsic curvature are then given by
$`n^r`$ $`=`$ $`{\displaystyle \frac{T^{\prime }}{\sqrt{1-T^{\prime 2}}}},n^t={\displaystyle \frac{1}{\sqrt{1-T^{\prime 2}}}},`$ (20)
$`g_{rr}`$ $`=`$ $`1-T^{\prime 2},g_{\theta \theta }=r^2,g_{\phi \phi }=r^2\mathrm{sin}^2\theta ,`$ (21)
$`K_{rr}`$ $`=`$ $`{\displaystyle \frac{T^{\prime \prime }}{\sqrt{1-T^{\prime 2}}}},K_{\theta \theta }={\displaystyle \frac{rT^{\prime }}{\sqrt{1-T^{\prime 2}}}},K_{\phi \phi }=\mathrm{sin}^2\theta K_{\theta \theta }.`$ (22)
The normal vector is parallel to the Killing vector $`\partial /\partial t`$ where $`T^{\prime }(r)=0`$, say at $`r=r_0`$. We choose $`U`$ to be the ball $`r\le r_0`$. The desired element of the kernel of $`O`$ is then
$$\alpha =\sqrt{1-T^{\prime 2}},\qquad \beta ^r=-T^{\prime },$$
(23)
up to an overall constant factor.
The potential difficulty with non-uniqueness can be avoided by an appropriate choice of slice and boundary. In particular, for the choice of slice and boundary proposed in for the binary black hole problem, the shift is nowhere small, so that the Killing vector is nowhere normal to the slice. Near the black hole excision boundary, the slicing is of the Painlevé-Gullstrand (or Kerr-Schild) type, with a large radial shift, while at the outer boundary the coordinates are corotating, with a large $`\partial /\partial \varphi `$ shift component.
## IV Use of the BCT gauge and other gauges in time evolution
Consider the evolution of initial data $`(g_{ab},K_{ab})`$. The evolution equation for $`K_{ab}`$ is of the form
$$\dot{K}_{ab}=-D_aD_b\alpha +\text{other terms}.$$
(24)
The time derivative of $`K_{ab}`$ contains the second spatial derivative of the lapse. If one uses the BCT gauge to determine $`\alpha `$ and $`\beta `$ from $`g_{ab}`$ and $`K_{ab}`$, the evolution equation for $`K_{ab}`$ becomes
$$\dot{K}_{ab}=-\frac{1}{2}\left[\frac{L^{cd}(\beta )}{K_{ef}K^{ef}}-2\frac{L^{mn}(\beta )K_{mn}K^{cd}}{(K_{ef}K^{ef})^2}\right]D_aD_bK_{cd}+\text{other terms}.$$
(25)
This means that if $`K_{ab}`$ is initially in a function space of finite differentiability, time evolution takes it out of that space – it “loses two derivatives”. ($`\beta ^a`$ itself appears to gain two derivatives because it is the solution of an elliptic equation, but this does not affect the argument.)
While this is a technical obstacle to proving existence and uniqueness of solutions to the Einstein equations in the BCT gauge, it also hints at the possible existence of a practical problem for the use of the BCT gauge in numerical evolution. Roughly speaking, one would expect numerical noise to be amplified during time evolution, whereas maximal slicing with zero shift, for example, is empirically known to dampen numerical noise. In the toy model equation $`\dot{u}=\kappa u_{xx}`$, noise is damped for $`\kappa >0`$ (heat equation) and increases for $`\kappa <0`$ (heat equation run backwards), so that one only needs to choose the correct sign of $`\kappa `$. In contrast, Equation (25) is a nonlinear tensor equation. It appears plausible that some of the eigenvalues of its linearization around some backgrounds $`(g_{ab},K_{ab})`$ correspond to negative $`\kappa `$, thus leading to the growth of linear noise. Corresponding nonlinear instabilities may also exist. We have not attempted to investigate this question.
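The toy model is easy to probe numerically; a minimal finite-difference sketch (grid, time step, and noise amplitude are arbitrary choices):

```python
# Sketch: u_t = kappa u_xx on a periodic grid; kappa > 0 damps grid noise,
# kappa < 0 (the heat equation run backwards) amplifies it.
import numpy as np

def final_noise(kappa, steps=20000, n=128, dt=1e-4):
    u = np.random.default_rng(0).normal(scale=1e-6, size=n)
    for _ in range(steps):
        u = u + kappa * dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return np.abs(u).max()

print(final_noise(+1.0))   # noise decays
print(final_noise(-1.0))   # noise grows exponentially in time
```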
Here we would like to draw attention to an alternative gauge choice. Consider the following coupled equations for the lapse and shift:
$`D_bD^b\beta ^a+D_bD^a\beta ^b-2D_b(\alpha K^{ab})-{\displaystyle \frac{n}{3}}D^a(2D_b\beta ^b-2\alpha K)`$ $`=`$ $`0,`$ (26)
$`-D_aD^a\alpha +\left[{}^{(3)}R+K^2+{\displaystyle \frac{1}{2}}(\tau -3\rho )\right]\alpha +\beta ^aD_aK`$ $`=`$ $`0.`$ (27)
Here $`\tau `$ and $`\rho `$ are matter terms, namely the trace of the (three-dimensional) stress tensor and the energy density. $`n`$ is a constant that is either zero or one. These equations for $`\alpha `$ and $`\beta `$ are elliptic, but they are not self-adjoint. Existence and uniqueness of solutions will be considered elsewhere . As the equations are elliptic in both $`\alpha `$ and $`\beta `$, $`(\dot{g}_{ab},\dot{K}_{ab})`$ do not lose differentiability compared to the Cauchy data $`(g_{ab},K_{ab})`$.
The first, vector, equation is $`D_bF^{ab}=0`$ for $`n=0`$, and is $`D_b(F^{ab}-\frac{1}{3}g^{ab}F^c{}_c)=0`$ for $`n=1`$. These two shift conditions were suggested by Smarr and York in their classic paper on coordinate conditions under the names “minimal strain shift” and “minimal distortion shift”. The second, scalar, equation is $`\dot{K}=0`$. For $`K=0`$ it reduces to maximal slicing. Here, however, we do not assume that $`K`$ is zero, nor that it is constant in space. $`\dot{K}=0`$ slicing, and its combination with minimal distortion shift, was suggested by Smarr and York, and they also discuss its Killing vector-tracking property. Their subsequent discussion focuses on maximal ($`K=0`$) slicing with minimal distortion shift. This more restricted gauge choice is now commonly associated with the name “Smarr-York” (SY) gauge, while the more general gauge with $`K\ne 0`$ seems to have been forgotten. The desirability of Killing vector tracking was later rediscovered in and , and the gauge discussed here was rediscovered as “generalized Smarr-York” (GSY) gauge in .
Here we want to point out that the GSY gauge has the desirable properties of the BCT gauge – it tracks Killing vectors and it admits generic Cauchy data – without having the “loss of derivatives” property and the possible problem when the slice is normal to the Killing vector at the boundary. In a direct numerical comparison of the BCT and GSY gauges in spherical symmetry, Garfinkle and Gundlach find that the GSY gauge stably tracks Killing vectors, but are unable to obtain a stable time evolution with the BCT gauge, and this may be due to the “loss of derivatives” property. Shibata reports good experiences with an approximate implementation of maximal slicing with minimal strain shift in 3D evolutions of a neutron star binary. Independent numerical work, and tests in 3D, are required to decide if one of the gauges is more suitable in practice than the other.
## ACKNOWLEDGMENTS
This work was carried out during a mini-program on “Colliding Black Holes” at the Institute for Theoretical Physics, Santa Barbara, where it was supported by NSF grant PHY-9407194. DG is also supported by NSF grant PHY-9722039 to Oakland University, and JI by grant PHY-9800732 to the University of Oregon.
# Investigation of routes and funnels in protein folding by free energy functional methods
ABSTRACT We use a free energy functional theory to elucidate general properties of heterogeneously ordering, fast folding proteins, and we test our conclusions with lattice simulations. We find that both structural and energetic heterogeneity can lower the free energy barrier to folding. Correlating stronger contact energies with entropically likely contacts of a given native structure lowers the barrier, and anticorrelating the energies has the reverse effect. Designing in relatively mild energetic heterogeneity can eliminate the barrier completely at the transition temperature. Sequences with native energies tuned to fold uniformly, as well as sequences tuned to fold by a single or a few routes, are rare. Sequences with weak native energetic heterogeneity are more common; their folding kinetics is more strongly determined by properties of the native structure. Sequences with different distributions of stability throughout the protein may still be good folders to the same structure. A measure of folding route narrowness is introduced which correlates with rate, and which can give information about the intrinsic biases in ordering due to native topology. This theoretical framework allows us to systematically investigate the coupled effects of energy and topology in protein folding, and to interpret recent experiments which investigate these effects.
The energy landscape has been a central paradigm in understanding the physical principles behind the self-organization of biological molecules . A central feature of landscapes of biomolecules which has emerged is that the process of evolution, in selecting for sequences that fold reliably to a stable conformation within a biologically relevant time, induces a new energy scale into the landscape . In addition to the ruggedness energy scale already present in heteropolymers, it now has the overall topography of a funnel . A sequence with a funneled landscape has a low energy native state occupied with large Boltzmann weight at temperatures high enough that folding kinetics is not dominated by slow escape from individual traps.
As an undesigned heteropolymer with a random, un-evolved sequence is cooled, it becomes trapped into one of many structurally different low energy states, similar to the phase transitions seen in spin glasses, glasses, and rubber. The low temperature states typically look like a snapshot of the high temperature collapsed states, but have dramatically slower dynamics. On the other hand, when a designed heteropolymer or protein is cooled, it reliably and quickly finds the dominant low energy structure(s) corresponding to the native state, in a manner similar to the phase transition from the gas or liquid to the crystal state. As in crystals, the low temperature states typically have a lower symmetry group than the many high temperature states . Connections have been made between native structural symmetry and robustness to mutations of proteins . Funnel topographies are maximized in atomic clusters when highly symmetric arrangements of the atoms are possible, as in van der Waals clusters with “magic numbers” , and similar arguments have been applied to proteins , where funneled landscapes are directly connected to mutational robustness .
It is appealing to make the connection between symmetry and designability of native structures to the actual kinetics of the folding process, arguing that symmetry or uniformity in ordering the protein maximizes the number of folding routes and thus the ease of finding a candidate folding nucleus, thus maximizing the folding rate. Explicit signatures of multiple folding routes as predicted by the funnel theory have been seen in simulations of well-designed proteins as well as experiments on several small globular proteins . However these folding routes are not necessarily equivalent. There is an accumulating body of experimental and simulation evidence which shows varying degrees of heterogeneity in the ordering process. These data refine the funnel picture by focusing on which parts of the protein most effectively contribute to ordering, and on the effects of native topology and native energy distribution on rates and stability. The ensemble of foldable sequences with a given ratio of $`T_\text{F}/T_\text{G}>1`$ has a wide distribution of mean first passage times , indicating that several other properties of the sequence and structure contribute to folding thermodynamics and kinetics. These include topological properties of the native structure (e.g. mean loop length $`\overline{\ell }`$, dispersion in loop length $`\delta \ell `$, and kinetic accessibility of the native structure), the distribution over contacts of total native energy in the protein, and the coupling of contact energetics with native topology.
In this paper we integrate the above sundry observations into a theory which explicitly accounts for native heterogeneity, structural and energetic, in the funnel picture. We introduce a simple field theory with a non-uniform order parameter to study fluctuations away from uniform ordering, through free energy functional methods introduced earlier by Wolynes and collaborators. [1] We treat only native couplings in detail, accounting for non-native interactions as a uniform background field. Additionally, the correlation between contacts $`(i,j)`$ is a function only of the overall order $`Q`$ in our theory. This is analogous to the Hartree approximation in the one-electron theory of solids where electrons mutually interact only through an averaged field; extensions of our theory to include correlation mediated by native structure may be examined within the density-functional framework, and are a topic of future research. On the other hand, tests of the theory by simulation (Fig. 1) produce qualitatively the same results, so the conclusions are not affected by including correlations to any order. The theory is in agreement with simulations also performed in this paper. We organize the paper as follows. First we outline the calculation and results. Next we derive and use an approximate free energy functional which captures the essence of the problem. Then we conclude and suggest future research, leaving technical aspects of the derivation for the methods section.
OUTLINE. The free energy functional description in principle allows for a fairly complete understanding of the folding process for a particular sequence; this includes effects due to the three dimensional topological native structure, possible misfolded traps, and heterogeneity among the energies of native contacts. We model a well-designed, minimally frustrated protein with an approximate functional, but many of the results we obtain are quite general. We find that for a well-designed protein, gains in loop entropy and/or core energy always dominate over losses in route entropy, so the thermodynamic folding barrier is always reduced by any preferential ordering in the protein. [2] Folding heterogeneity affects the free energy in three ways: 1) the number of folding routes to the native state decreases; this effect increases the folding barrier; 2) the conformational entropy of polymer loops increases, since native cores with larger halo entropies are more strongly weighted; this decreases the folding barrier; 3) making likely contacts stronger in energy lowers the thermal energy of partially native structures; this decreases the folding barrier. However as long as ordering heterogeneity is not too large, there are still many folding routes to the native structure, and the funnel picture is valid. When there are very few routes to the native state due to large preferential ordering, folding is slow and multi-exponential at temperatures where the native structure is stable. In this scenario the rate is governed by the kinetic traps along the path induced, rather than the putative thermodynamic barrier, which is absent. Several physically motivated arguments giving the above results are described in the supplementary material.
To analyze the effects of native energetic as well as structural heterogeneity on folding, we coarsely describe the native structure through its distributions of native contact energies $`\left\{ϵ_i\right\}`$ and native loop lengths $`\left\{\ell _i\right\}`$. Here $`ϵ_i`$ is the solvent averaged effective energy of contact $`i`$, and $`\ell _i`$ is the sequence length pinched off by contact $`i`$. The labeling index $`i`$ runs from $`1`$ to $`M`$, where $`M=zN`$ is the total number of contacts, $`N`$ is the length of the polymer, $`z`$ the number of contacts per residue. In the spirit of density functional theory of fluids we introduce a coarse-grained free energy functional $`F(\{Q_i(Q)\}|\left\{ϵ_i\right\},\left\{\ell _i\right\})`$ approximating the physics of secondary (as e.g. along a helix) and tertiary (non-local) contacts in ordering. $`Q`$ is defined as the overall fraction of native contacts made, used here to stratify the configurations with given similarity to the native state, since this partitioning results in a funnel topography of the energy landscape for designed sequences . The fraction of time contact $`i`$ is made in the sub-ensemble of states at $`Q`$ is $`Q_i(Q)`$. From a knowledge of this functional all relevant thermodynamic functions can in general be calculated such as transition state entropies and energies, barrier heights, and surface tensions. Moreover, derivatives of the functional give the equilibrium distribution and correlation functions describing the microscopic structure of the inhomogeneous system, as we see below.
Given all the contact energies $`\left\{ϵ_i\right\}`$ and loop lengths $`\left\{\ell _i\right\}`$ for a protein, the thermal distribution of contact probabilities $`\left\{Q_i\left(Q\right)\right\}`$ is found by minimizing the free energy functional $`F(\left\{Q_i\left(Q\right)\right\}|\left\{ϵ_i\right\},\left\{\ell _i\right\})`$ subject to the constraint that the average probability is $`Q`$, i.e., $`\sum _iQ_i=MQ`$ ($`Q`$ parameterizes the values of the $`Q_i`$'s). [3] This procedure is analogous to finding the most probable distribution of occupation numbers, and thus the thermodynamics, by maximizing the microcanonical entropy for a system of particles obeying a given occupation statistics - here the effective particles (the contacts) obey Fermi-Dirac statistics, c.f. eq. (7). Since in the model the probability of a contact to be formed is a function of its energy and loop length, we can next consider the minimized free energy as a function of the contact energies for a given native topology: $`F(\left\{ϵ_i\right\}|\left\{\ell _i\right\})`$. Then we can seek the special distribution of contact energies $`\{ϵ_i^{\prime }(\ell _i)\}`$ that minimizes or maximizes the thermodynamic folding barrier to a particular structure by finding the extremum of $`F^{\prime }(\left\{ϵ_i\right\}|\left\{\ell _i\right\})`$ with respect to the contact energies $`ϵ_i`$, subject to the constraint of fixed native energy, $`\sum _iϵ_i=M\overline{ϵ}=E_\text{N}`$. This distribution, when substituted into the free energy, gives in principle the extremum free energy barrier as a function of native structure, $`F^{\prime }(\left\{\ell _i\right\})`$, which might then be optimized for the fastest/slowest folding structure and its corresponding barrier. We found that the only distribution of energies for which the free energy is an extremum is in fact the distribution which maximizes the barrier by tuning all the contact probabilities to the same value.
METHODS. We derive an approximate free energy functional, which accounts for ordering heterogeneity, starting from a contact Hamiltonian $`ℋ(\{\mathrm{\Delta }_{\alpha \beta }\}|\{\mathrm{\Delta }_{\alpha \beta }^\text{N}\})`$ of the form
$$ℋ=\sum _{\alpha <\beta }\left[ϵ_{\alpha \beta }^\text{N}\mathrm{\Delta }_{\alpha \beta }\mathrm{\Delta }_{\alpha \beta }^\text{N}+ϵ_{\alpha \beta }\mathrm{\Delta }_{\alpha \beta }\left(1-\mathrm{\Delta }_{\alpha \beta }^\text{N}\right)\right]$$
(1)
Here the double sum is over residue indices, $`\mathrm{\Delta }_{\alpha \beta }=1`$ ($`0`$) if residues $`\alpha `$ and $`\beta `$ (do not) contact each other, $`\mathrm{\Delta }_{\alpha \beta }^\text{N}=1`$ ($`0`$) if these residues (do not) contact each other in the native configuration. The sum over native energies $`ϵ_{\alpha \beta }^\text{N}`$ and non-native energies $`ϵ_{\alpha \beta }`$ gives the energy for a particular configuration. <sup>4</sup><sup>4</sup>4A similar derivation of the free energy for a uniform order parameter $`Q`$ was calculated in ref. . To obtain the thermodynamics we proceed by obtaining the distribution of state energies in the microcanonical ensemble by averaging non-native interactions over a Gaussian distribution of variance $`b^2`$: $`P(E|E_\text{N},\{\mathrm{\Delta }_{\alpha \beta }\mathrm{\Delta }_{\alpha \beta }^\text{N}\})=\delta [E\{\mathrm{\Delta }_{\alpha \beta }\}]\delta [E_\text{N}\{\mathrm{\Delta }_{\alpha \beta }^\text{N}\}]_{nnat}`$<sup>5</sup><sup>5</sup>5 This approach assumes minimal frustration, in that native heterogeneity is explicitly retained and non-native heterogeneity is averaged over; phenomena specific to a particular set of non-native energies, e.g. “off-pathway” intermediates, are smoothed over in this procedure. The averaging results in a Gaussian distribution having mean $`_iϵ_i𝒬_i`$ and variance $`Mb^2(1Q)`$, where $`𝒬_i\mathrm{\Delta }_{\alpha \beta }\mathrm{\Delta }_{\alpha \beta }^\text{N}`$ counts native contacts present in the configuration state inside the stratum $`Q`$. From this distribution the log density of states is obtained in terms of the configurational entropy of stratum $`Q`$, $`S(\{𝒬_i\}|Q)`$, and the free energy functional $`F(\{𝒬_i\}|Q)`$ obtained by performing the usual Legendre transform to the canonical ensemble (c.f. eq (4)). <sup>6</sup><sup>6</sup>6Note that in eq. (4) we explicitly include the thermal trace over configurations at overall order $`Q`$.
We express the free energy in terms of an arbitrary distribution of contact probabilities - the distribution of $`\{Q_i\}`$ that minimizes $`F(\{Q_i\}|Q)`$ is the (most probable) thermal distribution. [7] In the contact representation, the averaged bond occupation probabilities $`Q_i=\langle 𝒬_i\rangle _{\text{TH}}`$ are analogous to the averaged number density operator in an inhomogeneous fluid: $`\langle n(𝐱)\rangle _{\text{TH}}=\langle \sum _i\delta (𝐱_i-𝐱)\rangle _{\text{TH}}`$. For the ensemble of configurations at $`Q`$, we define the entropy that corresponds to the multiplicity of contact patterns as $`𝒮_{\text{ROUTE}}(\{Q_i\}|Q)`$ ($`>0`$), and the configurational entropy lost from the coil state to induce a contact pattern $`\{Q_i\}`$ as $`𝒮_{\text{BOND}}(\{Q_i\}|\left\{\ell _i\right\},Q)`$ ($`<0`$). We make no capillarity or spinodal assumption, and treat the route entropy as the entropy of a binary fluid mixture , modified by a prefactor $`\lambda (Q)\equiv 1-Q^\alpha `$, which measures the number of combinatoric states reduced by chain topology: residues connected by a chain have less mixing entropy than if they were free. [8] The value $`\alpha =1.37`$ gives the best fit to the lattice $`27`$-mer data for the route entropy, while $`\alpha \simeq 1.0`$ best fits the $`27`$-mer free energy function. We generally use $`\alpha \simeq 1.0`$ since the $`27`$-mer is small - for larger systems $`\alpha `$ is smaller: more polymer is buried and thus more strongly constrained by surrounding contacts. The route entropy is then
$$𝒮_{\text{ROUTE}}=\lambda \left(Q\right)\sum _{i=1}^{M}\left[-Q_i\mathrm{ln}Q_i-(1-Q_i)\mathrm{ln}\left(1-Q_i\right)\right].$$
(2)
We introduce a measure of “routing” $`ℛ(Q)`$ by expanding the entropy to lowest order [9] (we avoid the word “pathway” since several definitions exist in the literature; here a single route is unambiguously defined through the limit $`𝒮_{\text{ROUTE}}\to 0`$): $`𝒮_{\text{ROUTE}}(\{Q+\delta Q_i\})\simeq 𝒮_{\text{ROUTE}}^{\text{MAX}}-\frac{1}{2}M\lambda (Q)ℛ`$, where we have defined $`ℛ(Q)`$ by $`ℛ(Q)=\overline{\delta Q^2}/\overline{\delta Q^2}_{\text{MAX}}`$, which is the variance of contact probabilities normalized by the maximal variance. [10] That is, if $`MQ`$ contacts were made with probability $`1`$ and $`M-MQ`$ contacts were made with probability $`0`$, then $`\overline{(Q_i-Q)^2}_{\text{MAX}}=(1/M)(MQ(1-Q)^2+(M-MQ)Q^2)=Q(1-Q)`$. Thus $`ℛ(Q)`$ is between $`0`$ and $`1`$. In the limit $`ℛ(Q)=0`$ the uniformly ordering system has the maximal route entropy. When $`Q_i=0`$ or $`1`$ only, $`ℛ(Q)=1`$, $`𝒮_{\text{ROUTE}}=0`$, and only one route to the native state is allowed. [11] That is, since all $`Q_i`$ are only zero or one at any degree of nativeness, each successive bond added must always be the same one, so folding is then a random-walk on the potential defined by that single route (there is still chain entropy present). $`ℛ(Q)`$ is in the spirit of a Debye-Waller factor applied to folding routes.
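As a small illustration (ours, with made-up contact probabilities), both quantities are direct to evaluate from a set of $`Q_i`$:

```python
# Sketch: route entropy (Eq. 2) and the narrowness measure R(Q) (Eq. 5)
# from a set of contact probabilities Q_i at fixed overall Q.
import numpy as np

def route_entropy(Qi, lam):
    Qi = np.clip(Qi, 1e-12, 1 - 1e-12)
    return lam * np.sum(-Qi * np.log(Qi) - (1 - Qi) * np.log(1 - Qi))

def route_measure(Qi):
    Q = Qi.mean()
    return Qi.var() / (Q * (1 - Q))   # 0: uniform ordering, 1: single route

Qi = np.array([0.9, 0.8, 0.3, 0.2, 0.3])   # hypothetical contact probabilities
print(route_measure(Qi), route_entropy(Qi, lam=1 - Qi.mean()))
```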
In the supplementary material we derive a form for the configurational entropy loss to fold to a given topological structure by accounting for the distribution of entropy losses to form bonds or contacts due to the distribution of sequence lengths in that structure. We let the effective sequence (loop) length between residues $`i`$ and $`j`$, $`\ell _{\text{EFF}}(|i-j|,Q)`$, be a function of $`Q`$ (this is a mean field approximation), and we take the entropy loss to close this loop to be of the Flory form $`(3/2)\mathrm{ln}(a/\ell _{\text{EFF}})`$. The requirement that the entropy be a state function restricts the possible functional form of the effective loop length. The result of the derivation for the contact entropy loss to form state $`\{Q_i\}`$ is
$$𝒮_{\text{BOND}}=(3/2)M\left(-\overline{\delta Q\delta \mathrm{ln}\ell }+S_{\text{MF}}(Q,\overline{\ell })\right)$$
(3)
where $`\overline{\delta Q\delta \mathrm{ln}\ell }=(1/M)\sum _i(Q_i-Q)(\mathrm{ln}\ell _i-\overline{\mathrm{ln}\ell })`$ is the correlation between the fluctuations in contact probability and log loop length, and $`S_{\text{MF}}(Q,\overline{\ell })`$ is the mean-field bond entropy loss (described in the supplement), which is a function only of $`Q`$ and the mean loop length $`\overline{\ell }`$. By eq. (3) the entropy is raised above that of a symmetrically ordering system when shorter ranged contacts have a higher probability of being formed; this effect lowers the barrier. Eqs. (4), (2), and (3) together give expression (6) for the free energy $`F(\{Q_i(Q)\}|\left\{ϵ_i\right\},\left\{\ell _i\right\})`$ of a well-designed protein that orders heterogeneously.
The lattice protein used in Fig. 1 to check the theory is a chain of $`27`$ monomers constrained to the vertices of a 3-D cubic lattice. Details of the model and its behavior can be found in . Monomers have non-bonded contact interactions with a Gō potential (native interactions only). [12] Corner, crankshaft, and end moves are allowed. Free energies and contact probabilities are obtained by equilibrium Monte Carlo sampling using the histogram method . Sampling error is $`<5\%`$. Coupling energies were chosen for row 1 of Fig. 1 by first running a simulated annealing algorithm to find the set $`\{ϵ_i^{\prime }\}`$ that makes all the $`Q_i(\{ϵ_i^{\prime }\})=Q^{\prime }`$ at the barrier peak. Energies are always constrained to sum to a fixed total native energy: $`\sum _iϵ_i=M\overline{ϵ}`$. Then energies were relaxed by letting $`ϵ_i=ϵ_i^{\prime }+\alpha (\overline{ϵ}-ϵ_i^{\prime })`$. The values $`\alpha =1`$, $`1.35`$, $`2.05`$ were used in rows 2, 3, and 4 respectively.
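The relaxation family itself is easy to sketch (the tuned set $`\{ϵ_i^{\prime }\}`$ below is a random stand-in, not the annealed set used for the figure):

```python
# Sketch: the fixed-sum relaxation family used to generate the rows of Fig. 1,
#   eps_i(alpha) = eps_i' + alpha * (eps_bar - eps_i'),
# which leaves sum_i eps_i = M * eps_bar unchanged for every alpha.
import numpy as np

rng = np.random.default_rng(7)
eps_prime = -1.0 + 0.3 * rng.normal(size=28)   # hypothetical tuned set {eps_i'}
eps_bar = eps_prime.mean()

for alpha in (0.0, 1.0, 1.35, 2.05):           # alpha values quoted in the text
    eps = eps_prime + alpha * (eps_bar - eps_prime)
    print(alpha, round(eps.sum(), 6), round(eps.std(), 3))  # fixed sum, varying spread
```

At $`\alpha =1`$ the energies are all equal to $`\overline{ϵ}`$ (row 2), while $`\alpha >1`$ anticorrelates the energies about the mean.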
FREE ENERGY FUNCTIONAL. By averaging a contact Hamiltonian over non-native interactions, we can derive an approximate free energy functional for a well-designed protein (See the methods section). We analyze here heterogeneity in minimally frustrated sequences, where the roughness energy scale $`b`$ is smaller than the stability gap $`\overline{ϵ}`$. The general form of the free energy functional is
$$F=\left\langle \sum _{i=1}^{M}\left[ϵ_i𝒬_i-TS\left(\{𝒬_i\}|Q\right)\right]\right\rangle _{\text{THERM}}^{\prime }-\frac{Mb^2}{2T}\left(1-Q\right)$$
(4)
where $`𝒬_i=(0,1)`$ counts native contacts in a configurational state (so the sum on $`ϵ_i𝒬_i`$ gives the state's energy), summing $`S\left(\{𝒬_i\}|Q\right)`$ gives the state's configurational entropy, and then this is thermally averaged over all states restricted to have $`MQ`$ contacts. The second term accounts for low energy non-native traps.
The study of the configurational entropy is a fascinating but complicated problem detailed in the methods section. In summary, this entropy functional generalizes the Flory mean-field result to account for the topological heterogeneity inherent in the native structure and a finite average return length for that structure (contact order ), as well as to account for the number of folding routes to the native structure. The amount of route diversity or narrowness in folding can be quantified in terms of the relative fluctuations of contact formation $`\delta Q_i=Q_i(Q)-Q`$:
$$ℛ(Q)=\overline{\delta Q^2}/\overline{\delta Q^2}_{\text{MAX}},$$
(5)
which is useful for our analysis below. Our resulting analytic expression for the free energy of a protein that folds heterogeneously is [13] (we have expanded the route entropy eq. (2) to second order in this expression for clarity; in deriving the results of the theory the full expression is used):
$$\frac{F}{M}\simeq \frac{F_{\text{MF}}^o}{M}+\overline{\delta Q\delta ϵ}+\frac{\lambda T}{2}\frac{\overline{\delta Q^2}}{Q\left(1-Q\right)}+\frac{3}{2}T\overline{\delta Q\delta \mathrm{ln}\ell }.$$
(6)
Here $`F_{\text{MF}}^o(Q)`$ is the uniform-field free energy function (similar to that obtained previously in ). The free energy functional is approximate in that it results from an integration over a local free energy density whose only information about the surrounding medium is through the average field present ($`Q`$), $`F=\sum _if_i(Q_i,Q)`$. Cooperative entropic effects due to local correlations between contacts would be an important extension of the model, and have been treated elsewhere in similar models . Inspection of eq. (6) shows that as heterogeneity increases, the effect on the barrier is a competition between energetic and polymer entropy gains (2nd and 4th terms) and route entropy losses (3rd term) as described above.
Minimizing the free energy (6) at fixed $`Q`$, $`\delta (F+\mu \sum _jQ_j)=0`$, gives a Fermi distribution for the most probable bond occupation probabilities $`\left\{Q_i^{\prime }\right\}`$ for a given $`\left\{ϵ_i\right\}`$ and $`\left\{\ell _i\right\}`$:
$$Q_i^{\prime }(Q)=1/\left(1+\mathrm{exp}\left[\left(\mu ^{\prime }+ϵ_i-Ts_i\right)/\lambda T\right]\right)$$
(7)
where the Lagrange multiplier $`\mu ^{\prime }=-(1/M)\partial F/\partial Q`$ is related to the effective force on the potential $`F(Q)`$. Positive second variation of $`F`$ indicates the extremum is in fact a minimum.
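A minimal numerical sketch (ours) of this constrained minimization: the Fermi form (7) is monotonic in the multiplier, so $`\mu `$ can be found by bisection. The energies, loop lengths, and the Flory form for $`s_i`$ below are assumptions of the sketch.

```python
# Sketch: solve Eq. (7) at fixed overall Q by bisection on the Lagrange
# multiplier mu, for given contact energies eps_i and loop entropies s_i,
# with s_i = (3/2) ln(a / l_i) taken in the Flory form from the text.
import numpy as np

def contact_probs(mu, eps, s, T, lam):
    return 1.0 / (1.0 + np.exp((mu + eps - T * s) / (lam * T)))

def solve_Qi(Q, eps, s, T, alpha=1.0):
    lam = 1.0 - Q**alpha                  # lambda(Q) = 1 - Q^alpha
    lo, hi = -50.0, 50.0                  # bracket for mu
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if contact_probs(mu, eps, s, T, lam).mean() > Q:
            lo = mu                        # raising mu lowers all Q_i
        else:
            hi = mu
    return contact_probs(mu, eps, s, T, lam)

rng = np.random.default_rng(4)
eps = -1.0 + 0.2 * rng.normal(size=28)    # hypothetical native energies
s = 1.5 * np.log(1.0 / rng.integers(3, 20, size=28))  # a = 1, loops l_i
Qi = solve_Qi(0.5, eps, s, T=1.0)
print(Qi.mean(), Qi.var() / 0.25)         # check <Q_i> = Q; route measure R
```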
OPTIMIZING RATES, STABILITY, AND ENTROPY We now consider the effects on the free energy when the native interactions between residues are changed in a controlled manner. The theory predicts a barrier at the transition temperature of a few $`k_\text{B}T`$, in general agreement with experiments on small, single-domain proteins. The barrier height is fairly small compared to the total thermal energy of the system, reflecting the exchange of entropy for energy as the protein folds. However the barrier height can vary significantly depending on which parts of the protein are more stable in their local native structure. At uniform stability we find the largest barrier (for a given total native energy): about twice as large as the barrier when stability is governed purely by the three-dimensional native structure, i.e. when all interaction energies are equal. Increasing heterogeneity, by energetically favoring regions of the protein which are already entropically likely to order, systematically decreases the barrier, and in fact can eliminate the barrier entirely if the heterogeneity is large enough. See figure 1.
We seek to relax the values of $`\left\{ϵ_j\right\}`$ at fixed native energy $`E_\text{N}=\sum _jϵ_j`$ to the distribution $`\left\{ϵ_i^{\prime }\left(\left\{\ell _j\right\}\right)\right\}`$ that extremizes the free energy barrier, by finding the solution of $`\sum _i[\delta F^{\prime }/\delta ϵ_i-p]\delta ϵ_i=0`$ for arbitrary and independent variations $`\delta ϵ_i`$ in the energies. It can be shown that $`\delta F/\delta ϵ_i=\partial F/\partial ϵ_i+\mu (\delta /\delta ϵ_i)\sum _jQ_j`$; however, the second term is zero since $`\delta Q/\delta ϵ_i=0`$, so by eq. (4) $`\delta F/\delta ϵ_i=Q_i`$: the contact probability plays the role of the local density, and the perturbation $`\delta ϵ_i`$ the role of an external field, as in liquid state theory. At the extremum all contact probabilities are equal: $`Q_i=p=Q^{\prime }`$, which in our model means that longer loops have lower (stronger) energies: $`\delta ϵ_i=T\delta s_i=-(3/2)T\delta \mathrm{ln}\ell _i`$; there is full symmetry in the ordering of the protein at the extremum. Evaluating the second derivative mechanical-stability matrix shows $`Q_i=Q^{\prime }`$ to be an unstable maximum:
$$\left(\delta ^2F^{\prime }/\delta ϵ_j\delta ϵ_i\right)_{ϵ_i^{\prime },ϵ_j^{\prime }}=-\delta _{ij}Q^{\prime }\left(1-Q^{\prime }\right)/\lambda ^{\prime }T.$$
(8)
This is clearly negative, meaning that tuning the energies so that $`Q_i=Q^{\prime }`$ maximizes the free energy at the barrier peak. Since the change in the unfolded state (at $`Q\simeq 0`$) is much weaker than at the transition state, the barrier height itself is essentially maximized. Substituting eq. (8) into a Taylor expansion of the free energy at the extremum (and using $`\lambda ^{\prime }=\lambda (Q^{\prime })\simeq 1-Q^{\prime }`$) gives for the rate
$$k=k_{\text{HOMO}}\mathrm{exp}(Q^{\prime }M\overline{\delta ϵ^2}/2T^2),$$
(9)
which is to be compared with eq. (1) in the supplementary material (obtained by an argument using the random energy model). In terms of the route narrowness measure $`ℛ(Q)`$ the change in free energy barrier on perturbation is
$$\delta \mathrm{\Delta }F^{\prime }=-(1/2)M\lambda ^{\prime }Tℛ(Q^{\prime }).$$
(10)
A variance in contact participations $`\overline{\delta Q^2}=0.05`$, which is about $`20\%`$ of the maximal dispersion ($`1/4`$, taking $`Q^{\prime }\simeq 1/2`$), lowers the barrier by about $`0.1Nk_\text{B}T`$ or about $`5k_\text{B}T`$ for a chain of length $`N\simeq 50`$ (believed to model a protein with $`\simeq 100`$ aa ).
We can extend the analysis by perturbing about a structure with mean loop length $`\overline{\ell }`$, and including effects on the barrier due to dispersion in loop length and correlations between energies and loop lengths. A perturbation expansion of the free energy gives to lowest order:
$$\frac{\delta \mathrm{\Delta }F^{\prime }}{M}=-\frac{Q^{\prime }}{2T}\overline{\delta ϵ^2}-T\frac{9}{8}Q^{\prime }\frac{\overline{\delta \ell ^2}}{\overline{\ell }^2}-\frac{3}{4}Q^{\prime }\frac{\overline{\delta \ell \delta ϵ}}{\overline{\ell }}$$
(11)
indicating that the free energy barrier is additionally lowered by structural variance in loop lengths, and also when shorter range contacts become stronger energetically ($`\delta \ell _i<0`$ and $`\delta ϵ_i<0`$) or longer range contacts become weaker energetically ($`\delta \ell _i>0`$ and $`\delta ϵ_i>0`$), i.e. in the model the free energy is additionally lowered when fluctuations are correlated so as to further increase the variance in contact participations. This effect has been seen in experiments by the Serrano group .
To test the validity of the theory, we compare the analytical results obtained from our theory with the results from simulation of a 27-mer lattice protein model. The comparison is shown on figure 1 where a full analysis is performed. All energies are in units of the mean native interaction strength $`\overline{ϵ}`$.
The rate dependence on heterogeneity should be experimentally testable by measuring the dependence of the folding rate, at the transition temperature of a well-designed protein, on the dispersion of $`\varphi `$-values. It is important that before and after the mutation(s) the protein remains fast-folding to the native structure without “off-pathway” intermediates, and that its native state enthalpy remain approximately the same, perhaps by tuning environmental variables.
CONCLUSIONS AND FUTURE WORK. In this paper we have introduced refinement and insight into the funnel picture by considering heterogeneity in the folding of well-designed proteins. We have explored in minimally frustrated sequences how folding is affected by heterogeneity in native contact energies, as well as the entropic heterogeneity inherent in folding to a specific three-dimensional native structure. Specifically we examined the effects on the folding free energy barrier, the distribution of participations in the transition state ensemble TSE′ [14] (we use a prime since we actually look at the barrier peak along the $`Q`$ coordinate), as well as the diversity or narrowness of folding routes. For the ensemble of sequences having a given $`T_\text{F}/T_\text{G}`$, homogeneously ordering sequences have the largest folding free energy barrier. For most structures, where topological factors play an important role, this regime is achieved by introducing a large dispersion in the distribution of native contact energies which in practice would be almost impossible to achieve. As we reduce the dispersion in the contact energy distribution to a uniform value $`\overline{ϵ}`$, the dispersion of contact participations increases and thus the number of folding routes decreases, the free energy barrier decreases, and the total configurational entropy at the TSE′ initially increases due to polymer halo effects. The folding temperature is only mildly affected; the prefactor appearing in the rate is probably only mildly affected also, since it is largely a function of $`T_\text{F}/T_\text{G}`$ and polymer properties . Tuning the interaction energies further results in more probable contacts having stronger energy. Route diversity decreases to moderate values - there are still many routes to the native state, and $`T_\text{F}/T_\text{G}`$ is still sufficiently greater than one. The barrier eventually decreases to zero, at relatively mild dispersion in native contact energy. The funnel picture, with different structural details, is valid for the above wide range of native contact energy distributions. However, tuning the energies further so that probable contacts have even lower energy eventually induces the system to take a single or very few folding routes at the transition temperature. A large dispersion of energies is required to achieve this, and in this regime the folding temperature drops well below the glass temperature range, where folding rates are extremely slow.
Since fine tuning interactions on the funnel may affect the rate, sequences may be designed to fold either faster or slower to the same structure as a wild-type sequence, depending on how the interaction strengths correlate with the entropic likelihood of contact formation. Folding rates in mutant proteins that exceed those of the wild type have been receiving much interest in recent experiments . Enhancement (or suppression) of folding rate to a given structure due to changes in sequence is modeled in our theory through changes in native interactions; our results are fully supported by the experiments cited above. The fact that a minimally frustrated protein is robust to perturbations in the interactions means that at least the folding scenarios depicted in the center 2 rows of fig. 1 are feasible within the ensemble of sequences that fold to the given structure. However the number of sequences should be maximal when all the native interactions are near their average, and the actual width of the native interactions depends on the true potential energy function. Fluctuations in rate due to the weakening or strengthening of non-native traps by sequence perturbations is an interesting topic of future research. The enhancements or reductions in rate we have explored are mild compared to the enhancement by minimal frustration (funneling the landscape): the fine tuning of rates may be a phenomenon manifested by in vitro or in machina evolution, rather than in vivo evolution. Nevertheless rate tuning and folding heterogeneity may become an important factor for larger proteins, where e.g. stabilizing partially native intermediates may increase the overall rate or prevent aggregation. Given that a sequence is minimally frustrated, heterogeneity or broken-ordering-symmetry in fact aids folding, similar to the enhancement of nucleation rates seen in other disordered media . Similar effects have been observed in Monte Carlo simulations of sequence evolution, when the selection criteria involve fast folding rate . Here we see how such phenomena can arise from general considerations of the energy landscape theory. The notion that rates increase with heterogeneity at little expense to native stability contrasts with the view that non-uniform ordering exists merely as a residual signature of incomplete evolution to a uniformly folding state. Adjusting the backbone rigidity or the non-additivity of interactions can also modify the barrier height, possibly as much as the effects we are considering here. There may also be functional reasons for non-uniform folding - malleability or rigidity requirements of the active site may inhibit or enhance its tendency to order. The amount of route narrowness in folding was introduced as a thermodynamic measure through the mean square fluctuations in a local order parameter. The route measure may be useful in quantifying the natural kinetic accessibility of various structures. While structural heterogeneity is essentially always present, the flexibility inherent in the number of letters of the sequence code limits the amount of native energetic heterogeneity possible. However some sequence flexibility is in fact required for funnel topographies and so is probably present at least to a limited degree.
We have seen here how a very general theoretical framework can be introduced to explain and understand the effects of local heterogeneity in native stability and structural topology on such quantities as folding rates, transition temperatures, and the degree of routing in the funnel folding mechanism. Such a theory should be a useful guide in interpreting and predicting experimental results on many fast-folding proteins.
We thank Peter Wolynes, Hugh Nymeyer, Cecilia Clementi, and Chinlin Guo for their generous and insightful discussions. This work was initiated while Plotkin was a graduate student with Peter Wolynes. This work was supported by NSF Grant MCB9603839 and NSF Bio-Informatics fellowship DBI9974199.
CAPTION FOR FIG. 1:
The effects of heterogeneity in contact probability (increased from top to bottom) on barrier height $`F^{\prime }`$, folding temperature $`T_\text{F}`$, and ordering heterogeneity are summarized here; plots are for simulations of a $`27`$-mer lattice Gō model (yellow) to the same native structure (given in ), and for the analytic theory in the text (red). The simulation results make no assumptions on the nature of the configurational entropy; the theoretical results use the approximate state function of eq. (3), along with a cutoff used for the shorter loops so that the bond entropy loss for each loop is always $`\ge 0`$ (the same loop length distribution as in the lattice structure is used). In the top row, energies are tuned for both simulation and theory to fully symmetrize the funnel: $`Q_i(ϵ_i^{\prime })=Q`$; Second row: energies are then relaxed for the simulation results so they are all equal: $`ϵ_i=\overline{ϵ}`$; energies in the theory are relaxed the same way until a comparable $`T_\text{F}`$ is achieved; Third row: energies are then further tuned to a distribution $`ϵ_i\to ϵ_i^o`$ that kills the barrier (there are many such distributions - all that is necessary is sufficient contact heterogeneity); The top 3 rows are funneled folding mechanisms with many routes to the native structure. Last row: energies are tuned to induce a single or a few specific routes for folding. All the while the energies are constrained to sum to $`E_\text{N}`$: $`\sum _iϵ_i=E_\text{N}`$. The free energy profile $`F(Q)`$ (in units of $`\overline{ϵ}`$) is plotted in the left column at the folding transition temperature $`T_\text{F}`$, which is given. The next column shows the distribution of thermodynamic contact probabilities $`Q_i(Q^{\prime })\equiv \varphi ^{\prime }`$ at the barrier peak (we use the notation $`\varphi ^{\prime }`$ since this is a thermodynamic rather than kinetic measurement; however, for well-designed proteins the two are strongly correlated, with coefficient $`\approx 0.85`$ ). Only simulation results are shown to keep the figure easy to read; the theory gives $`\varphi ^{\prime }`$ distributions within $`10\%`$, as may be inferred from their similar route measures. The next column shows the route measure $`\mathcal{R}(Q)`$ of eq. (5) and gives the dispersion in native energies required to induce the scenario of that row ($`\mathcal{R}(0,1)=0/0`$ is undefined and so is omitted from the simulation plots; it is defined in the theory through the limit $`Q\to 0,1`$). The right column shows schematically the different folding routes as heterogeneity is increased; from a maximum number of routes through $`Q^{\prime }`$ to essentially just one route. TOP ROW: In the uniformly ordering funnel we can see first that $`P(\varphi ^{\prime })`$ is a delta function and $`\mathcal{R}(Q^{\prime })=0`$ (cf. eq. (5)), so ordering at the transition state (or barrier peak $`Q^{\prime }`$) is essentially homogeneous. The number of routes through the bottleneck (cf. eq. (2)) is maximized, as schematically drawn on the right. Branches are drawn in the routes to illustrate the minimum of $`\mathcal{R}(Q)`$ at $`Q^{\prime }`$. The free energy barrier is maximized (eq. (10)), thus the stability of the native state at fixed temperature and native energy is maximized, and so the folding temperature $`T_\text{F}`$ at fixed native energy is maximized. $`T_\text{F}`$ in the simulation is defined as the temperature where the native state ($`Q=1`$) is occupied $`50\%`$ of the time. In the theory, at $`T_\text{F}`$ the probability for $`Q\ge 0.8`$ is $`0.5`$.
A very large dispersion in energies is required to induce this scenario; some contact energies are nearly zero, others are several times stronger than the average. SECOND ROW: In the uniform native energy funnel the barrier height is roughly halved while hardly changing $`T_\text{F}`$, for the following reason. In a Gō model, as the contact energies are relaxed from $`\{ϵ_i^{\prime }\}`$ to a uniform value $`ϵ_i=\overline{ϵ}`$, the energy of the transition state is essentially constant: initially the energy is $`\sum _iQ_i(Q^{\prime })ϵ_i^{\prime }=Q\sum _iϵ_i^{\prime }=QE_\text{N}`$, and after the contact energies are relaxed to a uniform value it is $`\sum _iQ_i\overline{ϵ}=\overline{ϵ}\sum _iQ_i=QE_\text{N}`$ once again. However the transition state entropy increases and obtains its maximal value when $`ϵ_i=\overline{ϵ}`$, because then all microstates at $`Q^{\prime }`$ are equally probable, since the probability to occupy a microstate is $`p_i\propto \mathrm{exp}(-E_i(Q^{\prime })/T)=\mathrm{exp}(-QE_\text{N}/T)/Z=1/\mathrm{\Omega }(Q^{\prime })`$. The thermal entropy $`-\sum _ip_i\mathrm{log}p_i`$ then equals the configurational entropy $`\mathrm{log}\mathrm{\Omega }(Q^{\prime })`$ (its largest possible value). Thus as contact energies are relaxed from $`ϵ_i^{\prime }`$, where they are anti-correlated to their loop lengths (more negative energies tend to be required for longer loops to have equal free energies), to $`\overline{ϵ}`$, where they are uncorrelated to their loop lengths, the barrier initially decreases because the total entropy of the bottleneck increases (drawn schematically on the right), i.e. increases in polymer halo entropy are more important than decreases in route entropy. The system is still sufficiently two-state that $`T_\text{F}`$ is hardly changed. $`P(\varphi ^{\prime })`$ is broad, indicating inhomogeneity in the transition state, due solely in this scenario to the topology of the native structure since all contacts are equivalent energetically; routing is more pronounced - when $`ϵ_i=\overline{ϵ}`$, $`\mathcal{R}(Q)`$ is a measure of the intrinsic fluctuations in order due to the natural inhomogeneity present in the native structure; different structures will have different profiles and it will be interesting to see how this measure of structure couples with the thermodynamics and kinetics of folding. Loops and dead ends in the schematic drawings are used to illustrate local decreases and increases in $`\mathcal{R}(Q)`$; these fluctuations are captured by the theory only when the routing becomes pronounced (last row). The solid curves presented for the theory are shown for a reduction in $`T_\text{F}`$ comparable to the simulations. There is still some energetic heterogeneity present, as indicated. When $`ϵ_i=\overline{ϵ}`$ in the theory (dashed curves), the fluctuations in $`Q_i`$ are somewhat larger than the simulation values, and the entropic heterogeneity is sufficient to kill the barrier - the free energy is downhill at $`T_\text{F}\approx 0.5\overline{ϵ}`$. The free energy barrier results from a cancellation of large terms and is significantly more sensitive than intensive parameters such as the route measure $`\mathcal{R}(Q)`$. THIRD ROW: In approaching the zero-barrier funnel scenario for the simulation, the energies are further perturbed and now begin to anti-correlate with contact probability (and tend to correlate with loop length); i.e. more probable contacts (which tend to have shorter loops) have stronger energies. For the theory not as much heterogeneity is required. Contact energies are still correlated with formation probability, as indicated by the signs in parentheses.
The free energy barrier continues to decrease until some set of energies $`\{ϵ_i^o\}`$ is reached where the barrier at $`T_\text{F}`$ vanishes entirely. All the while the transition temperature $`T_\text{F}`$ decreases by only $`10\%`$, so that slowing of the dynamics (as $`T_\text{F}`$ approaches $`T_\text{G}`$) would not be a major factor. At this point the $`\varphi ^{\prime }`$ distribution at the barrier position $`Q^{\prime }(\overline{ϵ})`$ is essentially bi-modal, but the distribution at $`Q^{\prime }(\{ϵ_i^o\})`$ (inset) is less so because of transition state drift towards lower $`Q`$ values (the Hammond effect). A relatively small amount of energetic heterogeneity is needed to kill the barrier at $`T_\text{F}`$. There are still many routes to the native state since $`\mathcal{R}(Q^{\prime })\approx 0.3-0.4`$, but some contacts are fully formed in the transition state (some $`\varphi ^{\prime }\approx 1`$). BOTTOM ROW: As the energies continue to be perturbed to values that cause folding to occur by a single dominant route rather than a funnel mechanism, folding becomes strongly downhill at the transition temperature, which drops more sharply towards $`T_\text{G}`$: here, to induce a single pathway, $`T_\text{F}`$ must be decreased to about $`1/4`$ the putative estimate of $`T_\text{G}`$ (about $`T_\text{F}(\{\overline{ϵ}\})/1.6`$, see ). In this scenario, the actual shape of the free energy profile depends strongly on which route the system is tuned to; non-native interactions not included here become important. Contact participation at the barrier is essentially one or zero, and the route measure at the barrier is essentially one. The entropy at the bottleneck is relatively small (the halo entropy of a single native core). The energetic heterogeneity necessary to achieve this scenario is again very large - comparable to what is needed to achieve a uniform funnel.
# Numerical simulations of directed sandpile models
## I Introduction
The classification of sandpile models into their different universality classes has been a topic of intensive research in the last decade . However, most of the work has been devoted to models with undirected toppling, while their corresponding directed variants have been less studied. From the theoretical side we have the exact solution for the directed BTW model obtained by Dhar and Ramaswamy , which can be taken as a reference for numerical simulations. On the other hand, Pastor-Satorras and Vespignani have recently reported numerical simulations for the Manna model , which provide clear evidence that the directed BTW and Manna models belong to different universality classes. Other studies include a directed model with a probabilistic toppling and a recent report concerning the effect of local dissipation on the SOC state .
The analysis of other directed models, the non-Abelian Zhang model for instance, is of great interest and may shed light on the study of their corresponding undirected variants. With this aim, three different directed models are investigated by means of numerical simulations. These include the well known Zhang model , the random threshold model (RT) , and the BTW model under a uniform driving. The influence of a uniform driving in the Zhang model has already been discussed in the literature , although some aspects are still not clear. However, the same analysis has not been made for models with a discrete toppling rule, like the BTW model, where the energy transfer always takes place in discrete units. In this direction Narayan and Middleton suggested that the BTW model under noisy and uniform driving has the same critical behavior.
From the numerical data it is concluded that the Manna and RT directed models belong to the same universality class, in agreement with the general belief for the corresponding undirected variants . Moreover, the data obtained for the Zhang model do not satisfy finite-size scaling, which suggests that this model is in a different universality class from that of the models mentioned above.
In the case of the BTW model under uniform driving it is shown, after some algebra with the toppling operators , that its evolution is periodic in time, with a period scaling linearly with the number of lattice sites. In spite of this periodicity, which is not present in the original model with noisy driving , the statistics of the avalanches is found to be practically identical to that of the noisy-driving counterpart.
## II Models and simulations
### A Models with noisy driving
Consider a square lattice of $`L^2`$ sites labeled by the index $`(i,j)`$ ($`i,j=1,\dots ,L`$) and assign a variable $`z_{ij}`$ to each of them. $`z_{ij}`$ can be continuous or discrete and may have different interpretations depending on the system one is modeling. It will be referred to here as the energy stored at the corresponding site. The geometry used is shown in fig. 1, in which a site can transfer energy only to its three downward nearest neighbors (nn). In the horizontal direction periodic boundary conditions are considered, while the downward boundary is taken open. This geometry allows a natural implementation of the Manna toppling rule .
One motivation for the use of this geometry was given in ; it is introduced simply to allow the implementation of the Manna toppling rule.
To completely define a model one should specify the initial condition and the evolution rules (addition of energy and toppling) of the sandpile cellular automaton. A threshold $`z_c`$ is considered, in such a way that sites with $`z<z_c`$ are said to be stable and their energy remains constant, while those with $`z\ge z_c`$ are said to be active and topple, transferring energy to their downward nn. First the usual noisy addition of energy is considered. In this case, if all sites are stable a unit of energy is added to a site selected at random. Then the system is updated in parallel using the toppling rule until all sites are stable. The number of toppling events required to drive the system to a stable configuration is the size of the avalanche and is denoted by $`s`$. On the other hand, the number of steps (parallel updates) required is its duration and is denoted by $`T`$. Since the driving acts at random, after some avalanches the system “forgets” its initial condition and reaches a stationary state. In other words the initial condition is irrelevant.
Models will differ from each other depending on the specific toppling rule one implements. Here the following toppling rules are considered (a minimal simulation sketch is given after the list),
* BTW: $`z_c=3`$, $`z_{ij}\to z_{ij}-3`$ and $`z_{k\,j+1}\to z_{k\,j+1}+1`$ ($`k=i-1,i,i+1`$);
* Manna: $`z_c=2`$, $`z_{ij}\to z_{ij}-2`$ and $`z_{k\,j+1}\to z_{k\,j+1}+\delta _k`$ ($`k=i-1,i,i+1`$), where $`\delta _k`$ can take the values $`0,1,2`$ at random but with the constraint of conservation $`\sum _k\delta _k=2`$;
* Zhang: $`z_c=1`$, $`z_{ij}\to 0`$ and $`z_{k\,j+1}\to z_{k\,j+1}+z_{ij}/3`$ ($`k=i-1,i,i+1`$);
* RT: $`z_c`$ takes the values 3 and 4 at random after each toppling, $`z_{ij}\to z_{ij}-3`$ and $`z_{k\,j+1}\to z_{k\,j+1}+1`$ ($`k=i-1,i,i+1`$).
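A minimal Python sketch of the parallel dynamics defined above, written here for the directed BTW rule (the lattice size, random seed and number of driving steps are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, ZC = 64, 3                        # lattice size and BTW threshold z_c
z = np.zeros((L, L), dtype=int)      # z[i, j]: column i, layer j; layer L-1 is open

def relax(z):
    """Parallel updates until all sites are stable; returns (size, duration)."""
    size = duration = 0
    while True:
        active = z >= ZC
        if not active.any():
            return size, duration
        size += int(active.sum())
        duration += 1
        z[active] -= ZC
        for i, j in zip(*np.nonzero(active)):
            if j + 1 < L:                    # grains at the last layer are dissipated
                for k in (i - 1, i, i + 1):  # three downward nearest neighbours
                    z[k % L, j + 1] += 1     # periodic b.c. in the horizontal direction

sizes, durations = [], []
for _ in range(20000):                       # noisy driving: one unit at a random site
    z[rng.integers(L), rng.integers(L)] += 1
    s, t = relax(z)
    if s > 0:
        sizes.append(s); durations.append(t)
```

The Manna, Zhang and RT rules are obtained by replacing the toppling step accordingly, e.g. distributing two grains among the three downward neighbours at random for the Manna rule.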
### B BTW model under uniform driving
In the noisy driving described above the addition of energy takes place at one site selected at random. However, there are many situations where a uniform driving, in which the energy at all sites increases by the same amount, becomes more realistic. Examples can be found in earthquake dynamics , interface depinning and also in some experimental setups for granular materials . This type of driving has been investigated in models with a toppling rule similar to that of the Zhang model but with local dissipation .
In the case of the BTW model we should be careful when introducing a global driving. If, as usual, $`z_{ij}`$ is an integer variable and the energy at all sites is increased at a constant rate $`c`$, then many sites will reach the threshold energy at the same time and, therefore, many avalanches will start at different points of the lattice, leading to the superposition of avalanches.
This problem can be solved by considering a continuous energy profile. This is still not enough because if all sites start with a discrete energy it will remain discrete forever. We are thus forced to consider a continuous initial profile $`z_{ij}(0)`$. Then, as was already shown by Narayan and Middleton , the continuous addition of energy can be replaced by a sequential addition of energy. For simplicity consider the low disorder regime where $`z_{ij}(0)<1`$ for all sites. For the analysis below it is irrelevant whether the initial energy profile is displaced uniformly at all sites.
Now, suppose that the energy increases at rate $`c`$ at all sites. An example is shown in fig. 2 for a lattice made of a horizontal line of three sites. Notice that in this case one has only input of energy coming from the driving field and output dissipation under toppling, a simplification considered for illustrative purposes.
On the continuous time scale the energy increases linearly until it reaches the threshold, where the site topples and its energy decreases by 3 (BTW toppling rule). But the system can also be monitored on discrete time and energy scales. On these scales, at step $`t=0`$ all sites have energy 0. In steps 1, 2 and 3, sites 1, 2 and 3 receive one unit of energy, respectively. Then in subsequent steps the same sequence of addition is repeated. The order in the sequence of addition is clearly determined by the initial condition, and all sites receive a unit of energy before the first site of the sequence receives a second one.
Now consider a square lattice, where sites can also receive energy from nearest neighbors in the layer above. The picture will not change in relation to the addition of energy from the external field. In the BTW model the energy is transferred in discrete units and, therefore, the toppling only modifies the integer part of $`z`$, with no modification of the sequence of addition. This is a fundamental difference with the Zhang toppling rule, which does not only involve the integer part of the energy: on toppling, all the energy at the active site is transferred. The consequences of this periodic sequential driving are investigated below, using the formalism introduced by Dhar et al .
Let $`a_{ij}`$ be the operator which adds a particle at site $`(i,j)`$ and lets the system relax to a stable configuration . After $`N=L^2`$ steps all sites receive one, and only one, unit of energy, in a certain order determined at $`t=0`$. Thus, if at time $`t`$ we have a configuration $`𝒞(t)`$ then at time $`t+N`$ we will obtain the configuration
$$𝒞(t+N)=\prod _{i=1}^{L}\prod _{j=1}^{L}a_{ij}\,𝒞(t).$$
(1)
The order in which the string of operators appears in this equation is irrelevant because the operators $`a_{ij}`$ commute among themselves.
Applying this string three times, it follows that
$$𝒞(t+3N)=\prod _{i=1}^{L}\prod _{j=1}^{L}a_{ij}^3\,𝒞(t).$$
(2)
This expression can be simplified using the following property of the toppling operators
$$a_{ij}^3=\{\begin{array}{cc}a_{i-1\,j+1}a_{i\,j+1}a_{i+1\,j+1},\hfill & \text{for }j<L,\hfill \\ 1,\hfill & \text{for }j=L.\hfill \end{array}$$
(3)
The first equality expresses the fact that the addition of three grains at a site $`(i,j)`$, with $`j<L`$, makes this site active, transferring one grain to each of its downward nn. The second one applies to the boundary sites, which after receiving three grains become active, dissipating these three grains through the boundary and, therefore, leaving the energy configuration invariant.
Starting at layer $`j=1`$, all the operators $`a_{i1}`$ are eliminated using eq. (3). This will increase the power of the operators $`a_{i2}`$ in eq. (2) by 3. The same procedure is applied to the second, third, …, $`(L-1)`$th layer, finally resulting in
$$𝒞(t+3N)=\prod _{i=1}^{L}a_{iL}^{3L}\,𝒞(t).$$
(4)
The application of the operator $`a_{iL}`$ three consecutive times leaves the energy configuration invariant and, therefore, eq. (4) reduces to
$$𝒞(t+3N)=𝒞(t).$$
(5)
Hence the evolution of the energy profile is periodic with period 3N.
This property is not observed in the noisy driving case, where the randomness introduced by the driving field makes the dynamics Markovian . Nevertheless, as shown in the next section, the statistical properties of the avalanches in the directed BTW model are independent of the driving mechanism.
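The operator identity above is easy to verify numerically. The sketch below (an illustration with an arbitrary small lattice and a random stable initial configuration) drives the model with a fixed sequential order of additions, which stands in for the order fixed by the continuous initial profile, and checks that the configuration returns to itself after $`3N`$ additions:

```python
import numpy as np

rng = np.random.default_rng(1)
L, ZC = 8, 3
N = L * L

def relax(z):
    """Deterministic directed-BTW relaxation (abelian), in place."""
    while True:
        act = np.argwhere(z >= ZC)
        if len(act) == 0:
            return
        for i, j in act:
            z[i, j] -= ZC
            if j + 1 < L:                    # the last layer dissipates its grains
                for k in (i - 1, i, i + 1):
                    z[k % L, j + 1] += 1

order = rng.permutation(N)                   # fixed sequence of addition; one sweep = N steps
z = rng.integers(0, ZC, size=(L, L))         # arbitrary stable initial configuration
z0 = z.copy()
for t in range(3 * N):                       # three full sweeps of the sequence
    i, j = divmod(int(order[t % N]), L)
    z[i, j] += 1
    relax(z)
print(np.array_equal(z, z0))                 # True: the energy profile has period 3N
```

Because the $`a_{ij}`$ commute, the outcome does not depend on which permutation is chosen.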
### C Numerical simulations and discussion
Numerical simulations of the BTW, Manna, Zhang and RT models with directed toppling were performed. In all cases lattice sizes ranging from $`L=64`$ to $`L=2048`$ were used. The numerical results obtained for the BTW and Manna models are taken only as a reference, because larger-scale simulations (up to $`L=6400`$) have been reported in .
Noisy driving: starting from an initial flat profile, all systems were updated until they reached the stationary state. After that, statistics over $`10^8`$ avalanches were taken, recording the avalanche sizes and durations.
Uniform driving: the evolution in time of the energy profile is periodic and, therefore, averages were taken over the period $`3N`$. Different initial conditions were simulated using different permutations of the sequence of addition of energy.
To extract the scaling exponents we use the moment analysis technique . The $`q`$th moment of the probability density $`p_x(x)`$ of a magnitude $`x`$ is defined by
$$\langle x^q\rangle =\int dx\,p_x(x)\,x^q.$$
(6)
where $`x=s,T`$. As defined above, $`s`$ and $`T`$ are the avalanche size and duration, i.e. the number of toppling events and parallel updates, respectively, required to drive the system to a stable configuration.
If the hypothesis of finite-size scaling is satisfied, that is, if the distributions of avalanche size and duration can be written in the form $`p_x(x)=x^{-\tau _x}f_x(x/L^{\beta _x})`$, then the $`q`$th moment scales with system size according to the power law
$$\langle x^q\rangle \sim L^{\sigma _x(q)},$$
(7)
with
$$\sigma _x(q)=\beta _x(1-\tau _x)+\beta _xq,$$
(8)
where $`\beta _s=D`$ and $`\beta _T=z`$ are effective dimensions which characterize how the cutoffs of the distributions of avalanche sizes and durations, respectively, scale with system size. On the other hand, $`\tau _x`$ is the power-law exponent, which can be measured in the scaling region before the finite-size cutoff.
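As an illustration of the procedure (with the input data left as a placeholder), the sketch below estimates $`\sigma _x(q)`$ by linear regression of $`\mathrm{log}\langle x^q\rangle `$ against $`\mathrm{log}L`$, and then reads off $`\beta _x`$ and $`\tau _x`$ from the linear part:

```python
import numpy as np

def moment_exponents(samples_by_L, qs=np.arange(1.0, 4.01, 0.25)):
    """Moment analysis: samples_by_L maps L -> array of avalanche sizes
    (or durations). Returns q, sigma_x(q), and fitted beta_x, tau_x."""
    Ls = np.array(sorted(samples_by_L))
    sigma = []
    for q in qs:
        logm = [np.log(np.mean(np.asarray(samples_by_L[L], float) ** q))
                for L in Ls]
        sigma.append(np.polyfit(np.log(Ls), logm, 1)[0])  # slope = sigma_x(q)
    sigma = np.array(sigma)
    beta, offset = np.polyfit(qs, sigma, 1)   # sigma_x(q) = beta_x(1 - tau_x) + beta_x q
    return qs, sigma, beta, 1.0 - offset / beta

# usage: feed avalanche-size lists measured at several L, e.g.
# qs, sig, D, tau_s = moment_exponents({64: s64, 128: s128, 256: s256, 512: s512})
```

The restriction to the linear part of $`\sigma _x(q)`$ matters in practice: low moments are dominated by the small, non-scaling avalanches.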
The plots of $`\sigma _s(q)`$ and $`\sigma _T(q)`$ vs. $`q`$ are shown in figs. 3 and 4, respectively, for different directed models. If two models belong to the same universality class then the linear parts of the plots should overlap. Based on this argument it is then concluded that the RT model belongs to the same universality class as the Manna model. A more quantitative comparison can be seen in table I, where the exponents computed here for the RT model are compared with those reported in for the Manna model. The scaling exponents are found to be in very good agreement within the numerical error.
If the hypothesis of finite-size scaling is valid then one can take the scaling exponents obtained from the moment analysis and plot the different distributions in rescaled variables, in such a way that the curves for different system sizes overlap. This is done in figs. 5 and 6 for the RT model, resulting in a very good data collapse, as has also been observed for the directed Manna model .
On the other hand, one cannot distinguish between the curves for the BTW model with noisy or uniform driving, which lead to the same scaling exponents. Thus, the periodicity introduced by the uniform driving carries no consequence for the critical behavior of the BTW model. Hence, the noisy driving can be substituted by a uniform driving together with an initial random energy profile. In an interface depinning description, with the number of toppling events playing the role of the interface height, this corresponds to columnar disorder. A similar conclusion was obtained by Lauritsen and Alava using a different argument .
Things become less clear when analyzing the Zhang model. In this case the moment technique yields $`D\approx 1.55`$, $`z\approx 1.03`$, $`\tau _s\approx 1.31`$ and $`\tau _t\approx 1.53`$. These exponents would by themselves define a new universality class. However, the moment analysis technique is based on the hypothesis of finite-size scaling, which in this case is not satisfied. This fact becomes clear in figs. 7 and 8, where the data collapse is shown, revealing that the finite-size scaling hypothesis does not hold. Deviations are observed not only for the smallest avalanches but also for the largest ones, where the finite-size scaling is expected to work best.
The anomalies observed for the Zhang model are associated with the existence of huge avalanches which practically empty the system. After one of these huge avalanches the system needs some time to reach the critical state again. This means that the mean energy of the system displays strong fluctuations and, therefore, the overall avalanche statistics is given by the small avalanches taking place during the accumulation of new grains together with these huge avalanches. This picture is illustrated in fig. 9, where the fraction of avalanches of size $`s`$ is plotted. It is characterized by a rounded peak at the largest avalanche sizes, which shifts with lattice size. The rest of the distribution cannot be fitted by a single power law.
The classification of the Manna and RT directed models in the same universality class is in agreement with a similar report for the corresponding undirected variants . Thus, there should be some common element in these models which is of course not present in the BTW model. A clue was given in , related to the possibility of multiple toppling events. In the final part of this section we discuss this statement in more detail.
In the directed BTW model the cluster of sites which topple within an avalanche is compact, and these sites topple only once. On the contrary, Pastor-Satorras and Vespignani observed that in the directed Manna model the cluster of sites touched by the avalanche is still compact, but each site participating in the avalanche can topple more than once. If the existence of multiple toppling events is the property that puts the Manna model in a different universality class, then a similar behavior should be observed in the present simulations of the directed RT model.
A decomposition of the sites participating in an avalanche, based on the number of toppling events performed at these sites, is shown in fig. 9 for the case of the directed random-threshold model. In this particular realization the cluster of sites touched by the avalanche is decomposed into three sub-clusters where sites have toppled one, two and three times. The fraction of sites toppling three times is small, but the fraction toppling twice is comparable with that toppling once. In general it was observed that in large avalanches the fractions of sites which topple once and twice are of the same order and, therefore, multiple toppling events are relevant.
In the case of undirected models it is known that multiple toppling events are present even in the BTW model, which leads to the decomposition of the avalanches into waves . However, their origin is different from that in directed models. In the undirected BTW model a site may topple more than once because after a first toppling (say at step $`t`$) it is possible that all its neighbors become active and topple (at step $`t+1`$) and, therefore, the site will again be active and topple (at step $`t+2`$). In the decomposition into waves one applies the toppling rule to all sites until they are stable before toppling the initially active site a second, third, … time, generating in this way the first, second, … wave.
One may think of applying a similar approach to the avalanches in the Manna and RT directed models, decomposing the avalanche as a superposition of waves. A fundamental property of the waves is that within a wave sites can topple only once; otherwise the concept is useless. Below it is shown that such a decomposition is not possible in the Manna and RT directed models, at least not in such a simple way.
Let us analyze in detail how a multiple toppling event can be generated in the RT directed model. Suppose the lattice has a configuration where a site has height $`3`$ and threshold $`4`$ and its three upward nearest neighbors are active. Then in the next step the site will receive three grains, one per active neighbor, taking an energy $`3+3=6>4`$ and becoming active. After toppling, the energy will decrease to $`6-3=3`$ and a new threshold is assigned. But the new threshold can be either 3 or 4. If it is 4 the site will be stable, but if it is 3 it will still be active and topple in the next step. Since in the particular model considered here the two thresholds are selected with equal probability, multiple toppling can take place with the same probability as single toppling, which explains the previous observation that in large avalanches the fraction of two-toppling events at a site is of the order of the one-toppling one.
During the evolution of an avalanche which started at layer $`j_0`$ it is possible that a site at a layer below, $`j_1>j_0`$, needs two consecutive topplings to become stable. Thinking of a decomposition into waves, one can delay the second toppling until all the sites below are stable (first wave) and then topple the site a second time, generating the second wave. However, during the first wave it is possible that a site at a deeper layer $`j_2>j_1>j_0`$ also needs two topplings to become stable and, therefore, the first wave has to be decomposed into sub-waves where sites topple only once. The same process may occur at even deeper layers, thus generating a hierarchical structure of sub-waves. Hence, the decomposition of avalanches into waves in these models leads to a more complex structure, which nevertheless may be exploited to obtain some estimate of the scaling exponents. This is, however, beyond the scope of this work.
## III Summary and conclusions
Directed sandpile models with different toppling rules have been studied by means of numerical simulations, with the purpose of determining the different universality classes. To extract the scaling exponents the moment analysis technique was used, and the resulting exponents were later corroborated by finite-size scaling of the distributions of avalanche size and duration.
The numerical analysis reveals that the introduction of a uniform driving in the directed BTW model does not change the critical properties. The evolution in time of the energy profile is in this case periodic, with a period which scales linearly with the number of lattice sites. In spite of this periodicity, the avalanche distributions are practically identical to those obtained for the same model with the usual noisy driving.
It is concluded that the Manna and RT models are in the same universality class, where multiple toppling events appear to be a fundamental property. The existence of multiple toppling events leads to a decomposition of the avalanche into a hierarchical structure of waves, which may be a starting point for future research.
Finally, it is observed that the avalanches in the directed Zhang model display a complex structure which does not satisfy the finite-size scaling hypothesis. It is given by the superposition of huge avalanches, involving a large dissipation of energy through the boundary, and small avalanches taking place during the accumulation of energy.
## Acknowledgements
I thank R. Pastor-Satorras and A. Vespignani for useful comments and discussions during the elaboration of this manuscript. The numerical simulations were performed using the computing facilities at the ICTP.
# Ground state of excitons and charged excitons in a quantum well
## I Abstract
A variational calculation of the ground state of a neutral exciton and of positively and negatively charged excitons (trions) in a single quantum well is presented. We study the dependence of the correlation energy and of the binding energy on the well width and on the hole mass. Our results are compared with previous theoretical results and with available experimental data.
## II Introduction
Negatively (X<sup>-</sup>) and positively (X<sup>+</sup>) charged excitons, also called trions, have been the object of intense study in recent years, both experimentally and theoretically. The stability of charged excitons in bulk semiconductors was proven theoretically by Lampert in the late fifties, but only recently have they been observed in quantum well structures: first in CdTe/CdZnTe by Kheng et al. and subsequently in GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As.
The calculation we present in this paper, of the ground state energy of the exciton (X) and the charged excitons, fully includes the Coulomb interaction among the particles, i.e. no approximate average potential is assumed in any of the three spatial directions and the correlation among the particles is fully taken into account.
The Hamiltonian of a negatively charged exciton (X<sup>-</sup>) in a quantum well is in the effective mass approximation given by
$$\widehat{H}=T_{1e}+T_{1h}+T_{2e}+V_C+V_{1e}+V_{2e}+V_{1h},$$
(1)
where $`1e`$, $`2e`$ indicate the electrons and $`1h`$ the hole; $`V_{ie},`$ $`V_{ih}`$ are the quantum well confinement potentials; $`T_i=\vec{p}_i^{\,2}/2m_i`$ is the kinetic energy operator for particle $`i`$, with $`m_i`$ the corresponding mass; $`V_C`$ is the sum of the Coulomb electron-electron and electron-hole interactions,
$$V_C=\frac{e^2}{\epsilon }\left(\frac{1}{|\vec{r}_{1e}-\vec{r}_{2e}|}-\frac{1}{|\vec{r}_{2e}-\vec{r}_{1h}|}-\frac{1}{|\vec{r}_{1e}-\vec{r}_{1h}|}\right),$$
(2)
with $`e`$ the elementary charge and $`\epsilon `$ the static dielectric constant. In the present work the heights of the square well confinement potentials are $`V_{ie}=0.57\times (1.155x+0.37x^2)`$ eV for the electrons and $`V_{ih}=0.43\times (1.155x+0.37x^2)`$ eV for the holes, for the GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As quantum well system.
The Hamiltonian is then solved using the stochastic variational method. The trial function is taken as a linear combination of correlated Gaussian functions,
$`\varphi _0(\vec{r}_{1e},\vec{r}_{2e},\vec{r}_{1h})={\displaystyle \sum _{n=1}^{K}}C_{n0}\mathrm{\Phi }_{n0}(\vec{r}_{1e},\vec{r}_{2e},\vec{r}_{1h}),`$ (3)
$`\mathrm{\Phi }_{n0}(\vec{r}_{1e},\vec{r}_{2e},\vec{r}_{1h})=𝒜\left\{\mathrm{exp}\left[-{\displaystyle \frac{1}{2}}{\displaystyle \sum _{\begin{array}{c}i,j\in \{1e,2e,1h\}\\ k\in \{x,y,z\}\end{array}}}A_{nijk0}r_{ik}r_{jk}\right]\right\},`$ (6)
where $`r_{ik}`$ gives the positions of the $`ith`$ particle in the direction $`k`$; $`𝒜`$ is the antisymmetrization operator and $`\{C_{n0},A_{nijk0}\}`$ are the variational parameters. The dimension of the basis, $`K`$, is increased until the energy is sufficiently accurate.
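The stochastic element of the method lies in how the basis is grown: candidate nonlinear parameters are proposed at random and kept only if they lower the energy. The toy sketch below applies the same strategy to a one-dimensional harmonic oscillator, where all matrix elements of a Gaussian basis are analytic (the parameter ranges and schedule are arbitrary choices for illustration, not those of the actual three-body calculation):

```python
import numpy as np
from scipy.linalg import eigh

# Toy problem: H = -(1/2) d^2/dx^2 + (1/2) x^2, exact ground state E = 0.5.
# Basis functions exp(-a x^2 / 2); overlap and Hamiltonian matrices are analytic.

def gs_energy(exponents):
    a = np.asarray(exponents)
    A = a[:, None] + a[None, :]
    S = np.sqrt(2.0 * np.pi / A)                         # overlap matrix
    H = S * (a[:, None] * a[None, :] + 1.0) / (2.0 * A)  # kinetic + potential
    return eigh(H, S, eigvals_only=True)[0]              # generalized eigenproblem

rng = np.random.default_rng(0)
basis = [1.7]                                 # one arbitrary starting Gaussian
e = gs_energy(basis)
for _ in range(200):                          # stochastic growth of the basis
    trial = basis + [rng.uniform(0.05, 20.0)] # random candidate exponent
    e_try = gs_energy(trial)
    if e_try < e - 1e-12:                     # keep it only if the energy decreases
        basis, e = trial, e_try
print(len(basis), e)                          # e converges towards 0.5
```

In the actual calculation the role of the single exponent $`a`$ is played by the matrices $`A_{nijk0}`$ of the trial function above, and the trial energies are evaluated with the full three-particle Hamiltonian.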
## III The results
The correlation energy of a charged exciton is defined as
$`E_C(X^{-})`$ $`=`$ $`E_T(X^{-})-2E_e-E_h,`$ (7)
$`E_C(X^{+})`$ $`=`$ $`E_T(X^{+})-2E_h-E_e,`$ (8)
with $`E_T(X^\pm )`$ the energy level of the charged exciton, and $`E_e`$ and $`E_h`$ the energy levels of the free electron and hole, respectively, in the quantum well. We discuss here the results obtained for a GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As quantum well with $`x=0.3`$. The values of the GaAs masses used are $`m_e=0.0667m_0,`$ $`m_{hh}=0.34m_0,`$ which gives $`2R_y=\hbar ^2/m_ea_B^2=11.58`$ meV and $`a_B=\hbar ^2\epsilon /m_ee^2=99.7`$ Å.
Our numerical results for the correlation energy are shown in Fig. 1 and are compared with the results of Ref. . For the X we find that the magnitude of the correlation energy is larger than the one obtained in Ref. , while for the X<sup>-</sup> our approach gives a $`5\%`$ smaller magnitude for the correlation energy. The reasons for the difference are: 1) in Ref. the Coulomb potential along the $`z`$-direction was approximated by an analytical form, see the Appendix in Ref. . This approximation leads, as the authors already noted, to an error in the X correlation energy of approximately 5%; 2) for the X<sup>-</sup> energy the authors of Ref. did not report an estimate of the error introduced by the approximations made. However, we find a decrease of the absolute value of the correlation energy by about 5%. We think that this result is not in conflict with the one obtained for the X. In fact, if in Ref. the intensity of the attractive electron-hole (e-h) interaction was underestimated for the X, then for the X<sup>-</sup> the intensity of the repulsive electron-electron (e-e) interaction is also underestimated, which leads to a less negative E<sub>C</sub>.
We also report the correlation energy for the X<sup>+</sup> in Fig. 1. Note that the correlation energy of the X<sup>+</sup> is practically equal to that of the X<sup>-</sup>. This is in perfect agreement with recent experimental data , where the binding energy of the X<sup>+</sup> is found to be equal to that of the X<sup>-</sup>.
Next we take into account, for the narrow well regime, the difference in mass of the particles in the well (GaAs) and in the barrier (Al<sub>x</sub>Ga<sub>1-x</sub>As) material. The values of the GaAs masses, i.e. the masses of the electron and the hole in the well, are taken equal to those used in the previous calculation. The values of the masses in Al<sub>x</sub>Ga<sub>1-x</sub>As are $`m_{eb}^{*}=0.067+0.083x,`$ $`m_{hhb}^{*}=0.34+0.42x`$ (in units of $`m_0`$), where $`x`$ indicates the fraction of Al present in the alloy. If we assume, as a first approximation, that the electron and the hole have part of their wave function in the quantum well and the rest in the barrier, we may take the total effective mass of the electron and the hole as given by
$$\frac{1}{m_i}=\frac{P_{iw}}{m_{iw}}+\frac{P_{ib}}{m_{ib}},$$
(9)
where $`m_{iw}`$ and $`m_{ib}`$ are the masses of the $`i`$-th particle in the well and in the barrier, and $`P_{iw}`$ and $`P_{ib}`$ are the probabilities of finding the $`i`$-th particle in the well or in the barrier, respectively. The results of this calculation are shown in Fig. 2. We observe that the effect of the mass mismatch is important only in the narrow quantum well regime, i.e. $`L<40`$ Å, where it leads to a substantial increase of the energy.
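A small numerical sketch of eq. (9); the probabilities used below are made-up illustrative values, whereas in the actual calculation they follow from the computed wave functions:

```python
def average_mass(m_well, m_barrier, p_well):
    """Inverse-probability-weighted mass of eq. (9), with P_b = 1 - P_w."""
    return 1.0 / (p_well / m_well + (1.0 - p_well) / m_barrier)

x = 0.3                                    # Al fraction in the barrier
m_e_w = 0.0667                             # electron mass in GaAs (units of m_0)
m_e_b = 0.067 + 0.083 * x                  # electron mass in the barrier
for p_w in (0.95, 0.8, 0.5):               # narrower wells push P_w down
    print(p_w, average_mass(m_e_w, m_e_b, p_w))
```

As the well narrows and more of the wave function leaks into the barrier, the averaged mass interpolates smoothly towards the barrier value.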
The dependence of the total energy on the hole mass for a 200 Å wide quantum well is shown in Fig. 3. The energy of the negatively charged exciton becomes equal to the D<sup>-</sup> energy for $`m_h/m_e>16`$. Note that for large values of the hole-to-electron mass ratio the X<sup>+</sup> energy runs parallel to that of the X<sup>-</sup>. For a large hole mass its contribution to the total energy in terms of confinement energy is negligible, and the difference between the total energies of the positively and the negatively charged excitons is just the confinement energy of one electron, which does not depend on the hole mass.
Experimental data were reported for the binding energy of the X<sup>-</sup> in zero magnetic field for a 200 Å, a 220 Å and a 300 Å quantum well. The binding energy is defined as $`E_B=E_T(X)+E_e-E_T(X^{-}).`$ The experimental results are shown in Fig. 4 together with our theoretical calculation, where the shaded band indicates the estimated accuracy of our variational procedure. Notice that the experimental results give a larger binding energy as compared to the theoretical estimate, and this discrepancy increases with decreasing well width. This may be a consequence of the localization of the trion due to well width fluctuations, which become more important with decreasing L.
Lastly, we study the wave function of the X<sup>-</sup>. In Fig. 5(a) we show the contour plot of $`|\varphi _0(\vec{r}_{1e},\vec{r}_{2e},\vec{r}_{1h})|^2`$ for a X<sup>-</sup> in a quantum well of width 100 Å. We fix the hole at $`\vec{r}_h=(0,0,0)`$ and one of the two electrons at $`\vec{r}_e=(0.25a_B,0,0)`$ and calculate the probability of finding the other electron in the $`xy`$-plane. Notice that the second electron sits far from the hole and the fixed electron, and behaves like an electron weakly bound to a polarized exciton. If we now fix the hole at $`\vec{r}_h=(2a_B,0,0)`$ and the electron at $`\vec{r}_e=(0,0,0)`$, see Fig. 5(b), we observe that the second electron is completely localized around the hole, and the configuration that we obtain is that of an exciton plus an extra electron.
## IV Conclusion
In this paper a new calculation of the exciton and charged exciton energies in a quantum well was presented, based on the stochastic variational method. To our knowledge, this is the first time that a calculation fully includes the effect of the Coulomb interaction together with the confinement due to the quantum well. The results obtained do not show a large qualitative difference from those already present in the literature; however, a sizable quantitative difference is observed. This difference leads to an improvement of the agreement with available experimental data for the binding energy.
## V Acknowledgment
Part of this work is supported by the Flemish Science Foundation (FWO-Vl) and the ‘Interuniversity Poles of Attraction Program - Belgian State, Prime Minister’s Office - Federal Office for Scientific, Technical and Cultural Affairs’. F.M.P. is a Research Director with FWO-Vl. Discussions with M. Hayne are gratefully acknowledged.
# Diabolical points in the magnetic spectrum of Fe8 molecules
## Abstract
The magnetic molecule Fe8 has been predicted and observed to have a rich pattern of degeneracies in its spectrum as an external magnetic field is varied. These degeneracies have now been recognized to be diabolical points. This paper analyzes the diabolicity and all essential properties of this system using elementary perturbation theory. A variety of arguments is given to suggest that an earlier semiclassical result for a subset of these points may be exactly true for arbitrary spin.
The molecular cluster \[Fe<sub>8</sub>O<sub>2</sub>(OH)<sub>12</sub>(tacn)<sub>6</sub>\]<sup>8+</sup> (or just Fe<sub>8</sub> for short) has a total spin $`J=10`$ at low temperatures, and is described to a first approximation by the spin Hamiltonian
$$\mathcal{H}=k_1J_x^2+k_2J_y^2-g\mu _B𝐉\cdot 𝐇,$$
(1)
where $`k_1>k_2>0`$, and $`𝐇`$ is an external magnetic field. Thus the axes $`x`$, $`y`$, and $`z`$ are hard, medium, and easy, respectively. EPR measurements indicate $`k_1\approx 0.33`$ K, $`k_2\approx 0.22`$ K.
In the absence of any applied magnetic field, the spin of the molecule has degenerate classical minima along the $`\pm \widehat{𝐳}`$ directions. Application of a field cants the minima away from $`\pm \widehat{𝐳}`$, but the degeneracy is preserved if $`𝐇`$ is in the $`x`$-$`y`$ plane. This degeneracy is lifted by quantum mechanical tunnelling between the low energy orientations. It is of some interest to calculate the tunnel splitting $`\mathrm{\Delta }`$, since tunnelling plays an important role in the low temperature dynamics. A few years ago, without knowledge of the relevance of Eq. (1) to Fe<sub>8</sub>, it was predicted that, for $`𝐇\parallel \widehat{𝐱}`$, $`\mathrm{\Delta }`$ would oscillate as a function of $`H`$, with perfect zeros at certain values, and this effect was explained in terms of interference arising from a Berry phase in the spin path integral. These oscillations have now been seen by Wernsdorfer and Sessoli using a clever technique which enables Landau-Zener-Stückelberg (LZS) transitions between the levels in question. The underlying value of $`\mathrm{\Delta }`$ can be extracted from the observed LZS transition rate.
In addition to the predicted oscillations, however, Wernsdorfer and Sessoli have also observed oscillations for certain non-zero values of $`H_z`$ as $`H_x`$ is swept. Villain and Fort have noted that if $`H_z`$ is chosen properly, these oscillations also represent perfect degeneracy, i.e., $`\mathrm{\Delta }`$ again vanishes exactly at isolated points in the $`H_x`$-$`H_z`$ plane \[or the full three-dimensional $`(H_x,H_y,H_z)`$ space\]. Thus, all the zeros of $`\mathrm{\Delta }`$ are, in fact, “diabolical points” in the magnetic field space. (This coinage is due to Berry and Wilkinson , as the shape of the energy surface when plotted against two parameters in the Hamiltonian — $`H_x`$ and $`H_z`$ in our case — is a double elliptic cone joined at the vertex, which resembles an Italian toy called the diavolo.) Formulas for these points have been found by Villain and Fort, and independently by the author (see below).
Diabolical points are of interest because of their rarity in real-life physical systems. Indeed, the von Neumann-Wigner theorem states that as a single parameter in a Hamiltonian is varied, an intersection of two levels is infinitely unlikely, and that level repulsion is the rule. It is useful to review the argument behind this theorem. Let the energies of levels in question be $`E_1`$ and $`E_2`$, which we suppose to be far from all other levels. Under an incremental perturbation $`V`$, the secular matrix is
$$\left(\begin{array}{cc}E_1+V_{11}& V_{12}\\ V_{21}& E_2+V_{22}\end{array}\right),$$
(2)
with $`V_{21}=V_{12}^{*}`$. The difference between the eigenvalues of this matrix is given by
$$[(E_1-E_2+V_{11}-V_{22})^2+4|V_{12}|^2]^{1/2},$$
(3)
which vanishes only if
$$E_1+V_{11}=E_2+V_{22},\qquad V_{12}=V_{12}^{*}=0.$$
(4)
Hence, for a general Hermitean matrix, three conditions must be satisfied for a degeneracy, which in general requires at least three tunable parameters. If the matrix is real and symmetric, the number of conditions and tunable parameters is reduced to two .
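As a quick numerical illustration of this counting (not part of the original argument), one can sweep a single parameter through a generic two-level Hermitian family and watch the gap of Eq. (3) stay bounded away from zero; the family below is an arbitrary randomly generated example:

```python
import numpy as np

rng = np.random.default_rng(2)
E1, E2 = 0.0, 1.0
V = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
V = (V + V.conj().T) / 2          # a generic Hermitian perturbation

# gap of Eq. (3) along the one-parameter path t -> t*V: it never closes,
# because a crossing would require all three conditions of Eq. (4)
gaps = [np.sqrt((E1 - E2 + t * (V[0, 0] - V[1, 1]).real) ** 2
                + 4 * abs(t * V[0, 1]) ** 2)
        for t in np.linspace(-3.0, 3.0, 601)]
print(min(gaps) > 0.0)            # True: level repulsion, no degeneracy
```

Restricting V to be real and symmetric removes one of the three conditions, so a two-parameter family is then generically enough to locate a crossing.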
An exception to this rule occurs when the Hamiltonian has some symmetry, when levels transforming differently under this symmetry can intersect. For the Fe<sub>8</sub> problem, the intersections when $`𝐇\parallel \widehat{𝐱}`$ or $`𝐇\parallel \widehat{𝐳}`$ can be understood in terms of symmetry , but those with both $`H_x`$ and $`H_z`$ non-zero cannot.
The results reported in Ref. are based on a generalization of the discrete phase integral (or WKB) method , and are asymptotically accurate as $`J\to \mathrm{\infty }`$. Villain and Fort use an approximate version of the same method, with the additional condition $`k_1-k_2\ll k_1`$. These calculations, while involving only elementary methods of analysis, still entail the development of considerable calculational machinery, and are quite long. Surprisingly, the full global structure of the energy spectrum can be obtained by a much simpler method: text-book perturbation theory in $`k_2/k_1`$ and the field components $`H_y`$, $`H_z`$. This is an extension of an earlier calculation by Weigert , who analysed the problem for $`H_y=H_z=0`$. In particular, one can rigorously establish the existence of diabolical points, and find formulas for their locations via a series of small calculations. It is hoped that the simplicity of this approach will make the subject accessible to a wide readership.
Before proceeding further, it is useful to develop a scheme for labelling the eigenstates of $`\mathcal{H}`$. Suppose first that $`𝐇=0`$, and $`k_2=k_1`$. The states can then be labelled by the eigenvalue $`m`$ of $`J_z`$, and the ground states are $`m=\pm J`$. If $`k_2`$ is now decreased, states with $`m`$ differing by an even integer will mix. If $`k_1-k_2\ll k_2`$, or $`J`$ is large, the barrier between $`m=-J`$ and $`m=+J`$ is large (see fig. 1), tunnelling is negligible, and we can find states $`|m^{\prime }\rangle `$ which evolve continuously from $`|m\rangle `$, such that $`\{|m^{\prime }\rangle \}`$ are eigenstates of $`\mathcal{H}`$ to good approximation. This approximation will continue to hold if the field $`𝐇`$ is turned on, as long as $`|𝐇|\ll H_c=2k_1J/g\mu _B`$.
The first set of diabolical points lies on the line $`H_y=H_z=0`$, because $`\mathcal{H}`$ is then invariant under a $`180^{\circ }`$ rotation about $`\widehat{𝐱}`$. Levels with different parity under this operation can intersect as $`H_x`$ is varied. In particular, the pseudo-ground states $`m^{\prime }=\pm J`$ are exactly degenerate at a sequence of $`H_x`$ values as found in Ref. . Since the symmetry is destroyed if either $`H_y\ne 0`$ or $`H_z\ne 0`$, so are the intersections, and the points are indeed diabolical. The same is true of intersections of levels with $`m^{\prime }=\pm (J-\ell )`$, where $`\ell `$ is an integer.
A similar argument applies when $`𝐇\parallel \widehat{𝐳}`$, so another set of diabolical points is expected when $`H_x=H_y=0`$. In terms of fig. 1, the states which are degenerate are no longer symmetrically located, and it is possible for, say, $`m^{\prime }=-J`$ to be degenerate with $`m^{\prime }=J-1`$. The new discovery by Wernsdorfer and Sessoli is that the tunnel splitting between these states also oscillates as $`H_x`$ is now varied. As mentioned above, these oscillations are also perfect, and the corresponding diabolical points are not associated with any obvious symmetry of $`\mathcal{H}`$. (A similar situation holds in the spectrum of a particle confined to a two dimensional triangular region . Apart from an overall size, which only affects the overall energy scale in a trivial manner, a triangle is parametrized by two angles. Two sets of diabolical points arise when the triangles are isosceles, but the rest appear when the triangles are scalene with no special symmetry.)
We can thus classify the diabolical points by the $`m^{\prime }`$ numbers of the levels which are degenerate. Let the state with predominantly negative values of $`m`$ be labelled by $`m_1^{\prime }`$, and the other state by $`m_2^{\prime }`$. We define $`k=m_1^{\prime }+J`$, and $`k^{\prime }=J-m_2^{\prime }`$. In other words, counting from 0, the $`k`$th level in the left well is degenerate with state number $`k^{\prime }`$ in the right well. When $`k,k^{\prime }\ll J`$, the semiclassical analysis gives the location of the diabolical point as ($`H_y=0`$ always)
$`h_x`$ $`=`$ $`{\displaystyle \frac{\sqrt{1-\lambda }}{J}}\left[J-\ell -{\displaystyle \frac{1}{2}}(k+k^{\prime }+1)\right],`$ (5)
$`h_z`$ $`\approx `$ $`{\displaystyle \frac{\sqrt{\lambda }}{2J}}(k-k^{\prime }).`$ (6)
Here, $`𝐡=𝐇/H_c`$ is a reduced field with $`H_c=2k_1J/g\mu _B`$, $`\lambda =k_2/k_1`$, and $`\ell `$ is an integer.
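For orientation, these formulas are trivial to evaluate; the sketch below (using $`\lambda =0.22/0.33`$ suggested by the EPR values quoted above) lists a few predicted locations:

```python
import numpy as np

def diabolical_point(J, lam, k, kp, ell):
    """Reduced-field location (h_x, h_z) from Eqs. (5) and (6); h_y = 0."""
    hx = (np.sqrt(1.0 - lam) / J) * (J - ell - 0.5 * (k + kp + 1))
    hz = (np.sqrt(lam) / (2.0 * J)) * (k - kp)
    return hx, hz

J, lam = 10, 0.22 / 0.33          # Fe8: J = 10, lambda = k2/k1
for (k, kp) in [(0, 0), (1, 0), (0, 1)]:
    for ell in range(3):
        print(k, kp, ell, diabolical_point(J, lam, k, kp, ell))
```

Multiplying $`h_x`$ and $`h_z`$ by $`H_c=2k_1J/g\mu _B`$ converts these reduced values to laboratory fields.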
Another way to label the degeneracies is to number the levels in order of increasing energy, starting with 1 for the lowest level, and then simply give the numbers of the two crossing levels. Thus if the lowest two levels are degenerate ($`k=k^{\prime }=0`$), we will say that levels 1 and 2 cross, while for $`k=0,k^{\prime }=1`$, or $`k=1,k^{\prime }=0`$, we would say that levels 2 and 3 cross. This labelling is not unique, but we will find it convenient.
With this background, we now turn to our calculations. Following Weigert we regard the $`k_2`$ term in Eq. (1) as the perturbation, along with the $`y`$ and $`z`$ components of $`𝐇`$. It is convenient to divide all energies by $`k_1`$, and write $`\overline{\mathcal{H}}=\mathcal{H}/k_1=\overline{\mathcal{H}}_0+\overline{\mathcal{H}}_1`$, where
$`\overline{\mathcal{H}}_0`$ $`=`$ $`J_x^2-2Jh_xJ_x,`$ (7)
$`\overline{\mathcal{H}}_1`$ $`=`$ $`\lambda J_y^2-J(h_{-}J_++h_+J_{-}),`$ (8)
where $`J_\pm =J_y\pm iJ_z`$, $`h_\pm =h_y\pm ih_z`$. These notations for $`J_\pm `$ are unconventional, but they are now convenient, as we will take the quantization axis to be $`x`$, not $`z`$. We will label the eigenvalue of $`J_x`$ by $`n`$. To zeroth order, the energy of state $`n`$ is given by
$$E_n^{(0)}=n^2-2Jh_xn,$$
(9)
Levels $`n`$ and $`n^{\prime }`$ are approximately degenerate if $`Jh_x=(n+n^{\prime })/2`$. To see if they are exactly degenerate when $`\overline{\mathcal{H}}_1`$ is included, we find the secular matrix $`V`$ to an appropriate order in perturbation theory, and examine the conditions (4). We do this for a number of different cases.
Case 1 — levels 1 and 2 cross. Let the degenerate levels be $`n_0`$ and $`n_0+1`$, so that $`Jh_x\approx (n_0+\frac{1}{2})`$. For brevity, we label the states by A and B, and denote the matrix elements $`\langle n_0+1|J_+|n_0\rangle `$ etc. by $`a_1`$, $`a_2`$, $`a_3`$, etc., as indicated in fig. 2. Note that all $`a_i`$ can be chosen as real. To first order in $`\lambda `$ and $`h_\pm `$,
$`V_{AA}`$ $`=`$ $`\lambda [J(J+1)-n_0^2]/2,`$ (10)
$`V_{BB}`$ $`=`$ $`\lambda [J(J+1)-(n_0+1)^2]/2,`$ (11)
$`V_{AB}`$ $`=`$ $`-Jh_+a_2.`$ (12)
The conditions for diabolicity are thus
$$Jh_x=(n_0+\frac{1}{2})(1-\frac{1}{2}\lambda ),\qquad h_y=h_z=0.$$
(13)
Writing $`n_0=J-\ell -1`$, this is identical to Eqs. (5) and (6) with $`k=k^{\prime }=0`$, once we recognize that $`(1-\lambda /2)=(1-\lambda )^{1/2}+O(\lambda ^2)`$. Since $`-J\le n_0\le J-1`$, there are $`2J`$ such points.
The conclusion that these points lie on the line $`h_y=h_z=0`$ is unchanged if we go to higher order. The relevant condition is clearly that for the off-diagonal elements. Contributions to the AB element of the second order secular matrix arise from intermediate states $`n_0+2`$ and $`n_0-1`$. A short calculation gives $`V_{AB}^{(2)}=\lambda a_2J(a_1^2+a_3^2)h_{-}/8`$. Adding this to Eq. (12) and setting the sum to zero, we again obtain the conditions $`h_y=h_z=0`$.
Case 2 — levels 2 and 3 cross. Let the lowest energy level be $`n_0`$, and let $`n_0\pm 1`$ be approximately degenerate. This requires $`Jh_x\approx n_0`$. Again, we denote the states $`n_0\pm 1`$ by A and B, and the various matrix elements of $`J_\pm `$ by $`a_1`$ to $`a_4`$ as in fig. 3. To $`O(\lambda )`$, $`V_{AA}`$ and $`V_{BB}`$ are given by $`\lambda [J(J+1)-(n_0\pm 1)^2]/2`$. The order $`h_y^2`$, $`h_z^2`$ contributions to the diagonal terms of the second order secular matrix are found to both be equal to $`\frac{2}{3}J^2(h_y^2+h_z^2)[J(J+1)-n_0^2+1]`$. The interesting terms are $`V_{AB}`$ and $`V_{BA}`$. Including first order pieces from $`\lambda J_y^2`$, and second order pieces from $`h_\pm `$, we get
$$V_{AB}=\left(\frac{1}{4}\lambda +J^2h_+^2\right)a_2a_3.$$
(14)
For a diabolical point, therefore, the vector $`𝐡`$ must have components
$$𝐡=\frac{1}{J}[n_0\left(1-\frac{1}{2}\lambda \right),0,\frac{1}{2}\sqrt{\lambda }].$$
(15)
With $`n_0=J-\ell -1`$, these are exactly the lowest order terms in an expansion in $`\lambda `$ of Eqs. (5) and (6) with $`k=1`$, $`k^{\prime }=0`$. Since $`-J+1\le n_0\le J-1`$, there are $`2J-1`$ such points.
Case 3 — levels 3 and 4 cross. This case can arise either with $`k=k^{\prime }=1`$, or with $`k=2`$, $`k^{\prime }=0`$, but we shall be able to distinguish between these. Referring to fig. 2 again, the degenerate levels are $`n_0-1`$ (C) and $`n_0+2`$ (D). Equality of $`E_C`$ and $`E_D`$ again requires $`Jh_x\approx (n_0+\frac{1}{2})`$. To first order in $`\lambda `$, $`V_{CC}-V_{DD}=3\lambda (n_0+\frac{1}{2})`$, so that the diagonal elements are equal when
$$Jh_x=(n_0+\frac{1}{2})(1-\frac{1}{2}\lambda ).$$
(16)
As in case 2, it is the off-diagonal term which is of greater interest. The secular matrix is now diagonal in first order, and off-diagonal terms only arise in second and higher orders. Second order terms arise from the combination of one $`h_\pm J_{\mp }`$ term and one $`\lambda J_y^2`$ term, while third order terms arise from three $`h_\pm J_{\mp }`$ terms. The net result is
$$V_{CD}=\frac{1}{4}(h_+^2J^2+\lambda )h_+Ja_1a_2a_3.$$
(17)
This can vanish in two ways. The first is to have $`h_y=h_z=0`$, in which case the diabolical field is given by Eq. (13) again. This case corresponds to $`k=k^{\prime }=1`$.
The second way for $`V_{CD}`$ to vanish is for the factor in parentheses in Eq. (17) to vanish. This happens when
$$h_y=0,\qquad h_z=\sqrt{\lambda }/J.$$
(18)
In conjunction with Eq. (16), this is seen to be the same as Eqs. (5) and (6) with $`k=2`$, $`k^{\prime }=0`$, and $`n_0=J-\ell -2`$.
It is clear that this procedure gets rapidly more tedious if we apply it to cases with larger $`k`$ and $`k^{\prime }`$. It is more useful to consider higher order perturbative corrections for the cases treated above. In the argument leading to Eq. (16), e.g., we have only gone up to $`O(\lambda )`$. It is obvious that inclusion of higher order terms can at best alter the value of $`h_x`$ at the diabolical point by terms of order $`\lambda ^2`$, $`h_y^2`$, and $`h_z^2`$, but cannot destroy the existence of a perfect degeneracy. The same argument applies in all the other cases, and constitutes a constructive proof of the existence of diabolical points.
It is particularly interesting to investigate the subset of diabolical points on the line $`H_y=H_z=0`$ in greater depth. As noted before, these points correspond to $`k=k^{\prime }`$, and the degenerate levels have $`n`$ quantum numbers differing by an odd integer. Thus they can never be coupled by the remaining perturbation $`\overline{\mathcal{H}}_1=\lambda J_y^2`$, and the problem is effectively one of non-degenerate perturbation theory. Let us consider case 1 first. The second order correction to the energy of state A arises from the intermediate states $`n_0\pm 2`$, and to that of state B from $`n_0-1`$ and $`n_0+3`$. It suffices to find the energy denominators assuming that $`Jh_x=(n_0+\frac{1}{2})`$.
$$\left(\begin{array}{c}V_{AA}^{(2)}\\ V_{BB}^{(2)}\end{array}\right)=-\frac{\lambda ^2}{24}\left[\left[J(J+1)-(n_0^2+n_0+1)\right]^2+2n_0^2+\left(\begin{array}{c}-n_0-1\\ 5n_0+2\end{array}\right)\right].$$
(19)
Along with Eqs. (9)-(11), this means that to $`O(\lambda ^2)`$, the states are degenerate when
$$Jh_x=(n_0+\frac{1}{2})(1-\frac{1}{2}\lambda -\frac{1}{8}\lambda ^2),$$
(20)
which is precisely what Eq. (5) also gives.
In the same way, for the subcase $`k=k^{\prime }=1`$ of case 3, we obtain
$$\left(\begin{array}{c}V_{CC}^{(2)}\\ V_{DD}^{(2)}\end{array}\right)=\frac{1}{40}\lambda ^2\left[\left[J(J+1)-(n_0^2+n_0-1)\right]^2-6n_0^2+\left(\begin{array}{c}9n_0-4\\ 21n_0-19\end{array}\right)\right].$$
(21)
Including lower order terms, the condition for degeneracy is found to be identical to Eq. (20).
The fact that the two pairs of states $`k=k^{\prime }=0`$ and $`k=k^{\prime }=1`$ are simultaneously degenerate (at least to order $`\lambda ^2`$) is very striking. Calculations to $`O(\lambda ^2)`$ were in fact done by Weigert, but he did not perform them sufficiently explicitly, and reached the opposite conclusion, i.e., that the degeneracy conditions would be different. It is clear, however, that this equality is a result of the simple form of $`\mathcal{H}`$, and is violated when higher anisotropies such as $`(J_x\pm iJ_y)^4`$ are included.
The second striking feature about the result (20) is that there are no terms like $`\lambda ^2J^2`$ or $`\lambda ^2n_0^4`$ etc. on the right hand side, and that it agrees precisely with the semiclassical answer. Since the latter is obtained in a very different limit, namely, $`J\to \mathrm{\infty }`$, it begins to raise the suspicion that it might be exact. To test this suspicion, we have carried the calculation for case 1 to order $`\lambda ^3`$. For this, not only must we find $`V_{AA}^{(3)}`$ and $`V_{BB}^{(3)}`$, but we must also keep $`O(\lambda )`$ corrections in the energy denominators in the calculations for $`V_{AA}^{(2)}`$ and $`V_{BB}^{(2)}`$, since $`Jh_x`$ depends on $`\lambda `$ at the diabolical point. The resulting calculation is lengthy, but is efficiently done using MAPLE. Almost miraculously, all powers of $`J`$ multiplying $`\lambda ^3`$ cancel, as do terms $`\lambda ^3n_0^j`$ with $`j\ge 2`$, and the contribution to $`E_A-E_B`$ is just $`\lambda ^3(2n_0+1)/16`$. The condition for degeneracy thus becomes
$$Jh_x=(n_0+\frac{1}{2})(1-\frac{1}{2}\lambda -\frac{1}{8}\lambda ^2-\frac{1}{16}\lambda ^3).$$
(22)
It will not have escaped the reader that the last factor equals $`(1-\lambda )^{1/2}`$ to $`O(\lambda ^3)`$!
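This is immediate to check symbolically. The following two-line verification (our addition, not part of the original MAPLE calculation) confirms that the factor in Eq. (22) matches the Taylor expansion of $`(1-\lambda )^{1/2}`$ through third order:

```python
import sympy as sp

lam = sp.symbols('lambda', positive=True)
series = sp.series(sp.sqrt(1 - lam), lam, 0, 4).removeO()
claimed = 1 - lam/2 - lam**2/8 - lam**3/16   # the factor appearing in Eq. (22)
print(sp.simplify(series - claimed))          # -> 0
```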
It is useful to consider the structure of the perturbation series to higher order in $`\lambda `$. It is clear that we cannot get negative powers of $`J`$ in the formula for $`Jh_x`$; instead it generates positive powers. Although the low order analysis suggests otherwise, in principle we should expect terms such as $`\lambda ^NJ^K(J+1)^K`$ with $`0<K<N-1`$ in $`N`$th order. Such terms would be reminiscent of an asymptotic series, and would signal a zero radius of convergence. Such a situation would be very odd in our problem since the perturbation $`\lambda J_y^2`$ does not appear to be singular. Although plausible, this is far from a complete argument that such terms are in fact absent, since we have not excluded terms such as $`\lambda ^Nn_0^{N-1}`$ in $`N`$th order.
Further evidence that the result (5) is exact comes from looking at low values of $`J`$. We have done this for $`J`$ up to 2. For $`J=1/2`$, there is nothing to prove as the only degeneracy is at $`h_x=0`$, which is also guaranteed by Kramers's theorem. For $`J=1`$, the energies are directly found to be $`E_{\pm 1}=1+\frac{1}{2}\lambda \pm (4h_x^2J^2+\frac{1}{4}\lambda ^2)^{1/2}`$, and $`E_0=\lambda `$, so $`E_{-1}=E_0`$ when $`Jh_x=(1-\lambda )^{1/2}/2`$. For $`J=3/2`$, $`\overline{\mathcal{H}}`$ separates into two $`2\times 2`$ matrices in the $`J_x`$ basis, which we call $`M_1`$ and $`M_2`$. Both eigenvalues of $`M_1`$ coincide with those of $`M_2`$ at $`h_x=0`$. This is again Kramers's degeneracy. In addition, one eigenvalue of $`M_1`$ coincides with one of $`M_2`$ precisely when $`h_x=2(1-\lambda )^{1/2}/3`$. For $`J=2`$, $`\overline{\mathcal{H}}`$ separates into a $`3\times 3`$ matrix ($`M_1`$) and a $`2\times 2`$ matrix ($`M_2`$). The expected degeneracies are at $`h_x=(1-\lambda )^{1/2}/4`$ and $`3(1-\lambda )^{1/2}/4`$. At the second value of $`h_x`$, one eigenvalue of $`M_1`$ indeed coincides with one of $`M_2`$. At $`h_x=(1-\lambda )^{1/2}/4`$, however, two distinct $`M_1`$ eigenvalues coincide with two $`M_2`$ eigenvalues. Thus we again see the simultaneous degeneracy of two sets of levels ($`k=k^{\prime }=0`$ and $`k=k^{\prime }=1`$), leading us to believe that this feature is also generally true. A rigorous proof of these conjectures remains an open problem.
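The $`J=1`$ statement is also quick to verify from the energies quoted above: $`E_{-1}=E_0`$ requires $`(4h_x^2+\frac{1}{4}\lambda ^2)^{1/2}=1-\frac{1}{2}\lambda `$, and at the claimed field the radicand is a perfect square. A minimal symbolic check (ours) of that last step:

```python
import sympy as sp

lam = sp.symbols('lambda', positive=True)   # 0 < lambda < 1 understood
hx = sp.sqrt(1 - lam) / 2                   # claimed diabolical field for J = 1

# Verify that 4 h_x^2 + lambda^2/4 equals (1 - lambda/2)^2 at this h_x:
print(sp.simplify(4*hx**2 + lam**2/4 - (1 - lam/2)**2))  # -> 0
```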
This work is supported by the National Science Foundation through Grant No. DMR-9616749.
# Interaction in Abell 2256: the BeppoSAX view
## 1. Introduction
Abell 2256 (hereafter A2256) is a rich, nearby (z $`=`$ 0.057; Bothun & Schombert 1990) cluster of galaxies. Studies in the optical band have shown that the velocity dispersion is quite large ($`\sim `$1400 km/s; Fabricant, Kent & Kurtz 1989, Bothun & Schombert 1990). An early ROSAT PSPC image of A2256 (Briel et al. 1991) provided clear evidence of substructure, showing two emission peaks separated by about 3.5 arcminutes. One of the two peaks is coincident with the cD galaxy while the distorted morphology of the other indicates that it is merging with the main cluster. A reanalysis, by Briel et al. (1991), of the velocity distribution of the galaxies measured by Fabricant, Kent & Kurtz (1989) shows that it can be separated into two distinct distributions coincident with the two X-ray peaks. Fabian & Daines (1991), from the ROSAT PSPC surface brightness distribution, have estimated cooling times of 2$`\times 10^{10}`$ years and 5$`\times 10^9`$ years at the center of the main cluster and of the infalling subcluster respectively. The above authors imply that, prior to the merger event, a cooling flow had already developed in the core of the infalling subgroup and that the merger may have interrupted the cooling flow and stirred up the gas within it.
Various attempts have been made to measure the temperature structure of A2256. Briel & Henry (1994), using ROSAT PSPC data, find evidence that the infalling group has a lower temperature than the main peak. They also find evidence of two hot spots opposite each other and perpendicular to the presumed infall direction of the subgroup, however this result was not confirmed by Markevitch & Vikhlinin (1997) who reanalyzed the same data. Markevitch (1996, hereafter M96), from ASCA data, finds evidence of a smoothly declining radial temperature profile, going from $`\sim `$8.7 keV near the core to $`\sim `$4 keV in the outskirts. His temperature map shows that the subgroup has a smaller temperature than the main peak. Irwin, Bregman & Evrard (1999), from ROSAT PSPC hardness ratios, find a radial profile consistent with a constant temperature out to 15′ from the cluster core. Their hardness ratio two dimensional map is in general agreement with the one of Briel & Henry (1994). White (1999, hereafter W99), from a reanalysis of the ASCA data, finds a radial temperature profile consistent with being constant out to 18′ from the cluster core.
In this Letter we report BeppoSAX observations of A2256. We use our data to perform an independent measurement of the temperature profile and two-dimensional map of A2256. We also present the abundance profile and the first abundance map of A2256. The outline of the Letter is as follows. In section 2 we give some information on the BeppoSAX observation of A2256 and on the data preparation. In section 3 we present spatially resolved measurements of the temperature and metal abundance. In section 4 we discuss our results and compare them to previous findings. Throughout this Letter we assume H<sub>o</sub>=50 km s<sup>-1</sup>Mpc<sup>-1</sup> and q<sub>o</sub>=0.5.
## 2. Observation and Data Preparation
The cluster A2256 was observed by the BeppoSAX satellite (Boella et al. 1997a) at two different epochs: between the 11<sup>th</sup> and the 12<sup>th</sup> of February 1998 and between the 25<sup>th</sup> and the 26<sup>th</sup> of February 1999. We will discuss here data from the MECS instrument onboard BeppoSAX; a joint analysis of the MECS and PDS spectra of A2256 is presented in Fusco-Femiano et al. (2000). The MECS (Boella et al. 1997b) is presently composed of two units working in the 1–10 keV energy range. At 6 keV, the energy resolution is $`\sim `$8% and the angular resolution is $`\sim `$0.7′ (FWHM). Standard reduction procedures and screening criteria have been adopted to produce linearized and equalized event files. Data preparation and linearization was performed using the Saxdas package under Ftools environment. The total effective exposure time for the two observations was 1.3$`\times `$10<sup>5</sup> s. All spectral fits have been performed using XSPEC Ver. 10.00. Quoted confidence intervals are 68% for 1 interesting parameter (i.e. $`\mathrm{\Delta }\chi ^2=1`$), unless otherwise stated.
## 3. Spatially Resolved Spectral Analysis
Spectral distortions introduced by the energy dependent PSF must be accounted for when performing spatially resolved spectroscopy of galaxy clusters. As for the analysis of other BeppoSAX observations of clusters (e.g. A2319, Molendi et al. 1999), we have taken them into account using the Effarea program publicly available within the latest Saxdas release. We remark that we fit spectra individually. This is not what is typically done when performing spatially resolved spectroscopy of clusters with ASCA data. Here spectra accumulated from different regions are typically analyzed simultaneously, the reason being that the correction to be applied to a given region depends on the temperature of all the others. The lack of a strong dependence of the MECS PSF on energy allows us to avoid such complications.
### 3.1. Radial Profiles
For each of the two observations we have accumulated spectra from 6 annular regions centered on the main X-ray emission peak of A2256, with inner and outer radii of 0′-2′, 2′-4′, 4′-6′, 6′-8′, 8′-12′ and 12′-16′. We have also accumulated a global spectrum from a circle with radius 16′. The background subtraction has been performed using spectra extracted from blank sky event files in the same region of the detector as the source. A correction for the absorption caused by the strongback supporting the detector window has been applied for the 8′-12′ annulus, where the annular part of the strongback is contained. For the 6′-8′ and 12′-16′ annuli, where the strongback covers only a small fraction of the available area, we have chosen to exclude the regions shadowed by the strongback. For the 5 innermost annuli the energy range considered for spectral fitting was 2-10 keV; for the outermost annulus, the fit was restricted to the 2-8 keV energy range to limit spectral distortions which could be caused by an incorrect background subtraction (see De Grandi & Molendi 1999a for details). Source and background spectra accumulated for each of the two observations have then been summed together.
We have fitted each spectrum with a MEKAL model absorbed by the Galactic line of sight equivalent hydrogen column density, $`N_H`$, of 4.1$`\times 10^{20}`$ cm<sup>-2</sup>. The temperature and abundance we derive from the global spectrum are 7.5$`\pm 0.1`$ keV and 0.25$`\pm 0.02`$ solar units, respectively. In figure 1 we show the temperature and abundance profiles obtained from our six annular regions. A constant does not provide a good fit to the temperature or the abundance profile (see table 1).
As in Molendi et al. (1999), we have used the Fe K<sub>α</sub> line as an independent estimator of the ICM temperature. Considering the limited number of counts available in the line, we have performed the analysis on 2 annuli with bounding radii 0′-8′ and 8′-12′; the very small Fe abundance measured in the 12′-16′ annulus prevents us from deriving a reliable line centroid for this region. We have fitted each spectrum with a bremsstrahlung model plus a line, both at a redshift of z=0.057 (ZBREMSS and ZGAUSS models in XSPEC), absorbed by the galactic $`N_H`$. A systematic negative shift of 40 eV has been included in the centroid energy to account for a slight miscalibration of the energy pulseheight-channel relationship near the Fe line. To convert the energy centroid into a temperature we have derived an energy centroid vs. temperature relationship. This has been done by simulating thermal spectra, using the MEKAL model and the MECS response matrix, and fitting them with the same model which has been used to fit the real data. We derive a temperature of 8.0$`{}_{-1.0}^{+0.9}`$ keV for the inner radial bin and of 3.2$`{}_{-1.7}^{+2.8}`$ keV for the outer one. Thus, our two independent measurements of the temperature profile are in good agreement with each other.
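The inversion of the line centroid into a temperature can be sketched as follows. This is only an illustration of the procedure: the (temperature, centroid) grid below is a made-up placeholder, whereas in the actual analysis it comes from fitting simulated MEKAL spectra folded through the MECS response, and the sign convention of the 40 eV gain correction is shown schematically.

```python
import numpy as np

# Illustrative placeholder grid; NOT the actual MEKAL/MECS calibration.
T_grid = np.array([2.0, 4.0, 6.0, 8.0, 10.0])             # kT in keV
centroid_grid = np.array([6.62, 6.66, 6.68, 6.69, 6.70])  # line centroid in keV

def temperature_from_centroid(E_fit, shift=0.040):
    """Apply the 40 eV gain correction discussed in the text and invert the
    monotonic centroid-temperature relation by linear interpolation."""
    return np.interp(E_fit + shift, centroid_grid, T_grid)

print(temperature_from_centroid(6.64))
```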
### 3.2. Maps
As shown in figure 2, we have divided the MECS image of A2256 into 4 sectors: NW, SW, SE and NE; each sector has been divided into 4 annuli with bounding radii 2′-4′, 4′-8′, 8′-12′ and 12′-16′. The background subtraction has been performed using spectra extracted from blank sky event files in the same region of the detector as the source. Correction or exclusion of the regions shadowed by the strongback supporting the detector window have been performed as in the previous subsection. The energy ranges and the spectral models adopted for fitting are the same as used for the azimuthally averaged spectra.
In figures 3 and 4 we show respectively the temperature and abundance profiles obtained from the spectral fits for each of the 4 sectors. In table 1 we report the best fitting constant temperatures and abundances for the profiles shown in figures 3 and 4. Note that in all the profiles we have included the measurement obtained for the central circular region with radius 2′. All sectors, except for the SW sector, show a statistically significant temperature decrease with increasing radius. In the NW sector the temperature decreases continuously as the distance from the cluster center increases. In the SE and NE sectors the temperature first increases, reaching a maximum in either the second (NE sector) or third (SE sector) annulus, and then decreases. Interestingly, a fit to the temperatures of the 4 sectors in the third annulus (bounding radii 4′-8′) with a constant yields $`\chi ^2=19.2`$ for 3 d.o.f., with an associated probability for the temperature to be constant of $`2.5\times 10^{-3}`$, indicating that an azimuthal temperature gradient is present near the core of the cluster. More specifically the NW sector of the cluster is the coldest, 6$`\pm 0.3`$ keV, and the SE sector the hottest, 8.4$`\pm 0.5`$ keV. The SE sector is the only one to show clear evidence of an abundance decline with increasing radius; all other sectors have abundance profiles which are consistent with being flat.
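The constancy test quoted above can be reproduced schematically as follows. The NW and SE temperatures are the values given in the text, while the NE and SW entries are invented placeholders for illustration only:

```python
import numpy as np
from scipy.stats import chi2

T   = np.array([6.0, 7.2, 8.4, 7.4])   # keV in the 4'-8' annulus: NW, NE, SE, SW
err = np.array([0.3, 0.4, 0.5, 0.4])   # NE and SW values are placeholders

w = 1.0 / err**2
T_const = np.sum(w * T) / np.sum(w)        # best-fit constant (weighted mean)
chisq = np.sum(((T - T_const) / err)**2)   # chi^2 against the constant
p_value = chi2.sf(chisq, df=len(T) - 1)    # 3 d.o.f., as in the text
print(T_const, chisq, p_value)
```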
## 4. Discussion
Previous measurements of the temperature structure of A2256 have been performed by Briel & Henry (1994) and Irwin et al. (1999), using ROSAT data, by M96 and by W99 using ASCA data. We have performed a detailed comparison of our radial temperature profile with the ones based on the ASCA satellite (M96 and W99) which covers an energy range similar to ours. In figure 1 we have overlaid the temperature profile obtained by M96 and by W99 on our profile. The higher quality of the BeppoSAX measurement, due in part to the much longer exposure time and in part to the better angular resolution of our instrument, is quite evident. The innermost bin in the M96 profile, 0′-6′, has a temperature that is inconsistent with the temperature we measure from our three innermost bins spanning the same radial range. It must be noted that, while our profile is azimuthally averaged over all angles, the M96 measurement has been obtained excluding the region presumably contaminated by the softer emission of the infalling group. To obtain a direct comparison between our measurement and the one reported in M96, we have derived the temperature from a circular region with radius 6′ excluding the NW sector containing the infalling group. Our measurement, 7.5$`\pm `$0.2 keV, although somewhat higher than the one obtained by averaging over all directions, is still incompatible at more than the 3$`\sigma `$ level with the one reported by M96. The second radial bin reported in M96 (6′-11′) is characterized by a temperature apparently larger than the mean temperature for our corresponding bins (i.e. 6′-8′ and 8′-12′). However this difference is only apparent, indeed if we simultaneously fit the BeppoSAX spectra for the 6′-8′ and 8′-12′ bins, which is equivalent to fitting data from the 6′-12′ bin, we derive a temperature of 6.6$`\pm `$0.3 keV, which is consistent with the one derived by M96. The temperature for the outermost bin in the M96 profile is in agreement with our own measurement. The W99 measurement, which comes from a different analysis of the same ASCA observation used by M96, is in agreement with ours for radii smaller than 6′. The outermost bin reported in W99 appears to have a temperature substantially larger than the mean temperature for our corresponding bins (i.e. 6′-8′, 8′-12′ and 12′-16′). This difference is only apparent, if we simultaneously fit the BeppoSAX spectra for the 6′-8′, the 8′-12′ and the 12′-16′ bins we derive a temperature of 6.5$`\pm `$0.3, which is consistent with the one derived by W99. The apparent difference is related to the strong gradient in the surface brightness profile when going from 6′ to 16′, which causes the emission from the entire region to be dominated by the contribution of the innermost annuli. In summary: for radii larger than 6′, our profile is in agreement with the M96 and W99 profile, while for radii smaller than 6′, our profile is in agreement with the W99 profile and in disagreement with the M96 profile.
The most striking feature of our radial temperature profile is the presence of a relatively localized gradient. The temperature is flat out to 8′ and decreases by almost a factor of two within the following 8′. The radius at which the temperature starts to decline, $`8^{\prime }`$ (0.8 Mpc), is comparable to the radius at which the X-ray isophotes are no longer disturbed by the interaction of the two subclusters, which is clearly seen at smaller scales in the ROSAT PSPC image (e.g. figure 2 of Briel et al. 1991). Thus the presence of a hot almost isothermal region in the core is most likely related to the on-going merger between the main cluster and the group. The BeppoSAX temperature map shows clear evidence of an azimuthal gradient in the 8′-12′ radial bin. The NW sector is found to be the coldest while the SE sector appears to be the hottest, thus the gradient appears to be oriented in the same direction as the merger itself. Interestingly, in a previous work (De Grandi & Molendi 1999a), the merging cluster A3266 was found to have a similar temperature structure. No evidence of the two hot spots reported by Briel & Henry (1994) is found in our map.
The metal abundance in A2256 appears to decrease with increasing radius (see figure 1). This is the first firm case, to our knowledge, of an abundance gradient in a rich non cooling flow cluster. Evidence of an abundance gradient has been found in the poor cluster MKW4 (Finoguenov et al. 1999), while marginal evidence has been found in A399 (Fujita et al. 1996) and A1060 (Finoguenov et al. 1999). In A2256 the abundance averaged over a central region of 0.2 Mpc radius is $`\sim `$0.3 solar units, a value which, although higher than the average abundance for non cooling flow clusters, $`\sim `$0.20 (Allen & Fabian 1998), is smaller than those commonly observed in the core of cooling flow clusters (see for example Finoguenov et al. 1999, for an analysis of abundance profiles from ASCA data, and our own BeppoSAX results on Abell 2029, Molendi & De Grandi 1999, and PKS 0745-191, De Grandi & Molendi 1999b). Furthermore, the abundance map (see figure 4) shows that the SE sector, i.e. the one furthest away from the on-going merger, presents a highly significant abundance decline (probability $`=7.6\times 10^{-6}`$) localized at a radius comparable to the core radius of the cluster. A possible interpretation is that, prior to the merger event, a cooling flow had already developed in the core of the infalling subgroup, as suggested by Fabian & Daines (1991). The above authors, from the gas densities at the center of the main cluster and of the infalling subcluster compute cooling times of 2$`\times 10^{10}`$ years and 5$`\times 10^9`$ years respectively, implying that the infalling subcluster must have had a cooling flow. The interaction between the substructures would have disrupted the cooling flow thereby re-heating and re-mixing the gas. As the merger in A2256 is still in a relatively early stage, the gas located on the side opposite to the merger event may still retain the low abundances associated with the ICM prior to the cooling flow disruption. It seems unlikely that a contribution to the metallicity enhancement has come from the main cluster as its core density implies a cooling time that is larger than the age of the Universe. Finally we speculate that other rich merging clusters, similar to A2256, may present metallicity gradients produced by disrupted cooling flows. BeppoSAX and future XMM observations of merging clusters will certainly contribute in clarifying this issue.
We acknowledge support from the BeppoSAX Science Data Center. We thank the referee for useful comments and D. Lazzati for help in producing the contour plot.
# Maximal 𝜈_𝑒 oscillations, Borexino and smoking guns…

March 2000; revised April 2000. To appear in Phys. Lett. B.
## Abstract
We examine the maximal $`\nu _e\to \nu _s`$ and $`\nu _e\to \nu _{\mu ,\tau }`$ oscillation solutions to the solar neutrino problem. These solutions lead to roughly a $`50\%`$ solar flux reduction for the large parameter range $`3\times 10^{-10}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }10^{-3}`$. It is known that the earth regeneration effect may cause a potentially large night-day asymmetry even for maximal neutrino oscillations. We investigate the night-day asymmetry predictions for the forthcoming Borexino measurement of the $`{}^{7}Be`$ neutrinos for both maximal $`\nu _e\to \nu _s`$ and $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations. If $`y\times 10^{-8}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }4y\times 10^{-5}`$ (with $`y\approx 0.5`$ for the $`\nu _e\to \nu _s`$ case and $`y\approx 1`$ for the $`\nu _e\to \nu _{\mu ,\tau }`$ case) then the maximal neutrino oscillations will lead to observable night-day asymmetries in Borexino and/or superKamiokande. With Kamland covering the high mass range, $`10^{-5}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }10^{-3}`$, and Borexino/SuperK covering the low mass range, $`3\times 10^{-10}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }5\times 10^{-9}`$ (“just so” region), essentially all of the $`\delta m^2`$ parameter space will soon be scrutinized.
Maximal oscillations occupy a special point in parameter space. Neutral Kaons and B-mesons both oscillate maximally with their antiparticle partners. Interestingly there is now strong evidence from solar and atmospheric neutrino experiments that electron and muon neutrinos also oscillate maximally with some as yet unidentified partner. Identifying these states is one of the most pressing issues in particle physics.
One possibility is that each of the three known neutrinos oscillates maximally with an approximately sterile partner. This behaviour is expected to occur if parity is an unbroken symmetry of nature. In this theory, the sterile flavour maximally mixing with the $`\nu _e`$ is identified with the mirror electron neutrino. The characteristic maximal mixing feature occurs because of the underlying exact parity symmetry between the ordinary and mirror sectors. The maximal mixing observed for atmospheric muon neutrinos is nicely in accord with this framework (see e.g.), which has the atmospheric neutrino problem resolved through ‘$`\nu _\mu `$ mirror partner’ oscillations. Alternatively, it has also been suggested that each of the known neutrinos is a pseudo-Dirac fermion, in which case each oscillates maximally into a sterile $`\nu _R`$ partner. Both of these ideas motivate the study of maximal two flavour $`\nu _e\to \nu _s`$ oscillations (where $`\nu _s`$ means sterile neutrino).
Of course there are other possibilities. For example it is possible that the neutrino anomalies are due to bi-maximal mixing. This sees the atmospheric anomaly being solved by maximal $`\nu _\mu \to \nu _\tau `$ oscillations and the solar problem being solved by maximal $`\nu _e\to (\nu _\mu +\nu _\tau )/\sqrt{2}`$ oscillations. The bi-maximal hypothesis is an interesting possibility even though a compelling theoretical motivation for it has yet to be found. Two flavour maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations (where $`\nu _{\mu ,\tau }`$ means any linear combination of $`\nu _\mu `$ or $`\nu _\tau `$) are therefore also interesting. Note that the two phenomenologically similar (but theoretically very different) possibilities of $`\nu _e\to \nu _s`$ and $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations will hopefully be distinguished at the Sudbury Neutrino Observatory (SNO) when they measure the neutral and charged current contributions separately.
Two flavour maximal oscillations between the electron neutrino and a sterile or active flavour produce an approximate $`50\%`$ solar neutrino flux reduction for a large range of $`\delta m^2`$:
$$3\times 10^{-10}\stackrel{<}{\sim }\frac{\delta m^2}{\mathrm{eV}^2}\stackrel{<}{\sim }10^{-3}.$$
(1)
The reason why the reduction is not exactly $`50\%`$ is because earth regeneration effects can modify the night time rate (and there is also a small neutral current contribution in the case of active neutrino oscillations in $`\nu e\to \nu e`$ elastic scattering experiments). This earth regeneration effect can lead to a modest energy dependence, but not enough to explain the low Homestake result. The upper bound in Eq. (1) arises from the lack of $`\overline{\nu }_e`$ disappearance in the CHOOZ experiment (note that this entire range for $`\delta m^2`$ does not necessarily lead to any inconsistency with bounds imposed by big bang nucleosynthesis), while the lower bound can be deduced from the observed recoil electron energy spectrum. For $`E_{recoil}<12MeV`$ the recoil electron energy spectrum is consistent with an overall flux reduction of roughly $`50\%`$ with no evidence of any energy dependent distortion of the neutrino flux. Maximal oscillations with $`\delta m^2\stackrel{<}{\sim }3\times 10^{-10}eV^2`$ either significantly distort this spectrum or (in the case of very small $`\delta m^2`$) do not lead to any flux reduction (because the oscillation length becomes too long for oscillations to have any effect). Note that there is a hint of a spectral anomaly for $`E_{recoil}>12MeV`$ which may be due to “just so” oscillations with $`\delta m^2\approx 4\times 10^{-10}eV^2`$ (see e.g.), although it is also possible that it is due to a systematic uncertainty or statistical fluctuation.
The current experimental situation for solar neutrinos is summarized in the table below where the data is compared to the theoretical model of Ref..
| Experiment | Flux | Theory |
| --- | --- | --- |
| Homestake | $`2.55\pm 0.25(stat+syst)`$ SNU | $`7.7_{-1.0}^{+1.2}`$ SNU |
| Kamiokande | $`2.80\pm 0.19(stat)\pm 0.33(syst)\times 10^6cm^{-2}s^{-1}`$ | $`5.15_{-0.7}^{+1.0}\times 10^6cm^{-2}s^{-1}`$ |
| SuperKamiokande | $`2.44\pm 0.05(stat)\pm 0.08(syst)\times 10^6cm^{-2}s^{-1}`$ | $`\mathrm{"}\mathrm{"}\mathrm{"}`$ |
| GALLEX | $`77\pm 6(stat)\pm 5(syst)`$ SNU | $`129_{-6}^{+8}`$ SNU |
| SAGE | $`67\pm 7(stat)\pm 3.5(syst)`$ SNU | $`\mathrm{"}\mathrm{"}\mathrm{"}`$ |
Table Caption: Comparison of solar neutrino experiments with the solar model of Ref..
As the above table shows, the approximate $`50\%`$ flux reduction implied by maximal neutrino oscillations in the parameter range of Eq. (1) would reconcile four out of the five experiments, which means that this solution is in broad agreement with the experiments. The misbehaving experiment is Homestake, which is roughly 3-4 standard deviations too low (a $`50\%`$ flux reduction would imply 3.3-4.5 SNU, c.f. the measured $`2.55\pm 0.25`$ SNU). If taken seriously, the low Homestake result suggests some specific regions of parameter space. However one should keep in mind that theoretical solar models involve a number of simplifying assumptions and it is therefore also possible that the $`{}^{7}Be`$ neutrino flux has been overestimated, which would alleviate the discrepancy. Alternatively, there might be some as yet unidentified systematic error in the Homestake experiment. This seems plausible as the Homestake team argued that their data was anti-correlated with the sun spot cycle during the period before about 1986 (with high confidence level), but has since stabilized (see e.g. Ref. and also section 10.5 of Ref. for some discussion about this). We adopt the cautious viewpoint that this experiment needs to be checked by another experiment before a compelling case for large energy dependent suppression of the solar flux can be made.
Recently, Guth et al pointed out that the earth regeneration effect leads to a night-day asymmetry, $`A_{nd}`$, for maximal neutrino oscillations. We define $`A_{nd}`$ by
$$A_{nd}\equiv \frac{N-D}{N+D}.$$
(2)
(Note that in the literature an alternative definition is also used, which differs from our definition in Eq. (2) by an approximate factor of 2.)
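For concreteness, a minimal helper (ours) implementing Eq. (2) with the naive Poisson error on the raw night and day counts; real analyses must of course fold in systematic uncertainties as well:

```python
import numpy as np

def night_day_asymmetry(N, D):
    """Eq. (2): A_nd = (N - D)/(N + D), with the statistical error from
    propagating sqrt(N) and sqrt(D)."""
    A = (N - D) / (N + D)
    sigma = 2.0 * np.sqrt(N * D * (N + D)) / (N + D)**2
    return A, sigma

print(night_day_asymmetry(5200.0, 4900.0))  # illustrative counts only
```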
Guth et al computed the night-day asymmetry for superKamiokande for large angle and maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations. In Ref. this was extended to maximal $`\nu _e\to \nu _s`$ oscillations where it was shown that the current measurements of the night-day asymmetry allow the parameter space $`2\times 10^{-7}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }8\times 10^{-6}`$ to be excluded at about two standard deviations. The point of this paper is to study both maximal $`\nu _e\to \nu _s`$ and $`\nu _e\to \nu _{\mu ,\tau }`$ oscillation solutions in the context of the forthcoming Borexino experiment.
The Borexino experiment is a real time $`\nu e\to \nu e`$ elastic scattering experiment like superKamiokande, but is designed to be sensitive to relatively low energy neutrinos. This should allow the neutrino flux from the $`E=0.86MeV`$ $`{}^{7}Be`$ line to be measured. Our procedure for calculating the night-day asymmetry is very similar to Refs. so we will not repeat the details here. One difference is that now we must use the zenith distribution function for the Gran Sasso latitude, which we obtain from Ref.. Also, we use the advertised Borexino cuts in the apparent recoil electron kinetic energy of $`0.25<E_{recoil}/MeV<0.70`$. With this cut, about $`80\%`$ of the recoil electron events are due to $`{}^{7}Be`$ neutrinos and $`20\%`$ due to CNO and pep neutrinos.
Our results for the night-day asymmetry for the maximal $`\nu _e\to \nu _s`$ oscillation solution are given in figure 1 (solid line) and for the maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillation solution in figure 2. Also shown (dashed line) are the analogous results for the superKamiokande experiment, obtained from Ref.. Also included (dotted line) in the figures are the results for Kamland, which may also be able to measure low energy solar neutrinos.
As far as I am aware, the night-day asymmetry for $`\nu _e\to \nu _s`$ oscillations (maximal or otherwise) has never been computed previously in the context of Borexino. While this paper was in preparation we became aware of the recent eprint, Ref., which discusses the night-day asymmetry for large angle $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations in the context of Borexino. Our results are in agreement with the results of this paper when we examine the $`\mathrm{sin}^22\theta =1`$ line on their contour plot in the $`\delta m^2,\mathrm{sin}^22\theta `$ plane. For the subset of people interested mainly in maximal mixing our results are complementary to those of Ref. since they contain more information than the contour plots.
The night-day asymmetry results for Borexino are roughly similar to the results for superKamiokande, except they are shifted to lower values of $`\delta m^2`$. This shift of about an order of magnitude in $`\delta m^2`$ is quite easy to understand. It arises because the typical neutrino energies for superKamiokande are about an order of magnitude larger than the energies relevant for Borexino and the oscillations depend on $`E,\delta m^2`$ only in the ratio $`E/\delta m^2`$.
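This scaling is easy to make quantitative with the standard vacuum oscillation length $`L=4\pi E/\delta m^2`$; the numerical coefficient below is the usual one, and the chosen energies and $`\delta m^2`$ values are merely indicative:

```python
def oscillation_length_m(E_MeV, dm2_eV2):
    """Vacuum oscillation length L = 4*pi*E/(delta m^2) in metres:
    L [m] ~= 2.48 (E/MeV)/(dm^2/eV^2)."""
    return 2.48 * E_MeV / dm2_eV2

# Equal L/E: the 0.86 MeV 7Be line probes delta m^2 roughly an order of
# magnitude smaller than the ~10 MeV boron-8 neutrinos at superKamiokande.
print(oscillation_length_m(0.86, 1e-7), oscillation_length_m(10.0, 1e-6))
```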
Assuming maximal oscillations in the range, Eq. (1) (and the solar model of Ref.), Borexino is expected to detect around 25-30 events/day (with the cut $`0.25<E_{recoil}/MeV<0.70`$). This is somewhat more than in the SuperKamiokande experiment. Accordingly a night-day asymmetry as low as $`A_{nd}\approx 0.02`$ (or even lower) may be observable at Borexino after only a couple of years of data (see Ref. for discussions of backgrounds and systematic uncertainties). From our figures we see that the maximal neutrino oscillation solutions lead to a significant (i.e. $`A_{nd}\stackrel{>}{\sim }0.02`$) night-day asymmetry in Borexino and/or superKamiokande for the parameter range:
$`5\times 10^{-9}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }2\times 10^{-5}\text{ for }\nu _e\to \nu _s`$ (3)
$`10^{-8}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }4\times 10^{-5}\text{ for }\nu _e\to \nu _{\mu ,\tau }`$ (4)
If $`\delta m^2`$ is in this range then the night-day asymmetry should provide a suitable “smoking gun” signature, giving compelling evidence that the solar neutrino problem is solved by neutrino oscillations. This is especially important for $`\nu _e\to \nu _s`$ oscillations, since that solution predicts that SNO will not find any anomalous NC/CC ratio.
Let us label the region in Eqs. (3) and (4) as the “medium $`\delta m^2`$ region”. Observe that there are two other possible regions of interest: The “high $`\delta m^2`$ region” with $`2\times 10^{-5}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }10^{-3}`$ and the “low $`\delta m^2`$ region” with $`3\times 10^{-10}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }5\times 10^{-9}`$ (where the upper boundary is increased to about $`10^{-8}`$ for $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations). If $`\delta m^2`$ is in the high region then the Kamland experiment will be able to see reactor electron neutrino disappearance. This should fully test this region. Note that part of the high $`\delta m^2`$ region is already being probed by the atmospheric neutrino experiments. For large values of $`\delta m^2\stackrel{>}{\sim }10^{-4}eV^2`$, $`\nu _e\to \nu _s`$ oscillations lead to observable up-down asymmetries for the detected electrons. At the moment there is no evidence for any electron up-down asymmetry which disfavours maximal $`\nu _e\to \nu _s`$ oscillations with $`\delta m^2/eV^2\stackrel{>}{\sim }10^{-4}`$ (similar results should also hold for $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations). For $`\delta m^2`$ in the low region the oscillations will lead to “just so” phenomena such as energy distortion and seasonal effects. These effects can be probed at superKamiokande for $`\delta m^2/eV^2\stackrel{<}{\sim }10^{-9}`$ (see e.g.) and at Borexino for $`\delta m^2/eV^2\stackrel{<}{\sim }5\times 10^{-9}`$.
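The three regions can be summarised in a schematic classifier; this is our own illustration, with the boundaries simply transcribed from Eqs. (3), (4) and the text:

```python
def dm2_region(dm2, channel='nu_s'):
    """Classify delta m^2 (in eV^2) into the 'low', 'medium' and 'high'
    regions discussed in the text, for 'nu_s' or 'nu_mu_tau'."""
    if not 3e-10 <= dm2 <= 1e-3:
        return 'outside the range of Eq. (1)'
    low_med = 5e-9 if channel == 'nu_s' else 1e-8    # low/medium boundary
    med_high = 2e-5 if channel == 'nu_s' else 4e-5   # medium/high boundary
    if dm2 < low_med:
        return 'low: "just so" spectrum distortion and seasonal effects'
    if dm2 < med_high:
        return 'medium: night-day asymmetry at Borexino and/or superK'
    return 'high: reactor electron neutrino disappearance at Kamland'

print(dm2_region(3e-6))  # -> medium
```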
We summarize the current situation and expected sensitivities to $`\delta m^2`$ of the various experiments in figure 3 (for the maximal $`\nu _e\to \nu _s`$ oscillations) and figure 4 (for the maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations). In the $`\nu _e\to \nu _s`$ case observe that all of the $`\delta m^2`$ parameter space will lead to a “smoking gun” signature in at least one of the experiments (Borexino, SuperKamiokande and/or Kamland). For the maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations, there is a narrow region $`5\times 10^{-9}\stackrel{<}{\sim }\delta m^2/eV^2\stackrel{<}{\sim }10^{-8}`$ which may fall between the cracks. This region may possibly be tested at Borexino (or Kamland) if their systematic uncertainties can be reduced sufficiently so that $`A_{nd}\approx 0.01`$ (cf. Ref.) could be seen for the $`{}^{7}Be`$ neutrinos.
Finally, the current superKamiokande measurement of the night-day asymmetry is
$$A_{nd}=0.033\pm 0.017(stat+syst).$$
(5)
If we take the above hint seriously, i.e. that the superKamiokande night-day asymmetry is small but non-zero, then in the context of the maximal mixing scenario there are two possible regions for $`\delta m^2`$, depending on which side of the night-day “mountain” we are on. If we are on the left-hand slope then Borexino will see a large night-day asymmetry. Our results in figures 1, 2 suggest a range of $`0.12<A_{nd}<0.20`$ for the $`\nu _e\to \nu _s`$ case and $`0.10<A_{nd}<0.16`$ for the $`\nu _e\to \nu _{\mu ,\tau }`$ case. Of course if we are on the right-hand slope of the superKamiokande night-day mountain then Borexino will not see any night-day asymmetry. The shape of the superKamiokande energy spectrum of the night-time events can also tell us, in principle, which side of the night-day mountain we are on (see e.g.).
In summary, there are strong general and specific theoretical reasons for neutrino oscillations to be maximal. This prejudice is broadly consistent with the $`\nu _\mu `$ disappearance observed by the atmospheric neutrino experiments as well as the $`\nu _e`$ disappearance suggested by the solar neutrino experiments. We have examined the predictions of maximal $`\nu _e`$ oscillations for Borexino (see figures 1,2). This experiment together with SNO, superKamiokande and Kamland should be able to cover essentially all of the parameter space of interest.
Acknowledgements The author would like to thank Silvia Bonetti and Marco Giammarchi for answering my questions about the sensitivity of Borexino to pp neutrinos. The author would also like to thank H. Murayama for some comments which led to improvements to the paper.
Figure Captions
Figure 1: Night-day asymmetry, $`A_{nd}\equiv (N-D)/(N+D)`$ versus $`\delta m^2/\text{eV}^2`$ for maximal $`\nu _e\to \nu _s`$ oscillations. The solid line is the prediction for Borexino assuming a cut on the apparent recoil electron energy of $`0.25<E_{recoil}/MeV<0.70`$, while the dashed line is the night-day asymmetry for superKamiokande ($`6.5<E_{recoil}/MeV<20`$). Also shown (dotted line) is the corresponding result for the Kamland site ($`0.25<E_{recoil}/MeV<0.70`$).
Figure 2: Same as figure 1 except for maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations.
Figure 3: Sensitivity of maximal $`\nu _e\to \nu _s`$ oscillations to the various experiments. Note that the “SuperK night-day” region denotes the region with an observable ($`A_{nd}\stackrel{>}{\sim }0.02`$) night-day asymmetry at superKamiokande (which is not so large as to be excluded by the current superKamiokande data).
Figure 4: Same as figure 3 except for maximal $`\nu _e\to \nu _{\mu ,\tau }`$ oscillations.
# Self-interacting Warm Dark Matter
## I Introduction
Dark matter is a necessary ingredient in the standard Big Bang model of the universe. Its presence has an impact from subgalactic dynamics to the global evolution of the universe. However, the nature of the dark matter remains unknown. So far, the cold dark matter model has been very successful in explaining how structure forms . In this model the dark matter consists of weakly interacting massive particles (WIMPs) which are extremely non-relativistic when structure formation begins. Because they are so massive they do not free stream and perturbations on small scales are preserved. In the 1980s it was realised that CDM produces too much small-scale structure, and that some modification of the model is needed. Several possibilities exist: there could be a large component of hot dark matter damping small scale fluctuations or there could be a non-zero cosmological constant. Recent data from type Ia supernovae indeed suggest that the energy density of the universe is dominated by a cosmological constant . Thus, the problem with CDM is at first sight remedied. However, in the past few years very high resolution N-body simulations of structure formation have shown that any type of CDM model produces far too much substructure on galactic scales, compared with observations. The halo of a galaxy like our own should contain of the order 1000 distinct subhaloes, a factor of ten more than is found by observations . Another, related problem is that galaxies are predicted to have singular cores. Navarro, Frenk and White found that N-body simulations predicted a universal core profile of halos where $`\rho \propto r^{-1}`$. Later simulations with higher resolution find an even steeper profile. At the same time galactic rotation curves indicate dark matter halos with finite cores, i.e. constant core density . This problem is very severe and is consistently found in all simulations.
If the details of star formation and feedback do not solve the problem, then physics at a more fundamental level possibly could. One option is that the primordial power spectrum has a sharp drop at subgalactic scales so that substructure is prevented from forming . Another option along this line is that the dark matter is not cold, but warm . In this model the dark matter particle mass should be around 1 keV so that the dark matter has significant thermal motion and perturbations on small scales are erased. However, the cut-off scale needed for the correct core radius of halos to be produced is so large that it is difficult to form the correct number of dwarf galaxies .
A radically different explanation was suggested by Spergel and Steinhardt , namely that the dark matter could be cold, but have significant self-interactions. If the mean free path of the dark matter particles is of the order of the size of the collapsing system, then the core singularity would form much more slowly, while the outer parts of the halo would remain unchanged. Recently, a large number of papers have appeared which investigate this possibility numerically . The conclusion is that if the interactions are very strong, the model does not fit observations . The halos become completely spherical apart from a small rotational deformation, and a singular core develops. However, it seems that models where the dark matter mean free path is similar to the system size produce halos closely resembling the observed ones . It has also been suggested that the self-interacting matter could be in the form of a scalar field .
That dark matter could have self-interactions is an old idea. It was originally suggested by Raffelt and Silk that HDM neutrinos could have strong self interactions. In this way free streaming would be suppressed and fluctuations only washed out via diffusion. The scenario was elaborated on by Atrio-Barandela and Davidson who did a numerical study of this model. The possibility of number changing self interactions has also been considered .
In the present paper we wish to explore the possibility that dark matter has both significant thermal motion and self-interactions. The self-interactions are assumed to consist only of two-particle scattering. In general, the inclusion of self interactions leads to less small scale suppression of perturbations because the small scale cut-off in power is given by the Jeans scale which is smaller than the free-streaming scale. We find that self-interacting hot dark matter, as suggested by Refs. , is clearly ruled out because it produces far too little small-scale structure. However, self interacting warm dark matter may be a viable possibility. Strong self interactions push the power spectrum towards smaller scales by roughly a factor of 1.6, which may make it consistent with observations.
## II The Boltzmann equation
The evolution of any given particle species can be described via the Boltzmann equation. Our notation is identical to that of Ma and Bertschinger (MB) . We shall work in synchronous gauge because the numerical routine for calculating matter and CMB power spectra, CMBFAST , is written in this gauge. As the time variable we use conformal time, defined as $`d\tau =dt/a(t)`$, where $`a(t)`$ is the scale factor. Also, as the momentum variable we shall use the comoving momentum $`q_j\equiv ap_j`$. We further parametrize $`q_j`$ as $`q_j=qn_j`$, where $`q`$ is the magnitude of the comoving momentum and $`n_j`$ is a unit 3-vector specifying direction.
The Boltzmann equation can generically be written as
$$L[f]=\frac{Df}{D\tau }=C[f],$$
(1)
where $`L[f]`$ is the Liouville operator. The collision operator on the right-hand side describes any possible collisional interactions.
We then write the distribution function as
$$f(x^i,q,n_j,\tau )=f_0(q)[1+\mathrm{\Psi }(x^i,q,n_j,\tau )],$$
(2)
where $`f_0(q)`$ is the unperturbed distribution function. For a standard fermion which decouples while relativistic, this distribution function is simply
$$f_0(q)=[\mathrm{exp}(q/T_0)+1]^{-1},$$
(3)
where $`T_0`$ is the present-day temperature of the species. For a self-interacting species in scattering equilibrium the distribution is instead
$$f_0(q)=[\mathrm{exp}((ϵ-\mu )/aT)+1]^{-1},$$
(4)
where $`ϵ=\sqrt{q^2+a^2m^2}`$ and $`\mu `$ is a chemical potential. This distribution is in general different from the one for collisionless particles, so that one might worry that a detailed calculation of $`f_0(q,\tau )`$ is needed. However, the relevant quantity to look at for our purpose is the entropy per particle, $`s/n`$, which is conserved for both interacting and non-interacting species (note that this would not hold in a model with number-changing self interactions ). This means that for instance $`p/T_\gamma =\mathrm{constant}`$. Thus we do not need to worry about how the unperturbed distribution is changed by self-interactions. In practice we just assume that the distribution function is equal to what it would be for a collisionless species.
In synchronous gauge the Boltzmann equation can be written as an evolution equation for $`\mathrm{\Psi }`$ in $`k`$-space
$$\frac{1}{f_0}L[f]=\frac{\partial \mathrm{\Psi }}{\partial \tau }+i\frac{q}{ϵ}\mu \mathrm{\Psi }+\frac{d\mathrm{ln}f_0}{d\mathrm{ln}q}\left[\dot{\eta }-\frac{\dot{h}+6\dot{\eta }}{2}\mu ^2\right]=\frac{1}{f_0}C[f],$$
(5)
where $`\mu \equiv n^j\widehat{k}_j`$. $`h`$ and $`\eta `$ are the metric perturbations, defined from the perturbed space-time metric in synchronous gauge
$$ds^2=a^2(\tau )[-d\tau ^2+(\delta _{ij}+h_{ij})dx^idx^j],$$
(6)
$$h_{ij}=d^3ke^{i\stackrel{}{k}\stackrel{}{x}}\left(\widehat{k}_i\widehat{k}_jh(\stackrel{}{k},\tau )+(\widehat{k}_i\widehat{k}_j\frac{1}{3}\delta _{ij})6\eta (\stackrel{}{k},\tau )\right).$$
(7)
Collisionless Boltzmann equation — At first we assume that $`\frac{1}{f_0}C[f]=0`$. We then expand the perturbation as
$$\mathrm{\Psi }=\sum _{l=0}^{\infty }(-i)^l(2l+1)\mathrm{\Psi }_lP_l(\mu ).$$
(8)
One can then write the collisionless Boltzmann equation as a moment hierarchy for the $`\mathrm{\Psi }_l`$ by performing the angular integration of $`L[f]`$
$`\dot{\mathrm{\Psi }}_0`$ $`=`$ $`-k{\displaystyle \frac{q}{ϵ}}\mathrm{\Psi }_1+{\displaystyle \frac{1}{6}}\dot{h}{\displaystyle \frac{d\mathrm{ln}f_0}{d\mathrm{ln}q}}`$ (9)
$`\dot{\mathrm{\Psi }}_1`$ $`=`$ $`k{\displaystyle \frac{q}{3ϵ}}(\mathrm{\Psi }_0-2\mathrm{\Psi }_2)`$ (10)
$`\dot{\mathrm{\Psi }}_2`$ $`=`$ $`k{\displaystyle \frac{q}{5ϵ}}(2\mathrm{\Psi }_1-3\mathrm{\Psi }_3)-\left({\displaystyle \frac{1}{15}}\dot{h}+{\displaystyle \frac{2}{5}}\dot{\eta }\right){\displaystyle \frac{d\mathrm{ln}f_0}{d\mathrm{ln}q}}`$ (11)
$`\dot{\mathrm{\Psi }}_l`$ $`=`$ $`k{\displaystyle \frac{q}{(2l+1)ϵ}}(l\mathrm{\Psi }_{l-1}-(l+1)\mathrm{\Psi }_{l+1}),\quad l\ge 3`$ (12)
It should be noted here that the first two hierarchy equations are directly related to the energy-momentum conservation equation. This can be seen in the following way. Let us define the density and pressure perturbations of the dark matter fluid as
$`\delta `$ $`\equiv `$ $`\delta \rho /\rho `$ (13)
$`\theta `$ $`\equiv `$ $`ik_j\delta T_j^0/(\rho +P)`$ (14)
$`(\rho +P)\sigma `$ $`\equiv `$ $`-(\widehat{k}_i\widehat{k}_j-{\displaystyle \frac{1}{3}}\delta _{ij})(T^{ij}-\delta ^{ij}T_k^k/3).`$ (15)
Then energy and momentum conservation implies that
$`\dot{\delta }`$ $`=`$ $`-(1+\omega )\left(\theta +{\displaystyle \frac{\dot{h}}{2}}\right)-3{\displaystyle \frac{\dot{a}}{a}}\left({\displaystyle \frac{\delta P}{\delta \rho }}-\omega \right)\delta `$ (16)
$`\dot{\theta }`$ $`=`$ $`-{\displaystyle \frac{\dot{a}}{a}}(1-3\omega )\theta -{\displaystyle \frac{\dot{\omega }}{1+\omega }}\theta +{\displaystyle \frac{\delta P/\delta \rho }{1+\omega }}k^2\delta -k^2\sigma .`$ (17)
By integrating Eq. (9) over $`q^2ϵdq`$, one gets Eq. (16), and by integrating Eq. (10) over $`q^3dq`$ one retrieves Eq. (17).
Collisional Boltzmann equation — We now introduce interactions by lifting the restriction that $`\frac{1}{f_0}C[f]=0`$. Ideally, one should calculate the collision integrals in detail for some explicit interaction. However, we shall instead use the cruder, but more model independent relaxation time approximation. Here, the right hand side of the Boltzmann equation is in general written as
$$\frac{1}{f_0}C[f]=-\frac{\mathrm{\Psi }}{\tau },$$
(18)
where $`\tau `$ is the mean time between collisions. However, in this simple approximation we run the risk of not obeying the basic conservation laws. The collision term in Eq. (9) is $`\int 𝑑\mathrm{\Omega }\frac{1}{f_0}C[f]`$ and the one in Eq. (10) is $`\int 𝑑\mathrm{\Omega }\mu \frac{1}{f_0}C[f]`$. Integrating these two terms over momentum space one gets the collision terms in Eqs. (16-17) to be
$$\int C[f]𝑑\mathrm{\Omega }q^2𝑑qϵ$$
(19)
and
$$\int C[f]𝑑\mathrm{\Omega }q^2𝑑q\mu q=\widehat{k}^i\int C[f]𝑑\mathrm{\Omega }q^2𝑑qq_i$$
(20)
respectively. However, any integral of the form
$$\int C[f]𝑑\mathrm{\Omega }q^2𝑑qA,$$
(21)
where $`A\in (I,ϵ,q_i)`$, is automatically zero because $`A`$ is a collisional invariant (however, conservation of particle number ($`I`$) only applies to $`2\to 2`$ scatterings). Thus, both the above integrals are zero, and the right hand side of the $`l=0`$ and 1 terms should be zero, reflecting that energy and momentum is conserved in each interaction. Apart from these two terms we put
$$\frac{1}{f_0}C[f]_{l\ge 2}=-\frac{\mathrm{\Psi }_l}{\tau },$$
(22)
so that the full Boltzmann hierarchy, including interactions, is
$`\dot{\mathrm{\Psi }}_0`$ $`=`$ $`-k{\displaystyle \frac{q}{ϵ}}\mathrm{\Psi }_1+{\displaystyle \frac{1}{6}}\dot{h}{\displaystyle \frac{d\mathrm{ln}f_0}{d\mathrm{ln}q}}`$ (23)
$`\dot{\mathrm{\Psi }}_1`$ $`=`$ $`k{\displaystyle \frac{q}{3ϵ}}(\mathrm{\Psi }_0-2\mathrm{\Psi }_2)`$ (24)
$`\dot{\mathrm{\Psi }}_2`$ $`=`$ $`k{\displaystyle \frac{q}{5ϵ}}(2\mathrm{\Psi }_1-3\mathrm{\Psi }_3)-\left({\displaystyle \frac{1}{15}}\dot{h}+{\displaystyle \frac{2}{5}}\dot{\eta }\right){\displaystyle \frac{d\mathrm{ln}f_0}{d\mathrm{ln}q}}-{\displaystyle \frac{\mathrm{\Psi }_2}{\tau }}`$ (25)
$`\dot{\mathrm{\Psi }}_l`$ $`=`$ $`k{\displaystyle \frac{q}{(2l+1)ϵ}}(l\mathrm{\Psi }_{l-1}-(l+1)\mathrm{\Psi }_{l+1})-{\displaystyle \frac{\mathrm{\Psi }_l}{\tau }},\quad l\ge 3`$ (26)
In Appendix A we discuss how the above set of equations relates to the equations used in other studies of self-interacting dark matter.
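As a rough illustration of how Eqs. (23)-(26) behave, the following is a minimal sketch (ours, not part of the actual CMBFAST implementation) that integrates the hierarchy for a single $`(k,q)`$ mode. The metric source terms $`\dot{h}`$, $`\dot{\eta }`$ are switched off for simplicity, and the hierarchy is closed with a crude truncation; the parameter values are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp

def hierarchy_rhs(tau, Psi, k, q_over_eps, tau_c, lmax):
    """Eqs. (23)-(26) for one (k, q) mode, with h-dot = eta-dot = 0."""
    dPsi = np.zeros_like(Psi)
    dPsi[0] = -k * q_over_eps * Psi[1]
    dPsi[1] = k * q_over_eps / 3.0 * (Psi[0] - 2.0 * Psi[2])
    for l in range(2, lmax):
        dPsi[l] = (k * q_over_eps / (2*l + 1)) * (l*Psi[l-1] - (l+1)*Psi[l+1]) \
                  - Psi[l] / tau_c
    # crude closure: simply drop Psi_{lmax+1}
    dPsi[lmax] = (k * q_over_eps * lmax / (2*lmax + 1)) * Psi[lmax-1] \
                 - Psi[lmax] / tau_c
    return dPsi

lmax = 20
Psi0 = np.zeros(lmax + 1)
Psi0[0] = 1.0                      # pure density perturbation initially
sol = solve_ivp(hierarchy_rhs, (0.0, 100.0), Psi0,
                args=(0.1, 1.0, 1.0, lmax), rtol=1e-8)
print(sol.y[0, -1], sol.y[2, -1])  # strong scattering suppresses l >= 2
```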
Relaxation time — We now need an expression for the collision time $`\tau `$. In general we can write
$$\tau ^{-1}=n\sigma |v|.$$
(27)
For relativistic particles scattering via exchange of a massive vector boson ($`m_X\gg T,m`$, where $`m_X`$ is the vector boson mass and $`m`$ is the mass of the dark matter particle) we have
$$\sigma |v|\propto (T/m)^2,$$
(28)
whereas for non-relativistic particles it is
$$\sigma |v|\propto (T/m)^{1/2}.$$
(29)
As an interpolation we use
$$\sigma |v|=\frac{1}{2}\sigma _0\left[\left(\frac{T}{m}\right)^2+\left(\frac{T}{m}\right)^{1/2}\right].$$
(30)
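In code, Eqs. (27) and (30) amount to the following small helpers (a sketch; unit bookkeeping, natural units with ħ = c = 1, is left to the caller):

```python
def sigma_v(T, m, sigma0):
    """Eq. (30): interpolation between the relativistic (T/m)^2 and the
    non-relativistic (T/m)^(1/2) scalings of sigma*|v|."""
    x = T / m
    return 0.5 * sigma0 * (x**2 + x**0.5)

def inverse_collision_time(n, T, m, sigma0):
    """Eq. (27): tau^-1 = n * sigma*|v|, in consistent natural units."""
    return n * sigma_v(T, m, sigma0)
```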
## III Numerical Results
Using the above equations we have calculated matter and CMB power spectra for two different dark matter models: HDM ($`m=10`$ eV) and warm dark matter ($`m=1`$ keV) over a range of scattering cross sections. In practice we have incorporated the equations into the CMBFAST code developed by Seljak and Zaldarriaga . All the models were done assuming that $`\mathrm{\Omega }_X=0.95`$ and $`\mathrm{\Omega }_B=0.05`$, $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. The conclusions are unchanged if a $`\mathrm{\Lambda }`$CDM model is assumed, since our purpose here is only to show how self-interactions change the power spectra. Fig. 1 shows the matter power spectrum in terms of the quantity
$$\mathrm{\Delta }^2(k)\equiv \frac{k^3P(k)}{2\pi ^2},$$
(31)
for our two different cases. In both cases, the power spectrum cut-off is pushed towards higher $`k`$ if self-interaction is assumed. The HDM ($`m=10`$ eV) results are in agreement with the results of Atrio-Barandela and Davidson , for $`k`$ smaller than the cut-off scale. At small scales, their results are somewhat different from ours, probably because of an erroneous term in their perturbation equations (as explained in the appendix).
For our choice of particle masses, the dividing line between the non-interacting and strongly interacting regimes is roughly at
$$\sigma _0\sim 10^{-36}\mathrm{cm}^2.$$
(32)
Note that this is much lower than the cross section which is needed to explain structure on galactic scales in the self-interacting cold dark matter model. In that case, the dividing line is closer to $`10^{-23}\mathrm{cm}^2`$. For the case where the dark matter is hot, self-interactions are not able to improve the agreement with observations significantly because the power spectrum cut-off is still at much too large a scale. As discussed in Ref. , warm dark matter provides a good fit to observations of dwarf galaxies if the power spectrum cut-off is at roughly $`2h_{50}\mathrm{Mpc}^{-1}`$, corresponding to a mass of 1 keV. However, explaining the core structure of dark matter halos requires that $`m\sim 300`$ eV, so that even though the uncertainties involved in determining the best cut-off scale are as large as a factor of two, the collisionless warm dark matter model is inconsistent with observations. Our results indicate that it might be possible to lower the warm dark matter particle mass to this smaller value and compensate by making the warm dark matter self-interacting, which decreases the cut-off length scale by about a factor of 1.6 compared to the non-self-interacting case. Numerically we find that the wavenumber $`k_{\mathrm{max}}`$ at which $`\mathrm{\Delta }^2(k)`$ takes its maximum value is well approximated by
$$k_{\mathrm{max}}\approx \{\begin{array}{cc}1.1\left(\frac{m}{1\mathrm{keV}}\right)^{3/4}\mathrm{Mpc}^{-1}\hfill & \text{collisionless,}\hfill \\ 1.7\left(\frac{m}{1\mathrm{keV}}\right)^{3/4}\mathrm{Mpc}^{-1}\hfill & \text{strongly self-interacting.}\hfill \end{array}$$
(33)
For the collisionless case this corresponds to the free-streaming scale, whereas in the strongly interacting case it corresponds to the Jeans scale for a given particle mass. From this result we conclude that self-interacting warm dark matter is marginally consistent with the present observational constraints.
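Eq. (33) is compact enough to encode directly; a small helper (ours) for the two regimes:

```python
def k_max(m_keV, self_interacting=False):
    """Eq. (33): wavenumber (Mpc^-1) at which Delta^2(k) peaks -- the
    free-streaming scale if collisionless, the Jeans scale if strongly
    self-interacting."""
    return (1.7 if self_interacting else 1.1) * m_keV**0.75

# The ratio 1.7/1.1 ~ 1.6 is the shift of the cut-off towards smaller
# scales produced by strong self-interactions, as discussed in the text.
print(k_max(1.0), k_max(1.0, self_interacting=True))
```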
For the CMB, the fluctuations are usually expressed in terms of the $`C_l`$ coefficients, $`C_l=\langle |a_{lm}|^2\rangle `$, where the $`a_{lm}`$ coefficients are determined in terms of the real angular temperature fluctuations as $`T(\theta ,\varphi )=\sum _{lm}a_{lm}Y_{lm}(\theta ,\varphi )`$. Fig. 2 shows the CMB spectra for the same two particle masses. If the dark matter is hot, the CMB spectrum is changed relative to cold dark matter, because the DM particles are not completely non-relativistic at recombination. This gives rise to what is called the early integrated Sachs-Wolfe (ISW) effect. Self-interactions have very little impact because they only affect scales within the dark matter sound horizon at recombination. Even for a dark matter mass of 10 eV, this is at too small a scale to have a significant impact. For a dark matter particle mass of 1 keV, the effects are completely negligible. Our results for non-self-interacting warm dark matter agree with those of Burns ; we have extended his results to demonstrate that the addition of self-interactions to the warm dark matter model also produces a negligible difference from standard CDM.
## IV Discussion
We have performed a quantitative calculation of the linear behaviour of warm dark matter models with possible self interactions. As expected, power on small scales is generally increased in self-interacting models because free streaming is suppressed. In collisionless models, power is suppressed on the free streaming scale, whereas in strongly self-interacting models the cut-off is at the Jeans scale. This increase in the amplitude of the fluctuations on small scales has the effect of pushing the cut-off in the power spectrum down to smaller scales by approximately a factor of 1.6. This may allow warm dark matter to better fit the dwarf galaxy observations for masses which are small enough to explain the core structure of dark matter halos, a result which could make warm dark matter a more viable dark matter candidate.
Our CMB results indicate that, like standard warm dark matter, self-interacting warm dark matter is indistinguishable from standard cold dark matter in terms of the CMB fluctuation spectrum. Thus, it is one of the few variants on the standard model which will not be probed by future CMB experiments. Any constraints on this model must therefore come from large-scale and galactic structure considerations. For instance, analysis of high-$`z`$ structure like damped Ly-$`\alpha `$ systems might lead to interesting constraints.
Note that the cross section for scattering of dark matter particles would have to be of the order $`10^{-36}\mathrm{cm}^2`$ in order to change the matter power spectrum significantly. This is orders of magnitude more than the cross sections typical of weak interactions, and at present there are no obvious candidates for such dark matter particles. However, warm dark matter with relatively strong self-interactions could well reside in a mirror sector, in which case there are no real restrictions .
###### Acknowledgements.
SH gratefully acknowledges support from the Carlsberg foundation. RJS was supported by the Department of Energy (DE-FG02-91ER40690). All the numerical calculations have been performed using the publicly available code CMBFAST developed by Seljak and Zaldarriaga .
## A The Boltzmann equation in different asymptotic limits
### 1 Large scattering cross sections
In the limit of very large scattering cross sections, the dark matter is kept in pressure equilibrium until the present. This is the type of evolution assumed in Refs. . In this case the evolution equations read
$`\dot{\mathrm{\Psi }}_0`$ $`=`$ $`-k{\displaystyle \frac{q}{ϵ}}\mathrm{\Psi }_1+{\displaystyle \frac{1}{6}}\dot{h}{\displaystyle \frac{d\mathrm{ln}f_0}{d\mathrm{ln}q}}`$ (A1)
$`\dot{\mathrm{\Psi }}_1`$ $`=`$ $`k{\displaystyle \frac{q}{3ϵ}}\mathrm{\Psi }_0`$ (A2)
$`\mathrm{\Psi }_{l\ge 2}`$ $`=`$ $`0.`$ (A3)
By performing the appropriate momentum integrations this yields
$`\dot{\delta }`$ $`=`$ $`-(1+\omega )\left(\theta +{\displaystyle \frac{\dot{h}}{2}}\right)-3{\displaystyle \frac{\dot{a}}{a}}\left({\displaystyle \frac{\delta P}{\delta \rho }}-\omega \right)\delta `$ (A4)
$`\dot{\theta }`$ $`=`$ $`-{\displaystyle \frac{\dot{a}}{a}}(1-3\omega )\theta -{\displaystyle \frac{\dot{\omega }}{1+\omega }}\theta +{\displaystyle \frac{\delta P/\delta \rho }{1+\omega }}k^2\delta `$ (A5)
These equations are equivalent to Eqs. (13)-(14) in Ref. (when their $`\mathrm{\Gamma }=\mathrm{\Pi }=0`$), which are written in gauge invariant form.
### 2 Large $`k`$ limit
At very small scales one may as a first approximation neglect the metric perturbations. The Boltzmann hierarchy can be truncated by neglecting terms higher than second order (including $`\dot{\sigma }`$), similar to how the Enskog expansion is performed . Then the hierarchy equations when integrated over momentum yield
$`\dot{\delta }`$ $`=`$ $`-{\displaystyle \frac{4}{3}}\theta `$ (A6)
$`\dot{\theta }`$ $`=`$ $`k^2(\delta /4-4\theta \tau /15).`$ (A7)
It is interesting to compare our set of equations with Eqs. (25-26) of Atrio-Barandela and Davidson (AD) . They are almost identical, except for the term proportional to $`H`$ in their equations. For relativistic particles this term should be zero, as it is in the above equations.
The term $`4\theta \tau /15`$ can be interpreted as a shear viscosity term, which can in general be written as $`\eta \theta /\rho `$ . Here $`\eta `$ is the viscosity of the fluid. Using this parametrization we find that
$$\eta =\frac{4}{15}\rho \tau .$$
(A8)
For a relativistic gas with Boltzmann statistics, $`\rho =3Tn`$, so that
$$\eta =\frac{4}{5}Tn\tau .$$
(A9)
This expression for the fluid viscosity agrees with what is found in Ref. (their Eq. (33)). From Eq. (A7), one can see that the perturbations oscillate and are damped at the rate
$$\mathrm{\Gamma }=\frac{2}{15}\tau k^2.$$
(A10)
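Eq. (A10) can also be checked directly by diagonalizing the linear system (A6)-(A7); the sketch below (ours, with illustrative values of $`k`$ and $`\tau `$) confirms that the real part of both eigenvalues equals $`-\mathrm{\Gamma }`$:

```python
import numpy as np

k, tau = 1.0, 0.3
# y = (delta, theta);  ydot = M y  from Eqs. (A6)-(A7)
M = np.array([[0.0, -4.0/3.0],
              [k**2/4.0, -4.0*tau*k**2/15.0]])
print(np.linalg.eigvals(M).real)   # both equal -2*tau*k**2/15
print(-2.0*tau*k**2/15.0)          # -Gamma from Eq. (A10)
```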
# SUPEREXTENSION $n=(2,2)$ OF THE COMPLEX LIOUVILLE EQUATION AND ITS SOLUTION
Talk given at the XIV-th Max Born Symposium, Karpacz, Poland, September 21-25, 1999.
## Introduction
It is well known that a doubly supersymmetric generalization of the geometrical approach to the superstring leads, in the case of the $`N=2`$, $`D=3`$ Green-Schwarz superstring, to a new version of the Liouville equation referred to in the literature as $`n=(1,1)`$ . As one would expect, the latter contains the real Liouville equation in its bosonic part. A problem arises, however, when one tries to extend this result to the case of the $`N=2`$, $`D=4`$ superstring. It turns out that in this case the well-known form of the super-Liouville equation is not suitable, because the complex Liouville equation is absent from its bosonic part. Thus, the equation proposed in cannot be applied to the description of the $`N=2`$, $`D=4`$ Green-Schwarz superstring, which, as is known, reduces to the ordinary complex Liouville equation when all the fermionic component fields are neglected.
In this paper we propose a new version of the $`n=(2,2)`$ super-Liouville equation which turns out to be in agreement with the equations of motion of the $`N=2`$, $`D=4`$ Green-Schwarz superstring. Our approach is based on the method of nonlinear realization of local supersymmetries developed by Ivanov and Kapustnikov in the framework of supergravity . It will be shown that, when applied to the $`n=(2,2)`$ superconformal symmetry, this method makes it possible to impose supercovariant constraints on the superfields in such a way that all the unphysical degrees of freedom occurring in the original equation are removed from the residual set of equations of motion. The latter amounts to the complex Liouville equation for the bosonic worldsheet variable $`\stackrel{~}{u}(\stackrel{~}{\xi }^{++},\stackrel{~}{\xi }^{--})`$ supplemented with two first-order free equations $`\stackrel{~}{\partial }_{--}\lambda ^+(\stackrel{~}{\xi }^{++},\stackrel{~}{\xi }^{--})=\stackrel{~}{\partial }_{++}\lambda ^{-}(\stackrel{~}{\xi }^{++},\stackrel{~}{\xi }^{--})=0`$ for the fermions of opposite chirality.
In Section 3 we present the general solution of this equation in terms of the restricted Lorentz harmonic variables , which extends in a proper fashion the corresponding bosonic string solution obtained in .
## 1 New version of the $`n=(2,2)`$ super-Liouville equation
### 1.1 Linear realization
We begin with the linear realization of two copies of the one-dimensional superconformal group acting separately on the light-cone complex coordinates of the $`N=2`$, $`D=4`$ superstring $`𝐂^{(22)}=(\xi _L^{++}=\xi ^{++}+i\eta ^+\overline{\eta }^+,\eta ^+;\xi _L^{--}=\xi ^{--}+i\eta ^{-}\overline{\eta }^{-},\eta ^{-})`$:
$`\xi _L^{\prime \pm \pm }`$ $`=`$ $`\mathrm{\Lambda }^{\pm \pm }-\overline{\eta }^\pm \overline{D}_\pm \mathrm{\Lambda }^{\pm \pm }`$
$`=`$ $`a_L^{\pm \pm }(\xi _L^{\pm \pm })+2i\eta ^\pm \overline{ϵ}^\pm (\xi _L^{\pm \pm })g^{(\pm \pm )}(\xi _L^{\pm \pm })e^{i\rho ^{(\pm \pm )}(\xi _L^{\pm \pm })},`$
$`\eta ^{\prime \pm }`$ $`=`$ $`-{\displaystyle \frac{i}{2}}\overline{D}_\pm \mathrm{\Lambda }^{\pm \pm }=ϵ^\pm (\xi _L^{\pm \pm })+\eta ^\pm g^{(\pm \pm )}(\xi _L^{\pm \pm })e^{i\rho ^{(\pm \pm )}(\xi _L^{\pm \pm })},`$
$`a_L^{\pm \pm }(\xi _L^{\pm \pm })`$ $`=`$ $`\xi _L^{\pm \pm }+a^{\pm \pm }(\xi _L^{\pm \pm })+iϵ^\pm (\xi _L^{\pm \pm })\overline{ϵ}^\pm (\xi _L^{\pm \pm }),`$
$`g^{(\pm \pm )}`$ $`=`$ $`\sqrt{1+\partial _{\pm \pm }a^{\pm \pm }+i(ϵ^\pm \partial _{\pm \pm }\overline{ϵ}^\pm +\overline{ϵ}^\pm \partial _{\pm \pm }ϵ^\pm )}.`$
In Eq. (1.1) the general superfield (SF)
$$\mathrm{\Lambda }^{\pm \pm }(\xi _L^{\pm \pm },\eta ^\pm ,\overline{\eta }^\pm )=a_L^{\pm \pm }(\xi _L^{\pm \pm })+2i\eta ^\pm \overline{ϵ}^\pm (\xi _L^{\pm \pm })g^{(\pm \pm )}(\xi _L^{\pm \pm })e^{i\rho ^{(\pm \pm )}(\xi _L^{\pm \pm })}$$
(2)
$$+2i\overline{\eta }^\pm ϵ^\pm (\xi _L^{\pm \pm })-2i\eta ^\pm \overline{\eta }^\pm g^{(\pm \pm )}(\xi _L^{\pm \pm })e^{i\rho ^{(\pm \pm )}(\xi _L^{\pm \pm })},$$
is composed of the parameters $`ϵ^+(\xi _L^{++})`$, $`ϵ^{-}(\xi _L^{--})`$ of local supertranslations; two real parameters $`a^{++}(\xi _L^{++})`$, $`a^{--}(\xi _L^{--})`$ of $`D=1`$ reparametrizations; and two real parameters $`\rho ^{++}(\xi _L^{++})`$, $`\rho ^{--}(\xi _L^{--})`$ describing local $`U(1)\times U(1)`$-rotations. The spinor covariant derivatives are defined as
$`D_\pm `$ $`=`$ $`\partial _\pm +2i\overline{\eta }^\pm \partial _{\pm \pm },`$ (3)
$`\overline{D}_\pm `$ $`=`$ $`\overline{\partial }_\pm .`$
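For completeness, here is a short check of the algebra of these derivatives (with $`\partial _\pm `$ and $`\overline{\partial }_\pm `$ the Grassmann derivatives with respect to $`\eta ^\pm `$ and $`\overline{\eta }^\pm `$):

$$D_\pm ^2=\frac{1}{2}\{D_\pm ,D_\pm \}=\frac{1}{2}\{\partial _\pm ,2i\overline{\eta }^\pm \partial _{\pm \pm }\}=0,\overline{D}_\pm ^2=0,$$

$$\{D_\pm ,\overline{D}_\pm \}=\{\partial _\pm +2i\overline{\eta }^\pm \partial _{\pm \pm },\overline{\partial }_\pm \}=2i\partial _{\pm \pm },$$

so the spinor derivatives are nilpotent, a property used below in deriving the constraints (9).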
It is worth mentioning that since the parameters $`\xi _L^{\prime \pm \pm }`$ and $`\eta ^{\prime \pm }`$ in Eqs. (1.1) are subjected to the constraints
$$D_\pm \xi _L^{\prime \pm \pm }-2i\overline{\eta }^{\prime \pm }D_\pm \eta ^{\prime \pm }=0$$
(4)
the flat spinor covariant derivatives (3) transform homogeneously with respect to (1.1)
$$D_\pm =(D_\pm \eta ^{\prime \pm })D_\pm ^{\prime }.$$
(5)
Therefore, the following superconformal-covariant equation can be proposed as a natural candidate for $`n=(2,2)`$ superextension of the corresponding $`n=(1,1)`$ super-Liouville equation
$$D_{-}D_+W=e^{2W}\mathrm{\Psi }_+^{--}\mathrm{\Psi }_{-}^{++}.$$
(6)
In Eq. (6) one double-analytical SF
$$W(\xi _L^{\pm \pm },\eta ^\pm )=u(\xi _L^{\pm \pm })+\eta ^+\psi ^{-}(\xi _L^{\pm \pm })+\eta ^{-}\psi ^+(\xi _L^{\pm \pm })+\eta ^{-}\eta ^+F(\xi _L^{\pm \pm }),$$
(7)
and two general SFs $`\mathrm{\Psi }_+(\xi _L^{++},\eta ^+,\overline{\eta }^+)`$, $`\mathrm{\Psi }_{-}(\xi _L^{--},\eta ^{-},\overline{\eta }^{-})`$, depending separately on the $`(2,0)`$ and $`(0,2)`$ light-cone variables, are introduced. (We omit temporarily the upper indices of the SFs $`\mathrm{\Psi }`$ and $`M`$ to lighten the formulas, but we shall come back to them in Section 3.) Eq. (6) is invariant under the following gauge transformations
$`W^{\prime }(\xi _L^{\prime \pm \pm },\eta ^{\prime \pm })`$ $`=`$ $`W(\xi _L^{\pm \pm },\eta ^\pm )-{\displaystyle \frac{1}{2}}ln(\overline{D}_+\overline{\eta }^{\prime +})-{\displaystyle \frac{1}{2}}ln(\overline{D}_{-}\overline{\eta }^{\prime -}),`$ (8)
$`\mathrm{\Psi }_+^{\prime }(\xi _L^{\prime ++},\eta ^{\prime +},\overline{\eta }^{\prime +})`$ $`=`$ $`(D_+\eta ^{\prime +})^{-1}(\overline{D}_+\overline{\eta }^{\prime +})\mathrm{\Psi }_+(\xi _L^{++},\eta ^+,\overline{\eta }^+),`$
$`\mathrm{\Psi }_{-}^{\prime }(\xi _L^{\prime --},\eta ^{\prime -},\overline{\eta }^{\prime -})`$ $`=`$ $`(D_{-}\eta ^{\prime -})^{-1}(\overline{D}_{-}\overline{\eta }^{\prime -})\mathrm{\Psi }_{-}(\xi _L^{--},\eta ^{-},\overline{\eta }^{-}).`$
Note that due to the nilpotence of the covariant derivatives $`(D_\pm ^2=0)`$ the SFs $`W`$ and $`\mathrm{\Psi }_\pm `$ included in the Eq. (6) appear restricted
$$D_\pm \mathrm{\Psi }_\pm +2(D_\pm W)\mathrm{\Psi }_\pm =0.$$
(9)
A particular property we shall encounter here is, however, that the constraints (9) can be solved explicitly in terms of the unrestricted SFs
$$\mathrm{\Psi }_+=D_+M+2(D_+W)M,\mathrm{\Psi }_{-}=D_{-}N+2(D_{-}W)N.$$
(10)
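One can quickly verify that (10) solves (9): using $`D_\pm ^2=0`$, the graded Leibniz rule and the fact that the odd quantity $`D_+W`$ squares to zero,

$$D_+\mathrm{\Psi }_++2(D_+W)\mathrm{\Psi }_+=-2(D_+W)(D_+M)+2(D_+W)(D_+M)+4(D_+W)^2M=0,$$

and similarly for $`\mathrm{\Psi }_{-}`$.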
In Eq. (10) the general SFs (note that in the case of chiral SFs $`M(\xi _L^{++},\eta ^+),N(\xi _L^{--},\eta ^{-})`$ Eq. (6) reduces to the free equation $`D_+D_{-}\stackrel{~}{W}=0`$ for the SF $`\stackrel{~}{W}=W+\frac{1}{2}ln(MN)`$)
$`M(\xi _L^{++},\eta ^+,\overline{\eta }^+)`$ $`=`$ $`f(\xi _L^{++})+\eta ^+\omega ^{-}(\xi _L^{++})+`$
$`\overline{\eta }^+\overline{\chi }^{-}(\xi _L^{++})+\eta ^+\overline{\eta }^+m^{--}(\xi _L^{++}),`$
$`N(\xi _L^{--},\eta ^{-},\overline{\eta }^{-})`$ $`=`$ $`g(\xi _L^{--})+\eta ^{-}\omega ^+(\xi _L^{--})+`$
$`\overline{\eta }^{-}\overline{\chi }^+(\xi _L^{--})+\eta ^{-}\overline{\eta }^{-}n^{++}(\xi _L^{--}),`$
are supposed to transform as superconformal densities
$`M^{\prime }(\xi _L^{\prime ++},\eta ^{\prime +},\overline{\eta }^{\prime +})`$ $`=`$ $`(\overline{D}_+\overline{\eta }^{\prime +})M(\xi _L^{++},\eta ^+,\overline{\eta }^+),`$ (12)
$`N^{\prime }(\xi _L^{\prime --},\eta ^{\prime -},\overline{\eta }^{\prime -})`$ $`=`$ $`(\overline{D}_{-}\overline{\eta }^{\prime -})N(\xi _L^{--},\eta ^{-},\overline{\eta }^{-}).`$
Although the component content of the SFs $`W,M,N`$, even upon gauge fixing, is still too large to be related to the $`N=2`$, $`D=4`$ superstring, there is a very important feature of Eq. (6): it contains the complex Liouville equation in its bosonic part,
$$\partial _{++}\partial _{--}u(\xi _L^{\pm \pm })=\frac{1}{4}e^{2u(\xi _L^{\pm \pm })}m^{--}(\xi _L^{++})n^{++}(\xi _L^{--})+\mathrm{\dots },$$
(13)
where all the unessential terms on the r.h.s. are omitted. It is clear, however, that to be connected with superstring theory the SFs considered here must be covariantly constrained. In the next Section we are going to show that the desired constraints can be imposed within the nonlinear realization of the $`n=(2,2)`$ superconformal symmetry, in which the original SFs become reducible.
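As an aside, the solvability of this bosonic sector is easy to verify by machine. The following sketch (our own illustration, not part of the original derivation) checks with sympy that the standard Liouville ansatz, written in the gauge where the chiral densities are set to one, satisfies $`\partial _{++}\partial _{--}u=\frac{1}{4}e^{2u}`$; the arbitrary functions $`A`$ and $`B`$ play the role of the chiral data:

```python
import sympy as sp

xp, xm = sp.symbols('xi_pp xi_mm')
A = sp.Function('A')(xp)   # arbitrary chiral function of xi^{++}
B = sp.Function('B')(xm)   # arbitrary chiral function of xi^{--}

# Liouville ansatz with the densities m, n gauged to 1
u = sp.log(2*sp.sqrt(A.diff(xp)*B.diff(xm))/(1 - A*B))
residual = sp.diff(u, xp, xm) - sp.Rational(1, 4)*sp.exp(2*u)
print(sp.simplify(residual))   # prints 0
```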
### 1.2 Nonlinear realization
To see this, let us suppose that the v.e.v.'s of the component fields $`m^{--}(\xi _L^{++})`$ and $`n^{++}(\xi _L^{--})`$ in (1.1) are not equal to zero, so that the local supersymmetry (1.1) is spontaneously broken. In this case the fermionic components $`\chi ^\pm `$ acquire the meaning of the corresponding Goldstone fermions, and one can exploit them to single out the complex Liouville equation from the system (6) in a manifestly covariant manner. Indeed, it is well known that in models with spontaneously broken supersymmetry all the SFs become reducible , . Their irreducible parts transform universally under the action of the original supergroups, as linear representations of the underlying unbroken subgroups, but with parameters depending nonlinearly on the Goldstone fermions. This makes it possible to impose on the SFs in question completely covariant restrictions which remove the undesirable degrees of freedom from the model under consideration. Here we use this opportunity to restrict the SFs entering Eq. (6).
To begin with, let us derive the nonlinear realization of the superconformal symmetry in superspace. Following closely the general method developed in , we first split the general finite element of the group (1.1)
$$G(\zeta _L)=\zeta _L^{\prime },$$
(14)
where $`\zeta _L=\{\xi _L^{\pm \pm },\eta ^\pm \}`$, onto the product of two successive transformations
$$G(\zeta _L)=K(G_0(\zeta _L)).$$
(15)
In Eq. (15) the following standard notations are used. As before, $`G_0(\zeta _L)`$ refers to the "primed" coordinates $`\zeta _L^{\prime }`$, but the index zero means that they now refer only to the stability subgroup
$`\xi _L^{\prime \pm \pm }`$ $`=`$ $`\xi _L^{\pm \pm }+a^{\pm \pm }(\xi _L^{\pm \pm }),`$ (16)
$`\eta ^{\prime \pm }`$ $`=`$ $`\eta ^\pm e^{i\rho ^{(\pm \pm )}(\xi _L^{\pm \pm })}\sqrt{1+\partial _{\pm \pm }a^{\pm \pm }}.`$
The latter includes only the ordinary conformal transformations (parameters $`a^{\pm \pm }(\xi _L^{\pm \pm })`$) supplemented with the local $`U(1)\times U(1)`$-rotations (parameters $`\rho ^{(\pm \pm )}(\xi _L^{\pm \pm })`$). Note that the first multiplier in the decomposition (15) is easily recognized as a representative of the left coset space $`G/G_0`$ (in virtue of (15), all the parameters in (1.1) should be regarded as composite ones, built from the parameters of the transformations (16) and (17))
$`K^{\pm \pm }(\zeta _L)`$ $`=`$ $`\xi _L^{\pm \pm }+iϵ^\pm (\xi _L^{\pm \pm })\overline{ϵ}^\pm (\xi _L^{\pm \pm })`$ (17)
$`+2i\eta ^\pm \overline{ϵ}^\pm (\xi _L^{\pm \pm })\sqrt{1+i(ϵ^\pm \partial _{\pm \pm }\overline{ϵ}^\pm +\overline{ϵ}^\pm \partial _{\pm \pm }ϵ^\pm )},`$
$`K^\pm (\zeta _L)`$ $`=`$ $`ϵ^\pm (\xi _L^{\pm \pm })+\eta ^\pm \sqrt{1+i(ϵ^\pm \partial _{\pm \pm }\overline{ϵ}^\pm +\overline{ϵ}^\pm \partial _{\pm \pm }ϵ^\pm )}.`$
It deserves mentioning that in the decomposition (15) the co-multipliers $`K`$ and $`G_0`$ are chosen in such a way that the irreducibility constraint (4) is satisfied separately for both of them. The prescription for constructing the corresponding nonlinear realization is as follows . Let us identify the local parameters $`ϵ^\pm (\xi _L^{\pm \pm })`$, $`\overline{ϵ}^\pm (\xi _L^{\pm \pm })`$ in (17) with the Goldstone fields $`\lambda ^\pm (\xi _L^{\pm \pm })`$, $`\overline{\lambda }^\pm (\xi _L^{\pm \pm })`$
$`\stackrel{~}{K}^{\pm \pm }(\stackrel{~}{\zeta }_L)`$ $`=`$ $`\stackrel{~}{\xi }_L^{\pm \pm }+i\lambda ^\pm (\stackrel{~}{\xi }_L^{\pm \pm })\overline{\lambda }^\pm (\stackrel{~}{\xi }_L^{\pm \pm })`$ (18)
$`+2i\stackrel{~}{\eta }^\pm \overline{\lambda }^\pm (\stackrel{~}{\xi }_L^{\pm \pm })\sqrt{1+i(\lambda ^\pm \stackrel{~}{\partial }_{\pm \pm }\overline{\lambda }^\pm +\overline{\lambda }^\pm \stackrel{~}{\partial }_{\pm \pm }\lambda ^\pm )},`$
$`\stackrel{~}{K}^\pm (\stackrel{~}{\zeta }_L)`$ $`=`$ $`\lambda ^\pm (\stackrel{~}{\xi }_L^{\pm \pm })+\stackrel{~}{\eta }^\pm \sqrt{1+i(\lambda ^\pm \stackrel{~}{\partial }_{\pm \pm }\overline{\lambda }^\pm +\overline{\lambda }^\pm \stackrel{~}{\partial }_{\pm \pm }\lambda ^\pm )}`$
and take for $`\stackrel{~}{K}(\stackrel{~}{\zeta }_L)`$ the transformation law associated to (15)
$$G(\stackrel{~}{K}(\stackrel{~}{\zeta }_L))=\stackrel{~}{K}^{\prime }(\stackrel{~}{G}_0(\stackrel{~}{\zeta }_L)).$$
(19)
In Eq. (19) the newly introduced coordinates $`\stackrel{~}{\zeta }_L=\{\stackrel{~}{\xi }_L^{\pm \pm },\stackrel{~}{\eta }^\pm \}`$ transform differently from $`\zeta _L=\{\xi _L^{\pm \pm },\eta ^\pm \}`$ in (1.1). Indeed, in accordance with (16) they change only under the vacuum stability subgroup
$`\stackrel{~}{\xi }_L^{\prime \pm \pm }`$ $`=`$ $`\stackrel{~}{\xi }_L^{\pm \pm }+\stackrel{~}{a}^{\pm \pm }(\stackrel{~}{\xi }_L^{\pm \pm }),`$ (20)
$`\stackrel{~}{\eta }^{\prime \pm }`$ $`=`$ $`\stackrel{~}{\eta }^\pm e^{i\stackrel{~}{\rho }^{(\pm \pm )}(\stackrel{~}{\xi }_L^{\pm \pm })}\sqrt{1+\stackrel{~}{\partial }_{\pm \pm }\stackrel{~}{a}^{\pm \pm }},`$
where the parameters $`\stackrel{~}{a}^{\pm \pm }(\stackrel{~}{\xi }_L^{\pm \pm })`$ and $`\stackrel{~}{\rho }^{(\pm \pm )}(\stackrel{~}{\xi }_L^{\pm \pm })`$ turn out to depend nonlinearly on the fields $`\lambda ^\pm (\xi _L^{\pm \pm })`$, $`\overline{\lambda }^\pm (\xi _L^{\pm \pm })`$. Eqs. (19) and (20) determine the transformation properties of the Goldstone fermions $`\lambda ^\pm (\xi _L^{\pm \pm })`$, $`\overline{\lambda }^\pm (\xi _L^{\pm \pm })`$ with respect to the nonlinear realization of the superconformal group $`G`$ in the coset space (18).
## 2 Splitting superspace and irreducible form of SFs
Up to now we have dealt only with a formal prescription for constructing the nonlinear realization of the superconformal group $`G`$, without any relation of this procedure to the original equation (6). Nevertheless, there is a simple way to gain a deeper insight into the model we started with if we compare the two Eqs. (14) and (19). We find that $`\stackrel{~}{K}(\stackrel{~}{\zeta }_L)`$ transforms under $`G`$ in precisely the same manner as the initial coordinates $`\zeta _L`$ of the superspace $`𝐂^{(22)}`$. Thus we have the unique possibility to identify them
$$\zeta _L=\stackrel{~}{K}(\stackrel{~}{\zeta }_L).$$
(21)
Eq. (21) establishes the relationship between the two forms of the realization of superconformal symmetries in superspace. One of the remarkable features of the transformations (21) is that the superspace of the nonlinear realization $`\stackrel{~}{𝐂}^{(22)}=\{\stackrel{~}{\zeta }_L\}`$ turns out to be completely "split", in virtue of the transformations (20), which do not mix the bosonic and fermionic variables. Due to this very important fact the SFs of the nonlinear realization become actually reducible. Indeed, let us perform the change of variables (21) in Eq. (6)
$$\stackrel{~}{D}_{-}\stackrel{~}{D}_+\stackrel{~}{W}=e^{2\stackrel{~}{W}}\stackrel{~}{\mathrm{\Psi }}_+\stackrel{~}{\mathrm{\Psi }}_{-},$$
(22)
where the SFs and covariant derivatives of the nonlinear realization (19), (20) and (21) are introduced
$$W=\stackrel{~}{W}-\frac{1}{2}ln(\overline{\stackrel{~}{D}}_+\overline{\eta }^+)-\frac{1}{2}ln(\overline{\stackrel{~}{D}}_{-}\overline{\eta }^{-}),D_\pm =(\stackrel{~}{D}_\pm \eta ^\pm )^{-1}\stackrel{~}{D}_\pm ,$$
(23)
$$\stackrel{~}{\mathrm{\Psi }}_+=\stackrel{~}{D}_+\stackrel{~}{M}+2(\stackrel{~}{D}_+\stackrel{~}{W})\stackrel{~}{M},\stackrel{~}{\mathrm{\Psi }}_{-}=\stackrel{~}{D}_{-}\stackrel{~}{N}+2(\stackrel{~}{D}_{-}\stackrel{~}{W})\stackrel{~}{N},$$
(24)
$$M(\xi _L^{++},\eta ^+,\overline{\eta }^+)=(\overline{\stackrel{~}{D}}_+\overline{\eta }^+)\stackrel{~}{M}(\stackrel{~}{\xi }_L^{++},\stackrel{~}{\eta }^+,\overline{\stackrel{~}{\eta }}^+),$$
$$N(\xi _L^{--},\eta ^{-},\overline{\eta }^{-})=(\overline{\stackrel{~}{D}}_{-}\overline{\eta }^{-})\stackrel{~}{N}(\stackrel{~}{\xi }_L^{--},\stackrel{~}{\eta }^{-},\overline{\stackrel{~}{\eta }}^{-}).$$
(25)
It should be noted that the covariant derivatives $`\stackrel{~}{D}_\pm `$ in (22) have the same structure as those of the linear realization (3). This follows from the structure of the coset space representatives (17), which are defined in such a way that the irreducibility conditions (4) are fulfilled for them automatically.
Although the form of Eq. (22) is precisely the same as that of the original one (6), the SFs of the nonlinear realization appearing in (22) differ drastically from the SFs of the linear realization. As follows from (20) and (5), the SFs $`\stackrel{~}{W}`$ and $`\stackrel{~}{\mathrm{\Psi }}`$ transform under the action of $`G`$ only with respect to their stability subgroup (20)
$`\stackrel{~}{W}^{\prime }(\stackrel{~}{\xi }_L^{\prime \pm \pm },\stackrel{~}{\eta }^{\prime \pm })`$ $`=`$ $`\stackrel{~}{W}(\stackrel{~}{\xi }_L^{\pm \pm },\stackrel{~}{\eta }^\pm )-{\displaystyle \frac{1}{2}}ln(\overline{\stackrel{~}{D}}_+\overline{\stackrel{~}{\eta }}^{\prime +})-{\displaystyle \frac{1}{2}}ln(\overline{\stackrel{~}{D}}_{-}\overline{\stackrel{~}{\eta }}^{\prime -}),`$ (26)
$`\stackrel{~}{M}^{\prime }(\stackrel{~}{\xi }_L^{\prime ++},\stackrel{~}{\eta }^{\prime +},\overline{\stackrel{~}{\eta }}^{\prime +})`$ $`=`$ $`(\overline{\stackrel{~}{D}}_+\overline{\stackrel{~}{\eta }}^{\prime +})\stackrel{~}{M}(\stackrel{~}{\xi }_L^{++},\stackrel{~}{\eta }^+,\overline{\stackrel{~}{\eta }}^+),`$
$`\stackrel{~}{N}^{\prime }(\stackrel{~}{\xi }_L^{\prime --},\stackrel{~}{\eta }^{\prime -},\overline{\stackrel{~}{\eta }}^{\prime -})`$ $`=`$ $`(\overline{\stackrel{~}{D}}_{-}\overline{\stackrel{~}{\eta }}^{\prime -})\stackrel{~}{N}(\stackrel{~}{\xi }_L^{--},\stackrel{~}{\eta }^{-},\overline{\stackrel{~}{\eta }}^{-}).`$
Substituting here the explicit form of gauge parameters deduced from the transformations (20)
$$\overline{\stackrel{~}{D}}_\pm \overline{\stackrel{~}{\eta }}^{\prime \pm }=e^{i\stackrel{~}{\rho }^{(\pm \pm )}(\stackrel{~}{\xi }_L^{\pm \pm })}\sqrt{1+\stackrel{~}{\partial }_{\pm \pm }\stackrel{~}{a}^{\pm \pm }(\stackrel{~}{\xi }_L^{\pm \pm })},$$
(27)
one concludes that all the component fields of the SFs $`\stackrel{~}{W}`$ and $`\stackrel{~}{M},\stackrel{~}{N}`$ transform independently of each other. Thus we can write down the following manifestly covariant constraints
$$\stackrel{~}{W}(\stackrel{~}{\xi }_L^{\pm \pm },\stackrel{~}{\eta }^\pm )=\stackrel{~}{u}(\stackrel{~}{\xi }_L^{\pm \pm }),$$
(28)
$$\stackrel{~}{M}(\stackrel{~}{\xi }_L^{++},\stackrel{~}{\eta }^+,\overline{\stackrel{~}{\eta }}^+)=\stackrel{~}{\eta }^+\overline{\stackrel{~}{\eta }}^+\stackrel{~}{m}^{--}(\stackrel{~}{\xi }_L^{++}),$$
$$\stackrel{~}{N}(\stackrel{~}{\xi }_L^{--},\stackrel{~}{\eta }^{-},\overline{\stackrel{~}{\eta }}^{-})=\stackrel{~}{\eta }^{-}\overline{\stackrel{~}{\eta }}^{-}\stackrel{~}{n}^{++}(\stackrel{~}{\xi }_L^{--}).$$
(29)
which leave intact the $`G`$-invariance of the theory. Inserting these constraints back into Eq. (22) we obtain the final component form of Eq. (6)
$$\stackrel{~}{\partial }_{--}\stackrel{~}{\partial }_{++}\stackrel{~}{u}=\frac{1}{4}e^{2\stackrel{~}{u}}\stackrel{~}{m}^{--}(\stackrel{~}{\xi }_L^{++})\stackrel{~}{n}^{++}(\stackrel{~}{\xi }_L^{--}).$$
(30)
This equation, together with the chirality conditions for the Goldstone fermions $`\lambda ^\pm (\xi _L^{\pm \pm })`$, $`\overline{\lambda }^\pm (\xi _L^{\pm \pm })`$, gives the whole system of equations describing the dynamics of the $`N=2`$, $`D=4`$ superstring at the component level .
## 3 General solution
Let us briefly consider the problem of constructing the general solution of Eq. (6). It is well known that the Virasoro constraints, which significantly simplify the string equations of motion, can generally be solved in terms of two copies (left- and right-moving) of the Lorentz harmonic variables parameterizing the compact coset spaces isomorphic to the $`(D-2)`$-dimensional sphere
$$S_{D-2}=\frac{SO(1,D-1)}{SO(1,1)\times SO(D-2)\times K_{D-2}}$$
(31)
Moreover, it was shown in that from these variables particular Lorentz covariant combinations can be formed which resolve in general the corresponding nonlinear $`\sigma `$-model equations of motion inspired by bosonic strings in the geometrical approach . By construction, the number of two copies of chiral variables parameterizing the coset space (31) is exactly enough to recover the $`2(D-2)`$ physical degrees of freedom of the $`D`$-dimensional bosonic string. But in the case of superstrings these variables, promoted to world-sheet superfields, must be properly restricted to provide the necessary balance between bosonic and fermionic degrees of freedom, $`(D-2)_B=(D-2)_F`$.
In this Section we shall show that the suitable constraints can be achieved within the method of the nonlinear realization of superconformal symmetry developed in Section 1.2.
Proceeding from , one can check that the general solution of the Liouville Eq. (13) can be written in the form
$`e^{2\stackrel{~}{u}(\stackrel{~}{\xi }_L^{\pm \pm })}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\stackrel{~}{r}_m^{++}(\stackrel{~}{\xi }_L^{--})\stackrel{~}{l}^{m--}(\stackrel{~}{\xi }_L^{++}),`$ (32)
$`\stackrel{~}{m}_{++}^{--}(\stackrel{~}{\xi }_L^{++})`$ $`=`$ $`\stackrel{~}{l}_m^{--}(\stackrel{~}{\xi }_L^{++})\stackrel{~}{\partial }_{++}\stackrel{~}{l}^m(\stackrel{~}{\xi }_L^{++}),`$
$`\stackrel{~}{n}_{--}^{++}(\stackrel{~}{\xi }_L^{--})`$ $`=`$ $`\stackrel{~}{r}_m^{++}(\stackrel{~}{\xi }_L^{--})\stackrel{~}{\partial }_{--}\stackrel{~}{r}^m(\stackrel{~}{\xi }_L^{--}),`$
where the left(right)-moving Lorentz harmonics are normalized as follows
$$\stackrel{~}{l}_m^{++}\stackrel{~}{l}^{m++}=0,\stackrel{~}{l}_m^{--}\stackrel{~}{l}^{m--}=0,\stackrel{~}{l}_m\stackrel{~}{l}^{m\pm \pm }=0,$$
(33)
$$\stackrel{~}{l}_m^{--}\stackrel{~}{l}^{m++}=2,\stackrel{~}{l}_m\stackrel{~}{l}^m=-1.$$
(34)
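An explicit frame may make these normalizations concrete. The sketch below (ours) checks (33)-(34) for one point of the coset (31) at $`D=4`$, assuming the mostly-minus signature $`\eta _{mn}=diag(+,-,-,-)`$ (so that the spacelike harmonic squares to $`-1`$):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # mostly-minus Minkowski metric
dot = lambda a, b: a @ g @ b

l_pp = np.array([1.0, 0.0, 0.0,  1.0])    # light-like  l^{++}
l_mm = np.array([1.0, 0.0, 0.0, -1.0])    # light-like  l^{--}
l_0  = np.array([0.0, 1.0, 0.0,  0.0])    # spacelike   l

assert dot(l_pp, l_pp) == 0 and dot(l_mm, l_mm) == 0
assert dot(l_0, l_pp) == 0 and dot(l_0, l_mm) == 0
assert dot(l_mm, l_pp) == 2 and dot(l_0, l_0) == -1
```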
Substituting these solutions into the Eqs. (29) and taking account of the expressions (25) one finds
$$M\equiv M^{--}(\xi _L^{++},\eta ^+,\overline{\eta }^+)=(\overline{\stackrel{~}{D}}_+\overline{\eta }^+)\stackrel{~}{\eta }^+\overline{\stackrel{~}{\eta }}^+\stackrel{~}{l}_m^{--}(\stackrel{~}{\xi }_L^{++})\stackrel{~}{\partial }_{++}\stackrel{~}{l}^m(\stackrel{~}{\xi }_L^{++}),$$
(35)
$$N\equiv M^{++}(\xi _L^{--},\eta ^{-},\overline{\eta }^{-})=(\overline{\stackrel{~}{D}}_{-}\overline{\eta }^{-})\stackrel{~}{\eta }^{-}\overline{\stackrel{~}{\eta }}^{-}\stackrel{~}{r}_m^{++}(\stackrel{~}{\xi }_L^{--})\stackrel{~}{\partial }_{--}\stackrel{~}{r}^m(\stackrel{~}{\xi }_L^{--})$$
(36)
Now, comparing these SFs with the explicit form of the general solution of Eq. (6)
$`e^{2W(\xi _L^{\pm \pm },\eta ^\pm )}`$ $`=`$ $`{\displaystyle \frac{1}{2}}r_m^{++}(\xi _L^{--},\eta ^{-})l^{m--}(\xi _L^{++},\eta ^+),`$ (37)
$`\mathrm{\Psi }_+^{--}(\xi _L^{++},\eta ^+,\overline{\eta }^+)`$ $`=`$ $`{\displaystyle \frac{1}{2i}}l_m^{--}(\xi _L^{++},\eta ^+)D_+l^m(\xi _L^{++},\eta ^+),`$
$`\mathrm{\Psi }_{-}^{++}(\xi _L^{--},\eta ^{-},\overline{\eta }^{-})`$ $`=`$ $`{\displaystyle \frac{1}{2i}}r_m^{++}(\xi _L^{--},\eta ^{-})D_{-}r^m(\xi _L^{--},\eta ^{-}),`$
one can derive the following expressions for the corresponding SFs of the linear realization
$$M^{\pm \pm }=\mathrm{\Phi }_\pm ^\pm \mathrm{\Omega }^{\mp }\mathrm{\Psi }_{\mp }^{\pm \pm },\overline{D}_\pm \mathrm{\Phi }_\pm ^\pm =0,D_\pm \mathrm{\Omega }^\pm =1,$$
(38)
$$\mathrm{\Phi }_\pm ^\pm =\overline{\stackrel{~}{D}}_\pm \overline{\eta }^\pm ,\mathrm{\Omega }^\pm =(\stackrel{~}{D}_\pm \eta ^\pm )\stackrel{~}{\eta }^\pm ,$$
(39)
$`l_m^{\pm \pm ,0}(\xi _L^{++},\eta ^+)`$ $`=`$ $`\stackrel{~}{l}_m^{\pm \pm ,0}(\stackrel{~}{\xi }_L^{++}),`$ (40)
$`r_m^{\pm \pm ,0}(\xi _L^{--},\eta ^{-})`$ $`=`$ $`\stackrel{~}{r}_m^{\pm \pm ,0}(\stackrel{~}{\xi }_L^{--}).`$
## 4 Conclusion
Thus, we have established that the $`n=(2,2)`$ generalization of the complex Liouville equation appropriate to the $`N=2,D=4`$ superstring is given by Eq. (6), in which the auxiliary SFs $`\mathrm{\Psi }`$ are subjected to the constraints (10) and (38). The general solution of this equation can then be given in terms of the Lorentz harmonics (37), which in their turn are restricted by the conditions (40). Note that in its own right this fact means that Eq. (6) proves to be exactly solvable, as the corresponding bosonic string equation is, but unlike the bosonic case the corresponding harmonic SFs become essentially restricted by the constraints (40), which provide the supersymmetric balance between bosonic and fermionic degrees of freedom. (Obviously, the same property of the $`n=(1,1)`$ super-Liouville equation can be derived from this one with the help of dimensional reduction from (37) and (40).) It is worth mentioning that the first constraint in Eqs. (38) implies that the SFs $`M`$ are actually nilpotent, $`M^2=0`$. From the theory of spontaneously broken supersymmetries we know that this type of constraint leads directly to nonlinear realizations of the underlying symmetries , , in the framework of which the constraints can be solved explicitly in terms of the corresponding Goldstone (super)fields. In the case under consideration we found the suitable manifestly supercovariant solution (39) in terms of the Goldstone fermions of the nonlinear realization of the $`n=(2,2)`$ superconformal symmetry, $`\lambda ^\pm (\xi _L^{\pm \pm })`$, $`\overline{\lambda }^\pm (\xi _L^{\pm \pm })`$.
We are convinced that this approach actually provides a universal way of deriving the equations of motion, as well as their solutions, for superstrings in the higher-dimensional cases too, i.e. $`D=6,10`$. In particular, the $`N=2,D=6`$ superstring is expected to be described by the nonlinear realization of the $`n=(4,4)`$ supersymmetric WZNW $`\sigma `$-model, in which $`W`$ is replaced by the double-analytical SF $`q^{(1,1)}`$ representing the twisted multiplet in the harmonic $`(4,4)`$ superspace , .
We hope to return to this question in forthcoming publications.
## Acknowledgments
It is a great pleasure for me to express my gratitude to I. Bandos, E. Ivanov, S. Krivonos, A. Pashnev and D. Sorokin for their interest in this work and for valuable discussions.
# RVB description of the low-energy singlets of the spin 1/2 kagomé antiferromagnet
## 1 Introduction
It is well known that the conventional picture of a long-range, ordered, dressed Néel ground state (GS) can collapse for low dimensional frustrated antiferromagnets. The GS of several spin 1/2 strongly frustrated systems has no long range antiferromagnetic order and is separated from the first magnetic ($`S=1`$) excitations by a gap. The first example of such a behavior was given by the zigzag chain at the Majumdar-Ghosh point Majumdar ($`J_2/J_1=1/2`$) in which case the two-fold degenerate GS is a product of singlets built on the strong bonds.
In some cases the consequences of frustration on the structure of the spectrum can be even more dramatic. It is now firmly established by many numerical studies that the singlet-triplet gap of the Heisenberg model on the kagomé lattice is filled with an exponential number of singlet states Waldtmann . This property is actually not specific to the kagomé antiferromagnet (KAF) and could be a generic feature of strongly frustrated magnets: It is suspected to occur also for the Heisenberg model on the pyrochlore lattice Canalspriv , and it has been explicitly proved for a one-dimensional system of coupled tetrahedra which can be seen as a 1D analog of pyrochlore Mambrini .
Since the particular low-temperature dependence of many physical quantities is directly connected to the structure of this non-magnetic part of the spectrum, many recent works Waldtmann ; Zengelser1 ; Singhhuse ; Leungelser ; Zengelser2 ; Sindzingre ; Nakamura ; Lecheminant were devoted to understanding the nature of the disordered GS and low-lying excitations. Unfortunately, it is still hard to come up with a clear picture of the low-energy sector of the KAF. Resonating Valence Bond (RVB) states, whose wave functions are products of pair singlets, seem to be a natural framework to describe this exponential proliferation of singlet states. RVB states were first proposed to describe a disordered spin liquid phase by Fazekas and Anderson Fazekas for the triangular lattice and were reintroduced by Anderson Anderson in the context of high-$`T_c`$ superconductivity.
For the kagomé lattice, the absence of long range correlation may lead one to consider only Short Range RVB states (SRRVB), i.e. first-neighbor coverings of the lattice with dimers. The first difficulty which occurs is that the number of SRRVB states of an $`N`$ site kagomé lattice with periodic boundary conditions is $`2^{1+(N/3)}\simeq 2\times (1.26)^N`$ Elser, whereas the number of singlet states before the first triplet of the KAF scales like $`1.15^N`$ Waldtmann . Of course, this does not necessarily disqualify the SRRVB description, but it raises the question of the selection of the relevant states.
At the mean field level an answer to this question has been given in a recent paper Mila starting from a trimerized version of the KAF (see Fig. 1):
$$\mathcal{H}=J_{\mathrm{\nabla }}\underset{i,j\in \mathrm{\nabla }}{\sum }\vec{S}_i.\vec{S}_j+J_{\mathrm{\Delta }}\underset{i,j\in \mathrm{\Delta }}{\sum }\vec{S}_i.\vec{S}_j,$$
(1)
When considering low-energy excitations one can work in the subspace where the total spin of each strong-bond triangle is $`1/2`$. Since there are two ways to build a spin $`1/2`$ with three spins $`1/2`$, these triangles have two spin $`1/2`$-like degrees of freedom: the total spin $`\vec{\sigma }`$ and the chirality $`\vec{\tau }`$. This representation does not simplify the problem because spin and chirality are coupled in the Hamiltonian, but this is no longer the case in the mean field approximation, and it is possible to solve the mean-field equations exactly Mila . Low-energy states are SRRVB states on the triangular lattice formed by the strong-bond triangles, and their number grows like the number of dimer coverings of an $`N/3`$ site triangular lattice, $`1.15^N`$, as can be shown using standard methods Fisher ; Kasteleyn .
This result was established under the assumptions that $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}`$ is small (trimerized limit) and that quantum fluctuations can be treated at lowest order (mean field approximation). Therefore two questions remain open: What happens beyond the mean field approximation? Can SRRVB states give a good description of the energy spectrum in the isotropic limit?
To answer these questions we have studied the KAF Hamiltonian in the subspace of SRRVB states with no simplifying approximation concerning the non-orthogonality of this basis. In this subspace the complete spectrum is obtained up to 36-site clusters in both the trimerized and the isotropic limits.
The text is organized as follows: In the first part we study the trimerized model and show that mean field predictions are robust with respect to quantum fluctuations. In the trimerized limit, the low-energy spectrum splits into bands in which the average number of dimers lying on one type of bonds is fixed and the size of the lowest band scales as $`1.15^N`$.
Next we present the results obtained in the isotropic limit. Contrary to what was suggested by previous studies Zengelser2, the singlet spectrum obtained with SRRVB states is a continuum. Moreover, the number of states below a given total energy increases exponentially with the size of the system at all energies.
Finally, we compare the results obtained for KAF with the results obtained using the same basis for a non-frustrated antiferromagnet, the Heisenberg model on a square lattice, and we emphasize the ability of SRRVB states to capture the specific low energy physics of frustrated magnets.
Most of the results presented here contrast with the commonly admitted point of view that SRRVB states do not provide a good variational basis for this problem. In fact, SRRVB states lead to specific numerical difficulties due to the fact that they are not orthogonal to each other. A way to get around this difficulty is to neglect overlap between states under a given threshold. However reasonable this approximation may seem, it appears to modify the results significantly. It turns out that this approximation is not necessary to perform exact numerical simulations, even for large systems. In order to clarify this point, some technical details about the method we used to implement symmetries of the problem and achieve the calculations in this non-orthogonal basis are given in an Appendix.
## 2 The trimerized model
As stated in the introduction, the main question about the trimerized model ($`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}<1`$) is whether the mean-field selection mechanism (pairing of strong-bond triangles) of low-lying singlet states is robust when quantum fluctuations are taken into account.
Fully trimerized limit – Ground state. Let us start with the limit $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$. In this limit the system consists of $`N/3`$ independent triangles and the SRRVB GS is obtained by putting one dimer on each of these triangles. Since $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$ this state can be completed to a SRRVB state ($`N/2`$ dimers) by putting the $`N/6`$ remaining dimers on the $`J_{\mathrm{\nabla }}`$ bonds. The energy of such a state is $`-(3/4)J_{\mathrm{\Delta }}(N/3)=-(N/4)J_{\mathrm{\Delta }}`$. In this limit the GS is thus obtained by maximizing the number of dimers on the $`J_{\mathrm{\Delta }}`$ bonds ($`N/3`$).
By a simple counting argument it is easy to see that every SRRVB state contains $`N/6=N_t/4`$ triangles, called defaults, on which none of the bonds is occupied by a dimer ($`N_t=(2N/3)`$ is the number of triangles): a SRRVB state being a set of $`N/2`$ dimers, it leaves $`(2N/3)-(N/2)=(N/6)`$ triangles unoccupied. The number of defaults $`n_{\text{def}}(J_{\mathrm{\Delta }})`$ on the $`J_{\mathrm{\Delta }}`$ bonds can take all the values from $`0`$ to $`N/6`$. In terms of defaults, the GS discussed above is a SRRVB state which minimizes $`n_{\text{def}}(J_{\mathrm{\Delta }})`$.
Let us turn to the question of the degeneracy of this GS and show that the number of dimer coverings of the kagomé lattice with $`n_{\text{def}}(J_{\mathrm{\Delta }})=0`$ is exactly the number of dimer coverings of the $`N/3`$ site triangular lattice formed by the $`\mathrm{\Delta }`$ triangles. To prove this, we have to check that one can associate each GS configuration to a unique dimer covering of the triangular super-lattice and vice versa (see Fig. 3).
Clearly, to each pairing $`A`$ of $`\mathrm{\Delta }`$ triangles one can associate a set of dimers $`(1,2,3)`$ on the kagomé lattice. Doing so, the number of dimers on $`\mathrm{\Delta }`$ triangles is $`N/3`$, which is the maximum, and $`n_{\text{def}}(J_{\mathrm{\Delta }})=0`$. Consider now a SRRVB state with $`n_{\text{def}}(J_{\mathrm{\Delta }})=0`$. Let us show that there exists a unique way to pair the $`\mathrm{\Delta }`$ triangles according to the $`(1,2,3)`$ pattern. Starting from dimer $`1`$ on triangle $`T1`$, the existence of dimer $`2`$ is necessary because the state is SRRVB, and the triangle $`T2`$ contains dimer $`3`$ because there is no default on $`\mathrm{\Delta }`$ triangles by assumption.
For the triangular lattice, the number of coverings increases with the number of sites $`N`$ like $`A\alpha _\text{t}^N`$ with $`\alpha _t=\mathrm{exp}\{\frac{1}{16\pi ^2}_0^{2\pi }_0^{2\pi }\mathrm{ln}(4+4\mathrm{sin}x\mathrm{sin}y+4\mathrm{sin}^2y)𝑑x𝑑y\}\simeq 1.5351`$ and $`A\simeq 2`$ Mila . Thus the number of dimer coverings of the kagomé lattice with $`n_{\text{def}}(J_{\mathrm{\Delta }})=0`$ increases like $`(\alpha _\text{t}^{1/3})^N\simeq 1.1536^N`$.
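The double integral quoted above is straightforward to evaluate numerically; a short sketch (ours):

```python
import numpy as np
from scipy.integrate import dblquad

# Numerical evaluation of the dimer-covering growth rate alpha_t quoted above.
integrand = lambda y, x: np.log(4 + 4*np.sin(x)*np.sin(y) + 4*np.sin(y)**2)
val, err = dblquad(integrand, 0, 2*np.pi, lambda x: 0, lambda x: 2*np.pi)
alpha_t = np.exp(val/(16*np.pi**2))
print(alpha_t, alpha_t**(1/3))   # ~1.5351 and ~1.1536
```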
This degeneracy has been obtained considering only the SRRVB subspace. In the full $`S=0`$ subspace the GS is much more degenerate. The model, when $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$, simply reads:
$$\mathcal{H}=(J_{\mathrm{\Delta }}/2)\underset{i}{\sum }\left\{S_{\mathrm{\Delta }_i}(S_{\mathrm{\Delta }_i}+1)-(9/4)\right\},$$
(2)
where $`S_{\mathrm{\Delta }_i}`$ is the total spin of the triangle $`i`$.
The GS is thus obtained by setting the total spin of each $`\mathrm{\Delta }`$ triangle to $`1/2`$ and coupling all the $`N/3`$ spin $`1/2`$ triangles to a total spin of $`0`$; the degeneracy is $`2^{N/3}(N/3)!/[(N/6)!(1+N/6)!]`$. The combinatorial factor is the size of the singlet sector of $`N/3`$ spins $`1/2`$, and the other factor reflects the fact that on each of the $`\mathrm{\Delta }`$ triangles there are 2 independent ways to build a total spin of $`1/2`$.
Thus asymptotically the full singlet degeneracy increases like $`2^{2N/3}/N^{3/2}\simeq 1.5874^N/N^{3/2}`$. Table 1 summarizes the various degeneracies.
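These counting formulas are easily tabulated; the sketch below (ours) compares the exact degeneracy with the asymptotic form (the asymptotic estimate omits a constant prefactor, so only the growth rates should be compared):

```python
from math import comb

# Full S=0 degeneracy at J_nabla/J_Delta = 0, from the formula quoted above.
def gs_degeneracy(N):
    n = N // 3                                   # number of strong triangles
    return 2**n * (comb(n, n // 2) - comb(n, n // 2 - 1))

for N in (12, 24, 36):
    print(N, gs_degeneracy(N), 1.5874**N / N**1.5)
```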
Fully trimerized limit – Excited states. The situation of the excited states in the SRRVB subspace is different, even when $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$, because SRRVB states with $`n_{\text{def}}(J_{\mathrm{\Delta }})\ne 0`$ are not eigenvectors of $`\mathcal{H}`$. In fact this situation occurs each time a state includes a default on a triangle with non-zero bonds (see Fig. 4).
Nevertheless, let us consider the results obtained for $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$ (Fig. 5). The spectrum splits into bands: the first, of zero width, is the degenerate GS discussed above, and the other bands consist of linear combinations of SRRVB states with fixed $`n_{\text{def}}(J_{\mathrm{\Delta }})`$ (a numerical characterization of the dimer coverings in each band is given below). The center of each of these bands is $`-(3/4)N_{\mathrm{\Delta }}J_{\mathrm{\Delta }}`$ with $`N_{\mathrm{\Delta }}`$ the number of dimers built on $`\mathrm{\Delta }`$ triangles. Since $`N_{\mathrm{\Delta }}=(N/3)-n_{\text{def}}(J_{\mathrm{\Delta }})`$, the energies of the centers of the $`1+(N/6)`$ bands are $`-(N/4)J_{\mathrm{\Delta }}`$, $`((3/4)-(N/4))J_{\mathrm{\Delta }}`$,…,$`-(N/8)J_{\mathrm{\Delta }}`$.
Strong trimerization ($`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}\ll 1`$). When it is switched on, $`J_{\mathrm{\nabla }}`$ acts as a perturbation on the previous spectrum: bands with $`n_{\text{def}}(J_{\mathrm{\Delta }})\ne 0`$ begin to broaden and to mix. In contrast, because it is degenerate when $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$, the lowest band is expected to mix with the other bands only for larger values of $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}`$. Let us test this scenario on numerical results for a weak trimerization of the lattice ($`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0.1`$) on a 36 site cluster. Figure 6 shows the spectrum (number of states below a given energy per site) and the density of states (DOS). The DOS exhibits a band structure and, as expected, even for such a small value of $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}`$, the gaps between bands are nearly closed. Nevertheless, a very narrow band of states remains very clearly separated from the others.
The existence of this low energy band, split from the rest of the SRRVB spectrum, indicates that for small values of $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}`$ the selection criterion for dimer covering configurations is the same as for $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$: the states in the low energy part of the spectrum minimize $`n_{\text{def}}(J_{\mathrm{\Delta }})`$. In order to test this scenario more precisely, let us characterize numerically the scaling of the bands and verify that $`n_{\text{def}}(J_{\mathrm{\Delta }})`$ is fixed in each band.
We performed a finite size analysis including all kagomé clusters with an even number of sites up to 36 sites (12, 18, 24, 30, 36). We denote by $`𝒩_N(\mathrm{\Delta })`$ the number of states on an $`N`$-site cluster with a total energy smaller than $`\mathrm{\Delta }`$. For all $`\mathrm{\Delta }`$, the analysis shows that $`𝒩_N(\mathrm{\Delta })`$ grows exponentially with $`N`$:
$$𝒩_N(\mathrm{\Delta })=A(\mathrm{\Delta })\alpha (\mathrm{\Delta })^N.$$
(3)
In the large $`\mathrm{\Delta }`$ limit, since all the states have an energy smaller than $`\mathrm{\Delta }`$, the values of $`A`$ and $`\alpha `$ are known to be respectively $`2`$ and $`2^{1/3}`$. Between each band of the spectrum, no state comes to increase $`𝒩`$ when $`\mathrm{\Delta }`$ increases and therefore plateaus appear in $`\alpha `$ (see Fig. 7). The first plateau corresponds to $`\alpha \simeq 1.15`$, a numerical confirmation of what was announced at the beginning of the section.
Let us turn now to the question of the nature of the states in each band. We denote by $`\widehat{N}_{\mathrm{\nabla }}`$ and $`\widehat{N}_{\mathrm{\Delta }}`$ the operators that count, for a SRRVB state, the number of dimers lying on $`J_{\mathrm{\nabla }}`$ and $`J_{\mathrm{\Delta }}`$ bonds respectively. Since on an $`N`$ site cluster each SRRVB state is made of $`N/2`$ dimers, we have $`\widehat{N}_{\mathrm{\nabla }}+\widehat{N}_{\mathrm{\Delta }}=N/2`$. Fig. 8 shows the values of $`\widehat{N}_{\mathrm{\nabla }}`$ and $`\widehat{N}_{\mathrm{\Delta }}`$ for each eigenstate of a 36 site cluster from the GS to the most excited state. The results are quite clear: each band of the spectrum is characterized by a fixed value of $`\widehat{N}_{\mathrm{\Delta }}`$ (or $`\widehat{N}_{\mathrm{\nabla }}`$), which is equivalent to fixing $`n_{\text{def}}(J_{\mathrm{\Delta }})=(N/3)-\widehat{N}_{\mathrm{\Delta }}`$.
SRRVB spectrum versus Exact spectrum. SRRVB states on the trimerized kagomé lattice spontaneously select a small set of wave functions (see Table 1) among those which minimize the energy for $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0`$. Moreover, the number of these states scales as the number of singlets in the singlet-triplet gap of the KAF in the isotropic limit. If this selection is actually relevant, one should be able to identify a similar selection in the exact spectrum, at least for a strong trimerization.
To test this point we compare the exact and SRRVB spectra for $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=0.25`$ (see Table 2, energy per site for the 10 first states). The conclusion of this comparison is quite clear: the SRRVB subspace reproduces the low-energy part of the singlet spectrum, and the structure of the spectrum (order and degeneracy of levels) is also well described.
In conclusion, beyond the mean field approximation, the low energy physics of the trimerized KAF is well captured by SRRVB states: low-lying states are selected on an energy criterion, the maximization of the number of dimers on strong bonds, which is equivalent, for a weak trimerization, to minimizing the number of defaults on strong-bond triangles. These selected states form a band which contains a number of states that increases like $`1.15^N`$, in agreement both with exact results and with the effective Hamiltonian approach.
## 3 The isotropic model
When $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}`$ increases to the isotropic limit one may ask at least two questions: Does the mechanism described above remain valid? Do SRRVB states still provide a good description of the singlet sector?
To answer the first question, we have computed, at the isotropic point, the values of $`\widehat{N}_{\mathrm{\nabla }}`$ and $`\widehat{N}_{\mathrm{\Delta }}`$ for all the eigenstates. The behavior of these quantities is very different from the trimerized case: no mechanism tends to favor one type of triangle, and $`\widehat{N}_{\mathrm{\nabla }}=\widehat{N}_{\mathrm{\Delta }}=N/4`$. This means that the simple picture obtained with the trimerized model is no longer valid in the isotropic case. The computation of the spectrum for all even sizes up to 36 sites confirms the qualitative differences between the trimerized and isotropic models (see Fig. 9). The mixing of the bands, which starts for $`J_{\mathrm{\nabla }}<J_{\mathrm{\Delta }}`$, is complete for $`J_{\mathrm{\nabla }}=J_{\mathrm{\Delta }}`$: the band structure has completely disappeared and the spectrum is a continuum.
It is important to emphasize that this result contrasts with those obtained for the same model by Zeng and Elser Zengelser2, who concluded that there is a gap inside the singlet spectrum. Their study was based on an expansion using as a small parameter the overlap between SRRVB states: the non-orthogonality between dimer coverings $`|\phi _i\rangle `$ was neglected under a given threshold of $`\langle \phi _i|\phi _j\rangle `$. On the contrary, the results presented here involve no approximation: the non-orthogonality of the basis is fully taken into account (see appendix for details). We suspect that the difference comes from this approximation. As could be expected, our treatment provides a smaller variational value of the GS energy. For a 36 site cluster $`E_{\text{GS}}/J=-0.42182`$, which is $`3\%`$ above the exact one (the number of SRRVB states is $`1.71\times 10^{-5}`$ of the total singlet subspace).
The strongest indication that SRRVB states give a correct description of the low-lying singlets of the KAF is indeed the continuum structure of the spectrum. Moreover, the shape of the spectrum is very similar to the one obtained by exact diagonalization (ED) Waldtmann . In order to test this point more precisely, we have again computed $`\alpha (\mathrm{\Delta })`$ at the isotropic point (see Fig. 10). Plateaus no longer appear in $`\alpha (\mathrm{\Delta })`$, which confirms the complete mixing of the bands. More interestingly, this analysis shows that, for all $`\mathrm{\Delta }`$, the number of SRRVB excitations increases exponentially with the size of the systems. This proves that SRRVB states not only reproduce the continuum nature of the spectrum but give a good description of the exponential proliferation of singlets states in the low energy sector of the KAF.
Since the SRRVB subspace cannot give information about magnetic excitations, the question of the counting of states below the first triplet is rather delicate. To discuss this point, one has to take the exact singlet-triplet gap value to make the counting in the variational SRRVB spectrum. Doing so, one has to keep in mind that even if the SRRVB spectrum gives a good description of the structure of the low-lying singlets (order of levels, degeneracy, exponential proliferation), the energy scale of the excitations above the GS might be slightly different from the exact one: SRRVB states are not the exact eigenstates of the Hamiltonian, which are more probably dressed SRRVB states including fluctuations that modify the energy scale. But given the relative accuracy of the GS, this point should not prevent us from making a semi-quantitative comparison between exact and SRRVB results.
In fact, for a 12 site cluster we have checked that the low energy structure of the excitation spectrum is correct for $`J_{\mathrm{\nabla }}/J_{\mathrm{\Delta }}=1`$ up to the first triplet state (see Table 3).
For a more general quantitative discussion of the proliferation of low-energy singlets, let us analyze the shape of $`\alpha (\mathrm{\Delta })`$ (see Fig. 10) obtained from the SRRVB spectra of 12, 18, 24, 30 and 36 site clusters. The range of the exact singlet-triplet gap extends from $`0.38J`$ for the 12 site cluster to $`0.17J`$ for 36 sites Waldtmann, which corresponds to the circled region and the inset of Fig. 10. It is remarkable that in this energy range the value of $`\alpha `$ for the SRRVB spectra goes from $`1.18`$ to $`1.15`$, which is in good agreement with ED scalings.
## 4 Discussion
At this point, it is fair to ask whether the continuum structure of the spectrum obtained with SRRVB states is really a specific feature of frustration captured by this basis or simply a generic characteristic of the spectra that such states would provide on any lattice. To answer this important question let us compare the results on the kagomé lattice with the SRRVB spectrum for a non-frustrated model, the Heisenberg model on the square lattice (see Fig. 11).
The structure of the SRRVB excitations on the square lattice is qualitatively different from the structure obtained for the kagomé lattice: in particular there is a gap between the singlet GS and the first excitation. Even if it seems difficult to extract a precise value for this gap (see Fig. 12), the finite size analysis strongly suggests that it remains finite in the thermodynamic limit. Of course, this does not describe the actual singlet spectrum of the square lattice, which is gapless due to two-magnon excitations. But it shows that the structure of the RVB spectrum is specific to very frustrated lattices.
In conclusion, SRRVB states on the kagomé lattice make it possible to capture the specific low energy properties of the model in both the trimerized and the isotropic limits. In the trimerized model they give a simple picture of the non-magnetic excitations and a selection criterion for the low-energy states, which are built by minimizing the number of defaults on strong-bond triangles. The number of such states increases like $`1.15^N`$. The states matching this criterion can also be seen as short-range dimer coverings of the triangular lattice formed by the strong-bond triangles, which confirms, beyond the mean field approximation, the relevance of the effective model approach. At the isotropic point, SRRVB states lead to a continuum of non-magnetic excitations, in accordance with ED results. Moreover, the shape of the SRRVB spectrum is very similar to the exact one, and the number of low-lying singlets increases exponentially with the size of the system at all energies.
Finally, these properties of the SRRVB spectrum are not just a property of this kind of states, since the SRRVB spectrum has a gap in the case of the square lattice. One may therefore conjecture that SRRVB states provide a good description of the low-energy singlet sector of very frustrated magnets only. Work is in progress to test this idea on the pyrochlore lattice.
Acknowledgments: We acknowledge useful discussions with C. Lhuillier, B. Douçot and P. Simon. We are especially grateful to P. Sindzingre for making available unpublished results of exact diagonalization on the kagomé lattice.
## 5 Appendix: numerical method
Working with SRRVB states as a truncated basis leads to non-trivial numerical difficulties which, as paradoxical as it may seem, make the problem of the determination of the spectrum more tricky in this truncated subspace than in the full space of spin configurations. This is a consequence of the non-orthogonality of RVB states.
If $`|\phi _i\rangle `$ and $`|\phi _j\rangle `$ are two SRRVB states, the overlap is given by Sutherland
$$\langle \phi _i|\phi _j\rangle =s(\phi _i,\phi _j)2^{n_b(\phi _i,\phi _j)-N/2},$$
(4)
where $`n_b(\phi _i,\phi _j)`$ is the number of closed loops in the diagram where the two states are superimposed and $`s(\phi _i,\phi _j)=(-1)^p`$, where $`p`$ is the number of misoriented dimers compared with the reference orientation (see Fig. 13).
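The loop counting entering Eq. (4) is simple to implement. The sketch below (our illustration; the sign factor $`s(\phi _i,\phi _j)`$ is omitted) computes $`|\langle \phi _i|\phi _j\rangle |`$ for two coverings given as site-to-partner maps:

```python
def overlap_magnitude(match1, match2):
    """|<phi_1|phi_2>| = 2**(n_b - N/2); each covering is a dict mapping
    every site to its dimer partner."""
    n_sites = len(match1)
    visited, n_loops = set(), 0
    for start in match1:
        if start in visited:
            continue
        n_loops += 1                    # each new start opens one closed loop
        site, use_first = start, True
        while site not in visited:
            visited.add(site)
            site = match1[site] if use_first else match2[site]
            use_first = not use_first   # alternate between the two coverings
    return 2.0**(n_loops - n_sites/2)

# Two coverings of a 4-site plaquette whose superposition is a single 4-loop:
phi1 = {0: 1, 1: 0, 2: 3, 3: 2}
phi2 = {0: 3, 3: 0, 1: 2, 2: 1}
print(overlap_magnitude(phi1, phi2))    # 2**(1 - 2) = 0.5
```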
In the case of a non-orthogonal basis, the eigenvalues are solutions of the so-called generalized eigenvalue problem,
$$det(\langle \phi _i|\mathcal{H}|\phi _j\rangle -E\langle \phi _i|\phi _j\rangle )=0,$$
(5)
in which the overlap between states appears explicitly. Since we are interested in the structure of the spectrum, we need to diagonalize completely the Hamiltonian and therefore iterative techniques (typically Lanczos) must be avoided. On the other hand, solving (5) with standard routines, one is limited to small systems.
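For small clusters, Eq. (5) can indeed be handed to a standard generalized-eigenvalue routine; a toy sketch (ours, with purely illustrative matrix entries):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2x2 instance of Eq. (5): H[i,j] = <phi_i|H|phi_j>, S[i,j] = <phi_i|phi_j>.
H = np.array([[-1.00, -0.40],
              [-0.40, -0.80]])
S = np.array([[1.00, 0.25],
              [0.25, 1.00]])   # overlap matrix, must be positive definite
E, C = eigh(H, S)              # solves H c = E S c
print(E)
```

Note that if the SRRVB set were linearly dependent, the overlap matrix would be singular and the corresponding null-space combinations would have to be projected out before calling the solver.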
To achieve a complete diagonalization for large systems (typically 36 sites) it is crucial to take into account all the symmetries of the system in order to break the Hilbert space into smaller pieces. This technique is indeed very standard, but it is usually used in a context where the basis is orthogonal (e.g. spin configurations), which makes it quite convenient. The non-orthogonal case is far less simple and is worth some attention.
The aim of this appendix is to explain how it is possible, starting from a set of configurations that can be non-orthogonal, to build an orthonormal basis of vectors that are eigenstates of all the symmetries of the problem in each symmetry sector. Since this linear algebra problem is planned to be solved numerically one is interested in reducing as much as possible the information to be handled. Therefore one does not work explicitly with this orthonormal basis but with linear combinations of suitably chosen configurations called representatives.
The text is organized as follows: we define the representatives, we show how the number of representatives has to be reduced depending on the symmetry sector, and finally we explain how one can go from the representatives to the orthogonal basis of symmetry eigenvectors.
Representatives. Let us denote by $`N_𝒮`$ the order of the symmetry group of the system and by $`𝒮_i,i=1,\mathrm{\dots },N_𝒮`$ the elements of this group. It is possible to partition the set containing all the configurations into subsets whose configurations are related to each other by a symmetry $`𝒮_i`$. Each of these $`N_p`$ subsets can be represented by a configuration $`|p_i\rangle ,i=1,\mathrm{\dots },N_p`$, called representative, since by construction all the others can be obtained from it by applying symmetries. From a numerical point of view the set of representatives is the minimal information needed.
Reduction of the number of representatives in a given symmetry sector. In this section we will consider a given symmetry sector $`s`$ characterized by a set of characters $`\chi _s(𝒮_1)`$,…,$`\chi _s(𝒮_{N_𝒮})`$. We are going to show that it is not necessary to keep all the representatives to generate a basis in the sector $`s`$.
Let us consider a given linear combination of representatives,
$$|\psi =\underset{i=1}{\overset{N_p}{\sum }}\alpha _i|p_i.$$
(6)
The symmetrized state in sector $`s`$ associated with $`|\psi `$ is given by remark1 ,
$$|\stackrel{~}{\psi }\propto \underset{q=1}{\overset{N_𝒮}{\sum }}\chi _s(𝒮_q)𝒮_q|\psi =\underset{i=1}{\overset{N_p}{\sum }}\underset{p=1}{\overset{N_𝒮}{\sum }}\alpha _i\chi _s(𝒮_p)|s_p(p_i),$$
(7)
where $`|s_p(p_i)`$ stands for the image of the configuration $`p_i`$ under the symmetry $`𝒮_p`$. For a given representative $`|p_i`$, let us denote by $`E_i`$ the set of indices $`q`$ of the symmetries that leave the configuration $`p_i`$ invariant ($`s_q(p_i)=p_i`$), and by $`\overline{E}_i`$ the set of remaining indices. With this notation $`|\stackrel{~}{\psi }`$ takes the form,
$`|\stackrel{~}{\psi }`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{N_p}{\sum }}}\alpha _i\left({\displaystyle \underset{q\in E_i}{\sum }}\chi _s(𝒮_q)\right)|p_i`$ (8)
$`+{\displaystyle \underset{i=1}{\overset{N_p}{\sum }}}{\displaystyle \underset{q\in \overline{E}_i}{\sum }}\alpha _i\chi _s(𝒮_q)|s_q(p_i).`$
Let us denote by $`𝒬_s`$ the list of indices $`i`$ of the representatives $`p_i`$ such that $`\underset{q\in E_i}{\sum }\chi _s(𝒮_q)=0`$ in the symmetry sector $`s`$. Clearly, all the representatives with an index in $`𝒬_s`$ disappear from the first term of eq. (8). What we are going to show is that they also disappear from the second one. Let $`p_i`$ be such that $`\underset{q\in E_i}{\sum }\chi _s(𝒮_q)=0`$ and let $`p\in E_i`$. Using $`𝒮_p|p_i=|p_i`$ and the multiplicativity of the characters, one has $`\underset{q\in \overline{E}_i}{\sum }\chi _s(𝒮_q)𝒮_q|p_i=\chi _s^{}(𝒮_p)\underset{q\in \overline{E}_i}{\sum }\chi _s(𝒮_q𝒮_p)𝒮_q𝒮_p|p_i=\chi _s^{}(𝒮_p)\underset{q\in \overline{E}_i}{\sum }\chi _s(𝒮_q)𝒮_q|p_i`$, where the last equality follows from relabeling the sum, $`𝒮_p`$ leaving $`\overline{E}_i`$ globally invariant. This result does not depend on $`p\in E_i`$, and thus one does not modify it by applying $`[1/\text{Card}(E_i)]\underset{p\in E_i}{\sum }`$ to the previous expression; the vanishing character sum then makes the whole term vanish, which proves what was announced. This leads to a reduction of the number of representatives in the symmetry sector $`s`$, namely $`N_s=N_p-\text{Card}(𝒬_s)`$.
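As an illustration, the partition into orbits and the removal of the representatives in $`𝒬_s`$ can be done in a single pass over the configurations. The sketch below makes assumptions not fixed by the text: the configurations are hashable canonical encodings, `symmetries` is a list of functions acting on them, and `chi[q]` is the character $`\chi _s(𝒮_q)`$ of the chosen sector.

```python
def sector_representatives(configs, symmetries, chi, tol=1e-12):
    """Partition 'configs' into symmetry orbits and keep, for the sector with
    characters chi[q] = chi_s(S_q), only those representatives p_i for which
    sum_{q in E_i} chi_s(S_q) is non-zero (the others belong to Q_s)."""
    seen, reps = set(), []
    for c in configs:
        if c in seen:
            continue                         # already reached from an earlier orbit
        orbit = [g(c) for g in symmetries]   # the images s_q(c), q = 1..N_S
        seen.update(orbit)
        # E_i = indices of the symmetries leaving the representative invariant
        stab_sum = sum(chi[q] for q, image in enumerate(orbit) if image == c)
        if abs(stab_sum) > tol:
            reps.append(c)
    return reps
```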
From non-orthogonal representatives to the orthonormal basis of symmetry eigenvectors. In the general case of non-orthogonal representatives, it is convenient to introduce a mixing matrix $`\mu ^s`$ in order to build the orthonormal basis of symmetry eigenvectors. We will consider from now on linear combinations of mixed representatives,
$$|l=\underset{j=1}{\overset{N_s}{\sum }}\mu _{lj}^s|p_j.$$
(9)
The whole problem is to choose $`\mu ^s`$ such that the symmetrized states $`|\stackrel{~}{l}`$ built from $`|l`$ according to (7) form an orthonormal basis: $`\stackrel{~}{l}|\stackrel{~}{m}=\delta _{lm}`$. Let us write this condition explicitly in the cases of orthogonal and non-orthogonal representatives.
Orthogonal case. In this simple case where $`p_i|p_j=\delta _{ij}`$, the condition $`\stackrel{~}{l}|\stackrel{~}{m}=\delta _{lm}`$ is,
$`\delta _{lm}`$ $`={\displaystyle \underset{i,j=1}{\overset{N_s}{\sum }}}{\displaystyle \underset{q=1}{\overset{N_𝒮}{\sum }}}\mu _{jl}^s\mu _{mi}^s\chi _s(𝒮_q)p_j|𝒮_q|p_i`$
$`={\displaystyle \underset{i,j=1}{\overset{N_s}{\sum }}}\mu _{jl}^s\mu _{mi}^s\left({\displaystyle \underset{q\in E_i}{\sum }}\chi _s(𝒮_q)\right)\delta _{ij}`$
$`={\displaystyle \underset{i,j=1}{\overset{N_s}{\sum }}}\mu _{jl}^s\mu _{mi}^s\text{deg}(p_i)\delta _{ij},`$ (10)
where $`\chi _s(𝒮_p)`$ is the character of the symmetry $`𝒮_p`$ in the sector $`s`$, $`\text{deg}(p_i)`$ the degeneracy of representative $`p_i`$ (i.e. the number of symmetries under which it is invariant remark2 ), $`s_p(p_i)`$ the image of the configuration $`p_i`$ by $`𝒮_p`$, and $`N_s`$ the size of the sector $`s`$.
It is easy to see that
$$\mu _{ij}^s=\delta _{ij}/\sqrt{\text{deg}(p_i)}$$
(11)
fulfills (10).
General case. In the non-orthogonal case, the condition (10) now reads,
$$\delta _{lm}=\underset{i,j=1}{\overset{N_s}{\sum }}\mu _{jl}^s\mu _{mi}^s\stackrel{~}{I}_{ij}$$
(12)
where,
$$\stackrel{~}{I}_{ij}=\underset{p=1}{\overset{N_𝒮}{\sum }}\chi _s(𝒮_p)p_j|s_p(p_i)$$
(13)
Here again, the indices $`i`$ and $`j`$ run from $`1`$ to $`N_s`$ (the size of the sector $`s`$). To determine $`\mu ^s`$ we diagonalize $`\stackrel{~}{I}`$:
$$P^{\dagger }\stackrel{~}{I}P=\text{Diag}(d_1,\mathrm{},d_{N_s})$$
(14)
One can check that,
$$\mu _{ij}^s=\frac{1}{\sqrt{d_i}}P_{ij}$$
(15)
satisfies condition (12). Eigenvalues $`d_i=0`$, which signal linear dependencies among the symmetrized states, must simply be discarded.
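A sketch of this construction, assuming the matrix `I_tilde` of Eq. (13) has been assembled from the overlaps, could read:

```python
import numpy as np

def mixing_matrix(I_tilde, cutoff=1e-10):
    """Eqs. (14)-(15): diagonalize the (Hermitian, positive semi-definite)
    matrix I_tilde of Eq. (13) and rescale its eigenvectors by 1/sqrt(d_i).
    Rows of the returned mu are the mixing coefficients; directions with
    d_i ~ 0 (linear dependencies) are dropped.  One can check that
    mu @ I_tilde @ mu.conj().T is the identity, i.e. condition (12) holds."""
    d, P = np.linalg.eigh(I_tilde)
    keep = d > cutoff * d.max()
    return (P[:, keep] / np.sqrt(d[keep])).conj().T
```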
The basis $`\{|\stackrel{~}{l}\}`$ is orthonormal, and in this new basis the Hamiltonian is block diagonal, each block corresponding to one symmetry sector. Thus, it only remains to diagonalize the Hamiltonian in each of these representations to get the whole spectrum:
$$\stackrel{~}{l}|H|\stackrel{~}{m}=\underset{i,j=1}{\overset{N_s}{\sum }}\underset{p=1}{\overset{N_𝒮}{\sum }}\mu _{jl}^s\mu _{mi}^s\chi _s(𝒮_p)p_j|H|s_p(p_i)$$
(16)
In conclusion, the procedure described above turns the generalized eigenvalue problem for an $`n\times n`$ matrix into two conventional diagonalizations per symmetry sector of matrices of typical size $`(n/N_{sect})\times (n/N_{sect})`$, where $`N_{sect}`$ is the number of sectors. The point we now want to stress is that the treatment described above is exact and does not introduce any approximation. The subspace of RVB states is a truncated subspace in the sense that it is not stable under an application of the Hamiltonian, and only the RVB-restricted Hamiltonian is studied. But the use of symmetries does not act as a new restriction of the Hamiltonian in each representation of the symmetry group. In the new basis the Hamiltonian and the overlap matrix are actually block diagonal, each block corresponding to one representation. Thus the spectrum obtained, as well as the mean values of operators calculated, are the same as those one would obtain by solving the generalized eigenvalue problem by brute force, were it possible.
# Temporal Ordering in Quantum Mechanics
## I Introduction
In quantum mechanics, one typically measures operators at fixed times $`t`$. For example, one can measure the position of a particle at any given time, and obtain a precise result. One could also consider the ”dual” situation in which one tries to measure at what time a particle arrives at a fixed location $`x_a`$. This problem of time-of-arrival allcock has been extensively discussed in the literature muga .
Although the time $`t`$ is a well defined parameter in the Schrödinger equation, Pauli has shown that it cannot correspond to an operator for systems which have an energy bounded from below pauli . Likewise, for general Hamiltonians, there is no operator which corresponds to the time of an event such as the time-of-arrival of a particle to a fixed location aharonov . In addition, if one wishes to operationally measure the time-of-arrival by coupling the system to a clock, then one finds that one cannot measure the time-of-arrival to an accuracy better than $`\mathrm{}/\overline{E}_k`$ where $`\overline{E}_k`$ is the kinetic energy of the particleallcock ; aharonov ; tmeas . The limitation is based on calculations from a wide variety of different measurement models, as well as general considerations, however, there is no known proof of this result.
There have been attempts to circumvent these difficulties rovelli circ muga , usually involving a modified time-of-arrival operator or POVM measurements. Such operators can be measured “impulsively” by interacting with the system at a certain (arbitrary) instant of time. In this manner, one can attempt to measure the time-of-arrival even though the particle has not arrived (and in fact, may never arrive, regardless of what the result of the time-of-arrival measurement yields)tmeas . These procedures are hence conceptually and operationally very different from the case of continuous measurements discussed here.
One can also ask, given two events $`A`$ and $`B`$, whether one can measure which event occurred first. Surprisingly, there does not appear to be any discussion of this in the literature, even though we believe it is a much more primitive and fundamental concept. In this paper, we are interested in whether the well defined classical concepts of temporal ordering have a quantum analogue. In other words, given two quantum mechanical systems, can we measure which system attains a particular state first? Can we decide whether an event occurs in the past or future of another event?
Classically, one can couple the system to a device which is triggered when an event occurs, and records which event happened first. One can consider a similar measurement scheme in quantum mechanics which classically would correspond to a measurement of order of events. One can then ask whether such a quantum measurement scheme is possible.
The fact that there is a limitation to measurements of the time of an event leads one to suspect that the ordering of events may not be an unambiguous concept in quantum mechanics. However, for a single quantum event $`A`$, although one cannot determine the time an event occurred to arbitrary accuracy, it can be argued that one can often measure whether $`A`$ occurred before or after a fixed time $`t_B`$ to any desired precision.
Consider a quantum system initially prepared in a state $`\psi (0)`$ and an event $`A`$ which corresponds to some projection operator $`\mathrm{\Pi }_A`$ acting on this state. For example, we could initially prepare an atom in an excited state, and $`\mathrm{\Pi }_A`$ could represent a projection onto all states where the atom is in its ground state, i.e., the atom has decayed. $`\psi (0)`$ could also represent a particle localized in the region $`x<0`$ and $`\mathrm{\Pi }_A`$ could be a projection onto the positive x-axis. In this case, the event $`A`$ corresponds to the particle arriving at $`x=0`$.
If the state evolves irreversibly to a state for which $`\psi (t)|\mathrm{\Pi }_A|\psi (t)=1`$, then we can easily measure whether the event $`A`$ has occurred at any time $`t`$. We could therefore measure whether a free particle arrives at a given location before or after a classical time $`t_B`$. Of course, for many systems, the system will not irreversibly evolve to the required state. For example, a particle influenced by a potential may cross over the origin many times<sup>1</sup><sup>1</sup>1Here, and throughout this paper, we will sometimes use language which refers to objective facts about a particle’s motion. It should be understood that these descriptions refer to the results of measurements made on these particles. For example, it can be measured that a particle is traveling towards the origin in the case where one can make a weak measurement of position and momentum.. However, for an event such as atomic decay, the probability of the atom being re-excited is relatively small, and one can argue that the event is effectively irreversible.
For the case of a free particle which has been measured to be traveling towards the origin from $`x<0`$ one can argue that if at a later time we measure the projection operator onto the positive axis and find it there, then the particle must have arrived to the origin at some earlier time. This is in some sense a definition, because we know of no way to measure the particle being at the origin without altering its evolution (or being extremely lucky and happening to measure the particle’s location when it is at the origin).
While measuring whether an event happened before or after a fixed time $`t_B`$ may be possible, we will find that for two quantum events, one cannot in general measure whether the time $`t_A`$ of event $`A`$, occurred before or after the time $`t_B`$ of event $`B`$.
In Section II, confining ourselves to a particular example of order of events, we will consider the question of order of arrival in quantum mechanics. Given two particles, can we determine which particle arrived first to the location $`x_a`$. Using a model detector, we find that there is always an inherent inaccuracy in this type of measurement given by $`\mathrm{}/\overline{E}`$ where $`\overline{E}`$ is the typical total energy of the two particles. This seems to suggest that the notion of past and future is not a well defined observable in quantum mechanics.
We will see that this inaccuracy limitation on the measurement of order-of-arrival is weaker than the inaccuracy on measurements of time-of-arrival. If one attempted to measure the order-of-arrival by measuring the time-of-arrival of both particles, then the limitation on the measurement accuracy is much greater, being $`\mathrm{}/min\{E_x,E_y\}`$ where $`E_x`$ and $`E_y`$ are the typical energies of each individual particle.
In the present article we will consider only continuous measurements in which the detector is left “open” for a long duration. One can also formally define an order-of-arrival operator like
$$𝐎=\mathrm{sgn}\left(𝐓_{𝐀,x}-𝐓_{𝐀,y}\right)$$
(1)
where $`T_x`$ and $`T_y`$ are the time-of-arrival operators
$$𝐓_𝐀=\frac{mx_a}{𝐩}-m\frac{1}{\sqrt{𝐩}}𝐱\frac{1}{\sqrt{𝐩}}.$$
(2)
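Although operator measurements are not pursued here, (2) is easy to probe numerically: for a free particle $`i[𝐇,𝐓_𝐀]=-1`$ (taking $`\mathrm{}=1`$), so the expectation value of $`𝐓_𝐀`$ should decrease at unit rate, $`𝐓_𝐀_t=𝐓_𝐀_0-t`$. A minimal sketch in the momentum representation, where $`𝐱=id/dp`$, with grid and wave packet chosen arbitrarily:

```python
import numpy as np

m, x_a = 1.0, 0.0                        # mass; arrival point taken at the origin
p = np.linspace(0.5, 12.0, 4000)         # momentum grid, kept away from p = 0
psi0 = np.exp(-(p - 5.0) ** 2)           # arbitrary Gaussian packet
psi0 = psi0 / np.sqrt(np.trapz(np.abs(psi0) ** 2, p))

def t_arrival(psi):
    # <T_A> = m x_a <1/p> - m <p^{-1/2} x p^{-1/2}>, with x = i d/dp (hbar = 1)
    phi = psi / np.sqrt(p)
    mean = m * x_a * np.trapz(np.abs(psi) ** 2 / p, p) \
           - m * np.trapz(phi.conj() * (1j * np.gradient(phi, p)), p)
    return mean.real

for t in (0.0, 1.0, 2.0):
    psi_t = psi0 * np.exp(-1j * p ** 2 * t / (2 * m))   # free evolution
    print(t, t_arrival(psi_t))           # approximately <T_A>(0) - t: 0, -1, -2
```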
As already noted, if one measures such an operator one is measuring which event occurred first, even though neither event has in fact occurred (and may not occur). The measurement of an operator, and the continuous, ”operational” methods discussed here, are therefore rather different. Furthermore, the time-of-arrival operator cannot be self-adjoint aharonov , and therefore has complex eigenvalues and eigenstates reedsimon . However, it can be modifiedrovelli . We believe that modifying the operator causes several technical as well as fundamental difficulties. For example, it has been shown oppenheim , that the eigenstates of modified time-of-arrival operators such as those in rovelli no longer describe events of arrival at a definite time. We anticipate similar difficulties for the case of the order-of-arrival operator.
In Section III we discuss measurements of coincidence. I.e., can we determine whether both particles arrived at the same time. Such measurements allow us to change the accuracy of the device before each experiment. We find that the measurement fails when the accuracy is made better than $`\mathrm{}/\overline{E}`$.
In Section IV we discuss the relationship between ordering of events and the resolving power of “Heisenberg’s microscope“heisenberg , and argue that in general, one cannot prepare a two particle state which is always coincident to within a time of $`\mathrm{}/\overline{E}`$. In the following we use units such that $`\mathrm{}=1`$.
## II Which first?
Consider two free particles (which we will label as x and y) initially localized to the right of the origin, and traveling to the left. We then ask whether one can measure which particle arrives to the origin first. The Hamiltonian for the system and measuring apparatus is given by
$$𝐇=\frac{𝐏_𝐱^2}{2m_1}+\frac{𝐏_𝐲^2}{2m_2}+𝐇_i$$
(3)
where $`𝐇_i`$ is some interaction Hamiltonian which is used to perform the measurement. One possible choice for an interaction Hamiltonian is
$$𝐇_i=\alpha \delta (𝐱)\theta (-𝐲)$$
(4)
with $`\alpha `$ going to infinity.
If the y-particle arrives before the x-particle, then the x-particle will be reflected back. If the y-particle arrives after the x-particle, then neither particle sees the potential, and both particles will continue traveling past the origin. One can therefore wait a sufficiently long period of time, and measure where the two particles are. If both the x and y particles are found past the origin, then we know that the x-particle arrived first. If the y-particle is found past the origin while the x-particle has been reflected back into the positive x-axis then we know that the y-particle arrived first.
Classically, this method would appear to unambiguously measure which of the two particle arrived first. However, in quantum mechanics, this method fails. From (3) we can see that the problem of measuring which particle arrives first is equivalent to deciding where a single particle traveling in a plane arrives. Two particles localized to the right of the origin is equivalent to a single particle localized in the first quadrant (see Figure 1). The question of which particle arrives first, becomes equivalent to the question of whether the particle crosses the positive x-axis or the positive y-axis.
The equivalence between the two-particle system and the single particle system in higher dimensions can be seen by performing the canonical transformation
$`𝐏_𝐱\to \sqrt{{\displaystyle \frac{m_1}{M}}}𝐏_𝐱,𝐏_𝐲\to \sqrt{{\displaystyle \frac{m_2}{M}}}𝐏_𝐲`$
$`𝐱\to \sqrt{{\displaystyle \frac{M}{m_1}}}𝐱,𝐲\to \sqrt{{\displaystyle \frac{M}{m_2}}}𝐲`$ (5)
and rescaling $`\alpha \to \sqrt{M/m_1}\alpha `$. Our Hamiltonian now looks like that of a single particle of mass $`M`$ scattering off a thin edge in two dimensions. Classically, the event of $`x`$ arriving first corresponds to the case that the particle does not scatter off the edge and travels to quadrant III. The event of $`y`$ arriving first corresponds to scattering off the edge to quadrant IV.
However, quantum mechanically, we find that sometimes the particle is found in the two classically forbidden regions, I and II. If the particle is found in either of these two regions, then we cannot determine which particle arrived first.
The solution for a plane wave which makes an angle $`\theta _o`$ with the x-axis is well known morseandf . If the boundary condition is such that $`\psi (r,\theta )=0`$ on the negative y-axis, then the solution is
$$\psi (r,\theta )=\frac{1}{\sqrt{i\pi }}\left\{e^{-ikr\mathrm{cos}(\theta -\theta _o)}\mathrm{\Phi }[\sqrt{2kr}\mathrm{cos}(\frac{\theta -\theta _o}{2})]-e^{-ikr\mathrm{cos}(\theta +\theta _o)}\mathrm{\Phi }[\sqrt{2kr}\mathrm{sin}(\frac{\theta +\theta _o}{2})]\right\}$$
(6)
where $`\mathrm{\Phi }(z)`$ is the error function.
Asymptotically, this solution looks like
$$\psi \approx \{\begin{array}{cc}e^{-ikr\mathrm{cos}(\theta -\theta _o)}+f(\theta )\frac{e^{ikr}}{\sqrt{r}}\hfill & -\theta _o<\theta <\pi +\theta _o\hfill \\ e^{-ikr\mathrm{cos}(\theta -\theta _o)}-e^{-ikr\mathrm{cos}(\theta +\theta _o)}+f(\theta )\frac{e^{ikr}}{\sqrt{r}}\hfill & -\theta _o>\theta >-\pi /2\hfill \\ f(\theta )\frac{e^{ikr}}{\sqrt{r}}\hfill & \pi +\theta _o<\theta <3\pi /2\hfill \end{array}$$
(7)
where
$$f(\theta )=\sqrt{\frac{i}{8\pi k}}\left[\frac{1}{\mathrm{sin}(\frac{\theta +\theta _o}{2})}+\frac{1}{\mathrm{cos}(\frac{\theta \theta _o}{2})}\right].$$
(8)
The above approximation is not valid when $`\mathrm{cos}(\frac{\theta -\theta _o}{2})`$ or $`\mathrm{sin}(\frac{\theta +\theta _o}{2})`$ is close to zero.
Since we demanded that the particle was initially localized in the first quadrant, the initial wave cannot be an exact plane wave, but we can imagine that it is a plane wave to a good approximation.
We see from the solution above that the particle can be found in the classically forbidden regions of quadrants I and II. For these cases, we cannot determine which particle arrived first. This is due to interference which occurs when the particle is close to the origin (the sharp edge of the potential). The probability of being scattered off the region around the edge in the direction $`\theta `$ is given by $`|f(\theta )|^2`$.
It might be argued that since these particles scattered, they must have scattered off the potential, and therefore they represent experiments in which the y-particle arrived first. However, this would clearly over count the cases where the y-particle arrived first. We could have just as easily have placed our potential on the negative x-axis, in which case, we would over-count the cases where the x-particle arrived first.
In the ”interference region” we cannot have confidence that our measurement worked at all. We should therefore define a ”failure cross section” given by
$`\sigma _f`$ $`=`$ $`{\displaystyle \int _0^{2\pi }}|f(\theta )|^2d\theta `$ (9)
$`=`$ $`{\displaystyle \frac{1}{k\mathrm{cos}(\frac{\theta _o}{2})}}`$
From (9) we can see that the cross section for scattering off the edge is of the order of the particle’s wavelength, multiplied by some angular dependence. Therefore, the measurement can only succeed if the particle arrives at a distance from the origin satisfying
$$\delta x>2/k$$
(10)
We have dropped the angular dependence from (9) – the angular dependence is not of physical importance for measuring which particle came first, as it depends on the details of the potential (boundary conditions) being used. The particular potential we have chosen is not symmetrical in x and y.
From this we can conclude that if the particle arrives to within one wavelength of the origin, then there is a high probability that the measurement will fail.
If we want to relate this two-dimensional scattering problem back to two particles traveling in one dimension, we need to use the relation
$$\delta t\approx \frac{m\delta x}{k}$$
(11)
In other words, our measurement procedure relies on making an inference between time measurements and spatial coordinates. The last two equations then give us
$$\delta t>\frac{1}{E}.$$
(12)
One will not be able to determine which particle arrived first if they arrive within a time $`1/E`$ of each other, where $`E`$ is the total kinetic energy of both particles. Note that Equation (12) is valid for a plane wave with definite momentum $`k`$. For wave functions for which $`dk\ll k`$, one can replace $`E`$ by its expectation value. However, for wave functions which have a large spread in momentum, or which have a number of distinct peaks in $`k`$, then to ensure that the measurement almost always works, one must measure the order of arrival with an accuracy given by
$$\delta t>\frac{1}{\overline{E}}$$
(13)
where $`\overline{E}`$ is the minimum typical total energy <sup>2</sup><sup>2</sup>2For example, one need not be concerned with exponentially small tails in momentum space, since the contribution of this part of the wave function to the probability distribution will be small. If however, $`\psi (E)`$ has two large peaks at $`E_{small}`$ and $`E_{big}`$ spread far apart, then if $`\delta t`$ does not satisfy $`\delta t>1/E_{small}`$ one will get a distorted probability distribution. For a discussion of this, see aharonov . Hence we conclude that if the particles are coincident to within $`1/\overline{E}`$, then the measurement fails.
It is rather interesting that this measurement limitation is less strict than the one obtained if we were to measure the time-of-arrival of each particle individually. This can be seen from the mapping of Eq. 5 since the total energy $`\overline{E}=E_x+E_y`$ where $`E_x`$ and $`E_y`$ are the energies of each individual particle. The limitation on measurements of the time-of-arrival of each particle is given by $`1/E_x`$ and $`1/E_y`$ aharonov . Therefore, if we use time of arrival measurements to determine the order of arrival, the minimal inaccuracy will have to be $`1/min\{E_x,E_y\}`$ which can be considerably worse than $`1/(E_x+E_y)`$ using the method outlined above.
The extreme limit, where one of the particles has a very high energy, is then rather interesting. We have argued in the previous section that for the case of a single event, we can measure with arbitrary accuracy whether the event occurred before or after a certain given time $`t_0`$. Indeed, let us consider the above setup in the special limit $`E_y\gg E_x`$ with $`E_y\to \mathrm{\infty }`$. The diffraction pattern in this case is completely controlled by the $`y`$ particle and $`\delta t>\frac{1}{\overline{E}}\approx \frac{1}{E_y}\to 0`$. Furthermore, for the case of a narrow wave packet with $`dE_y\ll E_y`$, the location $`y`$ of the energetic particle can serve as a good “clock” bennicasher and has a well defined time-of-arrival to $`y=0`$. Hence the initial state of the $`y`$ particle defines (up to $`1/E_y\to 0`$) the time-of-arrival of the y-particle, $`t_0=t_A(y=0)`$. The final state of the “clock” hence determines whether the $`x`$ particle arrived before or after $`t=t_0`$. If $`y_{final}>0`$ we conclude that $`t_A(x=0)<t_0`$ and if $`y_{final}<0`$ that $`t_A(x=0)>t_0`$.
One can create a full clock by considering many heavy ”y” particles, and determining whether the ”x” particle came before or after each one of them. Increasing the number of ”y” particles and having them arrive at regularly spaced intervals would then constitute a measurement of time-of-arrival. We would then expect to recover the limitation of reference aharonov as the density of ”y” particles is increased.
## III Coincidence
In the previous model for measuring which particle arrived first, we found that if the two particles arrived to within $`1/\overline{E}`$ of each other, the measurement did not succeed. The width $`1/\overline{E}`$ was an inherent inaccuracy which could not be overcome. However, in our simple model, we were not able to adjust the accuracy of the measurement.
It is therefore instructive to consider a measurement of “coincidence” alone for which one can quite naturally adjust the accuracy of the experiment. Given two particles traveling towards the origin, we ask whether they arrive within a time $`\delta t_c`$ of each other. If the particles do not arrive coincidently, then we do not concern ourselves with which arrived first. The parameter $`\delta t_c`$ can be adjusted, depending on how accurate we want our coincident “sieve” to be. We will once again find that one cannot decrease $`\delta t_c`$ below $`1/\overline{E}`$ and still have the measurement succeed.
A simple model for a coincidence measuring device can be constructed in a manner similar to (4). Mapping the problem of two particles to a single particle in two dimensions, we could consider an infinite potential strip of length $`2a`$ and infinitesimal thickness, placed at an angle of $`\pi /4`$ to the x and y axes in the first quadrant (see Figure 2). Particles which miss the strip, and travel into the third quadrant, are not coincident, while particles which bounce back off the strip into the first quadrant are measured to be coincident. I.e., if the x-particle is located within a distance $`a`$ of the origin when the y-particle arrives (or vice versa), then we call the state coincident.
Classically, one expects there to be a sharp shadow behind the strip. Quantum mechanically, we once again find an interference region around the strip which scatters particles into the classically forbidden regions of quadrant two and four. The shadow is not sharp, and we are not always certain whether the particles were coincident.
The solution for plane waves scattering off a narrow strip is well known and can be found in many quantum mechanics texts (see for example morseandf , where the scattered wave is written in terms of radial and angular Mathieu functions). However, for our purposes, we will find it convenient to consider a simpler model for measuring coincidence, namely, an infinite circular potential of radius $`a`$, centered at the origin.
$$H_i=\alpha V(r/a)$$
(14)
where $`V(x)`$ is the unit disk, and we take the limit $`\alpha \to \mathrm{\infty }`$.
It is well known that if $`a<1/k`$, then there will not be a well-defined shadow behind the disk. To see this, consider a plane wave coming in from negative x-infinity. It can be expanded in terms of the Bessel function $`J_m(kr)`$ and then written asymptotically ($`r1`$) as a sum of incoming and outgoing circular waves.
$`e^{ikx}`$ $`=`$ $`{\displaystyle \underset{m=0}{\overset{\mathrm{\infty }}{\sum }}}ϵ_mi^mJ_m(kr)\mathrm{cos}m\theta `$ (15)
$`\approx `$ $`\sqrt{{\displaystyle \frac{1}{2\pi ikr}}}\left[e^{ikr}{\displaystyle \underset{m=0}{\overset{\mathrm{\infty }}{\sum }}}ϵ_m\mathrm{cos}m\theta +ie^{-ikr}{\displaystyle \underset{m=0}{\overset{\mathrm{\infty }}{\sum }}}ϵ_m\mathrm{cos}m(\theta -\pi )\right].`$
where $`ϵ_m`$ is the Neumann factor which is equal to 1 for $`m=0`$ and equal to 2 otherwise.
Since it can be shown that
$$\underset{m=0}{\overset{M}{\sum }}ϵ_m\mathrm{cos}m\theta =\frac{\mathrm{sin}(M+\frac{1}{2})\theta }{\mathrm{sin}\frac{1}{2}\theta }$$
(16)
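(The sum in (16) is the Dirichlet kernel; a quick numerical check, with arbitrary values of $`M`$ and $`\theta `$:)

```python
import numpy as np

M, theta = 7, 0.613                       # arbitrary test values
eps = lambda m: 1 if m == 0 else 2        # Neumann factor
lhs = sum(eps(m) * np.cos(m * theta) for m in range(M + 1))
rhs = np.sin((M + 0.5) * theta) / np.sin(theta / 2)
print(lhs, rhs)                           # the two numbers agree
```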
The two infinite sums approach $`2\pi \delta (\theta )`$ and $`2\pi \delta (\theta \pi )`$ respectively, and so the incoming wave comes in from the left, and the outgoing wave goes out to the right. The presence of the potential modifies the wave function and in addition to the plane wave, produces a scattered wave
$$\psi =e^{ikx}+\frac{e^{ikr}}{\sqrt{r}}f(r,\theta )$$
(17)
where
$$\frac{e^{ikr}}{\sqrt{r}}f(r,\theta )=i\underset{m=0}{\overset{\mathrm{\infty }}{\sum }}ϵ_me^{\frac{1}{2}m\pi i+i\delta _m}\mathrm{sin}\delta _mH_m(kr)\mathrm{cos}m\theta ,$$
(18)
$`H_m(kr)`$ are Hankel functions of the first kind and
$$\mathrm{tan}\delta _m=\frac{J_m(ka)}{N_m(ka)}$$
(19)
($`N_m(ka)`$ are Bessel functions of the second kind). For large values of $`r`$, the wave function can be written in a manner similar to (15), except that the outgoing wave is modified by the phase shifts $`\delta _m`$.
$$\psi \approx \frac{1}{\sqrt{2\pi ik}}i\underset{m=0}{\overset{\mathrm{\infty }}{\sum }}ϵ_m\mathrm{cos}m(\theta -\pi )\frac{e^{-ikr}}{\sqrt{r}}+\frac{e^{ikr}}{\sqrt{r}}f(r,\theta ),$$
(20)
where
$$f(r,\theta )\approx \frac{1}{\sqrt{2\pi ik}}\underset{m=0}{\overset{\mathrm{\infty }}{\sum }}ϵ_me^{2i\delta _m(ka)}\mathrm{cos}m\theta $$
(21)
In the limit $`ka\gg m`$ the phase shifts can be written as
$$\delta _m\approx -ka+\frac{\pi }{2}(m+\frac{1}{2}).$$
(22)
In the limit of extremely large $`a`$ (but $`r\gg a`$), the outgoing waves then behave as
$$f(r,\theta )\approx \underset{M\to \mathrm{\infty }}{lim}i\frac{1}{\sqrt{2\pi ik}}e^{-2ika}\frac{\mathrm{sin}(M+\frac{1}{2})(\theta -\pi )}{\mathrm{sin}\frac{1}{2}(\theta -\pi )}$$
(23)
where once again we see that the angular distribution goes as the delta function $`\delta (\theta -\pi )`$. The disk scatters the plane wave directly back, and a sharp shadow is produced. We see therefore that in the limit $`ka\gg 1`$, our measurement of coincidence works.
The differential cross section can in general be written as
$`\sigma `$ $`=`$ $`|f(\theta )|^2`$ (24)
$`=`$ $`{\displaystyle \frac{1}{2\pi k}}\left|{\displaystyle \underset{m=0}{\overset{\mathrm{\infty }}{\sum }}}ϵ_me^{2i\delta _m(ka)}\mathrm{cos}m\theta \right|^2`$
For $`ka\gg 1`$ (but still finite), (24) can be computed using our expression for the phase shifts from (22), and is given by
$$\sigma (\theta )\approx \frac{a}{2}\mathrm{sin}\frac{\theta }{2}+\frac{1}{2\pi k}\mathrm{cot}^2\frac{\theta }{2}\mathrm{sin}^2(ka\theta )$$
(25)
The first term represents the part of the plane wave which is scattered back, while the second term is a forward scattered wave which actually interferes with the plane-wave. The reason it appears in our expression for the scattering cross section is because we have written our wave function as the sum of a plane-wave and a scattered wave, and so part of the scattered wave must interfere with the plane-wave to produce the shadow behind the disk.
For $`ka\ll 1`$, the phase shifts look like
$$\delta _m(ka)\approx -\frac{\pi m}{(m!)^2}\left(\frac{ka}{2}\right)^{2m},m\ne 0$$
(26)
and
$$\mathrm{tan}\delta _0(ka)\approx \frac{\pi }{2\mathrm{ln}ka}$$
(27)
As a result, for $`ka\ll 1`$, $`\delta _0`$ is much greater in magnitude than all the other $`\delta _m`$ and the outgoing solution is almost a pure isotropic s-wave.
For $`ka\ll 1`$ the only contribution to (24) comes from $`\delta _0`$ and the differential cross section becomes
$$\sigma (\theta )\approx \frac{\pi }{2k\mathrm{ln}^2ka}$$
(28)
and is isotropic. In other words, no shadow is formed at all, and particles are scattered into classically forbidden regions. We see therefore that as long as the s-wave is dominant, our measurement fails. The s-wave will cease being dominant when $`\delta _0`$ is of the same order as $`\delta _1`$. As can be seen from (22), $`\delta _1/\delta _0`$ approaches a limiting value of $`1`$ when a sharp shadow is produced. It is only when $`\delta _1/\delta _0\approx 1`$ that the cross-section no longer depends on $`k`$. This is what we require for the probability of our measurement to succeed independently of the energy of the incoming particles. From a plot of $`\delta _1/\delta _0`$ we see that this only occurs when $`ka\gtrsim 1`$ (Figure 5). Our condition for an accurate measurement is therefore that $`a\gtrsim 1/k`$. Since $`\delta t_c\approx am/k`$ we find
$$\delta t_c\gtrsim 1/E$$
(29)
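The $`ka`$ dependence quoted above is easy to reproduce from (19) with standard Bessel routines. In the sketch below (grid and sample points are arbitrary choices) the phase shifts are unwrapped along $`ka`$, since the arctangent only determines $`\delta _m`$ modulo $`\pi `$:

```python
import numpy as np
from scipy.special import jv, yn

def delta(m, ka_grid):
    """Hard-disk phase shift from tan(delta_m) = J_m(ka)/N_m(ka), Eq. (19).
    arctan only fixes delta_m modulo pi, so 2*delta is unwrapped along the
    (dense) ka grid to obtain a continuous branch starting near zero."""
    d = np.arctan(jv(m, ka_grid) / yn(m, ka_grid))
    return 0.5 * np.unwrap(2.0 * d)

ka = np.linspace(1e-3, 50.0, 200001)
d0, d1 = delta(0, ka), delta(1, ka)
for x in (0.01, 0.1, 1.0, 10.0, 50.0):
    i = np.searchsorted(ka, x)
    print(f"ka = {ka[i]:6.2f}   delta_1/delta_0 = {d1[i] / d0[i]:8.5f}")
# The ratio is tiny for ka << 1 (isotropic s-wave, no shadow) and approaches 1
# for ka >> 1, which is the condition a >~ 1/k found in the text.
```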
## IV Coincident States
We have seen that we can only measure coincidence to an accuracy of $`\delta t_c=1/\overline{E}`$. We shall now show that one cannot prepare a two particle system in a state $`\psi _c`$ which always arrives coincidentally within a time less than $`\delta t_c`$. In other words, one cannot prepare a system in a state which arrives coincidentally to greater accuracy than that set by the limitation on coincidence measurements.
Preparing a state $`\psi _c`$ corresponds to preparing a single particle in two dimensions which always arrives inside a region $`\delta r=p\delta t_c/m`$ of the origin. In other words, suppose we were to set up a detector of size $`\delta r`$ at the origin. If a state $`\psi _c`$ exists, then it would always trigger the detector at some later time.
Our definition of coincidence requires that the state $`\psi _c`$ not be a state where one particle arrives at a time $`t>\delta t_c`$ before the other particle. In other words, if instead, we were to perform a measurement on $`\psi _c`$ to determine whether particle x arrived at least $`\delta t_c`$ before particle y, then we must get a negative result for this measurement.
This latter measurement would correspond to the two-dimensional experiment of placing a series of detectors on the positive y-axis, and measuring whether any of them are triggered by $`\psi _c`$. If $`\psi _c`$ is truly a coincident state, then none of the detectors which are placed at a distance greater than $`y=\delta r`$ can be triggered. One could even consider a single detector, placed for example, at $`(0,\delta r)`$, and one would require that $`\psi _c`$ not trigger this detector.
Now consider the following experiment. We have a particle detector which is either placed at the origin, or at $`(0,\delta r)`$ (we are not told which). Then after a sufficient length of time, we observe whether it has been triggered. If we can prepare a coincident state $`\psi _c`$, then it will always trigger the detector when the detector is at the origin, but never trigger the detector when the detector is at $`(0,\delta r)`$. This will allow us to determine whether the detector was placed at the origin, or at $`(0,\delta r)`$. For example, if we use the detectors described in Section III (namely, just a scattering potential), then some of the time, the particle will be scattered, and some of the time it won’t be, and if it is scattered, we can conclude that the potential was centered around the origin rather than around $`(0,\delta r)`$.
However, as we know from Heisenberg’s gedanken microscope experiment, a particle cannot be used to resolve anything smaller than its wavelength. In other words $`\psi _c`$ cannot be used to determine whether the detector is at the origin or at $`(0,\delta r)`$ if $`\delta r<2\pi /k`$. As a result, $`\psi _c`$ cannot be coincident to a region around the origin of radius smaller than $`\delta r`$, or, equivalently, coincident within a time better than $`\delta t_c\approx 1/E`$.
## V Conclusion
The notion that events proceed in a well defined sequence is unquestionable in classical mechanics. Events occur one after the other, and our knowledge concerning the events at one time allows us to predict what will occur at another time. One can unambiguously determine whether events lie in the past or future of other events. Given two events, $`A`$ and $`B`$, one can compute which event occurred first. It may be, that event $`A`$ causes event $`B`$, in which case, event $`A`$ must have preceded event $`B`$.
However, in quantum mechanics the situation is different. We have argued that we cannot measure the order of arrival for two free particles if they arrive within a time of $`1/\overline{E}`$ of each other, where $`\overline{E}`$ is their typical total kinetic energy. If we try to measure whether they arrive within a time $`\delta t_c`$ of each other, then our measurement fails unless we have at least $`\delta t_c>1/\overline{E}`$. Furthermore, we cannot construct a two particle state where both particles arrive to a certain point within a time of $`1/\overline{E}`$ of each other.
Interestingly, this inaccuracy limitation is weaker than what would be obtained if one tried to measure the time-of-arrival of each particle separately.
It may be interesting to consider the situation where we have an event B which must be preceded by an event A. For example, B could be caused by A, or the dynamics could be such that B can only occur when the system is in the state A. One can then attempt to force B to occur as close to the occurrence of event A as possible. A related problem has been studied in connection with the maximum speed of dynamical systems such as quantum computers dynamical , and it was found that one cannot force the system to evolve between states in a time shorter than $`1/\overline{E}`$ (where $`\overline{E}`$ is the average energy), rather than $`1/dE`$ (where $`dE`$ is the uncertainty in the energy). However, since this result concerns only the free evolution of the system between states, it is not clear a priori that it is indeed related to the $`1/\overline{E}`$ restriction found in the present case, where the measurement interaction disturbs the system.
Acknowledgments: J.O. would like to thank Yakir Aharonov and Mark Halpern for valuable discussions. W.G.U. acknowledges the CIAR and NSERC for support during the completion of this work. J.O. also acknowledges NSERC for their support. B.R. acknowledges the support from grant 471/98 of the Israel Science Foundation, established by the Israel Academy of Sciences and Humanities.
# Decoherence and measurement in open quantum systems
## 1 INTRODUCTION
Quantum decoherence and measurement have attracted new interest owing to advances in nanotechnology. Here we present our recent work culminating in a solvable model of a measurement process. It has been argued that interaction with external environment, modeled by a large, macroscopic “bath,” is an essential ingredient of the measurement process. Coupling to the bath is responsible for decoherence which creates a statistical mixture of eigenstates from the initially fully or partially coherent quantum state of the measured system. Another important macroscopic part of a measurement setup is the “pointer” that stores the outcome. It has been conjectured that the bath also plays a crucial role in the selection of those quantum states of the pointer that manifest themselves in classical observations. In this work we present a model in which a multimode pointer retains information on the measurement outcome because of its coupling to the measured system, without the need to couple it also directly to the bath. However, the measured system is assumed to be coupled to the bath. The latter then also affects the dynamics of the pointer because both of them are coupled to the measured system.
Exact results for models of quantum systems interacting with their environment are quite limited. In the framework of an exactly solvable model of a quantum oscillator coupled to a heat bath of oscillators, it has been shown that the reduced density matrix of the system loses its off-diagonal elements in the eigenbasis of the interaction Hamiltonian. Such loss of correlations between the states of a quantum system is a manifestation of decoherence. Recent work on decoherence has explored its effects for rather general cases, for bosonic (oscillator) and spin baths. Applications for various physical systems have been reported. Fermionic heat baths have also been considered.
We note that a true heat bath should cause a microscopic system interacting with it to thermalize. In fact, we are not aware of any adequate general, truly microscopic model of such a process. Instead, the “temperature” is usually introduced phenomenologically via the initial state of the noninteracting bath modes. The function of a measuring device is different from and more complex than that of a heat bath. In particular, it must store and amplify the measurement outcome information. One of the key issues in the description of a measurement process is the interpretation of the transfer of information stored after the system-pointer and system-bath interaction to the macroscopic level. Here we offer a model of the process which corresponds to the first stage of measurement, in which the pointer acquires amplified information by entanglement with the state of the system. Thus we do not claim to resolve the foundation-of-quantum-mechanics issue of how that information is passed on to the classical world, involving the collapse of the wave functions of the system and of each pointer mode. Indeed, it is impossible to fully describe the wave function collapse within the unitary quantum-mechanical description of the three systems involved: the measured system, the pointer, and the bath, the latter being internal in the sense that, in our model, it only interacts with the measured system. One would have to consider an external bath (the rest of the universe) with which all the “internal” systems interact. As far as we know, this problem is not presently solved, and we first sidestep it by assuming separation of time scales (see below). Nevertheless, we later argue that the results of our model provide a useful insight into this aspect of the quantum measurement process.
Let us identify the three quantum systems involved. The first one, that is being measured, $`S`$, is a microscopic system with the Hamiltonian which will be also denoted by $`S`$. Second, the measuring device must have the “bath” or “body” part, $`B`$, containing many modes. The $`k`$th mode will have the Hamiltonian $`B_k`$. We assume that the bath modes are not coupled to each other. The bath part of the measuring device is not observed, i.e., it can be traced over. The last system is the pointer, $`P`$, consisting of many modes (that are not traced over). The pointer amplifies the information obtained in the measurement process and can later pass it on for further amplification or directly to macroscopic (classical) systems. The $`m`$th pointer mode has the Hamiltonians $`P_m`$. It is assumed that expectation values of some quantities in the pointer undergo a large change during the measurement.
It will become evident later that the device modes involved in the measurement process can be rather simple so that one can focus on the evolution of the system $`S`$ and its effect on the pointer $`P`$. However, it is the pointer interaction with the external bath (some external modes, “the rest of the universe”) that is presumed to select those quantum states of $`P`$ that manifest themselves classically. For now, we prefer to avoid the discussion of this matter, by assuming that the added evolution of the pointer due to such external interactions occurs on time scales larger than the measurement time, $`t`$. Similarly, when we state that the internal bath modes can be “traced over,” we really mean that their interactions with the rest of the universe are such that these modes play no role in the later, wave-function-collapse stage of the measurement process.
Moreover, the measuring device probes the state of the system $`S`$ at the initial time, $`t=0`$, rather than its time evolution under $`S`$ alone. Ideally the process of measurement is instantaneous. In practice, it has to be faster than the time scales associated with the dynamics under $`S`$ and the evolution due to the interactions of all the three systems involved, $`S,B,P`$, with the rest of the universe. This can be obtained as the limit of a system in which very strong interactions between $`S`$ and $`B`$, and also between $`S`$ and $`P`$, are switched on at $`t=0`$ and switched off at a later time $`t`$, with the time interval $`t`$ kept small. Of course, at later times the pointer can interact with other, external systems to pass on the result of the measurement. Thus, in our approach, we will assume that the Hamiltonian of the system itself, $`S`$, as well as all the external interactions, can be neglected for the duration of the measurement, $`t`$.
In the next section we introduce our model along the lines outlined above. In Section 3, we carry out exact coherent-state calculations that explicitly show how the system and the pointer evolve into a statistical mixture of direct product states, representing the required framework for quantum measurement. In Section 4, we describe the emergence of decoherence in the continuum limit of an infinite number of modes. We also present a general discussion of adiabatic quantum decoherence. Finally, in the last section we calculate expectation values of some pointer operators that retain information on the outcome of the measurement process, and discuss implications of our results.
## 2 THE MODEL
The total Hamiltonian of the system and the measuring device will be written based on the assumptions presented in the preceding section. Specifically, the internal Hamiltonian of the measured system is ignored (set to a constant which can be zero without loss of generality). This is because the dynamics of measurement is assumed to occur on the time scale $`t`$ much shorter than any internal dynamical evolution of the measured system, in order to probe only the instantaneous system wavefunction in the measurement process. We take
$$H=\underset{k}{\sum }B_k+\underset{m}{\sum }P_m+b\mathrm{\Lambda }\underset{k}{\sum }ℬ_k+p\mathrm{\Lambda }\underset{m}{\sum }𝒫_m.$$
$`(1)`$
Here $`\mathrm{\Lambda }`$ is some Hermitian operator of the system that couples to certain operators of the modes, $`ℬ_k`$ and $`𝒫_m`$. The subscript $`k`$ labels the noninteracting (with each other) modes of the bath, with their Hamiltonians $`B_k`$, whereas $`m`$ labels similar modes of the pointer, with Hamiltonians $`P_m`$. The parameters $`b`$ and $`p`$ are introduced to measure the coupling strength for the bath and pointer modes to the measured system, respectively. They are assumed very large; the ideal measurement process could correspond to $`b,p\to \mathrm{\infty }`$.
We note that the modes of $`P`$ and $`B`$ can be similar. The only difference between the bath and pointer modes is in how they interact with the “rest of the universe” in a later stage of the measurement process: the bath is not observed (traced over), whereas the pointer modes have their wave functions collapsed. Through their entanglement with the measured system in the first stage of the measurement process, treated here, the pointer modes also cause the collapse of the system wavefunction. Thus, we actually took the same coupling operator $`\mathrm{\Lambda }`$ for the bath and pointer. In fact, all the exact calculations reported in this work can be also carried out for different coupling operators $`\mathrm{\Lambda }_B`$ and $`\mathrm{\Lambda }_P`$, for the bath and pointer modes, provided they commute, $`[\mathrm{\Lambda }_B,\mathrm{\Lambda }_P]=0`$, so that they share a common set of eigenfunctions. The final wavefunction of the measured system, after the full measurement, is in this set. Analytical calculation can be even extended to the case when the system Hamiltonian $`S`$ is retained in (1), provided all three operators, $`S,\mathrm{\Lambda }_B,\mathrm{\Lambda }_P`$, commute pairwise. The essential physical ingredients of the model are captured by the simpler choice (1).
We will later specify all the operators in (1) as the modes of the bosonic heat bath of Caldeira-Leggett type. For now, however, let us keep our discussion general. We will assume that the system operator $`\mathrm{\Lambda }`$ has a nondegenerate, discrete spectrum of eigenstates:
$$\mathrm{\Lambda }|\lambda \rangle =\lambda |\lambda \rangle .$$
$`(2)`$
Some additional assumptions on the spectrum of $`\mathrm{\Lambda }`$ and $`S`$ will be encountered later. We also note that the requirement that the coupling parameters $`b`$ and $`p`$ are large may in practice be satisfied by that, at the time of the measurement, the system Hamiltonian $`S`$ corresponds to slow or trivial dynamics.
Initially, at $`t=0`$, the quantum systems $`S,B,P`$ and their modes are not correlated with each other. We assume that $`\rho `$ is the initial density matrix of the measured system. The initial state of each bath and pointer mode will be assumed thermalized, with $`\beta =1/(kT)`$ and the density matrices
$$\theta _k=\frac{e^{-\beta B_k}}{\mathrm{Tr}_k\left(e^{-\beta B_k}\right)},\sigma _m=\frac{e^{-\beta P_m}}{\mathrm{Tr}_m\left(e^{-\beta P_m}\right)},$$
$`(3)`$
respectively. We cannot offer any fundamental physical reason for having the initial bath and pointer mode states thermalized, especially for the pointer. This choice is really made to allow exact solvability, though we could claim that the bath and pointer might be thermalized if they are in contact with the “rest of the universe” for a long time before the measurement.
The density matrix of the full system at time $`t`$ is then
$$R=e^{-iHt/\hbar }\left[\rho \otimes \left(\underset{k}{\bigotimes }\theta _k\right)\otimes \left(\underset{m}{\bigotimes }\sigma _m\right)\right]e^{iHt/\hbar }.$$
$`(4)`$
The bath is not probed and it can be traced over. The resulting reduced density matrix $`r`$ of the combined system $`S+P`$ will be represented by its matrix elements in the eigenbasis of $`\mathrm{\Lambda }`$. These quantities are each an operator in the space of $`P`$:
$$r_{\lambda \lambda ^{}}=\langle \lambda |\mathrm{Tr}_B(R)|\lambda ^{}\rangle .$$
$`(5)`$
We now assume that operators in different spaces and of different modes commute. Note that until now our discussion was quite general. The commutability requirement is trivial for the bosonic and spin bath modes. However, it must be checked carefully if baths with fermionic modes are used. Then one can show that
$$r_{\lambda \lambda ^{}}=\rho _{\lambda \lambda ^{}}\left[\underset{m}{\prod }e^{-it\left(P_m+p\lambda 𝒫_m\right)/\hbar }\sigma _me^{it\left(P_m+p\lambda ^{}𝒫_m\right)/\hbar }\right]\left[\underset{k}{\prod }\mathrm{Tr}_k\left\{e^{-it\left(B_k+b\lambda ℬ_k\right)/\hbar }\theta _ke^{it\left(B_k+b\lambda ^{}ℬ_k\right)/\hbar }\right\}\right],$$
$`(6)`$
where $`\rho _{\lambda \lambda ^{}}=\langle \lambda |\rho |\lambda ^{}\rangle `$. This result involves products of $`P`$-space operators and traces over $`B`$-space operators which are all single-mode. Therefore, analytical calculations are possible for some choices of the Hamiltonian (1).
The role of the product of traces over the modes of the bath in (6) is to induce decoherence which is recognized as essential for the measurement process. At time $`t`$, the absolute value of this product should approach $`\delta _{\lambda \lambda ^{}}`$ in the limit of large $`b`$. Let us now assume that the bath is bosonic. The Hamiltonian of each mode is then $`\mathrm{}\omega _ka_k^{}a_k`$, where for simplicity we shifted the zero of the oscillator energy to the ground state. The coupling operator $`_k`$ is usually selected as $`g_k^{}a_k+g_ka_k^{}`$. For simplicity, though, we will assume that the coefficients $`g_k`$ are real:
$$B_k=\hbar \omega _ka_k^{\dagger }a_k,ℬ_k=g_k\left(a_k+a_k^{\dagger }\right).$$
$`(7)`$
For example, for radiation field in a unit volume, coupled to an atom, the coupling is via a linear combination of the operators $`(a_k+a_k^{\dagger })/\sqrt{\omega _k}`$ and $`i(a_k-a_k^{\dagger })/\sqrt{\omega _k}`$. For a spatial oscillator, these are proportional to position and momentum, respectively. Our calculations can be extended to have an imaginary part of $`g_k`$, which adds interaction with momentum.
## 3 COHERENT STATE CALCULATION
In order to calculate traces in (6), we utilize the coherent-state formalism. The coherent states $`|z\rangle `$ are the eigenstates of the annihilation operator $`a`$ with complex eigenvalues $`z`$. Note that from now on we omit the oscillator index $`k`$ whenever this leads to no confusion. These states are not orthogonal:
$$\langle z_1|z_2\rangle =\mathrm{exp}\left(z_1^{*}z_2-\frac{1}{2}|z_1|^2-\frac{1}{2}|z_2|^2\right).$$
$`(8)`$
They form an over-complete set, and one can show that the identity operator in a single-oscillator space can be obtained as the integral
$$\int d^2z|z\rangle \langle z|=1.$$
$`(9)`$
Here the integration by definition corresponds to
$$\int d^2z\equiv \frac{1}{\pi }\int d\left(\mathrm{Re}z\right)d\left(\mathrm{Im}z\right).$$
$`(10)`$
Furthermore, for an arbitrary operator $`A`$, we have, in a single-oscillator space,
$$\mathrm{Tr}A=\int d^2z\langle z|A|z\rangle .$$
$`(11)`$
Finally, we note the following identity, which will be used later,
$$e^{\mathrm{\Omega }a^{\dagger }a}=𝒩\left[e^{a^{\dagger }(e^\mathrm{\Omega }-1)a}\right].$$
$`(12)`$
In this relation $`\mathrm{\Omega }`$ is an arbitrary c-number, while $`𝒩`$ denotes normal ordering.
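Identity (12) can be confirmed directly on a truncated Fock space, where both sides reduce to the diagonal values $`e^{\mathrm{\Omega }n}`$; a minimal numerical check (cutoff and $`\mathrm{\Omega }`$ arbitrary):

```python
import numpy as np
from math import factorial

N, Omega = 12, 0.3 + 0.7j                    # Fock-space cutoff, arbitrary c-number
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1) # truncated annihilation operator
ad = a.conj().T

lhs = np.diag(np.exp(Omega * np.arange(N)))  # exp(Omega a^dag a): diagonal e^{Omega n}
rhs = sum((np.exp(Omega) - 1.0) ** n / factorial(n)
          * np.linalg.matrix_power(ad, n) @ np.linalg.matrix_power(a, n)
          for n in range(N))                 # normal-ordered series; a^N = 0 here
print(np.allclose(lhs, rhs))                 # True: both sides agree
```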
It is convenient to introduce operators $`\gamma _\lambda `$ according to
$$\hbar \gamma _{\lambda ,k}=B_k+\lambda bℬ_k.$$
$`(13)`$
Denoting the $`k`$th term in the second product in (6) by $`U_{\lambda \lambda ^{},k}`$ and utilizing relations (8)-(12), we have
$$U_{\lambda \lambda ^{}}=Z^{-1}\int d^2z_0\int d^2z_1\int d^2z_2\langle z_0|e^{-it\gamma _\lambda }|z_1\rangle \langle z_1|e^{-\hbar \beta \omega a^{\dagger }a}|z_2\rangle \langle z_2|e^{it\gamma _\lambda ^{}}|z_0\rangle .$$
$`(14)`$
Here we again omitted the mode index $`k`$ for simplicity, and $`Z`$ stands for $`Z_k\equiv \mathrm{Tr}_k\left(e^{-\beta B_k}\right)`$. The normal-ordering formula (12) then yields for the middle term,
$$\langle z_1|e^{-\hbar \beta \omega a^{\dagger }a}|z_2\rangle =\langle z_1|z_2\rangle e^{z_1^{*}(e^{-\hbar \beta \omega }-1)z_2}=\mathrm{exp}\left[z_1^{*}z_2-\frac{1}{2}|z_1|^2-\frac{1}{2}|z_2|^2+z_1^{*}(e^{-\hbar \beta \omega }-1)z_2\right].$$
$`(15)`$
In order to evaluate the first and last matrix-element factors in (14), we define the shifted operators
$$\eta =a+\lambda b(\hbar \omega )^{-1}g,$$
$`(16)`$
in terms of which we have
$$\gamma _\lambda =\omega \eta ^{\dagger }\eta -\lambda ^2b^2(\hbar ^2\omega )^{-1}g^2.$$
$`(17)`$
Since $`\eta `$ and $`\eta ^{\dagger }`$ still satisfy the bosonic commutation relation $`[\eta ,\eta ^{\dagger }]=1`$, the normal-ordering formula applies. Thus, for the first matrix element in (14), for instance, we get
$$\langle z_0|e^{-it\gamma _\lambda }|z_1\rangle =e^{it\lambda ^2b^2g^2/(\hbar ^2\omega )}\langle z_0|z_1\rangle e^{(e^{-i\omega t}-1)[z_0^{*}+\lambda bg/(\hbar \omega )][z_1+\lambda bg/(\hbar \omega )]}.$$
$`(18)`$
Collecting all the results, one concludes that the calculation of $`U_{\lambda \lambda ^{}}`$ involves six Gaussian integrations over the real and imaginary parts of the variables $`z_0,z_1,z_2`$. This is a rather lengthy calculation but it can be carried out in closed form. The result, with indices $`k`$ restored, is
$$\left|\underset{k}{\prod }U_{\lambda \lambda ^{},k}\right|=\mathrm{exp}\left[-2b^2\left(\lambda -\lambda ^{}\right)^2\mathrm{\Gamma }(t)\right],$$
$`(19)`$
with
$$\mathrm{\Gamma }(t)=\underset{k}{\sum }(\hbar \omega _k)^{-2}g_k^2\mathrm{sin}^2\frac{\omega _kt}{2}\mathrm{coth}\frac{\hbar \beta \omega _k}{2}.$$
$`(20)`$
We only gave the expression for the absolute value of the product of traces in (6). Its complex phase is also known but plays no role in our considerations. In the next section we analyze the results (19)-(20) in the continuum limit of a large number of modes.
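Before taking the continuum limit, (19)-(20) can be checked mode by mode: for a single oscillator the trace in (6) is computable by brute force on a truncated Fock space and compared with the analytic decoherence factor $`\mathrm{exp}[-2b^2(\lambda -\lambda ^{})^2\mathrm{\Gamma }(t)]`$. A sketch with arbitrary parameter values (and $`\hbar =1`$):

```python
import numpy as np
from scipy.linalg import expm

# One bath mode, hbar = 1: gamma_lam = w a^dag a + lam*b*g*(a + a^dag), Eq. (13).
N, w, g, b, beta = 80, 1.0, 0.2, 3.0, 2.0    # Fock cutoff and arbitrary parameters
lam1, lam2 = 1.0, -1.0                       # two eigenvalues of Lambda
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
ad, n_op = a.conj().T, np.diag(np.arange(N, dtype=float))
rho = expm(-beta * w * n_op)
rho /= np.trace(rho)                          # thermal initial state, Eq. (3)

def gamma(lam):
    return w * n_op + lam * b * g * (a + ad)

for t in (0.5, 1.0, 3.0):
    U = np.trace(expm(-1j * t * gamma(lam1)) @ rho @ expm(1j * t * gamma(lam2)))
    exact = np.exp(-2.0 * b**2 * (lam1 - lam2)**2 * (g / w)**2
                   * np.sin(w * t / 2.0)**2 / np.tanh(beta * w / 2.0))
    print(t, abs(U), exact)                   # |U| reproduces Eqs. (19)-(20)
```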
## 4 ADIABATIC DECOHERENCE
Results of the preceding section suggest that the off-diagonal elements of the effective density matrix, obtained after tracing over a bath, are generally decreased by the interaction with the bath modes. We will shortly demonstrate that in the limit of many bath modes, these off-diagonal elements irreversibly decay, thus leaving the system and the pointer in a statistical mixture of direct-product states. This process is usually termed decoherence. Apart from being of fundamental importance in the theory of quantum measurement, decoherence has attracted much interest recently due to rapid development of new fields such as quantum computing and quantum information. Decoherence due to external interactions is a major obstacle to implementation of coherently evolving quantum devices such as quantum computers. In this section we review some aspects of the physics of decoherence.
Decoherence is a result of the coupling of the quantum system to the environment which, generally, is the rest of the universe. In various experimentally relevant situations the interaction of a quantum system with its environment is dominated/mediated by its microscopic surroundings, which can be represented by a set of modes. For example, the dominant source of such interaction for an atom in an electromagnetic cavity field is the field itself coupled to the dipole moment of the atom. In the case of a Josephson junction in a magnetic flux, or of defect propagation in solids, the interaction can be dominated by acoustic phonons or delocalized electrons. Magnetic macromolecules interact with the surrounding spin environment, such as nuclear spins, etc.
It is customary to model a heat bath as a system of noninteracting boson modes. It has been established, for harmonic quantum systems, that the influence of a heat bath described by such oscillators is effectively identical to that of an external uncorrelated random force acting on the quantum system under consideration. In order for the system to satisfy an equation of motion with a linear dissipation term in the classical limit, the coupling was chosen to be linear in the coordinates, while the coupling constants entered lumped into a spectral function, such as (20), and were assumed to be of power-law form in the oscillator frequency, with an appropriate Debye cutoff.
The temperature was introduced via the initial thermal state of the bath, as in (3). This model of a heat bath was applied to study effects of dissipation on the probability of quantum tunneling from a metastable state. It was found that coupling a quantum system to the heat bath actually decreased the quantum tunneling rate. The problem of a particle in a double well potential was also considered. In this case the interaction with the bath led to quantum decoherence and complete localization at zero temperature. This study led to the spin-boson Hamiltonian, which found numerous other applications.
For a general system, there is no systematic way to separate decoherence and thermalization effects. We note that thermalization is naturally associated with exchange of energy between the quantum system and heat bath. Model system results and general expectations mentioned earlier suggest that in many cases decoherence involves its own time scales which are shorter than those of approach to thermal equilibrium. Our measurement model represents, in fact, a limiting case such that there is no energy exchange between the system and the bath. This situation essentially corresponds to the early stages of the system-bath interaction, at low temperatures, when thermalization effects are not yet significant.
Thus if we ignore the presence of the pointer (set $`p=0`$), and restore the system Hamiltonian $`S`$ in (1), then our model of the system-bath interaction, with $`H=S+\sum _kB_k+\mathrm{\Lambda }\sum _k𝒫_k`$, now with $`b=1`$ but still with commuting $`S`$ and $`\mathrm{\Lambda }`$, corresponds to adiabatic quantum decoherence. Indeed, the commutation property means that $`[H,S]=0`$, so that the system energy is conserved and therefore relaxation is generally suppressed. Only decoherence processes are possible and in fact exactly calculable. The results (19)-(20) remain unchanged and indicate that decoherence is controlled by the interaction with the bath, $`\mathrm{\Lambda }`$, rather than by the system Hamiltonian. The eigenvalues of the “pointer observable” $`\mathrm{\Lambda }`$ determine the rate of decoherence, while the type of the bath and coupling determines the function $`\mathrm{\Gamma }(t)`$.
We also note that $`\mathrm{\Gamma }(t)`$ is a sum of positive terms. For true decoherence, i.e., in order for this sum to diverge for large times, one needs a continuum of frequencies and interactions with the bath modes that are strong enough at low frequencies. We will give specific results shortly. For quantum measurement, though, the time $`t`$ need not be large. The conditions for decoherence are then somewhat different; see below.
In the continuum limit of many modes, the density of the bosonic bath states in unit volume, $`𝒟(\omega )`$, and the Debye cutoff, with frequency $`\omega _D`$, are introduced so that (20) can be rewritten as
$$\mathrm{\Gamma }(t)=\int _0^{\mathrm{\infty }}d\omega \,\frac{𝒟(\omega )g^2(\omega )}{(\hbar \omega )^2}\,e^{-\omega /\omega _D}\,\mathrm{sin}^2\frac{\omega t}{2}\,\mathrm{coth}\frac{\hbar \beta \omega }{2}.$$
$`(21)`$
Let us consider the usual choice, motivated by atomic-physics and solid-state applications, corresponding to
$$𝒟(\omega )g^2(\omega )=\mathrm{\Omega }\omega ^n,$$
$`(22)`$
where $`\mathrm{\Omega }`$ is a constant.
Exponent values $`n=1`$ and $`n=3`$ have been analyzed in detail in the literature. For $`n=1`$, three regimes were identified, defined by the time scale for thermal decoherence, $`\hbar \beta `$, which is large for low temperatures, and the time scale for quantum-fluctuation effects, $`\omega _D^{-1}`$. The first, “quiet” regime, $`t\ll \omega _D^{-1}`$, corresponds to no significant decoherence and $`\mathrm{\Gamma }\propto (\omega _Dt)^2`$. The next, “quantum” regime, $`\omega _D^{-1}\ll t\ll \hbar \beta `$, corresponds to decoherence driven by quantum fluctuations and $`\mathrm{\Gamma }\propto \mathrm{ln}(\omega _Dt)`$. Finally, for $`t\gg \hbar \beta `$, in the “thermal” regime, thermal fluctuations are dominant and $`\mathrm{\Gamma }\propto t/\beta `$. For $`n=3`$, decoherence is incomplete. Indeed, while $`n`$ must be positive for the integral in (21) to converge, only for $`n<2`$ do we have a divergent $`\mathrm{\Gamma }(t)`$, growing according to a power law for large times (in fact, $`\mathrm{\Gamma }\propto t^{2-n}`$) in the “thermal” regime. Thus, strong enough coupling $`|g(\omega )|`$ to the low-frequency modes of the heat bath is crucial for full decoherence.
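To make the three regimes concrete, $`\mathrm{\Gamma }(t)`$ in (21) can be evaluated by direct quadrature. The sketch below is not from the original analysis; it uses units $`\hbar =\omega _D=\mathrm{\Omega }=1`$ and an illustrative inverse temperature $`\beta =50`$:

```python
# Numerical sketch: Gamma(t) of Eq. (21) with D(w) g^2(w) = Omega * w**n,
# in units hbar = omega_D = Omega = 1 and beta = 50 (illustrative values).
import numpy as np
from scipy.integrate import quad

def gamma_t(t, n, beta=50.0):
    integrand = lambda w: (w**(n - 2) * np.exp(-w) * np.sin(0.5 * w * t)**2
                           / np.tanh(0.5 * beta * w))
    val, _ = quad(integrand, 0.0, np.inf, limit=1000)
    return val

for n in (1, 3):
    print(n, [round(gamma_t(t, n), 3) for t in (1.0, 10.0, 100.0)])
# For n = 1, Gamma keeps growing (logarithmically, then linearly in t);
# for n = 3 it saturates, i.e., decoherence remains incomplete, in line
# with the n < 2 criterion stated above.
```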
Let us concentrate on the $`n=1`$ case, motivated by atomic physics and solid-state applications; it is termed Ohmic dissipation. We will now consider the conditions for decoherence required for quantum measurement. These are somewhat different from those just discussed for the large-time adiabatic system-bath interaction effects. Here we assume that the relevant energy gaps of $`S`$ are bounded, so that there exists a well defined time scale $`\hbar /\mathrm{\Delta }S`$ of the evolution of the system under $`S`$. There is also the time scale $`1/\omega _D`$ set by the frequency cutoff assumed for the interactions. The thermal time scale is $`\hbar \beta `$. The only real limitation on the duration of measurement is that $`t`$ must be less than $`\hbar /\mathrm{\Delta }S`$. In applications, typically one can assume that $`1/\omega _D\ll \hbar /\mathrm{\Delta }S`$. Furthermore, it is customary to assume that the temperature is low,
$$t\ \mathrm{and}\ 1/\omega _D\ll \hbar /\mathrm{\Delta }S\ll \hbar \beta .$$
$`(23)`$
In the limit of large $`\hbar \beta `$, for Ohmic dissipation, (19) reduces to
$$\left|\prod _kU_{\lambda \lambda ^{\prime },k}\right|\approx \mathrm{exp}\left\{-\frac{\mathrm{\Omega }}{2\hbar ^2}b^2\left(\lambda -\lambda ^{\prime }\right)^2\mathrm{ln}\left[1+(\omega _Dt)^2\right]\right\}.$$
$`(24)`$
In order to achieve effective decoherence, the product $`b^2(\mathrm{\Delta }\lambda )^2\mathrm{ln}[1+(\omega _Dt)^2]`$ must be large. The present approach only applies to operators $`\mathrm{\Lambda }`$ with nonzero scale of the smallest spectral gaps, $`\mathrm{\Delta }\lambda `$.
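For orientation, the suppression factor (24) is easy to tabulate; a minimal sketch with illustrative values ($`\hbar =\mathrm{\Omega }=1`$, $`\mathrm{\Delta }\lambda =1`$, $`\omega _Dt=10`$):

```python
# Size of the off-diagonal suppression of Eq. (24), hbar = Omega = 1
# (illustrative units); dlam = lambda - lambda' is the spectral gap.
import numpy as np

def off_diagonal_factor(b, dlam, omega_d_t, Omega=1.0):
    return np.exp(-0.5 * Omega * (b * dlam)**2 * np.log1p(omega_d_t**2))

for b in (1.0, 3.0, 10.0):
    print(b, off_diagonal_factor(b, dlam=1.0, omega_d_t=10.0))
# b = 1 gives ~0.1, b = 3 gives ~1e-9, b = 10 gives ~1e-100:
# increasing the coupling b is far more effective than increasing t,
# since the factor decays only as a power of (1 + (omega_D t)^2).
```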
We thus note that, unlike the large-time system-bath decoherence, the decoherence property needed for the measurement process will be obtained for nearly any well-behaved choice of $`𝒟(\omega )g^2(\omega )`$, because we can rely on the value of $`b`$ being large rather than on the properties of the function $`\mathrm{\Gamma }(t)`$, which can no longer be considered evaluated at large $`t`$. If $`b`$ can be large enough, very short measurement times $`t`$ are possible. However, it may be advisable to use measurement times $`1/\omega _D\ll t\ll \hbar /\mathrm{\Delta }S`$ to get the extra amplification factor $`\mathrm{ln}(\omega _Dt)`$ and allow for fuller decoherence and less sensitivity to the value of $`t`$ in the pointer part of the dynamics, to be addressed in the next section. We notice, furthermore, that the assumption of a large number of modes is important for the monotonic decay of the absolute value (19) in the “system-bath” decoherence applications, where irreversibility is obtained only in the limit of an infinite number of modes. In the measurement case, it can be shown that the role of such a continuum limit is mainly to allow one to extend the possible measurement times from $`t\lesssim 1/\omega _D`$ to $`1/\omega _D\ll t\ll \hbar /\mathrm{\Delta }S`$.
## 5 POINTER PROPERTIES, AND DISCUSSION
Consider the reduced density matrix $`r`$ of $`S+P`$, with matrix elements given by (6). It becomes diagonal in $`|\lambda \rangle `$, at time $`t`$, because all the nondiagonal elements are small, assuming that effective decoherence has been achieved, as discussed in the preceding section:
$$r=\sum _\lambda |\lambda \rangle \langle \lambda |\,\rho _{\lambda \lambda }\prod _me^{-it\left(P_m+p\lambda 𝒫_m\right)/\hbar }\sigma _me^{it\left(P_m+p\lambda 𝒫_m\right)/\hbar }.$$
$`(25)`$
Thus, the dynamics yields the density matrix that can be interpreted as describing a statistically distributed system (a mixture), without quantum correlations. This, however, is only meaningful within the ensemble interpretation of quantum mechanics. For a single system plus device, coupling to the rest of the universe is presumably needed to describe how that system is left in one of the eigenstates $`|\lambda \rangle `$, with probability $`\rho _{\lambda \lambda }`$. Presently, such a process of wavefunction collapse is not fully understood, but see comments below on the implications of our model for this problem. After the measurement interaction is switched off at $`t`$, the pointer of that system will carry information on the value of $`\lambda `$. This information is “amplified,” owing to the large parameter $`p`$ in the interaction.
We note that one of the roles of the pointer having many modes, which can be identical and noninteracting, is to allow it (the pointer only) to still be treated in the ensemble, density matrix description, even if we focus on the late stage of the measurement when the wave functions of a single measured system and of each pointer mode are already collapsed. This pointer density matrix can be read off (25); it is the $`\lambda `$-dependent product over the pointer modes labeled by $`m`$.
Another useful insight is provided by the fact that, as will be shown shortly, the changes in the expectation values of some observables of the pointer retain amplified information on the system eigenstate. The coupling to the rest of the universe that leads to the completion of the measurement process, should involve such an observable of the pointer. Eventually, the information in the pointer, perhaps after several steps of amplification, should be available for probe by interactions with classical devices.
At time $`t=0`$, expectation values of various operators of the pointer will have their initial values. These values will be changed at time $`t`$ of the measurement owing to the interaction with the measured system. It is expected that the large coupling parameter $`p`$ will yield large changes in expectation values of the pointer quantities. This does not apply equally to all operators in the $`P`$-space. Let us begin with the simplest choice: the Hamiltonian $`\sum _mP_m`$ of the pointer.
We will assume that the pointer is described by the bosonic heat bath and, for simplicity, use the same notation for the pointer modes as that used for the bath modes. The assumption that the pointer modes are initially thermalized, see (3), was not used thus far. While it allows exact analytical calculations, it is not essential: the effective density matrix describing the pointer modes at time $`t`$, for the system state $`\lambda `$, will retain amplified information on the value of $`\lambda `$ for general initial states of the pointer.
This effective density matrix is the product over the $`P`$-modes in (25). For the “thermal” $`\sigma _m`$ from (3), the expectation value of the pointer energy, $`E_P_\lambda `$, can be calculated from
$$\frac{\mathrm{Tr}_P\left\{\left(\sum _m\hbar \omega _ma_m^{\dagger }a_m\right)\prod _n\left[e^{-it[\omega _na_n^{\dagger }a_n+p\lambda g_n(a_n+a_n^{\dagger })/\hbar ]}\left(e^{-\hbar \beta \sum _k\omega _ka_k^{\dagger }a_k}\right)e^{it[\omega _na_n^{\dagger }a_n+p\lambda g_n(a_n+a_n^{\dagger })/\hbar ]}\right]\right\}}{\mathrm{Tr}_P\left(e^{-\hbar \beta \sum _s\omega _sa_s^{\dagger }a_s}\right)}.$$
$`(26)`$
This expression can be reduced to calculations for individual modes. Operator identities can be then utilized to obtain, after some lengthy algebra, the results
$$\langle E_P\rangle _\lambda (t)=E_P(0)+\mathrm{\Delta }\langle E_P\rangle _\lambda (t),$$
$`(27)`$
$$E_P(0)=\hbar \sum _m\omega _me^{-\hbar \beta \omega _m}\left(1-e^{-\hbar \beta \omega _m}\right)^{-1},$$
$`(28)`$
$$\mathrm{\Delta }\langle E_P\rangle _\lambda (t)=\frac{4p^2\lambda ^2}{\hbar }\sum _m\frac{g_m^2}{\omega _m}\mathrm{sin}^2\left(\frac{\omega _mt}{2}\right).$$
$`(29)`$
For a model with Ohmic dissipation, the integral in the continuum limit can be calculated to yield
$$\mathrm{\Delta }\langle E_P\rangle _\lambda (t)=\frac{2\mathrm{\Omega }\omega _D\lambda ^2p^2}{\hbar }\frac{(\omega _Dt)^2}{1+(\omega _Dt)^2}.$$
$`(30)`$
The energy will be an indicator of the amplified value of the square of $`\lambda `$, provided $`p`$ is large. Furthermore, we see here the advantage of larger measurement times, $`t1/\omega _D`$. The change in the energy then reaches saturation. After time $`t`$, when the interaction is switched off, the energy of the pointer will be conserved, internally, i.e., until the pointer is affected by the “rest of the universe.”
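A small numerical illustration of (30) (arbitrary parameter values, in units $`\hbar =\omega _D=1`$) shows the quadratic early-time rise and the saturation for $`t\gg 1/\omega _D`$:

```python
# Pointer-energy response of Eq. (30), in units hbar = omega_D = 1;
# Omega, p and lam below are illustrative values, not fitted parameters.
def delta_E_pointer(t, lam=1.0, p=50.0, Omega=1.0, omega_d=1.0, hbar=1.0):
    x2 = (omega_d * t)**2
    return 2.0 * Omega * omega_d * (lam * p)**2 / hbar * x2 / (1.0 + x2)

for t in (0.1, 1.0, 10.0, 100.0):
    print(t, delta_E_pointer(t))
# Rises as t^2 for omega_D t << 1 and saturates at 2*Omega*omega_D*(lam*p)^2,
# so the stored value records lambda^2, amplified by the large factor p^2.
```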
Let us consider the expectation value of the following Hermitian operator of the pointer:
$$X=\sum _m𝒫_m=\sum _mg_m(a_m+a_m^{\dagger }).$$
$`(31)`$
For an atom in a field, $`X`$ is related to the electromagnetic field operators. One can show that $`\langle X\rangle _P(0)=0`$ and
$$\mathrm{\Delta }\langle X\rangle _\lambda (t)=\langle X\rangle _\lambda (t)=-\frac{4p\lambda }{\hbar }\sum _m\frac{g_m^2}{\omega _m}\mathrm{sin}^2\left(\frac{\omega _mt}{2}\right)=-\frac{2\mathrm{\Omega }\omega _D\lambda p}{\hbar }\frac{(\omega _Dt)^2}{1+(\omega _Dt)^2}.$$
$`(32)`$
The change in the expectation value is linear in $`\lambda `$. However, this operator is not conserved internally. One can show that after time $`t`$ its expectation value decays to zero for times $`t+𝒪(1/\omega _D)`$.
We note that by referring to “unit volume” we have avoided the discussion of the “extensivity” of various quantities. For example, the initial energy $`E_P(0)`$ is obviously proportional to the system volume, $`V`$. However, the change $`\mathrm{\Delta }\langle E_P\rangle _\lambda (t)`$ will not be extensive; typically, $`g^2(\omega )\propto 1/V`$, $`𝒟(\omega )\propto V`$. Thus, while the amplification in our measurement process can involve a numerically large factor, the changes in the quantities of the pointer will be multiples of microscopic values. Multi-stage amplification, or a huge coupling parameter $`p`$, would be needed for the information in the pointer to become truly “extensive” macroscopically.
In summary, we described the first stage of a measurement process. It involves decoherence due to a bath and transfer of information to a large system (pointer) via strong interaction over a short period of time. The pointer itself need not be coupled to the internal bath. While we do not offer a solution to the foundation-of-quantum-mechanics wave-function collapse problem, our results do provide two interesting observations.
Firstly, the pointer operator “probed” by the rest of the universe during the wave-function collapse stage, determined by how the pointer modes are coupled to the external bath, must have the appropriate amplification capacity in the first, decoherence stage of the measurement process, as illustrated by our calculations.
Secondly, for a single system (rather than an ensemble), the multiplicity of the (noninteracting) pointer modes might allow the pointer to be treated within the density matrix formalism even after the system and each pointer-mode wave functions were collapsed. Since it is the information in the pointer that is passed on, this observation might seem to resolve part of the measurement puzzle. Specifically, it might suggest why only those density matrices entering (25) are selected for the pointer: they carry classical (large, different from other values) information in expectation values, rather than quantum-mechanical superposition.
However, presumably only a full description of the interaction of the external world with the system $`S+P`$ can explain the wavefunction collapse. It is likely that in practice there will be two types of pointer involved, in a multistage measurement process. Some pointers will consist of many noninteracting modes. These pointers carry the information, stored in a density matrix rather than a wave function of a single system. The latter transference hopefully makes the wavefunction collapse and transfer of the stored information to the macroscopic level less “mysterious.” The second type of pointer will involve strongly interacting modes and play the role of an amplifier by utilizing the many-body collective behavior of the coupled modes, phase-transition style. Its role will be to alleviate the requirement for large mode-to-system coupling parameters encountered in our model.
We acknowledge helpful discussions with Professor L. S. Schulman. This research has been supported by the US Army Research Office under grant DAAD 19-99-1-0342.
# A Few Things We Do Not Know About the Sun and F Stars and G Stars
Robert L. Kurucz
Harvard-Smithsonian Center for Astrophysics
60 Garden Street, Cambridge, MA 02138, USA
October 3, 1999
presented at the Workshop on Nearby Stars, NASA Ames Research Center,
Moffett Field, California, June 24-26, 1999.
## We do not know how to make realistic model atmospheres
## We do not understand convection
Recently I have been preoccupied with convection because the model atmospheres are now good enough to show shortcomings in the convective treatment. Here I will outline what I have learned. I will mainly list the conclusions I have come to from examining individual convective models and from examining grids of convective models as a whole. Eighteen figures illustrating the points made here can be found in Kurucz (1996).
Every observation, measurement, model, and theory has seven characteristic numbers: resolution in space, in time, and in energy, and minimum and maximum energy. Many people never think about these resolutions. Low-resolution physics cannot be used to study something in which the physical process of interest occurs at high resolution, unless the high-resolution effects average out when integrated over the resolution bandpasses.
What does the sun, or any convective atmosphere, actually look like? We do not really know yet. There is a very simplified three-dimensional radiation-hydrodynamics calculation discussed in the review by Chan, Nordlund, Steffen, and Stein (1991). It is consistent with the high spatial and temporal resolution observations shown in the review by Topka and Title (1991). Qualitatively, there is cellular convection with relatively slowly ascending, hot, broad, diverging flows that turn over and merge with their neighbors to form cold, rapidly descending, filamentary flows that diffuse at the bottom. The filling factor for the cold downward flowing elements is small. The structure changes with time. Nordlund and Dravins (1990) discuss four similar stellar models with many figures. Every one-dimensional mixing-length convective model is based on the assumption that the convective structure averages away so that the emergent radiation depends on only a one-dimensional temperature distribution.
There is a solar flux atlas (Kurucz, Furenlid, Brault, and Testerman 1984) that Ingemar Furenlid caused to be produced because he wanted to work with the sun as a star for comparison to other stars. The atlas is pieced together from eight Fourier transform spectrograph scans, each of which was integrated for two hours, so the time resolution is two hours for a given scan. The x and y resolutions are the diameter of the sun. The z resolution (from the formation depths of features in the spectrum) is difficult to estimate. It depends on the signal-to-noise and the number of resolution elements. The first is greater than 3000 and the second is more than one million. It may be possible to find enough weak lines in the wings and shoulders of strong lines to map out relative positions to a few kilometers. Today I think it is to a few tens of kilometers. The resolving power is on the order of 522,000. This is not really good enough for observations made through the atmosphere because it does not resolve the terrestrial lines that must be removed from the spectrum. (In the infrared there are many wavelength regions where the terrestrial absorption is too strong to remove.) The sun itself degrades its own flux spectrum by differential rotation and macroturbulent motions. The energy range of the atlas is from 300 to 1300 nm, essentially the range where the sun radiates most of its energy.
This solar atlas is of higher quality than any stellar spectrum taken thus far but still needs considerable improvement. If we have difficulty interpreting these data, it can only be worse for other stars where the spectra are of lower quality by orders of magnitude.
To analyze this spectrum, or any other spectrum, we need a theory that works at a similar resolution or better. We use a plane parallel, one-dimensional theoretical or empirical model atmosphere that extends in z through the region where the lines and continuum are formed. The one-dimensional model atmosphere represents the space average of the convective structure over the whole stellar disk (taking account of the center-to-limb variation) and the time average over hours. It is usually possible to compute a model that matches the observed energy distribution around the flux maximum. However, to obtain the match it is necessary to adjust a number of free parameters: effective temperature, surface gravity, microturbulent velocity, and the mixing-length-to-scale-height-ratio in the one-dimensional convective treatment. The microturbulent velocity parameter also produces an adjustment to the line opacity to make up for missing lines. Since much of the spectrum is produced near the flux maximum, at depths in the atmosphere where the overall flux is produced, averaging should give good results. The parameters of the fitted model may not be those of the star, but the radiation field should be like that of the star. The sun is the only star where the effective temperature and gravity are accurately known. In computing the detailed spectrum, it is possible to adjust the line parameters to match many features, although not the centers of the strongest lines. These are affected by the chromosphere and by NLTE. Since very few lines have atomic data known accurately enough to constrain the model, a match does not necessarily mean that the model is correct.
From plots of the convective flux and velocity for grids of models I have identified three types of convection in stellar atmospheres:
- normal strong convection, where the convection is continuous from the atmosphere down into the underlying envelope. Convection carries more than 90% of the flux. Stars with effective temperatures 6000K and cooler are convective in this way, as are stars on the main sequence up to 8000K. At higher temperature the convection carries less of the total flux and eventually disappears, starting with the lowest gravity models. Intermediate gravities have intermediate behavior. Abundances have to be uniform through the atmosphere into the envelope. The highly convective models seem to be reasonable representations of real stars, except for the shortcomings cited below.
- atmospheric layer convection, where, as convection weakens, the convection zone withdraws completely up from the envelope into the atmosphere. There is zero convection at the bottom of the atmosphere. Abundances in the atmosphere are decoupled from abundances in the envelope. For mixing-length models the convection zone is limited at the top by the Schwarzschild criterion to the vicinity of optical depth 1 or 2. The convection zone is squashed into a thin layer. In a grid, this layer continues to carry significant convective flux for about 500K in effective temperature beyond the strongly convective models. There is no common-sense way in which to have convective motions in a thin layer in an atmosphere. The solution is that the Schwarzschild criterion does not apply to convective atmospheres. The derivatives are defined only in one-dimensional models. A real convective element has to decide what to do on the basis of local three-dimensional derivatives, not on means. These thin-layer-convective model atmospheres may not be very realistic.
- plume convection. Once the convective flux drops to the percent range, cellular convection is no longer viable. Either the star becomes completely radiative, or it becomes radiative with convective plumes that cover only a small fraction of the surface in space and time. Warm convective material rises and radiates. The star has rubeola. The plumes dissipate and the whole atmosphere relaxes downward. There are no downward flows. The convective model atmospheres are not very realistic except when the convection is so small as to have negligible effect, i.e. the model is radiative. The best approach may be simply to define a star with less than, say, 1% convection as radiative. The error will probably be less than using mixing-length model atmospheres.
Using a one-dimensional model atmosphere to represent a real convective atmosphere for any property that does not average in space and time to the one-dimensional model predictions produces systematic errors. The Planck function, the Boltzmann factor, and the Saha equation are functions that do not average between hot and cold convective elements. We can automatically conclude that one-dimensional convective models must predict the wrong value for any parameter that has strong exponential temperature dependence from these functions.
Starting with the Planck function, the ultraviolet photospheric flux in any convective star must be higher than predicted by a one-dimensional model (Bikmaev 1994). Then, by flux conservation, the flux redward of the flux maximum must be lower. It is fit by a model with lower effective temperature than that of the star. The following qualitative predictions result from the exponential falloff of the flux blueward of the flux maximum:
- the Balmer continuum in all convective stars is higher than predicted by a one-dimensional model;
- in G stars, including the sun, the discrepancy reaches up to about 400nm;
- all ultraviolet photoionization rates at photospheric depths are higher in real stars than computed from one-dimensional models;
- flux from a temperature minimum and a chromospheric temperature rise masks the increased photospheric flux in the ultraviolet;
- the spectrum predicted from a one-dimensional model for the exponential falloff region, and abundances derived therefrom, are systematically in error;
- limb-darkening predicted from a one-dimensional model for the exponential falloff region is systematically in error;
- convective stars produce slightly less infrared flux than do one-dimensional models.
The Boltzmann factor is extremely temperature sensitive for highly excited levels:
- the strong Boltzmann temperature dependence of the second level of hydrogen implies that the Balmer line wings are preferentially formed in the hotter convective elements. A one-dimensional model that matches Balmer line wings has a higher effective temperature than the real star;
- the same is true for all infrared hydrogen lines.
The Saha equation is safe only for the dominant species:
- neutral atoms for an element that is mostly ionized are the most dangerous because (in LTE) they are much more abundant in the cool convective elements. When Fe is mostly ionized the metallicity determination from Fe I can be systematically offset and can result in a systematic error in the assumed evolutionary track and age.
- in the sun convection may account for the remaining uncertainties with Fe I found by Blackwell, Lynas-Gray, and Smith (1995);
- the most striking case is the large systematic error in Li abundance determination in extreme Population II G subdwarfs. The abundance is determined from the Li I D lines which are formed at depths in the highly convective atmosphere where Li is 99.94% ionized (Kurucz 1995b);
- molecules with high dissociation energies such as CO are also much more abundant in the cool convective elements. The CO fundamental line cores in the solar infrared are deeper than any one-dimensional model predicts (Ayres and Testerman 1981) because the cooler convective elements that exist only a short time have more CO than the mean model.
Given all of these difficulties, how should we proceed? One-dimensional model atmospheres can never reproduce real convective atmospheres. The only practical procedure is to compute grids of model atmospheres, then to compute diagnostics for temperature, gravity, abundances, etc., and then to make tables of corrections. Say, for example, in using the H$`\alpha `$ wings as a diagnostic of effective temperature in G stars, the models may predict effective temperatures that are 100K too high. So if one uses an H$`\alpha `$ temperature scale it has to be corrected by 100K to give the true answer. Every temperature scale by any method has to be corrected in some way. Unfortunately, not only is this tedious, but it is very difficult or impossible because no standards exist. We do not know the energy distribution or the photospheric spectrum of a single star, even the sun. We do not know what spectrum corresponds to a given effective temperature, gravity, or abundances. The uncertainties in solar abundances are greater than 10%, except for hydrogen, and solar abundances are the best known. It is crucial to obtain high resolution, high signal-to-noise observations of the bright stars.
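The correction-table procedure is easy to mechanize once the corrections are known; the sketch below uses entirely hypothetical numbers (no such calibrated standards exist, which is the point of this section) and simple linear interpolation:

```python
# Hypothetical correction table for an H-alpha effective temperature scale.
# The offsets are placeholders for illustration only, not derived results.
import numpy as np

teff_grid = np.array([5000.0, 5500.0, 6000.0, 6500.0])  # model Teff (K)
correction = np.array([-80.0, -100.0, -120.0, -140.0])  # assumed offsets (K)

def corrected_teff(teff_halpha):
    """Map a 1D-model H-alpha temperature to a corrected value."""
    return teff_halpha + np.interp(teff_halpha, teff_grid, correction)

print(corrected_teff(5800.0))  # 5688.0: a 5800 K H-alpha Teff, corrected
```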
## We do not consider the variation in microturbulent velocity
Microturbulent velocity in the photosphere is just the convective motions. At the bottom of the atmosphere it is approximately the maximum convective velocity. At the temperature minimum it is zero or near zero because the convecting material does not rise that high. There is also microturbulent velocity in the chromosphere increasing outward from the temperature minimum that is produced by waves or other heating mechanisms. In the sun the empirically determined microturbulent velocity is about 0.5 km/s at the temperature minimum and about 1.8 km/s in the deepest layers we can see. In a solar model the maximum convective velocity is 2.3 km/s. The maximum convective velocity is about 0.25 km/s in an M dwarf and increases up the main sequence. The convective velocity increases greatly as the gravity decreases. I suggest that a good way to treat the behavior of microturbulent velocity in the models is to scale the solar empirical distribution as a function of Rosseland optical depth to the maximum convective velocity for each effective temperature and gravity.
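A minimal sketch of the suggested scaling; the anchor points of the profile are schematic, patterned on the solar values just quoted (about 0.5 km/s at the temperature minimum, 1.8 km/s at depth, maximum convective velocity 2.3 km/s), not a published tabulation:

```python
# Scale an empirical solar microturbulence profile xi(tau_Ross) to a model
# with a different maximum convective velocity. Profile values are schematic.
import numpy as np

log_tau = np.array([-4.0, -2.0, 0.0, 1.0])   # log10 Rosseland optical depth
xi_sun = np.array([0.5, 0.9, 1.5, 1.8])      # km/s, schematic solar shape
V_CONV_MAX_SUN = 2.3                         # km/s, solar model value

def xi_model(log_tau_pt, v_conv_max):
    """Microturbulent velocity at log_tau_pt for a model's v_conv_max."""
    return (v_conv_max / V_CONV_MAX_SUN) * np.interp(log_tau_pt, log_tau, xi_sun)

print(xi_model(0.0, v_conv_max=0.25))  # ~0.16 km/s for an M-dwarf-like model
```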
Why does this matter? Microturbulent velocity increases line width and opacity and produces effects on an atmosphere like those from changing abundances. At present, models, fluxes, colors, spectra, etc. are computed with constant microturbulent velocity within a model and from model to model. This introduces systematic errors within a model between high and low depths of formation, between models with different effective temperatures, and between models with different gravity. Microturbulent velocity varies along an evolutionary track. If microturbulent velocity is produced by convection, microturbulent velocity is zero when there is no convection, and diffusion is possible.
By now I should have computed a model grid with varying microturbulent velocity but I am behind as usual.
## We do not understand spectroscopy
## We do not have good spectra of the sun or any other star
Very few of the features called “lines” in a spectrum are single lines. Most features consist of blends of many lines from different atoms and molecules. All atomic lines except those of thorium have hyperfine or isotopic components, or both, and are asymmetric (Kurucz 1993). Low resolution, low-signal-to-noise spectra do not contain enough information in themselves to allow interpretation. Spectra cannot be properly interpreted without signal-to-noise and resolution high enough to give us all the information the star is broadcasting about itself. And then we need laboratory data and theoretical calculations as complete as possible. Once we understand high quality spectra we can look at other stars with lower resolution and signal-to-noise and have a chance to make sense of them.
## We do not have energy distributions for the sun or any other star
I get requests from people who want to know the solar irradiance spectrum, the spectrum above the atmosphere, that illuminates all solar system bodies. They want to interpret their space telescope observations or work on atmospheric chemistry, or whatever. I say, “Sorry, it has never been observed. NASA and ESA are not interested. I can give you my model predictions but you cannot trust them in detail, only in, say, one wavenumber bins.” The situation is pathetic.
I am reducing Brault’s FTS solar flux and intensity spectra taken at Kitt Peak for 0.3 to 5 $`\mu `$m. I am trying to compute the telluric spectrum and ratio it out to determine the flux above the atmosphere, but that will not work for regions of very strong absorption. Once that is done the residual flux spectra can be normalized to low resolution calibrations to determine the irradiance spectrum. The missing pieces will have to be filled in by computation. Spectra available in the ultraviolet are much lower resolution, much lower signal-to-noise, and are central intensity or limb intensity, not flux. The details of the available solar atlases can be found in two review papers, Kurucz (1991; 1995a).
## We do not know how to determine abundances
## We do not know the abundances of the sun or any other star
One of the curiosities of astronomy is the quantity [Fe]. It is the logarithmic abundance of Fe in a galaxy, cluster, star, whatever, relative to the solar abundance of Fe. What makes it peculiar is that we do not yet know the solar abundance of Fe and our guesses change every year. The abundance has varied by a factor of ten since I was a student. Therefore [Fe] is meaningless unless the solar Fe abundance is also given so that [Fe] can be corrected to the current value of Fe.
For an example I use Grevesse and Sauval’s (1999) solar Fe abundance determination. I am critical, but, regardless of my criticism, I still use their abundances. There are scores of other abundance analysis papers, including some bearing my name, that I could criticize the same way.
Grevesse and Sauval included 65 Fe I “lines” ranging in strength from 1.4 to 91.0 mÅ and 13 Fe II “lines” ranging from 15.0 to 87.0 mÅ. They found an abundance log Fe/H + 12 = 7.50 $`\pm `$ 0.05.
Another curiosity of astronomy is that Grevesse and Sauval have decided a priori that the solar Fe abundance must equal the meteoritic abundance of 7.50 and that a determination is good if it produces that answer. If the solar abundance is not meteoritic, how could they ever determine it?
There are many “problems” in the analysis. First, almost all the errors are systematic, not statistical. Having many lines in no way decreases the error. In fact, the use of a wide range of lines of varying strengths increases the systematic errors. Ideally a single weak line is all that is required to get an accurate abundance. Weak lines are relatively insensitive to the damping treatment, to microturbulent velocity, and to the model structure. The error is reduced simply by throwing out all lines greater than 30 mÅ. That reduces the number of Fe I lines from 65 to 25 and of Fe II lines from 13 to 5. As we discussed above, the microturbulent velocity varies with depth but Grevesse and Sauval assume that it is constant. This problem is minimized if all the lines are weak.
As we discussed above, “lines” do not exist. The lines for which equivalent widths are given are all parts of blended features. As a minimum we have to look at the spectrum of each feature and determine how much of the feature is in the “line” under investigation and how much is blending. Rigorously one should do spectrum synthesis of the whole feature. We have solar central intensity spectra and spectrum synthesis programs. For the sun we have the advantage of intensity spectra without rotational broadening. In the flux spectrum of the sun and of other stars there is more blending. The signal-to-noise of the spectra is several thousand and the continuum level can be determined to on the order of 0.1 per cent, so the errors from the spectrum are small. With higher signal-to-noise more detail would be visible and the blending would be better understood. Most of the features cannot be computed well with the current line data. None of the features can be computed well without adjusting the line data. Even if the line data were perfect, the wavelengths would still have to be adjusted because of wavelength shifts from convective motions.
Fe has 4 isotopes. The isotopic splitting has not been determined for the lines in the abundance analysis. For weak lines it does not affect the total equivalent width but it does affect the perception of blends.
It is possible to have undetectable blends. There are many Fe I lines with the same wavelengths, including some in this analysis, and many lines of other elements. We hope that these blends are very weak. The systematic error always makes the observed line stronger than it is in reality, so such blends produce an abundance overestimate.
There are systematic errors and random errors in the gf values. With a small number of weak lines on the linear part of the curve of growth it is easy to correct the abundances when the gf values are improved in the future.
We are left with 3 relatively safe lines of Fe I and 1 relatively safe line of Fe II. These have the least uncertainty in determining the blending by my estimation. Grevesse and Sauval found abundances of 7.455, 7.453, and 7.470 for the Fe I lines and 7.457 for the Fe II line.
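For orientation, a straight unweighted mean of these four values is $`(7.455+7.453+7.470+7.457)/4\simeq 7.459`$, with a total spread of only 0.017 dex.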
## We do not have good atomic and molecular data
## One half the lines in the solar spectrum are not identified
It is imperative that laboratory spectrum analyses be improved and extended, and that NASA and ESA pay for it. Some of the analyses currently in use date from the 1930s and produce line positions uncertain by 0.01 or 0.02 Å. New analyses with FTS spectra produce many more energy levels and one or two orders of magnitude better wavelengths. One analysis can affect thousands of features in a stellar spectrum. Also the new data are of such high quality that for some lines the hyperfine or isotopic splitting can be directly measured. Using Pickering (1996) and Pickering and Thorne (1996) I am now able to compute Co I hyperfine transitions and to reproduce the flag patterns and peculiar shapes of Co features in the solar spectrum. Using Litzen, Brault, and Thorne (1993) I am now able to compute the five isotopic transitions for Ni I and to reproduce the Ni features in the solar spectrum. These new analyses also serve as the basis for new semiempirical calculations that can predict the gf values and the lines that have not yet been observed in the lab but that matter in stars. I have begun to compute new line lists for all the elements and I will make them available on my web site, kurucz.harvard.edu.
We should get our own house in order before worrying about the neighbors.
Ayres, T.R. and Testerman, L. 1981, ApJ 245, 1124-1140.
Bikmaev, I. 1994, personal communication.
Blackwell, D.E., Lynas-Gray, A.E., and Smith, G. 1995, A&A 296, 217.
Chan, K.L., Nordlund, Å., Steffen, M., and Stein, R.F. 1991, in The Solar Interior and Atmosphere, A.N. Cox, W.C. Livingston, and M. Matthews, eds. (Tucson: U. of Arizona Press), 223-274.
Grevesse, N. and Sauval, A.J. 1999, A&A 347, 348-354.
Kurucz, R.L. 1991, in The Solar Interior and Atmosphere, A.N. Cox, W.C. Livingston, and M. Matthews, eds. (Tucson: U. of Arizona Press), 663-669.
Kurucz, R.L. 1993, Physica Scripta T47, 110-117.
Kurucz, R.L. 1995a, in Laboratory and Astronomical High Resolution Spectra, ASP Conf. Series 81, A.J. Sauval, R. Blomme, and N. Grevesse, eds., 17-31.
Kurucz, R.L. 1995b, ApJ 452, 102-108.
Kurucz, R.L. 1996, in Model Atmospheres and Stellar Spectra, ASP Conf. Series 108, S. Adelman, F. Kupka, and W.W. Weiss, eds., 2-18.
Kurucz, R.L., Furenlid, I., Brault, J., and Testerman, L. 1984, Solar Flux Atlas from 296 to 1300 nm (Sunspot, N.M.: National Solar Observatory).
Litzen, U., Brault, J.W., and Thorne, A.P. 1993, Physica Scripta 47, 628-673.
Nordlund, Å. and Dravins, D. 1990, A&A 228, 155.
Pickering, J.C. 1996, ApJS 107, 811-822.
Pickering, J.C. and Thorne, A.P. 1996, ApJS 107, 761-809.
Topka, K.P. and Title, A.M. 1991, in The Solar Interior and Atmosphere, A.N. Cox, W.C. Livingston, and M. Matthews, eds. (Tucson: U. of Arizona Press), 727-747.
# Photoproduction Constraints on $`J/\psi `$-Nucleon Interactions
Figure 1: Cross sections for $`J/\psi `$-nucleon interactions as obtained from $`J/\psi `$ and open charm photoproduction: $`\sigma _{\mathrm{el}}^{\psi N}(s)`$ (open circles), $`\sigma _{\mathrm{in}}^{\psi N}(s)`$ (triangles), and $`(1+\rho ^2)^{1/2}\sigma _{\mathrm{tot}}^{\psi N}(s)`$ (filled circles). The lines give the results of fits (see text).
K. Redlich<sup>1,2,3</sup>, H. Satz<sup>1</sup> and G. M. Zinovjev<sup>1,4</sup>
1: Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany
2: Gesellschaft für Schwerionenforschung, D-64220 Darmstadt, Germany
3: Institute of Theoretical Physics, University of Wrocław, PL-50204 Wrocław, Poland
4: Bogolyubov Institute of Theoretical Physics, Academy of Sciences,
UA-252143 Kiev, Ukraine
Abstract:
Using $`J/\psi `$ and open charm photoproduction data, we apply the vector meson dominance model to obtain constraints on the energy dependence of the inelastic $`J/\psi `$-nucleon cross section. Predictions of short distance QCD are in accord with these constraints, while recently proposed hadronic models for $`J/\psi `$ dissociation strongly violate them.
The energy dependence of the inelastic $`J/\psi `$-nucleon cross section $`\sigma _{\psi N}^{in}(s)`$ is of great importance in understanding $`J/\psi `$ suppression as a signature for colour deconfinement in high energy nuclear collisions. Calculations based on short distance QCD predict a strong threshold damping of $`\sigma _{\psi N}^{in}(s)`$, due to the suppression of high momentum gluons by the gluon distribution function in nucleons; this damping persists also when finite target mass corrections are taken into account. In contrast to such QCD studies, several recently proposed models based on hadron exchange suggest large threshold values of $`\sigma _{\psi N}^{in}(s)`$. The aim of this note is to show that available $`J/\psi `$ and open charm photoproduction data can do much to clarify the situation.
The existing empirical information on $`J/\psi `$-hadron interactions comes from photoproduction and the vector meson dominance model (VMD), which relates $`e^+e^{-}\to \psi `$, $`\gamma N\to \psi N`$ and $`\psi N`$ data. It is based on the assumption that fluctuations of the photon into quark-antiquark pairs are dominated by the corresponding hadronic resonances. As a result, the diffractive $`J/\psi `$-photoproduction cross section is related to elastic $`\psi N`$ scattering,
$$\sigma (\gamma N\to \psi N)=\left(\frac{4\pi \alpha }{\gamma _\psi ^2}\right)\sigma _{\mathrm{el}}^{\psi N}.$$
(1)
Here $`\gamma _\psi `$ is determined by the $`J/\psi `$ decay into $`e^+e^{-}`$,
$$\mathrm{\Gamma }(\psi \to e^+e^{-})=\frac{\alpha ^2}{3}\left(\frac{4\pi }{\gamma _\psi ^2}\right)M_\psi ,$$
(2)
with $`\mathrm{\Gamma }(\psi \to e^+e^{-})=5.26\pm 0.37`$ keV. Furthermore, the optical theorem leads to
$$\left(\frac{d\sigma (\gamma N\to \psi N)}{dt}\right)_{t=0}=\frac{(1+\rho ^2)}{16\pi }\left(\frac{4\pi \alpha }{\gamma _\psi ^2}\right)(\sigma _{\mathrm{tot}}^{\psi N})^2,$$
(3)
where $`\rho =[\mathrm{Re}M(s)/\mathrm{Im}M(s)]`$ is the ratio of real to imaginary part of the $`\psi N`$ forward scattering amplitude. This vanishes at high energy, so that then Eq. (3) relates the total $`\psi N`$ cross section to forward $`J/\psi `$-photoproduction.
The first experimental measurements of the $`J/\psi `$-photoproduction cross section had already shown it to be very small compared to the corresponding cross sections for conventional vector mesons $`\rho ,\omega `$ and $`\varphi `$. One of the first explanations of this result had invoked the smallness of the Pomeranchuk pole residue for the $`J/\psi `$, i.e., the total cross section of the $`\psi N`$-interaction should be small, and the interaction of $`\pi `$ and $`J/\psi `$ was argued to be quite weak. Moreover, it was concluded there that the $`J/\psi `$ interaction with hadrons should be dominated by charmed particle production.
Today there exist quite good data. For c.m.s. energy $`\sqrt{s}\simeq 20`$ GeV (corresponding to a photon energy of about 200 GeV), the forward photoproduction cross section is about $`100\ \mathrm{nb}/\mathrm{GeV}^2`$. Assuming that here $`\rho \simeq 0`$, and using the quoted value for $`\mathrm{\Gamma }(\psi \to e^+e^{-})`$, we get $`\sigma _{\mathrm{tot}}^{\psi N}\simeq 1.7`$ mb. Geometric arguments, which also assume $`\rho =0`$, predict $`\sigma _{\mathrm{tot}}^{\psi N}/\sigma _{\mathrm{tot}}^{NN}\simeq (r_\psi /r_N)^2`$. With $`r_\psi \simeq 0.2`$ fm, $`r_N\simeq 0.85`$ fm and $`\sigma _{\mathrm{tot}}(NN)\simeq 40`$ mb, this gives $`\sigma _{\mathrm{tot}}^{\psi N}\simeq 2.2`$ mb. Thus both VMD and geometric considerations lead to a total high energy $`\psi N`$ cross section around 2 mb.
At $`\sqrt{s}\simeq 20`$ GeV, $`\sigma (\gamma N\to \psi N)\simeq 17.5`$ nb; using Eq. (1), we obtain
$$\sigma _{\mathrm{el}}^{\psi N}\simeq 25\ \mu \mathrm{b}$$
(4)
for the elastic $`\psi N`$ cross section at this energy. Hence the high energy ratio of elastic to total $`\psi N`$ cross sections is with
$$\frac{\sigma _{\mathrm{el}}^{\psi N}}{\sigma _{\mathrm{tot}}^{\psi N}}\simeq \frac{1}{70}$$
(5)
very much smaller than that for the interaction of light hadrons; the corresponding $`\pi N`$ ratio is an order of magnitude larger. At high energy, the total $`\psi N`$ cross section is thus strongly dominated by inelastic channels; for the $`J/\psi `$, it is apparently much more difficult to survive high energy interactions than it is for hadrons consisting of light quarks, so that most of $`\sigma _{\mathrm{tot}}^{\psi N}`$ consists of open charm production. This is in accord with the Okubo-Zweig-Iizuka (OZI) rules, which forbid the $`J/\psi `$ as a $`c\overline{c}`$ bound state to annihilate into ordinary light hadrons and hence lead to charmed meson production. Such behaviour is also a natural consequence of partonic interactions, rather than black disc absorption.
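The numbers just quoted follow directly from Eqs. (1)-(3) and are easy to cross-check; a short numerical sketch (inputs as in the text, with the standard conversion $`1\ \mathrm{GeV}^{-2}=0.3894`$ mb):

```python
# Cross-check of the VMD numbers quoted above (hbar = c = 1 units).
import math

GEV2_TO_MB = 0.3894                      # 1 GeV^-2 in mb
alpha = 1.0 / 137.036
M_psi, Gamma_ee = 3.097, 5.26e-6         # GeV

coupling = 3.0 * Gamma_ee / (alpha * M_psi)   # 4*pi*alpha/gamma_psi^2, Eq. (2)
dsdt0 = 100e-6 / GEV2_TO_MB                   # 100 nb/GeV^2 in GeV^-4

sigma_tot = math.sqrt(16.0 * math.pi * dsdt0 / coupling)  # Eq. (3), rho = 0
sigma_el = (17.5e-6 / GEV2_TO_MB) / coupling              # Eq. (1)

print(sigma_tot * GEV2_TO_MB)        # ~1.7 mb
print(sigma_el * GEV2_TO_MB * 1e3)   # ~25 microbarn
print(sigma_el / sigma_tot)          # ~0.015, i.e., roughly 1/70
```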
Since Eq. (3) determines the total cross section only modulo $`(1+\rho ^2)^{1/2}`$, additional information is needed to determine $`\sigma _{\mathrm{in}}^{\psi N}(s)`$. This is provided by the photoproduction of open charm, which we denote by $`\sigma (\gamma N\to c\overline{c})`$; it is empirically obtained by measuring $`D`$ and $`D^{*}`$ production. From VMD, we expect
$$\sigma (\gamma N\to c\overline{c})\simeq \left(\frac{4\pi \alpha }{\gamma _\psi ^2}\right)\sigma _{\mathrm{in}}^{\psi N}.$$
(6)
Before applying this relation, the role of other vector mesons must be clarified. Intermediate light quark states, such as $`\rho `$ or $`\omega `$, could also produce open charm. Data on the cross section for open charm hadroproduction, in accord with perturbative calculations, give some 10 - 20 $`\mu `$b at $`\sqrt{s}\simeq 20`$ GeV. This is to be compared to $`\sigma _{\mathrm{tot}}^{\psi N}\simeq 2`$ mb at the corresponding energy, keeping in mind the ratio of the photon couplings $`\gamma _\rho ^{-2}/\gamma _\psi ^{-2}\simeq 5.18`$. Light vector mesons therefore contribute to open charm photoproduction at most on a 5 % level.
Further contributions could come from higher $`c\overline{c}`$ resonances, such as the $`\psi ^{\prime }`$. These are in fact also negligible, but for a different reason. VMD implicitly assumes that the fluctuations of a real photon into a $`q\overline{q}`$ pair are comparable in size to the relevant vector mesons. For light quarks and light mesons, this is the case, since both are of typical hadronic scale. For $`\gamma \to c\overline{c}`$, the scale is very much smaller, but it is also correspondingly smaller for the $`J/\psi `$, with both around 0.1 - 0.2 fm; hence VMD still makes sense. The higher $`c\overline{c}`$ vector mesons are much larger than the $`c\overline{c}`$ fluctuation, however, and so for them VMD ‘fails’. This can be checked by considering the ratio of ‘elastic’ $`J/\psi `$ to $`\psi ^{\prime }`$ photoproduction. From VMD and the optical theorem, one expects
$$\frac{\sigma (\gamma N\to \psi ^{\prime }N)}{\sigma (\gamma N\to \psi N)}=\left(\frac{M_\psi }{M_{\psi ^{\prime }}}\right)\left(\frac{\mathrm{\Gamma }(\psi ^{\prime }\to e^+e^{-})}{\mathrm{\Gamma }(\psi \to e^+e^{-})}\right)\left(\frac{\sigma _{\mathrm{tot}}^{\psi ^{\prime }N}}{\sigma _{\mathrm{tot}}^{\psi N}}\right)^2.$$
(7)
Geometric arguments suggest $`\sigma _{\mathrm{tot}}^{\psi ^{\prime }N}/\sigma _{\mathrm{tot}}^{\psi N}\simeq 4`$, since the radius of the $`2S`$ state is more than twice that of the $`1S`$. Inserting the corresponding masses and decay widths, the ratio $`\sigma (\gamma N\to \psi ^{\prime }N)/\sigma (\gamma N\to \psi N)`$ is predicted to be 5.5. Photoproduction data, in contrast, give a ratio of $`0.15\pm 0.03`$, more than a factor 30 smaller. Evidently the $`\psi ^{\prime }`$ can therefore also be neglected as an intermediate state in open charm photoproduction.<sup>1</sup>
<sup>1</sup>In $`e^+e^{-}`$ collisions, the $`\psi ^{\prime }`$ continues to appear in VMD strength, so that its decoupling in photoproduction can also be considered as an effect of the extrapolation from highly virtual to real photons.
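This prediction is simple arithmetic; in the sketch below the masses are the standard values, while $`\mathrm{\Gamma }(\psi ^{\prime }\to e^+e^{-})\approx 2.1`$ keV is an assumed input, since it is not quoted in the text:

```python
# Worked check of the psi'/psi photoproduction ratio of Eq. (7).
M_psi, M_psip = 3.097, 3.686        # GeV
Gee_psi, Gee_psip = 5.26, 2.1       # keV; Gee_psip is an assumed value
sigma_ratio = 4.0                   # geometric sigma_tot(psi'N)/sigma_tot(psiN)

ratio = (M_psi / M_psip) * (Gee_psip / Gee_psi) * sigma_ratio**2
print(ratio)  # ~5.4, the "5.5" of the text, vs the measured 0.15 +/- 0.03
```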
As a final consistency check, we can see if the $`\sigma _{\mathrm{in}}^{\psi N}`$ determined by Eq. (6) from open charm photoproduction indeed converges at high energies to the $`\sigma _{\mathrm{tot}}^{\psi N}`$ obtained from forward $`J/\psi `$ photoproduction by Eq. (3). It will be found shortly that this is indeed the case.
We thus use the data for open charm photoproduction and Eq. (6) to determine the energy dependence of $`\sigma _{\mathrm{in}}^{\psi N}(s)`$, while $`J/\psi `$ photoproduction and Eq. (1) gives that of $`\sigma _{\mathrm{el}}^{\psi N}(s)`$. The results are shown in Fig. 1, together with the data for $`(1+\rho ^2)^{1/2}\sigma _{\mathrm{tot}}^{\psi N}(s)`$ as obtained from forward $`J/\psi `$ photoproduction through VMD and the optical theorem (Eq. (3)). We note that at high energy, where we expect $`\rho 0`$, $`\sigma _{\mathrm{in}}^{\psi N}(s)`$ indeed approaches $`\sigma _{\mathrm{tot}}^{\psi N}(s)`$, so that the consistency check just mentioned is satisfied. The curves shown in Fig. 1 are $`\chi ^2`$ fits to the corresponding data, based on the functional form
$$\sigma _x^{\psi N}(s)=A_x\left\{1-\left(\frac{s_0^x}{s}\right)^{1/2}\right\}^{k_x},$$
(8)
where $`x`$ refers to elastic and inelastic, respectively, and $`s_0^x`$ denotes the corresponding threshold energy in each case. The parameters obtained are given in Table 1.
Dividing the data for $`(1+\rho ^2)^{1/2}\sigma _{\mathrm{tot}}^{\psi N}(s)`$ by the fitted forms $`\sigma _{\mathrm{in}}^{\psi N}(s)+\sigma _{\mathrm{el}}^{\psi N}(s)`$, we obtain the energy dependence of the ratio of real to imaginary parts of the $`\psi N`$ scattering amplitude. This is shown in Fig. 2, together with a polynomial fit. We see that the conditions for the application of geometric considerations are indeed quite well satisfied for $`\sqrt{s}>15`$ GeV, while for $`\sqrt{s}<15`$ GeV there are significant deviations. Combining the fits of $`\sigma _{\mathrm{in}}^{\psi N}(s)`$, $`\sigma _{\mathrm{el}}^{\psi N}(s)`$ and $`(1+\rho ^2)^{1/2}`$, we obtain a fit to $`(1+\rho ^2)^{1/2}\sigma _{\mathrm{tot}}^{\psi N}(s)`$ (included in Fig. 1) which is compatible with the form of Eq. (8) and the parameters given in Table 1.
| $`\sigma _x`$ | $`A_x`$ [mb] | $`k_x`$ | $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}.`$ |
| --- | --- | --- | --- |
| $`\sigma _{\mathrm{in}}`$ | $`1.90\pm 0.35`$ | $`1.93\pm 0.4`$ | 0.29 |
| $`\sigma _{\mathrm{el}}`$ | $`0.039\pm 0.0014`$ | $`0.284\pm 0.051`$ | 1.7 |
| $`\sqrt{1+\rho ^2}\sigma _{\mathrm{tot}}`$ | $`1.90\pm 0.35`$ | $`0.66\pm 0.03`$ | 3.0 |
Table 1: Fit parameters for $`J/\psi N`$ cross sections
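Eq. (8) with the Table 1 parameters is straightforward to evaluate. In the sketch below the inelastic threshold is taken as $`\sqrt{s_0^{\mathrm{in}}}=m+2M_D`$, an assumption for illustration only, since the $`s_0^x`$ values are not listed in the table:

```python
# Evaluate the fit form of Eq. (8) for the inelastic cross section (in mb).
m_N, M_D = 0.938, 1.87                 # GeV
SQRT_S0_IN = m_N + 2.0 * M_D           # ~4.68 GeV, assumed threshold

def sigma_fit(sqrt_s, A=1.90, k=1.93, sqrt_s0=SQRT_S0_IN):
    """sigma(s) = A*(1 - (s0/s)**0.5)**k; note (s0/s)**0.5 = sqrt_s0/sqrt_s."""
    if sqrt_s <= sqrt_s0:
        return 0.0
    return A * (1.0 - sqrt_s0 / sqrt_s)**k

for rs in (5.0, 7.0, 10.0, 20.0):
    print(rs, round(sigma_fit(rs), 3))   # slow rise toward A = 1.90 mb
```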
The quantity of particular interest for $`J/\psi `$ suppression in nuclear collisions is $`\sigma _{\mathrm{in}}^{\psi N}(s)`$; its energy dependence as obtained from photoproduction is shown in more detail in Fig. 3. Since we have not discussed the threshold behaviour of light quark contributions to Eq. (6), the curve of Fig. 3 represents in principle only an upper bound. However, $`pp`$ data as well as perturbative studies show a strong threshold suppression also for open charm hadroproduction , so that $`\sigma _{\mathrm{in}}^{\psi N}(s)`$ may well coincide with this upper bound.
Our considerations are based on vector meson dominance, which assumes that in $`J/\psi `$ photoproduction, a $`c\overline{c}`$ fluctuation of a photon of momentum $`P`$ is brought on-shell by interaction with the nucleon, forming a $`J/\psi `$ of momentum $`Q`$. For the validity of such a picture, the longitudinal coherence length $`z_L`$ of the fluctuation cannot be much smaller than the size $`r_N`$ of the nucleon. Hence for
$$z_L\simeq \frac{1}{P_L-Q_L}=\frac{1}{P_L-\sqrt{P_L^2-M_\psi ^2}}\ll r_N,$$
(9)
vector meson dominance could break down; we should therefore limit our results to $`\sqrt{s}>5`$ GeV in the following discussion. Note that essentially the entire range shown in Fig. 3 falls into the region of VMD validity.
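The origin of the $`\sqrt{s}>5`$ GeV limit can be checked numerically; a small sketch of Eq. (9) for a proton target at rest, with $`\hbar c=0.197`$ GeV fm:

```python
# Coherence length z_L of Eq. (9); P_L is the photon lab momentum,
# obtained from s = m_N^2 + 2 m_N P_L for a nucleon at rest.
import math

HBARC, M_PSI, M_N = 0.197, 3.097, 0.938   # GeV fm, GeV, GeV

def z_L_fm(sqrt_s):
    P = (sqrt_s**2 - M_N**2) / (2.0 * M_N)
    return HBARC / (P - math.sqrt(P**2 - M_PSI**2))

for rs in (5.0, 10.0, 20.0):
    print(rs, round(z_L_fm(rs), 2))
# ~0.5 fm at sqrt(s) = 5 GeV, growing with energy; z_L becomes comparable
# to r_N ~ 0.85 fm around sqrt(s) ~ 5-6 GeV, motivating the cut above.
```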
Any model for $`J/\psi `$-hadron interactions, whether based on short distance QCD or on hadron exchange, must satisfy the bound given in Figs. 1 and 3. With this in mind, we now turn to the theoretical approaches to inelastic $`\psi N`$ interactions mentioned above.
- Short distance QCD: The heavy quark constituents and the large binding energy of the $`J/\psi `$ had stimulated short distance QCD calculations quite some time ago; these were subsequently elaborated. They are based on the gluon-dissociation of the $`J/\psi `$ (the QCD photo-effect), convoluted with the gluon distribution function in the nucleon as determined in deep inelastic scattering (see Fig. 4a). The produced final state contains a $`D\overline{D}`$ pair and a nucleon, and the resulting form is
$$\sigma _{\mathrm{in}}^{\psi N}(s)\simeq \sigma _{\mathrm{in}}^{\psi N}(\mathrm{\infty })\left\{1-\frac{(2M_D+m)^2-M_\psi ^2-m^2}{s-M_\psi ^2-m^2}\right\}^{6.5}$$
(10)
where $`\sigma _{\mathrm{in}}^{\psi N}(\mathrm{})`$ denotes the high enery geometric cross section and $`m`$ the nucleon mass. Eq. (10) shows a very strong damping in the threshold region. The power 6.5 of the damping factor is obtained from scaling gluon distribution functions; more realistic distributions will lead to a further damping at low and an increase at high $`\sqrt{s}`$ .
- Charm exchange: The interaction of a $`J/\psi `$ with a meson or nucleon is here considered to take place through open charm exchange. Such a mechanism has been considered for $`J/\psi `$-meson and $`J/\psi `$-nucleon interactions; for the latter it leads to a $`\mathrm{\Lambda }_c`$ and a $`\overline{D}`$ (see Fig. 4b), for the former to a $`D\overline{D}`$ final state (Fig. 4c). In the threshold region, the cross sections for meson ($`m`$) and nucleon ($`N`$) projectiles are of comparable size, as expected from the fact that the ratio of the couplings
$$g_{DN\mathrm{\Lambda }_c}^2/g_{mD\overline{D}}^2$$
(11)
is of order unity. In one of these studies, no explicit results are given for the $`J/\psi `$-nucleon cross section. The values obtained there for $`J/\psi `$-meson interactions are quite similar, however, to those of another, in which the $`J/\psi `$-nucleon interaction is calculated as well. We shall therefore use this form for our actual comparison.
The short distance QCD form Eq. (10) for inelastic $`J/\psi `$-nucleon interactions, with $`\sigma _{\mathrm{in}}^{\psi N}(\mathrm{\infty })=1.9`$ mb, is seen in Fig. 5 to agree quite well with the constraint from open charm photoproduction. We recall moreover that the use of more realistic parton distribution functions would further improve the agreement. In contrast, the charm exchange cross section is found to overshoot the data by more than a factor two over the entire threshold region; the data point at $`\sqrt{s}=6`$ GeV is an order of magnitude lower than the predicted value. Moreover, the predicted functional form differs from that of the data. The form shown in Fig. 5 is obtained by smoothly extrapolating the results given there for $`\sqrt{s}\lesssim 6`$ GeV to the same geometric cross section $`\sigma _{\mathrm{in}}^{\psi N}(\mathrm{\infty })`$ as for the short distance QCD result.
We therefore conclude that the threshold enhancement obtained in hadron exchange models for inelastic $`J/\psi `$-hadron interactions is not compatible with $`J/\psi `$ and open charm photoproduction data. This excludes such mechanisms as a possible source of any ‘anomalous’ $`J/\psi `$ suppression observed in $`PbPb`$ collisions at the CERN-SPS . Nevertheless, it would be interesting to compare the inelastic $`J/\psi `$-nucleon cross section obtained from photoproduction to possible direct measurements, using either an inverse kinematics or an $`\overline{p}A`$ annihilation experiment.
In closing, we note that in addition to the models considered here, quark interchange or rearrangement has been discussed as a possible mechanism for inelastic $`J/\psi `$-hadron interactions . This leads to cross sections which are still much larger very close to threshold, a kinematic region in which VDM is not really reliable. Nevertheless, the extremely large dissociation cross sections of these models correspond to a large imaginary part of the $`J/\psi `$-hadron scattering amplitude. Dispersion relations relate its value near threshold to the real part of the amplitude over a large range of energies. This is expected to result in an elastic cross section which strongly violates the bounds shown in Fig. 1, so that photoproduction results will very likely prove to be incompatible with such an approach as well .
Acknowledgements
It is a pleasure to thank D. Schildknecht and G. Schuler for helpful comments and discussions. One of us (K.R.) also acknowledges the partial support of the Gesellschaft für Schwerionenforschung (GSI) and of the Committee for Research Development (KBN).
# 1 Introduction
## 1 Introduction
Many observations have established that our universe emerged from a big bang<sup></sup>, i.e., from a state of high temperature and density. Today the universe is very large and old, highly isotropic and homogeneous on large scales, and almost flat. To account for all of these features, the universe must have undergone a much more violent expansion in its very early period, i.e., inflation<sup></sup>, with a sufficiently large number of inflationary e-folds and small background fluctuations. One may then ask: what happened before inflation? Why did the universe inflate? Did the universe have a birth at all, and how was it born? These are fascinating questions. Partial answers have been proposed: the universe may have been born by a quantum process<sup></sup>, it may undergo eternal chaotic inflation<sup></sup>, or it may have been born from an instanton<sup></sup>. However, the present theories still face difficulties, and many details remain unclear.
We all hope that a theory will be as simple as possible. However, for the most typical potential in chaotic inflation theory, such as $`m^2\varphi ^2/2`$, a difficulty arises, as the following analysis shows. There is a famous formula in quantum cosmology for the probability of the quantum birth of the universe, $`\rho \mathrm{exp}[-24\pi ^2M_p^4V^{-1}(\varphi )]`$<sup></sup>, where $`8\pi G=M_p^{-2}=\kappa ^2`$, and $`M_p=2.4\times 10^{18}`$ GeV is the Planck mass. If this formula is taken seriously, the largest probability occurs where the field value $`\varphi `$, and hence the potential, is largest, so $`\varphi `$ may tend to infinity. However, if $`\varphi `$ exceeds a critical value, the universe will undergo an excessive inflation and never roll down. Either we must choose a special potential, or we need a cut-off or suppression of the probability in the large field limit. We call this the ``large field difficulty''.
In this paper we find that a potential well effect, which appears naturally in quantum cosmology, can overcome this difficulty. Our starting point is still the famous Wheeler-DeWitt equation<sup></sup> for the wave function of the universe in terms of the cosmic scale factor $`a`$ and a scalar field $`\varphi `$. In contrast to former investigations, we study the quantum behavior of both $`a`$ and $`\varphi `$, i.e., a tunneling effect for $`a`$ and a potential well effect for $`\varphi `$ (the latter being new here). It is precisely this well effect that suppresses the probability in the large field limit. The two combined effects of the tunneling and the well determine a most probable initial state for the primary universe, whose initial values may be suitable for developing a universe like ours through the startup of its inflation. If this initial field is not too large, the universe may avoid eternal chaotic inflation, and the probability therefore makes sense.
In order to illustrate our idea and results, we first distinguish three inflationary states in section 2. We then review the tunneling effect and exhibit the large field difficulty in section 3. We study the well effect and introduce an important parameter $`q`$ describing the tunneling character in section 4. The startup, persistence and ending of inflation are studied in section 5. We discuss our results further in section 6. Finally, we point out the progress and shortcomings of our method in the last section.
## 2 Gentle, chaotic and excessive inflations
Among the various scenarios describing the birth and evolution of the universe, eternal chaotic inflation is an important one. The necessary condition for eternal chaotic inflation is that the initial field value $`\varphi _0`$ of the inflaton be near a critical value $`\varphi _c`$ (the index $`c`$ means critical). This critical value is determined by the condition that the average quantum fluctuation $`\mathrm{\Delta }\varphi _{qu}`$ of the field in a Hubble time is comparable to its classical rolling-down value $`\mathrm{\Delta }\varphi _{cl}`$. For the potential $`m^2\varphi ^2/2`$, which is regarded as typical for chaotic inflation and is the main case studied in this paper, this critical value is<sup></sup>
$$\varphi _c=2\cdot 6^{1/4}\pi ^{1/2}\kappa ^{-3/2}m^{-1/2},$$
(1)
which depends strongly on the mass parameter.
Let us first make a rough estimate of the mass. The constraint on the mass comes from the small fluctuation, about $`10^{-5}`$, of the Cosmic Microwave Background Radiation (CMBR)<sup></sup>,
$$\delta ^2=150^{-1}\pi ^{-2}ϵ_e^{-1}\kappa ^4V_e=10^{-10},$$
where $`ϵ_e\simeq 1`$ (the index $`e`$ means ending of inflation) is the slow-roll parameter $`ϵ=\kappa ^{-2}V^{\prime 2}V^{-2}/2`$ evaluated at the ending point of inflation. From this we obtain the mass parameter $`m\simeq 4\times 10^{-4}\kappa ^{-1}\simeq 10^{15}`$ GeV or slightly smaller, which is just about the grand unification energy scale. Then the critical field value of chaotic inflation is $`\varphi _c=280\kappa ^{-1}`$ and the corresponding potential is $`V_c=6\times 10^{-3}\kappa ^{-4}`$. We expect that classical gravity should be applicable up to $`V_q\simeq \kappa ^{-4}`$, i.e., $`\varphi _q\simeq 3500\kappa ^{-1}`$ (the index $`q`$ means quantum).
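These numbers can be checked with a few lines of arithmetic (a sketch in units $`\kappa =1`$; the conversion to GeV uses $`M_p=2.4\times 10^{18}`$ GeV):

```python
# Quick numerical check of the scales quoted above (kappa = 1 units:
# fields in kappa^-1, potentials in kappa^-4).
import numpy as np

delta2 = 1.0e-10                       # CMBR fluctuation squared
m = np.sqrt(150 * np.pi**2 * delta2)   # from delta^2 = kappa^4 V_e/(150 pi^2 eps_e),
                                       # with eps_e = 1 and V_e = m^2 phi_e^2/2, phi_e = sqrt(2)
phi_c = 2 * 6**0.25 * np.pi**0.5 * m**-0.5   # Eq. (1)
V_c   = 0.5 * m**2 * phi_c**2
phi_q = np.sqrt(2.0) / m               # field where V reaches kappa^-4

print(f"m     = {m:.2e} kappa^-1  (~{m*2.4e18:.1e} GeV)")
print(f"phi_c = {phi_c:.0f} kappa^-1,  V_c = {V_c:.1e} kappa^-4")
print(f"phi_q = {phi_q:.0f} kappa^-1")
```

This reproduces $`m4\times 10^{-4}\kappa ^{-1}`$, $`\varphi _c280\kappa ^{-1}`$ and $`V_c6\times 10^{-3}\kappa ^{-4}`$ to the quoted accuracy.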
Eternal chaotic inflation is not the only possible case. When $`\varphi _0\varphi _c`$, eternal chaotic inflation obviously cannot happen because the quantum fluctuations are too small, but a normal inflation occurs if $`\varphi _0\varphi _e`$, where $`\varphi _e`$ is the field value at the ending of inflation, determined by the end of slow rolling, $`ϵ_e\simeq 1`$, which gives $`\varphi _e=\sqrt{2}\kappa ^{-1}`$ for the potential $`V=m^2\varphi ^2/2`$. We call this ``gentle'' or ``classical'' inflation. When $`\varphi _0\varphi _c`$, the tiny classical rolling down is submerged in the huge quantum fluctuations of $`\varphi `$; the universe then undergoes eternal chaotic inflation but can never roll down anywhere, and no slowly rolling universe is produced in any fraction of it. The field value may even climb higher and higher. We call this case ``excessive'' or ``quantum'' inflation. Of course, only the case $`\varphi _0\varphi _c`$ (whose range may be rather wide) is called the standard ``chaotic'' or ``critical'' inflation. Therefore a chaotic inflation that can produce an observed universe is conditional.
## 3 The tunneling effect of scale factor
The birth of the universe is a complicated problem. If we simplify it by freezing out all degrees of freedom except the cosmic scale factor $`a`$ and a scalar field $`\varphi `$, we obtain the famous Wheeler-DeWitt equation<sup></sup>
$$\frac{\partial ^2\psi }{\partial a^2}-\frac{6}{\kappa ^2a^2}\frac{\partial ^2\psi }{\partial \varphi ^2}-\frac{144\pi ^4}{\kappa ^4}\left(K_ca^2-\frac{\kappa ^2}{3}a^4V(\varphi )\right)\psi =0,$$
(2)
where $`\psi `$ is the wave function of the universe and $`V(\varphi )`$ is the potential of the scalar field. $`K_c`$ is the sign of the curvature term. This is an intricate equation, and it is hard to interpret the meaning of the wave function. In order to obtain a meaningful result a simplifying method is needed. We adopt a new, so-called ``twice loose shoe'' method<sup></sup>, i.e., we fix one variable and let the other vary, in turn. Fixing the scalar field $`\varphi `$ in the first step, i.e., $`\partial \psi /\partial \varphi =0`$, one obtains from Eq. (2) the equation for $`a`$
$$\frac{\partial ^2\psi }{\partial a^2}=\frac{144\pi ^4}{\kappa ^4}\left(K_ca^2-\frac{\kappa ^2}{3}a^4V(\varphi )\right)\psi .$$
(3)
This has the form of the standard problem of non-relativistic quantum mechanics,
$$H\psi =E\psi ,\qquad H=-\frac{\mathrm{\hbar }^2}{2m}\frac{\partial ^2}{\partial x^2}+U(x),\qquad \frac{\partial ^2\psi }{\partial x^2}=\frac{2m}{\mathrm{\hbar }^2}\left(U(x)-E\right)\psi .$$
We see that Eq. (3) describes a tunneling problem, whose potential and total energy are
$$U(a)=\frac{144\pi ^4}{\kappa ^4}\left(K_ca^2-\frac{\kappa ^2}{3}a^4V(\varphi )\right),\qquad E_a=0.$$
We note that only for $`K_c=1`$ does the potential form a barrier with a quantum tunneling effect; in this case the created universe is a closed one. On the other hand, it is very hard to imagine that a flat or open universe, with truly infinite volume, could be created instantaneously from nothing by any process. We therefore suppose that only a finite closed universe can be produced by the quantum tunneling effect. $`U(a)`$ is a potential barrier through which a ``particle'' tunnels, and the tunneling probability is a standard result<sup></sup>
$$\rho _a=c_1\mathrm{exp}\left(-\frac{3\pi }{G\mathrm{\Lambda }}\right)=c_1\mathrm{exp}\left(-\frac{24\pi ^2}{\kappa ^4V(\varphi )}\right).$$
(4)
Note that it is independent of $`a`$. We do not debate the ``sign problem'' here; we simply regard this sign as the more reasonable one.
A serious problem is the so-called ``large field difficulty'' of this probability formula: if the potential is monotonically rising and unbounded, such as $`m^2\varphi ^2/2`$, then the larger $`V`$ is, the higher the probability. The initial field value $`\varphi _0`$ may then tend towards infinity (or at least up to the value $`\varphi _q`$ mentioned above), and the primary universe would stay in the state of excessive inflation. The key point is that the most probable $`\varphi `$ value is not adjustable: it always takes the largest value possible. In order to obtain a suitable eternal chaotic inflation we would have to choose a special potential (an undulating one?), or cut off the applicable range of the tunneling probability formula for unbounded potentials. Otherwise we cannot avoid the large field difficulty for this simple potential.
## 4 The potential well effect of the scalar field
After the tunneling, $`a`$ makes a transition from $`a=0`$ to a non-zero value $`a_0`$ determined by the turning-point equation $`U(a_0)=0`$,
$$\kappa ^2a_0^2V(\varphi _0)=3,$$
(5)
i.e., if we know $`\varphi _0`$ of the primary universe, then we can calculate its $`a_0`$.
When we use the loose shoe method in the second step to fix the cosmic scale factor $`a`$, i.e., $`\partial \psi /\partial a=0`$, we obtain from Eq. (2) the equation for $`\varphi `$
$$\frac{\partial ^2\psi }{\partial \varphi ^2}=8\pi ^4a^6\left(V(\varphi )-\frac{3}{a^2\kappa ^2}\right)\psi .$$
(6)
Comparison with the standard equation of quantum mechanics shows that this is clearly a deep well problem, with potential and total energy
$$U(\varphi )=8\pi ^4a^6V(\varphi ),\qquad E_\varphi =\frac{24\pi ^4a^4}{\kappa ^2}.$$
Remember that $`a`$ is fixed here; it therefore does not obey an equation like $`\kappa ^2a^2V(\varphi )=3`$ analogous to Eq. (5), otherwise Eq. (6) would be trivial. To what value should $`a`$ be fixed? Let us analyze this. Before tunneling nothing exists, $`a=0`$, so $`U(\varphi )`$ and $`E_\varphi `$ both vanish and there is no potential well for $`\varphi `$. After tunneling, the cosmic scale factor suddenly becomes $`a_0`$, at which moment the well is steepest. In the actual process $`a`$ undergoes tunneling; we may imagine that $`a`$ changes from $`0`$ to $`a_0`$ during the fictitious tunneling process, so it is reasonable to suppose that $`a`$ takes an intermediate value in the classically forbidden range between $`0`$ and $`a_0`$, i.e., $`a=qa_0`$. An important parameter $`q`$ is thus introduced, a numerical factor less than $`1`$.
When the potential is quadratic, i.e., the mass term $`V=\frac{1}{2}m^2\varphi ^2`$, this is a harmonic oscillator problem and Eq. (6) becomes
$$\frac{\partial ^2\psi }{\partial \varphi ^2}=\left(4\pi ^4q^6a_0^6m^2\varphi ^2-\frac{24\pi ^4}{\kappa ^2}q^4a_0^4\right)\psi .$$
(7)
We can make the variable transformation $`\varphi =sy`$ to obtain the standard form
$$\frac{\partial ^2\psi }{\partial y^2}=(y^2-\lambda )\psi ,\qquad s^{-2}=2\pi ^2q^3a_0^3m,\qquad \lambda =2n+1=\frac{12\pi ^2qa_0}{\kappa ^2m}.$$
(8)
Eq. (7) has the solution
$$\psi _\varphi =c_2\mathrm{exp}(-\pi ^2q^3a_0^3m\varphi ^2)w_n(\varphi ),$$
(9)
where $`w_n(\varphi )`$ is a polynomial whose leading term is $`\varphi ^n`$. Only this highest term matters for our later analysis, since we are interested in the case of large field values $`\varphi `$ (or $`y`$) and large power $`n`$.
Let us now consider both effects, from the tunneling and from the well. Barring unexpected complications, one expects the combined probability to be
$$\rho =\psi _\varphi ^2\rho _a=c_1c_2^2\mathrm{exp}\left(2n\mathrm{ln}(\kappa \varphi )-2\pi ^2q^3a_0^3m\varphi ^2-\frac{48\pi ^2}{\kappa ^4m^2\varphi ^2}\right)\equiv c_1c_2^2\mathrm{exp}(F(\varphi )).$$
(10)
We see that it is precisely the well effect that powerfully suppresses this probability at large $`\varphi `$, which is just what is needed to solve the large field difficulty. The largest probability occurs at the point $`F^{\prime }(\varphi _0)=0`$, i.e.,
$$\frac{2n}{\varphi _0}-4\pi ^2q^3a_0^3m\varphi _0+\frac{96\pi ^2}{\kappa ^4m^2\varphi _0^3}=0.$$
(11)
We now substitute $`n=6\pi ^2q\kappa ^{-2}m^{-1}a_0`$ and $`a_0=\sqrt{6}\kappa ^{-1}m^{-1}\varphi _0^{-1}`$, according to Eq. (8) and Eq. (5) respectively, and finally obtain
$$\varphi _0=4\sqrt{2/3}\,q^{-1}(2q^2-1)^{-1}\kappa ^{-1},$$
(12)
i.e., the most probable inflaton field value when the universe is born in a quantum manner. We see that if the parameter $`q`$ is close to $`2^{-1/2}`$, then $`\varphi _0`$ approaches infinity. Of course this cannot be taken too seriously, given the approximations of our method. How to solve and truly understand Eq. (2) remains a serious challenge!
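Eq. (12) is easy to tabulate; a minimal numerical sketch ($`\kappa =1`$; the e-fold column anticipates Eq. (14) below, with $`\varphi _b\varphi _0`$):

```python
# Sketch: most probable initial field phi_0(q) from Eq. (12), kappa = 1,
# and the corresponding rough e-fold estimate N ~ (phi_0^2 - phi_e^2)/4.
import numpy as np

def phi_0(q):
    return 4 * np.sqrt(2.0/3.0) / (q * (2*q**2 - 1))

phi_e = np.sqrt(2.0)
for q in (0.72, 0.8, 0.9, 1.0):
    p = phi_0(q)
    N = (p**2 - phi_e**2) / 4
    print(f"q = {q:4.2f}:  phi_0 = {p:7.2f} kappa^-1,  N ~ {N:6.1f}")
# As q -> 2**-0.5 ~ 0.707 from above, phi_0 (and N) diverges.
```

This reproduces the values used below: $`\varphi _015\kappa ^{-1}`$ for $`q=0.8`$ and $`\varphi _03.3\kappa ^{-1}`$, $`N2`$ for $`q=1`$.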
## 5 Startup, persistent and end of inflation
The transition of the universe from quantum to classical behavior takes some time. For simplicity we assume that after the universe is born in a quantum manner, it soon enters classical evolution. However, the universe does not enter the inflationary phase immediately, owing to a huge curvature term. Its equations of motion are<sup></sup>
$$3\kappa ^{-2}\left(\frac{\dot{a}}{a}\right)^2=\frac{1}{2}\dot{\varphi }^2+\frac{1}{2}m^2\varphi ^2-\frac{3\kappa ^{-2}}{a^2},\qquad \ddot{\varphi }+3\frac{\dot{a}}{a}\dot{\varphi }+m^2\varphi =0,$$
(13)
with the initial conditions $`\varphi _0`$ of Eq. (12), $`\dot{\varphi }_0=0`$ and $`a_0=\sqrt{6}\kappa ^{-1}m^{-1}\varphi _0^{-1}`$ at the starting time $`t=0`$. At this time the Hubble constant vanishes, $`H|_{t=0}=0`$, and the density ratio $`\mathrm{\Omega }|_{t=0}\mathrm{}`$. Let us first examine how the universe begins to inflate. The startup of inflation in this case is quite similar to the pure cosmological constant case $`\mathrm{\Lambda }=\kappa ^2V_\mathrm{\Lambda }=3a_0^{-2}`$. In the latter case $`H=\dot{a}/a=a_0^{-1}\mathrm{tanh}(a_0^{-1}t)`$, but this expression is only applicable during the first few units of $`a_0`$ in time. After this time, i.e., at about $`t\simeq 3a_0`$, the Hubble constant has reached its highest value $`H_b`$ (the index $`b`$ means beginning of inflation); then $`H`$ begins to decrease and the universe enters the normal slow rolling and persistent inflation. This startup period of about $`t_b\simeq 3a_0`$ is important for smoothing the whole primary universe and may affect the finally observed universe if it undergoes only a gentle inflation. Since $`t_b`$ is small, the difference between the original inflaton value $`\varphi _0`$ and the value $`\varphi _b`$, at which $`H\simeq H_b`$ and the real inflation begins, is small. We can use the approximation $`\varphi _b\simeq \varphi _0`$ to estimate the number of inflationary e-folds. Our numerical simulations support these elementary analyses. The details of this model are very rich and will be studied in later work.
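A minimal numerical sketch of this startup phase integrates Eq. (13) in the equivalent second-order form $`\ddot{a}/a=-(\kappa ^2/3)(\dot{\varphi }^2-V)`$ ($`\kappa =1`$; the parameter values below are the illustrative $`q=0.8`$ case):

```python
# Sketch: startup of inflation from H = 0, integrating Eq. (13) with kappa = 1.
import numpy as np
from scipy.integrate import solve_ivp

m    = 4.0e-4                      # inflaton mass in kappa^-1 units
phi0 = 15.0                        # initial field for q = 0.8 (Eq. 12)
a0   = np.sqrt(6.0) / (m * phi0)   # initial scale factor (Eq. 5)

def rhs(t, y):
    a, adot, phi, phidot = y
    V = 0.5 * m**2 * phi**2
    addot = -(1.0/3.0) * (phidot**2 - V) * a          # acceleration equation
    phiddot = -3.0 * (adot / a) * phidot - m**2 * phi
    return [adot, addot, phidot, phiddot]

t_end = 10.0 * a0
sol = solve_ivp(rhs, (0.0, t_end), [a0, 0.0, phi0, 0.0],
                t_eval=np.linspace(0.0, t_end, 11), rtol=1e-8)
H = sol.y[1] / sol.y[0]
for t, h in zip(sol.t, H):
    print(f"t/a0 = {t/a0:5.1f}   H*a0 = {h*a0:6.3f}")
# H rises from 0 towards ~1/a0 within a few a0's of time, as in the
# pure cosmological constant analogue H = tanh(t/a0)/a0.
```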
Although the parameter $`q`$ cannot yet be calculated exactly, we see that the most probable initial field value becomes adjustable once both the tunneling and well effects are considered. If $`q`$ is about $`0.71`$–$`0.72`$, then $`\varphi _0`$ is near $`\varphi _c`$, the primary universe evolves into an eternally chaotically inflating universe, and the primary birth probability loses its meaning, as explained by Guth<sup></sup>. If $`q`$ is taken as $`0.8`$, then $`\varphi _0\simeq 15\kappa ^{-1}`$ and $`a_0\simeq 400\kappa `$; the primary universe undergoes only gentle inflation, whose number of e-folds is about<sup></sup>
$$N=M_p^{-2}\int VV^{\prime -1}d\varphi =\kappa ^2(\varphi _b^2-\varphi _e^2)/4\simeq ϵ_b^{-1}/2=53.$$
(14)
The concept of a primary birth probability is thus meaningful for gentle inflation. At the end of inflation the cosmic scale factor is large enough,
$$a_e=e^{53}a_b=10^{23}a_b,$$
and although very flat, it is still a finite closed universe. When the inflaton rolls down to the value $`\varphi _e`$, the field begins to oscillate rapidly, matter particles are produced in succession, and the universe later enters the radiation dominated period of the standard big bang.
## 6 More discussions about parameter $`q`$
The parameter $`q`$ is very important in our model. Due to the tunneling effect, $`q`$ should be less than $`1`$. However, if $`q`$ is naively set to one, we obtain only a little inflation, i.e., $`\varphi _0=3.3\kappa ^{-1}`$ and $`N\simeq 2`$. In this case we could only pin our hopes on small-probability events of universe creation; but the relative probability falls very rapidly according to Eq. (10), so this is not a way out.
We should note the role of the polynomial $`w_n(\varphi )`$, which has an obvious effect on $`\varphi _0`$. However, even if the dominant contribution does not come from the highest power term, the modification is not too significant. The numerical coefficient in the exponent of the tunneling probability, Eq. (4), such as $`24`$ or $`12`$, is also unimportant, but its sign is important. The main effect comes from the parameter $`q`$.
We could consider other potentials such as $`V(\varphi )=\lambda \varphi ^4`$; however, we do not know the exact energy spectrum in that case and cannot estimate the effect of the polynomial in the wave function. At the same time, the same question remains of whether a parameter $`q`$ has to be introduced to reflect the tunneling character.
In principle the parameter $`q`$ should be calculable, and it should depend only on the unique model parameter, the mass, if the potential $`m^2\varphi ^2/2`$ is used. Imagine the wonderful prospect: if we knew the function $`q(m)`$, we would know $`\varphi _0(q(m))`$. Moreover, if the most probable field value $`\varphi _0`$ were just the critical value $`\varphi _c(m)`$ of chaotic inflation, we could solve for the unique parameter $`m`$ of the model. If this mass turned out to be the very one inferred from the CMBR, the model would possess an unbelievably high predictive power!
Our idea may impose an important constraint on the building of inflation models: it is not true that every potential which can produce inflation is also suitable for the quantum birth of the universe.
In any case, the real meaning of the full Wheeler-DeWitt equation, Eq. (2), remains abstruse. We face a great challenge and opportunity.
## 7 Conclusion
The probability for the birth of the universe in quantum cosmology has a shortcoming if only the tunneling effect is considered, since the largest probability then occurs where the potential is largest; for an unbounded potential this is a disaster. A large potential corresponds to a large field value, so the universe may remain in an excessively inflating state. It is necessary to suppress the probability in the large field limit. This suppression comes precisely from the potential well effect, which had not been considered before. To reflect the tunneling character a parameter $`q`$ has to be introduced, which should be calculable from the underlying theory and depends only on the mass parameter if the simple potential $`m^2\varphi ^2/2`$ is used. Which state the universe ends up in, i.e., gentle, chaotic or excessive inflation, depends subtly on the parameter $`q`$. The primary universe, which has zero Hubble constant, has to undergo a startup period to establish its inflation, and it can become smooth by means of this startup.
It is a pity that the ability to calculate the parameter $`q`$ is absent at present. In fact, many things remain unknown to us. We do not know whether the ``twice loose shoe'' method is a proper one, and we must remember that all the methods used here are only phenomenological. We do not know why only four dimensions begin to expand while the extra dimensions of superstrings or membranes do not. Our toy model is thus partly speculative. What this paper hopes to do is to stimulate more excellent new ideas and methods.
Acknowledgment:
This work is supported by the National Natural Science Foundation of China, No. 19777103. The author would like to thank Profs. J.R. Bond, L. Kofman, and U.-L. Pen for useful discussions during a visit to Toronto University, and X.-M. Zhang, Y.-Z. Zhang and Y.-Q. Yu in China.
References:
See e.g., P.J.E. Peebles, Physical Cosmology, Princeton University Press, Princeton, 1993.
A.H. Guth, Phys. Rev. D 23 (1981) 347.
A. Vilenkin, Phys. Rev. D 37 (1988) 888.
A. Linde, Particle Physics and Inflationary Cosmology, Harwood Academic Publishers, 1990.
S.W. Hawking and N.G. Turok, Phys. Lett. B 425 (1998) 25.
A. Linde, "Quantum Creation of an Open Inflationary Universe", gr-qc/9802038.
B.S. DeWitt, Phys. Rev. 160 (1967) 1113; J.A. Wheeler, in: Relativity, Groups and Topology, eds. C.M. DeWitt and J.A. Wheeler (Benjamin, New York, 1968).
A.H. Guth, "Inflation and Eternal Inflation", astro-ph/0002156.
D.H. Lyth and A. Riotto, "Particle Physics Models of Inflation and the Cosmological Density Perturbation", hep-ph/9807278.
E.W. Kolb and M.S. Turner, The Early Universe, Addison Wesley, 1990.
# Thermodynamics of the bilinear-biquadratic spin one Heisenberg chain
## Abstract
The magnetic susceptibility and specific heat of the one-dimensional $`S=1`$ bilinear-biquadratic Heisenberg model are calculated using the transfer matrix renormalization group. By comparing the results with the experimental data of $`\mathrm{LiVGe}_2\mathrm{O}_6`$ measured by Millet et al. (Phys. Rev. Lett. 83, 4176 (1999)), we find that the susceptibility data of this material, after subtracting the impurity contribution, can be quantitatively explained with this model. The biquadratic exchange interaction in this material is found to be ferromagnetic, i.e. with a positive coupling constant.
Quantum spin chains have been the subject of many theoretical and experimental studies since the conjecture made by Haldane that the antiferromagnetic Heisenberg model has a finite excitation gap for integer spins. A model which has been used intensively to investigate the physics behind Haldane’s conjecture is the isotropic spin-one Heisenberg Hamiltonian with both bilinear and biquadratic spin interactions:
$$H=J\underset{i}{\sum }\left[𝐒_i\cdot 𝐒_{i+1}+\gamma (𝐒_i\cdot 𝐒_{i+1})^2\right].$$
(1)
For most of the existing quasi-one-dimensional (1D) $`S=1`$ materials, the biquadratic term is very small compared with the bilinear term as well as the uniaxial anisotropy. This model was therefore generally thought to be of pure theoretical interest. However, recently Millet et al. found that the magnetic susceptibility of a new quasi-1D $`S=1`$ system, the vanadium oxide $`\mathrm{LiVGe}_2\mathrm{O}_6`$, shows a few interesting features which are absent in other $`S=1`$ materials. They argued that both the interchain coupling and the uniaxial anisotropy are too small to create these features and suggested that the biquadratic term plays an important role in this material.
In this paper, we present a theoretical study for the thermodynamics of the bilinear-biquadratic spin chain (1) with $`J>0`$. We have calculated the magnetic susceptibility and specific heat of this model using the transfer matrix renormalization group (TMRG) method . By comparing with the experimental data of $`\mathrm{LiVGe}_2\mathrm{O}_6`$, we find that the measured susceptibility, after subtracting the impurity contribution, can be quantitatively fitted by the numerical result with $`\gamma =1/6`$. This shows that the spin dynamics of $`\mathrm{LiVGe}_2\mathrm{O}_6`$ can indeed be described by the Hamiltonian (1), in agreement with Millet et al. . However, the value of $`\gamma `$ needed for fitting the experimental data is different from that suggested by Millet et al. .
Let us first consider the properties of the ground state. It is known that at $`\gamma =1`$ and $`\gamma =-1`$, the model (1) can be solved rigorously by the Bethe Ansatz . Between these two soluble points, the system is in the Haldane phase. In this phase, the ground state is a non-magnetic singlet with a finite energy gap in excitations. In particular, when $`\gamma `$ is between $`-1`$ and $`\gamma _{ic}\simeq 0.41`$, the low energy physics of this model can be understood from the valence bond solid (VBS) model proposed by Affleck et al. . In this model, each site on the chain is occupied by two $`S=1/2`$ spins and the ground state is formed by the bonding of two $`S=1/2`$ spins from adjacent sites. These singlet bonds must be broken in order to excite the system, and this leads to a non-zero energy gap in the low-lying spectrum. This picture has been confirmed experimentally as well as numerically . At $`\gamma _{ic}`$, the ground state undergoes a commensurate-incommensurate transition and the critical exponent for the magnetization changes from $`1/2`$ below $`\gamma _{ic}`$ to $`1/4`$ above $`\gamma _{ic}`$. Between $`\gamma _{ic}`$ and $`1`$, the system is in the incommensurate phase, and the incommensurate peak in the spin form factor $`S(q)`$ of the ground state moves continuously from $`\pi `$ to $`2\pi /3`$ as $`\gamma `$ increases from $`\gamma _{ic}`$ to $`1`$ . Above $`\gamma =1`$, the true nature of the ground state is still controversial ; some works suggest that it might be in a trimerized phase. When $`\gamma <-1`$, the ground state is doubly degenerate and dimerized.
The TMRG is a finite temperature extension of the powerful density matrix renormalization group method . A detailed introduction to this method can be found in references . The TMRG method handles infinite spin chains directly, and thus there are no finite system size effects. To calculate the spin susceptibility, we first evaluate the magnetization $`M`$ of the system in a small external field $`B`$, and then obtain the susceptibility from the ratio $`M/B`$. The specific heat is evaluated from the numerical derivative of the internal energy with respect to temperature. At low temperatures, since the specific heat is very small, its relative error may become quite large. In most of our calculations 100 states are retained.
Figure 1 shows the zero-field spin susceptibility $`\chi (T)`$ normalized by its peak value $`\chi _{\mathrm{peak}}`$ as a function of the normalized temperature $`T/T_{\mathrm{peak}}^s`$ for a set of $`\gamma `$, where $`T_{\mathrm{peak}}^s`$ is the temperature at which $`\chi _{\mathrm{peak}}`$ occurs. Above $`T_{\mathrm{peak}}^s`$, $`\chi (T)/\chi _{\mathrm{peak}}`$ behaves similarly for all the curves shown in the figure. When $`\gamma `$ is positive, $`\chi (T)`$ drops quickly below $`T_{\mathrm{peak}}^s`$; this is because the energy gap in this parameter regime is very large. As $`\gamma `$ becomes negative, $`\chi (T)`$ just below $`T_{\mathrm{peak}}^s`$ tends to become flatter. At $`\gamma =-1`$, there is no gap in the excitation spectrum, and $`\chi (T)`$ shows a small positive curvature at low temperatures, as in the $`S=1/2`$ Heisenberg chain.
The inset of Figure 1 shows the $`\gamma `$ dependence of $`\chi _{\mathrm{peak}}`$ and $`T_{\mathrm{peak}}^s`$. The increase of $`\chi _{\mathrm{peak}}`$ with $`\gamma `$ indicates that the susceptibility becomes larger when $`\gamma `$ moves from the dimerized phase to the Haldane phase. This is consistent with the picture that in the dimerized phase the spins are frozen into rather rigid singlets, while in the Haldane phase the spins are relatively free above the Haldane gap. The peak temperature $`T_{\mathrm{peak}}^s`$ drops almost linearly with $`\gamma `$, with a slope of about $`1.6J`$ per unit $`\gamma `$.
In a gapped phase, the low-lying excitation has approximately the energy dispersion
$$\epsilon _k=\mathrm{\Delta }+\frac{v^2}{2\mathrm{\Delta }}\left(kk_0\right)^2+O\left(\left(kk_0\right)^3\right),$$
(2)
where $`k_0`$ is the wavevector of the excitation minimum, $`\mathrm{\Delta }`$ is the energy gap and $`v`$ the spin velocity. When $`T\ll \mathrm{\Delta }`$, it can be shown that $`\chi (T)`$ has the form
$$\chi (T)\simeq \lambda \sqrt{\frac{\mathrm{\Delta }}{T}}e^{-\mathrm{\Delta }/T},$$
(3)
where $`\lambda `$ is a $`T`$-independent parameter. By fitting the low temperature TMRG results for $`\chi (T)`$ with this equation, we can estimate the value of $`\mathrm{\Delta }`$. The results for $`\mathrm{\Delta }`$ are shown in Figure 2. The maximum energy gap is $`2J/3`$, located at $`\gamma =1/3`$. Our results agree with other numerical studies.
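The kind of fit used here can be sketched as follows; the "data" below are synthetic, generated from Eq. (3) itself plus noise, purely to illustrate the procedure (the input $`\lambda `$ and $`\mathrm{\Delta }`$ are made-up values):

```python
# Sketch: extracting Delta by fitting Eq. (3) to low-temperature chi(T).
import numpy as np
from scipy.optimize import curve_fit

def chi_lowT(T, lam, Delta):
    return lam * np.sqrt(Delta / T) * np.exp(-Delta / T)

rng = np.random.default_rng(0)
T = np.linspace(0.05, 0.25, 20)                 # T in units of J, T << Delta
chi_data = chi_lowT(T, 0.01, 0.41) * (1 + 0.02 * rng.standard_normal(T.size))

(lam_fit, Delta_fit), _ = curve_fit(chi_lowT, T, chi_data, p0=(0.02, 0.5))
print(f"fitted Delta = {Delta_fit:.3f} J (input 0.41 J)")
```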
Figure 3 shows the temperature dependence of the specific heat $`C(T)`$ for a set of $`\gamma `$. The inset of the figure shows the $`\gamma `$ dependence of the peak value of the specific heat, $`C_{\mathrm{peak}}`$, and the peak temperature $`T_{\mathrm{peak}}^c`$. Compared with $`T_{\mathrm{peak}}^s`$, $`T_{\mathrm{peak}}^c`$ behaves quite differently: it drops with increasing $`\gamma `$ when $`\gamma <1/2`$ and becomes almost constant when $`\gamma >1/2`$. Below the peak temperature, $`C/C_{\mathrm{peak}}`$ behaves quite similarly for all the curves shown in the figure except at very low temperatures. Since there is no energy gap at $`\gamma =\pm 1`$, $`C(T)`$ at these two points approaches zero linearly with decreasing $`T`$, whereas in the other cases $`C(T)`$ decays exponentially at low temperatures. For the two exactly soluble points $`\gamma =\pm 1`$, exact results are available and confirm that the specific heat vanishes linearly at low temperature; however, due to large errors at low temperatures, our results do not show this behavior clearly. Above the peak temperature, $`C/C_{\mathrm{peak}}`$ drops quickly for negative $`\gamma `$. However, when $`\gamma `$ becomes larger, in particular in the incommensurate phase ($`\gamma =2/3`$ and $`1`$), $`C(T)`$ shows a weak and broadened peak above $`T_{\mathrm{peak}}^c`$. It seems that a new excitation mode accumulates at low energies in the incommensurate state.
Now let us compare the numerical results with the spin susceptibility data $`\chi _{\mathrm{exp}}`$ of $`\mathrm{LiVGe}_2\mathrm{O}_6`$ measured by Millet et al. on a powder sample . As mentioned in , two extraordinary features appear in $`\chi _{\mathrm{exp}}`$. One is the slow drop of $`\chi _{\mathrm{exp}}`$ on both sides of the susceptibility peak, and the other is the abrupt drop of $`\chi _{\mathrm{exp}}`$ below $`22K`$ with a sharp upturn below $`15K`$. The first feature, in particular the slow drop of $`\chi _{\mathrm{exp}}`$ below the peak temperature, is reminiscent of a gapless system. The second feature of $`\chi _{\mathrm{exp}}`$ is typical of a spin-Peierls system with impurities, such as in $`Zn`$ doped $`\mathrm{CuGeO}_3`$ . These features have led Millet et al. to interpret $`\mathrm{LiVGe}_2\mathrm{O}_6`$ as a nearly gapless $`S=1`$ spin chain with the spin-Peierls instability. However, whether the abrupt drop of $`\chi _{\mathrm{exp}}`$ at $`22K`$ is really due to a spin-Peierls transition is still an open question.
The sharp upturn of $`\chi _{\mathrm{exp}}`$ at low temperatures indicates that the impurity contribution is strong. To see how strong the impurity effect is, let us first do a comparison without subtracting the impurity contribution in $`\chi _{\mathrm{exp}}`$. In Figure 1, the measured susceptibility $`\chi _{\mathrm{exp}}`$ normalized by its peak value at about $`47K`$ is compared with the TMRG results discussed previously. The disagreement between the theoretical and experimental results indicates that the impurity effect is too strong to be ignored even at high temperatures.
The susceptibility of dilute magnetic impurities generally has a Curie-Weiss form
$$\chi _{\mathrm{imp}}=\frac{C^{\prime }}{T+\theta ^{\prime }+\alpha T^{-1}},$$
(4)
where $`C^{\prime }`$ is proportional to the impurity concentration and to the square of the effective $`g`$-factor of the impurities, and $`\theta ^{\prime }`$ is a measure of the interaction among impurities. The $`\alpha T^{-1}`$ term in $`\chi _{\mathrm{imp}}`$ is the leading order correction to the Curie-Weiss term $`C^{\prime }/\left(T+\theta ^{\prime }\right)`$ due to the finite magnetic field. If there is no interaction between impurities, $`\alpha =\left(2S^{\prime 2}+2S^{\prime }+1\right)\left(g^{\prime }\mu _BB/k_B\right)^2/10`$, with $`g^{\prime }`$ and $`S^{\prime }`$ the effective g-factor and spin of the impurities. This term is not important at high temperatures, but it becomes important when the temperature becomes comparable with the level splitting of an impurity spin in a magnetic field; it prevents $`\chi _{\mathrm{imp}}`$ from diverging at low temperatures. $`\alpha `$ is typically of order $`1K^2`$ when $`B=1T`$.
At very low temperatures the measured susceptibility is a sum of $`\chi _{\mathrm{imp}}`$ and $`\chi (T)`$ given by Eq. (3), i.e.
$$\chi _{\mathrm{exp}}(T)=\chi _{\mathrm{imp}}+\lambda \sqrt{\frac{\mathrm{\Delta }}{T}}e^{-\mathrm{\Delta }/T}.$$
(5)
By fitting the low temperature experimental data below $`15K`$ with this equation, we find that $`C^{\prime }=0.115cm^3K/mol`$, $`\theta ^{\prime }=14.1K`$, $`\alpha =2.18K^2`$, $`\lambda =0.0063cm^3/mol`$ and $`\mathrm{\Delta }=36K`$. These parameters show that not only is the contribution from impurities to $`\chi _{\mathrm{exp}}`$ large, as expected, but the interaction among impurities is also strong at low temperatures. There is no simple explanation for such a strong correlation among impurities; clearly this is an important problem which should be further investigated both theoretically and experimentally.
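With these quoted fit parameters, the two contributions in Eq. (5) can be compared directly at low temperature (a short evaluation sketch, nothing more):

```python
# Sketch: impurity vs intrinsic (chain) contributions to chi_exp at low T,
# using the fitted parameters quoted above (Eqs. 4 and 5).
import numpy as np

Cp, thetap, alpha = 0.115, 14.1, 2.18      # cm^3 K/mol, K, K^2
lam, Delta = 0.0063, 36.0                   # cm^3/mol, K

chi_imp   = lambda T: Cp / (T + thetap + alpha / T)
chi_chain = lambda T: lam * np.sqrt(Delta / T) * np.exp(-Delta / T)

for T in (5.0, 10.0, 15.0):
    print(f"T = {T:4.1f} K:  chi_imp = {chi_imp(T):.4f}, "
          f"chi_chain = {chi_chain(T):.2e} cm^3/mol")
```

Below about $`15K`$ the impurity term dominates by orders of magnitude, consistent with the strong low-temperature upturn in the raw data.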
By subtracting the impurity contribution from $`\chi _{\mathrm{exp}}`$, we obtain the intrinsic susceptibility $`\chi _{\mathrm{intrin}}`$ of $`\mathrm{LiVGe}_2\mathrm{O}_6`$. The result for $`\chi _{\mathrm{intrin}}`$, together with the raw data $`\chi _{\mathrm{exp}}`$ and $`\chi _{\mathrm{imp}}`$, is shown in Figure 4. After the subtraction, the abrupt drop of $`\chi _{\mathrm{exp}}`$ at $`22K`$ becomes less distinct, but the change in the slope is still visible. The most significant change of $`\chi _{\mathrm{intrin}}`$ compared with $`\chi _{\mathrm{exp}}`$ is that the peak shifts to a higher temperature and the drop below the peak temperature becomes more rapid. By comparing in detail the normalized $`\chi _{\mathrm{intrin}}`$ with the theoretical results, we find that $`\chi _{\mathrm{intrin}}`$ can be well fitted by the numerical curve with $`\gamma =1/6`$ (Figure 4). This shows that the biquadratic term in model (1) does make an important contribution to the low energy spin dynamics of $`\mathrm{LiVGe}_2\mathrm{O}_6`$, in agreement with Millet et al. . However, the value of $`\gamma `$ which gives the best fit, in particular its sign, is different from that suggested in Ref. . A detailed comparison indicates that $`\chi _{\mathrm{intrin}}`$ lies between the theoretical curves for $`\gamma =1/4`$ and $`1/8`$ over the whole temperature region; thus the uncertainty in the value of $`\gamma _c`$ is very small. The result at $`\gamma _c\simeq -1`$ suggested in Ref. does not fit the experimental data.
At $`\gamma =1/6`$, the peak temperature is $`T_{\mathrm{peak}}^s=1.025J`$. Setting this $`T_{\mathrm{peak}}^s`$ equal to the peak temperature of $`\chi _{\mathrm{intrin}}`$, we find that $`J\simeq 73K`$. Compared with the gap value $`\mathrm{\Delta }=36K`$ obtained previously, we have $`\mathrm{\Delta }\simeq 0.49J`$. This value of $`\mathrm{\Delta }`$ is rather close to the Haldane gap, $`0.54J`$, of the Hamiltonian (1) at $`\gamma =1/6`$ (Figure 2). This suggests that the low energy spin excitations are gapped and that the change of slope at 22K is not due to a spin-Peierls transition.
We have also compared $`\chi _{\mathrm{intrin}}`$ with the spin susceptibility of the $`S=1`$ Heisenberg model with uniaxial single-ion anisotropy but without the biquadratic term, namely the model $`H=\underset{i}{\sum }\left[J𝐒_i\cdot 𝐒_{i+1}+DS_{iz}^2\right]`$. However, in the parameter region which might be physically relevant, $`-1/2<D<1/2`$, we find that none of the numerical curves fits $`\chi _{\mathrm{intrin}}`$ over the whole temperature range. This shows that the uniaxial anisotropy in $`\mathrm{LiVGe}_2\mathrm{O}_6`$ is indeed very small, in agreement with the analysis of Millet et al. .
The above analysis confirms the importance of the biquadratic exchange interaction in $`\mathrm{LiVGe}_2\mathrm{O}_6`$. On the other hand, it also raises some new questions. In the argument given by Millet et al., the biquadratic term arises at fourth order, since at second order the antiferromagnetic and ferromagnetic contributions partially cancel. However, the coefficient of this biquadratic term is negative (i.e. $`\gamma <0`$) according to their calculation, in contrast with the result we obtain. To resolve this disagreement, further investigation of the electronic structure of $`\mathrm{LiVGe}_2\mathrm{O}_6`$ is needed. More detailed measurements on high quality single crystals would also help clarify the impurity effect as well as the nature of the anomaly at $`22K`$ in this material. In a $`S=1`$ Heisenberg chain, a localized non-magnetic impurity may induce mid-gap states within the Haldane gap . A better understanding of the physical properties of these mid-gap states would also be helpful for further understanding the thermodynamics of $`\mathrm{LiVGe}_2\mathrm{O}_6`$ at low temperatures.
In summary, the thermodynamic properties of the bilinear and biquadratic Heisenberg model have been studied and compared with the experiments. The measured susceptibility data of $`\mathrm{LiVGe}_2\mathrm{O}_6`$, after subtracting the impurity contribution, can be quantitatively explained by the model (1) with $`\gamma =1/6`$.
We wish to thank F. Mila and F. C. Zhang for sending us the experimental data, and M. W. Long, N. d’Ambrumenil and G. A. Gehring for useful discussions. TX acknowledges the hospitality of the Isaac Newton Institute of the University of Cambridge, where this work was completed. This work was supported in part by the National Natural Science Foundation of China and by the Special Funds for Major State Basic Research Projects of China .
# ON THE ORIGIN OF THE UNIVERSE
AFSAR ABBAS
Institute of Physics, Bhubaneswar-751005, India
(e-mail : afsar@iopb.res.in)
Abstract
It has been proven recently that the Standard Model of particle physics has electric charge quantization built into it. It has also been shown by the author that there was no electric charge in the early universe. Further, it is shown here that the restoration of the full Standard Model symmetry (as in the early Universe) leads to the result that ‘time’, ‘light’, along with its velocity c and the theory of relativity, all lose any physical meaning. The physical Universe as we know it, with its space-time structure, disappears in this phase transition. Hence it is hypothesized here that the Universe came into existence when the Standard Model symmetry $`SU(3)_CSU(2)_LU(1)_Y`$ was spontaneously broken to $`SU(3)_CU(1)_{em}`$. This does not require any spurious extensions of the Standard Model and in a simple and consistent manner explains the origin of the Universe within the framework of the Standard Model itself.
In the currently popular Standard Model of cosmology the Universe is believed to have originated in a Big Bang. Thereafter it started expanding and cooling. Right in the initial stages, it is believed that models like superstring theories, supergravity, and GUTs etc. would be applicable. When the Universe cooled to 100 GeV or so, the Standard Model of particle physics (SM) symmetry $`SU(3)_CSU(2)_LU(1)_Y`$ was spontaneously broken to $`SU(3)_CU(1)_{em}`$. Thereafter the Universe went through the quark-gluon to hadron phase transition etc. Belief in the correctness of this model is so prevalent that many are confident that we already have an inkling of the Theory of Everything.
However, it has recently been shown by the author that models like the superstring theories, supergravity, and GUTs etc. have a fatal flaw, inasmuch as they are inconsistent with the SM . As the SM has been well verified experimentally and is so far the best model of particle physics, this inconsistency rules them out. To understand this point, let us start by summarizing the arguments .
It has been shown by the author that in the SM $`SU(N_C)SU(2)_L\times U(1)_Y`$ (for $`N_C=3`$), spontaneous symmetry breaking by a Higgs of weak hypercharge $`Y_\varphi `$ and general isospin $`T`$, whose $`T_3^\varphi `$ component develops the vacuum expectation value $`<\varphi >_0`$, fixes ‘h’ in the electric charge definition $`Q=T_3+hY`$ to give
$$Q=T_3-\frac{T_3^\varphi }{Y_\varphi }Y$$
(1)
where $`Y`$ stands for the hypercharges of the doublets and singlets of a single generation. For each generation, from renormalizability through triangular anomaly cancellation and from the requirement that the L- and R-handed charges be identical in $`U(1)_{em}`$, one finds that all unknown hypercharges are proportional to $`\frac{Y_\varphi }{T_3^\varphi }`$. Hence the correct charges (for $`N_C=3`$) follow as below:
$`Q(u)={\displaystyle \frac{1}{2}}(1+{\displaystyle \frac{1}{N_C}})`$ (2)
$`Q(d)=-{\displaystyle \frac{1}{2}}(1-{\displaystyle \frac{1}{N_C}})`$ (3)
$`Q(e)=-1`$ (4)
$`Q(\nu )=0`$ (5)
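A trivial arithmetic check of Eqs. (2)-(5) (a few hypothetical lines, not part of the original argument): for $`N_C=3`$ these reproduce the observed charges, and the electric charges of one complete generation sum to zero.

```python
# Check: quark and lepton charges from Eqs. (2)-(5) and their sum over a generation.
N_C = 3
Q_u  =  0.5 * (1 + 1.0 / N_C)   # = +2/3
Q_d  = -0.5 * (1 - 1.0 / N_C)   # = -1/3
Q_e  = -1.0
Q_nu =  0.0

print(Q_u, Q_d, Q_e, Q_nu)              # 0.666..., -0.333..., -1.0, 0.0
print(N_C * (Q_u + Q_d) + Q_e + Q_nu)   # 0.0: charges of a generation sum to zero
```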
Note that the expression for Q in (1) arose from the spontaneous symmetry breaking of $`SU(N_C)SU(2)_LU(1)_Y`$ (for $`N_C=3`$) to $`SU(N_C)U(1)_{em}`$ through the medium of a Higgs with arbitrary isospin $`T`$ and hypercharge $`Y_\varphi `$. What happens when, at higher temperature, as for example in the early Universe, the $`SU(N_C)SU(2)_LU(1)_Y`$ symmetry is restored? Then the parameter ‘h’ in the electric charge definition remains undetermined. Note that ‘h’ was fixed as in (1) by the spontaneous symmetry breaking through the Higgs; without it, ‘h’ remains unknown. Thus the electric charge loses all physical meaning above the EW phase transition, and one concludes that according to the SM there was no electric charge in the early Universe.
Note that complete charge quantization in the canonical Standard Model with a Higgs doublet had been demonstrated by the author earlier . Hence there too, on the restoration of the full symmetry, the electric charge would disappear, as shown above for a Higgs with arbitrary isospin and hypercharge. There, as well as here, ‘h’ is not defined and hence the electric charge is not defined. Thus when the electro-weak symmetry is restored, irrespective of the Higgs isospin and hypercharge, the electric charge disappears as a physical quantity. Hence there too we find that there was no electric charge in the early universe.
It is a generic property of all models like the superstring theories, supergravity and GUTs etc. that they have electric charges which are quantized. As these models are believed to describe physics above the electro-weak symmetry breaking stage, they are all inconsistent with the SM result that there was no electric charge above this scale. Hence they are ruled out .
All this misunderstanding and confusion arose because until now the electro-weak phase transition had not been understood correctly. Here we have shown that the restoration of the full SM symmetry leads to the result that there is no electric charge above it, and hence also no photon. What else?
The above result shows that above the electro-weak phase transition there was no electromagnetism. Maxwell's equations of electrodynamics show that light is an electromagnetic phenomenon. Hence above the electro-weak phase transition there was no light and no Maxwell's equations. Surprisingly, we are led to conclude that there was no velocity of light c as well. One knows that the velocity of light c is given as
$$c^2=\frac{1}{ϵ_0\mu _0}$$
(6)
where $`ϵ_0`$ and $`\mu _0`$ are the permittivity and permeability of free space. These electromagnetic properties disappear along with the electric charge, and hence the velocity of light also disappears above the electro-weak phase transition.
The premise on which the theory of relativity is based is that c, the velocity of light, is always the same, no matter from which frame of reference it is measured. Relativity theory also asserts that there is no absolute standard of motion in the Universe. Of fundamental significance is the invariant interval
$$s^2=(ct)^2-x^2-y^2-z^2$$
(7)
Here the constant c provides us with a means of defining time in terms of spatial separation and vice versa through l = c t. This enables one to visualize time as the fourth dimension. Hence time gets defined through the constant c. Therefore when there was no c, as in the early universe, there was no time as well. Hence above the electro-weak breaking scale there was no time. As the special theory of relativity depends upon c and time, above the electro-weak breaking scale there was no special theory of relativity. As the general theory of relativity also requires the concept of a physical space-time, it collapses too above the electro-weak breaking scale. Thus there was no gravity above this scale either. Hence the whole physical universe collapses above this scale.
It is hypothesized here that the Universe came into existence when the electro-weak symmetry was broken spontaneously. Before that there was no electric charge, no ‘time’, no ‘light’, no maximum and constant velocity like ‘c’, and no gravity or space-time structure on which objective physical laws could exist.
Note that this new picture of the origin of the Universe arises naturally within the best studied and best verified model of particle physics. It does not require any ad-hoc or arbitrary models or principles; one only has to do a consistent and careful analysis of the hitherto misunderstood electro-weak spontaneous symmetry breaking. As should now be clear, the Standard Model of particle physics tells us how the Universe was created (as shown in this paper), gives hints as to what it was like before this, and describes how it functions thereafter.
REFERENCES
1. Abbas A., 2000, ”Particles, Strings and Cosmology PASCOS99”, Ed K Cheung, J F Gunion and S Mrenna, World Scientific, Singapore ( 2000), p. 123 - 126
2. Abbas A., July 1999, Physics Today , p.81-82
3. Abbas A., 1990, Phys. Lett. , B 238, 344
4. Abbas A., 1990, J. Phys. , G 16 , L163
5. Abbas A., 1992, Hadronic J. , 15 , 475
6. Abbas A., 1993, Nuovo Cimento , 106 A , 985
# Short, Medium and Long Range Spatial Correlations in Simple Glasses
## Acknowledgments
We thank M. Dzugatov for providing information on the IC potential. We also thank S. R. Elliott and S. Safran for their comments.
# Weak Lensing by Large-Scale Structure: A Dark Matter Halo Approach
## 1. Introduction
Weak gravitational lensing of faint galaxies probes the distribution of matter along the line of sight. Lensing by large-scale structure (LSS) induces correlation in the galaxy ellipticities at the percent level (e.g., Miralda-Escudé 1991; Blandford et al 1991; Kaiser 1992). Though challenging to measure, these correlations provide important cosmological information that is complementary to that supplied by the cosmic microwave background and potentially as precise (e.g., Jain & Seljak 1997; Bernardeau et al 1997; Kaiser 1998; Schneider et al 1998; Hu & Tegmark 1999; Cooray 1999; Van Waerbeke et al 1999; see Bartelmann & Schneider 2000 for a recent review). Indeed several recent studies have provided the first clear evidence for weak lensing in so-called blank fields (e.g., Van Waerbeke et al 2000; Bacon et al 2000; Wittman et al 2000).
Weak lensing surveys are currently limited to small fields which may not be representative of the universe as a whole, owing to sample variance. In particular, rare massive objects can contribute strongly to the mean power in the shear or convergence but not be present in the observed fields. The problem is compounded if one chooses blank fields subject to the condition that they do not contain known clusters of galaxies. Our objective in this Letter is to quantify these effects and to understand what fraction of the total convergence power spectrum should arise from lensing by individual massive clusters as a function of scale.
In the context of standard cold dark matter (CDM) models for structure formation, the dark matter halos that are responsible for lensing have properties that have been intensely studied by numerical simulations. In particular, analytic scalings and fits now exist for the abundance, profile, and correlations of halos of a given mass. We show how the convergence power spectrum predicted in these models can be constructed from these halo properties. The critical ingredients are: the Press-Schechter formalism (PS; Press & Schechter 1974) for the mass function; the NFW profile of Navarro et al (1996), and the halo bias model of Mo & White (1996). Following Seljak (2000), we modify halo profile parameters, specifically concentration, so that halos account for the full non-linear dark matter power spectrum and generalize his treatment to be applicable through all redshifts relevant to current galaxy ellipticity measurements of LSS lensing. This calculational method allows us to determine the contributions to the convergence power spectrum of halos of a given mass.
Throughout this paper, we will take $`\mathrm{\Lambda }`$CDM as our fiducial cosmology with parameters $`\mathrm{\Omega }_c=0.30`$ for the CDM density, $`\mathrm{\Omega }_b=0.05`$ for the baryon density, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.65`$ for the cosmological constant, $`h=0.65`$ for the dimensionless Hubble constant and a scale invariant spectrum of primordial fluctuations, normalized to galaxy cluster abundances ($`\sigma _8=0.9`$; see Viana & Liddle 1999) and consistent with COBE (Bunn & White 1997). For the linear power spectrum, we take the fitting formula for the transfer function given in Eisenstein & Hu (1999).
## 2. Lensing by Halos
### 2.1. Halo Profile
We model dark matter halos as NFW profiles with a density distribution
$$\rho (r)=\frac{\rho _s}{(r/r_s)(1+r/r_s)^2}.$$
(1)
The density profile can be integrated and related to the total dark matter mass of the halo within $`r_v`$
$$M=4\pi \rho _sr_s^3\left[\mathrm{log}(1+c)-\frac{c}{1+c}\right],$$
(2)
where the concentration, $`c`$, is defined as $`r_v/r_s`$. Choosing $`r_v`$ as the virial radius of the halo, spherical collapse tells us that $`M=4\pi r_v^3\mathrm{\Delta }(z)\rho _b/3`$, where $`\mathrm{\Delta }(z)`$ is the overdensity of collapse (see e.g. Henry 2000) and $`\rho _b`$ is the background matter density today. We use comoving coordinates throughout. By equating these two expressions, one can eliminate $`\rho _s`$ and describe the halo by its mass $`M`$ and concentration $`c`$. Finally, we can determine a relation between $`M`$ and $`c`$ such that the halo distribution produces the same power as the non-linear dark matter power spectrum, as outlined in Seljak (2000).
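The normalization implied by these two relations can be sketched numerically; in the snippet below the overdensity is fixed at the illustrative value $`\mathrm{\Delta }=200`$ (rather than the full $`\mathrm{\Delta }(z)`$), and the halo mass and concentration are made-up inputs:

```python
# Sketch: rho_s and r_s from (M, c) via Eq. (2) and M = (4 pi/3) r_v^3 Delta rho_b,
# assuming Delta = 200 and the fiducial Omega_m = 0.35, h = 0.65 (comoving units).
import numpy as np

rho_crit0 = 2.775e11 * 0.65**2   # critical density today, Msun / Mpc^3
rho_b     = 0.35 * rho_crit0     # comoving background matter density
Delta     = 200.0                # assumed overdensity of collapse

def nfw_params(M, c):
    r_v   = (3.0 * M / (4.0 * np.pi * Delta * rho_b)) ** (1.0 / 3.0)
    r_s   = r_v / c
    rho_s = M / (4.0 * np.pi * r_s**3 * (np.log(1.0 + c) - c / (1.0 + c)))
    return r_v, r_s, rho_s

r_v, r_s, rho_s = nfw_params(M=1.0e15, c=5.0)
print(f"r_v = {r_v:.2f} Mpc, r_s = {r_s:.2f} Mpc, rho_s = {rho_s:.2e} Msun/Mpc^3")
```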
### 2.2. Convergence Power Spectrum
For lensing convergence, we need the projected surface mass density, which is the line-of-sight integral of the profile
$$\mathrm{\Sigma }(r_{\perp })=\int _{-r_v}^{+r_v}\rho (r_{\perp },r_{\parallel })dr_{\parallel },$$
(3)
where $`r_{}`$ is the line-of-sight distance and $`r_{}`$ is the perpendicular distance. As in equation (2), the cut off here at the virial radius reflects the fact that we only account for mass contributions out to $`r_v`$ (see Bartelmann 1996 for an analytical description when $`r_v\mathrm{}`$). The convergence on the sky $`\kappa (\theta )`$ is related to surface mass density through
$$\kappa (\theta )=\left(\frac{4\pi G}{c^2}\frac{d_ld_{ls}}{d_s}\right)(1+z_l)\mathrm{\Sigma }(d_l\theta ),$$
(4)
where the extra factor of $`(1+z_l)`$ from the familiar expression comes from the use of comoving coordinates to define densities and distances, e.g. $`d_l`$, $`d_s`$ and $`d_{ls}`$ are the comoving angular diameter distances from the observer to lens, observer to source, and the lens to source, respectively.
The total convergence power spectrum due to halos, $`C_\kappa ^{\mathrm{tot}}`$, can be split into two parts: a Poisson term, $`C_\kappa ^\mathrm{P}`$, and a term involving correlations between individual halos, $`C_\kappa ^\mathrm{C}`$. This split was introduced by Cole & Kaiser (1988) to examine the power spectrum of the Sunyaev-Zel’dovich (SZ; Sunyaev & Zel’dovich 1980) effect due to galaxy clusters (see Komatsu & Kitayama 1999 and references therein for more recent applications).
The Poisson term due to individual halo contributions can be written as,
$$C_\kappa ^P(l)=\int _0^{z_s}dz\frac{d^2V}{dzd\mathrm{\Omega }}\int _{M_{min}}^{M_{max}}dM\frac{dn(M,z)}{dM}\left[\kappa _l(M,z)\right]^2.$$
(5)
where $`z_s`$ is the redshift of background sources, $`d^2V/dzd\mathrm{\Omega }`$ is the comoving differential volume, and
$$\kappa _l=2\pi \int _0^{\theta _v}\theta d\theta \kappa (\theta )J_0\left[\left(l+\frac{1}{2}\right)\theta \right],$$
(6)
is the 2D Fourier transform of the halo profile. The halo mass distribution as a function of redshift \[$`dn(M,z)/dM`$\] is determined through the PS formalism.
Here, we have assumed that all sources are at a single redshift; for a distribution of sources one integrates over the normalized background source redshift distribution. The minimum, $`M_{\mathrm{min}}`$, and maximum, $`M_{\mathrm{max}}`$, masses can be varied to study the effects of rare and excluded high mass halos.
The clustering term arises from correlations between halos of different masses. By assuming that the linear matter density power spectrum, $`P(k,z)`$, is related to the power spectrum of halos over the whole mass range via a redshift-dependent linear bias term, $`b(M,z)`$, we can write the correlation term as
$$C_\kappa ^C(l)=\int _0^{z_s}dz\frac{d^2V}{dzd\mathrm{\Omega }}P\left(\frac{l}{d_l},z\right)\left[\int _{M_{min}}^{M_{max}}dM\frac{dn(M,z)}{dM}b(M,z)\kappa _l(M,z)\right]^2.$$
(7)
Here we have utilized the Limber approximation (Limber 1954) by setting $`k=l/d_l`$. Mo & White (1996) find that the halo bias can be described by $`b(M,z)=1+[\nu ^2(M,z)-1]/\delta _c`$, where $`\nu (M,z)=\delta _c/\sigma (M,z)`$ is the peak-height threshold, $`\sigma (M,z)`$ is the rms fluctuation within a top-hat filter at the virial radius corresponding to mass $`M`$, and $`\delta _c`$ is the threshold overdensity of spherical collapse (see Henry 2000 for useful fitting functions).
## 3. Results
Following the approach given in Seljak (2000), we first test the halo prescription against the full non-linear density power spectrum found in simulations and fit by Peacock & Dodds (PD, 1996). In Fig. 1a, as an example, we show the comparison at $`z=0.5`$. A good match between the two power spectra was achieved by slightly modifying the concentration relation of Seljak (2000) as
$$c(M,z)=a(z)\left[\frac{M}{M_{\star }(z)}\right]^{-b(z)}.$$
(8)
Here, $`M_{\star }(z)`$ is the non-linear mass scale at which $`\nu (M,z)=1`$, while $`a(z)`$ and $`b(z)`$ can be considered as adjustable parameters. The dark matter power spectrum is well reproduced, to within 20% for $`0.0001<k<500`$ Mpc<sup>-1</sup>, out to a redshift of 1 with the parameters $`a(z)=10.3(1+z)^{-0.3}`$, and $`b(z)=0.24(1+z)^{-0.3}`$, which agree with the values given by Seljak (2000) for the NFW profile at $`z=0`$. The two power spectra differ increasingly with scale at $`k>500`$ Mpc<sup>-1</sup>, but the Peacock and Dodds (1996) power spectrum is not reliable there due to the resolution limit of the simulations from which the non-linear power spectrum was derived. Note that the above $`c(M,z)`$ relation is only valid for the cosmology used here and for the NFW profile; moreover, it should not necessarily give the true mean density profile of halos, since other effects not considered in our halo prescription, such as halo substructure, would affect the relation between the dark matter power spectrum and the spatial distribution and mean density profiles of halos. A detailed study of $`c(M,z)`$ as generally applied to all cosmologies, profile shapes and power spectra is currently in progress (Seljak, private communication).
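As a hedged illustration, eq. (8) with the fitted coefficients translates directly into code; the value of $`M_{\star }`$ passed in is an assumed input, and the relation is only claimed valid for this cosmology and the NFW profile:

```python
def concentration(M, z, M_star):
    """c(M, z) of eq. (8) with the coefficients fitted in the text.
    M_star: assumed non-linear mass scale where nu(M, z) = 1."""
    a = 10.3 * (1.0 + z) ** -0.3
    b = 0.24 * (1.0 + z) ** -0.3
    return a * (M / M_star) ** -b

# Example at z = 0.5 with an assumed M_star = 1e13 Msun: a 1e15 Msun
# cluster is less concentrated than a 1e12 Msun halo, as expected.
print(concentration(1e15, 0.5, 1e13), concentration(1e12, 0.5, 1e13))
```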
In general, the behavior of the dark matter power spectrum due to halos can be understood in the following way. The linear portion of the dark matter power spectrum, $`k<0.1`$ Mpc<sup>-1</sup>, results from the correlation between individual dark matter halos and reflects the bias prescription. The fitting formulae of Mo & White (1996) adequately describe this regime for all redshifts. The mid portion of the power spectrum, around $`k\sim 0.1`$–$`1`$ Mpc<sup>-1</sup>, corresponds to the non-linear scale $`M\sim M_{\star }(z)`$, where the Poisson and correlated terms contribute comparably. At higher $`k`$’s, the power arises mainly from the contributions of individual halos (see Seljak 2000 for a discussion of the detailed properties of the density and galaxy power spectra due to halos).
In Fig. 1b, we show the same comparison for the convergence power spectrum. The LSS power spectrum was calculated following Hu & Tegmark (1999) using the Peacock & Dodds (1996) power spectrum for the underlying mass distribution and using the same Limber approximation as the correlation calculation presented here. The lensing power spectrum due to halos has the same behavior as the dark matter power spectrum. At large angles ($`l\lesssim 100`$), the correlations between halos dominate. The transition from linear to non-linear is at $`l\sim 500`$, where halos of mass similar to $`M_{\star }(z)`$ contribute. The Poisson contributions start dominating at $`l>1000`$.
In order to establish the extent to which massive halos contribute, we varied the maximum mass of halos, $`M_{\mathrm{max}}`$, in the convergence calculation. The results are shown in Fig. 2. We use background source redshifts of 1 and 0.4, corresponding to deep lensing surveys and to a shallower survey such as the ongoing Sloan Digital Sky Survey (http://www.sdss.org). In Figs. 2a,b we exclude masses above a certain threshold at all redshifts, and in 2c only for halos below redshift $`z=0.3`$, reflecting the fact that current observations of galaxy clusters are likely to be complete only out to such a low redshift. Assuming the latter, we find that a significant contribution comes from massive clusters at low redshifts (see Figs. 2b & c). Ignoring such masses, say above $`\sim `$ 10<sup>14</sup> M<sub>☉</sub>, can lead to a convergence power spectrum which is a factor of $`\sim `$ 2 lower than the total. Note that such a high mass cut off affects the Poisson contribution of halos more than the correlated contributions and can bias the shape, and not just the amplitude, of the power spectrum.
In Fig. 3a, we show the dependence of $`C_\kappa ^{\mathrm{tot}}`$ on the maximum mass, for several $`l`$ values. If halos $`<10^{15}`$ $`M_{\mathrm{\odot }}`$ are well represented in a survey, then the power spectrum will track the LSS convergence power spectrum for all $`l`$ values of interest. The surface number density of halos determines how large a survey should be to possess a fair sample of halos of a given mass. We show this in Fig. 3b as predicted by the PS formalism for our fiducial cosmological model for halos out to $`z=0.3`$ and $`z=1.0`$. Since the surface number density of $`>10^{15}M_{\mathrm{\odot }}`$ halos out to a redshift of 0.3 and 1.0 is $`\sim `$ 0.03 and 0.08 degree<sup>-2</sup> respectively, a survey of order $`\sim `$ 30 degree<sup>2</sup> should be sufficient to contain a fair sample of the universe for recovery of the full LSS convergence power spectrum.
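The survey-size argument is simple Poisson counting; a short sketch using the surface densities quoted above:

```python
def fair_sample_area_deg2(surface_density_per_deg2, n_wanted=1.0):
    """Area needed to expect n_wanted halos (assumes Poisson statistics)."""
    return n_wanted / surface_density_per_deg2

# Quoted surface densities of >1e15 Msun halos out to z = 0.3 and z = 1.0
for label, n in (("z < 0.3", 0.03), ("z < 1.0", 0.08)):
    print(f"{label}: ~{fair_sample_area_deg2(n):.0f} degree^2 per halo")
```

With one such halo per $`\sim `$ 33 (12) degree<sup>2</sup>, a $`\sim `$ 30 degree<sup>2</sup> survey is indeed the relevant scale.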
One caveat is that mass cuts may affect the higher moments of the convergence differently, so that a fair sample for a quantity such as skewness will require a different survey strategy. From numerical simulations (White & Hu 1999), we know that $`S_3\equiv \langle \kappa ^3\rangle /\langle \kappa ^2\rangle ^2`$ shows substantial sample variance, implying that it may be dominated by rare massive halos. When calculated with hyper-extended perturbation theory (HEPT, see Hui 1999) but using the halo power spectrum instead of the non-linear density power spectrum, the skewness decreased by a factor of $`\sim `$ 2.5 to 3 with a mass cut off at $`\sim 10^{13}`$ $`M_{\mathrm{\odot }}`$. Since it is unclear to what extent the HEPT ansatz remains valid for the halo description, these results should be taken as provisional and will be the subject of future study.
While upcoming wide-field weak lensing surveys, such as the MEGACAM experiment at the Canada-France-Hawaii Telescope (Boulade et al. 1998), and the proposed wide field survey by Tyson et al. (2000, private communication) will cover areas up to $`\sim `$ 30 degree<sup>2</sup> or more, the surveys that have been published so far, e.g., Wittman et al. (2000), only cover at most 4 degree<sup>2</sup> in areas without known clusters. The observed convergence in these fields should be biased low compared with the mean and vary widely from field to field due to sample variance from the Poisson contribution of the largest mass halos in the fields, which are mainly responsible for the sample variance below 10′ (see White & Hu 1999).
Our results can also be used proactively. If properties of the mass distribution, such as the maximum mass halo in the observed lensing fields, are known, say through prior optical, X-ray or SZ observations, or even internally from the lensing observations themselves (see Kruse & Schneider 1999), one can make a fair comparison of the observations to theoretical model predictions with a mass cut off in our formalism. Even for larger surveys, the identification and extraction of large halo contributions can be beneficial: most of the sample variance in the fields will be due to rare massive halos. A reduction in the sample variance increases the precision with which the power spectrum can be measured and hence the cosmological parameters upon which it depends.
We acknowledge useful discussions with Uros Seljak. WH is supported by the Keck Foundation and NSF-9513835.
# A model for the alternating phase lags associated with QPOs in X-ray binaries
## 1 Introduction
The steadily increasing amount of high-quality, high time-resolution X-ray data from Galactic X-ray binaries, has stimulated vital interest in the characteristics of the rapid variability of these objects. The X-ray emission from both Galactic black-hole candidates (GBHCs) and accreting neutron stars is known to vary on a wide range of time scales, sometimes showing quasi-periodic oscillations (QPOs) (for reviews see, e.g., van der Klis (1995); Cui 1999a ). In GBHCs, such QPOs have been detected at frequencies ranging from several mHz to $`100`$ Hz. These QPOs are most notable when the sources are in the low-hard or in the rare very high state. Recently, correlations between the amplitudes and centroid frequencies of several types of QPOs with the spectral characteristics of the source have been found in some objects (Rutledge et al. (1999), Markwardt, Swank, & Taam 1999, Sobczak et al. (1999), Muno, Morgan & Remillard 1999). A general pattern emerging from these analyses is that the frequency of those types of QPOs which do show a correlation with spectral properties, seems to increase with both the power-law and the disk black-body flux from the source.
A surprising property of some types of QPOs has recently been found in the 67 mHz QPO of GRS 1915+105 (Cui 1999b ), the 0.5 – 10 Hz QPO of the same source (Reig et al. (2000), Lin et al. (2000)), and in the 0.3 – 3 Hz QPO in XTE J1550-564 (Cui, Zhang, & Chen 2000): While the phase lag at the QPO fundamental frequency is negative, the phase lag associated with the first harmonic was positive. In the case of the low-frequency QPO of GRS 1915+105, even three harmonics were detected, and the phase lags were found to alternate between subsequent harmonics (Cui 1999b ). The phase lag associated with the 0.5 – 10 Hz QPO of GRS 1915+105 was found to change sign from positive to negative as the QPO frequency increases above $`2.5`$ Hz (Reig et al. (2000), Lin et al. (2000)).
These peculiar patterns are apparently completely counter-intuitive in the light of currently discussed models for the hard phase lags in X-ray binaries. Models proposed to explain the hard phase lags are either based to the energy-dependent photon escape time in Compton upscattering scenarios for the production of hard X-rays (Kazanas, Hua & Titarchuk 1997, Böttcher & Liang 1998), or due to intrinsic spectral hardening during X-ray flares, e.g. due to decreasing Compton cooling in active regions pushed away from an underlying accretion disk in a patchy-corona model (Poutanen & Fabian (1999)) or due to density perturbations drifting inward through an ADAF toward the event horizon (Böttcher & Liang 1999). These models dealt only with the continuum variability and did not consider the effects of QPOs.
In this paper we will explore the response of a two-phase accretion flow, consisting of an outer, cool, optically thick accretion disk and an inner, hot ADAF (Narayan & Yi (1994), Abramowicz et al. (1995), Chen et al. (1995)), to a periodically varying soft photon input from the cool disk (Liang & Böttcher (2000)). A two-phase accretion flow with an inner ADAF has been found to produce good fits to the photon spectra of, e.g., several Galactic X-ray binaries (e.g., Narayan, McClintock & Yi 1996, Hameury et al. (1997), Esin et al. (1998)), low-luminosity AGN (Quataert et al. (1999)) and giant elliptical galaxies (Fabian & Rees (1995), di Matteo & Fabian (1997), di Matteo et al. (2000)). In §2, we describe the basic model setup according to this two-phase accretion flow, and derive an analytical estimate for the expected phase lags associated with the QPO and the first harmonic applicable in some simplified cases. A short description of the Monte-Carlo simulations used to solve the time-dependent radiation transport problem follows in §3. In §4 we describe a series of simulations designed specifically to explain the peculiar phase lag behavior associated with the 0.5 – 10 Hz QPO in GRS 1915+105. We summarize in §5.
## 2 Model setup and analytical estimates
The basic model setup, as motivated in the introduction, assumes that the inner portion of the accretion flow is described by an ADAF, where the density profile is approximately given by a free-fall profile, $`n_\mathrm{e}(r)\propto r^{-3/2}`$, and the electron temperature is close to the virial temperature and thus scales as $`T_\mathrm{e}(r)\propto r^{-1}`$. We emphasize, however, that other hot two-temperature inner disk models would give similar results. The ADAF exists out to a transition radius $`R_{\mathrm{tr}}`$, beyond which the flow is organized in a standard optically thick, geometrically thin Shakura-Sunyaev disk (Shakura & Sunyaev (1973)). The transition radius is typically expected to be several $`100R_s\lesssim R_{\mathrm{tr}}\lesssim 10^4R_s`$ (Honma (1996); Manmoto et al. (2000); Meyer et al. (2000)), where $`R_s`$ is the Schwarzschild radius. Outside this transition radius, the disk temperature scales as $`T_\mathrm{D}(r)\propto r^{-3/4}`$. With the free-fall density profile given above, the radial Thomson depth of the corona may be written as $`\tau _\mathrm{T}^{\mathrm{ADAF}}=0.71\dot{M}_{17}/(mr_i^{1/2})`$, where $`\dot{M}_{17}`$ is the accretion rate through the ADAF in units of $`10^{17}`$ g s<sup>-1</sup>, $`m`$ is the mass of the central compact object in units of solar masses, and $`r_i`$ is the inner edge of the ADAF in units of Schwarzschild radii.
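For orientation, the radial Thomson depth above is trivially evaluated; the parameter choices in this sketch are illustrative assumptions only:

```python
def tau_T_adaf(mdot_17, m, r_i):
    """Radial Thomson depth, tau = 0.71 * Mdot_17 / (m * sqrt(r_i)).
    mdot_17: ADAF accretion rate in 1e17 g/s; m: mass in Msun;
    r_i: inner edge in Schwarzschild radii (values below are assumed)."""
    return 0.71 * mdot_17 / (m * r_i ** 0.5)

for mdot in (1.0, 10.0, 100.0):
    print(f"Mdot = {mdot:5.1f}e17 g/s -> tau_T = {tau_T_adaf(mdot, 6.0, 1.0):.2f}")
```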
The basic assumption of our baseline model is that the observed QPOs in the X-ray variability of Galactic black-hole candidates are related to small-scale oscillations of the transition radius. Such oscillations in $`r_{\mathrm{tr}}`$ may be caused by variations of the accretion rate, but we defer a detailed study of possible mechanisms driving the oscillations to a later paper. We thus assume that, as a function of time $`t`$, the transition radius oscillates as $`R_{\mathrm{tr}}(t)=r_{\mathrm{tr}}(1+a_r\mathrm{sin}[\omega t])`$, where $`a_r\ll 1`$ is the amplitude of the oscillation and $`\omega =2\pi f_{\mathrm{QPO}}`$ the QPO frequency. Denoting $`\xi \equiv 1+a_r\mathrm{sin}(\omega t)`$, the time-dependent disk flux may then be estimated as $`F_\mathrm{D}(t)\propto [R_{\mathrm{tr}}(t)]^2[T_d(t)]^4\propto (r_{\mathrm{tr}}\xi )^{-1}`$. In the following, we are focusing on an analytical description of the X-ray signals at the QPO fundamental and first harmonic frequencies. Thus, we will expand all expressions up to the 2nd order in $`a_r\mathrm{sin}(\omega t)`$. For the disk flux, this yields
$$F_\mathrm{D}(t)\approx F_{\mathrm{D},0}\left[1+\frac{a_r^2}{2}-a_r\mathrm{sin}(\omega t)-\frac{a_r^2}{2}\mathrm{cos}(2\omega t)\right].$$
(1)
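Eq. (1) is just the second-order Taylor expansion of $`\xi ^{-1}`$, which can be verified symbolically; a short sketch using sympy:

```python
import sympy as sp

a, w, t = sp.symbols('a_r omega t', positive=True)
expansion = sp.series(1 / (1 + a * sp.sin(w * t)), a, 0, 3).removeO()
# rewrite sin^2 via the double-angle identity to match eq. (1)
expansion = expansion.subs(sp.sin(w * t) ** 2, (1 - sp.cos(2 * w * t)) / 2)
print(sp.expand(expansion))
# -> a_r**2/2 - a_r**2*cos(2*omega*t)/2 - a_r*sin(omega*t) + 1
```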
The disk spectrum is approximately a blackbody spectrum at the temperature of the disk at the transition radius, and for the purpose of a simple analytical estimate, we assume that it is monochromatic at a disk photon energy $`E_\mathrm{D}(t)\propto T_\mathrm{D}(r_{\mathrm{tr}}\xi )\propto (r_{\mathrm{tr}}\xi )^{-3/4}`$. For any observed photon energy $`E`$, we define the ratio
$$ϵ(t)\equiv E/E_\mathrm{D}(t)\approx ϵ_0\xi ^{3/4}.$$
(2)
Now, at any given time, a fraction $`f_c\simeq \frac{1}{2}\left(1-\frac{\pi }{4}\right)`$ of the disk radiation will intercept the quasi-spherical inner ADAF-like corona and serve as soft seed photons for Compton upscattering, producing the time-variable hard X-ray emission. Note that additional hard X-ray emission will be produced by Comptonization of synchrotron and bremsstrahlung photons produced in the inner portions of the ADAF. However, since we do not assume significant changes of the structure of the inner ADAF in the course of the small-scale oscillations of the transition radius, this internal synchrotron and bremsstrahlung emission will constitute a quasi-DC flux component which does not contribute significantly to the variability properties of the source. Being mainly interested in the quasi-periodic variability, we do not consider this emission component here. The time-dependent Comptonization response function to the oscillating disk flux in this geometry can be parametrized as
$$h(t,\tau ,ϵ_0)=\mathrm{\Theta }(u)\left\{Au^{\alpha -1}e^{-u/\beta }+Bu^{\kappa -1}\mathrm{\Theta }\left(\frac{2R_{\mathrm{tr}}(\tau )}{c}-u\right)\right\}$$
(3)
(Böttcher & Liang (1998)), where now $`t`$ is the observing time, $`\tau `$ is the time of soft photon emission at the transition radius, $`u\equiv t-\tau `$, $`\mathrm{\Theta }`$ is the Heaviside function, $`A`$ and $`B`$ are normalization factors which are generally energy and time dependent. In the general case, the indices $`\alpha `$ and $`\kappa `$, and the time “constant” $`\beta `$ also depend on photon energy and time ($`\beta `$ depends on time through the time-dependence of the extent of the corona and the disk photon energy).
The observable energy flux at time $`t`$ at photon energy $`E`$ is then given by
$$F(E,t)=f_dF_\mathrm{D}(E,t)+f_c\int _{-\infty }^{t}𝑑\tau \int _0^{\infty }𝑑E_\mathrm{D}F_\mathrm{D}(E_\mathrm{D},\tau )h(t,\tau ,ϵ_0),$$
(4)
where $`f_d\equiv 1-f_c`$. For the purpose of the analytical estimate, we now focus on two regimes for the observed photon energies. At low energies, close to the peak energy of the disk emission, the observed light curve will be dominated by the direct disk emission (the first term on the r.h.s. of Eq. 4) and the contribution from reflection in the outer regions of the ADAF, proportional to $`B\propto c/(2R_{\mathrm{tr}}[\tau ])`$. For an ADAF-like density and temperature profile, we may set $`\kappa =1`$, i. e. the reflection light curve is flat over the light crossing time through the corona. The multiple-scattering term, proportional to $`A`$ in Eq. 3, may be neglected at low photon energies.
The observed signal at high photon energies, $`E\gg E_\mathrm{D}`$, will be dominated by the multiple-scattering term, i. e. we may set $`B=0`$ for high-energy photons. Since multiple Compton scattering results in a power-law spectrum, we parametrize the energy dependence of the normalization of this term as $`A=A_0ϵ_0^{-\gamma }x^{-3\gamma /4}`$, where $`\gamma `$ is the photon spectral index of the Comptonized spectrum, and $`x=1+a_r\mathrm{sin}(\omega \tau )`$. The parameter $`\alpha `$ is generally energy dependent. However, we find that a constant value of $`\alpha =10`$ yields a reasonably good description of the multiple-scattering light curve for an ADAF-like temperature and density profile. To the same degree of approximation, we consider $`\beta `$ as constant for a given energy channel, and note its scaling as $`\beta \propto r_{\mathrm{tr}}`$. Furthermore, $`\beta `$ increases with photon energy. Now, using the $`\delta `$ function assumption for the disk spectrum in the integral in Eq. 4, the observed light curve reduces to
$$F(E,t)\approx f_dF_\mathrm{D}(E,t)+f_cF_{D,0}\int _0^{\infty }𝑑u\left\{A_0ϵ_0^{-\gamma }x^{-\delta }u^{\alpha -1}e^{-u/\beta }+x^{-2}B_0(ϵ_0)\mathrm{\Theta }\left(\frac{2r_{\mathrm{tr}}}{c}-u\right)\right\},$$
(5)
where $`\delta \equiv (3\gamma /4)+1`$.
Expanding all terms up to the first harmonic and writing
$$\frac{F(E,t)}{F_{D,0}}=\eta _0+\eta _1\mathrm{sin}(\omega t)+\eta _2\mathrm{cos}(\omega t)+\eta _3\mathrm{sin}(2\omega t)+\eta _4\mathrm{cos}(2\omega t)$$
(6)
we find
$$\eta _0=f_d\left(1+\frac{a_r^2}{2}\right)+f_c\frac{B}{\omega }\varphi _r\left(1+\frac{3}{2}a_r^2\right)+f_cA_0ϵ_0^{-\gamma }\left(\beta ^\alpha \mathrm{\Gamma }[\alpha ]+\frac{\delta [\delta +1]}{4}a_r^2[C_2(\alpha )+S_2(\alpha )]\right),$$
(7)
$$\eta _1=-f_da_r-2f_c\frac{B}{\omega }a_r\mathrm{sin}\varphi _r-f_cA_0ϵ_0^{-\gamma }\delta a_rC_1(\alpha ),$$
(8)
$$\eta _2=2f_c\frac{B}{\omega }a_r(1-\mathrm{cos}\varphi _r)+f_cA_0ϵ_0^{-\gamma }\delta a_rS_1(\alpha ),$$
(9)
$$\eta _3=-\frac{3}{2}f_c\frac{B}{\omega }a_r^2\mathrm{sin}^2\varphi _r-\frac{1}{2}f_cA_0ϵ_0^{-\gamma }\delta (\delta +1)a_r^2SC(\alpha ),$$
(10)
$$\eta _4=-\frac{1}{2}f_da_r^2-\frac{3}{2}f_c\frac{B}{\omega }a_r^2\mathrm{sin}\varphi _r\mathrm{cos}\varphi _r+\frac{1}{4}f_cA_0ϵ_0^{-\gamma }\delta (\delta +1)a_r^2(S_2[\alpha ]-C_2[\alpha ]),$$
(11)
where $`\varphi _r\equiv \frac{2\omega }{c}r_{\mathrm{tr}}`$, and we have defined the integrals
$$S_1(\alpha )\equiv \int _0^{\infty }𝑑uu^{\alpha -1}e^{-u/\beta }\mathrm{sin}(\omega u)=\beta ^\alpha \mathrm{\Gamma }(\alpha )\frac{\mathrm{sin}(\alpha \mathrm{arctan}[\beta \omega ])}{(1+[\beta \omega ]^2)^{\alpha /2}},$$
(12)
$$C_1(\alpha )\equiv \int _0^{\infty }𝑑uu^{\alpha -1}e^{-u/\beta }\mathrm{cos}(\omega u)=\beta ^\alpha \mathrm{\Gamma }(\alpha )\frac{\mathrm{cos}(\alpha \mathrm{arctan}[\beta \omega ])}{(1+[\beta \omega ]^2)^{\alpha /2}},$$
(13)
$$S_2(\alpha )\equiv \int _0^{\infty }𝑑uu^{\alpha -1}e^{-u/\beta }\mathrm{sin}^2(\omega u)=\frac{\beta ^\alpha \mathrm{\Gamma }(\alpha )}{2}\left\{1-\frac{\mathrm{cos}(\alpha \mathrm{arctan}[2\beta \omega ])}{(1+[2\beta \omega ]^2)^{\alpha /2}}\right\},$$
(14)
$$C_2(\alpha )\equiv \int _0^{\infty }𝑑uu^{\alpha -1}e^{-u/\beta }\mathrm{cos}^2(\omega u)=\frac{\beta ^\alpha \mathrm{\Gamma }(\alpha )}{2}\left\{1+\frac{\mathrm{cos}(\alpha \mathrm{arctan}[2\beta \omega ])}{(1+[2\beta \omega ]^2)^{\alpha /2}}\right\},$$
(15)
$$SC(\alpha )\equiv \int _0^{\infty }𝑑uu^{\alpha -1}e^{-u/\beta }\mathrm{sin}(\omega u)\mathrm{cos}(\omega u)=\frac{\beta ^\alpha \mathrm{\Gamma }(\alpha )}{2}\frac{\mathrm{sin}(\alpha \mathrm{arctan}[2\beta \omega ])}{(1+[2\beta \omega ]^2)^{\alpha /2}}.$$
(16)
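The closed forms (12)–(16) can be checked numerically; the following sketch compares quadrature against the analytic expressions for illustrative values $`\alpha =10`$, $`\beta =0.3`$, $`\omega =2\pi `$ (assumed units):

```python
import numpy as np
from scipy import integrate, special

alpha, beta, omega = 10.0, 0.3, 2 * np.pi
pref = beta ** alpha * special.gamma(alpha)
phi1, phi2 = np.arctan(beta * omega), np.arctan(2 * beta * omega)
r1 = (1 + (beta * omega) ** 2) ** (alpha / 2)
r2 = (1 + (2 * beta * omega) ** 2) ** (alpha / 2)

def lhs(f):
    """Quadrature of int_0^inf u^(alpha-1) exp(-u/beta) f(u) du."""
    val, _ = integrate.quad(
        lambda u: u ** (alpha - 1) * np.exp(-u / beta) * f(u), 0, 30, limit=300)
    return val

checks = {
    "S1": (lhs(lambda u: np.sin(omega * u)), pref * np.sin(alpha * phi1) / r1),
    "C1": (lhs(lambda u: np.cos(omega * u)), pref * np.cos(alpha * phi1) / r1),
    "S2": (lhs(lambda u: np.sin(omega * u) ** 2),
           pref / 2 * (1 - np.cos(alpha * phi2) / r2)),
    "C2": (lhs(lambda u: np.cos(omega * u) ** 2),
           pref / 2 * (1 + np.cos(alpha * phi2) / r2)),
    "SC": (lhs(lambda u: np.sin(omega * u) * np.cos(omega * u)),
           pref / 2 * np.sin(alpha * phi2) / r2),
}
for name, (num, ana) in checks.items():
    print(f"{name}: quadrature = {num:.6e}, closed form = {ana:.6e}")
```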
Defining the phases, relative to the $`R_{\mathrm{tr}}`$ oscillation, of the signal in a given energy band at the QPO frequency and the first harmonic, respectively, by
$$F(t)=\eta _0+\xi _1\mathrm{sin}(\omega t+\mathrm{\Delta }_Q)+\xi _2\mathrm{sin}(2\omega t+\mathrm{\Delta }_H)$$
(17)
with $`\mathrm{tan}\mathrm{\Delta }_Q=\eta _2/\eta _1`$ and $`\mathrm{tan}\mathrm{\Delta }_H=\eta _4/\eta _3`$, we find for a low-energy channel (with mean energy close to the peak energy of the disk emission spectrum):
$$\mathrm{tan}\mathrm{\Delta }_Q(\mathrm{LE})\approx -\frac{2f_c\frac{B}{\omega }(1-\mathrm{cos}\varphi _r)}{2f_c\frac{B}{\omega }\mathrm{sin}\varphi _r+f_d},$$
(18)
$$\mathrm{tan}\mathrm{\Delta }_H(\mathrm{LE})\approx \frac{f_d+\frac{3}{2}f_c\frac{B}{\omega }\mathrm{sin}(2\varphi _r)}{\frac{3}{2}f_c\frac{B}{\omega }(1-\mathrm{cos}[2\varphi _r])}.$$
(19)
Assuming that the direct disk emission ($`f_d`$) is strongly dominating at this energy, and that both $`\eta _3`$ and $`\eta _4`$ are negative in this case, we have $`\mathrm{\Delta }_Q(\mathrm{LE})\approx 0`$, and $`\mathrm{\Delta }_H(\mathrm{LE})\approx -\pi /2`$.
For high photon energies, dominated by multiple Compton scattering, we find
$$\mathrm{\Delta }_Q(\mathrm{HE})=-\alpha \mathrm{arctan}(\beta \omega )+k_0\pi ,$$
(20)
$$\mathrm{\Delta }_H(\mathrm{HE})=\alpha \mathrm{arctan}(2\beta \omega )+k_1\pi ,$$
(21)
where $`k_i\in \{0,\pm 1\}`$ are determined by the signs of $`\eta _1`$, …, $`\eta _4`$. With the definition of the phases $`\mathrm{\Delta }`$ in Eq. 17, the phase lags are given by $`\mathrm{\Delta }\varphi =\mathrm{\Delta }(\mathrm{LE})-\mathrm{\Delta }(\mathrm{HE})`$. Throughout this paper we adopt the convention that positive phase and time lags correspond to hard photons lagging the soft ones.
Intuitively, phase and time lags between the disk-radiation dominated low-energy photons and the Compton-upscattering dominated high-energy photons are determined by the ratio between the QPO period $`T_{\mathrm{QPO}}`$ and the time required for soft photons to reach the inner ADAF region and be Compton upscattered to hard X-ray energies. If this light-travel and diffusion time is longer than half a QPO period, but less than the QPO period itself, the phase lag at the QPO fundamental frequency — which physically is still a hard lag — appears as a soft lag due to the periodicity of the light curves. The same argument holds for the $`n^{th}`$ harmonic of the QPO, where the period now corresponds to $`1/(n+1)`$ of the QPO fundamental period.
For a very large transition radius, the direct disk emission may not contribute significantly to the X-ray flux. In that case, the phase lag between two energy channels will be dominated by the difference in diffusion time scales $`\beta `$, and we expect
$$\mathrm{\Delta }\varphi _{\mathrm{QPO}}\approx \alpha (\mathrm{arctan}[\beta _j\omega ]-\mathrm{arctan}[\beta _i\omega ]),$$
(22)
$$\mathrm{\Delta }\varphi _{1.\mathrm{harm}.}\approx \alpha (\mathrm{arctan}[2\beta _i\omega ]-\mathrm{arctan}[2\beta _j\omega ]).$$
(23)
for two energy channels $`E_i<E_j`$.
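A short numerical illustration of eqs. (22) and (23): for an assumed pair of diffusion times $`\beta _i<\beta _j`$ and $`\alpha =10`$ as in the text, the lag at the fundamental and at the first harmonic take opposite signs, and wrapping the phases into $`(-\pi ,\pi ]`$ produces further sign changes as the QPO frequency grows (the $`\beta `$ values below are invented for illustration):

```python
import numpy as np

def wrap(phase):
    """Report phases in (-pi, pi], as measured lags are."""
    return (phase + np.pi) % (2 * np.pi) - np.pi

def lags(alpha, beta_i, beta_j, f_qpo):
    w = 2 * np.pi * f_qpo
    qpo = alpha * (np.arctan(beta_j * w) - np.arctan(beta_i * w))    # eq. (22)
    harm = alpha * (np.arctan(2 * beta_i * w) - np.arctan(2 * beta_j * w))  # eq. (23)
    return wrap(qpo), wrap(harm)

# alpha = 10 as quoted; beta_i = 0.01 s, beta_j = 0.02 s are assumptions
for f in (0.5, 1.0, 2.5, 6.7):
    q, h = lags(10.0, 0.01, 0.02, f)
    print(f"f_QPO = {f:4.1f} Hz: QPO lag {q:+.2f} rad, 1st-harmonic lag {h:+.2f} rad")
```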
## 3 Monte Carlo simulations
The estimates derived in the previous section were based on several major simplifications in order to keep the problem analytically trackable. In order to test the validity of our approximations and to provide results under more realistic assumptions, we have simulated the radiation transfer in our oscillating ADAF/disk model system, using our time-dependent Monte-Carlo Comptonization code. For a detailed description of the code and its capabilities, see Böttcher & Liang (1998, 1999).
In our simulations, we approximate the accretion disk emission by a blackbody spectrum with the temperature of the disk inner edge at any given time, and its time-dependent luminosity determined as described in the previous section. A fraction $`f_c`$ of the time dependent disk emission enters the spherical Comptonizing region at its outer boundary and serves as seed photon field for Comptonization. The ADAF is characterized by an $`r^{-3/2}`$ density and $`r^{-1}`$ temperature structure, and the electron temperature is normalized to $`kT_e(R_s)=500`$ keV, i. e. the electrons become relativistic at the event horizon. The event horizon is treated as an absorbing inner boundary. The corona has the specified radial Thomson depth of $`\tau _\mathrm{T}^{\mathrm{ADAF}}`$ when the ADAF/disk transition is located at $`r_{\mathrm{tr}}`$. As $`r_{\mathrm{tr}}`$ oscillates, we leave the density and temperature structure of the ADAF inside $`R_{\mathrm{tr}}(t)`$ unchanged and set the coronal electron density equal to zero outside $`R_{\mathrm{tr}}(t)`$.
The resulting light curves, generally consisting of the sum of direct disk emission and the Comptonized emission from the corona, are sampled in 5 photon energy bins and over 512 time steps of $`\mathrm{\Delta }t\approx 0.05T_{\mathrm{QPO}}`$. For several test cases, we have run identical problem simulations with different time steps, in order to verify that our results are independent of the time step chosen. The energy-dependent light curves are then Fourier transformed, using an FFT algorithm (Press et al. (1992)), and the power density spectra and phase lags are calculated.
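The Fourier analysis step can be sketched as follows; synthetic light curves with built-in lags stand in for the Monte-Carlo output (an assumption for illustration), and the sign follows the convention of §2, positive meaning hard lags soft:

```python
import numpy as np

n, n_cyc = 512, 25                                # 512 steps over 25 QPO cycles
t = np.linspace(0.0, n_cyc, n, endpoint=False)    # time in units of T_QPO
w = 2 * np.pi                                     # QPO angular frequency

soft = 1 + 0.30 * np.sin(w * t) + 0.10 * np.sin(2 * w * t)
hard = 1 + 0.30 * np.sin(w * t - 0.4) + 0.10 * np.sin(2 * w * t + 0.7)

freq = np.fft.rfftfreq(n, d=n_cyc / n)            # in units of f_QPO
cross = np.fft.rfft(soft) * np.conj(np.fft.rfft(hard))
for f in (1.0, 2.0):
    k = np.argmin(np.abs(freq - f))
    print(f"phase lag at {f:.0f} x f_QPO: {np.angle(cross[k]):+.2f} rad")
# -> +0.40 rad (hard lag at the fundamental), -0.70 rad (soft lag at harmonic)
```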
In order to test our Monte Carlo code against the analytical estimates derived in the previous section, we did a series of simulations in which the accretion disk temperature at the equilibrium transition radius and the QPO frequency were artificially held constant among different simulations, and at values such that the emission in the 2 – 5 keV reference channel was always dominated by direct disk emission, while the highest two energy channels (15 – 40 keV and 40 – 100 keV, respectively) were dominated by Compton-upscattered radiation from the corona. In that case, Eqs. (18) – (21) with constant values of $`\alpha `$ and $`\beta /r_{\mathrm{tr}}`$ may be used in order to approximate the expected phase lags. In Fig. 1, the phase lags at the QPO fundamental and first harmonic frequencies as measured in our simulations are compared to these analytic estimates. The figure demonstrates that the two approaches are in excellent agreement, indicating that our numerical procedure reliably reproduces the time-dependent radiation transport properties of the model system.
## 4 The 0.5 – 10 Hz QPOs in GRS 1915+105
As briefly mentioned in the introduction, the variable-frequency QPO at 0.5 – 10 Hz observed in the low-hard state of GRS 1915+105 exhibits a very peculiar phase lag behavior (Reig et al. (2000); Lin et al. (2000)): If the source has a rather hard (photon index $`\gamma \approx 2.7`$) photon spectrum and low X-ray flux, the QPO frequency is low, $`f_{\mathrm{QPO}}\lesssim 2`$ Hz, and the phase lags associated with both the QPO fundamental and the first harmonic frequencies are positive. As the X-ray flux increases and the photon spectrum becomes softer, the QPO centroid frequency increases, and the phase lag at the QPO fundamental frequency decreases and changes sign at $`f_{\mathrm{QPO}}\approx 2.5`$ Hz. At the same time, the phase lag associated with the first harmonic frequency increases. As the source becomes more X-ray luminous and the spectrum becomes softer, the QPO frequency increases up to $`\sim 10`$ Hz, and the phase lag associated with the QPO fundamental continues to decrease, while the phase lag at the first harmonic frequency remains positive and does not show any obvious correlation with spectral parameters or the QPO frequency. Spectral fits to GRS 1915+105 with a disk blackbody + power-law model to different spectral states along this sequence revealed that the inner disk temperature increases from $`\sim 0.7`$ keV to $`\sim 1.5`$ keV as the QPO frequency increases from 0.5 to 10 Hz (Muno et al. (1999)).
In order to model this behavior, we parametrize the disk temperature at the transition radius, $`T_\mathrm{D}(r_{\mathrm{tr}})`$, and the QPO frequency as power-laws in $`r_{\mathrm{tr}}`$. Assuming $`T_\mathrm{D}(r_{\mathrm{tr}})\propto r_{\mathrm{tr}}^{-3/4}`$, a simple power-law relation spanning the observed ranges of $`0.7\mathrm{keV}\lesssim kT_\mathrm{D}\lesssim 1.5\mathrm{keV}`$ and $`0.5\mathrm{Hz}\lesssim f_{\mathrm{QPO}}\lesssim 10\mathrm{Hz}`$ (Muno et al. (1999)) yields a scaling $`f_{\mathrm{QPO}}\propto T_\mathrm{D}^4\propto r_{\mathrm{tr}}^{-3}`$. This scaling may indicate that the transition radius oscillations are related to a modulation of the disk evaporation process, responsible for the transition to the inner ADAF, by a secular instability. The frequency of such modulations might be proportional to the inverse of the evaporation time scale, $`\tau _{\mathrm{evap}}^{-1}`$, which is expected to be proportional to the disk surface flux, $`dL/dA\propto T_\mathrm{D}^4\propto r_{\mathrm{tr}}^{-3}`$. Thus, the assumed scaling laws of the disk temperature and QPO frequency with the transition radius are physically plausible.
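As a quick consistency check of the adopted scalings (a sketch, normalized to the high-frequency end of the sequence quoted below, 10 Hz and 1.5 keV): increasing $`r_{\mathrm{tr}}`$ by a factor of about 2.7 simultaneously carries $`f_{\mathrm{QPO}}`$ from 10 Hz down to $`\sim 0.5`$ Hz and $`kT_\mathrm{D}`$ from 1.5 keV down to $`\sim 0.7`$ keV, i.e. both observed ranges are spanned together:

```python
def f_qpo_hz(x):          # x = r_tr / r_tr(high-frequency end)
    return 10.0 * x ** -3        # f_QPO ~ r_tr^-3

def kT_disk_kev(x):
    return 1.5 * x ** -0.75      # T_D ~ r_tr^-3/4

for x in (1.0, 1.5, 2.0, 2.7):
    print(f"r_tr scaled by {x:.1f}: f_QPO = {f_qpo_hz(x):5.2f} Hz, "
          f"kT_D = {kT_disk_kev(x):.2f} keV")
```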
For the high-QPO-frequency end of the sequence mentioned above, we choose a transition radius of $`r_{\mathrm{tr}}=6\times 10^8`$ cm, corresponding to $`\sim 700R_s`$ for a $`3M_{\mathrm{\odot }}`$ black hole or the theoretical limit of $`\sim 340R_s`$ found by Meyer et al. (2000) for a $`6M_{\mathrm{\odot }}`$ black hole (unfortunately, the mass of the central object in GRS 1915+105 is not known due to the lack of an optical counterpart). The QPO frequency (i.e. the transition-radius oscillation frequency) is 10 Hz, and the disk temperature at the transition radius is 1.5 keV. Assuming $`M=6M_{\mathrm{\odot }}`$ and $`r_{tr}=340R_s`$, this is in agreement with the cool, gas-pressure dominated disk model for $`\alpha ^{1/5}(L/0.057L_{\mathrm{Edd}})^{3/10}\simeq 1`$, where $`\alpha `$ is the viscosity parameter.
Using the Monte-Carlo simulations described in the previous section, we calculate the photon-energy dependent light curves, Fourier transform them, and calculate the Fourier-frequency dependent phase lags from the cross-correlation functions. For comparison with the results of Lin et al. (2000) we sample the light curves in the energy channels 0.1 – 3.3 keV , 3.3 – 5.8 keV , 5.8 – 13 keV , 13 – 41 keV , and 41 – 100 keV , and focus on the phase lags between channels and . We perform a series of 6 simulations, each with different values of $`r_{\mathrm{tr}}`$, implying different QPO frequencies and disk temperatures according to the scaling laws quoted above.
Two representative examples of the resulting power and phase lag spectra are shown in Figs. 2 and 3 for a high-frequency (6.7 Hz) case with alternating phase lags and a low-frequency (1.1 Hz) case with positive phase lags at both fundamental and first harmonic frequencies, respectively.
The results of the complete series of simulations are listed in Table 1 and illustrated in Fig. 4. From Fig. 4 we see that this model reproduces the key features of the peculiar QPO phase lag behavior observed in GRS 1915+105, in particular the change of sign of the phase lag associated with the QPO as the QPO centroid frequency increases, and the alternating phase lags between QPO fundamental and first harmonic at high QPO frequencies. We point out that the measured values of the hard lags at the first harmonic frequency for $`f_{\mathrm{QPO}}\lesssim 2`$ Hz are significantly smaller than predicted by our model. However, this is expected because, for realistic signal-to-noise ratios, the (random-phase) Poisson noise contaminating the actual data always suppresses high phase lag values, but leaves small values of $`|\mathrm{\Delta }\varphi |`$ almost unaffected (Zhang 2000, private communication).
Table 1 and Fig. 4 also show the predicted phase lags at the second harmonic of the QPO, which has not been observed yet. We expect that, if a second harmonic is found in any future or archived observation of GRS 1915+105, it should be associated with a negative phase lag, at least for $`f_{\mathrm{QPO}}\gtrsim 2`$ Hz.
We need to point out that the more complicated case discussed in this section is not directly comparable to the test cases illustrated in Fig. 1 since we are now varying the accretion disk temperature and the QPO frequency when varying the transition radius. This also changes the parameters $`\alpha `$ and $`\beta `$ entering the analytical estimates as a function of $`r_{\mathrm{tr}}`$ in a non-trivial way.
## 5 Summary and conclusions
We have investigated the time-dependent X-ray emission from an oscillating two-phase accretion flow, consisting of an outer, cool, optically thick accretion disk, and an inner ADAF. Based on this model, we are proposing an explanation for the peculiar phase lag behavior associated with the variable-frequency 0.5 – 10 Hz QPOs observed in GRS 1915+105. In particular, the changing sign of the phase lag at the QPO fundamental frequency as the QPO frequency increases, and the alternating phase lags between QPO fundamental and first harmonic frequencies in the case of a high QPO frequency are well reproduced by this model. The relation between QPO frequency, transition radius and inner disk radius can be interpreted physically, if the QPO is triggered by a modulation of the transition radius by a secular instability affecting the disk evaporation responsible for the disk/ADAF transition. Based on our results for GRS 1915+105, we predict that, if a second harmonic to the variable-frequency QPO at $`f_{\mathrm{QPO}}\gtrsim 2`$ Hz is detected, it should be associated with a negative phase lag.
We found that oscillations at the transition radius can naturally lead to apparent soft lags associated with the QPO frequency, if the photon diffusion time related to the production of hard X-rays is longer than half a QPO period. We point out that it is very unlikely that the same mechanism is responsible for the alternating phase lags observed in the 67 mHz QPOs and its harmonics since this would require a transition radius at $`r_{\mathrm{tr}}\sim 10^5R_s`$. This large size of the ADAF is implausible for a non-quiescent state. The 67 mHz QPO alternating phase lag pattern has been observed in a high-flux state of GRS 1915+105, in which we would generally expect the disk to extend down to rather small radii.
One caveat of the model we have proposed here is that it requires a rather large accretion rate in order to produce the observed disk luminosity from GRS 1915+105. This, in turn, requires either that a large fraction of the accreted mass is lost to the collimated outflow of the radio jets, or that the ADAF has a very low radiative efficiency in order not to overproduce the hard power-law emission from the hot ADAF. However, since GRS 1915+105 is known to have very powerful, superluminal radio jets (Mirabel & Rodríguez (1994)), it is plausible to assume that, indeed, a significant fraction of the mass accreted through the outer disk is not advected through the ADAF, but is powering the radio jets.
The work of MB is supported by NASA through Chandra Postdoctoral Fellowship Award Number PF 9-10007, issued by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS 8-39073.
# Cosmic Microwave Background at low frequencies
## 1. Introduction
The coming decade may well see a monumental enhancement in observing capabilities at low radio frequencies if, for example, the proposed square-kilometre array (SKA) is constructed. The quantum leap in the total collecting area and in the numbers of antenna elements constituting the array could move certain key observational proposals from the realm of ‘dreams’ to reality. In this context — and while the telescope configuration and antenna element design are still being debated — it is perhaps timely that key and challenging experiments that may be of relevance to the emerging opportunity are highlighted so that the instrument specifications may be tailored to eventually allow their endeavor.
The discovery of the existence of the cosmic microwave background (CMB), followed by refinements in measurements of its characteristics leading to the precise $`COBE`$ observations of its spectrum and anisotropies, have been a major constraint and discriminant between models of cosmology and structure formation. Almost all measurements that have been valuable in constraining cosmology theories have been made to-date at cm, mm and sub-mm wavelengths. The long wavelength measurements of the CMB have been plagued by large errors owing to the bright Galactic background temperatures and high levels of extragalactic foreground discrete source confusion.
In this article, I shall attempt to highlight some aspects of CMB research and discuss their relevance to low-frequency radio astronomy in the context of the changed scenario expected in the coming decade if the SKA is constructed. Possible observing strategies and calibration schemes are introduced; implications for the specifications for the SKA are touched upon.
## 2. The CMB temperature at low frequencies
The $`COBE`$–$`FIRAS`$ experiment (Fixsen et al. 1996) measured the temperature of the cosmic microwave background (CMB) in the frequency range 70–640 GHz: no significant distortions from a Planckian form were detected and the best-fit thermodynamic temperature of the cosmic background was determined to be $`2.728\pm 0.004`$ K.
Within the framework of the hot big bang cosmology, structure in the universe is believed to have formed via gravitational instabilities from primordial ‘seed’ density perturbations. The damping of sub-horizon scale pressure waves — as perturbations in the radiation field enter the horizon over the redshift interval about $`5\times 10^6`$ to $`5\times 10^5`$ — is expected to inevitably leave an imprint as a $`\mu `$-distortion in the CMB (Daly 1991; Hu, Scott, & Silk 1994). The wavelength $`\lambda _{max}`$ at which maximum distortion occurs is approximately $`\lambda _{max}\simeq 2.2(\mathrm{\Omega }_bh^2)^{-2/3}`$ cm (Burigana, Danese, & De Zotti 1991); for $`\mathrm{\Omega }_bh^2\simeq 0.019`$ (Burles, Nollett & Turner 1999), we may expect the temperature distortion to be a maximum at 30 cm wavelength. The value of the chemical potential $`\mu `$ — as a consequence of the damping of primordial density perturbations — may be as small as $`10^{-8}`$ if the $`COBE`$-$`DMR`$ normalized matter power spectrum has an index $`n=1`$, but could be as large as $`10^{-4}`$ if $`n\simeq 1.6`$ (Hu, Scott & Silk 1994): correspondingly, the maximum temperature distortion may be as large as 0.01 K. Separately, any release of radiant energy in the redshift interval $`5\times 10^6>z>5\times 10^5`$, perhaps owing to the decay of particles with half lives in this range of cosmic times, would also result in $`\mu `$ distortions (Silk & Stebbins 1983).
The $`COBE`$-$`FIRAS`$ measurements of the CMB spectrum limit $`\mu `$ to be less than $`9\times 10^{-5}`$ (Fixsen et al. 1996). This implies that we may not expect a deviation exceeding about 0.008 K at metre wavelengths as a consequence of any $`\mu `$ distortion. Interestingly, these constraints placed on $`\mu `$ by the extremely precise $`COBE`$-$`FIRAS`$ measurements are at least an order of magnitude more valuable than the results from long-wavelength measurements of the CMB spectrum, although the latter may have been made at frequencies where the deviation is a maximum.
Recent measurements of the CMB temperature at frequencies close to 1 GHz include the 600 MHz estimate of $`T_{CMB}=3.0\pm 1.2`$ K by Sironi et al. (1990), the 1400 MHz estimate of $`T_{CMB}=2.65\pm 0.3`$ K by Staggs et al. (1996) and the 1470 MHz estimate of $`T_{CMB}=2.26\pm 0.19`$ K by Bensadoun et al. (1993). A more recent estimate, albeit with larger errors, is that $`T_{CMB}=3.45\pm 0.78`$ K at 1280 MHz (Raghunathan & Subrahmanyan 2000).
The large uncertainties in long-wavelength measurements of the absolute brightness of the CMB are due, in part, to the relatively high brightness of the Galactic background at these wavelengths, the larger size of the receiving elements, the greater difficulty in cooling them to cryogenic temperatures and consequently the greater contribution from losses in the antenna and associated feed. The ground-based measurements also suffer from uncertainty associated with estimations of the atmospheric contribution. Improvements in estimates of $`T_{CMB}`$ at low frequencies may come from improved methods of reducing losses and/or developing methods of cancelling unwanted contributions, making multifrequency measurements over an extremely wide frequency range, and by placing the apparatus above the atmosphere.
In this context it may be mentioned that the experimental setup of Raghunathan & Subrahmanyan used a novel technique of selecting cable lengths to cancel, via destructive interference, an unwanted contribution from the cold load connected to the third port of the circulator. The $`DIMES`$ project (see the website at ceylon.gsfc.nasa.gov/DIMES), with a wide spectral coverage from 2 to 100 GHz and based on a satellite platform, may be the kind of experiment that could improve sensitivity to $`\mu `$ distortions by more than an order of magnitude.
## 3. Fine structure in the CMB spectrum
The cooling of the primeval plasma in the expanding universe is expected to have led to recombination at a temperature near 3000 K (Peebles 1968; Jones & Wyse 1985). It has been argued (Bartlett & Stebbins 1991) that measurements to-date of the CMB spectrum (that limit free-free emission from an ionized intergalactic medium and $`y`$ distortions arising from Compton scattering by hot electrons in such a medium) do not require a neutral period and hence do not rule out the possibility that the universe remained ionized throughout its history. However, the upper limits on the redshift of re-ionization (post recombination) derived from the position of the peak in the spectrum of CMB anisotropies (Griffiths, Barbosa, & Liddle 1999) indicate that the universe was perhaps largely neutral beyond a redshift of about 40: the data support primordial recombination. In this context, a direct observational probe of the recombination epoch would be valuable: the recombination lines that are inevitably generated during recombination are a potential probe.
The dominant additive recombination-line feature in the CMB spectrum is the L<sub>α</sub> hydrogen line, this is expected to manifest in a spectral ‘hump’ at about 0.014 cm wavelength and a broad ‘continuum’ extending to higher frequencies (Burdyuzha & Chekmezov 1994).
Of interest to low-frequency radio astronomy are the hydrogen (and helium) recombination lines corresponding to transitions between highly excited states that may be visible today at metre and centimetre wavelengths owing to the extraordinary redshift ($`z\sim 1100`$) of the epoch of recombination. As pointed out by Dubrovich & Stolyarov (1995) — continuing on the earlier work by Dubrovich (1975) and Bernshtein, Bernshtein, & Dubrovich (1977) — the number of photons in transitions between adjacent levels as well as the ratio of the distance between consecutive lines to the line widths both decrease with increasing wavelength. As a consequence, the detection becomes extremely challenging at low frequencies. However, at least at cm wavelengths, the extragalactic background light is dominated by the CMB whose intensity varies inversely as the square of the wavelength; therefore, the ratio of line strength to background continuum may be expected to have a maximum at wavelengths 20–60 cm. At these low frequencies, the line is expected (Dell’Antonio & Rybicki 1993; Dubrovich & Stolyarov 1995) to appear as spectral features with peak-to-peak brightness of about 0.1 $`\mu `$K. The lines would be extremely broad because recombination is not ‘instantaneous’: the redshift interval $`\mathrm{\Delta }z/z`$ over which the ionization fraction changes is about 20 per cent and is significantly greater than that obtained assuming quasi-equilibrium (Saha) ionization because recombination is ‘stalled’ and ‘regulated’ by the increased temperature in the Ly-$`\alpha `$ line as a consequence of recombination itself. This results in the spectral lines manifesting as spectral ripples with period 20 per cent of the observing frequency.
A detection of these spectral lines in the CMB would, apart from clarifying the thermal history of the baryons, place constraints on the baryon density and the mean matter density. These spectral features are of extremely low intensity; however, if radiant energy release in the early universe resulted in deviations in the radiation background from the Planckian form, the spectral features may have an increased prominence (Lyubarsky & Sunyaev 1983). In this case their observation may provide information on early energy release.
### 3.1. The observation of wideband CMB spectral features
The detection of recombination lines or any other spectral features that were added to the CMB in the early universe is an extremely difficult and challenging experiment of the ‘high-risk high-gain’ type. It may be noted here that the spectral variations in temperature are expected to be smaller than the angular anisotropy in the CMB.
Clearly, the measurements will have to be made with a ‘total-power’ type telescope; any interferometer would resolve the uniform sky signal. Conventionally, ‘total-power’ sky spectra are obtained from auto-correlations of the single-dish signals measured over a range of delays. However, the receiver-noise component may be eliminated from the spectra by the use of ‘correlation receivers’: this will require a built-in capability for splitting the signal from the feeds before the front-end low-noise amplifiers. Alternately, ‘total-power’ sky spectra may be measured using an interferometer array by scalar averaging the cross-power spectral amplitudes.
#### Sensitivity
The detection requires extremely high brightness sensitivity: the spectral features may be only 0.1 $`\mu `$K and a factor $`10^{-8}`$–$`10^{-9}`$ of the system temperature. It is obvious that, despite the large bandwidths involved, the signals cannot be detected in any reasonable time using a single element telescope. However, because the signals are isotropic to a high degree, the antenna size is not directly a determinant of the sensitivity. Therefore, we may detect the signal by averaging the sky spectra from a large number of receivers behind relatively-small spatially-separated antenna elements. If we assume that integration times of order 100 hr are realistic (very much larger times would make the debugging of the system difficult), and that signals are to be detected with spectral resolution of order 50 MHz at 1 GHz, the number of independent receivers is of order $`10^3`$ for a 3-$`\sigma `$ detection. Perhaps the elements of an SKA may be designed to attempt this experiment.
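The receiver-count estimate follows from the radiometer equation; in the sketch below the system temperature is an assumption (the result scales as $`T_{sys}^2`$):

```python
import numpy as np

T_sys = 5.0            # K -- assumed (cooled receivers); N scales as T_sys**2
d_nu = 50e6            # Hz, spectral resolution quoted above
t_int = 100 * 3600.0   # s, 100 hr per receiver
signal = 0.1e-6        # K, expected line amplitude

sigma_one = T_sys / np.sqrt(d_nu * t_int)   # radiometer rms of one receiver
N = (3.0 * sigma_one / signal) ** 2         # averaging N receivers: rms / sqrt(N)
print(f"per-receiver rms = {sigma_one * 1e6:.2f} uK  ->  N ~ {N:,.0f}")
```

With these assumptions the per-receiver rms is $`\sim 1`$ $`\mu `$K and $`N`$ comes out at order $`10^3`$, as quoted.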
#### Wide bandwidth
A second cause for difficulty with this experiment is the wide bandwidth required for the detection of these lines; receivers with low noise temperatures over a wide bandwidth exceeding 20 percent of the observing frequency are required. The receiver characteristics would undoubtedly change over this band and will have to be calibrated; however, spurious broadband spectral features may appear in the received spectrum as a result of any changes in the sky reception pattern over the band. Wide bandwidth observations, particularly at low frequencies, are also susceptible to radio interference: the experiment will require receivers with high dynamic range and interference excising/cancelling techniques.
#### Calibration
Perhaps the greatest challenge would be the accurate calibration of the instrument response. Standard calibration techniques like beam switching fail because the signals are isotropic. Frequency switching may not be useful because no spectral region is line-free. Ideally what one would like to do to calibrate the instrument is to make separate observations with only the interesting signal ‘switched off’.
A possible calibration strategy is to make the CMB spectral measurements using the individual elements of an array of telescopes, in ‘total power’ mode, and ‘switch off’ the uniform sky signal, for the purpose of calibration, by operating the array as an interferometer. The individual antenna spectral responses are determined from the spectral visibilities obtained while observing a strong, largely unresolved source, in interferometric mode. Such an approach may also allow calibration measurements to be made simultaneously with the total-power CMB spectral measurements: time variations in the band-pass characteristics would not then be a limiting factor.
In Fig. 1, I show an example where this calibration strategy has been applied to spectra of Galactic HI. These spectra were obtained with the Australia Telescope Compact Array (ATCA). Fig. 1(a) shows auto-correlation spectra obtained with the individual antennae, Fig. 1(b) shows the scalar-averaged cross-power spectra obtained by taking the spectral visibilities (these have been vector averaged on-line over 10 s; within the narrow spectral bandwidth this averaging time is not enough to detect the sources in the field with signal-to-noise exceeding unity) on the interferometer baselines and averaging the amplitude spectra off-line. Fig. 1(c) shows the calibration spectrum obtained from the spectral visibilities measured on an unresolved calibrator. Fig. 1(d) and (e) show, respectively, the calibrated auto-correlation and scalar-averaged cross-power spectra. The signal-to-noise in the scalar-averaged spectra is worse because the scalar averaging of amplitudes was done only off-line: the on-line 10 s averaging was performed vectorially in the individual 2-kHz channels and, as a consequence, the amplitudes were reduced by a factor of 200.
A disadvantage of such a scheme is that the calibration does not include the receiver noise characteristics; however, if the total-power spectra are obtained using correlation receivers, as discussed above, the spectra will not include a contribution from receiver noise. Another negative aspect of such an approach is that the calibration will require a strong source: the antenna temperature due to the calibrator source will have to be comparable to the system temperature if the calibration time should be comparable to the observing time.
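The gain algebra behind this calibration scheme can be sketched with synthetic spectra: the total-power spectrum of antenna $`i`$ carries its bandpass $`g_i^2(f)`$, while cross-spectrum amplitudes on a strong unresolved calibrator give $`g_ig_jS_{cal}`$, from which each antenna’s bandpass follows by the usual closure product. All inputs below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
nant, nchan = 4, 64
sky = 2.7 + 1e-3 * np.sin(np.linspace(0, 10, nchan))  # toy sky spectrum, K
g = 1 + 0.2 * rng.standard_normal((nant, nchan))      # per-antenna bandpasses
S_cal = 10.0                                          # flat calibrator spectrum

auto = g ** 2 * sky                                   # total-power spectra
B = lambda i, j: np.abs(g[i] * g[j]) * S_cal          # calibrator cross amplitudes

# closure product: B01 * B02 / (B12 * S_cal) = g0^2
g0_sq = B(0, 1) * B(0, 2) / (B(1, 2) * S_cal)
print(np.max(np.abs(auto[0] / g0_sq - sky)))          # ~0: toy sky recovered
```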
Alternate methods of calibrating spectra, that may be worth exploring, are (i) using the ‘Moon’ to block the sky spectrum (Shaver et al. 1999; Stankevich, Wielebinski, & Wilson 1970) and (ii) equalizing the instrument bandpass response by observing each sky spectral component through every spectrometer channel. The former may impose severe requirements on the linearity of the receivers and the latter method will require ‘fine-tuning’ capability in the first LO in the receiver chain.
## 4. Angular anisotropies in the CMB temperature
Most measurements of small-angle CMB anisotropy are being done at high radio frequencies (in the 15–90 GHz range) where Galactic and extragalactic contaminants are a minimum. However, the high-frequency detections of small-angle anisotropy, which have to-date been Sunyaev-Zeldovich (S-Z) decrements towards clusters of galaxies, have been difficult measurements made with long integrations and using telescopes with small fields-of-view. All-sky images of CMB anisotropies with 10’s of arcmin resolution are expected to become available in the coming decade from the $`MAP`$ and $`PLANCK`$ satellite missions; however, they are expected to detect S-Z anisotropies from only the relatively high-mass and nearby S-Z clusters. The next generation of high-brightness-sensitive imaging arrays, like the CBI and AMIBA, which are specifically designed for surveys for S-Z clusters, are also expected to cover only small sky areas because of their small fields of view.
Owing to the upturn in the source counts at low flux density levels, the $`\mu `$Jy source counts at GHz frequencies are approximately (see, for example, Fig. 3 in Windhorst et al. 1993)
$$n(S)=10^8S_{\mu \mathrm{Jy}}^{-2.2}f_{GHz}^{-0.8}\mathrm{arcmin}^{-2}\mathrm{Jy}^{-1}.$$
(1)
In images made with a beam of FWHM $`\theta `$ arcmin, these sources may be expected to result in a 1-$`\sigma `$ confusion rms given by
$$\mathrm{\Delta }S=40\theta ^{1.7}f_{GHz}^{-0.7}\mu \mathrm{Jy},\mathrm{or}\mathrm{\Delta }T=14\theta ^{-0.3}f_{GHz}^{-2.7}\mathrm{mK}.$$
(2)
The arcmin-resolution anisotropy searches made at frequencies $`<10`$ GHz — including those with the VLA and the ATCA — have been limited by foreground discrete-source confusion. In this context, it is useful to ask whether a future low-frequency telescope, like the SKA, which will have the capability of imaging wide fields of view, could make large-area surveys for S-Z clusters and image the decrements with sufficient angular resolution to detect any sub-structure. The answer will depend on whether the enormous improvement in flux sensitivity in an SKA-type telescope will enable surveys for low-surface-brightness S-Z clusters below the confusion limit by detecting and subtracting a large part of the discrete-source confusion.
If we assume that all discrete sources above a lower flux density limit of $`S_m\mu `$Jy are subtracted from the sky images, the residual confusion rms is
$$\mathrm{\Delta }S=8\theta f_{GHz}^{-0.4}S_m^{0.4}\mu \mathrm{Jy},\mathrm{or}\mathrm{\Delta }T=3\theta ^{-1}f_{GHz}^{-2.4}S_m^{0.4}\mathrm{mK}.$$
(3)
The proposed SKA is to have a sensitivity: $`A_{eff}/T_{sys}=2\times 10^4`$ m<sup>2</sup> K<sup>-1</sup>. We may expect the continuum images to have a thermal noise of about 50 nJy with 10 hr integration time. Assuming that foreground sources above $`S_m\simeq 250`$ nJy are successfully subtracted, the residual confusion in arcmin resolution images may be expected to be as large as 2 mK at 1 GHz but as low as 6 $`\mu `$K at 10 GHz.
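Evaluating eqs. (2) and (3) at these parameters (a quick numerical sketch) reproduces the quoted numbers:

```python
def dT_confusion_mK(theta, f_ghz):
    """Eq. (2): confusion with all sources present; theta in arcmin."""
    return 14.0 * theta ** -0.3 * f_ghz ** -2.7

def dT_residual_mK(theta, f_ghz, S_m_uJy):
    """Eq. (3): residual confusion after removing sources > S_m (in uJy)."""
    return 3.0 / theta * f_ghz ** -2.4 * S_m_uJy ** 0.4

for f in (1.0, 10.0):
    dT = dT_residual_mK(theta=1.0, f_ghz=f, S_m_uJy=0.25)  # S_m = 250 nJy
    print(f"{f:4.1f} GHz: residual confusion ~ {dT * 1000:.1f} uK")
# -> ~1700 uK (i.e. ~2 mK) at 1 GHz, ~7 uK at 10 GHz
```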
Clearly, any useful survey for S-Z clusters at low frequencies, even with an SKA, will require very long integration times and observations at frequencies $`>10`$ GHz. However, because confusion will be an important limiting factor, it is important that particular attention be given to the design of the antenna element and array configuration in order to ensure that the sidelobe levels are low and roll off rapidly.
## 5. Acknowledgments
The Australia Telescope is funded by the Commonwealth of Australia for operation as a National facility managed by CSIRO. The HI spectra displayed were obtained as part of collaborative work with Mark Walker of the UNSW, Australia.
## References
Bartlett, J.G. & Stebbins, A. 1991, ApJ, 371, 8
Bensadoun, M., Bersanelli, M., De Amici, G., Kogut, A., Levin, S., Limon, M., Smoot, G.F., & Witebsky, C. 1993, ApJ, 409, 1
Bernshtein, I.N., Bernshtein, D.N., & Dubrovich, V.K. 1977, Sov. Astron., 21, 409
Burdyuzha, V.V. & Chekmezov, A.N. 1994, Astron. rep., 38, 297
Burigana C., Danese, L., & De Zotti, G. 1991, A&A, 246, 49
Burles, S., Nollett, K.M., & Turner, M.S. 1999, astro-ph/9903300
Daly, R. 1991, ApJ, 371, 14
Dell’Antonio, I.P. & Rybicki, G.B. 1993, in Observational Cosmology, ed. G. Chincarini, A. Iovino, T. Maccacaro & D. Maccagni, ASP Conf. Ser. Vol. 51, 548
Dubrovich, V.K. 1975, Sov. Astron. Lett., 1, 196
Dubrovich, V.K. & Stolyarov, V.A. 1995, A&A, 302, 635
Fixsen, D.J., Cheng, E.S., Gales, J.M., Mather, J.C., Shafer, R.A., & Wright, E.L. 1996, ApJ, 486, 623
Griffiths, L.M., Barbosa, D., & Liddle, A.R. 1999, MNRAS, 308, 854
Hu, W., Scott, D., & Silk, J. 1994, ApJ, 430, L5
Jones, B.J.T. & Wyse, R.F.G. 1985, A&A, 149, 144
Lyubarsky, Y.E. & Sunyaev, R.A. 1983, A&A, 123, 171
Peebles, P.J.E. 1968, ApJ, 153, 1
Raghunathan, A. & Subrahmanyan R. 2000, in preparation
Shaver, P.A., Windhorst, R.A., Madau, P., & de Bruyn, A.G. 1999, A&A, 345, 380
Silk, J. & Stebbins, A. 1983, ApJ, 269, 1
Sironi, G., Limon, M., Marcellino, G., Bonelli, G., Bersanelli, M., Conti, G., & Reif, K. 1990, ApJ, 357, 301
Staggs, S.T., Jarosik, N.C., Wilkinson, D.T., & Wollack, E.J. 1996, ApJ, 458, 407
Stankevich, K.S., Wielebinski, & R., Wilson, W.E. 1970, Aust. J. Phys., 23, 529
Windhorst R.A., Fomalont E.B., Partridge R.B., & Lowenthal, J.D. 1993, ApJ, 405, 498
# Comment on hep-lat/9901005 v1-v3 by W. Bietenholz
## 1 The Setting
One of the interesting issues in lattice field theory is understanding the structural properties of Ginsparg-Wilson (GW) fermionic actions, i.e. the actions for which the chirally nonsymmetric part of the massless propagator $`𝐑\equiv (𝐃^{-1})_N\ne 0`$ is local (for definitions, see Ref. ). Perhaps the most basic problem is clarifying under which conditions the GW property is (in)compatible with ultralocality of the action. This question can be usefully discussed already at the free level, because non-ultralocality of a free action implies non-ultralocality in any gauge-invariant interacting theory based on it.
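For the free overlap operator — the best-known GW solution — the canonical GW relation $`\{𝐃,\gamma _5\}=𝐃\gamma _5𝐃`$ (equivalent to $`𝐑={\scriptscriptstyle \frac{1}{2}}\mathrm{𝕀}`$) can be checked explicitly in momentum space. The following Python sketch does this in two dimensions; it is an illustration, not taken from the papers under discussion:

```python
import numpy as np

g1 = np.array([[0, 1], [1, 0]])
g2 = np.array([[0, -1j], [1j, 0]])
g5 = np.array([[1, 0], [0, -1]])   # sigma_3 plays gamma5 in d = 2

def D_overlap(p, m=-1.0):
    """Free overlap operator D(p) = 1 + A/sqrt(A^dag A), with
    A = i gamma.sin(p) + (m + sum(1 - cos(p))) the Wilson kernel."""
    s = np.sin(p)
    b = m + np.sum(1 - np.cos(p))
    A = 1j * (s[0] * g1 + s[1] * g2) + b * np.eye(2)
    norm = np.sqrt(np.sum(s ** 2) + b ** 2)  # here A^dag A = norm^2 * 1
    return np.eye(2) + A / norm

rng = np.random.default_rng(0)
for _ in range(3):
    p = rng.uniform(-np.pi, np.pi, size=2)
    D = D_overlap(p)
    lhs = D @ g5 + g5 @ D
    rhs = D @ g5 @ D
    print(np.max(np.abs(lhs - rhs)))  # ~1e-16: GW relation holds exactly
```

Of course, the point of the papers discussed here is not whether such operators exist, but whether they can be ultralocal.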
It turns out that a lot can be said if one requires the underlying free theory to respect the symmetries of the hypercubic lattice structure , i.e. translations and transformations of the hypercubic group. In fact, this is naturally the most important case since, ideally, one prefers to work with an action respecting all crucial fundamental symmetries, i.e. hypercubic symmetries, gauge invariance, and chiral symmetry (in this case Ginsparg-Wilson-Lüscher (GWL) symmetry). It is worth noting that attempts to clarify this issue in the context where the symmetry under the hypercubic group is abandoned and only lattice translations are kept have not been very successful so far.
Current insight into the question of (in)compatibility of hypercubic symmetries with ultralocality of GW actions is based on two suggestions:
Part 1: Studying the consequences of the GW property for ultralocal lattice Dirac operators restricted to lines corresponding to periodic directions in the Brillouin zone. This was introduced in Ref. and fully developed in Ref. .
Part 2: Studying the consequences of the GW property at the origin of the Brillouin zone for two-dimensional restrictions of ultralocal lattice Dirac operators. This was introduced in Ref. with the suggestion that corresponding analytic properties at the origin have powerful global consequences for the operator in ultralocal case, and may lead to the necessity of fermion doubling.
## 2 The Literature
It is useful to outline the evolution of the above issues in the literature chronologically. For clarity, I will refer to various versions of paper hep-lat/9901005 \[4-6\] as “v1-v3”.
(1) In Ref. , the idea of Part 1 was introduced and necessary steps were performed to prove that canonical GW operators, i.e. operators satisfying $`\{𝐃,\gamma _5\}=𝐃\gamma _5𝐃`$, or $`𝐑\equiv (𝐃^{-1})_N={\scriptscriptstyle \frac{1}{2}}\mathrm{𝕀}`$, cannot be ultralocal. On the first page of that Letter it is also explicitly stated that the proof can be extended to all ultralocal $`𝐑`$, trivial in Dirac space. This and other simple generalizations were explicitly deferred to Ref. for reasons of space.
(2) The paper “v1” adopts the approach of Part 1, and the claim is made in the abstract of extending the proof of Ref. to a “…much larger class of Ginsparg-Wilson fermions…”. While it is quite unclear from the paper what this larger class is, it is explicitly stated that the alleged proof applies to all cases for which $`𝐑^{-1}`$ is ultralocal, where $`𝐑`$ is a “Dirac scalar” i.e. trivial in spinor space (Ref. , pages 1,2). This would be an interesting new result but, unfortunately, it was not substantiated.
(3) In paper “v2” the above claim of “v1” is changed, and it is stated that the proof rather applies to all cases for which $`𝐑`$ is ultralocal and trivial in spinor space (Ref. , pages 1,2). This is a result put forward in Ref. .
While “v2” uses the ingredients of Part 1, the satisfactory discussion of its merits as a proof (and the merits of its assumptions) would be rather involved. Some improvements were put forward later in “v3”, and this latest version will be discussed in Sec. 3 of this Comment.
(4) Paper describes in detail the consequences of the approach of Part 1, as indicated in Ref. . The considerations on periodic directions of the Brillouin zone are shown to be sufficient to prove that infinitesimal GWL symmetry transformations must be non-ultralocal for arbitrary GW action in the presence of hypercubic symmetries (“weak non-ultralocality”). On the basis of weak non-ultralocality it is then proved that GW operators for which $`𝐑\equiv (𝐃^{-1})_N`$ is ultralocal cannot be ultralocal. This contains the result announced in Ref. , and generalizes it further by showing that triviality in spinor space is not crucial.
(5) The approach of Part 2 is proposed in Ref. <sup>2</sup><sup>2</sup>2Ref. is a contribution of the author to the proceedings of the conference “Lattice Fermions and the Structure of the Vacuum”, Dubna, Russia, Oct 5-9 1999. The list of participants at this conference includes the author of \[4-6\].. It is pointed out that in the presence of hypercubic symmetries, the GW condition for ultralocal actions translates into analytic properties of certain rational functions after a suitable change of variables, and that two-dimensional restrictions of operators on the Brillouin zone already capture the required analytic structure. The crucial observation is that analyticity at the origin implies factorization properties of involved polynomials, which strongly constrains the global behaviour of the corresponding action, and may imply fermion doubling. This connection is encapsulated in the hypothesis (Ref. , page 6), reflecting the conjectured property of such factorizations. The hypothesis was proposed as a key to the problem of “strong non-ultralocality”, i.e. non-ultralocality of all doubler-free GW actions in the presence of hypercubic symmetries, as formulated in Ref. . There is no resolution of the hypothesis to date.
(6) The version “v3” appears (Feb 24, 2000) with two major changes compared to “v2”:
$`(\alpha )`$ Parts of formalism and the main claim of “v2” are upgraded to the most general result of Ref. , i.e. the discussion involves actions with $`𝐑\equiv (𝐃^{-1})_N`$ ultralocal (not just trivial in spinor space). This now forms the STEP 1 of the paper.
$`(\beta )`$ The approach Part 2 of Ref. is adopted in the completely new part STEP 2. The argument is presented in such a way that STEP 1 and STEP 2 together are claimed to imply the non-ultralocality for “all Ginsparg-Wilson fermions” .
## 3 Discussion of “v3”
The purpose of this note is to point out that one can raise several objections to the arguments of “v3”, casting doubt on the result claimed in the paper. Some of these objections are described below.
(a) One of the starting points of “v3” is the suggestion that Eq. (2) represents the general ansatz for the restriction of an arbitrary lattice Dirac operator in $`d`$ dimensions to the two-dimensional momentum plane through $`𝐃\rightarrow 𝐃(p_1,p_2,0,\ldots ,0)`$. This is supposed to be true if “We assume Hermiticity, discrete translation invariance, as well as invariance under reflections and exchange of the axes.” <sup>3</sup><sup>3</sup>3By “Hermiticity”, the author of “v3” perhaps means $`\gamma _5`$-Hermiticity. However, there doesn’t appear to be a good reason or necessity to assume either of these (none is assumed in Refs \[1-3\]). If true, the statement of this nature should perhaps be proved. If not true, the additional possible terms (such as one proportional to $`\gamma _1\gamma _2`$, which is compatible with hypercubic symmetries) must be included and the proof should proceed with the presence of such terms.
(b) The GW property is encoded in the analyticity properties of Clifford components of $`(𝐃^{-1})_N`$. “v3” uses the non-invertible change of variables (two to one) on the Brillouin zone $`c_\mu =1-\mathrm{cos}p_\mu `$ (see Eq. (14)), and implicitly assumes that the required analyticity properties are inherited in the new variables (see case $`(b)`$ on page 6 of “v3”). In the proof one would expect the required analyticity properties to be defined, as well as careful justification that the above change of variables preserves them.
(c) Eq. (18) of “v3” introduces the following polynomial decomposition for symmetric polynomial $`K(c_1,c_2)`$ (the fact that $`K`$ must be symmetric has not been justified)
$$K(c_1,c_2)=c_1X(c_1,c_2)+c_2X(c_2,c_1)$$
Since $`K(0,0)=0`$, the required polynomial $`X`$ exists, but this representation is not unique. There are infinitely many $`X`$ that represent the same $`K`$, for example, $`X\rightarrow X+c_2(c_1-c_2)`$. How then is $`X`$ fixed?
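The non-uniqueness claimed here is easy to check symbolically; the following minimal sketch (our own illustration, with an arbitrary test polynomial) verifies that the shift quoted above leaves $`K`$ unchanged:

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')

def K_from(X):
    # assemble K(c1,c2) = c1*X(c1,c2) + c2*X(c2,c1) from a representative X
    return sp.expand(c1*X(c1, c2) + c2*X(c2, c1))

X  = lambda a, b: 3*a*b + b**2           # arbitrary test representative
Xs = lambda a, b: X(a, b) + b*(a - b)    # shifted by the ambiguity X -> X + c2*(c1 - c2)
print(K_from(X) - K_from(Xs))            # -> 0: both represent the same K
```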
(d) The crucial part of the argument presented in STEP 2 of “v3” is the identification expressed by Eq. (22), which is supposed to follow from Eqs. (16,17) and (21). Unless there are hidden assumptions and arguments, there doesn’t appear to be any reason why this identification should hold. If some symmetric polynomial $`P(c_1,c_2)`$ can be written in terms of another polynomial $`Q(c_1,c_2)`$ as
$$P(c_1,c_2)=c_1(2-c_1)Q(c_1,c_2)+c_2(2-c_2)Q(c_2,c_1)$$
then for similar reasons to those discussed in item (c) above, there are infinitely many polynomials $`Q`$ that can be used for this decomposition. Since there is no uniqueness, how does Eq. (22) follow?<sup>4</sup><sup>4</sup>4It also appears that the generic case $`n_1=n_2=0`$ of “v3” should be discussed separately, because the polynomial structure is different. If a conclusion of such nature is possibly justifiable, then the proof would seem to require an explicit argument to that effect.
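Again, the non-uniqueness is easily exhibited: adding $`c_2(2-c_2)A(c_1,c_2)`$ to $`Q`$, with $`A`$ any antisymmetric polynomial, produces the same $`P`$. A small sympy check (our own illustration):

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')

def P_from(Q):
    # assemble P = c1*(2-c1)*Q(c1,c2) + c2*(2-c2)*Q(c2,c1)
    return sp.expand(c1*(2 - c1)*Q(c1, c2) + c2*(2 - c2)*Q(c2, c1))

Q  = lambda a, b: a + b**2                       # arbitrary test representative
A  = lambda a, b: a - b                          # antisymmetric: A(b,a) = -A(a,b)
Qs = lambda a, b: Q(a, b) + b*(2 - b)*A(a, b)    # shifted representative
print(P_from(Q) - P_from(Qs))                    # -> 0: same P, different Q
```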
It should be emphasized in closing that the above objections are not aimed at demanding excessive rigor of “v3”. The aim is to point out that there appear to be serious holes in the arguments, raising the worry that the claim of “v3” might simply not be justified at all. The problem of “strong non-ultralocality” of GW fermions is an important issue and it would be inherently useful to resolve it cleanly (by either giving a proof or a counterexample). Hopefully, the remarks in this Comment can contribute to eventually achieving that goal.
Acknowledgement: I thank R. Mendris for many pleasant discussions on the issues related to those discussed here.
# Multiplier Ideals of Monomial Ideals
## Introduction
Multiplier ideals have become quite important in higher dimensional geometry, because of their strong vanishing properties, (cf , , , , , ). They reflect the singularity of a divisor, ideal sheaf, or metric. It is however fairly difficult to calculate multiplier ideals explicitly, even in the simplest cases: the algebraic definition of the multiplier ideal associated to an arbitrary ideal sheaf $`𝔞`$ requires that we construct a log resolution of $`𝔞`$ and perform calculations on the resolved space. In this note, we compute the multiplier ideal associated to an arbitrary monomial ideal $`𝔞`$. Like $`𝔞`$, it can be described in combinatorial and linear-algebraic terms.
We begin with some definitions. Let $`X`$ be a smooth quasiprojective complex algebraic variety. Let $`𝔞\subseteq 𝒪_X`$ be any ideal sheaf. By a log resolution of $`𝔞`$, we mean a proper birational map $`f:Y\rightarrow X`$ with the property that $`Y`$ is smooth and $`f^{-1}(𝔞)=𝒪_Y(-E)`$, where $`E`$ is an effective Cartier divisor, and $`E+exc(f)`$ has normal crossing support.
###### Definition 1.
Let $`𝔞\subseteq 𝒪_X`$ be an ideal sheaf in $`X`$, and let $`f:Y\rightarrow X`$ be a log resolution of $`𝔞`$, with $`f^{-1}(𝔞)=𝒪_Y(-E)`$. Let $`r>0`$ be a rational number. We define the multiplier ideal of $`𝔞`$ with coefficient $`r`$ to be:
$$𝒥(r𝔞)=f_{*}𝒪_Y(K_{Y/X}-\lfloor rE\rfloor ).$$
Here $`K_{Y/X}=K_Y-f^{*}K_X`$ is the relative canonical bundle, and $`\lfloor \rfloor `$ is the round-down for $`\mathbb{Q}`$-divisors. That $`𝒥(r𝔞)`$ is an ideal sheaf follows from the observation that $`𝒪_Y(K_{Y/X}-\lfloor rE\rfloor )`$ is a subsheaf of $`𝒪_Y(K_{Y/X})`$: since $`f_{*}(𝒪_Y(K_{Y/X}))=𝒪_X`$, $`𝒥(r𝔞)\subseteq 𝒪_X`$. We write $`𝒥(𝔞)`$ for $`𝒥(1𝔞)`$.
We will now specialize to the case $`X=𝔸^n`$.
###### Definition 2.
Let $`𝔞\subseteq \mathbb{C}[x_1,\ldots ,x_n]`$ be a monomial ideal. We will regard $`𝔞`$ as a subset of the lattice $`L=\mathbb{Z}^n`$ of monomials. The Newton Polygon $`P`$ of $`𝔞`$ is the convex hull of this subset of $`L`$, considered as a subset of $`L\otimes \mathbb{R}=\mathbb{R}^n`$. It is an unbounded region. $`P\cap L`$ is the set of monomials in the integral closure of the ideal $`𝔞`$ .
###### Notation 1.
We write $`\mathrm{𝟏}`$ for the vector $`(1,1,\ldots ,1)`$, which is identified with the monomial $`x_1x_2\cdots x_n`$. The associated divisor $`div(\mathrm{𝟏})`$ is the union of the coordinate axes. We use Greek letters ($`\lambda \in L`$) for elements of $`L`$ or $`L\otimes \mathbb{R}`$, and exponent notation $`x^\lambda `$ for the associated monomials. For any subset $`P`$ of $`L\otimes \mathbb{R}`$, we define $`rP`$ “pointwise:”
$$rP=\{r\lambda :\lambda \in P\}.$$
We write $`Int(P)`$ for the topological interior of $`P`$; we also allow ourselves to write $`P`$ for the corresponding set of monomials $`\{x^\lambda :\lambda \in P\}`$.
We regard the Newton polygon “officially” as a subset of the real vector space $`L\otimes \mathbb{R}=\mathbb{R}^n`$; the interior operation $`Int(P)`$ relies on the real topology of this vector space. However, we don’t always carefully distinguish $`P`$ from the collection of its lattice points $`P\cap L`$, or from the collection of their associated monomials $`\{x^\lambda :\lambda \in P\cap L\}`$.
Here is our main result:
###### Main Theorem.
Let $`𝔞\subseteq 𝒪_{𝔸^n}`$ be a monomial ideal. Let $`P`$ be its Newton polygon. Then $`𝒥(r𝔞)`$ is a monomial ideal, and contains exactly the following monomials:
$$𝒥(r𝔞)=\{x^\lambda :\lambda +\mathrm{𝟏}\in Int(rP)\cap L\}.$$
###### Remark 1.
The right hand side, $`\{x^\lambda :\lambda +\mathrm{𝟏}\in Int(rP)\cap L\}`$, could instead be called $`rP`$. We state the theorem as we do in order to emphasize the monomial $`\mathrm{𝟏}`$, which is independently important.
###### Example 1.
If $`𝔞`$ is generated by a single monomial, $`x^\lambda `$, then the polygon $`P`$ is the positive orthant translated upward to $`\lambda `$, and
$$𝒥(𝔞)=\{x^\lambda :\lambda \in P\cap L\}=𝔞.$$
This is not surprising, because in this case $`𝔞`$ is already a divisor with normal crossing support.
###### Example 2.
Let us calculate the multiplier ideal of $`(x^8,y^6)`$. The Newton polygon is pictured in Figure 1. The distinguished integer vectors $`\lambda `$ are those with the property that $`\lambda +\mathrm{𝟏}\in Int(P)`$. From Figure 1, we conclude
$$𝒥(x^8,y^6)=(x^6,x^5y,x^4y^2,x^2y^3,xy^4,y^5).$$
Notice that $`x^3y^2`$ is almost but not quite in $`𝒥(x^8,y^6)`$, because $`x^4y^3`$ lies on the boundary, not the interior, of the Newton polygon.
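The Main Theorem makes this computation entirely mechanical; the short brute-force script below (our own check, hard-coding the facet inequality $`3a+4b>24`$ of this particular Newton polygon) recovers exactly the generators listed above:

```python
# brute-force check of Example 2: J((x^8, y^6)) via the Main Theorem.
# Int(P) for the Newton polygon of (x^8, y^6) is: a > 0, b > 0, 3a + 4b > 24.
def in_interior(a, b):
    return a > 0 and b > 0 and 3*a + 4*b > 24

members = [(a, b) for a in range(9) for b in range(7)
           if in_interior(a + 1, b + 1)]          # test lambda + (1,1)
gens = [(a, b) for (a, b) in members              # keep only minimal generators
        if not any((c, d) != (a, b) and c <= a and d <= b for (c, d) in members)]
print(sorted(gens))   # -> [(0, 5), (1, 4), (2, 3), (4, 2), (5, 1), (6, 0)]
```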
###### Example 3.
Let $`(a_i)_{i\le n}`$ be positive integers, and let $`𝔞=(x_1^{a_1},\ldots ,x_n^{a_n})`$. One might call this a “diagonal ideal.” The only interesting face of the Newton polygon $`P`$ of $`𝔞`$ is defined by a single dual vector $`v=(\frac{1}{a_1},\ldots ,\frac{1}{a_n})`$. Therefore $`𝒥(𝔞)`$ contains the monomials $`\{x^\lambda :v\cdot (\lambda +\mathrm{𝟏})>1\}`$. See \[3, example 5.10\], for an analytic perspective on this same result. In this expression, the term $`v\cdot \mathrm{𝟏}`$ ( $`=\frac{1}{a_1}+\cdots +\frac{1}{a_n}`$) may be familiar: It is the log-canonical threshold of $`𝔞`$ (see below).
###### Example 4.
Let $`g\in 𝒪_{𝔸^n}`$ be an arbitrary polynomial. One might hope that the multiplier ideal associated to the (non-monomial) ideal $`(g)`$ would be identical to that associated to the monomial ideal $`𝔞_g`$ generated by the monomials appearing in $`g`$. This is not true. Consider $`g=(x+y)^n`$ in $`\mathbb{C}[x,y]`$. By a linear change of coordinates in which $`z=x+y`$ we obtain $`g=z^n`$, and can calculate $`𝒥((g))`$ in terms of $`z`$. This gives $`𝒥((g))=(g)\ne 𝒥(𝔞_g)`$.
Notice however that for any polynomial $`g`$, $`(g)\subseteq 𝔞_g`$. It is not difficult to show that $`𝒥(r(g))\subseteq 𝒥(r𝔞_g)`$ for all $`r`$. This containment is almost always strict, but it does become an equality if both $`r<1`$ and the coefficients of $`g`$ are sufficiently general.
These conditions guarantee that the multiplicity of the $`\mathbb{Q}`$-divisor $`r(g=0)`$ is less than one away from the zeroes of $`𝔞_g`$.
###### Example 5.
Let $`𝔞`$ be a monomial ideal in $`𝔸^n`$, and let $`P`$ be its Newton polygon. The log canonical threshold $`t`$ of $`𝔞`$ is defined to be
$$t=sup\{r:𝒥(r𝔞)=𝒪_X\}.$$
See or for a detailed discussion of this concept. The Main Theorem shows that this must be equal to $`sup\{r:\mathrm{𝟏}\in rP\}`$ (provided that $`𝒥(r𝔞)`$ is nontrivial–the trivial case is an annoying exception). Thus the log canonical threshold is the reciprocal of the (unique) number $`m`$ such that the boundary of $`P`$ intersects the diagonal in $`\mathbb{R}^n`$ at the point $`m\mathrm{𝟏}`$. In other words, in order to calculate the threshold, we need only find where $`P`$ intersects the diagonal. Arnold calls this number $`m`$ of the intersection point the “remoteness” of the polygon. In , he proves that $`m=\frac{1}{t}`$, in order to analyze asymptotic oscillatory integrals.
###### Example 6.
For the “diagonal ideals” of Example 3, the intersection of the diagonal with the Newton polygon is easily calculated using the dual vector $`v`$. The reader may check that its reciprocal is indeed $`v\cdot \mathrm{𝟏}`$. (If it happens that $`v\cdot \mathrm{𝟏}>1`$, then the log canonical threshold is 1, and the multiplier ideal is trivial.) See for more details.
###### Example 7.
To illustrate these ideas, we calculate the log-canonical threshold of a slightly more complicated ideal. Let
$$𝔞=(xy^4z^6,x^5y,y^7z,x^8z^8).$$
After drawing the Newton polygon<sup>1</sup><sup>1</sup>1Maple code illustrating this Newton polygon is available from the author by request. Unfortunately, static 2-dimensional representations are not very helpful., one sees that the diagonal in $`\mathbb{R}^3`$ intersects the triangular face generated by the first three generators. Therefore, the fourth generator $`x^8z^8`$ can be ignored. The intersection of the diagonal with the triangle whose vertices have coordinates $`\{(1,4,6),(5,1,0),(0,7,1)\}`$ is the point $`(m,m,m)`$, where $`m=\frac{191}{68}`$. The log canonical threshold of $`𝔞`$ is $`\frac{1}{m}`$, or $`\frac{68}{191}`$.
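The arithmetic of this example is easy to reproduce (a small script of ours; the choice of the three vertices is the one made above):

```python
import numpy as np
from fractions import Fraction

v1, v2, v3 = np.array([1, 4, 6]), np.array([5, 1, 0]), np.array([0, 7, 1])
n = np.cross(v2 - v1, v3 - v1)       # normal vector of the face: n = (33, 26, 9)
d = int(n @ v1)                      # the face lies on 33x + 26y + 9z = 191
m = Fraction(d, int(n.sum()))        # diagonal point (m, m, m): m = 191/68
print(n, d, m, 1 / m)                # lct = 1/m = 68/191
```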
The structure of the polygon $`P`$ can in general be quite complicated, but it must have a single face which intersects the diagonal. This face may not be simplicial, but it certainly decomposes into simplices, one of which intersects the diagonal in the same place and has no more than $`n`$ vertices. This demonstrates that the log canonical threshold of $`𝔞`$ is equal to that of a smaller ideal generated by no more monomials than the dimension $`n`$ of the space.
It has been conjectured <sup>2</sup><sup>2</sup>2Actually, Shokurov’s version of this conjecture is stronger than that presented here. It refers to log canonical thresholds of effective Weil divisors on possibly singular ambient spaces. (,) that for every dimension $`n`$ the collection $`𝒯_n`$ of all log canonical thresholds satisfies the Ascending Chain Condition (“All subsets have maximal elements”). The restricted case of ACC for monomial ideals follows from the fact that the partial order of all monomial ideals has no infinite increasing sequences, nor even any infinite antichains . This fact doesn’t require any characterization of the thresholds. If ACC is true, then for any fixed dimension $`n`$, there is a threshold $`t_n`$ closest to, but less than, one. We attempted to use the characterization above to calculate $`t_n`$ in the monomial case, but were unsuccessful. It is known that $`t_1=1/2,t_2=5/6,t_3=41/42`$. Also, if we restrict to ideals of the form $`𝔞=(x_1^{b_1},\ldots ,x_n^{b_n})`$, then it is known that we can do no better than $`t_n=\frac{a_n-1}{a_n}`$, where $`a_1=2`$ and $`a_{n+1}=a_n^2+a_n`$. The sequence $`a_n`$ is $`(2,6,42,1806,\ldots )`$. We used a computer to calculate the log canonical threshold for large numbers of monomial ideals, and found no evidence that the above pattern is wrong in general.
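For concreteness, the recursion and the associated thresholds just quoted can be tabulated in two lines (our own check):

```python
from fractions import Fraction

a = [2]
for _ in range(4):
    a.append(a[-1]**2 + a[-1])           # a_{n+1} = a_n^2 + a_n
print(a)                                  # [2, 6, 42, 1806, 3263442]
print([Fraction(x - 1, x) for x in a])    # t_n = (a_n - 1)/a_n: 1/2, 5/6, 41/42, ...
```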
## Proof of the Theorem
We will give a straightforward proof of the theorem, based on repeated blowups of the underlying space. The basic proof structure is then an induction, but this creates a problem: After a single such blowup $`f:Y\rightarrow X`$ the space of interest is no longer $`𝔸^n`$, so an inductive step doesn’t apply.
This difficulty is not a serious one, because $`Y`$ is still locally $`𝔸^n`$. Also, all of the above definitions can be extended to $`Y`$ and onward. For example, the “coordinate axes” on $`Y`$ should be taken to be proper transforms of those from $`X`$, together with the exceptional divisor(s). The notion of a “monomial ideal” on $`X`$ generalizes on $`Y`$ to an intersection of codimension-1 subschemes (monomials) supported on the “coordinate axes.” These extensions are consistent with those obtained by localizing on $`Y`$ and identifying the coordinate patches with $`𝔸^n`$ in the obvious way. A briefer argument can be made if one relies on the theory of toric varieties. We will attempt to point out these connections where appropriate.
###### Definition 3.
By a monomial blowup, we mean a blowup of $`X`$ along the intersection of some coordinate hyperplanes. By a sequence of monomial blowups, we mean a sequence of blowups, each of which is locally a monomial blowup.
###### Definition 4.
Above we defined $`\mathrm{𝟏}`$ as a divisor on $`X=𝔸^n`$, but we will need a more general notion. If $`Y`$ is obtained from $`X`$ by a sequence of monomial blowups, we let $`\mathrm{𝟏}_Y`$ be the divisor which is the sum of the proper transforms of the coordinate axes in $`X`$, together with each exceptional divisor taken with coefficient 1. Thus $`\mathrm{𝟏}_Y`$ is the union of the “coordinate hyperplanes” of $`Y`$. We regard $`\mathrm{𝟏}_Y`$ as an element of the lattice $`𝐋_Y`$, which must be defined as the free abelian group on the coordinate hyperplanes in $`Y`$.
The toric picture better illustrates what’s going on here: the exceptional divisors and the proper transforms of the coordinate axes are precisely those effective divisors on $`Y`$ which are invariant under the natural torus action. Hence $`L_Y`$ is the lattice of torically invariant divisors on $`Y`$. In general, the sum of all of the effective toric divisors (each with coefficient one) on a toric variety is the anticanonical divisor. So $`\mathrm{𝟏}_X`$ and $`\mathrm{𝟏}_Y`$ are the torically natural anticanonical divisors, and $`L_Y`$ and $`L_X`$ are the lattices of torically invariant divisors.
###### Lemma 1.
Let $`X`$ be $`𝔸^n`$ or an intermediate blowup, and let $`f:Y\rightarrow X`$ be a monomial blowup of $`X`$. Then
$$\mathrm{𝟏}_Y=f^{*}(\mathrm{𝟏}_X)-K_{Y/X}.$$
This can be seen without toric geometry by direct calculation; it is easy to pull $`\mathrm{𝟏}_X`$ up to $`Y`$ and count its multiplicity along the exceptional divisor.
###### Corollary 1.
If $`f:Y\rightarrow X`$ arises by a sequence of monomial blowups, then
$$\mathrm{𝟏}_Y=f^{*}(\mathrm{𝟏}_X)-K_{Y/X}.$$
The corollary gives a convenient formula for $`K_{Y/X}`$ when $`Y`$ is a log-resolution (via a sequence of monomial blowups) of the monomial ideal $`𝔞`$. It remains to see that such a space $`Y`$ exists:
###### Lemma 2.
Let $`X=𝔸^n`$, and let $`𝔞`$ be a monomial ideal on $`X`$. Then there is a sequence of monomial blowups $`f:Y\rightarrow X`$ which constitutes a log-resolution of $`𝔞`$.
###### Proof.
Here we must use some toric geometry. The ideal $`𝔞`$ defines a subset of the lattice $`L_X`$. The dual set of the ideal, $`\{v\in L_X^{*}:\langle v,\lambda \rangle \ge 1\text{ for all }\lambda \in P\}`$, defines a rational polytope $`P^{*}`$ in the dual lattice $`L_X^{*}`$. To find a “monomial log resolution” of $`𝔞`$ is to find a sequence of toric blowups which refine the polytope $`P^{*}`$ in the appropriate sense. This can be done because $`P^{*}`$ is rational. The blowups required are exactly those required torically to resolve the singularity of the space $`Bl_𝔞(X)`$. Figure 2 indicates how this process might be used to resolve the cusp. See \[8, section 2.6\], for more information on toric resolutions. ∎
We now fix a monomial log-resolution $`f:Y\rightarrow X`$, as in the Lemma. We need to examine the relationship between $`𝔞`$ and $`f^{-1}(𝔞)`$. By the definition of $`f`$, $`f^{-1}(𝔞)`$ is a line bundle. It corresponds to a divisor whose support is contained in the proper transforms of the coordinate axes from $`X`$ and the exceptional divisors. We called the collection of such divisors $`L_Y`$. To $`f^{-1}(𝔞)`$ we may associate a single element $`\gamma `$ of $`L_Y`$, its “generator.” We may even give it a Newton polygon $`P_Y`$, namely the positive orthant translated to $`\gamma `$.
###### Lemma 3.
Let $`f:Y\rightarrow X`$ resolve $`𝔞`$ by a sequence of monomial blowups. Let $`P_X`$ be the Newton polygon of $`𝔞`$, and let $`P_Y`$ be as above. Since $`f^{*}`$ acts linearly on the lattices, we may extend it to all of $`L_X\otimes \mathbb{R}`$. When we do this,
1. $`f^{*}`$ takes the interior points of $`P_X`$ to interior points of $`P_Y`$.
2. $`f^{*}`$ takes the boundary points of $`P_X`$ to boundary points of $`P_Y`$.
3. $`f^{*}`$ takes the points not in $`P_X`$ to points not in $`P_Y`$.
###### Proof.
The lemma hinges on three basic ideas. First, $`f^{*}`$ is certainly a map from $`L_X`$ to $`L_Y`$, but because it is linear, $`f^{*}`$ extends to all of $`L_X\otimes \mathbb{R}`$ in a natural way. As a map of real vector spaces, $`f^{*}`$ is continuous because it is linear. Second, for each of the effective toric divisors, or “coordinate planes” $`E_i`$ in $`Y`$
$$ord_{E_i}(f^{*}(\mathrm{𝟏}_X))>0.$$
The inequality is strict because the blowups permitted are monomial. Third, we have the standard equation $`f_{*}(𝒪_Y(-E))=\overline{𝔞}`$, where $`\overline{𝔞}`$ is the integral closure of $`𝔞`$.
We will prove the lemma by proving part 3 first for integral points $`\lambda \in L_X`$, then for rational points, and finally for real points. We will prove part 1 by using the strict positivity of $`f^{*}(\mathrm{𝟏}_X)`$. Finally, we’ll deduce part 2 by continuity.
Let $`\lambda `$ be an integer point of $`L_X`$ not in $`P_X`$. Then $`x^\lambda \notin \overline{𝔞}=f_{*}(𝒪_Y(-E))`$, so $`f^{*}(\lambda )\notin P_Y`$. If instead $`\lambda \notin P_X`$ has rational coordinates, then we can clear denominators. Let $`n\lambda `$ be integral. $`n\lambda \notin nP(𝔞)=P(𝔞^n)`$, so $`f^{*}(n\lambda )\notin nP_Y`$. (Here we have used the just-proved integer case, as well as the fact that the resolution $`f:Y\rightarrow X`$ resolving $`𝔞`$ also resolves $`𝔞^n`$.) Dividing by $`n`$ gives $`f^{*}(\lambda )\notin P_Y`$. If $`\lambda \notin P_X`$ has real coordinates, choose a rational $`\mu \ge \lambda `$ also not in $`P_X`$. $`f^{*}(\lambda )\le f^{*}(\mu )\notin P_Y`$, so $`f^{*}(\lambda )\notin P_Y`$.
A standard convexity argument proves that if $`\lambda \in P_X`$ then $`f^{*}(\lambda )\in P_Y`$. To prove part 1 of the lemma, let $`\lambda `$ be in the interior of $`P_X`$. Choose $`\mu \in P_X`$ and $`ϵ\in \mathbb{R}^{+}`$ with $`\lambda =\mu +ϵ\mathrm{𝟏}`$. Then $`f^{*}(\lambda )=f^{*}(\mu )+ϵf^{*}(\mathrm{𝟏})`$. $`f^{*}(\mu )\in P_Y`$, and $`ϵf^{*}(\mathrm{𝟏})`$ is strictly positive in every coordinate, so $`f^{*}(\lambda )`$ is in the interior of $`P_Y`$.
Part 2 of the lemma follows from the continuity of the map $`f^{*}`$.
We can now give the proof of the main theorem. Because $`𝒥(r𝔞)`$ is invariant under the natural torus action, it must be a monomial ideal. We characterize the monomials $`x^\lambda `$ in $`𝒥(r𝔞)`$. By definition, $`x^\lambda `$ is in $`𝒥(r𝔞)`$ if and only if
$$div(f^{*}(x^\lambda ))+K_{Y/X}-\lfloor rE\rfloor \ge 0$$
(recall $`𝒪_Y(-E)=f^{-1}(𝔞)`$). Since the left hand side is an integral divisor, this condition simply means that
$$div(f^{*}(x^\lambda ))+K_{Y/X}+\mathrm{𝟏}_Y\text{ is in }Int(rP_Y)$$
(an integral vector $`w`$ satisfies $`w\ge \lfloor rE\rfloor `$ exactly when $`w+\mathrm{𝟏}_Y`$ exceeds $`rE`$ in every coordinate). Using the calculation of $`K_{Y/X}`$ from Lemma 1, this can be rewritten
$$div(f^{*}(x^\lambda ))+f^{*}(\mathrm{𝟏}_X)\in Int(rP_Y),$$
that is, $`f^{*}(\lambda +\mathrm{𝟏}_X)\in Int(rP_Y)`$. But this is just a condition on divisors from $`X`$. By Lemma 3, parts 1 and 2, it is equivalent to $`(\lambda +\mathrm{𝟏}_X)\in Int(rP_X)`$. The theorem is proved.
# Gravitational Decay Modes of the Standard Model Higgs Particle
## Abstract
If the Einstein field equations are employed at the tree level, then the decay of the standard model Higgs particle into two gravitons is shown to be independent of the gravitational coupling strength G. The result follows from the physical equivalence between the Higgs induced “inertial mass” and the “gravitational mass” of general relativity. If the Higgs mass lies well between the mass of a bottom quark anti-quark pair and the mass of a top quark anti-quark pair, then the Higgs decay into two gravitons will dominate both the QED induced two photon decay and the QCD induced two jet decays.
PACS: 14.80.Bn, 04.20.Fy
The last major notion of the standard electro-weak model which has yet to receive experimental confirmation is the prediction of the Higgs particle. In part, the problem may be simply connected to our lack of knowledge of the value of the Higgs mass $`M_H`$. But other problems arise on a more conceptual level.
The Higgs field is thought to provide the mechanism for the existence of all inertial mass. Yet the standard model does not relate this Higgs induced inertial mass to the important and presumably equivalent value of the gravitational mass. In what follows, we shall add to the standard electro-weak model Higgs notion of inertial mass, the Einstein notion of gravitational mass via the conventional curvature field equations of general relativity
$$R_{\mu \nu }-\left(\frac{1}{2}\right)g_{\mu \nu }R=\left(\frac{8\pi G}{c^4}\right)T_{\mu \nu }.$$
(1)
In particular, for the trace $`T=g^{\mu \nu }T_{\mu \nu }`$ we shall employ
$$T=-\left(\frac{c^4}{8\pi G}\right)R.$$
(2)
In the standard model, the Higgs field is entirely responsible for the possible existence of $`T\ne 0`$. Thus, the Higgs field is entirely responsible for the possible existence of a non-trivial scalar curvature $`R\ne 0`$ in general relativity. Nevertheless, the modes of the interaction between the Higgs field and conventional Einstein gravity have gone virtually unnoticed with regard to high energy laboratory quantum gravity experiments.
The reason for this sad state of affairs is that the value of the Planck mass $`M_P`$, defined by
$$\left(\frac{GM_P^2}{\hbar c}\right)=1,$$
(3)
is thought to be much too large to allow for quantum gravity observations using conventional high energy beams. The weak interaction (Fermi coupling $`G_F`$) version of Eq.(3), i.e.
$$\left(\frac{\sqrt{2}G_FM_F^2}{\hbar c}\right)=1,$$
(4)
sets the mass scale at the vacuum condensation value of the Higgs field
$$M_F=\left(\frac{\hbar \langle \varphi \rangle }{c}\right).$$
(5)
The value of $`M_F`$ is thus known to be
$$M_F\approx 246\text{ GeV}/c^2,$$
(6)
well within the present day technology of high energy beams. Let us return to the quantum gravity aspects of the problem.
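Eq. (4) indeed reproduces this scale; a quick numerical check (the input value of $`G_F`$ in $`\hbar =c=1`$ units is ours):

```python
import math

G_F = 1.16637e-5                      # Fermi coupling in GeV^-2 (hbar = c = 1)
M_F = (math.sqrt(2) * G_F) ** -0.5    # eq. (4): sqrt(2) * G_F * M_F^2 = 1
print(f"M_F = {M_F:.1f} GeV/c^2")     # -> M_F = 246.2 GeV/c^2, as in eq. (6)
```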
For Higgs particle excitations one normally writes the total field
$$\varphi =\langle \varphi \rangle +\chi ,$$
(7)
while the effective action employed for computing the decay of the Higgs particle is given by
$$S_{eff}=\left(\frac{1}{c\langle \varphi \rangle }\right)\int \chi Td\mathrm{\Omega },$$
(8)
where $`d\mathrm{\Omega }=\sqrt{-g}d^4x`$ is the space-time “volume” element. While $`T`$, the trace of the stress tensor, has many contributions, e.g. a term $`(mc^2\overline{\psi }\psi )`$ for each massive fermion species, the total sum over all the fields coupling into the Higgs particle is most simply described by the effective action in Eq.(8).
From Eqs.(2) and (8), it follows that the effective action depends on the scalar curvature
$$S_{eff}=-\left(\frac{c^3}{8\pi G\langle \varphi \rangle }\right)\int \chi Rd\mathrm{\Omega }.$$
(9)
Eqs.(8) and (9) express the fact that the Higgs couples equivalently into inertial and gravitational mass, but in the latter case we can relate the result to the Lagrangian density $`\mathcal{L}_g`$ of the gravitational field; i.e.
$$S_g=\left(\frac{c^3}{16\pi G}\right)\int Rd\mathrm{\Omega }=\left(\frac{1}{c}\right)\int \mathcal{L}_gd\mathrm{\Omega }.$$
(10)
The Higgs coupling to the Lagrangian density of gravitons is then
$$S_{eff}=-\left(\frac{2}{c\langle \varphi \rangle }\right)\int \chi \mathcal{L}_gd\mathrm{\Omega }.$$
(11)
One may now observe that in the Higgs coupling to gravitons, the gravitational coupling strength $`G`$ has very quietly slipped away. (The situation is reminiscent of the discussions between Bohr and Einstein on the completeness of the quantum mechanical view. Toward the end of these discussions, Bohr had to invoke general relativity to “save” the energy-time uncertainty principle. Notwithstanding the need for a finite gravitational coupling $`G`$ in the intermediate stages of the argument, $`G`$ dropped out of the final results.)
A general rule is that an oscillator Hamiltonian $`H=(1/2)\hbar \omega (aa^{\dagger }+a^{\dagger }a)`$ corresponds to an oscillator Lagrangian $`L=(1/2)\hbar \omega (a^{\dagger }a^{\dagger }+aa)`$. For the problem at hand, $`\mathcal{L}_g`$ may create or may destroy two gravitons. The rate at which a Higgs at rest will decay into two gravitons requires the matrix element $`\langle gg\left|S_{eff}\right|H\rangle `$. In terms of quantum fields, one requires $`\langle 0\left|\chi \right|H\rangle `$ and $`\langle gg\left|\mathcal{L}_g\right|0\rangle `$. From Eqs.(5) and (11), one computes the rate
$$\mathrm{\Gamma }(H\rightarrow g+g)=\left(\frac{1}{16\pi }\right)\left(\frac{M_H}{M_F}\right)^2\left(\frac{M_Hc^2}{\hbar }\right).$$
(12)
In terms of the Fermi coupling strength Eq.(4), the Higgs into two gravitons has the decay rate
$$\mathrm{\Gamma }(H\rightarrow g+g)=\left(\frac{\sqrt{2}}{16\pi }\right)\left(\frac{G_FM_H^2}{\hbar c}\right)\left(\frac{M_Hc^2}{\hbar }\right),$$
(13)
which is the central result of this work. The gravitational coupling strength does not appear in the final gravitational decay rate due to the physical equivalence between the inertial (Higgs induced) mass and the gravitational mass.
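To get a feel for the size of Eq. (13), one can evaluate it numerically; the Higgs mass of 125 GeV used below is purely illustrative (the paper predates a measured value), and the script is our own:

```python
import math

G_F, M_H = 1.16637e-5, 125.0                            # GeV^-2 and GeV (hbar = c = 1)
Gamma = math.sqrt(2) / (16 * math.pi) * G_F * M_H**3    # eq. (13), width in GeV
print(f"Gamma(H -> g g) ~ {Gamma:.2f} GeV")             # ~0.64 GeV on this formula
```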
Our central Eq.(13) for the Higgs decay into two gravitons $`H\rightarrow g+g`$ may in some ways be compared with the computation of Higgs decay into two photons $`H\rightarrow \gamma +\gamma `$. A “scalar particle” decay into two photons begins with the Schwinger anomaly for the trace of the stress tensor $`T_\gamma =(2\alpha /3\pi )\mathcal{L}_\gamma `$, where $`\alpha `$ is the quantum electrodynamic coupling strength, and $`\mathcal{L}_\gamma `$ is the free photon Lagrangian density. The anomalous $`T_\gamma `$ is a one loop process. The one loop Higgs decay rate into two photons is lower than the tree level Higgs decay into two gravitons by a factor of more than $`10^6`$. (In reality, one requires a one loop renormalized coupling strength $`\alpha `$, and this further lowers the two photon decay rate relative to the two graviton decay rate.) In a similar fashion, the Higgs decay into two gravitons is seen to be much larger than the decay into two gluon jets. If the Higgs mass is much larger than twice the bottom quark mass but still much less than twice the top quark mass, then the decay of the Higgs into two gravitons also dominates the decay of the Higgs into quark anti-quark jet pairs. Lastly, if the mass of the Higgs is smaller than the mass of two $`Z`$ Bosons or the mass of a $`W^{+}W^{-}`$ pair, then the Higgs into two graviton decay rate will dominate the decay rate into all $`SU(2)\times U(1)`$ channels; the heavy gauge Boson pairs are ruled out on the basis of the above kinematics.
Consider the analogy between the Higgs decay $`H\rightarrow g+g`$ and the well known weak interaction decay $`Z\rightarrow \nu +\overline{\nu }`$. The latter ($`Z`$-decay) has been observed even though the neutrino anti-neutrino pair escapes direct detection other than by “missing” the four momenta. The full process is $`e^{-}+e^{+}\rightarrow Z+\gamma \rightarrow \nu +\overline{\nu }+\gamma `$. There is a burst of soft photon radiation indicating that the electron positron pair has been destroyed, and then there is “nothing”. Now suppose, as one specific example among many, the following analog event. One produces a Higgs from a proton anti-proton event $`p^{+}+p^{-}\rightarrow H\rightarrow g+g`$. Here too would be a soft photon radiation burst from the proton anti-proton destruction, and then nothing but the missing four momenta of the two hard final state gravitons. Such a process would occur resonantly at a squared total four momentum $`s=(P^{+}+P^{-})^2/c^2=M_H^2`$.
The experimental search for the Higgs particle has been argued above to also be an experimental search for quantum gravity. Presently there is no direct experimental evidence for the Higgs particle nor for quantized gravitational waves (gravitons). It would however appear that the two are closely connected. Further progress on the source of inertial masses ultimately requires that gravity be added to the electroweak sector. It is hoped that the present exploration of these ideas will stimulate further work on these notions.
## 1 Introduction
Important information on the physical properties of a quantum field theory is provided by the renormalized four-point coupling, which is defined in terms of the zero momentum projection of the truncated 4-point correlator. At the same time, if one is interested in the lattice discretization of the theory, this renormalized coupling represents one of the most interesting universal amplitude ratios, being related to the fourth derivative of the free energy.
Recently, in , a new interesting approach has been proposed to evaluate this quantity in the case of integrable QFT’s. The idea is that for these theories one has direct access to the so called form factors from which the renormalized coupling can be computed. In the method was tested in the case of the 2d Ising model. The authors found the remarkably precise estimate
$$g_4^{*}=14.6975(1)$$
(1)
(see below for the precise definition of $`g_4^{*}`$).
The aim of this paper is to test this result by using a completely different method. By combining transfer-matrix methods and the exact knowledge of several terms in the scaling function of the free energy of the model we are able to obtain a precision similar to that of . Our result is
$$g_4^{*}=14.69735(3),$$
(2)
which is in substantial agreement with the estimate of . Given the subtlety of the calculations involved in both approaches, our result represents a highly non-trivial test of both methods. In performing our analysis we employ the same techniques which were used in in the study of the 2d Ising model in a magnetic field.
This paper is organized as follows: We begin in sect.2 by collecting some definitions and elementary results which will be useful in the following. Sect.3 is devoted to a discussion of the transfer-matrix results (and of the techniques that we use to improve the performance of the method). In sect.4 we obtain the first 4 terms of the scaling function for the fourth derivative of the free energy, which enters in the estimate of $`g_4`$ and finally, in sect.5, we discuss the fitting procedure that we used to extract the continuum-limit value from the data. To help the reader to follow our analysis, we have listed in tab.2 the output of our transfer-matrix analysis.
## 2 General setting
We are interested in the 2d Ising model defined by the partition function
$$Z=\underset{\sigma _i=\pm 1}{\sum }e^{\beta \sum _{\langle n,m\rangle }\sigma _n\sigma _m+h\sum _n\sigma _n},$$
(3)
where the field variable $`\sigma _n`$ takes the values $`\{\pm 1\}`$; $`n\equiv (n_0,n_1)`$ labels the sites of a square lattice of size $`L_0`$ and $`L_1`$ in the two directions and $`\langle n,m\rangle `$ denotes nearest neighbour sites on the lattice. In our calculations with the transfer-matrix method we shall treat asymmetrically the two directions. We shall denote $`n_0`$ as the “time” coordinate and $`n_1`$ as the “space” one. The number of sites of the lattice will be denoted by $`N\equiv L_0L_1`$. The critical value of $`\beta `$ is
$$\beta =\beta _c=\frac{1}{2}\mathrm{log}(\sqrt{2}+1)=0.4406868\ldots $$
In the following we shall be interested in the high-temperature phase of the model in which the $`𝐙_\mathrm{𝟐}`$ symmetry is unbroken, i.e. in the region $`\beta <\beta _c`$. It is useful to introduce the reduced temperature $`t`$ defined as:
$$t\equiv \frac{\beta _c-\beta }{\beta _c}.$$
(4)
As usual, we introduce the free-energy density $`F(t,h)`$ and the magnetization per site $`M(t,h)`$ defined as
$$F(t,h)\equiv \frac{1}{N}\mathrm{log}(Z(t,h)),\qquad M(t,h)\equiv \frac{\partial F(t,h)}{\partial h}.$$
(5)
The standard definition of the four-point zero-momentum renormalized coupling constant $`g_4`$ is
$$g_4(t)=-\frac{F^{(4)}}{\chi ^2\xi _{2\mathrm{n}\mathrm{d}}^2},$$
(6)
where $`\chi `$ and $`F^{(4)}`$ are the second- and fourth-order derivatives of the free-energy density $`F(h,t)`$ at $`h=0`$:
$$\chi (t)=\frac{\partial ^2F(t,h)}{(\partial h)^2}|_{h=0},\qquad F^{(4)}(t)=\frac{\partial ^4F(t,h)}{(\partial h)^4}|_{h=0}$$
(7)
and $`\xi _{2\mathrm{n}\mathrm{d}}`$ denotes the second moment correlation length, which is defined by
$$\xi _{2\mathrm{n}\mathrm{d}}^2=\frac{\mu _2}{2d\mu _0},$$
(8)
where $`d`$ is the dimension (here $`d=2`$) and
$$\mu _i=\underset{L_1\rightarrow \infty }{lim}\underset{L_0\rightarrow \infty }{lim}\frac{1}{N}\underset{m,n}{\sum }(m-n)^i<\sigma _m\sigma _n>_c.$$
(9)
The connected part of the correlation function is given by
$$<\sigma _m\sigma _n>_c=<\sigma _m\sigma _n>-<\sigma _m><\sigma _n>.$$
(10)
In particular, we are interested in the continuum-limit value $`g_4^{*}`$ defined as
$$g_4^{*}=\underset{t\rightarrow 0}{lim}g_4(t).$$
(11)
For $`t\rightarrow 0`$ we have
$$\xi _{2\mathrm{n}\mathrm{d}}(t)\simeq A_{\xi ,2\mathrm{n}\mathrm{d}}t^{-1},\qquad \chi (t)\simeq A_\chi t^{-7/4},\qquad F^{(4)}(t)\simeq -A_{F^{(4)}}t^{-11/2},$$
(12)
from which it follows
$$g_4^{*}=\frac{A_{F^{(4)}}}{A_\chi ^2A_{\xi ,2\mathrm{n}\mathrm{d}}^2}.$$
(13)
The amplitude $`A_\chi `$ is known exactly (see e.g. ref. ): $`A_\chi =0.9625817322\ldots `$ The amplitude $`A_{\xi ,2\mathrm{n}\mathrm{d}}`$ can also be computed exactly. Indeed, consider the exponential correlation length $`\xi `$ (inverse mass gap). For $`t\rightarrow 0`$, it behaves as $`A_\xi t^{-1}`$, with $`A_\xi =1/(4\beta _c)=0.56729632855\ldots `$ Using then $`A_\xi /A_{\xi ,2\mathrm{n}\mathrm{d}}=1.000402074\ldots `$, we obtain finally $`A_{\xi ,2\mathrm{n}\mathrm{d}}=0.5670683251\ldots `$
Our goal in the remaining part of this paper is to give a numerical estimate of $`A_{F^{(4)}}`$.
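Note that Eq. (13) with the amplitudes just quoted already fixes the conversion between $`A_{F^{(4)}}`$ and $`g_4^{*}`$ (a quick check of ours, anticipating the value of $`A_{F^{(4)}}`$ obtained in section 5):

```python
A_chi   = 0.9625817322           # exact susceptibility amplitude
A_xi    = 0.56729632855          # exponential correlation-length amplitude, 1/(4*beta_c)
A_xi2nd = A_xi / 1.000402074     # second-moment amplitude via the quoted ratio
A_F4    = 4.3791                 # the estimate obtained in section 5
print(A_F4 / (A_chi**2 * A_xi2nd**2))   # -> 14.6973..., i.e. eq. (13)
```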
## 3 Transfer-matrix results
We may have direct numerical access to $`F^{(4)}`$ by looking at the $`h`$ dependence of the magnetization at fixed $`t`$. Expand as follows:
$$h=b_1M+b_3M^3+\ldots $$
(14)
we immediately see that
$$b_1=1/\chi ,\qquad b_3=-\frac{F^{(4)}}{6\chi ^4},$$
(15)
so that
$$F^{(4)}=-6b_3/b_1^4.$$
(16)
### 3.1 The transfer-matrix technique
As a first step we computed the magnetization $`M`$ of a system with $`L_0=\infty `$ and finite $`L_1`$. The magnetization of this system is given by
$$M=v_0^T\stackrel{~}{M}v_0,$$
(17)
where $`v_0`$ is the eigenvector of the transfer matrix with the largest eigenvalue and $`\stackrel{~}{M}`$ is a diagonal matrix with $`\stackrel{~}{M}_{ii}`$ being equal to the magnetization of the time-slice configuration $`i`$. For a detailed discussion of the transfer-matrix method see e.g. ref. .
We computed $`v_0`$ using the most trivial iterative method,
$$v_0^{n+1}=\frac{Tv_0^n}{|Tv_0^n|},$$
(18)
starting from a vector with all entries being equal.
An important ingredient in the calculation is the fact that the transfer matrix can be written as the product of sparse matrices (see e.g. ref. ). This allows us to reach $`L_1=24`$ on a workstation. The major limitation is the memory requirement. We have to store two vectors of size $`2^{L_1}`$. Since we performed our calculation in double precision, this means that 268 MB are needed. Slightly larger $`L_1`$ could be reached by using a super-computer with larger memory space.
For the parameters $`\beta `$ and $`h`$ that we studied, $`n\approx 200`$ was sufficient to converge within double-precision accuracy.
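The following self-contained sketch (ours) illustrates the procedure of Eqs. (17,18) for a toy lattice; it builds the full $`2^{L_1}\times 2^{L_1}`$ transfer matrix, which is only feasible for tiny $`L_1`$ — reaching $`L_1=24`$ requires the sparse-matrix factorization mentioned above:

```python
import numpy as np
from itertools import product

L1, beta, h = 6, 0.37, 1e-4
states = np.array(list(product([-1, 1], repeat=L1)))          # 2^L1 time slices
E_space = (states * np.roll(states, -1, axis=1)).sum(axis=1)  # spatial bonds per slice
M_slice = states.sum(axis=1)

# symmetric transfer matrix: time bonds plus half of each slice's spatial/field terms
T = np.exp(beta * (states @ states.T)
           + 0.5 * beta * (E_space[:, None] + E_space[None, :])
           + 0.5 * h * (M_slice[:, None] + M_slice[None, :]))

v = np.ones(len(states))
for _ in range(200):                    # eq. (18): v <- T v / |T v|
    v = T @ v
    v /= np.linalg.norm(v)
print((v * (M_slice / L1) * v).sum())   # eq. (17): M = v0^T Mtilde v0
```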
### 3.2 The equation of state
In order to obtain high-precision estimates of $`F^{(4)}`$ it turns out to be important to consider the external field $`h`$ as a function of the magnetization rather than the opposite. The advantage of the series (14) is that the coefficients — at least those we can compute — are all positive, and therefore, truncation errors are less severe than in the case of $`m(h)`$.
There is no sharp optimum in the truncation order. After a few numerical experiments we decided to keep in eq. (14) the terms up to $`b_{15}M^{15}`$:
$$h(M)=b_1M+b_3M^3+\ldots +b_{15}M^{15}.$$
(19)
In order to compute the coefficients $`b_1`$, $`b_3`$, …, $`b_{15}`$ we solved the system of linear equations that results from inserting 8 numerically calculated values of the magnetization $`M(h_1)`$, $`M(h_2)`$, …, $`M(h_8)`$ into the truncated equation of state (19). Here we have chosen $`h_j=jh_1`$.
The errors introduced by the truncation of the series decrease as $`h_1`$ decreases, while the errors from numerical rounding increase as $`h_1`$ decreases. Therefore, we varied $`h_1`$ to find the optimal choice. For a given value of $`\beta `$ we performed this search only for one lattice size $`L_1`$. (Typically $`L_1=18`$). From the variation of the result with $`h_1`$ we can infer the precision of our estimates of $`b_i`$. For example, for $`\beta =0.37`$, we get $`b_1`$ with 14 significant digits and $`b_3`$ with 12 significant digits.
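A minimal sketch (ours) of the linear solve just described; the toy “magnetization” used to generate data is hypothetical, and the conditioning comment mirrors the $`h_1`$ trade-off discussed above:

```python
import numpy as np

def fit_b(M_of_h, h1):
    """Solve eq. (19) for b_1, b_3, ..., b_15 from M(h_j), h_j = j*h1."""
    h = np.arange(1, 9) * h1
    M = np.array([M_of_h(x) for x in h])
    A = M[:, None] ** np.arange(1, 16, 2)[None, :]   # columns M, M^3, ..., M^15
    return np.linalg.solve(A, h)

# toy check: invert a known relation h = M/chi + b3*M^3 (hypothetical numbers)
chi, b3 = 1.0, 0.5
def M_of_h(x):
    r = np.roots([b3, 0, 1/chi, -x])
    return r[np.argmin(abs(r.imag))].real            # the single real root

print(fit_b(M_of_h, 0.05)[:2])   # ~ [1/chi, b3] = [1.0, 0.5]; higher b_k ~ 0
# decreasing h1 improves the truncation error but degrades the conditioning of A,
# exactly the trade-off described in the text
```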
### 3.3 Extrapolation to the thermodynamic limit
From the transfer matrix formalism it follows that for periodic boundary conditions and $`\beta \ne \beta _c`$, the free energy density approaches its thermodynamic limit value exponentially in $`L_1`$. Hence, also derivatives of the free energy density with respect to $`h`$ and linear combinations of them should converge exponentially in $`L_1`$ to their thermodynamic limit value. Therefore, in the simplest case, one would extrapolate with an Ansatz
$$b(L_1)=b(\infty )+c\mathrm{exp}(-xL_1),$$
(20)
where $`b(L_1)`$ is the quantity at the given lattice size $`L_1`$ and $`b(\infty )`$ the thermodynamic limit of the quantity. In order to obtain numerical estimates for $`b(\infty )`$, $`c`$ and $`x`$ we have inserted the numerical result of $`b`$ for the three lattice sizes $`L_1`$, $`L_1-1`$ and $`L_1-2`$ into eq. 20. It turns out that, using this simple extrapolation, a dependence of the result for $`b(\infty )`$ on $`L_1`$ is still visible. This indicates that, with our numerical precision, subleading exponential corrections have to be taken into account. For this purpose we have iterated the extrapolation discussed above.
The iteration starts with $`b^{(0)}(L_1)`$ which are the quantities $`b`$ that have been computed by the transfer matrix for the lattice size $`L_1`$. A step of the iteration is given by solving the system of equations
$`b^{(i)}(L_1-2)`$ $`=`$ $`c\mathrm{exp}(-x(L_1-2))+b^{(i+1)}(L_1)`$
$`b^{(i)}(L_1-1)`$ $`=`$ $`c\mathrm{exp}(-x(L_1-1))+b^{(i+1)}(L_1)`$
$`b^{(i)}(L_1)`$ $`=`$ $`c\mathrm{exp}(-xL_1)+b^{(i+1)}(L_1).`$ (21)
with respect to $`b^{(i+1)}(L_1)`$, $`c`$ and $`x`$. In table 1 we give as an example the extrapolation of $`b_3`$ at $`\beta =0.37`$. In the second column we give the results obtained for the given $`L_1`$. The stability of the extrapolation with varying $`L_1`$ increases up to the fourth iteration. Further iterations become numerically unstable.
As final result we took $`b_3=0.04837802(3)`$ from the $`4^{th}`$ iteration. The error was estimated from the variation of the results with $`L_1`$. As a consistency check, we also extracted the thermodynamic limit by fitting with multi-exponential Ansätze. We found consistent results. The relative accuracy of $`b_1`$ in the thermodynamic limit was in general better than that of $`b_3`$.
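The three-point step of Eq. (21) has a closed-form solution for $`b^{(i+1)}(L_1)`$ (it is an Aitken-type extrapolation); the sketch below (ours, with synthetic two-exponential data mimicking table 1) shows the iteration converging, and — as noted above — it becomes numerically unstable if pushed too far:

```python
import numpy as np

def extrapolate(b):
    """One step of eq. (21): eliminate b = b_inf + c*exp(-x*L) from triples."""
    b = np.asarray(b, dtype=float)
    d1, d2 = b[1:-1] - b[:-2], b[2:] - b[1:-1]
    return b[2:] - d2**2 / (d2 - d1)        # closed form for b^{(i+1)}(L_1)

L = np.arange(6, 20)
b = 0.04837802 + 3e-3*np.exp(-0.5*L) + 2e-3*np.exp(-0.9*L)   # synthetic data
for _ in range(2):                          # iterate as in the paper
    b = extrapolate(b)
print(b[-1])                                # -> 0.04837802..., the L -> infinity value
```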
In the second column of table 2 we give our final results for $`-F^{(4)}t^{11/2}`$ at the $`\beta `$ values that we have studied. For a discussion of the following columns see section 5.
## 4 Scaling function for $`F^{(4)}`$
In this Section we shall study the asymptotic behavior of $`F^{(4)}(t)`$ for $`t\rightarrow 0`$ following Ref. . With respect to , we have added the contributions due to the irrelevant operators. Here, we shall use the knowledge of the operator content of the theory at the critical point which can be obtained by using the methods of 2d conformal field theories.
General renormalization-group (RG) arguments indicate that the free energy of the model can be written as
$`F(t,h)`$ $`=`$ $`F_b(t,h)+|u_t|^{d/y_t}f_{\mathrm{sing}}({\displaystyle \frac{u_h}{|u_t|^{y_h/y_t}}},\left\{{\displaystyle \frac{u_j}{|u_t|^{y_j/y_t}}}\right\})`$ (22)
$`+|u_t|^{d/y_t}\mathrm{log}|u_t|\stackrel{~}{f}_{\mathrm{sing}}({\displaystyle \frac{u_h}{|u_t|^{y_h/y_t}}},\left\{{\displaystyle \frac{u_j}{|u_t|^{y_j/y_t}}}\right\}).`$
Here $`F_b(t,h)`$ is a regular function of $`t`$ and $`h^2`$, the so-called bulk contribution, $`u_t`$, $`u_h`$, $`\{u_j\}`$ are the non-linear scaling fields associated respectively to the temperature, the magnetic field and the irrelevant operators, and $`y_t`$, $`y_h`$, $`\{y_j\}`$ are the corresponding dimensions. For the Ising model $`y_t=1`$, $`y_h=15/8`$. Notice the presence of the logarithmic term, that is related to a “resonance” between the thermal and the identity operator<sup>1</sup><sup>1</sup>1In principle, logarithmic terms may also arise from additional resonances due to the fact that $`y_j`$ are integers or differ by integers from $`y_h`$. They will not be considered here since these contributions either are subleading with respect to those we are interested in or have a form that is already included.. The scaling fields are analytic functions of $`t`$ and $`h`$ that respect the $`𝐙_\mathrm{𝟐}`$ parity of $`t`$ and $`h`$. Let us write the Taylor expansion for $`u_h`$ and $`u_t`$, keeping only those terms that are needed for our analysis (we use the notations of ):
$$u_h=h[1+c_ht+d_ht^2+e_hh^2+f_ht^3+O(t^4,th^2)],$$
(23)
$$u_t=t+b_th^2+c_tt^2+d_tt^3+e_tth^2+g_th^4+f_tt^4+O(t^5,t^2h^2).$$
(24)
Let us first discuss the contributions of the irrelevant operators. In generic models their dimensions are usually unknown. In the present case instead, we may identify the irrelevant operators with the secondary fields obtained from the exact solution of the model at the critical point and use the corresponding RG exponents as input of our analysis. We shall discuss this issue in full detail in a forthcoming publication, let us only summarize here the main results of this analysis. It turns out that, discarding corrections of order $`O(t^5)`$, we have only two possible contributions:
* The first one is due to terms $`T\overline{T}`$ , $`T^2`$ and $`\overline{T}^2`$ (where $`T`$ denotes the energy-momentum tensor). These terms would give a correction proportional to $`t^2`$ in the scaling function.
* The second contribution is due to the $`L_{-3}\overline{L}_{-3}I`$ field from the Identity family and to $`L_{-4}ϵ`$, $`\overline{L}_{-4}ϵ`$ from the energy family (where the $`L_i`$’s are the generators of the Virasoro algebra). These terms give a correction proportional to $`t^4`$ in the scaling function.
However, it turns out (see for instance the remarks of ) that in the infinite-volume free energy of the 2d Ising model the $`T\overline{T}`$ , $`T^2`$ and $`\overline{T}^2`$ terms are actually absent<sup>2</sup><sup>2</sup>2This conjecture is verified by the free energy and by the susceptibility at $`h=0`$ and by the free energy $`F(0,h)`$ . Note that this is expected to be true only in the thermodynamic limit. In the finite-size scaling limit corrections that vanish like $`L_1^{-2}`$ are indeed observed . It is also not true for other observables, for instance, for the correlation length $`\xi `$.. Thus, from the above analysis we see that the first correction due to the irrelevant fields appears only at order $`t^4`$. Therefore, since $`u_j/|u_t|^{y_j/y_t}`$ vanishes for $`t\rightarrow 0`$, we can expand
$$f_{\mathrm{sing}}(x,\{z_j\})=Y_+(x)+u_0(t,h)u_t^4X_+(x)+O(u_t^5),$$
(25)
where $`u_0(t,h)`$ is an analytic function of $`t`$ and $`h`$, and $`Y_+`$, $`X_+`$ are appropriate scaling functions. The same expansion holds for $`\stackrel{~}{f}_{\mathrm{sing}}`$ with different functions $`\stackrel{~}{Y}_+`$, $`\stackrel{~}{X}_+`$. Additional constraints can be obtained using the exactly known results for the free energy, the magnetization and the susceptibility in zero field. Since all numerical data indicate that all zero-momentum correlation functions diverge as a power of $`t`$ without logarithms for $`t\rightarrow 0`$, we assume as in Ref. that $`\stackrel{~}{Y}_+(x)`$ is constant, i.e. $`\stackrel{~}{Y}_+(x)=\stackrel{~}{Y}_0`$. The exact results for the free energy and the magnetization give then
$$c_h=\frac{\beta _c}{\sqrt{2}},d_h=\frac{23\beta _c^2}{16},f_h=\frac{191\beta _c^3}{48\sqrt{2}},$$
(26)
$$c_t=\frac{\beta _c}{\sqrt{2}},d_t=\frac{7\beta _c^2}{6},f_t=\frac{17\beta _c^3}{6\sqrt{2}},$$
(27)
where we have adapted the numbers of to our normalizations, and $`\stackrel{~}{Y}_0=4\beta _c^2/\pi `$. By making use of the expansion of the susceptibility, we obtain further
$$Y_+^{(2)}(0)=A_\chi ,b_t=\frac{D_0\pi }{16\beta _c^2},$$
(28)
where $`D_0`$ is the coefficient of the contribution proportional to $`t\mathrm{log}|t|`$ in the susceptibility. Numerically $`D_0=0.04032550\ldots `$, so that $`b_t=0.0407708\ldots `$ Nickel has also conjectured, on the basis of the numerical analysis of the high-temperature series of the susceptibility, that $`e_t=b_t\beta _c\sqrt{2}`$.
Using the results presented above, and taking four derivatives of the free energy we obtain
$`-F^{(4)}`$ $`=`$ $`t^{-11/2}(a_{F4}(t)+t^4\stackrel{~}{a}_{F4}(t)\mathrm{log}|t|)+t^{-11/4}(b_{F4}(t)+t^4\stackrel{~}{b}_{F4}(t)\mathrm{log}|t|)`$ (29)
$`+c_{F4}(t)+\stackrel{~}{c}_{F4}(t)\mathrm{log}|t|,`$
where $`a_{F4}(t)`$, $`b_{F4}(t)`$, $`c_{F4}(t)`$, $`\stackrel{~}{a}_{F4}(t)`$, $`\stackrel{~}{b}_{F4}(t)`$, and $`\stackrel{~}{c}_{F4}(t)`$ are analytic functions. Using Eqs. (26) and (27), we can compute the first terms in the Taylor expansion of $`a_{F4}(t)`$. By direct evaluation we find
$`a_{F4}(t)`$ $`=`$ $`-Y_+^{(4)}(0){\displaystyle \frac{(1+c_ht+d_ht^2+f_ht^3)^4}{(1+c_tt+d_tt^2+f_tt^3)^{11/2}}}+O(t^4)`$ (30)
$`=`$ $`-Y_+^{(4)}(0)\left(1-{\displaystyle \frac{3\beta _c}{2\sqrt{2}}}t+{\displaystyle \frac{13\beta _c^2}{48}}t^2+{\displaystyle \frac{29\beta _c^3}{32\sqrt{2}}}t^3\right)+O(t^4).`$
From Eq. (30), we immediately identify
$$-Y_+^{(4)}(0)=A_{F^{(4)}}.$$
(31)
Analogously, a direct calculation shows that
$$b_{F4}(0)=21b_tY_+^{(2)}(0)=0.8241504\mathrm{}.$$
(32)
The contributions proportional to $`c_{F4}(t)`$ and $`\stackrel{~}{c}_{F4}(t)`$ give corrections of order $`t^{11/2}`$ which will be neglected in the following.
Putting together the various terms, we end up with the following expression for the scaling function:
$`-F^{(4)}t^{11/2}`$ $`=`$ $`A_{F^{(4)}}(1+p_1t+p_2t^2+p_3t^3)`$ (33)
$`+p_4t^{11/4}+p_5t^{15/4}+p_6t^4+\stackrel{~}{p}_6t^4\mathrm{log}|t|+p_7t^{19/4}+O(t^5)`$
where
$`p_1`$ $`=`$ $`-{\displaystyle \frac{3\beta _c}{2\sqrt{2}}}=-0.46741893\ldots `$ (34)
$`p_2`$ $`=`$ $`{\displaystyle \frac{13\beta _c^2}{48}}=0.052597147\ldots `$ (35)
$`p_3`$ $`=`$ $`{\displaystyle \frac{29\beta _c^3}{32\sqrt{2}}}=0.054843243\ldots `$ (36)
$`p_4`$ $`=`$ $`21b_tY_+^{(2)}(0)=0.8241504\ldots .`$ (37)
and $`p_5,p_6,\stackrel{~}{p}_6,p_7`$ and $`A_{F^{(4)}}`$ are undetermined constants which we shall try to fix in the next section.
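The numerical values of the fixed coefficients follow directly from $`\beta _c`$ and the quantities quoted above (a quick check of ours):

```python
import math

beta_c = 0.5 * math.log(math.sqrt(2) + 1)
p1 = -3 * beta_c / (2 * math.sqrt(2))
p2 = 13 * beta_c**2 / 48
p3 = 29 * beta_c**3 / (32 * math.sqrt(2))
p4 = 21 * 0.0407708 * 0.9625817322          # 21 * b_t * Y_+^(2)(0)
print(p1, p2, p3, p4)   # -0.46741..., 0.05259..., 0.05484..., 0.82415...
```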
## 5 Analysis of the data
The aim of this section is to obtain a numerical estimate for $`A_{F^{(4)}}`$ by fitting the data reported in tab.2 with the scaling function (33). The major problem in doing this is to estimate the systematic errors involved in the truncation of the scaling function. To this end we performed two different types of analysis. Let us see in detail the procedure that we followed.
### 5.1 First level of analysis
We first performed a rather naive analysis of the data. In table 2 we include step by step the information that we have gained in the previous section. In the third column of table 2 we have multiplied $`-F^{(4)}`$ by $`\frac{u_t^{11/2}}{(u_h/h)^4}`$, where $`u_h`$ and $`u_t`$ are given by eqs. (23,24). We see that the variation from $`\beta =0.30`$ to $`\beta =0.37`$ of the numbers in column three is reduced by a factor of about 10 compared with column two. In column four we add $`-b_{F4}(0)t^{11/4}`$ to the numbers of column three. Again we see that the variation of the numbers with $`\beta `$ is drastically reduced in column four compared with column three.
Since we do not know the coefficients of higher order corrections exactly we have to extract them from the data. In the last two columns of table 2 we have extrapolated linearly in $`t^x`$, with $`x=3.75`$ in column 5 and $`x=4`$ in column 6. For the extrapolation we used neighboring $`\beta `$-values (e.g. the value quoted for $`\beta =0.37`$ is obtained from the extrapolation of the data for $`\beta =0.365`$ and $`\beta =0.37`$).
We see that the result of the extrapolation does not vary very much when the exponent is changed from $`15/4`$ to $`4`$. Also the numbers given in column 5 and 6 are much more stable than those of column 4.
From this naive analysis we conclude that $`a_{F_4}(0)=4.3791(1)`$, where the error bar is roughly estimated from an extrapolation of column 5 with $`t^4`$.
In the next section we shall try to include the higher order corrections in a more sophisticated fitting procedure.
### 5.2 Second level of analysis
We made three types of fits:
f1\] In the first we kept $`A_{F^{(4)}}`$, $`p_5`$ and $`p_6`$ as free parameters.
f2\] In the second we kept $`A_{F^{(4)}}`$, $`p_5`$, $`p_6`$ and $`p_7`$ as free parameters.
f3\] In the third we kept $`A_{F^{(4)}}`$, $`p_5`$, $`p_6`$ and $`\stackrel{~}{p}_6`$ as free parameters.
These are the only choices allowed by the data. If we neglect also $`p_6`$ we can never obtain an acceptable confidence level (in fact we know that $`p_6`$ is certainly different from zero and our data are too precise to allow such an approximation). If we add further terms, like a power of $`t^5`$ for instance, or try to fit simultaneously $`p_5`$, $`p_6`$, $`\stackrel{~}{p}_6`$ and $`p_7`$ it always happens that some of the amplitudes are smaller than their statistical uncertainty signalling that our data are not precise enough to allow for five free parameters.
In order to estimate the systematic errors involved in the estimate of $A_{F^{(4)}}$ we performed, for all the fitting functions, several independent fits, trying first to fit all the existing data (those listed in tab.2) and then eliminating the data one by one, starting from the farthest from the critical point. Among the set of estimates of the critical amplitudes we selected only those fulfilling the following requirements:
1\] The reduced $\chi ^2$ of the fit must be of order unity. In order to fix a precise threshold we required the fit to have a confidence level larger than $30\%$.
2\] For all the subleading terms included in the fitting function, the amplitude estimated from the fit must be larger than the corresponding error; otherwise the term is eliminated from the fit. It is exactly this constraint which forbids us from taking into account fits with more than four free parameters.
3\] The amplitude of the $n^{th}$ subleading field must be such that, when it is multiplied by the corresponding power of $t$ (for the largest value of $t$ involved in the fit), it gives a contribution smaller than that of the $(n-1)^{th}$ subleading term. This is intended to avoid artificial cancellations between subleading fields.
Among all the estimates of the critical amplitude $`A_{F^{(4)}}`$ fulfilling these requirements we select the smallest and the largest ones as lower and upper bounds.
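In code, the three acceptance requirements can be summarized as follows (a minimal sketch; the interface is our own choice and only re-implements the selection logic, not the NAG-based fits themselves):

```python
import numpy as np
from scipy.stats import chi2

def acceptable(chi2_val, dof, amps, errs, powers, t_max):
    """Check requirements 1]-3] for one fit: amps/errs/powers are the
    fitted subleading amplitudes, their errors and the powers of t,
    ordered from leading to most subleading (illustrative sketch)."""
    # 1] confidence level of the fit must exceed 30%
    if chi2.sf(chi2_val, dof) < 0.30:
        return False
    # 2] every fitted amplitude must exceed its statistical error
    if np.any(np.abs(amps) <= np.asarray(errs)):
        return False
    # 3] each subleading term must stay smaller than the previous
    #    one at the largest t entering the fit
    contrib = np.abs(amps) * t_max ** np.asarray(powers, float)
    return bool(np.all(np.diff(contrib) < 0))
```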
The results of the fits are reported in tabs. 3, 4 and 5. We report all the combinations of input data which fulfill requirements 1\]-3\]. In the tables we also report the best-fit value of $p_5$. All the fits were performed using the double-precision NAG routine G02DAF.
Looking at the three tables and selecting the lowest and highest values of $`A_{F^{(4)}}`$ we obtain the bounds
$$-4.379093>A_{F^{(4)}}>-4.379110,$$
(38)
from which, using eq.(13), we obtain
$$g_4^*=14.69735(3)$$
(39)
which we consider as our best estimate for $g_4^*$. As anticipated in the introduction, this result is in substantial agreement with the estimate of Ref. . Notice however that the error quoted in eq. (39) should not be considered as a standard deviation. It rather encodes, in a compact notation, the systematic uncertainty of our fitting procedure.
We can compare the estimate (39) with previous numerical determinations. The analysis of high-temperature expansions gives $g_4^*=14.694(2)$ (Ref. ) and $g_4^*=14.693(4)$ (Ref. ), while Monte Carlo simulations give $g_4^*=14.3(1.0)$ (Ref. ) and $g_4^*=14.69(2)$ (Ref. ). These results agree with our estimate (39), which is however much more precise.
It is clear from the data (see the second column of tabs. 3, 4 and 5) that the uncertainty on $A_{F^{(4)}}$ is mostly due to the fluctuation of $p_5$. If one were able to fix $p_5$ exactly as well, the precision of the determination of $g_4^*$ could be significantly enhanced.
Acknowledgements. We thank Alan Sokal for useful discussions and Bernie Nickel for sending us his unpublished addendum, Ref. . This work was partially supported by the European Commission TMR programme ERBFMRX-CT96-0045.
# Spin Dynamics of Cavity Polaritons
## Abstract
We have studied polariton spin dynamics in a GaAs/AlGaAs microcavity by means of polarization- and time-resolved photoluminescence spectroscopy as a function of excitation density and normal mode splitting. The experiments reveal a novel behavior of the degree of polarization of the emission, namely a finite delay before it reaches its maximum value. We have also found that the stimulated emission of the lower polariton branch has a strong influence on the spin dynamics: in an interval of $\sim$150 ps the polarization changes from +100$\%$ to negative values as large as -60$\%$. This strong modulation of the polarization and its high speed may open new possibilities for spin-based devices.
Semiconductor microcavities are among the most suitable structures to study light-matter interaction. In the strong coupling regime, first observed in 1992, excitons and photons form mixed states, named cavity polaritons. The signature of this regime is an anticrossing of the exciton and cavity modes when they are brought into resonance. Although cavity polaritons have been extensively investigated, some aspects of their optical properties need better understanding, in particular those referring to non-linear processes and the polarization of the emitted light.
In studying non-linear processes, it has been difficult to maintain the polaritonic signature because of saturation of the strong coupling regime. This is the case in Vertical Cavity Surface Emitting Lasers (VCSEL's), where the nonlinear emission originates in the population inversion of a dense electron-hole plasma in the weak coupling regime. The polaritonic character of the stimulated emission in both III-V and II-VI microcavities reported recently has been questioned by calculations claiming that those results can be explained within a fermionic quantum theory.
The polarization of the light emitted by bare semiconductor quantum wells (QWs) has been widely investigated. In fact, because the polarization is directly linked to the spin (i.e. the third component of the total angular momentum), its study is part of a new field, known as spintronics, that aims to develop fast spin-based optoelectronic devices. The mechanisms responsible for the spin relaxation of excitons in QWs, and its dependence on different parameters such as well width, temperature and excitation density, have been established. In microcavities, due to the mixed photon-exciton character of the polaritons and the inefficiency of those mechanisms on the cavity-like mode, significant changes in the spin dynamics are to be expected. In spite of these differences, only a few works on the polarization properties of VCSEL's and microcavities have been reported, the latest one on the cavity-polariton spin properties in the nonlinear regime and under cw excitation. Following a preliminary study of spin dynamics in a semiconductor microcavity, we report in this paper on the time evolution of the polariton spin in both the linear and non-linear regimes. Specifically, we show that the emission is highly polarized in the non-linear regime and that the polarization dynamics is strongly influenced by the exciton-cavity detuning.
The microcavity studied in this work, grown by molecular beam epitaxy, consists of three GaAs quantum well regions embedded in a 3$\lambda$/2 Al<sub>0.25</sub>Ga<sub>0.75</sub>As Fabry-Perot resonator clad by dielectric mirrors. The top and bottom mirrors are distributed Bragg reflectors made of twenty and a half and twenty-four alternating AlAs/Al<sub>0.35</sub>Ga<sub>0.65</sub>As $\lambda$/4 layers, respectively. The QWs are placed at the antinodes of the resonator's standing wave. A slight variation (introduced by design during growth) of the cavity's thickness along the radial direction of the wafer allowed us to tune the cavity's resonance with the transition in the QWs by moving the excitation spot across the sample. Using low temperature cw-photoluminescence (PL) measurements we have identified exciton-like (X) and cavity-like (C) modes, whose Normal Mode Splitting (NMS) was found to vary between 3.5 and 7 meV as the laser spot was moved across the sample.
We have used time-resolved spectroscopy to study polariton recombination and spin dynamics as a function of excitation density and exciton-cavity detuning (E<sub>C</sub>-E<sub>X</sub>). The experiments were performed under non-resonant excitation above the cavity stop-band (1.71 eV), and the PL emitted by the sample was analyzed in a conventional up-conversion spectrometer with a time resolution of $\sim$2 ps. The sample was mounted on a cold-finger cryostat where the temperature was kept at 5 K. For polarization resolved measurements two $\lambda$/4 plates were included in the optical path of the experiment. Under $\sigma^+$ excitation, the degree of polarization of the PL is defined as $\mathcal{P}=\frac{I^+-I^-}{I^++I^-}$, where $I^\pm$ denotes the PL emitted with $\pm$1 helicity. The analysis of this quantity gives direct information about the spin relaxation processes, as it is directly related to the difference of the +1 and -1 spin populations. In the following, we will refer to this quantity simply as the polarization.
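As a trivial but explicit illustration, the quantity analyzed in the figures below can be computed point by point along the time axis (a sketch; `I_plus` and `I_minus` stand for measured $\sigma^+$ and $\sigma^-$ PL traces):

```python
import numpy as np

def polarization(I_plus, I_minus):
    """Degree of circular polarization P = (I+ - I-)/(I+ + I-),
    computed point by point along the time axis."""
    I_plus, I_minus = np.asarray(I_plus, float), np.asarray(I_minus, float)
    return (I_plus - I_minus) / (I_plus + I_minus)

# e.g. a fully sigma+ polarized signal gives P = +1:
print(polarization([10.0, 8.0], [0.0, 8.0]))   # -> [1. 0.]
```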
Initial time-resolved experiments under weak excitation ($I_{exc}<$ 19 $W/cm^2$) confirmed the NMS variation across the sample extracted from cw measurements. The study of the time evolution of both peaks revealed that the NMS does not influence the dynamics for positive detunings, in agreement with Abram et al. The characteristic rise and decay times of the PL were very similar for both polariton branches, and amounted to $\tau_r^X\sim 100$ ps, $\tau_d^X\sim 300$ ps, $\tau_r^C\sim 70$ ps, and $\tau_d^C\sim 250$ ps, where $r$ ($d$) denotes the rise (decay) time.
With increasing excitation density, drastic changes were observed in the time-resolved spectra as well as in the recombination dynamics. At low power densities, both the lower (LPB) and upper (UPB) polariton branches have a similar dependence on power (slightly larger than linear). This dependence is maintained for the UPB in the whole range of excitation densities used in our experiments (up to 40 $W/cm^2$). In contrast, the dependence of the LPB emission on power shows a threshold, $I_{th}$, at $\sim$20 $W/cm^2$ (Ref. ).
Figures 1a and 1b display two PL spectra measured 100 ps after excitation, below (7 $W/cm^2$) and at (20 $W/cm^2$) the threshold, respectively. At low power, the spectrum is dominated by the UPB at 1.624 eV (Fig. 1a). The LPB becomes resolvable for I $\gtrsim$ 14 $W/cm^2$ as a very narrow peak at 1.621 eV (Fig. 1b), and it sharpens with increasing density, up to $I_{th}$, reducing its width by a factor of $\sim$4. This narrowing also occurs for the UPB, but its linewidth is only reduced by a factor of $\sim$2. The NMS is practically independent of power.
The time evolution is also strongly affected by an increase of the excitation density, as shown in Figures 1c and 1d. For small excitation densities (7 $W/cm^2$) the time evolution is similar to that typical of QWs under non-resonant excitation: the emission is characterized by slow rise and decay times. For larger excitation (20 $W/cm^2$), the rise and decay times are faster and there is a 30 ps delay before any PL from the sample is observable. This onset in the PL is due to the bottleneck in the relaxation of polaritons towards K = 0 states.
The subject of non-linear emission in semiconductor microcavities has been controversial regarding the existence of polaritons, due to bleaching at high densities. In our experiments, excitons and photons seem to remain strongly coupled above $I_{th}$, as can be inferred from the anticrossing depicted in the inset of Fig. 1d. The non-resonantly created excitons relax their energy very rapidly towards K $\approx$ 0 states and, a few picoseconds after excitation, the X mode is observed at 1.626 eV. At these very short times, the LPB (1.622 eV) is photon-like. With increasing delay, the X mode red shifts due to the decrease of the polariton density, similarly to the behavior of excitons in bare QWs. However, since the C mode energy is density independent, both modes become resonant, and a clear anticrossing is observed at $\sim$180 ps. At longer times, the LPB (UPB) recovers the X-like (C-like) character determined by cw measurements.
The three effects discussed above (linewidth reduction, excitation density threshold and, especially, the anticrossing in time) suggest that the PL observed above $I_{th}$ can be attributed to polariton stimulated emission.
Let us now concentrate on polariton spin dynamics. A $\sigma^+$ excitation pulse will initially populate the +1 spin level, but a -1 spin population will appear as a result of spin flip mechanisms, which eventually balance both spin populations and therefore reduce the polarization to $\mathcal{P}$=0. For excitons in bare QWs, the polarization reaches its maximum value just after excitation and then decays exponentially to zero. On the other hand, in microcavities, due to the complex nature of polaritons, one expects the spin dynamics of this mixed state to be different from that of bare excitons or photons. This fact is documented in Figure 2, which depicts the time evolution of the polarization of the cavity mode for two different excitation densities below the nonlinear emission threshold. In contrast with the monotonically decreasing behavior of $\mathcal{P}$ found in bare QWs, in our microcavity a maximum is observed at a finite time after excitation. The polarization at t = 3 ps is $\sim$10$\%$, which means that, after the relaxation of polaritons to K $\approx$ 0 states, only 55$\%$ of the total population is in the +1 spin state. Such a small value of $\mathcal{P}$ is mainly due to the non-resonant excitation conditions. $\mathcal{P}_{max}$ is reached in 60-100 ps, and its value increases with excitation density, being as high as 80$\%$ (Fig. 2(b), 19 $W/cm^2$) before entering the stimulated emission regime. These findings corroborate recent cw results showing that the polariton system can be markedly spin polarized.
The fact that a finite time is needed to reach $\mathcal{P}_{max}$ implies that there must be a new scattering mechanism that favors polaritons with +1 spin, and thus competes with spin relaxation and tends to prevent the equalization of both spin populations. The relaxation of large in-plane wave vector excitons, via emission of acoustic phonons, is stimulated by the polariton final-state population. The increase of $\mathcal{P}_{max}$ with excitation density (25$\%$ at 0.33 $I_{th}$, 80$\%$ at 0.95 $I_{th}$) shows that there is an enhancement of the scattering to the +1 spin state. The stimulation does not occur for the $\sigma^-$ polarized LPB emission, which also shows a time evolution with rise and decay times much longer than those observed in the non-linear regime for the $\sigma^+$ emission. This process is not only spin selective but also induces an increase of the +1 population by flipping the spin of the minority (-1) polaritons.
An additional observation evidences the importance of the polaritonic stimulation for the spin behavior: the decrease of the time needed to reach $\mathcal{P}_{max}$ with increasing positive detuning. This means that as we move away from resonance and the excitonic component of the LPB increases, the time evolution of $\mathcal{P}$ approaches that characteristic of bare excitons, with the maximum value of $\mathcal{P}$ occurring closer to t=0. The explanation given here is only qualitative, and a complete theoretical description must be developed before this new mechanism of scattering into spin polarized states can be fully understood.
Recently, a rate-equation model has been successfully applied to describe the optical properties of cavity polaritons in the nonlinear regime under cw excitation, and a microscopic fermionic many-body theory has been developed to explain the linear and non-linear behavior of normal-mode coupling in microcavities, including the dynamics of the light emission. However, even in these models the spin of the polaritons was not taken into account. Our results provide valuable new information on the stimulated scattering into spin-polarized states of the LPB, and should stimulate theoretical work that includes the spin in the calculations.
For excitation densities above the threshold, the time evolution of the cavity-mode polarization displays an even more surprising behavior. The LPB polarization reaches values as high as 95$\%$ when entering the nonlinear emission regime. In contrast, the UPB, although it shows a similar behavior, reaches a polarization of only 60$\%$. After the initial rise of the polarization, once the maximum is reached, its dynamics is strongly dependent on the NMS. Figure 3 depicts the time evolution of $\mathcal{P}$ for two different points of the sample, with different NMS, under an excitation density of 2 $I_{th}$. For small exciton-cavity detunings (Fig. 3(a), 4.5 meV) a negative dip (-60$\%$) is observed at $\sim$150 ps, which is absent for larger NMS (Fig. 3(b), 6 meV).
The negative polarization is a consequence of the fast disappearance of the +1 polaritons, due to the stimulated $\sigma^+$ emission of the LPB, and of the concurrent slower dynamics of the $\sigma^-$ emission. The -1 polariton population overcomes that of the +1 spin due to the lack of stimulation for the $\sigma^-$ polarization, and also because the spin-flip processes are not fast enough to compensate the emptying of the +1 polaritons. The remarkable change in the state of polarization of the emitted light, from +80$\%$ to -60$\%$ in a very short time ($\sim$100 ps), is unique and, to the best of our knowledge, has not been reported before in any semiconductor based system.
Once the minimum of $\mathcal{P}$ is reached, the polarization dynamics becomes slower: by then the polariton population has decreased by a factor of 5 to 10 (depending on power density), and the remaining +1 spin population is too small to give rise to stimulation. Under these conditions, only the usual spin-flip mechanisms govern the polarization, which decreases steadily. Figure 3b shows that the negative dip has disappeared for larger NMS, due to the modification of the stimulated emission dynamics. The decay time of the $\sigma^+$ PL becomes slower, the loss of +1 polaritons is neutralized by flipping -1 spins, and as a result $\mathcal{P}$ does not reach negative values. One can also observe in Fig. 3b that the abrupt decay of the polarization is slowed down with increasing NMS.
It should be mentioned that excitation with $\sigma^-$ yields results identical to those of the $\sigma^+$ excitation discussed above, as expected from time reversal symmetry arguments. The sign reversal of the polarization is also observed for $\sigma^-$ excitation, and it is again only the majority spin population (-1 in this case) that undergoes stimulation.
In summary, our experiments on polariton recombination as a function of excitation density and exciton-cavity detuning have revealed strong nonlinearities in the emission of the lower polariton branch. A careful study of the time evolution of the polarization has shown the existence of a new scattering mechanism for the polaritons that is spin selective and gives rise to very high values of the polarization. The Normal Mode Splitting plays a key role in the spin relaxation of cavity polaritons, leading to a reversal of the polarization for small detunings. The large contrast in the polarization and its high speed open the possibility of new concepts for spintronic devices, such as ultrafast switches, based on the spin dynamics of microcavity polaritons.
###### Acknowledgements.
We are thankful to Dr. I. W. Tao and R. Ruf for growing the samples used in this work, which has been supported by Fundación Ramón Areces, the Spanish DGICYT under contract PB96-0085, the CAM (07N/0026/1998), the Spain-US Joint Commission and the U.S. Army Research Office.
# Bond breaking in vibrationally excited methane on transition metal catalysts
## Abstract
The role of vibrational excitation of a single mode in the scattering of methane is studied by wave packet simulations of oriented CH<sub>4</sub> and CD<sub>4</sub> molecules from a flat surface. All nine internal vibrations are included. In the translational energy range from 32 up to 128 kJ/mol we find that initial vibrational excitations enhance the transfer of translational energy towards vibrational energy and increase the accessibility of the entrance channel for dissociation. Our simulations predict that initial vibrational excitations of the asymmetrical stretch ($`\nu _3`$) and especially the symmetrical stretch ($`\nu _1`$) modes will give the highest enhancement of the dissociation probability of methane.
The dissociative adsorption of methane on transition metals is an important reaction in catalysis; it is the rate limiting step in steam reforming to produce syngas, and it is prototypical for catalytic C–H activation. Although the reaction mechanism has been studied intensively, it is not yet fully understood. A number of molecular beam experiments, in which the dissociation probability was measured as a function of translational energy, have observed that vibrationally hot CH<sub>4</sub> dissociates more readily than cold CH<sub>4</sub>, with the energy in the internal vibrations being about as effective as the translational energy in inducing dissociation. Two independent bulb gas experiments, with laser excitation of the $\nu _3$ asymmetrical stretch and $2\nu _4$ umbrella modes on the Rh(111) surface, and laser excitation of the $\nu _3$ and $2\nu _3$ modes on thin films of rhodium, did not reveal any noticeable enhancement in the reactivity of CH<sub>4</sub>. A recent molecular beam experiment with laser excitation of the $\nu _3$ mode did succeed in measuring a strong enhancement of the dissociation on a Ni(100) surface. However, this enhancement was still much too low to account for the vibrational activation observed in previous studies, and indicated that other vibrationally excited modes contribute significantly to the reactivity of thermal samples.
Wave packet simulations of the methane dissociation reaction on transition metals have up to now always treated the methane molecule as a diatomic. Apart from one C–H bond (a pseudo $\nu _3$ stretch mode) and the molecule-surface distance, either (multiple) rotations or some lattice motion were included. None of them has looked at the role of the other internal vibrations, so there is no model that describes which vibrationally excited mode might be responsible for the experimentally observed vibrational activation. In previous papers we have reported on wave packet simulations to determine which internal vibrations are important for the dissociation of CH<sub>4</sub> in the vibrational ground state, and to what extent, as well as the isotope effect for CD<sub>4</sub>. We were not yet able to simulate the dissociation including all internal vibrations. Instead we simulated the scattering of methane, for which all internal vibrations can be included, and used the results to deduce consequences for the dissociation. These simulations indicate that for methane to dissociate, the interaction of the molecule with the surface should lead to an elongated equilibrium C–H bond length close to the surface. In this letter we report on new wave packet simulations of the role of vibrational excitations in the scattering of CH<sub>4</sub> and CD<sub>4</sub> molecules with all nine internal vibrations. The dynamical features of these simulations give new insight into the initial steps of the dissociation process. The conventional explanation is that vibrations help dissociation by adding energy needed to overcome the dissociation barrier. Our simulations show that such a process might play a role, but also that two other, new explanations matter. One of them is the enhanced transfer of translational energy into the dissociation channel by initial vibrational excitations. The other, more important, explanation is the increased accessibility of the entrance channel for dissociation.
We have used the multi-configurational time-dependent Hartree (MCTDH) method for our wave packet simulations. This method can deal with a large number of degrees of freedom and with large grids. (See Ref. for a recent review.) The initial translational energy has been chosen in the range of 32 to 128 kJ/mol. The initial state has been written as a product state of ten functions; one for the normally incident translational coordinate, and one for each internal vibration. All vibrations were taken to be in the ground state except one, which was put in the first excited state. The orientation of the CH<sub>4</sub>/CD<sub>4</sub> was fixed, and the vibrationally excited state had $a_1$ symmetry in the symmetry group of the molecule plus surface (C<sub>3v</sub> when one or three H/D atoms point towards the surface, and C<sub>2v</sub> when two point towards the surface). The potential-energy surface is characterised by an elongation of the C–H bonds when the molecule approaches the surface, no surface corrugation, and a molecule-surface part appropriate for Ni(111). It has been shown to give reasonable results, and is described in Refs. and . These articles also give the computational details about the configurational basis and the number of grid points, and contain illustrations of the orientations and of the important vibrational modes.
We can obtain a good idea of the overall activation of a mode by looking at the kinetic energy expectation values $\langle \mathrm{\Psi }(t)|T_j|\mathrm{\Psi }(t)\rangle$ for each mode $j$. During the scattering process the change in the translational kinetic energy is the largest. It is plotted in Fig. 1 as a function of time for CH<sub>4</sub> in the orientation with three bonds pointing towards the surface, with an initial kinetic energy of 96 kJ/mol and different initial vibrational states. When the molecule approaches the surface the kinetic energy falls to a minimum value. This minimum value varies only slightly with the initial vibrational state of the molecule. The total loss of translational kinetic energy varies substantially, however: the initial translational kinetic energy is not conserved, which means that vibrational excitation enhances inelastic scattering. Especially an excitation of the $\nu _1$ symmetrical stretch, and to a lesser extent of the $\nu _3$ asymmetrical stretch mode, results in an increased transfer of kinetic energy towards the intramolecular vibrational energy. The inelastic scattering component (the initial minus the final translational energy) for both isotopes in the orientation with three bonds pointing towards the surface shows the following trend for the initial vibrational excitations: $\nu _1$ $>$ $\nu _3$ $>$ $\nu _4$ $>$ ground state. CH<sub>4</sub> scatters more inelastically than CD<sub>4</sub> over the whole calculated range of translational kinetic energies if the molecule has an initial excitation of the $\nu _3$ stretch mode. CH<sub>4</sub> also scatters more inelastically than CD<sub>4</sub> for the $\nu _1$ symmetrical stretch mode at higher energies, but at lower energies it scatters slightly less inelastically. For molecules in the vibrational ground state or with an excitation of the $\nu _4$ umbrella mode, CD<sub>4</sub> has a higher inelastic scattering component than CH<sub>4</sub>. At an initial translational kinetic energy of 128 kJ/mol the excitation of the $\nu _4$ umbrella mode results in a strong enhancement of the inelastic scattering component. For CD<sub>4</sub> the inelastic scattering component for the initially excited $\nu _4$ umbrella mode can even become larger than for the initially excited $\nu _3$ stretch mode. For the orientation with two bonds pointing towards the surface we observe the same trends in the relation between the inelastic scattering components and the initially excited vibrational modes, but the inelastic scattering components are less than half of the values for the orientation with three bonds pointing towards the surface. Also, the excitation of the $\nu _3$ asymmetrical stretch mode now results in a higher inelastic scattering component for CD<sub>4</sub> than for CH<sub>4</sub>. Excitation of the $\nu _2$ bending mode gives a slightly higher inelastic scattering component than the vibrational ground state. For the orientation with one bond pointing towards the surface we observe an even lower inelastic scattering component. At an initial kinetic energy of 128 kJ/mol we find that both the $\nu _1$ and $\nu _3$ stretch modes have an inelastic component of around 6.5 kJ/mol for CD<sub>4</sub> and 4.0 kJ/mol for CH<sub>4</sub>.
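For readers unfamiliar with such expectation values, the following one-dimensional sketch (Python, atomic units; an illustrative stand-in, not our MCTDH code) evaluates $\langle T\rangle$ for a Gaussian wave packet on a grid by going to momentum space:

```python
import numpy as np

# <T> = <Psi| p^2/2m |Psi> for a 1D wave packet on a grid, evaluated
# in momentum space; a one-dimensional stand-in for the per-mode
# kinetic-energy expectation values discussed above (atomic units).
n, L, m = 512, 40.0, 1.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi = np.exp(-(x ** 2) / 2 + 1j * 2.0 * x)          # Gaussian, <p> = 2
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)        # normalize

psi_k = np.fft.fft(psi)
weight = np.abs(psi_k) ** 2
T = np.sum(weight * k ** 2 / (2 * m)) / np.sum(weight)
print(T)   # -> 2.25 = <p>**2/(2m) + 1/(4*sigma**2) for sigma = 1
```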
At an initial translational energy of 32 kJ/mol we observe, for both isotopes in all orientations, a very small increase of the translational kinetic energy for the vibrationally excited molecules, which means that there is a net transfer from intramolecular vibrational energy through the surface repulsion into the translational coordinate.
There seem to be two groups of vibrations with qualitatively different behavior with respect to (de)excitation when the molecule hits the surface. The first group, let's call it the "stretch" group, consists of the $\nu _3$ asymmetric stretch in any orientation and the $\nu _1$ symmetric stretch in the orientation with three hydrogen/deuterium atoms pointing towards the surface. The second, let's call it the "bending" group, consists of all bending vibrations and of the $\nu _1$ in the other orientations. When the molecule is initially in the vibrational ground state, the kinetic energy in the vibrations increases, reaches a maximum at the turn-around point, and then drops back almost to the initial level, except for a small contribution due to the inelastic scattering component. The vibrations within a group have very similar amounts of kinetic energy, but the "stretch" group clearly has a larger inelastic component than the "bending" group, and also the kinetic energy at the turn-around point is larger. When the molecule initially has an excitation of a vibration of the "stretch" group, the kinetic energy of that vibration increases, reaches a maximum at the turn-around point, and drops to a level lower than it was initially. For an excitation of a vibration of the "bending" group there is no maximum, but its kinetic energy simply drops to a lower level. We see that in all cases there is not only a transfer of energy from the translation to the vibrations, but also an energy flow from the initially excited vibration to the other vibrations. However, the total of the vibrational kinetic energy and the intramolecular potential energy increases, because it has to absorb the inelastic scattering component.
Figure 2 shows the (repulsive) interaction with the surface during the scattering process of CH<sub>4</sub> at an initial kinetic energy of 96 kJ/mol and different initial vibrational excitations, for the orientation with three hydrogens pointing towards the surface. Since this is a repulsive term with an exponential fall-off, changes in the repulsion indicate the motion of the part of the wave packet closest to the surface. At the beginning of the simulation the curves are almost linear in a logarithmic plot, because the repulsion hardly changes the velocity of the molecule. After some time the molecule enters a region with a higher surface repulsion and the slopes of the curves drop. This results in a maximum at the turn-around point, where most of the initial translational kinetic energy has been transferred into potential energy of the surface repulsion. In a classical simulation this would mean zero translational kinetic energy; in our wave packet simulations it corresponds to the minimum of the kinetic energy. Past the maximum, a part of the wave packet accelerates away from the surface, and the slope becomes negative. The expectation value of the translational kinetic energy (see Fig. 1) increases at the same time. The slope of the curves in Fig. 2 becomes less negative towards the end of the simulation, although the expectation value of the translational kinetic energy in this time region is almost constant. The reason for this is that a part of the wave packet with less translational kinetic energy is still in a region close to the surface. We also see that the height of the plateaus for the different initial vibrational excitations is again in the order $\nu _1$ $>$ $\nu _3$ $>$ $\nu _4$ $>$ ground state. This again indicates that a larger part of the wave packet is inelastically scattered when $\nu _1$ is excited than when $\nu _3$ is excited, etc.
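Why the logarithmic plot is informative can be seen from a simple estimate: for a Gaussian wave packet in front of an exponential wall, the expectation value of the repulsion is itself exponential in the packet position, so its logarithm tracks the near-surface tail of the packet. A minimal sketch with invented numbers (not our actual potential parameters):

```python
import numpy as np

# For an exponential wall V = V0*exp(-alpha*z), a Gaussian wave packet
# centered at z0 with width sigma gives
#   <V> = V0 * exp(-alpha*z0 + (alpha*sigma)**2 / 2),
# so log<V> is linear in the packet position: the expectation value of
# the repulsion tracks the near-surface tail of the packet.
V0, alpha, sigma = 1.0, 1.5, 0.3          # illustrative values only
z0 = np.linspace(5.0, 1.0, 9)             # packet approaching the wall
print(V0 * np.exp(-alpha * z0 + (alpha * sigma) ** 2 / 2))
```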
At lower initial translational kinetic energies the plateaus are lower, and the main gap exists between the plateaus of the $\nu _1$ and $\nu _3$ stretch modes and the lower plateaus of the $\nu _4$ umbrella mode and the ground state. At an initial translational kinetic energy of 128 kJ/mol the positions of the plateaus are higher and the differences between the initial vibrational excitations are smaller. At this initial energy the plateau of the $\nu _3$ stretch mode is even at about the same position as that of the $\nu _4$ umbrella mode for CD<sub>4</sub> in the orientation with three bonds pointing towards the surface. The orientation with two bonds pointing towards the surface shows the same trends. The plateaus of the initially excited $\nu _2$ bending mode are located slightly above the ground state for both isotopes. For the orientation with one bond pointing towards the surface the relative positions of the plateaus of the different initial excitations are the same as at low energies in the orientation with three bonds pointing towards the surface.
Even though we did not try to describe the dissociation itself, the scattering simulations do yield indications of the role of vibrational excitations in the dissociation of methane, which can be compared with experimental observations. The dissociation of methane occurs over a late barrier, because it is enhanced by vibrational energy. Conventionally, the role of vibrational excitation in the enhancement of the dissociation probability was discussed as an effect of the availability of the extra (vibrational kinetic) energy for overcoming the dissociation barrier. Our simulations show that such a process might play a role, but they also show that two other processes occur through vibrational excitation.
Firstly, an initial vibrational excitation increases the transfer of translational kinetic energy towards the intramolecular vibrational energy. The simulations show that this inelastic scattering component can be seen in a large enhancement of the vibrational kinetic energy in the stretch modes at the turn-around point. This increase is larger for higher initial translational kinetic energy and is most effective in the orientation with three bonds pointing towards the surface. If the dissociation of methane occurs primarily in this orientation, then we would expect, based on the total available vibrational energy after hitting the surface, that excitation of the $\nu _1$ symmetrical stretch mode is the most effective for enhancing the dissociation probability. The $\nu _3$ asymmetrical stretch mode appears to be less so. An explanation of the enhanced inelastic scattering component upon vibrational excitation is that the excitation weakens the bonds, which eases excitation of the initially unexcited modes. Excitations other than the $\nu _2$, $\nu _3$, or $\nu _4$ with $a_1$ symmetry for a particular orientation can possibly result in higher energy transfers, but we think that the difference with $\nu _1$ (which always has $a_1$ symmetry) would still be large.
Secondly, the accessibility of the dissociation channel also enhances the dissociation probability. We have concluded previously that our potential mimics the entrance channel for dissociation reasonably well. In this letter we find that a part of the wave packet has a longer residence time at the surface. It is this part of the wave packet that accesses the dissociation channel, and it is also this part that is able to come near the transition state for dissociation. From Figs. 1 and 2 we conclude that the $\nu _1$ stretch mode will enhance the dissociation probability the most. The enhanced accessibility upon vibrational excitation is explained by the spread of the wave packet along a C–H bond, which gives a higher probability for the system to be atop the dissociation barrier.
The molecular beam experiment with excitation of the $\nu _3$ asymmetrical stretch mode of CH<sub>4</sub> of Ref. shows that a single excitation of the $\nu _3$ asymmetrical stretch mode enhances dissociation, but the measured reactivity of the $\nu _3$ stretch mode is too low to account for the total vibrational activation observed in the molecular beam study of Ref. . This means that excitation of a mode other than the $\nu _3$ stretch must be more effective for dissociation. Our simulations show that excitation of the $\nu _3$ stretch will indeed enhance dissociation, but predict that excitation of the $\nu _1$ symmetrical stretch mode will be more effective if the dissociation occurs primarily in the orientations with multiple bonds pointing towards the surface. The contribution of the $\nu _1$ symmetrical stretch mode cannot be measured directly, because it has no infra-red activity. However, the contribution of the $\nu _1$ mode can be estimated from a molecular beam study as follows. The contribution of the $\nu _3$ stretch has already been determined. Similarly, the contribution of the $\nu _4$ umbrella mode can be determined. The contribution of the $\nu _2$ bending mode can be estimated from our simulations to be somewhat lower than the $\nu _4$ umbrella contribution. The total contribution of all vibrations is known from Ref. , and a simple subtraction will then give us the contribution of the $\nu _1$ stretch. At high translational energies the accessibility of the dissociation channel for molecules with an excited $\nu _4$ umbrella mode is close to that of the molecules with excited stretch modes, and for CD<sub>4</sub> the inelastic scattering is also enhanced. So the excitation of the $\nu _4$ umbrella mode can still contribute significantly to the vibrational activation, since it also has a higher Boltzmann population in the molecular beam than the stretch modes.
This research has been financially supported by the Council for Chemical Sciences of the Netherlands Organization for Scientific Research (CW-NWO), and has been performed under the auspices of the Netherlands Institute for Catalysis Research (NIOK).
# A General Classification of Three-Neutrino Models and $U_{e3}$
## 1 Introduction
Over the years, and especially since the discovery of the large mixing of $`\nu _\mu `$ seen in atmospheric neutrino experiments, there have been numerous models of neutrino masses proposed in the literature. In the last two years alone, as many as one hundred different models have been published. One of the goals of this paper is to give a helpful classification of these models. Such a classification is possible because in actuality there are only a few basic ideas that underlie the vast majority of published neutrino mixing schemes. After some preliminaries, we present in section 2 a general classification of three-neutrino models that have a hierarchical neutrino spectrum. In section 3 we discuss the parameter $`U_{e3}`$, which describes the ‘1-3’ mixing of neutrinos. Since theoretical models are constructed to account for the solar and atmospheric neutrino oscillation data, which tightly constrain the ‘1-2’ and ‘2-3’ mixings but not the ‘1-3’ mixing, the parameter $`U_{e3}`$ will be very important in the future for distinguishing among different kinds of models and testing particular schemes.
There are four indications of neutrino mass that have guided recent attempts to build models: (1) the solar neutrino problem, (2) the atmospheric neutrino anomaly, (3) the LSND experiment, and (4) dark matter. There are many excellent reviews of the evidence for neutrino mass.<sup>1</sup>
(1) The three most promising solutions to the solar neutrino problem are based on neutrino mass. These are the small-angle MSW solution (SMA), the large-angle MSW solution (LMA), and the vacuum oscillation solution (VO). All these solutions involve $\nu _e$ oscillating into some other type of neutrino — in the models we shall consider, predominantly $\nu _\mu$. In the SMA solution the mixing angle and mass-squared splitting between $\nu _e$ and the neutrino into which it oscillates are roughly $\sin^2 2\theta \simeq 5.5\times 10^{-3}$ and $\delta m^2\simeq 5.1\times 10^{-6}\ eV^2$. For the LMA solution one has $\sin^2 2\theta \simeq 0.79$ and $\delta m^2\simeq 3.6\times 10^{-5}\ eV^2$. (The numbers are best-fit values from a recent analysis.<sup>2</sup>) And for the VO solution $\sin^2 2\theta \simeq 0.93$ and $\delta m^2\simeq 4.4\times 10^{-10}\ eV^2$. (Again, these are best-fit values from a recent analysis.<sup>3</sup>)
(2) The atmospheric neutrino anomaly strongly implies that $\nu _\mu$ is oscillating with nearly maximal angle into either $\nu _\tau$ or a sterile neutrino, with the data preferring the former possibility.<sup>4</sup> One has $\sin^2 2\theta \simeq 1.0$ and $\delta m^2\simeq 3\times 10^{-3}\ eV^2$. (A numerical sketch of the corresponding two-flavor oscillation probability is given after this list.)
(3) The LSND result, which would indicate a mixing between $\nu _e$ and $\nu _\mu$ with $\delta m^2\sim 0.3$–$1\ eV^2$, is regarded with more skepticism for two reasons. The experimental reason is that KARMEN has failed to corroborate the discovery, although it is true that KARMEN has not excluded the whole LSND region. The theoretical reason is that to account for the LSND result and also for both the solar and atmospheric anomalies by neutrino oscillations would require three quite different mass-squared splittings, and that can only be achieved with four species of neutrino. This significantly complicates the problem of model-building. In particular, it is regarded as not very natural, in general, to have a fourth sterile neutrino that is extremely light compared to the weak scale. (There are some theoretical frameworks that can give light sterile particles, but they tend to give many of them, not just one.) For these reasons, we assume that the LSND results do not need to be explained by neutrino oscillations, and the classification we present includes only three-neutrino models.
(4) The fourth possible indication of neutrino mass is the existence of dark matter. If a significant amount of this dark matter is in neutrino mass, it would imply a neutrino mass of order several eV. In order then to achieve the small mass splittings needed to explain the solar and atmospheric anomalies, one would have to assume that $\nu _e$, $\nu _\mu$ and $\nu _\tau$ were nearly degenerate. We shall not focus on such models in our classification, which is primarily devoted to models with "hierarchical" neutrino masses. However, in most models with nearly degenerate masses, the neutrino mass matrix consists of a dominant piece proportional to the identity matrix and a much smaller hierarchical piece. Since the oscillations are caused by the small hierarchical piece, such models can be classified together with hierarchical models.
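For orientation, the $(\sin^2 2\theta ,\delta m^2)$ values quoted in items (1)-(3) enter the standard two-flavor vacuum oscillation probability. The sketch below uses the textbook formula and is not specific to any model discussed in this paper:

```python
import numpy as np

def p_osc(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Standard two-flavor vacuum oscillation probability,
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# atmospheric best-fit values quoted in item (2), for an upward-going
# neutrino (L ~ 10^4 km, E ~ 1 GeV):
print(p_osc(1.0, 3e-3, 1.0e4, 1.0))   # probability for this one L/E
```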
In sum, the models we shall classify are those which assume (a) three flavors of neutrino that oscillate ($`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$), (b) the atmospheric anomaly explained by $`\nu _\mu `$-$`\nu _\tau `$ oscillations with nearly maximal angle, and (c) the solar anomalies explained by $`\nu _e`$ oscillating primarily with $`\nu _\mu `$ with either small angle (SMA) or large angle (LMA, VO).
There are several major divisions among models. One is between models in which the neutrino masses arise through the see-saw mechanism,<sup>5</sup> and those in which the neutrino masses are generated directly at low energy. In see-saw models, there are both left- and right-handed neutrinos. Consequently, there are five fermion mass matrices to explain: the four Dirac mass matrices, $U$, $D$, $L$, and $N$ of the up quarks, down quarks, charged leptons, and neutrinos, respectively, and the Majorana mass matrix $M_R$ of the right-handed neutrinos. The four Dirac mass matrices are all roughly of the weak scale, while $M_R$ is generally much larger than the weak scale. After integrating out the superheavy right-handed neutrinos, the mass matrix of the left-handed neutrinos is given by $M_\nu =N^TM_R^{-1}N$. In conventional see-saw models there are three right-handed neutrinos, one for each of the three families of quarks and leptons. And, typically, in such conventional see-saw models there is a close relationship between the $3\times 3$ Dirac mass matrix $N$ of the neutrinos and the other $3\times 3$ Dirac mass matrices $L$, $U$, and $D$. Usually the four Dirac matrices are related to each other by grand unification and/or flavor symmetries. That means that in conventional see-saw models neutrino masses and mixings are just one aspect of the larger problem of quark and lepton masses, and are likely to shed great light on that problem, and perhaps even be the key to solving it. However, in most see-saw models the Majorana matrix $M_R$ is either not related or is tenuously related to the Dirac mass matrices of the quarks and leptons. The freedom in $M_R$ is the major obstacle to making precise predictions of neutrino masses and mixings in most see-saw schemes.
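As a toy illustration of the see-saw suppression (all numbers are invented for the example; this is a sketch, not a fit to data):

```python
import numpy as np

# Toy see-saw: M_nu = N^T M_R^{-1} N with a weak-scale Dirac matrix N
# and a heavy Majorana matrix M_R (all numbers invented for
# illustration; units of GeV).
N = np.diag([1e-3, 0.6, 174.0])        # hierarchical Dirac masses
M_R = 1e14 * np.eye(3)                 # heavy right-handed scale

M_nu = N.T @ np.linalg.inv(M_R) @ N
print(np.diag(M_nu) * 1e9)             # light masses in eV
```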
There are also what we shall refer to as unconventional see-saw models in which the fermions that play the role of the heavy right-handed partners of the neutrinos are not part of the ordinary three-family structure but are some other neutral fields. There need not be three of them, and the Dirac mass matrix of the neutrinos therefore need not be $`3\times 3`$ nor need it have any particular connection to the other Dirac mass matrices $`L`$, $`U`$, and $`D`$. Such unconventional see-saw models we classify together with non-see-saw models.
In non-see-saw schemes, there are no right-handed neutrinos. Consequently, there are only four mass matrices to consider, the Dirac mass matrices of the quarks and charged leptons, $`U`$, $`D`$, and $`L`$, and the Majorana mass matrix of the light left-handed neutrinos $`M_\nu `$. Typically in such schemes $`M_\nu `$ has nothing directly to do with the matrices $`U`$, $`D`$, and $`L`$, but is generated at low-energy by completely different physics.
The three most popular possibilities in recent models for generating $`M_\nu `$ at low energy in a non-see-saw way are (a) triplet Higgs, (b) variants of the Zee model,<sup>6</sup> and (c) R-parity violating terms in low-energy supersymmetry. (a) In triplet-Higgs models, $`M_\nu `$ arises from a renormalizable term of the form $`\lambda _{ij}\nu _i\nu _jH_T^0`$, where $`H_T`$ is a Higgs field in the $`(1,3,+1)`$ representation of $`SU(3)\times SU(2)\times U(1)`$. (b) In the Zee model, the Standard Model is supplemented with a scalar, $`h`$, in the $`(1,1,+1)`$ representation and having weak-scale mass. This field can couple to the lepton doublets $`L_i`$ as $`L_iL_jh`$ and to the Higgs doublets $`\varphi _a`$ (if there is more than one) as $`\varphi _a\varphi _bh`$. Clearly it is not possible to assign a lepton number to $`h`$ in such a way as to conserve it in both these terms. The resulting lepton-number violation allows one-loop diagrams that generate a Majorana mass for the left-handed neutrinos. (c) In supersymmetry the presence of such R-parity-violating terms in the superpotential as $`L_iL_jE_k^c`$ and $`Q_iD_j^cL_k`$, causes lepton-number violation, and allows one-loop diagrams that give neutrino masses. Neutrino mass can also arise at tree level from R-parity-violating terms of the form $`H_uL_i`$, which mix neutrino and Higgs superfields and lead to sneutrino vacuum expectation values.
It is clear that in all of these schemes the couplings that give rise to neutrino masses have little to do with the physics that gives mass to the other quarks and leptons. While this allows more freedom to the neutrino masses, it would from one point of view be very disappointing, as it would mean that the observation of neutrino oscillations is almost irrelevant to the great question of the origin of quark and charged lepton masses.
It should also be mentioned that some models derive the neutrino mass matrix $`M_\nu `$ directly from non-renormalizable terms of the form $`\nu _i\nu _jH_uH_u/M`$ without specifying where these terms come from. While such terms do arise in the conventional see-saw mechanism they can also arise in other ways. Models in which these operators do not arise from a see-saw or where their origin is left unspecified we classify as non-see-saw models.
Another major division among models has to do with the kinds of symmetries that constrain the forms of mass matrices and that, in some models, relate different mass matrices to each other. There are two main approaches: (a) grand unification, and (b) flavor symmetry. Many models use both.
(a) The simplest grand unified group is $SU(5)$. In minimal $SU(5)$ there is one relation among the Dirac mass matrices, namely $D=L^T$, coming from the fact that the left-handed charged leptons are unified with the right-handed down quarks in a $\overline{\mathrm{𝟓}}$, while the right-handed charged leptons and left-handed down quarks are unified in a $\mathrm{𝟏𝟎}$. In $SU(5)$ there do not have to be right-handed neutrinos, though they may be introduced. In $SO(10)$, which in several ways is a very attractive group for unification, the minimal model gives the relations $N=U$ and $D=L$. In realistic models these relations are modified in various ways, for example by the appearance of Clebsch coefficients in certain entries of some of the mass matrices. It is clear that unified symmetries are so powerful that very predictive models are possible. Most of the published models which give sharp predictions for masses and mixings are unified models.
(b) Flavor symmetries can be either abelian or non-abelian. Non-abelian symmetries are useful for obtaining the equality of certain elements of the mass matrix, as in models where the neutrino masses are nearly degenerate, and in the so-called "flavor democracy" schemes, which will be discussed later. Abelian symmetries are useful for explaining hierarchical mass matrices through the so-called Froggatt-Nielsen mechanism.<sup>7</sup> The idea is simply that different elements of the mass matrices arise at different orders in flavor symmetry breaking. In particular, different fermion multiplets can differ in charge under a $U(1)$ flavor symmetry that is spontaneously broken by some "flavon" expectation value (or values), $f_i$. Thus, different elements of the fermion mass matrices would be suppressed by different powers of $f_i/M\equiv \epsilon_i\ll 1$, where $M$ is the scale of flavor physics. This kind of scheme can explain small mass ratios and mixings in the sense of predicting them to arise at certain orders in the small quantities $\epsilon_i$. A drawback of such models compared to many grand unified models is that actual numerical predictions, as opposed to order of magnitude estimates, are not possible. On the other hand, models based on flavor symmetry involve less of a theoretical superstructure built on top of the Standard Model than do unified models, and could therefore be considered more economical in a certain sense. Unified models put more in but get more out than do abelian-flavor-symmetry models.
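A small sketch of how such Froggatt-Nielsen textures arise; the charges are hypothetical and the $O(1)$ prefactors multiplying each entry are suppressed, so this is an order-of-magnitude statement only:

```python
import numpy as np

# Froggatt-Nielsen textures: entry (i, j) of a mass matrix is
# suppressed by eps**(q[i] + q[j]) for U(1) charges q (order-of-
# magnitude estimate only; O(1) prefactors are left out).
eps = 0.2                       # flavon vev over flavor scale, f/M
q = np.array([3, 2, 0])         # hypothetical family charges

M = eps ** (q[:, None] + q[None, :])
print(M)        # hierarchical texture; M[2, 2] = 1 is the largest entry
```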
The most significant new fact about neutrino mixing is the largeness of the mixing between $\nu _\mu$ and $\nu _\tau$. This comes as somewhat of a surprise from the point of view of both the grand unification and flavor symmetry approaches. Since grand unification relates leptons to quarks, one might expect lepton mixing angles to be small like those of the quarks. In particular, the mixing between the second and third family of quarks is given by $V_{cb}$, which is known to be $\approx 0.04$. That is to be compared to the nearly maximal mixing of the second and third families of leptons: $U_{\mu 3}\approx 1/\sqrt{2}\approx 0.7$. It is true that even in the early 1980's some grand unified models predicted large neutrino mixing angles. (Especially noteworthy is the remarkably prophetic 1982 paper of Harvey, Ramond, and Reiss,<sup>8</sup> which explicitly predicted and emphasized that there should be large $\nu _\mu$-$\nu _\tau$ mixing. However, in those days the top mass was expected to be light, and that paper assumed it to be 25 GeV. That gave $V_{cb}$ of about $0.22$. The corresponding lepton mixing was further boosted by a Clebsch of 3. With the actual value of $m_t$ that we now know, the model of Ref. 8 would predict $U_{\mu 3}$ to be only 0.12.) What makes the largeness of $U_{\mu 3}$ a puzzle in the present situation is the fact that we now know that both $V_{cb}$ and $m_c/m_t$ are exceedingly small.
The same puzzle exists in the context of flavor symmetry. The fact that the quark mixing angles are small suggests that there is a family symmetry that is only weakly broken, while the large mixings of some of the neutrinos would suggest that family symmetries are badly broken.
The first point of interest, therefore, in looking at any model of neutrino mixing is how it explains the large mixing of $`\nu _\mu `$ and $`\nu _\tau `$. This will be the feature that we will use to organize the classification of models.
## 2 Classification of three-neutrino models
Virtually all three-neutrino models published in the last few years fit somewhere in the simple classification now to be described. In fact, almost all of them are cited below. The main divisions of this classification are based on how the large $\nu _\mu$-$\nu _\tau$ mixing arises. This mixing is described by the element $U_{\mu 3}=\sin\theta _{23}$ of the so-called MNS matrix<sup>9</sup> (analogous to the CKM matrix for the quarks).
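For concreteness, with the CP-violating phase set to zero the MNS matrix can be written as a product of three rotations; in the sketch below (one common convention choice, made here for illustration) $U_{\mu 3}$ and $U_{e3}$ appear as the (2,3) and (1,3) elements:

```python
import numpy as np

def mns(theta12, theta23, theta13):
    """Product of rotations U = R23 @ R13 @ R12 (CP phase set to
    zero); U[0, 2] is the element U_e3 discussed in section 3."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    R13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    return R23 @ R13 @ R12

U = mns(np.pi / 6, np.pi / 4, 0.1)
print(U[1, 2], U[0, 2])   # U_mu3 = sin(t23)*cos(t13), U_e3 = sin(t13)
```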
The mixing angles of the neutrinos are the mismatch between the eigenstates of the neutrinos and those of the charged leptons, or in other words between the mass matrices $`L`$ and $`M_\nu `$. Thus, there are two obvious ways of obtaining large $`\theta _{23}`$: either $`M_\nu `$ has large off-diagonal elements while $`L`$ is nearly diagonal, or $`L`$ has large off-diagonal elements and $`M_\nu `$ is nearly diagonal. Of course this distinction only makes sense in some preferred basis. But in almost every model there is some preferred basis given by the underlying symmetries of that model. This distinction gives the first major division in the classification, between models of what we shall call class I and class II. (It is also possible that the large mixing is due almost equally to large off-diagonal elements in $`L`$ and $`M_\nu `$, but this possibility seems to be realized in very few published models. We will put them into class II.)
If the large $\theta _{23}$ is due to $M_\nu$ (class I), then it becomes important whether $M_\nu$ arises from a non-see-saw mechanism or the see-saw mechanism. We therefore distinguish these cases as class I(1) and class I(2) respectively. In the see-saw models, $M_\nu$ is given by $N^TM_R^{-1}N$, so a further subdivision is possible: models in which the large mixing comes from large off-diagonal elements in $M_R$ we call I(2A); models in which the large mixing comes from large off-diagonal elements in $N$ we call I(2B); and models in which neither $M_R$ nor $N$ has large off-diagonal elements but $M_\nu =N^TM_R^{-1}N$ nevertheless does we call I(2C).
The other main class of models, where $\theta _{23}$ is due to large off-diagonal elements in $L$, the mass matrix of the charged leptons, we call class II. The question in these models is why, given that $L$ has large off-diagonal elements, there are not also large off-diagonal elements in the Dirac mass matrices of the other charged fermions, especially $D$ (which is typically closely related to $L$), causing large CKM mixing of the quarks. In the literature there seem to be two ways of answering this question. One way involves the CKM angles being small due to a cancellation between large angles that are nearly equal in the up and down quark sectors. We call this class II(1). The main examples of this idea are the so-called "flavor democracy models". The other idea is that the matrices $L$ and $D^T$ (related by unified or flavor symmetry) are "lopsided" in such a way that the large off-diagonal elements only affect the mixing of fermions of one handedness: left-handed for the leptons, making $U_{\mu 3}$ large, and right-handed for the quarks, leaving $V_{cb}$ small. We call this approach class II(2).
Schematically, one then has
$$\begin{array}{ll}\mathrm{I}&\text{Large mixing from }M_\nu \\ &\text{(1) Non-see-saw}\\ &\text{(2) See-saw}\\ &\quad \text{A. Large mixing from }M_R\\ &\quad \text{B. Large mixing from }N\\ &\quad \text{C. Large mixing from }N^TM_R^{-1}N\\ \mathrm{II}&\text{Large mixing from }L\\ &\text{(1) CKM small by cancellation}\\ &\text{(2) lopsided }L.\end{array}$$
(1)
Now let us examine the different categories in more detail, giving examples from the literature.
I(1) Large mixing from $`M_\nu `$, non-see-saw.
This kind of model gives a natural explanation of the discrepancy between the largeness of $`U_{\mu 3}=\mathrm{sin}\theta _{23}`$ and the smallness of $`V_{cb}`$. $`V_{cb}`$ comes from Dirac mass matrices, which are all presumably nearly diagonal like $`L`$, whereas $`U_{\mu 3}`$ comes from the matrix $`M_\nu `$; and since in non-see-saw models $`M_\nu `$ comes from completely different physics than do the Dirac mass matrices, it is not at all surprising that it has a very different form from the others, containing some large off-diagonal elements. While this basic idea is very simple and appealing, these models have the drawback that the form of $`M_\nu `$, since it comes from new physics unrelated to the origin of the other mass matrices, is highly unconstrained. Thus, there are few definite predictions, in general, for masses and mixings in such schemes. However, in some schemes constraints can be put on the new physics responsible for $`M_\nu `$.
As we saw, there are a variety of attractive ideas for generating a non-see-saw $`M_\nu `$ at low energy, and there are published models of neutrino mixing corresponding to all these ideas.<sup>10-21</sup> $`M_\nu `$ comes from triplet Higgs in Refs. 10-12; from the Zee mechanism in Refs. 13-15; and from R-parity and lepton-number-violating terms in supersymmetry in Refs. 16 and 17.
In Ref. 18 a “democratic form” of $`M_\nu `$ is enforced by a family symmetry. (The democratic form is one in which all the elements of the matrix are equal or very nearly equal. In most schemes of “flavor democracy”, as we shall see later, it is the charged lepton mass matrix $`L`$ that is assumed to have a democratic form and $`M_\nu `$ is assumed approximately diagonal, giving models of class II(1). But in Ref. 18 the opposite is assumed.) Several other models in class I(1) exist in the literature.<sup>19,20</sup>
There is a basic question that has to be answered by any model of class I(1), namely why the mass splitting seen in solar neutrino oscillations ($`\delta m_{12}^2`$) is much smaller than that seen in atmospheric oscillations ($`\delta m_{23}^2`$). If all the elements of $`M_\nu `$ were of the same order, then indeed large mixing angles would be typical, as desired to explain the atmospheric neutrino oscillations, but the neutrino mass ratios would then also be typically of order unity, and one would expect $`\delta m_{12}^2\sim \delta m_{23}^2`$. Conversely, if there is a small parameter in $`M_\nu `$ that accounts for the ratios of mass splittings, then the question arises why the mixing angles are not also controlled by that small parameter.
A satisfactory answer to these questions requires that $`M_\nu `$ have a special form. Three satisfactory forms are possible, as has been pointed out in several analyses.<sup>21</sup> We shall consider them in turn.
(a) In the literature one finds that the majority of models of class I(1) (and, as we shall see later, many models of other classes too) assume the following form for $`M_\nu `$:
$$M_\nu =\left(\begin{array}{ccc}m_{11}& m_{12}& m_{13}\\ m_{12}& s^2M+O(\delta )M& scM+O(\delta )M\\ m_{13}& scM+O(\delta )M& c^2M+O(\delta )M\end{array}\right),$$
(2)
where $`s\equiv \mathrm{sin}\theta `$, $`c\equiv \mathrm{cos}\theta `$, $`\theta \approx \pi /4`$, $`\delta \ll 1`$, and $`m_{ij}\ll M`$. By a rotation in the 2-3 plane by an angle close to $`\theta `$, the 2-3 block will be diagonalized and the matrix will take the form
$$M_\nu ^{\prime }\simeq \left(\begin{array}{ccc}m_{11}& cm_{12}-sm_{13}& cm_{13}+sm_{12}\\ cm_{12}-sm_{13}& O(\delta )M& 0\\ cm_{13}+sm_{12}& 0& M\end{array}\right).$$
(3)
It is clear that for $`m_{ij}\lesssim \delta M`$ there is a hierarchy of mass eigenvalues, and that $`\delta m_{23}^2\approx M^2`$ and $`\delta m_{12}^2=O(\delta ^2)M^2`$. On the other hand the atmospheric neutrino angle $`\theta _{23}`$, which is approximately given by $`\theta `$, is of order one. The value of the solar angle depends on the size of $`m_{ij}`$. In particular $`\theta _{12}\sim (cm_{12}-sm_{13})/(\delta M)`$. Consequently, either small angle or large angle solutions of the solar neutrino problem can be naturally obtained.
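These order-of-magnitude statements are easy to confirm numerically; the following sketch diagonalizes an instance of Eq. (2) with illustrative values of $`\delta `$ and $`m_{ij}`$:

```python
# Numerical check of the texture in Eq. (2); all values illustrative.
import numpy as np

theta, delta, M = np.pi/4, 0.05, 1.0
s, c = np.sin(theta), np.cos(theta)
m = 0.01 * M                               # small entries m_ij <~ delta*M
Mnu = np.array([[m, m,               m    ],
                [m, s*s*M + delta*M, s*c*M],
                [m, s*c*M,           c*c*M]])
masses, U = np.linalg.eigh(Mnu)
m1, m2, m3 = np.sort(np.abs(masses))
print("dm23^2 ~ M^2        :", m3**2 - m2**2)
print("dm12^2 = O(d^2 M^2) :", m2**2 - m1**2)
i3 = np.argmax(np.abs(masses))             # heaviest state
print("|U_mu3| ~ sin(theta):", abs(U[1, i3]))
```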
One sees that in order to get the hierarchy among neutrino mass splittings and at the same time a large atmospheric angle one has assumed a form for $`M_\nu `$ in Eq. (2) that has a special relationship among the 22, 23, 32, and 33 elements. If such a relationship existed simply accidentally, then the model would be “fine-tuned” to some extent. Specifically, the 2-3 block of $`M_\nu `$ would have a determinant that was of $`O(\delta )`$ times its “natural” value.
A number of the models in the literature that are of class I(1) are indeed fine-tuned in this way. However, two ways of achieving the special form in Eq. (2) in a technically natural way in models of class I(1) have been proposed in the literature: (i) factorization, and (ii) permutation symmetry.
(i) The idea of factorization is that the neutrino mass matrix arises from one-loop contributions that are dominated by a single diagram, giving $`(M_\nu )_{ij}\sim \lambda _i\lambda _jM`$, where $`\lambda _i`$ is the coupling of $`\nu _i`$ to the particles in the loop. If $`\lambda _2\approx \lambda _3`$ then the form in Eq. (2) results. A good example of this kind of model is Ref. 16, where $`\lambda _i`$ is an R-parity-violating and lepton-number-violating coupling of $`\nu _i`$ to a quark-squark (or lepton-slepton) pair in supersymmetry.
A factorized form of $`M_\nu `$ can also arise at tree-level by a non-standard see-saw mechanism in which $`\lambda _i`$ is a Dirac coupling of $`\nu _i`$ to a single heavy Majorana fermion that is integrated out. This is the basic idea in the papers in Ref. 20. (A special case of this is supersymmetric models with R-parity-violating terms that mix neutrinos with other neutralinos. In these the neutralinos play the role of the heavy fermions in the see-saw, and factorized forms of $`M_\nu `$ can result.)
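The structure of a factorized mass matrix is transparent: it has rank one, so a single eigenvalue dominates and the corresponding eigenvector points along the couplings $`\lambda _i`$. A small sketch (illustrative couplings only):

```python
# Factorized texture (M_nu)_ij ~ lambda_i * lambda_j * M: rank one, so one
# dominant mass whose eigenvector is aligned with the couplings lambda_i.
import numpy as np

lam = np.array([0.01, 0.70, 0.72])      # lambda_1 << lambda_2 ~ lambda_3
M = 1.0
Mnu = M * np.outer(lam, lam)
masses, U = np.linalg.eigh(Mnu)
print("masses:", masses)                # two ~0, one ~ (lam.lam)*M
print("heavy state:", U[:, -1])         # ~ (0, 1, 1)/sqrt(2): maximal 2-3 mixing
```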
(ii) The other idea for achieving the form in Eq. (2) is permutation symmetry. The basic idea is to use non-abelian symmetry to relate different elements of $`M_\nu `$. Generally the relationship will be one of equality, thus giving maximal mixing angles. A good example is the model of Ref. 10, in which an $`S_2\times S_2`$ permutation symmetry among four left-handed neutrinos is used to obtain the form
$$M_\nu =\left(\begin{array}{cccc}A& B& C& D\\ B& A& D& C\\ C& D& A& B\\ D& C& B& A\end{array}\right).$$
(4)
Then by assuming that the linear combination $`(\nu _1-\nu _2)/\sqrt{2}`$ acquires a superlarge Majorana mass, the residual three light species of neutrino end up with a mass matrix
$$M_\nu ^{\prime }=\left(\begin{array}{ccc}A+B& F& F\\ F& A& B\\ F& B& A\end{array}\right)=(A+B)I+\left(\begin{array}{ccc}0& F& F\\ F& -B& B\\ F& B& -B\end{array}\right),$$
(5)
which in effect has the form in Eq. (2), since the part proportional to the identity does not contribute to oscillations. From this one sees that it is possible to get the form in Eq. (2) in a technically natural way using flavor symmetries. Again, either small or large solar angle can arise depending on the magnitude of $`F/B`$.
(b) Another form for $`M_\nu `$ that is satisfactory is
$$M_\nu =\left(\begin{array}{ccc}m_{11}& cM& sM\\ cM& m_{22}& m_{23}\\ sM& m_{23}& m_{33}\end{array}\right),$$
(6)
where $`m_{ij}\ll M`$, and as before $`s\equiv \mathrm{sin}\theta `$ and $`c\equiv \mathrm{cos}\theta `$, with $`\theta \approx \pi /4`$. By a rotation in the 2-3 plane by angle $`\theta `$ one brings this to the form
$$M_\nu ^{\prime }=\left(\begin{array}{ccc}m_{11}& M& 0\\ M& m_{22}^{\prime }& m_{23}^{\prime }\\ 0& m_{23}^{\prime }& m_{33}^{\prime }\end{array}\right).$$
(7)
This pseudo-Dirac form in the 1-2 block shows that $`\nu _1`$ and $`\nu _2`$ will be maximally mixed with nearly degenerate masses approximately equal to $`M`$, while the third neutrino will have smaller mass. Thus $`\delta m_{23}^2\approx M^2`$ and $`\delta m_{12}^2\sim m_{ij}M`$. Such a form always gives bimaximal mixing, i.e. large mixing angle for both atmospheric neutrinos and solar neutrinos, in contrast to Eq. (2) which can give either large or small angle solutions for the solar neutrino problem.
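The pseudo-Dirac pattern can also be checked directly; in the sketch below (illustrative numbers) the $`m_{ij}`$ are represented by a single small parameter:

```python
# Pseudo-Dirac texture of Eq. (6); eps stands in for the entries m_ij << M.
import numpy as np

theta, M, eps = np.pi/4, 1.0, 1e-3
s, c = np.sin(theta), np.cos(theta)
Mnu = np.array([[eps,  c*M,  s*M],
                [c*M,  eps,  eps],
                [s*M,  eps,  eps]])
masses, U = np.linalg.eigh(Mnu)
order = np.argsort(np.abs(masses))            # lightest state first
m_light, m_a, m_b = np.abs(masses[order])
print("pair masses ~ M       :", m_a, m_b)
print("pair dm^2 ~ eps*M     :", abs(m_b**2 - m_a**2))
print("|U_e| of pair states  :", abs(U[0, order[1]]), abs(U[0, order[2]]))
```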
The form in Eq. (6) can easily be achieved using various family symmetries.<sup>11,12,15</sup> A particularly interesting possibility<sup>12,15</sup> is that the symmetry in question is $`L_e-L_\mu -L_\tau `$, which if exact would allow only the large elements of order $`M`$ in Eq. (6).
An interesting and instructive model in which $`M_\nu `$ is of the form given in Eq. (6) is found in Ref. 13. This model is based on the Zee mechanism, which gives a neutrino mass matrix $`M_\nu `$ that is symmetric but has vanishing diagonal elements. In particular it can give a matrix of the form
$$M_\nu \approx \left(\begin{array}{ccc}0& m/\sqrt{2}& m/\sqrt{2}\\ m/\sqrt{2}& 0& \mathrm{\Delta }\\ m/\sqrt{2}& \mathrm{\Delta }& 0\end{array}\right),$$
(8)
where $`\mathrm{\Delta }\ll m`$. There is some mild fine-tuning in this model in the sense that in order for the 12 and 13 elements of $`M_\nu `$ to be nearly equal in magnitude (as must be so to have nearly maximal atmospheric angle) a relation among the couplings and masses of the Zee model must be satisfied that has no basis in symmetry.
(c) A third possible form for $`M_\nu `$ is
$$M_\nu =\left(\begin{array}{ccc}M^{\prime }& m_{12}& m_{13}\\ m_{12}& m_{22}& M\\ m_{13}& M& m_{33}\end{array}\right),$$
(9)
where $`m_{ij}\ll M`$. In such a scheme, $`\nu _\mu `$ and $`\nu _\tau `$ are nearly maximally mixed and nearly degenerate, with $`\delta m_{23}^2\sim m_{ij}M\ll M^2`$. Therefore, in order for the splitting $`\delta m_{12}^2`$ to be even smaller, it must be that $`M^{\prime }\simeq M`$ to great accuracy. If this is not to be a fine-tuning of parameters, then it must be the consequence of some non-abelian flavor symmetry.
I(2A) See-saw $`M_\nu `$, large mixing from $`M_R`$
In models of class I(2), as in class I(1), the large atmospheric neutrino mixing angle comes from $`M_\nu `$, which however is now assumed to arise from the conventional see-saw mechanism. $`M_\nu `$ therefore has the form $`N^TM_R^{-1}N`$, where $`N`$ is a $`3\times 3`$ matrix typically related by symmetry to $`L`$, $`U`$, and $`D`$. In class I(2A), the large off-diagonal elements in $`M_\nu `$ are assumed to come from $`M_R`$, while the Dirac neutrino matrix $`N`$ is assumed to be nearly diagonal and hierarchical like the other Dirac matrices $`L`$, $`U`$, and $`D`$. Many examples of models of class I(2A) exist in the literature.<sup>22-30</sup> As with the models of class I(1), these models have the virtue of explaining in a natural way the difference between the lepton angle $`U_{\mu 3}`$ and the quark angle $`V_{cb}`$. The quark mixings all come from Dirac matrices, while the lepton mixings involve the Majorana matrix $`M_R`$, which it is quite reasonable to suppose might have a very different character, with large off-diagonal elements.
However, there is a general problem with models of this type, which not all the examples in the literature convincingly overcome. The problem is that if $`N`$ has a hierarchical and nearly diagonal form, it tends to communicate this property to $`M_\nu `$. For example, suppose we take $`N=\mathrm{diag}(ϵ^{\prime },ϵ,1)M`$, with $`1\gg ϵ\gg ϵ^{\prime }`$. And suppose that the $`ij^{th}`$ element of $`M_R^{-1}`$ is called $`a_{ij}`$. Then the matrix $`M_\nu `$ will have the form
$$M_\nu =\left(\begin{array}{ccc}ϵ^{\prime 2}a_{11}& ϵ^{\prime }ϵa_{12}& ϵ^{\prime }a_{13}\\ ϵ^{\prime }ϵa_{12}& ϵ^2a_{22}& ϵa_{23}\\ ϵ^{\prime }a_{13}& ϵa_{23}& a_{33}\end{array}\right)M^2.$$
(10)
If all the non-vanishing elements $`a_{ij}`$ are of the same order of magnitude, then obviously $`M_\nu `$ is approximately diagonal and hierarchical. The contribution to the leptonic angles coming from $`M_\nu `$ would therefore typically be proportional to the small parameters $`ϵ`$ and $`ϵ^{\prime }`$. One way that a $`\theta _{23}`$ of $`O(1)`$ could arise is that the small parameter coming from $`N`$ gets cancelled by a correspondingly large parameter from $`M_R^{-1}`$.<sup>22</sup> The trouble is that to have such a relationship between the magnitudes of parameters in $`N`$ and $`M_R`$ is usually unnatural, since these matrices have very different origins. This problem has been pointed out by various authors.<sup>23</sup> We shall call it the Dirac-Majorana conspiracy problem.
This problem is avoided in models in which the hierarchies in $`N`$ and $`M_R`$ are controlled by the same family symmetries and the same small parameters. Examples of such correlated hierarchies can be found in the papers of Ref. 24.
Another way of getting around the Dirac-Majorana conspiracy problem is to assume a special form for $`M_R`$. An apparently simple solution is to take the 2-3 block of $`M_R`$ to be skew diagonal. For example, suppose
$$M_R\approx \left(\begin{array}{ccc}M^{\prime }& 0& 0\\ 0& 0& M\\ 0& M& 0\end{array}\right),N\approx \left(\begin{array}{ccc}ϵ^{\prime }& & \\ & ϵ& \\ & & 1\end{array}\right),$$
(11)
where $`ϵ^{\prime }\ll ϵ\ll 1`$. Then the 2-3 block of $`M_\nu =N^TM_R^{-1}N`$ is also approximately skew diagonal, and one has that $`\nu _\mu `$ and $`\nu _\tau `$ are nearly degenerate and maximally mixed, as needed to explain the atmospheric neutrino anomaly. A number of models in the literature exploit this idea.<sup>25</sup>
Unfortunately, as most of the papers in Ref. 25 noted, this idea has a problem with solar neutrinos. The problem is that it is unnatural in such a scheme for the splitting $`\delta m_{12}^2`$ to be smaller than $`\delta m_{23}^2`$. One has $`m_2\approx m_3\sim ϵM`$ and $`m_1\sim ϵ^{\prime 2}M^{\prime }`$. Therefore, unless $`M^{\prime }`$ is tuned with great accuracy this scheme cannot give a satisfactory solution to the solar neutrino problem.
It is clear that if one seeks to avoid the Dirac-Majorana conspiracy problem and also to explain both solar and atmospheric neutrino oscillations, an even cleverer choice of the forms of $`N`$ and $`M_R`$ must be found. Several papers have found such forms.<sup>26-29</sup> In the model of Ref. 27, for instance, the Dirac and Majorana matrices of the neutrinos have the forms
$$N=\left(\begin{array}{ccc}x^2y& 0& 0\\ 0& x& x\\ 0& O(x^2)& 1\end{array}\right)m_D,M_R=\left(\begin{array}{ccc}0& 0& A\\ 0& 1& 0\\ A& 0& 0\end{array}\right)m_R,$$
(12)
where $`x`$ and $`y`$ are small parameters. If one computes $`M_\nu =N^TM_R^{-1}N`$ one finds that
$$M_\nu =\left(\begin{array}{ccc}0& O(x^4y/A)& x^2y/A\\ O(x^4y/A)& x^2& x^2\\ x^2y/A& x^2& x^2\end{array}\right)m_D^2/m_R.$$
(13)
Observe that this gives a maximal mixing of the second and third families, without having to assume any special relationship between the small parameters in $`N`$ (namely $`x`$ and $`y`$) and the parameter in $`M_R`$ (namely $`A`$). This example is generalized in the papers of Ref. 28.
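It is instructive to verify the step from Eq. (12) to Eq. (13) numerically; in the sketch below the values of $`x`$, $`y`$, and $`A`$ are arbitrary illustrative choices:

```python
# Check that the textures of Eq. (12) reproduce the form of Eq. (13).
import numpy as np

x, y, A, mD, mR = 0.05, 0.3, 0.7, 1.0, 1.0
N  = mD * np.array([[x**2 * y, 0.0,  0.0],
                    [0.0,      x,    x  ],
                    [0.0,      x**2, 1.0]])   # the O(x^2) entry set to x^2
MR = mR * np.array([[0.0, 0.0, A  ],
                    [0.0, 1.0, 0.0],
                    [A,   0.0, 0.0]])
Mnu = N.T @ np.linalg.inv(MR) @ N
print(np.round(Mnu / (mD**2 / mR), 8))
# 2-3 block ~ x^2 * ones(2,2) -> maximal 2-3 mixing; (1,3) = x^2*y/A;
# (1,2) = O(x^4*y/A); (1,1) = 0, exactly as displayed in Eq. (13).
```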
Note that the matrix in Eq. (13) is of the general form given in Eq. (2), but here it arises through the see-saw mechanism. An interesting point about the form of $`M_\nu `$ in Eq. (13) is that it gives bimaximal mixing. This is easily seen by doing a rotation of $`\pi /4`$ in the 2-3 plane, bringing the matrix to the form
$$M_\nu ^{\prime }=\left(\begin{array}{ccc}0& z& z^{\prime }\\ z& 0& 0\\ z^{\prime }& 0& 2x^2\end{array}\right).$$
(14)
In the 1-2 block this matrix has a Dirac form, giving nearly maximal mixing of $`\nu _e`$.
Another interesting model that avoids the Dirac-Majorana conspiracy problem, but requires a mild fine-tuning to get the hierarchy among neutrino mass splittings, is given in Ref. 29. The Majorana and Dirac neutrino mass matrices in that model have the form
$$N=\left(\begin{array}{ccc}0& 0& x\\ 0& x& 0\\ x& 0& 1\end{array}\right)m,M_R^{-1}=\left(\begin{array}{ccc}aM^{-1}& bM^{-1}& 0\\ bM^{-1}& cM^{-1}& 0\\ 0& 0& M^{\prime -1}\end{array}\right),$$
(15)
where $`x\ll 1`$, $`M/M^{\prime }\ll x^2`$, and $`a,b,c\sim 1`$. This gives
$$M_\nu =\left(\begin{array}{ccc}ϵx^2& 0& ϵx\\ 0& c& b\\ ϵx& b& a+ϵ\end{array}\right)(m^2x^2/M).$$
(16)
Here $`ϵ\equiv \frac{M}{M^{\prime }}\frac{1}{x^2}\ll 1`$. The atmospheric neutrino angle will be of order unity if $`a`$, $`b`$, and $`c`$ are all of the same order, which requires no fine-tuning or Dirac-Majorana conspiracy. However, to make $`\delta m_{12}^2\ll \delta m_{23}^2`$ requires that the condition $`\sqrt{ac-b^2}\ll a,b,c`$, which does not arise from any symmetry, be satisfied.
Models of class I(2A) can be constructed that predict either small or large values of the solar neutrino angle $`\theta _{12}`$.
I(2B) See-saw $`M_\nu `$, large mixing from $`N`$
We now turn to see-saw models in which the large atmospheric neutrino angle comes from large off-diagonal elements in the Dirac neutrino mass matrix $`N`$ rather than in the Majorana matrix $`M_R`$.
At least at first glance, this seems to be a less natural approach. The point is that if the large $`\theta _{23}`$ is due to large off-diagonal elements in $`N`$, it might be expected that the other Dirac mass matrices, $`U`$, $`D`$, and $`L`$, would also have large off-diagonal elements, giving large CKM angles. The model in Ref. 31 only attempts to describe the lepton sector and so does not resolve this problem. In Ref. 32 it is assumed that $`N`$ has large off-diagonal elements and $`L`$ does not, but the difference in character of these matrices is not accounted for. In the interesting model of Ref. 33 the difference between $`N`$ and the other Dirac matrices is accounted for by a fine-tuning. In that model all of the quark and lepton mass matrices are given (in terms of relatively few parameters) by linear combinations of certain matrices that are hierarchical and nearly diagonal. In order that $`N`$ have off-diagonal elements that are comparable to its diagonal elements, an accidental cancellation must occur that suppresses the diagonal elements.
There are ways to construct models of class I(2B) in which the difference between $`N`$ and the other Dirac matrices is explained without fine-tuning.<sup>34</sup> However, experience seems to show that this approach is harder to make work than the others.
I(2C) See-saw $`M_\nu `$, large mixing from $`N^TM_R^{-1}N`$
In order for the see-saw mass matrix $`M_\nu =N^TM_R^{-1}N`$ to have large off-diagonal elements it is not necessary that either $`M_R`$ or $`N`$ have large off-diagonal elements, as emphasized in Ref. 35. Following Ref. 35, consider the matrices
$$N\approx \left(\begin{array}{ccc}ϵ^{\prime }& ϵ^{\prime }& ϵ^{\prime }\\ ϵ^{\prime }& ϵ& ϵ\\ ϵ^{\prime }& ϵ& 1\end{array}\right)m,M_R^{-1}=\left(\begin{array}{ccc}r_1& 0& 0\\ 0& r_2& 0\\ 0& 0& r_3\end{array}\right)M^{-1},$$
(17)
where it is assumed that $`r_2ϵ^2\gg r_3,r_1ϵ^{\prime 2}`$. Then to leading order in small quantities $`M_\nu `$ has the form
$$M_\nu \approx \left(\begin{array}{ccc}(ϵ^{\prime }/ϵ)^2& ϵ^{\prime }/ϵ& ϵ^{\prime }/ϵ\\ ϵ^{\prime }/ϵ& 1& 1\\ ϵ^{\prime }/ϵ& 1& 1\end{array}\right)r_2ϵ^2(m^2/M).$$
(18)
It is easy to understand what is happening. The fact that $`r_2`$ is much larger than $`r_1`$ and $`r_3`$ means that the right-handed neutrino of the second family is much lighter than the other two. Effectively, then, one right-handed neutrino dominates $`M_R^{-1}`$. As a consequence one obtains an approximately “factorized” form for $`M_\nu `$, just as one did in the unconventional see-saw models considered in the papers of Ref. 20, in which a single right-handed fermion also dominated. Those unconventional see-saw models could also be considered as examples of class I(2C).
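This single-right-handed-neutrino dominance is easily exhibited numerically; the sketch below uses illustrative values satisfying $`r_2ϵ^2\gg r_3,r_1ϵ^{\prime 2}`$:

```python
# Single right-handed-neutrino dominance (class I(2C)): neither N nor M_R
# has large off-diagonal elements, yet M_nu = N^T M_R^{-1} N does.
import numpy as np

eps, epsp, m, M = 0.1, 0.01, 1.0, 1.0
N = m * np.array([[epsp, epsp, epsp],
                  [epsp, eps,  eps ],
                  [epsp, eps,  1.0 ]])
r1, r2, r3 = 1.0, 1e4, 1.0               # r2*eps^2 = 100 >> r3, r1*epsp^2
MRinv = np.diag([r1, r2, r3]) / M
Mnu = N.T @ MRinv @ N
print(np.round(Mnu / (r2 * eps**2 * m**2 / M), 4))
# 2-3 block ~ ones(2,2); first row/column suppressed by eps'/eps, as in Eq. (18).
```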
II(1) Large mixing from $`L`$, CKM small by cancellation
We now turn to those models in which the large value of $`\theta _{23}`$ comes predominantly from the charged lepton mass matrix $`L`$, with $`M_\nu `$ being nearly diagonal. The issue that arises in such models is whether the other Dirac mass matrices, especially $`D`$ and $`U`$, also have large off-diagonal elements, and if so why this does not lead to large CKM angles for the quarks. Some published models do not deal with this question since they are only models of the lepton sector and do not attempt to describe the quarks at all.<sup>36</sup> However, while it may make sense in non-see-saw models to discuss $`M_\nu `$ apart from the other mass matrices, it would seem that under most reasonable assumptions the matrix $`L`$ should have some relationship to $`U`$ and $`D`$.
Why, then, are the CKM angles small? One possibility is that the CKM angles are small because of an almost exact cancellation between large angles needed to diagonalize $`U`$ and $`D`$. That, in turn, would imply that $`U`$ and $`D`$, even though highly non-diagonal, have nearly identical forms. This idea is realized in most so-called “flavor democracy” models.<sup>37</sup>
A “flavor-democratic” mass matrix is one in which all the elements are equal:
$$M_{FD}\equiv \left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right)m_{\mathrm{\ell }}.$$
(19)
A Dirac mass matrix can have such a form as the result of separate $`S_3`$ permutation symmetries acting on the left-handed and right-handed fermions. This form is of rank 1, and thus gives only one family a mass. Of course, in realistic models based on flavor democracy it is assumed that the mass matrices also get small corrections that come from the breaking of the permutation symmetries, which give rise to masses for the lighter families.
It is clear that the flavor democratic form is diagonalized by a unitary matrix with rotation angles that are large. In fact, the matrix is
$$U_{FD}=\left(\begin{array}{ccc}1/\sqrt{2}& 1/\sqrt{6}& 1/\sqrt{3}\\ -1/\sqrt{2}& 1/\sqrt{6}& 1/\sqrt{3}\\ 0& -2/\sqrt{6}& 1/\sqrt{3}\end{array}\right).$$
(20)
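One can check at a glance that this matrix does the job (a one-line numerical verification, using the sign conventions written above):

```python
# U_FD^T M_FD U_FD = diag(0, 0, 3) * m_ell for the democratic matrix.
import numpy as np

ml = 1.0
M_FD = ml * np.ones((3, 3))
U_FD = np.array([[ 1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
                 [-1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
                 [ 0.0,          -2/np.sqrt(6), 1/np.sqrt(3)]])
print(np.round(U_FD.T @ M_FD @ U_FD, 12))   # -> diag(0, 0, 3)
```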
The reason why the CKM angles are small in flavor democracy models is that both $`U`$ and $`D`$ are assumed to have approximately the democratic form. Thus, the large rotation angles nearly cancel between the up and down sectors. However, as first noted in Ref. 38, it is possible to have large neutrino mixing angles if $`M_\nu `$ is assumed to have not the democratic form but a nearly diagonal form. This difference in form is plausible, given that $`M_\nu `$ is a Majorana matrix rather than a Dirac matrix like the others. In this case, the angles required to diagonalize $`M_\nu `$ would be small, and the MNS matrix would come predominantly from diagonalizing $`L`$.
In Ref. 38, it is assumed that $`M_\nu `$ is diagonal and hierarchical in form, and the elements of $`M_\nu `$ are assumed to arise entirely from the breaking of the permutation symmetries of the model. The model of Ref. 38 therefore has the neutrino masses being hierarchical. However, most published versions of the flavor democracy idea<sup>39</sup> assume that $`M_\nu `$ is approximately proportional to the identity matrix. The form $`M_\nu \propto I`$ is invariant under an $`S_3`$ permutation of the left-handed neutrinos. (However, it should be noted that $`M_\nu \propto I`$ is not the most general form consistent with permutation symmetry, and so to make this form technically natural some further symmetries must be invoked.) Small deviations from the identity matrix would arise from terms that break the flavor symmetries of the model. In such versions, the three neutrino masses are nearly degenerate, but the splittings can be made hierarchical to accommodate the solar and atmospheric data.
An exact flavor-democratic form of $`L`$ would leave two charged leptons degenerate and therefore one of the neutrino mixing angles undefined. And if $`M_\nu `$ were exactly proportional to the identity matrix, all three neutrinos would be degenerate and all three neutrino mixing angles would be undefined. Exactly what angles are predicted for the neutrinos depends, therefore, on the form of the small contributions to the mass matrices that break the permutation symmetries. There are many possibilities. In some, the MNS matrix comes out to be very close to $`U_{FD}^{\dagger }`$. However, it is not surprising, given the degeneracies that the exact permutation-symmetric forms give, that the small permutation-symmetry-breaking contributions to the mass matrices can lead to additional large mixings, and to forms for the MNS matrix that depart significantly from $`U_{FD}^{\dagger }`$. It is typical in flavor democracy models for the angle $`\theta _{12}`$ to come out large, and in many cases it comes out to be close to $`\pi /4`$ as in the matrix $`U_{FD}^{\dagger }`$. However, it is possible for $`\theta _{12}`$ to be small. This can happen if the matrix $`M_\nu `$ is such that the neutrinos $`\nu _1`$ and $`\nu _2`$ form a pseudo-Dirac pair. Then the 1-2 angles from the diagonalization of both $`L`$ and $`M_\nu `$ will be close to $`\pi /4`$ and their difference can be small.
The number of possible models, based on different ways to break permutation symmetry, is large. There exists an extensive and growing literature in this area. There are also many models based not on the pure flavor-democratic form in Eq. (19), but on forms in which all the elements of the mass matrix are assumed to be approximately equal in magnitude, but allowed to differ in complex phase. This is sometimes called the “Universal Strength for Yukawa couplings” approach or USY.<sup>40</sup>
The idea of flavor democracy is an elegant one, especially in that it uses one basic idea to explain the largeness of the leptonic angles, the smallness of the quark angles, and the fact that one family is much heavier than the others. On the other hand, it is based on very special forms for the mass matrices which come from very specific symmetries. It is in this sense a narrower approach to the problem of fermion masses than some of the others we have discussed.
It would be interesting to know whether simple models of class II(1), in which the CKM angles are small by cancellations of large angles, can be constructed using ideas other than flavor democracy.
II(2) Large mixing from “lopsided” $`L`$
We now come to an idea for explaining the largeness of $`\theta _{23}`$ that has great flexibility, in the sense that it can be implemented in many different kinds of models: grand unified, models with abelian or non-abelian flavor symmetries, see-saw or non-see-saw neutrino masses, and so on. The basic idea of the “lopsided” $`L`$ approach is that the charged-lepton and down-quark mass matrices have the approximate forms
$$L\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& ϵ\\ 0& \sigma & 1\end{array}\right)m_D,D\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& \sigma \\ 0& ϵ& 1\end{array}\right)m_D.$$
(21)
The “$`\sim `$” sign is used because in realistic models these $`\sigma `$ and $`ϵ`$ entries could have additional factors of order unity, such as from Clebsch coefficients. The fact that $`L`$ is related closely in form to the transpose of $`D`$ is a very natural feature from the point of view of $`SU(5)`$ or related symmetries, and is a crucial ingredient in this approach. The assumption is that $`ϵ\ll 1`$, while $`\sigma \sim 1`$. In the case of the charged leptons $`ϵ`$ controls the mixing of the second and third families of right-handed fermions (which is not observable at low energies), while $`\sigma `$ controls the mixing of the second and third families of left-handed fermions, which contributes to $`\theta _{23}`$ and makes it large. For the quarks the reverse is the case because of the “$`SU(5)`$” feature: the small $`O(ϵ)`$ mixing is in the left-handed sector, accounting for the smallness of $`V_{cb}`$, while the large $`O(\sigma )`$ mixing is in the right-handed sector, where it cannot be observed and does no harm.
In this approach the three crucial elements are these: (a) Large mixing of neutrinos (in particular of $`\nu _\mu `$ and $`\nu _\tau `$) caused by large off-diagonal elements in the charged-lepton mass matrix $`L`$; (b) these off-diagonal elements appearing in a highly asymmetric or lopsided way; and (c) $`L`$ being similar to the transpose of $`D`$ by $`SU(5)`$ or a related symmetry.
What makes this approach so flexible is that the problem of obtaining a realistic pattern of neutrino masses is decoupled from the problem of getting large $`\theta _{23}`$. The large $`\theta _{23}`$ arises from $`L`$ while the neutrino mass spectrum arises from $`M_\nu `$. Thus one is freed from having very special textures for $`M_\nu `$ as was the case in class I models. This is also true of the flavor democracy schemes; however, there the necessity of near cancellation between up and down quark angles forced a very particular kind of mass matrix texture and flavor symmetry. The lopsided mass matrices, by contrast, can be achieved in many ways, as can be seen from Refs. 41-47.
The first paper that has all three elements that define this approach seems to be Ref. 41, which proposed a very specific idea for generating the fermion mass hierarchies. The ideas of that paper were further explored in Ref. 42.
The lopsided $`L`$ idea next is seen in three papers that appeared almost simultaneously.<sup>43-45</sup> It is interesting that the same basic mechanism of lopsided $`L`$ was arrived at independently by these three groups of authors from completely different starting points. In Ref. 43 the model is based on $`E_7/SU(5)\times U(1)`$, and the structure of the mass matrices is determined by the Froggatt-Nielsen mechanism. In Ref. 44 the model is based on $`SO(10)`$, and does not use the Froggatt-Nielsen approach. Rather, the constraints on the form of the mass matrices come from assuming a “minimal” set of Higgs for $`SO(10)`$-breaking and choosing the smallest and simplest set of Yukawa operators that can give realistic mass matrices for the quarks and charged leptons. Though both Refs. 43 and 44 assume a unified symmetry larger than $`SU(5)`$, in both it is the $`SU(5)`$ subgroup that plays the critical role in relating $`L`$ to $`D^T`$. The model of Ref. 45, like that of Ref. 43, uses the Froggatt-Nielsen idea, but is not based on a grand unified group. Rather, the fact that $`L`$ is related to $`D^T`$ follows ultimately from the requirement of anomaly cancellation for the various $`U(1)`$ flavor symmetries of the model. However, it is well known that anomaly cancellation typically enforces charge assignments that can be embedded in unified groups. Thus, even though the model does not contain an explicit $`SU(5)`$, it could be said to be “$`SU(5)`$-like”.
In Ref. 46 are listed numerous papers that have used the lopsided $`L`$ approach in the context of grand unified theories. A variety of symmetries — abelian, non-abelian continuous, and non-abelian discrete — are used in these models to constrain the forms of mass matrices. In Ref. 47 are papers that are not unified and do not discuss the quark mass matrices, so that the third element of the approach ($`L`$ being related to $`D^T`$ by a symmetry related to $`SU(5)`$) is not explicitly present.
As pointed out in Ref. 48, models based on lopsided $`L`$ can give either large-angle or small-angle solutions to the solar neutrino problem.
A predictive model with lopsided L
We shall now briefly describe a particular model of class II(2). A remarkable fact about this model is that it was not constructed to explain neutrino phenomenology; rather it emerged from the attempt to find a realistic model of the masses of the charged leptons and quarks in the context of $`SO(10)`$. In fact, it is one of the most predictive models of quark and lepton masses that exists in the literature. The idea of the model was to take the Higgs sector of $`SO(10)`$ to be as minimal as possible, and then to find what this implied for the mass matrices of the quarks and charged leptons. In fact, in the first paper proposing this model<sup>49</sup> no attention was paid to the neutrino mixings at all. Only subsequently was it noticed that the model actually predicts a large mixing of $`\nu _\mu `$ with $`\nu _\tau `$ and this led to a second paper, in which the implications for neutrino phenomenology were stressed.<sup>44</sup> The reason for the large mixing of $`\nu _\mu `$ and $`\nu _\tau `$ in this model is precisely the fact that the charged lepton mass matrix has a lopsided form.
The reason this lopsided form was built into the model of Refs. 44 and 49 was that it was necessary to account for certain well-known features of the mass spectrum of the quarks. In particular, the mass matrix entry that is denoted $`\sigma `$ in Eq. (21) above plays three crucial roles in this model that have nothing to do with neutrino mixing. (1) It is required to get the Georgi-Jarlskog<sup>50</sup> factor of 3 between $`m_\mu `$ and $`m_s`$. (2) It explains the value of $`V_{cb}`$. (3) It explains why $`m_c/m_t\ll m_s/m_b`$. Remarkably, it turns out not only to perform these three tasks, but also gives mixing of order 1 between $`\nu _\mu `$ and $`\nu _\tau `$. Not often are four birds killed with one stone.
In constructing the model, several considerations played a part. First, a “minimal” set of Higgs for $`SO(10)`$ was assumed. It has been shown<sup>51</sup> that the smallest set of Higgs that will allow a realistic breaking of $`SO(10)`$ down to $`SU(3)\times SU(2)\times U(1)`$, with natural doublet-triplet splitting,<sup>52</sup> consists of a single adjoint ($`\mathrm{𝟒𝟓}`$), two pairs of spinors ($`\mathrm{𝟏𝟔}+\overline{\mathrm{𝟏𝟔}}`$), a pair of vectors ($`\mathrm{𝟏𝟎}`$), and some singlets. The adjoint, in order to give the doublet-triplet splitting, must have a VEV proportional to the $`SO(10)`$ generator $`B-L`$. This fact is an important constraint. Second, it was required that the qualitative features of the quark and lepton spectrum should not arise by artificial cancellations or numerical accidents. Third, it was required that the Georgi-Jarlskog factor arise in a simple and natural way. Fourth, it was required that the entries in the mass matrices should come from operators of low dimension that arise in simple ways from integrating out small representations of fermions.
Having imposed these conditions of economy and naturalness, a structure emerged that had just six effective Yukawa terms (just five if $`m_u`$ is allowed to vanish). These gave the following mass matrices:
$$\begin{array}{cc}U^0=\left(\begin{array}{ccc}\eta & 0& 0\\ 0& 0& \frac{1}{3}ϵ\\ 0& \frac{1}{3}ϵ& 1\end{array}\right)m_U,\hfill & D^0=\left(\begin{array}{ccc}0& \delta & \delta ^{}\\ \delta & 0& \sigma +\frac{1}{3}ϵ\\ \delta ^{}& \frac{1}{3}ϵ& 1\end{array}\right)m_D\hfill \\ & \\ N^0=\left(\begin{array}{ccc}\eta & 0& 0\\ 0& 0& ϵ\\ 0& ϵ& 1\end{array}\right)m_U,\hfill & L^0=\left(\begin{array}{ccc}0& \delta & \delta ^{}\\ \delta & 0& ϵ\\ \delta ^{}& \sigma +ϵ& 1\end{array}\right)m_D.\hfill \end{array}$$
(22)
(The first papers<sup>49,44</sup> gave only the structures of the second and third families, while this was extended to the first family in a subsequent paper.<sup>53</sup>) Here $`\sigma \approx 1.8`$, $`ϵ\approx 0.14`$, $`\delta \approx |\delta ^{\prime }|\approx 0.008`$, $`\eta \approx 0.6\times 10^{-5}`$. The patterns that are evident in these matrices are due to the $`SO(10)`$ group-theoretical characteristics of the various Yukawa terms. Notice several facts about the crucial parameter $`\sigma `$ that is responsible for the lopsidedness of $`L`$ and $`D`$. First, if $`\sigma `$ were not present, then instead of the Georgi-Jarlskog factor of 3, the ratio $`m_\mu /m_s`$ would be given by 9. (That is, the Clebsch of $`\frac{1}{3}`$ that appears in $`D`$ due to the generator $`B-L`$ gets squared in computing $`m_s`$.) Since the large entry $`\sigma `$ overpowers the small entries of order $`ϵ`$, the correct Georgi-Jarlskog factor emerges. Second, if $`\sigma `$ were not present, $`U`$ and $`D`$ would be proportional, as far as the two heavier families are concerned, and $`V_{cb}`$ would vanish. Third, by having $`\sigma \sim 1`$ one ends up with $`V_{cb}`$ and $`m_s/m_b`$ being of the same order ($`O(ϵ)`$) as is indeed observed. And since $`\sigma `$ does not appear in $`U`$ (for group-theoretical reasons) the ratio $`m_c/m_t`$ comes out much smaller, of $`O(ϵ^2)`$, also as observed. In fact, with this structure, the mass of charm is predicted correctly to within the level of the uncertainties.
Thus, for several reasons that have nothing to do with neutrinos one is led naturally to exactly the lopsided form that is found to give an elegant explanation of the mixing seen in atmospheric neutrino data.
From the very small number of Yukawa terms, and from the fact that $`SO(10)`$ symmetry gives the normalizations of these terms, and not merely order of magnitude estimates for them, it is not surprising that many precise predictions result. In fact there are altogether nine predictions.<sup>53</sup> Some of these are post-dictions (including the highly non-trivial one for $`m_c`$). But several predictions will allow the model to be tested in the future, including predictions for $`V_{ub}`$, and the mixing angles $`U_{e2}`$ and $`U_{e3}`$.
## 3 Expectations for the parameter $`U_{e3}`$
All of the models that we have discussed aim to explain the atmospheric neutrino anomaly by saying that there is maximal mixing between $`\nu _\mu `$ and $`\nu _\tau `$, i.e. that $`U_{\mu 3}\approx 1/\sqrt{2}`$, and they all aim to explain the solar neutrino problem either by the small-angle MSW solution, in which $`U_{e2}\approx 0.05`$, or by one of the large-angle solutions (large-angle MSW or vacuum oscillation), in which $`U_{e2}\approx 1/\sqrt{2}`$. In this section we examine the other mixing, which is described by $`U_{e3}`$. $`U_{e3}`$ is independent of the other two mixings, and a priori could take values ranging from zero up to the present limit of about 0.2. However, what we find is that the great majority of viable models give one of four mixing patterns, which we label with the Greek letters $`\alpha `$ through $`\delta `$. These patterns are indicated in the following table.
$$\begin{array}{ccc}Pattern\hfill & U_{e2}& U_{e3}\hfill \\ & & \\ \alpha \hfill & \mathrm{sin}\theta _{LA}& O(m_{\nu _\mu }/m_{\nu _\tau }),O(\sqrt{m_e/m_\mu })\sim 0.05\hfill \\ & & \\ \alpha ^{\prime }\hfill & \mathrm{sin}\theta _{LA}& \sqrt{m_e/m_\mu }\mathrm{sin}\theta _{atm}\hfill \\ & & \\ \alpha ^{\prime \prime }\hfill & \mathrm{sin}\theta _{LA}& \frac{2}{\sqrt{6}}\sqrt{m_e/m_\mu }\hfill \\ & & \\ \beta \hfill & \mathrm{sin}\theta _{LA}& 0\hfill \\ & & \\ \gamma \hfill & \mathrm{sin}\theta _{SA}& \approx \mathrm{sin}\theta _{SA}\mathrm{tan}\theta _{atm}\hfill \\ & & \\ \gamma ^{\prime }\hfill & \sqrt{m_e/m_\mu }\mathrm{cos}\theta _{atm}& \sqrt{m_e/m_\mu }\mathrm{sin}\theta _{atm}\hfill \\ & & \\ \delta \hfill & \mathrm{sin}\theta _{SA}& \ll \mathrm{sin}\theta _{SA}\mathrm{tan}\theta _{atm}\hfill \end{array}$$
In this table $`\theta _{SA}`$ and $`\theta _{LA}`$ stand for the value of $`\theta _{12}`$ in the small-angle and large-angle solutions of the solar neutrino problem, respectively. What one sees is that if the solar angle is maximal one expects either that $`U_{e3}`$ will be of order 0.05 (pattern $`\alpha `$) or that it will vanish (pattern $`\beta `$). In most models that fit pattern $`\alpha `$ only the order of magnitude of $`U_{e3}`$ is predicted. However, some models predict it sharply. A particularly interesting prediction that arises in certain types of models is that $`|U_{e3}|=\sqrt{m_e/m_\mu }\mathrm{sin}\theta _{atm}`$, which we distinguish with the name $`\alpha ^{}`$. A special case of this occurs in some flavor democracy models, where $`\mathrm{sin}\theta _{atm}=2/\sqrt{6}`$, and we call this $`\alpha ^{\prime \prime }`$.
If the solar angle is small, i.e. the small-angle MSW solution, one typically finds one of two results for $`U_{e3}`$: either it is given by the relation $`U_{e3}\approx U_{e2}\mathrm{tan}\theta _{atm}`$ (pattern $`\gamma `$) or it is small compared to that value (pattern $`\delta `$). In certain models with pattern $`\gamma `$ there is the further prediction for the solar angle that $`U_{e2}\approx \sqrt{m_e/m_\mu }\mathrm{cos}\theta _{atm}`$. We call this pattern $`\gamma ^{\prime }`$, and it leads to the same prediction for $`U_{e3}`$ that one has in pattern $`\alpha ^{\prime }`$.
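For orientation, the numerical sizes these patterns imply are easy to work out (charged-lepton masses and maximal atmospheric mixing as inputs; the solar-angle value for pattern $`\gamma `$ is the illustrative small-angle MSW figure quoted above):

```python
# Rough numerical sizes of the tabulated U_e3 patterns.
import numpy as np

me, mmu = 0.511, 105.66              # charged-lepton masses in MeV
r = np.sqrt(me / mmu)                # sqrt(m_e/m_mu) ~ 0.07
s_atm = 1 / np.sqrt(2)               # sin(theta_atm), maximal mixing
t_atm = 1.0                          # tan(theta_atm), maximal mixing
s_SA = 0.05                          # sin(theta_SA), small-angle MSW

print("alpha' :", r * s_atm)                 # ~ 0.05
print("alpha'':", (2/np.sqrt(6)) * r)        # ~ 0.057
print("gamma  :", s_SA * t_atm)              # ~ 0.05
```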
We will first give some general preliminaries and then proceed to analyze different kinds of models, showing why the four patterns we have described are the ones that arise in the great majority of published models.
The lepton mixing matrix, or “MNS matrix” has the form
$$U_{MNS}=U_L^{\dagger }U_\nu ,$$
(23)
where $`U_L`$ is the unitary matrix that diagonalizes $`L^{\dagger }L`$, and $`U_\nu `$ is the unitary matrix that diagonalizes $`M_\nu ^{\dagger }M_\nu `$. It is convenient to write $`U_L`$ in the form
$$U_L=\left(\begin{array}{ccc}1& 0& 0\\ 0& \overline{c}_{23}& \overline{s}_{23}\\ 0& -\overline{s}_{23}& \overline{c}_{23}\end{array}\right)\left(\begin{array}{ccc}\overline{c}_{13}& 0& \overline{s}_{13}\\ 0& 1& 0\\ -\overline{s}_{13}& 0& \overline{c}_{13}\end{array}\right)\left(\begin{array}{ccc}\overline{c}_{12}& \overline{s}_{12}& 0\\ -\overline{s}_{12}& \overline{c}_{12}& 0\\ 0& 0& 1\end{array}\right),$$
(24)
where $`\overline{s}_{ij}\equiv \mathrm{sin}\overline{\theta }_{ij}`$, and so on. One can write $`U_\nu `$ in a similar way, with the corresponding angles being denoted $`\stackrel{~}{\theta }_{ij}`$. (Henceforth, a bar over a quantity means that it comes from the charged lepton sector, while a tilde means that it comes from the neutrino sector.) Consequently, if we assume all quantities are real, the MNS matrix can be written
$$U_{MNS}=\left(\begin{array}{ccc}\overline{c}_{13}\overline{c}_{12}& -\overline{s}_{12}& -\overline{s}_{13}\overline{c}_{12}\\ \overline{c}_{13}\overline{s}_{12}& \overline{c}_{12}& -\overline{s}_{13}\overline{s}_{12}\\ \overline{s}_{13}& 0& \overline{c}_{13}\end{array}\right)\left(\begin{array}{ccc}1& 0& 0\\ 0& c_{23}& s_{23}\\ 0& -s_{23}& c_{23}\end{array}\right)\left(\begin{array}{ccc}\stackrel{~}{c}_{13}\stackrel{~}{c}_{12}& \stackrel{~}{c}_{13}\stackrel{~}{s}_{12}& \stackrel{~}{s}_{13}\\ -\stackrel{~}{s}_{12}& \stackrel{~}{c}_{12}& 0\\ -\stackrel{~}{s}_{13}\stackrel{~}{c}_{12}& -\stackrel{~}{s}_{13}\stackrel{~}{s}_{12}& \stackrel{~}{c}_{13}\end{array}\right),$$
(25)
where $`s_{23}\equiv \mathrm{sin}(\stackrel{~}{\theta }_{23}-\overline{\theta }_{23})`$. What makes these expressions useful is that for hierarchical mass matrices, and most of the mass matrices that we shall have to deal with, the angles $`\overline{\theta }_{ij}`$ and $`\stackrel{~}{\theta }_{ij}`$ are given with sufficient accuracy very simply in terms of ratios of elements of the mass matrices. Equation (25) tells us that
$$\begin{array}{ccc}U_{e2}& =& \overline{c}_{13}\overline{c}_{12}\stackrel{~}{c}_{13}\stackrel{~}{s}_{12}-\overline{s}_{12}c_{23}\stackrel{~}{c}_{12}+\overline{s}_{12}s_{23}\stackrel{~}{s}_{13}\stackrel{~}{s}_{12}+\overline{s}_{13}\overline{c}_{12}s_{23}\stackrel{~}{c}_{12}+\overline{s}_{13}\overline{c}_{12}c_{23}\stackrel{~}{s}_{13}\stackrel{~}{s}_{12}\hfill \\ & & \\ U_{e3}& =& -\overline{s}_{12}s_{23}\stackrel{~}{c}_{13}-\overline{s}_{13}\overline{c}_{12}c_{23}\stackrel{~}{c}_{13}+\overline{c}_{13}\overline{c}_{12}\stackrel{~}{s}_{13}\hfill \\ & & \\ U_{\mu 3}& =& \overline{c}_{12}s_{23}\stackrel{~}{c}_{13}+\overline{c}_{13}\overline{s}_{12}\stackrel{~}{s}_{13}-\overline{s}_{13}\overline{s}_{12}c_{23}\stackrel{~}{c}_{13}.\hfill \end{array}$$
(26)
As we shall now see, for the practically interesting cases these expressions can be greatly simplified due to the smallness of certain angles. First let us consider the form in Eq. (2). From Eq. (2) one sees immediately that $`\stackrel{~}{s}_{23}\approx s`$. From Eq. (3) one sees that $`\stackrel{~}{s}_{13}\approx (cm_{13}+sm_{12})/m_3`$ and $`\stackrel{~}{s}_{12}\approx (cm_{12}-sm_{13})/m_2`$, where $`m_2`$ and $`m_3`$ are the second and third eigenvalues of $`M_\nu `$. ($`m_3\approx M`$ and $`m_2=O(\delta )M`$.) Since $`s,c\sim 1`$, one expects that $`cm_{13}+sm_{12}\sim cm_{12}-sm_{13}`$, unless there is fine-tuning. Consequently, one expects that $`\stackrel{~}{s}_{13}\sim (m_2/m_3)\stackrel{~}{s}_{12}\ll \stackrel{~}{s}_{12}`$.
The same is true in virtually all models for the charged lepton sector, i.e. $`\overline{s}_{13}\sim (m_\mu /m_\tau )\overline{s}_{12}\ll \overline{s}_{12}`$. It is also usually true in models (except the flavor democracy models) that if the 1-2 mixing is large it is due to the angle $`\stackrel{~}{\theta }_{12}`$ being large rather than the angle $`\overline{\theta }_{12}`$. In other words, $`\stackrel{~}{s}_{12}`$ may be large or small depending on which solution to the solar neutrino problem is assumed, but $`\overline{s}_{12}`$ is small in almost all models (except the flavor democracy ones), implying that $`\overline{s}_{13}`$ is even smaller and in fact negligible. However, $`\stackrel{~}{s}_{13}`$ can be significant if $`\stackrel{~}{s}_{12}\sim 1`$. These facts allow one to write
$$\begin{array}{ccc}U_{e2}& \simeq & \stackrel{~}{s}_{12}-\overline{s}_{12}c_{23}\stackrel{~}{c}_{12}\hfill \\ & & \\ U_{e3}& \simeq & -\overline{s}_{12}s_{23}+\stackrel{~}{s}_{13}\hfill \\ & & \\ U_{\mu 3}& \simeq & s_{23}+\overline{s}_{12}\stackrel{~}{s}_{13}\hfill \end{array}$$
(27)
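As a concrete check of these approximations, one can compose $`U_{MNS}=U_L^{\dagger }U_\nu `$ from the rotations of Eq. (24) for illustrative angle values and compare with Eq. (27):

```python
# Compose U_MNS = U_L^dagger U_nu from 2-3, 1-3, 1-2 rotations and compare
# the exact U_e3 with the approximation of Eq. (27). Angles are illustrative.
import numpy as np

def R(i, j, th):
    """Rotation by th in the (i, j) plane, signs as in Eq. (24)."""
    M = np.eye(3)
    M[i, i] = M[j, j] = np.cos(th)
    M[i, j], M[j, i] = np.sin(th), -np.sin(th)
    return M

thb23, thb13, thb12 = 0.0, 0.001, 0.07     # charged-lepton (barred) angles
tht23, tht13, tht12 = np.pi/4, 0.01, 0.05  # neutrino (tilde) angles

U_L  = R(1, 2, thb23) @ R(0, 2, thb13) @ R(0, 1, thb12)
U_nu = R(1, 2, tht23) @ R(0, 2, tht13) @ R(0, 1, tht12)
U_MNS = U_L.T @ U_nu                       # real matrices, so dagger = transpose

s23 = np.sin(tht23 - thb23)
print("U_e3 exact :", U_MNS[0, 2])
print("U_e3 approx:", -np.sin(thb12) * s23 + np.sin(tht13))
```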
Now let us turn to models of class I(1) that have $`M_\nu `$ of the form given in Eq. (2). There are two cases to consider: either large- or small-angle solution to the solar neutrino problem. If small-angle then one has that $`U_{e2}\approx 0.05`$, and therefore, barring accidental cancellations, $`\overline{s}_{12},\stackrel{~}{s}_{12}\lesssim 0.05`$. Thus $`\stackrel{~}{s}_{13}\ll 0.05`$ and the formulas can be simplified to $`U_{e3}\simeq -\overline{s}_{12}s_{23}`$ and $`U_{e2}\simeq -\overline{s}_{12}c_{23}+\stackrel{~}{s}_{12}`$. If the solar neutrino angle is predominantly from the charged lepton sector, i.e. $`\overline{s}_{12}\gg \stackrel{~}{s}_{12}`$, then one has the predictions that $`U_{e2}\simeq -\overline{s}_{12}c_{23}`$ and $`U_{e3}\simeq -\overline{s}_{12}s_{23}`$, and therefore $`U_{e3}\approx U_{e2}\mathrm{tan}\theta _{23}\approx U_{e2}\mathrm{tan}\theta _{atm}`$. In other words, we have the mixing pattern $`\gamma `$. It is known experimentally that $`c_{23}\approx 0.7`$ and that (for small-angle MSW) $`U_{e2}\approx 0.05`$, and so these relations imply that $`|\overline{s}_{12}|\approx |U_{e2}/c_{23}|\approx 0.07`$. It is quite interesting that this is numerically close to $`\sqrt{m_e/m_\mu }`$. The relation $`\overline{s}_{12}\approx \sqrt{m_e/m_\mu }`$ is what would be obtained in models where the 1-2 block of the charged-lepton mass matrix has the Weinberg-Wilczek-Zee-Fritzsch form.<sup>54</sup> In such models one can get the fairly sharp predictions for both $`U_{e2}`$ and $`U_{e3}`$ that we call pattern $`\gamma ^{\prime }`$. The very interesting point that $`U_{e2}=\sqrt{m_e/m_\mu }\mathrm{cos}\theta _{23}`$ can arise in a simple way and that it gives a good fit for the small-angle MSW solution was first emphasized in Ref. 29. One of the models that gives the pattern $`\gamma ^{\prime }`$ predictions is the small-angle case of the model of Refs. 44 and 53.
The other possibility in the small solar angle case is that the solar angle comes predominantly from the neutrino sector, i.e. $`\stackrel{~}{s}_{12}\gg \overline{s}_{12}`$. Then it is apparent that one would have $`U_{e3}\ll U_{e2}`$, in other words what we called mixing pattern $`\delta `$. Of course, one could have $`\overline{s}_{12}\sim \stackrel{~}{s}_{12}`$, but such a coincidence is not what one would typically expect.
Next let us consider models of class I(1) with $`M_\nu `$ of the form given in Eq. (2) but with large-angle solar solution. In that case, as noted, in virtually all published models the large solar angle comes from the neutrino sector. Thus $`\stackrel{~}{s}_{12}\sim 1`$ and $`\overline{s}_{12}\ll 1`$. One then expects, as seen above, that $`\stackrel{~}{s}_{13}\sim (m_2/m_3)\stackrel{~}{s}_{12}\sim m_2/m_3`$, which for hierarchical models is $`m_{\nu _\mu }/m_{\nu _\tau }\approx (\delta m_{12}^2/\delta m_{23}^2)^{1/2}`$. For large-angle MSW solution to the solar neutrino problem this gives $`\stackrel{~}{s}_{13}\sim 0.05`$. One typically finds in most models that $`\overline{s}_{12}\sim \sqrt{m_e/m_\mu }\approx 0.07`$. Thus the two terms in the expression $`U_{e3}\simeq -\overline{s}_{12}s_{23}+\stackrel{~}{s}_{13}`$ are typically of the same order but not sharply predicted. Consequently, all one can say is that $`U_{e3}=O(m_{\nu _\mu }/m_{\nu _\tau })`$ or $`\sqrt{m_e/m_\mu }`$. In other words, one has what we called mixing pattern $`\alpha `$. Similar results follow for the vacuum oscillation solution with hierarchical neutrino masses. In that case, however, $`\delta m_{12}^2`$ is much smaller, so that one has $`\stackrel{~}{s}_{13}\sim 10^{-4}`$, which is negligible. Therefore $`U_{e3}`$ comes from the single term $`-\overline{s}_{12}s_{23}`$. In most models one has no sharp prediction for this, and therefore the mixing pattern is again $`\alpha `$. However, in some models having the WWZF form for the 1-2 block of $`L`$ it is predicted that $`\overline{s}_{12}\approx \sqrt{m_e/m_\mu }`$, in which case the mixing pattern is $`\alpha ^{\prime }`$. (A good example of a model with pattern $`\alpha ^{\prime }`$ is the large-angle version of the model in Refs. 44 and 53. This version is discussed in Refs. 48 and 55.)
So far we have been considering models of class I(1) in which the matrix $`M_\nu `$ has the form given in Eq. (2). Now let us consider the form given in Eq. (6). This form only gives large-angle solutions to the solar neutrino problem. It is apparent by inspection of Eq. (6) that $`\delta m_{23}^2\approx M^2`$, while $`\delta m_{12}^2\sim m_{ij}M`$. (More precisely, it turns out that $`\delta m_{12}^2\approx 2(m_{11}+c^2m_{22}+2scm_{23}+s^2m_{33})M`$.) Thus typically $`m_{ij}/M\sim \delta m_{12}^2/\delta m_{23}^2\sim 10^{-3}`$ or $`10^{-7}`$ for large-angle MSW and vacuum oscillation solutions respectively. It is straightforward to show that Eq. (6) gives $`\stackrel{~}{s}_{13}\approx (-scm_{22}+(c^2-s^2)m_{23}+scm_{33})/M`$. Consequently, unless there is some artificial tuning of the $`m_{ij}`$ one can conclude that $`\stackrel{~}{s}_{13}\lesssim 10^{-3}`$ and hence negligible. Therefore, $`U_{e3}\simeq -\overline{s}_{12}s_{23}`$. Generally, this means mixing pattern $`\alpha `$, but where $`\overline{s}_{12}`$ is predicted to be $`\sqrt{m_e/m_\mu }`$ one has pattern $`\alpha ^{\prime }`$.
This brings us to models of class I(2). As can be seen from Eqs. (13), (16), and (18), most models of this class that do not involve fine-tuning seem to yield the form for $`M_\nu `$ given in Eq. (2). These models give the same results for $`U_{e3}`$ as do class I(1) models that have the form in Eq. (2). The same is also effectively true for most models of class II(2), namely the models with lopsided $`L`$. It is true that in class II(2) models the large 2-3 mixing comes from the charged lepton sector rather than from $`M_\nu `$. However, as can be seen from Eq. (25) it does not much matter in computing the MNS matrix where the 2-3 mixing originates. In class II(2) models, if the 2-3 block of the charged-lepton mass matrix $`L`$ is diagonalized, the matrix $`M_\nu `$ generally goes over to the form in Eq. (2). (This will be the case if the neutrino masses have the hierarchy $`m_3\gg m_2\gg m_1`$, as typically is the case in class II(2).)
Let us consider, finally, the models of class II(1). Almost all published models of this class are of the “flavor democracy” type, as we have seen. Up to now we have analyzed predictions for $`U_{e3}`$ using the forms given in Eqs. (24) and (25). However, these forms are convenient when the mass matrices have a hierarchy among their elements, which is not the case for the flavor democratic form, Eq. (19). Therefore we shall analyze the flavor democracy models in a different way.
In flavor democracy models, it is assumed that the lepton mass matrices have the following forms
$$\begin{array}{ccc}L\hfill & =& M_{FD}+\mathrm{\Delta }L,\hfill \\ & & \\ M_\nu \hfill & =& m_\nu I+\mathrm{\Delta }M_\nu ,\hfill \end{array}$$
(28)
where $`M_{FD}`$ is the form in Eq. (19), $`I`$ is the identity matrix, and $`\mathrm{\Delta }L`$ and $`\mathrm{\Delta }M_\nu `$ are small corrections that break the flavor permutation symmetries of the model (generally $`S_3\times S_3`$). In Ref. 38 the parameter $`m_\nu `$ vanishes and $`M_\nu =\mathrm{\Delta }M_\nu `$ has a hierarchical form, thus giving $`m_3\gg m_2\gg m_1`$ for the three neutrino masses. But the more usual assumption is that $`m_\nu \mathrm{}0`$, giving $`m_3\approx m_2\approx m_1`$. However, there is still assumed to be a hierarchy in $`\mathrm{\Delta }M_\nu `$ so as to get $`\delta m_{12}^2\ll \delta m_{23}^2`$.
The first step in diagonalizing $`L`$ is to transform it by the orthogonal matrix $`U_{FD}`$ given in Eq. (20).
$$\begin{array}{ccc}L^{\prime }\equiv U_{FD}^{\dagger }LU_{FD}\hfill & =& U_{FD}^{\dagger }(M_{FD}+\mathrm{\Delta }L)U_{FD}\hfill \\ & & \\ & =& \left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 3\end{array}\right)m_{\mathrm{\ell }}+\mathrm{\Delta }L^{\prime }.\hfill \end{array}$$
(29)
Define
$$(\mathrm{\Delta }L)_{ij}\equiv \delta _{ij}m_{\mathrm{\ell }},(\mathrm{\Delta }L^{\prime })_{ij}\equiv \delta _{ij}^{\prime }m_{\mathrm{\ell }}.$$
(30)
Then the $`\delta _{ij}^{\prime }`$ are given by
$$\begin{array}{ccc}\delta _{11}^{\prime }& =& \frac{1}{2}(\delta _{11}+\delta _{22}-\delta _{12}-\delta _{21}),\hfill \\ \delta _{22}^{\prime }& =& \frac{1}{6}(\delta _{11}+\delta _{22}+\delta _{12}+\delta _{21})\hfill \\ & & -\frac{1}{3}(\delta _{13}+\delta _{31}+\delta _{23}+\delta _{32})+\frac{2}{3}\delta _{33},\hfill \\ \delta _{33}^{\prime }& =& \frac{1}{3}\mathrm{\Sigma }_{ij}\delta _{ij},\hfill \\ \delta _{12}^{\prime }& =& \frac{1}{2\sqrt{3}}(\delta _{11}-\delta _{22}+\delta _{12}-\delta _{21}-2\delta _{13}+2\delta _{23}),\hfill \\ \delta _{13}^{\prime }& =& \frac{1}{\sqrt{6}}(\delta _{11}-\delta _{22}+\delta _{12}-\delta _{21}+\delta _{13}-\delta _{23}),\hfill \\ \delta _{23}^{\prime }& =& \frac{1}{3\sqrt{2}}\mathrm{\Sigma }_i(\delta _{1i}+\delta _{2i}-2\delta _{3i}).\hfill \end{array}$$
(31)
The next step in the diagonalization is to rotate away the 13, 31, 23, and 32 elements of $`L^{\prime }`$ as follows
$$\begin{array}{ccc}L^{\prime \prime }& =& \left(\begin{array}{ccc}1& 0& -\delta _{13}^{\prime }/3\\ 0& 1& -\delta _{23}^{\prime }/3\\ \delta _{13}^{\prime }/3& \delta _{23}^{\prime }/3& 1\end{array}\right)L^{\prime }\left(\begin{array}{ccc}1& 0& \delta _{31}^{\prime }/3\\ 0& 1& \delta _{32}^{\prime }/3\\ -\delta _{31}^{\prime }/3& -\delta _{32}^{\prime }/3& 1\end{array}\right)\hfill \\ & & \\ & \simeq & \left(\begin{array}{ccc}\delta _{11}^{\prime }& \delta _{12}^{\prime }& 0\\ \delta _{21}^{\prime }& \delta _{22}^{\prime }& 0\\ 0& 0& 3\end{array}\right)m_{\mathrm{\ell }}.\hfill \end{array}$$
(32)
Finally the 1-2 block of $`L^{\prime \prime }`$ is diagonalized
$$L_{diag}=\left(\begin{array}{ccc}\mathrm{cos}\theta ^{\prime }& \mathrm{sin}\theta ^{\prime }& 0\\ -\mathrm{sin}\theta ^{\prime }& \mathrm{cos}\theta ^{\prime }& 0\\ 0& 0& 1\end{array}\right)L^{\prime \prime }\left(\begin{array}{ccc}\mathrm{cos}\theta _{\mathrm{\ell }}& \mathrm{sin}\theta _{\mathrm{\ell }}& 0\\ -\mathrm{sin}\theta _{\mathrm{\ell }}& \mathrm{cos}\theta _{\mathrm{\ell }}& 0\\ 0& 0& 1\end{array}\right),$$
(33)
where
$$\mathrm{tan}2\theta _{\mathrm{\ell }}=\frac{2(\delta _{11}^{\prime }\delta _{12}^{\prime }+\delta _{21}^{\prime }\delta _{22}^{\prime })}{(\delta _{22}^{\prime 2}+\delta _{12}^{\prime 2}-\delta _{21}^{\prime 2}-\delta _{11}^{\prime 2})}.$$
(34)
As emphasized in Ref. 56, there is no reason a priori for this angle to be small, a point to which we shall return presently.
Altogether, then, the matrix $`U_L`$ that diagonalizes $`L^{\dagger }L`$ is given by
$$U_L=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{3}}\\ -\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{3}}\\ 0& -\frac{2}{\sqrt{6}}& \frac{1}{\sqrt{3}}\end{array}\right)\left(\begin{array}{ccc}1& 0& \delta _{31}^{\prime }/3\\ 0& 1& \delta _{32}^{\prime }/3\\ -\delta _{31}^{\prime }/3& -\delta _{32}^{\prime }/3& 1\end{array}\right)\left(\begin{array}{ccc}\mathrm{cos}\theta _{\mathrm{\ell }}& \mathrm{sin}\theta _{\mathrm{\ell }}& 0\\ -\mathrm{sin}\theta _{\mathrm{\ell }}& \mathrm{cos}\theta _{\mathrm{\ell }}& 0\\ 0& 0& 1\end{array}\right).$$
(35)
The usual assumption is that $`M_\nu `$ is nearly diagonal, so that $`U_\nu \approx I`$ and the MNS matrix is given by $`U_{MNS}=U_L^{\dagger }U_\nu \approx U_L^{\dagger }`$. From Eq. (35) one has then
$$\begin{array}{ccc}U_{\mu 3}& \simeq & 2\mathrm{cos}\theta _{\mathrm{\ell }}/\sqrt{6}+O(\delta ),\hfill \\ & & \\ U_{e2}& \simeq & \mathrm{cos}\theta _{\mathrm{\ell }}/\sqrt{2}-\mathrm{sin}\theta _{\mathrm{\ell }}/\sqrt{6}+O(\delta ),\hfill \\ & & \\ U_{e3}& \simeq & 2\mathrm{sin}\theta _{\mathrm{\ell }}/\sqrt{6}-(\mathrm{sin}\theta _{\mathrm{\ell }}\delta _{32}^{\prime }-\mathrm{cos}\theta _{\mathrm{\ell }}\delta _{31}^{\prime })/3\sqrt{3}.\hfill \end{array}$$
(36)
Since the angle $`\theta _{\mathrm{\ell }}`$ is very sensitive to the nine parameters $`\delta _{ij}`$ and has no reason a priori to be small, as is apparent from Eq. (34), it might seem that the flavor democracy idea has no predictivity as far as the MNS matrix elements are concerned. However, a posteriori we do know that the CKM angles are small, and that strongly suggests that $`\theta _{\mathrm{\ell }}`$ is small. The point is that if a large angle $`\theta _{\mathrm{\ell }}`$ were required in the diagonalization of $`L`$, one would typically expect to find that large angles $`\theta _u`$ and $`\theta _d`$ were required in the diagonalization of $`U`$ and $`D`$ as well. Unless there were a conspiracy and $`\theta _u\approx \theta _d`$, large CKM angles would result. Under the assumption that $`\mathrm{\Delta }L`$ has the same form (with different values of parameters) as $`\mathrm{\Delta }U`$ and $`\mathrm{\Delta }D`$, one can conclude that $`\theta _{\mathrm{\ell }}\ll 1`$.
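The chain of steps in Eqs. (29)-(36) can be compressed into a few lines of code; the symmetry-breaking matrix below is an arbitrary illustrative choice (diagonal, with a dominant third-family entry so that $`\theta _{\mathrm{\ell }}`$ comes out small):

```python
# Flavor-democracy sketch: rotate an illustrative breaking matrix delta into
# the U_FD basis (Eq. (31)), get theta_ell from Eq. (34), then U_e3 as
# reconstructed in Eq. (36).
import numpy as np

U_FD = np.array([[ 1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
                 [-1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
                 [ 0.0,          -2/np.sqrt(6), 1/np.sqrt(3)]])
delta = np.diag([1e-5, 1e-3, 1e-1])          # illustrative S3 x S3 breaking
dp = U_FD.T @ delta @ U_FD                   # the primed deltas of Eq. (31)

tan2 = 2*(dp[0,0]*dp[0,1] + dp[1,0]*dp[1,1]) \
       / (dp[1,1]**2 + dp[0,1]**2 - dp[1,0]**2 - dp[0,0]**2)
th = 0.5 * np.arctan(tan2)
Ue3 = 2*np.sin(th)/np.sqrt(6) - (np.sin(th)*dp[2,1] - np.cos(th)*dp[2,0])/(3*np.sqrt(3))
print("theta_ell:", th, "  U_e3:", Ue3)      # both small, as argued in the text
```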
There are many possible forms for $`\mathrm{\Delta }L`$ that give vanishing $`\theta _{\mathrm{\ell }}`$. If such a form is chosen, then one has
$$\begin{array}{ccc}U_{\mu 3}& =& 2/\sqrt{6}+O(\delta ),\hfill \\ & & \\ U_{e2}& =& 1/\sqrt{2}+O(\delta ),\hfill \\ & & \\ U_{e3}& =& O(\delta ).\hfill \end{array}$$
(37)
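As a quick numerical check of Eq. (37), the zeroth-order MNS magnitudes can be evaluated directly from the matrices of Eq. (35) at $`\theta _{\mathrm{}}=0`$ and $`\delta =0`$. The sketch below is only illustrative; the overall sign convention chosen for the democratic rotation is an assumption and does not affect the magnitudes.

```python
import numpy as np

# Democratic rotation F of Eq. (35); a standard sign convention is assumed
# (only the magnitudes of the MNS entries are convention independent).
F = np.array([[ 1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
              [-1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
              [ 0.0,          -2/np.sqrt(6), 1/np.sqrt(3)]])

# At zeroth order in delta and theta_l = 0, U_L = F and U_MNS = U_L^dagger.
U = F.T                              # F is real orthogonal: dagger = transpose
print(abs(U[1, 2]), 2/np.sqrt(6))    # |U_mu3| = 0.8165
print(abs(U[0, 1]), 1/np.sqrt(2))    # |U_e2|  = 0.7071
print(abs(U[0, 2]))                  # |U_e3|  = 0 at this order
```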
The exact value is evidently dependent on the scheme of symmetry breaking. However, since the parameters $`\delta _{ij}`$ are involved in generating the interfamily hierarchy of charged lepton masses, one expects that $`U_{e3}`$ will be closely related to small lepton mass ratios. In fact, this is the case, and typically one finds that $`U_{e3}\sim \sqrt{m_e/m_\mu }`$, in other words pattern $`\alpha `$. In a popular scheme of symmetry breaking (Refs. 57, 58), for instance, $`|U_{e3}|\cong \frac{2}{\sqrt{6}}\sqrt{m_e/m_\mu }`$, which we have called pattern $`\alpha ^{\prime \prime }`$. However, there are also schemes of symmetry breaking (Ref. 57) where $`U_{e3}=0`$, which we called pattern $`\beta `$.
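For orientation, the sizes implied by these patterns follow from the measured masses alone (using $`m_e=0.511`$ MeV and $`m_\mu =105.7`$ MeV):

$$\sqrt{m_e/m_\mu }\simeq 0.070,\qquad |U_{e3}|\simeq \frac{2}{\sqrt{6}}\sqrt{m_e/m_\mu }\simeq 0.057\quad (\text{pattern }\alpha ^{\prime \prime }).$$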
In conclusion, we see that there are a few patterns of neutrino mixing that tend to arise in the great majority of published models. And although there is not a one-to-one correspondence between the type of model and the value of $`U_{e3}`$, it is clear that knowledge of $`U_{e3}`$ will give great insight into the possible underlying mechanisms that are responsible for neutrino mixing (Ref. 59).
## References
1. J.W.F. Valle, hep-ph/9911224; S.M. Bilenky, Lectures at the 1999 European School of High Energy Physics, Casta Papiernicka, Slovakia, Aug. 22-Sept. 4, 1999, hep-ph/0001311.
2. M.C. Gonzalez-Garcia, P.C. de Holanda, C. Peña-Garay, and J.W.F. Valle, hep-ph/9906469.
3. V. Barger and K. Whisnant, hep-ph/9903262.
4. M.C. Gonzalez-Garcia, talk at International Workshop on Particles in Astrophysics and Cosmology: From Theory to Observation, Valencia, Spain, May 3-8, 1999.
5. M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, Proc. Supergravity Workshop at Stony Brook, ed. P. Van Nieuwenhuizen and D.Z. Freedman (North-Holland, Amsterdam (1979)); T. Yanagida, Proc. Workshop on unified theory and the baryon number of the universe, ed. O. Sawada and A. Sugamoto (KEK, 1979).
6. A. Zee, Phys. Lett. B93, 389 (1980); Phys. Lett. B161, 141 (1985).
7. C. Froggatt and H.B. Nielsen, Nucl. Phys. B147, 277 (1979).
8. J.A. Harvey, D.B. Reiss, and P. Ramond, Nucl. Phys. B199, 223 (1982).
9. Z. Maki, M. Nakagawa, and S. Sakata, Prog. Theor. Phys. 28, 870 (1962).
10. R.N. Mohapatra and S. Nussinov, Phys. Rev. D60, 013002 (1999) (hep-ph/9809415).
11. C.D. Froggatt, M. Gibson, and H.B. Nielsen, Phys. Lett. B446, 256 (1999) (hep-ph/9811265).
12. A.S. Joshipura, hep-ph/9808261; A.S. Joshipura and S.D. Rindani, hep-ph/9811252; R.N. Mohapatra, A. Perez-Lorenzana, C.A. deS. Pires, Phys. Lett. B474, 355 (2000) (hep-ph/9911395).
13. C. Jarlskog, M. Matsuda, S. Skadhauge, and M. Tanimoto, Phys. Lett. B449, 240 (1999) (hep-ph/9812282).
14. E. Ma, Phys. Lett. B442, 238 (1998) (hep-ph/9807386); K. Cheung and O.C.W. Kong, hep-ph/9912238.
15. P. Frampton and S. Glashow, Phys. Lett. B461, 95 (1999) (hep-ph/9906375); A.S. Joshipura and S.D. Rindani, Phys. Lett. B464, 239 (1999) (hep-ph/9907390).
16. M. Drees, S. Pakvasa, X. Tata, T. terVeldhuis, Phys. Rev. D57 5335 (1998) (hep-ph/9712392).
17. E.J. Chun, S.K. Kang, C.W. Kim, and U.W. Lee, Nucl. Phys. B544, 89 (1999) (hep-ph/9807327); A.S. Joshipura and S.K. Vempati, Phys. Rev. D60, 095009 (1999) (hep-ph/9808232); B. Mukhopadhyaya, S. Roy, and F. Vissani, Phys. Lett. B443, 191 (1998) (hep-ph/9808265); O.C.W. Kong, hep-ph/9808304; K. Choi, E.J. Chun, and K. Hwang, Phys. Rev. D60, 031301 (1999) (hep-ph/9811363); D.E. Kaplan and A.E. Nelson, JHEP 0001:033 (2000) (hep-ph/9901254); A.S. Joshipura and S.K. Vempati, Phys. Rev. D60, 111303 (1999) (hep-ph/9903435); J.C. Romao, M.A. Diaz, M. Hirsch, W. Porod, and J.W.F. Valle, hep-ph/9907499; O. Haug, J.D. Vergados, A. Faessler, and S. Kovalenko, hep-ph/9909318; E.J. Chun and S.K. Kang, Phys. Rev. D61, 075012 (2000) (hep-ph/9909429).
18. K. Fukuura, T. Miura, E. Takasugi, and M. Yoshimura, Osaka Univ. preprint, OU-HET-326 (hep-ph/9909415).
19. G.K. Leontaris and J. Rizos, CERN-TH-99-268 (hep-ph/9909206); W. Buchmüller and T. Yanagida, Phys. Lett. B445, 399 (1999) (hep-ph/9810308); C.K. Chua, X.G. He, and W.Y. Hwang, hep-ph/9905340; A. Ghosal, hep-ph/9905470; J.E. Kim and J.S. Lee, hep-ph/9907452; U. Mahanta, hep-ph/9909518.
20. S.F. King, Phys. Lett. B439, 350 (1998) (hep-ph/9806440); S. Davidson and S.F. King, Phys. Lett. B445, 191 (1998) (hep-ph/9808296); E. Ma and D.P. Roy, Phys. Rev. D59, 097702 (1999) (hep-ph/9811266); Q. Shafi and Z. Tavartkiladze, Phys. Lett. B451, 129 (1999) (hep-ph/9901243); S.F. King, Nucl. Phys. B562, 57 (1999) (hep-ph/9904210); W. Grimus and H. Neufeld, hep-ph/9911465.
21. R. Barbieri, L.J. Hall, D. Smith, A. Strumia, and N. Weiner, JHEP 9812:017 (1998) (hep-ph/9807235); R. Barbieri, L.J. Hall, and A. Strumia, Phys. Lett. B445, 407 (1999) (hep-ph/9808333); Y. Grossman, Y. Nir, and Y. Shadmi, JHEP 9810:007 (1998) (hep-ph/9808355); C.D. Froggatt, M. Gibson, and H.B. Nielsen, Phys. Lett. B446, 256 (1999) (hep-ph/9811265).
22. C.H. Albright and S. Nandi, Phys. Rev. D53, 2699 (1996) (hep-ph/9507376); H. Nishiura, K. Matsuda, and T. Fukuyama, Phys. Rev. D60, 013006 (1999) (hep-ph/9902385); K.S. Babu, B. Dutta, and R.N. Mohapatra, Phys. Lett. B458, 93 (1999) (hep-ph/9904366).
23. M. Jezabek and Y. Sumino, Phys. Lett. B440, 327 (1998) (hep-ph/9807310); G. Altarelli and F. Feruglio, Phys. Lett. B439, 112 (1998) (hep-ph/9807353); G. Altarelli, F. Feruglio, and I. Masina, hep-ph/9907532.
24. B. Stech, Phys. Lett. B465, 219 (1999) (hep-ph/9905440); R. Dermíšek and S. Raby, hep-ph/9911275; A. Aranda, C.D. Carone, and R.F. Lebed, hep-ph/0002044.
25. G.K. Leontaris, S. Lola, C. Scheich, J.D. Vergados, Phys. Rev. D53, 6381 (1996) (hep-ph/9509351); Y. Koide, Mod. Phys. Lett. A11, 2849 (1996) (hep-ph/9603376); P. Binetruy, S. Lavignac, S. Petcov, and P. Ramond, Nucl. Phys. B496, 3 (1997) (hep-ph/9610481); B.C. Allanach, Phys. Lett. B450, 182 (1999) (hep-ph/9806294); G. Eyal, Phys. Lett. B441, 191 (1998) (hep-ph/9807308); S. Lola and J.D. Vergados, Prog. Part. Nucl. Phys. 40, 71 (1998) (hep-ph/9808269).
26. G. Costa and E. Lunghi, Nuov. Cim. 110A, 549 (1997) (hep-ph/9709271).
27. M. Jezabek and Y. Sumino, Phys. Lett. B440, 327 (1998) (hep-ph/9807310).
28. G. Altarelli and F. Feruglio, Phys. Lett. B439, 112 (1998) (hep-ph/9807353); E.Kh. Akhmedov, G.C. Branco, and M.N. Rebelo, hep-ph/9911364.
29. M. Bando, T. Kugo, and K. Yoshioka, Phys. Rev. Lett. 80, 3004 (1998) (hep-ph/9710417).
30. M. Abud, F. Buccella, D. Falcone, G. Ricciardi, and F. Tramontano, DSF-T-99-36 (hep-ph/9911238).
31. A.K. Ray and S. Sarkar, Phys. Rev. D61, 035007 (2000) (hep-ph/9908294).
32. J. Hashida, T. Morizumi, and A. Purwanto, hep-ph/9909208.
33. K. Oda, E. Takasugi, M. Tanaka, and M. Yoshimura, Phys. Rev. D59, 055001 (1999) (hep-ph/9808241).
34. Q. Shafi and Z. Tavartkiladze, BA-99-39 (hep-ph/9905202); D.P. Roy, Talk at 6th Topical Seminar on Neutrino and AstroParticle Physics, San Miniato, Italy, 17-21 May 1999 (hep-ph/9908262).
35. G. Altarelli, F. Feruglio, and I. Masina, hep-ph/9907532. See also R. Barbieri, P. Creminelli, and A. Romanino, Nucl. Phys. B559, 17 (1999) (hep-ph/9903460).
36. E. Malkawi, Phys. Rev. D61, 013006 (2000) (hep-ph/9810542); Y.L. Wu, Eur. Phys. J. C10, 491 (1999) (hep-ph/9901245).
37. For a review see H. Fritzsch, Talk at Ringberg Euroconference on New Trends in Neutrino Physics, Ringberg, Ger. 1998 (hep-ph/9807234), and references therein.
38. H. Fritzsch and Z.Z. Xing, Phys. Lett. B372, 265 (1996) (hep-ph/9509389).
39. M. Fukugita, M. Tanimoto, and T. Yanagida, Phys. Rev. D57, 4429 (1998) (hep-ph/9709388); M. Tanimoto, Phys. Rev. D59, 017304 (1999) (hep-ph/9807283); H. Fritzsch and Z.Z. Xing, Phys. Lett. B440, 313 (1998) (hep-ph/9808272); R.N. Mohapatra and S. Nussinov, Phys. Lett. B441, 299 (1998) (hep-ph/9808301); M. Fukugita, M. Tanimoto, and T. Yanagida, Phys. Rev. D59, 113016 (1999) (hep-ph/9809554); S.K. Kang and C.S. Kim, Phys. Rev. D59, 091302 (1999) (hep-ph/9811379); M. Tanimoto, T. Watari, and T. Yanagida, Phys. Lett. B461, 345 (1999) (hep-ph/9904338); M. Tanimoto, hep-ph/0001306. For an excellent review of flavor democracy schemes of neutrino mass mixing see H. Fritzsch and Z.Z. Xing, hep-ph/9912358.
40. G.C. Branco, M.N. Rebelo, and J.I. Silva-Marcos, Phys. Lett. B428, 136 (1998) (hep-ph/9802340); I.S. Sogami, H. Tanaka, and T. Shinohara, Prog. Theor. Phys. 101, 707 (1999) (hep-ph/9807449); G.C. Branco, M.N. Rebelo, and J.I. Silva-Marcos, hep-ph/9906368.
41. K.S. Babu and S.M. Barr, Phys. Lett. B381, 202 (1996) (hep-ph/9511446).
42. S.M. Barr, Phys. Rev. D55, 1659 (1997) (hep-ph/9607419).
43. J. Sato and T. Yanagida, Phys. Lett. B430, 127 (1998) (hep-ph/9710516).
44. C.H. Albright, K.S. Babu, and S.M. Barr, Phys. Rev. Lett. 81, 1167 (1998) (hep-ph/9802314).
45. N. Irges, S. Lavignac, and P. Ramond, Phys. Rev. D58, 035003 (1998) (hep-ph/9802334); J.K. Elwood, N. Irges, and P. Ramond, Phys. Rev. Lett. 81, 5064 (1998) (hep-ph/9807228).
46. Y. Nomura and T. Yanagida, Phys. Rev. D59, 017303 (1999) (hep-ph/9807325); N. Haba, Phys. Rev. D59, 035011 (1999) (hep-ph/9807552); G. Altarelli and F. Feruglio, JHEP 9811:021 (1998) (hep-ph/9809596); Z. Berezhiani and A. Rossi, JHEP 9903:002 (1999) (hep-ph/9811447); K. Hagiwara and N. Okamura, Nucl. Phys. B548, 60 (1999) (hep-ph/9811495); G. Altarelli and F. Feruglio, Phys. Lett. B451, 388 (1999) (hep-ph/9812475); K.S. Babu, J. Pati, and F. Wilczek, (hep-ph/9812538); M. Bando and T. Kugo, Prog. Theor. Phys. 101, 1313 (1999) (hep-ph/9902204); Y. Nir and Y. Shadmi, JHEP 9905:023 (1999) (hep-ph/9902293); Y. Nomura and T. Sugimoto, hep-ph/9903334; K.I. Izawa, K. Kurosawa, Y. Nomura, and T. Yanagida, Phys. Rev. D60, 115016 (1999) (hep-ph/9904303); Q. Shafi and Z. Tavartkiladze, BA-99-63 (hep-ph/9910314); P. Frampton and A. Rasin, IFP-777-UNC (hep-ph/9910522).
47. R. Barbieri, L.J. Hall, G.L. Kane, and G.G. Ross, OUTP-9901-P (hep-ph/9901228); E. Ma, Phys. Rev. D61, 033012 (hep-ph/9909249).
48. C.H. Albright and S.M. Barr, Phys. Lett. 461, 218 (1999) (hep-ph/9906296).
49. C.H. Albright and S.M. Barr, Phys. Rev. D58, 013002 (1998) (hep-ph/9712488).
50. H. Georgi and C. Jarlskog, Phys. Lett. B86 (1979) 297.
51. S.M. Barr and S. Raby, Phys. Rev. Lett. 79, 4748 (1997).
52. S. Dimopoulos and F. Wilczek, report No. NSF-ITP-82-07 (1981), in The unity of fundamental interactions Proceedings of the 19th Course of the International School of Subnuclear Physics, Erice, Italy, 1981 ed. A. Zichichi (Plenum Press, New York, 1983); K.S. Babu and S.M. Barr, Phys. Rev. D48, 5354 (1993); Phys. Rev. D50, 3529 (1994).
53. C.H. Albright and S.M. Barr, Phys. Lett. B452, 287 (1999) (hep-ph/9901318).
54. S. Weinberg, Trans. NY Acad. Sci. 38, 185 (1977); F. Wilczek and A. Zee, Phys. Lett. B70, 418 (1977); H. Fritzsch, Phys. Lett. B70, 436 (1977).
55. C.H. Albright and S.M. Barr, hep-ph/0002155.
56. M. Tanimoto, Phys. Rev D59, 017304 (1999) (hep-ph/9807283).
57. M. Fukugita, M. Tanimoto, and T. Yanagida, Phys. Rev. D57, 4429 (1998) (hep-ph/9709388).
58. H. Fritzsch and Z.Z. Xing, Phys. Lett. B440, 313 (1998) (hep-ph/9808272); H. Fritzsch and Z.Z. Xing, hep-ph/9912358.
59. E.Kh. Akhmedov, G.C. Branco, and M.N. Rebelo, hep-ph/9912205 gives an analysis of $`U_{e3}`$ that is different from but not inconsistent with the one given here.
|
no-problem/0003/astro-ph0003032.html
|
ar5iv
|
text
|
# Non-detection of a pulsar-powered nebula in Puppis A, and implications for the nature of the radio-quiet neutron star RX J0822–4300
## 1. Introduction
While the vast majority of neutron stars so far discovered are seen as radio pulsars, there are also a small but increasing number of neutron stars which have very different observational properties. Approximately half of these sources are soft $`\gamma `$-ray repeaters (SGRs) or anomalous X-ray pulsars (AXPs), both of which show pulsed X-rays at long periods ($`P\sim 10`$ s) (e.g. Mereghetti (1999)). The remaining sources are grouped together as “radio-quiet neutron stars” (RQNS; Caraveo, Bignami, & Trümper (1996); Brazier & Johnston (1999)), most of which are characterized by unpulsed thermal X-ray emission at a temperature of a few million degrees, a complete lack of radio emission, and very high X-ray to optical ratios. Many of these sources have been associated with supernova remnants (SNRs), and are thus probably quite young ($`\lesssim 20`$ kyr) objects.
The AXPs and SGRs are quite distinct from radio pulsars in their properties, and are believed to be either “magnetars” (neutron stars with magnetic fields $`B\gtrsim 10^{14}`$ G; Thompson & Duncan (1996)) or exotic accreting systems (e.g. van Paradijs, Taam, & van den Heuvel (1995)); however, an interpretation for the RQNS is less clear. Brazier & Johnston (1999) argue that RQNS are energetic young radio pulsars like the Crab pulsar, but whose beams do not cross our line of sight. However, Vasisht et al. (1997) and Frail (1998) propose that RQNS are neutron stars with large initial periods ($`P_0\gtrsim 0.5`$ s) and/or high magnetic fields ($`B\gtrsim 10^{14}`$ G) and are thus possibly related to the SGRs and AXPs, while Geppert, Page & Zannias (1999) suggest that they may rather be fast-spinning but weakly-magnetized sources.
One way to distinguish between all these possibilities would be to detect pulsations from a RQNS. The period and period derivative of the source could then be used to infer a surface magnetic field (Manchester & Taylor (1977)), while if the RQNS can also be associated with a SNR, an independent age determination for the latter can be used to estimate an initial period for the neutron star (e.g. Reynolds (1985)).
The RQNS RX J0822–4300 (Petre, Becker, & Winkler (1996)) is near the center of and is almost certainly associated with the young ($`<`$5000 yr; Winkler et al. (1988); Arendt, Dwek, & Petre (1991)) and nearby (2.2 kpc; Reynoso et al. (1995)) supernova remnant Puppis A (G260.4–3.3). Recently, Pavlov, Zavlin & Trümper (1999; hereafter PZT99) and Zavlin, Trümper & Pavlov (1999; hereafter ZTP99) have analyzed two archival ROSAT datasets on RX J0822–4300, separated by 4.6 yr. In each dataset they find evidence for weak pulsations, the periods of which are slightly different as would be expected for pulsar spin-down. The resulting period, $`P=75.5`$ ms, and period derivative, $`\dot{P}=1.49\times 10^{-13}`$ s s$`{}^{-1}`$, when combined with the age of the SNR, imply a dipole magnetic field $`B=3.4\times 10^{12}`$ G, a spin-down luminosity $`\dot{E}=1.4\times 10^{37}`$ erg s$`{}^{-1}`$ and an initial period $`P_0\simeq 55`$ ms, all of which (despite the radio-quiet nature of the source) are properties typical of a young energetic radio pulsar associated with a SNR.
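For reference, these numbers are tied together by the standard dipole spin-down relations; with the conventional neutron-star moment of inertia $`I=10^{45}`$ g cm$`{}^{2}`$ (an assumption on our part) one recovers

$$B\simeq 3.2\times 10^{19}(P\dot{P})^{1/2}\;\mathrm{G}\simeq 3.4\times 10^{12}\;\mathrm{G},\qquad \dot{E}=4\pi ^2I\dot{P}/P^3\simeq 1.4\times 10^{37}\;\mathrm{erg}\;\mathrm{s}^{-1}.$$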
However, energetic young pulsars in SNRs have some unmistakable signatures. Every young ($`\lesssim 20`$ kyr) pulsar located within the confines of a SNR powers an observable pulsar wind nebula (PWN) — a filled-center synchrotron source resulting from the confinement of the relativistic pulsar wind by external pressure. Thus a simple test to determine if RX J0822–4300 is indeed an energetic young pulsar, as argued by Brazier & Johnston (1999) and as implied by the detection of pulsations by PZT99 and ZTP99, is to see if it has an associated PWN. At radio wavelengths, existing data (e.g. Arendt et al. (1990); Dubner et al. (1991)) let us put no useful constraints on the presence or absence of a PWN associated with RX J0822–4300. This is because these observations were carried out at relatively low frequencies (where Puppis A is brightest) and low spatial resolution, resulting in a great deal of confusing emission at the position of RX J0822–4300 from both the SNR shell and from diffuse internal emission. We have therefore carried out new observations towards RX J0822–4300, at higher frequency and spatial resolution than previous measurements, aimed at searching for a radio PWN associated with RX J0822–4300 and thus determining whether its properties are consistent with it being a young pulsar. Our observations are described in §2, while in §3 we demonstrate the absence of any radio PWN at the position of RX J0822–4300 and quantify the consequent upper limits. In §4 we argue that this non-detection implies that RX J0822–4300 must have properties very different from the young energetic pulsars which do power observable PWN.
## 2. Observations
Radio observations towards RX J0822–4300 were made with the Australia Telescope Compact Array (ATCA; Frater, Brooks, & Whiteoak (1992)) in its 0.750D configuration on 1999 July 24/25. In this configuration, the array contains ten baselines in the range 31 m to 719 m, and another five baselines in the range 3750 m to 4469 m. Since these two sets of baselines cannot be easily combined in a single image, this effectively results in two sets of data: one appropriate for imaging extended structure on a wide range of scales (a “large scale” image), and another sensitive only to a narrow range of spatial scales, but at much higher spatial resolution (a “small scale” image).
Two separate observations were made, each of duration 12 h. In the first, data were collected at frequencies of 1.4 and 2.5 GHz, while in the second, a single observation was made centered at 4.8 GHz. At 1.4 and 2.5 GHz the bandwidth was 128 MHz, while at 4.8 GHz the bandwidth was 256 MHz, in all cases divided into 4-MHz channels.
Observations at 1.4/2.5 GHz consisted of a two-point mosaic with mean position centered on RX J0822–4300; at 4.8 GHz, observations consisted of a single pointing, offset 2.5 arcmin to the west of RX J0822–4300 so as to avoid sidelobe contamination from the nearby bright source PMN J0820–4259. Amplitudes were calibrated by observations of PKS B1934–638, assuming flux densities of 14.9, 11.1 and 5.7 Jy at 1.4, 2.5 and 4.8 GHz respectively. Instrumental gains and polarization were determined using regular observations of PKS B0823–500.
## 3. Analysis and Results
After standard editing and calibration using the MIRIAD package, total intensity images were formed at each frequency, using a multi-frequency synthesis approach to both improve the $`uv`$ coverage and minimize the effects of bandwidth smearing (Sault & Wieringa (1994)). At 1.4 and 2.5 GHz, mosaic images were formed using maximum entropy deconvolution, both pointings being deconvolved simultaneously (Sault, Staveley-Smith, & Brouw (1996)). The 4.8-GHz image was deconvolved using the CLEAN algorithm. At each frequency, both “large scale” and “small scale” images were formed. At 2.5 GHz, the two shortest baselines sample emission from the SNR which fills the entire field of view of the “large scale” image and prevents it from being successfully deconvolved; therefore at this frequency these two baselines were not used when forming the “large scale” image.
In all cases, deconvolution was constrained to only act on specific regions of the image, namely the shell component of the SNR (defined using the 0.8 GHz MOST image of Bock, Turtle, & Green (1998)) and background point sources outside the SNR. The resulting model was subtracted from the $`uv`$ data to produce a dataset which contained visibilities corresponding only to emission from the interior of the SNR. This dataset was then imaged, deconvolved, smoothed with a gaussian restoring beam, and corrected for the mean primary beam response of the ATCA antennas.
In all six images (1.4, 2.5 and 4.8 GHz; “small scale” and “large scale”) no radio emission could be seen at or around the position of RX J0822–4300; one such image is shown in Fig 1. To quantify these non-detections we performed a series of simulations, in each of which we modeled the appearance of a PWN by using a circular disk of a given surface brightness and radius, centered on RX J0822–4300. This simple morphology is a reasonable approximation to observed PWN, most of which are centered on their associated pulsar with approximately constant surface brightness across their extent. In each simulation, the Fourier transform of this disk was added to the $`uv`$ data from which the shell emission had been subtracted, and the imaging and deconvolution process was then repeated. (The increase in antenna temperature resulting from the flux of the disk is negligible in all cases.) For a given diameter, we increased the brightness of the simulated disk until it could clearly be distinguished from the underlying noise (this criterion corresponds to a $`5\sigma `$ detection for small diameters, but is closer to $`3\sigma `$ for larger sources). We were thus able to quantify the sensitivity of the data to a PWN as a function of its size, incorporating effects due to non-Gaussian noise in the image, unrelated background sources and the limited range of spatial scales sampled by the interferometer.
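A minimal sketch of the disk model in the Fourier domain is given below; the uniform-disk visibility formula is standard, but the function name and parameter choices are ours, and the actual simulations added this model to the calibrated $`uv`$ data before re-imaging.

```python
import numpy as np
from scipy.special import j1

def disk_visibility(flux_jy, theta_arcsec, baseline_m, freq_hz):
    """Visibility amplitude of a uniform circular disk of total flux
    flux_jy and angular diameter theta_arcsec on a physical baseline."""
    theta = theta_arcsec * np.pi / (180.0 * 3600.0)   # diameter in radians
    u = baseline_m * freq_hz / 2.998e8                # baseline in wavelengths
    x = np.pi * theta * u
    return flux_jy * 2.0 * j1(x) / x                  # -> flux_jy as x -> 0

# A 7 mJy, 30 arcsec disk at 1.4 GHz: essentially unresolved on the short
# ATCA spacings but strongly resolved on the 4.4 km baselines.
print(disk_visibility(7e-3, 30.0, 31.0, 1.4e9))
print(disk_visibility(7e-3, 30.0, 4469.0, 1.4e9))
```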
The results of these simulations are shown in Fig 2. At each frequency, the sensitivity curve consists of four regimes. At the smallest scales, each curve is essentially flat, corresponding to the sensitivity of the “small scale” image to an unresolved source. The curve then increases approximately as $`S_{\mathrm{min}}\propto D^2`$, as expected for an extended source (cf. Fig 2 of Gaensler et al. (2000)); slight deviations from this relation are due to the effects of noise-fluctuations in the data. At a certain scale the sensitivity of the “large scale” image to an unresolved source becomes better than that of the “small scale” image to a resolved source, and the curve becomes flat once more. Finally, at scales which are resolved by the “large scale” image, the curve once again increases proportional to $`D^2`$. The curve at each frequency terminates at the largest scale detectable by the interferometer; note that the 2.5 GHz curve ends prematurely because the two shortest baselines were not used, as discussed above.
## 4. Discussion and Conclusions
To determine whether the limits derived in Fig 2 are constraining, we need to determine an expected size and flux density for a PWN associated with RX J0822–4300. A PWN will expand until the pulsar wind comes into pressure equilibrium with the external pressure. As discussed by Gaensler et al. (2000), there are two possible sources for this pressure: either the external gas pressure, $`p_{\mathrm{gas}}=nkT`$, or the ram pressure produced by the pulsar’s motion, $`p_{\mathrm{ram}}=\rho V^2`$, where $`n`$ and $`\rho `$ are the number and mass density respectively of the ambient medium, $`V`$ is the velocity of the pulsar, and $`T`$ is the temperature of ambient gas.
By modeling the infrared emission from Puppis A, Arendt, Dwek & Petre (1991) derive parameters for the gas interior to the SNR (into which the PWN is expanding) of $`n=1.3`$ cm$`{}^{-3}`$ and $`T=3`$–$`6\times 10^6`$ K; similar values are derived from X-ray spectroscopy (e.g. Winkler et al. (1981); Berthiaume et al. (1994)). This corresponds to a pressure $`p_{\mathrm{gas}}=0.05`$–$`0.11`$ nPa which, if equated with the pressure $`\dot{E}/4\pi r^2c`$ from the pulsar wind (where $`r`$ is the radius of the PWN and $`\dot{E}=1.4\times 10^{37}`$ erg s$`{}^{-1}`$), results in a PWN of diameter 11–16 arcsec at a distance 2.2 kpc.
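Explicitly, with $`1`$ nPa $`=10^{-8}`$ dyn cm$`{}^{-2}`$, pressure balance gives

$$r=\left(\frac{\dot{E}}{4\pi cp_{\mathrm{gas}}}\right)^{1/2}\simeq \left(\frac{1.4\times 10^{37}}{4\pi (3\times 10^{10})(0.5\text{--}1.1)\times 10^{-9}}\right)^{1/2}\;\mathrm{cm}\simeq (1.8\text{--}2.7)\times 10^{17}\;\mathrm{cm},$$

which at $`d=2.2`$ kpc ($`6.8\times 10^{21}`$ cm) corresponds to an angular diameter $`2r/d\simeq 11`$–$`16`$ arcsec.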
For the same number density as used above, and assuming the ambient gas to be composed only of atomic hydrogen, we find that $`p_{\mathrm{ram}}=2.2V_3^2`$ nPa, where $`V=10^3V_3`$ km s$`{}^{-1}`$. Balancing this pressure with that from the pulsar wind (e.g. Gaensler et al. (2000)) results in a PWN of angular diameter $`4V_3^{-1}`$ arcsec.
Since the mean velocity of the pulsar population is $`V_3\simeq 0.38`$ (Cordes & Chernoff (1998)), and in fact in this particular instance the offset of RX J0822–4300 from the dynamical center of the SNR argues that $`V_3>1`$ (Petre, Becker, & Winkler (1996)), it is likely that $`p_{\mathrm{ram}}>p_{\mathrm{gas}}`$, and that the smaller of the two sizes we have just estimated, corresponding to a bow-shock morphology, should apply. We note that in such a case, it is still reasonable to model the PWN as a circular disk, since in observed bow-shock nebulae most of the emission is concentrated close to the head of the nebula. In any case, regardless of the dominant source of confining pressure, the expected extent of a PWN powered by RX J0822–4300 is small. Although we believe the sizes derived above to be robust, we conservatively adopt a maximum angular size for any PWN of 30 arcsec to take into account possible uncertainties in $`V`$, $`n`$, $`T`$, or the distance to the source. From Fig 2, it can be seen that at all three frequencies, the upper limit on the flux density for such a source is $`\sim 7`$ mJy.
Assuming a typical PWN spectral index of $`\alpha =0.3`$ ($`S_\nu \propto \nu ^{-\alpha }`$), an upper limit of 7 mJy at 1.4 GHz corresponds to a broad-band radio luminosity (integrated between 10 MHz and 100 GHz) of $`L_R=2\times 10^{30}`$ erg s$`{}^{-1}`$. Defining $`ϵ\equiv L_R/\dot{E}`$ to be the ratio between a PWN’s broad-band radio luminosity and its spin-down luminosity, we find that for $`\dot{E}=1.4\times 10^{37}`$ erg s$`{}^{-1}`$ as reported by PZT99 and ZTP99, we can derive an upper limit $`ϵ<10^{-7}`$. This is a more stringent limit on $`ϵ`$ than has been derived for almost any other pulsar (cf. Frail & Scharringhausen (1997); Gaensler et al. (2000)). In particular, this upper limit is sharply at odds with the values of $`ϵ`$ seen for other young ($`\lesssim 20`$ kyr) pulsars, almost all of which produce radio PWN or have upper limits consistent with $`ϵ\sim 10^{-4}`$ (Frail & Scharringhausen (1997); Gaensler et al. (2000)). The glaring exception to this is PSR B1509–58 in the SNR G320.4–1.2 (MSH 15–52), which powers an X-ray PWN but for which no radio PWN has yet been detected (Gaensler et al. (1999)). However, this can be understood in terms of the low ambient density ($`n<0.01`$ cm$`{}^{-3}`$), which results in severe adiabatic losses and a consequently underluminous radio PWN (Bhattacharya (1990)). This condition is not satisfied for RX J0822–4300, and so cannot be considered as a possible explanation for the non-detection of a PWN. (Furthermore, PSR B1509–58 powers a bright X-ray PWN (e.g. Seward et al. (1984); Brazier & Becker (1997)), while no X-ray nebula is seen around RX J0822–4300; Pavlov, Sanwal, & Zavlin (2000).) We thus find that any PWN in Puppis A has a radio luminosity three orders of magnitude fainter than expected for the spin parameters derived by PZT99 and ZTP99.
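The luminosity quoted above follows from a simple power-law integration; the short script below is a minimal sketch using only the numbers stated in the text (7 mJy at 1.4 GHz, $`\alpha =0.3`$, 10 MHz–100 GHz, $`d=2.2`$ kpc).

```python
import numpy as np

d = 2.2 * 3.086e21                       # distance in cm
S0, nu0, alpha = 7e-26, 1.4e9, 0.3       # S(nu0) in erg/s/cm^2/Hz; S ~ nu^-alpha
nu1, nu2 = 1e7, 1e11                     # 10 MHz and 100 GHz

flux = S0 * nu0**alpha * (nu2**(1 - alpha) - nu1**(1 - alpha)) / (1 - alpha)
L_R = 4 * np.pi * d**2 * flux
print(L_R, L_R / 1.4e37)                 # ~2e30 erg/s and eps ~ 1e-7
```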
Nevertheless, if we assume, as Brazier & Johnston (1999) have argued, that RX J0822–4300 is a rotation-powered pulsar, what spin parameters can we infer for it? If we require that $`ϵ\sim 10^{-4}`$ as seen for other young pulsars, the maximum value of $`\dot{E}\equiv 4\pi ^2I\dot{P}/P^3`$ which is consistent with our non-detection of a radio PWN is $`10^{33}`$ erg s$`{}^{-1}`$. Meanwhile, it is unlikely that the characteristic age, $`\tau \equiv P/2\dot{P}`$, of the pulsar is more than 50 kyr, $`\sim 10`$ times the true age of the system. These upper limits on $`\dot{E}`$ and $`\tau `$ correspond to lower limits $`P>3.5`$ s, $`\dot{P}>1.1\times 10^{-12}`$ s s$`{}^{-1}`$ and $`B>6.4\times 10^{13}`$ G, parameters which are very similar to those seen for the SGRs/AXPs (Kaspi, Chakrabarty, & Steinberger (1999)) and for the young radio pulsar PSR J1814–1744 (Pivovaroff, Kaspi, & Camilo (2000); Camilo et al. (2000)), but quite different than those of other young pulsars in SNRs, for which typically $`P<0.2`$ s, $`\dot{E}>10^{36}`$ erg s$`{}^{-1}`$ and $`B\sim 10^{12}`$ G.
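For definiteness, eliminating $`\dot{P}`$ between the two definitions gives $`\dot{E}=2\pi ^2I/(\tau P^2)`$, so that for $`\tau =50`$ kyr, $`\dot{E}=10^{33}`$ erg s$`{}^{-1}`$ and $`I=10^{45}`$ g cm$`{}^{2}`$ (again an assumed value):

$$P=\pi \left(\frac{2I}{\tau \dot{E}}\right)^{1/2}\simeq 3.5\;\mathrm{s},\qquad \dot{P}=\frac{P}{2\tau }\simeq 1.1\times 10^{-12}\;\mathrm{s}\;\mathrm{s}^{-1},\qquad B\simeq 3.2\times 10^{19}(P\dot{P})^{1/2}\;\mathrm{G}\simeq 6.4\times 10^{13}\;\mathrm{G}.$$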
Whether RX J0822–4300 indeed has a long initial period and high magnetic field, or has some other properties such that it does not produce a detectable radio nebula or radio pulsations, the lack of a PWN around this source (and around other RQNS such as 1E 1207.4–5209 in the SNR G296.5+10.0; Mereghetti, Bignami, & Caraveo (1996); Giacani et al. (2000)), argues that at least some RQNS have drastically different properties from young radio pulsars.
Brazier & Johnston (1999) list six RQNS which are younger than 20 kyr and nearer than 3.5 kpc. Excluding two RQNS from their list which do power PWN and thus may well be radio pulsars beaming away from us, but including the recently-discovered RQNS in the young and nearby SNR Cassiopeia A (Tananbaum (1999); Pavlov et al. (2000); Chakrabarty et al. (2000)), this implies a Galactic birth-rate for such sources of at least once every $`\sim 200`$ years, comparable to or even in excess of the birth-rate for radio pulsars (e.g. Lyne et al. (1998)). Radio-quiet neutron stars thus point to the possibility that pulsars like the Crab are not the most common manifestation of neutron stars.
We thank Froney Crawford for assistance with the observations, and George Pavlov and Ulrich Geppert for helpful suggestions. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. B.M.G. acknowledges the support of NASA through Hubble Fellowship grant HF-01107.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555. B.W.S. is supported by NWO Spinoza grant 08-0 to E.P.J. van den Heuvel.
|
no-problem/0003/astro-ph0003283.html
|
ar5iv
|
text
|
# INFRARED CLASSIFICATION OF GALACTIC OBJECTS
## 1 INTRODUCTION
The IRAS all-sky survey provides a unique opportunity to classify the infrared properties of astronomical objects from a homogeneous data set obtained with a single facility. Cross-correlations of various catalogues with the IRAS Point Source Catalogue (PSC) showed that certain Galactic objects tend to cluster in well defined regions of IRAS color–color diagrams. Notable examples include HII regions (e.g. Hughes & MacLeod 1989; Wood & Churchwell 1989 \[WC\]) and AGB stars (van der Veen & Habing 1988; VH). The reason for this clustering was not understood, nor was it clear whether such biased analysis based on pre-selection implies reliable selection criteria. From detailed modeling of dusty winds we were able to validate the VH selection criterion proposed for AGB stars and to explain its origin (Ivezić & Elitzur 1995; IE95). Here we extend this approach to all Galactic PSC sources in an unbiased analysis of IRAS fluxes without prior selections.
## 2 IRAS DATA AND ITS CLASSES
There are 6338 PSC sources with flux quality of at least 2 in all four IRAS channels. Removing the 651 sources identified as extra-galactic (Beichman et al 1985) produces our basic data set of 5687 Galactic IR objects. For AGB stars we have shown (IE95) that a high IRAS quality does not guarantee the flux is intrinsic to the point source itself; high-quality 60 and 100 $`\mu `$m PSC fluxes can have a cirrus origin instead. Figure 1 shows that the problem afflicts all sources, not just AGB stars. Its x-axis is cirr3/$`F_{60}`$, a measure of the ratio of cirrus background noise to the 60 $`\mu `$m signal. Intrinsic fluxes should have nothing to do with background emission, yet the \[100–60\] color is strongly correlated with cirr3/$`F_{60}`$ when this noise indicator exceeds a certain threshold. The PSC fluxes of sources above this threshold reflect cirrus, not intrinsic emission. Following IE95, we remove all sources with cirr3 $`>2F_{60}`$ to eliminate cirrus contamination. This leaves 1493 objects that can be considered a reliable representation of Galactic infrared point sources.
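In practice this cut is a one-line mask on the catalogued quantities; the sketch below is ours (the array names are illustrative, with cirr3 and $`F_{60}`$ on the same flux scale).

```python
import numpy as np

def reliable_sources(F60, cirr3):
    """IE95 cirrus cut: keep a source only if the cirrus background
    indicator does not exceed twice the 60 micron flux."""
    F60, cirr3 = np.asarray(F60), np.asarray(cirr3)
    return cirr3 <= 2.0 * F60

print(reliable_sources([10.0, 1.0], [5.0, 8.0]))   # [ True False]
```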
We submitted these data to the program AutoClass for an unbiased search of possible structure. AutoClass employs Bayesian probability analysis to automatically abstract a given data base into classes (Goebel et al 1989). AutoClass detected that the PSC sources belong to four distinct classes that occupy separate regions in the 4-dimensional space spanned by IRAS fluxes. Figure 2a shows the 2-dimensional projection of this space onto the \[100–60\]–\[25–12\] color plane. The IRAS colors of black body emission, which are independent of the black body temperature as long as it exceeds $`\sim 700`$ K, are marked by the large dot. There is hardly any data at that point. Instead, the data are spread far away from it, indicating that IRAS fluxes are dominated by surrounding dust. But rather than random scatter, the data show clear structure, with the four AutoClass classes occupying well defined color regions. As a further check on the reality of these classes we constructed their Galactic distributions, shown in Figure 2b.
Classes A and B clearly separate in the color–color diagram. Classes C and D, on the other hand, are mixed together and are distinguished by their flux levels — class C fluxes are typically 10–100 times higher in all 4 bands. In principle, a single family could produce such behavior if split into nearby and distant objects, but this is not the case here. Extragalactic and heliocentric selection effects are ruled out by the Galactic distributions. Both classes are comprised of Galactic disk objects, but the median flux of class C is 16 times higher. If the two were drawn from the same population, class C sources would be on the average 4 times closer and their Galactic latitude distribution 4 times wider. Instead, the latitude histograms are essentially the same, both are centered on $`b=0\mathrm{°}`$ with full-width at half-maximum $`1.5\mathrm{°}\pm 0.3\mathrm{°}`$ for class C and $`1.5\mathrm{°}\pm 0.1\mathrm{°}`$ for class D. Class D fluxes vary by more than $`10^3`$, consistent with a population distributed throughout the entire Galactic plane. (Class A fluxes have a similar dynamic range.) In contrast, class C fluxes vary by a factor of only $`\sim 20`$ and exceed $`10^3`$ Jy at both 60 and 100 $`\mu `$m for all sources. This is consistent only with a population confined to, at most, $`\sim 5`$ kpc around the Galactic center with a minimal luminosity of $`10^3`$ $`L_{\odot }`$. The Galactic longitude distributions corroborate these conclusions. Although both class C and D sources channel most of their luminosity to the far-IR, their spatial and luminosity distributions are very different.
## 3 THEORY
In IE95 we identified the cause of the particular IRAS colors of AGB stars: The dusty wind problem possesses general scaling properties. For a given dust composition, the solutions are predominantly determined by a single input parameter — the overall optical depth. All other input is largely irrelevant. As a result, the solutions occupy well defined regions in color–color diagrams. Indeed, the IRAS measured colors of AGB stars fall in the regions outlined by wind solutions with visual optical depth $`\tau _V\lesssim 100`$.
In Ivezić & Elitzur (1997; IE97) we showed that scaling is a general property of radiatively heated dust under all circumstances, not just in AGB winds. The most general dust radiative transfer problem contains only two input quantities whose magnitudes matter — $`\tau _V`$ and the dust temperature $`T_1`$ at some point. All other input is defined by dimensionless, normalized profiles that describe (1) the spectral shape of the external radiation, (2) the spectral shape of the dust absorption and scattering coefficients, and (3) the dust spatial distribution. Physical dimensions such as, e.g., luminosity and linear sizes are irrelevant. Scaling applies to arbitrary geometries, and in IE97 we conducted extensive numerical studies of its consequences for spherical shells. With a black body spectral shape for the heating radiation, the temperature $`T`$ hardly affects IRAS colors when $`T>`$ 2000 K. The reason is that $`T`$ is much higher than the Planck equivalent of the shortest IRAS wavelength (12 $`\mu `$m), which is only 240 K. Although this result was obtained in spherical solutions, its validity is general. Since neither its luminosity nor spectral shape matter, the heating source is quite irrelevant for the IRAS colors. Different objects segregate in IRAS color–color diagrams not because they have different central sources, but rather because their dust shells are different.
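This is easy to verify directly from the Planck function: the 25-to-12 $`\mu `$m flux ratio of a black body approaches its Rayleigh–Jeans value, $`(12/25)^2\simeq 0.23`$, once $`T`$ exceeds a few thousand K. The check below is purely illustrative (cgs constants).

```python
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10             # cgs units
B = lambda nu, T: 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

nu12, nu25 = c / 12e-4, c / 25e-4                    # 12 and 25 micron in Hz
for T in [500.0, 1000.0, 2000.0, 5000.0, 2e4]:
    print(T, B(nu25, T) / B(nu12, T))                # -> (12/25)**2 = 0.2304
```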
In our extensive study of spherical shells we examined also the effect of $`T_1`$, which was selected as the dust temperature on the shell inner boundary. As with the heating radiation, and for the same reasons, $`T_1`$ hardly affects IRAS colors when varied over the plausible range of dust sublimation temperatures 700–2000 K (IE97). This leaves the dust’s composition, optical depth and spatial distribution as the only potentially significant reasons for the variation of IRAS colors among different families of Galactic objects. However, dust properties generally do not show large variations among different objects and thus cannot be expected to induce substantial variations in infrared emission. The same applies to $`\tau _V`$, whose range of values is similar in most families. We conclude that objects segregate in IRAS color–color diagrams primarily because their spatial dust distributions are different.
We corroborate these conclusions in detailed modeling with the code DUSTY (Ivezić, Nenkova & Elitzur 1999). In these calculations, a point source emitting as a black body with temperature 5000 K is surrounded by dust whose absorption and scattering coefficients are those of standard interstellar mix. The dust is distributed in a spherical shell whose temperature on the inner boundary is $`T_1`$ = 1000 K. The shell density profile is taken as a power law $`r^{-p}`$, with $`p`$ a free parameter. Scaling ensures that the solution set for each $`p`$ is a one-parameter family characterized by $`\tau _V`$, leading to the distinct tracks shown in the \[60–25\]–\[25–12\] color–color diagram in figure 3. Tracks for different $`p`$ differ in the color-color regions that they cross and in the distance induced by $`\tau _V`$ variation; the steeper the density profile, the smaller is the distance along the track for the same change in $`\tau _V`$. Both properties arise from the differences in relative amounts of material placed at different temperatures, and both are reflected in the data. Class A sources are well explained by the $`p`$ = 2 track, indeed they cluster close to the black-body point. In contrast, classes C and D are explained by tracks of much flatter density distributions and are located rather far from the black-body colors; it takes only $`\tau _V\sim 0.1`$ to move an object along these tracks all the way from the black body point to the region populated by IRAS sources.
Class B cannot be reasonably explained by the same models, even though its colors can be produced by extending the $`p`$ = 2 track beyond $`\tau _V`$ = 100. In addition to the implausible optical depths this requires, class B is unlikely to be the high-$`\tau _V`$ end of class A given their different Galactic distributions. An alternative to $`\tau _V`$-variation is to modify colors by removal of hot dust. The displayed tracks have $`T_1`$ = 1000 K, as appropriate for shells whose inner boundaries are controlled by dust sublimation, but in detached shells $`T_1`$ is both lower and arbitrary. The dashed-line tracks in figure 3 show the effect of lowering $`T_1`$ on the $`p`$ = 2 track. The difference between $`T_1`$ = 1000 K and 300 K is marginal because both are higher than the Planck equivalent of 12 $`\mu `$m. However, further reduction in $`T_1`$ alters the track significantly, adequately explaining class B colors.
## 4 DISCUSSION
Our modeling confirms that the primary reason for different IRAS classes is different dust density distributions. Only 5% of the sources require an additional variation of the dust temperature on the shell inner boundary. Spherical shells were employed here as the simplest method to model extended, three dimensional dust distributions. Power laws were used just for illustration purposes and although their actual values should not be taken literally, they provide a good indication of the overall behavior of the density distributions in the four IRAS classes. The presence of disks, expected in various sources, should not significantly affect our conclusions. The standard disk spectrum $`F_\nu \propto \nu ^{1/3}`$ produces a single point in color–color diagrams and thus cannot affect the observed scatter. This spectrum is modified if the disk is embedded in an extended shell, but then the disk is expected to dominate only at sub-mm and mm wavelengths, longer than those observed by IRAS (Miroshnichenko et al 1999). Flared disk emission can be shown equivalent to that from a flat disk embedded in an appropriate spherical shell. Therefore, such a configuration, too, cannot modify our conclusions.
We have queried the SIMBAD database about our sample sources. SIMBAD identifications are occasionally ambiguous (“maser”), sometimes less than informative (“part of cloud”), and reliability is not always certain. Nevertheless, they provide useful clues when there are major trends in the data. Class A returned the most decisive results — 88% of its members have possible optical identifications, of which roughly 90% are commensurate with AGB stars. Since class A obeys the VH criterion, this corroborates our earlier finding (IE95) that this criterion is both sufficient and necessary for AGB selection. In IE95 we present a thorough analysis of the IRAS fluxes of AGB stars, including color tracks geared specifically for these objects (single chemistry grains, appropriate stellar temperature, etc.). While that analysis was necessary to verify the VH criterion, the $`p`$ = 2 track presented here captures the essence of the more detailed study, demonstrating that the density distribution is the leading factor in controlling IRAS colors. These circumstellar shells have the steepest density distribution, setting them apart in the color–color diagrams. In addition, the color tracks are primarily controlled by a single parameter, $`\tau _V`$, hence the compactness of this class color region.
Class B had the same identification rate but its composition is not as homogeneous. Planetary nebulae comprise 40% of positive and possible identifications. At 13%, the only other significant group is “emission-line stars”, a classification consistent with planetary and reflection nebulae. The remaining identifications span a variety of objects, indicating that detached shells may occasionally form under different circumstances. Significantly, the optical depths required for class B are lower than for the others; the dashed-line tracks in fig. 3 terminate at $`\tau _V`$ = 10.
SIMBAD identification rates for the two other classes are much lower — only 32% for class C and 21% for D. Among the 38 class C identifications, HII regions comprise the only significant group with 15 (40%). In class D, too, HII regions comprise the single largest group with 23% of the identifications, followed by young stellar objects at 11% and planetary nebulae at 10%. These two classes are clearly dominated by star formation and early stages of stellar evolution, in agreement with their Galactic distributions and with previous attempts to associate star-forming regions with IRAS colors. In the most extensive study of this kind, 83% of 1302 IRAS selected sources were found to be embedded in molecular clouds and thus trace star formation, and the selection criterion (\[25–12\] $`\ge 0`$, \[60–25\] $`\ge 0.4`$) essentially identifies classes C and D (Wouterloot & Brand 1989; Wouterloot et al 1990).
The IRAS colors of classes C and D imply dust density distributions flatter than for AGB stars. These colors are spread over large regions, reflecting variation in density profiles in addition to optical depth. The spread in \[100–60\] colors is smaller than in \[60–25\] because all shells are optically thin at both 100 and 60 $`\mu `$m while their 25 $`\mu `$m optical depth can become significant (IE97). Class C colors occupy a sub-region of class D and are produced by the optically-thick end ($`\tau _V\gtrsim 1`$) of flat density distributions. Objects whose colors fall in that region can belong to either class C or D. However, among all sources with class D colors, those with high fluxes ($`>`$ 1000 Jy at both 60 and 100 $`\mu `$m) concentrate in a compact color region, hence the separate class C. Since all class C sources have $`L>`$ $`10^3`$ $`L_{\odot }`$, they are high-mass objects and their concentration in the inner $`\sim 5`$ kpc of the Galaxy shows that the high-mass star formation rate decreases with distance from the Galactic center. This result is in agreement with studies of the initial mass function inside and outside the solar circle (Garmany et al 1982; Wouterloot et al 1995; Casassus et al 1999).
Our results explain the findings of all earlier studies that were based on object pre-selection, and reveal the limitations of that approach. As an example, consider the WC study. After identifying IRAS counterparts of known ultracompact HII regions, WC proposed the corresponding colors as a necessary and sufficient selection for all ultracompact HII regions and proceeded to estimate the birthrate of O stars. Codella et al (1994) then found that most HII regions in a more extended catalog indeed obeyed the WC color criterion. However, that sample also included diffuse, not just compact, HII regions; therefore WC overestimated the O star birthrate. This clearly demonstrates the shortcomings of any classification based on a pre-selected population. The unbiased analysis presented here shows that IRAS colors reflect primarily the dust density profiles of circumstellar shells and provide a unique indication of the underlying object only in the case of AGB stars. IRAS data in itself is sufficient for differentiating young and old stellar objects; apart from a limited number of detached shells, IRAS sources belong to two distinct groups as is evident from both the color–color diagrams and the Galactic distributions: (1) class A sources are at the late stages of stellar evolution and (2) class C and D sources are objects at the early evolutionary stages. This differentiation occurs because the density distributions of dust around young stellar objects have flatter profiles, reflecting the different dynamics that govern the different environments.
Support by NASA and NSF is gratefully acknowledged.
|
no-problem/0003/nucl-th0003067.html
|
ar5iv
|
text
|
# Baryon distribution for high energy heavy ion collisions in a Parton Cascade Model
## I INTRODUCTION
Heavy ion experiments at BNL-AGS and CERN-SPS have been performed, motivated by the possible creation of the QCD phase transition, and a vast body of systematic data, such as proton, pion, and strangeness particle distributions, HBT correlations, flow, dileptons and $`J/\psi `$ distributions, has been accumulated, including mass dependence and excitation functions . Data from the forthcoming experiment at BNL-RHIC will be available soon.
Strong stopping of nuclei has been reported both at AGS and at SPS energies . It has been reported that the baryon stopping power can be understood within hadronic models if one considers multiple scattering of nucleons with a reasonable $`pp`$ energy loss . For example, within string based models , the baryon stopping behavior at SPS energies is well explained by introducing a diquark breaking mechanism in which the diquark sitting at the end of the string breaks. Diquark breaking leads to large rapidity shifts of the baryon. Constituent quark scattering within a formation time has to be considered in order to generate Glauber-type multiple collisions at the initial stage of nuclear collisions in microscopic transport models which describe the full space-time evolution of particles.
Event generators based on perturbative QCD (pQCD), such as HIJING (Heavy Ion Jet Interaction Generator) and VNI (Vincent Le Cucurullo Con Giginello), have been proposed in order to describe ultra-relativistic heavy ion collisions, emphasizing the importance of mini-jet production. VNI can follow the space-time history of partons and hadrons. The parton cascade model of VNI has been applied to study several aspects of heavy-ion collisions, even at SPS energies . However, the original version of VNI implicitly assumed a baryon free region at mid-rapidity during the formation of hadrons, because only two-parton (mesonic) cluster formation is included in the Monte-Carlo event generator VNI .
In this work, the baryon distributions at SPS and RHIC energies are discussed using a modified version of the parton cascade simulation code VNI . The main new features of the parton cascade model used here are the implementation of baryonic cluster formation and the production of higher hadronic resonance states in the parton/beam cluster decay, which make it possible to calculate baryon distributions in heavy ion collisions.
## II PARTON CASCADE MODEL
First of all, the main features of the parton cascade model of VNI as well as the main points of the modification will be presented. Relativistic transport equations for partons based on QCD are the basic equations which are solved on the computer in the parton cascade model. The hadronization mechanism is described in terms of the dynamical parton-hadron conversion model of Ellis and Geiger . The main features in the Monte Carlo procedure are summarized as follows.
1) The initial longitudinal momenta of the partons are sampled according to the measured nucleon structure function $`f(x,Q_0^2)`$ with initial resolution scale $`Q_0`$. We take GRV94LO (lowest order fit) for the nucleon structure function. The primordial transverse momenta of partons are generated according to a Gaussian distribution with mean value $`p_{\perp }=0.44`$ GeV. The individual nucleons are assigned positions according to a Fermi distribution for nuclei, and the positions of partons are distributed around the centers of their mother nucleons with an exponential distribution with a mean square radius of 0.81 fm.
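A sketch of this sampling step is given below; interpreting the quoted 0.44 GeV as the mean of $`|p_{\perp }|`$ for a two-dimensional Gaussian, and 0.81 as $`r^2`$ (in fm$`{}^{2}`$) for the exponential profile, are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def primordial_kt(n, mean_kt=0.44):
    # 2D Gaussian in (kx, ky); sigma fixed so that <|k_T|> = mean_kt (GeV)
    sigma = mean_kt / np.sqrt(np.pi / 2.0)
    return np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))

def parton_radii(n, r2_mean=0.81):
    # 3D density ~ exp(-r/a): P(r) ~ r^2 exp(-r/a), a Gamma(3, a) law,
    # and <r^2> = 12 a^2 fixes the scale a (fm)
    a = np.sqrt(r2_mean / 12.0)
    return rng.gamma(3.0, a, n)

print(primordial_kt(100000).mean())        # ~0.44 GeV
print((parton_radii(100000)**2).mean())    # ~0.81 fm^2
```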
2) With the above construction of the initial state, the parton cascading development proceeds. Parton scatterings are simulated using the closest-distance-approach method, in which a parton-parton two-body collision takes place if the impact parameter becomes less than $`\sqrt{\sigma /\pi }`$, where $`\sigma `$ represents the parton-parton scattering cross section calculated by pQCD within the Born approximation. Both spacelike and timelike radiation corrections are included within the leading logarithmic approximation. Elementary $`2\to 2`$ scatterings, $`1\to 2`$ emissions and $`2\to 1`$ fusions are included in the parton cascading.
3) Parton clusters are formed from secondary partons that have been produced by the hard interaction and parton branching. The probability $`\mathrm{\Pi }`$ of parton coalescence to form a color-neutral cluster is defined as
$$\mathrm{\Pi }_{ij\to C}=\{\begin{array}{ccc}0,\hfill & L_{ij}\le L_0,\hfill & \\ 1-\mathrm{exp}\left(\frac{L_0-L_{ij}}{L_c-L_{ij}}\right),\hfill & L_0<L_{ij}\le L_c,\hfill & \\ 1,\hfill & L_{ij}>L_c,\hfill & \end{array}$$
(1)
where $`L_c=0.8`$ fm is the value for the confinement length scale and $`L_0=0.6`$ fm is introduced to account for the finite transition region. $`L_{ij}`$ is defined as the distance between parton $`i`$ and its nearest neighbor $`j`$:
$$L_{ij}\equiv \mathrm{min}(\mathrm{\Delta }_{i1},\dots ,\mathrm{\Delta }_{ij},\dots ,\mathrm{\Delta }_{in}),$$
(2)
where $`\mathrm{\Delta }_{ij}=\sqrt{-(r_i-r_j)^\mu (r_i-r_j)_\mu }`$ is the Lorentz-invariant distance between partons $`i`$ and $`j`$. So far, only the following two-parton coalescence processes
$`g+g\to C_1+C_2,\;g+g\to C+g,\;g+g\to C+g+g,`$ (3)
$`q+\overline{q}\to C_1+C_2,\;q+\overline{q}\to C+g,`$ (4)
$`q+g\to C+q,\;q+g\to C+g+q.`$ (5)
have been considered in the VNI model. In this work, if diquarks are formed with the above formation probability, baryonic cluster formation is included as
$`qq+q\to C,`$ (6)
$`\overline{q}\overline{q}+\overline{q}\to C,`$ (7)
$`q_1q_2+\overline{q}_3\to q_1\overline{q}_3+q_2,`$ (8)
$`q_1q_2+g\to q_1q_2q_3+\overline{q}_3.`$ (9)
Note that by introducing those cluster formation processes, we do not introduce any new parameters into the model.
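For concreteness, the coalescence criterion of Eq. (1) with the quoted length scales reads as follows (a direct transcription; the vectorized form is our choice).

```python
import numpy as np

L0, LC = 0.6, 0.8   # fm, the transition and confinement length scales

def coalescence_prob(L):
    """Coalescence probability Pi(L_ij) of Eq. (1)."""
    L = np.asarray(L, dtype=float)
    out = np.ones_like(L)                          # L > L_c  -> 1
    out[L <= L0] = 0.0                             # L <= L_0 -> 0
    mid = (L > L0) & (L < LC)                      # transition region
    out[mid] = 1.0 - np.exp((L0 - L[mid]) / (LC - L[mid]))
    return out

print(coalescence_prob([0.5, 0.7, 0.9]))   # [0.    0.632  1.   ]
```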
4) Beam clusters are formed from primary partons (remnant partons) which do not interact during the evolution even though they travel in the overlapping region of nuclei. They may be considered as the coherent relics of the original hadron wavefunctions, and should have had soft interactions. Those underlying soft interactions are simulated by the beam cluster decay into hadrons in VNI because of the additional possibility that several parton pairs undergo soft interactions. This may give a non-negligible contribution to the ‘underlying event structure’ even at collider energies. The primary partons are grouped together to form a massive beam cluster with its four-momentum given by the sum of the parton momenta and its position given by the 3-vector mean of the partons’ positions.
5) The decay probability density of each parton cluster into final state hadrons, including hadronic resonances, is chosen to be a Hagedorn density of states. The appropriate spin, flavor, and phase-space factors are also taken into account. In the decay of parton/beam clusters, higher hadronic resonance states up to a mass of 2 GeV can be produced in our model.
To summarize, the main points of difference from the original version are 1) baryonic cluster formation; 2) inclusion of higher hadronic resonances up to a mass of 2 GeV; 3) exact conservation of flavor quantum numbers (baryon number, charge, etc.); 4) reasonable total momentum conservation: total momentum is conserved within 10% at RHIC energy for central Au+Au collisions.
## III RESULTS
### A Elementary collisions
Since our version of the parton cascade code differs from the original version of VNI, we have to check the model parameters. First, particle spectra from $`p\overline{p}`$ collisions at $`\sqrt{s}=200`$ GeV calculated by the modified version of VNI are studied to see the model parameter dependence. Here we examine the $`K`$-factor dependence, as mentioned in Ref. . In Fig. 1, experimental data on pseudorapidity distributions (left panel) and the invariant cross sections (right panel) are compared to the calculation of the parton cascade model with different treatments of the so-called $`K`$-factor. The calculations (upper three figures) are done by multiplying the leading-order pQCD cross sections by a constant factor:
$$\sigma _{pQCD}(Q^2)=K\times \sigma ^{LO}(Q^2)$$
with the values $`K=1,2,2.5`$, while the bottom figure corresponds to a calculation changing the $`Q^2`$ scale in the running coupling constant $`\alpha _s`$ as
$$\sigma _{pQCD}(Q^2)=\sigma ^{LO}(\alpha _s(\eta Q^2))$$
with the value $`\eta =0.075`$. We also plot the contribution from parton cluster decay in the left panel with dotted lines. The contribution of parton cluster decay, which comes from the coalescence of interacted partons, changes according to the choice of the correction scheme. We can fit the $`p\overline{p}`$ data on pseudorapidity distributions with the different correction schemes, as seen in Fig. 1, by changing the parameter (in the actual code, parv(91)) which controls the multiplicity from the beam cluster. We have to check the model against various elementary data, including the incident energy dependence, in order to fix the model parameters. Next we will present some results on nuclear collisions with those parameters.
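The two schemes act very differently on the Born cross sections, which scale as $`\alpha _s^2`$. The comparison below uses a one-loop running coupling with $`\mathrm{\Lambda }=0.2`$ GeV and $`n_f=4`$; these are illustrative assumptions, not necessarily the exact choices made in VNI.

```python
import numpy as np

LAMBDA, NF = 0.2, 4                     # GeV; illustrative one-loop values

def alpha_s(Q2):
    b0 = (33.0 - 2.0 * NF) / (12.0 * np.pi)
    return 1.0 / (b0 * np.log(Q2 / LAMBDA**2))

# Effective enhancement of a Born cross section (~ alpha_s^2) at Q^2 = 10 GeV^2
Q2, eta, K = 10.0, 0.075, 2.0
print(K)                                        # constant K-factor scheme
print((alpha_s(eta * Q2) / alpha_s(Q2))**2)     # ~3.5, eta-rescaling scheme
```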
### B Comparison with SPS data
The baryon stopping problem is one of the important elements in nucleus-nucleus collisions. The original version of VNI implicitly assumed a baryon free region at mid-rapidity, because baryonic parton cluster formation is not included. Baryons come only from the beam cluster, not from parton cluster formation, in the original version of VNI. We can now discuss the baryon stopping problem with our modified version of VNI.
We have calculated the net proton distribution at SPS energy to show the reliability of the modeling of beam cluster formation in the parton cascade model. Fig. 2 compares the parton cascade calculation of net protons for Pb+Pb collisions at the laboratory energy of $`E_{lab}=158`$ AGeV with the $`K`$-factor of 1.0 (the original version uses $`\eta =0.035`$) with the data . It is seen that the contribution from parton clusters is negligibly small; thus the baryon stopping behavior is fully explained by soft physics (in this case, beam cluster decay) when we choose the $`K`$-factor of 1.0 at SPS energies. It should be noted that there is no microscopic dynamics in the modeling of the beam cluster formation in the parton cascade model; it is a simple fit to the data of $`pp`$ collisions.
### C Predictions for RHIC
The $`K`$-factor dependence of both the net proton and charged particle rapidity distributions is studied in Fig. 3. In terms of the net proton distribution, there is no strong $`K`$-factor dependence. We can see that parton cluster formation and its decay predict an almost baryon free mid-rapidity region regardless of the choice of $`K`$-factor, though there are lots of protons and antiprotons at mid-rapidity. We conclude that hard parton scattering plays no role in the baryon stopping within a parton cascade model. However, note that the string based model HIJING/B predicts a proton rapidity density of 10 and UrQMD predicts 12.5 at mid-rapidity. However, as pointed out in Ref. , the charged hadron multiplicity strongly depends on how one chooses the leading order correction scheme.
Fig. 4 displays the net baryon number distributions as a function of rapidity obtained from the parton distribution of the parton cascade before hadronization, with the $`K`$-factor of 1 (left) and 2.5 (right). The net baryon number of time-like partons is distributed around the mid-rapidity region, but its contribution is small, consistent with the net proton distribution in Fig. 3.
## IV SUMMARY
In summary, we have first checked that different treatments for the inclusion of higher-order pQCD corrections in the parton cascade model can fit elementary $`p\overline{p}`$ collisions. Other elementary processes have to be checked in order to fix the model parameters. We showed the net proton rapidity distribution at SPS energies to demonstrate that the beam cluster treats the underlying soft physics in the parton cascade model reasonably well for nucleus-nucleus collisions. We then calculated the net proton rapidity distribution at RHIC energy, as well as charged particle distributions, using the modified version of the parton cascade code VNI in which we newly introduced baryonic parton cluster formation and higher hadronic resonance states from the decay of parton and beam clusters. Within a framework of perturbative parton cascading and a dynamical hadronization scheme, we predict an almost baryon-free plasma at RHIC energy. The charged particle rapidity distributions were also studied with the parameter set fitted to $`p\overline{p}`$ collisions. A strong $`K`$-factor dependence of the hadron multiplicity is seen, as previously found in Ref. ; we cannot fix the $`K`$-factor from the rapidity and transverse momentum distributions of $`p\overline{p}`$ collisions alone.
In this work, we consider only two- or three-parton coalescence, but in the dense parton matter produced in heavy ion collisions this assumption might break down. Inverse processes, such as the conversion of a hadron back into partons, $`C→q\overline{q}`$, are also ignored; these might become important at higher colliding energies.
## ACKNOWLEDGMENTS
This work would have been done in collaboration with Klaus Geiger, had he not perished in the air crash. I would like to thank Dr. S. A. Bass and Prof. R. S. Longacre for careful reading of this paper and useful comments. I am indebted to S. Ohta for encouragement and useful comments.
# Nonlinear electrodynamics of p-wave superconductors
## I Introduction
The number and variety of superconducting materials for which evidence of exotic Cooper pairing (i.e., pairing in a state other than the usual s-wave) exists is constantly increasing. For high temperature superconducting oxides (HTSC’s) the consensus is indeed that the pairing state is, in nearly all cases, at least predominantly d-wave, specifically of the $`d_{x^2-y^2}`$ form, with lines of nodes. Rather persuasive (although not conclusive) evidence, in the form of both experiments and theoretical arguments, has recently been brought forward for p-wave superconductivity in $`\mathrm{Sr}_2\mathrm{RuO}_4`$. The pairing state currently favored by many is of the same form as that of the A phase in $`{}_{}{}^{3}\mathrm{He}`$, which has point nodes. Several heavy fermion (HF) materials, the discovery of which predated that of HTSC’s but for which determinations of the pairing state have proved harder to achieve, are now also believed, with varying degrees of certainty, to belong in the exotic camp. There are also results indicating that superconducting families of organic salts such as $`\kappa (\mathrm{BEDT}\mathrm{TTF})_2\mathrm{Cu}(\mathrm{NCS})_2`$ and $`(\mathrm{TMTSF})_2\mathrm{X}`$ ($`\mathrm{X}=\mathrm{PF}_6,\mathrm{ClO}_4`$, etc.) exhibit unconventional superconductivity. In some cases it has been argued that the pairing appears to be p-wave.
Determination of pairing states is not easy, particularly if one wishes to know more details than merely their overall symmetry. Even in the best studied HTSC’s, questions such as what is the angle between lines of nodes in orthorhombic compounds, or whether true nodes, rather than very deep minima, exist, are still matters for occasionally heated debate. The situation is much worse for the other materials mentioned, where the evidence is much more preliminary, and at times contradictory. The determination of the pairing state is often hampered by difficulties in interpreting results. Regions (points or lines) where the energy gap vanishes are often the signature of exotic pairing (but not invariably; the B phase of $`{}_{}{}^{3}\mathrm{He}`$ is a well-known counterexample). These “gap nodes” lead to various power law behaviors for quantities that otherwise would behave exponentially with temperature, but sometimes there are alternative explanations for the power laws. It is, moreover, difficult to distinguish between experimental outcomes arising from zeroes in the energy gap and those arising only from strong anisotropy. An additional complication is that for non s-wave superconducting materials, the order parameter (OP) state at the surface may easily differ from that in the bulk.
It is therefore important to study probes of the OP symmetry able to discern as unambiguously as possible details of the pairing state, such as the existence, nature and position of the bulk OP nodes. One such probe is afforded by the nonlinear Maxwell-London electrodynamics of exotic pairing states in the Meissner regime. Electrodynamic effects probe the sample over a scale determined by the penetration depth $`\lambda `$, which is large for the materials of interest. It was pointed out in the context of d-wave superconductivity that order parameter nodes lead to observable nonlinear effects at low temperatures, the chief quantities of experimental interest being the magnetic field dependent penetration depth $`\lambda (H)`$, the nonlinear transverse component of the magnetic moment, $`m_{\perp }`$, induced by the application of a magnetic field, and the torque associated with this transverse moment. Further developments of the method, always in the context of predominantly d-wave superconductivity, showed that it can be used to perform node spectroscopy, that is, to infer in detail the angular structure of the regions where the order parameter vanishes (nodes) or is very small (“quasinodes”).
These developments took place within the study of the high temperature oxide superconductors. For these materials, the temperature scales as set by $`T_c`$ are higher and achieving the required low temperature conditions is very easy. However, recent improvements in experimental techniques involving torsional oscillators and torque magnetometry make it possible to measure extremely small moments and torques at dilution refrigerator temperatures. Experiments to accurately measure $`\lambda `$ in that temperature range are also being planned. Thus, the relevant region for performing nonlinear electrodynamics experiments in low $`T_c`$ materials is becoming accessible.
With this in mind, we take up in this work the question of the use of methods based on nonlinear electrodynamics to study exotic superconducting materials, other than HTSC’s. Specifically, we will consider here simple OP’s both with point nodes and with three-dimensional nodal lines, as would occur for example in p-wave superconductivity. Our efforts will focus on the calculation of the dependence of $`m_{\perp }`$ (or its associated torque) on the magnetic field and the appropriate angle of rotation. We also compute the field dependence of the low temperature penetration depth. We will present estimates based on published values of the relevant material parameters showing that the required measurements appear to be technically feasible. These estimates are presented, not to prejudge the pairing state associated with any material, but rather to show the expected signal if the material indeed does have the assumed OP.
In the next Section we introduce the geometries and the order parameter forms that we study. We then calculate the nonlinear relation between current and superfluid flow field, using an extension of the three dimensional methods of Ref. . From these relations, we obtain the physical quantities of interest, through the appropriate generalization of existing perturbation methods. In Section III we summarize our results, and consider the question of the experimental feasibility of using this method on several materials. We conclude with a discussion of the advantages and limitations of the method and of the specific treatment presented in this work.
## II Methods and Results
### A Maxwell-London Electrodynamics
We first briefly outline the nonlinear Maxwell-London equations, on which our method is built. We will not dwell on details that were discussed elsewhere. When a magnetic field $`𝐇_a`$ is applied to a superconductor a superfluid flow field $`𝐯(𝐫)`$ is set up. The relation between $`𝐯(𝐫)`$ and the local magnetic field $`𝐇(𝐫)`$ is given by the second London equation:
$$∇\times 𝐯=\frac{e}{c}𝐇.$$
(1)
where $`e`$ is the proton charge. Ampère’s law for steady-state currents, $`∇\times 𝐇=\frac{4\pi }{c}𝐣`$, can be combined with Eq. (1) to obtain:
$$∇\times ∇\times 𝐯=\frac{4\pi e}{c^2}𝐣(𝐯).$$
(2)
In this equation there are still two unknown fields. For a solution to be obtained, the functional relationship between $`𝐣`$ and $`𝐯`$ is needed. This can be found by using the two-fluid model. The quasiparticle excitation spectrum, $`E(ϵ)=(ϵ^2+\left|\mathrm{\Delta }(s)\right|^2)^{1/2}`$, is modified by a Doppler shift to $`E(ϵ)+𝐯_f⋅𝐯`$. Here $`ϵ`$ is the quasiparticle energy referred to the Fermi surface, $`\mathrm{\Delta }(s)`$ denotes the OP dependence on the point $`s`$ on the Fermi surface, and $`𝐯_f`$ is the Fermi velocity. This leads to a relation between $`𝐣`$ and $`𝐯`$ of the form
$$𝐣(𝐯)=𝐣_{lin}(𝐯)+𝐣_{nl}(𝐯).$$
(3)
After some algebra, the linear and nonlinear parts can be written, respectively, as:
$$𝐣_{lin}(𝐯)=-eN_f∫_{FS}d^2sn(s)𝐯_f(𝐯_f⋅𝐯),$$
(5)
$$𝐣_{nl}(𝐯)=-2eN_f∫_{FS}d^2sn(s)𝐯_f∫_0^∞𝑑ϵf(E(ϵ)+𝐯_f⋅𝐯),$$
(6)
where $`N_f`$ is the total density of states at the Fermi level, $`n(s)`$ the local density of states at the Fermi surface (FS), normalized to unity, and $`f`$ the Fermi function. The first term in (3), given by (5), is the usual linear relation $`𝐣_{lin}=-e\stackrel{~}{\rho }𝐯`$, where $`\stackrel{~}{\rho }`$ is the superfluid density tensor. At $`T=0`$ the nonlinear (in $`𝐯`$) corrections, described by (6), can be written as:
$$𝐣_{nl}(𝐯)=-2eN_f∫_{FS}d^2sn(s)𝐯_f\mathrm{\Theta }(-𝐯_f⋅𝐯-\left|\mathrm{\Delta }(s)\right|)[(𝐯_f⋅𝐯)^2-\left|\mathrm{\Delta }(s)\right|^2]^{1/2},$$
(7)
which is valid for any $`\mathrm{\Delta }(s)`$. The key point is that the step function in Eq.(7) restricts the integration over the FS by
$$\left|\mathrm{\Delta }(s)\right|+𝐯_f⋅𝐯<0.$$
(8)
Thus, when the OP has nodes (or very deep minima) in the Fermi surface, only regions near these nodes participate in populating the quasiparticle spectrum. The integration is dominated by contributions from these regions, and can be written as a sum over local contributions from each of them. The values of the Fermi velocity in the integrand can be replaced by their local values at the corresponding nodal region. We need, therefore, more information on the geometry of the superconductor and the angular dependence of the order parameter to carry out the above integration.
### B Geometry and order parameters
We will consider superconducting materials of orthorhombic or higher symmetry and denote the crystallographic axes as $`a`$, $`b`$ and $`c`$, corresponding as customary to the $`x`$, $`y`$ and $`z`$ directions. We will assume that the samples are infinite in a plane parallel to the direction of the applied field, and of thickness $`d`$ in the direction normal to this plane. This allows us to solve (2) and (3) analytically. Effects of the sample finite extension in the plane would have to be taken into account numerically, but this is unnecessary since it has been shown in the context of d-wave pairing, that such effects merely lead to a small increase in the amplitude of the nonlinear signal, and to no change in its angular or field dependence. On the other hand, the effects of the thickness $`d`$ are very important and we will include them fully.
We will consider two simple types of p-wave OP’s in this paper. The first is representative of the case where the OP has point nodes, as might occur in $`\mathrm{Sr}_2\mathrm{RuO}_4`$. Up to a phase factor (the nonlinear electrodynamic effects depend only on the absolute value of the OP) we write for the angular dependence of the OP near the nodes:
$$\mathrm{\Delta }(\theta )=\mathrm{\Delta }_0\mathrm{sin}(\theta ),$$
(9)
where $`\theta `$ is a polar angle. Only the local properties at the nodes are important: the form (9) is assumed only near the nodes, e.g. near $`\theta =0`$ it means $`\mathrm{\Delta }(\theta )≈\mathrm{\Delta }_0\theta `$. Thus, the parameter $`\mathrm{\Delta }_0`$ must be thought of as the slope of the OP near the node, rather than its maximum value. The second type we will consider is a prototype of OP’s with line nodes, as they might occur in some heavy fermion compounds or even in $`\mathrm{Sr}_2\mathrm{RuO}_4`$. Again, up to an unimportant phase factor, we assume the form for the angular dependence near the nodal line:
$$\mathrm{\Delta }(\theta )=\mathrm{\Delta }_0\mathrm{cos}(\theta ),$$
(10)
where the above warning as to the interpretation of $`\mathrm{\Delta }_0`$ as the slope near the nodal region must be repeated. These two forms are archetypes for the possibly more intricate forms of the angular dependence of the OP in real materials. In non s-wave superconductors, the OP need not belong to a one-dimensional representation. Because of this, there may be OP collective modes and internal structure effects that are not included in our considerations. The angular dependence of the OP, however, will in a solid be very strongly pinned by crystal effects (this is obviously not the case in liquid $`{}_{}{}^{3}\mathrm{He}`$). The internal structure of the OP should then not affect our nonlinear results, since only the application of small dc fields is involved.
The experimental setup we envision would involve applying a field parallel to the $`ac`$ plane, with the direction normal to the slab being along the $`b`$ axis. The sample would then be rotated about the $`b`$ axis while the magnetic field remains fixed. The currents then flow in various directions depending on their orientation relative to the OP under consideration. We will denote by $`\psi `$ the angle between the applied field $`𝐇_a`$ and the $`z`$ axis and we will investigate the angular dependence of the transverse magnetic moment or the torque as a function of $`\psi `$. We will also calculate the field dependence of the penetration depth for the directions of symmetry.
In this geometry we can solve the problem analytically. When $`𝐇_a`$ is applied in the $`ac`$ plane, the fields have only $`x`$ and $`z`$ components, which depend only on the coordinate $`y`$. Eq. (2) then reduces to
$$\frac{d^2𝐯}{dy^2}+\frac{4\pi e}{c^2}𝐣(𝐯)=0.$$
(11)
For our given geometry, $`j_{nli}`$ and $`v_i`$ have odd parity with respect to the $`y`$ coordinate, so it is sufficient to solve the boundary value problem for $`y0`$. The two required boundary conditions are:
$`{\displaystyle \frac{c}{e}}(∇\times 𝐯)|_{y=d/2}`$ $`=`$ $`𝐇_a,`$ (13)
$`𝐯|_{y=0}`$ $`=`$ $`0.`$ (14)
We can now proceed to explicitly calculating the nonlinear currents.
### C Nonlinear currents
First, we carry out the integration in Eq. (7). This can be performed exactly on a three dimensional Fermi surface, without the need to have recourse to the approximations discussed in Appendix A of Ref. . The relevant regions of integration as discussed below (8) are contained within a small range near the nodes, with boundaries that can be expressed in terms of limiting angles $`\theta _c`$, as determined from $`(𝐯_f⋅𝐯)^2=\left|\mathrm{\Delta }(\theta _c)\right|^2`$.
Consider first the OP given in Eq.(9). In this case $`𝐯_f(s)`$ can be replaced in the relevant regions of the integrand by its local value, along the $`z`$ axis, at the nodes. By symmetry, we can restrict ourselves to the node at $`\theta =0`$ since the contribution from $`\theta =\pi `$ is identical. Thus, we have $`𝐯_f≈(0,0,v_{fz})`$, and the restriction (8) means that in performing the integral in (7), we can replace $`∫_{FS}d^2sn(s)`$ by $`∫_{\mathrm{\Omega }_c}𝑑\varphi \theta 𝑑\theta /4\pi `$, where $`\mathrm{\Omega }_c`$ denotes the region $`|\theta |<\theta _c`$, with $`\theta _c^2=(v_{fz}v_z)^2/\mathrm{\Delta }_0^2`$, and with no restrictions on $`\varphi `$. It is easy to see that this yields only a $`z`$-component to the nonlinear current. The integrals are elementary and one finds:
$$j_{nlz}=\frac{1}{3}eN_f\frac{v_{fz}^4}{\mathrm{\Delta }_0^2}v_z^3,$$
(15)
where $`\mathrm{\Delta }_0`$ is the local gap slope.
For the case of an order parameter as given in (10), where the nodal line is at $`\theta =\pi /2`$, we can take $`v_{fz}=0`$ over the region of integration, which is then limited to $`|\theta -\pi /2|<\theta _c`$, where $`(\theta _c-\pi /2)^2=\left(\mathrm{cos}\alpha v_fv_{\perp }/\mathrm{\Delta }_0\right)^2`$. Here $`v_{\perp }`$ is the projection of $`𝐯`$ on the $`xy`$ plane, and $`\alpha `$ the angle between $`v_{\perp }`$ and the in-plane $`v_f`$. We make the replacement $`∫_{FS}d^2sn(s)→∫_{\mathrm{\Omega }_c}𝑑\varphi 𝑑\theta /4\pi `$, where $`\mathrm{\Omega }_c`$ is the region of integration as defined by $`\theta _c`$. For orthorhombic symmetry, $`v_f`$ depends on $`\varphi `$. We then transform the integral over $`\varphi `$ to one over $`\alpha `$ using the relation $`\varphi =\beta +\alpha `$, where $`\beta `$ is the (fixed) angle $`v_{\perp }`$ makes with the $`x`$ axis, and $`\alpha `$ is restricted (from Eq. (8)) to $`\pi /2<\alpha <3\pi /2`$. The integration is lengthier but straightforward and results in two components to the current. The $`x`$ component is:
$$j_{nlx}=\frac{1}{3}eN_f\frac{v_{fx}^3}{\mathrm{\Delta }_0}v_x\left(v_x^2+v_y^2\right)^{1/2}\mathrm{\Lambda }_x(𝐯),$$
(16)
where $`\mathrm{\Lambda }_x(𝐯)≡(1/5)[(3+2\delta ^2)+(1-\delta ^2)(v_x^2/(v_x^2+v_y^2))]`$ ($`\mathrm{\Lambda }_x=1`$ when the material has tetragonal symmetry), $`v_{fx}`$ is the Fermi speed along the $`x`$ axis and the $`ab`$ plane anisotropy is characterized by $`\delta ≡\lambda _x/\lambda _y`$, where $`\lambda _i`$ denotes the penetration depth along the $`i`$ direction. The $`y`$ component, $`j_{nly}`$, is obtained by making the obvious replacements in (16). The coefficients in (15) and (16) are given in terms of the local values of the Fermi velocity and are therefore independent of the detailed shape of the Fermi surface.
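As a consistency check on Eq. (15), the following sketch (our own, in assumed units $`e=N_f=v_{fz}=\mathrm{\Delta }_0=1`$) evaluates the restricted Fermi-surface integral of Eq. (7) numerically for the point-node gap of Eq. (9); the ratio $`j_{nlz}/v_z^3`$ should approach $`1/3`$ as $`v_z→0`$.

```python
import numpy as np

def j_nl_z(vz, n=200_001):
    """Quadrature of Eq. (7) for flow v along z, gap Delta_0*sin(theta)."""
    theta = np.linspace(0.0, np.pi, n)
    vfv = vz * np.cos(theta)                   # v_f . v on the unit sphere
    gap = np.abs(np.sin(theta))                # |Delta(theta)| with Delta_0 = 1
    occupied = (gap + vfv) < 0.0               # restriction, Eq. (8)
    root = np.sqrt(np.maximum(vfv**2 - gap**2, 0.0))
    g = np.where(occupied, np.cos(theta) * root, 0.0) * np.sin(theta)
    # prefactor -2 e N_f times (2*pi/4*pi) from the trivial phi integral;
    # trapezoidal rule written out by hand
    dth = theta[1] - theta[0]
    return -np.sum(0.5 * (g[:-1] + g[1:])) * dth

for vz in (0.05, 0.1, 0.2):
    print(vz, j_nl_z(vz) / vz**3)   # tends to 1/3 as vz -> 0, cf. Eq. (15)
```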
These expressions for $`𝐣`$ as a function of $`𝐯`$ can be inserted into Eq. (11), which then becomes a nonlinear differential equation in terms of the flow field only. Implementing a perturbation scheme to lowest order in the flow field will allow exact expressions for both OP’s to be obtained. This is addressed in the next subsection.
### D Perturbation solution
For an OP of the form (9), we can insert Eq. (15) into Eq. (11). We can then write the equation for the component carrying the nonlinear term as:
$$\frac{∂^2v_z}{∂Y_z^2}-v_z+\left(\frac{v_z}{v_{cz}}\right)^2v_z=0,$$
(17)
where we have introduced the dimensionless coordinate $`Y_i≡y/\lambda _i`$. The local critical velocity is defined as $`v_{ci}≡\mathrm{\Delta }_0/v_{fi}`$ (for $`i=x,y,z`$), and we have used the three dimensional relation $`1/\lambda _i^2=(4\pi e^2/3c^2)N_fv_{fi}^2`$. From Eq. (13), the boundary condition at the surface $`Y_i=Y_{is}≡d/(2\lambda _i)`$ can be written in terms of our new variables:
$$\frac{∂v_x}{∂Y_x}|_{Y=Y_{xs}}=\frac{e\lambda _x}{c}H_a\mathrm{cos}\psi ,\frac{∂v_z}{∂Y_z}|_{Y_z=Y_{zs}}=\frac{e\lambda _z}{c}H_a\mathrm{sin}\psi .$$
(18)
We now expand $`v_z(Y_z)`$ to first order in the parameter $`\alpha _z=(v_{fz}/\mathrm{\Delta }_0)^2`$, which is small in the typical experimental situations. We write $`v_z(Y_z)=v_{0z}(Y_z)+\alpha _zv_{1z}(Y_z)`$. To zeroth order, we have the usual linear equation:
$$\frac{∂^2v_{0z}}{∂Y_z^2}-v_{0z}=0,$$
(19)
with $`v_{0z}`$ satisfying the boundary conditions (18), (13). The solution is:
$$v_{0z}(Y_z)=c_z\mathrm{sinh}(Y_z),$$
(20)
where
$$c_z=\frac{e\lambda _zH_a\mathrm{sin}\psi }{c\mathrm{cosh}(Y_{zs})}.$$
(21)
The nonlinear part $`v_{1z}`$ satisfies:
$$\frac{∂^2v_{1z}}{∂Y_z^2}-v_{1z}+v_{0z}^3=0.$$
(22)
The boundary conditions are $`∂v_{1z}/∂Y_z|_{Y=Y_{zs}}=0`$ and $`v_{1z}(0)=0`$. The complete solution to Eq. (22) is found by elementary methods and is given by:
$$v_{1z}(Y_z)=(1/8)c_z^3\left[c_1\mathrm{sinh}(Y_z)+3Y_z\mathrm{cosh}(Y_z)-(1/4)\mathrm{sinh}(3Y_z)\right],$$
(23)
where $`c_1=(3/2)(\mathrm{sinh}(2Y_{zs})-2Y_{zs})\mathrm{tanh}(Y_{zs})-9/4`$.
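As an independent check (our own sketch, not part of the original calculation), one can solve the nonlinear boundary value problem, Eq. (17) with conditions (18) and $`v_z(0)=0`$, numerically and compare with the first-order perturbation solution above; the values of $`\alpha _z`$, $`Y_{zs}`$ and the surface slope below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_bvp

alpha = 0.05    # expansion parameter (v_fz/Delta_0)^2, illustrative
Ys = 3.0        # half-thickness d/(2*lambda_z), illustrative
b = 1.0         # surface slope dv_z/dY_z from Eq. (18), illustrative

def rhs(Y, u):
    # u[0] = v_z, u[1] = dv_z/dY_z; Eq. (17) reads v'' = v - alpha*v^3
    return np.vstack([u[1], u[0] - alpha * u[0]**3])

def bc(ua, ub):
    # v_z(0) = 0 (odd parity) and dv_z/dY_z = b at the surface
    return np.array([ua[0], ub[1] - b])

Y = np.linspace(0.0, Ys, 200)
guess = np.vstack([b * np.sinh(Y) / np.cosh(Ys),
                   b * np.cosh(Y) / np.cosh(Ys)])
sol = solve_bvp(rhs, bc, Y, guess)

# first-order perturbation solution, Eqs. (20), (21), (23)
c = b / np.cosh(Ys)
c1 = 1.5 * (np.sinh(2 * Ys) - 2 * Ys) * np.tanh(Ys) - 2.25
v_pert = (c * np.sinh(Y) + alpha * (c**3 / 8.0)
          * (c1 * np.sinh(Y) + 3 * Y * np.cosh(Y) - 0.25 * np.sinh(3 * Y)))
print("max |numerical - perturbative| =",
      np.max(np.abs(sol.sol(Y)[0] - v_pert)))   # O(alpha^2), small
```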
The magnetic field in the sample can be calculated from the field $`𝐯`$ via Eq. (1). Including also the purely linear component arising from $`v_x`$ we obtain:
$$H_x(Y_z)=\frac{H_a\mathrm{sin}\psi }{\mathrm{cosh}(Y_{zs})}\left[\mathrm{cosh}(Y_z)+\frac{1}{8}\left(\frac{H_a\mathrm{sin}\psi }{H_{0z}\mathrm{cosh}(Y_{zs})}\right)^2f_H(Y_z)\right],$$
(25)
$$H_z(Y_x)=\frac{H_a\mathrm{cos}\psi }{\mathrm{cosh}(Y_{xs})}\mathrm{cosh}(Y_x),$$
(26)
where the nonlinear depth dependence is contained in $`f_H`$:
$$f_H(Y_z)=3Y_z\mathrm{sinh}(Y_z)-(3/4)\mathrm{cosh}(3Y_z)+(c_1+3)\mathrm{cosh}(Y_z),$$
(27)
and we have introduced the characteristic field
$$H_{0i}=\frac{\varphi _0}{\pi ^2\lambda _i\xi _i}.$$
(28)
Here $`\varphi _0`$ is the superconducting flux quantum, and $`\xi _i=v_{fi}/(\pi \mathrm{\Delta }_0)`$ is the local coherence length. As opposed to the d-wave case where the nodal lines give rise to a quadratic nonlinear contribution, we now find a nonlinear effect cubic in the applied field. Physically, this is quite transparent: the phase space volume available to the quasiparticle excitations increases as the cube of the field for point nodes, and as the square for line nodes. The nonlinear term anisotropically increases the magnetic field penetration because of quasiparticle occupation near the nodes, i.e., fewer Cooper pairs are participating in the current responsible for bulk flux exclusion.
We can gain some insight into these results by examining the spatial dependence of the nonlinear part of the field as displayed in Fig. 1. There the quantity $`H_{nlx}`$, defined as the last term in (25) normalized to unity at its maximum, is plotted as a function of dimensionless distance $`D`$ from the surface ($`D=Y_{sz}-Y_z`$). The thickness of the sample is taken to be $`d>>\lambda _z`$ so that the behavior shown is that corresponding to a thick slab. The nonlinear field is constrained by the boundary conditions to vanish at the surface: the boundary condition implies an extremum for the nonlinear flow field at the surface and since $`H_x∝∂v_{1z}/∂Y_z`$, we see that $`H_{nlx}`$ must vanish there. It then increases rapidly reaching its maximum at about one half of a penetration depth and then decays exponentially inside the sample, as does the linear part. Thus arises the characteristic maximum of the nonlinear field seen in this Figure.
The current is most easily obtained from $`𝐇`$ through Ampère’s law for steady-state currents, which gives the result:
$$j_z(Y_z)=\frac{cH_a\mathrm{sin}\psi }{4\pi \lambda _z\mathrm{cosh}(Y_{zs})}\left[\mathrm{sinh}(Y_z)-\frac{1}{8}\left(\frac{H_a\mathrm{sin}\psi }{H_{0z}\mathrm{cosh}(Y_{zs})}\right)^2f_J(Y_z)\right],$$
(30)
$$j_x(Y_x)=\frac{cH_a\mathrm{cos}\psi }{4\pi \lambda _x\mathrm{cosh}(Y_{xs})}\mathrm{sinh}(Y_x).$$
(31)
The $`Y_z`$ dependence of the nonlinear current is contained in $`f_J(Y_z)`$ which is given by:
$$f_J(Y_z)=-3Y_z\mathrm{cosh}(Y_z)+(9/4)\mathrm{sinh}(3Y_z)-(c_1+6)\mathrm{sinh}(Y_z).$$
(32)
In Eq. (30), the first term on the right is the linear contribution, proportional to the applied field, while the second term is the nonlinear correction, modifying the total current.
We now turn to the case where the OP is of the form (10). For simplicity, we consider tetragonal symmetry and quote the results for the orthorhombic case later. The current in Eq. (16) can be simplified using the fact that the magnetic field is applied in the $`xz`$ plane, so that $`v_y`$ is zero. The current is substituted in (11) to give, for the component containing the nonlinear term:
$$\frac{∂^2v_x}{∂Y_x^2}-v_x+\frac{\left|v_x\right|}{v_{cx}}v_x=0.$$
(33)
The method of solution is identical to that above, with the only major difference being that the expansion parameter is now $`\alpha _x=(v_{fx}/\mathrm{\Delta }_0)`$, linear rather than quadratic. The linear velocity field $`v_{0x}`$ is $`v_{0x}=c_x\mathrm{sinh}(Y_x)`$, while the nonlinear term is written as
$$v_{1x}(Y_x)=-(1/6)c_x|c_x|\left[\mathrm{cosh}(2Y_x)-4\mathrm{cosh}(Y_x)+4c_2\mathrm{sinh}(Y_x)+3\right],$$
(34)
where
$$c_x=\frac{e\lambda _xH_a\mathrm{cos}\psi }{c\mathrm{cosh}(Y_{xs})},c_2=\mathrm{tanh}(Y_{xs})-\mathrm{sinh}(Y_{xs}).$$
(35)
The magnetic field is calculated again via the London equation. In this case the nonlinear part is (as in the d-wave case) proportional to $`H_a^2`$, rather than to $`H_a^3`$ as was found for the point nodes. This follows again from phase-space arguments. After including the contribution from the purely linear component of $`𝐯`$ one finds:
$$H_z(Y_x)=\frac{H_a\mathrm{cos}\psi }{\mathrm{cosh}(Y_{xs})}\left[\mathrm{cosh}(Y_x)+\frac{1}{6}\left(\frac{H_a|\mathrm{cos}\psi |}{H_{0x}\mathrm{cosh}(Y_{xs})}\right)g_H(Y_x)\right],$$
(37)
$$H_x(Y_z)=\frac{H_a\mathrm{sin}\psi }{\mathrm{cosh}(Y_{zs})}\mathrm{cosh}(Y_z).$$
(38)
Here $`g_H`$, which determines the nonlinear contribution to the magnetic field penetration, is given by:
$$g_H(Y_x)=-2\mathrm{sinh}(2Y_x)+4\mathrm{sinh}(Y_x)-4c_2\mathrm{cosh}(Y_x).$$
(39)
It is again useful to plot the nonlinear component of the magnetic field. We consider $`H_{nlz}`$, the last term in (37) normalized to its maximum value. Fig. 2 shows $`H_{nlz}`$ plotted versus dimensionless distance $`D`$ from the surface ($`D≡Y_{sx}-Y_x`$). One sees again the rapid increase of the field near the surface of the sample, followed by the usual exponential decay. The plot is very similar to that in Fig. 1 for point nodes, but the field decays less rapidly into the sample. Within about three penetration depths, the nonlinear field is reduced to 20% of its maximum magnitude.
The total current is composed of linear and nonlinear terms, as found from Ampère’s law:
$`j_x(Y_x)`$ $`=`$ $`{\displaystyle \frac{cH_a\mathrm{cos}\psi }{4\pi \lambda _x\mathrm{cosh}(Y_{xs})}}\left[\mathrm{sinh}(Y_x)-{\displaystyle \frac{1}{6}}\left({\displaystyle \frac{H_a|\mathrm{cos}\psi |}{H_{0x}\mathrm{cosh}(Y_{xs})}}\right)g_J(Y_x)\right],`$ (41)
$`j_z(Y_z)`$ $`=`$ $`{\displaystyle \frac{cH_a\mathrm{sin}\psi }{4\pi \lambda _z\mathrm{cosh}(Y_{zs})}}\mathrm{sinh}(Y_z).`$ (42)
where
$$g_J(Y_x)=4[\mathrm{cosh}(2Y_x)-\mathrm{cosh}(Y_x)+c_2\mathrm{sinh}(Y_x)],$$
(43)
is the function determining the penetration of the nonlinear currents.
We can now use our solutions for both OP’s to derive expressions for the experimentally relevant quantities.
### E The transverse magnetic moment
The expression for the magnetic moment in terms of the currents is:
$$𝐦=\frac{1}{2c}∫𝑑\mathrm{𝐫}𝐫\times 𝐣(𝐯).$$
(44)
By making use of standard identities and the parity of $`𝐯`$, Eq. (44) can be rewritten more conveniently as:
$$m_{x,z}=-\frac{VH_{ax,z}}{4\pi }+\frac{Acv_{z,x}}{2\pi e}|_{y=\frac{d}{2}},$$
(45)
where $`A`$ is the surface area of the plane along which the field is applied, $`V`$ is the volume of the sample, and we have used that $`𝐯`$ is odd in $`y`$. The magnetic moment perpendicular to the applied field is given by $`m_x\mathrm{cos}\psi -m_z\mathrm{sin}\psi `$. The linear terms of the velocity fields contribute to this quantity only if there is anisotropy in the penetration depth tensor. In that case, the linear term in the transverse magnetic moment, denoted by $`\stackrel{~}{m}_{\perp }`$, is:
$$\stackrel{~}{m}_{\perp }=\frac{1}{4\pi }AH_a(\lambda _z-\lambda _x)\mathrm{sin}2\psi .$$
(46)
This term can be distinguished from the nonlinear contribution, $`m_{\perp }`$, because of its different field and angular dependences.
For an OP with point nodes, (9), $`m_{\perp }`$ is obtained from (45), (23) and (II D) as:
$$m_{\perp }(\psi )=\frac{1}{4\pi }A\lambda _zH_a\left(\frac{H_a}{H_{0z}}\right)^2[(3^{3/2}/32)f_s(\psi )]𝒦_𝒮(Y_{zs}).$$
(47)
This quantity is proportional to the cube of the applied field, rather than, as in the d-wave case, to the square. This reflects the reduced phase space when the nodal regions are points rather than lines on the FS. Thus larger values of $`H_a`$ are very advantageous, provided that $`H_a`$ is kept below the field of first flux penetration, $`H_{f1}`$, so that the sample remains in the Meissner regime. The angular dependence of $`m_{\perp }(\psi )`$ is contained in $`f_s(\psi )`$, which is normalized to unity at its maximum and given by:
$$f_s(\psi )=(16/3^{3/2})\mathrm{cos}\psi \mathrm{sin}^3\psi ,$$
(48)
while the dependence of $`m_{\perp }`$ on the material thickness $`d`$ is given by the function $`𝒦_𝒮`$,
$$𝒦_𝒮\left(Y_{zs}\right)=(1/2)\mathrm{sech}^4(Y_{zs})\left[3Y_{zs}-2\mathrm{sinh}(2Y_{zs})+(1/4)\mathrm{sinh}(4Y_{zs})\right].$$
(49)
The torque associated with $`m_{\perp }`$ is simply obtained by multiplying (47) by $`H_a`$; therefore it has the same thickness and angular dependence.
The function $`f_s(\psi )`$ is displayed as the solid line in Fig. 3. The transverse magnetic moment and torque in this case are maximal for the field direction corresponding to $`\psi =\pi /3`$, and vanish at directions corresponding to the nodes or antinodes of the OP. The $`\pi `$ periodicity of $`m_{\perp }`$ matches that of the energy since the angular dependence of the quasiparticle energy arises solely from that of $`|\mathrm{\Delta }(\theta )|^2`$.
Since $`m_{\perp }`$ is an extensive quantity and it is often the case that larger samples can be made in film form rather than grown as free standing crystals, it is of considerable interest to examine the thickness dependence of the results, as given by $`𝒦_𝒮`$. The behavior of $`𝒦_𝒮`$ is displayed in Fig. 4, where $`𝒦_𝒮`$ is plotted (solid line) as a function of $`Y_{sz}≡d/(2\lambda _z)`$. It is seen that $`𝒦_𝒮`$ increases rapidly with $`Y_{sz}`$, reaching $`90\%`$ of its maximum value of unity when $`d≈5\lambda `$. If the sample is a thick slab, $`d≫\lambda _z`$, then $`𝒦_𝒮≈1`$, so that $`m_{\perp }`$ is (for the same area $`A`$) maximal and independent of $`d`$. On the other hand, the decrease of $`𝒦_𝒮`$ with thickness is substantial: for films where $`d=\lambda _z`$, $`𝒦_𝒮=𝒦_𝒮(1/2)≈.017`$, a reduction of over 80%. Such a decrease, however, may very well be compensated by a larger increase in $`A`$, compared to a free standing crystal. However, for extremely thin films, $`d≪\lambda _z`$, one has $`𝒦_𝒮≈(1/40)(d/\lambda _z)^5`$, and despite the increase of $`H_{f1}`$ in thin films, the amplitude of the signal would almost certainly be too small.
When the order parameter is of the form (10), the results obtained from the previous expressions (34) and (II D) for a compound with tetragonal symmetry yield, when substituted in (45), the expression:
$$m_{\perp }(\psi )=\frac{1}{4\pi }A\lambda _xH_a\left(\frac{H_a}{H_{0x}}\right)[(4/3^{5/2})f_c(\psi )]𝒦_𝒞(Y_{xs}).$$
(50)
We have introduced the functions $`f_c(\psi )`$ and $`𝒦_𝒞`$, characterizing, respectively, the angular and the thickness dependences of the result. They are given by:
$`f_c(\psi )`$ $`=`$ $`(3^{3/2}/2)|\mathrm{cos}\psi |\mathrm{cos}\psi \mathrm{sin}\psi ,`$ (52)
$`𝒦_𝒞\left(Y_{xs}\right)`$ $`=`$ $`\left[\mathrm{sech}(Y_{xs})-1\right]^2\left[1+2\mathrm{sech}(Y_{xs})\right].`$ (53)
The nonlinear transverse moment is now proportional to the square of the applied field, as in the d-wave case, because of the linear character of the nodal regions. The function $`f_c(\psi )`$ is normalized to unity and it is plotted as the dashed line of Fig. 3. It is seen there that the angular signature is different from that in the previous case: $`m_{\perp }`$ now has a maximum when $`\psi =\mathrm{arctan}(\sqrt{2}/2)`$, although it is zero again for fields applied along the nodes and antinodes ($`\psi =\pi /2,0`$) of the OP. As in the previous case, $`𝒦_𝒞`$ is small for small thickness. If we are dealing with a thick slab, $`𝒦_𝒞≈1`$, but when $`d=\lambda _x`$, $`𝒦_𝒞≈.036`$. In the limit $`d≪\lambda _x`$, $`𝒦_𝒞≈3/64(d/\lambda _x)^4`$. As seen in Fig. 4, the overall characteristics of $`𝒦_𝒞`$ (dashed line) are very similar to those of $`𝒦_𝒮`$, but $`𝒦_𝒞`$ is larger in magnitude throughout. Both curves show a 50% drop in signal when $`d≈3\lambda _x`$.
It is somewhat tedious but straightforward to generalize, starting from the form (16), the calculation of the fields and currents for this order parameter to the case where there is penetration depth anisotropy in the $`ab`$ plane. The result is that, when the sample is rotated about the $`b`$ axis, Eq. (50) is simply modified to:
$$m_{\perp }(\psi )=\frac{1}{4\pi }A\lambda _xH_a\left(\frac{H_a}{H_{0x}}\right)[(4/3^{5/2})f_c(\psi )]𝒦_𝒞(Y_{xs})\mathrm{\Gamma }_x,$$
(54)
where $`\mathrm{\Gamma }_x≡(1/5)(4+\delta ^2)`$. Thus, the angular, field and thickness dependences remain the same, while only an overall anisotropy factor is needed.
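The angular and thickness functions above are elementary to tabulate; the short sketch below (our own) reproduces the quoted numbers, e.g. the maximum of $`f_s`$ at $`\psi =\pi /3`$ and $`𝒦_𝒮(1/2)≈0.017`$, $`𝒦_𝒞(1/2)≈0.036`$.

```python
import numpy as np

def f_s(psi):    # point nodes, Eq. (48)
    return (16.0 / 3**1.5) * np.cos(psi) * np.sin(psi)**3

def f_c(psi):    # line nodes, Eq. (52)
    return (3**1.5 / 2.0) * np.abs(np.cos(psi)) * np.cos(psi) * np.sin(psi)

def K_S(Ys):     # thickness factor for point nodes, Eq. (49)
    return 0.5 / np.cosh(Ys)**4 * (3 * Ys - 2 * np.sinh(2 * Ys)
                                   + 0.25 * np.sinh(4 * Ys))

def K_C(Ys):     # thickness factor for line nodes, Eq. (53)
    s = 1.0 / np.cosh(Ys)
    return (s - 1.0)**2 * (1.0 + 2.0 * s)

psi = np.linspace(0.0, np.pi / 2, 9001)
print("f_s peak:", psi[np.argmax(f_s(psi))], "(pi/3 =", np.pi / 3, ")")
print("f_c peak:", psi[np.argmax(f_c(psi))],
      "(arctan(sqrt(2)/2) =", np.arctan(np.sqrt(2) / 2), ")")
print("K_S(1/2) =", K_S(0.5), " K_C(1/2) =", K_C(0.5))
```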
### F Penetration depth
Measuring the field dependence of the penetration depth at low temperatures is another possible way of exploring the nonlinear Meissner effect. The reduction of the current via quasiparticle population results in a lower superfluid density and hence a larger penetration depth. Indeed, this was the first quantity studied in this area, although it is only very recently that experimental measurements have been attempted for HTSC’s.
In the presence of nonlinear effects, several possible definitions of the penetration length which coincide in the linear limit give slightly different results. The appropriate definition depends on the experimental setup. We have calculated above the spatial current and field distributions, and these results can be used to obtain the nonlinear contributions to $`\lambda `$ for any definition. We briefly illustrate this here by computing the components of the penetration depth along the $`x`$ and $`z`$ directions via the definition:
$$\frac{1}{\lambda _z(𝐇_a)}=\frac{1}{\lambda _zH_a}\left(\frac{∂H_x}{∂Y_z}\right)_{Y=Y_{zs}},\frac{1}{\lambda _x(𝐇_a)}=\frac{1}{\lambda _xH_a}\left(\frac{∂H_z}{∂Y_x}\right)_{Y=Y_{xs}},$$
(55)
where $`\lambda _i`$ is the zero field penetration depth along the $`i`$ direction. We will assume that the field is applied along a symmetry direction for each OP and will not place any restrictions on the thickness $`d`$.
We consider first the order parameter with point nodes, (9). The penetration depth when the applied field is perpendicular to the z-axis, ($`\psi =\pi /2`$) is obtained from (55) and (II D):
$$\frac{1}{\lambda _z(H_a)}=\frac{1}{\lambda _z}\left\{\mathrm{tanh}(Y_{zs})-\frac{3}{32}\left(\frac{H_a}{H_{0z}}\right)^2ℒ_𝒮\right\},$$
(56)
where $`ℒ_𝒮=\mathrm{sech}^4(Y_{zs})[-4Y_{zs}+\mathrm{sinh}(4Y_{zs})]`$. The most obvious difference between this and the results for d-wave is that the field correction is proportional to the square of the field, rather than to the field itself. This follows, once more, from phase space arguments. For a thick slab, $`ℒ_𝒮→8`$, while in the very thin film limit, $`ℒ_𝒮≈(4/3)(d/\lambda _z)^3`$.
For the OP given in (10) one similarly gets:
$$\frac{1}{\lambda _x(H_a)}=\frac{1}{\lambda _x}\left\{\mathrm{tanh}(Y_{xs})-\frac{2}{3}\left(\frac{H_a}{H_{0x}}\right)ℒ_𝒞\right\},$$
(57)
where $`ℒ_𝒞=1-\mathrm{sech}^3(Y_{xs})`$, which goes to unity for a thick slab. For a very thin film superconductor, $`ℒ_𝒞≈(3/8)(d/\lambda _x)^2`$. Here the penetration depth correction, $`\delta \lambda /\lambda _x`$, is linear in the applied field, as in the d-wave case, as a result of the presence in both cases of nodal lines. The dependence on thickness in the very thin film limit is cubic in $`d`$ for the point nodes and quadratic for line nodes.
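A sketch (our own) evaluating Eqs. (56) and (57) as functions of the reduced field $`h=H_a/H_{0i}`$; the half-thickness used below is an arbitrary thick-slab choice.

```python
import numpy as np

def lam_point(h, Ys=10.0):
    """lambda_z(H_a)/lambda_z from Eq. (56); h = H_a/H_{0z}."""
    L_S = np.cosh(Ys)**-4 * (np.sinh(4 * Ys) - 4 * Ys)   # -> 8 for thick slab
    return 1.0 / (np.tanh(Ys) - (3.0 / 32.0) * h**2 * L_S)

def lam_line(h, Ys=10.0):
    """lambda_x(H_a)/lambda_x from Eq. (57); h = H_a/H_{0x}."""
    L_C = 1.0 - np.cosh(Ys)**-3                           # -> 1 for thick slab
    return 1.0 / (np.tanh(Ys) - (2.0 / 3.0) * h * L_C)

for h in (0.0, 0.05, 0.1):
    # quadratic growth for point nodes, linear for line nodes
    print(h, lam_point(h), lam_line(h))
```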
## III Discussion
We have calculated the magnitude of the nonlinear electrodynamics effects as a function of field and angle. We believe that some remarks are now in order as to the feasibility of observing effects of the rough magnitude of those predicted. In making these remarks, we do not claim any experimental expertise in the relevant areas. We have in mind compounds with materials characteristics such as those of $`\mathrm{UBe}_{13}`$ or $`\mathrm{Sr}_2\mathrm{RuO}_4`$. In mentioning these compounds, we do not intend to propose that any of them belong to a specific pairing state. We merely wish to roughly estimate the level of the signal that would be predicted in the event that the material turned out to have a pairing state with a certain nodal structure. Our considerations can straightforwardly be extended to other materials.
First, our calculations have been performed, strictly speaking, in the low temperature limit. In practice, this means the temperature regime in the region $`T<\stackrel{~}{T}(H)≡\mathrm{\Delta }_0H/H_0`$, so that thermal excitations do not destroy the nonlinear Meissner effect. Assuming that, in order to maximize the signal, the applied field is close to $`H_{f1}`$, the field of first flux penetration, this still implies that the experiments must be performed at temperatures well below $`T_c`$. This means, for the materials of interest, at dilution refrigerator temperatures. This is undeniably a disadvantage when compared with the situation for HTSC’s, but one that can be overcome by using torque magnetometry or torsion oscillator techniques to measure the torque associated with the transverse moment. These techniques can be adapted to use in conjunction with a dilution refrigerator and their sensitivity can surpass that of the SQUID methods used for HTSC’s. These considerations pertain to the transverse moment and the associated torque. Several groups are planning, in the context of checking the very low temperature behavior of $`\lambda `$ in HTSC’s for deviations from the linear power law behavior, measurements of $`\lambda `$ at dilution refrigerator temperatures. These techniques could be combined with the high resolution methods already employed to measure the field dependence of $`\lambda `$.
We consider next the magnitude of the low $`T`$ effect. The maximum amplitude of the transverse moment depends on the values of the penetration depth, the characteristic field $`H_{0i}`$, and the maximum field one can apply while remaining in the Meissner regime, which is $`H_{f1}`$. Because $`m_{\perp }`$ is an extensive quantity, it also depends on the size (specifically the surface area) of the samples available. As an illustrative exercise, we have estimated a putative signal for the transverse magnetic moment amplitude for a number of compounds by taking from the literature the values of available crystal sizes and of the experimental parameters (such as penetration depths and correlation lengths in the appropriate directions) that appear in our expressions. These values are subject to very considerable uncertainty and in most cases different references do not agree with each other, but they are sufficient for the purposes of our exercise. By inserting them in the appropriate formulae, (47) and (50), we obtain numerical values of the possible signal. The results are summarized in Table I, where we present our estimate for the maximum amplitude $`M_{\perp }`$ for several materials. $`M_{\perp }`$ is defined as the value of $`m_{\perp }`$ for a thick slab at $`H_a=H_{c1}`$ and at the angle $`\psi `$ for which $`m_{\perp }`$ is maximal. The critical field $`H_{c1}`$ is calculated from $`H_{c1}=(\varphi _0/(4\pi \lambda ^2))\mathrm{ln}(\lambda /\xi )`$ and is used as a conservative approximation to $`H_{f1}`$, since $`H_{c1}`$ is smaller (by a large factor for YBCO) than $`H_{f1}`$. We have also included in the Table, for purposes of comparison, one typical HTSC compound (YBCO), with the signal in that case computed from Ref. for a d-wave state. For the other materials, we have assumed the OP in Eq. (9) for $`\mathrm{Sr}_2\mathrm{RuO}_4`$, while for the listed heavy fermion materials and organic salt we have taken the OP of Eq. (10). It bears repeating that these choices are illustrative and do not imply any judgement on our part as to the likelihood of what the pairing state actually might be. Rather, our point is that the techniques in this paper can be implemented to infer the nodal structure of the pairing state. The results in Table I are expressed in physical units and also, for purposes of comparison, as ratios to the corresponding estimate for YBCO in a d-wave state. The numbers in the table are very encouraging: they are in all cases comparable to or larger than those for YBCO and always comfortably exceed the resolution of the experimental techniques discussed above. For the penetration depth results, the situation is similarly favorable, since the changes in the penetration depth induced by a field close to $`H_{f1}`$ are considerably larger than the lower limit (a few $`\mathrm{\AA }`$ resolution) already achieved in YBCO.
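The estimate can be reproduced schematically as follows; the material parameters in this sketch are illustrative placeholders, not the Table I values.

```python
import numpy as np

phi0 = 2.07e-7       # flux quantum [G cm^2]
lam  = 2.0e-5        # penetration depth [cm] (2000 Angstrom), placeholder
xi   = 1.0e-6        # coherence length  [cm] (100 Angstrom),  placeholder
A    = 0.1           # sample face area  [cm^2],               placeholder

H0  = phi0 / (np.pi**2 * lam * xi)                     # Eq. (28)
Hc1 = phi0 / (4.0 * np.pi * lam**2) * np.log(lam / xi)

# Eq. (50) at its angular maximum (f_c = 1) for a thick slab (K_C = 1)
M_perp = (A * lam * Hc1 / (4.0 * np.pi)) * (Hc1 / H0) * 4.0 / 3**2.5
print("H0 ~ %.0f G, Hc1 ~ %.0f G, M_perp ~ %.1e G cm^3" % (H0, Hc1, M_perp))
```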
We have to consider also the limitations of this work and the presence of other phenomena, besides thermal excitations, that may reduce the signal. First, there is the question of impurities. As has been seen in the context of YBCO, good quality samples, characterized by a transition temperature that is not appreciably degraded and by the appropriate power law behavior of $`\lambda `$ with temperature, should exhibit a signal substantially of the magnitude calculated here for a clean system. The decrease in the nonlinear signal associated with nonlocal effects at lower fields can also complicate the situation. However, these effects are quite small for fields close to $`H_{f1}`$ in the typical situation where this field is considerably larger than the equilibrium $`H_{c1}`$. In any case nonlocal effects are absent for several special crystal orientations, which can then be chosen. There are also important questions as to what the effect would be of using more realistic forms (still containing point or line nodes) of the order parameter, or of including in more detail the local properties of the Fermi surface. All told, however, we believe that the estimates in the Table show that there is a sufficient cushion between the maximum value and the experimental resolution so that one can expect an observable signal.
In conclusion, we have calculated here the nonlinear signal arising from the presence of point or line nodes in some simple p-wave order parameters. We have shown that these effects are likely to be observable in materials currently being studied. The results given can straightforwardly be extended, if and when the experimental situation warrants it, to the study of the low frequency response and of more complicated or mixed order parameters, and a more general node spectroscopy procedure for p-wave materials can be performed as in Ref. .
###### Acknowledgements.
We are indebted to Igor Žutić for conversations on the theories presented in this paper and for reading a draft of the manuscript. We thank Paul Crowell and Allen Goldman for numerous conversations concerning what can and cannot be done experimentally. This work was supported in part by the Petroleum Research Fund, administered by the ACS.
# The line of sight velocity distributions of simulated merger remnants
## 1 Introduction
Kinematic studies of early-type galaxies have revealed a remarkable variety of interesting behavior; some galaxies have rotation axes “misaligned” with respect to their minor axes (Franx, Illingworth, & de Zeeuw 1991), while in others the inner regions counterrotate with respect to the rest of the galaxy (Statler, Smecker-Hane, & Cecil 1996; Bender & Surma 1992; van der Marel & Franx 1993). Such intriguing kinematics could plausibly result if these galaxies are the end-products of disk-galaxy mergers (Toomre & Toomre 1972), and N-body simulations have gone some ways toward showing that mergers can indeed produce remnants with distinctive kinematics (Hernquist & Barnes 1991; Barnes 1992, 1998; Balcells & González 1998). However, other theories have been put forward for such kinematic features, particularly in the case of counterrotation (Kormendy 1984; Bertola, Buson, & Zeilinger 1988). Distinguishing between major mergers and other explanations for distinctive kinematics in galaxies has been especially difficult.
The projected luminosity profiles and isophotal shapes of simulated disk galaxy mergers are reasonably good matches to those of elliptical galaxies (eg. Barnes 1988; Hernquist 1992, 1993; Governato, Reduzzi, & Rampazzo 1993; Heyl, Hernquist, & Spergel 1994), but few workers have investigated the projected kinematics of simulated merger remnants. Hernquist (1992, 1993) described principal-axis profiles of projected mean velocity and velocity dispersion for several disk-disk merger remnants, and Heyl, Hernquist, & Spergel (1996) studied line of sight velocity distributions for a somewhat larger sample of objects. These studies showed that kinematic misalignments of merger remnants are observable, and indicated that skewness of line profiles could provide information on the initial orientations of the merging disks. However, while systematically exploring different projections, these studies were limited to equal-mass mergers, and did not examine the structure of line profiles in detail or map velocity fields in two dimensions.
Therefore, we studied line of sight velocity distributions for a larger sample of simulated merger remnants. We examined eight mergers between disk galaxies with mass ratios of 1:1 and another eight mergers between disk galaxies with mass ratios of 3:1. We limited our analysis to a single projection along the intermediate axis of each remnant, but we complement an extensive presentation of major-axis kinematics with detailed examinations of individual line profiles and with two-dimensional maps of key kinematic parameters. This work extends the studies described above to unequal-mass mergers, clarifies the connection between initial conditions and line profiles, and provides predictions to be compared with kinematic studies of early-type galaxies using the next generation of integral-field spectrometers.
The outline of this paper is as follows. The rest of Section 1 describes the merger simulations and the methods we use to extract line of sight velocity distributions and represent the distributions with Gauss-Hermite parameters. Sections 2 and 3 present the results for the equal-mass and unequal-mass mergers, respectively. Section 4 compares our results to observational studies and summarizes our conclusions.
### 1.1 Merger simulations
The remnants analyzed here came from a modest survey of parabolic encounters between model disk galaxies (Barnes 1998). Each model had three components: a central bulge with a shallow cusp (Hernquist 1990), an exponential/isothermal disk with constant scale height (Freeman 1970; Spitzer 1942), and a dark halo with a constant-density core (Dehnen 1993; Tremaine et al. 1994). Density profiles for these components are
$`\rho _\mathrm{b}`$ $`∝`$ $`r^{-1}(r+a_\mathrm{b})^{-3},`$ (1)
$`\rho _\mathrm{d}`$ $`∝`$ $`\mathrm{exp}(-R/R_\mathrm{d})\mathrm{sech}^2(z/z_\mathrm{d}),`$ (2)
$`\rho _\mathrm{h}`$ $`∝`$ $`(r+a_\mathrm{h})^{-4},`$ (3)
where $`r`$ is spherical radius, $`R`$ is cylindrical radius in the disk plane, and $`z`$ is distance from the disk plane.
Adopting simulation units with $`G=1`$, the model used in the equal-mass mergers has a bulge mass of $`M_\mathrm{b}=0.0625`$, a disk mass of $`M_\mathrm{d}=0.1875`$, and a halo mass of $`M_\mathrm{h}=1`$. The bulge scale length is $`r_\mathrm{b}=0.0417`$, the disk scale radius and scale height are $`R_\mathrm{d}=0.0833`$ and $`z_\mathrm{d}=0.007`$, and the halo scale radius is $`r_\mathrm{h}=0.1`$. With these parameter choices, the model has a half-mass radius $`r_{1/2}≈0.28`$, and the circular velocity and orbital period at this radius are $`v_{1/2}≈1.5`$ and $`t_{1/2}≈1.2`$. The model may be roughly scaled to the Milky Way by equating our units of length, mass, and time to $`40\mathrm{kpc}`$, $`2.2\times 10^{11}\mathrm{M}_{\odot }`$, and $`2.5\times 10^8\mathrm{yr}`$, respectively.
In the unequal-mass mergers, the larger model had the same parameters as listed above, while the small model was scaled down by a factor of $`3`$ in mass and $`\sqrt{3}`$ in radius in rough accord with the standard luminosity-rotation velocity relation for disk galaxies.
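For concreteness, the following sketch (our own) encodes the unnormalized component profiles, Eqs. (1)–(3), with the parameter values quoted above, and converts the simulation units to physical units for the Milky-Way scaling, which places $`v_{1/2}≈1.5`$ at roughly 235 km/s.

```python
import numpy as np

a_b, R_d, z_d, a_h = 0.0417, 0.0833, 0.007, 0.1   # larger-model parameters

def rho_bulge(r):                 # unnormalized shape of Eq. (1)
    return 1.0 / (r * (r + a_b)**3)

def rho_disk(R, z):               # unnormalized shape of Eq. (2)
    return np.exp(-R / R_d) / np.cosh(z / z_d)**2

def rho_halo(r):                  # unnormalized shape of Eq. (3)
    return 1.0 / (r + a_h)**4

# unit scaling quoted in the text: 40 kpc, 2.2e11 Msun, 2.5e8 yr
KM_PER_KPC, S_PER_YR = 3.086e16, 3.156e7
v_unit = 40.0 * KM_PER_KPC / (2.5e8 * S_PER_YR)   # km/s per simulation unit
print("1 velocity unit ~ %.0f km/s -> v_1/2 ~ 1.5 ~ %.0f km/s"
      % (v_unit, 1.5 * v_unit))
```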
Each experiment used a total of $`131072`$ particles, $`65536`$ assigned to the luminous components, and $`65536`$ assigned to the dark halos. The models were run with a tree code using a spatial resolution of $`ϵ=0.01`$ and a time-step $`\mathrm{\Delta }t=1/128`$. With these integration parameters, total energy was conserved to within $`0.5`$% peak-to-peak.
All eight equal-mass merger simulations used the same initial orbit, leading in each case to a close ($`r_\mathrm{p}≈0.2`$) parabolic encounter. Disk angles for these experiments are listed in Table 1; $`i`$ and $`\omega `$ are the inclination and argument of pericenter (Toomre & Toomre 1972), while the subscripts $`1`$ and $`2`$ label the two disks. After merging, remnants were evolved for several more dynamical times before being analyzed.
The eight unequal-mass merger simulations generalize the equal-mass simulations A, B, C, and D by allowing the mass of either galaxy to vary by a factor of three. Like their equal-mass counterparts, these experiments adopted a parabolic initial orbit with pericentric separation $`r_p=0.2`$. Table 2 lists the inclinations and pericentric arguments for each simulation; here $`i_1`$ and $`\omega _1`$ are the angles for the larger disk, while $`i_2`$ and $`\omega _2`$ are the angles for its smaller companion.
Some salient properties of these merger remnants are summarized here; for a more detailed discussion, see Barnes (1998). All sixteen remnants are ellipsoidal objects with luminosity profiles generally following a de Vaucouleurs law. The projected half-light radii $`R_\mathrm{e}`$ of the equal-mass remnants range from $`0.133`$ to $`0.157`$, while for the unequal-mass remnants $`R_\mathrm{e}`$ ranges from $`0.099`$ to $`0.123`$. Fig. 1 shows axial ratios determined from the inertia tensor for the most tightly-bound half of the luminous particles in each object. On the whole, the remnants of equal-mass mergers are triaxial or prolate, while those produced by unequal-mass mergers tend to be more oblate.
### 1.2 Gauss-Hermite analysis
For all sixteen simulated remnants we extracted five frames, each containing $`65536`$ luminous particles – that is, particles from the bulges and disks of the progenitor galaxies. Each set of five frames is equally-spaced over a total of $`0.5`$ time units; this interval is long enough that individual particles are sampled at effectively random orbital phases, but not long enough for the remnant to undergo significant evolution. We shifted each frame to place the potential minimum at the origin and rotated it to diagonalize the moment of inertia tensor for all particles with a potential less than $`0.8`$ times the minimum potential. In what follows, we use $`X`$, $`Y`$, and $`Z`$ for the major, intermediate, and minor axes of the remnants.
To measure line of sight velocity distributions as a function of position along a given axis, we created a two dimensional grid, with one dimension representing position and the other dimension representing velocity, thus simulating a slit in a spectrometer. Typically, we placed the slit along the major ($`X`$) axis and projected along the intermediate ($`Y`$) axis, although other options were used in preliminary investigations. The width of the slit was set at $`0.03`$, which is roughly $`20`$% of the projected half-light radius. For each frame from each remnant, the particles falling within the slit were binned in position and velocity. The grid spacing along the slit was set to a minimum of $`0.02`$ and increased as necessary to keep the total number of particles falling within the range above a minimum. The width of the velocity bins was set to a fixed value of $`0.2`$, spanning the velocity range $`|v|4`$ with $`40`$ bins.
To map the line of sight velocity distributions across the plane of the sky, we used a generalization of the above procedure, with two adjustable bin dimensions in the plane of the sky instead of a single one along a slit.
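A minimal sketch of the slit binning described above (our own; the adaptive widening of the position bins is omitted for brevity):

```python
import numpy as np

SLIT_HALF = 0.015          # slit width 0.03 about the major axis
V_MAX, N_V = 4.0, 40       # velocity range |v| <= 4 in bins of width 0.2

def bin_slit(x, z, v_los, x_edges):
    """Histogram of line-of-sight velocities vs. position along the slit.

    x, z   : major- and minor-axis coordinates of the luminous particles
    v_los  : velocity component along the line of sight (the Y axis)
    """
    keep = np.abs(z) < SLIT_HALF
    v_edges = np.linspace(-V_MAX, V_MAX, N_V + 1)
    H, _, _ = np.histogram2d(x[keep], v_los[keep], bins=[x_edges, v_edges])
    return H    # H[i, j]: particles in position bin i, velocity bin j
```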
After binning the data, the velocity distribution at each location was fit with a parameterized Gauss-Hermite series (van der Marel & Franx 1993). The value of each parameter was determined by combining the five frames and performing a least-squares fit; uncertainties were estimated by comparing fits of individual frames. Gauss-Hermite functions are modified Gaussians with additional skewness and kurtosis parameters; they provide an effective way to parameterize the moderately non-Gaussian distributions which arise in systems which have undergone incomplete violent relaxation. The formula for the fitting function is
$`P(v)`$ $`=`$ $`\gamma {\displaystyle \frac{\alpha (w)}{\sigma }}[1+h_3H_3(w)+h_4H_4(w)],`$ (4)
where $`w≡(v-v_0)/\sigma `$ and
$`\alpha (w)`$ $`≡`$ $`{\displaystyle \frac{1}{\sqrt{2\pi }}}e^{-w^2/2},`$ (5)
$`H_3(w)`$ $`≡`$ $`{\displaystyle \frac{1}{\sqrt{6}}}(2\sqrt{2}w^3-3\sqrt{2}w),`$ (6)
$`H_4(w)`$ $`≡`$ $`{\displaystyle \frac{1}{\sqrt{24}}}(4w^4-12w^2+3).`$ (7)
This function has five parameters: $`\gamma `$, $`v_0`$, $`\sigma `$, $`h_3`$, and $`h_4`$. The normalization factor $`\gamma `$ has little physical significance in our study. The mean velocity $`v_0`$ and velocity dispersion $`\sigma `$ have dimensions of velocity, while the $`h_3`$ and $`h_4`$ parameters represent the skewness and kurtosis of the velocity distribution and are dimensionless. When $`h_3=h_4=0`$, the Gauss-Hermite series produces a normal Gaussian profile. When $`h_3`$ has the same sign as $`v_0`$ the distribution’s leading wing is broad and the trailing wing is narrow, while when $`h_3`$ and $`v_0`$ have opposite signs the trailing wing is broad and the leading wing is narrow. When $`h_4>0`$, the distribution has a narrow peak with broad wings, and when $`h_4<0`$, the distribution has a broad peak with narrow wings.
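A sketch of the fitting step (our own; scipy's least-squares fitter stands in for whatever routine was actually used):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_hermite(v, gamma, v0, sigma, h3, h4):
    """Gauss-Hermite series of Eqs. (4)-(7)."""
    w = (v - v0) / sigma
    alpha = np.exp(-0.5 * w**2) / np.sqrt(2.0 * np.pi)
    H3 = (2.0 * np.sqrt(2.0) * w**3 - 3.0 * np.sqrt(2.0) * w) / np.sqrt(6.0)
    H4 = (4.0 * w**4 - 12.0 * w**2 + 3.0) / np.sqrt(24.0)
    return gamma * alpha / sigma * (1.0 + h3 * H3 + h4 * H4)

def fit_profile(v_centers, counts):
    """Least-squares Gauss-Hermite parameters for one binned profile."""
    wts = np.clip(counts, 0.0, None)
    mean = np.average(v_centers, weights=wts)
    disp = np.sqrt(np.average((v_centers - mean)**2, weights=wts))
    p0 = [counts.sum() * (v_centers[1] - v_centers[0]), mean, disp, 0.0, 0.0]
    popt, _ = curve_fit(gauss_hermite, v_centers, counts, p0=p0)
    return popt   # gamma, v0, sigma, h3, h4

# self-test on synthetic data drawn from the model itself
v = np.linspace(-4.0, 4.0, 41)
print(fit_profile(v, gauss_hermite(v, 100.0, 0.8, 1.2, 0.08, 0.05)))
```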
### 1.3 Orbit classification
In order to discover which orbital families are responsible for various features in the velocity distributions, we assigned each particle to an orbit family using the algorithm described in Fulton & Barnes (submitted). This algorithm follows each particle for a number of radial periods and classifies its orbit by examining the sequence of principal plane crossings. To save time and slightly reduce the effects of discreteness, we calculated the trajectories using a quadrupole-order expansion of the gravitational field (White 1983). For the present purpose all “boxlet” orbits were counted as boxes; thus the major orbital families recognized here are Z-tubes, which rotate about the minor axis, X-tubes, which rotate about the major axis, and boxes, which do not rotate.
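The plane-crossing classifier of Fulton & Barnes is not reproduced here; the sketch below is a much cruder stand-in based on whether the sign of an angular-momentum component is conserved along the integrated orbit, which captures the same three broad families.

```python
import numpy as np

def classify_orbit(Lx_series, Lz_series):
    """Crude orbit-family tag from angular-momentum time series.

    Lx_series, Lz_series: the particle's angular momentum about the major
    (X) and minor (Z) axes, sampled over several radial periods.
    """
    keeps_Lz = np.all(Lz_series > 0) or np.all(Lz_series < 0)
    keeps_Lx = np.all(Lx_series > 0) or np.all(Lx_series < 0)
    if keeps_Lz and not keeps_Lx:
        return "Z-tube"          # circulates about the minor axis
    if keeps_Lx and not keeps_Lz:
        return "X-tube"          # circulates about the major axis
    if keeps_Lx and keeps_Lz:
        return "ambiguous tube"  # rare; a finer classifier is needed
    return "box"                 # no conserved sense of rotation
```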
## 2 Equal-Mass Mergers
A merger of comparable-mass galaxies usually eliminates many of the initial characteristics of both galaxies in the formation of the new galaxy. The resulting remnants are supported partly by rotation and may sometimes be flattened, but the overall structure of the galaxies as well as the dynamics are radically changed. (This is in contrast to the 3:1 mergers, where the disk of the larger galaxy often survives the merger.) Furthermore, the dynamics of individual 1:1 mergers produced with different initial parameters vary greatly.
### 2.1 Typical 1:1 merger parameter curves
Fig. 2 plots the Gauss-Hermite parameters as functions of position along the major axis for remnant E. This nearly oblate and rapidly rotating object was produced by a direct encounter between disks with inclinations of $`i_1=0`$ and $`i_2=71`$; it has a fairly simple structure which contrasts the more complex cases described below. Fig. 3 shows examples of line of sight velocity distributions at two different places in the galaxy. These distributions are shown for all the particles, and for particles sorted by orbital family (Z-tubes, X-tubes, and box orbits).
Within about one effective radius, the mean velocity shows a roughly linear trend with position, while at larger radii the velocity profile rises more gradually and may start to level off. This shape is seen in all but one of the remnants studied here, though the profile amplitude and the radius at which it levels off vary from one remnant to another. The exception, in which the outskirts of the remnant counterrotate with respect to the rest, will be described shortly. None of the 1:1 remnants attain rotation velocities comparable to their circular velocities, implying that rotation plays a relatively minor role in supporting these objects.
The velocity dispersion profile in Fig. 2 climbs from a local minimum at the center to a gentle peak on either side, and then falls off slowly at greater distances from the origin. Similar profiles are seen in all of the 1:1 remnants; this uniformity may be understood from the Jeans equations, since all of these moderately anisotropic remnants have similar density profiles. The central regions have a large percentage of particles from the Hernquist-model bulges of the progenitor galaxies. These particles still follow $`r^1`$ density profiles at small $`r`$ and their dispersion therefore scales as $`r^{1/2}`$, producing the central minima noted above. At larger radii the rather gradual fall-off in $`\sigma `$ may reflect the increasing contribution of dark matter, which dominates the mass budget beyond about one effective radius.
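The scaling can be made explicit with a rough isotropic Jeans estimate (our back-of-the-envelope version of the argument, assuming the central force tends to a constant, as it does for a Hernquist potential where $`d\Phi /dr\to GM/a^2`$ as $`r\to 0`$): for a tracer with $`\rho \propto r^{-1}`$,

$$\frac{d(\rho \sigma ^2)}{dr}=-\rho \frac{d\Phi }{dr}\quad \Longrightarrow \quad \rho \sigma ^2(r)\propto \ln (1/r)\quad \Longrightarrow \quad \sigma ^2\propto r\ln (1/r),$$

so $`\sigma `$ rises away from the center roughly as $`r^{1/2}`$, up to a slowly varying logarithm, consistent with the central minima noted above.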
In this as in most 1:1 remnants, the $`h_3`$ parameter has the same sign as the mean velocity $`v_0`$; as Fig. 3b shows, the velocity profile has broad leading and narrow trailing wings. This asymmetric profile arises through a combination of box orbits and Z-tube orbits. The former, which have a symmetric and rather narrow velocity distribution, effectively localize the peak of the profile, while the latter, which have a wide distribution with nonzero mean, populate the broad leading wing. Further from the center the trend of $`h_3`$ with position is reversed, and the outermost points are consistent with $`h_3=0`$; this may occur because the outer regions of this rather oblate remnant are almost exclusively populated by Z-tube orbits.
The $`h_4`$ profile in Fig. 2 shows a significant peak at the center, falls off rapidly at slightly larger radii, then appears to increase again before becoming statistically consistent with $`h_4=0`$ at the outer points plotted. All the remnants in our sample have $`h_4>0`$ at small radii and more nearly Gaussian distributions at large radii. However, there are large variations from remnant to remnant, so it’s not clear if the results presented for remnant E should be considered typical. As Fig. 3a shows, the distinctly triangular shape of the velocity distribution near the center is almost entirely due to particles on box orbits. Further from the center the fraction of box orbits declines and the velocity distribution of the box orbits becomes less triangular; both of these trends tend to reduce the measured values of $`h_4`$.
Having examined the behavior of the parameters on the major axis, we briefly describe two-dimensional maps of the mean velocity and velocity dispersion in Fig. 4. Mean rotation velocities are highest in the equatorial plane, while elsewhere a roughly cylindrical rotation pattern is seen. The zero velocity contour is slightly tilted with respect to the $`Z`$ axis, indicating a modest amount of rotational misalignment at larger radii. If the galaxy only contained Z-tube orbits, the contours would run parallel to the $`Z`$ axis; the net streaming of particles in X-tube orbits causes the slant and misaligns the rotational axes.
On the whole, the velocity dispersion falls off rather more rapidly away from the major axis than it does along the major axis; the dispersion contours are roughly aligned with, although rounder than, the surface density contours. Note, however, the two closed contours representing dispersion maxima directly above and below the central minimum, and the vertical elongation of the next lower contour. Several more examples of this feature will be presented shortly.
### 2.2 Variety in 1:1 mergers
In equal-mass mergers, the degree of violent relaxation depends on the initial orientations of the progenitor disks. Thus in contrast to the relatively simple product of a direct encounter just described, mergers of inclined or retrograde disks produce remnants with a wide range of kinematic properties. Here we describe a few examples.
#### 2.2.1 Counterrotation at large radii
Fig. 5 shows Gauss-Hermite parameters as functions of major-axis position for remnant H. This object is similar in shape to remnant E but rotates more slowly than any other remnant in our sample; it was produced by a retrograde encounter between disks with inclinations of $`i_1=109^{\circ}`$ and $`i_2=180^{\circ}`$. In the course of the encounter, the first disk absorbed a good deal of orbital angular momentum, producing a spheroidal structure which rotates in the same sense in which the progenitors once orbited each other. The second disk, which suffered an exactly retrograde passage, remained relatively thin and retained its original sense of rotation. This rather peculiar combination of circumstances accounts for the counterrotation seen at large radii. Near the center, the virtual slit includes a substantial fraction of particles from the first progenitor, while at larger radii the kinematics near the major axis are dominated by particles from the second disk. This produces a change in the direction of the mean velocity as the distance along the major axis from the center increases.
Besides its counterrotation, this remnant has other peculiar kinematic features. For example, $`h_3`$ and $`v_0`$ have opposite signs over most of the range plotted in Fig. 5, implying that the velocity profile has narrow leading and broad trailing wings; this is atypical for an equal-mass merger remnant. Moreover, within the effective radius the $`h_4`$ parameter is quite high, indicating that the profile’s wings are relatively broad in comparison to the overall dispersion. These broad wings may result from the superposition of two distinct velocity systems with widely-separated mean velocities.
#### 2.2.2 Major-axis rotation at small radii
Fig. 6 presents maps of $`v_0`$ and $`\sigma `$ for remnant G, a slowly-rotating and relatively prolate object produced by a merger of disks with inclinations $`i_1=109^{\circ}`$ and $`i_2=71^{\circ}`$. This remnant has a large population of X-tube orbits which dominate the net rotation at small radii; the central regions thus rotate about the remnant’s major axis. At larger radii the angular momentum vector is largely determined by Z-tube orbits, favoring a more normal pattern of minor-axis rotation. Note that the zero velocity contour is inclined with respect to the minor axis, indicating that X-tube orbits still contribute to the net rotation; moreover, at large radii the X-tube orbits rotate in the opposite direction to that near the center.
The dispersion contours in Fig. 6 are elongated perpendicular to the projected density contours, and the highest $`\sigma `$ values are seen in two regions on the minor axis above and below the center of the remnant. An explanation for this behavior will be given shortly.
#### 2.2.3 Severe rotational misalignment
As a last example of kinematic diversity among equal-mass mergers, Fig. 7 presents maps of all four Gauss-Hermite parameters for remnant C. Like remnant G just described, this object was produced by a merger of two inclined disks and contains a large number of X-tube orbits. Within the effective radius the rotation axis is severely misaligned with the minor axis, while at larger radii the velocity contours are more nearly parallel to the minor axis. This occurs because X-tube orbits make a significant contribution to the net angular momentum at small radii, while Z-tube orbits play a larger role at large radii.
Along the minor axis, the velocity dispersion peaks on either side of the remnant’s center. Recall that such minor-axis peaks were also seen in previous dispersion maps (Figs. 4 and 6). In remnants C and G the locations of these peaks and the general shape of the high-dispersion regions resemble the spatial distribution of X-tube orbits. Such orbits encircle the waists of prolate galaxies, and travel roughly along the $`Y`$ axis – that is, towards or away from the virtual observer – in the regions of peak dispersion. Further evidence that X-tube orbits are responsible for these peaks appears in Fig. 8, which compares velocity profiles near the center and near a peak of the dispersion. The central profile, symmetric and relatively narrow, is dominated by box orbits. In contrast, the profile near the peak is dominated by X-tubes; with some X-tubes rotating in one direction and the rest in the other direction, the velocity distribution becomes broader. This effect is strong in remnants C and G because they have large populations of X-tube orbits.
If the X-tube population were “cold”, so that most particles stayed close to the closed orbits which parent them, the X-tube distribution at the peaks would be composed of two counterrotating streams, and corresponding peaks would also occur in the $`h_4`$ map. No such peaks are seen in Fig. 7; instead, the $`h_4`$ map shows a broad ridge of high values extending diagonally from lower left to upper right. The $`h_3`$ map, on the other hand, shows a gradient running along the same diagonal, with low values at the lower left and high values at the upper right. These patterns are considerably clearer than those seen in the other 1:1 remnants; most of the $`h_3`$ and $`h_4`$ maps we have examined seem too noisy to yield definite results. But we do not yet understand even the relatively simple patterns in the present example. For example, whereas $`v_0`$ and $`h_3`$ vary together in the central region of remnant E (see Fig. 2), here the contours of $`v_0`$ and $`h_3`$ are roughly orthogonal within one effective radius. Further modeling is needed to disentangle the roles of different orbit families and determine the range of behavior consistent with dynamical equilibrium.
## 3 Unequal-Mass Mergers
Mergers between galaxies of significantly different masses are less violent than equal-mass mergers. For sufficiently large mass ratios, the more massive galaxy may survive essentially unscathed. Of the eight 3:1 mergers we examined, six have similar kinematic parameter curves and oblate shapes. This kinematic uniformity arises because the larger disk basically survives the merging process.
### 3.1 Oblate 3:1 mergers
Fig. 9 plots the Gauss-Hermite parameters as functions of position along the major axis for remnant B<sub>1</sub>. This remnant was produced by a merger between a large disk with inclination $`i_1=109^{\circ}`$ and a small disk with inclination $`i_2=71^{\circ}`$; though somewhat more triaxial than most of the unequal-mass remnants (see Fig. 1), it serves to illustrate the kinematic structure of typical 3:1 merger remnants. Velocity profiles at selected positions are shown in Fig. 10.
The mean velocities along the major axis reveal a rotation curve similar to those seen in early type spiral and S0 galaxies; the rotation velocity smoothly increases with distance from the center, then levels off at larger radii. The amplitude of this curve is nearly twice that of remnant E (Fig. 2). As the latter is the fastest rotator among our sample of equal-mass mergers, it is at once evident that the 3:1 remnants are kinematically distinct from their 1:1 counterparts.
Along the major axis, the velocity dispersion declines only gradually out to about $`1.5R_\mathrm{e}`$, then falls off rapidly at larger radii. As Fig. 10a shows, the high dispersion at small radii is due to a population of box orbits associated with a central bar or triaxial bulge. At larger radii the dominant orbit family shifts from boxes to Z-tubes, most of which rotate in the same direction as the initial disk of the larger progenitor. The relatively low dispersions seen in the outer regions indicate that this disk has survived the merger without a great deal of dynamical heating.
The 3:1 merger remnants often have rather complex major-axis $`h_3`$ curves, and remnant B<sub>1</sub> is no exception. Within $`|X|<0.05`$ the $`h_3`$ parameter has the same sign as $`v_0`$; the velocity profile has broad leading and narrow trailing wings, much as in typical 1:1 remnants. But at slightly larger $`|X|`$ values $`h_3`$ abruptly changes sign and the shape of the profile is inverted. Representative profiles at $`X=0.03`$ and $`X=0.09`$ are shown in Figs. 10a and 10b, respectively. As noted above, the former is dominated by box orbits; its broad leading wing is populated with Z-tube orbits. In contrast, the latter is dominated by Z-tubes, and its broad trailing wing is largely populated by boxes.
Compared to the other cases presented here, remnant B<sub>1</sub> has rather small $`h_4`$ values over most of the major axis; while the central peak in Fig. 9 is significant, the measured values rapidly fall to near zero further from the center. There is some hint that $`h_4`$ actually becomes negative at intermediate $`|X|`$, but this is not compelling as most points have error bars consistent with $`h_4=0`$. The three rightmost points have positive $`h_4`$ values which appear significant; these may be due to incomplete phase mixing as no corresponding upturn is seen on the other side.
Fig. 11 presents maps of all four Gauss-Hermite parameters for remnant B<sub>1</sub>. As a group, the 3:1 mergers have fairly regular velocity fields; remnant B<sub>1</sub> is typical in this regard, showing somewhat faster rotation near the disk plane and a nearly cylindrical rotation pattern elsewhere. Some asymmetry which may be due to a long-lived warp is evident, but the zero-velocity contour falls close to the minor axis, so there is little kinematic misalignment. The dispersion contours are somewhat elongated parallel to the minor axis, but $`\sigma `$ falls off monotonically with increasing $`|Z|`$, showing no sign of the off-axis peaks noted in the 1:1 remnants. Both the good kinematic alignment and the lack of off-axis dispersion peaks are expected in view of the relative scarcity of X-tube orbits in this and most other 3:1 merger remnants.
The $`h_3`$ and $`h_4`$ parameter maps show definite large-scale patterns which, however, are not easy to interpret. Particularly puzzling is the $`h_3`$ map; $`h_3`$ is basically inversion-symmetric along the major axis (Fig. 9), but no simple symmetry is seen across the face of the system. The $`h_4`$ map shows peaks on the minor axis above and below the center of the galaxy. As Fig. 10c shows, the broad-winged profile at these locations is largely due to Z-tube orbits, with some contribution from X-tubes and boxes. Curiously, each wing is dominated by particles from a different progenitor.
### 3.2 Prolate 3:1 mergers
While most of the unequal-mass merger remnants in our sample are much like the one just described, two have kinematic properties somewhat reminiscent of the equal-mass remnants. These are remnants A<sub>1</sub> and A<sub>2</sub>, which – probably not by coincidence – are also the two most prolate of the 3:1 remnants (Fig. 1). Both objects were produced by direct encounters between disks with inclinations of $`0^{\circ}`$ and $`71^{\circ}`$; here we describe remnant A<sub>1</sub>, which results when the larger disk has inclination $`i_1=0^{\circ}`$.
Fig. 12 shows how the velocity distribution parameters vary along the major axis of remnant A<sub>1</sub>. While this remnant rotates faster than any of the 1:1 remnants, its rotation curve rises rather gradually compared to those of typical 3:1 remnants. Moreover, the dispersion profile is nearly flat instead of falling off at large radii. These kinematic properties indicate that the larger disk, while only slightly thickened by the in-plane merger, has been significantly heated in the radial and azimuthal directions. In fact, the encounter triggers the formation of a very strong bar in the larger disk, and this bar in turn accounts for the nearly-prolate figure of the final merger remnant.
The major-axis $`h_3`$ and $`h_4`$ curves for this object also resemble those seen in many 1:1 merger remnants. Over most of the measured range, $`h_3`$ has the same sign as $`v_0`$, indicating that the velocity profile has broad leading and narrow trailing wings; only beyond $`2R_\mathrm{e}`$ does the profile revert to the shape characteristic of a rotating disk. The $`h_4`$ parameter is also distinctly greater than zero along most of the major axis, like many 1:1 remnants but unlike the oblate 3:1 sample.
The mean velocity and dispersion maps in Fig. 13 reveal kinematic properties intermediate between equal-mass and unequal-mass merger remnants. The velocity field shows no sign of kinematic misalignment; this is typical of 3:1 remnants. On the other hand, the dispersion has maxima on the minor axis above and below the center, like those seen in remnants G and C. Consistent with the explanation for these peaks advanced in § 2.2.3, we note that remnant A<sub>1</sub> has a relatively large population of X-tube orbits for a 3:1 merger.
## 4 Conclusions
During a merger, stellar orbits are scattered by the fluctuating gravitational potential. However, the potential settles down long before the orbits can be completely randomized; consequently, merger remnants preserve significant “memories” of their progenitors (e.g. Barnes 1998 and references therein). In this study we have shown that such memories can be partly recovered from the line of sight velocity profiles of merger remnants.
### 4.1 Comparison with observations
Observations of early-type galaxies reveal a wide variety of kinematic phenomena similar to those seen in our sample of remnants. Very briefly, we will touch on some of these similarities.
#### 4.1.1 Misaligned rotation
As pointed out in sections 2.1 and 2.2.3, as well as in previous studies, kinematic misalignments are expected in merger remnants, and especially in equal-mass mergers. Franx, Illingworth, and de Zeeuw (1991) present a study of kinematic misalignment in elliptical galaxies; most of the galaxies they observed have small misalignments. While remnant C (Fig. 7) is dramatically misaligned, as a whole the equal-mass remnants described here are better aligned than the samples reported in earlier work (Barnes 1992). The incidence of severe misalignment probably depends on several factors; for example, the central density profile can have a significant impact on the phase-space available to major-axis tube orbits. Until the factors which favor misalignment are better understood, it’s not clear if the observed scarcity of severe kinematic misalignment can constrain the role of equal-mass mergers in the formation of elliptical galaxies.
#### 4.1.2 Kinematically decoupled cores
Hernquist and Barnes (1991) presented a dissipational simulation showing that the core of a merger remnant could decouple and counterrotate. We have examined the quantitative effect that counterrotation can have on the observed kinematics of merger remnants. Several galaxies, such as NGC 1700 (Statler, Smecker-Hane, & Cecil 1996), NGC 4365, NGC 4406, NGC 5322 (Bender & Surma 1992), IC 1459, NGC 1374, NGC 4278 (van der Marel & Franx 1993), NGC 4816, and IC 4051 (Mehlert et al. 1998), show line of sight kinematics similar to the line of sight velocity distributions of our models (though some of these galaxies are strong candidates for other scenarios that create counterrotation). In particular, we find amplitudes of $`h_3`$ and $`h_4`$ similar to those reported in the observational studies. This shows that major mergers can produce remnants with the degree of skewness and kurtosis observed in counterrotating systems. We expect that, as more galaxies are observed, further examples with line of sight velocity distributions similar to ours will be found.
Also worth noting are the observations of NGC 253 by Anantharamaiah & Goss (1996), which revealed an orthogonally rotating core suspected to be the result of a merger event. One of our 1:1 merger models also produced an orthogonally rotating core (see Fig. 6).
#### 4.1.3 Counterrotating populations
Early-type galaxies with extended counterrotating populations are rare but not unknown. Some of these systems may have formed by episodic galaxy building (Thakar & Ryden 1996), but others are harder to explain in this way. For example, NGC 4550 (Rubin et al. 1992) has counterrotating disks of comparable radial extent and luminosity; Pfenniger (1999) has proposed that this galaxy formed by an in-plane merger of two disk galaxies. Our analysis of remnant H shows that a somewhat wider range of merger scenarios can produce counterrotating populations.
#### 4.1.4 Rapid rotators
Barnes and Hernquist (1992) and Schweizer and Seitzer (1992), among others, have suggested that S0 galaxies could be made by mergers. Fisher (1997) has collected a sample of S0 galaxies with line of sight velocity distributions fit using Gauss-Hermite parameters. Comparing his observations to our simulations, we find a good match between Fisher’s parameters and the parameters for our disky 3:1 mergers. The overall shapes of the Gauss-Hermite parameters plotted along the major axis are remarkably similar, except that our rotation curve near the origin is less steep than those of the observed S0 galaxies, and some details near the center of our simulations (such as the first twist in the $`h_3`$ parameters) are not apparent in Fisher’s data.
Based on measurements of the ratio of mean velocity to velocity dispersion for a set of faint elliptical galaxies, Rix, Carollo, and Freeman (1999) have argued that these galaxies rotate too rapidly to be products of dissipationless mergers. When we compare these measurements to $`v/\sigma `$ ratios we measured in our simulations (Fig. 14), we find that unequal-mass mergers can not only produce the same peak $`v/\sigma `$ ratios but also produce the same relations between $`v/\sigma `$ and radius. Moreover, both our results and the results of Rix, Carollo, and Freeman show similar ranges of maximum $`v/\sigma `$ ratios, with values in the range of $`1`$ to $`4`$. We conclude that unequal mass mergers can produce remnants with the dynamics, including the $`v/\sigma `$ ratios, characteristic of these faint ellipticals. This result complements a recent study by Naab, Burkert, & Hernquist (1999), which finds that unequal-mass mergers can also produce the disky isophotes characteristic of faint ellipticals and S0 galaxies.
### 4.2 Summary
We have used Gauss-Hermite expansions to measure the line of sight velocity profiles of simulated merger remnants. Even relatively modest values of $`N`$ provide enough data to obtain significant detections of non-Gaussian profiles. Some key results are listed below.
1. Equal-mass merger remnants exhibit a variety of kinematic features rather than any single unique “merger signature”. However, certain features seem common to most of the remnants in our sample; these include slowly rotating inner regions, relatively flat dispersion profiles, off-axis dispersion peaks, and velocity distributions with broad leading and narrow trailing wings.
2. Unequal-mass merger remnants show much less variation in kinematic properties; instead, the larger disk often survives with only moderate damage. Such disk-dominated remnants are characterized by relatively rapid rotation, falling dispersion profiles, and velocity distributions with narrow leading and broad trailing wings. For mass ratios of 3:1, between half and three-fourths of the remnants in our study had strong disk-like kinematics.
3. Simulated remnants have many kinematic characteristics similar to those observed in early-type galaxies. For example, we described counterrotating populations, misaligned rotation, and kinematically decoupled cores resembling those reported in some elliptical galaxies, and rapid rotation consistent with faint ellipticals and S0 galaxies. However, our simulations don’t always match observed galaxies. For example, the mean velocity and $`h_3`$ parameters usually have opposite signs in luminous elliptical galaxies (e.g. Bender, Saglia, & Gerhard 1994), while these parameters often have the same sign in our simulated remnants. More work needs to be done to examine the connections between simulated remnants and real galaxies; in particular, the effects of random viewing angles must be taken into account before definitive comparisons of models and observations are possible.
We thank Hans-Walter Rix and Andreas Burkert for stimulating discussions, and the referee for a prompt and helpful report. JEB acknowledges partial support from NASA grant NAG 5-8393.
## 1 Introduction
Spacetimes with singularities occur in many classical theories of gravity. This includes classical superstring theory, i.e., supergravity, where there are examples of spacelike, null, and timelike singularities, some of which are naked. An important question is how, or indeed whether, quantum string theory resolves these singularities.
As pointed out in Ref. , some singularities are unphysical and cannot be patched up by string theory. A canonical example is the negative-mass Schwarzschild geometry, which has a naked singularity. If string theory were to patch that up by smoothing it out into a small region of strong but finite curvature, then the negativity of the mass would signal an instability of the theory: the vacuum itself would be unstable.
String theory resolves some spacetime singularities in a beautiful way. The basic idea is that once spacetime curvatures, or other invariant measures of quantum corrections such as the dilaton, are too large for supergravity to be applicable, other degrees of freedom take over. Examples of this are the gravity/gauge theory correspondences with sixteen supercharges of Ref. . Let us now turn to an introduction to these correspondences; a comprehensive review of the subject, including the original $`AdS`$/CFT correspondences of Ref. , may be found in Ref. .
The starting point for correspondences with sixteen supercharges is a system of $`N`$ D$`p`$-branes of Type II string theory. These D$`p`$-branes are hypersurfaces where open strings must end. The open strings distinguish which brane they end on via their Chan-Paton factors, which they carry on their endpoints at zero energy cost. This gives rise to a $`U(N)`$ gauge theory living on the worldvolume of the D$`p`$-branes. In addition, the D$`p`$-branes carry mass and Ramond-Ramond charge in the ten-dimensional bulk due to interactions between open and closed strings. See Ref. and Fig.1.
To engineer gravity/gauge theory correspondences, one takes a special low-energy limit of the D$`p`$-brane system. In the limit, the complicated physics on the branes reduces to $`d=p+1`$ supersymmetric $`U(N)`$ Yang-Mills theory (SYM) with sixteen supercharges. For $`p>3`$, the SYM theory is nonrenormalizable and so there is a need for new ultraviolet degrees of freedom in order to define the theory. String theory provides them in a well understood fashion for $`p<6`$. We will not discuss the $`p\geq 6`$ cases here.
In the limit, gauge theory energies $`E`$ are taken to be well below the string scale, $`E\ell_\mathrm{s}\to 0`$. In order to keep the brane physics nontrivial, the gauge coupling on the branes, which is a derived quantity, $`g_{\mathrm{SYM}}^{2(p)}=(2\pi )^{p-2}g_\mathrm{s}\ell_\mathrm{s}^{p-3},`$ is held fixed. Here $`g_\mathrm{s}`$ is the string coupling constant. BPS W-boson-like states in the gauge theory, which in the bulk picture are open strings stretched perpendicularly between D$`p`$-branes, have mass $`U\equiv r/\ell_\mathrm{s}^2`$, which is also held fixed. In this relation, $`r`$ is the separation between the D$`p`$-branes.
In string theory the bulk Newton constant is also a derived quantity, e.g. in $`d=10`$ we have $`G_{10}=8\pi ^6g_\mathrm{s}^2\ell_\mathrm{s}^8.`$ The D$`p`$-brane supergravity geometry may be written in terms of a harmonic function $`H_p=1+c_pg_\mathrm{s}N(\ell_\mathrm{s}/r)^{7-p}`$, where $`c_p`$ is a constant. Use of the above scaling relations results in the loss of the $`1`$ from $`H_p`$, i.e. the asymptotically flat part of the geometry. As a result, the bulk description of the system of $`N`$ D$`p`$-branes in the above low-energy limit is in terms of string theory on the near-horizon geometry.
This limit is called the decoupling limit because at such low energies the coupling between open and closed strings is turned off. Hence, the gauge theory on the branes and the bulk string theory on the near-horizon geometry may each be considered as a unitary theory in its own right. This property makes a duality between the two theories feasible. Such a duality is often termed ‘holographic’ because it relates a $`d=10`$ bulk string theory to a $`d=p+1`$ gauge field theory. Note also that in this limit, there is no gravity on the brane, unlike the situation that occurs in the Randall-Sundrum scenario of Ref.s .
For general $`p`$, the spacetime curvature of the D$`p`$-brane near-horizon geometry is not constant but varies with the radial variable $`U`$. The local value of the string coupling, $`g_\mathrm{s}e^\mathrm{\Phi }`$, also varies, but differently. For example, the cases $`p<3`$ have a dilaton singularity at $`U\to 0`$ and a curvature singularity at $`U\to \infty `$, while for the $`p>3`$ cases it is the other way around. We then need to know what degrees of freedom take over where the bulk $`d=10`$ supergravity description goes bad. In regions where the dilaton is strong, we use S-duality, which takes us to a description with a weak dilaton. In regions where the curvatures become strong, it turns out that the right description is the SYM theory. For details, see Ref. and further development reviewed in Ref. . Before moving on, we just note that, by the nature of duality, only one description may be weakly coupled in any given region of parameter- and $`U`$-space.
If we wish to do gauge theory at finite temperature $`T`$, we use the nonextremal D-brane geometry with Hawking temperature $`T`$, taken in the decoupling limit. This gives rise to a black hole type generalization of the near-horizon geometry. In principle, this is a useful picture in the context of the black hole information problem, because we have a duality of the black hole type system to a quantum field theory which is a manifestly unitary theory. The difficulty in solving the information problem in practice is the strong-weak nature of the duality: when the bulk picture is weakly coupled (calculable), the brane picture is strongly coupled and vice-versa.
There have been many further generalizations of the prototype correspondences; see Ref.s . We note here, however, that some attempts to construct supergravity duals to certain gauge theories turned out to be unphysical, partly because analyzing the supergravity theory in situations of interest is generally very difficult. See e.g. Ref. for a discussion of some of the problems encountered.
## 2 The pure $`𝒩=2`$ correspondence
We are interested in a system which gives pure gauge theory with eight supercharges living on the branes. Examples with eight supercharges and plenty of hypermultiplet matter include the original $`AdS_3\times S^3\times T^4`$ and $`AdS_3\times S^3\times K3`$ correspondences, and those of Ref.s . However, we want no hypermultiplets, and this will have drastic consequences for the properties of our spacetime.
Our setup of Ref. includes many dual realizations in terms of branes. The one on which we will concentrate is obtained by wrapping D$`(p+4)`$-branes on $`K3`$. Other options include D$`(p+1)`$-branes strung between two parallel NS5-branes, and D$`(p+2)`$-branes wrapped on collapsing S<sup>2</sup> cycles in $`K3`$. Related work appeared, in yet another dual realization, the heterotic one, in Ref. .
### 2.1 The setup
Here we will focus on the $`p=2`$ case for simplicity. Our starting point is therefore a system of $`N`$ D6-branes wrapped on a $`K3`$ surface.
For the gauge theory side, we build the $`d=2+1`$ ’t Hooft coupling out of the $`d=6+1`$ coupling and the $`K3`$ volume $`(2\pi R)^4`$:
$$\lambda _2\equiv g_{\mathrm{SYM},2}^2N=\frac{g_{\mathrm{SYM},6}^2N}{(2\pi R)^4}=\frac{g_\mathrm{s}N\ell_\mathrm{s}^3}{R^4}.$$
(1)
The gauge theory will indeed be $`2+1`$ dimensional as long as excitations are low-energy by comparison to the characteristic scale of excitations in the compact $`K3`$. We therefore keep $`1/R`$ finite in the decoupling limit.
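As a quick consistency check of Eq. (1) (our bookkeeping, in natural units where energy $`\sim `$ 1/length): the $`d=2+1`$ coupling squared carries dimension of energy,

$$[\lambda _2]=\left[\frac{g_\mathrm{s}N\ell_\mathrm{s}^3}{R^4}\right]=\mathrm{length}^{-1}=\mathrm{energy},$$

so $`\lambda _2`$ sets an energy scale that can be compared directly with the radial coordinate $`U`$, which is itself an energy (a W-boson mass).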
For the bulk side of things, in constructing the supergravity solution we usually start by finding the conserved charges. Generally, a supergravity solution is then uniquely determined via a no-hair argument. It is important to realize, however, that this argument fails when the geometry has a naked singularity, and so we will have to proceed with caution if we encounter one.
We begin with $`N`$ D6-branes, so we have D6-brane charge $`N`$. Once we wrap a D6 on a $`K3`$, an additional charge arises due to the curvature of $`K3`$; see Ref.s . As a result, we also have D2-brane charge $`(-N)`$. The ADM tension formula is protected by supersymmetry and so we have
$$\tau _{\mathrm{ADM}}=\frac{N}{(2\pi )^3g_\mathrm{s}\ell_\mathrm{s}^7}R^4-\frac{N}{(2\pi )^3g_\mathrm{s}\ell_\mathrm{s}^3}.$$
(2)
Applying the no-hair theorem naively, we find that the metric the string sees is, in the decoupling limit,
$$\frac{dS^2}{\ell_\mathrm{s}^2}=\frac{R^2\,dx_{\parallel}^2}{\sqrt{(\lambda _2/U)\left[1-(\lambda _2/U)\right]}}+\sqrt{\frac{1-(\lambda _2/U)}{(\lambda _2/U)}}\,ds_{K3}^2+R^2\sqrt{(\lambda _2/U)\left[1-(\lambda _2/U)\right]}\left\{dU^2+U^2d\mathrm{\Omega }_2^2\right\}.$$
(3)
We have used spherical coordinates $`(U,\mathrm{\Omega }_2)`$ for the three dimensions transverse to both the $`K3`$ and the $`d=2+1`$ worldvolume directions $`x_{\parallel}`$.
Let us inspect our classical metric. At
$$U=\lambda _2,$$
(4)
there is a pathology; some components of the metric become imaginary for $`U<\lambda _2`$. Computing curvature invariants, we find that there is a singularity at $`U=\lambda _2`$, the locus of which is a two-sphere. The would-be horizon, located where $`g^{UU}\to 0`$, is at $`U=0`$, and so the singularity is naked.
In a related (heterotic) context, this singularity was studied in Ref.s , and in Ref. it was dubbed the ‘repulson’ because massive particles are repelled by the singularity. This behavior is reminiscent of the inside of the extremal Reissner-Nordström black hole.
Given our previous caveat on use of no-hair theorems, and the fact that some classical spacetime singularities are unphysical and cannot be resolved by quantum stringy effects, we have cause for concern about our repulson. In fact, it turns out that the repulson singularity is excised via a stringy mechanism, and we now turn to the description of this excision phenomenon.
### 2.2 Probe physics and spacetime singularity resolution
The physics seen by a probe in string theory depends on the probe and the target. If we probe a target made of fundamental strings with another string, the best spatial resolution possible turns out to be the string scale $`\ell_\mathrm{s}`$. This happens because the string is an extended object; more energy-momentum pumped into the probe string does not result in greater position-space resolution but rather in stretching the probe. The existence of a minimum distance is related to the T-duality symmetry of string theory (for a spatial direction compactified on a circle of radius $`R`$, T-duality exchanges $`R/\ell_\mathrm{s}\leftrightarrow \ell_\mathrm{s}/R`$). If a D-brane is used as a probe of other D-branes, different physics results. For example, in the case of D0-branes, Matrix Theory (see e.g. Ref. ) gives a characteristic scale of $`g_\mathrm{s}^{1/3}\ell_\mathrm{s}`$.
In our situation we are interested in probing the system of $`N`$ D6-branes wrapped on the $`K3`$. It turns out that the best probe for answering our question about singularity resolution is a clone, namely a single D6-brane also wrapped on the $`K3`$.
We begin with the spacetime or bulk side of the story. We take large-$`N`$, so that the probe D6 can be thought of as a ‘test’-brane, and the $`N`$ ‘source’-branes are represented by their supergravity solution. By supersymmetry, the static potential between the source and probe branes vanishes. The action for the probe brane turns out to be
$$S=\int dt\left\{\frac{1}{2}\,\vec{v}^{\,2}\,\frac{R^4}{(2\pi )^2g_\mathrm{s}\ell_\mathrm{s}^7}\left[1-2(\lambda _2/U)\right]\right\}+\mathcal{O}(\vec{v}^{\,4}),$$
(5)
where $`\vec{v}`$ is the velocity of the brane in the $`(U,\mathrm{\Omega }_2)`$ directions. The coefficient of $`v^iv^j`$ is the metric on moduli space, and we see that for the D-branes wrapped on $`K3`$ it is not flat as it would have been for branes wrapped on T<sup>4</sup>. The moduli space metric has a zero, which signals the vanishing of the ‘local’ tension of the probe. The locus of this zero is a sphere of radius
$$U_\mathrm{e}\equiv 2\lambda _2=2g_{\mathrm{SYM}}^2N.$$
(6)
Notice that the radius $`U_\mathrm{e}`$ is twice as far out as the radius of the repulson singularity. We may now ask what physics is signified by the vanishing of the local tension of the probe. By inspection of the metric and dilaton, we find that nothing special happens at $`U_\mathrm{e}`$. On the other hand, the volume of $`K3`$, which varies with $`U`$ as
$$\mathrm{Vol}(K3)=\ell_\mathrm{s}^4\,\frac{\left[1-(\lambda _2/U)\right]}{(\lambda _2/U)},$$
(7)
goes to the special value $`\mathrm{}_\mathrm{s}^4`$ at $`U_\mathrm{e}`$.
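Indeed, setting Eq. (7) equal to this self-dual value locates the special radius directly:

$$\mathrm{Vol}(K3)=\ell_\mathrm{s}^4\quad \Longleftrightarrow \quad 1-\frac{\lambda _2}{U}=\frac{\lambda _2}{U}\quad \Longleftrightarrow \quad U=2\lambda _2=U_\mathrm{e},$$

in agreement with Eq. (6).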
It is perhaps easiest to interpret the physics by performing T- and S-dualities to turn the D6-brane wrapped on a $`K3`$ into a heterotic string wrapped on a circle. The D6- and D2-brane charges (+1,-1) turn into winding and momentum charges (+1,-1), and the $`\mathrm{}_\mathrm{s}^4`$-sized $`K3`$ turns into a $`\mathrm{}_\mathrm{s}`$-sized circle. From perturbative string theory it is known that such strings wound on such a circle are massless and provide the gauge bosons for an enhanced $`SU(2)`$ symmetry. Alternatively, in the dual realization with D3-branes strung between two NS5-branes, the $`SU(2)`$ is that of the two NS5-branes, which is restored at the radius $`U_\mathrm{e}`$ because brane bending due to the D3’s causes the NS5’s to touch there. Dualizing back to our original system, we still have the $`SU(2)`$ enhanced symmetry, at the locus $`U=U_\mathrm{e}`$, which we dub the ‘enhançon’. This $`SU(2)`$ symmetry is broken at any distance $`U>U_\mathrm{e}`$; this is a Higgs mechanism in disguise.
We can also use the heterotic dual picture to understand physically why there is no notion of ‘inside the enhançon’ for the probe brane; it is just the minimum-distance phenomenon we mentioned at the beginning of this section. We can also compute the Compton wavelength of the probe, and we find that it expands smoothly as the enhançon locus is approached. We refer the reader to Ref. for details and further explanation.
Now let us imagine trying to build the singular repulson geometry, one brane at a time. Notice that the radius of the enhançon locus, $`U_\mathrm{e}`$ of Eq. (6), is linear in $`N`$. Thus, the first brane has a small enhançon radius. The second brane cannot go inside the enhançon radius of first brane, and so we get a pair of branes at finite separation. The third brane cannot go inside the enhançon radius of the first two, and so on; in this way a sphere of $`N`$ evenly spaced branes is built up. The source branes then form a ‘Dyson sphere’ of a radius twice that at which the classical naked singularity occurred. Therefore, the singularity is excised - in quantum string theory it was never really there. The gravitational field is flat inside the Dyson sphere, by symmetry. Notice also that at large ’t Hooft coupling this Dyson sphere is macroscopically large.
### 2.3 The enhançon phenomenon and Seiberg-Witten theory
In the last subsection we saw that stringy effects saved the classical spacetime from embarrassment by excising its naked singularity. We now proceed to exhibit the corresponding phenomenon on the gauge theory side of the story.
For the analogue of our singularity excision in classical gravity, we must look to nonperturbative gauge theory. Fortunately, all we need for the study of the moduli space physics is the Seiberg-Witten (S-W) curve. For the case $`N=2`$, there are two special branch points in moduli space, namely those where the monopole and the dyon of the theory become massless. The large-$`N`$ version of the SW curve (we have switched to the $`p=3`$ case for convenience) was worked out in Ref. ,
$$y^2=\prod_{i=1}^{N}(x-\varphi _i)^2-\mathrm{\Lambda }^{2N},$$
(8)
where $`\mathrm{\Lambda }`$ is a nonperturbatively generated scale and the $`\varphi _i`$ are the adjoint scalar field vevs.
In the situation of interest, namely the system of a single probe brane far away from $`(N-1)`$ other branes, the vevs are
$$\varphi _i=0,\quad i=1,\dots ,N-1,\qquad \varphi _N\gg \mathrm{\Lambda }.$$
(9)
Solving for the branch points $`y=0`$ gives $`2(N-1)`$ points $`x`$ on an $`S^1`$ of radius $`\mathrm{\Lambda }`$, and two at $`x=\varphi _N`$ (to $`\mathcal{O}(1/N)`$). More generally, little is known about Seiberg-Witten theory for the $`d=p+1`$ gauge theories, but by analogy we will obtain an $`S^{4-p}`$ of radius $`\mathrm{\Lambda }`$. This gives the brane positions as shown in Fig. 2.
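This structure is easy to check numerically for modest $`N`$. The sketch below (with the illustrative values $`\mathrm{\Lambda }=1`$ and $`\varphi _N=5`$) finds the branch points of Eq. (8) with the vevs of Eq. (9):

```python
import numpy as np

# Branch points y = 0 of y^2 = x^(2(N-1)) (x - phi_N)^2 - Lam^(2N),
# i.e. Eq. (8) with the vevs of Eq. (9).  Parameter values are illustrative.
N, Lam, phi_N = 6, 1.0, 5.0
P = np.poly1d([1.0] + [0.0] * (2 * (N - 1)))   # x^(2(N-1))
P = P * np.poly1d([1.0, -phi_N]) ** 2          # times (x - phi_N)^2
P = P - Lam ** (2 * N)                         # minus Lam^(2N)
print(np.sort(np.abs(P.roots)))
# 2(N-1) roots lie on a circle of radius Lam*(Lam/phi_N)^(1/(N-1)),
# which tends to Lam at large N; the remaining two sit at x ~ phi_N.
```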
We can also use the S-W curve to deduce the physics if we try to bring the probe inside the sphere. For this, we set $`\varphi _N<\mathrm{\Lambda }`$. By solving for the branch points in this case, we find that they all lie on the sphere. The conclusion we can draw about the physics is that as we try to adiabatically move the probe brane ‘inside’ the sphere, it actually smoothly melts into the sphere.
It is worth noting at this point that there are other solutions for the branch points, but we used a symmetry argument by analogy with the bulk computation to pin down the spherically symmetric one. This suggests that there are other configurations on the bulk side which correspond to the other solutions from the SW curve, or that there is a degeneracy to explore.
Although we do not have space to discuss the details here, we have constructed the phase diagram of the $`d=p+1`$ systems with eight supercharges. This turns out to be significantly less straightforward than for sixteen supercharges. One reason is that there is a region of the phase diagram which does not have an obvious weakly coupled dual. This is the region describing energies of the order of the masses of strings stretched between different branes on the enhançon sphere. In addition, the finite temperature version of our setup does not permit black hole horizons. For the details, and some suggestions about the nature of the missing component of the phase diagram, we refer the reader to Ref. .
Ref. shows that the enhançon phenomenon also appears in $`SO(2N+1)`$, $`USp(2N)`$ and $`SO(2N)`$ gauge theories.
In the future we expect to develop further the physics of the enhançon phenomenon, and to make links with other recent studies of singularity resolution in string theory, such as Ref. and Ref. .
## Acknowledgments
The author wishes to acknowledge co-authors on , and in addition helpful discussions with Eric D’Hoker and Gary Horowitz.
This work was supported in part by NSF grant PHY94-07194.
# Visualizing the particle-hole dualism in high temperature superconductors.
## Abstract
Recent Scanning Tunneling Microscope (STM) experiments offer a unique insight into the inner workings of the superconducting state of high-Tc superconductors. Impurities deliberately placed inside the material perturb the coherent state and produce additional excitations. Superconducting excitations — quasiparticles — are a quantum-mechanical mixture of a negatively charged electron (–e) and a positively charged hole (+e). Depending on the applied voltage bias, the STM can sample either the particle or the hole content of a superconducting excitation. We argue that the complementary cross-shaped patterns observed at positive and negative biases are a manifestation of the particle-hole dualism of the quasiparticles.
The dual particle-wave character of microscopic objects is one of the most striking phenomena in nature. While posing deep philosophical problems, the dualism is ubiquitous in the microworld. Most notably, two-slit interference experiments revealed the wave nature of electrons. In condensed matter systems, such explicit visualization of the wave nature of the constituent electrons was missing until just recently. The breakthrough came when researchers from the IBM labs realized that the best way to reveal the electrons inside a material is to place an impurity in an otherwise perfect crystal structure. By building corrals of impurities on a clean surface, and observing the generated patterns through the scanning tunneling microscope (STM), the experimenters were able to demonstrate the laws of wave optics using conduction electron waves.
The analog of the conduction electrons in superconductors are the quasiparticles. Unlike electrons, the superconducting quasiparticles do not carry a definite charge. Like the Cheshire cat, a quasiparticle is a combination of an electron and its absence (a “hole”). And much like the Cheshire cat, the superconducting quasiparticles have never been seen in nature. Until now. In a series of beautiful experiments, J.C. Davis’ group observes just that – the interference of the superconducting quasiparticles, which, depending on the way one looks at them, show their electron or hole parts.
Pan et al. explore the structure of the superconducting state in the Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> high-temperature superconductor in the vicinity of Ni and Zn impurities. To visualize the local quasiparticle states they employ the STM technique. There is one aspect of electron tunneling into the superconducting state that makes it qualitatively different from tunneling in conventional metals. The STM tip contains only regular electrons, which carry a unit of charge (–e). On the other hand, quasiparticles that live inside the superconductor do not possess a well-defined charge. Upon entering the superconductor, an electron that arrived from the normal STM tip must undergo a transformation into the Bogoliubov quasiparticles native to the superconductor. The detailed process of conversion of an electron into a quasiparticle is a deep theoretical problem. Another example of such a process occurs in the case of tunneling into the fractional quantum Hall liquid, where the natural quasiparticles carry a fractional, but well defined, charge. Fortunately, for low-intensity tunneling the exact details of the particle-quasiparticle conversion become irrelevant, and the tunneling amplitude is simply determined by the overlap between the electron state and the quasiparticle state of the same energy. That is to say, the tunneling intensity is high if the “electronic content” of the quasiparticle is high, and vice versa.
The most striking experimental observation that Pan et al. make is that, close to the impurity, additional electronic states are generated, with energies inside the superconducting gap. That such states should exist in conventional (s-wave) superconductors was first predicted by Shiba and others in the late 1960’s, while for the unconventional (d-wave) superconductors these states were predicted and intensively studied in . Experimentally, these intra-gap states in a conventional (Nb) superconductor were previously observed in IBM experiments .
The low-lying impurity states are produced when the local impurity is sufficiently strong so as to significantly disturb the superconducting order parameter in its neighborhood. It has been found theoretically that in conventional superconductors an impurity that does not have its own magnetic moment, or “spin,” should not produce states inside the gap. On the other hand, in d-wave superconductors, even a potential (spinless) impurity produces two states located symmetrically above and below the chemical potential.
In the latest experiments, Pan et al. see the impurity states inside the gap. Most surprising, however, is the spatial structure of the impurity states. For a Ni impurity, the positive-energy state, which corresponds to adding an electron, has the largest weight at the impurity site, smaller weight on the next-nearest neighbors, and even smaller weight on the nearest neighbors. The negative-energy state, which corresponds to removing an electron, shows a complementary pattern: the impurity state is evenly distributed among the impurity’s four nearest neighbors, with almost no weight at the impurity site or at the next-nearest neighbors.
In this paper we show how this highly non-trivial impurity state structure can be accounted for by using the particle-hole dualism of quasiparticles in the superconducting state. We implement a simple but realistic model of a doped cuprate superconductor with a potential impurity. We demonstrate that the alternating intensity of the impurity states is a manifestation of the quantum wave nature of the quasiparticles scattering from the impurity.
Qualitatively, the spatial distribution of the tunneling intensity can be understood as follows. Let us define the respective amplitudes of the particle and hole parts of the Bogoliubov quasiparticle, $`u_n(i)`$ and $`v_n(i)`$, for site $`i`$ and eigenstate $`n`$. They obey the normalization condition $`\sum _n|u_n(i)|^2+|v_n(i)|^2=1`$ for any fixed site $`i`$. Consider now a site where, say, $`u_n(i)`$ is large and close to 1. It follows that for the same site $`v_n(i)`$ would have to be small, since the normalization condition is almost saturated by the $`|u_n(i)|^2`$ term alone. Similarly, for sites where $`v_n(i)`$ has large magnitude, $`u_n(i)`$ would have to be small. Recall now that a large $`u(i)`$ component means that the quasiparticle has a large electron component on this site. Hence an electron has a large probability to tunnel into the superconductor on this site, and the tunneling intensity for electrons (positive bias) will be large. Conversely, on those sites the hole amplitude is small, $`|v(i)|\ll |u(i)|`$, and the hole intensity (negative bias) will be small. Similarly, for sites with large hole amplitudes, $`|v(i)|\gg |u(i)|`$, the electron amplitude is suppressed and such a site will be bright at the hole bias. Therefore, if there is a particular pattern of large particle amplitude (sampled at positive bias) on certain sites $`i`$, the complementary pattern of bright sites for hole tunneling (at negative bias) develops as a consequence of the inherent particle-hole mixture in the superconductor. This is, we believe, the main physics behind the rotation of the cross upon switching the bias.
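In the standard Bogoliubov-de Gennes notation used above (a textbook relation, quoted here only to fix conventions), the site-resolved density of states probed by weak STM tunneling is

$$N(i,\omega )=\sum _n\left[|u_n(i)|^2\delta (\omega -E_n)+|v_n(i)|^2\delta (\omega +E_n)\right],$$

so positive bias samples the particle amplitudes $`|u_n(i)|^2`$ and negative bias the hole amplitudes $`|v_n(i)|^2`$, which is precisely the complementarity exploited in the argument above.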
Our numerical results are summarized in Fig. 1, where the particle- and hole-like intensities are plotted near the impurity site.
To model the high-temperature superconductors we utilize the highly-anisotropic structure of the cuprates and focus on a single layer of the material. In the simplified model, the conduction electrons live on the copper sites, $`i`$, and can hop to the neighboring sites, $`j`$, with a certain probability measured by the quantity $`t`$. In addition to that, electrons that occupy neighboring sites feel a mutual attraction of strength $`V`$. Formally, this model is represented by the Hamiltonian,
$$H_0=-t\sum_{\langle i,j\rangle ,\sigma }c_{i\sigma }^{\dagger }c_{j\sigma }-V\sum_{\langle i,j\rangle }n_in_j,$$ (1)
where a quantum-mechanical operator $`c_{i\sigma }^{\dagger }`$ creates an electron on site $`i`$, the operator $`c_{j\sigma }`$ removes an electron from site $`j`$, and $`n_i=c_{i\uparrow }^{\dagger }c_{i\uparrow }+c_{i\downarrow }^{\dagger }c_{i\downarrow }`$ represents the electron density on site $`i`$. The electron spin, $`\sigma `$, can point up or down. This model, referred to as the $`t`$-$`V`$ model, is known to produce d-wave pairing for electron densities close to one electron per lattice site. The model, however, is invalid very close to the half-filled case (exactly one electron per Cu site), where the cuprates are no longer superconductors but rather antiferromagnetic insulators. For the model parameters we use $`t=V=300`$ meV. This choice ensures that the electronic band structure of the cuprates is accurately represented, and that the superconducting gap in the electronic density of states is of the order of $`0.1t=30`$ meV. The local impurity is introduced by modifying the electron energy on a particular site. The corresponding correction to the Hamiltonian is
$$H_{imp}=V_{imp}(n_{0\uparrow }+n_{0\downarrow })-S_{imp}(n_{0\uparrow }-n_{0\downarrow }).$$ (2)
The first term is the potential part of the impurity energy, which couples to the total electronic density on site 0, and the second term describes the magnetic interaction of the impurity spin with the electronic spin density on the same site. We assume that the impurity spin is large and can be treated classically, as if it were a local magnetic field. We solve the impurity problem in the Hartree-Fock approximation, which replaces the two-body interaction in $`H_0`$ with an effective single-electron potential. Our goal is to determine $`V_{imp}`$ and $`S_{imp}`$ so as to match both the location of the impurity states within the gap and the spatial distribution of their intensity.
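A minimal numerical sketch of this type of calculation is given below. It diagonalizes a Bogoliubov-de Gennes Hamiltonian on a small periodic lattice with a frozen d-wave pair field and a single on-site potential impurity; unlike the calculation described in the text, it is not self-consistent, it omits the spin term $`S_{imp}`$, and all parameter values are illustrative rather than fitted.

```python
import numpy as np

L, t, Delta0, mu, V_imp = 15, 1.0, 0.1, -0.3, -3.0   # illustrative values
n = L * L
site = lambda x, y: (x % L) * L + (y % L)            # periodic boundaries

H = np.zeros((2 * n, 2 * n))                         # (particle, hole) blocks
for x in range(L):
    for y in range(L):
        i = site(x, y)
        H[i, i], H[n + i, n + i] = -mu, +mu
        for dx, dy, sgn in ((1, 0, +1.0), (0, 1, -1.0)):  # d-wave bond signs
            j = site(x + dx, y + dy)
            H[i, j] = H[j, i] = -t                   # particle hopping
            H[n + i, n + j] = H[n + j, n + i] = +t   # hole block
            for a, b in ((i, n + j), (j, n + i)):    # real, symmetric pairing
                H[a, b] = H[b, a] = sgn * Delta0

i0 = site(L // 2, L // 2)                            # impurity site
H[i0, i0] += V_imp
H[n + i0, n + i0] -= V_imp

E, W = np.linalg.eigh(H)
u, v = W[:n, :], W[n:, :]                            # u_n(i), v_n(i)

def ldos(i, omega, eta=0.02):
    """Lorentzian-broadened local density of states at site i."""
    return np.sum(np.abs(u[i])**2 * eta / ((omega - E)**2 + eta**2)
                  + np.abs(v[i])**2 * eta / ((omega + E)**2 + eta**2)) / np.pi

curve = [ldos(i0, w) for w in np.linspace(-0.3, 0.3, 121)]
```

Comparing `ldos` at the impurity site and at its neighbors for positive and negative $`\omega `$ reproduces, qualitatively, the complementary bright/dark patterns discussed above.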
An important aspect that we include in the treatment of the impurity problem is the absence of particle-hole symmetry. The particle-hole symmetry is lost as soon as we depart from the half-filled insulating part of the cuprate phase diagram, and hence the asymmetry is related to the amount of doping. While the asymmetry is not large, it results in two important effects. First, it leads to a redistribution of the spectral weight between the impurity site and its neighbors; second, it changes the position of the impurity level. The first effect is closely analogous to the Friedel oscillations which occur in the vicinity of an impurity in a normal metal. An impurity essentially plays the role of a boundary condition imposed on the scattered electronic states. Since the most important states are in the vicinity of the Fermi level, these states oscillate in space with the corresponding filling-dependent Fermi wave vector. This generates oscillations in the electronic density of states. A similar phenomenon occurs for impurities in the cuprates. However, it is compounded by the superconducting character of the quasiparticles, as well as by the anisotropy imposed by the Cu-O lattice. Figure 2 demonstrates the high sensitivity of the impurity state intensity to the doping.
Similarly, the effect of changing the impurity level position by the particle-hole asymmetry (doping) has important consequences. In the particle-hole symmetric case, the impurity levels approach the chemical potential as the strength of the impurity increases. In the limit of an infinitely strong impurity (the “unitary” limit), the levels lie exactly at the chemical potential. For a finite doping, the position of the levels changes. In figure 3 we show how the impurity level position, $`\omega _0`$, changes as a function of doping for two different impurity strengths. Analytically, the change can be estimated from the non-self-consistent $`T`$-matrix approximation. One finds that for non-zero chemical potential $`\mu `$ (with $`\mu =0`$ at half filling), the impurity levels are shifted by an amount proportional to $`\mu /W`$, where $`W`$ is the bandwidth. Hence neglecting this effect can cause an error in the impurity strength estimate. In fact, in a doped superconductor the impurity levels can cross the chemical potential at a finite impurity strength. While this effect turns out to be not very important in the case of Ni impurities in BSCO, we believe that it is indeed relevant for Zn in BSCO. Unlike the Ni levels, the Zn impurity levels appear to be very close to the chemical potential. If we neglected the particle-hole asymmetry, this would suggest that the Zn impurity is in the unitary limit. However, inclusion of the asymmetry shift leads to a finite impurity strength, and implies a high sensitivity of the Zn level position to the doping. One of the characteristics of unitary impurity states is that their spectral weight tends to zero on the impurity site, with the maxima positioned on the nearest neighbors. In the experiment, on the contrary, the spectral weight is maximized on the impurity site. This suggests that neither Ni nor Zn is in the unitary limit. More data on the doping dependence of the position of the Zn level inside the gap would help to clarify how relevant the particle-hole asymmetry effect is.
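Schematically (suppressing the Nambu matrix structure, and quoting the standard construction only to fix notation), the non-self-consistent $`T`$-matrix for a single on-site potential is

$$T(\omega )=\frac{V_{imp}}{1-V_{imp}g_0(\omega )},\qquad g_0(\omega )=\frac{1}{N_s}\sum _{\mathbf{k}}G_0(\mathbf{k},\omega ),$$

with $`N_s`$ the number of lattice sites; the intra-gap resonance sits near the zero of $`1-V_{imp}\mathrm{Re}\,g_0(\omega )`$. A nonzero $`\mu `$ adds a particle-hole-odd piece to $`\mathrm{Re}\,g_0`$, which displaces the resonance in line with the $`\mu /W`$ estimate quoted above.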
In our quantitative analysis we focus on the case of a Ni impurity in BSCO. The simple impurity interaction model of Eq. (2) seems not to describe the Zn case properly. A possible reason is that the effect of Zn is not fully local, and may include, for instance, modification of the hopping parameters in its vicinity and interactions with other bands present in the Cu-O plane. Accounting for such effects would require a number of extra fitting parameters, which would reduce the credibility of the obtained results. Hence, we restrict our attention to the Ni impurities, where the only fitting parameter needed is the local impurity strength. We find that for an attractive impurity with a strength around $`V_{imp}=3t=900`$ meV and average fillings of about 0.85 electrons per site ($`15\%`$ doping), the impurity levels are situated within the gap with the intensity distribution that corresponds to the experimental pattern around the Ni impurity (figure 1). By including a weak spin part of the impurity interaction, $`S_{imp}=0.2t=60`$ meV, we reproduce the fine energy splitting of the impurity peaks also observed in the experiment of Pan et al. . The site-dependent spectral intensities are shown in figure 4.
That a spin part should be present in the interaction follows from the atomic structure of Ni<sup>2+</sup>, which substitutes for Cu in the copper-oxygen layer and is believed to have spin S = 1. The strength of the coupling between the conduction electrons and the Ni spin is extremely hard to determine from first principles. Close analysis of the Ni impurity states in the superconductor enabled us to extract the approximate strengths of both the potential and the spin coupling between the Ni impurity and the electrons in BSCO.
Based on our theoretical analysis we can make the following predictions for the properties of Ni impurities in BSCO: (1) For lower-doped BSCO samples, the Ni-induced peaks should shift closer to the chemical potential, and the patterns should change according to Fig. 2. (2) In the presence of an in-plane magnetic field, there should be a Zeeman splitting of the peaks on the scale of $`0.1meV`$/Tesla. For some impurities the Zeeman splitting will enhance, and for others suppress, the intrinsic peak splitting due to the impurity spin (this depends on the relative alignment of the impurity spin and the external magnetic field). (3) For an impurity of similar potential strength but with a larger value of the spin (Mn), the peak splitting in zero magnetic field should increase.
In conclusion, we find that a simple effective model describes well the rich physics of the STM images near the Ni site. The model we adopted here is the simplest effective model of the real material, in which only the on-site impurity effects are considered. As such, it does not address many aspects of the impurity influence on the electronic states of the host material. Surprisingly, even such a simple model exhibits a very rich set of phenomena as a function of doping and impurity strength. We find that to explain the experimental data we need to include both the non-magnetic scattering of carriers from the Ni site and the spin interaction between the carriers and the impurity spin. The most striking feature observed in the experiment — the rotation of the “impurity cross” as a function of bias — appears to be a universal feature of the theoretical model. This rotation is a manifestation of the quantum-mechanical nature of the quasiparticles in the superconducting state, and a consequence of their unique particle-hole composition.
# Free energy of an SU(2) monopole-antimonopole pair.
## 1 Introduction.
It is well known that some Higgs theories with non-Abelian gauge group admit stable monopole solutions . In certain cases, most notably in grand unified theories, the residual unbroken gauge group is non-Abelian. It is then particularly interesting to study the interaction between two monopoles, or between a monopole and an antimonopole, induced by the quantum fluctuations of the unbroken gauge group. Beyond the relevance that these interactions may have for the original theory, they can help clarify the low-energy properties of the residual theory itself: from the point of view of the low-energy theory, monopoles are point-like external sources of non-Abelian gauge field, so they act as non-trivial probes of the strong-coupling dynamics. Indeed, already some time ago ’t Hooft and Mandelstam proposed that condensation of magnetic monopoles could be responsible for the confinement mechanism . If the vacuum state of a non-Abelian gauge theory is characterized by the presence of a monopole condensate, this will screen the monopole-antimonopole interaction, which should therefore exhibit a Yukawa-like behavior. If there is no condensate whatsoever, one should expect instead a Coulombic interaction between the monopoles, as is the case in the classical SU(2) theory. Finally, in a theory characterized by a condensation of electric charges, the monopole-antimonopole interaction energy should increase linearly with the separation. While a substantial amount of work has already been done to understand the role of monopole condensation in the confinement mechanism , to the best of our knowledge a precise determination of the monopole-antimonopole interaction potential in a quantum non-Abelian theory is still lacking. In this paper we plan to fill this void, presenting a numerical calculation of the monopole-antimonopole potential in the $`SU(2)`$ theory. Our results show that the monopole-antimonopole interaction potential is screened, buttressing the conjecture of a monopole condensate. We also study the monopole-antimonopole system in the high-temperature, deconfined phase of SU(2). The interaction still exhibits screening, which can now be ascribed to a magnetic mass, and we investigate the temperature dependence of this mass.
## 2 Monopole-antimonopole configuration and calculation of the free energy.
The procedure for introducing $`SU(N)`$ monopole sources on the lattice was devised by Ukawa, Windey and Guth and by Srednicki and Susskind , who built on earlier seminal results by ’t Hooft , Mack and Petkova and Yaffe . In this paper we will follow the method of Ref. . In three dimensions, an external monopole-antimonopole pair can be introduced by “twisting” the plaquettes traversed by a string joining the monopole and the antimonopole (see Fig. 1), i.e. by changing the coupling constant of these plaquettes from $`\beta `$ to $`z_n\beta `$, where $`z_n=\mathrm{exp}(2\pi in/N)`$ is an element of the center of the group. (The use of a string of plaquettes with modified couplings to study the disorder was also advocated by Groeneveld, Jurkiewicz and Korthals Altes .) The location of the string is unphysical: it can be changed by redefining the link variables on the plaquettes traversed by the string according to $`U\to z_n^{-1}U`$, as illustrated in Fig. 2. The fact that $`z_n`$ is an element of the center of $`SU(N)`$ guarantees that the above redefinition is a legitimate change of variables. On the other hand, the position of the cubes that terminate the string cannot be changed. Those cubes contain two external monopoles of charge $`z_n`$ and $`z_n^{-1}`$, respectively (or, equivalently, a monopole of charge $`z_n`$ and its antimonopole). In the $`SU(2)`$ theory we consider in this paper, the only non-trivial element of the center of the group is $`z=-1`$, and thus monopole and antimonopole coincide.
In four dimensions, a static monopole-antimonopole pair is induced by replicating the string on all time slices. The insertion of a monopole-antimonopole pair can be reinterpreted in terms of the electric flux operator, and several investigations have been devoted to the study of such an operator (see for instance Refs. ). However, as we already stated, accurate numerical information on the free energy of a monopole-antimonopole pair has never been obtained.
The numerical calculation of a free energy by stochastic simulation can be quite a challenging problem, since the free energy is related to the partition function, and the partition function, being the normalizing factor of the simulation, cannot be measured directly. In our case, we need the ratio of the partition function of a system containing the monopole-antimonopole pair to the partition function of the free system. In order to calculate this quantity we define a generalized system where the coupling constant $`\beta `$ has been replaced by $`\beta '`$ on all the plaquettes traversed by the string. To be precise, if we place the monopole at the spatial point with coordinates $`x_0+\frac{1}{2},y_0+\frac{1}{2},z_0+\frac{1}{2}`$ ($`x_0,y_0,z_0`$ integers) and the antimonopole at $`x_0+\frac{1}{2},y_0+\frac{1}{2},z_0+d+\frac{1}{2}`$, the plaquettes with coupling $`\beta '`$ will be all the $`x`$-$`y`$ plaquettes with lower corner at $`x_0,y_0,z,t`$, where $`z_0+1\le z\le z_0+d`$ and $`0\le t\le N_t-1`$. $`N_x,N_y,N_z,N_t`$ denote the extents of the lattice in the four dimensions. The $`\frac{1}{2}`$ offsets in the coordinates of the monopole and antimonopole are due to the fact that we consider them located at the centers of two spatial cubes, namely those with lowest corners at $`x_0,y_0,z_0`$ and $`x_0,y_0,z_0+d`$, respectively. For the monopole-antimonopole configuration we use periodic boundary conditions. We will also consider single-monopole configurations; for these we use periodic boundary conditions only in time and in the two spatial directions orthogonal to the string, while we choose free boundary conditions in the spatial direction parallel to the string. We let the string run from the mid-point of the lattice to the free boundary. This places a single monopole in the middle of the lattice and emulates as well as possible a configuration where the antimonopole has been removed to infinity.
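As a concrete illustration of this bookkeeping, the sketch below enumerates the plaquettes whose coupling is modified, exactly as specified above; the function names are ours, not taken from the original code.

```python
def string_plaquettes(x0, y0, z0, d, Nt):
    """All x-y plaquettes with lower corner at (x0, y0, z, t) for
    z0+1 <= z <= z0+d and 0 <= t <= Nt-1; these are the plaquettes
    whose coupling is changed from beta to beta'."""
    return [(x0, y0, z, t)
            for z in range(z0 + 1, z0 + d + 1)
            for t in range(Nt)]

def coupling(plaq, beta, beta_prime, M):
    """Plaquette coupling entering the action of Eq. 1."""
    return beta_prime if plaq in M else beta

# Example: monopole at (x0+1/2, y0+1/2, z0+1/2), antimonopole d sites away.
M = set(string_plaquettes(x0=4, y0=4, z0=4, d=3, Nt=6))
```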
Let us denote by $`M`$ the set of all the plaquettes with modified coupling constant. The action of the modified system is then
$$S(\beta ,\beta ')=-\frac{1}{2}\left(\beta \sum_{P\notin M}\text{Tr}(U_P)+\beta '\sum_{P\in M}\text{Tr}(U_P)\right)$$
(1)
and the corresponding partition function is
$$Z(\beta ,\beta ')=\sum_{\mathcal{C}}e^{-S(\beta ,\beta ')}$$
(2)
where $`\mathcal{C}`$ denotes the set of all configurations. For $`\beta '=\beta `$ our generalized system obviously reduces to a homogeneous $`SU(2)`$ lattice gauge system with coupling constant $`\beta `$, whereas for $`\beta '=-\beta `$ it becomes an $`SU(2)`$ model with a static monopole-antimonopole pair. In this latter case, as discussed above, the partition function is independent of the actual location of the string and depends only on the distance $`d`$ between monopole and antimonopole (beyond depending, of course, on $`\beta `$ and the extent of the lattice). The change of free energy induced by the presence of the pair is thus
$$F=-T\,\mathrm{log}\frac{Z(\beta ,-\beta )}{Z(\beta ,\beta )}$$
(3)
where $`T=1/(N_ta)`$ is the temperature of the system ($`a`$ denotes, as usual, the lattice spacing). This is the quantity we want to calculate. For $`\beta '`$ not equal to $`\beta `$ or $`-\beta `$, $`Z(\beta ,\beta ')`$ does depend on the location of the string. Nevertheless, Eq. 2 continues to define the partition function of a statistical system, which we will use for our calculation of $`F`$. The basic idea is that we perform Monte Carlo simulations of the modified system for a set of values of $`\beta '`$ ranging from $`\beta `$ to $`-\beta `$ which is sufficiently dense that for all steps in $`\beta '`$ we can reliably estimate the change induced in $`\mathrm{log}Z`$. Conceptually this corresponds to the observation that for an infinitesimal change in $`\beta '`$ the change in free energy will be
$$\frac{\partial F_{\beta '}}{\partial \beta '}=-\frac{T}{2}\left\langle \sum_{P\in M}\text{Tr}(U_P)\right\rangle _{\beta '}$$
(4)
and thus the free energy of the monopole pair can be computed as
$$F=\int _\beta ^{-\beta }d\beta '\,\frac{\partial F_{\beta '}}{\partial \beta '}=-\frac{T}{2}\int _\beta ^{-\beta }d\beta '\left\langle \sum_{P\in M}\text{Tr}(U_P)\right\rangle _{\beta '}$$
(5)
where the integrand in the r.h.s. is proportional to an observable, namely the energy
$$E=-\frac{1}{2}\left\langle \sum_{P\in M}\text{Tr}(U_P)\right\rangle _{\beta '}$$
(6)
of the strip of plaquettes traversed by the string. However, implementing Eq. 5 directly would be inefficient, due to the large number of subdivisions that the numerical integration would require for accuracy. Rather, we calculate $`F`$ following the Ferrenberg-Swendsen multi-histogram method .
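Even so, the thermodynamic-integration estimate of Eq. 5 remains useful as a cross-check (cf. Sect. 3); given the measured string energies it is essentially a one-liner. A minimal sketch, with our own function name:

```python
import numpy as np

def free_energy_by_integration(beta_primes, E_means, T):
    """Eq. 5 by the trapezoidal rule: F = T * int_beta^{-beta} E(beta') dbeta',
    where E(beta') = -(1/2)<sum_{P in M} Tr U_P> is the measured string
    energy and beta_primes runs from beta down to -beta. Sketch only;
    in the paper this serves merely as a check of the multi-histogram result."""
    return T * np.trapz(E_means, beta_primes)
```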
Following Ref. , we choose a set of $`N+1`$ values $`\{\beta '_i\}`$ ranging from $`\beta '_0=\beta `$ to $`\beta '_N=-\beta `$. In the actual calculation we chose them equally spaced, although this is not necessary. The number $`N`$ is determined by a criterion that will be explained below. For each value $`\beta '_i`$ we perform a Monte Carlo simulation of the corresponding system and record the values of the energy $`E`$ of the plaquettes traversed by the string (see Eq. 6) in a histogram $`h_i(E)`$. More precisely, rather than simply accumulating the entries in the histograms, we distributed each measured energy value over the four neighboring vertices according to the weights of a cubic interpolation formula. This substantially increases the accuracy when the values in the histograms are subsequently used to approximate integrals over the density of states: $`\int \rho (E)f(E)\,dE\approx Z\sum_E(h(E)/n)f(E)`$, $`n`$ being the total number of entries in the histogram. Indeed, by using this procedure we were able to reduce the number of bins to $`200`$ without noticeable discretization effects.
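The cubic deposition of each measurement onto four neighboring bin vertices can be sketched as follows. The paper does not specify which cubic formula it uses, so the Catmull-Rom weights below are one standard choice, offered as an assumption rather than as the authors' exact prescription.

```python
import numpy as np

def deposit(hist, E, E_min, dE):
    """Add one measured energy E to histogram `hist`, spreading the unit
    weight over the four neighboring bin vertices with cubic
    (Catmull-Rom) weights; the weights sum to one."""
    x = (E - E_min) / dE
    i = int(np.floor(x))
    u = x - i
    w = np.array([(-u**3 + 2*u**2 - u) / 2,
                  (3*u**3 - 5*u**2 + 2) / 2,
                  (-3*u**3 + 4*u**2 + u) / 2,
                  (u**3 - u**2) / 2])
    for j, wj in zip(range(i - 1, i + 3), w):
        if 0 <= j < len(hist):
            hist[j] += wj
```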
From each separate histogram one can obtain an independent estimate of the density of states
$$\frac{\rho _i(E)}{Z_i}=\frac{h_i(E)}{n_i}\,e^{\beta '_iE}$$
(7)
where $`Z_i`$ is the partition function for the specific value $`\beta '_i`$ and $`n_i`$ is the total number of histogram entries. Of course, the normalizing factors $`Z_i`$ are still not known; indeed, the entire goal of the computation is to calculate the relative magnitude of the partition functions $`Z_i`$. Starting from Eqs. 7, one can however obtain the $`Z_i`$ up to a common constant of proportionality by a self-consistent procedure. We start from the crude approximation that all $`Z_i`$ are equal; since we are only interested in ratios of partition functions, we can set this common value to 1. We then combine the estimates of the density of states given by Eqs. 7 into a first approximation
$$\rho (E)=\frac{1}{N}\sum_i\rho _i(E)$$
(8)
From this value of the density of states we can now obtain a better approximation of the partition functions
$$Z_i=\sum_E\rho (E)\,e^{-\beta '_iE}$$
(9)
Equations (7-9) can now be iterated to obtain the $`Z_i`$ up to a multiplicative factor. If the iterations converge, the final results provide a self-consistent set of values for the partition functions $`Z_i`$, up to a common constant. In particular, we obtain $`\frac{Z(\beta ,-\beta )}{Z(\beta ,\beta )}=\frac{Z_N}{Z_0}`$ and hence the free energy of the monopole-antimonopole pair.
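A minimal sketch of this self-consistent loop (Eqs. 7-9) is given below. Variable names are ours, and a production implementation should work with logarithms to avoid overflow of the exponentials.

```python
import numpy as np

def multihistogram(hists, counts, beta_primes, E_grid, n_iter=100):
    """Self-consistent iteration of Eqs. 7-9 (sketch). `hists[i]` holds
    the histogram recorded at beta'_i on the common energy grid `E_grid`,
    and `counts[i]` the number of entries n_i. Returns the partition
    functions Z_i up to a common factor."""
    nH = len(beta_primes)
    Z = np.ones(nH)                        # crude start: all Z_i equal
    for _ in range(n_iter):
        # Eqs. 7-8: combined estimate of the density of states
        rho = np.zeros_like(E_grid, dtype=float)
        for i in range(nH):
            rho += Z[i] * (hists[i] / counts[i]) * np.exp(beta_primes[i] * E_grid)
        rho /= nH
        # Eq. 9: improved partition functions
        Z_new = np.array([np.sum(rho * np.exp(-bp * E_grid)) for bp in beta_primes])
        Z_new /= Z_new[0]                  # fix the arbitrary overall constant
        if np.allclose(Z_new, Z, rtol=1e-12):
            Z = Z_new
            break
        Z = Z_new
    return Z

# The pair free energy (Eq. 3) then follows as F = -T * np.log(Z[-1] / Z[0]).
```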
A necessary condition for the above procedure to work well is that the histograms corresponding to adjacent values of $`\beta '`$ have sufficient overlap. In fig. 3 we plot the histograms obtained in a typical run; they indeed exhibit substantial overlap.
In our calculation we found that the self-consistent procedure outlined above converged rapidly for all the values of lattice size, coupling constant and monopole-antimonopole separation we considered. We checked that the results for the free energy obtained with the multi-histogram method are consistent, within the error of the latter procedure, with the values one can obtain from the numerical integration of $`\partial F_{\beta '}/\partial \beta '`$ (cf. Eq. 5).
## 3 Computational details and numerical results.
We simulated a pure $`SU(2)`$ system with a combined overrelaxation and multihit-Metropolis algorithm, tuned for minimal autocorrelation time. We studied systems with coupling constants varying between $`\beta =2.476`$ and $`\beta =2.82`$, spatial sizes ranging from $`16^2\times 32`$ to $`32^2\times 64`$ and Euclidean temporal extents ranging from $`16`$ down to $`2`$. The corresponding temperatures span the deconfinement phase transition, which occurs at $`N_t\approx 10`$ for $`\beta =2.6`$ and between $`N_t=7`$ and $`N_t=8`$ for $`\beta =2.476`$ and $`\beta =2.5`$ . Full details of the lattice sizes and couplings used in our simulations are given in table 1. For each lattice size, value of $`\beta `$ and monopole separation, we performed a sequence of simulations, starting with $`\beta '=\beta `$ and decreasing $`\beta '`$ in steps (cf. Sect. 2) to its final value $`\beta '=-\beta `$. Specifically, we performed $`5000`$ thermalization steps at $`\beta '=\beta `$ followed by measurements separated by $`50`$ updates. We then decreased $`\beta '`$, performed another $`500`$ thermalization steps followed by the same number of measurements, again separated by $`50`$ updates, and so on, until completion of the measurements at $`\beta '=-\beta `$. For each measurement, as a variance-reduction technique, we performed $`384`$ upgrades of the links in the plaquettes belonging to the flux tube $`M`$, while keeping all other link variables fixed. In this way we obtained $`384`$ histogram entries per configuration.
The number of measurements in each individual simulation, as well as the number of steps in $`\beta '`$, depended on the monopole separation and lattice size and are given in tables 2 and 3. In order to estimate the error we proceeded as follows. For all data we performed a standard jackknife evaluation of the error based on 10 subsamples. The data are, however, highly correlated, and this leads to an underestimate of the error. An error analysis based on the full correlation matrix would have been computationally too costly. Instead, we performed seven totally independent calculations of the free energy for a few data points and calculated the error from the variance of the results. This came out approximately four times larger than the corresponding jackknife estimate. Thus we multiplied all of the jackknife errors by a factor of four. While this universal rescaling can only produce an approximation to the true errors, because the correlation in the data will generally vary from data point to data point, we feel that it gives the most realistic estimate of the actual errors that can be obtained without embarking on an error analysis of prohibitive cost. Our code was written in Fortran 90 and was run on the SGI-Origin 2000 at the Boston University Center for Computational Science. It performs at $`140`$ MFlops on a single 190MHz R10000 CPU and scales well up to 64 CPUs. The total CPU time needed for the simulations was $`3\times 10^4`$ CPU hours.
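The error recipe can be condensed into a few lines. The sketch below implements the blocked jackknife together with the overall rescaling by a factor of four described above; the factor is an empirical property of these particular data, not a general rule.

```python
import numpy as np

def jackknife_error(samples, n_blocks=10, rescale=4.0):
    """Blocked jackknife error of the mean, multiplied by the empirical
    rescaling factor suggested by the comparison with independent runs."""
    blocks = np.array_split(np.asarray(samples, dtype=float), n_blocks)
    leave_one_out = np.array([
        np.mean(np.concatenate(blocks[:i] + blocks[i + 1:]))
        for i in range(n_blocks)])
    err = np.sqrt((n_blocks - 1) / n_blocks *
                  np.sum((leave_one_out - leave_one_out.mean())**2))
    return np.mean(np.concatenate(blocks)), rescale * err
```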
We measured the free energy of the monopole-antimonopole pair for several values of lattice size, coupling constant $`\beta `$ and monopole-antimonopole separation $`d`$. Tables 4 and 5 list all our results.
Typical results are illustrated in Figure 4, where we plot the data for the monopole-antimonopole free energy obtained at $`\beta =2.6`$. The flattening of the potential at large separation gives evidence for the screening of the interaction. The lines in the figure reproduce fits to a Yukawa potential
$$F(r)=F_0-c\,\frac{e^{-mr}}{r}$$
(10)
The fits give clear indications for a non-vanishing screening mass (see the tables for details) and rule out a Coulombic behavior of the potential.
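To make the fitting procedure explicit, a minimal sketch of a weighted least-squares fit of Eq. 10 is given below. The data points and errors are placeholders for illustration, not our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def yukawa(r, F0, c, m):
    """Screened (Yukawa) potential of Eq. 10."""
    return F0 - c * np.exp(-m * r) / r

# Illustrative usage on mock (d, F(d)) points with errors sigma:
d = np.array([1.0, 2.0, 3.0, 4.0])
F = np.array([1.10, 1.48, 1.60, 1.64])        # placeholder numbers
sigma = np.array([0.02, 0.03, 0.04, 0.05])    # placeholder errors
popt, pcov = curve_fit(yukawa, d, F, sigma=sigma, absolute_sigma=True,
                       p0=[1.7, 0.6, 0.5])
F0, c, m = popt
print(f"screening mass m = {m:.3f} +/- {np.sqrt(pcov[2, 2]):.3f} (lattice units)")
```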
We fit all of our data to a Yukawa potential, as in Eq. 10, using the points at separation $`d=1`$ to $`4`$. In the confined phase it is also possible to perform meaningful fits through the points at separation $`2`$ to $`6`$, leaving out the point at $`d=1`$, where one expects the value of the potential to be most affected by lattice distortions. For higher temperatures the rapid flattening of the potential makes the fits more sensitive to the removal of the first point. An alternative procedure consists in fitting the data to a lattice Yukawa potential, as suggested in this context in Ref. . (We are grateful to M. Chernodub and M. Polikarpov for bringing this point to our attention.) The results of the fits for the screening masses are collected in Table 6; unprimed (primed) quantities refer to the values obtained from fits to a continuum (lattice) potential of the data with $`d`$ ranging from $`1`$ to $`4`$. For the conversion into units of the string tension $`\sqrt{\sigma }`$ we used $`a\sqrt{\sigma }=0.1989,0.1834,0.1326,0.0663`$ for $`\beta =2.476,2.5,2.6,2.82`$, respectively; we took the values for $`\beta =2.5,2.6`$ from Ref. and calculated the other values from the known scaling behavior of the theory. The fact that all fits produce a non-vanishing value for the screening mass is a clear indication that the data are not consistent with a Coulombic monopole-antimonopole interaction. To reinforce this point, we attempted a direct Coulombic fit to the data for $`\beta =2.6`$, $`N_t=16`$ and $`6`$, and found fit qualities $`Q\approx 10^{-62}`$ for $`N_t=16`$ and $`Q\approx 10^{-68}`$ for $`N_t=6`$, definitely ruling out a Coulombic behavior of the potential in both phases.
In table 7 we compare our values for the screening mass at $`\beta =2.5`$ and $`\beta =2.6`$ with the values for the lightest glueball mass in the SU(2) theory . We reproduce the results from fits with the continuum Yukawa potential to the data points at separation $`1`$ through $`6`$ ($`m_1`$), from fits, again with the continuum potential, where we disregarded the points at separation $`1`$, which carry the largest discretization error ($`m_2`$), and from fits to the points at separation $`1`$ through $`6`$ with the lattice Yukawa potential ($`m^{}`$). The values $`m_2`$ and $`m^{}`$ are consistent and are approximately twice as large as the mass of the lightest glueball, indicating that the predominant coupling of the monopoles is to glueball excitations. Our results do not rule out that the lightest glueball may dominate the screening at long distances, but this is not visible within the range of lattice separations ($`a`$ to $`5a`$) for which we can obtain sufficiently accurate results.
In figure 5 we plot our results for the screening mass $`m`$ vs. temperature. At high temperature $`m`$ should be identified with the magnetic screening mass. Our results are consistent with data obtained by Stack by another method. They appear to be somewhat larger than the values for $`m`$ obtained, by yet another technique, by Heller, Karsch and Rank . The authors of Ref. quote results, however, for systems with larger $`N_t`$ and higher $`\beta `$ than we generally used in our investigation. The closest comparison can be made between our result for $`\beta =2.82`$, namely $`m/T=2.09(0.69)`$, and the results in Ref. : $`m/T=2.01(0.29),1.24(0.04)`$ for $`\beta =2.74,2.88`$, respectively. These latter sets of results are reasonably consistent. It is interesting that, within the accuracy of our data, there is no indication for a discontinuous behavior of $`m`$ at the deconfinement transition. The apparently continuous behavior of $`m`$ should not come as a surprise, though. We should remember that the quantity we study is based on the insertion into the SU(2) theory of an operator (the sheet of plaquettes with modified coupling joining the world lines of the monopoles) which is dual to the $`xy`$ Wilson loops . Space-space Wilson loops exhibit an area-law behavior both below and above the deconfinement transition and, correspondingly, one would expect that the free energy of a monopole-antimonopole pair, whose propagation spans a dual space-time surface, should exhibit screened behavior on both sides of the phase transition. The discontinuity at the phase transition occurs in the behavior of space-time Wilson loops or in the correlation of timelike Polyakov loops. Accordingly, we would expect a discontinuity in the partition function of a system with the monopole-antimonopole pair propagating in a space direction. In order to test this idea, we also measured the partition function of a system where we changed the sign of the $`ty`$ plaquettes crossing a string joining a monopole and an antimonopole separated by $`d`$ lattice sites in the $`z`$ direction and propagating in the $`x`$ direction. We performed the calculation at $`\beta =2.6`$ with a lattice of size $`N_x=N_y=20,N_z=40,N_t=6`$. While the physical meaning of the “free energy” $`F=-(1/N_x)\mathrm{log}[Z(\beta ,-\beta )/Z(\beta ,\beta )]`$ becomes less obvious (it would be the free energy of a low-temperature system confined in a periodic box of width $`N_x`$), our results, listed in table 8 and illustrated in Fig. 6, show that above the phase transition this quantity does exhibit a confined behavior. The dashed line in the figure reproduces a fit of a Coulomb-plus-linear form $`a-b/x+cx`$ with parameters $`a=2.107(2),b=0.658(3),c=0.0167(3)`$. It is interesting to observe that the fit gives $`\sqrt{c}=0.1291(11)`$, while the string tension on a $`20^4`$ lattice at the same value of $`\beta `$ is $`a\sqrt{\sigma }=0.1326(30)`$.
It is interesting to compare our results with the solutions of the classical theory. Since, to the best of our knowledge, there is no analytical solution to the classical two-monopole problem, we have investigated its properties numerically: we found the minimal-energy solutions on a lattice and checked their behavior. We put the classical system on a 3-dimensional lattice with free boundary conditions. (We used free boundary conditions because the calculation itself shows that the potential has a long-range behavior, and with free boundary conditions we can reduce finite-size effects.) We started both from a random non-Abelian configuration and from a random Abelian configuration and performed iterative local minimization to relax the system to its lowest-energy state. There were no surprises: for both initial conditions the system relaxed to a minimal-energy state of the same energy, and the non-Abelian solution, after going to a maximally Abelian gauge, turned out to be entirely Abelian in nature. The interaction potential was well fit by the Coulomb form $`V=-1/(4r)`$. (One expects a coefficient $`1/4`$ in the Coulomb potential because the total magnetic flux from the monopole is $`\mathrm{\Phi }=\pi `$. This value has been numerically confirmed in our calculation.)
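The iterative local minimization is particularly simple for SU(2), because any real linear combination of SU(2) matrices is proportional to an SU(2) matrix, so each link can be set exactly to its energy-minimizing value against the sum of its staples. A sketch of this update rule (our notation; the string enters through sign-flipped staples):

```python
import numpy as np

def relax_link(staple_sum):
    """Exact single-link minimization of the classical SU(2) plaquette
    energy: with A the sum of the staples attached to the link (staples
    of string plaquettes entering with a flipped sign), Re Tr(U A) is
    maximized by U = A^dagger / sqrt(det A), since A is proportional to
    an SU(2) matrix and det A is real and non-negative. Sweeping this
    update over all links relaxes the lattice toward a minimal-energy
    configuration."""
    A = np.asarray(staple_sum, dtype=complex)
    k = np.sqrt(np.linalg.det(A))
    return A.conj().T / k
```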
In Figure 7 we compare the monopole-antimonopole potentials in the confined phase of the quantum system and in the classical system. The comparison shows a clear difference between the classical and quantum cases and reinforces the conclusion that quantum fluctuations introduce a screening of magnetic monopoles. We further illustrate this point by displaying in Fig. 8 snapshots of a typical quantum configuration (from a simulation at $`\beta =2.476`$ and $`N_t=12`$) and of the classical solution. In the pictures, vectors show the magnetic field in a maximal Abelian projection and the size of the dots measures the non-Abelian character of the configuration at that point (it is proportional to the square of the components orthogonal to the Abelian projection). In the classical case the location of the monopole-antimonopole pair is evident; in the quantum case it is marked by the crosses in the middle of the second picture. Had we not marked the location of monopole and antimonopole in the quantum case, the reader would be hard pressed to find where they are. The marked difference between the classical and quantum configurations gives a vivid illustration of how the quantum fluctuations of the gluon field screen external monopole sources, a mechanism which is most likely also responsible for confinement in the low-temperature phase and for the emergence of the magnetic mass in the high-temperature phase.
## 4 Conclusions.
We have measured the free energy of a monopole-antimonopole pair in pure SU(2) gauge theory at finite temperature. We find that the interaction is screened in both the confined and the deconfined phase. The mass of the object responsible for the screening at low temperature is approximately twice the established value for the lightest glueball, indicating a predominant coupling to glueball excitations. There is no noticeable discontinuity in the screening mass at the deconfinement transition, but in the deconfined phase we clearly see an increase of the screening mass with temperature. Our results support the hypothesis of the existence of a monopole condensate in the vacuum of the SU(2) theory and provide evidence that some glueball excitation could serve as a “dual photon” in the dual-superconductor hypothesis of quark confinement. Finally, we would like to observe that the method we have developed for the calculation of the monopole-antimonopole free energy is applicable to other models beyond the SU(2) theory considered in this paper. While moderately demanding in computer resources, it appears capable of producing accurate numerical results for the monopole-antimonopole interaction potential. Thus it could be used to shed light on the dynamics of other interesting systems that are expected to exhibit the formation of electric or magnetic condensates in their vacuum states.
Acknowledgments. We gratefully acknowledge conversations and exchanges of correspondence with Maxim Chernodub, Urs Heller, Chris Korthals Altes, Christian Lang, Peter Petreczky, Misha Polikarpov, John Stack, Matthew Strassler and Terry Tomboulis. We are also grateful to Philippe de Forcrand for bringing to our attention a discrepancy between his own results for the monopole-antimonopole free energy and the values we presented in an earlier version of this paper, which helped us correct a programming error in the implementation of the multi-histogram method, and for correspondence regarding the free energy of the space-like ’t Hooft loop. This research was supported in part under DOE grant DE-FG02-91ER40676 and by the U.S. Civilian Research and Development Foundation for Independent States of FSU (CRDF) award RP1-187.