no-problem/9904/hep-ph9904400.html | ar5iv | text
# Nonperturbative contribution to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi and Gribov-Levin-Ryskin equation
## Abstract
By studying the nonperturbative contribution to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) and Gribov-Levin-Ryskin (GLR) equations, it is found that (i) the nonperturbative contribution suppresses the evolution rate in the low-$`Q^2`$, small-x region; (ii) the nonperturbative contribution weakens the shadowing effect. The method in this paper suggests a smooth transition from the low-$`Q^2`$ (“soft”) region, where the nonperturbative contribution dominates, to the large-$`Q^2`$ (“hard”) region, where the perturbative contribution dominates and the nonperturbative contribution can be neglected.
PACS numbers: 12.38.Aw, 13.60.Hb
The properties of the parton distribution in the small-x region (x is the value of the Bjorken variable) have recently become an important subject \[1-5\]. Recent measurements of the structure functions for deep-inelastic ep scattering at HERA discovered their dramatic rise as x decreases from $`10^{-2}`$ to $`10^{-4}`$. The predictions of Glück, Reya and Vogt (GRV), obtained by using the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equation at very low $`Q^2`$ ($`Q^2`$ is the negative of the square of the four-momentum transferred by the lepton to the nucleon), are in broad agreement with this result. However, the GRV model fails at low $`Q^2`$ quantitatively; that is to say, its evolution rate is faster than that of the experiments.¹ (¹ Ref. shows that the value of $`F_2`$ given by the GRV model is lower than that of the experiments at $`Q^2=0.4GeV^2`$ while it is higher at $`Q^2=6.5GeV^2`$.) How to determine the structure function in the low-$`Q^2`$ region then becomes a challenging problem. Another important question is whether the shadowing effect, which the Gribov-Levin-Ryskin (GLR) equation describes, can be observed by the current experiments at HERA.
The purpose of this letter is to study the DGLAP and GLR equations at low $`Q^2`$ by considering the nonperturbative contribution. It will be shown that (i) the nonperturbative contribution suppresses the evolution rate in the low-$`Q^2`$, small-x region; (ii) the nonperturbative contribution weakens the shadowing effect. The nonperturbative contribution is therefore very important in the low-$`Q^2`$ region.
The DGLAP equation for the gluon distribution in the small-x region in the double leading logarithmic approximation (DLLA) is given by
$$\frac{\partial ^2xg(x,Q^2)}{\partial \mathrm{ln}(1/x)\partial \mathrm{ln}(Q^2)}=\frac{3\alpha _s(Q^2)}{\pi }xg(x,Q^2)$$
(1)
By considering the shadowing effect, the DGLAP equation can be modified into the following form (called the GLR equation):
$$\frac{\partial ^2xg(x,Q^2)}{\partial \mathrm{ln}(1/x)\partial \mathrm{ln}(Q^2)}=\frac{3\alpha _s(Q^2)}{\pi }xg(x,Q^2)-\frac{81\alpha _s^2(Q^2)}{16R^2Q^2}[xg(x,Q^2)]^2$$
(2)
In previous studies, the perturbative QCD effective coupling (the leading-order coupling) $`\alpha _s(Q^2)`$ was used to study the equation, where
$$\alpha _s(Q^2)=\frac{12\pi }{(33-2n_f)\mathrm{log}(Q^2/\mathrm{\Lambda }_{QCD}^2)}$$
(3)
By using formula (3), Eq. (2) can be cast in the form:
$$\partial _y\partial _tG(y,t)=cG(y,t)-\lambda \mathrm{exp}[-t-\mathrm{exp}(t)]G^2(y,t)$$
(4)
where $`y=\mathrm{ln}(1/x)`$, $`t=\mathrm{ln}[\mathrm{ln}(Q^2/\mathrm{\Lambda }_{QCD}^2)]`$, $`G(y,t)=xg(x,Q^2)`$, $`c=12/(11-2n_f/3)`$ with $`n_f`$ the number of quark flavors, and $`\lambda =9\pi ^2c^2/16R^2\mathrm{\Lambda }_{QCD}^2`$. Equation (4) has been studied by many authors . Eskola, Qiu and Wang applied Eq. (4) to study the shadowing in heavy nuclei.
However, it should be noted that at large $`Q^2`$ formula (3), derived from perturbative QCD, is a good approximation, while at low $`Q^2`$ the nonperturbative contribution should be included. To avoid the ghost-pole problem in the behavior of the running coupling, Shirkov and Solovtsov obtained the QCD running coupling in the leading order as:
$$\alpha _{an}(Q^2)=\frac{12\pi }{(33-2n_f)}\left[\frac{1}{\mathrm{log}(Q^2/\mathrm{\Lambda }_{QCD}^2)}+\frac{1}{1-Q^2/\mathrm{\Lambda }_{QCD}^2}\right]$$
(5)
The second term on the right-hand side of Eq. (5) is clearly the nonperturbative contribution. It is noted that $`\alpha _{an}(Q^2=0GeV^2)=\frac{12\pi }{(33-2n_f)}`$ depends only on group factors and does not depend on $`\mathrm{\Lambda }_{QCD}`$. It can be found that at large $`Q^2`$ the running coupling $`\alpha _{an}(Q^2)`$ is dominated by the perturbative contribution and the nonperturbative contribution can be neglected, while at low $`Q^2`$ the nonperturbative contribution is very notable.
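As a quick numerical illustration (a minimal sketch, not part of the original text; it assumes $`n_f=3`$ and the value $`\mathrm{\Lambda }_{QCD}=0.2GeV`$ adopted in Eq. (12) below), formulas (3) and (5) can be compared directly:

```python
import math

LAMBDA2 = 0.2 ** 2   # Lambda_QCD^2 in GeV^2, the value adopted in Eq. (12)
NF = 3               # number of quark flavors (an assumption of this sketch)
B0 = 12 * math.pi / (33 - 2 * NF)

def alpha_pert(q2):
    """Leading-order perturbative coupling, formula (3)."""
    return B0 / math.log(q2 / LAMBDA2)

def alpha_an(q2):
    """Shirkov-Solovtsov analytic coupling, formula (5)."""
    return B0 * (1.0 / math.log(q2 / LAMBDA2) + 1.0 / (1.0 - q2 / LAMBDA2))

for q2 in (0.65, 4.0, 100.0):   # Q^2 in GeV^2
    print(f"Q^2 = {q2:6.2f} GeV^2: alpha_s = {alpha_pert(q2):.3f}, "
          f"alpha_an = {alpha_an(q2):.3f}")
```

With these inputs the sketch reproduces the values quoted near the end of the text ($`\alpha _s\approx 0.50`$ versus $`\alpha _{an}\approx 0.41`$ at $`Q^2=0.65GeV^2`$, and $`\approx 0.30`$ versus $`\approx 0.29`$ at $`Q^2=4GeV^2`$); for $`Q^2>\mathrm{\Lambda }_{QCD}^2`$ the nonperturbative term is negative, so $`\alpha _{an}(Q^2)<\alpha _s(Q^2)`$.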
By applying formula (5) to the DGLAP equation² (² The reason that formula (5) replaces formula (3) in the DGLAP equation will be given at the end of the text.), Eq. (1) can be written as:
$$\partial _y\partial _tG(y,t)=cG(y,t)\left[1+\frac{\mathrm{exp}(t)}{1-\mathrm{exp}(e^t)}\right]$$
(6)
Adopting the semi-classical approximation , which amounts to keeping only the first-order derivatives of the function $`z(y,t)=\mathrm{log}[G(y,t)]`$, Eq. (6) is rewritten as:
$$\partial _yz(y,t)\partial _tz(y,t)=c\left[1+\frac{\mathrm{exp}(t)}{1-\mathrm{exp}(e^t)}\right]$$
(7)
Eq. (7) can be solved by using the method of characteristics . Let
$$p=\partial _tz(y,t),q=\partial _yz(y,t)$$
(8)
Eq. (7) can then be written in the following general form:
$$F(p,q,t,y,z)=0$$
(9)
The characteristic differential equations of Eq. (9) have the following form:
$`{\displaystyle \frac{dt(\tau )}{d\tau }}=F_p,{\displaystyle \frac{dy(\tau )}{d\tau }}=F_q,{\displaystyle \frac{dz(\tau )}{d\tau }}=pF_p+qF_q,`$
$`{\displaystyle \frac{dp(\tau )}{d\tau }}=-(F_t+pF_z),{\displaystyle \frac{dq(\tau )}{d\tau }}=-(F_y+qF_z),`$ (10)
where $`\tau `$ is the “inner” time.
The derivatives $`F_p,F_q,F_t,F_y,F_z`$ are
$`F_p=q,F_q=p,F_y=0,F_z=0.`$
$`F_t=-ce^t[1-\mathrm{exp}(e^t)+\mathrm{exp}(e^t+t)]/[1-\mathrm{exp}(e^t)]^2`$ (11)
As in Ref. , the initial conditions used to solve Eqs. (10) are:
$`t_0=\mathrm{log}[\mathrm{log}(Q_0^2/\mathrm{\Lambda }_{QCD}^2)]=\mathrm{log}[\mathrm{log}(4GeV^2/\mathrm{\Lambda }_{QCD}^2)],`$
$`y_0=\mathrm{log}(1/x_0)=\mathrm{log}(100),p_0=c/\delta _{bare},z_0=\mathrm{log}(3.38),`$
$`q_0=c[1+{\displaystyle \frac{e^{t_0}}{1-\mathrm{exp}(e^{t_0})}}]/p_0,\mathrm{\Lambda }_{QCD}=0.2GeV,\delta _{bare}=0.5.`$ (12)
For a clear comparison of the results of Eqs. (10) with those of the DGLAP equation not including the nonperturbative contribution , we solve Eqs. (10) numerically by adopting the Runge-Kutta method . Fig. 1 shows the evolution path (y,t) (dashed line) obtained by solving Eqs. (10), compared with the evolution path (y,t) (solid line) obtained by solving the characteristic differential equations of the DGLAP equation without the nonperturbative contribution.
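For concreteness, here is a minimal sketch of this numerical procedure (ours, not the author's; it assumes $`n_f=3`$ and uses SciPy's Runge-Kutta integrator in place of whatever implementation was actually used). It integrates the characteristic system (10), with the derivatives (11), from the initial conditions (12):

```python
import numpy as np
from scipy.integrate import solve_ivp

nf = 3                                     # number of flavors (assumed)
c = 12.0 / (11.0 - 2.0 * nf / 3.0)         # = 4/3 for nf = 3
t0 = np.log(np.log(4.0 / 0.2 ** 2))        # Q0^2 = 4 GeV^2, Lambda_QCD = 0.2 GeV
y0 = np.log(100.0)                         # x0 = 0.01
z0 = np.log(3.38)
p0 = c / 0.5                               # delta_bare = 0.5
q0 = c * (1.0 + np.exp(t0) / (1.0 - np.exp(np.exp(t0)))) / p0

def rhs(tau, state):
    """Characteristic system (10) for F = pq - c[1 + e^t/(1 - exp(e^t))]."""
    t, y, z, p, q = state
    E = np.exp(np.exp(t))                  # exp(e^t); overflows for t >~ 6.5
    F_t = -c * np.exp(t) * (1.0 - E + E * np.exp(t)) / (1.0 - E) ** 2
    # With F_p = q, F_q = p and F_y = F_z = 0, Eqs. (10) reduce to:
    return [q, p, 2.0 * p * q, -F_t, 0.0]

sol = solve_ivp(rhs, (0.0, 5.0), [t0, y0, z0, p0, q0], rtol=1e-8)
t, y, z = sol.y[0], sol.y[1], sol.y[2]     # evolution path (y, t) and z = log G
print(t[-1], y[-1], np.exp(z[-1]))         # endpoint and the gluon density there
```

Note that $`q`$ is conserved along a characteristic (since $`F_y=F_z=0`$), so the path here satisfies $`t(\tau )=t_0+q_0\tau `$, and the range of $`\tau `$ must be kept moderate to avoid overflow of $`\mathrm{exp}(e^t)`$.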
Like the DGLAP evolution equation, by using the formula (5), the GLR equation can be cast in the form:
$$\partial _y\partial _tG(y,t)=cG(y,t)\left[1+\frac{\mathrm{exp}(t)}{1-\mathrm{exp}(e^t)}\right]-\lambda \mathrm{exp}[-t-\mathrm{exp}(t)]\left[1+\frac{\mathrm{exp}(t)}{1-\mathrm{exp}(e^t)}\right]^2G^2(y,t)$$
(13)
The semi-classical approximation of Eq. (13) is
$$\partial _yz(y,t)\partial _tz(y,t)=c\left[1+\frac{\mathrm{exp}(t)}{1-\mathrm{exp}(e^t)}\right]-\lambda \mathrm{exp}[-t-\mathrm{exp}(t)+z]\left[1+\frac{\mathrm{exp}(t)}{1-\mathrm{exp}(e^t)}\right]^2$$
(14)
By using the method of characteristics, the solutions of Eq. (14) are shown in Fig. 2.
To conclude, recent experiments at HERA have supplied much information about nucleon structure at both large and low $`Q^2`$. At large $`Q^2`$, the DGLAP equation derived from perturbative QCD can describe the behavior of the parton distribution. The challenging problem is how to make a unified treatment of nucleon structure at both large and low $`Q^2`$. This letter proposes a way to meet that requirement. It is well known that the parton distribution includes both perturbative and nonperturbative QCD effects. The input distribution reflects the nonperturbative QCD, and the DGLAP equation itself is perturbative QCD. So it would seem that the DGLAP equation, describing the perturbative effect, together with the input distribution, describing the nonperturbative effect, can give a comprehensive description of the parton distribution. However, it can be seen clearly that the input distribution does not include all nonperturbative effects; that is to say, some nonperturbative effects are reflected through the running coupling. Including the nonperturbative effects in the running coupling is therefore a natural way to apply the DGLAP equation in the low-$`Q^2`$ region, so that the evolution equation itself includes both perturbative and nonperturbative effects. It should be noted that this approach is a working ansatz, which cannot be derived from the theory. From Fig. 1, it can be found that the nonperturbative contribution to DGLAP evolution is very notable.
Although the predictions of the GRV model obtained by applying the DGLAP evolution equation at very low $`Q^2`$ are in broad agreement with the HERA experiments, the evolution rate resulting from the model is faster than that of the experiments. By considering the nonperturbative contribution to DGLAP evolution, the discrepancy between the GRV model and the experiments can be explained naturally. From analysing the DGLAP equation, it can be found that the running coupling determines the evolution rate, which becomes slower when the nonperturbative contribution is included, especially at very low $`Q^2`$ such as $`Q^2=0.65GeV^2`$.
Recently, one of the important questions is whether the shadowing effect can be observed by the current experiments at HERA. Some authors, such as Shabelski and Treleani , answer “yes”, while others, such as Golec-Biernat, Krasny and Riess , answer “no”. Ayala, Gay Ducati and Levin argue that the shadowing effect is large in the gluon distribution but small in $`F_2(x,Q^2)`$. Like the DGLAP equation, the GLR equation can be treated by the same method. In this paper, a firm conclusion about this question is not made. Nevertheless, from Fig. 2, it can be concluded that the shadowing effect in the GLR equation, when modified by the nonperturbative contribution, is not as notable as in previous studies of the case where gluons concentrate in “hot-spots” within the proton ($`R=2GeV^{-1}`$). By analysing the values of $`\alpha _s(Q^2)`$ and $`\alpha _{an}(Q^2)`$, the simple explanation of this result is that the linear term in the GLR equation is proportional to $`\alpha _s(Q^2)`$ or $`\alpha _{an}(Q^2)`$ while the nonlinear term is proportional to $`\alpha _s^2(Q^2)`$ or $`\alpha _{an}^2(Q^2)`$, so the shadowing effect is weakened because $`\alpha _s(Q^2)>\alpha _{an}(Q^2)`$, especially in the low-$`Q^2`$ region. This means that the nonperturbative contribution weakens the shadowing effect. Some authors have discussed nuclear shadowing by applying the GLR evolution equation without considering the nonperturbative contribution. It might be interesting to restudy nuclear shadowing by applying the GLR equation (14), which is modified by the nonperturbative contribution.
In this paper, the DGLAP and GLR equations are solved by applying the method of characteristics. From the initial conditions (12), it can be found that the starting point is $`Q^2=4GeV^2`$, which is not very low, because at very low $`Q^2`$ the evolution equation might be too complicated to treat and the semi-classical approximation is no longer good. However, the conclusions shown in this paper can easily be extended to the very low $`Q^2`$ region, where the nonperturbative contribution becomes even more dominant: the difference between formula (3) and formula (5), $`\alpha _s(Q^2)-\alpha _{an}(Q^2)`$, which is $`(0.5-0.4)=0.1`$ at $`Q^2=0.65GeV^2`$, is much more notable than the $`(0.3-0.29)=0.01`$ at $`Q^2=4GeV^2`$.
In summary, it is believed that QCD, which includes both a perturbative part and a nonperturbative part, is a complete theory describing all strong interaction experiments. Nevertheless, as the fundamental dynamical model, the DGLAP or GLR equation itself only reflects the perturbative effect. Unfortunately, almost all strong interaction experiments, such as deep-inelastic ep scattering at HERA, involve both perturbative and nonperturbative effects. The purpose of this paper is to develop a fundamental dynamical model that itself includes the nonperturbative effect. By studying the model, we draw the conclusions: (i) the nonperturbative contribution suppresses the evolution rate in the low-$`Q^2`$, small-x region; (ii) the nonperturbative contribution weakens the shadowing effect. These conclusions are helpful for explaining the recent experiments in the low-$`Q^2`$ region.
To compare the results of this paper with the recent HERA data in detail, the quark distribution must also be discussed in the low-$`Q^2`$ region. It should be noted, however, that the recent HERA data are available only at a few isolated values of averaged x and $`Q^2`$, especially at very low $`Q^2`$, so analysing the HERA experiments and comparing the data with the results of the model developed in this paper will be challenging work; therefore the quark distribution is not discussed here. However, the method developed in this paper can easily be extended to the evolution equation for the quark distribution. The method can even be applied to discuss the parton distribution in the large-x, low-$`Q^2`$ region, because the DGLAP equation modified by the nonperturbative contribution can be applied to both the large-x and small-x regions (in the small-x region, it is possible that the GLR equation describes the parton behavior , but it is difficult to check the GLR equation because the nonperturbative contribution weakens the shadowing effect, as shown in this paper).
The main purpose of this paper is to develop a method in which the evolution equation itself includes the nonperturbative contribution. The method has the important theoretical feature that it suggests a smooth transition from the low-$`Q^2`$ (“soft”) region, where the nonperturbative contribution dominates, to the large-$`Q^2`$ (“hard”) region, where the perturbative contribution dominates and the nonperturbative contribution can be neglected. Although the method is only a first step in considering the nonperturbative contribution to QCD dynamical equations, it can be checked not only by the HERA experiments but also by other strong interaction experiments, because the method proposed in this paper is applicable over a wide region.
The author is grateful to C. Wang for his helpful discussions. This work was supported in part by the Foundation of Shanghai Jiaotong University.
Figure Captions
Fig. 1. The dashed curve corresponds to the results of Eqs. (10) and the solid curve corresponds to the results of the DGLAP equation without considering the nonperturbative contribution.
Fig. 2. As for Fig. 1, except for the GLR equation in the case $`R=2GeV^{-1}`$.
no-problem/9904/astro-ph9904243.html | ar5iv | text
# The interstellar clouds of Adams and Blaauw revisited: an HI absorption study - II
## 1 Introduction
In the preceding paper (Rajagopal, Srinivasan and Dwarakanath, 1998; paper I) we presented the results of a program to obtain the HI absorption profiles towards a selected sample of bright stars. As mentioned there, detailed optical absorption studies exist in the direction of these stars in the lines of NaI and CaII. The motivation for such an observational program was also described in the previous paper. In the present paper, we wish to discuss the results obtained by us. There are two major issues that we wish to address and discuss later:
* Are the properties of interstellar clouds seen in optical absorption the same as those seen in HI emission and absorption in general?
* What is the origin and nature of the faster clouds seen in optical and UV absorption?
Let us elaborate a bit on the first issue. Although the general picture of the ISM that emerged from optical and radio observations, respectively, is the same, viz., clouds in pressure equilibrium with an intercloud medium, it has not been possible to directly compare the inferred properties. Whereas both the column densities and the spin temperatures of the clouds have been estimated from HI observations, there have only been indirect and often unreliable estimates for the clouds seen in optical absorption. Our HI absorption measurements, combined with HI emission measurements in the same directions, will enable us to directly estimate for the first time the column densities and spin temperatures of the clouds seen in optical absorption.
The second question mentioned above arises as follows. The existence of a high velocity tail in the distribution of random velocities of clouds was firmly established by Blaauw (1952) from the data obtained by Adams (1949). As mentioned in paper I, it was noticed quite early on by Routly and Spitzer (1952) that the faster clouds have a smaller NaI to CaII ratio than the lower velocity clouds. Early HI emission measurements (see paper I for references) in the direction of the bright O and B stars provided an added twist. Whereas the lower velocity clouds clearly manifested themselves in the HI emission measurements, the higher velocity clouds were not detected. To illustrate this point, we show in Fig. 1 the optical absorption features and the HI emission profiles towards the star HD 219188. It may be seen that there is no counterpart in HI emission to the higher velocity optical absorption feature.
In the next two sections (sections 2 and 3) we will discuss the results of our absorption survey presented in paper I. We shall classify the HI absorption features into two broad classes, viz., “low velocity” and “high velocity”. It is of course difficult to define a sharp dividing line between “low” and “high” velocities. But based upon earlier analyses of NaI to CaII ratios, as well as HI emission measurements, we adopt a velocity of the order of 10 km s⁻¹ as the dividing line. The main conclusions from our study are summarized in section 4. Section 5 is devoted to a detailed discussion of the nature and origin of the high velocity clouds.
## 2 The low velocity clouds
In this section we discuss the low velocity absorption features (i.e., $`v\lesssim 10`$ km s⁻¹). To recall from paper I, we detected HI absorption in all but 4 of the 24 fields we looked at. We will discuss these four unusual fields at the end of the next section.
### 2.1 Coincident absorption features
All the lines of sight in our sample show optical absorption from CaII at both low and high velocities. In most of the fields where we have detected HI absorption, the HI features occur at roughly the same velocities as the optical absorption lines for $`v\lesssim 10`$ km s⁻¹. Not surprisingly, the HI absorption features have a one to one correspondence with the HI emission features in the fields for which earlier emission measurements exist. In Table 1 we have listed all the fields with “matching” velocities in optical absorption and HI emission and absorption.
For illustration we have shown in the upper panels of Fig. 2 the HI absorption spectra (optical depth) in 3 fields; HI optical depth is plotted as a function of $`V_{LSR}`$. The arrows indicate the velocities of the optical absorption lines. For comparison, we have shown in the lower panels the HI emission in these 3 fields: the data obtained by Habing (1968) have been digitized and re-plotted. The first field contains the star HD 34816. The optical spectrum obtained by Adams (1949) towards this star shows CaII absorption at $`-`$14.0 and $`+`$4.1 km s⁻¹. The HI absorption profile shows a prominent feature at 6 km s⁻¹ (we have obtained spectra towards 2 radio sources in this field but only one of them is shown here). As may be seen from the lower panel, there is a corresponding emission feature at this velocity. As mentioned in paper I, absorption features in the optical and HI spectra may be taken to be at “matching” velocities provided they are within $`\sim `$3 km s⁻¹ of one another (this window is to account for blending effects in the optical spectra and different corrections adopted for solar motion). In view of this one may conclude that the optical absorption at $`+`$4.1 km s⁻¹ and the HI emission and absorption at 6 km s⁻¹ arise in the same cloud, even though the radio source is 20′ away from the star.
The second panel pertains to the field containing the star HD 42087. In this field also we have two radio sources within the primary beam, and the spectrum towards one of them is shown. The absorption features are clearly seen at 4.4 km s⁻¹ and 12.4 km s⁻¹, with the latter being much stronger. The HI emission spectrum (shown in the panel below) shows a broad peak centred at $`\sim `$15 km s⁻¹. The absorption feature at 12.4 km s⁻¹ may be taken to be the counterpart of the optical absorption at 10.2 km s⁻¹. There is no HI absorption at negative velocities corresponding to the other optical absorption lines indicated by the arrows. This may be due to the fact that the 2 radio sources in the field are 32′ and 42′ respectively from the star in question. Given that the star is at a distance of 1.3 kpc (paper I, Table 1), it is conceivable that we are not sampling all the gas seen in optical absorption.
The spectrum towards HD 148184 is shown in panel 3. Again there is good agreement between the HI spectrum and the optical spectrum as far as the lower velocity optical absorption is concerned. As in the previous two cases, there is no counterpart of the higher velocity optical absorption in the HI spectrum. These two examples suffice to illustrate the general trend in Table 1, viz., there is reasonably good agreement at low velocities ($`v\lesssim 10`$ km s⁻¹) between the optical absorption features and the HI spectra.
Returning to Table 1, we have listed the derived optical depths in column 4 and the velocity width of the HI absorption in column 5. In the last column we have given the derived spin temperatures. These are obtained by supplementing our HI absorption measurements with brightness temperatures derived from emission measurements (mostly from Habing, 1968; and in some cases from Burton, 1985). To the best of our knowledge, this is the first direct determination of the temperatures of the interstellar clouds seen in optical absorption. To be precise, the temperature derived by us is the spin temperature which may be taken to be an approximate measure of the kinetic temperature.
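For reference, the relation behind this determination (standard radiative-transfer algebra, not spelled out in the text) combines the brightness temperature $`T_B`$ of the emission feature with the optical depth $`\tau `$ measured in absorption:
$$T_s=\frac{T_B}{1-e^{-\tau }},$$
which assumes that the emitting and absorbing gas are the same and that any background continuum contributes negligibly to the emission measurement.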
The correspondence between the optical absorption features and the HI absorption suggests that one is sampling the same clouds in both cases. The derived spin temperatures and velocity widths are consistent with these clouds belonging to the same population as the standard cold diffuse clouds in the raisin-pudding model of the ISM. While it is conceivable that at low velocities one may merely be sampling the local gas (regardless of direction), statistical tests carried out by Habing (1969) suggest that this is unlikely.
### 2.2 Non-coincident absorption features
As we have already encountered in the case of HD 42087 (see Fig. 2), sometimes there is a mismatch between the optical and radio spectra even at low velocities. We mention two specific cases here.
#### HD 37742
The HI absorption spectrum obtained by us (Fig. 3) shows a deep absorption feature at 9.5 km s⁻¹. There are no other absorption features down to an optical depth limit of 0.03. The optical absorption features are at 3.6 and $`-`$21 km s⁻¹. Since the radio source is only 12′ away from the star, and given the distance estimate of 500 pc to the star, the discrepancy between the optical spectrum and the HI absorption spectrum is significant and intriguing. While the gas seen strongly absorbing in HI could be located beyond the star, one is left wondering why one does not see the low velocity cloud seen in optical absorption.
#### HD 119608
The optical absorption spectrum towards this star obtained by Münch and Zirin (1961) shows 2 minima at 1.3 and 22.4 km s⁻¹. We have obtained HI absorption towards 2 strong radio sources in this field, both within 15′ of the star. Both show a deep absorption feature at $`\sim `$5.4 km s⁻¹ (Fig. 4). This agrees with the HI emission feature shown in the lower panel. However, the HI emission spectrum also shows a broad feature peaking at $`\sim `$20 km s⁻¹. This is also seen in the more recent measurement of Danly et al. (1992). If this feature is indeed to be identified with the optical absorption at 22.4 km s⁻¹, then this represents an interesting case where the gas in the line of sight to the star causing optical absorption manifests itself in HI emission but not in absorption. This could happen, for example, if the spin temperature of this gas is sufficiently high as to push the HI optical depth below our detection limit. HD 119608 is a high latitude star and one is presumably sampling halo gas, which warrants a deeper absorption study.
## 3 The high velocity clouds
Although in the previous section we were primarily concerned with establishing the correspondence between the optical absorption features and the HI absorption spectra at low velocities, we did have occasion to comment on the absence of HI absorption from the high velocity clouds \[the high velocity clouds we are discussing are those that populate the tail of the velocity distribution obtained by Blaauw (1952), and not those that are commonly referred to as HVCs in the literature\]. It turns out that in all but 4 cases, we fail to detect HI absorption at velocities corresponding to the high velocity ($`\gtrsim `$ 10 km s⁻¹) optical absorption lines. To illustrate this generic trend, we have shown some additional examples in Fig. 5. It may be seen in the figure that the high velocity optical absorption features (indicated by arrows) are not seen in the HI emission spectra either. A discussion of this will form the major part of section 5.
In only 4 out of the 24 lines of sight have we detected HI absorption at velocities $`\gtrsim `$ 10 km s⁻¹ that clearly correspond to the optical absorption lines. We discuss these below.
### 3.1 Coincident absorption features at high velocities
#### HD 14134, HD 14143
These two stars (in the same field) are members of the h and $`\chi `$ Persei clusters (Münch 1957). There were 3 radio sources within our primary beam (all within 10′ of the star). The HI absorption features towards one of them are shown in Fig. 6. The prominent high velocity absorption features towards the 3 sources are at $`-`$52.8, $`-`$50.3 and $`-`$46.1 km s⁻¹, respectively (Table 1 of paper I). These should be compared with the optical absorption features towards the 2 stars in question, which are at $`-`$46.8 and $`-`$50.8 km s⁻¹. Thus there is reasonable coincidence between the optical and HI data. Nevertheless, we wish now to point out that the high velocity may represent not random motion but rather systematic motion.
While interpreting his pioneering observations, Münch (1957) attributed the high velocity features in the optical spectra to anomalous motions in the Perseus arm. Since then it has generally been accepted that there are streaming motions in the Perseus arm with velocities ranging from $`-`$10 to $`-`$30 km s⁻¹ (Blaauw and Tolbert 1966; Brand and Blitz 1993). For the sake of completeness we have listed in Table 2 the spin temperature of the gas derived by us by combining our measurements with existing HI emission measurements.
#### HD 21291
The spectrum towards this star near the Perseus arm has a prominent Na D line at a velocity of $`-`$34 km s⁻¹ (Münch, 1957). The HI absorption spectrum shows a feature at $`-`$31.2 km s⁻¹. The contribution to the radial velocity from Galactic rotation can only be $`\sim `$10 km s⁻¹, thus indicating significant peculiar motion of the gas. In our opinion one must attribute this to streaming motion of the gas, as in the case discussed above. HI emission clearly shows spatially extended gas covering the longitude range from $`\sim `$136° to 141° at the velocity of interest. This strengthens the conclusion that one must not attribute the observed velocity to random motions.
#### HD 159176
There is pronounced optical absorption at $`-`$22.5 km s⁻¹, which might be identified with the HI absorption seen by us at $`-`$20.8 km s⁻¹. Given the longitude of 356°, it is difficult to attribute this to Galactic rotation. The measured velocity must correspond either to random velocity or to systematic motion.
#### HD 166937
In the case of this star, there is no strict coincidence (within 3 km s⁻¹) between the HI and optical absorption at high velocities. However, we see two HI absorption features at velocities close to and straddling the optical feature at 41.1 km s⁻¹, so we include this field in our list of high velocity coincidences. It may be noted that this star is also close to the Galactic center direction.
## 4 Summary of results
In the preceding section we described the first attempt to directly compare HI absorption with optical absorption features arising in the ISM. We summarize below the main results:
* HI absorption measurements were carried out towards 24 fields. Each field has existing optical absorption spectra towards a bright star. In 20 of these fields we detected HI absorption features.
* In all but 4 of these 20 fields, the HI absorption features at low velocities ($`<`$ 10 km s⁻¹) correspond to the optical absorption lines.
* In most cases there is also corresponding HI emission.
* The spin temperatures derived by us for the low velocity gas are consistent with the standard values for the cold diffuse HI clouds.
* This lends strong support to the hypothesis that (at least) the low velocity clouds seen in optical absorption belong to the same population as those sampled in extensive HI studies.
* In 20 out of 24 fields surveyed, we did not detect any HI absorption corresponding to optical absorption at high velocities ($`v>10`$ km s⁻¹). It is unlikely that in all cases this is due to our line of sight not sampling the gas seen in optical absorption; in several cases the line of sight to the radio sources would have sampled this gas even if its linear size was of the order of 1 pc.
* Curiously, the early emission measurements also failed to detect HI gas at high velocities (in these same lines of sight). Given the size of the telescopes used, beam dilution could have accounted for the non-detection if the gas was “clumpy”. Our absorption measurements rule this out as a generic explanation. More recent and sensitive measurements indicate that the high velocity gas has much smaller column density than the low velocity gas (N(HI) $`<`$ 10¹⁸ cm⁻²). If this is the case, then it is not difficult to understand why we do not see it in absorption, since our sensitivity limit was $`\tau \gtrsim `$ 0.1 (see the note following this list). But then the correlation between high velocity and low column density would have to be explained. We venture to offer some suggestions in the next section.
* Fields with no HI absorption: We wish to record that in the fields containing the stars HD 38666, HD 93521, HD 205637 and HD 220172 we did not detect any HI absorption - even at low velocities. These are all high latitude stars. HI emission spectra also show only weak features. For illustration we show in Fig. 7 the HI absorption and emission spectrum towards HD 93521. From very detailed investigations - the case of HD 93521 is a good example - it has been concluded that most of the optical absorption arises from warm gas in the halo (detailed references may be found in Spitzer and Fitzpatrick 1993 and Welty, Morton and Hobbs 1996). The fact that we do not see HI absorption is consistent with the interpretation of this gas being warm. Weak HI emission indicates low column density also.
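A note on the sensitivity argument in the list above (our arithmetic, using the standard optically thin conversion rather than anything stated in the text): the HI column density is
$$N(HI)=1.823\times 10^{18}\,T_s\int \tau \,dv\mathrm{\hspace{0.33em}cm}^{-2},$$
with $`T_s`$ in K and $`v`$ in km s⁻¹. A column of $`10^{18}`$ cm⁻² spread over a few km s⁻¹ then implies $`\tau `$ of order $`10^{-3}`$ even for cold gas with $`T_s\sim 100`$ K, two orders of magnitude below the quoted sensitivity limit of $`\tau \gtrsim `$ 0.1, and lower still if the gas is warm.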
## 5 Discussion
As we have already argued, our absorption measurements, taken together with earlier emission measurements, establish that the low velocity clouds seen in optical absorption are to be identified with the standard HI clouds - their column densities and spin temperatures match.
But the true nature of the high velocity clouds seen in optical absorption is still unclear. There are two questions to be addressed: (1) Do the high velocity clouds belong to a different population, and (2) is there a causal connection between their higher velocities and lower column densities? We wish to address these two questions below.
An unambiguous indication that the high velocity clouds may have very different properties compared to their low velocity counterparts comes from an HI absorption study towards the Galactic center (Radhakrishnan and Sarma 1980). Given the statistics of clouds derived from optical studies (8 to 12 per kpc), if the high velocity clouds had optical depths comparable to the low velocity clouds, then an absorption experiment towards the Galactic center should straightaway reveal a velocity distribution similar to the one derived by Blaauw (1952) from Adams’ data. The velocity distribution derived by Radhakrishnan and Sarma from precisely such a study did not reveal a pronounced high velocity tail. The velocity dispersion of 5 km s⁻¹ derived by them was in good agreement with the low velocity component of Blaauw’s distribution. Instead of the pronounced high velocity tail seen in optical and UV studies, there was at best a hint of a high velocity population of very weakly absorbing clouds. Even this conclusion has remained controversial (Schwarz, Ekers and Goss 1982).
As for the possible correlation between higher velocities of clouds and lower column densities, fairly conclusive evidence comes from UV absorption studies. Since the UV absorption lines have larger oscillator strengths, they can be used to probe smaller column densities than is possible with optical absorption lines. The analysis of Hobbs (1984) seems to confirm this expectation - in several lines of sight there are more high velocity absorption features in the UV than in the optical. A more direct inference can be drawn from the work of Martin and York (1982). For the two lines of sight they studied, there is a clear indication of lower column density (N(HI)) at higher velocities.
Over the years, three broad suggestions have been put forward in an attempt to elucidate the nature of the high velocity clouds.
#### Circumstellar clouds
According to an early suggestion due to Schlüter, Schmidt and Stumpf (1953), the high velocity clouds seen in optical absorption are to be identified with circumstellar clouds. This was an attempt to explain the predominance of negative velocities among the high velocity absorption features. If the clouds in the vicinity of massive stars are accelerated by the combined effect of stellar winds and radiation from the stars, then in an absorption study against the stars one would detect only those clouds accelerated towards us. A few years later, Oort and Spitzer (1955) developed the well known “rocket mechanism”, in which the UV radiation from the star ionizes the near side of the cloud, resulting in ablation and consequent acceleration of the cloud. This mechanism will naturally result in the higher velocity clouds having smaller mass and therefore smaller column density. The difficulty with this mechanism, however, is that one would have to invoke another mechanism to explain the large positive velocities which are also seen in absorption studies. In view of this we will not dwell any further on this scenario.
#### Relic SNRs
An alternative scenario was advanced by Siluk and Silk (1974). Their suggestion was that the high velocity optical absorption features arise in very old supernova remnants (SNRs) which have lost their identity in the ISM. Their primary objective in advancing this scenario was to explain the high velocity tail of the velocity distribution of optical absorption features. The point was that if the absorption features arise not in interstellar clouds but in SNRs in their very late stages of evolution, then this would result in a power law distribution of velocities; such a distribution, according to them, provided a good fit to the observations.
While this suggestion is quite attractive, it suffers from two drawbacks: (1) The early studies of the evolution of SNRs predicted the formation of very dense shells beyond the radiative phase. Such compressed shells were essential to explain the observed absorption features and the derived column densities. However, more recent studies which take into account the effects of the compressed magnetic field and cosmic ray pressure in the shells suggest that either dense shells do not form or, if they do, they do not last long enough (Spitzer 1990; Slavin and Cox 1993). (2) Given a supernova rate of one per $`\sim `$50 years in the Galaxy, the statistics of absorption features requires that the SNRs enter the radiative phase (and as a consequence develop dense shells) while they are still sufficiently small so as not to overlap with one another. This would indeed be the case if the intercloud medium into which the SNRs expand is dense enough (n $`\sim `$ 0.1 cm⁻³). But if a substantial fraction of the ISM is occupied by low density hot gas (n $`\sim `$ 0.003 cm⁻³; T $`\sim `$ 5$`\times `$10⁵ K), such as indicated by UV and soft X-ray observations, then the supernova bubbles are likely to intersect with one another and perhaps even burst out of the disk of the Galaxy before developing dense shells (Cowie and York 1978). In view of these two drawbacks, we do not favour this suggestion.
#### Shocked clouds
The third possibility is that the high velocity absorption features do arise in interstellar clouds but which have been engulfed and shocked by supernova blast waves. Indeed we feel that this is the most plausible explanation for it has support from several quarters. The earliest evidence that the high velocity gas may be “shocked” came from the Routly-Spitzer effect. The NaI/CaII ratio in the fast clouds was lower (sometimes by several orders of magnitude) than in the slow clouds. The variation in NaI/CaII ratio was primarily attributed to the variable gas phase abundance of calcium in these clouds. Due to its relatively high condensation temperature calcium is likely to be trapped in grains. Spitzer has argued that the observed trend in NaI/CaII ratio could be understood if the calcium is released back into the gas phase in the high velocity clouds due to sputtering. This is indeed what one would expect if the interstellar cloud is hit by an external shock, which in turn drives a shock into the clouds (Spitzer 1978). Supernova blast waves are the most likely candidates.
Earlier in this section we referred to an HI absorption study by Radhakrishnan and Sarma towards the Galactic center. While they did not find strong absorption at high velocities, they did conclude that there must be a population of weakly absorbing high velocity clouds. Radhakrishnan and Srinivasan (1980) examined this more closely and advanced the view that in order to explain the optical depth profile centered at zero velocity one had to invoke two distinct velocity distributions: a standard narrow distribution with a velocity dispersion of $`\sim `$5 km s⁻¹, and a second one with a much higher velocity dispersion of $`\sim `$35 km s⁻¹. While arguing strongly for a high velocity tail, they stressed that the latter distribution must consist of a population of very weakly absorbing clouds. They went on to suggest that this population of weakly absorbing clouds might be those that have been shocked by expanding SNRs; the very process of acceleration by SNRs might have resulted in significant loss of material and heating of the clouds, leading to low HI optical depths.
To conclude this discussion we wish to briefly summarize the expected life history of a cloud hit by a supernova blast wave. The first consequence of a cloud being engulfed by an expanding SNR is that a shock will be driven into the cloud itself, resulting in an eventual acceleration of the cloud. The effect of this shock, and of a secondary shock propagating in the reverse direction after the cloud has been overtaken by the blast wave, is to compress and flatten the cloud. Eventually various instabilities are likely to set in which will fragment the cloud.
The detailed history of the cloud depends upon two important timescales: the time taken for the cloud shock to cross the cloud and the evolutionary timescale of the SNR. If the former is much smaller than the latter, the cloud is likely to be destroyed. However if the reverse is true, then the shocked cloud will survive and be further accelerated as a consequence of the viscous drag of the expanding hot interior. Clouds accelerated in such a manner will however suffer substantial evaporation due to heat conduction from the hot gas inside the SNR. Partial fragmentation could further reduce the size of the cloud. For detailed calculation and discussion we refer to McKee and Cowie (1975), Woodward (1976), McKee, Cowie and Ostriker (1978), Cowie, McKee and Ostriker (1981) and a more recent paper by Klein, McKee and Woods (1995).
To summarize the above discussion, in our opinion the shocked cloud scenario has all the ingredients needed to explain the observational trends. In particular it would explain why the high velocity clouds seen so clearly in optical and UV absorption lines do not manifest themselves in HI observations. But this explanation is predicated on the conjecture that the higher velocity clouds are not only warmer but also have smaller column densities. There is certainly an indication of this from optical and UV absorption studies. To recall, in UV observations, which are sensitive to much smaller column densities than optical studies, the higher velocity absorption features are more pronounced. But it would be desirable to quantify the correlation between velocity and column densities. Reliable column densities are difficult to obtain from optical observations because of blending of lines and also depletion onto grains. The column densities derived from UV observations are also uncertain because of the effects of saturation of the lines. In view of these difficulties it would be rewarding to do a more systematic and much more sensitive HI absorption study, supplemented by emission studies.
no-problem/9904/astro-ph9904314.html | ar5iv | text
# Expectations from Realistic Microlensing Models of M31. I: Optical Depth
## 1 Introduction
The ongoing microlensing observations towards the LMC and SMC have provided extremely puzzling results. On the one hand, analysis of the first two years of observations (Alcock et al. 1997a) suggests a halo composed of objects with mass $`0.5M_{\odot }`$ and a total mass in MACHOs out to 50 kpc of around $`2.0\times 10^{11}M_{\odot }`$. On the other hand, producing such a halo requires extreme assumptions about star formation, galaxy formation, and the cosmic baryonic mass fraction. An attractive possibility is that the microlenses do not reside in the halo at all! Alternative suggested locations are the LMC halo (Kerins & Evans 1999), the disk of the LMC itself (Sahu 1994), a warped and flaring Galactic disk (Evans et al. 1998), or an intervening population (Zhao 1998). Unfortunately, the low event rates, uncertainties in the Galactic model, and the velocity-mass-distance degeneracy in microlensing all conspire to make precise determinations of the MACHO parameters difficult. Over the next decade, second generation microlensing surveys, monitoring ten times the number of stars in the LMC, will improve the overall statistics (and numbers of “special” events) considerably, allowing an unambiguous determination of the location of the microlenses. Even so, the paucity of usable lines of sight within our halo makes determination of halo parameters such as the flattening or core radius very difficult.
The Andromeda Galaxy (M31) provides a unique laboratory for probing the structure of galactic baryonic halos (Crotts 1992). Not only will the event rate be much higher than for LMC lensing, but it will be possible to probe a large variety of lines of sight across the disk and bulge and through the M31 halo. Furthermore, it provides another example of a bulge and halo which can be studied, entirely separate from the Galaxy. Recently, two collaborations, MEGA and AGAPE, have begun observations looking for microlensing in the stars of M31. Previous papers have made it clear that a substantial microlensing signal can be expected. In this paper we calculate, using realistic mass models, optical depth maps for M31. The results suggest that we should be able to say definitively whether M31 has a dark baryonic halo with only a few years or less of microlensing data. We also discuss how the variation of the maps with halo parameters may allow us to determine the M31 halo structure. This is particularly important in evaluating the level of resources that should be dedicated to the ongoing observational efforts. Preliminary results suggest that the core radius and density profile power-law should be the easiest parameters to extract.
The paper is organized in the following manner. In the next section we briefly discuss the M31 models we used. Following this we present optical depth maps for various halo models, discuss the microlensing backgrounds and finish with a quick discussion of the implications of the maps.
## 2 Modeling
Sources are taken to reside in a luminous two-component model of M31 consisting of an exponential disk and a bulge. The disk model is inclined at an angle of 77°, has a scale length of 5.8 kpc and a central surface brightness of $`\mu _R=20`$ (Walterbos & Kennicutt 1988). The bulge model is based on the “small bulge” of Kent (1989) with a central surface brightness of $`\mu _R=14`$. This is an axisymmetric bulge with a roughly exp($`-r^{0.4}`$) falloff in volume density, an effective radius of approximately 1 kpc and an axis ratio $`c/a\simeq 0.8`$. Values of the bulge density are normalized to make $`M_{bulge}=4\times 10^{10}M_{\odot }`$.
The predominant lens population is taken to be the M31 dark matter halo. We explore a parametrized set of M31 halo models. Each model halo is a cored “isothermal sphere” determined by three parameters: the flattening ($`q`$), the core radius ($`r_c`$) and the MACHO fraction ($`f_b`$):
$$\rho (x,y,z)=\frac{V_c(\infty )^2}{4\pi G}\frac{e}{q\mathrm{sin}^{-1}e}\frac{1}{x^2+y^2+(z/q)^2+a^2},$$
(1)
where $`a`$ is the core radius, $`q`$ is the x-z axis ratio, $`e=\sqrt{1-q^2}`$ and $`V_c(\infty )=240`$ km/s is taken from observations of the M31 disk. In section 4 we briefly consider the optical depth due to other populations, such as the bulge stars.
More details of our modeling are given in Gyuk & Crotts (1999) where in particular the velocity distributions (necessary for calculation of the microlensing rate) are discussed. These considerations do not affect the optical depths treated here.
## 3 Optical Depth Maps
The classical microlensing optical depth is defined as the number of lenses within one Einstein radius of the source-observer line-of-sight (the microlensing tube):
$$\tau =\int _0^D\frac{\rho _{\mathrm{halo}}(d)}{M_{\mathrm{lens}}}\frac{4\pi GM_{\mathrm{lens}}}{c^2}\frac{(D-d)d}{D}\,dd$$
(2)
Such a configuration is intended to correspond to a “detectable magnification” of at least a factor of $`1.34`$. Unfortunately, in the case of non-resolved stars (“pixel lensing”) we typically have
$$\pi \sigma ^2S_{M31}\gg L_{\ast },$$
(3)
where $`S_{M31}`$ is the background surface brightness, $`4\pi \sigma ^2`$ is the effective area of the seeing disk and $`L_{\ast }`$ is the luminosity of the source star. Thus it is by no means certain that a modest increase $`L_{\ast }\to 1.34L_{\ast }`$, as the lens passes within an Einstein radius, will be detectable. Furthermore, even for the events detected, measurement of the Einstein timescale $`t_0`$ is difficult. Thus measurement of the optical depth may be difficult. Nonetheless, advances have been made in constructing estimators of optical depth within highly crowded star fields (Gondolo 1999), which do not require the Einstein timescale for individual events, although they still require evaluation of the efficiency of the survey in question for events with various half-maximum timescales. The errors on the derived optical depths will likely be larger than for the equivalent number of classical microlensing events. It is clear, however, that image subtraction techniques (Tomaney & Crotts 1996; Alcock et al. 1999a, b) can produce a higher event rate than conventional photometric monitoring. Thus one needs models of the optical depth, even if expressed only in terms of the cross-section for a factor 1.34 amplification, in order to understand how microlensing across M31 will differ depending on the spatial distribution of microlensing masses in the halo and other populations.
The above expression for the optical depth must be slightly amended to include the effects of the three-dimensional distribution of the source stars, especially of the bulge. We thus integrate the source density along the line of sight giving
$$\tau =\frac{\int _0^{\infty }\rho (S)\int _0^S\frac{\rho _{\mathrm{halo}}(s)}{M_{\mathrm{lens}}}\frac{4\pi GM_{\mathrm{lens}}}{c^2}\frac{(S-s)s}{S}\,ds\,dS}{\int _0^{\infty }\rho (S)\,dS}$$
(4)
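The following minimal sketch (ours, not from the paper; parameter values are illustrative, all sources are placed at the M31 distance instead of being weighted by the source density of Eq. (4), the 77° inclination is ignored, and the line of sight is approximated as keeping a fixed transverse offset, which is adequate because the halo density is negligible except near M31) evaluates this optical depth numerically for the halo of Eq. (1):

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6            # G in kpc (km/s)^2 / Msun
C = 2.998e5               # speed of light in km/s
D = 770.0                 # distance to M31 in kpc
VC = 240.0                # asymptotic circular speed of Eq. (1) in km/s
A, Q = 5.0, 0.9           # core radius (kpc) and flattening -- illustrative

def rho_halo(x, y, z):
    """Cored isothermal halo of Eq. (1); z is the flattening (symmetry) axis."""
    e = np.sqrt(1.0 - Q ** 2)
    norm = VC ** 2 / (4.0 * np.pi * G) * e / (Q * np.arcsin(e))
    return norm / (x ** 2 + y ** 2 + (z / Q) ** 2 + A ** 2)

def tau_los(b_major, b_minor):
    """Eq. (2) toward a source at the M31 distance whose sky position is
    offset (b_major, b_minor) in kpc from the halo center."""
    def integrand(s):
        return rho_halo(s - D, b_major, b_minor) * (D - s) * s / D
    integral, _ = quad(integrand, 0.0, D, limit=200)
    return 4.0 * np.pi * G * integral / C ** 2

for b in (2.0, 5.0, 10.0):
    print(f"offset {b:5.1f} kpc: tau = {tau_los(b, 0.0):.2e}")
```

For these parameters the sketch returns optical depths of order $`10^{-6}`$, the magnitude expected for halo lensing in the discussion below.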
The results of this calculation as a function of position for a variety of halo models are shown in Figure 3. The most important attribute is the strong modulation of the optical depth from the near to far side of the M31 disk as was first remarked on by Crotts (1992). Near-side lines-of-sight have considerably less halo to penetrate and hence a lower optical depth. This can be seen nicely in Figure 1 where we plot the optical depth along the minor axis for the four models depicted in Figure 3. While all models exhibit the strong variation from near to far, the fractional variation in $`\tau `$ across the minor axis is most pronounced for less flattened models, and changes in $`\tau `$ along the minor axis occur most rapidly for models with small core radii. This can be understood geometrically: in the limit of an extremely flattened halo the pathlength (and density run) through the halo is identical for locations equidistant from the center. Small core radii tend to make the central gradient steeper and produce a maximum at a distance along the minor axis comparable to the core size. This maximum is especially prominent in the flattened halos.
Variations in core radii and flattening are also reflected in the run of optical depth along the major axis. In Figure 2 we show the optical depth along the major axis displaced by -10 on the minor axis. The gradients in the small core radii models are much larger than for large core radii. Asymptotically the flattened halos have a larger optical depth.
## 4 Background Lensing
Unfortunately, the M31 halo is not the only source of lenses. As mentioned above, the bulge stars can also serve as lenses. We show in Figure 4 the optical depth contributed by the bulge lenses. The effect of the bulge lenses is highly concentrated towards the center. This is a mixed blessing. On the one hand the bulge contribution can thus be effectively removed by deleting the central few arcminutes of M31. Beyond a radius of 5 arcminutes, bulge lenses contribute negligibly to the overall optical depth. On the other hand the source densities are much higher in the central regions and thus we expect the bulk of our halo events to occur in these regions. We discuss this point in more detail in a forthcoming paper (Gyuk & Crotts 1999). The bulge of M31 might easily serve as an interesting foil to the Galactic Bulge, which produces microlensing results which seem to require a special geometry relative to the observer, or other unexpected effects (Alcock et al. 1997b, Gould 1997).
In addition to the M31 bulge lensing, a uniform optical depth across the field will be contributed by the Galactic halo. This contribution will be of order $`10^{-6}`$, corresponding to a 40% Galactic halo, as suggested by the recent LMC microlensing results. Finally, disk self-lensing will occur. The magnitude of the optical depth for this component will, however, be at least an order of magnitude lower than the expected halo or bulge contributions (Gould 1994) and hence is ignored in these calculations.
## 5 Discussion and Conclusions
The optical depth maps for M31 shown above exhibit a wealth of structure and clearly contain important information on the shape of the M31 halo. The most important of these information-bearing characteristics is the asymmetry in the optical depth between the near and far sides of the M31 disk. A detection of strong variation in the optical depth from front to back will be a clear and unambiguous signal of M31’s microlensing halo, perhaps due to baryons. No other lens population or contaminating background can produce this signal. However, the lack of a strong gradient should not be taken as conclusive proof that M31 does not have a halo. As discussed above, strong flattening or a large core radius can reduce or mask the gradient. Nevertheless, the halo should still be clearly indicated by the high microlensing rates observed outside the bulge region. In such a case, however, careful modeling of the experimental efficiency and control over the variable star contamination will be necessary to ensure that the observed events are really microlensing.
Further information about the structure of the M31 baryonic halo can be gleaned from the distribution of microlensing along the major axis. A strong maximum at the minor axis is expected for small core radii especially for spherical halos.
The combination of the change in event rate both along the major and minor axis directions can in principle reveal both the core radius and flattening from a microlensing survey. How easily such parameters can be measured depends critically on the rate at which events can be detected, which we discuss in paper II of this series, along with estimates of the expected accuracy. Additionally, we will discuss strategies to optimize such surveys for measuring shape parameters.
no-problem/9904/astro-ph9904042.html | ar5iv | text
# Galaxy Formation by Galactic Magnetic Fields
## 1. Introduction
It is generally believed that the hierarchical structure seen today in the universe was generated by gravitational instability of cold dark matter (CDM), whose density is about 10 times higher than that of baryonic matter, and that galaxies form in virialized dark matter haloes by cooling of baryonic gas and subsequent star formation. The characteristic mass range of galaxies is about $`10^8`$–$`10^{12}M_{\odot }`$, which can be understood as the mass range in which baryonic gas can cool sufficiently within a dark matter halo to form a galaxy (e.g., Blumenthal et al. 1984 for a review). However, spheroidal components reside only in relatively massive galaxies with $`M=10^{10}`$–$`10^{12}M_{\odot }`$, and less massive galaxies are mostly irregular types or late-type spiral galaxies (Burstein et al. 1997; Hunter 1997). \[There is a population called dwarf spheroidal galaxies, but they are generally considered to be different from giant elliptical galaxies and bulges; they are more similar to dwarf irregulars (Bender et al. 1992; Mao & Mo 1998).\] Elliptical galaxies and bulges appear to form a very uniform, old stellar population with very little scatter in metallicity and star formation history, and they are generally considered to have formed by intensive starbursts at high redshifts (e.g., Renzini 1999). The trigger mechanism for starbursts is unknown, and it is often assumed to be mergers of disk galaxies when galaxy formation is modeled in the context of hierarchical structure formation in the CDM universe (Kauffmann, White, & Guiderdoni 1993; Baugh, Cole, & Frenk 1996). However, there is no evidence for spheroidal galaxy formation through mergers, and the merger hypothesis does not give a clear explanation for the fact that spheroidal galaxies are all very old and are found only in massive galaxies. These trends appear to run opposite to the expectation of structure formation in the CDM universe, in which smaller objects form earlier and more massive objects later. This is currently considered a major challenge for galaxy formation in the CDM universe (e.g., Renzini 1999), and it is interesting to seek a mechanism which triggers starbursts only in massive and high-redshift objects. In this letter we show that a possible strong dependence of star formation activity on magnetic field strength, which is suggested by observations of magnetic fields in local galaxies, provides a candidate for such a mechanism.
## 2. Galactic Magnetic Fields and Star Formation Activity
Interstellar magnetic fields in galaxies are strong enough to significantly affect interstellar gas dynamics, both on global scales characteristic of galactic structure and on small scales characteristic of star formation (see, e.g., Vallée 1997; Zweibel & Heiles 1997 for a review). Fitt and Alexander (1993; hereafter FA) derived the strengths of volume-averaged magnetic fields for 146 spiral galaxies, a complete sample including various types from Sa to irregular galaxies, and they found a striking result: the dispersion in strength is surprisingly narrow, within a factor of less than 2 around an average strength of 3 $`\mu `$G, in spite of a range of almost 4 orders of magnitude in the absolute radio luminosity of the galaxies. Why is the magnetic field strength so uniform at $`3\mu `$G for various types of galaxies with such a wide range of luminosity? Clearly there should be a physical reason for this strange fact.
Vallée (1994) investigated the relation between magnetic field strength and star formation activity by using 48 galaxies out of the FA sample, and found a correlation between magnetic field strength ($`B`$) and star formation efficiency (SFE) of the form $`B\propto \left(\mathrm{SFE}\right)^i`$ with $`i=0.13\pm 0.04`$. \[The same relation was found also for the star formation rate (SFR) and $`B`$.\] From this result, one may conclude that star formation activity does not strongly affect the strength of magnetic fields. However, it is also likely that the star formation rate is affected by magnetic fields. If we take the result of Vallée in this context, the observed correlation suggests that star formation activity depends quite sensitively on $`B`$, as SFE $`\propto B^q`$ with $`q=1/i\simeq 7.7\pm 2.6`$. Here we point out that, if this is the case, the surprisingly narrow dispersion in galactic magnetic fields is naturally explained by an observational selection effect. If the magnetic field strength in an object is significantly smaller than a few $`\mu `$G, the star formation activity becomes much lower than in typical spiral galaxies, and such an object cannot produce enough stars to be observed as a galaxy. On the other hand, if magnetic fields are stronger than a few $`\mu `$G, such objects would experience strong starbursts and all interstellar gas would be converted into stars. If the star formation time scale is shorter than the dynamical time scale for disk formation, they will become present-day spheroidal systems. Magnetic fields in elliptical galaxies are difficult to measure, since there are no relativistic electrons to illuminate the magnetic fields by synchrotron radiation. The reason why the observed dispersion of magnetic fields is so small can thus be understood as an observational selection effect: we cannot observe or measure the magnetic field strength when it is significantly different from the typically observed value of $`3\mu `$G.
Therefore the observed correlation between $`B`$ and SFE, as well as the surprisingly narrow dispersion of the measured magnetic field strengths, strongly suggests a sensitive dependence of star formation activity on magnetic field strength. This idea is further supported by the amorphous spiral galaxy M82, which has an abnormally large magnetic field of $`10\mu `$G in the sample of FA. M82 is in fact known as an archetypal starburst galaxy, and the strong field strength found by FA is caused by the very strong field ($`50\mu `$G) in the nuclear region of this galaxy, where intense star formation is underway (Klein, Wielebinski, & Morsi 1988; Vallée 1995a). Unfortunately our knowledge of the physics of star formation is too poor to make clear from a theoretical point of view why the SFE should depend so strongly on $`B`$, but such a dependence is not unreasonable. In fact, at least one effect is known by which magnetic fields help star formation: magnetic braking (e.g., McKee et al. 1993 for a review). In order for a protostellar gas cloud to collapse into stars, a significant amount of the initial angular momentum must be transported outward. Magnetic fields play an important role in this angular momentum loss through Alfvén waves launched into the ambient medium. Strongly magnetized objects with high gas density on large scales would then undergo magnetic braking of individual regions, with angular momentum losses carried away by outgoing Alfvén waves, ending in gas collapse and star formation on a large scale.
It should be noted that the above hypothesis does not contradict the well-known Schmidt law of star formation. Vallée (1995b, 1997) found a relation between magnetic field strength and interstellar gas density on the scale of a whole galaxy, $`B\propto n^k`$ with $`k=0.17\pm 0.03`$. By using this relation and the $`B`$-SFR relation, we find SFR $`\propto n^{k/i}`$ with $`k/i=1.3\pm 0.1`$. This relation agrees well (Vallée 1997) with some direct estimates of the relation between SFR and $`n`$ (e.g., Kennicutt 1989; Shore & Ferrini 1995; Niklas & Beck 1997). At the same time, these relations suggest that the thermal energy of the interstellar matter is not in equipartition with the magnetic field energy or cosmic ray energy (Kamaya 1996). It is possible that what is physically affected by magnetic fields is the gas density, and that the gas density then determines the SFE through the Schmidt law, producing the observed $`B`$-SFE relation.
In any case, the correlation between $`B`$ and SFE has actually been observed, and it is natural to expect that this relation holds also in high-redshift objects. \[The chance probability of observing such a correlation from an uncorrelated parent population is only 12% (Vallée 1994).\] Therefore the following analysis is valid unless some unknown process violates this empirical relation in high-redshift objects.
## 3. Magnetic Fields in Hierarchical Structure Formation
As discussed above, it is suggested that the spheroidal components of galaxies formed by starbursts induced by strong magnetic fields. In the following we consider magnetic field generation and the formation of spheroidal systems in the framework of standard cosmological structure formation in the CDM universe, and show that spheroidal systems form only in relatively massive objects at high redshifts. It has recently been argued that galactic magnetic fields are generated at the stage of collapse of protogalactic clouds (Kulsrud et al. 1997). Although there are other scenarios for the origin of galactic magnetic fields, such as earlier field generation on cosmological scales or dynamo amplification after galaxies form (see, e.g., Zweibel & Heiles 1997), we assume here that magnetic fields are generated during the collapse of protogalactic clouds. When a dark matter halo decouples from the expansion of the universe and collapses into a virialized object, the gravitational energy of the baryonic gas is converted into turbulent motions and thermal energy by the generated shocks. It can be shown that a very weak magnetic field ($`10^{-21}`$ G), which has been generated by thermoelectric currents before collapse (i.e., the battery mechanism), is amplified up to a strength nearly in equipartition with the turbulent energy (Kulsrud & Anderson 1992; Kulsrud et al. 1997). The time scale for equipartition is given by $`r/\upsilon `$, where $`r`$ is the length scale of the system and $`\upsilon `$ the turbulent velocity. (The essence of this result can be understood by a dimensional analysis. The equation of magnetic field generation is $`d𝐁/dt=\mathrm{rot}\left(𝐯\times 𝐁\right)`$, where $`𝐯`$ is the velocity field of the fluid, and when we estimate rot by the inverse of the system scale, it is clear that the field evolution time scale is given by $`r/\upsilon `$.) If we take the radius and three-dimensional velocity dispersion of a virialized halo as $`r`$ and $`\upsilon `$, it is straightforward to see that this time scale is given by the dynamical time scale of the halo. Then we can estimate the strength of the magnetic field from the turbulent energy density of the baryonic gas in a collapsed object, which is determined using the well-known spherical collapse model (e.g., Peebles 1980). For a dark halo with mass $`M_h`$ and formation redshift $`z`$, the turbulent energy density becomes $`ϵ_B\simeq \left(3\mathrm{\Omega }_BM_h\upsilon ^2/8\pi \mathrm{\Omega }_0r^3\right)=6.4\times 10^{-13}h^{8/3}M_{12}^{2/3}\left(1+z\right)^4\mathrm{\Omega }_B\mathrm{erg}\mathrm{cm}^{-3}`$, where $`h`$ is the Hubble constant normalized at 100 km/s/Mpc, $`M_{12}=M_h/\left(10^{12}M_{\odot }\right)`$, and $`\mathrm{\Omega }_B`$ and $`\mathrm{\Omega }_0`$ are the baryon and matter densities in units of the critical density of the universe. In the second expression above we have assumed the Einstein-de Sitter universe ($`\mathrm{\Omega }_0=1`$), but the extension to other cosmological models is easy. Following the theory of Kulsrud and Anderson (1992), we assume that one-sixth of the turbulent energy is converted into magnetic field energy, and then the equipartition magnetic field becomes $`B\simeq 1.6h^{4/3}M_{12}^{1/3}\left(1+z\right)^2\mathrm{\Omega }_B^{1/2}\mu `$G. The observational properties of disk galaxies, including our Galaxy, are well understood if they are considered to have formed at $`z`$ 0–1 (Mao & Mo 1998), and hence this theory of magnetic field generation gives roughly the correct strength for our Galaxy with $`M_h\simeq 10^{12}M_{\odot }`$.
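These scalings are simple enough to evaluate directly. The short sketch below (with fiducial values $`h=0.7`$ and $`\mathrm{\Omega }_B=0.05`$, which are illustrative assumptions rather than values adopted in this letter) computes the turbulent energy density and the corresponding equipartition field:

```python
import math

def turb_energy_density(M12, z, h=0.7, Omega_B=0.05):
    """Turbulent energy density [erg/cm^3] for an Einstein-de Sitter
    halo, using the scaling quoted in the text."""
    return 6.4e-13 * h**(8.0/3.0) * M12**(2.0/3.0) * (1.0 + z)**4 * Omega_B

def equipartition_B(M12, z, h=0.7, Omega_B=0.05):
    """Field strength [G] if one-sixth of the turbulent energy is
    converted into magnetic energy: B^2/(8 pi) = eps_B/6."""
    eps = turb_energy_density(M12, z, h, Omega_B)
    return math.sqrt(8.0 * math.pi * eps / 6.0)

# a Milky-Way-sized halo (M_h ~ 1e12 Msun) virializing at z = 1
# comes out at roughly a micro-gauss, of the right order:
print(equipartition_B(1.0, 1.0))   # ~9e-7 G
```

Since the field grows as $`M_h^{1/3}(1+z)^2`$, the same sketch gives tens of micro-gauss for massive haloes collapsing at $`z\simeq 5`$, the regime relevant for the starburst criterion below.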
## 4. Emergence of the Hubble Sequence
Spheroidal galaxies are then expected to form at high redshifts in massive dark matter halos, because the magnetic field becomes stronger with increasing mass and redshift, $`B\propto M_h^{1/3}\left(1+z\right)^2`$. In order to be more quantitative, we identify the star formation efficiency defined previously (in §2) with $`\nu =\left(t_{\mathrm{SF}}\right)^{-1}`$, where $`t_{\mathrm{SF}}`$ is the time scale on which interstellar gas is converted into stars. The star formation rate $`\dot{M}_{*}`$ in a galaxy is given by $`\dot{M}_{*}=M_{\mathrm{gas}}/t_{\mathrm{SF}}`$, where $`M_{\mathrm{gas}}`$ is the mass of interstellar gas in the galaxy. As mentioned earlier, the observed relation between $`B`$ and $`\nu `$ is $`\nu \propto B^q`$ with $`q\simeq 7.7\pm 2.6`$. We set the normalization of this relation by the SFE of our Galaxy. The star formation rate in our Galaxy is about a few $`M_{\odot }`$/yr and the mass of interstellar gas in the disk is about $`6\times 10^9M_{\odot }`$ at present, suggesting that $`t_{\mathrm{SF}}\simeq 2`$ Gyr at $`B\simeq 3\mu `$G (e.g., Binney & Tremaine 1987). We also normalize the strength of $`B`$ to 3 $`\mu `$G for a halo whose baryonic mass is $`6\times 10^{10}M_{\odot }`$ and whose virialization occurred at $`z=1`$, corresponding to our Galaxy. Then we can calculate the ratio of the baryonic dynamical time to $`t_{\mathrm{SF}}`$, $`\mathrm{\Gamma }\equiv t_{\mathrm{dyn}}/t_{\mathrm{SF}}`$, which can be considered as a criterion for spheroid formation; a minimal sketch of this calculation is given after this paragraph. In the calculation of $`t_{\mathrm{dyn}}`$, we use a baryon density $`\lambda ^{-3}`$ times higher than the value at virialization, considering the contraction of baryons due to cooling and dissipation, where $`\lambda \simeq 0.05`$ is the typical dimensionless angular momentum of dark haloes (Warren et al. 1992). If $`\mathrm{\Gamma }\gtrsim 1`$, spheroidal systems are expected to form.
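The sketch below assembles $`\mathrm{\Gamma }`$ from these ingredients. The virial overdensity of 178, the cosmological parameters, and the use of $`q=5`$ (the value adopted for Fig. 1) are illustrative assumptions; the normalizations follow the text.

```python
import math

G_CGS  = 6.674e-8              # gravitational constant [cm^3 g^-1 s^-2]
GYR    = 3.156e16              # seconds per Gyr
RHO_C0 = 1.878e-29 * 0.7**2    # critical density today for h = 0.7 [g/cm^3]
Q      = 5.0                   # SFE ~ B^q exponent used for Fig. 1
LAM    = 0.05                  # dimensionless spin parameter of haloes

def B_muG(M_baryon, z):
    """Field in micro-gauss, normalised to 3 muG for a 6e10 Msun
    baryonic mass virialised at z = 1, with B ~ M^(1/3) (1+z)^2."""
    return 3.0 * (M_baryon / 6e10)**(1.0/3.0) * ((1.0 + z) / 2.0)**2

def t_SF(M_baryon, z):
    """Gas consumption time in Gyr: 2 Gyr at B = 3 muG, scaling as B^-q."""
    return 2.0 * (B_muG(M_baryon, z) / 3.0)**(-Q)

def t_dyn(z, Omega_0=0.3, Omega_B=0.05):
    """Baryon dynamical time in Gyr after dissipative contraction by a
    factor lambda in radius (density increased by lambda^-3); a virial
    overdensity of ~178 is assumed for simplicity."""
    rho_vir = 178.0 * Omega_0 * RHO_C0 * (1.0 + z)**3
    rho_b = (Omega_B / Omega_0) * rho_vir / LAM**3
    return 1.0 / math.sqrt(G_CGS * rho_b) / GYR

def Gamma(M_baryon, z):
    """Spheroid formation criterion Gamma = t_dyn / t_SF."""
    return t_dyn(z) / t_SF(M_baryon, z)

print(Gamma(6e10, 1.0))   # Milky-Way normalisation point: Gamma << 1 (disk)
print(Gamma(1e11, 5.0))   # massive high-z object: Gamma >> 1 (starburst)
```

With these assumptions the Milky-Way point gives $`\mathrm{\Gamma }`$ of a few times $`10^2`$, while a $`10^{11}M_{\odot }`$ baryonic object at $`z=5`$ gives $`\mathrm{\Gamma }`$ of several hundred, reproducing the qualitative behaviour of the contours in Fig. 1.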
In Figure 1 we have plotted the contours of $`\mathrm{\Gamma }`$ = 1, 10, and 100 as thin solid lines in the plane of the baryonic mass of a collapsed object ($`M_{\mathrm{baryon}}`$) and its formation redshift, assuming $`q=5`$. (The observed mass of galaxies is considered to be mainly baryonic.) For reference, we have also plotted contours of the magnetic field strength in Fig. 2. We have used a flat, $`\mathrm{\Lambda }`$-dominated cosmological model, where $`\mathrm{\Lambda }`$ is the cosmological constant, with the standard cosmological parameters $`(h,\mathrm{\Omega }_0,\mathrm{\Omega }_\mathrm{\Lambda },\sigma _8)=(0.7,0.3,0.7,1)`$. This universe is consistent with various observations such as the ages of globular clusters, high redshift type Ia supernovae (Perlmutter et al. 1998), the cosmic microwave background (Bunn & White 1997), and the abundance of clusters of galaxies (Kitayama & Suto 1997). In the following we discuss the emergence of the Hubble sequence using Fig. 1. Objects lying above the thin solid lines are expected to form spheroidals, but we have to check how typical such objects are in the context of cosmological structure formation in the CDM universe. For this purpose we have plotted the mass scale of an $`n`$-$`\sigma `$ fluctuation, as defined in Blumenthal et al. (1984), by dashed lines. The lines are defined by $`n\sigma (M_h,z)=\delta _c`$, where $`\sigma (M_h,z)`$ is the root-mean-square density fluctuation predicted by linear theory in the CDM universe at mass scale $`M_h`$ and redshift $`z`$ (Peacock & Dodds 1994; Sugiyama 1995), and $`\delta _c\simeq 1.69`$ is the critical density contrast at which an object virializes (Peebles 1980). The three dashed lines correspond to $`n`$ = 1, 2, and 3, and these lines represent the typical mass scales of collapsed objects as a function of $`z`$. (For larger $`n`$, the cosmological abundance of objects is statistically suppressed as $`e^{-n^2/2}`$.) In the region where the dashed lines are above the solid lines, spheroidal galaxies can form as cosmologically typical objects. Figure 1 shows that objects with $`M_{\mathrm{baryon}}\gtrsim 10^9`$–$`10^{10}M_{\odot }`$ corresponding to $`2`$–$`3\sigma `$ fluctuations will form spheroidal systems, because $`\mathrm{\Gamma }\gtrsim 1`$, at high redshifts of $`z`$ 3–10 with $`B\simeq 50\mu `$G (see also Fig. 2). It is interesting to note that this magnetic field strength is roughly the same as that in the starbursting nucleus of M82, as mentioned in §2. Spheroid formation with $`M_{\mathrm{baryon}}\lesssim 10^9`$–$`10^{10}M_{\odot }`$ is inhibited because of the low $`\mathrm{\Gamma }`$ of cosmologically typical objects. On the other hand, if $`M_{\mathrm{baryon}}`$ is larger than $`10^{12}M_{\odot }`$, the formation of galaxies is inhibited by a cooling time ($`t_{\mathrm{cool}}`$) that is too long compared to the Hubble time at each redshift ($`t_H`$), as shown by the dotted lines, which are contours of $`t_{\mathrm{cool}}/t_H`$ (Blumenthal et al. 1984). Therefore, our model can explain the observed mass range ($`10^{10}`$–$`10^{12}M_{\odot }`$) and the old stellar populations seen in spheroidal galaxies. The region of spheroidal galaxy formation is schematically indicated in Figs. 1 and 2 by the thick solid line. After spheroidal galaxy formation, gas accretion onto some of these objects is possible at lower redshifts. Since such recently accreted gas is located below the thin solid lines in Figure 1, it results in the disk formation of spiral galaxies at $`z\lesssim 1`$. On the other hand, if recent gas accretion is negligible, such an object will be seen as a present-day elliptical galaxy.
Since the solid lines (constant $`\mathrm{\Gamma }`$) and the dashed lines ($`n`$-$`\sigma `$ lines) are approximately parallel in the region of spheroid formation, our scenario for the origin of the Hubble sequence predicts that elliptical galaxies should lie on a constant $`n`$-$`\sigma `$ density fluctuation with $`n`$ 2–3, and that with decreasing $`n`$ the type of galaxy becomes later along the Hubble sequence, i.e., from early to late spiral, and then to irregular galaxies at the $`1\sigma `$ fluctuation. In fact, this trend is exactly what is observed in galaxies (Blumenthal et al. 1984; Burstein et al. 1997), giving further support to our scenario. Most properties or relations observed in present-day galaxies, such as the Tully-Fisher and Faber-Jackson relations, and the existence of the fundamental plane for galaxies and their distribution on it, can be explained by the formation of early type galaxies from higher $`n`$-$`\sigma `$ fluctuations than later Hubble types (Burstein et al. 1997). The density-morphology relation, the correlation between galaxy types and number density, is also explained: higher $`n`$-$`\sigma `$ fluctuations occur preferentially in denser regions destined to become rich clusters, and hence one expects to find more ellipticals there, as is observed (Blumenthal et al. 1984). Higher $`n`$-$`\sigma `$ objects are expected to show stronger spatial clustering than lower ones (e.g., Mo & White 1996), and this is consistent with the stronger clustering observed for elliptical galaxies compared with late type galaxies (Loveday et al. 1995). Various observations suggest that giant galaxies formed at higher redshifts of $`z\gtrsim 3`$, and were then followed by a sequence of less and less massive galaxies forming at lower and lower redshifts, leading down to the formation of dwarfs recently ($`z\lesssim 0.5`$) (Fukugita, Hogan & Peebles 1996; Sawicki, Lin, & Yee 1997). The proposed scenario gives an explanation for this trend, sometimes termed “downsizing”, which otherwise seems opposite to the expectation in the CDM universe.
## 5. Discussion & Conclusions
For very low mass objects, star formation is strongly suppressed by the absence of a physical trigger when magnetic fields are weak, and this may give an explanation for the fact that the faint end of the luminosity function of galaxies is much flatter than expected from the mass function of dark haloes (Kauffmann et al. 1993; Baugh et al. 1996). Because of the strong dependence of star formation activity on magnetic fields, the stellar luminosity of galaxies would decrease quite rapidly with decreasing galaxy mass. It is then expected that, at the faint end of the galaxy luminosity function, the mass of galaxies changes relatively little compared with the change in luminosity. Since the number density of objects is determined by their mass, the faint end is expected to be relatively flat, as observed. Irregular or late-type galaxies are expected to show a wide range of star formation activity within a narrow range of galaxy masses, depending on the magnetic field strength in them, and in fact such a trend has been observed (Hunter 1997).
It is well known that galaxy interactions and mergers induce intensive starbursts, but the mechanism which triggers starbursts in galaxy interactions is still poorly understood. Mergers are followed by cloud collisions leading to high gas densities, strong turbulence and strong magnetic fields, ending in gas collapse and star formation in clouds. Therefore the hypothesis presented in this letter, i.e., a strong dependence of star formation activity on magnetic field strength, may also be important in merger-induced starbursts. Some fraction of elliptical galaxies may have formed by such a process, but we have argued that the observed trends of ellipticals, i.e., being massive and old, originate mainly from the properties of gravitationally bound objects in the standard theory of cosmological structure formation.
We have presented a new idea for the origin of the Hubble sequence of galaxies. The key to this scenario is a strong dependence of star formation activity on the average magnetic field in a galaxy, which is speculative from a theoretical point of view but motivated by several observational facts about magnetic fields in nearby galaxies. This single assumption provides a simple explanation for the surprisingly narrow dispersion in the magnetic field strengths observed in spiral galaxies, and also for most properties of galaxies seen along the Hubble sequence. To verify this “magnetic galaxy formation” scenario as the origin of the Hubble sequence, it is indispensable to confirm observationally that starbursts are actually triggered by stronger magnetic fields. Comprehensive measurements of galactic magnetic fields in larger samples of galaxies are necessary, especially for starburst galaxies both locally and at high redshifts.
The author would like to thank T. Kitayama, M. Shimizu, and an anonymous referee for useful comments and discussions.
NDA-FP-58
April 1999
# Running gauge coupling and quark-antiquark potential from dilatonic gravity
Shin’ichi NOJIRI<sup>1</sup><sup>1</sup>1e-mail: nojiri@cc.nda.ac.jp, snojiri@yukawa.kyoto-u.ac.jp and Sergei D. ODINTSOV<sup>2</sup><sup>2</sup>2e-mail: odintsov@mail.tomsknet.ru, odintsov@itp.uni-leipzig.de
Department of Mathematics and Physics
National Defence Academy, Hashirimizu Yokosuka 239, JAPAN
$`\mathrm{}`$ Tomsk Pedagogical University, 634041 Tomsk, RUSSIA
and NTZ, Inst.Theor.Phys., University of Leipzig, Augustusplatz 10/11, 04109 Leipzig, Germany
## Abstract
The running gauge coupling and quark-antiquark potential in $`d`$ dimensions are calculated from an explicit solution of $`d+1`$-dimensional dilatonic gravity. This background interpolates between the usual AdS space in the UV and flat space with a singular dilaton in the IR, and it realizes a two-boundary AdS/CFT correspondence. The behaviour of the running coupling and the potential is consistent with results following from IIB supergravity.
AdS/CFT correspondence predicts some properties of quantum gauge theory from higher-dimensional classical (super)gravity. In the more complicated versions of AdS/CFT correspondence there are two boundaries: UV and IR boundaries .
In the recent paper , we presented a two-boundary AdS/CFT correspondence in dilatonic gravity (in the presence not only of the metric tensor but also of the dilaton). The corresponding background interpolates between standard AdS at $`y=\mathrm{}`$ (UV) and a flat background with singular dilaton at $`y=0`$ (IR). The properties (in particular, conformal dimensions) of minimal or dilaton-coupled scalars around such a background have been investigated. A very similar background has recently been found in refs. (see also related refs.) for IIB supergravity. In these refs. the behaviour of the running coupling in gauge theory and the quark-antiquark potential (with possible confinement due to dilatonic effects) has been studied. In particular, the modification of these quantities due to dilaton effects is found explicitly. Note that the first example of a running gauge coupling via bulk/boundary correspondence was presented for Type 0 strings in refs.. The background with singular dilaton under discussion gives another example of this beautiful phenomenon.
In the present note we calculate the running gauge coupling and the quark-antiquark potential in $`d`$ dimensions working with our background realizing a two-boundary AdS/CFT correspondence in dilatonic gravity. We show that in some cases the results may be similar to those of refs. . Some clarifying remarks on the interpretation of the CFTs on the two boundaries as an IR/UV duality are also given.
We start with the following action in $`d+1`$ dimensions:
$$S=\frac{1}{16\pi G}\int d^{d+1}x\sqrt{-g}\left(R-\mathrm{\Lambda }-\alpha g^{\mu \nu }\partial _\mu \varphi \partial _\nu \varphi \right).$$
(1)
In the following, we assume $`\lambda ^2\equiv -\mathrm{\Lambda }`$ and $`\alpha `$ to be positive. In the AdS/CFT correspondence, dilatonic gravity with the above action describes $`𝒩=4`$ super Yang-Mills theory superconformally interacting with $`𝒩=4`$ conformal supergravity. The corresponding conformal anomaly with non-trivial dilaton contribution has been found in ref. via AdS/CFT correspondence.
The equations of motion can be solved and the solution is given by
$`ds^2`$ $`=`$ $`{\displaystyle \underset{\mu ,\nu =0}{\overset{d}{}}}g_{\mu \nu }dx^\mu dx^\nu =f(y)dy^2+y{\displaystyle \underset{i,j=0}{\overset{d-1}{}}}\eta _{ij}dx^idx^j`$ (2)
$`f`$ $`=`$ $`{\displaystyle \frac{d(d-1)}{4y^2\left(\lambda ^2+\frac{\alpha c^2}{y^d}\right)}}`$ (3)
$`\varphi `$ $`=`$ $`c{\displaystyle \int 𝑑y\sqrt{\frac{d(d-1)}{4y^{d+2}\left(\lambda ^2+\frac{\alpha c^2}{y^d}\right)}}}`$ (4)
$`=`$ $`\varphi _0+{\displaystyle \frac{1}{2}}\sqrt{{\displaystyle \frac{(d-1)}{d\alpha }}}\mathrm{ln}\left\{{\displaystyle \frac{2\alpha c^2}{\lambda ^2y^d}}+1\pm \sqrt{\left({\displaystyle \frac{2\alpha c^2}{\lambda ^2y^d}}+1\right)^2-1}\right\}.`$
Here $`\eta _{ij}`$ is the metric of the flat (Lorentzian) background and $`c`$ is a constant of integration. The boundary discussed in the AdS/CFT correspondence lies at $`y=\mathrm{}`$.
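As a quick consistency check of the solution, one can differentiate the closed form in (4) numerically and compare with the integrand; the parameter values in the sketch below are arbitrary sample choices, not values singled out in the text:

```python
import math

d, lam, alpha, c = 4, 1.0, 1.0, 0.5   # sample parameters (assumptions)

def integrand(y):
    """Integrand of eq. (4), i.e. |dphi/dy|."""
    return c * math.sqrt(d*(d-1) / (4.0 * y**(d+2) * (lam**2 + alpha*c**2/y**d)))

def phi_closed(y, sign=-1.0):
    """Closed form of eq. (4) with phi_0 = 0; sign picks the +/- branch."""
    u = 2.0*alpha*c**2 / (lam**2 * y**d) + 1.0
    return 0.5*math.sqrt((d-1.0)/(d*alpha))*math.log(u + sign*math.sqrt(u*u - 1.0))

# the '-' branch increases with y at the rate given by the integrand:
for y in (0.5, 1.0, 2.0):
    h = 1e-6
    deriv = (phi_closed(y+h) - phi_closed(y-h)) / (2.0*h)
    print(y, deriv, integrand(y))   # the last two columns should agree
```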
In string theory, the coupling on the boundary manifold, which would be the coupling of $`𝒩=4`$ $`SU(N)`$ super-Yang-Mills theory when $`d=4`$, is proportional to an exponential of the dilaton field $`\varphi `$. Although the relation of the present model to string theory is not always clear (the model under discussion may be considered as a low-energy string effective action), we assume that the gauge coupling has the following form (by analogy with refs.)
$$g=g^{*}\mathrm{e}^{2\beta \sqrt{\frac{\alpha }{d(d-1)}}\left(\varphi -\varphi _0\right)}.$$
(5)
Since the relation to string theory is not clear, we leave the constant coefficient $`\beta `$ in the exponent undetermined. The factor $`\sqrt{\frac{\alpha }{d(d-1)}}`$ and the constant shift $`\varphi _0`$ are introduced for convenience; they may always be absorbed into the redefinition of $`\beta `$ and $`g^{*}`$, respectively.
The coupling $`g`$ is a monotonically decreasing function and vanishes at $`y=0`$ when $`\beta >0`$ ($`\beta <0`$) and the $`+`$ ($`-`$) sign in (4) is chosen. On the other hand, the coupling is an increasing function and diverges to $`+\mathrm{}`$ at $`y=0`$ for $`\beta >0`$ ($`\beta <0`$) and the $`-`$ ($`+`$) sign. Therefore the $`\pm `$ sign would correspond to a strong/weak coupling duality. When $`y`$ is large, which corresponds to the asymptotically anti de Sitter region, $`g`$ behaves as
$$g=g^{*}\left(1\pm \frac{2\beta c\sqrt{\alpha }}{d\lambda y^{\frac{d}{2}}}+\left(\frac{2\beta }{d}-1\right)\frac{2\beta c^2\alpha }{d\lambda ^2y^d}+\cdots \right).$$
(6)
If we define a new coordinate $`U`$ by
$$y=U^2,$$
(7)
$`U`$ expresses the scale on the (boundary) $`d`$-dimensional Minkowski space, as can be seen from (2). Following the correspondence between long distances and high energy in the AdS/CFT scheme, $`U`$ can be regarded as the energy scale of the boundary field theory. Then from (6) we obtain the following renormalization group equation
$$\beta (U)\equiv U\frac{dg}{dU}=-d(g-g^{*})-\left(d-\frac{d^2}{2\beta }\right)\frac{(g-g^{*})^2}{g^{*}}+\cdots .$$
(8)
The leading behaviour is identical to that of refs. for $`d=4`$, and the next-to-leading behaviour of $`𝒪\left((g-g^{*})^2\right)`$ also becomes identical to theirs if we choose $`\beta =4`$. It may also be noted that Type 0 strings on an AdS background lead to a running gauge coupling as well .
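The expansion (8) is easy to verify numerically from the exact expressions (4), (5) and (7); the sample parameter values in the sketch below are assumptions made only for illustration:

```python
import math

d, lam, alpha, c, beta, gstar = 4, 1.0, 1.0, 0.1, 4.0, 1.0  # sample values

def phi(y):
    """Dilaton of eq. (4) with phi_0 = 0 (the '+' branch)."""
    u = 2.0*alpha*c**2/(lam**2 * y**d) + 1.0
    return 0.5*math.sqrt((d-1.0)/(d*alpha))*math.log(u + math.sqrt(u*u - 1.0))

def g(U):
    """Running coupling, eq. (5), with y = U^2 as in (7)."""
    return gstar*math.exp(2.0*beta*math.sqrt(alpha/(d*(d-1.0)))*phi(U**2))

for U in (3.0, 5.0, 10.0):
    h = 1e-5*U
    beta_num = U*(g(U+h) - g(U-h))/(2.0*h)                 # U dg/dU, numerically
    dg = g(U) - gstar
    beta_rg = -d*dg - (d - d**2/(2.0*beta))*dg**2/gstar    # eq. (8) to 2nd order
    print(U, beta_num, beta_rg)   # should agree ever more closely as U grows
```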
We now consider the static potential between “quark” and “anti-quark”. We evaluate the following Nambu-Goto action
$$S=\frac{1}{2\pi }\int d\tau d\sigma \sqrt{-\mathrm{det}\left(g_{\mu \nu }^s\partial _\alpha x^\mu \partial _\beta x^\nu \right)}.$$
(9)
with the “string” metric $`g_{\mu \nu }^s`$, which is obtained by multiplying the metric tensor in (2) by a dilaton function $`h(\varphi )`$. In particular, we choose $`h(\varphi )`$ as
$$h(\varphi )=\mathrm{e}^{2\gamma \sqrt{\frac{\alpha }{d(d-1)}}\left(\varphi -\varphi _0\right)}=1\pm \frac{2\gamma c\sqrt{\alpha }}{d\lambda y^{\frac{d}{2}}}+\cdots .$$
(10)
Here $`\gamma `$ is an undetermined constant, like $`\beta `$ in (5). In order to treat the general case, we allow $`\gamma \ne \beta `$. We consider the static configuration $`x^0=\tau `$, $`x^1\equiv x=\sigma `$, $`x^2=x^3=\mathrm{}=x^{d-1}=0`$ and $`y=y(x)`$. Substituting this configuration into (9), we find
$$S=\frac{T}{2\pi }\int dxh\left(\varphi (y)\right)y\sqrt{\frac{f(y)}{y}\left(\partial _xy\right)^2+1}.$$
(11)
Here $`T`$ is the length of the region of definition of $`\tau `$. The orbit of $`y`$ can be obtained by minimizing the action $`S`$, i.e. by solving the Euler-Lagrange equation $`\frac{\delta S}{\delta y}-\partial _x\left(\frac{\delta S}{\delta \left(\partial _xy\right)}\right)=0`$. The Euler-Lagrange equation tells us that
$$E_0=\frac{h\left(\varphi (y)\right)y}{\sqrt{\frac{f(y)}{y}\left(\partial _xy\right)^2+1}}$$
(12)
is a constant. If we assume that $`y`$ has a finite minimum $`y_0`$, where $`\partial _xy|_{y=y_0}=0`$, then $`E_0`$ is given by
$$E_0=h\left(\varphi (y_0)\right)y_0.$$
(13)
Introducing a parameter $`t`$, we parametrize $`y`$ by
$$y=y_0\mathrm{cosh}t.$$
(14)
Then using (3), (10), (12) and (14), we find
$`x`$ $`=`$ $`{\displaystyle \frac{y_0^{-\frac{1}{2}}}{A}}{\displaystyle \int _{-\mathrm{}}^t}𝑑t\mathrm{cosh}^{-\frac{3}{2}}t\left\{1\pm By_0^{-\frac{d}{2}}\mathrm{sinh}^2t\left(\mathrm{cosh}^{-2}t-\mathrm{cosh}^{-2-\frac{d}{2}}t\right)+𝒪(y_0^{-d})\right\}`$ (15)
$`A\equiv {\displaystyle \frac{2\lambda }{\sqrt{d(d-1)}}},B\equiv {\displaystyle \frac{2\gamma c\sqrt{\alpha }}{d\lambda }}.`$
Taking $`t+\mathrm{}`$, we find the distance $`L`$ between ”quark” and ”anti-quark” is given by
$`L`$ $`=`$ $`{\displaystyle \frac{Cy_0^{-\frac{1}{2}}}{A}}\pm {\displaystyle \frac{BS_dy_0^{-\frac{d+1}{2}}}{A}}+𝒪(y_0^{-\frac{2d+1}{2}})`$ (16)
$`C`$ $`\equiv `$ $`{\displaystyle \int _{-\mathrm{}}^{\mathrm{}}}𝑑t\mathrm{cosh}^{-\frac{3}{2}}t={\displaystyle \frac{2^{\frac{3}{2}}\mathrm{\Gamma }\left(\frac{3}{4}\right)^2}{\sqrt{\pi }}}`$
$`S_d`$ $`\equiv `$ $`{\displaystyle \int _{-\mathrm{}}^{\mathrm{}}}𝑑t\mathrm{cosh}^{-\frac{1}{2}}t\mathrm{sinh}^2t\left(1-\mathrm{cosh}^{-\frac{d}{2}}t\right).`$
In particular, $`S_4=C`$. Eq. (16) can be solved with respect to $`y_0`$, and we find
$$y_0=\left(\frac{C}{AL}\right)^2\left\{1\pm \frac{BS_d}{C}\left(\frac{AL}{C}\right)^d+𝒪\left(L^{2d}\right)\right\}.$$
(17)
Using (12), (14) and (16), we find the following expression for the action $`S`$
$`S`$ $`=`$ $`{\displaystyle \frac{T}{2\pi }}E(L)`$
$`E(L)`$ $`=`$ $`{\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑t{\displaystyle \frac{dx}{dt}}{\displaystyle \frac{h\left(\varphi (y(t))\right)^2y(t)^2}{h\left(\varphi (y_0)\right)^2y_0}}.`$ (18)
Here $`E(L)`$ expresses the total energy of the “quark”-“anti-quark” system. The energy $`E(L)`$ in (18), however, contains a divergence due to the self-energies of the infinitely heavy “quark” and “anti-quark”. The sum of their self-energies can be estimated by considering the configuration $`x^0=\tau `$, $`x^1=x^2=x^3=\mathrm{}=x^{d-1}=0`$ and $`y=y(\sigma )`$ (note that $`x^1`$ vanishes here), where the minimum of $`y`$ is again $`y_0`$. Using the parametrization (14) and identifying $`t`$ with $`\sigma `$ ($`t=\sigma `$), we find the following expression for the sum of the self-energies:
$`E_{\mathrm{self}}={\displaystyle \int _{-\mathrm{}}^{\mathrm{}}}𝑑th\left(\varphi (y(t))\right)y(t)\sqrt{{\displaystyle \frac{f\left(y(t)\right)\left(\partial _ty(t)\right)^2}{y}}}.`$ (19)
Then the finite potential between “quark” and “anti-quark” is given by
$`E_{q\overline{q}}(L)`$ $`\equiv `$ $`E(L)-E_{\mathrm{self}}`$
$`=`$ $`-\left({\displaystyle \frac{C}{AL}}\right)\left\{D_0\pm B\left(D_d+F_d+{\displaystyle \frac{S_dD_0}{C}}\right)\left({\displaystyle \frac{AL}{C}}\right)^d+𝒪(L^{2d})\right\}`$
$`D_d`$ $`\equiv `$ $`2{\displaystyle \int _0^{\mathrm{}}}𝑑t\mathrm{cosh}^{-\frac{d+1}{2}}t\mathrm{e}^{-t}`$
$`=`$ $`{\displaystyle \frac{2^{d-\frac{5}{2}}}{(d-3)!!\sqrt{\pi }}}\mathrm{\Gamma }\left({\displaystyle \frac{d-1}{4}}\right)^2-{\displaystyle \frac{4}{d-1}}`$
$`F_d`$ $`\equiv `$ $`{\displaystyle \int _{-\mathrm{}}^{\mathrm{}}}𝑑t\mathrm{sinh}^2t\mathrm{cosh}^{-\frac{1}{2}}t\left(1-\mathrm{cosh}^{-\frac{d}{2}}t\right).`$
In particular, since $`F_4=D_4=C`$, we obtain for $`d=4`$
$$E_{q\overline{q}}=-\stackrel{~}{L}^{-1}\left\{\frac{2^{\frac{3}{2}}}{\sqrt{\pi }}\mathrm{\Gamma }\left(\frac{3}{4}\right)^2+4\pm B\left(\frac{2^{\frac{1}{2}}}{\sqrt{\pi }}\mathrm{\Gamma }\left(\frac{3}{4}\right)^2+\frac{8}{3}\right)\stackrel{~}{L}^4+𝒪(\stackrel{~}{L}^8)\right\}.$$
(21)
Here
$$\stackrel{~}{L}\equiv \frac{AL}{C}.$$
(22)
The behavior in (21) is essentially identical to the results in , but there are some differences. The coefficient of the leading term differs by an additive constant $`4`$, which comes from the ambiguity in subtracting the self-energy<sup>3</sup><sup>3</sup>3Since $`y_0`$, the minimum of $`y`$, depends on the distance $`L`$ between “quark” and “anti-quark” as in (17), the self-energy (19) also depends on the distance $`L`$, which gives the ambiguity in the coefficient of the potential energy above. . In the second term there is an ambiguity coming from the finite renormalization, but by properly choosing the undetermined parameter $`\gamma `$ and the constant of integration $`c`$ in (3), we can reproduce the result of refs. in (21).
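The constants entering the potential can be checked numerically. The sketch below verifies the closed form of $`C`$ by direct quadrature, and confirms the leading relation $`L=Cy_0^{-1/2}/A`$ by integrating $`dx/dy`$ obtained from (12) in the pure-AdS limit ($`c=0`$, $`h=1`$); the values of $`d`$ and $`\lambda `$ are arbitrary sample choices:

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

d, lam = 4, 1.0                                  # sample values (assumptions)
A = 2.0*lam/math.sqrt(d*(d-1.0))

# closed form of C (below eq. (16)) versus direct quadrature
C_exact = 2**1.5 * gamma(0.75)**2 / math.sqrt(math.pi)
C_quad, _ = quad(lambda t: math.cosh(t)**(-1.5), 0, 50.0)
print(C_exact, 2.0*C_quad)                       # both ~ 2.3963

def L_of_y0(y0):
    """Quark separation in the pure-AdS limit (c = 0, h = 1), obtained by
    integrating dx/dy from eq. (12); the substitution y = y0 + s^2 removes
    the integrable endpoint singularity at y = y0."""
    f = lambda s: 4.0*y0 / (A*(y0 + s*s)**1.5 * math.sqrt(2.0*y0 + s*s))
    val, _ = quad(f, 0.0, math.inf)
    return val

for y0 in (1.0, 4.0):
    print(L_of_y0(y0), C_exact/(A*math.sqrt(y0)))  # should agree
```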
Let us try to understand better the background under consideration in relation to the AdS/CFT correspondence. We have two boundaries, at $`y=0`$ and $`y=\mathrm{}`$. The renormalization flow would then connect two conformal field theories. To be specific, we consider only the $`d=2`$ case in the following. We also choose the $`-`$ sign in $`\pm `$ in (4) and set $`\varphi _0=0`$, since $`\varphi _0`$ can be absorbed into the redefinition of $`G`$. We rescale the metric tensor by
$$g_{\mu \nu }\to \stackrel{~}{g}_{\mu \nu }=\mathrm{e}^{-2\sqrt{2\alpha }\varphi }g_{\mu \nu }.$$
(23)
The redefined metric behaves for $`y\to \mathrm{}`$ as
$$d\stackrel{~}{s}^2\equiv \stackrel{~}{g}_{\mu \nu }dx^\mu dx^\nu =\frac{1}{2\lambda ^2y^2}dy^2+y\underset{i,j=0}{\overset{d-1}{}}\eta _{ij}dx^idx^j.$$
(24)
If we change the coordinate by $`y=w^{-1}`$, we obtain the standard metric on anti de Sitter space. On the other hand, the redefined metric behaves for $`y\to 0`$ as
$$d\stackrel{~}{s}^2=\frac{4\alpha c^2}{\lambda ^2}\left\{\frac{1}{2\alpha c^2y^2}dy^2+y^{-1}\underset{i,j=0}{\overset{d-1}{}}\eta _{ij}dx^idx^j\right\}.$$
(25)
which is the metric of anti de Sitter space again. Under the redefinition (23), the action (1) is rewritten as follows:
$$S=\frac{1}{16\pi G}\int d^{d+1}x\sqrt{-\stackrel{~}{g}}\mathrm{e}^{\sqrt{2\alpha }\varphi }\left(\stackrel{~}{R}-\mathrm{\Lambda }\mathrm{e}^{2\sqrt{2\alpha }\varphi }+3\alpha \stackrel{~}{g}^{\mu \nu }\partial _\mu \varphi \partial _\nu \varphi \right).$$
(26)
Therefore the effective Newton constant $`\stackrel{~}{G}`$ and the effective cosmological constant $`\stackrel{~}{\mathrm{\Lambda }}`$ are given by
$$\stackrel{~}{G}=\mathrm{e}^{-\sqrt{2\alpha }\varphi }G,\stackrel{~}{\mathrm{\Lambda }}=\mathrm{e}^{2\sqrt{2\alpha }\varphi }\mathrm{\Lambda }.$$
(27)
In the usual AdS/CFT correspondence, where the dilaton is constant, the central charge $`c`$ of the conformal field theory on the boundary is given by
$$c=\frac{3l}{2G}.$$
(28)
Here the length scale $`l`$ is defined by $`\mathrm{\Lambda }=-\frac{d(d-1)}{l^2}`$. Similarly, defining $`\stackrel{~}{l}`$ by $`\stackrel{~}{\mathrm{\Lambda }}=-\frac{d(d-1)}{\stackrel{~}{l}^2}`$, we can define the effective central charge by
$$\stackrel{~}{c}\equiv \frac{3\stackrel{~}{l}}{2\stackrel{~}{G}}=\frac{3l}{2G}.$$
(29)
Note that $`\stackrel{~}{c}`$ is constant everywhere, which tells us that the renormalization flow connects conformal field theories with the same central charge. Therefore the conformal field theories on the two boundaries would be related by an IR/UV duality (for a recent related discussion of RG flow see ref.).
Finally, let us note that one can easily generalize the present study to more general backgrounds containing other fields, like gauge fields, antisymmetric tensor fields, etc.
## Acknowledgements

The work by SDO has been partially supported by RFBR project N99-02-16617, by the Graduate College Quantum Field Theory at Leipzig University, and by the Saxonian Ministry of Science and Arts.
# First Results from Dark Matter Search Experiment in the Nokogiriyama Underground Cell
## I Introduction
There is a good deal of observational evidence that a large fraction of the matter in the Universe exists in the form of non-baryonic particle dark matter. The supersymmetric neutralino is one of the most plausible candidates for such an exotic dark matter particle. Various experimental efforts are being made aiming at the detection of low-energy nuclear recoils caused by elastic scattering of neutralinos off nuclei.
Conventional detectors such as semiconductor detectors or scintillators generally have a quenching factor less than unity. Here the quenching factor is defined as the ratio of the energy detection efficiency for a nuclear recoil to that for an electron. On the other hand, because a bolometer is sensitive to the whole energy deposited in the absorber, its quenching factor should be unity in principle. A quenching factor close to unity has indeed been measured by the Milan group.
We have been developing bolometers with lithium fluoride absorbers. Fluorine is considered to have a large cross section for elastic scattering of an axially coupled neutralino off the nucleus compared with other nuclei. Recently we successfully constructed a bolometer array with a total mass of 168 g and installed it in the Nokogiriyama underground cell at a depth of 15 m w.e.
In this paper we report the first results from the experiment performed in the Nokogiriyama underground cell using this bolometer array.
## II Experimental Set-up and Measurement
The bolometer array used in this work contains eight 21 g LiF bolometers. A schematic drawing of the bolometer array is shown in Fig. 1. Neutron-transmutation-doped (NTD) germanium thermistors, with similar temperature dependences of the resistance, are attached to the crystals. The thermistor senses the small temperature rise of the absorber crystal caused by a neutralino-nucleus scattering. Each crystal is placed on four copper posts and thermally insulated by Kapton sheets. Moderate thermal anchoring of the crystal to the copper holder, at a temperature of 10 mK, is realized by an oxygen-free copper (OFC) ribbon. The lithium fluoride crystals were checked with a low-background Ge spectrometer prior to the construction of the bolometer. The concentration of radioactive contamination is less than 0.2 ppb for U, 1 ppb for Th, and 2 ppm for K. The bolometer array is mounted on the mixing chamber of a dilution refrigerator, which is mostly made of low-radioactivity materials radio-assayed in advance with a low-background Ge spectrometer.
Each thermistor is biased through a 100 M$`\mathrm{\Omega }`$ load resistor. The voltage change across the thermistor is fed into an eight-channel source-follower circuit placed at the 4 K stage, which includes a low-noise junction field-effect transistor (J-FET), a Hitachi 2SK163. Since the J-FET does not work at this low temperature, it is connected to the printed circuit board with thin stainless-steel tubes and manganin wires so as to be thermally isolated from the circuit board at 4 K, and the temperature of the FET is maintained above 100 K by its own heat dissipation.
The signal from the source-follower circuit is in turn amplified by an eight-channel voltage amplifier placed just above the refrigerator. The output of the voltage amplifier is fed into a double-pole low-pass filter with a cut-off frequency of 226 Hz and then into a 16-bit waveform digitizer, which records the pulse shape of the signal for off-line analysis.
The passive radiation shielding consists of a 10 cm-thick oxygen-free high-conductivity copper layer, a 15 cm-thick lead layer, a 1 g cm<sup>-2</sup>-thick boric acid layer and a 20 cm-thick polyethylene layer. The latter two layers act as a neutron shield. In order to reject muon-induced background we employ a veto system consisting of 2 cm-thick plastic scintillators.
The constructed detector system is installed in the Nokogiriyama underground cell, which is located about 100 km south of Tokyo and is relatively easy to access. The depth of the overburden of sand is inferred to be about 15 m w.e. In this work six bolometers of the array are used and energy spectra are measured for about ten days; the other two bolometers had problems in the cooling procedure.
Since the detector is enclosed in a cryogenic vacuum can during the measurements, it is impossible to place a gamma-ray source close to the detector for energy calibration. The energy calibration during the measurements is therefore performed with 662 keV gamma-rays from a <sup>137</sup>Cs source and 1333 keV and 1173 keV gamma-rays from a <sup>60</sup>Co source, placed outside the helium dewar of the dilution refrigerator and inside the radiation shielding. Furthermore, the sharp peak at 4.78 MeV due to the neutron-capture reaction on <sup>6</sup>Li observed in the background spectrum is also used for the energy calibration. Fig. 2 shows one of the obtained energy-calibration plots. Linearity of the six bolometers up to 5 MeV is confirmed. It should be noted that linearity down to 60 keV was confirmed prior to this measurement using gamma-rays from an <sup>241</sup>Am source set inside the cryostat.
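The calibration itself amounts to a simple linear fit of pulse height against the known line energies; a sketch is given below, where the pulse-height values are invented placeholders rather than measured data:

```python
import numpy as np

# known calibration lines [keV]: 137Cs, the two 60Co lines, and the
# 4.78 MeV peak from neutron capture on 6Li
E_keV = np.array([662.0, 1173.0, 1333.0, 4780.0])
pulse = np.array([0.33, 0.59, 0.67, 2.41])   # placeholder pulse heights [V]

gain, offset = np.polyfit(pulse, E_keV, 1)   # linear calibration E = gain*V + offset
resid = E_keV - (gain*pulse + offset)
print(gain, offset, resid)                   # small residuals confirm linearity
```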
## III Energy Spectra and Dark Matter Limits
Fig. 3 shows the energy spectra obtained with the six bolometers during the ten days. The bump in the low-energy region is considered to be due to microphonics caused by a helium liquefier which recondenses evaporated helium gas from the dewar. While similar spectra are obtained for four bolometers (D3, D5, D6, and D8), the spectra of the other two bolometers (D1 and D4) are affected by microphonics below 30 to 40 keV because of their low detector gains.
Comparing the measured energy spectrum with the expected recoil spectrum, exclusion limits on the cross section for elastic neutralino scattering off the nucleus can be extracted. The calculation is performed in the same manner as in Ref. . The theoretical recoil spectrum is calculated assuming a Maxwellian dark-matter velocity distribution with an rms velocity of 230 km/s, and is then folded with the measured energy resolution and the nuclear form factor. We also assume a local halo density of the neutralino of 0.3 GeV/cm<sup>3</sup>. The spin factors calculated assuming an odd-group nuclear shell model are 0.75 for <sup>19</sup>F and 0.417 for <sup>7</sup>Li. Since the detector responses of the six bolometers are not the same, the upper limit on the cross section is evaluated independently from the spectrum of each detector. For a given neutralino mass the lowest value of the cross section is taken as the combined limit from the results of the six detectors.
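A minimal sketch of the shape of the expected recoil spectrum is given below. It implements only the simplest Maxwellian-halo rate (no Earth velocity, no escape-velocity cutoff, unit form factor, arbitrary normalisation), so it illustrates the exponential form of the signal rather than reproducing the full analysis; the target and neutralino masses are example values:

```python
import numpy as np

V_RMS = 230.0e5                       # rms halo velocity [cm/s], as in the text
V0 = V_RMS * np.sqrt(2.0/3.0)         # most probable speed of the Maxwellian
C_LIGHT = 3.0e10                      # [cm/s]

def drde(E_keV, m_chi_GeV, m_N_GeV):
    """Differential rate vs. recoil energy, in arbitrary units:
    dR/dE ~ exp(-E/(E0*r)), with E0 = m_chi*v0^2/2 and
    r = 4*m_chi*m_N/(m_chi + m_N)^2 the kinematic factor."""
    E0_keV = 0.5 * m_chi_GeV * (V0/C_LIGHT)**2 * 1.0e6   # GeV -> keV
    r = 4.0*m_chi_GeV*m_N_GeV / (m_chi_GeV + m_N_GeV)**2
    return np.exp(-E_keV / (E0_keV * r))

# 19F target (~17.7 GeV) and a 20 GeV neutralino: the spectrum falls
# off on a scale of only a few keV, which is why a low threshold matters
E = np.linspace(0.0, 20.0, 5)
print(drde(E, 20.0, 17.7))
```

For these example masses the characteristic slope $`E_0r`$ is only about 4 keV, which illustrates why the low energy threshold of the bolometer is decisive for light neutralinos.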
The calculated exclusion limits for the spin-dependent interaction are given in Fig. 4. For comparison, the exclusion limits derived from the data of other experiments at deep underground sites and the scatter plots predicted by minimal supersymmetric theories are also shown. Although the other experiments except for the Osaka experiment are performed at deep underground laboratories, our experiment gives comparable limits for light neutralinos. This is owing to the large cross section for the spin-dependent interaction of <sup>19</sup>F and the low energy threshold of the bolometer. The sensitivity to neutralinos with a mass below 5 GeV is improved by this work.
## IV Prospects
Compton-scattered gamma-rays from the aperture of the shielding and gamma-rays produced through interactions of cosmic-ray muons within the shielding materials are considered the major background sources. In order to reduce the muon-correlated background, the veto efficiency must be improved. The present incompleteness of the veto is due to the penetrations for vacuum tubes and the tube of the helium liquefier. Increasing the coverage of the plastic scintillators will improve the veto efficiency up to 98%. Against the Compton-scattered gamma-ray background, an internal lead shield with a thickness of 20 mm surrounding the lithium fluoride bolometer array will be installed. The shield is made of over-200-year-old low-activity lead with a <sup>210</sup>Pb concentration of less than 0.05 pCi/g. The Compton-scattered gamma-rays can be reduced by two orders of magnitude by this internal shield. Since lead fluorescence X-rays are produced mainly by muon interactions, their contribution can be ignored if muons are sufficiently vetoed. If these improvements are realized, the sensitivity of this experiment will be improved by more than an order of magnitude even at this shallow depth.
The detector system will then be installed in an underground facility of sufficient depth, where the cosmic-muon-induced background is expected to be negligible. Long-term measurements at a deep underground site will bring the sensitivity to the spin-dependent interaction below the level predicted by supersymmetric theory.
## Acknowledgments
We would like to thank Prof. Komura for providing us with the low-radioactivity old lead. This research is supported by a Grant-in-Aid for COE Research from the Japanese Ministry of Education, Science, Sports and Culture. W.O. is grateful to the Special Postdoctoral Researchers Program for support of this research.
# Quantum soliton generation using an interferometer
## Abstract
For the first time a method for realizing macroscopic quantum optical solitons is presented. Simultaneous photon-number and momentum squeezing is predicted for soliton propagation in an interferometer. Extraction of soliton pulses closer to true quantum solitons than their coherent counterparts from mode-locked lasers is possible. Moreover, it is a general method for reducing photon-number fluctuations below the shot-noise level for non-soliton pulses as well. It is anticipated that similar reductions in particle fluctuations could occur for other forms of interfering bosonic fields whenever self-interaction nonlinearities exist, for example in interacting ultracold atoms.
Quantum solitons are energy eigenstates, or photon-number states, of optical systems. The particle-number fluctuations of these quantum states are zero, and this property is preserved by the nonlinear system. A quantum soliton is a fundamental state in nature, capable of allowing information bits to propagate arbitrarily long distances in the presence of dispersion and nonlinearity. Even approximate forms of such an idealised quantum object have been difficult to observe.
Mode-locked lasers do not produce quantum solitons but instead produce a Poissonian distribution of number states . Solitons excited by such laser pulses have initial fluctuations in the four soliton parameters of photon number, momentum, position and phase. Neither are they minimum-uncertainty states. Position fluctuations grow quadratically for a freely propagating fundamental soliton, due to the linear dispersion acting on the initial momentum fluctuations. Similarly, phase noise grows due to the initial fluctuations in its conjugate variable, photon number, acted upon by the nonlinearity. Typically the initial fluctuations in position and phase become insignificant compared to the growth due to quantum diffusion.
In this Letter quantum soliton generation using an approach based on the interference of optical fields is proposed. The idea that one can manipulate the internal quantum-noise structure of propagating, initially coherent solitons to produce more than 11 dB of photon-number squeezing using an asymmetric Sagnac loop was first put forward by the author recently . That followed surprising results on the interference of solitonic fields in nonlinear optical loop mirrors producing more than 15 dB of excess noise . Here it is shown that in the easily accessible regime of macroscopic photon numbers of order $`10^8`$–$`10^9`$, typical of picosecond and subpicosecond laser experiments, it is possible to produce soliton pulses with a greater than 10 dB reduction in photon-number fluctuations. In addition, momentum fluctuations can be reduced to more than 6 dB below those of a coherent-state pulse. Initial experiments have recently observed 3.9 dB (6.0 dB inferred from the 79% detection efficiency) photon-number squeezing at room temperature (5.7 dB has independently been observed ) using the method disclosed earlier and discussed here in more detail.
Any scheme which involves interference of bosonic fields (differing in either direction or polarization), where at least one has undergone evolution according to an effective ($`1+1`$)D nonlinear Schrödinger equation, could exhibit similar behaviour. This includes waveguided atomic solitons , spatial optical solitons for nonlinearities that depend on the particle density flux , and optical pulses in cascaded quadratic media or phase-mismatched $`\chi ^{(2)}`$ simulton interactions . Results are presented for an ideal Mach-Zehnder interferometer, since the two counterpropagating fields of a Sagnac fiber loop are assumed to have propagated in two independent waveguides. It is important that the system is described using a dispersive quantum field theory, in order to contain the essential physics and to make predictions for current optical-pulse experiments, which use pulse durations of $`70\text{fs}<t_0<2\text{ps}`$. For optical systems the nonlinearity and dispersion can easily be turned on or off in one arm. For systems containing real massive interacting particles the loop results are more appropriate. Optical experiments were performed using Sagnac loops to reduce low-frequency noise. Importantly, the scheme is not critically sensitive to exact coupling ratios, powers or fibre lengths, although optimization is required to go beyond the 10 dB limit. The interference of optical fields has several distinct advantages over direct spectral filtering, including greater noise reduction, smooth output pulse envelopes, and simultaneous photon-number/momentum squeezing. The model investigated to demonstrate the idea, the nonlinear Schrödinger equation, is ubiquitous in the physics of dispersive self-interacting fields and describes perturbations in a wide variety of physical systems. It is anticipated that particle-number fluctuations in more general systems, such as those described by a quantum Ginzburg-Landau equation, could exhibit behaviour similar to the idealized solitons discussed in this Letter. There are strong reasons to believe this is the case, especially for the weak damping which occurs in optical fibers in the $`1.5\mu `$m regime.
The nonlinear Schrödinger equation has been used in various forms for the study of Bose-Einstein condensation , waveguided ultracold atoms and propagating coherent quantum optical solitons , where the quantum nonlinear Schrödinger equation (QNLSE) governs the dynamics of the photon-flux amplitude. The Raman-modified stochastic nonlinear Schrödinger equation for the normalised photon-flux fields $`\{\varphi (\zeta ,\tau ),\varphi ^{}(\zeta ,\tau )\}`$ in the positive-P representation is given by
$`{\displaystyle \frac{\partial \mathrm{ln}\varphi }{\partial \zeta }}`$ $`=`$ $`-{\displaystyle \frac{i}{2}}\left(1\pm {\displaystyle \frac{\partial ^2}{\partial \tau ^2}}\right)+if\varphi ^{}\varphi +\sqrt{i}\mathrm{\Gamma }_e`$ (1)
$`+`$ $`i{\displaystyle \int _{-\mathrm{}}^\tau }𝑑\tau ^{}h(\tau -\tau ^{})\varphi ^{}(\tau ^{})\varphi (\tau ^{})+i\mathrm{\Gamma }_v,`$ (2)
where the length and time variables $`(\zeta ,\tau )`$ in the frame comoving at speed $`\omega ^{}`$ (the group velocity at the carrier frequency) relative to the laboratory frame $`(x,t)`$ are $`\tau =(t-x/\omega ^{})/t_0`$, $`\zeta =x/x_0`$, $`x_0=t_0^2/|k^{\prime \prime }|`$. For this equation and its Hermitian conjugate for $`\varphi ^{}`$, the characteristic time scale $`t_0`$ is chosen to be the pulse width, and the soliton period is $`\pi /2`$ times the dispersion length $`x_0`$ determined by $`t_0`$ and the second-order dispersion $`k^{\prime \prime }`$. The quantum noise from the electronic nonlinearity, $`\mathrm{\Gamma }_e`$, is a real delta-correlated Gaussian noise with variance given by the product of the electronic fraction $`f`$ (the ideal QNLSE has $`f=1`$, $`h(\tau )=\mathrm{\Gamma }_v=0`$) and the inverse photon-number scale $`1/\overline{n}=\chi t_0/(|k^{\prime \prime }|\omega ^{\prime 2})`$. Silica fiber Raman gain has a peak near $`13`$ THz, and $`f=0.81`$ for the Raman inhomogeneous model parameters used in evaluating the response function $`h(\tau )`$ and noise $`\mathrm{\Gamma }_v`$ corresponding to the Raman gain curve in Reference . Quantum field propagation is performed numerically using techniques discussed elsewhere . All simulations without Raman used $`\overline{n}=10^8`$ with averaging over $`10^5`$ trajectories for the positive-P representation. Error bars in the plots represent the estimated combined sampling and step-size error.
To illustrate the interference at the output beamsplitter of solitonic fields with unequal amplitudes, consider the interference term of the transmitted photon spectral flux, which is proportional to
$$\varphi _1^{}(\omega )\varphi _2(\omega )+\varphi _2^{}(\omega )\varphi _1(\omega ).$$
(3)
The interference term above is the same as that obtained from a generalised quadrature-phase operator measurement in homodyne detection, where the local oscillator (weak field) can have nonclassical statistics . One can consider the beamsplitter of a loop to act first as a state-preparation device and later, combined with the weak field and photon detector, as a measurement apparatus. It is easy to show that placing a phase shifter in the weak-field arm can be used to switch from sub-Poissonian to super-Poissonian statistics, in analogy with changing a quadrature-phase measurement from the squeezed quadrature to the anti-squeezed quadrature. Since the two fields $`\varphi _1`$ and $`\varphi _2`$ in the two arms of the interferometer typically have different spectral distributions, due to the different initial amplitudes, there exists a spectral filtering mechanism. Importantly, it has been shown previously that the internal quantum-noise structures of the two fields will be quite different . These two effects combined will alter the quantum statistics of the resultant field. In addition, the interferometer is obviously sensitive to pulse chirp, a frequency-dependent phase shift, induced by group-velocity dispersion, which is important for fiber lengths of a few soliton periods as discussed here. These aspects are not present in any single-mode description, such as that used in Reference , where single-polarization and nonlinear-polarization-rotation interferometric configurations were considered with one arm as free space. This paper goes beyond the one-arm configuration and shows that noise reduction is possible even for solitonic fields in both arms.
While a heuristic analysis based on the energy input-output curve for Sagnac loops can sometimes estimate whether the output photon-number noise is expected to be below shot noise, it is generally not reliable. It is suggested that an appropriate method for large photon numbers, which does not require the more sophisticated techniques used with the positive-P representation, is the truncated Wigner representation, which for the ideal QNLSE with coherent-state inputs involves solving only the classical equations augmented with Gaussian noise on the initial conditions . The classical nonlinear phase shift and dispersion in each arm of the loop, which have long been known to support effective soliton switching and reduction of dispersive waves , lead to a characteristic input-output curve for the highly asymmetric (90:10) and nearly balanced (60:40) cases. The transmitted pulse photon number (scaled by $`\overline{n}`$) versus the input amplitude $`N`$, with $`\varphi (0,\tau )=N\text{sech}(\tau )`$ inputs, is given in Fig. 1 for the 90:10 case at $`\zeta =\pi ,2\pi `$ and the 60:40 case at $`\zeta =2\pi `$. A propagation distance longer than $`\zeta =\pi `$ was chosen for the 60:40 case so that the input-output curve contains at least one turning point. One can see that after propagating 2 soliton periods in the 90:10 case, the slope of the input-output curve is positive, leading to excess noise at the output for all inputs except near the turning points $`N=1.35,1.5,1.85`$, where a saturation effect might be expected. The double-dip structure in Fig. 2, which has been observed experimentally , corresponds closely to these classical turning points. Slightly beyond the classical turning points a negative slope represents a region stable against changes in the input state. In the 60:40 case, the input-output curve in Fig. 1 suggests that significant squeezing would occur near $`N=1.62`$, where the slope changes sign. The quantum-field-theoretic results discussed next do not predict noise reduction for $`N=1.62`$, $`\zeta =2\pi `$; in sharp contrast to the simplified classical input-output picture, the quantum theory actually predicts 15 dB of excess noise in this case, which the truncated Wigner theory also predicts. A simple explanation for this disagreement lies in the difference between a loop and an interferometer with only one nonlinear pathway. In the former situation quantum theory allows quantum phase diffusion to develop in both arms independently, whereas the classical argument would have the input noise appear, reduced in strength, in both arms. Clearly this is exacerbated in a 60:40 loop compared to a 90:10 loop. The heuristic argument is also expected to disagree with quantum theory for a 90:10 loop, but with a much reduced error compared to the 60:40 case. Quantum field-theoretic results for the 90:10 case will now be discussed in more detail.
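A minimal sketch of this truncated-Wigner recipe for a 90:10 loop is given below: classical NLSE trajectories with half a photon of vacuum noise per mode on the inputs, both arms propagated independently and then recombined. The grid, step size, trajectory count and beamsplitter phase conventions are illustrative assumptions, not the parameters used for the figures:

```python
import numpy as np

nbar, N, T = 1e8, 1.5, 0.9                  # photon scale, amplitude, coupler
M, width, zeta, dz = 256, 20.0, np.pi, 2e-3 # grid points, window, length, step
dt = width / M
tau = (np.arange(M) - M/2) * dt
omega = 2*np.pi*np.fft.fftfreq(M, d=dt)
disp = np.exp(-0.5j*omega**2*dz)            # exact linear step in Fourier space

def propagate(phi):
    """Split-step Fourier solution of dphi/dz = (i/2) phi_tt + i|phi|^2 phi."""
    for _ in range(int(zeta/dz)):
        phi = np.fft.ifft(disp*np.fft.fft(phi))
        phi = phi*np.exp(1j*np.abs(phi)**2*dz)
    return phi

ns = []
for _ in range(300):                        # more trajectories sharpen the estimate
    vac = (np.random.randn(2, M) + 1j*np.random.randn(2, M))/np.sqrt(4*nbar*dt)
    pin = N/np.cosh(tau)                    # coherent sech input
    a = np.sqrt(T)*pin + vac[0]             # strong arm plus its vacuum noise
    b = 1j*np.sqrt(1-T)*pin + vac[1]        # weak arm plus its vacuum noise
    a, b = propagate(a), propagate(b)
    out = np.sqrt(T)*a + 1j*np.sqrt(1-T)*b  # recombined ('transmitted') port
    ns.append(nbar*np.sum(np.abs(out)**2)*dt - M/2)  # Wigner symmetric ordering

ns = np.array(ns)
print(np.var(ns)/np.mean(ns))               # Fano factor: < 1 indicates squeezing
```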
For a fiber loop with a beamsplitter transmission of 90%, large photon-number squeezing in the transmitted field (i.e., the ”dark” port for a 50:50 beamsplitter arrangement) is predicted. Large noise reduction occurs over a wide range of coupling ratios as well. Although the optimal parameters for the largest reduction in photon-number fluctuations are still under investigation, it is predicted that $`11\pm 1`$ dB squeezing at $`\zeta =3`$ is possible for coherent $`N\text{sech}(\tau )`$ input pulses in the absence of Raman noise for $`N=1.5`$. Obviously, smaller-energy pulses can be used in combination with longer propagation distances. In the $`N=1.5`$ case, the lower-pulse-energy arm of the Sagnac loop contains only dispersive radiation, while the other arm contains a solitonic field (an emergent soliton plus dispersive radiation) whose internal noise structure was recently described by us . As discussed earlier, the nonlinearity in the lower-energy arm is not required to observe noise reduction, but must be included to correctly predict the noise levels expected from a loop.
The predicted variation of the photon-number squeezing in dB versus the input pulse amplitude parameter $`N`$ with $`\varphi (0,\tau )=N\text{sech}(\tau )`$ coherent inputs after propagating in a Sagnac loop of length $`\zeta =\pi `$ using the ideal QNLSE is given in Fig. 2. The influence of the Raman effect for $`t_0=0.1`$ps at room temperature (shown in Fig. 3) was determined to be not large in this case even though the input pulse to the fiber had more than twice the energy of a fundamental soliton in the case of $`N=1.5`$. The photon-number fluctuations are however still increased by the Raman effect to $`8.3\pm 0.4`$dB below shot-noise at $`\zeta =\pi `$ for $`N=1.5`$ including 0.1dB/km losses. There is a strong similarity between variation with energy for a fixed distance and variation with distance for a fixed input energy since the nonlinearity allows soliton pulses of larger(smaller) energy to experience similar effects over shorter(longer) distances. Both in Fig. 2 and Fig. 3 the loop output exhibits excess noise, which was also predicted and observed for direct spectral filtering, and is clearly a general feature in nonlinear systems. After $`N=1.5`$, the low energy pulse switches from dispersive radiation to a solitonic field with an internal noise structure whose spectral correlations can lead to excess noise or squeezing depending on the bandwidth of any filtering mechanism . The noise reduction predicted for a one soliton period fiber is also given to demonstrate the possibility of significant squeezing in short fibers provided that the pulse launched into the fundamental propagating mode of the fiber matches the initial conditions assumed here. The variation for $`\zeta =\pi /2`$ appears similar to $`\zeta =\pi `$ but with higher energy required as expected for the nonlinear fiber. For comparison a much longer propagation distance of 16 soliton periods using $`N^2=10/9`$ is given in Fig. 3 so that a fundamental soliton propagates in one arm and the weak pulse experiences free propagation (non-loop case). While solitons are not necessary the squeezing predicted for pulses in the normal dispersion regime is less and at higher energy than soliton pulses as expected. In this case the pulse quickly temporally broadens from the group velocity dispersion while the pulse spectrum broadens initially and then reaches an equilibrium even for $`N>1`$. This is in contrast to the breathing in the anomalous regime. In Fig. 3 the case $`N=3`$ is given without Raman and reaches $`6.4\pm 0.1`$dB below shot noise.
In summary, photon number fluctuations for the two coupling regimes of the Sagnac loop – slightly asymmetric and highly asymmetric – have quite different characteristics. The nonlinear optical loop mirror (slight asymmetry) usually increases the photon-number fluctuations above the shot-noise level for coherent inputs . Sub-shot-noise statistics are possible; however, the 60:40 loop is much more sensitive, and without precise control it would almost certainly produce excess noise at the output. On the other hand, photon-number noise may easily be reduced significantly below the shot-noise level for highly asymmetric coupling. We have shown for the first time that large photon-number squeezing of solitonic fields can be produced by a Sagnac loop over a significant range of input energies, and the squeezing is larger than predicted for spectrally filtered optical fiber solitons . Despite this, similarities in the output photon statistics for the two approaches exist due to the use of the same nonlinear propagation to produce the internal quantum correlations of a quantum soliton not present in the initial coherent state. In essence a nonlinear interferometer is capable of producing optical pulses closer to true quantum solitons than their coherent counterparts from mode-locked lasers. The application of these ideas to controlling fluctuations in atomic wavepackets is particularly interesting given recent experimental progress with Na atom coherent condensates with densities $`10^{15}`$ cm<sup>-3</sup> using optical traps . This is several orders of magnitude larger than estimated to observe atomic solitons .
# Application of avalanche photodiodes as a readout for scintillator tile-fiber systems
## 1 Introduction
Recently, intensive R&D on scintillator tile-fiber readouts has been carried out in order to satisfy the needs of calorimeters in the new LHC experiments , . The specific requirements for these types of detectors are:
* Operation in magnetic field up to $`4Tesla`$.
* Large linear dynamic range - $`10^5`$ for the detector-preamplifier couple.
* Long lifetime of the photodetectors - $`5`$ to $`10`$ years of operation at high luminosity.
* Radiation hardness - up to $`2Mrad`$ integrated dose.
* Small size - because of very large number of channels in use.
* Sufficient $`Signal/Noise`$ ratio - to measure the signal from a minimum ionising particle ($`MIP`$).
* Capability of measuring the signal generated by a radioactive source as a DC current to a precision of $`1\%`$.
* Reasonable price.
Two main types of photodetectors satisfy the above requirements: hybrid photodiodes (HPD) and avalanche photodiodes (APD). The advantages of avalanche photodiodes over the other types of photodetectors are:
* Insensitivity to magnetic field.
* Linear dynamic range of $`10^6`$.
* Fast response ($`<1ns`$).
* High quantum efficiency in the range from $`200nm`$ to $`1100nm`$.
* Small size ($`10\times 10\times 5mm^3`$).
* Low price of APD and preamplifier.
Possible disadvantages of the photodiodes are a low internal gain ($`50-300`$) compared to that of the PMT and HPD, and a relatively high excess noise factor.
The present paper is devoted to the investigation of the applicability of APD’s as a readout for scintillator tile-fiber systems. The choice of the scintillator tile design has been determined by the requirements of two of the LHC experiments under preparation - CMS and LHCb. The main goal of our research was to achieve a $`Signal/Noise`$ ratio sufficient to measure the $`MIP`$ signal from a scintillator tile-fiber and the good $`e^{-}/MIP`$ separation needed for the preshower detector of the LHCb experiment. For this purpose, various designs of the scintillator tile-fiber system have been developed (section 2). The APD characteristics, electronics and calibration of the systems under investigation are presented in sections 3, 4 and 5 respectively. The results from measurements performed with cosmic muons and muon beams at the SPS accelerator are reported in section 6.
It is shown that cooling of the photodiodes reduces the dark current and increases the gain, hence allowing us to achieve a several times higher $`Signal/Noise`$ ratio and to reach our goals.
## 2 Scintillator tile - fibre system
Two different designs of the scintillator tile - fibre system have been studied (fig.1). The first one is proposed for the CMS HCAL while the second one is under investigation for the preshower detector of the LHCb experiment .
Three different types of WLS fibers - green Kuraray Y11 /$`0.84mm`$ diameter/, green Bicron 91A /$`1mm`$/ and red Bicron 99-172 /$`1mm`$/ - have been used. In order to estimate the scintillators’ light output, cosmic muon measurements using the PMT FEU85 have been performed. The WLS fibers were directly coupled to the PMT window. The signal from the PMT was read out by a charge-sensitive ADC LeCroy 2249W with $`250fC/channel`$ sensitivity using a $`160ns`$ gate triggered by two scintillator counters. Calibration of the PMT was done by fitting the single photoelectron distribution, produced by $`15ns`$ light pulses of a blue LED . The average numbers of photoelectrons induced by cosmic muons are presented in Table 1.
Taking into account the PMT photocathode quantum efficiency at the $`500nm`$ emission peak of the green WLS fibers ($`6\%`$) one can estimate $`180`$ photons from the $`4\times 4\times 1cm^3`$ scintillator with $`8`$ coils of fiber. Coupling of the WLS fiber to a clear fiber with the help of an optical connector leads to a reduction of the light output at the level of $`20\%-35\%`$. In order to improve the light collection we have matted the scintillator surface on which the WLS fibers were placed. This gave a $`20\%`$ increase of the light yield.
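The arithmetic behind these estimates is compact enough to spell out; the photoelectron number below is deduced from the quoted figures rather than read from Table 1:

```python
# Back-of-envelope check of the light-yield numbers quoted above (values taken
# from the text; the photoelectron count is deduced, not read from Table 1).
qe_pmt = 0.06        # PMT quantum efficiency at the 500 nm emission peak
n_photons = 180      # photons quoted for the 4x4x1 cm^3 tile with 8 fiber coils

n_pe = n_photons * qe_pmt
print(f"implied photoelectron yield: {n_pe:.1f} p.e. per cosmic muon")

# Effect of the optical connector (20-35% light loss) and of matting the
# scintillator surface (+20% light yield), as stated in the text:
for loss in (0.20, 0.35):
    print(f"connector loss {loss:.0%}: {n_photons * (1 - loss):.0f} photons")
print(f"with matted surface: {n_photons * 1.20:.0f} photons")
```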
## 3 Avalanche Photodiodes
The avalanche photodiodes under investigation are manufactured by ”InterQ” Ltd. using planar technology on $`p`$-type high resistivity silicon with a resistivity of $`2k\mathrm{\Omega }cm`$ ,. They are of the $`n^+p\pi p^+`$ type (R’APD). The multiplication region of the diodes is produced by ion implantation and two-stage diffusion of boron and phosphorus (fig.2). The active surface of the detectors is $`0.5mm^2`$. Some characteristics of the diodes are presented in Table 2.
## 4 Electronics
A schematic view of the electronics specially designed for our investigation is presented in fig.3.
The chain consists of a charge-sensitive preamplifier, a CR-RC shaper and a current feedback amplifier, which can drive a back terminated $`50\mathrm{\Omega }`$ line. A JFET SST309 transistor with a high forward transconductance (required by the very short shaping time of 25 ns) of about 15 mS at 10 mA drain current and a relatively low input capacitance of 6 pF has been used. The noise of the shaping amplifier becomes significant (the noise bandwidth is 10 MHz) due to the short shaping time. For this reason, microwave bipolar transistors with low $`R_{BB^{\prime }}`$ have been used. The thermostability of the preamplifier was achieved via negative feedback.
Main parameters of the preamplifier are:
* Equivalent charge noise $`400e^{}`$ at $`1pF`$ detector capacity (fig.4).
* Sensitivity of the preamplifier is $`160mV/10^6e^{}`$. (This coefficient depends on value of capacitor $`C1`$).
* $`25ns`$ shaping time.
* Full width of output signal $`150ns`$.
* Maximum output voltage about $`3.5V`$.
## 5 Calibration of the readout
The calibration of the readout has been done using $`15ns`$ long blue light pulses emitted by a LED. The light has been split between the APD and PMT readouts through a special optical connector. The signals from both readouts have been registered by a qADC with a $`160ns`$ gate. The measurements have been done at different light intensities, reverse bias voltages and temperatures . The summary results are presented below:
* The normalized internal gain of the APD as a function of the applied bias voltage is shown in fig.5. The gain at $`60V`$ and $`22^oC`$ is about $`6`$. Cooling of the photodiodes leads to a decrease of the breakdown voltage and to a considerable gain increase, mainly at voltages near the breakdown.
* The APD and preamplifier noise in terms of ADC channels vs. the applied bias voltage is plotted in fig.6. At low temperatures the noise remains low up to a particular voltage near the breakdown and then increases drastically. On the other hand, at high temperatures the noise is higher and increases smoothly. The equivalent noise charge can be estimated taking into account that the preamplifier gain is $`68e^{-}`$ per ADC channel. The small increase of the noise at low bias voltages is due to the larger capacitance of the photodiodes, which is a result of the absence of full depletion of charges in the APD at these voltages.
* Fig.7 shows the Signal/Noise ratio as a function of the bias voltage. The cooling of the APD allows us to increase the Signal/Noise ratio drastically. The maximum of this ratio can be achieved at bias voltages near the breakdown, where the signal is already large while the noise is still low. Further increase of the voltage results in an abrupt increase of the noise.
It is clear that even for low intensity light sources, a cooled APD gives a considerable S/N ratio. The large increase of the gain of the APD at low temperatures results from the fact that the multiplication process in the APD is affected by the temperature. This happens since electrons lose energy to the phonons, whose energy density increases with the temperature, and at lower temperatures it takes less time for the electrons to reach the energy required for impact ionization.
## 6 Measurements with muons
We have performed a series of measurements of the scintillator tile-fiber readout using the cosmic muon flux and a muon beam at the SPS at CERN. The tests have been made with PMT and APD readouts in a wide temperature range. With the APD readout we have obtained a $`1.5-2`$ times higher efficiency of the Bicron 99-172 fiber over the Bicron 91A fiber and $`1.25-1.5`$ times over the Kuraray Y11 fiber, because of the higher photodiode quantum efficiency at the longer wavelengths of the incoming light. The results obtained with the Bicron 99-172 fiber are presented in fig.8 and fig.9. Due to the long right-handed tail in the muon signal distribution, in what follows the $`S/N`$ ratio is defined as the ratio of the signal distribution maximum $`MPV`$ (Most Probable Value) to the $`\sigma `$ of the pedestal. The tail is determined by the low light yield from the scintillator and by the so called excess noise factor (F) of the photodiodes. Taking into account that F rises linearly with the applied bias voltage, we have to find the operational voltage at which the $`S/N`$ ratio is good enough, keeping the RMS of the signal distribution as low as possible. We have found that the compromise is reached at $`U=110V`$ (see fig.10) for the photodiode used.
We have calculated the $`MIP`$ detection efficiency (the ratio of the number of events in the signal distribution above $`2\sigma `$ from the pedestal to the number of all events in the signal distribution) for different bias voltages (fig.12). Another important issue is the ability to clearly separate the signals from muons ($`MIP`$) and electrons. For example, for the $`LHCb`$ preshower detector the signal is considered a $`MIP`$ when it is five times lower than the muon $`MPV`$; otherwise it is treated as an electron signal. The separation ratio (the ratio of the number of events in the signal distribution below $`5MPV`$ to the number of all events in the signal distribution) as a function of the bias voltage is presented in fig.13. For a bias voltage of 110 V we have a satisfactory detection efficiency of $`85\%`$ and a separation ratio of $`95\%`$.
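Both figures of merit are simple functionals of the measured spectra. The sketch below applies the stated definitions to a synthetic pedestal and a toy muon-like spectrum; the distributions are invented for illustration, and only the $`2\sigma `$ and $`5MPV`$ cuts follow the text:

```python
import numpy as np

# Toy data: Gaussian pedestal (noise-only events) and a long-tailed "muon"
# spectrum.  Only the definitions of S/N, efficiency and separation follow
# the text; the distributions themselves are synthetic.
rng = np.random.default_rng(0)
pedestal = rng.normal(0.0, 1.0, 20000)          # ADC channels
signal = 8.0 + rng.gamma(2.0, 3.0, 20000)       # shape, scale, size

sigma_ped = pedestal.std()
hist, edges = np.histogram(signal, bins=200)
imax = np.argmax(hist)
mpv = 0.5 * (edges[imax] + edges[imax + 1])     # distribution maximum (MPV)

snr = mpv / sigma_ped
efficiency = np.mean(signal > 2.0 * sigma_ped)  # MIP detection efficiency
separation = np.mean(signal < 5.0 * mpv)        # e-/MIP separation ratio
print(f"S/N = {snr:.1f}, efficiency = {efficiency:.1%}, separation = {separation:.1%}")
```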
We have also performed tests with different gates (fig.11) in order to investigate the behavior of the readout system in various readout schemes. The best result for the $`S/N`$ ratio is obtained when the ADC gate is close to the full width of the signal ($`160ns`$). At narrower gates (for example $`50ns`$) $`S/N`$ decreases by $`25\%`$. At wider gates $`S/N`$ falls down rapidly because much more noise is integrated.
## 7 Conclusions
Avalanche photodiodes were tested as a scintillator tile-fiber readout for the CMS HCAL and the LHCb ECAL preshower. Various types of fibres were investigated and the Bicron 99-172 one was determined to be the most efficient. A low noise charge sensitive preamplifier with $`400e^{-}`$ equivalent charge noise was designed to amplify the signal from the photodiodes. For the LHCb ECAL preshower scintillator a $`Signal/Noise`$ ratio of $`6.5`$ and for the CMS HCAL scintillator a $`Signal/Noise`$ ratio of $`1.3`$ were achieved by cooling the APDs down to $`-10^oC`$. A satisfactory $`MIP`$ detection efficiency of $`85\%`$ and an excellent $`e^{-}/MIP`$ separation of $`95\%`$ for the preshower scintillator were reached.
## 8 Acknowledgements
We would like to express our gratitude to Dr. Y. Musienko for the useful discussions and consultations and to P.Slavchev (”Inter Q” Ltd.) for developing and manufacturing the cooling device.
# Multiple Attractor Bifurcations: A Source of Unpredictability in Piecewise Smooth Systems
## Figure Captions
### Figure 1
(a) The Border Collision Circuit.
(b) $`ii_{crit}`$.
(c) $`i>i_{crit}`$.
### Figure 2
(a) and (b) Bifurcation diagrams for the two-dimensional time-1 map of the circuit depicted in Fig. 1 with $`Q_{bias}=1.0`$, $`\rho =0.10742`$, $`\mathrm{\Omega }=1.0642`$ and $`\mathrm{\Gamma }=1.2040`$. In creating the bifurcation diagrams (a) and (b) small noise was inserted leading to two different realizations.
(c) and (d) Bifurcation diagrams for the equivalent canonical form, Eq. (3), near criticality with $`\tau _<=0.7`$, $`\tau _>=2.0`$ and $`d=0.3`$. In creating (c) and (d), small noise was inserted leading to two different realizations.
### Figure 3
Two-dimensional period plot in parameter space for the canonical form, Eq. (3), with $`0.5<\tau _<<0.9`$, $`2.2<\tau _><1.8`$, $`d=0.3`$, $`\mu =1`$, and initial conditions $`(x_0,y_0)=(7.0,2.0)`$ and $`(x_0,y_0)=(1.2,0.0)`$.
### Figure 4
Probability $`b(\mu >\mu _2)`$, plotted logarithmically, as a function of $`1/v`$, based on 1,000,000 experiments for every value of $`1/v`$. Solid lines and plus symbols represent square noise, while dashed lines and asterisks represent circular noise.
# Ultimate Polynomial Time
## 1 Introduction
A model of Computation and Complexity over a ring was developed in and , generalizing the classical $`𝒩𝒫`$-completeness theory . Of particular interest is the model of Complexity over the ring $`\mathbb{C}`$ of complex numbers.

In the model of complexity over $`\mathbb{C}`$, a machine is allowed to input, to output and to store complex numbers, to compute polynomials and to branch on equality (See the textbook for background). This model shares some of the features of the classical (Turing) model of computation (There is a discussion in ). It is known that the hypothesis $`\mathcal{BPP}\not\supseteq 𝒩𝒫`$ in the Turing setting implies $`𝒫\ne 𝒩𝒫`$ over $`\mathbb{C}`$. ($`\mathcal{BPP}`$ stands for Bounded Probability Polynomial Time. If $`\mathcal{BPP}`$ would happen to contain $`𝒩𝒫`$, then there would be polynomial time randomized algorithms for such tasks as factorizing large integers or breaking most modern cryptographic systems).
In , the hypothesis $`𝒫𝒩𝒫`$ over the Complex numbers was related to a number-theoretical conjecture. Define a straight-line program as a list
$$s_0=1,s_1=x,s_2,\mathrm{},s_\tau $$
where $`s_i`$ is, for $`i\ge 2`$, either $`s_j+s_k`$, $`s_j-s_k`$ or $`s_js_k`$, for some $`j,k<i`$. Each $`s_i`$ is thus a polynomial in $`x`$. The straight-line program is said to compute the polynomial $`s_\tau (x)`$.
Given a polynomial $`f\in \mathbb{Z}[x]`$, the quantity $`\tau (f)`$ is defined as the smallest $`\tau `$ such that there exists a straight-line program $`s_0,\mathrm{},s_\tau `$ computing $`f(x)`$. For instance, $`\tau (x^{2^n}1)=2+n`$. Similarly, if $`g\in \mathbb{Z}[x_1,\mathrm{},x_n]`$, then $`\tau (g)`$ is the minimal length of a straight-line program $`s_0=1,s_1=x_1,\mathrm{},s_n=x_n,s_{n+1},\mathrm{},s_\tau =g(x)`$.
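As a concrete illustration of these definitions, here is the length-$`(2+n)`$ straight-line program for $`x^{2^n}-1`$ evaluated step by step (a sketch; the encoding of instructions as triples is our own):

```python
# Straight-line program of length 2+n computing x^(2^n) - 1.  Each entry of
# the program is (op, j, k), meaning s_i = s_j op s_k, with the conventions
# s_0 = 1 and s_1 = x from the definition above.
def tau_program(n):
    """s_2..s_{n+1}: repeated squaring; s_{n+2}: subtract s_0 = 1."""
    prog = [("*", 1, 1)]                            # s_2 = x * x
    prog += [("*", i, i) for i in range(2, n + 1)]  # s_{i+1} = s_i * s_i
    prog.append(("-", n + 1, 0))                    # s_{n+2} = x^(2^n) - 1
    return prog

def evaluate(prog, x):
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
    s = [1, x]
    for op, j, k in prog:
        s.append(ops[op](s[j], s[k]))
    return s[-1]

n, x = 5, 3
assert evaluate(tau_program(n), x) == x**(2**n) - 1
print(f"tau = {len(tau_program(n)) + 1}  (= 2 + n for n = {n})")
```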
###### The $`\tau `$ Conjecture for Polynomials .
There is a constant $`a>0`$ such that for any univariate polynomial $`f\in \mathbb{Z}[x]`$,
$$n(f)<\tau (f)^a$$
where $`n(f)`$ is the number of integer zeros of $`f`$, without multiplicity.
It is known that the $`\tau `$-Conjecture for polynomials implies $`𝒫\ne 𝒩𝒫`$ over $`\mathbb{C}`$. A main step towards this result is the fact that, if the $`\tau `$-Conjecture is true, then the polynomials
$$p_d(x)=(x-1)(x-2)\mathrm{}(x-d)$$
are ultimately hard to compute. This means that there cannot be constants $`a`$ and $`b`$ such that, for any degree $`d`$, for some non-zero polynomial $`f`$ (depending on $`d`$), we would have
$$\tau \left(p_d(x)f(x)\right)<a\left(\mathrm{log}_2d\right)^b$$
Therefore, all non-zero multiples of $`p_d`$ are hard to compute, hence the wording ultimately hard.
The goal of this paper is to define a new complexity class $`𝒰𝒫`$, of ultimate polynomial time problems. This class will contain $`𝒫\cap 𝒦`$, where $`𝒫`$ is the class of problems decidable in polynomial time and $`𝒦`$ is the class of problems definable without constants (See and Definition 1 below). Moreover:
###### Theorem 1.
The implications (a) $`\Rightarrow `$ (b) $`\Rightarrow `$ (c) $`\Rightarrow `$ (d) are true:
* The $`\tau `$-conjecture for polynomials.
* $`\forall d`$, $`p_d`$ is ultimately hard to compute.
* $`𝒰𝒫\not\supseteq 𝒩𝒫\cap 𝒦`$ over $`\mathbb{C}`$.
* $`𝒫\ne 𝒩𝒫`$ over $`\mathbb{C}`$.
The implication (a) $`\Rightarrow `$ (b) $`\Rightarrow `$ (d) appears in , the hypothesis (c) in-between is new. It is at least as likely as the $`\tau `$-conjecture, while still implying $`𝒫\ne 𝒩𝒫`$.
We will also show a $`𝒩𝒫`$-hardness result for the class $`𝒰𝒫`$: there is a structured problem $`(HN,HN^{\mathrm{yes}})\in 𝒩𝒫\cap 𝒦`$, such that:
###### Theorem 2.
$`𝒰𝒫\supseteq 𝒩𝒫\cap 𝒦`$ over $`\mathbb{C}`$ if and only if $`(HN,HN^{\mathrm{yes}})\in 𝒰𝒫`$ over $`\mathbb{C}`$.
The problem $`(HN,HN^{\mathrm{yes}})`$ is precisely the (structured) Hilbert Nullstellensatz, known to be $`𝒩𝒫`$-complete over $`\mathbb{C}`$ ().
This paper was written while the author was visiting Mathematical Sciences Research Institute in Berkeley. The author also wishes to thank Pascal Koiran and Steve Smale for their comments and suggestions.
## 2 Background and Notations
Recall from that $`\mathbb{C}^{\mathrm{\infty }}`$ is the disjoint union
$$\mathbb{C}^{\mathrm{\infty }}=\underset{i=0,1,\mathrm{\dots }}{\bigcup }\mathbb{C}^i$$
This means that there is a well-defined size function,
$$\begin{array}{cccc}\mathrm{Size}:& \hfill \mathbb{C}^{\mathrm{\infty }}& \to & \mathbb{N}\hfill \\ & \hfill x& \mapsto & \mathrm{Size}(x)=i\text{ such that }x\in \mathbb{C}^i\hfill \end{array}$$
A decision problem $`X`$ is a subset of $`\mathbb{C}^{\mathrm{\infty }}`$. It is in the class $`𝒫`$ if and only if there is a machine $`M`$ over $`\mathbb{C}`$, that terminates for any input $`x`$ in time bounded by a polynomial on $`\mathrm{Size}(x)`$, and such that
$$M(x)=0\iff x\in X$$
where $`M(x)`$ is the result of running $`M`$ with input $`x`$. Without loss of generality we may assume that $`M(x)\in \{0;1\}`$.
Under some circumstances, it is possible to assume that the machine $`M`$ above has only coefficients 0 or 1 (This is called a constant-free machine). However, one may have to replace problem $`X`$ over $``$ by problem $`X`$ over $``$, with unit cost. (This is the contents of Propositions 3 and 9 of Chapter 7 of ). In order to avoid this technical complication and keep the same problem over $``$, we will follow another approach to Elimination of Constants.
This approach was introduced by Koiran in . The idea is to consider only machines for a subclass of problems. This subclass will contain most of the interesting examples, while precluding pathological cases such as $`X=\{\pi \}`$.
###### Definition 1 (Koiran).
A problem $`L`$ is said to be definable without constants if for each input size $`n`$ there is a formula $`F_n`$ in the first order theory of $`\mathbb{C}`$ such that $`0`$ and $`1`$ are the only constants occurring in $`F_n`$, and for any $`x\in \mathbb{C}^n`$, $`x\in L`$ if and only if $`F_n(x)`$ is true (there is no restriction on the size of $`F_n`$).
For future reference, we quote below Theorem 2 of . The original statements of both Definition 1 and Theorem 3 are actually more general (for any algebraically closed field of characteristic 0).
###### Theorem 3 (Koiran).
Let $`L\subseteq K^{\mathrm{\infty }}`$ be a problem which is definable without constants. If $`L\in 𝒫`$, $`L`$ can be recognized in polynomial time by a constant-free machine.
The class of all the problems definable without constants will be denoted by $`𝒦`$.
We will need crucially in the sequel the notion of a structured problem. A structured problem is a pair $`(X,X^{\mathrm{yes}})`$, $`X^{\mathrm{yes}}\subseteq X\subseteq \mathbb{C}^{\mathrm{\infty }}`$. A non-structured problem $`X`$ can always be written as the structured problem $`(\mathbb{C}^{\mathrm{\infty }},X)`$. The class $`𝒰𝒫`$ will be meaningful only as a class of structured problems. But first of all, recall that
###### Definition 2.
A structured problem $`(X,X^{\mathrm{yes}})`$ belongs to the class $`𝒫`$ if and only if $`X\in 𝒫`$ and $`X^{\mathrm{yes}}\in 𝒫`$.
###### Definition 3.
A structured problem $`(X,X^{\mathrm{yes}})`$ belongs to the class $`𝒦`$ if and only if $`X\in 𝒦`$ and $`X^{\mathrm{yes}}\in 𝒦`$.
###### Definition 4.
A structured problem $`(X,X^{\mathrm{yes}})`$ belongs to the class $`𝒩𝒫`$ if and only if:
* The problem $`X`$ belongs to the class $`𝒫`$.
* There is a machine $`M`$ with input $`x,g`$ such that
$$x\in X\text{ and }\exists g\in \mathbb{C}^{\mathrm{\infty }}\text{ s.t. }M(x,g)=0\iff x\in X^{\mathrm{yes}}$$
* Furthermore, there is a polynomial $`p`$ such that, for all $`x\in X^{\mathrm{yes}}`$, there is $`g\in \mathbb{C}^{\mathrm{\infty }}`$ such that $`M(x,g)=0`$ and the running time of $`M`$ with input $`x,g`$ is no more than $`p(\mathrm{Size}(x))`$.
###### Example 1.
Let $`HN`$ be the class of all lists $`(m,n,f_1,\mathrm{},f_m)`$ where $`f_1,\mathrm{},`$ $`f_m`$ are polynomials in $`n`$ variables. Each polynomial $`f=\sum _If_Ix^I`$ is represented sparsely by a list of monomials $`(S,m_1,\mathrm{},m_S)`$, where each monomial is a list $`(f_I,I_1,\mathrm{},I_n)`$.
An important convention to have in mind: integers appearing in the definition of a problem should be represented in bit representation. In this case, $`m,n,S,I_j`$ are all lists of zeros and ones. Complex values are represented by one complex number. With this convention, $`HN`$ is clearly in the class $`𝒫`$.
We also define $`HN^{\mathrm{yes}}`$ as the subset of polynomial systems in $`HN`$ that have a common root over $`\mathbb{C}`$.
The definition above of the structured problem $`(HN,HN^{\mathrm{yes}})`$ can be translated into first order constant-free formulae over $`\mathbb{C}`$. Therefore, $`(HN,HN^{\mathrm{yes}})\in 𝒦`$. It is also $`𝒩𝒫`$-complete over the complex numbers (Theorem 1 in Chapter 5 of ).
###### Example 2.
Let
$`X`$ $`=`$ $`\left\{(m,x)\in \mathbb{N}\times \mathbb{C}\right\}`$
$`X^{\mathrm{yes}}`$ $`=`$ $`\left\{(m,x)\in X\text{ such that }x\in \{1,2,\mathrm{},m\}\right\}`$
with the convention that $`m`$ is in bit representation, while $`x`$ is a complex number. Hence, $`\mathrm{Size}((m,x))=O(1+\mathrm{log}_2(m))`$. Then the problem $`(X,X^{\mathrm{yes}})`$ is in $`𝒩𝒫`$ over $`\mathbb{C}`$. The machine $`M(x,g)`$ can be constructed by guessing the bit decomposition $`g_i`$ of $`x`$, and computing $`x-\sum _ig_i2^i`$.
Again, $`(X,X^{\mathrm{yes}})`$ is definable without constants.
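A small sketch of the verifier described in Example 2 follows; the names and the brute-force enumeration of guesses are ours, since the nondeterministic machine instead branches on each bit:

```python
# Sketch of the machine M(x, g) of Example 2: the guess g is a candidate bit
# decomposition, and M accepts when x - sum_i g_i 2^i = 0 with the encoded
# integer lying in {1, ..., m}.  The guess has polynomial size in log m.
from itertools import product

def M(m, x, bits):
    value = sum(b * 2**i for i, b in enumerate(bits))
    return (x - value == 0) and (1 <= value <= m)

def in_X_yes(m, x):
    """Brute force over all guesses; the NP machine guesses one branch."""
    nbits = m.bit_length()
    return any(M(m, x, g) for g in product((0, 1), repeat=nbits))

print(in_X_yes(10, 7 + 0j))   # True:  7 is in {1, ..., 10}
print(in_X_yes(10, 2.5))      # False: not an integer in range
```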
## 3 Construction of the class $`𝒰𝒫`$
In Chapter 7 of , it is proved that if the problem $`(X,X^{\mathrm{yes}})`$ from Example 2 would happen to belong to the class $`𝒫`$, then condition (b) in Theorem 1 would be false. Therefore (b) implies $`𝒫\ne 𝒩𝒫`$ over $`\mathbb{C}`$.
The class $`𝒰𝒫`$ will be constructed by abstracting the same reasoning. The construction relies on some geometric properties of structured problems in $`𝒫`$. The notation that follows will be used in the sequel:
Let $`(X,X^{\mathrm{yes}})`$ be a structured problem with $`X\in 𝒫`$. We denote by $`X^i`$ the set $`\{x\in X:\mathrm{Size}(x)=i\}`$ of size $`i`$ instances of the problem. Then we write $`\overline{X^i}`$ for its Zariski closure over $`\mathbb{C}`$. We can define a new object associated to $`X`$ as:
$$\overline{X}=\underset{i=0,1,\mathrm{\dots }}{\bigcup }\overline{X^i}$$
We can think of $`\overline{X}`$ as the closure of $`X`$, indeed it is the smallest ‘closed’ problem containing $`X`$. Remark that in Examples 1 and 2, we have respectively $`X=\overline{X}`$ and $`HN=\overline{HN}`$.
We can also decompose each Zariski-closed set $`\overline{X^i}`$ into a finite union of irreducible components (affine varieties). Thus it makes sense to write $`\overline{X}`$ as the countable union:
$$\overline{X}=\bigcup X_j$$
where each $`X_j`$ is an affine variety lying in some $`\mathbb{C}^s`$, where $`s=\mathrm{Size}(x),x\in X_j`$. We can further define:
$`X_j^{\mathrm{yes}}`$ $`=`$ $`X_j\cap X^{\mathrm{yes}}`$
$`X_j^{\mathrm{no}}`$ $`=`$ $`X_j\setminus X^{\mathrm{yes}}`$
(See Figure 1). Using this notation,
###### Definition 5.
The class $`𝒰𝒫`$ is the class of all structured problems $`(X,X^{\mathrm{yes}})`$ such that $`X\in 𝒫`$ and for all $`X_i`$, there is a non-zero polynomial $`f_i\in \mathbb{C}[x_1,\mathrm{},x_{s_i}]`$, where $`s_i=\mathrm{Size}(x)`$ for $`x\in X_i`$, with the following properties:
* $`\tau (f_i)`$ is polynomially bounded in $`s_i`$.
* $`X_i^{\mathrm{yes}}\subseteq Z(f_i)`$ or $`X_i^{\mathrm{no}}\subseteq Z(f_i)`$
###### Proposition 1.
$`𝒫\cap 𝒦\subseteq 𝒰𝒫`$
###### Proof of Proposition 1.
Let $`(X,X^{\mathrm{yes}})`$ be in $`𝒫\cap 𝒦`$. Let $`M=M(x)`$ be the machine that recognizes $`x\in X^{\mathrm{yes}}`$ in polynomial time, where the input $`x`$ is assumed to be in $`\overline{X}`$. Although it is possible that an $`x\in X_i`$ is not in $`X`$, it is still possible to recognize $`x\in X^{\mathrm{yes}}`$ in polynomial time. Indeed, $`X`$ is also in $`𝒫`$. The machine $`M(x)`$ will check $`x\in X`$ and $`x\in X^{\mathrm{yes}}`$.
Now we apply elimination of constants (Theorem 3), and choose $`M`$ to be constant-free.
The nodes of the machine $`M`$ are supposed to be numbered. Given an input $`x`$, the path followed by input $`x`$ is the list of nodes traversed during the computation of $`M(x)`$.
When the input is restricted to one of the affine varieties $`X_i`$’s, we can define the canonical path (associated to $`X_i`$) as the path followed by the generic point of $`X_i`$. This corresponds to the following procedure:
At each decision node, at time $`T`$, the branch depends upon an equality $`F^T(x)=0`$, where $`x`$ is the original input. The polynomial $`F^T`$ can be computed within the machine running time. In case $`F^T(x)=0`$ for all $`x\in X_i`$, we follow the Yes-path and say that this branching is trivial.
If not, we follow the no-path and say that this branching is non-trivial. The fact that $`X_i`$ is a variety is essential here, since it guarantees that only a codimension $`1`$ subset of inputs may eventually follow the Yes-path at this time.
The set of inputs that do NOT follow the canonical path can be described as the zero-set of
$$f_i=\prod F^T$$
where the product ranges over the non-trivial branches only. The polynomial $`f_i`$ can be computed in at most twice the running time of the machine $`M`$ restricted to $`X_i`$. By hypothesis, this is polynomial time in the size of $`xX_i`$.
Since we assumed that $`M`$ returns only $`0`$ or $`1`$, the set of the inputs that follow the canonical path (i.e. $`Z(f_i)`$) is either all in $`X_i^{\mathrm{yes}}`$ or all in its complementary $`X_i^{\mathrm{no}}`$.
There are now two possibilities. First possibility, $`X_i^{\mathrm{yes}}`$ has measure zero in $`X_i`$, and therefore it must be contained in $`Z(f_i)`$. Second possibility, $`X_i^{\mathrm{yes}}`$ has non-zero measure, hence it contains the complementary of $`Z(f_i)`$, and hence $`X_i^{\mathrm{no}}`$ is a subset of $`Z(f_i)`$. ∎
## 4 Proof of the Theorems
###### Proof of Theorem 1.
(a) $`\Rightarrow `$ (b) is trivial, refer to Chapter 7.
(b) $`\Rightarrow `$ (c): Let $`(X,X^{\mathrm{yes}})`$ be the problem in Example 2. Since $`X_i^{\mathrm{no}}`$ is generic in $`X_i`$, all inputs in $`X_i^{\mathrm{yes}}`$ should escape the canonical path. Hence, if $`f_d`$ is the polynomial that defines the canonical path, $`f_d(i)=0`$ for $`i=1,2,\mathrm{},d`$. But then it cannot be evaluated in time polylog($`d`$), by hypothesis (b). Hence, under the assumption (b), the problem $`(X,X^{\mathrm{yes}})`$ is not in $`𝒰𝒫`$. It does belong to $`𝒩𝒫\cap 𝒦`$, so $`𝒰𝒫\not\supseteq 𝒩𝒫\cap 𝒦`$.
(c) $`\Rightarrow `$ (d) : Using Theorem 2, Condition (c) implies that $`(HN,HN^{\mathrm{yes}})\notin 𝒰𝒫`$. However, since $`(HN,HN^{\mathrm{yes}})\in 𝒦`$, Proposition 1 implies $`(HN,HN^{\mathrm{yes}})\notin 𝒫`$. Hence $`𝒫\ne 𝒩𝒫`$ over $`\mathbb{C}`$. ∎
###### Proof of Theorem 2.
Let $`(X,X^{\mathrm{yes}})\in 𝒩𝒫\cap 𝒦`$ and assume that $`(HN,HN^{\mathrm{yes}})\in 𝒰𝒫`$. We have to show that $`(X,X^{\mathrm{yes}})\in 𝒰𝒫`$.
For each $`X_i`$, one can embed $`(X_i,X_i^{\mathrm{yes}})`$ into some $`(HN_i,HN_i^{\mathrm{yes}})`$ as follows:
Let $`M=M(x)`$ be the deterministic polynomial time machine to recognize $`X`$, and let $`N=N(x,g)`$ be the non-deterministic polynomial time machine to recognize $`X^{\mathrm{yes}}`$. We can assume without loss of generality that $`M`$ and $`N`$ are constant-free (Theorem 3).
Let $`T`$ be the maximum running time of $`M`$ and $`N`$ when the input is restricted to $`X_i`$. Let $`\varphi (x)`$ be the combined Register Equations of machines $`M`$ and $`N`$ for time $`T`$ (Theorem 2 in Chapter 3 of ). Thus, $`\varphi (x)`$ is a system of polynomial equations with integer coefficients and indeterminate coefficients $`x_1,x_2,\mathrm{}`$. The polynomial system $`\varphi (x)`$ can be constructed in polynomial time from $`x`$, and the size of $`\varphi (x)`$ is polynomially bounded by the size of $`x`$.
We claim that $`\varphi (X_i)`$ is contained in some $`HN_j`$, and that in that case $`\varphi (X_i^{\mathrm{yes}})\subseteq HN_j^{\mathrm{yes}}`$ and $`\varphi (X_i^{\mathrm{no}})\subseteq HN_j^{\mathrm{no}}`$.
Indeed, $`X_i\subseteq \mathbb{C}^s`$ for some $`s`$, and $`\varphi (\mathbb{C}^s)\subseteq HN_j`$ for some $`j`$. Then $`x\in X_i`$ belongs to $`X^{\mathrm{yes}}`$ if and only if the corresponding $`\varphi (x)`$ has a solution over $`\mathbb{C}`$.
We now distinguish two cases:
Case 1: $`HN_j^{\mathrm{yes}}`$ has measure zero in $`HN_j`$. Thus $`HN_j^{\mathrm{yes}}\subseteq Z(\widehat{f}_j)`$ for an easy-to-compute polynomial $`\widehat{f}_j`$. In that case, since $`X_i^{\mathrm{yes}}`$ gets mapped into $`HN_j^{\mathrm{yes}}`$, the composition $`f_i=\widehat{f}_j\circ \varphi `$ gives the polynomial associated to $`X_i`$.
Case 2: $`HN_j^{\mathrm{no}}`$ has measure zero in $`HN_j`$. Thus $`HN_j^{\mathrm{no}}\subseteq Z(\widehat{f}_j)`$ for an easy-to-compute polynomial $`\widehat{f}_j`$. In that case, since $`X_i^{\mathrm{no}}`$ gets mapped into $`HN_j^{\mathrm{no}}`$, $`f_i=\widehat{f}_j\circ \varphi `$ is the polynomial associated to $`X_i`$. ∎
## 5 Ultimate Complexity
Let $`(Y,Y^{\mathrm{yes}})`$ be a problem over $`\mathbb{C}`$, definable without constants and with $`Y`$ semi-decidable (i.e. $`Y`$ is the halting set of some machine). The closure $`\overline{Y}`$ is well-defined and can be written as a countable union of irreducible varieties $`Y_i`$.
For any machine $`M`$ to solve $`(Y,Y^{\mathrm{yes}})`$, one can produce a family of polynomials $`f_i`$, vanishing on the set of inputs that do not follow the canonical path of $`M`$ restricted to $`Y_i`$. As in item (2) of Definition 5, we have
$$Y_i^{\mathrm{yes}}\subseteq Z(f_i)\text{ or }Y_i^{\mathrm{no}}\subseteq Z(f_i)$$
Also, for each input size $`s`$, one has a finite number of indices $`i`$ corresponding to components $`Y_i\subseteq \overline{Y}`$ of size-$`s`$ inputs. We can thus maximize over those indices $`i`$:
$$u_M(s)=\underset{i:Y_i\subseteq \mathbb{C}^s}{\mathrm{max}}\tau (f_i)$$
This invariant may be called ‘ultimate running time’, and is a lower bound (up to a constant) for the worst-case running time of $`M`$. As with ordinary complexity theory, one can define the ‘ultimate complexity’ class of a problem as the class of functions $`u:\mathbb{N}\to \mathbb{N}`$ such that $`\exists M,\exists c>0:\forall x,u_M(x)\le cu(x)`$ and $`M`$ recognizes $`(Y,Y^{\mathrm{yes}})`$. This provides notions such as ‘ultimate logarithmic time’ or ‘ultimate exponential time’.
In , a similar construction is used to obtain lower bounds for some specific decision problems. Those problems, however, had a very simple geometric structure (for each ‘input size’, $`X^{\mathrm{yes}}`$ was a finite set in $`\mathbb{C}`$). The motivation of this paper was to extend some of the ideas therein and in Chapter 7 of to non-codimension-1 problems.
# Instantons in the Langevin dynamics: an application to spin glasses.
## 1 Introduction
It is well known that all spin-glass systems have complicated free energy structures with many metastable states separated by large barriers . The calculation of the heights of these barriers is a difficult task even for long-range spin-glass models, because most of the methods developed in the spin glass theory do not give such information. For example, the application of the standard replica method allows one to find the equilibrium free energy, which does not contain any information about the heights of the barriers. The standard dynamical approach gives the properties of different metastable states and their history dependence and might contain, in principle, the information about the transition rates between them; but in the limit of long-range interaction the probability of transitions becomes exponentially low in the number of spins, $`N`$, and, thus, such processes are neglected by the mean field derivation of the equations for the correlation and response functions on which this approach is usually based. The modification of the replica method that allows to estimate the energy barriers between metastable states was suggested in Ref.. The main idea of this modification is to study the free energy of the state constrained to have a certain overlap with the given state. The main drawback of it is that it is not clear whether the state corresponding to the energy maximum found by this method is dynamically accessible and whether it is indeed a bottleneck of a transition process. The problem is exacerbated by the fact that the replica method weights all states with the Boltzmann weight, so it might miss the rare saddle points of low energy in favour of more abundant high energy ones. In this paper we develop an alternative technique which is based on a modified dynamical approach for the calculation of the barriers between the metastable states.
Before we discuss how to modify the dynamical theory so that it does not neglect rare processes such as transitions over the barriers, we briefly review the standard dynamical approach to spin-glasses. In this approach one starts from the Lagrangian formulation of the Langevin dynamics, averages over the disorder and, making the saddle point approximation, arrives at a closed system of equations for the single site spin-spin correlation function $`D(t_1,t_2)`$ and response function $`G(t_1,t_2)`$. The corrections to the saddle point approximation are small in $`1/N`$ because the interaction has range $`N`$. For example, in the case of the spherical p-spin interacting model one gets a set of integro-differential equations which were solved numerically (and partially analytically); the solution is made possible by the fact that these equations are *forward propagating* in time, which is in turn due to the causality of the Langevin equations and initial conditions imposed in the past . By construction these mean field equations describe the most probable evolution of the system in time and ignore all rare processes such as transitions over the barriers. Empirically we visualize the processes that give the main contribution to the conventional dynamical theory as motion down the energy landscape or small fluctuations near the bottom of the valley.
Consider now a typical dynamical process that corresponds to the transition between two close metastable states in a free energy landscape, shown in Fig. 1. The system initially is in the state 1 and we want to find the probability of the transition to the state 2. The path from point 1 to point 2 consists of the uphill motion from state 1 to the unstable stationary point 3 and the motion from unstable point 3 to stable state 2. Only the first part of the motion corresponds to a rare process, and, therefore, the probability of the transition between the states 1 and 2 is determined by the uphill motion from 1 to 3. In the Lagrangian formulation of the Langevin dynamics the uphill motion can be described as an instanton whose action gives the probability of the process. Note that to find this solution one needs to “force” the system to go upward, i.e. one needs to apply a boundary condition in the future (at point 3 in our example) that destroys the causality of the theory. This complicates enormously the dynamical equations describing the instanton motion compared to the dynamical equations describing typical processes (such as the motion from 3 to 2).
The main result of this paper is that an instanton motion can be mapped into a usual motion going back in time. In our example it means that the uphill motion from point 1 to point 3 can be mapped into the usual downhill motion from point 3 to point 1. This allows one to solve the usual *forward propagating* equations instead of solving the complicated equations describing the instanton motion. In general, this method does not allow one to find the barrier between one given state and another, but it allows one to find some instanton processes which, hopefully, correspond to typical barriers in a system. In the simple example of the spherical Sherrington-Kirkpatrick model which we consider here it gives the barrier that separates the doubly degenerate ground states that differ by the sign of the magnetization; this barrier is physically important because it controls the decay of the ground state magnetization in this system. In this model there are only two locally stable states and $`N`$ locally unstable ones; naturally, one expects that these unstable states are saddle points and that the trajectory which connects two ground states must go through one of them, but these general qualitative arguments do not indicate which of these saddle points should be used. Qualitatively the problem is that even in this simple model the lowest saddle point might not connect two different minima but connect one minimum to itself (a false pass). Our approach proves that it is sufficient to climb up to the lowest of them in order to get from one ground state to another.
Further, we confirm that in the case when the free energy landscape of the problem is known explicitly, the probability of the transition obtained as the action of the instanton solution is $`e^{-\mathrm{\Delta }F/T}`$ where $`\mathrm{\Delta }F`$ is the difference between the free energy at the end of the instanton trajectory (point 3) and the free energy of the stationary state (point 1). The free energy at the unstable fixed point 3 should be understood as $`F=E-TS,`$ where $`E`$ is the energy and $`S`$ is the entropy defined as the logarithm of the configuration space restricted to the direction perpendicular to the trajectory. Of course, there is not much need to calculate the transition probability if the energy of the saddle point is known exactly and it is established that the saddle point is dynamically accessible; the advantage of the method is that it can also be used in the cases where the energy landscape can not be found explicitly, e.g. in all problems in which the disorder average was performed first.
The paper is organized as follows: In Section 2 we prove that an instanton equation of motion can be mapped into a usual equation of motion reversed in time. In Section 3 we consider an instanton transition in the spherical SK model. Section 4 summarizes our results.
## 2 Instanton equations
We start from the Langevin equation describing the overdamped relaxation of the system with energy $`\mathcal{H}`$
$$\mathrm{\Gamma }_0^{-1}\partial _t\sigma _i=-\frac{\delta \beta \mathcal{H}}{\delta \sigma _i}+\xi _i.$$
Here $`\xi _i`$ is the Langevin noise with the correlator
$$\left\langle \xi _i(t_1)\xi _j(t_2)\right\rangle =2\mathrm{\Gamma }_0^{-1}\delta _{i,j}\delta (t_1-t_2)$$
(1)
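As a concrete illustration of this starting point, the following sketch integrates the Langevin equation by the Euler-Maruyama method for a single degree of freedom in a double-well energy (a toy choice of ours, not a spin-glass Hamiltonian), with $`\mathrm{\Gamma }_0=1`$ so that the noise variance $`2dt`$ per step reproduces the correlator (1):

```python
import numpy as np

# Euler-Maruyama discretization of the Langevin equation above (Gamma_0 = 1):
# each step adds the drift -d(beta*H)/ds * dt and Gaussian noise of variance
# 2*dt, realizing <xi(t1) xi(t2)> = 2 delta(t1 - t2).
# Toy energy: beta*H(s) = beta*(-s^2/2 + s^4/4).
rng = np.random.default_rng(1)

def grad_beta_H(s, beta):
    return beta * (-s + s**3)

def langevin(s0, t_total, beta=5.0, dt=1e-3):
    n = int(t_total / dt)
    s = np.empty(n)
    s[0] = s0
    for i in range(1, n):
        s[i] = (s[i - 1] - grad_beta_H(s[i - 1], beta) * dt
                + np.sqrt(2 * dt) * rng.standard_normal())
    return s

traj = langevin(s0=-1.0, t_total=500.0)
print("fraction of time spent in the right well:", np.mean(traj > 0))
```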
Below we shall choose the time units so that $`\mathrm{\Gamma }_0=1`$. Using the standard approach we get a path integral formulation of the problem with the Lagrangian
$$\mathcal{L}=\sum _i\left[-\widehat{\sigma }_i^2-i\widehat{\sigma }_i\left(\partial _t\sigma _i+\frac{\delta \beta \mathcal{H}}{\delta \sigma _i}\right)+\frac{1}{2}\frac{\delta ^2\beta \mathcal{H}}{\delta \sigma _i^2}\right].$$
(2)
The Green functions are defined by
$`𝒢_{i,j}(t_1,t_2)`$ $`\equiv `$ $`\left[\begin{array}{cc}\widehat{D}_{i,j}(t_1,t_2)& G_{i,j}^T(t_1,t_2)\\ G_{i,j}(t_1,t_2)& D_{i,j}(t_1,t_2)\end{array}\right]`$ (5)
$`=`$ $`{\displaystyle \int D\sigma D\widehat{\sigma }e^{\int _{t_i}^{t_f}𝑑t\mathcal{L}}\left[\begin{array}{c}i\widehat{\sigma }_i(t_1)\\ \sigma _i(t_1)\end{array}\right][i\widehat{\sigma }_j(t_2),\sigma _j(t_2)]}`$ (8)
and the dynamical action $`A`$ is defined by
$$e^A=\int D\sigma D\widehat{\sigma }e^{\int _{t_i}^{t_f}𝑑t\mathcal{L}}.$$
(9)
Usually one fixes only initial boundary conditions. In this case the Green function G is casual, the anomalous Green function $`\widehat{D}`$ is zero, and the action vanishes. But if one considers rare processes as transitions over the barriers, then one should also fix the final boundary conditions. In that case the Green function G does not need to be causal, and the action and the Green function $`\widehat{D}`$ do not necessarily vanish.
Now we construct a mapping between an uphill motion (with a negative action monotonically decreasing in time) and the downhill motion (with zero action) going back in time. Consider an arbitrary instanton process ($`\sigma (t),\widehat{\sigma }(t)`$) and impose both initial and final boundary conditions. Applying the transformation
$$[i\widehat{\sigma },\sigma ]\to [i\widehat{\sigma }+\partial _t\sigma ,\sigma ]$$
(10)
to the Lagrangian (2) we get
$`\mathcal{L}`$ $`\to `$ $`\mathcal{L}_n={\displaystyle \sum _i}-\widehat{\sigma }_i^2-i\widehat{\sigma }_i\left(-\partial _t\sigma _i+{\displaystyle \frac{\delta \beta \mathcal{H}}{\delta \sigma _i}}\right)+{\displaystyle \frac{1}{2}}{\displaystyle \frac{\delta ^2\beta \mathcal{H}}{\delta \sigma _i^2}}`$ (11)
$`-\beta \left(\mathcal{H}(t_f)-\mathcal{H}(t_i)\right)`$
Note that this Lagrangian differs from the original Lagrangian (2) only by the sign of time derivative and constant boundary term. Therefore inverting time in the Lagrangian (11) one can make the Lagrangians (2,11) equivalent. Therefore, the Lagrangian (11) describes normal downhill motion formally inverted in time.
The Green function defined as averaged with respect to the Lagrangian (2)
$$𝒢(t_1,t_2)=\left\langle \left[\begin{array}{c}i\widehat{\sigma }(t_1)\\ \sigma (t_1)\end{array}\right][i\widehat{\sigma }(t_2),\sigma (t_2)]\right\rangle _{\mathcal{L}}$$
(12)
and the Green function defined as averaged with respect to the Lagrangian (11)
$$𝒢_n(t_1,t_2)=\left\langle \left[\begin{array}{c}i\widehat{\sigma }(t_1)\\ \sigma (t_1)\end{array}\right][i\widehat{\sigma }(t_2),\sigma (t_2)]\right\rangle _{\mathcal{L}_n}$$
(13)
are related by
$$𝒢(t_1,t_2)=\left[\begin{array}{cc}1& \partial _{t_1}\\ 0& 1\end{array}\right]𝒢_n(t_1,t_2)\left[\begin{array}{cc}1& 0\\ \partial _{t_2}& 1\end{array}\right].$$
(14)
Because the Green function $`𝒢_n`$ corresponds to the normal downhill process inverted in time, it should have the form
$$𝒢_n=\left[\begin{array}{cc}0& G_n^T\\ G_n& D_n\end{array}\right].$$
(15)
Therefore the transformation (14) in components is
$$D(t_1,t_2)=D_n(t_1,t_2)$$
(16)
$$G(t_1,t_2)=G_n(t_1,t_2)+\partial _{t_2}D_n(t_1,t_2)$$
(17)
$$\widehat{D}(t_1,t_2)=\partial _{t_1}G_n(t_1,t_2)+\partial _{t_2}G_n^T(t_1,t_2)+\partial _{t_1}\partial _{t_2}D_n(t_1,t_2).$$
(18)
Note that the response function $`G_n`$ should be purely advanced due to formal inversion of time with respect to the downhill motion.
Thus, in order to construct the Green functions for the instanton processes one should find the Green functions for the corresponding normal process, invert time, and apply the transformation (14).
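The prescription just stated is easy to realize numerically. The sketch below applies the component form (16)-(18) of the mapping on a discrete time grid, with finite differences standing in for the time derivatives; the two input matrices are placeholders for $`D_n`$ and $`G_n`$ produced by whatever solver generates the normal (time-reversed downhill) process, and the toy forms used here are our own:

```python
import numpy as np

# Discrete-grid version of the mapping (16)-(18): given the normal-process
# functions D_n(t1, t2) and G_n(t1, t2) as T x T matrices, construct the
# instanton functions D, G and D_hat.  Generic numerics, not a specific model.
T, dt = 200, 0.05
t = np.arange(T) * dt

def partial(F, axis):
    """Finite-difference derivative in the indicated time argument."""
    return np.gradient(F, dt, axis=axis)

# toy placeholders; G_n is purely advanced (nonzero only for t1 < t2)
D_n = np.exp(-np.abs(t[:, None] - t[None, :]))
G_n = np.triu(np.exp(-(t[None, :] - t[:, None])), k=1)

D = D_n                                                   # Eq. (16)
G = G_n + partial(D_n, axis=1)                            # Eq. (17)
D_hat = (partial(G_n, axis=0) + partial(G_n.T, axis=1)    # Eq. (18)
         + partial(partial(D_n, axis=0), axis=1))
print(D_hat.shape, np.abs(D_hat).max())
```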
Now we show that the action can be expressed through the energies and configuration spaces of the initial and final states: Suppose that initially the system is at the stable state $`s`$, and we want to find the probability to escape this state going trough the unstable fixed point $`u`$ which is the final state of the instanton motion. Note that in all the above considerations we assumed completely fixed boundary conditions for $`\sigma _i.`$ But physically, initial and final states correspond to some regions in configuration states which we denote as $`\mathrm{\Gamma }_s`$ and $`\mathrm{\Gamma }_u`$ respectively. To emphasize the difference between the process with completely fixed boundary conditions and the processes with physical boundary conditions we refer to the former as elementary processes. According to Eqs.(2,11) the probability of an elementary process of motion from $`s`$ to $`u`$ ($`w_{su}`$) is related to the elementary process of motion from $`u`$ to $`s`$ ($`w_{us}`$) through
$$w_{su}=w_{us}e^{-\beta (E(t_f)-E(t_i))}.$$
(19)
The probability to escape the state $`s`$ is
$$W_{su}=\mathrm{\Gamma }_uw_{su}.$$
(20)
On the other hand the probability to go from the unstable state $`u`$ to the stable state $`s`$ is 1, therefore
$$W_{us}=w_{us}\mathrm{\Gamma }_s=1.$$
(21)
Combining Eqs.(19,20,21), for the probability to escape the stable state $`s`$ we get
$$W_{su}=\mathrm{exp}[-\beta (E(t_f)-E(t_i))+S(t_f)-S(t_i)],$$
(22)
where $`S=\mathrm{ln}\mathrm{\Gamma }`$ is the entropy. Note that at the stable point the entropy $`S(t_i)`$ is just the equilibrium entropy corresponding to this state. The entropy of the final state $`S(t_f)`$ can not be defined thermodynamically because it corresponds to the configuration space of the unstable state. Defining the free energy as $`F=E-TS`$ one can write (22) as
$$W_{su}=e^{-\beta F(t_f)+\beta F(t_i)}.$$
(23)
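Equation (23) can be checked directly in a toy landscape. The sketch below measures the mean escape time of an overdamped particle from one well of $`H(s)=-s^2/2+s^4/4`$ (barrier $`\mathrm{\Delta }E=1/4`$; all numerical choices are ours) and compares its growth with $`e^{\beta \mathrm{\Delta }E}`$; Eq. (23) fixes only this exponential factor, not the attempt-frequency prefactor:

```python
import numpy as np

# Mean first-escape time from the left well of H(s) = -s^2/2 + s^4/4
# (barrier height 1/4).  The escape rate should scale as exp(-beta/4).
rng = np.random.default_rng(2)

def first_escape_time(beta, dt=1e-3, max_steps=10**7):
    s = -1.0                        # bottom of the left well
    for step in range(max_steps):
        s += -beta * (-s + s**3) * dt + np.sqrt(2 * dt) * rng.standard_normal()
        if s > 0.0:                 # reached the saddle at s = 0
            return step * dt
    return np.nan

for beta in (4.0, 6.0, 8.0):
    t_esc = np.mean([first_escape_time(beta) for _ in range(50)])
    print(f"beta = {beta}: <t_escape> = {t_esc:8.1f},  exp(beta/4) = {np.exp(beta/4):6.1f}")
```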
## 3 Instanton transition in the spherical SK model.
The Hamiltonian of the spherical SK model is
$$\mathcal{H}=-\frac{1}{2}\sigma _iJ_{ij}\sigma _j,$$
(24)
with the spherical constraint $`_i\sigma _i\sigma _i=N`$ imposed. The Hamiltonian becomes diagonal in the basis of eigenvectors of the matrix $`J_{i,j}`$
$$\mathcal{H}=\frac{1}{2}\underset{\mu }{\sum }ϵ_\mu s_\mu ^2,$$
(25)
where
$`{\displaystyle \underset{j}{\sum }}J_{i,j}\sigma _j^\mu `$ $`=`$ $`-ϵ_\mu \sigma _i^\mu ,`$ (26)
$`\sigma _i`$ $`=`$ $`{\displaystyle \underset{\mu }{\sum }}s_\mu \sigma _i^\mu .`$ (27)
The Lagrangian corresponding to this model is
$$\mathcal{L}(s,\lambda )=\underset{\mu }{\sum }\left[-\widehat{s}_\mu ^2+i\widehat{s}_\mu (\partial _t+ϵ_\mu +\lambda )s_\mu \right]+\frac{1}{2}(N+2)\lambda ,$$
(28)
where $`\lambda `$ is the time-dependent Lagrange multiplier field which appears in the equation of motion due to the constraint, and we take $`\beta =1`$ for convenience. The functional integral over the variables $`s_\mu ,\widehat{s}_\mu `$ should be performed with a weight that includes a $`\delta `$-function that ensures the constraint. Using the integral representation of this $`\delta `$-function
$$\delta (\underset{\mu }{\sum }s_\mu ^2-N)=\int D\varphi e^{i{\scriptscriptstyle \int 𝑑t\varphi (\sum _\mu s_\mu ^2-N)}},$$
(29)
we get the following Lagrangian
$`\mathcal{L}(s,\widehat{s},\lambda ,\varphi )`$ $`=`$ $`{\displaystyle \underset{\mu }{\sum }}\left(-\widehat{s}_\mu ^2-i\widehat{s}_\mu (\partial _t+ϵ_\mu +\lambda )s_\mu +i\varphi s_\mu ^2\right)`$ (30)
$`+{\displaystyle \frac{1}{2}}(N+2)\lambda -i\varphi N.`$
At low temperatures the condensation into the lowest eigenvalue $`\mu =0`$ eventually takes place. Therefore we introduce the condensate $`S_0`$ and integrate over $`s_\mu `$ with $`\mu \ge 1`$ getting
$$\mathcal{L}=-\widehat{S}_0^2-i\widehat{S}_0(\partial _t+ϵ_0+\lambda )S_0+i\varphi S_0^2+\frac{1}{2}(N+2)\lambda -i\varphi N$$
$$-\frac{1}{2}\underset{\mu \ge 1}{\sum }Tr\mathrm{ln}𝒢_\mu ,$$
(31)
where the matrix Green function
$$𝒢_\mu =\left[\begin{array}{cc}\widehat{D}_\mu (t_1,t_2)& G_\mu ^T(t_1,t_2)\\ G_\mu (t_1,t_2)& D_\mu (t_1,t_2)\end{array}\right]$$
(32)
satisfies the equation
$$\left[\begin{array}{cc}2& \partial _{t_1}+\lambda (t_1)+ϵ_\mu \\ -\partial _{t_1}+\lambda (t_1)+ϵ_\mu & \varphi (t_1)\end{array}\right]𝒢_\mu =\delta (t_1-t_2).$$
(33)
The number of sites $`N`$ is large, therefore we will perform the integrals with the weight $`\mathrm{exp}(\int \mathcal{L}\,dt)`$, where $`\mathcal{L}`$ is given by (31), in the saddle point approximation. Taking the variation with respect to $`\varphi ,\lambda ,\widehat{S}_0,S_0`$ and transforming the variables via $`2i\varphi \to \varphi ,i\widehat{S}\sqrt{N}\to \widehat{S}`$ we get the saddle point equations
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu \ge 1}{\sum }}D_\mu +S_0^2`$ $`=`$ $`1,`$ (34)
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu \ge 1}{\sum }}G_\mu +\widehat{S}_0S_0`$ $`=`$ $`0,`$ (35)
$`(\partial _t+ϵ_0+\lambda )S_0-2\widehat{S}_0`$ $`=`$ $`0,`$ (36)
$`(-\partial _t+ϵ_0+\lambda )\widehat{S}_0+\varphi S_0`$ $`=`$ $`0,`$ (37)
where
$$D_\mu (t)=D_\mu (t,t),G_\mu (t)=G_\mu (t,t+\delta ).$$
Note that Eqs.(34-37) contain only the equal time Green functions. Therefore it is convenient to write the equations directly for the equal time Green functions instead of (33):
$`\partial _tG_\mu `$ $`=`$ $`2\widehat{D}_\mu +\varphi D_\mu ,`$ (38)
$`\partial _t\widehat{D}_\mu `$ $`=`$ $`2(\lambda +ϵ_\mu )\widehat{D}_\mu +\varphi (1+2G_\mu ),`$ (39)
$`\partial _tD_\mu `$ $`=`$ $`-2(\lambda +ϵ_\mu )D_\mu +2(1+2G_\mu ),`$ (40)
We assume that the system is initially at the equilibrium. This corresponds to the stationary solution of Eqs.(34-40)
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu \ge 1}{\sum }}D_\mu +S_0^2=1,`$ (41)
$`D_\mu ={\displaystyle \frac{1}{\lambda _s+ϵ_\mu }},`$ (42)
$`\lambda _s=-ϵ_0,`$ (43)
$`G_\mu =\widehat{D}_\mu =\widehat{S}_0=\varphi =0,`$ (44)
where $`\lambda _s`$ is the value of $`\lambda `$ for this stable stationary solution. Eq.(43) follows from Eq.(36) because we assumed that there is a nonzero condensate density $`S_0.`$ Note that there are two equilibrium states with $`S_0>0`$ and $`S_0<0.`$
Our goal is to find the probability of the instanton transition from one state to the other. For definiteness, assume that the initial state is one with $`S_0>0.`$ Obviously, $`S_0`$ first decreases to zero during the instanton process and then it becomes negative. Only the first uphill part of the motion gives the contribution to the action, therefore we will consider only this uphill part of the trajectory. The end point of this uphill motion corresponds to an unstable fixed point of Eqs.(34-40). The condensate density $`S_0`$ is zero at this point therefore from Eqs.(34-40) we get the following equations corresponding to this stationary (but unstable) solution:
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu \ge 1}{\sum }}D_\mu `$ $`=`$ $`1,`$ (45)
$`D_\mu `$ $`=`$ $`{\displaystyle \frac{1}{\lambda _u+ϵ_\mu }},`$ (46)
$`G_\mu `$ $`=`$ $`\widehat{D}_\mu =\widehat{S}_0=\varphi =0,`$ (47)
where $`\lambda _u`$ is the value of $`\lambda `$ at this unstable stationary solution. Physically, it is natural to expect that this solution corresponds to the condensate in the first eigenstate $`\mu =1.`$ Indeed, taking $`\lambda _u=-ϵ_1+\frac{1}{NS_1^2},`$ where $`S_1`$ is the condensate at the first eigenstate $`\mu =1,`$ we get
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu \ge 2}{\sum }}D_\mu +S_1^2`$ $`=`$ $`1,`$ (48)
$`D_\mu `$ $`=`$ $`{\displaystyle \frac{1}{\lambda _u+ϵ_\mu }}.`$ (49)
These equations are similar to Eqs.(41-43) for the stable fixed point with the difference that the system condenses into the first eigenstate.
Now we need to find the trajectory connecting the fixed points mentioned above. It was shown in the previous section that an uphill trajectory can be mapped into a downhill trajectory going back in time. To show this, according to Eqs.(10,17), one should take
$`G_\mu `$ $`=`$ $`{\displaystyle \frac{1}{2}}\partial _tD_\mu ,`$ (50)
$`\widehat{S}_0`$ $`=`$ $`\partial _tS_0,`$ (51)
where $`D_\mu ,S_0`$ should satisfy the downhill equations with inverse time
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu \ge 1}{\sum }}D_\mu +S_0^2`$ $`=`$ $`1,`$ (52)
$`(-\partial _t+ϵ_0+\lambda )S_0`$ $`=`$ $`0,`$ (53)
$`-\partial _tD_\mu `$ $`=`$ $`-2(\lambda +ϵ_\mu )D_\mu +2`$ (54)
Indeed, taking
$`\varphi `$ $`=`$ $`\partial _t\lambda ,`$ (55)
$`\widehat{D}_\mu `$ $`=`$ $`{\displaystyle \frac{1}{4}}\partial _t^2D_\mu -{\displaystyle \frac{1}{2}}D_\mu \partial _t\lambda ,`$ (56)
along with Eqs.(50,51), one can show that Eqs.(34-40) are reduced to Eqs.(52-54). The trajectory connecting the stable and unstable saddle points can be found analytically (see Appendix A); the result is
$`\lambda (t)+ϵ_0`$ $`=`$ $`(\lambda _u+ϵ_0)f[2|\lambda _u+ϵ_0|(t_0-t)]`$ (57)
$`D_\mu (t)`$ $`=`$ $`{\displaystyle \frac{1}{\lambda _u+ϵ_\mu }}f[2|\lambda _u+ϵ_0|(t_0-t)]`$ (58)
$`+{\displaystyle \frac{1}{ϵ_\mu -ϵ_0}}f[2|\lambda _u+ϵ_0|(t-t_0)],`$
where
$$f[x]=\frac{1}{e^x+1},$$
(59)
where $`t_0`$ is an arbitrary finite time which reflects the translational invariance in time.
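A quick numerical check of the limits of this interpolation, for arbitrary toy values of the levels (our choice; the identification of the $`t\to -\mathrm{\infty }`$ coefficient with the stable value $`1/(ϵ_\mu -ϵ_0)`$ is how we read (58)):

```python
import numpy as np

# With f[x] = 1/(e^x + 1), f[+inf] = 0 and f[-inf] = 1, so D_mu(t) in (58)
# interpolates between the stable value 1/(eps_mu - eps_0) at t -> -inf and
# the unstable value 1/(lam_u + eps_mu) at t -> +inf.  Toy levels below.
f = lambda x: 1.0 / (np.exp(x) + 1.0)
eps0, eps1, eps_mu = -1.0, -0.8, 0.3     # arbitrary toy levels
lam_u, t0 = -eps1, 0.0                   # lam_u ~ -eps_1 near the unstable point
rate = 2.0 * abs(lam_u + eps0)

def D(t):
    return (f(rate * (t0 - t)) / (lam_u + eps_mu)
            + f(rate * (t - t0)) / (eps_mu - eps0))

for t in (-30.0, 0.0, 30.0):
    print(f"t = {t:+5.1f}: D = {D(t):.4f}")
print("stable   1/(eps_mu - eps0)  =", 1.0 / (eps_mu - eps0))
print("unstable 1/(lam_u + eps_mu) =", 1.0 / (lam_u + eps_mu))
```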
The last step is to calculate the action which determines the probability of the instanton process. Note that to find it, one should take $`Tr\mathrm{ln}`$ of the operator containing $`\lambda ,\varphi `$, which are functions of time. The necessary calculation is presented in Appendix B, and the answer, simplified with the help of Eqs.(34-40), is
$$A/N=\frac{1}{2}\int 𝑑t\left(\varphi +\frac{2}{N}\underset{\mu \ge 1}{\sum }\frac{G_\mu }{D_\mu }\right).$$
(60)
Using that $`\varphi =\partial _t\lambda `$ and $`G_\mu =\frac{1}{2}\partial _tD_\mu `$ we get
$$A/N=\frac{1}{2}\left(\lambda +\frac{1}{N}\underset{\mu \ge 1}{\sum }\mathrm{ln}D_\mu \right)_{t=t_i}^{t=t_f}.$$
(61)
Note that at the fixed points $`D_\mu =1/(ϵ_\mu +\lambda ),`$ therefore one can write
$$A=-\left[F(\lambda (t_f))-F(\lambda (t_i))\right],$$
(62)
where $`F(\lambda )`$ is the equilibrium free energy of this model
$$F(\lambda )/N=-\lambda /2+\frac{1}{2N}\underset{\mu \ge 1}{\sum }\mathrm{ln}(ϵ_\mu +\lambda ).$$
(63)
The result (62) is in agreement with the general result (23). As we mentioned in Sec.2 the free energy at the unstable fixed point cannot be defined thermodynamically. But in this simple model one can formally eliminate the unstable direction (i.e. impose the constraint $`S_0=0`$) and then it becomes possible to define the free energy thermodynamically. That is why we got the difference of the equilibrium free energies in (63). Note that the Lagrange multiplier $`\lambda `$ changes during the instanton process by $`|\lambda (t_f)-\lambda (t_i)|=ϵ_1-ϵ_0.`$ The typical distance between neighboring energy levels at the edges of the energy level distribution is of order of $`1/\sqrt{N}.`$ Therefore one can expand (63) in $`\mathrm{\Delta }\lambda `$ getting
$$A=-\frac{N}{2T}S_0^2(ϵ_1-ϵ_0),$$
(64)
where we restored the temperature $`T`$. Note that this action is of the order of $`\sqrt{N}.`$
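For a finite-size realization the ingredients of (64) are directly computable. The sketch below samples a Gaussian symmetric coupling matrix, extracts $`ϵ_0`$ and $`ϵ_1`$ from its spectrum (with the sign convention $`J\sigma ^\mu =-ϵ_\mu \sigma ^\mu `$), evaluates $`S_0^2`$ at the stable point from the constraint (41)-(43), and forms the magnitude of the barrier action at $`T=1`$; the normalization of the couplings is our own choice:

```python
import numpy as np

# Size of the barrier action (64) for random Gaussian couplings at T = 1.
# S_0^2 follows from the stable-point constraint
#   (1/N) sum_{mu>=1} 1/(eps_mu - eps_0) + S_0^2 = 1,
# and the gap eps_1 - eps_0 is read off the sampled spectrum.
rng = np.random.default_rng(4)
for N in (200, 400, 800):
    A = rng.standard_normal((N, N))
    eps = np.sort(-np.linalg.eigvalsh((A + A.T) / np.sqrt(2 * N)))
    S0_sq = 1.0 - np.sum(1.0 / (eps[1:] - eps[0])) / N
    action = 0.5 * N * S0_sq * (eps[1] - eps[0])
    print(f"N = {N:4d}: gap = {eps[1]-eps[0]:.4f}, S0^2 = {S0_sq:.3f}, |A| = {action:.2f}")
```

Since the gap $`ϵ_1-ϵ_0`$ fluctuates from sample to sample, so does the action, which is the non-self-averaging feature discussed in the conclusions below.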
## 4 Discussion and conclusions.
We developed a method that simplifies the calculation of the probability of rare processes such as transitions over the barriers. Our method is based on the Lagrangian approach to the dynamics, in which rare processes correspond to instantons. Generally, in order to obtain the probability of a particular transition between two given states one needs to apply boundary conditions both in the future and in the past; this destroys the causality of the theory: the response function $`G=\left\langle \widehat{s}s\right\rangle `$ becomes non causal and the anomalous correlation function $`\widehat{D}=\left\langle \widehat{s}\widehat{s}\right\rangle `$ appears. This complicates the description of the instanton processes.
The main result of this paper is that an instanton process can be mapped into a usual process going back in time. So, knowing the correlation functions of the corresponding normal process, one can construct the instanton correlation functions. We showed that this mapping gives the sensible probability to escape a free energy well, $`e^{-\mathrm{\Delta }F/T}`$, where $`\mathrm{\Delta }F`$ is the depth of the free energy well. The free energy at the end of the instanton trajectory cannot be defined thermodynamically because at this point the system is at an unstable equilibrium; instead it should be defined by $`F=E-TS`$, where $`E`$ is the energy and $`S`$ is the statistical entropy at the end of the instanton trajectory, i.e. the entropy constrained to the states orthogonal to the descending direction.
We applied this approach to the spherical SK model, whose usual dynamical properties were studied in Ref. . This model has just two ground states, corresponding to the energy level $`ϵ_0`$, and no other metastable states. Although the relaxation towards each of these two states is exponential, the model exhibits aging behavior when the system relaxes from a random spin configuration to equilibrium. We considered the instanton transition from one ground state to the other. In accordance with our general result, the equations describing the instanton process in this model can be transformed into the usual equations with inverted time. This transformation allows one to find the instanton trajectory analytically. The probability of this transition was found to be $`e^{-S_0^2N(ϵ_1-ϵ_0)/2T}`$, where $`ϵ_1`$ is the energy of the first (unstable) level. It shows that although the system has $`N`$ saddle points and a complicated phase space, the path connecting the two ground states with opposite magnetization might go via the saddle point with the lowest energy. The typical distance between neighboring energy levels at the edge of the energy spectrum is of order $`1/\sqrt{N}`$, therefore the action is of order $`\sqrt{N}`$. Note that the distance between the energy levels, $`ϵ_1-ϵ_0`$, is different for different samples; therefore the transition probability is not a self-averaging quantity. It would thus be very difficult in this problem to get the correct answer for the transition probability with any technique that involves averaging at the beginning of the calculation.
We hope that this method can be used to find the barriers between the metastable states in more complicated spin glasses, such as $`p>2`$ spin models or the SK model. A first attempt to apply the instanton method to the SK model was made in Ref. . This is a more complicated problem because there are many metastable states in these glasses, and therefore the averaging should be done at the beginning of the calculation. The $`\sqrt{N}`$ scaling of the action which we obtained for the $`p=2`$ model is probably specific to the spherical model, because the barriers in spin glasses with an exponential number of states are due to the nonlinearity of the dynamical equations, which is absent in the $`p=2`$ spherical model.
## Appendix A
In this Appendix we will find the trajectory connecting the unstable fixed point with the stable one. It is natural to invert time in Eqs.(52-54) so that they will describe the usual downhill motion:
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{\mu }{}}\stackrel{~}{D}_\mu +\stackrel{~}{S}_0^2`$ $`=`$ $`1,`$ (65)
$`(\partial _t+ϵ_0+\stackrel{~}{\lambda })\stackrel{~}{S}_0`$ $`=`$ $`0,`$ (66)
$`\partial _t\stackrel{~}{D}_\mu `$ $`=`$ $`-2(\stackrel{~}{\lambda }+ϵ_\mu )\stackrel{~}{D}_\mu +2,`$ (67)
where the tilde means inversion of time with respect to the instanton motion, for example $`\stackrel{~}{S}_0(t)=S_0(-t).`$ We need to find the solution of these equations corresponding to the downhill trajectory that begins at the unstable fixed point and ends at the stable one. Therefore the initial boundary condition ($`t=-\infty `$) is
$`\stackrel{~}{D}_\mu (-\infty )`$ $`=`$ $`{\displaystyle \frac{1}{ϵ_\mu +\lambda _u}},\mu \geq 1,`$ (68)
$`\stackrel{~}{S}_0(-\infty )`$ $`=`$ $`0,`$ (69)
$`\stackrel{~}{\lambda }(-\infty )`$ $`=`$ $`-ϵ_1+{\displaystyle \frac{1}{S_1^2}},`$ (70)
$`\stackrel{~}{\lambda }(-\infty )`$ $`\equiv `$ $`\lambda _u,`$ (71)
and the final one ($`t=+\infty `$) is
$`\stackrel{~}{D}_\mu (\infty )`$ $`=`$ $`{\displaystyle \frac{1}{ϵ_\mu -ϵ_0}},\mu \geq 1,`$ (72)
$`\stackrel{~}{\lambda }(\infty )`$ $`=`$ $`-ϵ_0,`$ (73)
$`\stackrel{~}{S}_0(\infty )`$ $`\ne `$ $`0.`$ (74)
The solution of Eq. (67) satisfying the boundary condition (68) is
$$\stackrel{~}{D}_\mu (t)=2\int _{-\infty }^tdt^{\prime }e^{-2\int _{t^{\prime }}^t[\stackrel{~}{\lambda }(t^{\prime \prime })+ϵ_\mu ]dt^{\prime \prime }}$$
(75)
Using Eqs.(66,67) one can write Eq.(65) in the form
$$\stackrel{~}{\lambda }+ϵ_0=-\frac{1}{N}\sum _\mu (ϵ_\mu -ϵ_0)\stackrel{~}{D}_\mu +1,$$
(76)
which will be more convenient for us. For simplicity let us take $`ϵ_0=0`$ in the remainder of this Appendix. Substitution of Eq. (75) into Eq. (76) gives
$$\stackrel{~}{\lambda }(t)=\frac{2}{N}\sum _\mu \int _{-\infty }^tdt^{\prime }e^{-2\int _{t^{\prime }}^t[\stackrel{~}{\lambda }(t^{\prime \prime })+ϵ_\mu ]dt^{\prime \prime }}\stackrel{~}{\lambda }(t^{\prime }).$$
(77)
Introducing the function
$$F(t)=\mathrm{\Gamma }(t)\stackrel{~}{\lambda }(t),$$
(78)
where $`\mathrm{\Gamma }(t)`$ is defined (up to multiplication by a constant) by
$$\frac{\mathrm{\Gamma }(t)}{\mathrm{\Gamma }(t^{\prime })}=e^{2\int _{t^{\prime }}^t\stackrel{~}{\lambda }(t^{\prime \prime })dt^{\prime \prime }},$$
(79)
one can write Eq. (77) as a linear integral equation for $`F(t)`$
$$F(t)=\frac{2}{N}\int _{-\infty }^tdt^{\prime }F(t^{\prime })\sum _\mu e^{-2ϵ_\mu (t-t^{\prime })},$$
(80)
which can be easily solved giving
$$F(t)=Ae^{2\lambda _ut},$$
(81)
where $`A`$ is an arbitrary constant. Now using Eq.(78) one can write
$$e^{2\lambda _u(t-t^{\prime })}=\frac{\stackrel{~}{\lambda }(t)}{\stackrel{~}{\lambda }(t^{\prime })}e^{2\int _{t^{\prime }}^t\stackrel{~}{\lambda }(t^{\prime \prime })dt^{\prime \prime }},$$
(82)
then taking the logarithm and differentiating with respect to $`t`$ we get a differential equation for $`\stackrel{~}{\lambda }`$
$$\partial _t\stackrel{~}{\lambda }(t)=2\stackrel{~}{\lambda }(t)[\stackrel{~}{\lambda }_u-\stackrel{~}{\lambda }(t)],$$
(83)
which can be solved giving
$$\stackrel{~}{\lambda }(t)=\stackrel{~}{\lambda }_uf[2|\stackrel{~}{\lambda }_u|(t-t_0)],$$
(84)
where
$$f[x]=\frac{1}{e^x+1},$$
(85)
and $`t_0`$ is an arbitrary time which reflects the translational invariance in time. In the case $`ϵ_0\ne 0`$ one can easily generalize Eq. (84) to
$$\stackrel{~}{\lambda }(t)+ϵ_0=(\stackrel{~}{\lambda }_u+ϵ_0)f[2|\stackrel{~}{\lambda }_u+ϵ_0|(t-t_0)].$$
(86)
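As a direct check (ours), using $`f^{\prime }(x)=-f(x)[1-f(x)]`$ and $`\stackrel{~}{\lambda }_u<0`$ (so that $`|\stackrel{~}{\lambda }_u|=-\stackrel{~}{\lambda }_u`$), differentiation of Eq. (84) gives
$$\partial _t\stackrel{~}{\lambda }=-2|\stackrel{~}{\lambda }_u|\stackrel{~}{\lambda }_uf(1-f)=2\stackrel{~}{\lambda }_u^2f(1-f)=2\stackrel{~}{\lambda }(\stackrel{~}{\lambda }_u-\stackrel{~}{\lambda }),$$
which is precisely Eq. (83).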
Knowing $`\stackrel{~}{\lambda }(t)`$ one can find $`\stackrel{~}{D}_\mu `$
$`\stackrel{~}{D}_\mu (t)`$ $`=`$ $`{\displaystyle \frac{1}{\lambda _u+ϵ_\mu }}f[2|\lambda _u+ϵ_0|(t-t_0)]`$ (87)
$`+{\displaystyle \frac{1}{ϵ_\mu -ϵ_0}}f[2|\lambda _u+ϵ_0|(t_0-t)].`$
## Appendix B
The main problem in the calculation of the action is to find
$$A_\mu \equiv \frac{1}{2}Tr\mathrm{ln}\left[\begin{array}{cc}2& \partial _t+\lambda +ϵ_\mu \\ -\partial _t+\lambda +ϵ_\mu & \varphi \end{array}\right].$$
(88)
Note that taking the variational derivative of (88) with respect to $`\lambda (t)`$ and $`\varphi (t)`$ we get respectively $`G_\mu (t,t)=1/2+G_\mu (t)`$ and $`\frac{1}{2}D_\mu (t).`$ The idea of our method of calculation of (88) is to find a functional which gives the same functions ($`G_\mu +\frac{1}{2}`$ and $`\frac{1}{2}D_\mu `$) when one takes the variations with respect to $`\lambda `$ and $`\varphi .`$ Up to boundary terms this functional should be equal to the action, and these boundary terms can be found from the requirement that the action should be zero for any downhill trajectory.
Now let us find the functional mentioned above: Eqs.(38-40) have the following invariant
$$D_\mu \widehat{D}_\mu -(1+G_\mu )G_\mu =c,$$
(89)
where $`c`$ is an arbitrary constant. But initially $`G_\mu =\widehat{D}_\mu =0,`$ therefore $`c=0.`$ The condition (89) can be satisfied automatically by introducing the new variables
$$D_\mu =\eta _\mu ^2,G_\mu =\eta _\mu \widehat{\eta }_\mu -1/2,\widehat{D}_\mu =\widehat{\eta }_\mu ^2-\frac{1}{4\eta _\mu ^2}.$$
(90)
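Indeed, with the substitution (90) the invariant vanishes identically (a one-line check, ours): $`D_\mu \widehat{D}_\mu =\eta _\mu ^2\widehat{\eta }_\mu ^2-1/4=(\eta _\mu \widehat{\eta }_\mu -1/2)(\eta _\mu \widehat{\eta }_\mu +1/2)=G_\mu (1+G_\mu )`$, consistent with $`c=0`$.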
The new variables $`\eta _\mu ,\widehat{\eta }_\mu `$ satisfy the following equations
$`\dot{\eta }_\mu =-(\lambda +ϵ_\mu )\eta _\mu +2\widehat{\eta }_\mu ,`$ (91)
$`\dot{\widehat{\eta }}_\mu =(\lambda +ϵ_\mu )\widehat{\eta }_\mu +\varphi \eta _\mu -{\displaystyle \frac{1}{2\eta _\mu ^3}}.`$ (92)
These equations can be obtained by taking the variations of the functional
$$\mathrm{\Gamma }_\mu =\int dt\left[\widehat{\eta }_\mu ^2-\widehat{\eta }_\mu (\partial _t+ϵ_\mu +\lambda )\eta _\mu -\frac{1}{2}\varphi \eta _\mu ^2-\frac{1}{4\eta _\mu ^2}\right]$$
(93)
with respect to $`\widehat{\eta }`$ and $`\eta .`$ Note that the variational derivatives of (88) and $`\mathrm{\Gamma }_\mu `$ with respect to $`\lambda ,\varphi `$ are the same if we take $`\mathrm{\Gamma }_\mu `$ at the saddle point with respect to $`\widehat{\eta },\eta ,`$ i.e. Eqs.(91,92) should be satisfied. Therefore the action $`A_\mu `$ can be written as
$$A_\mu =\mathrm{\Gamma }_\mu +T(\widehat{D}_\mu (t),D_\mu (t),G_\mu (t),\lambda (t),\varphi (t))|_{t=t_i}^{t=t_f},$$
(94)
where $`T`$ is an unknown function. The initial and final times $`t_i`$ and $`t_f`$ correspond to the fixed points of Eqs.(34-40), therefore the abnormal functions $`\widehat{D}_\mu ,G_\mu ,\varphi `$ should be zero at these points and we can simplify (94)
$$A_\mu =\mathrm{\Gamma }_\mu +T(D_\mu (t),\lambda (t))|_{t=t_i}^{t=t_f}.$$
(95)
Using the saddle point equations (34-40) and Eqs.(91,92) one can simplify the total action $`A`$, obtaining
$`A/N=-{\displaystyle \frac{1}{2}}{\displaystyle \int dt\left(\varphi +\frac{1}{N}\underset{\mu }{\sum }\frac{2G_\mu }{D_\mu }-\frac{1}{2N}\underset{\mu }{\sum }\partial _t\mathrm{ln}D_\mu \right)}`$
$`+{\displaystyle \underset{\mu }{\sum }}T(D_\mu (t),\lambda (t))|_{t=t_i}^{t=t_f}.`$ (96)
But this action should be zero for any downhill trajectory; therefore the third and fourth terms in (96) must cancel each other, and we finally get
$$A/N=-\frac{1}{2}\int dt\left(\varphi +\frac{1}{N}\sum _\mu \frac{2G_\mu }{D_\mu }\right).$$
(97)
# Scale Invariant Dynamics of Surface Growth
## I Introduction
The idea that scale invariance is at the origin of critical phenomena associated with equilibrium second order phase transitions has proven to be very fruitful. The analysis of scale transformations in equilibrium statistical systems, now known as renormalization group (RG), has indeed allowed for the explicit calculation of critical exponents and, moreover, has led to the introduction of new fundamental concepts such as scaling and universality.
The extension of the RG approach to non-equilibrium phenomena, where scale invariance is widely observed, and the identification of new universality classes, is of great importance from both theoretical and practical points of view. Technically, the RG ideas can be implemented in different ways. The most standard one for systems at equilibrium is to consider their stationary probability distribution written in terms of continuum coarse-grained fields, and study them perturbatively around their corresponding upper critical dimension. The most systematic way to extend the previous methods to non-equilibrium systems, where in general the stationary probability distribution is not known, is to cast them into a continuum dynamical equation , or equivalently into a generating functional or action . This last one can, in principle, be treated using the same perturbative techniques developed to deal with equilibrium systems. However, there are some cases where perturbative methods around a mean field solution are not suitable. In these cases the $`ϵ`$-expansion fails to give information on the relevant physics. This turns out to be governed by a strong coupling, perturbatively inaccessible fixed point. The prototypical example of this class of systems is the well known Kardar-Parisi-Zhang (KPZ) equation for surface growth , where the properties of rough surfaces have not been so far explained satisfactorily in generic spatial dimension. This is a problem of great theoretical importance since the KPZ describes not only the properties of rough surfaces, but is also related to the Burgers equation of turbulence , to directed polymers in random media , and to systems with multiplicative noise . In particular, one of the most debated issues in this context is the existence of an upper critical dimension, above which the system is well described by the nontrivial infinite dimensional limit.
Although the usual approach fails for KPZ, the presence of generic scale invariance suggests that also for this system the basic idea of the RG approach should be applicable in some form.
Real space approaches have proven useful wherever standard perturbative techniques fail . This, for example, is the case of fractal growth, and in particular of Diffusion-Limited-Aggregation. However, the attempts to apply standard real space techniques to the KPZ problem (and to surface growth in general) fail because of a fundamental technical difficulty: The anisotropy of the scaling properties of the system. That is, in order to cover a surface with blocks (in the Kadanoff sense), isotropic blocks cannot be used: Lengths in different directions must scale in different ways, and the relative shape of blocks has to depend upon the scale via an exponent that is unknown. This makes the application of real space RG procedures to surface growth processes conceptually nontrivial.
In this paper we investigate the scale invariant properties of generic interface growth processes through the introduction of a real-space method. To achieve this goal we introduce some new ingredients permitting us to overcome the aforementioned problem. In particular, we introduce the idea that the statistical properties of growing surfaces on large scales can be described in terms of an effective scale invariant dynamics for renormalized blocks. Such dynamics is the fixed point of a RG transformation relating the parameters of the dynamics at different coarse-graining levels. The study of the RG flow, of the fixed points and of their stability gives the universality classes and their associated exponents.
As a first application of the method, we study the KPZ growth dynamics and obtain accurate estimates for the roughness exponent (when compared with numerical results) in spatial dimensions from $`d=1`$ to $`d=9`$. Furthermore, an analytical approximation allows us to exclude the existence of a finite upper critical dimension for KPZ dynamics and suggests that the roughness exponent decays as $`1/d`$ for large dimensions, shedding light on a currently much debated issue.
In order to show the generality of the new real space scheme and test its accuracy, we also apply it to the well known linear theory, the Edwards-Wilkinson (EW) equation. We reproduce the expected behavior in different dimensions, confirming the general applicability of the method.
The paper is organized as follows. In section II we present the general RG method, the main concepts, the basic equations, and discuss all the approximations involved. In section III we review some results associated with KPZ growth and apply the new RG method to this problem. We present some simple analytical approximations, explicit results for spatial dimensions up to $`d=9`$, and discuss the large dimensional limit in detail. In section IV we report results on the analysis of the Edwards-Wilkinson equation. In section V a critical discussion of the method and of the results is reported. Partial accounts of the work presented here have already been published recently, with a slightly different notation.
## II Real Space RG for Surface Growth
In order to present the RG method let us consider a generic surface growth model where the height is a single-valued function $`h(\vec{x},t)`$, with $`\vec{x}`$ the position in a $`d`$-dimensional substrate and $`t`$ denoting time. The possibility of having overhangs will not be considered here, as they are known to be irrelevant for the asymptotic behavior of KPZ-like growth. The generic growth model under consideration can be described at the microscopic level either by a stochastic equation or by a discrete dynamical rule. In the first case $`h`$ and $`\vec{x}`$ are continuous variables, while in the latter they are discrete.
The roughness of a system, when considered on a substrate of linear size $`L`$, is defined by
$$W^2(L,t)=\frac{1}{L^d}\sum _{\vec{x}}\left[h(\vec{x},t)-\overline{h}(t)\right]^2,$$
(1)
where
$$\overline{h}(t)=\frac{1}{L^d}\sum _{\vec{x}}h(\vec{x},t).$$
(2)
If we start the growth process from a flat configuration, for short times the roughness grows as
$$W(L,t)\sim t^\beta $$
(3)
until it reaches a stationary state characterized by
$$W(L)\sim L^\alpha .$$
(4)
The crossover between the two behaviors occurs at a characteristic time $`t_s`$, that scales with $`L`$ as $`L^z`$. This is the time scale over which correlations decay in the stationary state. The exponents $`\alpha `$, $`\beta `$ and $`z`$ are to a large extent universal for many different growth processes, and are related by the trivial scaling relation $`\beta =\alpha /z`$.
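The definitions (1)-(4) are straightforward to evaluate numerically. As an illustration (ours, not part of the original presentation), the following minimal Python sketch computes the squared width of a height field and checks it on uncorrelated random deposition, the $`x_k=0`$ limit of the dynamics introduced below, for which the heights are independent Poisson variables and hence $`W^2=t`$, i.e. $`\beta =1/2`$:

```python
import numpy as np

def roughness2(h):
    # squared width W^2 of Eqs. (1)-(2); h is the height field on a
    # d-dimensional substrate (an array of any shape)
    return float(np.mean((h - h.mean()) ** 2))

# random deposition: after an average of t deposition events per site the
# heights are independent Poisson(t) variables, so W^2 ~ t, i.e. beta = 1/2
rng = np.random.default_rng(0)
t = 100.0
h = rng.poisson(t, size=(64, 64))
print(roughness2(h))  # close to t = 100
```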
We now introduce the real space renormalization group (RSRG) procedure aimed at the study of the stationary state and in particular at the determination of the roughness exponent $`\alpha `$. The following subsections are structured as follows. In A) we introduce the geometric elements or blocks (equivalent to the Kadanoff blocks in standard RSRG methods) suitable to deal with anisotropic situations. In B) we discuss the effective dynamics of the previously defined blocks at a generic scale. In C) we introduce the RG equation and explain how the roughness exponent is determined. Finally in D) we analyze critically the approximations involved in general in the method.
### A Geometric description
The first nontrivial problem in the development of a RSRG approach is to find a sensible description of the geometry of the growing surface at a generic scale, i. e. how to build the analog of a block-spin transformation. Given the anisotropy of the system, the shape of the blocks must depend on the scale. Therefore, subdividing a cell in subcells is not a feasible task and the explicit construction of the block-spin transformation is not possible.
Hence we develop an alternative strategy. To obtain a description at a generic scale $`k`$ of the growing surface, we consider a partitioning of the $`(d+1)`$-dimensional space in cells of lateral size $`L_k=L_0b^k`$ and vertical size $`h_k`$. Here $`b`$ is a constant and $`k`$ labels the scale (Fig. 1).
A cell is declared to be empty or filled according to a majority rule. In this way we pass from the microscopic description $`h(\vec{x},t)`$ to a coarse-grained one at scale $`k`$, fully defined by the number $`h(i,k,t)`$ of filled blocks in the column $`i`$. Heights at scale $`k`$ are measured in units of $`h_k`$. The only characteristic vertical length at scale $`k`$ is that fixed by the typical intrinsic fluctuations of the surface over a lateral size $`L_k`$. This suggests taking
$$h_k=\sqrt{c}W(L_k)\sim L_k^\alpha ,$$
(5)
where $`\sqrt{c}`$ is a proportionality constant that will be discussed later. This equation expresses the requirement of scale invariance in the geometric description. Any other choice would result either in a redundant description (if $`h_k/W(L_k)\to 0`$ as $`k\to \infty `$), where too many (infinitely many) blocks would be needed to describe fluctuations in the same column, or in a too coarse description (if $`h_k/W(L_k)\to \infty `$ as $`k\to \infty `$). By imposing Eq. (5), we always have a meaningful covering of the surface upon scale changes. Observe that since in general $`\alpha \ne 1`$, the shape (i.e. the ratio of vertical to horizontal length) of the blocks changes with the scale $`k`$. Contrary to the usual RG approach, the definition of the block-spin transformation depends explicitly on the roughness exponent $`\alpha `$, whose calculation is the final goal of the method.
The constant $`c`$ in Eq. (5) fixes the unit of measure of our blocks. Its optimal value can be determined as follows. The distribution of microscopic height fluctuations within a block can be mapped into an effective distribution with the same average $`\overline{h}`$ and standard deviation. For simplicity we take it to be bimodal
$`P[h(x)]`$ $`=`$ $`p\delta \{h(x)-[\overline{h}+(1-p)h_k]\}`$ (6)
$`+(1-p)\delta \{h(x)-[\overline{h}-ph_k]\}.`$ (7)
This distribution results from mapping all points with microscopic height larger than $`\overline{h}`$ to $`\overline{h}+(1-p)h_k`$ and those smaller than $`\overline{h}`$ to $`\overline{h}-ph_k`$. The parameter $`p`$ describes the degree of asymmetry of the distribution: the fluctuations inside a block can then be calculated, using Eq. (7), as
$$W^2(L_k)=p(1-p)h_k^2,$$
(8)
which implies that the constant $`c`$ is given by
$$c=\frac{1}{p(1-p)}.$$
(9)
For a symmetric distribution $`p=1/2`$ and therefore $`c=4`$. In general the height distribution is not symmetric, i.e. there is some nonvanishing skewness, and one must consider $`c\ne 4`$.
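Explicitly, the variance of the bimodal distribution (6)-(7) is $`p[(1-p)h_k]^2+(1-p)[ph_k]^2=p(1-p)h_k^2`$, which is Eq. (8); Eq. (9) then follows from the requirement $`h_k^2=cW^2(L_k)`$ of Eq. (5).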
### B Dynamic description
The second step in the construction of the RG procedure is the definition of the effective dynamics at a generic scale $`k`$, i.e. the determination of the growth rules for the blocks defined in the previous subsection. The effective dynamics will depend on a set of scale dependent parameters. The changing of scale induces a flow in the parameter space whose fixed points correspond to the scale invariant dynamics.
Analogously to what happens in the usual application of the RG approach to equilibrium systems, it may happen that mechanisms not appearing in the microscopic rule are generated upon coarse-graining. In the language of equilibrium systems this means that operators not included in the bare Hamiltonian can be generated iteratively. Conversely, microscopic ingredients can prove to be irrelevant and be progressively eliminated when going to coarser scales. Therefore, exactly as in the equilibrium case, the choice of the parametrization of the effective dynamics is not trivial: principles such as the preservation of symmetries and conservation laws must be the guidelines. In general, the effective dynamics will be defined in terms of the transition rates for the addition of occupied blocks at a generic coarse-grained scale, that is,
$$r[h(i,k)\to h^{\prime }(i,k)]=r(x_k^1,x_k^2,\mathrm{\dots },x_k^n).$$
(10)
The number of parameters $`x_k^i`$ is in principle arbitrary, although in the applications presented below it will be limited to one. It is clear that the more complete the parametrization, the better the final description of the statistical scale invariant state. We will discuss this problem in detail in subsection D.
### C The RG Equations
So far we have defined the geometrical and dynamical aspects of the coarse-graining procedure. These give us the necessary ingredients to introduce the RG transformation. The explicit derivation of it is based on the following property of the roughness $`W`$. Let us consider a $`d`$-dimensional system of linear size $`L`$ and partition it in $`(L/b)^d`$ blocks of size $`b^d`$ (labeled by the index $`j`$). It is straightforward to verify that the total roughness can be decomposed as
$`W^2(L)`$ $`=`$ $`{\displaystyle \frac{1}{(L/b)^d}}\underset{j=1}{\overset{(L/b)^d}{\sum }}\left\{{\displaystyle \frac{1}{b^d}}\underset{i\in j}{\sum }\left[h(i)-\overline{h}(j)\right]^2\right\}`$ (11)
$`+`$ $`{\displaystyle \frac{1}{(L/b)^d}}\underset{j=1}{\overset{(L/b)^d}{\sum }}\left[\overline{h}(j)-\overline{h}\right]^2,`$ (12)
where $`\overline{h}(j)`$ is the average height within block $`j`$. The interpretation of this formula is simple: The first term on the right hand side is the averaged value of the roughness within blocks of size $`b^d`$, while the second term is the fluctuation of the average value of $`h`$ among blocks.
In our coarse-graining procedure this property is read as follows: If one takes $`L=L_{k+1}=bL_k`$ the first term on the right hand side is $`W^2(L_k)`$, the total roughness within a block of size $`L_k`$; the second is the roughness of the configuration in which blocks of size $`L_k`$ are considered as flat objects. This second contribution is obviously proportional to the square of the height of a block $`h_k^2`$. Hence, employing Eq. (5),
$`W^2(L_{k+1})`$ $`=`$ $`W^2(L_k)+\omega ^2(b,k)h_k^2`$ (13)
$`=`$ $`\left[1+c\omega ^2(b,k)\right]W^2(L_k)`$ (14)
$`=`$ $`F_b(k)W^2(L_k),`$ (15)
where $`\omega ^2(b,k)`$ is the roughness in the stationary state of a system of $`b^d`$ sites of unit height that evolves according to the dynamical rules specified by $`(x_k^1,x_k^2,\mathrm{},x_k^n)`$, and
$$F_b(k)\equiv \left[1+c\omega ^2(b,k)\right].$$
(16)
Note that the dependence on the scale $`k`$ is only through the parameters $`\{x_k\}`$.
Eq. (15) is the equation that relates the width at scales $`k`$ and $`k+1`$. In order to proceed further, we must evaluate the function $`F_b(k)`$, or equivalently $`\omega ^2(b,k)`$. To do so, we identify all the possible surface configurations of a system composed of $`b^d`$ sites, and write down a master equation for their associated probabilities $`\rho _i`$
$$\partial _t\rho _i=\sum _j\rho _jP_{ji}-\rho _i\sum _jP_{ij}.$$
(17)
$`P_{ij}`$ is the rate for the transition from configuration $`i`$ to configuration $`j`$ and depends on the set of parameters $`\{x_k\}`$. Imposing the stationarity condition $`\partial _t\rho _i=0`$ and the normalization $`\sum _i\rho _i=1`$, the master equation can be solved. If we call $`W_i^2`$ the roughness of configuration $`i`$, then we can write
$$\omega ^2(b,k)=\underset{i}{}\rho _i(k)W_i^2.$$
(18)
Depending on the particular structure of the master equation the explicit solution of the previous equation may be difficult or impossible. In such cases it may be more useful to determine $`\omega ^2(b,k)`$ numerically by performing (relatively small) Monte Carlo simulations. We will describe examples of both analytical and numerical computations of $`\omega ^2(b,k)`$.
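To make the analytical route concrete, the following Python sketch (ours; as a test case it uses the $`b=2`$, $`\mathrm{\Delta }h_{max}=1`$ rates of Eqs. (31)-(34) below) solves the stationary master equation (17) for a small set of configurations and evaluates Eq. (18):

```python
import numpy as np

def stationary_roughness(P, W2):
    # P[i, j]: transition rate from configuration i to configuration j
    # W2[i]:   roughness of configuration i
    n = P.shape[0]
    Q = P - np.diag(P.sum(axis=1))     # generator of the master equation
    A = np.vstack([Q.T, np.ones(n)])   # stationarity rho . Q = 0 ...
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0                      # ... plus normalisation sum(rho) = 1
    rho, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return float(rho @ W2)

x = 1.0                                # "lateral growth" parameter x_k
P = np.array([[0.0, 2.0],
              [1.0 + 2.0 * x, 0.0]])   # b = 2 rates, Eqs. (31)-(34) below
print(stationary_roughness(P, np.array([0.0, 0.25])))  # 1/(2(3+2x)) = 0.1
```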
Let us suppose now that $`\omega ^2(b,k)`$ has been determined. Eq. (15) gives an explicit relation between the roughness at two different scales. Observe that so far the scale invariance idea has not been implemented. We have just studied how the width changes upon changing the level of description. The last task to be performed is the determination of the RG transformation relating the parameters of the dynamics at scale $`k`$ with those at scale $`k+1`$. This is done by means of a self-consistency requirement for the description of the same system at two different levels of detail, i.e. the total width of a system should be independent of the size of the blocks we use to describe it. To make this idea more precise, let us consider the case of a dynamics parametrized by only one parameter $`x_k`$. Let us take a system of size $`L=L_{k+2}`$. By applying Eq. (15) we have
$$W^2(L_{k+2})=F_b(x_{k+1})W^2(L_{k+1}).$$
(19)
This procedure can be iterated again on each of the resulting systems of size $`L_{k+1}`$, obtaining
$$W^2(L_{k+2})=F_b(x_{k+1})F_b(x_k)W^2(L_k).$$
(20)
The same quantity can alternatively be computed by considering directly the whole system as composed by $`b^{2d}`$ systems of size $`L_k`$. Applying again Eq. (15)
$$W^2(L_{k+2})=F_{b^2}(x_k)W^2(L_k).$$
(21)
Imposing the consistency of the two procedures one has an implicit RG transformation for $`x_k`$
$$F_b(x_{k+1})=\frac{F_{b^2}(x_k)}{F_b(x_k)},$$
(22)
or explicitly
$$x_{k+1}=R(x_k)\equiv F_b^{-1}\left[\frac{F_{b^2}(x_k)}{F_b(x_k)}\right].$$
(23)
This equation provides the evolution of the parameter under a change of scale.
If a fixed point $`x^{*}`$ such that
$$x^{*}=R(x^{*})$$
(24)
exists, then the parameter $`x^{*}`$ characterizes the scale invariant dynamics of the system. The knowledge of it directly allows the determination of the exponent $`\alpha `$. Since $`W^2(L_{k+1})/W^2(L_k)`$ is equal to $`b^{2\alpha }`$ we have
$`\alpha `$ $`=`$ $`\underset{k\to \infty }{lim}{\displaystyle \frac{1}{2}}\mathrm{log}_b\left[{\displaystyle \frac{W^2(L_{k+1})}{W^2(L_k)}}\right]`$ (25)
$`=`$ $`\underset{k\to \infty }{lim}{\displaystyle \frac{1}{2}}\mathrm{log}_bF_b(k)={\displaystyle \frac{1}{2}}\mathrm{log}_bF_b(x^{*}).`$ (26)
To analyze the stability of the fixed point we linearize the RG transformation around it
$$x_{k+1}-x^{*}=R(x_k)-x^{*}\simeq R^{\prime }(x^{*})(x_k-x^{*}).$$
(27)
Hence if $`|R^{\prime }(x^{*})|<1`$ the scale invariant dynamics specified by $`x^{*}`$ is an attractive fixed point under changes of scale.
Extension of the previous formalism to the case of $`n`$ parameters of the dynamics is straightforward. The $`n`$ RG transformations are obtained by imposing the consistency of the description of the same system when divided into $`2^d`$ and $`4^d`$ blocks, into $`4^d`$ and $`16^d`$ blocks, and so on.
### D Approximations
Let us discuss now the approximations involved in the method. There are two steps where approximations come into play: The first is the choice of the parametrization of the scale invariant dynamics. The second is the computation of $`\omega ^2(b,k)`$.
With respect to the first problem, it is reasonable to expect that under coarse-graining the microscopic dynamics will flow towards a scale invariant dynamics depending in principle on an infinite number of parameters. This proliferation is analogous to what happens in RSRG approaches to equilibrium systems. The restriction to a finite (and small) number of parameters involves unavoidably an approximation, due to the projection of the RG flow onto the sub-space spanned by these parameters.
However, a very important difference with respect to equilibrium critical phenomena is that here the scale invariant dynamics is “self-organized critical”, that is, there are no relevant operators. Only irrelevant fields, with negative scaling dimensions, need to be parametrized. The system is by definition on the critical manifold and, by iteration of the RSRG transformation, it converges to the stable fixed point, without any fine tuning of parameters. The projection onto a low-dimensional parameter space yields a projected RG flow which will share these same properties. The fixed point in this sub-space, being the projection of the actual fixed point in the high-dimensional space, will have the same qualitative properties. Even the simplest parametrization capturing the correct symmetries of the dynamics can provide a quite accurate determination of the properties of the system in this case. On the contrary, when relevant fields are present, as in second order phase transitions, truncation effects are quite dramatic. The reason is that relevant fields have, in general, a non-vanishing component on any discrete (lattice) operator. Any approximation due to truncation is amplified by the RG iteration thus driving the flow out of the critical manifold along the relevant directions. The determination of the fixed point becomes then very difficult.
The second source of approximation is the computation of $`\omega ^2(b,k)`$. As stated above this quantity is the stationary roughness of a system composed of $`b^d`$ substrate sites evolving according to the dynamical rules specified by the parameters $`(x_k^1,\mathrm{},x_k^n)`$. This is a perfectly well defined quantity that may in principle be computed to any degree of accuracy by solving the master equation. However very often the structure of the master equation is too complicated to allow for a full solution. One then has to devise suitable simplifications to make the analytical computation feasible. This involves approximations that affect the final result. We will see an example of this way of proceeding and discuss how the effect of the approximation can be controlled. Alternatively, when $`b`$ and $`d`$ are not too large, one can resort to the numerical evaluation of $`\omega ^2(b,k)`$. In practice this boils down to performing Monte Carlo simulations of small systems evolving with different values of the parameters $`\{x_k\}`$. It is important to stress that the MC procedure involves no approximation, except for the fluctuations associated with statistical sampling. We will describe below an example of this alternative way of computing $`\omega ^2(b,k)`$.
A delicate issue is also the choice of the boundary conditions. In the conceptual framework described above, $`\omega ^2(b,k)`$ is the roughness of a section of size $`b`$ of an infinitely extended surface. This would suggest the use of open boundary conditions. On the other hand, when integrating out degrees of freedom relative to height fluctuations inside the cell, one should not consider the fluctuation of the average slope. This slope effect is eliminated if one uses closed (i.e. periodic) boundary conditions. Even though the choice of the appropriate boundary conditions is not trivial, we will see, in the KPZ case, that the use of periodic or open boundary conditions has little effect on the value of the exponent. Furthermore, one expects that both truncation errors and those induced by neglecting fluctuations of boundary conditions vanish as the parameter $`b`$ grows. Arguments in support of this conclusion are reported in the Appendix I.
## III RG for KPZ dynamics
### A The problem of KPZ growth
The Kardar-Parisi-Zhang equation is the minimal continuum equation capturing the physics of rough surfaces. Since its appearance in 1986, an overwhelming number of studies have been devoted to elucidating its properties. It reads
$$\frac{\partial h(x,t)}{\partial t}=\nu \nabla ^2h+\frac{\lambda }{2}(\nabla h)^2+\eta (x,t),$$
(28)
where $`h(x,t)`$ is a height variable at time $`t`$ and position $`x`$ in a $`d`$-dimensional substrate of linear size $`L`$. $`\nu `$ and $`\lambda `$ are constants and $`\eta `$ is a gaussian white noise. As a consequence of the tilting (Galilean) invariance $`\alpha +z=2`$, and since in general $`z=\alpha /\beta `$, there is only one independent exponent, say $`\alpha `$. The difference between the KPZ equation and the linear (Edwards-Wilkinson) equation, describing surfaces growing under the effect of random deposition and surface tension, is the presence of a nonlinear term proportional to $`\lambda `$. This nonlinear term is generated by microscopic processes giving rise to lateral growth, i.e. the fact that the growth velocity is normal to the local surface orientation.
Exact results indicate that in $`d=1`$ there is only a rough phase for KPZ with $`\alpha =1/2`$. Instead, standard field theoretical methods predict the presence of a roughening transition above $`d=2`$; i.e., there are two attractive RG fixed points and an unstable fixed point separating them. More specifically, there is a gaussian fixed point with $`\alpha =0`$ describing a flat phase (characterized by a vanishing renormalized nonlinear coupling) and a nontrivial one describing the rough phase (in which the renormalized nonlinear coupling diverges in perturbation theory). Perturbative methods fail to give any prediction for the exponents in the rough phase. For $`d>2`$, an $`ϵ`$-expansion ($`d=2+ϵ`$) around the gaussian solution can be performed and the exponents at the roughening transition evaluated to all orders in perturbation theory. These results seem to indicate the presence of an anomaly in $`d=4`$ for the roughening transition. This has been interpreted as an indication that $`d_c=4`$ is the upper critical dimension for the rough phase, i.e., for the strong coupling fixed point. Above this dimension the exponents should take the values known for $`d=\infty `$. Applications of non-perturbative methods such as the functional renormalization group and Flory-type arguments also suggested that $`d_c=4`$, in agreement with a $`1/d`$-expansion around the $`d=\infty `$ limit. The mode-coupling approximation led to contradictory results, suggesting the existence of a finite $`d_c`$ or $`d_c=\infty `$. Arguments for a finite $`d_c`$ based on directed or invasion percolation have also been proposed.
On the other hand, numerical results seem to indicate that the exponent $`\alpha `$ decays continuously with the system dimensionality up to $`d=7`$, thereby excluding $`d=4`$ as the upper critical dimension.
Finally, some doubts have been cast on the validity of the continuum approach to study rough surfaces. Summing up, the behavior of the KPZ dynamics for $`d\geq 2`$ is a highly debated issue, and it is extremely desirable to have alternative approaches shedding light on the problem. In what follows we present the application of our new RG scheme to KPZ growth.
### B Simplest RG scheme
#### 1 Parametrization of the dynamics
The modelization of the dynamics at a generic scale should keep the number of parameters to a minimum and capture all the relevant physical mechanisms of the process. The main feature of the KPZ dynamics is lateral growth. Therefore we take as the only parameter defining the dynamics at a generic scale $`k`$ the ratio $`x_k`$ between lateral and vertical growth (i.e. random deposition). More formally, the growth rate for the addition of an occupied block on column $`i`$ is
$`r_i`$ $`\equiv `$ $`r\left[h(i)\to h(i)+1\right]`$ (29)
$`=`$ $`1+x_k{\displaystyle \underset{jn.n.i}{\sum }}\mathrm{max}[0,h(j)-h(i)].`$ (30)
Eq. (30) states that the rate for lateral growth is proportional to the difference in height between neighboring columns (Fig. 2). Overhangs, known to be irrelevant on large scales, are not allowed. This dynamics can be seen as a generalization of the Eden growth model.
A few observations are in order. We call the parameter $`x`$ appearing in Eq. (30) the “lateral growth” parameter, but this is an abuse of language: $`x_k`$ cannot be identified with the parameter $`\lambda `$ of the KPZ equation. Instead, the term multiplying $`x_k`$ in Eq. (30) is a combination of the discretized Laplacian, of the discretized square gradient, and of other discrete operators. The explicit dependence of $`x`$ on $`\nu `$ and $`\lambda `$ cannot be disentangled. Other parametrizations are clearly possible and will be discussed below.
Eq. (30) has the nice feature that it contains as limiting cases both the random deposition process ($`x_k=0`$) and the infinitely strong “lateral growth” ($`x_k=\infty `$) leading to flat surfaces. Most importantly, it is easy to see that $`x^{*}=\infty `$ is, by construction, a fixed point of the RSRG with $`\alpha =0`$. This feature makes possible the determination of the upper critical dimension, above which the stable solution leads to $`\alpha =0`$. In this situation we expect $`x^{*}=\infty `$ to be an attractive fixed point. Below the critical dimension, on the other hand, the fixed point $`x^{*}=\infty `$ must be unstable and an intermediate fixed point with finite $`\alpha `$ must appear. By accommodating a fixed point at $`x^{*}=\infty `$, the RSRG naturally allows one to address the issue of the upper critical dimensionality.
#### 2 d=1
We restrict ourselves for the moment to the one-dimensional case and illustrate in detail the application of the RG approach, i.e. the computation of $`\omega ^2(b,k)`$ and the determination of the scale invariant dynamics and of the exponent $`\alpha `$. It is very instructive to consider first the dynamics of Eq. (30) supplemented by the condition that the height difference between adjacent columns is restricted to $`|\mathrm{\Delta }h|\leq \mathrm{\Delta }h_{max}`$, with $`\mathrm{\Delta }h_{max}=1`$. This greatly reduces the number of possible surface configurations, allowing for a full analytical treatment. For a system of size $`b=2`$, assuming periodic boundary conditions, there are only 2 nonequivalent configurations, while for a system of size 4 there are 6 of them (Fig. 3). Using the definition Eq. (30) of the growth rates for the addition of a block, one has simply, for $`b=2`$,
$`P_{11}`$ $`=`$ $`0`$ (31)
$`P_{12}`$ $`=`$ $`2`$ (32)
$`P_{21}`$ $`=`$ $`1+2x_k`$ (33)
$`P_{22}`$ $`=`$ $`0.`$ (34)
In configuration 1 only vertical growth is possible (in two sites) leading always to configuration 2. Only one site can instead grow in configuration 2 and the rate for this is the sum of the rate of one vertical and two lateral contributions. Hence the master equation reads
$`{\displaystyle \frac{\partial \rho _1}{\partial t}}`$ $`=`$ $`(1+2x_k)\rho _2-2\rho _1`$ (35)
$`{\displaystyle \frac{\partial \rho _2}{\partial t}}`$ $`=`$ $`2\rho _1-(1+2x_k)\rho _2.`$ (36)
Imposing the stationarity condition $`\partial _t\rho _1=\partial _t\rho _2=0`$ and the normalization $`\rho _1+\rho _2=1`$ one has
$$\rho _2=\frac{2}{3+2x_k}$$
(37)
and then considering that the width associated with configurations 1 and 2 is, respectively, $`0`$ and $`1/4`$
$$\omega ^2(2,x_k)=\frac{2}{4(3+2x_k)}.$$
(38)
For the system of size $`b^2=4`$ one finds in an analogous way
$$\omega ^2(4,x_k)=\frac{51+86x_k+40x_k^2}{4(47+106x_k+68x_k^2+8x_k^3)}.$$
(39)
Plugging these two expressions into the RG equation (22) with $`c=4`$ (in $`d=1`$ the height distribution is known to be symmetric), one finds that the explicit form of the RG transformation is
$$R(x)=\frac{293+804x+636x^2+160x^3+32x^4}{2(59+148x+156x^2+64x^3)}$$
(40)
and that there exists only one finite fixed point, at
$$x^{*}\simeq 2.08779\mathrm{\dots }$$
(41)
Such a fixed point is attractive since
$$R^{\prime }(x^{*})\simeq -0.03548\mathrm{\dots }$$
(42)
Hence, no matter how small or large the microscopic value of $`x`$ is, upon coarse-graining the dynamics flows towards an attractive scale invariant dynamics, characterized by a ratio $`x^{*}`$ of the lateral to vertical growth rates. The roughness associated with this scale invariant dynamics is
$$\alpha =\frac{1}{2}\mathrm{log}_2F_{b=2}(x^{*})\simeq 0.177352\mathrm{\dots }$$
(43)
that must be compared with the known exact value $`\alpha =1/2`$.
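For reference, the fixed point (41) and the exponent (43) can be reproduced with a few lines of Python (ours; the fixed point condition of Eq. (22) is imposed in the equivalent form $`F_2^2(x^{*})=F_4(x^{*})`$ and solved by plain bisection):

```python
from math import log

def F2(x):  # Eq. (16) with c = 4 and omega^2(2, x) of Eq. (38)
    return 1.0 + 2.0 / (3.0 + 2.0 * x)

def F4(x):  # Eq. (16) with c = 4 and omega^2(4, x) of Eq. (39)
    num = 51 + 86 * x + 40 * x**2
    den = 47 + 106 * x + 68 * x**2 + 8 * x**3
    return 1.0 + num / den

def g(x):   # fixed point of Eq. (22): F2(x)^2 = F4(x)
    return F4(x) / F2(x) - F2(x)

lo, hi = 1.0, 3.0                 # g changes sign on this interval
for _ in range(60):               # bisection
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
alpha = 0.5 * log(F2(x_star), 2)  # Eq. (43)
print(x_star, alpha)              # ~2.08779 and ~0.177352
```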
The apparent poor performance of the method is due to the assumption that $`\mathrm{\Delta }h_{max}=1`$, which allows for full analytical treatment, but is clearly wrong. The point is that even if at the microscopic level the dynamics is of restricted type, the effective dynamics at generic scale defined by the renormalization procedure will proliferate in a non restricted one. Allowing larger steps ($`\mathrm{\Delta }h_{max}>1`$) increases the number of superficial configurations and makes the analytical determination of the function $`\omega ^2(b,x_k)`$ impossible. Still this task can be performed numerically via simulation of systems of such a small size. Fig. 4 reports the results obtained by considering increasing values of $`\mathrm{\Delta }h_{max}`$. The value of $`x^{}(\mathrm{\Delta }h_{max})`$ converges already for $`\mathrm{\Delta }h_{max}=8`$ to $`x^{}0.726`$, corresponding to a value $`\alpha =0.507\mathrm{}`$ in excellent agreement with the exact value. Further increases of $`\mathrm{\Delta }h_{max}`$ do not change the results, indicating that in the scale invariant dynamics the probability of steps larger than 8 is negligible.
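The numerical determination of $`\omega ^2(b,x_k)`$ just described can be implemented along the following lines; this is a minimal kinetic Monte Carlo sketch (our illustration, with hypothetical parameter choices), in which restricted moves are assigned zero rate and stationary averages are weighted by the mean dwell time of each visited configuration:

```python
import random

def omega2_mc(b, x, dh_max=8, steps=200000, burn_in=20000, seed=0):
    # stationary roughness omega^2(b, x) of a d = 1 system of b sites
    # evolving with the rates of Eq. (30), periodic boundary conditions,
    # height steps restricted to |Delta h| <= dh_max
    rng = random.Random(seed)
    h = [0] * b
    t_tot, acc = 0.0, 0.0
    for step in range(steps):
        rates = []
        for i in range(b):
            left, right = h[(i - 1) % b], h[(i + 1) % b]
            if abs(h[i] + 1 - left) > dh_max or abs(h[i] + 1 - right) > dh_max:
                rates.append(0.0)     # restricted move: zero rate
            else:                     # vertical + lateral growth, Eq. (30)
                rates.append(1.0 + x * (max(0, left - h[i])
                                        + max(0, right - h[i])))
        total = sum(rates)
        if step >= burn_in:           # accumulate dwell-time weighted W^2
            mean = sum(h) / b
            w2 = sum((hi - mean) ** 2 for hi in h) / b
            acc += w2 / total
            t_tot += 1.0 / total
        u, s = rng.random() * total, 0.0
        for i, ri in enumerate(rates):  # grow column i with prob. ~ rates[i]
            s += ri
            if u <= s:
                h[i] += 1
                break
    return acc / t_tot

print(omega2_mc(2, 1.0, dh_max=1))    # exact value: 1/(2(3+2x)) = 0.1
```

With $`b=2`$ and $`\mathrm{\Delta }h_{max}=1`$ this reproduces the analytical result (38); pairs such as $`\omega ^2(2,x)`$ and $`\omega ^2(4,x)`$ estimated in this way are then fed into Eq. (22).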
#### 3 $`d>1`$
The computation via Monte Carlo of $`\omega ^2(2,x_k)`$ and $`\omega ^2(4,x_k)`$ can be performed with very little computational effort also in higher dimensions. We considered $`d=1,\mathrm{\dots },9`$, with less than a week of CPU time on a workstation. The results are reported in Table I and summarized graphically in Fig. 5. We find a finite attractive fixed point for all dimensions, with an exponent $`\alpha `$ in remarkably good agreement with the best numerical results available. This is the first theoretical approach providing estimates for the roughness exponent that match numerics in all dimensions. No anomalies are found for $`d=4`$, where other approaches find an upper critical dimension. The extrapolation to $`d\to \infty `$ suggests that the fixed point is always stable and that $`\alpha `$ decreases with $`d`$ but remains always nonvanishing. The fixed point parameter $`x^{*}`$ grows exponentially with the dimension. These results are confirmed by an analytical expansion of the method in high dimensions, which is presented in subsection D.
### C Robustness of the results
#### 1 $`b>2`$
In order to analyze the stability of the results upon increasing the value of $`b`$ it is convenient to introduce the function
$$\alpha _b(x)=\frac{1}{2}\mathrm{log}_bF_b(x).$$
(44)
With this definition one can express the fixed point condition Eq. (24) as
$$\alpha _b(x^{*})=\alpha _{b^2}(x^{*}),$$
(45)
and see that the fixed point is stable if
$$|R^{\prime }(x^{*})|=\left|2\frac{\alpha _{b^2}^{\prime }(x^{*})}{\alpha _b^{\prime }(x^{*})}-1\right|<1,$$
(46)
i. e.
$$0<\frac{\alpha _{b^2}^{\prime }(x^{*})}{\alpha _b^{\prime }(x^{*})}<1.$$
(47)
Such a formula can also be extended to the case where the size of the larger system considered is not $`b^2`$ but a generic $`b^{\prime }>b`$. We study the stability of the results for growing $`b`$ by computing $`\alpha _b(x)`$ for $`b_i=2,4,8,16,\mathrm{\dots }`$ and imposing the consistency between two successive $`b_i`$. The value of $`b`$ indicated in the plots is the smaller one; for instance, $`b=4`$ labels the results obtained by imposing the consistency between $`b=4`$ and $`b^{\prime }=8`$.
In Fig. 6 we report the plot of the curves $`\alpha _b(x)`$ in $`d=1`$ for $`b=2,4,8,16,32`$. Remarkably, they all meet practically at the same point, indicating that $`x^{*}`$ and $`\alpha `$ virtually do not change when the number of cells is increased. Fig. 7 reports the values of the exponent $`\alpha `$ in $`d=1`$ (empty circles). Observe that fluctuations are extremely small. The value of the fixed point parameter $`x^{*}`$ is reported in Fig. 8 (empty circles). Again, it remains practically unchanged as $`b`$ grows.
In higher dimensions the results are less stable. The values of $`\alpha `$ and of $`x^{*}`$ for $`d=2,3`$ and 4 are reported in Figs. 7 and 8, respectively (empty symbols). A clear trend is present for $`d=2`$: the exponent initially decreases as $`b`$ is increased, then reaches a minimum and starts growing. This behavior is reflected in the value of $`x^{*}`$, which first grows and then decreases.
The decreasing part of the pattern is present in the analogous plots for $`d=3`$ and $`d=4`$. For large dimensions, however, it becomes increasingly time consuming to perform the computation for large systems. In particular for $`d=4`$, the largest system that could be simulated is $`b=16`$, and for such a system size the trend is still decreasing. Therefore it is not possible to decide from a numerical point of view whether for larger values of $`b`$ $`\alpha `$ would converge to zero or to a finite value. These data do not provide any conclusive indication on whether $`d=4`$ is the upper critical dimension for KPZ growth. However, as will be shown below, such a conclusion is ruled out by the results with other parametrizations and by the analytical large-$`d`$ expansion of the method.
The reason for the difference in the stability of the results for large numbers of cells in $`d=1`$ and $`d\geq 2`$ is probably related to crossover phenomena. In the RG flow there are two competing fixed points; this reflects the existence of two universality classes, strong coupling KPZ and EW. In $`d=1`$ these fixed points are associated with the same roughness exponent $`\alpha =1/2`$ and with similar values of $`x^{*}`$. Therefore, any crossover phenomenon between the two fixed points has little effect in our formalism. In $`d\geq 2`$, instead, the two scale invariant dynamics are associated with different exponents and also with very different values of the parameter $`x^{*}`$, which is finite for KPZ and infinite for EW. We interpret the initial decrease in the value of $`\alpha `$ in the KPZ case as the effect of a crossover caused by the presence of the EW fixed point. It is not clear to us, however, why the fixed point found for $`b=2`$ is so close to the results of the numerical simulations.
#### 2 Open boundary conditions
The calculation of $`\omega ^2(b,k)`$ can also be performed with open boundary conditions, that is, assuming that the height of the columns outside the system which are in contact with the boundary is the same as that of their neighbors inside the system. This means that no “lateral growth” event can be caused in the system by the environment around it.
The results are also presented in Figs. 7 and 8 (filled symbols). Interestingly, in this case the accuracy of the method for $`b=2`$ is not as good as for periodic boundary conditions, but the error remains below 10%, indicating a low sensitivity to the boundary conditions even for a small number of cells. For larger numbers of cells the difference goes quickly to zero.
For higher dimensions the general dependence of $`x^{*}`$ and $`\alpha `$ on $`b`$ remains unchanged: in $`d=2`$, $`\alpha `$ is initially high, then decreases and finally increases again. The variations with $`b`$ are however less strong than when periodic boundary conditions are considered. For $`d=4`$ it is clearer than in the case with periodic boundary conditions that $`\alpha `$ does not converge to zero for large $`b`$.
#### 3 Other parametrizations of the dynamics
As stated above, the parametrization (30) of the KPZ scale invariant dynamics is by no means unique. Actually, given the problems of slow and nonmonotonic convergence towards the asymptotic values, it is clear that the parametrization (30) is far from optimal and that better parametrizations would help. In order to keep things as simple as possible, we started by considering transition rates of the form
$$r_i=1+x_k\underset{jn.n.i}{\sum }\left\{\mathrm{max}[0,h(j)-h(i)]\right\}^\gamma $$
(48)
with $`\gamma `$ constant; for $`\gamma =1`$ it coincides with (30). By comparing the values of $`\alpha `$ obtained with several $`\gamma `$, an interesting pattern can be spotted (Fig. 9). While for a small number of cells $`b`$ the estimate gets worse with increasing $`\gamma `$, the opposite is true for large $`b`$. For large values of $`\gamma `$ the estimate for $`\alpha `$ converges quite rapidly. For $`\gamma =9`$ and $`\gamma =20`$ we find on the largest systems $`\alpha =0.399`$, suggestive of a convergence towards 0.4. For $`d=4`$ the sizes that can be simulated are too small to allow the determination of the asymptotic value of $`\alpha `$. However, it is clearly seen that $`\alpha `$ does not go to zero as $`b`$ is increased.
The same type of behavior is found by using an exponential parametrization of the dynamics
$$r_i=1+x_k\underset{jn.n.i}{\sum }\left(\mathrm{exp}\left\{\gamma \mathrm{max}[0,h(j)-h(i)]\right\}-1\right).$$
(49)
In $`d=2`$, for large $`\gamma `$, the estimate of $`\alpha `$ on the largest system is 0.399, exactly as with Eq. (48). In $`d=4`$, again, we cannot precisely determine the value $`\alpha `$ converges to. Again the data strongly suggest that this limit is finite.
The study of these two alternative parametrizations of the dynamics consistently indicates a value $`\alpha =0.399`$ in $`d=2`$ and a finite $`\alpha >0`$ in $`d=4`$, strongly suggesting that 4 is not the upper critical dimension of the KPZ equation.
One could in principle imagine a parametrization of the effective dynamics, more in the spirit of the KPZ equation, of the type
$$r_i=1+\nu _k|\nabla ^2h(i)|+\lambda _k[\nabla h(i)]^2$$
(50)
However, there is no reason to believe that such a parametrization would be better for the KPZ rough phase; additional operators are very likely to be present in the scale invariant dynamics. Moreover, the dynamics described by Eq. (50) is plagued by numerical instabilities, as pointed out by Bray and Newman.
### D The $`d\to \infty `$ limit and the upper critical dimension.
The results presented so far show that $`\alpha >0`$ even for large numbers of cells in $`d=4`$, thus indicating that 4 is not the upper critical dimension for KPZ growth processes. By using the RG procedure it is actually possible to go beyond this numerical conclusion: the existence of any finite upper critical dimension can be ruled out. This result is obtained when the function $`\omega ^2(b,x_k)`$ is computed analytically in the large-$`d`$ limit. The basic fact allowing this calculation is that when $`d\gg 1`$ one expects $`\alpha \ll 1`$, which suggests that surface fluctuations are small
$$\omega (b,x_k)\sim b^\alpha \simeq 1+\alpha \mathrm{ln}b+O(\alpha ^2).$$
(51)
For small $`b`$ one may reasonably account for the fluctuations of the interface by considering only two possible values of $`h(i)`$: $`h_0`$ (“low sites”) and $`h_0+1`$ (“high sites”) (Fig. 11). Starting from a flat surface ($`h(i)=0`$ for all $`i`$), one considers growth events occurring according to the rates of Eq. (30), with the restriction that no block can be deposited on top of an already grown one. Only when the whole layer at height 1 is complete does one allow growth to level 2, and so on. This approximation allows the analytical evaluation of $`\omega ^2(b,k)`$, the identification of the fixed points and the study of their stability. We will check a posteriori the consistency of the results with this assumption, and see that the existence of a finite upper critical dimension can be excluded. Let us now present the details of the calculation.
Within the “two layers” approximation it is convenient to group together all configurations with the same number of high sites: we will call “state” $`n`$ the set of all surface configurations with $`n`$ sites at height $`h_0+1`$ and the remaining $`b^d-n`$ sites at height $`h_0`$. The state $`n=0`$, corresponding to a flat surface, is equivalent to the state with $`n=b^d`$. This classification is useful because the only transitions permitted from state $`n`$ are those to state $`n+1`$. The master equation for the probability $`\rho _n`$ of being in state $`n`$ (i.e. of having any of the configurations with $`n`$ high sites) is then greatly simplified
$$\partial _t\rho _n=\rho _{n-1}r(n-1\to n)-\rho _nr(n\to n+1).$$
(52)
$`r(n\to n+1)`$ is the average of all the rates (30) for the growth processes that transform one configuration with $`n`$ high sites into one with $`n+1`$ of them. We can write this quantity in the form
$$r(n\to n+1)=(b^d-n)+x_k\mathrm{\Omega }_n.$$
(53)
The first term on the right hand side is simply the total rate of vertical growth \[1 in Eq. (30)\] for configurations with $`n`$ high sites. Observe that it is obviously equal to the number ($`b^d-n`$) of sites where vertical growth is allowed. $`x_k\mathrm{\Omega }_n`$ is the rate for lateral growth: $`\mathrm{\Omega }_n`$ is the average number of lateral walls in configurations with $`n`$ high sites. Its precise computation is not easy, since it would require the knowledge of the stationary probability for each configuration belonging to state $`n`$. However, when $`x_k=0`$ the computation is trivial, since growth occurs only via uncorrelated deposition and high and low sites are randomly distributed. The number of low sites, where growth is allowed, is $`b^d-n`$; each of them has $`2d`$ neighbors, which are occupied with probability $`n/b^d`$. Hence the average number of lateral walls is
$$\mathrm{\Omega }_n=(b^d-n)2d\frac{n}{b^d}.$$
(54)
The form of $`\mathrm{\Omega }_n`$ for $`x\ne 0`$ is in general more complicated, but a numerical computation in large dimensions, namely for $`d=7`$, shows that Eq. (54) is a good approximation for all values of $`x_k`$ (Fig. 12). We assume the validity of Eq. (54) for all values of $`x_k`$. This leads to
$$r(n\to n+1)=(b^d-n)\left[1+2d\frac{n}{b^d}x_k\right].$$
(55)
The stationary solution of Eq. (52) is
$$\rho _n=\rho _0\frac{r(0\to 1)}{r(n\to n+1)},n=1,\mathrm{\dots },b^d-1.$$
(56)
By imposing the normalization condition $`\sum _{n=0}^{b^d-1}\rho _n=1`$ and approximating the sum by an integral, one obtains
$$\rho _0=\left\{1+\frac{b^d}{2dx_k}\left[2d\mathrm{ln}b+\mathrm{ln}\left(\frac{1+2dx_k}{b^d+2dx_k}\right)\right]\right\}^{-1}.$$
(57)
Equations (56) and (57) provide a complete description of the stationary probability density. Given that the roughness of all configurations with $`n`$ high sites is $`\frac{n}{b^d}\left(1-\frac{n}{b^d}\right)`$, the total roughness of the surface can be computed as
$$\omega ^2(b,x_k)=\sum _{n=0}^{b^d-1}\rho _n\left(1-\frac{n}{b^d}\right)\frac{n}{b^d}.$$
(58)
Using the fact that $`b^d\gg 1`$ and assuming $`dx_k\gg 1`$, we obtain
$$\omega ^2(b,x_k)=\rho _0\frac{b^d}{2dx_k}.$$
(59)
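The approximations entering Eqs. (57) and (59) (replacing the sum by an integral, $`b^d\gg 1`$, $`dx_k\gg 1`$) can also be gauged numerically by evaluating the exact two-layer sums directly; a short Python sketch (ours) of this check is:

```python
from math import log

def omega2_two_layer(b, d, x):
    # omega^2(b, x_k) of Eq. (58), using the exact stationary solution (56)
    # with the rates of Eq. (55); the normalisation of rho_n is performed
    # numerically instead of via the integral approximation (57)
    N = b ** d
    def rate(n):                       # r(n -> n+1), Eq. (55)
        return (N - n) * (1.0 + 2.0 * d * n * x / N)
    rho = [rate(0) / rate(n) if n else 1.0 for n in range(N)]
    Z = sum(rho)                       # so rho_0 = 1 / Z
    return sum(r * (1.0 - n / N) * n / N for n, r in enumerate(rho)) / Z

d = 7
x = 2.0 ** (d + 1) * log(2.0)          # putative fixed point, Eq. (60) below
print(omega2_two_layer(2, d, x))       # to be compared with Eq. (59)
```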
Inserting Eq. (59) with $`b=2`$ and $`b=4`$ in the fixed point equation (24) yields, to leading order in $`d`$,
$$x^{*}=2^{d+1}\mathrm{ln}2.$$
(60)
The assumption $`dx_k\gg 1`$ is therefore self-consistent for sufficiently large $`d`$. Notice that an exponential dependence of $`x^{*}`$ on $`d`$ was already found in the numerical implementation of the method (Fig. 8). Using Eq. (26) one obtains the value of the roughness exponent
$$\alpha \simeq \frac{1}{3(\mathrm{ln}2)^2}\frac{1}{d}.$$
(61)
Finally, by computing the derivative of the RG transformation at the fixed point
$$R^{\prime }(x^{*})=1-\frac{1}{2\mathrm{ln}2}\frac{1}{d}+O(1/d^2),$$
(62)
we see that the fixed point is attractive for all finite dimensions.
In conclusion, we find that for large $`d`$ the RG has a fixed point $`x^{*}`$ corresponding to an exponent $`\alpha \sim 1/d`$, and therefore strictly positive in all finite dimensions. On the contrary, the existence of a finite upper critical dimension would have implied, for $`d>d_c`$, either the absence of a finite fixed point or its instability.
At this point we must use the analytical result to check the consistency of the two layers assumption. The assumption is correct provided the rate of its violation is negligible for all values of $`n`$. Processes violating it are those in which an event of vertical growth occurs on top of a high site. Their rate in state $`n`$ is $`r_{up}=n`$, which must be compared with the total rate of processes respecting the restriction, $`r(n\to n+1)`$, computed for $`x=x^{*}`$. By imposing $`r_{up}(n)\ll r(n\to n+1)`$ we get
$$n\ll (b^d-n)\left(1+\frac{2dn}{b^d}x^{*}\right).$$
(63)
Let us consider $`n=b^d-1`$, which is the situation that maximizes $`r_{up}`$ and minimizes $`r(n\to n+1)`$. Then
$$b^d-1\ll 1+\frac{2d(b^d-1)}{b^d}x^{*}.$$
(64)
Since $`b^d\gg 1`$, this means
$$b^d\ll 2dx^{*}\sim 2^{d+2}d.$$
(65)
Hence the two layers assumption is correct for $`b=2`$ but fails for $`b=4`$. Therefore the value of $`\omega ^2(4,x_k)`$ is systematically underestimated by Eq. (59), since fluctuations involving more than two layers are neglected despite being likely. The consequence of this for our results is understood by considering Eq. (22): in that formula we estimate the left hand side correctly, while the right hand side is underestimated (Fig. 13). Since the fixed point parameter $`x^{*}`$ and the exponent $`\alpha `$ are given by the intersection of the curves, it is clear that we obtain an upper bound for $`x^{*}`$ and a lower bound for $`\alpha `$. This is confirmed by comparing our estimates of $`\alpha `$, Eq. (61), and $`x^{*}`$ with the numerical results of Ala-Nissila et al. and with the value of $`x^{*}`$ computed numerically for $`d=1,\ldots ,9`$ (Fig. 14).
These results have been obtained for the smallest value of $`b`$, namely $`b=2`$. In the previous sections we showed that for low dimensions the results for small $`b`$ are in good agreement with numerics, but for larger $`b`$ there are deviations. A very reasonable question therefore concerns the robustness of the large-$`d`$ results when $`b`$ grows. As we have shown above, the two layers approximation breaks down for $`b>2`$. In order to extend the above calculation to larger $`b`$ one should replace the two layers approximation with some less restrictive but still tractable calculational scheme. We have not been able to fulfill this task and hence we cannot directly show whether a fixed point exists at finite $`x^{*}`$ when $`b\to \infty `$. However, the two layers assumption is valid for any $`b`$ in the neighborhood of the fixed point $`x^{*}=\infty `$, which gives a flat surface ($`\alpha =0`$). This fixed point exists in any dimension, and its stability can be safely analyzed using the two layers assumption as follows.
Let us introduce $`ϵ=1/x`$. The derivative of the RG transformation at the fixed point $`ϵ^{*}=0`$ is (see Eq. (46))
$$R^{\prime }(ϵ=0)=\frac{2\alpha _{b^2}^{\prime }(ϵ=0)}{\alpha _b^{\prime }(ϵ=0)}-1$$
(66)
where now the prime indicates a derivative with respect to $`ϵ`$. To first order in $`ϵ`$ we have (see Appendix B)
$$\alpha _{b^2}(ϵ)=\frac{1}{4\mathrm{ln}b}\mathrm{ln}\left[1+c\mu ϵb^{2d+2}\right].$$
(67)
Then
$$\alpha _{b^2}^{\prime }(ϵ=0)=\frac{1}{4\mathrm{ln}b}c\mu b^{2d+2}.$$
(68)
Analogously
$$\alpha _b^{\prime }(ϵ=0)=\frac{1}{2\mathrm{ln}b}c\mu b^{d+1}.$$
(69)
Hence
$$R^{\prime }(ϵ=0)=b^{d+1}-1\gg 1$$
(70)
and the fixed point corresponding to $`\alpha =0`$ is unstable. As a consequence, a finite fixed point with $`\alpha >0`$ must exist and be stable when $`b\to \infty `$ for any large and finite $`d`$. This supports the conclusion that there is no finite upper critical dimension.
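The algebra behind Eqs. (68)–(70) is mechanical and can be verified symbolically. In this small check of ours we take $`\alpha _b(ϵ)`$ in the same form as Eq. (67) at scale $`b`$, consistent with Eq. (69), and use the combination of Eq. (66):

```python
import sympy as sp

eps, b, d, c, mu = sp.symbols('epsilon b d c mu', positive=True)

alpha_b  = sp.log(1 + c*mu*eps*b**(d + 1)) / (2*sp.log(b))    # scale b
alpha_b2 = sp.log(1 + c*mu*eps*b**(2*d + 2)) / (4*sp.log(b))  # Eq. (67), scale b^2

da_b  = sp.diff(alpha_b,  eps).subs(eps, 0)   # Eq. (69)
da_b2 = sp.diff(alpha_b2, eps).subs(eps, 0)   # Eq. (68)

print(sp.simplify(2*da_b2/da_b - 1))          # Eq. (66): -> b**(d + 1) - 1
```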
## IV RG for the Edwards-Wilkinson dynamics
So far we have applied the new RG method to a KPZ-like dynamics. Now we intend to show that it is more general and can be applied to growth mechanisms belonging to universality classes other than KPZ. In particular we study in this section its application to the exactly solvable Edwards-Wilkinson (EW) equation, for which the roughness exponent is known in any dimension: $`\alpha =1/2`$ in $`d=1`$, while $`\alpha =0`$ for $`d\ge 2`$, with logarithmic corrections at $`d=2`$.
The parametrization Eq. (30) of the dynamics describes a growth model where only deposition events can take place and the symmetry between up and down in the $`h`$ direction is clearly broken. Such a dynamics is inherently out of equilibrium and therefore cannot accommodate the scale invariant dynamics of the Edwards-Wilkinson growth process, which is an equilibrium one, with growth rules symmetric along the growth direction. We now introduce a generalized dynamics which admits the KPZ and EW dynamics as particular limiting cases.
Let us consider the quantities
$`K_d(i)`$ $`=`$ $`{\displaystyle \underset{j\,\mathrm{nn}\,i}{\sum }}\mathrm{max}[0,h(j)-h(i)]`$ (71)
$`K_u(i)`$ $`=`$ $`{\displaystyle \underset{j\,\mathrm{nn}\,i}{\sum }}\mathrm{max}[0,h(i)-h(j)].`$ (72)
In the KPZ case described so far we have allowed only deposition of particles and written
$$r_i=1+x_kK_u(i).$$
(73)
We now allow also for evaporation of particles. That is, we consider the transition rate for site $`i`$ as
$$r_i=1+x_k|ϵK_u(i)-(1-ϵ)K_d(i)|$$
(74)
and with probability
$$P_b=1/r_i$$
(75)
a random deposition/evaporation event takes place ($`h_i\to h_i+1`$ with probability $`ϵ`$ and $`h_i\to h_i-1`$ with probability $`1-ϵ`$), while with probability
$$P_l=x_k|ϵK_u(i)-(1-ϵ)K_d(i)|/r_i$$
(76)
we have a “lateral” event
$$h_i\to h_i+\frac{ϵK_u(i)-(1-ϵ)K_d(i)}{|ϵK_u(i)-(1-ϵ)K_d(i)|}.$$
(77)
For $`ϵ=1`$ only deposition is allowed and we have the transition rates for the KPZ dynamics. For $`ϵ=1/2`$, we have up-down (deposition-evaporation) symmetry and the rates are
$$r_i=1+x_k|\nabla ^2h(i)|$$
(78)
where $`\nabla ^2h(i)=[K_u(i)-K_d(i)]/2`$ is, up to a sign, the discretized Laplacian evaluated at site $`i`$. Therefore we expect this case to correspond to EW dynamics. Let us verify that for $`ϵ=1/2`$ the average interface velocity does not depend on the interface configuration (which is a basic property of EW dynamics)
$$v=\frac{1}{L^d}\sum _iv_i=\frac{1}{L^d}\sum _ir_i\mathrm{\Delta }h_i.$$
(79)
Since
$`\mathrm{\Delta }h_i`$ $`=`$ $`P_b\cdot 0+P_l{\displaystyle \frac{K_u(i)-K_d(i)}{|K_u(i)-K_d(i)|}}`$ (80)
$`=`$ $`{\displaystyle \frac{1}{r_i}}\left(x|K_u(i)-K_d(i)|{\displaystyle \frac{K_u(i)-K_d(i)}{|K_u(i)-K_d(i)|}}\right)`$ (81)
$`=`$ $`{\displaystyle \frac{1}{r_i}}\left(x[K_u(i)-K_d(i)]\right)`$ (82)
and as $`\sum _iK_u(i)=\sum _iK_d(i)`$, we have that
$$v=\frac{x}{L^d}\sum _i[K_u(i)-K_d(i)]=0.$$
(83)
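To make the rules (74)–(77) concrete, here is a minimal one-dimensional kinetic Monte Carlo sketch of ours, in which sites are updated with probability proportional to their rate $`r_i`$ (the standard reading of transition rates). At $`ϵ=1/2`$ the mean height wanders around zero, in line with Eq. (83):

```python
import numpy as np

rng = np.random.default_rng(1)

def K_up_down(h):
    """K_u(i) and K_d(i) of Eqs. (71)-(72) on a 1d periodic lattice."""
    left, right = np.roll(h, 1), np.roll(h, -1)
    Ku = np.maximum(0, h - left) + np.maximum(0, h - right)
    Kd = np.maximum(0, left - h) + np.maximum(0, right - h)
    return Ku, Kd

def evolve(h, xk, eps, nsteps):
    for _ in range(nsteps):
        Ku, Kd = K_up_down(h)
        drive = eps * Ku - (1 - eps) * Kd
        r = 1 + xk * np.abs(drive)              # Eq. (74)
        i = rng.choice(h.size, p=r / r.sum())   # pick a site with prob ~ r_i
        if rng.random() < 1 / r[i]:             # P_b, Eq. (75): vertical event
            h[i] += 1 if rng.random() < eps else -1
        else:                                   # P_l, Eq. (76): lateral event
            h[i] += int(np.sign(drive[i]))      # Eq. (77)
    return h

h = evolve(np.zeros(64, dtype=int), xk=2.0, eps=0.5, nsteps=20000)
print(h.mean(), h.std())   # mean wanders near 0 (no net growth at eps = 1/2)
```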
With this generic dynamics we can perform the RG procedure exactly as in the KPZ case. The evaluation of the function $`\omega ^2(b,x)`$ is carried out again using small Monte Carlo simulations with periodic boundary conditions. The results are reported in Fig. 15. For $`d=1`$ the value of $`\alpha `$ for $`b=2`$ is $`\simeq 0.4`$, below the exact value $`1/2`$, but for $`b>2`$ the correct value is rapidly approached. The situation is completely different in $`d=2`$. In that case for $`b=2`$ the exponent $`\alpha `$ is around 0.25, but when $`b`$ is increased, the fixed point is shifted monotonically towards $`\infty `$. In $`d=3`$ the behavior of the fixed point for finite $`x^{*}`$ is similar. From these plots we can conclude that the behavior of the EW dynamics is very different in $`d=1`$ and $`d>1`$. For $`d=1`$ there is a stable fixed point with $`\alpha =1/2`$. For $`d>1`$ there is no stable fixed point with $`\alpha \ne 0`$. Hence the RG method is able to capture the difference between KPZ and EW dynamics and correctly describes both the rough and the flat phases of the Edwards-Wilkinson growth. We speculate that the reason why one needs to consider $`b>2`$ is probably related to the fact that on a system of size $`b=2`$ the discretized Laplacian and square gradient take the same form.
## V Discussion
In the previous sections we have introduced a general method for studying surface growth models by means of a real space renormalization group procedure. The anisotropy of the scale invariant properties of surfaces makes the very definition of the RSRG highly nontrivial, since the direct integration of degrees of freedom at small scale cannot be performed. For this reason we had to devise an alternative route: the main ingredient is that the integration of degrees of freedom is performed implicitly by imposing the self-consistency between two descriptions of the same object at different levels of coarse-graining. The application of such an approach to the KPZ dynamics yields several results that can be summarized as follows.
* The scale invariant dynamics is identified and parametrized as a function of the “lateral growth” parameter $`x`$. This parameter turns out to have a non-trivial attractive fixed point under the RG transformation in all dimensions.
* The KPZ roughness exponent $`\alpha `$ estimated for small $`b`$ is in very good agreement with large scale simulations of discrete models.
* For larger values of $`b`$ the estimate of $`\alpha `$ is stable in $`d=1`$ while it changes noticeably for $`d\ge 2`$, presumably owing to crossover effects. When $`b\to \infty `$ it converges towards the correct result.
* The results are robust with respect to changes in the parametrization of the dynamics and in the boundary conditions.
* No evidence is found of the existence of an upper critical dimension for KPZ. Moreover, we present very strong evidence that no such upper critical dimension exists.
* By changing the nature of the parametrization of the dynamics at generic scale, the method is able to describe the EW dynamics and to capture the existence of an upper critical dimension for it, above which only a trivial (flat) phase exists.
Regarding the general nature of the approach it is worth remarking that the key point in the method is the identification of the scale invariant dynamics. In some sense the procedure can be seen as a kind of finite size scaling approach allowing for the evaluation of scaling exponents via the extrapolation of small size MC simulations. However the crucial point is that the MC data do not directly determine the exponent; they rather allow the identification of the scale invariant parameters of the dynamics, which in turn determine the exponent.
With respect to the estimates of the roughness exponent for small $`b`$, it is remarkable that the accuracy (in the sense of the discrepancy with known numerical results) seems to be the same in all dimensions $`d\ge 2`$. This is not the typical situation in ordinary critical phenomena, where usually RSRG methods fail in high dimensions. There are at least two reasons why usual RSRG schemes are inaccurate in high dimensions:
a) The necessity of defining an explicit geometrical mapping between degrees of freedom at two scales (spanning rule, majority rule, bond-moving, etc.).
b) The presence of relevant fields and the need to compute exponents from the derivatives of the RG transformation at the fixed point.
Due to the success of field theory in high dimensions (close to $`d_c`$) in usual critical phenomena, RSRG methods have been mostly devised to work in low dimensions, where the predictions of the $`ϵ`$-expansion become less reliable. RSRG methods based on an explicit geometric mapping (block-spin transformation) are quite accurate (and sometimes even exact) close to the lower critical dimension. Problems related with these geometric transformations become worse and worse as the dimension increases. In our perspective, the only limitation has to do with the quality of the parametrization of the RG transformation. For example, the parametrization of the RSRG transformation for ferromagnetic systems based on the Migdal transformation of Ising spins gives inaccurate results in high $`d`$. However, if one uses the parametrization of the $`\varphi ^4`$ theory, one recovers $`d_c=4`$ within the RSRG Migdal approach even for ferromagnetic systems.
In any case, notice that our RSRG method does not need an explicit geometrical definition of the RG transformation. Therefore it bypasses the problems related to a). In some sense, this is similar to the phenomenological RG method where the RG transformation is defined implicitly through finite size scaling arguments. Remarkably, phenomenological RG calculations are quite accurate.
With respect to point b), as discussed above the absence of relevant fields makes truncation errors much less important than in ordinary applications of the RG. Furthermore we note that in the KPZ problem one has to compute exponents depending only on the RG transformation at the fixed point, i.e. on the critical parameter. This is profoundly different from what happens in Ising-like problems, where some exponents (as, for example, the correlation length exponent $`\nu `$) depend on the derivatives of the RG transformation around it: as a consequence, $`\nu `$-type exponents are rather difficult to estimate since, even if the location of the fixed point is determined accurately, the computation of the derivatives is much less precise. No exponents of this type exist in the KPZ case. This is, in our opinion, a further reason for the great accuracy of the new method with respect to the usual RSRG.
As a final point, it is worth discussing the current limitations of the method. It is clear from the results presented that a most important role in the method is played by the choice of the parametrization of the scale invariant dynamics. This is particularly true since one deals with a monoparametric description of the growth process: if one could easily introduce several parameters and study their flow under the RG, the stability of the results when details are changed would be greatly improved. Within the present framework, the inclusion of additional parameters is however not straightforward. The problem is that additional RG equations are provided only by the use of equation (15) with $`b`$, $`b^2`$, $`b^3`$ and so on. This requires the computation of $`\omega ^2(b,x_k)`$ on systems whose size quickly becomes prohibitively large. A remarkable improvement of the method would therefore be the identification of additional RG transformations independent of Eq. (15). Despite this difficulty, we believe that the theoretical framework presented here constitutes an important new element in the field of surface growth and deserves further investigation, in particular with respect to possible applications to other open problems and to its generalization to deal with time-dependent properties.
In summary, in this paper we have presented a real space renormalization group method developed to deal with surface growth processes. The new method overcomes the difficulties inherent in standard real space renormalization group analyses of anisotropic situations. It is based on the definition of anisotropic blocks of generic scale and of a parametrized effective dynamics for the evolution of such blocks. By imposing that the surface width be the same when computed with different scales (different block sizes), we write down a renormalization group equation. Its associated fixed points define the scale invariant effective dynamics and permit the determination of the roughness exponent $`\alpha `$.
We have employed the new method to study the Kardar-Parisi-Zhang and the Edwards-Wilkinson universality classes. In particular, for KPZ we compute the $`\alpha `$ exponent in dimensions from $`d=1`$ to $`d=9`$. The results are in very good agreement with the best numerical estimates in all dimensions. Moreover, we present an analytical calculation excluding the possibility of KPZ having a finite upper critical dimension. On the other hand, well known results for the EW universality class are obtained, confirming the generality of the method.
## VI Acknowledgments
We acknowledge interesting discussions with A. Gabrielli, A. Maritan, G. Parisi, A. Stella, C. Tebaldi, G. Bianconi, and A. Vespignani. This work has been partially supported by the European network contract FMRXCT980183, and by a M. Curie fellowship, ERBFMBICT960925, to M. A. M.
## A The RSRG method for large $`b`$
In this Appendix we investigate the behavior of the method for large values of $`b`$. At the fixed point $`x^{*}`$, Eq. (22) reads
$$\omega ^2(b^2,x^{*})=c\omega ^4(b,x^{*})+2\omega ^2(b,x^{*}).$$
(A1)
If we now assume that, for $`b\gg 1`$,
$$\omega ^2(b,x)\simeq b^{2\alpha }\left[A(x)+B(x)b^{-\omega }+\ldots \right]$$
(A2)
as it should, we find that Eq. (A1) becomes
$`A(x^{*})[1-cA(x^{*})]-2cA(x^{*})B(x^{*})b^{-\omega }`$ (A3)
$`-2A(x^{*})b^{-2\alpha }+\text{subleading terms}=0`$ (A4)
We see that for $`b\to \infty `$ the fixed point tends to
$$x_{\infty }^{*}:\;cA(x_{\infty }^{*})=1$$
(A5)
whereas for large but finite $`b`$
$$x_b^{*}=x_{\infty }^{*}-\frac{2D(x_{\infty }^{*})}{A^{\prime }(x_{\infty }^{*})}b^{-\mathrm{\Delta }}$$
(A6)
with
$`\mathrm{\Delta }=\mathrm{min}\{\omega ,2\alpha \}`$ and (A7)
$`D(x^{*})=\{\begin{array}{cc}B(x^{*})& \text{ if }\omega <2\alpha \text{,}\\ A(x^{*})+B(x^{*})& \text{ if }\omega =2\alpha \text{,}\\ A(x^{*})& \text{ if }\omega >2\alpha \text{.}\end{array}`$ (A11)
The RG estimate $`\widehat{\alpha }`$ of the exponent $`\alpha `$ is given by
$`\widehat{\alpha }={\displaystyle \frac{\mathrm{ln}F_b(x_b^{*})}{2\mathrm{ln}b}}=`$ (A12)
$`\alpha +{\displaystyle \frac{\mathrm{ln}\left[b^{-2\alpha }+cA(x_b^{*})+cB(x_b^{*})b^{-\omega }+\ldots \right]}{2\mathrm{ln}b}}=`$ (A13)
$`\alpha +{\displaystyle \frac{\mathrm{ln}\left[1+b^{-2\alpha }-2cD(x_{\infty }^{*})b^{-\mathrm{\Delta }}+cB(x_{\infty }^{*})b^{-\omega }+\ldots \right]}{2\mathrm{ln}b}}`$ (A14)
and converges to the exact value for $`b\to \infty `$. Note that only the finite-$`b`$ corrections depend on $`x^{*}`$. In particular, if $`\omega <2\alpha `$,
$$\widehat{\alpha }=\alpha -\frac{cB(x_{\infty }^{*})b^{-\omega }}{2\mathrm{ln}b}.$$
(A15)
If $`\omega >2\alpha `$
$$\widehat{\alpha }=\alpha -\frac{b^{-2\alpha }}{2\mathrm{ln}b}.$$
(A16)
With respect to the stability of the fixed point one finds
$$R^{\prime }(x^{*})=-b^{-2\alpha }-\left(\frac{B}{A}+\frac{B^{\prime }}{A^{\prime }}\right)b^{-\omega }+\ldots $$
(A17)
This means that, as it should be expected, the fixed point becomes more and more stable as $`b`$ increases: for larger $`b`$ fewer RG iterations are necessary to reach the scale invariant regime.
These results for large $`b`$ are not surprising. When the systems become large, the effect of the boundary conditions is clearly small and also the choice of the parametrization of the scale invariant dynamics tends to become irrelevant, since parameters that are not included in the explicit parametrization are generated by the RG procedure on large systems. The formulae (A15)–(A17) confirm that the RG method is asymptotically correct.
## B Computation of $`\omega ^2(b,ϵ_k)`$.
In this Appendix we present the derivation of Eq. (67). Let us consider $`b`$ and $`d`$ arbitrarily large but finite, so that we can take $`ϵ_k\ll b^{-d}`$. We have
$$\sum _{n=1}^{b^d-1}\rho _n=\rho _0b^d\sum _{n=1}^{b^d-1}\frac{1}{\mathrm{\Omega }_n/ϵ_k+b^d-n}=\rho _0b^dϵ_kg+𝒪(ϵ_k^2),$$
(B1)
where
$$g=\sum _{n=1}^{b^d-1}\frac{1}{\mathrm{\Omega }_n}.$$
(B2)
Hence
$$\rho _0=\frac{1}{1+b^dϵ_kg}+𝒪(ϵ_k^2)$$
(B3)
and
$$\rho _n=\frac{b^dϵ_k}{\mathrm{\Omega }_n}+𝒪(ϵ_k^2).$$
(B4)
The roughness of a system of size $`b`$ is therefore
$`\omega ^2(b,ϵ_k)`$ $`=`$ $`{\displaystyle \sum _{n=1}^{b^d-1}}\rho _n{\displaystyle \frac{n}{b^d}}\left(1-{\displaystyle \frac{n}{b^d}}\right)`$ (B5)
$`\simeq `$ $`b^{2d}ϵ_k{\displaystyle \int _{1/b^d}^{1-1/b^d}}{\displaystyle \frac{y(1-y)}{\mathrm{\Omega }_{yb^d}}}𝑑y.`$ (B6)
For large $`b`$ and $`d`$ and an infinitely strong lateral growth parameter $`1/ϵ_k`$, the set of high sites will form, when $`n\ne 0`$, a $`d`$-dimensional hypersphere. Hence $`\mathrm{\Omega }_n`$ will scale as the perimeter of such a hypersphere,
$$\mathrm{\Omega }_n\sim n^{(d-1)/d},\qquad n\to 0.$$
(B7)
Similarly, for $`n\to b^d`$ the low sites will form a shrinking hypersphere and $`\mathrm{\Omega }_n\sim (b^d-n)^{(d-1)/d}`$. Hence it is reasonable to assume
$$\mathrm{\Omega }_{yb^d}=b^{d-1}\widehat{\mathrm{\Omega }}(y)$$
(B8)
with $`\widehat{\mathrm{\Omega }}(y)\sim y^{(d-1)/d}`$ for $`y\to 0`$ and $`\widehat{\mathrm{\Omega }}(y)\sim (1-y)^{(d-1)/d}`$ for $`y\to 1`$. The form of $`\widehat{\mathrm{\Omega }}(y)`$ for intermediate values of $`y`$ is not known, but we expect it to be nonsingular and not dependent on $`b`$. In conclusion
$$\omega ^2(b,ϵ_k)=b^{d+1}ϵ_k\mu $$
(B9)
with
$$\mu =\int _0^1dy\frac{y(1-y)}{\widehat{\mathrm{\Omega }}(y)}$$
(B10)
a finite geometrical constant.
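The value of $`\mu `$ depends on the unknown interior shape of $`\widehat{\mathrm{\Omega }}(y)`$. Purely for illustration, the interpolation $`\widehat{\mathrm{\Omega }}(y)=[y(1-y)]^{(d-1)/d}`$ matches both limiting behaviors and gives $`\mu =\int _0^1[y(1-y)]^{1/d}dy`$, finite for all $`d`$; a short numerical check of ours:

```python
from scipy.integrate import quad

def mu(d):
    """mu of Eq. (B10) for the illustrative choice
    Omega_hat(y) = [y(1-y)]**((d-1)/d), which has the correct y -> 0, 1 limits."""
    val, _ = quad(lambda y: (y * (1 - y))**(1.0 / d), 0, 1)
    return val

print([round(mu(d), 4) for d in (1, 2, 3, 7)])
# d = 1 gives exactly 1/6; mu grows slowly towards 1 as d increases
```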
# OPTICAL STUDIES OF THE X-RAY TRANSIENT XTE J2123–058 – II. PHASE RESOLVED SPECTROSCOPY
## 1 Introduction
The X-ray transient XTE J2123–058 was discovered by the RXTE satellite on 1998 June 27 (Levine, Swank and Smith 1998). An optical counterpart was promptly identified by Tomsick et al. (1998a). The discovery of apparent Type-I X-ray bursts (Takeshima and Strohmayer 1998) indicated that the compact object was a neutron star. Interest in the object increased dramatically when Casares et al. (1998) reported the presence of a strong optical modulation and attributed this to an eclipse; the orbital period was subsequently determined to be 6.0-hr both photometrically (Tomsick et al. 1998b; Ilovaisky & Chevalier 1998) and spectroscopically (Hynes et al. 1998). Tomsick et al. (1998b) suggested that the 0.9-mag modulation is likely actually due to the changing aspect of the heated companion in a high inclination system, although partial eclipses appear also to be superposed on this (Zurita, Casares & Hynes 1998). In this paper we present the results of our spectrophotometric study of XTE J2123–058 using the William Herschel Telescope (WHT), La Palma. Our photometric observations are described in a companion paper in these proceedings, Zurita et al. (1999, hereafter Paper I).
## 2 Our dataset
We observed XTE J2123–058 through two 6-hr binary orbits on 1998 July 19–20. We used the blue arm of the ISIS dual-beam spectrograph on the 4.2-m William Herschel Telescope to obtain 28 spectra. The R300B grating combined with an EEV $`4096\times 2048`$ CCD gave an unvignetted coverage of $`\sim `$4000–6500 Å with some useful data outside this range. A 0.7–1.0 arcsec slit gave a spectral resolution of 2.9–4.1 Å. Each spectrum was calibrated relative to a second star on the slit. Absolute calibration was tied to the spectrophotometric standard Feige 110 (Oke 1990). Wavelength calibration was obtained from a copper-argon arc lamp, with spectrograph flexure corrected using sky emission lines.
Our average spectrum shown in Fig. 1 is derived from a straight sum of count rates before slit loss and extinction corrections to maximize the signal to noise ratio. The spectral energy distribution was determined from an average of calibrated spectra interpolated onto a uniform phase grid, i.e. it is a uniformly weighted average over all phases.
## 3 The line spectrum
At first glance, XTE J2123–058 presents a nearly featureless blue spectrum, with only the Bowen blend (N iii/C iii 4640 Å) and He ii 4686 Å prominent. In addition, however, a number of weaker emission lines are present, the Balmer lines exhibit complex profiles and weak interstellar absorption features are seen.
Balmer lines from H$`\beta `$ to H$`\delta `$ appear to show broad absorption and an emission core. The wavelength range marked underneath each Balmer line in Fig. 2 corresponds to $`\pm 1500`$ km s<sup>-1</sup>; this is intended to be an approximate guide to the width, rather than a fit. The emission core may partly be Balmer emission but at least some is attributable to coincident He ii lines (since we see He ii 4542 Å and 5412 Å we would expect to also see related lines such as 4859 Å). This broad absorption plus narrow emission Balmer line structure is also seen in other systems, for example the neutron star LMXB 4U 2129+47 (Thorstensen & Charles 1982) and the black hole candidate GRO J0422+32 in outburst (Shrader et al. 1994).
Based on the strength of the Na D1 line and the calibration of Munari & Zwitter (1997) we estimate $`E(B-V)=0.12\pm 0.05`$.
## 4 Emission line behavior
Both N iii/C iii 4640 Å (Bowen blend) and He ii 4686 Å emission lines show changes in integrated flux over an orbital cycle and He ii also reveals complex line profile changes, with multiple S-wave components present in the trailed spectrogram shown in Fig. 3.
The light curves of the two lines shown in Fig. 4 appear somewhat different in structure; the Bowen blend peak is broader and earlier than that of He ii. This suggests an origin from different sites within the system. The Bowen blend light curve in fact appears similar to the continuum light curves shown in Fig. 7; Bowen emission may therefore originate on the heated face of the companion star, with the modulation arising from the varying visibility of the heated region.
He ii emission shows a strong peak near phase 0.75 with a suggestion of a weaker one near 0.25. The modulation probably indicates that the emission region is optically thick. We should be cautious in interpreting the light curve, however, as the complex line behavior indicates multiple emission sites. The integrated light curve is an average of different light curves of several regions. A possible resolution to this problem will be to use Doppler tomography to locate the dominant emission sites (perhaps the two brightest spots). Then we can fit a toy model in which each of these spots (treated as a point source) is allowed to vary smoothly in brightness over an orbital cycle. This procedure may allow us to approximately deconvolve the light curves of the different regions.
## 5 Doppler tomography
We have used the technique of Doppler Tomography (Marsh & Horne 1988, ) to identify emission sites in velocity space. Both the back-projection method implemented in molly and maximum-entropy method of doppler give similar results, as does the alternative maximum entropy implementation of Spruit . Fig. 5 shows a maximum entropy tomogram generated with doppler. Appropriate values were chosen for instrumental resolution ($`180`$ km s<sup>-1</sup>) and phase smearing. We overplot the position of the Roche lobe of the companion, the accretion stream ballistic velocity, the Keplerian velocity along the accretion stream and the Keplerian velocity around the disk edge. These are derived from uncertain system parameters, so they should be viewed cautiously. The parameters are determined from light curve fits (see Paper I). We should also beware that one of the fundamental assumptions of Doppler tomography, that we always see all of the line flux at some velocity, is clearly violated, as the integrated line flux is not constant.
In spite of these cautions, we can learn something from the exercise. The dominant emission site (corresponding to the main S-wave) appears on the opposite side of the neutron star from the companion. It is inconsistent with the heated face of the companion and the stream/disk impact point, although a tail does appear to extend upwards towards the expected stream position. As the emission appears to form an arc roughly centered on the neutron star position, it is tempting to associate it with asymmetric disk emission. Unfortunately the velocity of the strongest emission is too low for disk material. This can be seen from the fact that it lies inside the circle representing the Keplerian velocity at the disk edge; the inner disk will have higher velocities than this. If the observed bright spot is indeed emission from the disk then it must come from sub-Keplerian material. A more promising explanation is suggested by the similarity to some SW Sex type cataclysmic variables (e.g. V1315 Aql; compare H$`\beta `$ tomograms in Dhillon, Marsh and Jones 1991, and Hellier 1996). This is that the emission is actually associated with an extension of the accretion stream beyond its nominal disk impact point. One possible model involves a disk-anchored magnetic propeller (Horne 1999), suggested for SW Sex systems, which ejects some of the stream material from the system. An alternative is that some material splashes from the stream impact point, rising high above the disk. Such material will follow a trajectory similar to that seen, with the brightest observed spot corresponding to the point where this splashing material reimpacts the disk.
## 6 The spectral energy distribution
We show in Fig. 6 our average spectrum after dereddening using the extinction curve of Cardelli et al. (1989) and our reddening estimate of $`E(B-V)=0.12`$ derived from the Na D1 line (see above). We also show the photometry of Tomsick et al. (1998a) from 1998 June 30 and a spectrum of GRO J0422+32 (1992 August 17) provided by C.R. Shrader. This black hole X-ray transient has a 5.1-h orbital period, making it the black hole system most similar to XTE J2123–058. Both the photometry of Tomsick et al. and our spectroscopy appear steeper than the spectrum of GRO J0422+32, taking a steep blue power-law form. In view of the difficulties of accurately calibrating U band data, it is unclear whether the apparent flattening off of the spectrum at high energies is real or an artifact.
## 7 Continuum behavior
We show in Fig. 7 light curves for three ‘continuum’ bins at 4500 Å, 5300 Å and 6100 Å. The light curves show very similar shapes, with no significant differences in profile or amplitude within this wavelength range. The apparent differences between them, most noticeable around phase 0.6, are likely due to calibration uncertainties: coverage on each night was approximately from phase 0.6 through 1.6.
The light curve morphology is fully discussed in Paper I. We believe it is mainly due to the changing aspect of the heated companion star, which is the dominant light source at maximum light, near phase 0.5. At minimum light (phase 0.0) the heated face is obscured and we see the accretion disk only. At all phases the unilluminated parts of the companion star are expected to contribute negligible flux as the outburst amplitude is $`\sim 5`$ magnitudes.
The lack of strong color dependence then indicates that either the disk and heated companion have similar temperatures, or that both are sufficiently hot that we only see the $`F_\nu \propto \nu ^2`$ Rayleigh-Jeans part of the spectrum. The very steep spectral energy distribution shown in Fig. 6 suggests that the latter is the case.
## 8 Conclusion
Our main findings are summarized below. Where appropriate, we make comparisons with the black hole X-ray transient, GRO J0422+32. This system has a 5.1-h orbital period and may be the black hole system most similar to XTE J2123–058. We also note some similarities to the neutron star LMXB 4U 2129+47 which has a 5.2-hr orbital period and shows similar large amplitude photometric variations to XTE J2123–058.
* High excitation emission lines dominate the spectrum (He ii, N iii/C iii, C iv; see upper left panel). No He i emission is seen. The blue line spectrum looks rather similar to that of 4U 2129+47 (Thorstensen & Charles 1982).
* He ii emission is dominated by a region that coincides with neither the heated companion star, the ballistic accretion stream nor a Keplerian disk. This is unlike GRO J0422+32 (Casares et al. 1995), the only other transient for which outburst Doppler tomography has been performed. In GRO J0422+32, He ii emission appears to originate from the accretion stream/disk impact point. Our observations may possibly be explained by extension of the stream beyond the initial impact point. This may be similar to the He ii emission in 4U 2129+47 (Thorstensen & Charles 1982) which has a radial velocity modulation with similar phase and amplitude to that we see.
* The continuum spectral energy distribution is very blue, and steeper than in black hole X-ray transients such as GRO J0422+32 (see Fig. 6). This implies that both the heated face of the companion star, and the disk, are hotter than in similar short-period black hole systems. Such a conclusion is supported by the dominance of high-excitation emission lines. We suggest tentatively that this may indicate more efficient X-ray irradiation in this (neutron star) transient compared to black hole systems. This has previously been suggested by King, Kolb and Szuszkiewicz (1997) as an explanation for why most transient LMXBs contain black holes whereas apparently all persistent sources contain neutron stars. Before we can claim this is a firm conclusion we will need to assess the uncertainty in the spectral energy distribution and perform a more systematic comparison with other systems taking into account differences in X-ray luminosity.
## Acknowledgments
Thanks to Chris Shrader for providing the spectrum of GRO J0422+32 for comparison. Doppler tomography used molly and doppler software by Tom Marsh and dopmap software by H.C. Spruit. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. RIH is supported by a PPARC Research Studentship.
# Acoustic Band Gap and Coherent Behavior in a Periodic Two-Dimensional System
## Figure Captions
(a) Acoustic bands for a square lattice of air-cylinders for $`\beta =10^{-3}`$. A complete band gap lies between the two horizontal dashed lines. (b) The dependence of the complete band gap on the cylinder concentration $`\beta `$.
The left column: The phase diagram for the two dimensional phase vectors defined in the text. Right column: The spatial distribution of acoustic energy. For brevity, 200 cylinders are considered in the computation. In the plot of the energy distribution, the geometrical spreading factor has been removed. Note that in the case of $`ka=0.01`$ there are a few phase vectors which do not point to the same direction as the others because of the effect of the finite boundary; when the cylinder number goes to infinity, this effect disappears.
Acknowledgments. The work received support from the National Central University and the National Science Council.
Correspondence and request for materials should be addressed to Z.Y.
Email : zhen@joule.phy.ncu.edu.tw
# The Galactic Center Isolated Nonthermal Filaments as Analogs of Cometary Plasma Tails
## 1 Introduction
The isolated nonthermal filaments (hereafter, NTFs) in the Galactic center (hereafter, GC) have remained unexplained since their discovery by Morris and Yusef-Zadeh (1985). It is generally accepted that these are magnetic structures emitting synchrotron radiation since their emission is strongly linearly polarized with the magnetic field generally aligned with the long axis of the filaments (Bally & Yusef-Zadeh 1989; Gray et al. 1995; Yusef-Zadeh, Wardle, & Parastaran 1997; Lang, Morris, & Echevarria 1999). These structures are notable for their exceptionally large length to width ratios, of order 10 to 100, and remarkable linearity (Yusef-Zadeh 1989; Morris 1996).
To date, seven objects have been classified as NTFs. Six of these point perpendicularly to the Galactic plane, but the most recently discovered NTF is parallel to the plane (Anantharamaiah et al 1999). The filaments have lengths up to 60 pc and often show feathering and sub-filamentation on smaller transverse scale when observed at high spatial resolution (Liszt and Spiker 1995; Yusef-Zadeh, Wardle, & Parastaran 1997; Lang, Morris & Echevarria 1999; Anantharamaiah et al 1999). The observed radio 20/90 cm spectral indices (defined as the source flux, $`S`$, varying as $`\nu ^\alpha `$) show a range, $`0.3<\alpha <0.6`$ (LaRosa et al 1999). To date, there is no strong evidence that the spectral index varies as a function of length along the NTF (Lang, Morris & Echevarria 1999; Kassim et al. 1999). Lastly, it appears that all well studied NTFs may be associated with molecular clouds and/or H II regions (Serabyn & Morris 1994; Uchida et al. 1996; Stahgun et al 1998).
Several different types of models have been proposed for the filaments. These include magnetic field generation by an accretion disk dynamo with subsequent transport of field into the interstellar medium (Heyvaerts, Norman, & Pudritz 1988); electrodynamic models of molecular clouds moving with velocity $`𝐯`$ across a large scale ordered magnetic field, $`𝐁`$, resulting in current formation by $`𝐯\times 𝐁`$ electric fields and subsequent pinching of these currents into filaments (Benford 1988, 1997; Lesch & Reich 1992); magnetic reconnection between a molecular cloud field and the large-scale ordered field (Serabyn & Morris 1994); and particle injection into interstellar magnetic field ropes at a stellar wind termination shock (Rosner and Bodo 1996). Nicholls and Le Stange (1995) proposed a specifically tailored model for G359.1-0.2, also called the “Snake”. They invoke a high velocity star with a strong stellar wind that is falling through the galactic disk, to create a long wake, which they call a “star trail”. They must, however, fine tune their model in order to obtain radio emission from the trail by requiring the high energy electrons to be injected into the trail from the supernova remnant G359.1-0.5.
Although each of these models can in principle explain the particle acceleration and radio emission, none except for the specialized star trail scenario has satisfactorily accounted for the observed structure of the filaments. For instance, Rosner and Bodo (1996) employ a stellar wind termination shock as the source of high energy particles that they assume are loaded onto pre-existing interstellar field lines. The width of the resulting NTF is the radius of the stellar wind bubble. Synchrotron cooling leads, through a thermal instability, to collapse of the filaments and amplification of the internal magnetic field. The streaming of the particles along these otherwise quiescent field lines is assumed to produce the observed long threads. However, MHD stability is a problem for this model, and indeed for all models listed above, because magnetic fields left to their own devices will deform through a rich variety of modes. These range from kink and sausage instabilities for ideal MHD to tearing modes for resistive plasmas (e.g. Parker 1977; Cravens 1997). Unless very special magnetic field configurations and boundary conditions are imposed, and these are difficult to achieve even in the laboratory, the length and thinness of the NTFs cannot be explained as static structures.
In this paper, we adopt the viewpoint that the filaments are not equilibrium structures but are rather dynamical structures embedded in a flow. In a flow, the growth of many of the local instabilities is suppressed by the advection. A similar conclusion has been reached by Chandran, Cowley, & Morris (1999) who argue that the filaments represent locally illuminated regions of a large scale strong magnetic field that is formed through the amplification of a weak halo field by a galactic accretion flow. In contrast, we propose an alternative model in which the advection of a weak galactic field in a large scale outflow from the central region is amplified locally by encounters with interstellar clouds. We find that several key elements of the previously published scenarios are the natural consequences of this cloud-wind interaction picture.
## 2 The Comet Model
The key feature of the physical interaction between a comet and the magnetized solar wind was identified by Alfvén (1957) and elaborated by many subsequent studies (e.g. Russell et al. 1991; Luhmann 1995; Cravens 1997). As the magnetic field that is carried in the wind plasma encounters the comet, the field progress is retarded through the coma because the magnetic diffusion times are much longer than the advective timescale. Mass loading of the solar wind from the coma produces a velocity gradient, $`v_w/x`$, where $`x`$ is the cross-tail direction. The external field drapes over the coma and is stretched by the wind, ultimately forming a current sheet in the antisolar direction. This field line draping, for a molecular cloud, is depicted in Figure 1.
Remote observations show that cometary streamers routinely display aspect ratios of a 100 or more (Jockers 1991). Direct in situ plasma measurements of comet Giacobini-Zinner have confirmed the overall picture of magnetic field draping. In particular, these encounter measurements show that the central tail axis consists of a plasma sheet with very low magnetic field (Siscoe et al. 1986). This sheet is surrounded by a low density plasma that is threaded with the draped wind magnetic field that has been compressed and amplified by the flow. Transverse pressure balance requires that the draped field is about a factor of 5 to 10 stronger than the ambient field with amplification occurring because of flux conservation and field line stretching. This is our basic cometary analogy, that the ambient field is anchored in the cloud and that the field line tilt and amplification result from the shearing between the wake and the external wind. This picture, which explains solar system scale phenomena rather well, is more than a mere analogy. Any magnetized wind that impacts a finite blunt body with low resistivity will deflect around the object and drape the field along the wake flow. The field diffusion time is $`t_d=L^2/\eta `$, where here $`L`$ is the cloud radius and $`\eta `$ is the resistivity. For the GC, this timescale is many orders of magnitude larger than the wind advection time. Consequently, the field evolution around the molecular cloud is similar to the cometary case provided the field is ordered on the cloud size scale, $`L`$.
We now explore the consequences of this scenario for the NTFs. Consider a galactic scale wind with a mass loss rate $`\dot{M}`$. This wind need not emanate only from the GC. With the broad spatial distribution of star forming regions in the inner galaxy, we would anticipate a roughly cylindrical – not radial – outflow which would be sampled by clouds moving in whatever orbits they happen to have relative to the plane. Consequently, the wakes so produced should generally be perpendicular to the plane. For simplicity, however, we will assume here a compact source. The number density in the wind is given by $`n_W=\dot{M}v_{w,3}^{-1}r_{100}^{-2}`$ cm<sup>-3</sup>, for a mass loss rate in $`M_{\odot }`$ yr<sup>-1</sup>, a wind speed $`v_{w,3}`$ in $`10^3`$ km s<sup>-1</sup>, and a distance $`r_{100}`$ in 100 pc. For a cloud to survive in a postulated galactic scale wind, its internal pressure must at least balance the ram pressure of the background. We assume that the cloud pressure is given by $`P=\rho _c\sigma _c^2`$ where $`\rho _c`$ is the cloud mass density and $`\sigma _c`$ its internal velocity dispersion. Hence, for a wind of density $`\rho _w`$ and speed $`v_w`$, the required cloud density is given by $`\rho _c=\rho _w(\sigma /v_w)^{-2}`$. It has been inferred from Ginga and ASCA X-ray observations that the inner Galaxy displays a strong wind (Yamauchi et al. 1990; Koyama et al. 1996). The average wind density within a radius of 80 pc of the GC is around 0.3 cm<sup>-3</sup> with a temperature of 10 keV and an expansion velocity of about 3000 km s<sup>-1</sup> (Koyama et al. 1996). These parameters correspond to a mass loss rate of 10<sup>-2</sup> $`M_{\odot }`$ yr<sup>-1</sup> for a wind speed of around 1000 km s<sup>-1</sup>, which yields a critical cloud density of order $`n_c\sim 10^3`$ cm<sup>-3</sup> for $`\sigma =20`$ km s<sup>-1</sup> (a typical linewidth for molecular clouds in the GC region, see Morris & Serabyn), although clouds nearer the center will need higher densities to survive. This density estimate is a lower limit. For clouds to survive in the GC tidal field they must have densities at least an order of magnitude above this (e.g. Güsten 1989). The effect on the cloud population is that massive, dense clouds will survive while lower density, low mass clouds likely disperse on a dynamical timescale, and thus the cloud population may depend on galactocentric distance. Dense clouds form wakes by geometrically blocking and deflecting the wind. We identify this wake, drawn out by the wind, with the NTFs. This scenario is sketched in Figure 1. Thus follows the essential predictive feature of our model: since the filaments are not static structures, the classic MHD instabilities do not limit the aspect ratio as they would for a static equilibrium field.
What determines the structural properties of the wake, i.e. its aspect ratio and length? Given that $`t_d\gg L/v_w`$, the draped field is stretched by the wind. If $`\delta v`$ is the boundary layer shear between the wind and cloud wake and $`\mathrm{\Delta }`$ is a characteristic length for the layer (of order $`L`$), then the axial field, $`B_z`$, as a function of distance $`z`$ behind the cloud is given by the induction equation, $`\partial B_z/\partial t=(\partial v/\partial x)B_0`$, where $`B_0`$ is the external field. This has the approximate solution:
$$B_z=B_0\frac{\delta v}{\mathrm{\Delta }}\frac{z}{v_w}.$$
(1)
The axial field will continue to amplify until the draped magnetic field pressure balances the ram pressure of the wind. In other words, when the wake Alfvén speed equals the wind speed, $`B_z/(4\pi \rho _0)^{1/2}=v_w`$, the field can no longer be stretched. This provides a critical length:
$$z_c=5\times 10^2n^{1/2}v_{w,3}^2\mathrm{\Delta }/(B_{0,\mu }\delta v_3),$$
(2)
where $`v_{w,3}=v_w/10^3`$km s<sup>-1</sup>, $`n`$ is the number density, and $`B_{0,\mu }`$ is the external field in $`\mu `$G. Thus, for $`n1`$ cm<sup>-3</sup> and $`B_{0,\mu }10`$, the predicted aspect ratio is $`z_c/\mathrm{\Delta }50`$. Notice that stronger ambient fields lead to shorter wakes.
We now address the question of stability for the filaments. In the MHD case, the velocity shear must exceed the Alfvén speed to produce a growing mode for the Kelvin-Helmholtz instability (KHI). Other classical instabilities, such as the streaming, sausage, and kink modes, have a similar criterion (e.g. Wang 1991). Nonlinear models by Malagoli, Bodo & Rosner (1996) find that the fastest growing mode has a wavenumber given by $`k\mathrm{\Delta }\simeq 0.05`$. The KHI can therefore be suppressed if the draped field amplification length is less than $`2\pi /k`$. Hence, for stability $`z_c\lesssim 40\pi \mathrm{\Delta }`$. With this constraint, we find a lower limit on the external wind field strength, $`B_{0,\mu }\gtrsim 40n^{1/2}v_{w,3}/\delta v_3`$. Equipartition for the wind plasma gives $`B_{0,\mu }\simeq 20`$ for the parameters derived from the ASCA data, which is in surprisingly good agreement with the stability constraint. Thus the expected amplified field strength is $`B_z\simeq 2`$ mG for $`z/\mathrm{\Delta }`$ given by eq. (2).
Thus the key parameters that can be derived from our model are the aspect ratio, which depends on the wind parameters, and the magnetic field strength in the filament. The observed aspect ratios can be explained using equation (2) with wind parameters consistent with the ASCA data. There are no direct measurements of the magnetic fields in the filaments. An estimate for the magnetic field can be derived from the observed synchrotron luminosities using a minimum energy analysis. The synchrotron luminosities are around $`10^{33}`$–$`10^{34}`$ erg s<sup>-1</sup> (Gray et al. 1995; Lang, Morris, & Echevarria 1999; Kassim et al. 1999) and yield a magnetic field of $`\sim 0.1`$ mG, about an order of magnitude smaller than our model result. Another estimate for the field strength comes from assuming that the particles traverse the length of an NTF in a time equal to their synchrotron lifetime. The synchrotron lifetime is $`t_{\frac{1}{2}}=1.20\times 10^4B_{z,mG}^{-2}E_{GeV}^{-1}`$ yrs, where $`B_{z,mG}`$ is the axial field in milligauss and $`E`$ is the electron energy in GeV (e.g. Moffatt 1975). Without reacceleration, assuming that the electrons are injected near one end and radiate as they stream at the Alfvén speed (e.g. Wentzel 1974), the observed filament lengths give a field strength of 1 mG for a length scale of 30 pc. Field strengths of 1 mG have also been derived from dynamical arguments by Yusef-Zadeh and Morris (1987). We therefore conclude that our estimate of 1 mG is very reasonable and that the minimum energy analysis of such structures, which assumes static and/or equilibrium conditions, may produce misleading results. Note that the synchrotron lifetime argument indicates that reacceleration or acceleration along the length of the filament is not required, although as we now discuss acceleration along the filament is expected in our picture.
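These estimates follow directly from Eq. (2) and the lifetime formula above; a back-of-the-envelope check in cgs units (an illustrative sketch of ours, not taken from the cited references):

```python
import numpy as np

pc, mH, yr = 3.086e18, 1.67e-24, 3.15e7    # cm, g, s

def aspect_ratio(n=1.0, v_w3=1.0, B0_mu=10.0, dv3=1.0):
    """z_c / Delta from Eq. (2)."""
    return 5e2 * np.sqrt(n) * v_w3**2 / (B0_mu * dv3)

def t_sync_yr(B_mG=1.0, E_GeV=1.0):
    """Synchrotron half-life, ~1.2e4 B^-2 E^-1 yr (B in mG, E in GeV)."""
    return 1.2e4 / (B_mG**2 * E_GeV)

def t_stream_yr(L_pc=30.0, B_mG=1.0, n=1.0):
    """Alfven-speed crossing time of a filament of length L."""
    vA = B_mG * 1e-3 / np.sqrt(4 * np.pi * n * mH)   # cm/s
    return L_pc * pc / vA / yr

print(aspect_ratio())               # ~50, as quoted above
print(t_sync_yr(), t_stream_yr())   # both ~1e4 yr for B ~ 1 mG and L ~ 30 pc
```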
Finally, since the NTFs are radiating via synchrotron emission, we address the question of particle energization. The observed emission requires only a very small population of relativistic particles, of order $`10^{-5}`$ cm<sup>-3</sup>. The maximum energy that is available for conversion to high energy particles is $`VB_z^2/8\pi `$, where $`V`$ is the volume of the wake. The maximum mean energy per particle that results from this conversion is 10 GeV. This is more than enough to explain the radio emission. A number of mechanisms that may be responsible for particle acceleration are natural consequences of this MHD configuration. The wake must contain a current sheet. Such structures have been extensively studied in space plasmas. The simulation of sheared helmet plumes in the solar corona by Einaudi et al. (1999) is particularly relevant to our scenario. They show that a current sheet embedded in a wake flow is unstable to the generation of a local turbulent cascade without destruction of the large scale advected structure. Such cascades efficiently accelerate particles through wave-particle interactions (Miller et al. 1997). This turbulent acceleration would therefore occur along the entire length of the filament, thus spectral aging would not be observed in this scenario.
## 3 Discussion and Conclusions
Santillan et al. (1999) have recently published numerical MHD simulations of cloud collisions with a magnetized galactic disk. Although these are ideal MHD and not of wind flow, they clearly demonstrate that field line draping occurs as the interstellar clouds move through a background large scale field. In particular, their Fig. 4 shows the formation of a narrow straight tail for the cloud slamming into a transverse field imbedded in a planar gas layer. Dynamically, this simulation differs from wind flow because the cloud is slowed by the environmental gas. Yet the essential physical process is the same and closely resembles the simulations of cometary tail evolution by Rauer et al. (1995). This cloud-wind interaction, which may destroy the clouds if their masses are low enough (see Vietri, Ferrara, & Miniati 1997), is able to generate long magnetized tails with large aspect ratios.
Nonetheless, it has been argued in the literature that there is no need for a dynamical explanation of these structures. Recalling our discussion of the various proposed static models for the NTFs, the common explanation for their stability hinges on the existence of a pervasive background field. We see two ways of interpreting the magnetic field measurements obtained from the filaments. One is to assume that they represent local enhancements of an otherwise weak, but invisible, field. The other is to assume that one is seeing a region that happens to be locally illuminated but is otherwise extensive and uniform. We explicitly adopt the local enhancement picture and propose a dynamical mechanism that can amplify the field to much higher strength and still be stable. On the other hand, assuming a pervasive field still leaves the stability question unresolved for the following reasons. A force-free equilibrium background field that is presumably anchored in the turbulent gas of the galactic center certainly will not be stable. For instance, the solar corona has a pervasive field that suffers both local and global instabilities. Moreover, to stabilize a filament, a pervasive field must have a pressure gradient perpendicular to the filaments, so the field cannot be uniform. If it has gradients and curvature, a static magnetic field is likely to be unstable. In contrast, stability is not an issue for a dynamical model, whether the flow is accreting (Chandran et al. 1999) or, as in our case, an outflowing wind.
The simplest geometry predicted by the cometary analogy is that every filament should be associated with a molecular cloud on the side toward the galactic plane. This is seen for the Sgr C filament (Liszt & Spiker 1995) and the “Snake” (Uchida et al. 1996). The model does not, however, require this and more complex geometrical arrangements are certainly possible in which environmental clouds interact with or are superimposed on the filaments almost anywhere along their lengths. For instance, Yusef-Zadeh & Morris (1987) find that a milligauss field suffices to stabilize the filaments against ram pressure by colliding molecular clouds. We note that this is precisely the field strength produced dynamically by the cometary model.
In addition, a final state, where the cloud is completely dissipated, could still permit the survival of the filament and has a cometary analog. There are many instances in comets where the tail completely separates from the coma and yet maintains structural coherence as it is advected in the solar wind (e.g. Brandt & Niedner 1987). These so-called disconnection events could also occur in our picture. In such instances, there would be no cloud at either end of the filament.
We close by emphasizing that our aim here has been the exploration of the consequences of a general scenario that can serve as a framework for more quantitative calculations of the physical properties of the Galactic Center filaments. Although we use the special conditions at the GC to constrain the mechanisms, the model is not constructed specifically to explain the NTFs. Instead, they result from the conditions that likely arise in any starburst galactic nucleus (see Mezger, Duschl, & Zykla 1996) and should be observable in such environments.
We thank G. Einaudi, J. R. Jokipii, N. Kassim, C. Lang, J. Lazio, M. Niedner, and M. Vietri for discussions, and A. Santillan for permission to quote his results prior to publication. We especially thank the referee, Mark Morris, for his critical reading of the manuscript and for discussions, and B. D. G. Chandran for communicating his paper in advance of publication. TNL is supported by a NAVY-ASEE faculty fellowship from the Naval Research Laboratory and a NASA JOVE grant to Kennesaw State University. SNS is partially funded by NASA and thanks the Astrophysics Group of the Physics Department of the University of Pisa for a visiting appointment during summer 1998.
Figure 1. Schematic of the interaction of a magnetized wind encountering a molecular cloud of radius L. The wind velocity is $`𝐯_𝐰`$ and the cloud velocity is $`𝐯_𝐜`$. The advected wind magnetic field $`𝐁_0`$ is impeded by the cloud and we show how successive field lines or flux ropes are stretched and draped by the flow around the cloud into a long thin wake. The draped field, denoted $`𝐁_z`$, is oppositely directed in the wake and forms a current sheet along the wake mid-plane. Solar system studies indicate that such a plasma-magnetic field configuration leads to particle acceleration through turbulent dissipation and we therefore identify such a wake as a nonthermal filament.
# Acknowledgments
## Acknowledgments
This work was supported in part by the DOE (at Chicago, Fermilab and Case Western Reserve) and by the NASA (at Fermilab through grant NAG 5-7092).
# The Transverse Structure of the Baryon Source in Relativistic Heavy Ion Collisions
NBI-99-11
Alberto Polleri<sup>a,1</sup>, Raffaele Mattiello<sup>a</sup>, Igor N. Mishustin<sup>a,b</sup> and Jakob P. Bondorf<sup>a</sup> (<sup>1</sup>Present address: Institute for Theoretical Physics, University of Heidelberg, Philosophenweg 19, D-69120 Heidelberg, Germany.)
<sup>a</sup>The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark.
<sup>b</sup>The Kurchatov Institute, Russian Scientific Center, Moscow 123182, Russia.
(6 April 1999)
## Abstract
A direct method to reconstruct the transverse structure of the baryon source formed in a relativistic heavy ion collision is presented. The procedure makes use of experimentally measured proton and deuteron spectra and assumes that deuterons are formed via two-nucleon coalescence. The transverse density shape and flow profile are reconstructed for Pb+Pb collisions at the CERN-SPS. The ambiguity with respect to the source temperature is demonstrated and possible ways to resolve it are discussed.
PACS number(s): 25.75.-q, 25.75.Dw, 25.75.Ld
Keywords: relativistic heavy ion collisions, coalescence model, clusters, transverse flow.
One of the main goals of modern nuclear physics is to understand the behavior of nuclear matter under extreme conditions. This can provide a solid basis for a reliable description of the nuclear equation of state and therefore answer many unsolved problems in astrophysics and early universe cosmology. In laboratory experiments there is only one way towards this goal, which is to study high energy collisions between heavy nuclei.
The complicated non-equilibrium dynamics associated with such collisions are nowadays the object of intensive theoretical investigations. There have been many attempts to develop a realistic approach describing the full space-time evolution of the system . Furthermore, due to the large number of produced particles and their subsequent re-scattering, a certain degree of local thermodynamic equilibrium might be established at an intermediate stage of the reaction . One can therefore try to use simpler models, dealing with the properly parametrised phase space distribution functions for different particle species.
However, on this road one always encounters an inherent problem: what can be measured in an experiment are only the momentum spectra of particles. Each measurement is only a projection of the full phase space information on the space-time points where particles decouple, the so-called freeze-out region. The fundamental ambiguity remains, in principle, unresolved: several phase space distributions, after summing over the freeze-out coordinates of the particles, can lead to the same single particle spectrum. One therefore faces what can be referred to as an inversion problem.
Although it is clear that independent single particle spectra are not sufficient to resolve the ambiguity, we fortunately have at our disposal a promising probe, the deuteron (light nuclear clusters, in general). Previous studies have shown that deuteron production cross sections can be understood in terms of the phenomenological coalescence model. It is founded on the fact that a neutron and a proton can fuse into a deuteron only if their relative distance in position and momentum space is small, on the order of the size of the deuteron wave function. Furthermore, because of the small binding energy ($``$ 2 MeV), deuterons can survive break-up only when scatterings are rare. These two constraints reveal that deuteron production can only take place at freeze-out. This is the reason why, with a suitable use of single and composite particle spectra, one can constrain the freeze-out phase space distribution of protons.
Another important observation made in recent experiments is the development of collective effects during the reaction. The collective behavior is revealed by increasing inverse slopes of spectra for particles of increasing mass. This can be understood in terms of collective flow of nuclear matter, leading to a position-dependent local velocity.
In this letter we extend our previous work, attempting to reconstruct the phase space distribution of protons directly from the observed proton and deuteron spectra, exploiting the coalescence prescription together with the notion of collective flow. It is assumed that one can characterise the system at freeze-out by a position-dependent collective velocity and a position-independent local temperature $`T_0`$. We are mainly interested in the transverse dynamics at mid-rapidity for central collisions of large symmetric systems. We therefore make use of a relativistic description of collective flow based on the Bjorken picture for the longitudinal expansion, together with a longitudinally-independent transverse velocity $`\stackrel{}{v}_{\perp }(\stackrel{}{r}_{\perp })`$. The contraction of the four-momentum $`p^\mu `$ with the collective four-velocity $`u^\mu `$ can be written as
$$p_\mu u^\mu (x)=\gamma _{\perp }(r_{\perp })\left(m_{\perp }\mathrm{cosh}(y-\eta )-\stackrel{}{p}_{\perp }\cdot \stackrel{}{v}_{\perp }(r_{\perp })\right),$$
(1)
where $`m_{\perp }=\sqrt{m^2+p_{\perp }^2}`$ is the transverse particle mass while $`y`$ and $`\eta `$ are the momentum and space-time rapidities, respectively. Choosing a transverse profile $`n_p(r_{\perp })`$ for the local density, we can write the proton phase space distribution as
$$f_p(x,p)=(2\pi )^3e^{-p_\mu u^\mu \left(x\right)/T_0}B_pn_p(r_{\perp }).$$
(2)
From now on we drop the subscripts in $`\gamma _{\perp }`$, $`\stackrel{}{v}_{\perp }`$ and $`\stackrel{}{r}_{\perp }`$, understanding that these quantities denote the transverse degrees of freedom. In the above expression the normalisation constant for the Boltzmann factor in the local frame is defined as
$$B_p^{-1}=\int d^3\stackrel{}{p}\,e^{-m_{\perp }\mathrm{cosh}y/T_0}=4\pi m^2T_0K_2(m/T_0),$$
(3)
where $`K_2`$ is the modified Bessel function of second order. The local density must be normalised to the measured differential multiplicity $`dN_p/dy`$. To do this we first calculate the invariant momentum spectrum. It can be obtained by integrating the phase space distribution on the freeze-out hypersurface, using the Cooper-Frye formula
$$S_p(p_{\perp })=\frac{d^3N_p}{dy\,d^2\stackrel{}{p}_{\perp }}=\frac{1}{(2\pi )^3}\int d\sigma _\mu p^\mu f_p(x,p).$$
(4)
The form of the integration measure follows from our simplifying choice of freeze-out hypersurface as a sheet of constant longitudinal proper time $`\tau =\tau _0`$. Longitudinal edge effects are unimportant for the mid-rapidity region of the spectra and are neglected. We also assume a simultaneous freeze-out in the transverse direction and neglect surface emission at $`\tau <\tau _0`$. Within this assumption the integration measure assumes the form
$$p^\mu d\sigma _\mu =\tau _0m_{\perp }\mathrm{cosh}(y-\eta )d^2\stackrel{}{r}d\eta .$$
(5)
After substituting this expression in eq. (4), the $`\eta `$ integration can be done analytically. Now the rapidity density can be obtained by integrating out the transverse momentum dependence of $`S_p(p_{\perp })`$. Finally, one arrives at the normalisation condition for the local proton density
$$\frac{dN_p}{dy}=2\pi \tau _0\int dr\,r\,\gamma (r)n_p(r).$$
(6)
This completes the definition of the proton phase space distribution.
The deuteron phase space distribution can be calculated on the basis of the coalescence model. In the density matrix formalism one obtains
$$f_d(x,p)=\frac{3}{8}\int \frac{d^3\stackrel{}{y}\,d^3\stackrel{}{q}}{\left(2\pi \right)^3}f_p(x_+,p_+)f_n(x_{-},p_{-})P_d(\stackrel{}{y},\stackrel{}{q}),$$
(7)
where the factor $`\frac{3}{8}`$ accounts for the spin-isospin coupling of the neutron-proton pair into a deuteron state. It is assumed that neutrons (not measured) evolve in the same way as protons and that $`f_n=R_{np}f_p`$, where $`R_{np}=1`$–$`1.5`$ is the neutron to proton ratio in the source. In the following we will take $`R_{np}=1.2`$. The phase space coordinates of the coalescing pair are $`x_\pm =x\pm y/2`$ and $`p_\pm =p/2\pm q`$ while $`P_d`$ is the Wigner density for the deuteron relative motion. The coalescence prescription is greatly simplified when considering large and hot systems. In this case one can neglect the smearing effect of the deuteron Wigner density in comparison to the characteristic scales of the system in position and momentum space. Then the deuteron phase space distribution becomes
$$f_d(x,p)\simeq \frac{3}{8}R_{np}\left[f_p(x,p/2)\right]^2.$$
(8)
This expression simply means that two nucleons, each with 4-momentum $`p/2`$, form a deuteron with 4-momentum $`p`$ at space-time point $`x`$.
Repeating the same reasoning for deuterons as done above for protons and inserting the corresponding expressions for $`f_p`$ and $`f_d`$ into eq. (8), one obtains the relation
$$n_d(r)=\lambda _dn_p^2(r)$$
(9)
between the local densities of deuterons and protons. The proportionality coefficient $`\lambda _d`$ has dimension $`L^3`$ and carries information on the characteristic scales in the problem. Its explicit form is
$$\lambda _d=\frac{3}{8}R_{np}(2\pi )^3\frac{B_p^2}{B_d}.$$
(10)
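For orientation, the size of $`\lambda _d`$ is easy to evaluate numerically from eqs. (3) and (10). A minimal Python sketch is given below; the temperature value is illustrative, and $`B_d`$ is assumed to follow eq. (3) with the deuteron mass approximated by $`2m`$:

```python
import numpy as np
from scipy.special import kn

def B_inv(mass, T0):
    """Normalisation of eq. (3): B^{-1} = 4 pi m^2 T0 K2(m/T0), in GeV^-3."""
    return 4.0 * np.pi * mass**2 * T0 * kn(2, mass / T0)

def lambda_d(T0, m=0.938, R_np=1.2):
    """Coalescence coefficient of eq. (10); deuteron mass taken as 2m."""
    Bp = 1.0 / B_inv(m, T0)
    Bd = 1.0 / B_inv(2.0 * m, T0)
    return (3.0 / 8.0) * R_np * (2.0 * np.pi)**3 * Bp**2 / Bd

# convert GeV^-3 to fm^3 with hbar*c = 0.1973 GeV fm
print(lambda_d(T0=0.120) * 0.1973**3)
```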
We now have built up the framework to establish the reconstruction procedure. In a fashion which holds both for protons and for deuterons, using eqs. (2), (4) and (5), we can calculate their invariant momentum spectra. Due to our simple choice of freeze-out hypersurface the integrations over the space-time rapidity and the azimuthal angle can be easily performed, leading to the expression
$$S(p_{\perp })=C\int dr\,r\,K_1\left(\frac{\gamma (r)m_{\perp }}{T_0}\right)I_0\left(\frac{v(r)\gamma (r)p_{\perp }}{T_0}\right)\tau _0n(r),$$
(11)
where $`C=4\pi Bm_{\perp }`$ and $`K_1`$ and $`I_0`$ are modified Bessel functions of first and zeroth order. Here one can explicitly see the ambiguity in the description of the individual single particle spectrum. The two functions $`v(r)`$ and $`n(r)`$ cannot be mapped out uniquely from only one function as with the transverse momentum spectrum. This is true if protons and deuterons are treated independently, but with the link provided by the coalescence model, this ambiguity can, at least partially, be removed. Let us introduce for convenience a new, dimensionless, density function
$$\stackrel{~}{n}(v)=\frac{rdr}{vdv}\tau _0n(r)$$
(12)
and change the integration variable in eq. (11) from $`r`$ to $`v`$. In this way we obtain the new expression for the momentum spectrum
$$S(p_{\perp })=C\int dv\,v\,K_1\left(\frac{\gamma m_{\perp }}{T_0}\right)I_0\left(\frac{v\gamma p_{\perp }}{T_0}\right)\stackrel{~}{n}(v),$$
(13)
from which the one-to-one correspondence between $`\stackrel{~}{n}(v)`$ and $`S(p_{\perp })`$ is evident, since all the functions in the integrand are single-valued. The normalisation condition (6) for $`n_p(r)`$, due to the definition (12), now becomes
$$\frac{dN_p}{dy}=2\pi \int dv\,v\,\gamma \stackrel{~}{n}_p(v),$$
(14)
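As a concrete illustration, the spectrum integral of eq. (13) is straightforward to evaluate numerically. The following Python sketch uses the profile of eq. (23) with illustrative parameter values; scaled Bessel functions are used to avoid floating-point overflow as $`v\to 1`$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1e, i0e

T0, m = 0.100, 0.938          # assumed temperature and proton mass [GeV]

def n_tilde(v, k=1.0, a=6.0, b=8.0):
    """Profile of eq. (23); k, a, b are illustrative fit parameters."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    return g**k * np.exp(-(a - b / 2.0) * v * v - b * (g - 1.0))

def spectrum(pt):
    """S(p_perp) of eq. (13), up to the overall constant C."""
    mt = np.sqrt(m * m + pt * pt)
    def integrand(v):
        g = 1.0 / np.sqrt(1.0 - v * v)
        x1, x2 = g * mt / T0, v * g * pt / T0
        # K1(x1) I0(x2) = k1e(x1) i0e(x2) exp(x2 - x1), with x2 < x1 always
        return v * k1e(x1) * i0e(x2) * np.exp(x2 - x1) * n_tilde(v)
    return quad(integrand, 0.0, 1.0 - 1e-8)[0]
```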
together with the analogous one for $`\stackrel{~}{n}_d`$. It is instructive to consider the limit $`T_0\to 0`$ in eq. (13). Using the asymptotic expression for large argument for the Bessel functions $`K_1`$ and $`I_0`$ and performing the integration with the saddle point method, one obtains that $`v=p_{\perp }/m_{\perp }`$ and $`m_{\perp }=m\gamma `$. The transverse momentum spectrum is simply expressed through $`\stackrel{~}{n}`$ as
$$S(p_{\perp })\propto \frac{m}{m_{\perp }^3}\stackrel{~}{n}(p_{\perp }/m_{\perp }).$$
(15)
One can easily find $`\stackrel{~}{n}`$ when the observed spectrum has an exponential shape with inverse slope $`T_{\perp }`$ as
$$S^{exp}(p_{\perp })\propto \frac{m_{\perp }}{(mT_{\perp })^{3/2}}e^{-\left(m_{\perp }-m\right)/T_{\perp }}.$$
(16)
Expressing the momentum variable in terms of velocity, $`p_{\perp }=mv\gamma `$, we obtain
$$\stackrel{~}{n}(v)\propto \gamma ^4b_0^{3/2}e^{-b_0\left(\gamma -1\right)},$$
(17)
with $`b_0=m/T_{\perp }`$. Observe that the exponent behaves as $`\mathrm{exp}(-b_0v^2/2)`$ for small $`v`$ and as $`\mathrm{exp}(-b_0\gamma )`$ for large $`v`$. In the limiting case $`T_0\to 0`$ the function $`\stackrel{~}{n}`$ is therefore uniquely determined from the spectrum. A finite temperature introduces an inevitable intrinsic smearing, which precludes an exact determination of $`\stackrel{~}{n}`$ for a single particle species.
The role of the coalescence model is now to establish the link between proton and deuteron spectra. Substituting in eq. (9) the definition of $`\stackrel{~}{n}`$ from eq. (12), both for protons and for deuterons, we obtain the simple differential equation
$$\stackrel{~}{n}_d(v)=\frac{\lambda _d}{\tau _0}\frac{vdv}{rdr}\stackrel{~}{n}_p^2(v),$$
(18)
which can be directly integrated and leads to the closed solution
$$r^2=2\frac{\lambda _d}{\tau _0}\int _0^v du\,u\,\frac{\stackrel{~}{n}_p^2(u)}{\stackrel{~}{n}_d(u)}.$$
(19)
Therefore, by independently extracting the functions $`\stackrel{~}{n}_p`$ and $`\stackrel{~}{n}_d`$ from the observed momentum spectra $`S_p(p_{\perp })`$ and $`S_d(p_{\perp })`$, we can find the function $`r(v)`$ by a simple numerical integration. Inverting the obtained function as $`r(v)v(r)`$, we obtain the collective velocity profile. Substituting it in eq. (12), we finally obtain the local proton density
$$n_p(r)=\frac{v(r)}{r}\frac{dv(r)}{dr}\frac{\stackrel{~}{n}_p(v(r))}{\tau _0}.$$
(20)
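Equations (19) and (20) translate directly into a short numerical procedure. A sketch of it follows; the profile parameters, $`\lambda _d`$ and $`\tau _0`$ are illustrative, and the velocity grid stops below $`v=1`$ to avoid numerical underflow in $`\stackrel{~}{n}_d`$:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

tau0, lam_d = 10.0, 50.0      # fm/c and fm^3; illustrative values

def n_tilde(v, k, a, b):
    g = 1.0 / np.sqrt(1.0 - v * v)
    return g**k * np.exp(-(a - b / 2.0) * v * v - b * (g - 1.0))

n_p = lambda v: n_tilde(v, 1.0, 6.0, 8.0)     # proton profile (illustrative)
n_d = lambda v: n_tilde(v, 1.0, 10.0, 20.0)   # deuteron profile, b_d > 2 b_p

# eq. (19): r(v) by cumulative integration
v = np.linspace(1e-4, 0.95, 2000)
r = np.sqrt(2.0 * (lam_d / tau0) *
            cumulative_trapezoid(v * n_p(v)**2 / n_d(v), v, initial=0.0))

# invert r(v) -> v(r); r is monotonically increasing in v
r_grid = np.linspace(r[1], r[-1], 500)
v_of_r = np.interp(r_grid, r, v)

# eq. (20): local proton density, up to the normalisation of n_tilde
dv_dr = np.gradient(v_of_r, r_grid)
rho = (v_of_r / r_grid) * dv_dr * n_p(v_of_r) / tau0
```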
We now explore more closely the content of eqs. (19) and (20). Taking the limit of small transverse velocity in eq. (19), one finds that $`v(r)\simeq Hr`$ and the dimensionful scale in position space is set by the “Hubble constant”
$$H=\left(\frac{\tau _0\stackrel{~}{n}_d(0)}{\lambda _d\stackrel{~}{n}_p^2(0)}\right)^{1/2}=\left(\frac{\tau _0}{3R_{np}}\left(\frac{mT_0}{\pi }\right)^{3/2}\frac{\stackrel{~}{n}_d(0)}{\stackrel{~}{n}_p^2(0)}\right)^{1/2}.$$
(21)
The r.h.s. of this expression is obtained from eqs. (3) and (10), using the asymptotic expression for large argument for the Bessel function $`K_2`$ in the Boltzmann factors. Using this result and taking the limit $`r\to 0`$ in eq. (20), we obtain the proton spatial density at the origin
$$n_p(0)\simeq \frac{1}{3R_{np}}\left(\frac{mT_0}{\pi }\right)^{3/2}\frac{\stackrel{~}{n}_d(0)}{\stackrel{~}{n}_p(0)}.$$
(22)
This result will be used in the following discussion.
The described procedure to reconstruct the proton phase space distribution can now be applied to the transverse momentum spectra, measured by the NA44 collaboration at the CERN-SPS, for Pb+Pb collisions at $`158`$ A GeV . Using these data, we fitted $`\stackrel{~}{n}_p`$ and $`\stackrel{~}{n}_d`$, using eq. (13) for protons and deuterons and assuming different values for $`T_0`$. Guided by eq. (17) we chose a rather flexible form of the profile function, characterized by three parameters, $`k`$, $`a`$ and $`b`$, as
$$\stackrel{~}{n}(v)\propto \gamma ^ke^{-\left(a-b/2\right)v^2-b\left(\gamma -1\right)}.$$
(23)
The exponential factor has now the limiting behavior $`\mathrm{exp}(-av^2)`$ for small $`v`$, while $`\mathrm{exp}(-b\gamma )`$ for large $`v`$. We separated the scales $`a`$ and $`b`$ for small and large $`v`$, in order to have more freedom in fitting the curvature of the spectrum. The parameters $`k_p`$, $`a_p`$, $`b_p`$, $`k_d`$, $`a_d`$, $`b_d`$, are extracted from the experimental data with a Monte Carlo search minimising
$$\chi ^2=\sum _j\left(\frac{S^{exp}(p_j)-S^{theo}(p_j)}{S^{exp}(p_j)}\right)^2.$$
(24)
The only constraint imposed was $`b_d>2b_p`$: because of our choice of $`\stackrel{~}{n}`$ in eq. (23), the integrand in eq. (19) is then exponentially divergent for $`v\to 1`$, thereby giving the limit $`v(r)\to 1`$ for $`r\to \mathrm{\infty }`$. The values obtained in this way are listed in Tab. 1 and the reconstructed function $`\stackrel{~}{n}_p`$ is shown in Fig. 1. We do not show the deuteron profile since it is very similar to the proton one. The fitted spectra, normalised as $`dN_p/dy=22`$ and $`dN_d/dy=0.3`$, are shown in Fig. 2. One clearly sees that the calculated spectra for different temperatures are indistinguishable from one another. On the other hand, the reconstructed profiles $`\stackrel{~}{n}`$ are very different for different temperatures. They are very much peaked at large temperatures and become broad as the temperature drops. This is simple to understand, since a lower temperature introduces less momentum spread than a higher one and $`\stackrel{~}{n}`$ compensates for that by broadening. One can notice that the high-temperature profiles have a structure resembling a blast wave where most particles have approximately the same velocity.
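The Monte Carlo search of eq. (24) can be sketched as a simple random search. In the following, `spectrum_model` is a placeholder for a routine evaluating eq. (13) with the profile of eq. (23), and the parameter ranges are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(params, pt, S_exp, spectrum_model):
    """Relative chi^2 of eq. (24)."""
    S_theo = np.array([spectrum_model(p, params) for p in pt])
    return np.sum(((S_exp - S_theo) / S_exp)**2)

def mc_search(pt, S_exp, spectrum_model, n_trials=10000):
    """Random search over (k, a, b) for protons and deuterons,
    enforcing the constraint b_d > 2 b_p."""
    best, best_c = None, np.inf
    for _ in range(n_trials):
        kp, ap, bp = rng.uniform(0, 5), rng.uniform(0, 20), rng.uniform(0, 20)
        kd, ad = rng.uniform(0, 5), rng.uniform(0, 40)
        bd = rng.uniform(2.0 * bp, 2.0 * bp + 40.0)   # b_d > 2 b_p
        p = (kp, ap, bp, kd, ad, bd)
        c = chi2(p, pt, S_exp, spectrum_model)
        if c < best_c:
            best, best_c = p, c
    return best, best_c
```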
In all calculations the freeze-out time was fixed at $`\tau _0=10`$ fm/c . After numerical integration of eq. (19), we obtained the function $`v(r)`$ shown in Fig. 3 for different temperatures. It shows a linear rise at small $`r`$ and saturates for large $`r`$. The velocity profiles clearly depend on the temperature chosen.
Making use of eq. (20) we also obtained the local proton density, plotted in Fig. 4. One can observe a behavior similar to $`\stackrel{~}{n}_p`$. At high temperature the density shows a shell-like structure which disappears as the temperature is lowered. It should be emphasised that the shapes of $`v(r)`$ and $`n_p(r)`$ are quite sensitive to the curvature of the momentum spectra.
From the plots of $`v(r)`$ in Fig. 3, one might have the impression that flow is somehow stronger at high temperature. This is not the case, since the apparent flattening of $`v(r)`$ at small $`r`$ is accompanied by a higher saturation value and a corresponding larger extension of $`n_p(r)`$. To see this more quantitatively we calculated the mean-square value for the velocity field. The mean, taken with respect to the proton density in the global frame defined by $`\rho _p(r)=\gamma (r)n_p(r)`$, is calculated as
$$\langle v^2\rangle =\left(\frac{dN_p}{dy}\right)^{-1}\int d^2r\,v^2(r)\tau _0\rho _p(r),$$
(25)
and the results are listed in Tab. 1 for different temperatures. As one can see, $`\langle v^2\rangle ^{1/2}`$ indeed grows with decreasing $`T_0`$.
The broadening of $`n_p`$ as the temperature decreases is dictated by the conservation of the total phase space volume. The particular value chosen for $`\tau _0`$ also affects the resulting transverse extension of the source. All of this is evident after examining eq. (21). A large temperature results in a large $`H`$, i.e. in a smaller transverse extension scale $`H^1`$. As a consequence the average proton density is larger. This is explicit in eq. (22). To quantify the transverse size of the source we have also calculated the mean-square transverse radius
$$\langle r^2\rangle =\left(\frac{dN_p}{dy}\right)^{-1}\int d^2r\,r^2\tau _0\rho _p(r).$$
(26)
The resulting values of $`\langle r^2\rangle ^{1/2}`$ are listed in Tab. 1. One can observe the increasing mean transverse size with decreasing temperature. It is therefore necessary to know $`T_0`$ precisely in order to determine the system size. This information cannot be extracted solely from proton and deuteron spectra. Contrary to what is commonly done, source radii cannot be extracted from the $`d/p^2`$ ratio (usually assumed to be inversely proportional to the source volume) unless $`T_0`$ is known or further assumptions are made. One possibility relies on a rough estimate of the characteristic value of $`n_p`$ at freeze-out. Taking the total hadron number density to be $`\rho _0/3`$, where $`\rho _0=0.17\text{fm}^{-3}`$ is the nuclear saturation density, with an average hadron-hadron cross section of $`30`$ mb, we have a mean free path of about $`6`$ fm. This is long enough to reach the freeze-out conditions, especially for a rapidly expanding system. Since the ratio of all hadrons ($`p`$, $`n`$, $`\pi `$, $`K`$, …) to protons is about $`15`$ for Pb+Pb collisions at $`158`$ A GeV, we can estimate that
$$n_p\simeq \frac{1}{15}\times \frac{1}{3}\times 0.17\text{fm}^{-3}\simeq 0.0038\text{fm}^{-3}.$$
(27)
With this number, one can say that our reconstruction procedure suggests the preferred values of $`T_0\simeq 100`$ MeV and $`\langle r^2\rangle ^{1/2}\simeq 9.2`$ fm, as follows from Fig. 4 and Tab. 1. This is also consistent with the temperature dependence in eq. (22). Comparison with microscopic simulations of the freeze-out distributions at AGS energies and at SPS energies also shows that a temperature around 100 MeV would be preferable. To make a definite statement, one must finally keep in mind that the average source size is influenced by the choice of $`\tau _0`$. A larger (smaller) value would result in a smaller (larger) average size. Internal consistency requires that $`\tau _0`$ is large enough so that the flow field has time to develop and the source to expand transversely during the reaction. Details about these issues can only be discussed within a dynamical calculation, unless an independent measurement of $`\tau _0`$ is available.
It is clear that to resolve the remaining ambiguity in the source temperature one needs some additional experimental information. Recently, $`\pi \pi `$ HBT correlations data were used as a constraint . Since pions may freeze-out in a different way than protons, it would be even better to consider $`pp`$ HBT correlations, although they are much more sensitive to final state interactions than pions. We would also like to mention that the neutron phase space distribution, here assumed to be proportional to the proton one, could be also reconstructed with a similar procedure for triton spectra. Furthermore, heavier clusters are more sensitive to collective flow than to temperature , providing additional constraints.
In conclusion, we have demonstrated how the proton phase space distribution at freeze-out can be reconstructed from the measured transverse momentum spectra of protons and deuterons. The calculations, made with several simplifying assumptions, show that the proposed method gives meaningful results, although the ambiguity with respect to the temperature of the source remains. Two modifications will make this approach more realistic. First, the deuteron size should be explicitly included in the calculation. The wave function’s tail plays a quantitatively important role, especially at high energies, when the large number of produced particles forces nucleons to be far apart at freeze-out. Second, the freeze-out hypersurface itself is more complicated than a constant proper time hyperbola. A correct description of early evaporation of fast particles is needed in order to have the high momentum part of the spectra under control. Nevertheless the described procedure provides a basis for more quantitative analyses, possibly including HBT correlations for protons.
The authors would like to thank Andrew Jackson for many valuable and constructive suggestions and Ian Bearden, Jens Jørgen Gaardøje, Allan Hansen and the NA44 group for a very fruitful collaboration and for providing their preliminary data for our analysis. This work was supported in part by I.N.F.N. (Italy).
# Hot Hypernuclear Matter in the Modified Quark Meson Coupling Model
## I introduction
The majority of nuclear phenomena have been successfully described in relativistic mean-field theory using only hadronic degrees of freedom. However, due to the observations which revealed the medium modification of the internal structure of the baryons, it has become essential to explicitly incorporate the quark-gluon degrees of freedom while respecting the established model based on the hadronic degrees of freedom in nuclei. One of the first models put forward along these lines is the quark-meson coupling (QMC) model, proposed by Guichon, which describes nuclear matter as a collection of non-overlapping MIT bags interacting through the self-consistent exchange of scalar $`\sigma `$ and vector $`\omega `$ mesons in the mean field approximation with the meson fields directly coupled to the quarks. The scalar $`\sigma `$ meson is supposed to simulate the exchange of correlated pairs of pions and may represent a very broad resonance observed in $`\pi \pi `$ scattering, while the vector $`\omega `$ meson is identified with the actual meson having a mass of about 780 MeV. In the chiral models, the scalar $`\sigma `$ and vector $`\omega `$ mean fields represent the $`u,d`$ quark condensates. The QMC model thus incorporates explicitly the quark degrees of freedom and this has nontrivial consequences. It has also been extended to study superheavy finite nuclei.
In the so-called modified quark meson coupling model (MQMC), it has been further suggested that including a medium-dependent bag parameter may be essential for the success of relativistic nuclear phenomenology. It was found that when the bag parameter is significantly reduced in the nuclear medium with respect to its free-space value, large cancelling isoscalar Lorentz scalar and vector potentials for the nucleon in nuclear matter emerge naturally. Such potentials are comparable to those suggested by relativistic nuclear phenomenology and finite density QCD sum rules. The density-dependence of the bag parameter is introduced by coupling it to the scalar meson field. This coupling is motivated by invoking the nontopological soliton model for the nucleon. In this model a scalar soliton field provides the confinement of the quarks. This effect of the soliton field is, roughly speaking, mimicked by the introduction of the bag parameter in the Bag Model. When a nucleon soliton is placed in a nuclear environment, the scalar soliton field interacts with the scalar mean field. It is thus reasonable to couple the bag parameter to the scalar mean fields.
The QMC model was extended to finite temperatures to investigate the liquid-gas phase transition in nuclear matter. Recently, the MQMC model has also been extended to finite temperature and has been applied to the study of the properties of nuclear matter, where it was found that the bag parameter decreases appreciably above a critical temperature $`T_c\simeq 200`$ MeV, indicating the onset of quark deconfinement. The effect of glueball exchange as well as a realization of the broken scale invariance of quantum chromodynamics has also been investigated in the MQMC model through the introduction of a dilaton field. It was found that the introduction of the dilaton potential improves the shape of the saturation curve at T=0 and affects hot nuclear matter significantly.
In the present work, we extend the MQMC model to hot hypernuclear matter by introducing the scalar $`\zeta `$ and vector $`\varphi `$ mean fields that are coupled to the $`s`$-quark in addition to the $`\sigma `$ and $`\omega `$ fields which are coupled to the $`u,d`$-quarks. The $`\zeta `$ and $`\varphi `$ fields are identified with the real mesons having masses $`m_\zeta =975`$ and $`m_\varphi =1020`$ MeV, respectively. Hypernuclear matter is considered to contain the octet $`p,n,\mathrm{\Lambda },\mathrm{\Sigma }^+,\mathrm{\Sigma }^0,\mathrm{\Sigma }^{-},\mathrm{\Xi }^0`$ and $`\mathrm{\Xi }^{-}`$ baryons. We introduce an ideal gas of kaons to keep zero net strangeness density $`\rho _S=0`$. We simplify the calculations by considering symmetric hypernuclear matter whereby the octet baryons reduce to 4 species: $`2N,\mathrm{\Lambda },3\mathrm{\Sigma },2\mathrm{\Xi }`$.
The outline of the paper is as follows. In section II, we present the MQMC model for hypernuclear matter at finite temperature, together with the details of the self-consistency conditions for the vector and scalar mean fields. In section III, we discuss our results and present our conclusions.
## II The Quark Meson Coupling Model for Hypernuclear Matter
The quark field $`\psi _q(\stackrel{}{r},t)`$ inside a bag of radius $`R_i`$ representing a baryon of species $`i`$ satisfies the Dirac equation
$`\left[i\gamma ^\mu \partial _\mu -m_q^0+(g_\sigma ^q\sigma -g_\omega ^q\omega _\mu \gamma ^\mu )\delta _{qr}+(g_\zeta ^q\zeta -g_\varphi ^q\varphi _\mu \gamma ^\mu )\delta _{qs}\right]\psi _q(\stackrel{}{r},t)=0.`$ (1)
where $`m_q^0`$ is the current mass of the quarks of flavor $`q`$ and where $`r`$ refers to the up or down quarks and $`s`$ refers to the strange quark. The Kronecker deltas insure that the u, d quarks are coupled only to the $`\sigma `$ and $`\omega `$ fields while the s-quark is coupled only to the $`\zeta `$ and $`\varphi `$ fields. In the mean field approximation all the meson fields are treated classically and, for nuclear matter in this approximation, these fields are translationally invariant. Moreover, because of rotational invariance, the space-like components of the vector fields vanish so that we have $`\omega _\mu \gamma ^\mu =<\omega _0>\gamma ^0=\omega \gamma ^0`$ and $`\varphi _\mu \gamma ^\mu =<\varphi _0>\gamma ^0=\varphi \gamma ^0`$.
The single-particle quark and antiquark energies in units of $`R_i^{-1}`$ for quark flavor $`q`$ are given by
$`ϵ_{q}^{}{}_{\pm }{}^{n\kappa }=\mathrm{\Omega }_q^{n\kappa }\pm \left(g_\omega ^q\omega R_i\delta _{qr}+g_\varphi ^q\varphi R_i\delta _{qs}\right)`$ (2)
where
$`\mathrm{\Omega }_q^{n\kappa }=\sqrt{x_{n\kappa }^{q}{}_{}{}^{2}+R_i^2m_{}^{}{}_{q}{}^{2}}`$ (3)
and
$`m_q^{}=m_q^0-g_\sigma ^q\sigma \delta _{qr}-g_\zeta ^q\zeta \delta _{qs},`$ (4)
are the effective quark kinetic energy and effective quark mass, respectively. The boundary condition for each quark of flavor $`q`$ at the bag surface is given by
$`-i\gamma \cdot \widehat{n}\,\psi _q^{n\kappa }(x_{n\kappa }^q)=\psi _q^{n\kappa }(x_{n\kappa }^q),`$ (5)
which determines the quark momentum $`x_{n\kappa }^q`$ in the state characterized by specific values of $`n`$ and $`\kappa `$. For a given value of the bag radius $`R_i`$ for baryon species $`i`$ and the scalar fields $`\sigma `$ and $`\zeta `$, the quark momentum $`x_{n\kappa }^q`$ is determined by the boundary condition Eq. (5) which, for quarks of flavor $`q`$ in a spherical bag, reduces to $`j_0(x_{n\kappa }^q)=\beta _qj_1(x_{n\kappa }^q)`$, where
$`\beta _q=\sqrt{{\displaystyle \frac{\mathrm{\Omega }_q^{n\kappa }(\sigma ,\zeta )-R_im_q^{}(\sigma ,\zeta )}{\mathrm{\Omega }_q^{n\kappa }(\sigma ,\zeta )+R_im_q^{}(\sigma ,\zeta )}}}.`$ (6)
The quark chemical potential $`\mu _q`$, assuming that there are three quarks in the baryon bag, is determined from $`3=\sum _qn_q`$ where $`n_q`$ is the number of quarks of flavor $`q`$ and is determined by
$`n_q={\displaystyle \sum _{n\kappa }}\left[{\displaystyle \frac{1}{e^{(ϵ_{q+}^{n\kappa }/R-\mu _q)/T}+1}}-{\displaystyle \frac{1}{e^{(ϵ_{q-}^{n\kappa }/R+\mu _q)/T}+1}}\right].`$ (7)
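Numerically, eqs. (5)-(7) amount to a root find for the quark momentum followed by a root find for $`\mu _q`$. A minimal sketch is given below; the level list, its degeneracies and the temperature value are illustrative assumptions, with levels repeated according to the degeneracies the user assigns:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

def quark_momentum(R, m_star):
    """Lowest root x of j0(x) = beta_q j1(x), eqs. (5)-(6);
    R and m_star in units with hbar = c = 1 (e.g. fm and fm^-1)."""
    def f(x):
        omega = np.sqrt(x * x + (R * m_star)**2)
        beta = np.sqrt((omega - R * m_star) / (omega + R * m_star))
        return spherical_jn(0, x) - beta * spherical_jn(1, x)
    return brentq(f, 0.1, np.pi)   # ground state: x = 2.04 for m_star = 0

def quark_chemical_potential(levels, T):
    """Solve sum_q n_q = 3 for mu_q, eq. (7); `levels` holds
    (epsilon/R, degeneracy) pairs for the modes kept in the sum."""
    def net_number(mu):
        n = 0.0
        for eps, deg in levels:
            n += deg / (np.exp((eps - mu) / T) + 1.0)   # quarks
            n -= deg / (np.exp((eps + mu) / T) + 1.0)   # antiquarks
        return n - 3.0
    return brentq(net_number, 0.0, 50.0)

# example: massless quarks, R = 0.6 fm, lowest mode only, T in fm^-1
x0 = quark_momentum(0.6, 0.0)
mu_q = quark_chemical_potential([(x0 / 0.6, 6)], T=1.0)
```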
The total energy from the quarks and antiquarks of each baryon of species $`i`$ is
$`E_{tot}^i={\displaystyle \sum _{n\kappa q}}n_q{\displaystyle \frac{\mathrm{\Omega }_q^{n\kappa }}{R_i}}\left[{\displaystyle \frac{1}{e^{(ϵ_{q+}^{n\kappa }/R_i-\mu _q)/T}+1}}+{\displaystyle \frac{1}{e^{(ϵ_{q-}^{n\kappa }/R_i+\mu _q)/T}+1}}\right].`$ (8)
The bag energy for baryon species $`i`$ is given by
$`E_{bag}^i=E_{tot}^i-{\displaystyle \frac{Z_i}{R_i}}+{\displaystyle \frac{4\pi }{3}}R_i^3B_i(\sigma ,\zeta ).`$ (9)
where $`B_i=B_i(\sigma ,\zeta )`$ is the bag parameter. In the simple QMC model, the bag parameter $`B`$ is taken as $`B_0`$ corresponding to its value for a free baryon. The medium effects are taken into account in the MQMC model by coupling the bag parameter to the scalar meson fields. In the present work we generalize the coupling suggested in the latter references to the case of two scalar meson fields by using the following ansatz for the bag parameter
$`B_i=B_0\mathrm{exp}\left[-{\displaystyle \frac{4}{3}}{\displaystyle \frac{(n_u+n_d)g_\sigma ^B\sigma +n_sg_\zeta ^B\zeta }{M_i}}\right]`$ (10)
with $`g_\sigma ^B`$ and $`g_\zeta ^B`$ as additional parameters. The spurious center-of-mass momentum in the bag is subtracted to obtain the effective baryon mass
$`M_i^{}=\sqrt{(E_{bag}^i)^2-<p_{cm}^2>^i},`$ (11)
where
$`<p_{cm}^2>^i={\displaystyle \frac{<x^2>^i}{R_i^2}}`$ (12)
and
$`<x^2>^i={\displaystyle \sum _{n\kappa q}}n_q(x_{n\kappa }^q)^2\left[{\displaystyle \frac{1}{e^{(ϵ_{q+}^{n\kappa }/R_i-\mu _q)/T}+1}}+{\displaystyle \frac{1}{e^{(ϵ_{q-}^{n\kappa }/R_i+\mu _q)/T}+1}}\right].`$ (13)
The bag radius $`R_i`$ for baryon species $`i`$ is obtained through the minimization of the baryon mass with respect to the bag radius
$`{\displaystyle \frac{\partial M_i^{}}{\partial R_i}}=0.`$ (14)
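As an illustration of eqs. (9), (11) and (14), the $`T\to 0`$ limit (three quarks in the lowest mode, no thermal excitations) can be minimized numerically. With the parameter set quoted below in Sec. III ($`B_0^{1/4}=188.1`$ MeV, $`Z_N=2.03`$, massless $`u,d`$ quarks) this sketch reproduces $`M_N\simeq 939`$ MeV at $`R\simeq 0.6`$ fm; it is a zero-temperature simplification, not the full finite-T calculation:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar
from scipy.special import spherical_jn

HBARC = 197.327  # MeV fm

def lowest_x(R, m_star):
    """Ground-state root of j0(x) = beta j1(x), eq. (6); m_star in MeV."""
    def f(x):
        om = np.sqrt(x * x + (R * m_star / HBARC)**2)
        beta = np.sqrt((om - R * m_star / HBARC) / (om + R * m_star / HBARC))
        return spherical_jn(0, x) - beta * spherical_jn(1, x)
    return brentq(f, 0.1, np.pi)

def nucleon_mass(R, m_star=0.0, B0_quarter=188.1, Z=2.03):
    """T -> 0 sketch of eqs. (9) and (11): three quarks in the lowest mode."""
    B = (B0_quarter / HBARC)**4                     # bag constant [fm^-4]
    x = lowest_x(R, m_star)
    omega = np.sqrt(x * x + (R * m_star / HBARC)**2)
    E_bag = 3.0 * omega / R - Z / R + (4.0 * np.pi / 3.0) * R**3 * B
    p2_cm = 3.0 * x * x / R**2                      # <x^2>/R^2 with n_q = 3
    return HBARC * np.sqrt(E_bag**2 - p2_cm)        # [MeV]

res = minimize_scalar(nucleon_mass, bounds=(0.4, 1.0), method="bounded")
print(res.x, res.fun)   # ~0.6 fm and ~939 MeV for the free nucleon
```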
The total energy density at finite temperature $`T`$ and at finite baryon density $`\rho _B`$ reads
$`\epsilon `$ $`=`$ $`{\displaystyle \sum _i^{\mathrm{Baryons}}}{\displaystyle \frac{\gamma _i}{(2\pi )^3}}{\displaystyle \int d^3k\sqrt{k^2+(M_i^{})^2}(f_i+\overline{f}_i)}+{\displaystyle \frac{1}{2}}m_\omega ^2\omega ^2+{\displaystyle \frac{1}{2}}m_\varphi ^2\varphi ^2+{\displaystyle \frac{1}{2}}m_\sigma ^2\sigma ^2+{\displaystyle \frac{1}{2}}m_\zeta ^2\zeta ^2`$ (15)
$`+`$ $`\epsilon _K^{id},`$ (16)
where $`\gamma _i`$ is the spin-isospin degeneracy factor of baryon species $`i`$ and where the last term corresponds to the energy density of the $`K`$-mesons treated here as an ideal gas. In Eq. (16) $`f_i`$ and $`\overline{f}_i`$ are the Fermi-Dirac distribution functions for the baryons and antibaryons of species $`i`$,
$`f_i={\displaystyle \frac{1}{e^{(ϵ_i^{}-\mu _i^{})/T}+1}},`$ (17)
and
$`\overline{f}_i={\displaystyle \frac{1}{e^{(ϵ_i^{}+\mu _i^{})/T}+1}},`$ (18)
where $`ϵ_i^{}`$ and $`\mu _i^{}`$ are, respectively, the effective energy and effective chemical potential of baryon species $`i`$. These are given by $`ϵ_i^{}=\sqrt{k^2+(M_i^{})^2}`$ and $`\mu _i^{}=B_i\mu _B+S_i\mu _S-\left(g_{\omega i}\omega +g_{\varphi i}\varphi \right)`$ where $`B_i`$ and $`S_i`$ are the baryon and strangeness quantum numbers and where $`g_{\omega i}=(n_u+n_d)g_{\omega i}^q`$ and $`g_{\varphi i}=n_sg_{\varphi i}^q`$. The chemical potentials $`\mu _B,\mu _S`$ are determined by the self-consistency equations for the total baryonic density
$`\rho _B={\displaystyle \frac{1}{(2\pi )^3}}{\displaystyle \sum _i}B_i\gamma _i{\displaystyle \int d^3k(f_i-\overline{f}_i)},`$ (19)
and the total strangeness density
$`\rho _S={\displaystyle \frac{1}{(2\pi )^3}}{\displaystyle \sum _i^{\mathrm{Baryons}}}S_i\gamma _i{\displaystyle \int d^3k(f_i-\overline{f}_i)}+{\displaystyle \sum _i^{\mathrm{Kaons}}}S_i\rho _{Ki}^{id}=0`$ (20)
where in the last equation we have introduced the contribution of an ideal gas of $`K`$ and $`K^{}`$ mesons to make the total strangeness density vanish identically. The vector mean fields are determined by
$`\omega ={\displaystyle \sum _i}{\displaystyle \frac{g_{\omega i}}{m_\omega ^2}}B_i\rho _i,`$ (21)
and
$`\varphi ={\displaystyle \sum _i}{\displaystyle \frac{g_{\varphi i}}{m_\varphi ^2}}B_i\rho _i.`$ (22)
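A minimal sketch of how eqs. (19)-(20) can be solved for $`\mu _B`$ and $`\mu _S`$ at fixed mean fields is given below; the kaon term of eq. (20) and the iteration over $`\omega ,\varphi `$ are omitted for brevity, and the species data, units (MeV) and starting values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import fsolve

def densities(mu_B, mu_S, T, species, omega, phi):
    """rho_B, rho_S from eqs. (19)-(20) by direct momentum integration.
    `species` holds tuples (B_i, S_i, gamma_i, M_star_i, g_wi, g_phii)."""
    k = np.linspace(0.0, 4000.0, 4000)               # MeV
    rhoB = rhoS = 0.0
    for B, S, g, M, gw, gphi in species:
        eps = np.sqrt(k * k + M * M)
        mu = B * mu_B + S * mu_S - (gw * omega + gphi * phi)
        zp = np.clip((eps - mu) / T, -700, 700)      # avoid exp overflow
        zm = np.clip((eps + mu) / T, -700, 700)
        net = 1.0 / (np.exp(zp) + 1.0) - 1.0 / (np.exp(zm) + 1.0)
        dens = g / (2.0 * np.pi**2) * np.trapz(k * k * net, k)   # MeV^3
        rhoB += B * dens
        rhoS += S * dens
    return rhoB, rhoS

def solve_mu(rhoB_target, T, species, omega, phi):
    """Find (mu_B, mu_S) with rho_B = target and rho_S = 0 (kaons neglected)."""
    def eqs(mu):
        rB, rS = densities(mu[0], mu[1], T, species, omega, phi)
        return [rB - rhoB_target, rS]
    return fsolve(eqs, x0=[900.0, 100.0])
```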
The pressure is the negative of the grand thermodynamic potential density and is given by
$`P`$ $`=`$ $`{\displaystyle \frac{1}{3}}{\displaystyle \sum _i}{\displaystyle \frac{\gamma _i}{(2\pi )^3}}{\displaystyle \int d^3k\frac{k^2}{ϵ_i^{}}(f_i+\overline{f}_i)}+{\displaystyle \frac{1}{2}}m_\omega ^2\omega ^2+{\displaystyle \frac{1}{2}}m_\varphi ^2\varphi ^2-{\displaystyle \frac{1}{2}}m_\sigma ^2\sigma ^2-{\displaystyle \frac{1}{2}}m_\zeta ^2\zeta ^2+P_K^{id},`$ (23)
where the summation $`i`$ runs over the 8 species of the baryon octet which reduces for symmetric hypernuclear matter to 4 species with $`\gamma _i=4,2,6`$ and $`4`$ for $`N`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }`$ and $`\mathrm{\Xi }`$, respectively. In Eq. (23) $`P_K^{id}`$ is the pressure of the ideal gas of $`K`$ mesons. The density of the $`K`$-mesons of species $`i`$ is given by
$`\rho _{Ki}^{id}={\displaystyle \frac{\gamma _i^K}{(2\pi )^3}}{\displaystyle \int d^3k(b_i-\overline{b}_i)}`$ (24)
where the spin-isospin degeneracy $`\gamma _i^K=4,6`$ for $`K,K^{}`$, respectively. The Bose-Einstein distribution functions for the $`K`$ mesons are given by
$`b_i={\displaystyle \frac{1}{e^{[\sqrt{k^2+M_i^2}-\mu _i]/T}-1}}`$ (25)
and
$`\overline{b}_i={\displaystyle \frac{1}{e^{[\sqrt{k^2+M_i^2}+\mu _i]/T}-1}}`$ (26)
where $`\mu _i=S_i\mu _S`$ is the chemical potential of $`K`$-meson of species $`i`$. The total energy density and total pressure of the $`K`$-meson ideal gas are given by
$`\epsilon _K^{id}={\displaystyle \sum _i}{\displaystyle \frac{\gamma _i^K}{(2\pi )^3}}{\displaystyle \int d^3k\sqrt{k^2+M_i^2}(b_i+\overline{b}_i)},`$ (27)
and
$`P_K^{id}={\displaystyle \frac{1}{3}}{\displaystyle \sum _i}{\displaystyle \frac{\gamma _i^K}{(2\pi )^3}}{\displaystyle \int d^3k\frac{k^2}{\sqrt{k^2+M_i^2}}(b_i+\overline{b}_i)},`$ (28)
respectively.
The scalar mean fields $`\sigma `$ and $`\zeta `$ are determined through the minimization of the thermodynamic potential or the maximization of the pressure with respect to these fields. The pressure depends explicitly on the scalar mean fields through the $`\frac{1}{2}m_\sigma ^2\sigma ^2`$ and $`\frac{1}{2}m_\zeta ^2\zeta ^2`$ terms in Eq. (23). It also depends on the baryon effective masses $`M_i^{}`$ which in turn also depend on $`\sigma `$ and $`\zeta `$. If we write the pressure as a function of $`M_i^{}`$, $`\sigma `$ and $`\zeta `$, the extremization of $`P(M_i^{},\sigma ,\zeta )`$ with respect to the scalar mean field $`\sigma `$ can be written as
$`\left({\displaystyle \frac{\partial P}{\partial \sigma }}\right)_\zeta ={\displaystyle \sum _i}\left({\displaystyle \frac{\partial P}{\partial M_i^{}}}\right)_{\mu _B,T}\left({\displaystyle \frac{\partial M_i^{}}{\partial \sigma }}\right)_\zeta +\left({\displaystyle \frac{\partial P}{\partial \sigma }}\right)_{\{M_i^{}\}}=0,`$ (29)
where
$`\left({\displaystyle \frac{\partial P}{\partial \sigma }}\right)_{\{M_i^{}\}}=-m_\sigma ^2\sigma ,`$ (30)
with a similar expression for the extremization of $`P(M_i^{},\sigma ,\zeta )`$ with respect to the scalar mean field $`\zeta `$. The derivative of the pressure with respect to the effective mass $`M_i^{}`$ reads
$`\left({\displaystyle \frac{\partial P}{\partial M_i^{}}}\right)_{\mu _i,T}=`$ $`-`$ $`{\displaystyle \frac{1}{3}}{\displaystyle \frac{\gamma _i}{(2\pi )^3}}{\displaystyle \int d^3k\frac{k^2}{(ϵ_i^{})^2}\frac{M_i^{}}{ϵ_i^{}}\left[f_i+\overline{f}_i\right]}`$ (31)
$`-`$ $`{\displaystyle \frac{1}{3}}{\displaystyle \frac{\gamma _i}{(2\pi )^3}}{\displaystyle \frac{1}{T}}{\displaystyle \int d^3k\frac{k^2}{ϵ_i^{}}\frac{M_i^{}}{ϵ_i^{}}\left[f_i(1-f_i)+\overline{f}_i(1-\overline{f}_i)\right]}`$ (32)
$`-`$ $`{\displaystyle \frac{1}{3}}{\displaystyle \frac{\gamma _i}{(2\pi )^3}}{\displaystyle \frac{1}{T}}g_{\omega i}\left({\displaystyle \frac{\partial \omega }{\partial M_i^{}}}\right)_{\mu _i,T}{\displaystyle \int d^3k\frac{k^2}{ϵ_i^{}}\left[f_i(1-f_i)-\overline{f}_i(1-\overline{f}_i)\right]}`$ (33)
$`-`$ $`{\displaystyle \frac{1}{3}}{\displaystyle \frac{\gamma _i}{(2\pi )^3}}{\displaystyle \frac{1}{T}}g_{\varphi i}\left({\displaystyle \frac{\partial \varphi }{\partial M_i^{}}}\right)_{\mu _i,T}{\displaystyle \int d^3k\frac{k^2}{ϵ_i^{}}\left[f_i(1-f_i)-\overline{f}_i(1-\overline{f}_i)\right]}`$ (34)
$`+`$ $`m_\omega ^2\omega \left({\displaystyle \frac{\partial \omega }{\partial M_i^{}}}\right)_{\mu _i,T}`$ (35)
$`+`$ $`m_\varphi ^2\varphi \left({\displaystyle \frac{\partial \varphi }{\partial M_i^{}}}\right)_{\mu _i,T}.`$ (36)
Since the baryon chemical potential $`\mu _B`$ and temperature are treated as input parameters, the variation of the vector mean field $`\omega `$ with respect to the effective baryon mass $`M_i^{}`$ at a given value of the baryon density $`\rho _B`$ reads
$`\left({\displaystyle \frac{\partial \omega }{\partial M_i^{}}}\right)_{\mu _i,T}=-{\displaystyle \frac{\left[g_{\omega i}/m_\omega ^2\right]\left[\gamma _i/(2\pi )^3\right]\left[1/T\right]\int d^3k\frac{M_i^{}}{ϵ_i^{}}\left[f_i(1-f_i)-\overline{f}_i(1-\overline{f}_i)\right]}{1+\sum _j\left[g_{\omega j}^2/m_\omega ^2\right]\left[\gamma _j/(2\pi )^3\right]\left[1/T\right]\int d^3k\left[f_j(1-f_j)+\overline{f}_j(1-\overline{f}_j)\right]}}.`$ (37)
with a similar expression for $`\left(\partial \varphi /\partial M_i^{}\right)_{\mu _i,T}`$. The coupling of the scalar mean fields $`\sigma `$ and $`\zeta `$ with the quarks in the non-overlapping MIT bags through the solution of the point-like Dirac equation should satisfy the self-consistency condition. These constraints are essential to obtain the correct solution of the scalar mean fields $`\sigma `$ and $`\zeta `$.
## III Results and Discussions
We have studied hypernuclear matter at finite temperature using the modified quark meson coupling model which takes the medium-dependence of the bag into account. We choose a direct coupling of the bag parameter to the scalar mean fields $`\sigma `$ and $`\zeta `$ in the form given in Eq. (10). The bag parameter is taken as that adopted by Jin and Jennings $`B_0^{1/4}=188.1`$ MeV and the free nucleon bag radius $`R_0=0.60`$ fm. We have taken the current quark masses to be $`m_u=m_d=0`$ and $`m_s=150`$ MeV. For $`g_\sigma ^q=1`$, the values of the vector meson coupling constant and the parameter $`g_\sigma ^B`$, as fitted from the saturation properties of nuclear matter, are given as $`g_\omega ^2/4\pi =(3g_\omega ^q)^2/4\pi =5.24`$ and $`g_{\sigma }^{B}{}_{}{}^{2}/4\pi =(3g_{\sigma }^{B}{}_{}{}^{q})^2/4\pi `$=3.69. The $`Z_i=2.03,1.814,1.629`$ and $`1.505`$ are chosen to reproduce the baryon masses at their experimental values $`M_{N,\mathrm{\Lambda },\mathrm{\Sigma },\mathrm{\Xi }}=`$939, 1157, 1193, and 1313 MeV respectively. Normal nuclear matter saturation density is taken as $`\rho _{B}^{}{}_{0}{}^{}=0.17`$ fm<sup>-3</sup>. The extra coupling constants needed to couple the scalar and vector mean fields $`\zeta `$ and $`\varphi `$ to the $`s`$-quark are chosen to satisfy $`SU(6)`$ symmetry where $`|g_\zeta ^q|=\sqrt{2}|g_\sigma ^q|`$ and $`|g_\varphi ^q|=\sqrt{2}|g_\omega ^q|`$ and $`|g_{\zeta }^{B}{}_{}{}^{q}|=\sqrt{2}|g_{\sigma }^{B}{}_{}{}^{q}|`$. If it is assumed that the mean fields $`\zeta `$ and $`\varphi `$ are positive definite, then all the coupling constants are positive and the absolute value signs become redundant.
The $`\sigma `$ mean field is supposed to simulate the exchange of correlated pairs of pions and may represent a very broad resonance observed in $`\pi \pi `$ scattering; its mass is fixed at $`m_\sigma =550`$ MeV, while the vector $`\omega `$ meson is identified with the actual meson whose mass is $`m_\omega =783`$ MeV. Since the mean fields, $`\sigma `$ and $`\omega `$, are considered as $`<u\overline{d}>`$ condensates, they interact only with the $`u,d`$-quarks in the baryons. On the other hand, the scalar and vector mean fields $`\zeta ,\varphi `$ are considered as actual mesons with $`m_\zeta =975`$ MeV and $`m_\varphi =1020`$ MeV, respectively. They are considered as $`<s\overline{s}>`$ condensates and interact only with the $`s`$-quarks in the baryons. This picture is consistent with the chiral models. We fixed the total strangeness density $`\rho _S`$ to $`0`$ by introducing an ideal gas of $`K`$ and $`K^{}`$ mesons where $`m_K=`$ 495 MeV and $`m_K^{}=`$ 892 MeV. The contribution of other $`K`$ mesons was found to be negligible. It is supposed in the ideal gas limit that the kaons do not interact with the mean fields $`\zeta `$ and $`\varphi `$. The extension to the case in which the kaons interact with the $`\zeta `$ and $`\varphi `$ fields will be considered in a future work.
We first solve Eqs. (19), (20), (21) and (22) self-consistently for given values of temperature $`T`$ and densities $`\rho _B`$ and $`\rho _S`$ to determine the baryonic and strangeness chemical potentials $`\mu _B`$ and $`\mu _S`$, respectively. These constraints are given in terms of the effective baryon masses $`M_i^{}`$ which depend on the bag radii $`R_i`$, the quark chemical potentials $`\mu _q^i`$ and the mean fields. For given values of the scalar fields $`\sigma ,\zeta `$ and vector fields $`\omega ,\varphi `$, the quark chemical potential and bag radius of species $`i`$ are obtained using the self-consistency conditions, Eqs. (7) and (14), respectively. The pressure is evaluated for specific values of temperature $`T`$ and chemical potentials $`(\mu _B,\mu _S)`$ which now become input parameters. We then determine the values of $`\sigma `$ and $`\zeta `$ by using the extremization conditions as given in Eq. (29). These constraints take into account the coupling of the quarks with the scalar mean fields in the framework of the point-like Dirac equation exactly.
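Schematically, the nested self-consistency just described can be organized as an outer loop over the scalar fields. The sketch below is pseudocode-level Python: the four callables are placeholders standing for the pieces defined by eqs. (6)-(14), (21)-(22), (19)-(20) and (29), and the starting values are arbitrary:

```python
def solve_state(rho_B, T, effective_masses, vector_fields, solve_mu,
                extremize_pressure, tol=1e-6, max_iter=200):
    """Outer iteration over (sigma, zeta); the callables implement,
    in order: M_i*(sigma, zeta) with bag radii and mu_q inside,
    the vector fields, the chemical potentials, and eq. (29)."""
    sigma, zeta = 20.0, 0.0                     # starting guesses [MeV]
    for _ in range(max_iter):
        masses = effective_masses(sigma, zeta)
        omega, phi = vector_fields(masses, rho_B)
        mu_B, mu_S = solve_mu(rho_B, T, masses, omega, phi)
        new_sigma, new_zeta = extremize_pressure(masses, mu_B, mu_S, T)
        if abs(new_sigma - sigma) + abs(new_zeta - zeta) < tol:
            return sigma, zeta, omega, phi, mu_B, mu_S
        sigma, zeta = new_sigma, new_zeta
    raise RuntimeError("self-consistency loop did not converge")
```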
The dependence of the baryon effective masses $`M_{N,\mathrm{\Lambda },\mathrm{\Sigma },\mathrm{\Xi }}^{}`$ on the total baryonic density $`\rho _B`$ and temperature is shown in Fig. 1 where it is seen that the baryon masses decrease with baryonic density except at the highest temperatures where the effective masses become almost density-independent. Moreover, for a given baryonic density $`\rho _B`$ it is seen that as the temperature is increased the mass $`M_i^{}`$ of species $`i`$ first increases slightly up to about $`T=150`$ MeV and then decreases rather rapidly for $`T>150`$ MeV. These results are displayed in a different manner in Fig. 2 which plots the baryon masses $`M_i^{}`$ as a function of $`T`$ for $`\rho _B=0`$ fm <sup>-3</sup> and $`\rho _{B}^{}{}_{0}{}^{}=0.17`$ fm <sup>-3</sup>. It is seen that the effective baryon masses $`M_{N,\mathrm{\Lambda },\mathrm{\Sigma },\mathrm{\Xi }}^{}`$ with $`\rho _{B}^{}{}_{0}{}^{}=0.17`$ fm <sup>-3</sup> are less than those with zero baryonic density. Moreover, the effective baryonic masses $`M_{N,\mathrm{\Lambda },\mathrm{\Sigma },\mathrm{\Xi }}^{}`$ increase only slightly, if at all, with temperature up to about $`T=150`$ MeV beyond which they decrease rapidly. This behaviour is qualitatively similar to our earlier results for normal nuclear matter where the rapid decrease in the nucleon’s effective mass was, however, found to start at rather higher temperatures $`T>200`$ MeV. This rapid decrease of $`M_i^{}`$ with increasing temperature resembles a phase transition at high temperatures and low density, when the system becomes a dilute gas of baryons in a sea of baryon-antibaryon pairs.
In Fig. 3, we display the scalar mean fields $`\sigma `$ and $`\zeta `$ as functions of the total baryonic density $`\rho _B`$ for various temperatures. It is seen that the value of $`\sigma `$ initially decreases with increasing temperature for temperatures less than 150 MeV. The scalar mean field $`\zeta `$ is almost negligible for such low temperatures. However, as the temperature reaches 150 MeV there are indications of an increase in $`\sigma `$ at low baryon densities where it attains a nonzero value at $`\rho _B=0`$. For still higher temperatures, the situation is more dramatic with the value of $`\sigma `$ increasing with temperature for all values of $`\rho _B`$. This is also qualitatively similar to our earlier results for $`SU(2)`$ nuclear matter. Furthermore, the scalar mean field $`\zeta `$ becomes important for $`T>150`$ MeV and also increases with temperature for all values of $`\rho _B`$. An interesting new feature here is that the scalar mean fields $`\sigma `$ and $`\zeta `$ tend to take almost constant values irrespective of density at high temperatures which was not seen in our earlier calculations for $`SU(2)`$ nuclear matter even at temperatures as high as $`240`$ MeV. Fig. 4 displays $`\sigma `$ and $`\zeta `$ versus $`T`$ for $`\rho _B=0`$ and $`\rho _{B0}=0.17`$ fm<sup>-3</sup>. It is seen that the $`\sigma `$ field has a nonzero (and almost constant) value only at the higher density until the temperature reaches about $`T=150`$ MeV when a rapid increase sets in at both densities. The increase at zero density is actually more dramatic and the $`\sigma `$ field rapidly attains values equal to those occurring at $`\rho _{B0}=0.17`$ fm<sup>-3</sup>. This is another indication of a phase transition to a system of baryon-antibaryon pairs. The behaviour of the $`\zeta `$ field is qualitatively similar except that its value is negligible at both densities for temperatures less than about $`T=150`$ MeV.
In Fig. 5, we display the baryonic density dependence of the bag parameters for $`N,\mathrm{\Lambda },\mathrm{\Sigma },\mathrm{\Xi }`$ for different values of the temperature. For each baryon, the bag parameter increases with temperature for temperatures less than 150 MeV. However, the situation is completely reversed after the phase transition takes place. For temperatures $`T>150`$ MeV the bag parameters display a dramatic decrease with temperature for all densities. This can be seen more clearly in Fig. 6 which displays $`B_i`$ vs $`T`$ for $`\rho _B=0`$ fm <sup>-3</sup> and $`\rho _{B}^{}{}_{0}{}^{}=0.17`$ fm <sup>-3</sup>. The bag parameters are almost constant until the temperature exceeds $`T=150`$ MeV when they start to decrease rapidly. This indicates the onset of quark deconfinement above the critical temperature: at high enough temperature and/or baryon density there is a phase transition from the baryon-meson phase to the quark-gluon phase. This behaviour is also in qualitative agreement with our earlier results for $`SU(2)`$ nuclear matter except that the decrease here is more dramatic indicating that the phase transition is much stronger than in ordinary nuclear matter. Our results are also comparable to those obtained from lattice QCD calculations which have so far only explored the zero baryon density axis of the phase diagram in a meaningful way. The lattice results with 2 light quark flavors, indicate that the transition from hadronic matter to a quark-gluon plasma occurs at a temperature $`T140`$ MeV and that for low densities it may not be a phase transition at all but what is called a rapid crossover.
Fig. 7 displays the relative abundance $`\rho _i/\rho _B`$ for each baryon species. At low temperatures the nucleons $`N`$ are almost the only constituents. However, the contribution of the hyperons starts to be noticeable when the temperature reaches $`100`$ MeV and becomes more important when the temperature is increased to $`T>150`$ MeV. At temperatures $`T>200`$ MeV the contribution of the nucleons falls down to about half the total baryonic density. This can also be seen in Fig. 8 where we display the ratio of the baryonic strangeness density to the total baryonic density. As mentioned earlier, we keep the net strangeness of the system fixed to zero by introducing Kaons so that $`\rho _S=\rho _S^{Baryons}+\rho _S^{Mesons}=0`$. It is seen that the net strangeness of the baryons $`\rho _S^{Baryons}`$ is small at low temperatures $`T<150`$ MeV (note the logarithmic scale). However, as the temperature increases the net strangeness of the baryon octet becomes significant. For $`T>150`$ MeV, the ratio $`\rho _S^{Baryons}/\rho _B`$ becomes of order one. Therefore, the $`K`$-meson plays a significant role at high temperatures and probably also in the phase transition.
In Fig. 9, we display the relative abundances of the various baryon species $`\rho _i/\rho _B`$ versus temperature at normal nuclear matter density $`\rho _{B0}=0.17`$ fm<sup>-3</sup>. It is seen that the nucleons are dominant at low temperatures and their relative abundance $`\rho _N/\rho _B`$ decreases very slowly at first and deviates very little from $`1`$ until the temperature reaches about 100 MeV when $`\rho _N/\rho _B`$ starts to decrease rapidly and becomes negligibly small for temperatures larger than about 280 MeV. On the other hand, the abundance of the hyperons is negligible at low temperatures and increases significantly as the temperature is increased beyond $`T=100`$ MeV. They become more abundant than the nucleons for temperatures larger than 200 MeV. The relative abundance of hyperons in high energy heavy ion collisions can therefore be used as a simple thermometer to measure the temperature of the hot nuclear matter produced in the reaction and probably to study the phase transition to the quark gluon plasma.
In conclusion, we have generalized the MQMC model to the case of hot hypernuclear matter by introducing two new meson fields that couple to the strange quarks and using $`SU(6)`$ symmetry to relate the coupling constants. The results are qualitatively similar to those obtained for $`SU(2)`$ nuclear matter including the onset of quark deconfinement. It is observed, however, that in this model quark deconfinement in $`SU(3)`$ (or hypernuclear) matter is much stronger than in $`SU(2)`$ (or normal) nuclear matter and that hyperons become more abundant than nucleons at high temperatures $`T>200`$ MeV. An ideal gas of Kaons was introduced to keep zero net strangeness density. The interaction of the Kaons with the meson fields will be considered in a future work.
###### Acknowledgements.
Financial support by the Deutsche Forschungsgemeinschaft through the grant GR 243/51-1 is gratefully acknowledged.
# Density-functional theory for attraction between like-charged plates
## I Introduction
Solutions containing macromolecules are ubiquitous in everyday life. From food colloids to the DNA, we are surrounded by these giant molecules which directly or indirectly govern every aspect of our lives. In many cases the macromolecules in solution possess a net charge. The electrostatic repulsion between the polyions is often essential to the stabilization of colloidal suspensions. In the biological realm the electrostatics is responsible for the condensation of the DNA and formation of actin bundles, while various physiological mechanisms depend on the electrostatic interactions between the proteins and the microions. In spite of their ubiquity, our understanding of polyelectrolyte solutions is far from complete.
The effort to fathom the role of electrostatics as it applies to the colloidal suspensions goes back over half a century to the classic works of Derjaguin and Landau and of Verwey and Overbeek (DLVO). These in turn were based on the pioneering studies of Gouy and Chapman of double layers in metal electrodes. Following these early contributions, a large effort has been devoted to solve the Poisson-Boltzmann (PB) equation in various geometries. The mean-field treatment, based on the solution of the PB equation, suggests that the interaction between two equally charged macroions in a suspension containing counterions is always repulsive. In recent years, however, this dogma began to be questioned based on simulations, analytical calculations and experiments, which indicated that for small distances and large charge densities, two like-charged polyions might actually attract!
The fundamental goal of this paper is to demonstrate that this attraction is linked to the correlations between the microions omitted in the mean-field theories, and to establish the conditions under which the attraction becomes possible. We shall consider the interaction between two infinite uniformly charged plates confining their own point-like counterions. The mean-field approximation for this system is obtained by solving the PB equation which, due to the planar symmetry, can be done analytically. Once the density profile is obtained, all the other thermodynamic quantities can be easily derived. Thus, it is not difficult to demonstrate that the pressure at the mean-field level, in units of energy, is simply the density of counterions at the mid-plane between the plates. Since this is always positive, no attraction is possible within the mean-field theory.
Realization that the correlations between the counterions can strongly modify the mean-field predictions goes back a number of years. One of the first approaches proposed by Kjellander and Marčelja was to include the correlations through the numerical solution of the Anisotropic Hypernetted Chain Equation (AHNC). These authors found that the force per unit area (pressure) can become negative in the presence of divalent counterions. Monte Carlo (MC) simulations performed by Guldbrand et al. also indicate that as the surface charge density is increased, the pressure decreases if the distance between the charged surfaces is sufficiently small. As in the case of the AHNC calculations, attraction was found only in the presence of divalent counterions. These authors, however, did not analyze the case of very high charge density and short distance between the plates. In addition, since in the above calculations it is difficult to separate the different physical contributions to the pressure, the mechanism that drives the attraction remains unclear.
A different theoretical approach which attempted to shed some light on the mechanism of attraction was advanced by Stevens and Robbins. These authors proposed a density-functional theory similar to the one often employed in studies of simple liquids. This approach introduces a grand-potential free energy, $`\mathrm{\Omega }\left[\rho (𝒓)\right]`$, which is a functional of the non-uniform density of counterions $`\rho (𝒓)`$. The equilibrium properties of the system are obtained through the minimization of the total free energy. The practical problem with this method is that the exact form of the functional is not known. When the correlations between the microions are omitted, the minimization of the grand potential, $`\mathrm{\Omega }_{\mathrm{PB}}`$, becomes trivial and leads to the usual PB equation. In order to account for the correlations between the counterions, Stevens and Robbins appealed to the Local Density Approximation (LDA). Within this approach an additional contribution, $`f_{\mathrm{LDA}}`$, is added to the mean-field expression, $`\mathrm{\Omega }_{\mathrm{PB}}`$. The expression for $`f_{\mathrm{LDA}}`$ adopted by Stevens and Robbins was obtained through the extrapolation of the MC data for the homogeneous One-Component Plasma (OCP), but with the homogeneous density replaced by an inhomogeneous density profile. The minimization of the free-energy functional allowed them to determine the density profile, $`\rho (𝒓)`$, and the pressure, $`P_{\mathrm{OCP}}`$. The LDA, however, is not without its own problems. The major drawback of this approach is that, for short distances and high charge densities, the LDA is unstable. The reason for the instability is due to the fact that as the density of counterions in the vicinity of the plates increases, the chemical potential decreases, what attracts more particles to the region. This, in turn, leads to an unphysical “chain reaction” where all the counterions condense onto the plates. Clearly, when the distance between the counterions becomes smaller than some threshold value, $`s_{\mathrm{corr}}`$, the LDA ceases to be a reliable approximation.
An improvement over the LDA is the so-called Weighted Density Approximation (WDA). In this case, the excess free energy is taken to be a function of an average density, $`\rho _w(𝒓)=\int \mathrm{d}^3𝒓^{}w(|𝒓-𝒓^{}|)\rho (𝒓^{})`$, averaged over a region of radius $`s=s_{\mathrm{corr}}`$, where the interactions between the counterions are the strongest. The difficulty in the practical implementation of this scheme is the determination of a proper weight function. The simplest possible form for $`w(|𝒓-𝒓^{}|)`$, used by Stevens and Robbins, was to assume that this function has a long-range variation comparable to the wall separation. In this case, the weighted density $`\rho _w(𝒓)`$ is approximated by the homogeneous density, independent of $`𝒓`$. However, when the walls are not close, $`L>s_{\mathrm{corr}}`$, the weighted density is no longer uniform and the approximation adopted by Stevens and Robbins becomes unrealistic.
A beautiful explanation of the attraction between like-charged plates has recently been advanced by Rouzina and Bloomfield. These authors present a picture of the attraction as arising from the ground-state configuration of the counterions. Clearly, at zero temperature the counterions will condense onto the surfaces of the plates, forming two intercalating Wigner crystals. The authors advance the hypothesis that even at finite temperatures, relevant to common experimental conditions, the attraction is still governed by the zero-temperature correlations. Somewhat different formulations, based on field-theoretic methods, have also been proposed. In these approximations the attraction arises as a result of correlated fluctuations in the counterion charge densities. Although providing a nice qualitative explanation of the origin of the attraction, these simple theories fail to yield a quantitative agreement with the simulations.
In this paper we propose a different form of the weighted-density approach, which rectifies the problems of the earlier theories while still remaining numerically tractable. The excess free energy and the weight function, $`w(|𝒓𝒓^{}|)`$, are derived from the Debye-Hückel-Hole (DHH) theory of the OCP. The density profile is determined by minimizing the free-energy density with respect to the local density. Once the density profile is obtained, the free energy of the system is calculated by inserting it into the expression for the free-energy functional. Given the free energy, all the thermodynamic properties of the system can be easily calculated. A careful analysis of the behavior of the pressure as a function of the charge density and the distance between the plates allows us to explore the nature and the origin of the attraction.
The remainder of the paper is organized as follows. The model and the PB approximation for the density-functional approach are described in Sec. II. The WDA is introduced and applied in Sec. III. Our results and conclusions are summarized in Sec. IV.
## II The Poisson-Boltzmann Approach
We consider two large, charged, thin surfaces each of area $`𝒜`$, separated by a distance $`L`$ (see Fig. 1). The two plates with a negative surface charge density, $`\sigma `$, confine positive point-like monovalent counterions with charge $`e`$. The overall charge neutrality of the system is guaranteed by the constraint
$$\int _{-L/2}^{L/2}dz\rho (z)=\frac{2\sigma }{e},$$
(1)
where $`\rho (z)`$ is the local number density of counterions and $`z`$ is the Cartesian coordinate perpendicular to the plates. The space between the plates is assumed to be a dielectric continuum of constant $`\epsilon `$.
In order to explore the thermodynamic properties of the system, we use a density-functional approach. The grand potential of the system is
$$\mathrm{\Omega }\left[\rho \right]=\mathcal{F}\left[\rho \right]-\mu N,$$
(2)
where $`N`$ is the total number of counterions, $`\mu `$ is their chemical potential and the functional $`\mathcal{F}`$ is derived from the free-energy density of the homogeneous system, with the uniform density of counterions, $`\rho _\mathrm{c}=N/L𝒜`$, replaced by the local density $`\rho (z)`$. For dilute systems, the ionic correlations can be neglected and the grand-potential functional (per unit area) becomes
$$\frac{\beta \mathrm{\Omega }\left[\rho \right]}{𝒜}=\int _{-L/2}^{L/2}dz\rho (z)\left\{\mathrm{ln}\left[\mathrm{\Lambda }^3\rho (z)\right]-1\right\}+\frac{\beta }{2}\int _{-L/2}^{L/2}dz\varphi (z)\left[e\rho (z)+q(z)\right]-\beta \mu \int _{-L/2}^{L/2}dz\rho (z),$$
(3)
where the electrostatic potential,
$$\varphi (𝒓)=\int \mathrm{d}^3𝒓^{}\frac{e\rho (𝒓^{})+q(𝒓^{})}{\epsilon |𝒓-𝒓^{}|},$$
(4)
due to the symmetry of the problem, depends only on the $`z`$ coordinate. $`\mathrm{\Lambda }`$ is the de Broglie thermal wavelength of the counterions, $`\beta =1/k_BT`$ and $`q(z)=-\sigma \left[\delta (z-L/2)+\delta (z+L/2)\right]`$ is the surface charge density of the plates. The functional minimization of this expression,
$$\frac{1}{𝒜}\frac{\delta \beta \mathrm{\Omega }}{\delta \rho (z)}=0,$$
(5)
produces the optimum density profile,
$$\rho (z)=\rho _0\mathrm{exp}\left[-\beta e\varphi (z)\right].$$
(6)
The constant $`\rho _0`$ is determined from the overall charge-neutrality condition, Eq. (1),
$$\rho _0\equiv \frac{2\sigma }{e{\displaystyle \int _{-L/2}^{L/2}}dz\mathrm{exp}\left[-\beta e\varphi (z)\right]}.$$
(7)
The electrostatic potential is obtained by solving the Poisson equation,
$$\frac{\mathrm{d}^2\varphi (z)}{\mathrm{d}z^2}=-\frac{4\pi }{\epsilon }\left[e\rho (z)+q(z)\right],$$
(8)
with the distribution of free ions given by Eq. (6). We find
$$\varphi (z)=\frac{1}{\beta e}\mathrm{ln}\left[\mathrm{cos}^2\left(\frac{z-z_0}{\lambda }\right)\right]+\varphi _0,$$
(9)
where $`\varphi _0`$ is the reference potential, which we will set to zero. Here $`\lambda =1/\sqrt{2\pi \lambda _B\rho _0}`$ and $`\lambda _B=\beta e^2/\epsilon `$ is the Bjerrum length. Eq. (8) has to obey two boundary conditions, namely,
$`E(z=0)`$ $`=`$ $`0,`$ (10)
$`E\left(z=\pm {\displaystyle \frac{L}{2}}\right)`$ $`=`$ $`\pm {\displaystyle \frac{4\pi \sigma }{\epsilon }}.`$ (11)
From the first equation, the electric field vanishes at the mid-plane and, therefore, $`z_0=0`$. The second equation imposes the discontinuity of the electric field at both charged surfaces, leading to
$$\frac{1}{\lambda }\mathrm{tan}\left(\frac{L}{2\lambda }\right)=\frac{2\pi \sigma \lambda _B}{e}.$$
(12)
The potential at a point $`z`$ is, then, given by
$$\varphi (z)=\frac{1}{\beta e}\mathrm{ln}\left[\mathrm{cos}^2\left(\frac{z}{\lambda }\right)\right],$$
(13)
with $`\lambda `$ the root of Eq. (12). The optimum density profile derived from this potential,
$$\rho (z)=\frac{\rho _0}{\mathrm{cos}^2(z/\lambda )},$$
(14)
can now be substituted into the free-energy functional, allowing the calculation of the total free energy. The thermodynamic properties of the system can be determined from a suitable differentiation of the total free energy. For example, the force between the two plates is given by minus the derivative of the free energy with respect to the separation $`L`$ between the two surfaces. This differentiation leads to a particularly simple expression for the force per unit area (or pressure),
$$\beta P=\rho _0.$$
(15)
We note that although it might be tempting to attribute this simple result to the contact theorem, this is not the case, since the conditions under which this theorem holds are violated in the present geometry; Eq. (15) is purely a mean-field result.
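As a concrete illustration of this mean-field result, Eq. (12) can be solved numerically for $`\lambda `$, after which the profile of Eq. (14) and the pressure of Eq. (15) follow immediately. The minimal Python sketch below uses the Bjerrum length of water quoted in Sec. IV, while the plate separation and the inverse surface charge density $`\mathrm{\Sigma }=e/\sigma `$ are assumed values chosen only for illustration.

```python
# Sketch: PB pressure between two charged plates from Eqs. (12), (14), (15).
# lam_B, L and Sigma are illustrative values (all lengths in Angstrom).
import numpy as np
from scipy.optimize import brentq

lam_B = 7.14          # Bjerrum length of water at room temperature
L = 150.0             # distance between the plates
Sigma = 500.0         # inverse surface charge density e/sigma (assumed)
sigma = 1.0 / Sigma   # sigma/e

# Eq. (12) with t = L/(2*lambda) in (0, pi/2):  t*tan(t) = pi*sigma*lam_B*L
rhs = np.pi * sigma * lam_B * L
t = brentq(lambda t: t * np.tan(t) - rhs, 1e-12, np.pi / 2 - 1e-12)
lam = L / (2.0 * t)

rho0 = 1.0 / (2.0 * np.pi * lam_B * lam**2)   # from lambda = 1/sqrt(2 pi lam_B rho0)
z = np.linspace(-L / 2, L / 2, 201)
rho = rho0 / np.cos(z / lam)**2               # density profile, Eq. (14)
beta_P = rho0                                  # mean-field pressure, Eq. (15)

print(f"lambda = {lam:.2f} A,  beta*P = {beta_P:.3e} A^-3")
```

Consistent with Eq. (15), the pressure obtained this way is positive for any choice of parameters.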
## III The Weighted-Density Approximation
For dense systems, the correlations between the microions become relevant. For instance, if a counterion is present at the position $`𝒓`$, then, due to the electrostatic repulsion, the probability that another counterion is located in its vicinity is drastically reduced. The correlations in the positions of the counterions reduce the mean-field estimate of the electrostatic free energy. No exact method exists for calculating this excess contribution. The simplest approximation, the LDA, consists of adding to Eq. (3) a local functional,
$$f_{\mathrm{LDA}}=\int _{-L/2}^{L/2}dz\rho (z)f_{\mathrm{corr}}\left[\rho (z)\right],$$
(16)
where $`f_{\mathrm{corr}}\left[\rho (z)\right]`$ is the correlational free energy per particle. Within the LDA one normally uses the expression derived for the homogeneous system, in which the uniform density $`\rho _\mathrm{c}=N/L𝒜`$ is replaced by the local density profile $`\rho (𝒓)`$. Unfortunately, as was mentioned above, the LDA is unstable when the one-particle density $`\rho (𝒓)`$ is a rapidly varying function of the position. For example, for high surface charge densities, the minimization of the grand potential has no solution. To circumvent this and related problems intrinsic to the LDA, Tarazona and Curtin and Ashcroft proposed a WDA, in which the free-energy density, $`f_{\mathrm{LDA}}`$, is replaced by
$$f_{\mathrm{WDA}}=\int _{-L/2}^{L/2}dz\rho (z)f_{\mathrm{corr}}\left[\rho _w(z)\right].$$
(17)
The fundamental difference between the LDA and the WDA is that the latter is assumed to depend not on the local density $`\rho (𝒓)`$, but on some average density within the neighborhood of the point $`𝒓`$,
$$\rho _w(𝒓)=\int \mathrm{d}^3𝒓^{}w[|𝒓-𝒓^{}|;\rho (𝒓)]\rho (𝒓^{}).$$
(18)
This provides a control mechanism which prevents an unphysical, singular buildup of concentration at one point. The grand potential is obtained by adding the excess free energy per area, given by Eq. (17), to Eq. (3),
$$\frac{\beta \mathrm{\Omega }\left[\rho \right]}{𝒜}=\int _{-L/2}^{L/2}dz\rho (z)\left\{\mathrm{ln}\left[\mathrm{\Lambda }^3\rho (z)\right]-1\right\}+\frac{\beta }{2}\int _{-L/2}^{L/2}dz\varphi (z)\left[e\rho (z)+q(z)\right]+\beta \int _{-L/2}^{L/2}dz\rho (z)f_{\mathrm{corr}}\left[\rho _w(z)\right]-\beta \mu \int _{-L/2}^{L/2}dz\rho (z).$$
(22)
Minimization of this expression leads to the optimum particle number density,
$$\rho (z)=\rho _0\mathrm{exp}\left[-\beta e\varphi (z)-\beta \mu _{\mathrm{ex}}(z)\right],$$
(23)
where the excess chemical potential derived from $`f_{\mathrm{WDA}}`$, Eq. (17), is
$$\mu _{\mathrm{ex}}(z)=\frac{\delta f_{\mathrm{WDA}}}{\delta \rho (z)}=f_{\mathrm{corr}}\left[\rho _w(z)\right]+\int _{-L/2}^{L/2}dz^{}\rho (z^{})\frac{\delta f_{\mathrm{corr}}\left[\rho _w(z^{})\right]}{\delta \rho (z)},$$
(24)
and the normalization coefficient is
$$\rho _0\equiv \frac{2\sigma }{e{\displaystyle \int _{-L/2}^{L/2}}dz\mathrm{exp}\left[-\beta e\varphi (z)-\beta \mu _{\mathrm{ex}}(z)\right]}.$$
(27)
The electrostatic potential satisfies the Poisson equation, Eq. (8), with the charge density given by the Eq. (23). Integrating the Poisson equation over a rectangular shell of area $`𝒜`$ and width $`z`$, and appealing to the Gauss’ theorem, an integro-differential equation for the electric field $`E(z)`$ can be obtained,
$$\mathcal{E}(\overline{z})=4\pi \overline{\sigma }\frac{{\displaystyle \int _0^{\overline{z}}}d\overline{z}^{}\mathrm{exp}\left[-\overline{\mu }_{\mathrm{ex}}(\overline{z}^{})+{\displaystyle \int _0^{\overline{z}^{}}}d\overline{z}^{\prime \prime }\mathcal{E}(\overline{z}^{\prime \prime })\right]}{{\displaystyle \int _0^{\overline{L}/2}}d\overline{z}^{}\mathrm{exp}\left[-\overline{\mu }_{\mathrm{ex}}(\overline{z}^{})+{\displaystyle \int _0^{\overline{z}^{}}}d\overline{z}^{\prime \prime }\mathcal{E}(\overline{z}^{\prime \prime })\right]},$$
(28)
where $`\mathcal{E}\equiv e\beta \lambda _BE`$, $`\overline{\sigma }\equiv \sigma \lambda _B^2/e`$, $`\overline{z}\equiv z/\lambda _B`$, $`\overline{L}\equiv L/\lambda _B`$ and $`\overline{\mu }_{\mathrm{ex}}\equiv \beta \mu _{\mathrm{ex}}`$. The local density $`\rho (z)`$, which enters in the calculation of the excess chemical potential, Eq. (24), can be obtained from the derivative of the electric field, since $`\nabla \cdot 𝑬(𝒓)=4\pi e\rho (𝒓)/\epsilon `$. Eq. (28) explicitly fulfills the two boundary conditions: $`\mathcal{E}(0)=0`$ and $`\mathcal{E}(\pm \overline{L}/2)=\pm 4\pi \overline{\sigma }`$.
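In practice, Eq. (28) lends itself to a simple fixed-point (Picard) iteration; Sec. IV only states that the equation is iterated until convergence, so the discretization, initial guess and under-relaxation used in the sketch below are our own assumptions. The excess chemical potential $`\overline{\mu }_{\mathrm{ex}}`$ enters as an input function; setting it to zero recovers the PB limit of Sec. II.

```python
# Schematic fixed-point solver for Eq. (28); grid size, initial guess and
# relaxation parameter are assumed.  mu_ex = None corresponds to the PB limit.
import numpy as np

def solve_field(Lbar, sigma_bar, mu_ex=None, n=400, mix=0.3, tol=1e-10, itmax=5000):
    """Dimensionless field E(zbar) on [0, Lbar/2] from Eq. (28)."""
    z = np.linspace(0.0, Lbar / 2, n)
    E = 4 * np.pi * sigma_bar * z / (Lbar / 2)   # linear initial guess
    mu = np.zeros(n) if mu_ex is None else mu_ex(z)
    for _ in range(itmax):
        # Phi(z) = int_0^z E dz'' via a cumulative trapezoidal rule
        Phi = np.concatenate(([0.0], np.cumsum(0.5 * (E[1:] + E[:-1]) * np.diff(z))))
        g = np.exp(-mu + Phi)
        G = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(z))))
        E_new = 4 * np.pi * sigma_bar * G / G[-1]
        if np.max(np.abs(E_new - E)) < tol:
            return z, E_new
        E = (1 - mix) * E + mix * E_new          # under-relaxation
    return z, E

zbar, E = solve_field(Lbar=21.0 / 7.14, sigma_bar=0.05)
rho_bar = np.gradient(E, zbar) / (4 * np.pi)     # rho*lam_B^3 from the divergence of E
print(f"mid-plane density rho(0)*lam_B^3 = {rho_bar[0]:.4e}")
```

Both boundary conditions are built in: $`G(0)=0`$ forces $`\mathcal{E}(0)=0`$, while the normalization by $`G(\overline{L}/2)`$ forces $`\mathcal{E}(\overline{L}/2)=4\pi \overline{\sigma }`$.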
The solution of this equation depends on the specific form of the excess free-energy density and the weight function $`w\left(|𝒓𝒓^{}|\right)`$. For the homogeneous OCP the electrostatic free energy can be easily obtained using the DHH theory of Nordholm. This is a simple linear theory based on the ideas of Debye and Hückel. The electrostatic potential of the OCP is assumed to satisfy a linearized PB equation. As a correction for the linearization, Nordholm postulated the existence of an excluded-volume region of size $`s_{\mathrm{corr}}`$, from which all other ions are excluded. The size of this region is such that the electrostatic repulsion between two counterions is comparable to the thermal energy. Recent calculations using a generalized Debye-Hückel theory indicate that this exclusion region is responsible for the oscillations observed in the structure factor of the OCP at high couplings. Following Nordholm, we find
$$s_{\mathrm{corr}}=\frac{1}{\kappa _D}\left(1+3\lambda _B\kappa _D\right)^{1/3}-\frac{1}{\kappa _D},$$
(29)
where $`\kappa _D=\sqrt{4\pi \lambda _B\rho _\mathrm{c}}`$ is the inverse of the Debye length. The excess free energy per particle is calculated to be
$$\beta f_{\mathrm{OCP}}=\frac{1}{4}\left[1+\frac{2\pi }{3\sqrt{3}}+\mathrm{ln}\left(\frac{\omega ^2+\omega +1}{3}\right)-\omega ^2-\frac{2}{\sqrt{3}}\mathrm{tan}^{-1}\left(\frac{2\omega +1}{\sqrt{3}}\right)\right],$$
(30)
where $`\omega =(1+3\lambda _B\kappa _D)^{1/3}`$. The correlational free energy per particle for the WDA, $`f_{\mathrm{corr}}`$, which appears in (24), is obtained by replacing $`\rho _\mathrm{c}`$ by $`\rho _w(z)`$ in the expression (30), that is, $`f_{\mathrm{corr}}\left[\rho _w(z)\right]=f_{\mathrm{OCP}}\left[\rho _\mathrm{c}\to \rho _w(z)\right]`$.
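The DHH ingredients can be coded directly as printed; the sketch below implements Eqs. (29) and (30) with all lengths measured in units of $`\lambda _B`$, and the test density is an assumed value.

```python
# DHH ingredients: inverse Debye length, exclusion radius (Eq. 29) and
# excess free energy per particle (Eq. 30).  Lengths in units of lam_B.
import numpy as np

def kappa_D(rho, lam_B=1.0):
    return np.sqrt(4 * np.pi * lam_B * rho)

def s_corr(rho, lam_B=1.0):
    kD = kappa_D(rho, lam_B)
    return ((1 + 3 * lam_B * kD)**(1.0 / 3.0) - 1.0) / kD           # Eq. (29)

def beta_f_OCP(rho, lam_B=1.0):
    omega = (1 + 3 * lam_B * kappa_D(rho, lam_B))**(1.0 / 3.0)
    return 0.25 * (1 + 2 * np.pi / (3 * np.sqrt(3))
                   + np.log((omega**2 + omega + 1) / 3) - omega**2
                   - 2 / np.sqrt(3) * np.arctan((2 * omega + 1) / np.sqrt(3)))  # Eq. (30)

rho = 1e-3   # assumed counterion density in units of lam_B^-3
print(s_corr(rho), beta_f_OCP(rho))
```

A quick check: in the dilute limit $`\omega \to 1`$ the bracket in Eq. (30) vanishes, so the excess free energy correctly goes to zero at weak coupling.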
To obtain the weight function we require that the second functional derivative of the free energy $`\mathcal{F}`$ in the limit of homogeneous densities,
$`{\displaystyle \frac{\delta ^2\beta \mathcal{F}}{\delta \rho (𝒓)\delta \rho (𝒓^{})}}`$ $`=`$ $`{\displaystyle \frac{\delta ^3(𝒓-𝒓^{})}{\rho (𝒓)}}+w(|𝒓-𝒓^{}|){\displaystyle \frac{\delta \beta \mu _{\mathrm{ex}}(𝒓)}{\delta \rho (𝒓^{})}}`$ (32)
$`+{\displaystyle \frac{\lambda _B}{|𝒓-𝒓^{}|}},`$
produces the direct correlation function $`C_2(𝒓)`$ of the homogeneous system,
$$\frac{\delta ^2\beta \mathcal{F}}{\delta \rho (𝒓)\delta \rho (𝒓^{})}=\frac{\lambda _B}{|𝒓-𝒓^{}|}-C_2(𝒓-𝒓^{}).$$
(33)
Following Groot, we find that a reasonable approximation for the weight function is
$$w(r)=w(|𝒓|)=\frac{3}{2\pi s_{\mathrm{corr}}^2}\left(\frac{1}{r}-\frac{1}{s_{\mathrm{corr}}}\right)\mathrm{\Theta }(s_{\mathrm{corr}}-r),$$
(34)
where $`\mathrm{\Theta }(x)`$ is the Heaviside step function. It is important to remember that the radius of the excluded-volume region, $`s_{\mathrm{corr}}`$, is now a function of the position, since the average density $`\rho _\mathrm{c}`$, which appears in Eq. (29), is replaced by $`\rho (z)`$, the local density of counterions, see Eq. (18). Taking advantage of the planar symmetry of the system, the expression for the weighted density can be written explicitly as a one-dimensional quadrature,
$`\rho _w(z)`$ $`=`$ $`{\displaystyle \frac{3}{s_{\mathrm{corr}}^2}}{\displaystyle \int _{-L/2}^{L/2}}dz^{}\rho (z^{}){\displaystyle \int _0^{\mathrm{}}}d\varrho \varrho \left({\displaystyle \frac{1}{\sqrt{\varrho ^2+(z-z^{})^2}}}-{\displaystyle \frac{1}{s_{\mathrm{corr}}}}\right)\mathrm{\Theta }\left(s_{\mathrm{corr}}-\sqrt{\varrho ^2+(z-z^{})^2}\right)`$ (35)
$`=`$ $`{\displaystyle \frac{3}{s_{\mathrm{corr}}^2}}{\displaystyle \int _{z_<}^{z_>}}dz^{}\rho (z^{}){\displaystyle \int _0^{\sqrt{s_{\mathrm{corr}}^2-(z-z^{})^2}}}d\varrho \varrho \left({\displaystyle \frac{1}{\sqrt{\varrho ^2+(z-z^{})^2}}}-{\displaystyle \frac{1}{s_{\mathrm{corr}}}}\right)`$ (36)
$`=`$ $`{\displaystyle \frac{3}{2s_{\mathrm{corr}}^3}}{\displaystyle \int _{z_<}^{z_>}}dz^{}\rho (|z^{}|)\left(s_{\mathrm{corr}}-|z-z^{}|\right)^2,`$ (37)
where $`z_<\equiv \mathrm{max}(-L/2,z-s_{\mathrm{corr}})`$, $`z_>\equiv \mathrm{min}(L/2,z+s_{\mathrm{corr}})`$ and $`s_{\mathrm{corr}}`$ is a function of $`z`$ through $`\rho (z)`$.
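The quadrature of Eq. (37) is straightforward to implement; the sketch below uses a trapezoidal rule on a uniform grid (an assumed discretization) and takes the position-dependent $`s_{\mathrm{corr}}(z)`$ as a precomputed array.

```python
# Weighted density rho_w(z) from Eq. (37), with s_corr(z) supplied as an array
# (obtained from the local density through Eq. (29)).  Trapezoidal quadrature.
import numpy as np

def rho_w(z_grid, rho, s):
    L_half = z_grid[-1]
    out = np.empty_like(rho)
    for i, z in enumerate(z_grid):
        lo, hi = max(-L_half, z - s[i]), min(L_half, z + s[i])   # z_<, z_>
        mask = (z_grid >= lo) & (z_grid <= hi)
        zp, rp = z_grid[mask], rho[mask]
        out[i] = 3.0 / (2.0 * s[i]**3) * np.trapz(rp * (s[i] - np.abs(z - zp))**2, zp)
    return out

# sanity check: in the bulk, a uniform density gives rho_w = rho (normalized weight)
z = np.linspace(-1.0, 1.0, 401)
print(rho_w(z, np.ones_like(z), 0.2 * np.ones_like(z))[200])   # ~1 at the mid-plane
```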
## IV Results and Conclusions
Once $`f_{\mathrm{corr}}`$, $`\mu _{\mathrm{ex}}(z)`$ and $`\rho _w(z)`$ are defined, the electric field and, consequently, the optimum density profile can be determined from the numerical iteration of Eq. (28) until convergence is obtained. The Helmholtz free energy, $`F`$, associated with the optimum counterion distribution (23), is determined by substituting it into the free-energy functional $`\mathcal{F}`$,
$$\frac{\beta F}{𝒜}=\frac{2\sigma }{e}\left[\mathrm{ln}\left(\mathrm{\Lambda }^3\rho _0\right)-1\right]-\frac{\beta e}{2}\int _{-L/2}^{L/2}dz\rho (z)\varphi (z)-\beta \sigma \varphi \left(\frac{L}{2}\right)-\int _{-L/2}^{L/2}dz\rho (z)\left\{\beta \mu _{\mathrm{ex}}(z)-\beta f_{\mathrm{corr}}\left[\rho _w(z)\right]\right\}.$$
(38)
Using this expression, the pressure, for different distances between the plates, $`L`$, and various charge densities, $`\sigma `$, can be easily obtained through numerical differentiation,
$$P=-\frac{1}{𝒜}\frac{\partial F}{\partial L},$$
(39)
as shown in Fig. 2. When the charge density is below a threshold value, $`\overline{\sigma }<\overline{\sigma }_c`$, the dimensionless pressure, $`\lambda _B^3\beta P`$, is always positive and a monotonically decreasing function of $`\overline{L}`$. Above the critical surface charge density the pressure exhibits a distinct minimum. In particular, we find that for sufficiently high surface charge densities the force between the two like-charged surfaces becomes negative, i.e. the two plates attract!
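Since $`F`$ is only available numerically, the derivative in Eq. (39) is most simply evaluated by a centered finite difference; in the sketch below `free_energy` is a placeholder for the minimized functional of Eq. (38), and the step `dL` is an assumed value.

```python
# Pressure from Eq. (39) by a centered finite difference of F(L)/A.
def pressure(free_energy, L, dL=1e-3):
    """free_energy(L): minimized free energy per unit area at separation L."""
    return -(free_energy(L + dL) - free_energy(L - dL)) / (2.0 * dL)
```

The step must be small enough for the difference quotient to approximate the derivative, yet large enough that the numerical noise of the minimization does not dominate.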
In order to compare our results with other theories, we assumed that the dielectric medium between the plates is water at room temperature and, consequently, that the Bjerrum length is $`\lambda _B=7.14`$Å. The distance between the plates is fixed at 150 Å and the inverse of the surface charge density, $`\mathrm{\Sigma }=e/\sigma `$, is varied from 40 Å<sup>2</sup> to 1000 Å<sup>2</sup>. Our results, illustrated in Fig. 3, show that for small surface charge densities the pressure increases almost linearly with the inverse charge density, $`\mathrm{\Sigma }`$. In this case, since $`P_{\mathrm{corr}}P_{\mathrm{PB}}`$, the pressure is dominated by the PB behavior. However, when the charge density becomes large, the slope of $`P_{\mathrm{WDA}}`$ increases due to the strong repulsion between the counterions.
We also compare our calculations with the simulations of Guldbrand et al. In this case, the distance between the plates is fixed at 21 Å and the surface charge density is varied from 0.01 C/m<sup>2</sup> to 0.6 C/m<sup>2</sup>, as shown in Fig. 4.
When the density of counterions is small, $`P_{\mathrm{WDA}}`$ does not differ significantly from $`P_{\mathrm{PB}}`$. As the surface charge density is increased, the correlations among the counterions become relevant and $`P_{\mathrm{WDA}}`$ changes its slope and begins to decrease. Our results are in good agreement with the simulations, which also indicate that for a separation of 21 Å the pressure exhibits a region where it decreases with increasing surface charge density.
###### Acknowledgements.
We acknowledge the fruitful discussions with Marcelo Louzada-Cassou, Rudi Podgornik, Roland Kjellander and Stjepan Marčelja. One of us, Marcia Barbosa, is particularly grateful for the useful discussion with Mark O. Robbins. This work was supported in part by CNPq — Conselho Nacional de Desenvolvimento Científico e Tecnológico and FINEP — Financiadora de Estudos e Projetos, Brazil. This research was also supported by the National Science Foundation under Grant No. PHY94-07194.
# Dynamical studies of the response function in a Spin Glass
## I introduction
The non-equilibrium nature of the spin glass phase has been extensively investigated since it was discovered in low frequency ac-susceptibility measurements and dc-relaxation experiments in the early 1980’s . The interpretation of the experimental results and the design of new experimental procedures emanate essentially from two different theoretical approaches: hierarchical phase space models and real space droplet/domain models . A high level of phenomenological and theoretical insight into the phenomena has by now been acquired. There remain, however, unresolved problems as to the interpretation and reproducibility of non-equilibrium results. These shortcomings in part also prevent a useful judgement of the relative applicability of the different models to real 3d spin glasses. The recently reported memory effect, observable in low frequency ac-susceptibility experiments, not only elucidates some paradoxical features of the spin glass phase, but a detailed study of related phenomena also emphasizes the importance of cooling/heating rates, wait times, thermalisation times etc., i.e. the detailed thermal history of the sample, for the results of low frequency ac-susceptibility and dc-magnetic relaxation experiments. These complicated non-equilibrium phenomena should be considered in the perspective that an ac-experiment at only a couple of decades higher frequency, $`f\approx `$ 100 Hz (observation time 1/$`\omega \approx `$ 2 ms), in spite of a strong frequency dependence, simply shows an equilibrium character when measured in ordinary ac-susceptometers. The non-equilibrium processes of the system are only active on observation time scales governed by the cooling/heating rate; during a halt at constant temperature they evolve with the time the sample has been kept at constant temperature.
In this paper we report results from low field zero field cooled (zfc) relaxation experiments, i.e. measurements of the response function, at one specific temperature in the spin glass phase. The parameter we vary in a controlled way is the thermal history in the spin glass phase. The results imply that the response function is governed by: the cooling procedure in a rather narrow region just above the measurement temperature, the wait time before the response function is measured, and the cooling/heating procedure in a rather narrow region just below the measurement temperature if an additional undercooling has been carried out. It is also clearly shown that the thermal history, in the spin glass phase, at temperatures well enough separated from the measurement temperature is irrelevant to the response function at $`T_m`$. The results are put in relation to a phenomenological real space fractal domain picture that is introduced and discussed in more detail elsewhere .
The investigation is also motivated by a lack of agreement in the detailed behaviour of the non-equilibrium dynamics, when measured on one and the same spin glass material in different magnetometers.
## II experimental
The sample is a bulk piece of Ag(Mn 11at$`\%`$) with a spin glass temperature $`T_g\approx `$ 35 K, made by a drop-synthesis technique. The experiments were performed in a non-commercial SQUID magnetometer optimised for dynamic studies in low magnetic fields . The sample is cooled in zero magnetic field from a temperature above $`T_g`$, using a controlled thermal sequence, to the measurement temperature, $`T_m`$= 27 K, where a weak magnetic field ($`h`$= 1 Oe) is applied after the system has been kept at $`T_m`$ for a certain wait time, $`t_w`$. The relaxation of the magnetisation is then recorded as a function of the time elapsed after the field application. In our figures of the relaxation data we show the relaxation rate, i.e. the logarithmic derivative of the response function, $`S`$=1/$`h`$ d$`m`$/dlog$`t`$, which is the quantity that most clearly exposes changes of the response function after the different thermal procedures. The relaxation rate is related to the out-of-phase component of the ac-susceptibility via $`S(t)\approx `$ -2/$`\pi `$ $`\chi ^{\prime \prime }(\omega )`$ at $`t`$=1/$`\omega `$ , and $`\chi ^{\prime \prime }(T)`$ is also the quantity that most instructively has been used to visualise the memory phenomenon in spin glasses mentioned above .
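To make the connection between $`m(t)`$ and the plotted quantity explicit, the sketch below computes $`S(t)`$ as a numerical logarithmic derivative on a log-spaced time grid and reads off the effective age from the position of the maximum. The synthetic $`m(t)`$ is a toy form chosen only so that $`S(t)`$ peaks near $`t_w`$; it is not a model of the measured data.

```python
# Sketch: relaxation rate S(t) = (1/h) dm/dlog10(t) and the effective age
# from the peak position.  The m(t) below is a synthetic toy curve.
import numpy as np

def relaxation_rate(t, m, h=1.0):
    return np.gradient(m / h, np.log10(t))

t_w = 1.0e3                                   # "wait time" of the toy curve [s]
t = np.logspace(0, 5, 200)                    # observation times [s]
m = 0.5 + 0.05 * np.arctan(np.log(t / t_w) / 2.0)
S = relaxation_rate(t, m)
print("effective age ~", t[np.argmax(S)], "s")   # close to t_w
```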
To get acquainted with the sample, the field cooled and zero field cooled magnetisation is plotted vs. temperature in Fig. 1. The curves are measured in a field of 40 Oe. The cusp in the zfc susceptibility around 35 K closely reflects the spin glass temperature $`T_g`$ of the sample.
## III Results
The classic aging experiment is performed by cooling the sample directly to a measurement temperature below $`T_g`$, waiting a controlled amount of time and then switching the magnetic field on or off. Not much attention has been given to the influence of the cooling rate. In Fig. 2, the relaxation rate, $`S(t)`$, is plotted vs. log$`t`$ for three different wait times. The results for two substantially different, but still in a logarithmic time perspective rather similar, cooling rates are plotted, 3 K/min open symbols and 0.5 K/min solid symbols. The wait time dependence of the data displays the characteristics of the ageing phenomenon in spin glasses (a similar ageing phenomenon is also an inherent property of other disordered and frustrated magnetic systems ). The response is visibly sensitive to the cooling rate, $`t_c`$, but the influence of $`t_c`$ decreases as the wait time increases. For the $`t_w`$=10 s curves, rapid cooling yields a maximum in $`S`$($`t`$) at a shorter observation time than slow cooling. The two curves are clearly different and do not coalesce anywhere in the measured time interval. For the longer wait times, the positions of the maxima do not differ appreciably, but the magnitude is somewhat higher for the rapidly cooled curves. At short observation times, for the $`t_w`$=$`10^3`$ and $`10^4`$ s curves, there are no or weak differences in the relaxation rates, but after some time they start to deviate and there remains a cooling rate dependence all through the long time part of our observation time window.
Concentrating on the position of the maximum in the relaxation rate and identifying this with an effective age of the system, $`t_{aeff}`$, this parameter is apparently governed by the cooling rate and the wait time. When the wait time is short, $`t_{aeff}`$ is governed by the cooling rate, whereas for longer wait times, it is dominated by the wait time, $`t_w`$. The position of the maximum in the relaxation rate is closely equal to the wait time for $`t_w`$ = $`10^3`$ and $`10^4`$ s while for the $`t_w`$ = 10 s the maximum is delayed about a decade in time. Similar tendencies as to the evolution of the effective age with increasing wait time have earlier been reported in connection with a specific method, ‘field quenching’, to achieve a well defined initial state for ageing experiments .
To further investigate how the thermal sequence on approaching the measurement temperature influences the response function, temperature shift experiments under controlled cooling and heating rates were performed. In the negative temperature shift experiment, the system is kept for a certain wait time at a shift temperature, $`T_s`$, above $`T_m`$. Thereafter the system is cooled to the measurement temperature, where the field is applied immediately after the sample has become thermally stabilised. The positive temperature shift experiment follows a similar procedure, but the temperature $`T_s`$ is below $`T_m`$. Fig. 3 shows the relaxation rate when the system has been subjected to (a) negative and (b) positive temperature shifts. The wait time at $`T_s`$ is 1000 s and $`T_m`$ = 27 K. The cooling and heating rates are 0.5 K/min. Two reference curves measured after the sample has been cooled at 0.5 K/min directly to $`T_m`$ are included in the figure. One corresponds to a wait time $`t_w`$ = 1000 s and the other is measured without any wait time prior to the field application ($`t_w\approx `$ 0). From Fig. 3(a) it is seen that for $`T_s`$ $`>`$ 30 K the response is indistinguishable from what is measured if the sample is cooled directly to $`T_m`$. The time the sample has been kept at $`T_s`$ is irrelevant for the response at $`T_m`$. When the shift temperature $`T_s`$ is lower than 30 K, the time spent at the higher temperature starts to influence the response at $`T_m`$. The first deviation from the reference ($`t_w\approx `$ 0) curve occurs at long observation times; the maximum in the relaxation rate decreases in magnitude, on further decreasing $`T_s`$ it shifts towards longer times, and finally, when $`T_s`$ approaches $`T_m`$, the relaxation rate approaches the $`t_w`$ = 1000 s reference curve. The important implication of this behaviour is that only the thermal history in a limited temperature region just above $`T_m`$ governs the response function.
The results from positive temperature shifts shown in Fig. 3(b) give a somewhat more complicated picture. A first observation is that when the system has been cooled to and aged at temperatures well below $`T_m`$, the measured curves are identical, but different from the $`t_w\approx `$ 0 reference curve: the maximum in the relaxation rate occurs at a shorter time than for the reference curve. Thus, the response function is somewhat different after only cooling to the measurement temperature at one specific cooling rate, compared to after substantially undercooling the sample and re-heating it with a similar heating rate. The curves representing $`T_s`$= 15 and 20 K are indistinguishable from each other, but when $`T_s`$ is increased further, the relaxation rate gets suppressed and the maximum shifts towards longer times, to finally coalesce with the reference $`t_w`$= 1000 s curve when $`T_s\to T_m`$. Fig. 4 shows relaxation rate curves from experiments where the sample is undercooled to $`T_s`$, and either kept there for 1000 s (as in Fig. 3(b)) or immediately re-heated to $`T_m`$, where the relaxation is measured using $`t_w\approx `$ 0. The cooling/heating rates are again 0.5 K/min. The figure shows that the wait time at the lowest temperature does not affect the results: identical curves are measured whether the sample is kept at 20 K for 1000 s or 0 s. For a shift temperature, $`T_s`$= 24.5 K, closer to $`T_m`$, a clear difference is observed between the two curves with different wait times. The implication from these data is again that the thermal history far enough away from the measurement temperature is irrelevant to the response function at $`T_m`$; in the experimental time window, the response is fully governed by the previous cooling/heating history in the very neighbourhood of $`T_m`$.
In Fig. 5, results for (a) negative and (b) positive temperature shifts using two different cooling rates, 3 K/min and 0.5 K/min (as in Fig. 3), are displayed. The cooling/heating rate from $`T_s`$ to $`T_m`$ = 27 K is always the same, 0.5 K/min. For negative temperature shifts (Fig. 5 (a)) there is no influence of the different cooling rates for $`T_s`$ $`>`$ 29 K. The rapid and slow cooling give the same response at the measurement temperature. For $`T_s`$ = 28 K influences of the different cooling rates start to become observable at observation times $`t>10^3`$ s. At shorter observation times no sign of the different cooling rates can be resolved. When $`T_s`$ gets even closer to $`T_m`$, the influence from the differences in cooling rates appears earlier, but still there is a region at shorter observation times where the response is independent of the cooling rate.
For positive temperature shifts, Fig. 5 (b), the result is different. There are clear cooling rate effects for all different temperatures $`T_s`$. The curve representing rapid cooling always has a larger magnitude and a sharper maximum that occurs at shorter observation times than the corresponding slowly cooled curve. This result implies that the cooling rate in the region just above $`T_m`$ always remains one of the governing parameters for the response after undercooling the sample, i.e. a memory of this cooling process becomes imprinted in the spin structure and is conserved in spite of the re-structuring that occurs at lower temperatures.
## IV Discussion
Our current results on the non-equilibrium response function show that the cooling rate significantly influences the measured response function; especially the response at short wait times is dominated by the specific cooling rate employed. Also, in experiments where the sample has been undercooled below the measurement temperature, the cooling rate above the measurement temperature remains one of the governing parameters for the non-equilibrium response. These findings have practical importance for the design of experimental procedures and when making detailed comparisons between results obtained in different experimental set-ups and in different laboratories on similar spin glass materials.
We have, using different experimental procedures, elucidated an apparently paradoxical property of the non-equilibrium spin structure in spin glasses: the spin structure records the cooling history, this thermal history (cooling/heating rate, wait times etc.) remains imprinted in the configuration, and the fragment of the thermal history confined in a rather narrow region close to $`\mathrm{𝑎𝑛𝑦}`$ higher temperature $`T_m`$ within the spin glass phase can be recovered in a relaxation experiment at $`T_m`$. A continuous memory recording occurs on cooling, in spite of the fact that the spin structure is subjected to substantial reconfiguration at all temperatures below $`T_g`$, as is observed from the ever present ageing behaviour, and although the short time response appears equilibrated at all temperatures in the spin glass phase.
Employing the droplet scaling model and the concepts of chaos and overlap length , the non-equilibrium and ageing behaviour observed after cooling the sample to a temperature in the spin glass phase can be accounted for as an immediate consequence of the growth of the size of correlated spin glass domains at constant temperature. Interpretations of ageing in these terms have been extensively discussed in numerous articles and are recently reviewed in ref.. A somewhat modified version of the droplet scaling model that has been constructed to account for the memory behaviour is extensively discussed in ref. .
The continuous memory recording of the thermal history that is exemplified by the current experimental results does however require a comment on the paradoxical possibility that reconfiguration on different length scales is fully separable, i.e. that reconfiguration of the spin structure on short length scales allows a simultaneous stability of the old configuration on larger length scales. The droplet model prescribes one unique equilibrium spin glass state with time reversal symmetry and that domains of spin glass ordered regions grow unrestrictedly with time at constant temperature. If the temperature is altered, the equilibrium configuration also alters due to chaos, but there is an overlap on short length scales between the equilibrium configurations, the length scale of which rapidly decreases with increasing separation between the two temperatures. The experimental results show that a memory of the spin structure that has developed at high temperatures remains imprinted in the non-equilibrium structure at lower temperatures (but also that it is rapidly erased if the temperature is increased above the original ageing temperature). Such a re-stored spin structure requires that all reconfiguration at lower temperatures must occur only on small length scales and in dispersed regions. The bulk of the numerous droplet excitations of different sizes that do occur may not cause irreversible changes of the spin structure on large length scales, but are to be excited within already equilibrated regions of spin glass order.
How can this rather abstract picture of the dynamic spin structure be related to our current experimental observations? The process that causes the increase of the magnetisation in a zfc magnetic relaxation experiment is the polarisation of spontaneous droplet excitations. The measured quantity, the zfc magnetisation, gives an integrated value of the polarisation of all droplet excitations with relaxation times shorter than the observation time, and the measured relaxation corresponds to droplets with relaxation times of the order of the observation time, $`t`$. In an ac-susceptibility experiment, the in-phase component gives the integrated value of all droplet excitations with relaxation times shorter than the observation time, $`t`$ = 1/$`\omega `$, and the out-of-phase component measures the actual number of droplets of relaxation time equal to 1/$`\omega `$. The non-equilibrium characteristics imply that the distribution of droplet excitations changes with the time spent at constant temperature and that there is an excess of droplet excitations of a size that corresponds to a relaxation time of the order of the wait time. On shorter time scales an equilibrium distribution has been attained, reflected by the equilibrium response always obtained in ac-susceptibility experiments at higher frequencies, and in zfc measurements at long but different wait times by the fact that a similar (equilibrium) response is approached at the shortest observation times. The implication of the cooling rate dependence is that the actual distribution of active droplets is governed by the cooling rate and the wait time at constant temperature, and that this distribution in turn is governed by the underlying spin configuration. The phenomenon that the sample retains a distribution of droplets that is governed by the cooling rate and any previous wait time at the measurement temperature then implies that a closely equivalent spin configuration to the original one is also retained when the temperature is recovered.
## V conclusions
We have shown that non-equilibrium dynamics measured at a specific temperature in spin glasses is primarily governed by the thermal history close to this temperature during the cooling sequence. If the sample has been undercooled, the response is also partly affected by the heating rate towards the measurement temperature. The behaviour may be incorporated in a real space picture of a random spin configuration containing fractal spin glass domains of sizes that increase through spontaneous droplet excitations.
The results emphasise the importance of well controlled experimental procedures when studying non-equilibrium dynamics of spin glasses.
## VI acknowledgments
Financial support from the Swedish Natural Science Research Council (NFR) is acknowledged. Numerous and useful discussions on the memory phenomenon and the non-equilibrium nature of the spin glass phase with T. Jonsson, E. Vincent, J.-P. Bouchaud and J. Hammann are acknowledged.
# EVOLUTIONARY SEQUENCES OF IRROTATIONAL BINARY NEUTRON STARS
## 1 Introduction
Inspiraling neutron star binaries are expected to be among the strongest sources of gravitational radiation that could be detected by the interferometric detectors currently under construction (GEO600, LIGO, TAMA and Virgo). These binary systems are therefore subject to numerous theoretical studies. Among them are (i) Post-Newtonian (PN) analytical treatments (e.g. , , ) and (ii) fully relativistic hydrodynamical treatments, pioneered by the works of Oohara and Nakamura (see e.g. ) and Wilson et al. . The most recent numerical calculations, those of Baumgarte et al. and Marronetti et al. , rely on the approximations of (i) a quasiequilibrium state and (ii) synchronized binaries. Whereas the first approximation is well justified before the innermost stable orbit, the second one does not correspond to physical situations, since it has been shown that the gravitational-radiation driven evolution is too rapid for the viscous forces to synchronize the spin of each neutron star with the orbit as they do for ordinary stellar binaries. Rather, the viscosity is negligible and the fluid velocity circulation (with respect to some inertial frame) is conserved in these systems. Provided that the initial spins are not in the millisecond regime, this means that close binary configurations are better approximated by zero vorticity (i.e. irrotational) states than by synchronized states.
Dynamical calculations by Wilson et al. indicate that the neutron stars may individually collapse into a black hole prior to merger. This unexpected result has been called into question by a number of authors (see Ref. for a summary of all the criticisms and their answers). Recently Flanagan has found an error in the analytical formulation used by Wilson et al. . This error may be responsible for the observed radial instability. As argued by Mathews et al. , one way to settle this crucial point is to perform computations of relativistic irrotational configurations. We have recently performed such computations . They show no compression of the stars, although the central density decreases much less than in the corotating case. In the present report, we give more details about the results presented in Ref. and extend them to the compactification ratio $`M/R=0.17`$ (the results of Ref. have been obtained for a compactification ratio $`M/R=0.14`$).
## 2 Analytical formulation of the problem
### 2.1 Basic assumptions
We have proposed a relativistic formulation for quasiequilibrium irrotational binaries as a generalization of the Newtonian formulation presented in Ref. . The method was based on one aspect of irrotational motion, namely the counter-rotation (as measured in the co-orbiting frame) of the fluid with respect to the orbital motion (see also Ref. ). Since then, Teukolsky and Shibata gave two formulations based on the definition of irrotationality, which implies that the specific enthalpy times the fluid 4-velocity is the gradient of some scalar field (potential flow). The three formulations are equivalent; however the one given by Teukolsky and by Shibata greatly simplifies the problem. Consequently we used it in the present work.
The irrotational hypothesis amounts to saying that the co-momentum density is the gradient of a potential
$$h𝐮=\mathrm{\Psi },$$
(1)
where $`h`$ and $`𝐮`$ are respectively the fluid specific enthalpy and fluid 4-velocity.
Beside the physical assumption (1), two simplifying approximations are introduced:
1. The spacetime is supposed to have a helicoidal symmetry , which means that the orbits are exactly circular and that the gravitational radiation content of spacetime is neglected.
2. The spatial 3-metric is assumed to be conformally flat (Wilson-Mathews approximation ), so that the full spacetime metric reads
$$ds^2=-(N^2-B_iB^i)dt^2-2B_idtdx^i+A^2f_{ij}dx^idx^j,$$
(2)
where $`f_{ij}`$ is the flat space metric.
The Killing vector corresponding to hypothesis (i) is denoted by $`𝐥`$ (cf. Fig. 1).
Approximation (i) is physically well motivated, at least up to the innermost stable orbit. Regarding the second approximation, it is to be noticed that (i) the 1-Post Newtonian (PN) approximation to Einstein equations fits this form, (ii) it is exact for arbitrary relativistic spherical configurations and (iii) it is very accurate for axisymmetric rotating neutron stars . An interesting discussion about some justifications of the Wilson-Mathews approximation may be found in . A stronger justification may be obtained by considering the 2.5-PN metric obtained by Blanchet et al. for point mass binaries. Using Eq. (7.6) of Ref. , the deviation from a conformally flat 3-metric (which occurs at the 2-PN order) can be computed at the location of one point mass (i.e. where it is expected to be maximal), the 3-metric $`h_{ij}`$ being written as
$$h_{ij}=A^2f_{ij}+h_{ij}^{2\mathrm{P}\mathrm{N}}+h_{ij}^{2.5\mathrm{PN}}.$$
(3)
The result is shown in Table 1 for two stars of $`1.4M_{}`$ each. It appears that at a separation as close as $`30\mathrm{km}`$, where the two stars certainly almost touch each other, the relative deviation from a conformally flat 3-metric is below $`2\%`$.
### 2.2 Equations to be solved
We refer to Ref. for the presentation of the partial differential equations (PDEs) which result from the assumptions presented above. In the present report, let us simply mention a point which seems to have been missed by various authors: the existence of the first integral of motion
$$h\,𝐥\cdot 𝐮=\mathrm{const}.$$
(4)
does not result solely from the existence of the helicoidal Killing vector $`𝐥`$. Indeed, Eq. (4) is not merely the relativistic generalization of the Bernoulli theorem which states that $`h\,𝐥\cdot 𝐮`$ is constant along each single streamline and which results directly from the existence of a Killing vector without any hypothesis on the flow. In order for the constant to be uniform over the streamlines (i.e. to be a constant over spacetime), as in Eq. (4), some additional property of the flow must be required. One well known possibility is rigidity (i.e. $`𝐮`$ collinear to $`𝐥`$) . The alternative property with which we are concerned here is irrotationality \[Eq. (1)\]. This was first pointed out by Carter .
## 3 Numerical procedure
### 3.1 Description
The numerical procedure to solve the PDE system is based on the multi-domain spectral method presented in Ref. . We simply recall here some basic features of the method:
* Spherical-type coordinates $`(\xi ,\theta ^{},\phi ^{})`$ centered on each star are used: this ensures a much better description of the stars than with Cartesian coordinates.
* These spherical-type coordinates are surface-fitted coordinates: i.e. the surface of each star lies at a constant value of the coordinate $`\xi `$ thanks to a mapping $`(\xi ,\theta ^{},\phi ^{})\mapsto (r,\theta ,\phi )`$ (see for details about this mapping). This ensures that the spectral method applied in each domain is free from any Gibbs phenomenon.
* The outermost domain extends up to spatial infinity, thanks to the mapping $`1/r=(1-\xi )/(2R_0)`$. This enables us to impose exact boundary conditions on the elliptic equations for the metric coefficients: spatial infinity is the only location where the metric is known in advance (Minkowski metric).
* Thanks to the use of a spectral method in each domain, the numerical error is evanescent, i.e. it decreases exponentially with the number of coefficients (or equivalently grid points) used in the spectral expansions, as shown in Fig. 2 (a generic illustration of this behaviour is sketched below).
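The following toy demonstration of evanescent accuracy is generic and not taken from the LORENE code: a smooth function is interpolated at Chebyshev nodes and the maximum error is seen to fall off exponentially with the number of coefficients $`N`$.

```python
# Toy illustration of spectral (evanescent) accuracy with Chebyshev interpolation.
import numpy as np

f = lambda x: 1.0 / (1.0 + 4.0 * x**2)        # smooth test function on [-1, 1]
x_fine = np.linspace(-1.0, 1.0, 2001)

for N in (5, 9, 17, 33):
    nodes = np.cos(np.pi * (np.arange(N) + 0.5) / N)   # Chebyshev-Gauss nodes
    coef = np.polynomial.chebyshev.chebfit(nodes, f(nodes), N - 1)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(x_fine, coef) - f(x_fine)))
    print(f"N = {N:2d}   max error = {err:.2e}")
```

Each doubling of $`N`$ roughly squares the error, the behaviour shown in Fig. 2, whereas a finite-difference scheme would only gain a fixed power of $`N`$.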
The PDE system to be solved being non-linear, we use an iterative procedure. This procedure is sketched in Fig. 3. The iteration is stopped when the relative difference in the enthalpy field between two successive steps goes below a certain threshold, typically $`10^{-7}`$ (cf. Fig. 4).
The numerical code is written in the LORENE language , which is a C++ based language for numerical relativity. A typical run makes use of $`N_r=33`$, $`N_\theta =21`$, and $`N_\phi =20`$ coefficients (= number of collocation points, which may be seen as the number of grid points) in each of the domains of the multi-domain spectral method. 8 domains are used: 3 for each star and 2 domains centered on the intersection between the rotation axis and the orbital plane. The corresponding memory requirement is 155 MB. A computation involves $`250`$ steps (cf. Fig. 4), which takes 9 h 30 min on one CPU of a SGI Origin200 computer (MIPS R10000 processor at 180 MHz). Note that due to the rather small memory requirement, runs can be performed in parallel on a multi-processor platform. This is especially useful for computing sequences of configurations.
### 3.2 Tests passed by the code
In the Newtonian and incompressible limit, the analytical solution constituted by a Roche ellipsoid is recovered with a relative accuracy of $`10^{-9}`$, as shown in Fig. 2. For compressible and irrotational Newtonian binaries, no analytical solution is available, but the virial theorem can be used to get an estimate of the numerical error: we found that the virial theorem is satisfied with a relative accuracy of $`10^{-7}`$. A detailed comparison with the irrotational Newtonian configurations recently computed by Uryu & Eriguchi will be presented elsewhere. Regarding the relativistic case, we have checked our procedure for solving the gravitational field equations by comparison with the results of Baumgarte et al. , which deal with corotating binaries \[our code can compute corotating configurations by setting to zero the velocity field of the fluid with respect to the co-orbiting observer\]. We have performed the comparison with the configuration $`z_A=0.20`$ in Table V of Ref. . We used the same equation of state (EOS) (polytrope with $`\gamma =2`$), same value of the separation $`r_C`$ and same value of the maximum density parameter $`q^{\mathrm{max}}`$. We found a relative discrepancy of $`1.1\%`$ on $`\mathrm{\Omega }`$, $`1.4\%`$ on $`M_0`$, $`1.1\%`$ on $`M`$, $`2.3\%`$ on $`J`$, $`0.8\%`$ on $`z_A`$, $`0.4\%`$ on $`r_A`$ and $`0.07\%`$ on $`r_B`$ (using the notations of Ref. ).
## 4 Numerical results
### 4.1 Equation of state and compactification ratio
As a model for nuclear matter, we consider a polytropic equation of state (EOS) with an adiabatic index $`\gamma =2`$:
$$p=\kappa (m_\mathrm{B}n)^\gamma ,e=m_\mathrm{B}n+p/(\gamma 1),$$
(5)
where $`p`$, $`n`$, $`e`$ are respectively the fluid pressure, baryon density and proper energy density, and $`m_\mathrm{B}=1.66\times 10^{-27}\mathrm{kg}`$, $`\kappa =1.8\times 10^{-2}\mathrm{J}\mathrm{m}^3\mathrm{kg}^{-2}`$. This EOS is the same as that used by Mathews, Marronetti and Wilson (Sect. IV A of Ref ).
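For reference, the EOS of Eq. (5) is trivial to evaluate; the sketch below works in SI units, restores the factor $`c^2`$ in the rest-mass term (the text uses units with $`c=1`$), and the sample density is an assumed value of the order of nuclear density.

```python
# Polytropic EOS of Eq. (5) in SI units (gamma = 2, constants as quoted above).
GAMMA = 2.0
KAPPA = 1.8e-2        # J m^3 kg^-2
M_B = 1.66e-27        # kg
C = 2.998e8           # m/s; the text sets c = 1, restored here for SI output

def pressure(n):
    """Pressure [Pa] for baryon number density n [m^-3]."""
    return KAPPA * (M_B * n)**GAMMA

def energy_density(n):
    """Proper energy density [J m^-3]: rest-mass term + internal energy."""
    return M_B * n * C**2 + pressure(n) / (GAMMA - 1.0)

n = 1.0e44   # assumed density, of the order of nuclear density [m^-3]
print(f"p = {pressure(n):.3e} Pa,  e = {energy_density(n):.3e} J/m^3")
```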
The mass – central density curve of static configurations in isolation constructed upon this EOS is represented in Fig. 5. The three points on this curve corresponds to three configurations studied by our group and that of Marronetti, Mathews and Wilson:
* The configuration of baryon mass $`M_\mathrm{B}=1.625M_{}`$ and compactification ratio $`M/R=0.14`$ is that considered in the dynamical study of Mathews, Marronetti and Wilson and in the quasiequilibrium study of our group (Ref. and this paper).
* The configuration of baryon mass $`M_\mathrm{B}=1.85M_{}`$ and compactification ratio $`M/R=0.17`$ is studied in the present paper.
* The configuration of baryon mass $`M_\mathrm{B}=1.95M_{}`$ and compactification ratio $`M/R=0.19`$ has been studied recently by Marronetti, Mathews and Wilson <sup>2</sup><sup>2</sup>2Marronetti et al. use a different value for the EOS constant $`\kappa `$: their baryon mass $`M_\mathrm{B}=1.55M_{}`$ must be rescaled to our value of $`\kappa `$ in order to get $`M_\mathrm{B}=1.95M_{}`$. Apart from this scaling, this is the same configuration, i.e. it has the same compactification ratio $`M/R=0.19`$ and its relative distance with respect to the maximum mass configuration, as shown in Fig. 5, is the same. by means of a new code for quasiequilibrium irrotational configurations.
### 4.2 Irrotational sequence with $`M/R=0.14`$
In this section, we give some details about the irrotational sequence $`M_\mathrm{B}=1.625M_{}`$ presented in Ref. . This sequence starts at the coordinate separation $`d=110\mathrm{km}`$ (orbital frequency $`f=82\mathrm{Hz}`$), where the two stars are almost spherical, and ends at $`d=41\mathrm{km}`$ ($`f=332\mathrm{Hz}`$), where a cusp appears on the surface of the stars, which means that the stars start to break. The shape of the surface at this last point is shown in Fig. 6.
The velocity field with respect to the co-orbiting observer, as defined by Eq. (52) of Ref. , is shown in Fig. 7. Note that this field is tangent to the surface of the star, as it must be.
The lapse function $`N`$ (cf. Eq. 2) is represented in Fig. 8. The coordinate system $`(x,y,z)`$ is centered on the intersection between the rotation axis and the orbital plane. The $`x`$ axis joins the two stellar centers, and the orbital is the $`z=0`$ plane. The value of $`N`$ at the center of each star is $`N_\mathrm{c}=0.64`$.
The conformal factor $`A^2`$ of the 3-metric \[cf. Eq. (2)\] is represented in Fig. 9. Its value at the center of each star is $`A_\mathrm{c}^2=2.20`$.
The shift vector of nonrotating coordinates, $`𝐍`$, (defined by Eq. (9) of Ref. ) is shown in Fig. 10. Its maximum value is $`0.10c`$.
The $`K_{xy}`$ component of the extrinsic curvature tensor of the hypersurfaces $`t=\mathrm{const}`$ is shown in Fig. 11. We chose to represent the $`K_{xy}`$ component because it is the only component of $`K_{ij}`$ for which none of the sections in the three planes $`x=0`$, $`y=0`$ and $`z=0`$ vanishes.
The variation of the central density along the $`M_\mathrm{B}=1.625M_{}`$ sequence is shown in Fig. 12. We have also computed a corotating sequence for comparison (dashed line in Fig. 12). In the corotating case, the central density decreases quite substantially as the two stars approach each other. This is in agreement with the results of Baumgarte et al. . In the irrotational case (solid line in Fig. 12), the central density remains rather constant (with a slight increase, below $`0.1\%`$) before decreasing. We can thus conclude that no tendency to individual gravitational collapse is found in this case. This contrasts with results of dynamical calculations by Mathews et al. which show a central density increase of $`14\%`$ for the same compactification ratio $`M/R=0.14`$.
### 4.3 Irrotational sequence with $`M/R=0.17`$
In order to investigate how the above result depends on the compactness of the stars, we have computed an irrotational sequence with a baryon mass $`M_\mathrm{B}=1.85M_{}`$, which corresponds to a compactification ratio $`M/R=0.17`$ for stars at infinite separation (second heavy dot in Fig. 5). The result is compared with that of $`M/R=0.14`$ in Fig. 13. A very small density increase (at most $`0.3\%`$) is observed before the decrease. Note that this density increase remains within the expected error ($`2\%`$, cf. Sect. 2.1) induced by the conformally flat approximation for the 3-metric, so that it cannot be asserted that this effect would remain in a complete calculation.
Marronetti, Mathews and Wilson have recently computed quasiequilibrium irrotational configurations by means of a new code. They use a higher compactification ratio, $`M/R=0.19`$ (third heavy dot in Fig. 5). They found a central density increase as the orbit shrinks much more pronounced than the one we found for the compactification ratio $`M/R=0.17`$: $`3.5\%`$ against $`0.3\%`$. We will present irrotational sequences with the compactification ratio $`M/R=0.19`$ and compare with the results by Marronetti et al. in a future article.
## 5 Innermost stable circular orbit
An important parameter for the detection of a gravitational wave signal from coalescing binaries is the location of the innermost stable circular orbit (ISCO), if any. In Table 2, we recall what is known about the existence of an ISCO for extended fluid bodies. The case of two point masses is discussed in detail in Ref. .
For Newtonian binaries, it has been shown that the ISCO is located at a minimum of the total energy, as well as of the total angular momentum, along a sequence at constant baryon number and constant circulation (irrotational sequences are such sequences). The instability found in this way is dynamical. For corotating sequences, it is secular instead . This turning point method also holds for locating the ISCO in relativistic corotating binaries . For relativistic irrotational configurations, no rigorous theorem has been proven yet about the localization of the ISCO by a turning point method. All that can be said is that no turning point is present in the irrotational sequences considered in Sect. 4: Fig. 14 shows the variation as the orbit shrinks of the ADM mass of the spatial hypersurface $`t=\mathrm{const}`$ (which is a measure of the total energy, or equivalently of the binding energy, of the system) for the $`M_\mathrm{B}=1.625M_{}`$ sequence. Clearly, the ADM mass decreases monotonically, without showing any turning point. Figure 15 shows the evolution of the total angular momentum along the same sequence. Again there is no turning point. The same behaviour holds for the $`M_\mathrm{B}=1.85M_{}`$ sequence, as shown in Figs. 16 and 17.
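The turning-point diagnostic itself is elementary: along a computed sequence, an ISCO candidate corresponds to an interior minimum of the ADM mass (or of the angular momentum). The sketch below applies this test to a short, made-up sequence; a `None` result corresponds to the monotonic behaviour seen in Figs. 14-17.

```python
# Turning-point test along a quasiequilibrium sequence (illustrative numbers).
import numpy as np

def turning_point(f_orb, M_adm):
    i = np.argmin(M_adm)
    return None if i in (0, len(M_adm) - 1) else f_orb[i]   # None: no interior minimum

f_orb = np.array([100., 150., 200., 250., 300.])        # orbital frequency [Hz]
M_adm = np.array([2.998, 2.996, 2.994, 2.992, 2.990])   # ADM mass [M_sun], made up
print(turning_point(f_orb, M_adm))   # -> None: the mass decreases monotonically
```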
## 6 Conclusion and perspectives
We have computed evolutionary sequences of quasiequilibrium irrotational configurations of binary stars in general relativity. The evolution of the central density of each star has been monitored as the orbit shrinks. For a compactification ratio $`M/R=0.14`$, the central density remains rather constant (with a slight increase, below $`0.1\%`$) before decreasing. For a higher compactification ratio $`M/R=0.17`$ (i.e. stars closer to the maximum mass configuration), a very small density increase (at most $`0.3\%`$) is observed before the density decrease. It can thus be concluded that no substantial compression of the stars is found, which means that no tendency to individually collapse to a black hole prior to merger is observed. Moreover, the observed density increase remains within the expected error ($`2\%`$, cf. Sect. 2.1) induced by the conformally flat approximation for the 3-metric, so that it cannot be asserted that this effect would remain in a complete calculation.
No turning point has been found in the binding energy or angular momentum along evolutionary sequences, which may indicate that these systems do not have any innermost stable circular orbit (ISCO).
All these results have been obtained for a polytropic EOS with the adiabatic index $`\gamma =2`$. We plan to extend them to other values of $`\gamma `$ in the near future. We also plan to abandon the conformally flat approximation for the 3-metric and use the full Einstein equations, keeping the helicoidal symmetry in a first stage.
## Acknowledgments
We would like to thank Jean-Pierre Lasota for his constant support and Brandon Carter for illuminating discussions. The numerical calculations have been performed on computers purchased thanks to a special grant from the SPM and SDU departments of CNRS.
no-problem/9904/hep-ph9904292.html
# Energy-Scale Dependence of the Lepton-Flavor-Mixing Matrix
## Abstract
We study the energy-scale dependence of the lepton-flavor-mixing matrix in the minimal supersymmetric standard model with the effective dimension-five operators which give the masses of neutrinos. We analyze the renormalization group equations of the $`\kappa _{ij}`$, the coefficients of these effective operators, under the approximation of neglecting corrections of $`O(\kappa ^2)`$. As a consequence, we find that all phases in $`\kappa `$ do not depend on the energy-scale, and that only $`n_g-1`$ ($`n_g`$: generation number) real independent parameters in the lepton-flavor-mixing matrix depend on the energy-scale.
hep-ph/9904292
OHSTPY-HEP-T-99-010
DPNU-99-11
KEK-TH-620
PACS:12.15.Ff, 14.60.Pq, 23.40.Bw
Recent neutrino experiments suggest the existence of flavor mixing in the lepton sector. Studies of the lepton-flavor-mixing matrix, which is called the Maki-Nakagawa-Sakata (MNS) matrix , open a new era of flavor physics. We can predict lepton-flavor-violating interactions such as $`\mu \to e\gamma `$ from the MNS matrix. When we consider that the lepton-flavor-violating interactions are related to new physics at a high energy-scale, it is important to analyze the energy-scale dependence of the MNS matrix in order to obtain information about the new physics.
In this letter we study the energy-scale dependence of the MNS matrix in the minimal supersymmetric standard model (MSSM) with effective dimension-five operators which give masses of neutrinos. In this model the superpotential of lepton-Higgs interaction terms is
$$𝒲=y_{ij}^{\mathrm{e}}H_\mathrm{d}L_i\overline{E}_j-\frac{1}{2}\kappa _{ij}(H_\mathrm{u}L_i)(H_\mathrm{u}L_j).$$
(1)
Here the indices $`i,j`$ $`(=1,\mathrm{},n_g)`$ stand for the generation number. $`L_i`$ and $`\overline{E}_i`$ are the $`i`$-th generation lepton doublet and right-handed charged lepton, and $`H_{\mathrm{u},\mathrm{d}}`$ are the Higgs doublets which give Dirac masses to the up- and down-type fermions, respectively. The coefficient matrix $`\kappa `$ of the effective dimension-five operators, which is an $`n_g\times n_g`$ complex and symmetric matrix, gives the neutrino Majorana mass matrix. When we take the diagonal basis of the charged lepton Yukawa coupling $`y^\mathrm{e}`$, $`\kappa `$ is diagonalized by the MNS matrix. All the elements of $`\kappa `$ are naturally small if they are generated effectively by new physics at a high energy-scale $`M`$. One of the most reliable candidates is the so-called see-saw mechanism , where the small $`\kappa `$ of $`O(1/M)`$ is generated by the heavy right-handed neutrinos with Majorana masses of $`O(M)`$.
Now let us consider the renormalization of $`\kappa `$. The wave function renormalization of $`L_i`$ is given by $`L_i^{(0)}=Z_{ij}^{1/2}L_j`$ and that of the Higgs doublet is given by $`H_\mathrm{u}^{(0)}=Z_H^{1/2}H_\mathrm{u}`$. Then the renormalization of $`\kappa _{ij}`$ is written as
$$\kappa _{ij}^{(0)}=\left(Z_{ik}^{-1/2}Z_{jl}^{-1/2}Z_H^{-1}\right)\kappa _{kl}.$$
(2)
Here we adopt the approximation of neglecting loop corrections of $`O(\kappa ^2)`$, which are sufficiently small given the tiny neutrino masses. It corresponds to taking only the Feynman diagrams in which $`\kappa `$ appears once. If $`\kappa `$ is induced by the see-saw mechanism, this approximation is consistent with neglecting terms of $`O(1/M^2)`$ in the see-saw mechanism. Under this approximation $`Z_{ik}`$ becomes diagonal, $`Z_{ik}=Z_i\delta _{ik}+O(\kappa ^2)`$, because there are no lepton-flavor-mixing terms except $`\kappa `$. Therefore eq.(2) simplifies to
$$\kappa _{ij}^{(0)}=\left(Z_i^{-1/2}Z_j^{-1/2}Z_H^{-1}\right)\kappa _{ij}.$$
(3)
Equation (3) leads to the renormalization group equation (RGE) of $`\kappa _{ij}`$,
$$\frac{d}{dt}\kappa _{ij}=\left(\gamma _i+\gamma _j+2\gamma _H\right)\kappa _{ij},$$
(4)
where $`t`$ is the scaling parameter which is related to the renormalization scale $`\mu `$ as $`t=\mathrm{ln}\mu `$. $`\gamma _i`$ and $`\gamma _H`$ are defined as
$$\gamma _i=\frac{1}{2}\frac{d}{dt}\mathrm{ln}Z_i,\text{ }\gamma _H=\frac{1}{2}\frac{d}{dt}\mathrm{ln}Z_H.$$
(5)
By using eq.(4), we can obtain the following two consequences:
(1) All phases in $`\kappa `$ do not depend on the energy-scale. By using the notation $`\kappa _{ij}\equiv |\kappa _{ij}|e^{i\phi _{ij}}`$, eq.(4) is rewritten as
$`{\displaystyle \frac{d}{dt}}\mathrm{ln}\kappa _{ij}`$ $`=`$ $`{\displaystyle \frac{d}{dt}}\mathrm{ln}|\kappa _{ij}|+i{\displaystyle \frac{d}{dt}}\phi _{ij}`$ (6)
$`=`$ $`\left(\gamma _i+\gamma _j+2\gamma _H\right).`$
Since $`\gamma _i`$, $`\gamma _j`$ and $`\gamma _H`$ are real, eq.(6) means
$$\frac{d}{dt}\phi _{ij}=0.$$
(7)
Therefore we can conclude that the arguments of all the elements of $`\kappa `$ are not changed by the RG evolution. We should notice that this result does not necessarily mean that the phases of the MNS matrix are independent of the energy-scale, as we will see later.
(2) Only $`n_g-1`$ real independent parameters in the MNS matrix depend on the energy-scale. The combinations of $`\kappa `$’s elements,
$$c_{ij}^2=\frac{\kappa _{ij}^2}{\kappa _{ii}\kappa _{jj}},$$
(8)
are the energy-scale independent quantities because
$`{\displaystyle \frac{d}{dt}}\mathrm{ln}\left({\displaystyle \frac{\kappa _{ij}^2}{\kappa _{ii}\kappa _{jj}}}\right)`$ $`=`$ $`2{\displaystyle \frac{d}{dt}}\mathrm{ln}\kappa _{ij}-{\displaystyle \frac{d}{dt}}\mathrm{ln}\kappa _{ii}-{\displaystyle \frac{d}{dt}}\mathrm{ln}\kappa _{jj}`$ (9)
$`=`$ $`2\left(\gamma _i+\gamma _j+2\gamma _H\right)-\left(2\gamma _i+2\gamma _H\right)-\left(2\gamma _j+2\gamma _H\right)`$
$`=`$ $`0.`$
Since the off-diagonal elements $`\kappa _{ij}`$ $`(i\ne j)`$ are given by
$$\kappa _{ij}=c_{ij}\sqrt{\kappa _{ii}\kappa _{jj}}\text{ }(i\ne j),$$
(10)
their energy-scale dependence can be completely determined by the diagonal elements $`\kappa _{ii}`$. The diagonal elements $`\kappa _{ii}`$ can always be taken to be real by rephasing the neutrino fields, and they never become complex under the RGE in eq.(4). The RGE of $`\kappa `$ can thus be written in terms of only $`n_g`$ equations. The diagonal form of $`y^\mathrm{e}`$ is maintained at any energy-scale because there is no lepton-flavor-violating correction to the RGE of $`y^\mathrm{e}`$ up to $`O(\kappa )`$. Since the overall factor of the $`\kappa `$’s elements has nothing to do with the MNS matrix, the energy-scale dependence of the MNS matrix can be determined by $`n_g-1`$ real independent parameters.
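As a cross-check of this consequence, the following sketch (Python, with arbitrary toy values for the anomalous dimensions, which we take scale-independent for simplicity) evolves a random complex symmetric $`\kappa `$ with the exact solution of eq.(4) and verifies that the combinations $`c_{ij}^2`$ of eq.(8) do not change:

```python
import numpy as np

ng = 3
rng = np.random.default_rng(0)

# Toy constant anomalous dimensions (illustrative values only)
gamma = rng.uniform(-0.02, 0.02, ng)   # gamma_i
gamma_H = 0.01
t = 30.0                               # ln(mu/mu_0), running over many decades

def c2(kappa):
    """Combinations c_ij^2 = kappa_ij^2 / (kappa_ii kappa_jj) of eq. (8)."""
    d = np.diag(kappa)
    return kappa**2 / np.outer(d, d)

# Random complex symmetric kappa at the initial scale
k = rng.normal(size=(ng, ng)) + 1j * rng.normal(size=(ng, ng))
kappa0 = 0.5 * (k + k.T)

# Exact solution of eq. (4) for scale-independent gamma:
# kappa_ij(t) = kappa_ij(0) * exp[(gamma_i + gamma_j + 2 gamma_H) t]
kappa_t = kappa0 * np.exp((gamma[:, None] + gamma[None, :] + 2 * gamma_H) * t)

# Prints ~1e-16: the c_ij^2 are RG invariants, as eq. (9) states
print(np.abs(c2(kappa_t) - c2(kappa0)).max())
```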
Let us show an example for the case of three generations. The matrix $`\kappa `$ is parameterized as
$$\kappa =\kappa _{33}\left(\begin{array}{ccc}r_1& c_{12}\sqrt{r_1r_2}& c_{13}\sqrt{r_1}\\ c_{12}\sqrt{r_1r_2}& r_2& c_{23}\sqrt{r_2}\\ c_{13}\sqrt{r_1}& c_{23}\sqrt{r_2}& 1\end{array}\right),$$
(11)
where
$$r_i\equiv \frac{\kappa _{ii}}{\kappa _{33}},\text{ }(i=1,2).$$
(12)
The complex parameters $`c_{ij}`$ are energy-scale independent. There are nine degrees of freedom, which are three complex constants $`c_{ij}(i\ne j)`$ and three energy-scale dependent real parameters $`r_1`$, $`r_2`$ and $`\kappa _{33}`$. Since $`y^\mathrm{e}`$ has diagonal form at any energy-scale and $`\kappa _{33}`$ has nothing to do with the MNS matrix, only two parameters, $`r_1`$ and $`r_2`$, are energy-scale dependent parameters in the MNS matrix.
Here we roughly estimate the energy-scale dependence of the $`r_i`$ in eq.(12) by using the one-loop RGEs in the MSSM . We can easily show that the RGE of $`r_i`$ is given by
$$\frac{d}{dt}\mathrm{ln}r_i=\frac{d}{dt}\mathrm{ln}\frac{\kappa _{ii}}{\kappa _{33}}=-\frac{1}{8\pi ^2}\left(y_\tau ^2-y_i^2\right),\text{ }(i=1,2),$$
(13)
where $`y_\tau `$ and $`y_i`$ are the Yukawa couplings of $`\tau `$ and the $`i`$-th generation charged lepton, respectively. Neglecting the energy-scale dependence of $`y_\tau `$, the magnitude of the right-hand side of eq.(13) is roughly given by $`y_\tau ^2(m_Z)/8\pi ^2=O(10^{-6})/\mathrm{cos}^2\beta `$, where $`m_Z`$ stands for the weak scale and $`\mathrm{tan}\beta =\langle H_\mathrm{u}\rangle /\langle H_\mathrm{d}\rangle `$. This means that the $`r_i`$ are not sensitive to the energy-scale. We should stress here that this fact does not necessarily result in a tiny energy dependence of the MNS matrix. We can explicitly see significant RGE corrections to the MNS matrix in some situations . In ref. , a drastic change of the MNS matrix by the RGE was obtained when the neutrinos of the second and third generations have masses of $`O(\text{eV})`$ with $`\delta m_{23}^2\simeq 3\times 10^{-3}\mathrm{eV}^2`$ . This situation corresponds to the case of $`r_1\simeq |c_{12}|\simeq 0`$, $`r_2\simeq 1`$, and $`|c_{23}|\simeq 1`$ in eq.(11), where a slight change of $`r_2`$ induces the maximal mixing of the second and third generations in the MNS matrix.
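A back-of-the-envelope evaluation of this estimate (Python; the inputs $`m_\tau =1.777`$ GeV, $`v=246`$ GeV and the high scale $`M=2\times 10^{16}`$ GeV are illustrative assumptions, and the running of $`y_\tau `$ itself is neglected):

```python
import math

# Rough size of the running of r_i from eq. (13).
# Assumed MSSM relation: y_tau = sqrt(2) m_tau / (v cos(beta)).
m_tau, v, m_Z, M = 1.777, 246.0, 91.19, 2.0e16

for tan_beta in (2.0, 10.0, 50.0):
    cos_beta = 1.0 / math.sqrt(1.0 + tan_beta**2)
    y_tau = math.sqrt(2.0) * m_tau / (v * cos_beta)
    # |Delta ln r_i| ~ (y_tau^2 / 8 pi^2) * ln(M / m_Z), dropping y_i << y_tau
    delta = y_tau**2 / (8.0 * math.pi**2) * math.log(M / m_Z)
    print(f"tan(beta) = {tan_beta:5.1f}:  |Delta ln r_i| ~ {delta:.1e}")
```

For moderate $`\mathrm{tan}\beta `$ the change of $`r_i`$ is at the $`10^{-4}`$-$`10^{-3}`$ level, growing to the percent level only at large $`\mathrm{tan}\beta `$, in line with the estimate above.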
In this letter we studied the energy-scale dependence of the MNS matrix in the MSSM with the effective dimension-five operator. The coefficient $`\kappa `$ of the dimension-five operator is small enough to neglect corrections of $`O(\kappa ^2)`$ in the RGEs. Under this approximation we found that all phases in $`\kappa `$ do not depend on the energy-scale, and that only $`n_g-1`$ real independent parameters in the MNS matrix depend on the energy-scale. Our results imply that there must be $`(n_g-1)^2=n_g(n_g-1)-(n_g-1)`$ scale-independent relations among the MNS matrix elements, because the MNS matrix generally has $`n_g(n_g-1)`$ real independent parameters when neutrinos are Majorana fermions. These results can be helpful for lepton flavor physics and for the search for new physics at high energy-scales.
Finally we discuss the possibility of obtaining the same consequences in other models with the effective dimension-five operators. Supersymmetry (SUSY) is needed to obtain our consequences. Moreover, it was necessary that the model have no lepton-flavor-violating terms and no non-renormalizable terms except the effective dimension-five operators. Thus we can obtain the same consequences in SUSY models which have these properties, e.g. the next-to-minimal supersymmetric standard model. On the other hand we cannot directly apply our analysis to the standard model or non-SUSY models, because non-zero vertex renormalization generates an additional term on the right-hand side of eq.(4), which is generally not real. Nevertheless, we can explicitly show that this term is real at the one-loop level . Therefore we can obtain the same consequences in the standard model at the one-loop level.
We would like to thank K. Hagiwara, M. Harada, J. Hisano and A.I. Sanda for useful discussions and comments. NH would like to thank S. Raby and K. Tobe for useful discussions, and is partially supported by DOE grant DOE/ER/01545-753. NO is supported by the JSPS Research Fellowships for Young Scientists, No. 2996.
|
no-problem/9904/chao-dyn9904022.html
|
ar5iv
|
text
|
Physica Scripta, Vol. 59, No. 4, pp. 251–256, 1999
# Kepler map
B. Kaulakys and G. Vilutis
Institute of Theoretical Physics and Astronomy, A. Goštauto 12, 2600 Vilnius, Lithuania
Abstract
We present a consistent derivation of mapping equations of motion for the one-dimensional classical hydrogenic atom in a monochromatic field of any frequency. We analyze this map in the cases of high and low relative frequency of the field and the transition from regular to chaotic behavior. We show that the map at aphelion is suitable for investigation of the transition to chaotic behavior also in a low frequency field, and even for adiabatic ionization, when the strength of the external field is comparable with the Coulomb field. Moreover, the approximate analytical criterion (taking into account the electron's energy increase under the influence of the field) yields a threshold field strength quite close to the numerical results. We find that the transition from adiabatic to chaotic ionization takes place when the ratio of the field frequency to the electron Kepler frequency approximately equals 0.1. For the dynamics and ionization in a very low frequency field the Kepler map can be converted to a differential equation and solved analytically. The threshold field of the adiabatic ionization obtained from the map is only 1.5% lower than the exact field strength of static field ionization.
PACS number(s): 05.45+b, 32.80.Rm, 42.50.Hz
—————————————
Electronic address: kaulakys@itpa.lt
Electronic address: gedas@itpa.lt
1. INTRODUCTION
For almost three decades the highly excited hydrogen atom in a microwave field has remained one of the simplest and most suitable real systems for experimental and theoretical investigation of classical and quantum chaos in nonlinear systems strongly driven by external fields (see reviews \[1–5\] and references therein). For theoretical analysis of the transition to stochastic behavior and of the ionization processes of atoms in microwave fields, approximate mapping equations of motion, rather than differential equations, are most convenient.
Such a two-dimensional map (for the scaled energy of the one-dimensional atom in a monochromatic field and for the relative phase of the field), later called the Kepler map , was obtained in Refs. after an integration of the equations of motion over one period of the intrinsic motion of the electron, between two subsequent passings of the aphelion, the largest distance from the nucleus. This map greatly facilitates numerical investigation of the dynamics and the ionization process and even allows an analytical estimation of the threshold field strengths for the onset of chaos, of the diffusion coefficient of the electron in energy space and of other characteristics of the system \[3–10\]. Moreover, this map is closely related to the expressions for quasiclassical dipole matrix elements of high atomic states .
The Kepler map for relatively high frequencies of the field (recovered in and sometimes written for the number of absorbed photons) is relatively simple and is widely used for analysis of the classical dynamics as well as, after quantization (the quantum Kepler map), for the quantum dynamics \[3–5,9,10,13\]. In the region of relatively low frequency of the field this map is considerably more complex, and the threshold field strength for the transition to chaotic behavior and ionization is considerably higher than in medium and high frequency fields.
Since the derivation of the mapping equations of motion is based on classical perturbation theory, and the electron's energy change due to interaction with an external field during the period of motion in the Coulomb field depends on the initial condition, i.e., on the integration interval , a complementary analysis of the applicability of the ’standard’ Kepler map is necessary.
It should be noted that in the derivation of the Kepler map one integrates not over the period of the external electromagnetic field but over the period of the electron's intrinsic motion in the Coulomb field . This results in some contradictions and difficulties. First, the period of integration and the obtained map depend on the energy of the electron, which complicates the quantization problem of the Kepler map \[3–7,13–17\]. Second, the energy change of the electron during the period of the classical intrinsic motion due to interaction with the microwave field depends on the starting conditions, i.e., on the integration interval . This makes it possible to obtain another map, the Kepler map at perihelion, derived by integration between two subsequent passings of the perihelion, or even a three-dimensional map for the two halves of the intrinsic period . These maps reveal the resonance structure of the chaotic dynamics at low frequencies more strongly .
Moreover, Nauenberg has presented a so-called canonical Kepler map which agrees with the results of Refs. if taken at perihelion, but not with the widely used Kepler map at aphelion. The expressions for the variables of this canonical Kepler map are, however, rather complicated and not explicit, and therefore they are not convenient for analytical or even numerical analysis. For this reason, corroboration of the standard Kepler map's applicability over a large range of parameters of the problem is significant as well. Note additionally that in the derivation and analysis of the maps in Ref. some inaccuracies and misprints have appeared.
All these aspects indicate the need for additional analysis of the mapping equations of motion for a highly excited classical hydrogenic atom in a monochromatic field. In addition, the transition from the adiabatic to the chaotic ionization mechanism in a low frequency field is of great interest (see, e.g., and references therein).
In this paper we present a consistent derivation of the mapping equations of motion for a one-dimensional classical atom in a monochromatic field of any frequency and analyze the transition from regular to chaotic behavior and the ionization process. From this analysis we conclude that the map at aphelion and an approximate analytical criterion for the onset of chaos are suitable also in the low frequency region, even for adiabatic ionization, where the strength of the external field is comparable with the Coulomb field. Moreover, in this case the map can be transformed into a differential equation and solved analytically.
2. MAPPING EQUATIONS OF MOTION
The direct way of coupling the electromagnetic field to the electron Hamiltonian is through the $`𝐀\cdot 𝐏`$ interaction, where $`𝐀`$ is the vector potential of the field and $`𝐏`$ is the generalized momentum of the electron. The Hamiltonian of the hydrogen atom in a linearly polarized field $`F\mathrm{cos}(\omega t+\vartheta )`$, with $`F`$, $`\omega `$ and $`\vartheta `$ being the field strength amplitude, field frequency and phase, respectively, in atomic units is
$$H=\frac{1}{2}\left(𝐏-\frac{𝐅}{\omega }\mathrm{sin}(\omega t+\vartheta )\right)^2-\frac{1}{r}.$$
(1)
The electron energy change due to interaction with the external field follows from the Hamiltonian equations of motion
$$\dot{E}=-\dot{𝐫}\cdot 𝐅\mathrm{cos}(\omega t+\vartheta ).$$
(2)
Note that Eq. (2) is exact if $`\dot{𝐫}`$ is obtained from the equations of motion including the influence of the electromagnetic field. Using the parametric equations of motion in the Coulomb field we can calculate the change of the electron's energy in the classical perturbation theory approximation \[6–8,11\].
Measuring the time of the field action in field periods one can introduce the scale transformation where the scaled field strength and the scaled energy are $`F_s=F/\omega ^{4/3}`$ and $`E_s=E/\omega ^{2/3}`$, respectively. However, it is convenient \[6–8,17\] to introduce the positive scaled energy $`\epsilon =-2E_s`$ and the relative field strength $`F_0=Fn_0^4=F_s/\epsilon _0^2`$, with $`n_0`$ being the initial effective principal quantum number, $`n_0=\left(-2E_0\right)^{-1/2}`$. The threshold values of the relative field strength $`F_0`$ for the onset of ionization depend more weakly upon the initial effective principal quantum number $`n_0`$ and the relative frequency of the field $`s_0=\omega n_0^3`$ than those of the scaled field strength $`F_s`$.
We restrict our subsequent consideration to the one-dimensional model, which corresponds to states very extended along the electric field direction. Such a classical one-dimensional model was first considered in Refs. for the description of surface-state electrons, while a justification of the use of one-dimensional-like states for periodically driven hydrogen atoms appeared in . Since then the one-dimensional model has been widely used in theoretical analysis \[2–11,13–17\].
For the derivation of a map describing the motion of an electron in the superposition of the Coulomb and microwave fields we should integrate the dynamical equations over some characteristic period of the system. The peculiarity of the system under consideration is that we are able to obtain explicit expressions for the change of the electron energy only for the halves of the period and for the complete period, $`T=2\pi /(-2E)^{3/2}=2\pi /\omega \epsilon ^{3/2}`$, of the intrinsic electron motion in the Coulomb field \[6–8,11\], but not for the period of the external field.
Integration of Eq. (2) for motion between two subsequent passages at the aphelion (where $`\dot{x}=0`$ and there is no energy exchange between the field and the atom) results in the change of the electron’s energy
$$\mathrm{\Delta }E=-\left(\pi F/E\right)𝐉_s^{\prime }(s)\mathrm{sin}\vartheta .$$
Here $`s\equiv \epsilon ^{-3/2}=\omega /(-2E)^{3/2}=\omega /\mathrm{\Omega }`$ is the relative frequency of the field, i.e., the ratio of the field frequency $`\omega `$ to the Kepler orbital frequency $`\mathrm{\Omega }=\left(-2E\right)^{3/2}`$, and $`𝐉_s^{\prime }(z)`$ is the derivative of the Anger function with respect to the argument $`z`$. The derivative of the Anger function
$$𝐉_s^{\prime }(s)=\frac{1}{\pi }\int _0^\pi \mathrm{sin}\left[s\left(x-\mathrm{sin}x\right)\right]\mathrm{sin}x\mathrm{d}x$$
is a very simple analytical function which can be approximated quite well by a combination of the expansion in powers of $`s`$
$$𝐉_s^{\prime }(s)=\frac{1+\left(5/24\right)s^2}{2\pi \left(1-s^2\right)}\mathrm{sin}\pi s,\text{ }s\le 1$$
and of the asymptotic form
$$𝐉_s^{\prime }(s)=\frac{b}{s^{2/3}}-\frac{a}{5s^{4/3}}-\frac{\mathrm{sin}\pi s}{4\pi s^2},\text{ }s\ge 1$$
where
$$a=\frac{2^{1/3}}{3^{2/3}\mathrm{\Gamma }\left(2/3\right)}\simeq 0.4473,\text{ }b=\frac{2^{2/3}}{3^{1/3}\mathrm{\Gamma }\left(1/3\right)}\simeq 0.41085.$$
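These approximations are easy to verify numerically. The following sketch (assuming the standard scipy quadrature routine is available) evaluates the derivative of the Anger function directly from its integral representation and compares it with the two approximate forms:

```python
import math
from scipy.integrate import quad

def J_prime(s):
    """J'_s(s) = (1/pi) * int_0^pi sin[s(x - sin x)] sin x dx."""
    val, _ = quad(lambda x: math.sin(s * (x - math.sin(x))) * math.sin(x), 0.0, math.pi)
    return val / math.pi

a = 2**(1/3) / (3**(2/3) * math.gamma(2/3))   # ~0.4473
b = 2**(2/3) / (3**(1/3) * math.gamma(1/3))   # ~0.41085

def approx_small(s):   # expansion for s <= 1
    return (1 + (5/24) * s**2) / (2 * math.pi * (1 - s**2)) * math.sin(math.pi * s)

def approx_large(s):   # asymptotic form for s >= 1
    return b / s**(2/3) - a / (5 * s**(4/3)) - math.sin(math.pi * s) / (4 * math.pi * s**2)

for s in (0.1, 0.5, 0.9, 2.0, 5.0):
    approx = approx_small(s) if s < 1 else approx_large(s)
    print(f"s = {s:4.1f}:  quadrature {J_prime(s):+.5f}   approximation {approx:+.5f}")
```

Both forms agree with the quadrature to a few per mille over their respective ranges.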
Introducing the scaled energy $`\epsilon =-2E/\omega ^{2/3}`$ and the relative field strength $`F_0=F/4E_0^2`$ we have
$$\mathrm{\Delta }\epsilon =-\pi F_0\epsilon _0^2h\left(\epsilon \right)\mathrm{sin}\vartheta $$
$`(3)`$
where $`\epsilon _0=-2E_0/\omega ^{2/3}`$ and
$$h\left(\epsilon \right)=\frac{4}{\epsilon }𝐉_s^{\prime }(s).$$
$`(4)`$
The change of the field phase $`\vartheta `$ after the electron motion period in the Coulomb field is
$$\mathrm{\Delta }\vartheta =\omega T=2\pi /\epsilon ^{3/2}.$$
$`(5)`$
Defining the scaled energy and the phase before ($`\epsilon _j,\vartheta _j`$) and after ($`\epsilon _{j+1},\vartheta _{j+1}`$) one period of the intrinsic motion of the electron, we can introduce a generating function $`G(\epsilon _{j+1},\vartheta _j)`$ of the map, determined by
$$\epsilon _j=\partial G/\partial \vartheta _j,\vartheta _{j+1}=\partial G/\partial \epsilon _{j+1}.$$
$`(6)`$
In agreement with Eqs. (3) and (5) the generating function is (see also for analogy)
$$G(\epsilon _{j+1},\vartheta _j)=\epsilon _{j+1}\vartheta _j-4\pi \epsilon _{j+1}^{-1/2}-\pi F_0\epsilon _0^2h\left(\epsilon _{j+1}\right)\mathrm{cos}\vartheta _j$$
$`(7)`$
and according to Eqs. (6) it generates the map
$$\{\begin{array}{cc}\epsilon _{j+1}=\hfill & \epsilon _j-\pi F_0\epsilon _0^2h\left(\epsilon _{j+1}\right)\mathrm{sin}\vartheta _j,\hfill \\ \vartheta _{j+1}=\hfill & \vartheta _j+2\pi /\epsilon _{j+1}^{3/2}-\pi F_0\epsilon _0^2\eta \left(\epsilon _{j+1}\right)\mathrm{cos}\vartheta _j.\hfill \end{array}$$
$`(8)`$
Here
$$\eta \left(\epsilon \right)=\frac{\mathrm{d}h\left(\epsilon \right)}{\mathrm{d}\epsilon }.$$
$`(9)`$
Note that the map (8) can also be derived without introducing the generating function \[6–8,11\], by using instead the requirement that the map (8) be area-preserving,
$$\frac{\partial (\epsilon _{j+1},\vartheta _{j+1})}{\partial (\epsilon _j,\vartheta _j)}=1.$$
$`(10)`$
It should also be noted that the map (14)-(19) in Ref. has positive signs of the terms on the right-hand side of Eq. (8) containing the field amplitude $`F_0`$, i.e., it is derived for the reverse orientation of the atom with respect to the field orientation. Also note that a function $`\mathrm{sin}\vartheta _k`$ was inadvertently omitted from the right-hand side of Eq. (15) in .
The map (8) is the general mapping form of the classical equations of motion for the one-dimensional hydrogen atom in a microwave field derived in the classical perturbation theory approximation. Some analytical and numerical analysis of this map has already been done in Refs. \[3,6–8\]. Here we analyze different special cases of the map (8).
3. HIGH FREQUENCY LIMIT
For relatively high frequencies of the field, $`s>1`$ ($`s\gtrsim 2`$), theoretical analysis of the classical dynamics of the one-dimensional hydrogen atom in a microwave field is relatively simple. This is because the energy changes of the electron, $`\left(E_{j+1}-E_j\right)`$ and $`\left(\epsilon _{j+1}-\epsilon _j\right)`$, do not depend on the initial energy $`\epsilon _j`$ or on the relative frequency when $`s\gg 1`$. Indeed, using the asymptotic form of the derivative of the Anger function, $`𝐉_s^{\prime }(s)=b/s^{2/3}`$, we have $`h\left(\epsilon _{j+1}\right)=4b=const.`$, $`\eta \left(\epsilon _{j+1}\right)=0`$ and, consequently, the following map
$$\{\begin{array}{cc}\epsilon _{j+1}=\hfill & \epsilon _j-4\pi bF_0\epsilon _0^2\mathrm{sin}\vartheta _j,\hfill \\ \vartheta _{j+1}=\hfill & \vartheta _j+2\pi /\epsilon _{j+1}^{3/2}.\hfill \end{array}$$
$`(11)`$
Note that the scaled classical dynamics according to maps (8) and (11) depends only on a single combination of the field parameters, i.e., on the scaled field strength $`F_s=F_0\epsilon _0^2=F/\omega ^{4/3}`$ (see also ).
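As a simple numerical illustration (a sketch only; the trajectory count and iteration depth are arbitrary choices), the map (11) can be iterated directly to see the transition from bound quasiperiodic motion to ionization as the field strength grows; the threshold found this way can be compared with the analytical estimate derived below:

```python
import math
import random

def survives_map11(eps0, F0, theta0, n_steps=2000):
    """Iterate the high-frequency Kepler map (11); returns False if the
    scaled energy epsilon reaches zero (E >= 0, i.e. ionization)."""
    b = 2**(2/3) / (3**(1/3) * math.gamma(1/3))          # ~0.41085
    eps, theta = eps0, theta0
    for _ in range(n_steps):
        eps = eps - 4 * math.pi * b * F0 * eps0**2 * math.sin(theta)
        if eps <= 0:
            return False
        theta = (theta + 2 * math.pi / eps**1.5) % (2 * math.pi)
    return True

random.seed(0)
s0 = 2.0
eps0 = s0**(-2/3)                                        # since s0 = eps0**(-3/2)
for F0 in (0.005, 0.010, 0.015, 0.020, 0.030):
    bound = sum(survives_map11(eps0, F0, random.uniform(0, 2 * math.pi))
                for _ in range(100))
    print(f"F0 = {F0:.3f}: {bound}/100 trajectories still bound after 2000 periods")
```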
By the standard linearization procedure, $`\epsilon _j=\epsilon _0+\mathrm{\Delta }\epsilon _j`$, in the vicinity of an integer relative frequency (resonance), $`s_0=\epsilon _0^{-3/2}=m`$ with $`m`$ integer, the map (11) can be transformed to the standard (Chirikov) map
$$I_{j+1}=I_j+K\mathrm{sin}\vartheta _j,$$
$$\vartheta _{j+1}=\vartheta _j+I_{j+1}.$$
$`(12)`$
Here $`I_j=-3\pi \mathrm{\Delta }\epsilon _j/\epsilon _0^{5/2}`$ and $`K=12\pi ^2bF_0/\sqrt{\epsilon _0}`$.
From the condition of the onset of classical chaos for the standard map, $`K\ge K_c\simeq 0.9716`$ \[1,23–25\], we can, therefore, estimate the threshold field strength for chaotization of the dynamics and ionization of the atom in the high frequency field
$$F_0^c=K_c/\left(12\pi ^2bs_0^{1/3}\right)\simeq 0.02s_0^{-1/3}.$$
$`(13)`$
Sometimes one writes the map (11) for the variable $`N=-1/(2n^2\omega )`$, whose change gives the number of absorbed photons ,
$$N_{j+1}=N_j+2\pi b\left(F/\omega ^{5/3}\right)\mathrm{sin}\vartheta _j,$$
$$\vartheta _{j+1}=\vartheta _j+2\pi \omega \left(2\omega N_{j+1}\right)^{3/2}.$$
$`(14)`$
We see that for such variables the dynamics of the system depends on two parameters: on the ratio $`F_q=F/\omega ^{5/3}`$ (in Refs. $`F_q`$ was called the quantum scaled field strength) and on the field frequency $`\omega `$. Map (14) is, therefore, not the most convenient one for analysis of the classical dynamics.
In general there are, however, no essential difficulties in the theoretical analysis of the classical nonlinear dynamics of the highly excited hydrogen atom in a microwave field of relative frequency $`s_0=\omega n_0^3\gtrsim 0.5`$ when the field strength is lower than or comparable with the threshold field strength for the onset of classical chaos, i.e., if the microwave field is considerably weaker than the characteristic Coulomb field. In such a case, the energy change of the electron during the period of intrinsic motion is relatively small and application of the classical perturbation theory for the derivation of the Kepler map (8) is quite accurate. Further analysis of the transition to chaotic behavior and of the ionization process can be based on the map (8) and for $`s_0\simeq 0.3÷1.5`$ results in the ”impressive agreement” between measured ionization curves and those obtained from the map (8) \[5–10\]. Even the analytical estimation of the threshold field strengths based on this map is rather accurate \[6–8\].
Considerably more complicated is the analysis of the transition to stochastic motion and of the ionization process in the region of low relative frequencies, $`s_0\lesssim 0.3`$ \[6–8,11\].
4. LOW FREQUENCY LIMIT
For low relative frequencies of the microwave field, $`s\ll 1`$, the map (8) can be simplified as well. Using the expansion of the function $`𝐉_s^{\prime }(s)`$ in powers of $`s`$, $`𝐉_s^{\prime }(s)\simeq s/2`$ for $`s\ll 1`$, we have according to Eqs. (4) and (9)
$$h\left(\epsilon _{j+1}\right)=2/\epsilon _{j+1}^{5/2},$$
$$\eta \left(\epsilon _{j+1}\right)=-5/\epsilon _{j+1}^{7/2}.$$
$`(15)`$
Consequently map (8) transforms to the form
$$\{\begin{array}{cc}\epsilon _{j+1}=\hfill & \epsilon _j-2\pi F_0\left(\epsilon _0^2/\epsilon _{j+1}^{5/2}\right)\mathrm{sin}\vartheta _j,\hfill \\ \vartheta _{j+1}=\hfill & \vartheta _j+2\pi /\epsilon _{j+1}^{3/2}+5\pi F_0\left(\epsilon _0^2/\epsilon _{j+1}^{7/2}\right)\mathrm{cos}\vartheta _j.\hfill \end{array}$$
$`(16)`$
This map is slightly more complicated than map (11) for high frequencies; however, it can easily be analyzed numerically as well as analytically. Note, first of all, that the energy change of the electron during the period of intrinsic motion (after one step of iteration), $`\left|\epsilon _{j+1}-\epsilon _j\right|`$, is considerably smaller than the binding energy of the electron $`\epsilon _j\simeq \epsilon _0`$ if the field strength is lower than or comparable with the threshold field strength, i.e., $`2\pi F_0\left(\epsilon _0^2/\epsilon _{j+1}^{5/2}\right)\simeq 2\pi F_0\epsilon _0^{-1/2}\ll \epsilon _0`$, or $`2\pi F_0s_0\ll 1`$ if $`F_0\lesssim F_0^{st}\simeq 0.13`$ and $`s_0\ll 1`$. This indicates that the map (16) is probably suitable for description of the dynamics even in the low frequency region where the field is relatively strong.
In Figs 1 and 2 the results of the numerical analysis of maps (8) and (16) in the low frequency region, $`s\ll 1`$, are presented. We see that the threshold ionization field calculated from the maps approaches the static field ionization threshold $`F_0^{st}\simeq 0.13`$ when $`s_0\to 0`$. This supports the presumption that the map (8) is valid even in the low frequency limit where the strength of the driving field is of the order of the Coulomb field.
4.1. Adiabatic ionization
For low frequencies, $`2\pi s=2\pi /\epsilon ^{3/2}\ll 1`$, according to the second equation of map (16) the change of the angle $`\vartheta `$ after one step of iteration is small. As noted above, the energy change is also relatively small. Therefore, we can transform the difference equations (16) to differential equations of the form
$$\frac{\mathrm{d}\epsilon }{\mathrm{d}j}=-\frac{2\pi \epsilon _0^2F_0}{\epsilon ^{5/2}}\mathrm{sin}\vartheta ,$$
$$\frac{\mathrm{d}\vartheta }{\mathrm{d}j}=\frac{2\pi }{\epsilon ^{3/2}}+\frac{5\pi \epsilon _0^2F_0}{\epsilon ^{7/2}}\mathrm{cos}\vartheta .$$
$`(17)`$
Dividing the second equation of the system (17) by the first one we obtain a single differential equation
$$\frac{\mathrm{d}\left(\mathrm{cos}\vartheta \right)}{\mathrm{d}\epsilon }=\frac{\epsilon }{\epsilon _0^2F_0}+\frac{5\mathrm{cos}\vartheta }{2\epsilon }.$$
$`(18)`$
The analytical solution of Eq. (18) with the initial condition $`\epsilon =\epsilon _0`$ when $`\vartheta =\vartheta _0`$ is
$$\mathrm{cos}\vartheta =z^5\mathrm{cos}\vartheta _0-2z^4\left(1-z\right)/F_0,z=\sqrt{\epsilon /\epsilon _0}.$$
$`(19)`$
Eq. (19) describes the motion of the system in the $`\epsilon `$ and $`\vartheta `$ variables, i.e., it represents the functional interdependence between the two dynamical variables.
Let us analyze Eqs. (18) and (19) in more detail. For relatively low values of $`F_0`$, i.e., for $`F_0<\frac{2}{5}z^4=\frac{2}{5}\left(\frac{\epsilon }{\epsilon _0}\right)^2`$, the right-hand side of Eq. (18) is positive for all phases $`\vartheta `$. Therefore, $`\mathrm{cos}\vartheta `$ and $`\epsilon `$ increase and decrease together and, according to Eq. (16), there is motion over the whole interval $`[0,2\pi ]`$ of the angle $`\vartheta .`$ For $`F_0>\frac{2}{5}z^4`$, however, the increase of the angle $`\vartheta `$ in the interval $`0÷\pi `$ turns into a decrease of $`\vartheta `$ at $`\vartheta \simeq \pi `$. This results in a fast decrease of $`\epsilon `$ and leads to the ionization process (see also Fig. 1). It is easy to understand from an analysis of Eq. (19) that the minimal value of $`F_0`$ for such a motion (resulting in ionization) corresponds to $`\vartheta _0=0`$ and $`\vartheta =\pi `$. This value of $`F_0`$ is very close to the maximal value of $`F_0`$ still allowing motion over the whole interval $`[0,2\pi ]`$ of $`\vartheta `$, i.e., the maximum of the expression
$$F_0=2z^4\left(1-z\right)/\left(1+z^5\right).$$
$`(20)`$
This maximum is at $`z=z_0`$, where $`z_0`$ is the solution of the equation $`z^5+5z-4=0`$, namely $`z_0\simeq 0.75193`$. The critical value of the relative field strength, therefore, is $`F_0^c=2z_0^4/5=0.1279`$, which is only 1.5% lower than the adiabatic ionization threshold $`F_0^{st}=2^{10}/\left(3\pi \right)^4=0.1298`$ . According to our numerical analysis, for $`s_0\lesssim 0.05`$ the electron remains bound and the dynamics is regular for $`F_0\le 0.13`$, while ionization takes place for $`F_0\ge 0.131`$ (see also Figs. 1 and 2). These results are also very close to the analytical conclusions. Note that some decrease of the threshold field strength values $`F_0^c`$ with decreasing $`s_0`$ was observed for $`s_0\lesssim 0.1`$. Dynamics and classical ionization at such frequencies are, however, essentially adiabatic.
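These numbers are easily reproduced; a short sketch (plain bisection for the root of z^5 + 5z - 4 = 0, no special libraries):

```python
import math

f = lambda z: z**5 + 5*z - 4
lo, hi = 0.0, 1.0            # f(0) = -4 < 0, f(1) = 2 > 0
for _ in range(60):          # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
z0 = 0.5 * (lo + hi)

F0_c = 2 * z0**4 / 5                     # adiabatic threshold from the map
F0_static = 2**10 / (3 * math.pi)**4     # exact static-field ionization threshold
print(f"z0 = {z0:.5f}")                                  # 0.75193
print(f"F0^c      = {F0_c:.4f}")                         # 0.1279
print(f"F0^static = {F0_static:.4f}")                    # 0.1298
print(f"relative difference = {1 - F0_c/F0_static:.1%}") # ~1.5%
```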
4.2. Chaotic ionization
For higher relative frequencies, $`s_0\gtrsim 0.1`$, the ionization process is due to the chaotic dynamics of the highly excited electron of the hydrogenic atom in a microwave field. There are different criteria for estimating the parameters at which the dynamics of a nonlinear system becomes chaotic. For the analysis of the transition to chaotic behavior of the motion described by maps (8), (11) and (16) the most suitable, in our opinion, is the criterion related to the randomization of the phases
$$K=\mathrm{max}\left|\frac{\delta \vartheta _{j+1}}{\delta \vartheta _j}-1\right|\ge 1.$$
$`(21)`$
Here $`\mathrm{max}`$ means the maximum with respect to the phase $`\vartheta _j`$ and variation of the phase $`\vartheta _{j+1}`$ with respect to the phase $`\vartheta _j`$ means the full variation including dependence of $`\vartheta _{j+1}`$ on $`\vartheta _j`$ through the variable $`\epsilon _{j+1}`$ in Eqs. (8), (11) and (16).
Applying criterion (21) to the general map (8) we obtain the threshold field strength
$$F_0^c=\frac{\epsilon ^{7/2}}{12\pi ^2\epsilon _0^2𝐉_s^{\prime }(s)}.$$
$`(22)`$
If $`\epsilon \simeq \epsilon _0`$, Eq. (22) yields the result
$$F_0^c=\left(12\pi ^2s𝐉_s^{\prime }(s)\right)^{-1}$$
$`(23)`$
which for $`s\gg 1`$ coincides with Eq. (13).
For a more precise evaluation of the critical field strengths we should take into account the change (increase) of the electron's energy due to the influence of the electromagnetic field. For higher relative frequency $`s`$ or lower scaled energy $`\epsilon _j`$ the threshold ionization field is lower. Therefore, if the scaled energy $`\epsilon _j`$ decreases as a result of relatively regular dynamics in a not very strong microwave field, then a lower field strength is sufficient for the transition to chaotic dynamics. For high frequencies such a change of the energy is relatively small. Nevertheless it reveals some resonance structure in the field-atom interaction. In the low frequency limit the energy change is more essential. We now consider it in detail.
As was shown above, the maximal decrease of the scaled energy $`\epsilon _j`$ occurs for the angle $`\vartheta _j\simeq \pi `$ and it can be evaluated from Eq. (20). Taking this into account we have from Eq. (16), according to criterion (21), the expression for the threshold relative field strength
$$F_0^c=\frac{\epsilon ^5}{6\pi ^2\epsilon _0^2}=\frac{z_c^{10}}{6\pi ^2s_0^2}$$
$`(24)`$
where $`z_c`$ is the solution of Eq. (20) with $`F_0=F_0^c`$. Eq. (20) can be solved approximately by expanding $`z_c^{10}`$ in powers of $`F_0^c`$. The result of such an expansion is
$$z_c^{10}\simeq 1-10F_0^c+30\left(F_0^c\right)^2-73.6\left(F_0^c\right)^3$$
$`(25)`$
where the last term on the right-hand side of Eq. (25) is fixed by the requirement of reproducing the exact maximal value $`z_c=z_0=0.75193`$ at the static threshold field strength $`F_0^c=0.1279`$.
For the evaluation of the threshold field for the transition to chaotic behavior in the low frequency field we should thus solve the system of equations (24) and (25). For $`0.09\lesssim s_0\lesssim 0.5`$ expressions (24) and (25) give an ionization threshold field very close to the numerical results (see Fig. 2). For frequencies lower than $`s_0\simeq 0.09`$ ionization is adiabatic, because for such low frequencies the adiabatic ionization threshold field, $`F_0^c=2z_0^4/5=0.1279`$, is lower than the phase randomization field evaluated according to Eqs. (24) and (25). The adiabatic ionization, therefore, occurs in such a case earlier than the chaotization of the dynamics. Note that the numerical results reveal the transition from adiabatic to chaotic ionization at a relative frequency $`s_0\simeq 0.1`$ (scaled energy $`\epsilon _0\simeq 4.3`$) as well.
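In practice the system (24)-(25) is conveniently solved by bisection in F_0^c on the interval [0, 0.1279]; when no root exists there, the adiabatic mechanism sets in first. A sketch:

```python
import math

def chaotic_threshold(s0):
    """Solve eqs (24)-(25) for F0^c at relative frequency s0:
    find F0 with 6 pi^2 s0^2 F0 = z^10(F0), using the expansion (25).
    Returns None when no root exists below the adiabatic value 0.1279
    (ionization is then adiabatic rather than chaotic)."""
    z10 = lambda F0: 1 - 10*F0 + 30*F0**2 - 73.6*F0**3
    g = lambda F0: 6 * math.pi**2 * s0**2 * F0 - z10(F0)
    lo, hi = 0.0, 0.1279
    if g(hi) < 0:
        return None
    for _ in range(60):          # g is monotone on this interval
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

for s0 in (0.05, 0.09, 0.1, 0.2, 0.3, 0.5):
    F0 = chaotic_threshold(s0)
    label = "adiabatic (F0^c = 0.1279)" if F0 is None else f"F0^c ~ {F0:.4f}"
    print(f"s0 = {s0:.2f}:  {label}")
```

The root disappears just below s_0 ~ 0.09, reproducing the transition from the chaotic to the adiabatic ionization mechanism discussed above.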
At higher frequencies ionization is due to chaotic dynamics, while the transition to chaotic behavior can be evaluated from the approximate criterion (21), taking into account the electron's energy change under the influence of the electromagnetic field. For frequencies higher than $`s_0\simeq 0.5`$ we should use a more exact expression than $`𝐉_s^{\prime }(s)\simeq s/2`$ for the derivative of the Anger function, i.e., Eq. (22).
5. CONCLUDING REMARKS
From the analysis given in this study we can conclude that the map at aphelion (8) is suitable for investigation of the regular and stochastic classical dynamics, the transition to chaotic behavior and the ionization of Rydberg atoms in high, medium and low frequency fields, even for adiabatic ionization when the strength of the external field is comparable with the averaged Coulomb field. For such a purpose it is unnecessary to use the map at perihelion , the map for two halves of the intrinsic period or the canonical Kepler map . Moreover, the approximate criterion (21) for the transition to chaotic behavior yields threshold field strengths very close to the numerical results if we take into account the increase of the electron energy under the influence of the electromagnetic field. The transition from adiabatic to chaotic ionization of the classical hydrogenic atom in a monochromatic field takes place at a relative field frequency $`s_0\simeq 0.1`$.
Furthermore, the Kepler map and some generalizations of it (for two- and multi-frequency or some other fields, e.g., a circularly polarized microwave field, for three-dimensional atoms, and other modifications) are, and may become even more, widely used for the analysis of different effects of classical and quantum chaos in driven nonlinear systems . Note also the attempts to derive and use similar maps in astronomy for the analysis of the chaotic dynamics of comets and other astronomical bodies . It turns out, however, that in such a case a generalization of the Kepler map to nonharmonic perturbations and to motion in three-dimensional space is necessary.
ACKNOWLEDGMENT
The research described in this publication was made possible in part by support of the Alexander von Humboldt Foundation.
References
1. Delone, N. B., Krainov, V. P. and Shepelyansky, D. L., Usp. Fiz. Nauk 140, 355 (1983) \[Sov. Phys.-Usp. 26, 551 (1983)\].
2. Casati, G., Chirikov, B. V., Shepelyansky, D. L. and Guarneri, I., Phys. Rep. 154, 77 (1987).
3. Casati, G., Guarneri, I. and Shepelyansky, D. L., IEEE J. Quantum Electron. 24, 1420 (1988).
4. Jensen, R. V., Susskind, S. M. and Sanders, M. M., Phys. Rep. 201 , 1 (1991).
5. Koch, P. M., in ”Chaos and Quantum Chaos”, edited by Heiss, W. (Lecture Notes in Physics Vol. 411, Springer-Verlag, Berlin, 1992), p. 167; Koch, P. M. and van Leeuwen, K. A. H., Phys. Rep. 255, 289 (1995).
6. Gontis, V. and Kaulakys, B., Deposited in VINITI as No.5087-V86 (1986) and Lit. Fiz. Sb. 27, 368 (1987) \[Sov Phys.-Collect. 27, 111 (1987)\].
7. Gontis, V. and Kaulakys, B., J. Phys. B: At. Mol. Opt. Phys. 20 , 5051 (1987).
8. Kaulakys, B. and Vilutis, G., in ”Chaos - The Interplay between Stochastic and Deterministic Behaviour”, edited by Garbaczewski, P., Wolf, M. and Weron, A. (Lecture Notes in Physics Vol. 457, Springer-Verlag, Berlin, 1995), p. 445; chao-dyn/9503011.
9. Moorman, L., Galvez, E. J., Sauer, B. E., Mortazawi-M., A., van Leeuwen, K. A. H., v.Oppen, G. and Koch, P. M., Phys. Rev. Lett. 61, 771 (1988); Galvez, E. J., Sauer, B. E., Moorman, L., Koch, P. M. and Richards, D., Phys. Rev. Lett. 61, 2011 (1988); Koch, P. M., Moorman, L., Sauer, B. E., Galvez, E. J. and van Leeuwen, K. A. H., Phys. Scripta T26, 51 (1989); Haffmans, A., Blümel, R., Koch, P. M. and Sirko, L., Phys. Rev. Lett. 73, 248 (1994).
10. Sirko, L. and Koch, P. M., Appl. Phys. B 60, S195 (1995); Koch, P. M., Physica D 83, 178 (1995).
11. Kaulakys, B., J. Phys. B: At. Mol. Opt. Phys. 24, 571 (1991).
12. Kaulakys, B., J. Phys. B: At. Mol. Opt. Phys. 28, 4963 (1995); physics/9610018.
13. Casati, G., Guarneri, I. and Shepelyansky, D. L., Phys. Rev. A 36 , 3501 (1987).
14. Graham, R., Europhys. Lett. 7, 671 (1988); Leopold, J. G. and Richards, D., J. Phys. B: At. Mol. Opt. Phys. 23, 2911 (1990).
15. Gontis, V. and Kaulakys, B., Lit. Fiz. Sb. 28, 671 (1988) \[Sov. Phys. - Collec. 28 (6), 1 (1988)\]; Gontis V. and Kaulakys, B., Lit. Fiz. Sb. 31, 128 (1991) \[Lithuanian Phys. J. (AllertonPress, Inc.) 31 (2), 75 (1991)\]
16. Kaulakys, B., Gontis, V., Hermann, G. and Scharmann, A., Phys. Lett. A 159, 261 (1991); Kaulakys, B., Acta Phys. Pol. B 23, 313 (1992).
17. Kaulakys, B., Gontis, V. and Vilutis, G., Lith. Phys. J. (Allerton Press, Inc.) 33, 290 (1993); Kaulakys, B. and Vilutis, G., in AIP Conf. Proc. (AIP, New York) 329, 389 (1995); quant-ph/9504007.
18. Nauenberg, M., Europhys. Lett. 13, 611 (1990).
19. Blümel, R., Phys. Rev. A 49, 4787 (1994); Sundaram, B. and Jensen, R. V., Phys. Rev. A 51, 4018 (1995); Koch, P. M., Acta Phys. Pol. A 93, 105 (1998).
20. Landau, L. D. and Lifshitz, E. M., ”Classical Field Theory” (Pergamon, New York, 1975).
21. Jensen, R. V., Phys. Rev. Lett. 49, 1365 (1982); Phys. Rev. A 30, 386 (1984).
22. Shepelyansky, D. L., in Proc. Intern. Conf. on Quantum Chaos, Como, 1983 (Plenum, New York, 1995), p. 187.
23. Lichtenberg, A. J. and Lieberman, M. A., ”Regular and Stochastic Motion” ( Springer-Verlag, New York, 1983 and 1992).
24. Zaslavskii, G. M., ”Stochastic Behavior of Dynamical Systems” (Nauka, Moscow, 1984; Harwood, New York, 1985).
25. Jensen, R. V., Am. Scient. 75, 168 (1987).
26. Howard, J. E., Phys. Lett. A 156, 286 (1991); Kaulakys, B., Grauzhinis, D. and Vilutis, G., Europhys. Lett. 43, 123 (1998); physics/9808048.
27. Casati, G., Guarneri, I. and Mantica, G., Phys. Rev. A 50, 5018 (1994); Buchleitner, A., Delande, D., Zakrzewski, J., Mantenga, R. N., Arndt, M. and Walther, H., Phys. Rev. Lett. 75, 3818 (1995); Wojcik, M., Zakrzewski, J. and Rzazewski, K., Phys. Rev. A 52, 2523 (1995); Sanders, M. M. and Jensen, R. V., Am. J. Phys. 64, 21 and 1013 (1996); Sacha, K. and Zakrzewski, J., Phys. Rev. A 55, 568 (1997).
28. Jensen, R. V., Nature (London) 355, 311 (1992); Sirko, L., Haffmans, A., Bellermann, M. R. W. and Koch, P. M., Europhys. Lett. 33 , 181 (1996); Brenner, N. and Fishman, S., Phys. Rev. Lett. 77, 3763 (1996) and J. Phys. A 29, 7199 (1996); Benvenuto, F., Casati, G. and Shepelyansky, D. L., Phys. Rev. A 55, 1732 (1997); Buchleitner, A. and Delande, D., Phys. Rev. A 55, 1585 (1997).
29. Petrosky, T. Y., Phys. Lett. A 117, 328, (1986); Sagdeev, R. Z. and Zaslavsky, G. M., Nouvo. Cim. B 97, 119 (1987); Chirikov, B. V. and Vecheslavov, V. V., Astron. Astrophys. 221, 146 (1989); Torbett, V. M. and Smoluchowski, R., Nature (London) 345, 49 (1990); Milani, A. and Nobili, A. M., Nature (London) 357, 569 (1992); Chicone, C. and Retzloff, D. G., J. Math. Phys. 37, 3997 (1996).
Captions for the figures
Fig. 1. Trajectories of the map (8) for different initial conditions, $`\epsilon _0,\vartheta _0`$, and different relative field strengths $`F_0`$. The pictures on the left-hand side correspond to regular quasiperiodic motion while those on the right-hand side represent the ionization process for a slightly stronger field. At $`\epsilon _0\simeq 4.3`$, i.e., $`s_0\simeq 0.11`$, a transition from the adiabatic to the chaotic ionization mechanism takes place.
Fig. 2. Relative threshold field strength for the onset of ionization from the numerical analysis of the maps (8) and (16) and according to the approximate criterion (24)-(25).
no-problem/9904/hep-ex9904004.html
# HEAVY QUARK ASYMMETRIES AT LEP
## 1 Motivation for Measuring Quark Asymmetries
The accurate determination of the forward-backward asymmetries, $`A_{\mathrm{FB}}`$, of quarks serves to test the structure of Standard Model (SM) couplings to fermions. They also probe radiative corrections to the SM and consequently allow greater precision when predicting unknown parameters of the model. These are increasingly used to constrain uncertainties on the mass of the Higgs boson.
Global fits to electroweak data assume the SM structure of Z couplings to leptons ($`e,\mu ,\tau `$) and both up ($`u,c`$) and down-type ($`d,s,b`$) quarks. Given recent, highly accurate, lepton measurements from the $`\tau `$ polarisation and purely leptonic forward-backward asymmetries, a similar precision in the quark sector is needed to confirm the internal consistency of the model.
The suite of complementary measurements described here provide such a precision for both up and down-type quark families. Performing these measurements at LEP 1 offers several advantages. The sensitivity of initial state couplings to the effective weak mixing angle, $`\mathrm{sin}^2\theta _\mathrm{w}^{\mathrm{eff}}`$, is compounded by large, measurable asymmetries from quark final states close to the Z. Heavy quark asymmetries in particular are especially favourable, as the flavour and direction of the final state quark can be tagged with greater ease than is the case with lighter quarks.
## 2 Definitions and Experimental Issues
In the SM, the differential cross-section for the process $`e^+e^{-}\to f^+f^{-}`$ can be written as:
$$\frac{1}{\sigma }\frac{\mathrm{d}\sigma ^f}{\mathrm{d}cos\theta }=\frac{3}{8}(1+cos^2\theta )+A_{\mathrm{FB}}^fcos\theta $$
(1)
where $`A_{\mathrm{FB}}^f`$ is defined to be the forward-backward asymmetry for fermion flavour $`f`$. It can be expressed as:
$$A_{\mathrm{FB}}^f=\frac{3}{4}𝒜_e𝒜_f$$
(2)
where $`𝒜_f`$ is the polarisation of the fermion concerned:
$$𝒜_f=\frac{2x}{1+x^2},\text{ }x=1-\frac{2q}{I_3^f}(\mathrm{sin}^2\theta _\mathrm{W}^{\mathrm{eff}}+𝒞_f)$$
(3)
where $`x`$ is the ratio of the vector and axial couplings of the fermion to the Z. This final form separates the terms containing sensitivity to parameters of the SM, such as $`(m_\mathrm{t},m_\mathrm{H})`$ through $`\mathrm{sin}^2\theta _\mathrm{W}^{\mathrm{eff}}`$, from vertex corrections in $`𝒞_f`$. The latter are typically of the order of $`1\%`$ for $`b`$ quarks.
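For orientation, a short sketch evaluating eqs. (2) and (3) with an illustrative value $`\mathrm{sin}^2\theta _\mathrm{W}^{\mathrm{eff}}=0.2315`$ and the vertex corrections $`𝒞_f`$ set to zero (they are at the percent level for $`b`$ quarks) reproduces the quoted $`10\%`$ scale of the $`b`$ asymmetry:

```python
def pol(q, I3, s2w, C=0.0):
    """Fermion polarisation A_f of eq. (3): x = g_V/g_A = 1 - (2q/I3)(s2w + C)."""
    x = 1.0 - (2.0 * q / I3) * (s2w + C)
    return 2.0 * x / (1.0 + x * x)

s2w = 0.2315                              # illustrative sin^2(theta_W^eff)
A_e = pol(-1.0, -0.5, s2w)
A_b = pol(-1.0 / 3.0, -0.5, s2w)          # C_b ~ 1% vertex correction neglected
A_c = pol(+2.0 / 3.0, +0.5, s2w)

print(f"A_e = {A_e:.3f}, A_b = {A_b:.3f}, A_c = {A_c:.3f}")
print(f"A_FB^b = {0.75 * A_e * A_b:.3f}")  # ~0.10
print(f"A_FB^c = {0.75 * A_e * A_c:.3f}")  # ~0.07
```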
For hadronic decays of quarks, the precise direction of the final state fermion is not accessible experimentally, and so the direction of the thrust axis is usually signed according to methods of correlating the charge of the quark with its final decay products. Asymmetries are of the order of $`10\%`$ for $`b`$ and $`c`$ quarks but are diluted by several effects. These are caused primarily by the correlation method mistagging the quark charge, by $`B^0\overline{B^0}`$ mixing, or by cancellations between other quark backgrounds.
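The dilution from charge mistagging can be made explicit with a toy Monte Carlo: for a mistag probability $`\omega `$ the observed asymmetry is reduced by a factor $`(1-2\omega )`$. The sketch below (illustrative numbers only) draws $`cos\theta `$ from eq. (1) by accept-reject, flips the assigned quark direction with probability $`\omega `$, and recovers the diluted asymmetry with a simple counting estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, mistag, n = 0.10, 0.20, 1_000_000

# Draw cos(theta) from eq. (1): (3/8)(1 + c^2) + A_FB * c, via accept-reject
c = rng.uniform(-1, 1, 4 * n)
u = rng.uniform(0, 1, 4 * n)
pdf = 0.375 * (1 + c**2) + A_true * c
c = c[u * 0.85 < pdf][:n]                 # 0.85 bounds the pdf on [-1, 1]

# A mistag flips the assigned quark direction, i.e. the sign of cos(theta)
flip = rng.uniform(0, 1, c.size) < mistag
c_obs = np.where(flip, -c, c)

A_obs = (np.sum(c_obs > 0) - np.sum(c_obs < 0)) / c.size
print(f"counting estimate: {A_obs:.4f}; expected (1-2w)A_FB = {(1 - 2*mistag)*A_true:.4f}")
```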
Consequently, the methods of measurement described here represent different compromises between the rate of charge mistag and the efficiency and purity of the flavour tagging procedure. With the increasing sophistication of analyses, several methods of tagging the charge and flavour of decaying quarks are available. These are applied either singly or in combination.
Minor complications arise when interpreting the results of these analyses, in the form of corrections to the pure electroweak predictions of heavy quark asymmetries. For example, quark mass corrections to the electroweak process are generally small, representing shifts of e.g. 0.05% in the calculation of $`A_{\mathrm{FB}}^b`$, and are well understood theoretically. Larger and more problematic corrections arise from hard gluon emission in the final state. These so-called QCD corrections are mass, flavour and analysis dependent and so their treatment and associated uncertainties must be handled with care.
## 3 Latest Techniques for Measuring $`A_{\mathrm{FB}}^b`$
### 3.1 Semileptonic Decays of Heavy Quarks
The “classical” method of measuring heavy quark asymmetries relies on differences in the momentum and transverse momentum, $`(p,p_\mathrm{T})`$, spectra of leptons arising from semileptonic $`b`$ and $`c`$ quark decays. This method benefits from an unambiguous charge and flavour tag in the case of unmixed $`b`$ and $`c`$ hadrons. It suffers, however, from several disadvantages when used on a more general sample.
The method's reliance on a pair of correlated inputs, such as the lepton $`(p,p_\mathrm{T})`$ alone, leads to a dependence of the method on the precise values of the branching ratios BR($`b\to l`$) and BR($`b\to c\to l`$), the semileptonic decay modelling and the $`c`$ branching fractions assumed in Monte Carlo simulations. Some uncertainties are constrained by full use of measurements from lower energy experiments, according to the prescription detailed in . However, the extrapolation of such measurements to LEP energies, and into the different environment of a fragmented jet containing a $`b`$-hadron, leaves residual systematic uncertainties. As the purity of the sample, and its charge mistag, are determined by Monte Carlo simulation alone, these residual uncertainties are correlated between the 4 LEP experiments.
The relatively small value of the $`b\to l`$ branching fraction means that an overall $`b`$ tagging efficiency of the order of $`13\%`$ is typically achieved, with reasonable purities of e.g. $`80\%`$ in the case of . Effects of $`B^0\overline{B^0}`$ mixing, “cascade” $`b`$ decays ($`b\to c\to l`$), backgrounds from $`c`$ decays ($`c\to l`$) and other light quark sources, including detector misidentification, all serve to dilute the otherwise excellent mistag rate.
The systematic impact of such effects can be severely reduced by fitting simultaneously for the time-integrated mixing parameter, $`\overline{\chi }`$<sup>1</sup><sup>1</sup>1$`\overline{\chi }`$ is defined to be the probability that a $`B`$ meson has oscillated to a $`\overline{B}`$ meson by the time of its decay., together with the $`b`$ and $`c`$ asymmetries in a “global” fit. Otherwise both $`\overline{\chi }`$ and $`A_{\mathrm{FB}}^c`$ are inputs to the determination of $`A_{\mathrm{FB}}^b`$. Their values and uncertainties are either fixed by experiment or, in the case of $`A_{\mathrm{FB}}^c`$, set to their SM expectation, with the dependence on $`A_{\mathrm{FB}}^b`$ used as input to subsequent electroweak (EW) fits. Uncertainties can be further minimised by adding input information which discriminates between $`b`$ and $`c`$ events. In such analyses, information coming from lifetime tags, using silicon vertex detectors (VDET's), and from event shapes amplifies the discrimination from the semileptonic $`(p,p_\mathrm{T})`$ spectra.
The LEP experiments make use of such additional inputs, and extra fit quantities, to varying degrees as is summarised in Table 1.
Analyses which make use of the classical semileptonic $`(p,p_\mathrm{T})`$ spectra, and complement them with lifetime tags in this way, are the most powerful, statistically and with greater systematic control. For example, the OPAL measurement uses lepton $`(p,p_\mathrm{T})`$ and event-shape information as inputs to a neural net (NN) $`b`$ tagging algorithm. In addition, it utilises a largely orthogonal $`c`$ tag, based on lifetime information of jets in the event, combined with the impact parameter significance and detector identification criteria of the lepton. The output of these two neural nets is shown in Figure 1
where the strong separation between sources of leptons in hadronic events is clearly evident. The separation between $`b`$ and $`c`$ lepton sources, and the more limited distinction between those and other background sources, enables both a precise determination of $`A_{\mathrm{FB}}^b`$ and $`\overline{\chi }`$ and the most accurate measurement of $`A_{\mathrm{FB}}^c`$ from the same sample of events. The net gain of such a method is an approximate $`25\%`$ improvement in the statistical sensitivity of $`A_{\mathrm{FB}}^b`$ and a $`25\%`$ improvement in that of $`A_{\mathrm{FB}}^c`$. A summary of the current results for $`A_{\mathrm{FB}}^b`$ from semileptonic measurements at LEP is given in Table 2.
### 3.2 Lifetime Tagging and Jetcharge Measurements
An alternative, complementary technique to measure $`A_{\mathrm{FB}}^b`$ is based on lifetime information from silicon vertex detectors, combined with a fully inclusive charge correlation method referred to as the “jetcharge” technique. This method was initially pioneered using samples of untagged hadronic events containing all types of quark flavours accessible at these energies. As a consequence of the low semileptonic branching ratios, such inclusive measurements are almost entirely uncorrelated with semileptonic measurements and so can either be combined with them or used as a cross-check of the consistency of measurements between different methods.
The jetcharge method is based upon the correlation between the charges of the leading particles in a jet and that of the parent quark. A hemisphere-based jetcharge estimator is formed using a sum over particle charges, $`q`$, weighted by their momentum, $`\vec{p}`$:
$$Q_\mathrm{F}=\frac{\sum _i^{\vec{p}_i\cdot \vec{T}>0}\left(\vec{p}_i\cdot \vec{T}\right)^\kappa q_i}{\sum _i^{\vec{p}_i\cdot \vec{T}>0}\left(\vec{p}_i\cdot \vec{T}\right)^\kappa },$$
(4)
and analogously for $`Q_\mathrm{B}`$. The $`\kappa `$ parameter is used to optimise the measurement sensitivity. The charge flow between the hemispheres, $`Q_{\mathrm{FB}}=Q_\mathrm{F}-Q_\mathrm{B}`$, is then used to sign the direction of the thrust axis. Currently all LEP collaborations use this method, and the above formalism.
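A minimal implementation of the estimator (4) on a toy event (hypothetical track momenta and charges; the actual analyses tune $`\kappa `$ and calibrate the charge mistag on data):

```python
import numpy as np

def jet_charges(p, q, thrust, kappa=1.0):
    """Hemisphere jetcharges of eq. (4): momentum-weighted track charges,
    split by the sign of the projection p_i . T."""
    proj = p @ thrust
    fwd, bwd = proj > 0, proj < 0
    QF = np.sum(proj[fwd]**kappa * q[fwd]) / np.sum(proj[fwd]**kappa)
    QB = np.sum(np.abs(proj[bwd])**kappa * q[bwd]) / np.sum(np.abs(proj[bwd])**kappa)
    return QF, QB

# Toy event: a few tracks roughly along +/- the thrust axis (GeV)
thrust = np.array([0.0, 0.0, 1.0])
p = np.array([[0.5, 0.2, 12.0], [-0.1, 0.4, 7.5], [0.3, -0.2, -9.0], [0.1, 0.1, -3.0]])
q = np.array([-1.0, +1.0, +1.0, -1.0])

QF, QB = jet_charges(p, q, thrust, kappa=1.0)
print(f"Q_F = {QF:+.3f}, Q_B = {QB:+.3f}, charge flow Q_FB = {QF - QB:+.3f}")
# The sign of Q_FB is used to sign the thrust-axis direction event by event.
```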
The method benefits from many of the systematic studies performed in untagged samples, especially to understand the degree of charge correlation between hemispheres in background events and their light quark parents. The recent $`A_{\mathrm{FB}}^b`$ analysis carried out by DELPHI also illustrates this method. In addition to jetcharge information, OPAL makes uses of a weighted vertex charge method. This quantity is a weighted sum of the charges of tracks in a jet which contains a tagged secondary vertex, ie :
$$q_{\mathrm{vtx}}=\sum _{\mathrm{tracks}\text{ }i}\omega _iq_i$$
(5)
where $`q_i`$ is the charge of each track $`i`$, and $`\omega _i`$ is related to the probability that the track comes from the secondary vertex relative to the probability that it came from the primary. The latter probabilities are determined using impact parameter, momentum and multiplicity information. An estimate of the accuracy of the $`q_{\mathrm{vtx}}`$ charge estimator is derived from its variance.
Selecting hemispheres with $`q_{\mathrm{vtx}}>1.4\times \sigma _\mathrm{q}+0.2`$ largely removes neutral $`B^0`$ mesons, and those with poorly measured vertex charges. This also leads to a severe reduction in the size of the event sample, leaving only $`\sim 13,000`$ events out of a total untagged input of roughly 4 million hadronic events. Hence, the contribution of the vertex charge measurement, when combined with the jetcharge determination of $`A_{\mathrm{FB}}^b`$, is relatively small.
More significant improvements in statistical precision and control of systematic uncertainties can be obtained from the variety of new techniques summarised in Table 3.
Experiments make use of these techniques to varying degrees, the most significant of which is the improvement in statistical sensitivity gained by fitting the asymmetry as a function of angle. With increased statistical precision comes the need for improved systematic control, the most important element being the extraction of the charge mistag factor for $`b`$ quarks from data. ALEPH, DELPHI and OPAL now perform this extraction, while ALEPH also extracts it as a function of the polar angle of the thrust axis. This takes into account particle losses close to the edge of the detector acceptance.
The output distributions from the ALEPH measurement are shown in Figure 2.
The asymmetry measurement is made separately in each bin of polar angle, $`\kappa `$ and $`b`$ sample purity before being combined. Points of interest in such new analyses include the increased statistical power arising from the bins at low angles, even those where the thrust axis lies outside the VDET acceptance; the large value of the asymmetry in these regions compensates for the low tagging efficiencies. The increase in sample $`b`$ purity at large $`cos\theta `$ for high $`b`$ purity samples is due to the loss of tracks at the edge of the VDET acceptance. This affects $`b`$ events the least, as more tracks with large $`p_\mathrm{T}`$ with respect to the thrust axis continue to tag the event. This is true to a much lesser extent for the lighter quark flavours.
Measurements of the $`b`$ asymmetry using the lifetime and jetcharge method are also summarised in Table 2. It is interesting to note that such measurements, whilst providing similar sensitivity to $`A_{\mathrm{FB}}^b`$ as semileptonic $`b`$ decays, do not yet provide the possibility of analogous measurements of $`A_{\mathrm{FB}}^c`$.
## 4 Latest Techniques for Measuring the $`c`$ Asymmetry
In contrast to the incremental progress in the field of $`b`$ asymmetries, measurements of the corresponding quantity in $`c`$ decays have improved dramatically in recent years. Of the 4 LEP collaborations, DELPHI, L3 and OPAL have determined the $`c`$ asymmetry as an output of global semileptonic fits to $`b`$ and $`c`$ decays whereas ALEPH, DELPHI and OPAL have performed the same measurement using fully reconstructed samples of $`D`$ meson decays. As semileptonic measurements are discussed in Section 3.1, only the latter are described here.
The method of exclusively reconstructing $`D`$ decays aims to use as many channels as possible, reconstructing for example the $`D^0`$ through its decay to $`K^-\pi ^+`$. The $`D^0`$ is generally reconstructed by taking all 2 and 4 track combinations, plus a $`\pi ^0`$ candidate, with zero total charge. Those combinations with odd charges are then used to form possible $`D^+`$ candidates. Each experiment reconstructs a different subset out of a total of 9 different decay modes. The dominant channels, however, are the $`D^+\to K^-\pi ^+\pi ^+`$ and $`D^{*+}`$ modes, with all modes offering some statistical power. Similarly, each experiment applies different selection criteria in each mode, depending on the momentum and particle identification resolutions of the detectors.
These differences lead to widely varying efficiencies and signal-to-background ratios. For example, the DELPHI mass difference distributions for 4 of the 8 selected modes are shown in Figure 3.
An important advantage of such measurements is the ability to determine the background asymmetry, $`A_{\mathrm{FB}}^{\mathrm{bkg}}`$, from data, mode-by-mode using information from sidebands.
A major difficulty is encountered when trying to correct the substantial fraction of events due to $`B^0\to D`$ decays for $`B^0\overline{B^0}`$ mixing. Each $`D`$ mode is corrected using an “effective” $`\overline{\chi }`$ depending on the expected fractions of $`B_\mathrm{d}^0`$ and $`B_\mathrm{s}^0`$ decays contributing to the mode concerned. These effective factors are determined using Monte Carlo simulation and so give rise to systematic uncertainties. The observed asymmetry, $`A_{\mathrm{FB}}^{\mathrm{obs}}`$, is then found using:
$$A_{\mathrm{FB}}^{\mathrm{obs}}=f_{\mathrm{sig}}f_cA_{\mathrm{FB}}^c+f_{\mathrm{sig}}(1-f_c)A_{\mathrm{FB}}^b+(1-f_{\mathrm{sig}})A_{\mathrm{FB}}^{\mathrm{bkg}}$$
(6)
where $`f_{\mathrm{sig}}`$ is the fraction of signal in the signal+background sample and $`f_c`$ is the fraction of events containing a true $`D`$ meson which are due to $`c`$ quark events. As far as possible, the sample $`c`$ purities, $`f_c`$, are determined from data using lifetime, mass and event-shape information in both hemispheres of events containing a $`D`$ tag.
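Given the purities and the other asymmetries, eq. (6) can be inverted for $`A_{\mathrm{FB}}^c`$; a one-line sketch (the numbers in the commented call are illustrative only, not values from the text):

```python
def afb_c(a_obs, f_sig, f_c, afb_b, afb_bkg):
    """Invert eq. (6): A_obs = f_sig*f_c*A_c + f_sig*(1-f_c)*A_b
                              + (1-f_sig)*A_bkg."""
    return (a_obs - f_sig * (1.0 - f_c) * afb_b
            - (1.0 - f_sig) * afb_bkg) / (f_sig * f_c)

# print(afb_c(a_obs=0.05, f_sig=0.8, f_c=0.7, afb_b=0.10, afb_bkg=0.01))
```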
Each experiment makes use of lifetime information, with DELPHI and OPAL in addition using the $`D`$ momentum and jet-shape information respectively, so as to disentangle the substantial contamination from $`b`$ decays. The DELPHI experiment does so in the context of a simultaneous fit to both $`b`$ and $`c`$ asymmetries. However, besides constraining systematic uncertainties, the precision available on $`A_{\mathrm{FB}}^b`$ is negligible compared to the methods discussed previously.
The $`c`$ asymmetry measurements described here are summarised in Table 4 and indicate that, despite complex systematics, such measurements remain primarily limited by the low efficiencies and purities of the $`D`$ meson reconstruction.
Systematic errors vary widely, with the time-dependence of the background asymmetry remaining merely one of many dominant sources depending on decay mode and experiment.
## 5 Radiative Corrections to Asymmetry Measurements
Several small corrections must be made to the $`b`$ and $`c`$ asymmetries extracted from the analyses described. In the case of $`A_{\mathrm{FB}}^b`$, QED corrections for ISR and FSR are relatively minor, amounting to -0.0041 and -0.00002 respectively. Similarly, corrections for pure $`\gamma `$ exchange and $`\gamma Z`$ interference diagrams give rise to a correction of +0.0003. Such corrections are general in nature and so apply equally to all analyses.
A more difficult set of corrections involves those needed to correct for the presence of hard gluon radiation which can distort the angular distribution of the final state quarks when compared with the pure electroweak process. Estimates of such corrections to heavy quark asymmetries have been computed to first and second order in $`\alpha _\mathrm{s}`$ both numerically and, most recently, analytically in different scenarios for either $`c`$, $`b`$ or massless quarks.
A common procedure for correcting and ascribing systematic uncertainties for the LEP heavy quark asymmetries has been developed. The more recent analytical calculations indicate several discrepancies when compared with the numerical results; these remain to be resolved. Current systematic uncertainties are determined by comparing the effects between first and second order in QCD and by switching between massless quarks and various assumptions for the $`c`$ and $`b`$ quark masses.
Further difficulties arise when considering the application of such corrections to individual analyses. Theoretical calculations are typically based on the direction of the outgoing quark, whereas the analyses described here use the thrust direction. Further, the sensitivity to hard gluon radiation of data containing either a lepton of a given $`(p,p_\mathrm{T})`$, a reconstructed $`D`$ meson or purely inclusive events varies dramatically. Effects of non-perturbative QCD and higher-order effects during hadronisation must also be evaluated. The latter render QCD corrections both detector and analysis dependent, e.g. event-shape selections imply an implicit dependence on the strength of gluon emission. The correction to be applied to a given analysis is derived from:
$$A_{\mathrm{FB}}^{b,c}=(1-𝒞_{b,c}𝒮_{b,c})A_{\mathrm{FB}}^{b,c}|_{\mathrm{no}\;\mathrm{QCD}}$$
(7)
where $`𝒞_{b,c}`$ represents the QCD correction at parton-through-to-hadron level, and $`𝒮_{b,c}`$ is the analysis dependent modification. Examples of the magnitude of the QCD corrections at the theoretical and experimental levels are shown in Table 5 for the cases of $`A_{\mathrm{FB}}^b`$, determined using semileptonic and lifetime+jetcharge analyses.
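Applying or undoing the correction of eq. (7), in the form reconstructed above, is then a one-liner; the helper names are ours:

```python
def apply_qcd_correction(a_no_qcd, C, S):
    """Eq. (7): A_FB = (1 - C*S) * A_FB(no QCD)."""
    return (1.0 - C * S) * a_no_qcd

def remove_qcd_correction(a_measured, C, S):
    """Recover the pure electroweak asymmetry from a measured value."""
    return a_measured / (1.0 - C * S)
```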
The constants are evaluated using Monte Carlo simulations, before and after experimental cuts. The hadronisation dependence of the corrections is included as a systematic uncertainty by comparing results from both the HERWIG and JETSET models. Comparing parton and hadron level shows that the effect of hadronisation is to reduce the magnitude of the QCD correction<sup>2</sup><sup>2</sup>2It is thought that non-perturbative colour reconnection effects during the shower may be responsible.
It is important to note that the corrections for lifetime+jetcharge measurements are negligible, while significant corrections are observed for both semileptonic and $`D`$ tag measurements of both $`A_{\mathrm{FB}}^b`$ and $`A_{\mathrm{FB}}^c`$. Jetcharge measurements are immune to such corrections as the $`b`$-quark charge mistag factor is defined using Monte Carlo with respect to the original $`b\overline{b}`$ quark pair orientation, prior to gluon or final state photon radiation, parton shower, hadronisation and $`B^0\overline{B^0}`$ mixing. All these effects are therefore included, by construction, in the analyses, as far as they are properly modelled in the JETSET hadronisation model.
## 6 Conclusion and Perspectives
With the completion of LEP data-taking at energies close to the Z resonance in 1995, the 4 experiments (ALEPH, DELPHI, L3 and OPAL) have accumulated large samples of hadronic events. From this data, the forward-backward asymmetry of the $`b`$ quark has emerged as the most sensitive single test of the SM at LEP. The complementary $`c`$ asymmetry measurements offer additional precision and a new window on couplings in the down-type quark family. The precision from semileptonic measurements of $`A_{\mathrm{FB}}^b`$ is now matched by that of lifetime+jetcharge measurements. The electroweak sensitivity of $`A_{\mathrm{FB}}^c`$ measurements now equals that obtained from combined quark asymmetries measured in untagged samples, highlighting the beneficial effect of flavour tagging.
However, in light of these measurements' great sensitivity to the couplings of the SM, the continuing discrepancy between electroweak results from LEP and SLD makes it essential to understand whether it is due to statistical fluctuations or systematic effects. Separating LEP measurements of $`A_{\mathrm{FB}}^b`$ into those from the two dominant techniques, and conservatively ignoring correlated systematic uncertainties, indicates that there is at most a $`1.2\sigma `$ discrepancy between semileptonic and lifetime+jetcharge measurements. This is insufficient to explain the LEP-SLD discrepancy, but indicates that care must be taken when considering common systematics in leptonic decay modelling and fragmentation uncertainties. Further improvements in both $`b`$ and $`c`$ asymmetries are possible, as both sets of measurements are still dominated by statistics. For semileptonic analyses, the benefits of using both lifetime and lepton information are emphasised. In the case of the lifetime and jetcharge method, improvements are most likely to come in the form of improved $`b`$ tagging efficiencies and extensions of tagging to lower angles. The situation for improvements to measurements of $`A_{\mathrm{FB}}^c`$ is more difficult, as the number of available modes is exhausted and efficient methods of tagging $`c`$ events remain to be discovered.
At this point, without the prospect of significant further LEP data-taking at the Z, it is important to focus upon the latest techniques which offer the greatest sensitivity to the couplings of the SM combined with systematic control. The measurements described here obtain combined precisions on the $`b`$ and $`c`$ asymmetries of 2.2% and 7.1% respectively. Hence, the goal of achieving similar precision on the Z couplings to quarks as that obtained for leptons has been reached.
# IC/99/36 April 1999 NEUTRINO SPECTRUM, OSCILLATION SCENARIOS AND NEUTRINOLESS DOUBLE BETA DECAY
## Abstract
We introduce the representation on one unitarity triangle of the constraints resulting (1) from the interpretation of solar and atmospheric neutrino data in terms of oscillations, and (2) from the search for neutrinoless double beta decay. We show its use for the study of a nearly degenerate neutrino spectrum. The representation shows clearly the particular cases when the neutrinoless double beta decay rate can (or cannot) be small, that is: when the connection of the decay rate with the neutrino spectrum is less (or more) direct. These cases turn out to depend crucially on the scenario of oscillation (MSW solutions, vacuum oscillations, averaged oscillations), and in particular on the size of the mixing between the electron neutrino and the neutrino state giving rise to atmospheric neutrino oscillations.
Atmospheric neutrino data can be interpreted in terms of a dominant $`\nu _\mu \to \nu _\tau `$ oscillation channel; a sub-dominant channel $`\nu _\mu \to \nu _\mathrm{e}`$ is not excluded. In terms of the mixing elements, we can summarize this information by $`|U_{\tau 3}^2|\sim |U_{\mu 3}^2|\gg |U_{\mathrm{e3}}^2|,`$ assuming that the heaviest mass $`m_3`$ is the one responsible for atmospheric neutrino oscillations.
Several possibilities are open for the interpretation of the solar neutrino data, depending on the frequencies of oscillation and mixings.
However, there is still quite limited knowledge of the neutrino mass spectrum itself. The search for neutrinoless double beta ($`0\nu 2\beta `$) decay can shed light on this issue. The bound obtained on the parameter $`\mathcal{M}_{\mathrm{ee}}=|\sum _iU_{\mathrm{e}i}^2m_i|`$ ($`m_3\ge m_2\ge m_1`$) is appreciably smaller than the mass scales probed by present studies of $`\beta `$-decay, or those inferred in cosmology.
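For the nearly degenerate spectrum discussed next, the allowed range of $`\mathcal{M}_{\mathrm{ee}}`$ follows from a scan over the unknown Majorana phases; a numerical sketch (our own illustration, assuming only that the $`|U_{\mathrm{e}i}^2|`$ sum to one by unitarity):

```python
import numpy as np

def m_ee(masses, Ue2, phases):
    """|sum_i U_ei^2 m_i| with relative Majorana phases attached."""
    m = np.asarray(masses, float)
    u = np.asarray(Ue2, float)       # |U_ei|^2, summing to 1 by unitarity
    return abs(np.sum(u * m * np.exp(1j * np.asarray(phases, float))))

def m_ee_range_degenerate(m1, Ue2, n=200):
    """Min and max of M_ee over the two relative phases, for m_i ~ m1."""
    a = np.linspace(0.0, 2.0 * np.pi, n)
    g = np.array([[m_ee([m1] * 3, Ue2, (0.0, x, y)) for x in a] for y in a])
    return g.min(), g.max()
```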
The largest theoretical value is attained for a nearly degenerate spectrum $`m_1\approx m_2\approx m_3`$; the value is<sup>1</sup><sup>1</sup>1Notice however that a value of $`\mathcal{M}_{\mathrm{ee}}\sim (\mathrm{\Delta }m_{atm}^2)^{1/2}`$ would already be quite large on its own. $`\mathcal{M}_{\mathrm{ee}}\approx m_1,`$ with good approximation if $`m_1\gg (\mathrm{\Delta }m_{atm}^2)^{1/2}.`$ The corresponding minimum value, under an arbitrary variation of the unknown phases, is conveniently represented in a unitarity triangle (fig 1). From the figure it is visible that, to interpret properly the results of $`0\nu 2\beta `$ decay studies (and possibly, to exclude the inner region in the $`1^{st}`$ plot, the one where $`\mathcal{M}_{\mathrm{ee}}\ll m_1`$ is possible) we need precise information on the mixing elements. This requires distinguishing among oscillation scenarios. The plots also illustrate the importance of quantifying the size of $`|U_{\mathrm{e3}}^2|`$. In the case in which the state responsible for neutrino oscillations is the lightest one–not the heaviest–very similar considerations apply. Graphically, this case corresponds to a $`120^{\circ }`$ rotation of the last three plots of fig 1 (the sub-dominant mixing being $`|U_{\mathrm{e1}}^2|`$).
Acknowledgements
I thank for useful discussions R Barbieri, C Giunti, M Maris and A Yu Smirnov.
# SCALE-FREE EQUILIBRIA OF ISOPEDIC POLYTROPIC CLOUDS
## 1 INTRODUCTION
The stage leading up to dynamic collapse of a magnetically subcritical cloud core to a protostar or a group of protostars is believed to be largely quasi-static, if the responsible process is ambipolar diffusion (e.g., Mestel & Spitzer 1956, Nakano (1979), Lizano & Shu (1989), Tomisaka et al. (1989), Basu & Mouschovias (1994)).<sup>1</sup><sup>1</sup>1For example, as measured either by the net accelerations or by the square of the inward flow speed divided by the sound speed, the ambipolar-diffusion models in Figures 3 and 6 of Ciolek & Mouschovias (1994) spend less than 0.1% of the total computed evolutionary time in states where even a single grid point is more than 10% out of mechanical balance with self-gravity, magnetic forces, and thermal pressure (see also Figs. 3 and 7 of Basu & Mouschovias 1994 and Fig. 1 of Ciolek & Koenigl 1998). To describe the transition between quasi-static evolution by ambipolar diffusion and dynamical evolution by gravitational collapse, Li & Shu (1996) introduced the idea of a pivotal state, with the scale-free, magnetostatic, density distribution approaching $`\rho \propto r^{-2}`$ for an isothermal equation of state (EOS) when the mass-to-flux ratio has a spatially constant value, a condition that Shu & Li (1997) and Li & Shu (1997) termed “isopedic”. Numerical simulations of the contraction of magnetized clouds justify the assumption of a nearly constant mass-to-flux ratio in the pivotal core.<sup>2</sup><sup>2</sup>2For example, inside the starred point where Ciolek & Mouschovias (1994) consider the core to begin, the mass-to-flux ratio varies in the last models of their Figures 3 and 6 by a factor of only 3 or 2 over a range where the density varies by a factor $`10^5`$. Outside the starred point, the mass-to-flux value exhibits greater variation, but this occurs only because Ciolek & Mouschovias (1994) impose starting values for the mass-to-flux in the envelope that are $`2\times 10^{-2}`$ times the critical value (see also Figs. 4a and 8b in Basu & Mouschovias (1994)). Such small ratios for the bulk of the mass of a molecular cloud are probably ruled out by the Zeeman OH measurements summarized by Crutcher (1998).
The small dense cores of molecular clouds that give rise to low-mass star formation are effectively isothermal (Myers & Benson (1983); Shu, Adams, & Lizano (1987)). The situation may be different for larger regions that yield high-mass or clustered star formation. It has often been suggested that the EOS relating the gas pressure $`P`$ to the mass density $`\rho `$ of interstellar clouds can be represented by a polytropic relation $`P\propto \rho ^{1+1/n}`$ with negative index $`n`$. Shu et al. (1972) pointed out the utility of this idealization within the context of the classic two-phase model of the diffuse interstellar medium \[Pikel’ner (1967); Field, Goldsmith, & Habing (1969); Spitzer & Scott (1969)\], while Viala & Horedt (1974) published extensive tables analyzing the stability of non-magnetized, self-gravitating spheres of such gases. Maloney (1988) examined the linewidth-size and density-size relations of molecular clouds, first found by Larson (1981) and subsequently studied by many authors \[e.g., Leung et al. (1982), Torrelles et al. (1983), Dame et al. (1986), Falgarone et al. (1992), Miesch & Bally (1994)\]. Maloney pronounced the results consistent with the properties of negative index polytropes. For a polytropic EOS, the sound speed $`c_s\equiv (dP/d\rho )^{1/2}\propto \rho ^{1/2n}`$ increases with decreasing density if $`n<0`$. The latter behaviour may be compared with the empirical linewidth-density relation for molecular clouds, $`\mathrm{\Delta }v\propto \rho ^{-q}`$, with $`q\simeq 0.5`$ for low-mass cores (Myers & Fuller (1992)) and $`q\simeq 0.15`$ for high-mass cores (Caselli & Myers (1995)), implying that $`n`$ lies between $`-1`$ and $`-3`$, or that a static $`\mathrm{\Gamma }\equiv 1+1/n`$ lies between $`0`$ and 0.7.
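The arithmetic behind this inference is simple enough to spell out; a small helper (ours) maps the observed exponent $`q`$ to the polytropic index:

```python
def polytrope_from_linewidth(q):
    """Delta v ~ rho^(-q) and c_s ~ rho^(1/2n) give n = -1/(2q)."""
    n = -1.0 / (2.0 * q)
    Gamma = 1.0 + 1.0 / n
    return n, Gamma

# q = 0.5  -> (n, Gamma) = (-1.0, 0.0)   (low-mass cores)
# q = 0.15 -> (n, Gamma) ~ (-3.3, 0.7)   (high-mass cores)
```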
The case $`\mathrm{\Gamma }=1/2`$ is of particular relevance for the equilibrium properties of molecular clouds. Walén (1944) found that the pressure of Alfvèn waves propagating in a stratified medium, $`P_{\mathrm{wave}}\propto |\delta 𝐁|^2`$, in the absence of damping obeys the simple polytropic relation $`P_{\mathrm{wave}}\propto \rho ^{1/2}`$, a consequence of conservation of the wave energy flux $`v_A|\delta 𝐁|^2`$. This result was later derived more rigorously by Weinberg (1962) in the WKB approximation for MHD waves propagating in mildly inhomogeneous media, and, more recently by Fatuzzo & Adams (1993) and McKee & Zweibel (1995) in a specific astrophysical context. In numerical simulations of the same problem, Gammie & Ostriker (1996) found indication of a much shallower relation ($`\mathrm{\Gamma }\simeq 0.1`$) for a self-gravitating medium supported by nonlinear Alfvèn waves. On the other hand, for the adiabatic contraction of a cloud supported by linear Alfvèn waves, McKee & Zweibel (1995) found a dynamic $`\gamma `$ larger than 1. Vázquez et al. (1997) confirmed a similar behaviour in numerical simulations of the gravitational collapse of clouds with an initial field of hydrodynamic rather than hydromagnetic turbulence.
In the limit of $`\mathrm{\Gamma }\to 0`$ (or $`n\to -1`$), the EOS becomes “logatropic,” $`P\propto \mathrm{ln}\rho `$, a form first used by Lizano & Shu (1989) to mimic the nonthermal support in molecular clouds associated with the observed supersonic linewidths. The sound speed associated with the nonthermal contribution, $`c_s=(dP/d\rho )^{1/2}\propto \rho ^{-1/2}`$ becomes important at the low densities characteristic of molecular cloud envelopes (as contrasted with the cloud cores) since the thermal contribution is independent of density if the temperature $`T`$ remains constant. This nonthermal contribution decreases with increasing density and will become subsonic at high densities as recently observed in the central regions of dense cores (Barranco & Goodman (1998)). McLaughlin & Pudritz (1996) and McLaughlin & Pudritz (1997) have modeled the equilibrium and collapse properties of unmagnetized, self-gravitating, spheres with a pure logatropic EOS and claim to find good agreement with observations.
Adams, Lizano, & Shu (in 1987) independently obtained the similarity solution for the gravitational collapse of an unmagnetized singular logatropic sphere (SLS), but they chose not to publish their findings until they had learned how to magnetize the configuration in a nontrivial way (see the reference to this work in Fuller & Myers (1992), who considered the practical modifications to the protostellar mass-infall rate introduced by “nonthermal” contributions to the support against self-gravity). Magnetization constitutes an important program to carry out if we try to justify a nonthermal EOS as the result of a superposition of propagating MHD waves (see also Holliman & McKee 1993). In this paper, we extend the study of Li & Shu (1996) to include the isopedic magnetization of pivotally self-gravitating clouds with a polytropic equation of state. As a by-product of this investigation, we obtain the unanticipated and ironic result that the only way to magnetize a singular logatropic configuration and maintain a scale-free equilibrium is to do it trivially, i.e., by threading the SLS with straight and uniform field lines (see §6).
A basic consequence of treating the turbulence as a scalar pressure, coequal to the thermal pressure except for satisfying a different EOS, is that we do not change the basic topology of the magnetic field. This assumption may require reassessment if MHD turbulence enables fast magnetic reconnection (Vishniac & Lazarian (1999)) and allows the magnetic fields of highly flattened cloud cores (Mestel & Strittmatter 1967, Barker & Mestel (1996)) or pseudodisks (Galli & Shu 1993b ) to disconnect from their background. Recent MHD simulations carried out in multiple spatial dimensions (e.g., Stone, Ostriker, & Gammie (1998); MacLow et al (1998); Ostriker, Gammie, & Stone (1999); Padoan & Nordlund (1999)) find turbulence in strongly magnetized media to decay almost as fast as in unmagnetized media. Such decay may be responsible for accelerating molecular cloud core formation above simple ambipolar diffusion rates (Nakano (1998), Myers & Lazarian (1998), Shu et al. (1999)). Although this result also cautions against treating turbulence on an equal footing as thermal pressure, we attempt a simplified first analysis that includes magnetization to assess the resulting configurational changes when we adopt an alternative EOS for the pivotal state. In particular, different power-law dependences of the radial density profile translate immediately to different time dependences in the mass-infall rate for the subsequent inside-out collapse (Cheng (1978), McLaughlin & Pudritz 1997).
The paper is organized as follows. In §2 we formulate the equations of the scale-free problem and show that each solution depends only on the polytropic exponent $`\mathrm{\Gamma }`$ and a nondimensional parameter $`H_0`$ related to the cloud’s morphology. In §3 we present the numerical results. In §4, §5, and §6 we discuss the limiting form of the solutions. Finally, in §7 we give our conclusions and discuss the possible implications of our results for star formation and the structure of giant molecular clouds.
## 2 SELF-SIMILAR MAGNETOSTATIC EQUILIBRIUM EQUATIONS
To begin, we generalize the singular polytropic sphere in the same way that Li & Shu (1996) generalized the singular isothermal sphere (SIS). In the absence of an external boundary pressure, the only place the pressure $`P`$ enters in the equations of magnetostatic equilibrium is through a gradient. Consider then the polytropic relation
$$\frac{dP}{d\rho }=K\rho ^{-(1-\mathrm{\Gamma })}.$$
(1)
By integrating equation (1) we recover for $`\mathrm{\Gamma }=1`$ the isothermal EOS, $`P=K\rho `$ (where $`K`$ is the square of the isothermal sound speed), and for $`\mathrm{\Gamma }=0`$ the logatropic EOS, $`P=K\mathrm{ln}\rho `$.
We adopt axial symmetry in spherical coordinates and consider a poloidal magnetic field given by
$$𝐁=\frac{1}{2\pi }\nabla \times \left(\frac{\mathrm{\Phi }}{r\mathrm{sin}\theta }\widehat{e}_\varphi \right),$$
(2)
where $`\mathrm{\Phi }(r,\theta )`$ is the magnetic flux. Force balance along field lines requires
$$V+\frac{1}{\mathrm{\Gamma }-1}K\rho ^{-(1-\mathrm{\Gamma })}=h(\mathrm{\Phi }),$$
(3)
where $`V`$ is the gravitational potential and $`h(\mathrm{\Phi })`$ is the Bernoulli “constant” along the field line $`\mathrm{\Phi }=`$ constant. Poisson’s equation now reads
$$\frac{1}{r^2}\frac{\partial }{\partial r}\left[r^2\left(\frac{dh}{d\mathrm{\Phi }}\frac{\partial \mathrm{\Phi }}{\partial r}-K\rho ^{-(2-\mathrm{\Gamma })}\frac{\partial \rho }{\partial r}\right)\right]+\frac{1}{r^2\mathrm{sin}^2\theta }\frac{\partial }{\partial \theta }\left[\mathrm{sin}\theta \left(\frac{dh}{d\mathrm{\Phi }}\frac{\partial \mathrm{\Phi }}{\partial \theta }-K\rho ^{-(2-\mathrm{\Gamma })}\frac{\partial \rho }{\partial \theta }\right)\right]=4\pi G\rho ;$$
(4)
whereas force balance across field lines reads
$$\frac{1}{16\pi ^3r^2\mathrm{sin}^2\theta }\left(\frac{\partial ^2\mathrm{\Phi }}{\partial r^2}+\frac{1}{r^2}\frac{\partial ^2\mathrm{\Phi }}{\partial \theta ^2}-\frac{\mathrm{cot}\theta }{r^2}\frac{\partial \mathrm{\Phi }}{\partial \theta }\right)=\rho \frac{dh}{d\mathrm{\Phi }}.$$
(5)
We look for scale-free solutions of the above equations by nondimensionalizing and separating variables:
$$\rho =\left(\frac{K}{2\pi Gr^2}\right)^{1/(2-\mathrm{\Gamma })}R(\theta ),$$
(6a)
$$\mathrm{\Phi }=4\left(\frac{\pi ^{3-2\mathrm{\Gamma }}Kr^{4-3\mathrm{\Gamma }}}{G^{\mathrm{\Gamma }/2}}\right)^{1/(2-\mathrm{\Gamma })}\varphi (\theta ),$$
(6b)
$$\frac{dh}{d\mathrm{\Phi }}=H_0\left(\frac{2^{3\mathrm{\Gamma }-2}KG^{2-2\mathrm{\Gamma }}}{\pi ^{1-\mathrm{\Gamma }}\mathrm{\Phi }^{2-\mathrm{\Gamma }}}\right)^{1/(4-3\mathrm{\Gamma })},$$
(6c)
where $`H_0`$ is a dimensionless constant that measures the deviation from a force free magnetic field, and $`R(\theta )`$ and $`\varphi (\theta )`$ are dimensionless functions of the polar angle $`\theta `$.<sup>3</sup><sup>3</sup>3These definitions are not applicable for $`\mathrm{\Gamma }=4/3`$ or $`\mathrm{\Gamma }=2`$. These assumptions imply that the equilibria will have spatially constant mass-to-flux ratios (see below). Substitution of equation (6c) into equations (4) and (5) yields
$$\frac{1}{\mathrm{sin}\theta }\frac{d}{d\theta }\left\{\mathrm{sin}\theta \left[A_\mathrm{\Gamma }H_0\varphi ^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\varphi ^{}-R^{-(2-\mathrm{\Gamma })}R^{}\right]\right\}=$$
$$2\left[R-\frac{(4-3\mathrm{\Gamma })}{(2-\mathrm{\Gamma })^2}R^{-(1-\mathrm{\Gamma })}-\left(\frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}\right)^2B_\mathrm{\Gamma }H_0\varphi ^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\right],$$
(7)
and
$$\frac{1}{\mathrm{sin}^2\theta }\left[\varphi ^{\prime \prime }-\mathrm{cot}\theta \varphi ^{}+\frac{2(4-3\mathrm{\Gamma })(1-\mathrm{\Gamma })}{(2-\mathrm{\Gamma })^2}\varphi \right]=C_\mathrm{\Gamma }H_0R\varphi ^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })},$$
(8)
where a prime denotes differentiation with respect to $`\theta `$, and
$$A_\mathrm{\Gamma }=2^{\mathrm{\Gamma }(3-2\mathrm{\Gamma })/(4-3\mathrm{\Gamma })(2-\mathrm{\Gamma })},$$
(9a)
$$B_\mathrm{\Gamma }=2^{(1-\mathrm{\Gamma })(8-5\mathrm{\Gamma })/(4-3\mathrm{\Gamma })(2-\mathrm{\Gamma })},$$
(9b)
$$C_\mathrm{\Gamma }=2^{\mathrm{\Gamma }(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })(2-\mathrm{\Gamma })}.$$
(9c)
In particular, for $`H_0=0`$, eq. (7) gives the dimensionless density for the non-magnetized singular polytropic sphere
$$R=\left[\frac{4-3\mathrm{\Gamma }}{(2-\mathrm{\Gamma })^2}\right]^{1/(2-\mathrm{\Gamma })},$$
(10)
whereas eq. (8) implies $`\mathrm{\Phi }=0`$ for $`0<\mathrm{\Gamma }\le 1`$, in order to satisfy the boundary conditions eq. (11). In this case, the mass-to-flux ratio $`\lambda _r`$ is infinite. However, for $`\mathrm{\Gamma }=0`$, eq. (8) admits also the analytic solution $`\mathrm{\Phi }\propto r^2\mathrm{sin}^2\theta `$ corresponding to a straight and uniform field, while the density function is $`R(\theta )=1`$. Therefore, a spherical logatropic scale-free cloud can be magnetized with a uniform magnetic field of any strength, and any value of the spherical mass-to-flux ratio is allowed.<sup>4</sup><sup>4</sup>4In this case, $`\lambda _r^2=2\mu ^2=[2\varphi (\pi /2)^2]^{-1}`$.
For arbitrary values of $`\mathrm{\Gamma }`$ and $`H_0`$ the ordinary differential equations (ODEs) (7) and (8) are to be integrated subject to the two-point boundary conditions (BCs):
$$\underset{\theta \to 0}{lim}\mathrm{sin}\theta \left[A_\mathrm{\Gamma }H_0\varphi ^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\varphi ^{}-R^{-(2-\mathrm{\Gamma })}R^{}\right]=0,$$
$$\varphi (0)=0,\varphi ^{}(\pi /2)=0,R^{}(\pi /2)=0.$$
(11)
The first BC implies that there is no contribution from the polar axis to the mass inside a radius $`r`$. The second BC comes from the definition of magnetic flux, i.e. no trapped flux at the polar axis. The last two BCs imply no kinks at the midplane.
The equilibria are characterized by:
(a) the spherical mass-to-flux ratio, <sup>5</sup><sup>5</sup>5The standard mass-to-flux ratio $`\lambda =2\pi G^{1/2}M(\mathrm{\Phi })/\mathrm{\Phi }`$ is not defined for the polytropic scale-free magnetized equilibria because the integral $`\int _0^{\pi /2}R(\theta )\varphi (\theta )^{-1}\mathrm{sin}\theta d\theta `$ diverges, since it can be shown that $`R(\theta =0)\ne 0`$ for $`\mathrm{\Gamma }<1`$.
$$\lambda _r\equiv 2\pi G^{1/2}\frac{M(r)}{\mathrm{\Phi }(r,\pi /2)}=2^{(1-\mathrm{\Gamma })/(2-\mathrm{\Gamma })}\left(\frac{2-\mathrm{\Gamma }}{4-3\mathrm{\Gamma }}\right)\frac{1}{\varphi (\pi /2)}\int _0^{\pi /2}R(\theta )\mathrm{sin}\theta d\theta ,$$
(12)
where $`M(r)`$ is the mass enclosed within a radius $`r`$;
(b) the factor $`D`$ by which the average density is enhanced over the non-magnetized value because of the extra support provided by magnetic fields,
$$D\equiv \left[\frac{4-3\mathrm{\Gamma }}{(2-\mathrm{\Gamma })^2}\right]^{-1/(2-\mathrm{\Gamma })}\int _0^{\pi /2}R(\theta )\mathrm{sin}\theta d\theta ,$$
(13)
which is equal to $`1`$ if $`H_0=0`$ (see eq. 10);
(c) the sound speed,
$$c_s^2=\left(2\pi Gr^2\right)^{(1-\mathrm{\Gamma })/(2-\mathrm{\Gamma })}K^{1/(2-\mathrm{\Gamma })}R(\theta )^{-(1-\mathrm{\Gamma })};$$
(14)
and (d) the Alfvèn speed
$$v_A^2=2^{\mathrm{\Gamma }/(2-\mathrm{\Gamma })}\left(2\pi Gr^2\right)^{(1-\mathrm{\Gamma })/(2-\mathrm{\Gamma })}K^{1/(2-\mathrm{\Gamma })}\left[(\varphi ^{})^2+\left(\frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}\right)^2\varphi ^2\right]\frac{1}{R(\theta )\mathrm{sin}^2\theta }.$$
(15)
Both the sound speed and the Alfvèn speed scale as $`r^0`$ for $`\mathrm{\Gamma }=1`$, and $`r^{1/2}`$ for $`\mathrm{\Gamma }=0`$; for other values of $`\mathrm{\Gamma }`$, the exponent of $`r`$ lies between these two values.
It is also of interest to define the ratio $`\mu ^2`$ of the square of the sound speed and the square of the Alfvèn speed, each weighted by the density, which is a physical quantity that can be compared with observations:
$$\mu ^2=\frac{\int _0^{\pi /2}c_s^2\rho \mathrm{sin}\theta d\theta }{\int _0^{\pi /2}v_A^2\rho \mathrm{sin}\theta d\theta }=2^{-\mathrm{\Gamma }/(2-\mathrm{\Gamma })}\frac{\int _0^{\pi /2}R(\theta )^\mathrm{\Gamma }\mathrm{sin}\theta d\theta }{\int _0^{\pi /2}\left[(\varphi ^{})^2+\left(\frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}\right)^2\varphi ^2\right]/\mathrm{sin}\theta d\theta }.$$
(16)
If $`c_s`$ represents only the thermal sound speed, then the observational summary given by Fuller & Myers (1992) would imply that $`\mu ^2\sim 1`$ in the quiet low-mass cores of GMCs, whereas $`\mu ^2\sim 10^{-2}`$ in their envelopes. If we include in $`c_s`$, however, the turbulent contribution, then the turbulent speed is likely to be sub-Alfvénic or marginally Alfvénic, and $`\mu ^2\sim 1`$ everywhere is probably a better characterization of realistic clouds.
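Given tabulated solutions $`R(\theta )`$ and $`\varphi (\theta )`$, the diagnostics of eqs. (12), (13) and (16) reduce to quadratures; a sketch (ours), assuming a grid over $`(0,\pi /2]`$ that excludes the pole:

```python
import numpy as np

def diagnostics(theta, R, phi, dphi, Gamma):
    """lambda_r, D and mu^2 from eqs. (12), (13), (16).

    theta must end at pi/2 and exclude 0 (the integrand in the
    denominator of eq. 16 is 0/0 there); dphi is phi'(theta).
    """
    s = np.sin(theta)
    c1 = (4.0 - 3.0 * Gamma) / (2.0 - Gamma)
    IR = np.trapz(R * s, theta)
    lam_r = (2.0 ** ((1.0 - Gamma) / (2.0 - Gamma))
             * IR / (c1 * phi[-1]))                     # eq. (12)
    D = ((4.0 - 3.0 * Gamma) / (2.0 - Gamma) ** 2) ** (-1.0 / (2.0 - Gamma)) * IR
    num = np.trapz(R ** Gamma * s, theta)
    den = np.trapz((dphi ** 2 + c1 ** 2 * phi ** 2) / s, theta)
    mu2 = 2.0 ** (-Gamma / (2.0 - Gamma)) * num / den   # eq. (16)
    return lam_r, D, mu2
```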
## 3 RESULTS
To obtain an equilibrium configuration for given values of $`\mathrm{\Gamma }`$ and $`H_0`$, equations (7) and (8) are integrated numerically. The integration is started at $`\theta =0`$ using the expansions: $`\varphi =a_0\xi ^2+\cdots `$, $`R=b_0+b_2\xi ^{4(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}+\cdots `$, with $`\xi =\mathrm{sin}\theta `$, and $`b_2=[(4-3\mathrm{\Gamma })/2(1-\mathrm{\Gamma })]A_\mathrm{\Gamma }H_0a_0^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}b_0^{2-\mathrm{\Gamma }}`$. The values of $`a_0`$ and $`b_0`$ are varied until the two BCs at $`\theta =\pi /2`$ (eq. 11) are satisfied. For flattened equilibria (see below) it is more convenient to start from $`\theta =\pi /2`$, where the BCs $`\varphi ^{}(\pi /2)=0`$ and $`R^{}(\pi /2)=0`$ are imposed, and integrate toward $`\theta =0`$. The values of $`\varphi (\pi /2)`$ and $`R(\pi /2)`$ are then varied until a solution is found that satisfies the two BCs at $`\theta =0`$.
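A shooting-method skeleton along these lines is sketched below (ours; it adopts the signs of eqs. (7)–(8) as reconstructed above, so it illustrates the procedure rather than providing a validated solver; valid for $`0<\mathrm{\Gamma }<1`$):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def coeffs(Gamma):
    d = (4 - 3 * Gamma) * (2 - Gamma)
    return (2 ** (Gamma * (3 - 2 * Gamma) / d),        # A_Gamma, eq. (9a)
            2 ** ((1 - Gamma) * (8 - 5 * Gamma) / d),  # B_Gamma, eq. (9b)
            2 ** (Gamma * (1 - Gamma) / d))            # C_Gamma, eq. (9c)

def rhs(t, y, Gamma, H0):
    """First-order form of eqs. (7)-(8); y = [R, R', phi, phi']."""
    A, B, C = coeffs(Gamma)
    p = -(2 - Gamma) / (4 - 3 * Gamma)
    L = 2 * (4 - 3 * Gamma) * (1 - Gamma) / (2 - Gamma) ** 2
    R, dR, f, df = y
    ct = 1.0 / np.tan(t)
    d2f = ct * df - L * f + np.sin(t) ** 2 * C * H0 * R * f ** p    # eq. (8)
    src = 2 * (R - (4 - 3 * Gamma) / (2 - Gamma) ** 2 * R ** (Gamma - 1)
               - ((4 - 3 * Gamma) / (2 - Gamma)) ** 2 * B * H0
               * f ** (2 * (1 - Gamma) / (4 - 3 * Gamma)))          # eq. (7) rhs
    G = A * H0 * f ** p * df - R ** (Gamma - 2) * dR
    d2R = R ** (2 - Gamma) * (A * H0 * (p * f ** (p - 1) * df ** 2 + f ** p * d2f)
                              + (2 - Gamma) * R ** (Gamma - 3) * dR ** 2
                              - (src - ct * G))
    return [dR, d2R, df, d2f]

def residual(params, Gamma, H0, eps=1e-3):
    """Series start near the pole; demand R'(pi/2) = phi'(pi/2) = 0."""
    a0, b0 = params
    A, _, _ = coeffs(Gamma)
    beta = 4 * (1 - Gamma) / (4 - 3 * Gamma)
    b2 = ((4 - 3 * Gamma) / (2 * (1 - Gamma)) * A * H0
          * a0 ** (2 * (1 - Gamma) / (4 - 3 * Gamma)) * b0 ** (2 - Gamma))
    xi, cs = np.sin(eps), np.cos(eps)
    y0 = [b0 + b2 * xi ** beta, beta * b2 * xi ** (beta - 1) * cs,
          a0 * xi ** 2, 2 * a0 * xi * cs]
    sol = solve_ivp(rhs, [eps, np.pi / 2], y0, args=(Gamma, H0), rtol=1e-8)
    return [sol.y[1, -1], sol.y[3, -1]]

# a0, b0 = fsolve(residual, x0=[0.1, 0.6], args=(0.5, 0.5))  # Gamma, H0
```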
Figure 1 shows the resulting flux and density functions $`\varphi (\theta )`$ and $`R(\theta )`$ computed for $`H_0=0.5`$ and values of $`\mathrm{\Gamma }`$ between 0.2 and 1. We reproduce the results of Li & Shu (1996) for $`\mathrm{\Gamma }=1`$, which is the only case that obtains perfect toroids (i.e., $`R[\theta =0]=0`$); models with $`\mathrm{\Gamma }<1`$ have nonzero density at the polar axis. Figure 2 shows the corresponding density contours and magnetic field lines. In the limit $`\mathrm{\Gamma }\to 0`$, independent of $`H_0`$ as long as it is nonzero, the pivotal configuration becomes thin disks with an ever increasing magnetic field strength. Table 1 shows the spherical mass to flux ratio $`\lambda _r`$, the overdensity parameter $`D`$, and the ratio of the square of the sound and Alfvèn speeds $`\mu ^2`$. This table shows that, for fixed $`H_0`$, $`\mu ^2`$ decreases as $`\mathrm{\Gamma }`$ decreases because the magnetic field becomes stronger. For the same reason $`D`$ increases. In contrast, $`\lambda _r`$ goes through a minimum as $`\mathrm{\Gamma }`$ decreases. Figures 1 and 2 demonstrate that for $`\mathrm{\Gamma }\to 0`$ (the logatropic limit), $`H_0`$ is not a measure of the strength of the magnetic fields since $`\varphi `$ diverges as $`R(\theta )\to \delta (\pi /2-\theta )`$ (see §5 below).
For fixed $`\mathrm{\Gamma }`$, a sequence from small $`H_0`$ to large $`H_0`$ progresses through configurations of increasing support by magnetic fields, as demonstrated explicitly for the isothermal case by Li & Shu (1996). This behavior is illustrated here for the $`\mathrm{\Gamma }=1/2`$ case in Figure 3, which shows the density contours and magnetic field lines corresponding to values of $`H_0`$ from 0.05 to 1.5. Table 2 shows the corresponding values of $`\lambda _r`$, $`D`$, and $`\mu ^2`$. For small $`H_0`$, the equilibria have nearly spherically symmetric isodensity contours and weak quasiuniform magnetic fields that provide little support against gravity. With increasing $`H_0`$, the pivotal configurations flatten. The case $`H_0=1.5`$ is already quite disklike: the pole to equator density contrast is $`R(\pi /2)/R(0)\sim 10^6`$. For a thin disk, the analysis of Shu & Li (1997) demonstrates that magnetic tension provides virtually the sole means of horizontal support against self-gravity, with gas and magnetic pressures being important only for the vertical structure. In the limit of a completely flattened disk ($`H_0\to \mathrm{\infty }`$), $`\lambda _r\to 1`$ independent of the detailed nature of the gas EOS (see next section). Table 2 shows the spherical mass to flux ratio $`\lambda _r`$, the overdensity parameter $`D`$, and the ratio of the square of the sound and Alfvèn speeds $`\mu ^2`$. Again $`D`$ increases monotonically and $`\mu ^2`$ decreases monotonically as the magnetic support increases with $`H_0`$, while $`\lambda _r`$ goes through a minimum and tends to 1 for large $`H_0`$.
Since the mass-to-flux ratio $`\lambda _r`$ is a fundamental quantity that will not change unless magnetic field is lost by ambipolar diffusion, in Figure 4 we consider sequences where $`\lambda _r`$ is held fixed, but $`\mathrm{\Gamma }`$ is varied. This Figure shows the locus of the set of equilibria with $`\lambda _r=0.95,1,`$ and $`2`$ in the ($`H_0`$, $`\mathrm{\Gamma }`$) plane. Equilibria with $`\lambda _r<1`$ are highly flattened when $`\mathrm{\Gamma }\to 0`$ even for small but fixed values of $`H_0`$ (see §6). In fact, to obtain incompletely flattened clouds when one takes the limit $`\mathrm{\Gamma }\to 0`$, one also needs simultaneously to consider the limit $`H_0\to 0`$. Unfortunately, because both the density and the strength of the magnetic field at the midplane diverge as the equilibria become highly flattened, we are unable to follow numerically the limit $`\mathrm{\Gamma }\to 0`$ to verify if these sequences of constant $`\lambda _r<1`$ will hook to a finite value in the $`H_0`$ axis, or will loop to $`H_0=0`$, consistent with our demonstration in §4 that flattened disks do not exist in the logatropic limit.<sup>6</sup><sup>6</sup>6As the equilibria flatten due to either small $`\mathrm{\Gamma }`$ or large $`H_0`$, it becomes necessary to determine the constants of the expansions of $`R(\theta )`$ and $`\varphi (\theta )`$ near the origin with prohibitively increasing accuracy.
We speculate that the results for $`\lambda _r<1`$ have the following physical interpretation. According to the theorem of Shu & Li (1997), only if $`\lambda `$ itself rather than $`\lambda _r`$ is less than unity, the magnetic field is strong enough overall to prevent the gravitational collapse of a highly flattened cloud. However, for moderate $`H_0`$ and $`\mathrm{\Gamma }`$ when $`\lambda _r<1`$, even the singular equilibria are probably magnetically subcritical, since there can be little practical difference between the spherical mass-to-flux ratio $`\lambda _r`$ and the “true” mass-to-flux ratio $`\lambda `$ for highly flattened configurations. The latter is formally infinite when $`\mathrm{\Gamma }<1`$ only because the mass column goes to zero a little slower than the field column when we perform an integration along the central field line (see footnote 3). In this interpretation, subcritical scale-free clouds with $`\lambda _r<1`$ and intermediate values of $`\mathrm{\Gamma }`$ can become highly flattened because magnetic tension supports them laterally against their self-gravity while the soft EOS does not provide much resistance in the direction along the field lines. The squeezing of the cloud toward the midplane is compounded by the confining pressure of bent magnetic field lines that exert pinch forces in the vertical direction. Both the magnetic tension and the vertical pinch of magnetic pressure disappear when the field lines unbend, as they must to maintain the scale-free equilibria in the limit $`\mathrm{\Gamma }\to 0`$ (see below). As a consequence, logatropic configurations become spherical for any value of $`\lambda _r`$. We leave as an interesting problem for future elucidation the determination whether there is still a threshold in $`\lambda _r`$ below which the SLS, embedded with straight and uniform field lines, will not collapse dynamically.
## 4 THE THIN DISK LIMIT ($`H_0\gg 1`$)
In the limit $`H_0\gg 1`$, the cloud flattens to a thin disk for any $`\mathrm{\Gamma }\le 1`$. Dominant balance arguments applied to the two ODEs of the problem reveal the following asymptotic behaviour:<sup>7</sup><sup>7</sup>7These expansions are not valid for $`\mathrm{\Gamma }=1`$. See Li & Shu (1996) for the correct asymptotic expansion in this case.
$$R(\theta )\simeq R_0\delta (\theta -\pi /2)H_0^{(4-3\mathrm{\Gamma })/(2-\mathrm{\Gamma })}+s(\theta )H_0^{-(4-3\mathrm{\Gamma })/(2-\mathrm{\Gamma })(1-\mathrm{\Gamma })},$$
(17)
$$\varphi \simeq f(\theta )H_0^{(4-3\mathrm{\Gamma })/(2-\mathrm{\Gamma })}.$$
(18)
To the lowest order in $`H_0`$ the equation of force balance along field lines (eq. 7) becomes:
$`{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{d}{d\theta }}\left\{\mathrm{sin}\theta \left[A_\mathrm{\Gamma }f^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}f^{}-s^{-(2-\mathrm{\Gamma })}s^{}\right]\right\}=`$
$`-2\left[{\displaystyle \frac{4-3\mathrm{\Gamma }}{(2-\mathrm{\Gamma })^2}}s^{-(1-\mathrm{\Gamma })}+\left({\displaystyle \frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}}\right)B_\mathrm{\Gamma }f^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\right],`$ (19)
valid over the interval $`0\le \theta <\pi /2`$, plus the integral constraint
$$R_0-\frac{4-3\mathrm{\Gamma }}{(2-\mathrm{\Gamma })^2}\int _0^\pi s^{-(1-\mathrm{\Gamma })}\mathrm{sin}\theta d\theta -\left(\frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}\right)^2B_\mathrm{\Gamma }\int _0^\pi f^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\mathrm{sin}\theta d\theta =0,$$
(20)
obtained by integrating eq. (7) from $`\theta =0`$ to $`\pi `$, and applying the first BC (eq. 11) on the polar axis.
The constant $`R_0`$ is proportional to the surface density of the polytropic disks, given by
$$\mathrm{\Sigma }(r)\equiv \underset{ϵ\to 0}{lim}\int _{\pi /2-ϵ}^{\pi /2+ϵ}\rho r\mathrm{sin}\theta d\theta \simeq \left(\frac{K}{2\pi G}\right)^{1/(2-\mathrm{\Gamma })}r^{-\mathrm{\Gamma }/(2-\mathrm{\Gamma })}R_0H_0^{(4-3\mathrm{\Gamma })/(2-\mathrm{\Gamma })},$$
(21)
which, for $`\mathrm{\Gamma }=1`$ gives $`\mathrm{\Sigma }\simeq H_0a^2/\pi Gr`$, as found by Li & Shu (1996).
Eq. (8) expressing force balance across field lines reduces to
$$\frac{1}{\mathrm{sin}^2\theta }\left[f^{\prime \prime }-\mathrm{cot}\theta f^{}+\ell (\ell +1)f\right]=C_\mathrm{\Gamma }f^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}R_0\delta (\theta -\pi /2),$$
(22)
where the parameter $`\ell `$ is defined by
$$\ell (\ell +1)\equiv \frac{2(4-3\mathrm{\Gamma })(1-\mathrm{\Gamma })}{(2-\mathrm{\Gamma })^2}.$$
(23)
This is equivalent to the equation for force free magnetic fields
$$f^{\prime \prime }-\mathrm{cot}\theta f^{}+\ell (\ell +1)f=0,$$
(24)
valid over the interval $`0\le \theta <\pi /2`$, plus the condition
$$2f^{}(\pi /2)=C_\mathrm{\Gamma }R_0f(\pi /2)^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })},$$
(25)
obtained integrating eq. (8) from $`\pi /2ϵ`$ to $`\pi /2+ϵ`$, and taking the limit $`ϵ0`$. For integer $`\mathrm{}`$, solutions of eq. (24) regular at $`\theta =0`$ are Gegenbauer polynomials of order $`\mathrm{}`$ and index $`\frac{1}{2}`$, $`C_{\mathrm{}}^{(\frac{1}{2})}`$ (see e.g. Abramowitz & Stegun 1965). In general, it can be shown (Chandrasekhar 1955) that any axisymmetric force free field, separable in spherical coordinates, can be expressed in terms of fundamental solutions whose radial dependence is given by a combination of Bessel functions of fractional order, and the angular dependence by Gegenbauer polynomials of index $`\frac{1}{2}`$. In our case, the choice of $`\mathrm{\Gamma }`$ determines a particular exponent of the power-law for the radial part of the flux function, and hence the corresponding value of $`\mathrm{}`$ (non-integer, except for $`\mathrm{\Gamma }=0`$ and 1).
Therefore, the magnetic field is force-free everywhere except at the midplane, where $`\rho \ne 0`$ and the condition of force balance across field lines has to be satisfied. In the thin disk limit discussed here, the boundary condition $`\varphi ^{}(\pi /2)=0`$ is clearly not fulfilled: the kink of $`\varphi `$ at the midplane provides the magnetic support against self-gravity on the midplane. Currents must exist in the disk to support these kinks.
With the definitions
$$y(\theta )\equiv -A_\mathrm{\Gamma }\frac{4-3\mathrm{\Gamma }}{2}f(\theta )^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })},z(\theta )\equiv s(\theta )^{-(1-\mathrm{\Gamma })},$$
eq. (4) transforms into
$$z^{\prime \prime }+\mathrm{cot}\theta z^{}+\ell (\ell +1)z=y^{\prime \prime }+\mathrm{cot}\theta y^{}+\ell (\ell +1)y,$$
(26)
which has the solution
$$z(\theta )=y(\theta )+q(\theta ),$$
where $`q(\theta )`$ is a solution of the homogeneous equation
$$q^{\prime \prime }+\mathrm{cot}\theta q^{}+\ell (\ell +1)q=0.$$
(27)
Therefore,
$$s(\theta )=\left[q-A_\mathrm{\Gamma }\frac{4-3\mathrm{\Gamma }}{2}f^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\right]^{-1/(1-\mathrm{\Gamma })},$$
(28)
and the integral constraint eq. (20) becomes
$$\int _0^{\pi /2}q(\theta )\mathrm{sin}\theta d\theta =\frac{(2-\mathrm{\Gamma })^2}{2(4-3\mathrm{\Gamma })}R_0.$$
(29)
The problem is thus reduced to the solution of the two homogeneous equations eq. (24) and eq. (27) for the functions $`f(\theta )`$ and $`q(\theta )`$ which are determined up to an arbitrary constant. However, the two integral constraints that would have determined these latter constants (eqs. 25, 29), contain the additional unknown parameter $`R_0`$. The system of equations is closed by the requirement that
$$\underset{H_0\to \mathrm{\infty }}{lim}\lambda _r=1.$$
Substituting eq. (17) and eq. (18) in eq. (12), one obtains
$$\underset{H_0\to \mathrm{\infty }}{lim}\lambda _r=2^{(1-\mathrm{\Gamma })/(2-\mathrm{\Gamma })}\left(\frac{2-\mathrm{\Gamma }}{4-3\mathrm{\Gamma }}\right)\frac{R_0}{2f(\pi /2)}=1,$$
i.e.,
$$R_0=2^{1/(2-\mathrm{\Gamma })}\left(\frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}\right)f(\pi /2),$$
(30)
which gives the remaining condition.
Eq. (24) and (27) can be solved numerically by starting the integration at $`\theta =0`$ with the series expansions: $`q(\theta )=q_0[1-\frac{1}{4}\ell (\ell +1)\theta ^2+\cdots ]`$, and $`f(\theta )=f_0\left\{\theta ^2-\frac{1}{8}[\ell (\ell +1)+\frac{2}{3}]\theta ^4+\cdots \right\}`$, where $`q_0`$ and $`f_0`$ are arbitrary constants.<sup>8</sup><sup>8</sup>8The two original BCs on the function $`R(\theta )`$ are of little use here: the one at $`\theta =0`$ reduces to the condition $`lim_{\theta \to 0}(1-\mathrm{\Gamma })^{-1}\mathrm{sin}\theta q^{}=0`$, trivially satisfied; the second BC, $`R^{}(\pi /2)=0`$, cannot be applied because of the $`\delta `$-function at $`\pi /2`$. The constants $`q_0,f_0`$ and $`R_0`$ are then determined by the constraints expressed by eqs. (20), (25), and (30).
Figure 5 shows the functions $`f(\theta )`$ and $`s(\theta )`$ obtained for $`\mathrm{\Gamma }=1/2`$ and increasing values of $`H_0`$ from 0.4 to 1.5 compared with the asymptotic expressions computed here. Already for $`H_0=1.5`$, the actual $`f(\theta )`$ and $`s(\theta )`$ are very close to the corresponding asymptotic functions eq. (17) and eq. (18). Table 3 shows the value of the angle $`\alpha `$ of the magnetic field with the plane of the disk, the flux function $`f`$ evaluated at $`\theta =\pi /2`$ (indicative of the magnetic field strength), and the surface density parameter $`R_0`$, as functions of $`\mathrm{\Gamma }`$. The angle $`\alpha `$ ranges from $`45^{\circ }`$ for the isothermal case $`\mathrm{\Gamma }=1`$ to $`90^{\circ }`$ in the logatropic case $`\mathrm{\Gamma }=0`$. Correspondingly, the magnetic flux in the disk and the surface density both diverge as $`\mathrm{\Gamma }\to 0`$ for any large but finite value of $`H_0`$.
## 5 THE QUASI-SPHERICAL LIMIT ($`H_0\ll 1`$)
For the isothermal case $`\mathrm{\Gamma }=1`$, Li & Shu (1996) have shown how the SIS is recovered for $`H_0\ll 1`$ from a family of toroids with zero density on the polar axis. For $`\mathrm{\Gamma }\ne 1`$, in the limit $`H_0\ll 1`$, the asymptotic expansions are given by:
$$R(\theta )\simeq \left[\frac{4-3\mathrm{\Gamma }}{(2-\mathrm{\Gamma })^2}\right]^{1/(2-\mathrm{\Gamma })}+p(\theta )H_0^{(4-3\mathrm{\Gamma })/(3-2\mathrm{\Gamma })}+\cdots $$
$$\varphi =g(\theta )H_0^{(4-3\mathrm{\Gamma })/2(3-2\mathrm{\Gamma })}+\cdots .$$
To the lowest order in $`H_0`$, eqs. (7) and (8) become:
$$\frac{1}{\mathrm{sin}\theta }\frac{d}{d\theta }\left\{\mathrm{sin}\theta \left[A_\mathrm{\Gamma }g^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}g^{}-\frac{(2-\mathrm{\Gamma })^2}{4-3\mathrm{\Gamma }}p^{}\right]\right\}=$$
$$2\left[(2-\mathrm{\Gamma })p-\left(\frac{4-3\mathrm{\Gamma }}{2-\mathrm{\Gamma }}\right)^2B_\mathrm{\Gamma }g^{2(1-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })}\right],$$
(31)
and
$$\frac{1}{\mathrm{sin}^2\theta }\left[g^{\prime \prime }-\mathrm{cot}\theta g^{}+\ell (\ell +1)g\right]=C_\mathrm{\Gamma }\left[\frac{4-3\mathrm{\Gamma }}{(2-\mathrm{\Gamma })^2}\right]^{1/(2-\mathrm{\Gamma })}g^{-(2-\mathrm{\Gamma })/(4-3\mathrm{\Gamma })},$$
(32)
The BC for the functions $`p(\theta )`$ and $`g(\theta )`$ are the same as those for $`R(\theta )`$ and $`\varphi (\theta )`$ in eq. (11).
Figure 6 shows the convergence of the solutions of the full set of equations (7) and (8) obtained for $`\mathrm{\Gamma }=1/2`$ and decreasing values of $`H_0`$ from 0.4 to 0.05, to the asymptotic solutions obtained by integrating the equations above. Notice that $`p(0)<0`$ and $`p(\pi /2)>0`$ showing that the sequence of equilibria with $`\mathrm{\Gamma }=1/2`$ originates from the corresponding unmagnetized spherical state (eq. 10) by reducing the density on the pole and enhancing it on the equator. The same behaviour is found for any value of $`\mathrm{\Gamma }`$ in the range $`0<\mathrm{\Gamma }<1`$. For $`\mathrm{\Gamma }=1`$, the function $`p(\theta )`$ diverges at $`\theta =0`$, indicating that this expansion is not appropriate in the isothermal case, as in the case $`H_0\gg 1`$. For the same reason, the expansion also fails for $`\mathrm{\Gamma }=0`$, since both $`g(\theta )`$ and $`p(\theta )`$ diverge on the equatorial plane.
These flattened configurations are supported by magnetic and gas pressure against self-gravity. The intensity of the magnetic field can become very high even though $`H_0`$ is small, because the latter parameter measures not the field strength but the deviations from a force free field (see eq. 6c).
## 6 THE LOGATROPIC LIMIT ($`\mathrm{\Gamma }\to 0`$).
We consider in this section the logatropic limit $`\mathrm{\Gamma }\to 0`$. As anticipated in § 2, for $`\mathrm{\Gamma }=0`$ eq. (7) and (8) admit the analytical solution $`R=1`$ and $`\mathrm{\Phi }\propto r^2\mathrm{sin}^2\theta `$ corresponding to a SLS threaded by a straight and uniform magnetic field. This solution represents the only possible scale-free isopedic configuration of equilibrium for a magnetized cloud with a logatropic EOS. To show this, we use the results of § 4 and § 5 for $`H_0\gg 1`$ and $`H_0\ll 1`$ to find the limit of the equilibrium configurations for $`\mathrm{\Gamma }\to 0`$ and fixed (small or large) values of $`H_0`$.
In the limit $`H_0\gg 1`$, $`\mathrm{\Gamma }\to 0`$, analytic solutions to equations (24) and (27) exist. The magnetic field tends to become uniform and straight, $`f(\theta )\to f(\pi /2)\mathrm{sin}^2\theta `$, but $`f(\pi /2)`$ diverges, as shown in Table 3, and therefore $`s(\theta =\pi /2)`$ also diverges (see eq. 28). Eq. (21) shows in this limit that the surface density $`\mathrm{\Sigma }`$ is independent of $`r`$; therefore, no pressure gradients can be exerted in the horizontal direction. The value $`\mathrm{\Sigma }=(K/2\pi G)^{1/2}R_0H_0^2`$ diverges as $`\mathrm{\Gamma }\to 0`$ for any value of $`H_0`$ because $`lim_{\mathrm{\Gamma }\to 0}R_0=\mathrm{\infty }`$ (see Table 3). The magnetic flux threading the disk, $`\mathrm{\Phi }=2^{3/2}R_0H_0^2r^2\mathrm{sin}^2\theta `$, becomes infinite in order to keep the mass-to-flux ratio $`\lambda _r`$ equal to 1. Therefore, the limiting configuration approaches a uniform disk with infinite surface density, threaded by an infinitely strong uniform and straight magnetic field.
If we now examine the case $`H_0\ll 1`$ in the limit $`\mathrm{\Gamma }\to 0`$, it is easy to show from eq. (32) that the magnetic field tends to become uniform, $`g(\theta )\to g(\pi /2)\mathrm{sin}^2\theta `$, but $`lim_{\mathrm{\Gamma }\to 0}g(\pi /2)=\mathrm{\infty }`$. Consequently, the density function $`p(\theta )`$ also diverges at $`\theta =\pi /2`$, and the configuration again approaches a thin disk threaded by a uniform, infinitely strong, magnetic field.
We conclude that scale-free logatropic clouds cannot exist as magnetostatic disks except in some limiting configuration. In the absence of such limits, the equilibria are spherical and can be magnetized only by straight and uniform field lines; i.e., the magnetic field is force-free and therefore given by $`H_0=0`$. The inside-out gravitational collapse of such a SLS would still proceed self-similarly as in the solution of McLaughlin & Pudritz (1997), but the frozen-in magnetic fields would yield a dependence with polar angle that eventually produces a pseudodisk (Galli & Shu 1993a,b; Allen & Shu 1998a).
## 7 SUMMARY AND DISCUSSION
We have solved the scale-free equations of magnetostatic equilibrium of isopedic self-gravitating polytropic clouds to find pivotal states that represent the initial state for the onset of dynamical collapse, as first proposed by Li & Shu (1996) for isothermal clouds. Compared to unmagnetized equilibria, the magnetized configurations are flattened because of magnetic support across field lines. The degree of this support is best represented by the ratio of the square of the sound to Alfvèn speeds $`\mu ^2`$, or the overdensity parameter $`D`$, since they are always monotonic functions of $`H_0`$ and $`\mathrm{\Gamma }`$.
Configurations with $`\mathrm{\Gamma }=1`$ become highly flattened as the parameter $`H_0`$ increases. When $`\mathrm{\Gamma }<1`$ (softer EOS) the equilibria get flattened even faster at the same values of $`H_0`$, since along field lines there is less support from a soft EOS than for a stiff one. However, it seems that in the logatropic limit flattened disks do not exist: the singular scale-free equilibria can only be spherical uniformly magnetized clouds. Figure 7 shows a schematic picture of the $`(H_0,\mathrm{\Gamma })`$ plane indicating the topology of the solutions for scale free magnetized isopedic singular self-gravitating clouds.
In self-gravitating clouds, the joint compression of matter and field is often expressed as producing an expected relationship: $`B\propto \rho ^\kappa `$, with different theorists expressing different preferences for the value of $`\kappa `$ (e.g., Mestel 1965, Fiedler & Mouschovias 1993). No local (i.e., point by point) relationship of the form $`B\propto \rho ^\kappa `$ holds for the scale-free equilibria studied in this paper. However, if we average the magnetic field strength and mass density over ever larger spherical volumes centered on $`r=0`$, we do recover such a relationship: $`⟨B⟩\propto ⟨\rho ⟩^\kappa `$, where angular brackets denote the result of such a spatial average and $`\kappa =\mathrm{\Gamma }/2`$.
We may think of the result $`⟨B⟩\propto ⟨\rho ⟩^{\mathrm{\Gamma }/2}`$ as arising physically from a combination of two tendencies. (a) Slow cloud contraction in the absence of magnetic fields and rotation tends to keep roughly one Jeans mass inside every radius $`r`$, which yields $`\rho \propto c_s^2/Gr^2`$, or $`\rho \propto r^{-2/(2-\mathrm{\Gamma })}`$ if $`c_s^2\propto \rho ^{\mathrm{\Gamma }-1}`$. (b) Slow cloud contraction in the absence of gas pressure tends to keep roughly one magnetic critical mass inside every radius $`r`$, which yields $`B/r\rho \propto \lambda _r^{-1}`$ = constant, or $`B\propto r\rho \propto r^{-\mathrm{\Gamma }/(2-\mathrm{\Gamma })}\propto \rho ^{\mathrm{\Gamma }/2}`$ if gas pressure (thermal or turbulent) plays a comparable role to magnetic fields in cloud support. Notice that our reasoning does not rely on arguments of cloud geometry, e.g., whether cloud cores flatten dramatically or not as they contract; nor does it depend sensitively on the precise reason for core contraction, e.g., because of ambipolar diffusion or turbulent decay.
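The claimed relation between spherical averages can be checked numerically from the two power laws just quoted; a short sketch (ours):

```python
import numpy as np

def averaged_slope(Gamma, n=4000):
    """Fit <B> vs <rho> for rho ~ r^(-2/(2-Gamma)), B ~ r^(-Gamma/(2-Gamma));
    the slope should come out close to kappa = Gamma/2."""
    r = np.linspace(1e-4, 1.0, n)
    rho = r ** (-2.0 / (2.0 - Gamma))
    B = r ** (-Gamma / (2.0 - Gamma))
    radii = np.logspace(-2, 0, 30)
    def sph_avg(f, Rmax):
        m = r <= Rmax
        return np.trapz(f[m] * r[m] ** 2, r[m]) / np.trapz(r[m] ** 2, r[m])
    aB = np.log([sph_avg(B, Rm) for Rm in radii])
    arho = np.log([sph_avg(rho, Rm) for Rm in radii])
    return np.polyfit(arho, aB, 1)[0]

# averaged_slope(0.5) -> ~0.25 = Gamma/2
```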
Crutcher (1998) claims that the observational data are consistent with $`\kappa =0.47\pm 0.05`$. If we take Crutcher’s conclusion at face value, we would interpret the observations as referring mostly to regions where the EOS is close to being isothermal $`\mathrm{\Gamma }\simeq 1`$, which is the approximation adopted by many theoretical studies that ignore the role of cloud turbulence. The result is not unexpected for low-mass cloud cores, but we would not naively have expected this relationship for high-mass cores and cloud envelopes, where the importance of turbulent motions is much greater. Unfortunately, the observational data refer to different clouds rather than to different (spatially averaged) regions of the same cloud, so there is some ambiguity how to make the proper connection to different theoretical predictions. There may also be other mechanisms at work, e.g., perhaps a tendency for observations to select for regions of nearly constant Alfvén speed, $`v_A\propto B/\rho ^{1/2}\simeq `$ constant (Bertoldi & McKee 1992). Thus, we would warn the reader against drawing premature conclusions about the effective EOS for molecular clouds, or the related degree to which observations can at present distinguish whether molecular clouds are magnetically supercritical or subcritical.
If molecular clouds are magnetically supercritical, with $`\lambda _r`$ greater than 1 by order unity (say, $`\lambda _r=2`$), then an appreciable fraction (say, 1/2) of their support against self-gravity has to come from turbulent or thermal pressure (Elmegreen 1978, McKee et al. 1993, Crutcher 1998). Modeled as scale-free equilibria, such clouds with $`\mu ^2`$ of order unity are not highly flattened (see Tables 1, 2 and Figs. 2, 3). Suppose we try gravitationally to extract a subunit from an unflattened massive molecular cloud, where the cloud as a whole is only somewhat supercritical, $`\lambda _r2`$. If the subunit’s linear size is smaller than the vertical dimension of the cloud by more than a factor of 2, which will be the case if we consider subunits of stellar mass scales, then this subunit will not itself be magnetically supercritical. Magnetically subcritical pieces of clouds cannot contract indefinitely without flux loss, so star formation in unflattened clouds, if they are not highly supercritical, needs to invoke some degree of ambipolar diffusion in order to produce small dense cores that can gravitationally separate from their surroundings.
On the other hand, if molecular clouds are magnetically critical or subcritical, with $`\lambda _r1`$, then almost all scale-free equilibria are highly flattened, with $`\mu ^2`$ appreciably less than unity. On a small scale, any subunit of this cloud, even subunits with vertical dimension comparable to the cloud as a whole, would also be magnetically critical or subcritical. For such a subunit to contract indefinitely, we would again need to invoke ambipolar diffusion to make a cloud core magnetically supercritical. Thus, although the decay of turbulence can accelerate the formation of cloud cores, the ultimate formation of stars from such cores may still need to rely on some magnetic flux loss (but perhaps not more than a factor of $`2`$) to trigger the evolution of the cores toward gravomagneto catastrophe and a pivotal state with a formally infinite central concentration.
On the large scale, if GMCs are modeled as flattened isopedic sheets, Shu & Li (1997) proved that magnetic pressure and tension are proportional to the gas pressure and force of self-gravity. Their theorems hold independently of the detailed forms of the EOS or the surface density distribution. If GMCs are truly highly flattened – with typical dimensions, say, of 50 pc $`\times `$ 50 pc $`\times `$ a few pc or even less – then many aspects of their magnetohydrodynamic stability and evolution become amenable to a simplified analysis through the judicious application and extension of the theorems proved by Shu & Li (1997) (e.g., see Allen & Shu 1998b, Shu et al. 1999). This exciting possibility deserves further exploration.
###### Acknowledgements.
D.G. acknowledges support by CNR grant 97.00018.CT02, ASI grant ARS-96-66 and ARS-98-116, and hospitality from UNAM, México. S.L. acknowledges support by J. S. Guggenheim Memorial Foundation, grant DGAPA-UNAM and CONACyT, and hospitality from Osservatorio di Arcetri. F.C.A. is supported by NASA grant No. NAG 5-2869 and by funds from the Physics Department at the University of Michigan. The work of F.H.S. is supported in part by an NSF grant and in part by a NASA theory grant awarded to the Center for Star Formation Studies, a consortium of the University of California at Berkeley, the University of California at Santa Cruz, and the NASA Ames Research Center.
# ON THE STRUCTURE AND NATURE OF DARK MATTER HALOS
## 1 Introduction
Cosmological models of hierarchical merging in a cold dark matter universe are in some difficulty. High-resolution N-body simulations (Navarro et al. 1996a; NFW) have shown that the density profiles $`\rho _{NFW}`$ of virialized dark matter halos should have a universal shape of the form
$$\rho _{NFW}(r)=\frac{3H_0^2}{8\pi G}\frac{\delta _c}{(r/r_s)(1+r/r_s)^2}$$
(1)
where $`r_s`$ is a characteristic length scale and $`\delta _c`$ a characteristic density enhancement. The two free parameters $`\delta _c`$ and $`r_s`$ can be determined from the halo concentration c and the virial mass $`M_{200}`$
$`\delta _c`$ $`=`$ $`{\displaystyle \frac{200}{3}}{\displaystyle \frac{c^3}{\mathrm{ln}(1+c)-c/(1+c)}}`$ (2)
$`r_s`$ $`=`$ $`{\displaystyle \frac{R_{200}}{c}}={\displaystyle \frac{1.63\times 10^{-2}}{c}}\left({\displaystyle \frac{M_{200}}{M_{}}}\right)^{1/3}h^{-2/3}kpc.`$ (3)
where $`R_{200}`$ is the virial radius, that is the radius inside which the average overdensity is 200 times the critical density of the universe and $`M_{200}`$ is the mass within $`R_{200}`$.
For any particular cosmology there also exists a good correlation between c and $`M_{200}`$ which results from the fact that dark halo densities reflect the density of the universe at the epoch of their formation (NFW, Salvador-Solé et al. 1998) and that halos of a given mass are preferentially assembled over a narrow range of redshifts. As lower mass halos form earlier, at times when the universe was significantly denser, they are more centrally concentrated. NFW have published concentrations for dark matter halos in the mass range of $`3\times 10^{11}M_{}\le M_{200}\le 3\times 10^{15}M_{}`$ which can be well fitted by the following power-law functions:
$`c`$ $`=`$ $`8.91\times 10^2\left({\displaystyle \frac{M_{200}}{M_{}}}\right)^{-0.14}\mathrm{𝑓𝑜𝑟}\mathrm{SCDM}`$ (4)
$`c`$ $`=`$ $`1.86\times 10^2\left({\displaystyle \frac{M_{200}}{M_{}}}\right)^{-0.10}\mathrm{𝑓𝑜𝑟}\mathrm{CDM}\mathrm{\Lambda }`$
where SCDM denotes a standard biased cold dark matter model with $`\mathrm{\Omega }_0`$=1, h=0.5, $`\sigma _8`$=0.65 and CDM$`\mathrm{\Lambda }`$ denotes a low-density universe with a flat geometry and a non-zero cosmological constant, defined by $`\mathrm{\Omega }_0`$=0.25, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ = 0.75, h=0.75, $`\sigma _8`$=1.3. Note that the universal profile (equation 1) and the scaling relations (equation 4) have only been determined from simulations for halo masses as small as $`M_{200}=10^{11}M_{}`$, but there is no reason to believe that these results would not be valid for halos which are, say, one order of magnitude lower in mass. In summary, dark matter halos represent a one-parameter family, with their density distribution being determined completely by their virial mass $`M_{200}`$.
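As an illustrative aside, this one-parameter family defined by equations (2)–(4) is simple to encode; a minimal sketch (the function names are ours):

```python
import numpy as np

def concentration(M200, cosmology="CDMLambda"):
    """Concentration-mass relations of equation (4); M200 in solar masses."""
    if cosmology == "SCDM":
        return 8.91e2 * M200**(-0.14)
    return 1.86e2 * M200**(-0.10)

def nfw_parameters(M200, h, cosmology="CDMLambda"):
    """delta_c and r_s (kpc) of equations (2) and (3) for a halo of mass M200."""
    c = concentration(M200, cosmology)
    delta_c = (200.0 / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
    R200 = 1.63e-2 * M200**(1.0 / 3.0) * h**(-2.0 / 3.0)   # virial radius in kpc
    return delta_c, R200 / c

# a 1e12 Msun halo in the CDM-Lambda cosmology of the text (h = 0.75)
print(nfw_parameters(1e12, h=0.75))
```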
The universal character of dark matter profiles and the validity of the NFW-profile for different cosmogonies has been verified by many N-body calculations (e.g. Navarro et al. 1997, Cole & Lacey 1997, Tormen et al. 1997, Tissera & Dominguez-Tenreiro 1998, Jing 1999). In one of the highest resolution simulations to date, Moore et al. (1998, see also Fukushige & Makino 1997) found good agreement with the NFW profile (equation 1) at and outside of $`r_s`$. Their simulations did however lead to a steeper innermost slope $`\rho \propto r^{-1.4}`$ which extends all the way down to their resolution limit of 0.01 $`r_s`$.
On the analytical side, early spherically symmetric collapse models by Gunn & Gott (1972) studied the collapse of a uniformly overdense region. Gott (1975) and Gunn (1977) investigated secondary infall onto already collapsed density perturbations and predicted $`r^{-9/4}`$ profiles. Fillmore & Goldreich (1984) found self-similarity solutions for the secondary infall models. Hoffman & Shaham (1985) took into account more realistic Gaussian initial conditions and predicted sharp central density peaks of the form $`\rho \propto r^{-2}`$. An updated version of these models by Krull (1999) abandoned self-similarity and explicitly took into account the hierarchical formation history. His models lead to excellent agreement with the NFW-profile in the radius range $`0.5r_sr10r_s`$.
In a different series of very high-resolution models, using a new adaptive refinement tree N-body code, Kravtsov et al. (1998, KKBP) found significant deviations from the NFW profile or an even steeper inner power-law density distribution for $`r\lesssim 0.5r_s`$. In this region their dark matter profiles show a substantial scatter around an average profile that is characterized by a shallow central cusp with $`\rho \propto r^{-0.3}`$. Although the scatter is large, this result is in clear contradiction to the simulations of Moore et al. (1998) with equally high central resolution. Figure 1 (adopted from Primack et al. 1998) compares the NFW-profile (dashed line) with the profiles of dark matter halos of KKBP (thin solid lines).
## 2 The dark matter halo of DDO 154
DDO 154 (Carignan & Freeman 1988) is one of the most gas-rich galaxies known with a total H I mass of $`2.5\times 10^8M_{}`$ and an inner stellar component of only $`5\times 10^7M_{}`$. Recently, Carignan & Purton (1998) measured the rotation curve of its extended H I disk all the way out to 21 optical disk scale lengths. As the rotation curve, even in the innermost regions, is almost completely dominated by dark matter, this galaxy provides an ideal laboratory for testing the universal density profile predictions of cosmological models.
Figure 2 shows the dark matter rotation curve of DDO 154 and compares it with the NFW profile. Note that the maximum rotational velocity $`v_{max}\approx `$ 47 km/s is reached at a radius of $`r_{max}\approx `$ 4.5 kpc, beyond which the rotation decreases again. Fitting the inner regions (dotted line) has been known to pose a problem (Flores & Primack 1994, Moore 1994, Burkert 1995). However an even larger problem exists in the outermost regions where far too much dark matter would be expected. The dashed line in figure 2 shows a fit to the outer regions. In this case, the dark matter excess in the inner regions is unacceptably large. We conclude that the well-studied dark matter rotation curve of DDO 154 is far from agreement with NFW profiles.
We can also compare the observed location $`r_{max}`$ and the value $`v_{max}`$ of the observed maximum rotational velocity with predictions of a SCDM model. Adopting a NFW profile, the virial radius is determined by $`R_{200}=0.5cr_{max}`$. The virial mass is then given by the relation
$$1.63\times 10^{-2}\left(\frac{M_{200}}{M_{}}\right)^{1/3}h^{-2/3}=\left(\frac{R_{200}}{\mathrm{kpc}}\right)=0.5c\left(\frac{r_{max}}{\mathrm{kpc}}\right).$$
(5)
Adopting the SCDM model with h=0.5 and inserting equation (4) for $`c`$ one obtains
$$M_{200}^{SCDM}=9\times 10^8\left(\frac{r_{max}}{\mathrm{kpc}}\right)^{2.11}M_{}$$
(6)
For DDO 154 with $`r_{max}`$ = 4.5 kpc we find $`M_{200}=2.1\times 10^{10}M_{}`$, $`R_{200}`$ = 72 kpc and $`c=31.9.`$ For these halo values, the predicted maximum rotational velocity would be
$$v_{max}=0.465\left(\frac{c}{ln(1+c)-c/(1+c)}\right)^{1/2}h\left(\frac{R_{200}}{\mathrm{kpc}}\right)=60\mathrm{km}/\mathrm{s}$$
(7)
which is a factor 1.3 larger than observed.
Adopting instead the CDM$`\mathrm{\Lambda }`$ model with h=0.75, a similar calculation leads to
$$M_{200}^{CDM\mathrm{\Lambda }}=3\times 10^8\left(\frac{r_{max}}{\mathrm{kpc}}\right)^{2.3}M_{}$$
(8)
and therefore to $`M_{200}=9.5\times 10^9M_{}`$, $`R_{200}`$=42 kpc, c=18.7 and $`v_{max}`$=44.4 km/s which is in excellent agreement with the observations (47 km/s), especially if one notes that equation 4 has been verified only for virial masses of $`M_{200}>10^{11}M_{}`$.
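The chain of estimates in this section is short enough to script end to end; the sketch below reproduces the numbers quoted above for DDO 154 in both cosmologies (function and variable names are ours):

```python
import numpy as np

def halo_from_rmax(r_max_kpc, cosmology):
    """Infer M200, R200, c and v_max from the radius of the rotation-curve
    peak, using R200 = 0.5 c r_max and the scalings of equations (4)-(8)."""
    if cosmology == "SCDM":
        h, M200 = 0.5, 9e8 * r_max_kpc**2.11
        c = 8.91e2 * M200**(-0.14)
    else:  # CDM-Lambda
        h, M200 = 0.75, 3e8 * r_max_kpc**2.3
        c = 1.86e2 * M200**(-0.10)
    R200 = 1.63e-2 * M200**(1.0 / 3.0) * h**(-2.0 / 3.0)  # kpc
    v_max = 0.465 * np.sqrt(c / (np.log(1 + c) - c / (1 + c))) * h * R200
    return M200, R200, c, v_max  # v_max in km/s, equation (7)

for cosmo in ("SCDM", "CDMLambda"):
    M200, R200, c, v = halo_from_rmax(4.5, cosmo)
    print(f"{cosmo}: M200={M200:.2g} Msun, R200={R200:.0f} kpc, "
          f"c={c:.1f}, v_max={v:.0f} km/s")
```

For $`r_{max}=4.5`$ kpc this returns the values quoted in the text: $`(2.1\times 10^{10},72,31.9,60)`$ for SCDM and $`(9.5\times 10^9,42,18.7,44)`$ for CDM$`\mathrm{\Lambda }`$.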
In summary, the radius and mass scale of DDO 154 as determined from the value and location of its maximum rotational velocity is in perfect agreement with the predictions of the currently most favoured cosmological models (CDM$`\mathrm{\Lambda }`$). The inferred dark matter density distribution is however quite different.
## 3 The universality of observed dark matter mass profiles.
DDO 154 is not a peculiar case. Burkert (1995) showed that the dark matter rotation curves of four dwarf galaxies studied by Moore (1994) have the same shape which can be well described by the density distribution
$$\rho _B(r)=\frac{\rho _b}{(1+r/r_b)[1+(r/r_b)^2]}.$$
(9)
KKBP extended this sample to ten dark matter-dominated dwarf irregular galaxies and seven dark matter-dominated low surface brightness galaxies. As shown in figure 3 all have the same shape, corresponding to a density distribution given by equation (9) and in contradiction to equation (1).
Equation 9 predicts a flat dark matter core, the origin of which is difficult to understand in the context of hierarchical merging (Syer & White 1998) as lower-mass dark halos in general have high densities and therefore spiral into the center of the more diffuse merger remnant, generating high-density cusps.
There is a fundamental difference in the kinematical properties of dark matter halos described by NFW profiles and Burkert profiles (equation 9). Assuming isotropy and spherical symmetry in the inner regions, the Jeans equation predicts a velocity dispersion profile $`\sigma (r)`$ of NFW halos that decreases towards the center as $`\sigma \propto r^{1/2}`$ (see also the simulations of Fukushige & Makino 1997) whereas Burkert halos have isothermal cores with constant velocity dispersion. Again, non-isothermal, kinematically cold dark matter cores might be expected in hierarchical merging scenarios as the denser clumps that sink towards the center of merger remnants have on average smaller virial dispersions.
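For fitting rotation curves, the mass enclosed by the profile of equation (9) integrates in closed form, $`M(r)=\pi \rho _br_b^3[2\mathrm{ln}(1+x)+\mathrm{ln}(1+x^2)-2\mathrm{arctan}(x)]`$ with $`x=r/r_b`$; a minimal sketch of the implied circular velocity (units and example parameters are ours):

```python
import numpy as np

G = 4.30e-6  # Newton's constant in kpc (km/s)^2 / Msun

def burkert_mass(r_kpc, rho_b, r_b):
    """Mass (Msun) enclosed within r for the profile of equation (9);
    rho_b in Msun/kpc^3, r and r_b in kpc."""
    x = r_kpc / r_b
    return np.pi * rho_b * r_b**3 * (
        2.0 * np.log(1.0 + x) + np.log(1.0 + x**2) - 2.0 * np.arctan(x))

def burkert_vcirc(r_kpc, rho_b, r_b):
    """Circular velocity sqrt(G M(<r) / r) in km/s."""
    return np.sqrt(G * burkert_mass(r_kpc, rho_b, r_b) / r_kpc)

# illustrative core parameters (placeholders, not a fit to any galaxy)
print(burkert_vcirc(np.linspace(0.5, 8.0, 4), rho_b=3e7, r_b=3.0))
```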
## 4 On the origin of dark matter cores
It is rather unsatisfying that numerical studies of dark matter halos using different techniques lead to different results. KKBP find dark matter halo profiles that, on average, can be well fitted by a Burkert profile (Fig. 1) and therefore also provide a good fit to observed rotation curves. The NFW-profiles, or even steeper inner density gradients (Moore et al. 1998) seem to be in clear conflict with observations (Fig. 2). Numerical resolution cannot be the answer as the N-body simulations of Moore et al. (1998) have enough resolution to determine the dark matter density distribution within 0.5 $`r_s`$, where any flattening should have been found. Instead, the authors find profiles that are even steeper than $`r^{-1}`$. One should note that the results of KKBP have not yet been reproduced by other groups, whereas dark profiles with central $`r^{-1}`$ profiles or even steeper cusps have been found independently in many studies using different numerical techniques. On the other hand, the high-resolution studies of KKBP sample galactic halos, whereas most of the other studies simulate halos of galactic clusters. Indeed, there exists observational evidence that the dark matter distribution in clusters of galaxies is well described by NFW profiles (Carlberg et al. 1997, McLaughlin 1999). One conclusion therefore could be that dark matter halos are not self-similar but that their core structure does depend on their virial mass. Because one is sampling different parts of the primordial CDM density fluctuation power spectrum, from which the initial conditions are derived, it is possible that the initial conditions could influence the final result. For example, low mass dark halos have initial fluctuations which result in virialized velocity dispersions that are nearly independent of mean density, and so a hierarchy of substructure would be expected to be nearly isothermal. For more massive halos typical of normal galaxies, the velocity dispersion varies as $`\rho ^{\frac{n-1}{3(n+3)}}`$ where $`n`$ is the effective power-law power spectrum index $`\delta \rho /\rho \propto M^{-\frac{n+3}{6}}`$ and $`n\approx -2`$ on galaxy scales but $`n\approx -3`$ for dwarfs. Low-mass dark halos could then indeed have isothermal, constant density cores whereas high-mass dark halos should contain non-isothermal, cold power-law cores. However, even in this case, one still has the problem that the simulations lead to a much greater dispersion of the inner radial profiles than expected from observed rotation curves. Additional effects might therefore be important that have not been taken into account in dissipationless cosmological simulations.
### 4.1 Secular dynamical processes
Cold dark matter cores with steep density cusps are very fragile and can easily be affected by mass infall and outflow. This has been shown, for example, by Tissera & Domínguez-Tenreiro (1998) who included gas infall in their cosmological models and found even steeper power-laws in the central regions than predicted by purely dissipationless merging due to the adiabatic contraction of the dark component. In order to generate flat cores through secular processes, Navarro et al. (1996b) proposed a scenario where after a slow growth of a dense gaseous disk the gas is suddenly ejected. The subsequent expansion and violent relaxation phase of the inner dark matter regions leads to a flattening of the core. This model has been improved by Gelato & Sommer-Larsen (1999) who applied it to DDO 154 and found that it is not easy to satisfactorily explain the observed rotation curve even for extreme mass loss rates. In fact, it is unlikely that DDO 154 lost any gas, given its large gas fraction. In addition, secular dynamical processes due to mass outflow would predict inner dark matter profiles that depend sensitively on the detailed physics of mass ejection and therefore should again show a wide range of density distributions, and these are not observed.
### 4.2 A second dark and probably baryonic component
The rotation curves of the galaxies shown in figure 3 clearly cannot be understood by including only the visible component. It may well be that some non-negligible and as yet undetected fraction of the total baryonic mass contributes to their dark component, in addition to the non-baryonic standard cold dark component that is considered in cosmological models.
In fact, there is ample room for such a dark baryonic component. Primordial nucleosynthesis requires a baryonic density of $`\mathrm{\Omega }_bh^2=0.015\pm 0.008`$ (Kurki-Suonio et al. 1997, Copi et al. 1995), whereas the observed baryonic density for stellar and gaseous disks lies in the range of $`\mathrm{\Omega }_d=0.004\pm 0.002`$ (Persic & Salucci 1992). Moreover modelling of Lyman alpha clouds at $`z=2`$–4 suggests that all of the baryons expected from primordial nucleosynthesis were present in diffuse form, to within the uncertainties, which may amount to perhaps a factor of 2 (Weinberg et al. 1997). Hence dark baryons are required at low redshift. These may be in the form of hot gas that must be mostly outside of systems such as the Local Group and rich galaxy clusters (Cen and Ostriker 1999). But an equally plausible possibility is that the dark baryons are responsible for a significant fraction of the mass in galaxy halos, as is motivated by arguments involving disk rotation curves and halo morphologies (cf Gerhard and Silk 1996; Pfenniger et al. 1994).
This second dark baryonic component could be diffuse $`H_2`$ within the disks or some spheroidal distribution of massive compact baryonic objects (MACHOs), comparable to those that have been detected via gravitational microlensing events towards the Large Magellanic Cloud (LMC). It is difficult to reconcile the inferred typical MACHO lens mass of $`0.5\pm 0.3`$ $`M_{}`$, as derived from the first 2.3 years of data for 8.5 million stars in the LMC (Alcock et al. 1996), with ordinary hydrogen-burning stars or old white dwarfs (Bahcall et al. 1994, Hu et al. 1994, Carr 1994, Charlot & Silk 1996). Brown dwarfs, substellar objects below the hydrogen-burning limit of 0.08 $`M_{}`$, would be ideal candidates. Indeed, halo models can be constructed, e.g. by assuming a declining outer rotation curve, for which the most likely MACHO mass is 0.1 $`M_{}`$ or less (Honma & Kan-ya 1999) with a MACHO contribution to the total dark mass of almost 100%. Freese et al. (1999) have however shown by deep star counts using HST that faint stars and massive brown dwarfs contribute no more than 1% of the expected total dark matter mass density of the Galaxy, ruling out such a low-mass population.
A simple explanation of the MACHO mass problem has been presented by Zaritsky & Lin (1997) and Zhao (1998), who argued that the MACHOs reside in a previously undetected tidal stream, located somewhere in front of the LMC. In this case the microlensing events would represent stellar objects in the outer regions of the LMC and would not be associated with a dominant dark matter component of the Milky Way. This solution is supported by the fact that all lensing events toward the LMC and SMC with known distances (e.g. the binary lensing event 98 LMC-9, or 98-SMC-1) appear to be a result of self-lensing within the Magellanic Clouds. However the statistics are abysmal: only two SMC events have been reported, and there are approximately 20 LMC events in all, of which two have known distances. Moreover one can only measure distances for binary events, and these are very likely due to star-star lensing. Finally, the SMC is known to be extended along the line of sight, thereby enhancing the probability of self-lensing.
Thus, while evidence for a possible dark baryonic component in the outer regions of galaxies is small, such a component could still be the solution to the dark matter core problem. Burkert & Silk (1997) showed that the observed rotation curve of DDO 154 could be reconciled with standard cosmological theories if, in addition to a standard dark matter halo with an NFW-profile (corrected for adiabatic contraction), a separate and centrally condensed dark baryonic component is introduced. The solid line in figure 2 shows the fit achieved with the 2-component dark model of Burkert & Silk, adopting dark matter halo parameters which are in agreement with CDM$`\mathrm{\Lambda }`$ models and using a spherically symmetric dark baryonic component with a physically reasonable density distribution that decreases monotonically with increasing radius. The required mass of the dark baryonic spheroid is $`1.5\times 10^9M_{}`$, which is 4-5 times the mass of the visible gaseous galactic disk and about 25% of the total mass of the non-baryonic dark component. The apparent universality of rotation curves would then suggest that the relative mix of the two dark components should in turn be universal. The origin of such a dark baryonic component which must have formed during an early dissipative condensation phase of baryons relative to the nondissipative, collisionless dark matter component is not understood. However this is not to say that it could not have occurred. Our understanding of early star formation and the primordial initial mass function is sufficiently primitive that this remains very much an open possibility.
# Transition to turbulence in a shear flow
## I Introduction
In many flows the transition to turbulence proceeds via a sequence of bifurcations to flows of ever increasing spatial and temporal complexity. Analytical and experimental efforts in particular on layers of fluid heated from below and fluids between rotating concentric cylinders have led to the identification and verification of several routes to turbulence, which typically involve a transition from a structureless laminar state to a stationary spatially modulated one and then to more complicated states in secondary and higher bifurcations.
Transitions in shear flows do not seem to follow this pattern . Typically, a transition to a turbulent state can be induced for sufficiently large Reynolds number with finite amplitude perturbations, just as in a subcritical bifurcation. However, in the most spectacular cases of plane Couette flow between parallel plates and Hagen-Poiseuille flow in a pipe , there is no linear instability of the laminar profile for any finite Reynolds number that could give rise to a subcritical bifurcation. The turbulent state seems to be high dimensional immediately, without clear temporal or spatial patterns (unlike the rolls in Rayleigh-Bénard flow). And the transition seems to depend sensitively on the initial conditions. Based on these characteristic features it has been argued that a novel kind of transition to turbulence different from the well-known three low-dimensional ones is at work .
Recent activity has focussed on three features of this transition: the non-normality of the linear eigenvalue problem , the occurrence of new stationary states without instability of the linear profile and the fractal properties of the lifetime landscape of perturbations as a function of amplitude and Reynolds number . The non-normality of the linear stability problem implies that even in the absence of exponentially growing eigenstates perturbations can first grow in amplitude before decaying since the eigenvectors are not orthogonal. During the decay other perturbations could be amplified, giving rise to a noise sustained turbulence . The amplification could also cause random fluctuations to grow to a size where the nonlinear terms can no longer be neglected . Then the dynamics including the nonlinear terms could belong to a new asymptotic state, different from the laminar profile, perhaps a turbulent attractor. Presumably, this attractor would be built around stationary or periodic solutions. Here, the observation of tertiary structures comes in since they could form the basis for the turbulent state. Finally, the observation of fractality in the lifetime distribution suggests that the turbulent state is not an attractor but rather a repeller: infinite lifetimes occur only along the stable manifolds of the repeller, all other initial conditions will eventually decay. Permanent turbulence would thus correspond to noise induced excitations onto a repeller.
In plane Couette flow some of the features described above have been identified, but only with extensive numerical effort . The aim of the present work is to present a simple model that is based on the Navier-Stokes equation and captures the essential elements of the transition. It is motivated in part by the desire to obtain a numerically more accessible model which perhaps will provide as much insight into the transition as the Lorenz model for the case of fluids heated from below (presumably at the price of similar shortcomings). The two and three degree of freedom models proposed by various groups (and reviewed in ) to study the effects of non-normality mimic some features of the Navier-Stokes equations considered essential by their inventors but they are not derived in some systematic way from the Navier-Stokes equation. The model used here differs from the one proposed by Waleffe in the selection of modes.
Attempts to build models for shear flows using Fourier modes immediately reveal an intrinsic difficulty: In the case of fluids heated from below the nonlinearity arises from the coupling of the temperature gradient to the flow field so that two wave vectors, $`𝐤`$ and $`2𝐤`$, suffice to obtain nonlinear couplings. In shear flows, the nonlinearity has to come from the coupling of the flow field with itself through the advection term $`(𝐮\cdot \nabla )𝐮`$. This imposes rather strong constraints on the wave vectors. At least three wave vectors satisfying the triangle relation $`𝐤_1+𝐤_2+𝐤_3=0`$ are required to collect a contribution from the advection term. A minimal model thus has at least six complex variables. Three of these decay monotonically to zero, leaving three for a nontrivial dynamics. In the subspaces investigated (B.E., unpublished), the most complex behaviour found is a perturbed pitchfork bifurcation, which may be seen as a precursor of the observed dynamics: for Reynolds numbers below a critical value, there is only one stable state. Above that value a pair of stable and unstable states is born in a saddle-node bifurcation. The stable state can be excited through perturbations of sufficient amplitude. The basins of attraction of the two stable states are intermingled, but the boundaries are smooth.
Thus more wave vectors are needed and they have to couple in a nontrivial manner to sustain permanent dynamics. The specific set of modes used is discussed in section II. It is motivated by boundary conditions for the laminar profile and the observation that wave vectors pointing to the vertices of hexagons satisfy the triangle conditions in a most symmetrical manner. Other than that the selected vectors are a matter of trial and error. In the end we arrive at a model with 19 real amplitudes, two force terms and 212 quadratic couplings. Without driving and damping the dynamics is energy conserving, as would be the corresponding Euler equation (suitably truncated). Moreover, the perturbation amplitudes can be put together to give complete flow fields. Thus the model has a somewhat larger number of degrees of freedom, but the dynamics should provide a realistic approximation to shear flows.
The outline of the paper is as follows. In section II we present the model, in particular the selected wave vectors, the equations of motion and a discussion of symmetries. In section III we focus on the dynamical properties of initial perturbations as a function of amplitude and Reynolds number. In section IV we discuss the stationary states, their bifurcations and their stability properties. We conclude in section V with a summary and a few final remarks.
## II The model shear flow
Ideal parallel shear flows have infinite lateral extension. Both in experiment and theory this cannot be realized. We therefore follow the numerical tradition and choose periodic boundary conditions in the flow and neutral direction. The flow is confined by parallel walls a distance $`d`$ apart. A convenient way to build a low dimensional model is to use a Galerkin approximation. Solid boundaries would require the vanishing of all velocity components and complicated Galerkin functions where all the couplings can only be calculated numerically. However, under the assumption that here as well as in many other situations the details of the boundary conditions affect the results only quantitatively but not qualitatively, we can adopt free-free boundary conditions on the walls and use simple trigonometric functions as a basis for the Galerkin expansion. Similarly, the nature of the driving (pressure, boundary conditions or volume force) should not be essential so that we take a volume force proportional to some basis function (or a linear combination thereof). This still leaves plenty of free parameters to be fixed below.
### A Galerkin approximation
We expand the velocity field in Fourier modes,
$$𝐮(𝐱,t)=\underset{𝐤}{\sum }𝐮(𝐤,t)e^{i𝐤\cdot 𝐱}.$$
(1)
Incompressibility demands
$$𝐮(𝐤,t)\cdot 𝐤=0.$$
(2)
The Navier-Stokes equation for the amplitudes $`𝐮(𝐤,t)`$ becomes
$`\partial _t𝐮(𝐤,t)`$ $`=`$ $`-ip_𝐤𝐤-i{\displaystyle \underset{𝐩+𝐪=𝐤}{\sum }}\left(𝐮(𝐩,t)\cdot 𝐪\right)𝐮(𝐪,t)`$ (3)
$``$ $`-\nu 𝐤^2𝐮(𝐤,t)+f_𝐤`$ (4)
where $`p_𝐤`$ are the Fourier components of the pressure (divided by the density), $`\nu `$ is the kinematic viscosity and $`f_𝐤`$ are the Fourier components of the volume force sustaining the laminar profile.
There are three constraints on the components $`𝐮(𝐤)`$: incompressibility (2), reality of the velocity field,
$$𝐮(-𝐤)=𝐮(𝐤)^{}$$
(5)
and the boundary conditions that the flow is limited by two parallel, impenetrable plates. The ensuing requirement $`u_z(x,y,z)=0`$ at $`z=0`$ and $`z=d`$ (where $`d`$ is the separation between plates) is most easily implemented through periodicity in $`z`$ and the mirror symmetry
$$\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x,y,-z)=\left(\begin{array}{c}u_x\\ u_y\\ -u_z\end{array}\right)(x,y,z),$$
(6)
which in Fourier space requires
$$\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(k_x,k_y,k_z)=\left(\begin{array}{c}u_x^{}\\ u_y^{}\\ -u_z^{}\end{array}\right)(-k_x,-k_y,k_z).$$
(7)
This is not sufficient to fix the coefficients: the dynamics also has to stay in the relevant subspace, and thus the time derivatives have to satisfy similar requirements.
### B The wave vectors
The choice of wave vectors is motivated by the geometry of the flow and the aim to include nonlinear couplings. The basic flow shall be a flow in $`y`$-direction, neutral in the $`x`$-direction and sheared in the $`z`$-direction. Thus we take the first three wave vectors in $`z`$-direction,
$$𝐤_1=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right),𝐤_2=\left(\begin{array}{c}0\\ 0\\ 2\end{array}\right),𝐤_3=\left(\begin{array}{c}0\\ 0\\ 3\end{array}\right).$$
(8)
The negative vectors $`-𝐤_i`$ also belong to the set but will not be numbered explicitly. In these units, the periodicity in the $`z`$-direction is $`2\pi `$, so that the separation between the plates is $`d=\pi `$ because of the mirror symmetry (6). The amplitude $`𝐮(𝐤_1)`$ will carry the laminar profile and $`𝐮(𝐤_3)`$ can be excited as a modification to the laminar profile. $`𝐤_2`$ is needed to provide couplings through the nonlinear term. These three vectors satisfy a triangle identity $`𝐤_1+𝐤_2-𝐤_3=0`$, but the nonlinear term vanishes since they are parallel.
The next set of wave vectors contains modulations in the flow and neutral direction,
$$𝐤_4=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),𝐤_5=\left(\begin{array}{c}1/2\\ \sqrt{3}/2\\ 0\end{array}\right),𝐤_6=\left(\begin{array}{c}1/2\\ -\sqrt{3}/2\\ 0\end{array}\right).$$
(9)
Together with $`-𝐤_i`$ they form a regular hexagon, so that they provide nontrivial couplings in the nonlinear term. The periodicity in flow direction is $`4\pi /\sqrt{3}`$, in the neutral direction it is $`4\pi `$.
Finally, this hexagon is lifted upwards with $`𝐤_1`$ and $`𝐤_2`$ to form the remaining 12 vectors,
$`𝐤_7`$ $`=`$ $`𝐤_1+𝐤_4,𝐤_8=𝐤_1+𝐤_5,𝐤_9=𝐤_1+𝐤_6`$ (10)
$`𝐤_{10}`$ $`=`$ $`𝐤_1-𝐤_4,𝐤_{11}=𝐤_1-𝐤_5,𝐤_{12}=𝐤_1-𝐤_6`$ (11)
$`𝐤_{13}`$ $`=`$ $`𝐤_2+𝐤_4,𝐤_{14}=𝐤_2+𝐤_5,𝐤_{15}=𝐤_2+𝐤_6`$ (12)
$`𝐤_{16}`$ $`=`$ $`𝐤_2-𝐤_4,𝐤_{17}=𝐤_2-𝐤_5,𝐤_{18}=𝐤_2-𝐤_6.`$ (13)
The full set $`𝐤_i`$, $`i=1\mathrm{}18`$ is shown in Fig. 1.
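The geometry of this mode set is easy to verify; a small sketch that assembles all 18 vectors as defined above and checks the hexagon and the triangle closure exploited by the advection term (the sign convention for $`𝐤_6`$ follows the definitions above):

```python
import numpy as np

s3 = np.sqrt(3.0) / 2.0
k = {1: np.array([0.0, 0.0, 1.0]),
     2: np.array([0.0, 0.0, 2.0]),
     3: np.array([0.0, 0.0, 3.0]),
     4: np.array([1.0, 0.0, 0.0]),
     5: np.array([0.5,  s3, 0.0]),
     6: np.array([0.5, -s3, 0.0])}

# lift the hexagon with k_1 and k_2 (indices 7..18 as in equations (10)-(13))
for base, off in ((1, 7), (2, 13)):
    for j in range(3):
        k[off + j]     = k[base] + k[4 + j]
        k[off + 3 + j] = k[base] - k[4 + j]

# k_4, k_5, k_6 and their negatives form a regular hexagon ...
assert all(abs(np.linalg.norm(k[i]) - 1.0) < 1e-12 for i in (4, 5, 6))
# ... and close a triangle, so the advection term couples them
assert np.allclose(k[5] + k[6] - k[4], 0.0)
print({i: tuple(v) for i, v in sorted(k.items())})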
The Fourier amplitudes $`𝐮(𝐤_i)`$ have to be orthogonal to $`𝐤_i`$ because of incompressibility (2). If they are expanded in basis vectors perpendicular to $`𝐤_i`$, the pressure drops out of the equations and need not be calculated. We therefore chose normalized basis vectors
$`𝐧(𝐤_i)`$ $`=`$ $`({\displaystyle \frac{-k_xk_z}{k_x^2+k_y^2}},{\displaystyle \frac{-k_yk_z}{k_x^2+k_y^2}},1)^T/\sqrt{1+k_z^2/(k_x^2+k_y^2)}`$ (14)
$`𝐦(𝐤_i)`$ $`=`$ $`(k_y,-k_x,0)^T/\sqrt{k_x^2+k_y^2}`$ (15)
so that $`𝐧`$, $`𝐦`$ and $`𝐤`$ form an orthogonal set of basis vectors. For the negative vectors $`-𝐤_i`$ we chose the basis vectors $`𝐧(-𝐤_i)=𝐧(𝐤_i)`$ and $`𝐦(-𝐤_i)=-𝐦(𝐤_i)`$. If the $`x`$ and $`y`$ components of $`𝐤`$ vanish, the above definitions are singular and replaced by
$$𝐧=(1,0,0)^T𝐦=(0,1,0)^T.$$
(16)
The amplitudes of the velocity field are now expanded as
$$𝐮(𝐤_i,t)=\alpha (𝐤_i,t)𝐧(𝐤_i)+\beta (𝐤_i,t)𝐦(𝐤_i).$$
(17)
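A quick consistency check of this construction — $`𝐧`$ and $`𝐦`$ orthonormal and both perpendicular to $`𝐤`$, so that the pressure term drops out of equation (4) — in a short sketch:

```python
import numpy as np

def basis(kvec):
    """Orthonormal pair (n, m) perpendicular to kvec, equations (14)-(16)."""
    kx, ky, kz = kvec
    kperp2 = kx**2 + ky**2
    if kperp2 == 0.0:                      # special case of equation (16)
        return np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    n = np.array([-kx * kz / kperp2, -ky * kz / kperp2, 1.0])
    n /= np.sqrt(1.0 + kz**2 / kperp2)
    m = np.array([ky, -kx, 0.0]) / np.sqrt(kperp2)
    return n, m

kvec = np.array([0.5, np.sqrt(3) / 2, 1.0])    # e.g. k_8
n, m = basis(kvec)
for pair in ((n, kvec), (m, kvec), (n, m)):
    assert abs(np.dot(*pair)) < 1e-12           # mutual orthogonality
assert abs(np.dot(n, n) - 1) < 1e-12 and abs(np.dot(m, m) - 1) < 1e-12
print(n, m)
```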
The impenetrable plates impose further constraints on the $`\alpha (𝐤_i)`$ and $`\beta (𝐤_i)`$. For $`i=1`$, $`2`$ and $`3`$ the wave vector has no components in the $`x`$\- and $`y`$-directions, so that $`\alpha `$ and $`\beta `$ have to be real. For $`i=4`$, $`5`$ and $`6`$ the velocity field cannot have any components in the $`z`$-direction, hence $`\alpha =0`$. The remaining wave vectors $`𝐤_i`$ and $`-𝐤_i`$ with $`i=7,\mathrm{},18`$, a total of 24, divide up into six groups of 4 vectors each,
$$𝐤=(k_x,k_y,k_z),𝐤^{}=(-k_x,-k_y,k_z),-𝐤\text{and}-𝐤^{}.$$
(18)
The groups are formed by the vectors and their negatives in the pairs with indices (7,10), (8,11), (9,12), (13,16), (14,17) and (15,18). The amplitudes of the vectors in the sets are related by
$`\alpha (𝐤)`$ $`=`$ $`\alpha (-𝐤)^{}=-\alpha (𝐤^{})^{}`$ (19)
$`\beta (𝐤)`$ $`=`$ $`-\beta (-𝐤)^{}=-\beta (𝐤^{})^{}.`$ (20)
Thus the full model has $`6+6+6\times 4=36`$ real amplitudes. Restricting the flow by a point symmetry around $`𝐱_0=(0,0,\pi /2)^T`$ eliminates the contributions from $`𝐤_2`$ and some other components, resulting in a 19-dimensional subspace with nontrivial dynamics and the following amplitudes:
$`\alpha (𝐤_1)`$ $`=`$ $`y_1,\beta (𝐤_1)=y_2`$ (21)
$`\alpha (𝐤_3)`$ $`=`$ $`y_3,\beta (𝐤_3)=y_4`$ (22)
$`\beta (𝐤_4)`$ $`=`$ $`iy_5,\beta (𝐤_5)=iy_6,\beta (𝐤_6)=iy_7`$ (23)
$`\alpha (𝐤_7)`$ $`=`$ $`y_8,\beta (𝐤_7)=y_9`$ (24)
$`\alpha (𝐤_8)`$ $`=`$ $`y_{10},\beta (𝐤_8)=y_{11}`$ (25)
$`\alpha (𝐤_9)`$ $`=`$ $`y_{12},\beta (𝐤_9)=y_{13}`$ (26)
$`\alpha (𝐤_{13})`$ $`=`$ $`iy_{14},\beta (𝐤_{13})=iy_{15}`$ (27)
$`\alpha (𝐤_{14})`$ $`=`$ $`iy_{16},\beta (𝐤_{14})=iy_{17}`$ (28)
$`\alpha (𝐤_{15})`$ $`=`$ $`iy_{18},\beta (𝐤_{15})=iy_{19};`$ (29)
components not listed vanish or are related to the given ones by the boundary conditions (20). A complete listing of the flow fields $`𝐮_i`$ associated with the coefficients $`y_i`$ such that $`𝐮=_iy_i𝐮_i`$ as well as of the equations of motion are available from the authors.
### C The equations of motion
In this 19-dimensional subspace $`y_1\mathrm{}y_{19}`$ the equations of motion are of the form
$$\dot{y}_i=\underset{j,k}{\sum }A_{ijk}y_jy_k-\nu K_iy_i+f_i.$$
(30)
Of the driving force all components but $`f_2`$ and $`f_4`$ vanish. Moreover, if the $`f`$’s are taken to be proportional to $`\nu `$, the resulting laminar profile has an amplitude independent of viscosity (and thus Reynolds number). These components give rise to a laminar profile that is a superposition of a $`\mathrm{cos}(z)`$ profile (from $`f_2`$) and a $`\mathrm{cos}(3z)`$ profile (from $`f_4`$). This allows us to approximate the first two terms of the Fourier expansion of a linear profile with velocity $`u_y=\pm 1`$ at the walls,
$$𝐮_0=\frac{8}{\pi ^2}(\mathrm{cos}z+\frac{1}{9}\mathrm{cos}3z)𝐞_y.$$
(31)
that can be obtained with a driving $`f_2=4\nu /\pi ^2`$ and $`f_4=4\nu /9\pi ^2`$ (see Fig. 2).
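The quality of the two-term approximation (31) to a linear shear profile can be inspected directly; a short sketch:

```python
import numpy as np

z = np.linspace(0.0, np.pi, 201)                              # one gap, d = pi
u0 = (8.0 / np.pi**2) * (np.cos(z) + np.cos(3.0 * z) / 9.0)   # equation (31)
linear = 1.0 - 2.0 * z / np.pi                 # u_y = +1 to -1 across the gap
print("max deviation:", np.abs(u0 - linear).max())   # ~0.1, at the walls
```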
The nonlinear interactions in the Navier-Stokes equation conserve the energy $`E=\frac{1}{2}\int 𝑑V𝐮^2`$. In the 19-dimensional subspace, the corresponding quadratic form is
$$E=V\left(\underset{i=1}{\overset{7}{\sum }}y_i^2+2\underset{i=8}{\overset{19}{\sum }}y_i^2\right).$$
(32)
The above equations conserve this form without driving and dissipation. With dissipation but still without driving, the time derivative is negative definite, indicating a monotonic decay of energy to zero.
Finally, we define the Reynolds number using the wall velocity of the linear profile, $`u_0=1`$, the half width of the gap, $`D=d/2=\pi /2`$ and the viscosity $`\nu `$,
$$Re=u_0D/\nu =\pi /(2\nu ).$$
(33)
The other geometric parameters are a period $`4\pi /\sqrt{3}`$ in flow direction and $`4\pi `$ perpendicular to it.
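Since the 212 nonvanishing couplings $`A_{ijk}`$ are not listed in this excerpt, any integration sketch has to treat them as input; a skeleton of equation (30) follows (our reading takes $`K_i=|𝐤_i|^2`$ from the viscous term of equation (4), and the coupling tensor below is a placeholder to be filled):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, A, K, f, nu):
    """Right-hand side of equation (30)."""
    return np.einsum('ijk,j,k->i', A, y, y) - nu * K * y + f

Re = 400.0
nu = np.pi / (2.0 * Re)              # from equation (33)
K = np.ones(19)                      # placeholder for |k_i|^2, mode by mode
f = np.zeros(19)
f[1] = 4.0 * nu / np.pi**2           # drives y_2 (0-based index 1)
f[3] = 4.0 * nu / (9.0 * np.pi**2)   # drives y_4 (0-based index 3)
A = np.zeros((19, 19, 19))           # the 212 quadratic couplings go here

y0 = 0.1 * np.random.randn(19)       # a random perturbation
sol = solve_ivp(rhs, (0.0, 100.0), y0, args=(A, K, f, nu), rtol=1e-8)
print(sol.y[:, -1])
```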
### D Symmetries
We achieved the impenetrability of the plates by requiring the mirror symmetry:
$$\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x,y,-z)=\left(\begin{array}{c}u_x\\ u_y\\ -u_z\end{array}\right)(x,y,z).$$
(34)
The reduction from 36 to 19 modes was achieved by restricting the dynamics to a subspace where the flow has the point symmetry around $`𝐱_0=(0,0,\pi /2)^T`$, a point in the middle of the shear layer,
$$\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x,y,z+\pi /2)=\left(\begin{array}{c}-u_x\\ -u_y\\ -u_z\end{array}\right)(-x,-y,-z+\pi /2).$$
(35)
In addition, there are further symmetries that can be used to reduce the phase space. There is a reflection on the $`y`$-$`z`$-plane,
$$T_1:\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x,y,z)\left(\begin{array}{c}-u_x\\ u_y\\ u_z\end{array}\right)(-x,y,z).$$
(36)
and two shifts by half a lattice spacing,
$`T_2:`$ $`\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x,y,z)\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x+2\pi ,y,z)`$ (37)
$`T_3:`$ $`\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x,y,z)\left(\begin{array}{c}u_x\\ u_y\\ u_z\end{array}\right)(x+\pi ,y+\pi /\sqrt{3},z).`$ (38)
When applied to the flow these transformations induce changes in the variables $`y_i`$ (typically exchanges or sign changes), but the equations of motion are invariant under these transformations. Thus, if a certain flow has this symmetry, it leads to constraints on the variables $`y_i`$, and if it does not have this symmetry immediately a new flowfield can be obtained by applying this symmetry transformation. We do not attempt to analyze the full symmetry structure here and confine our discussion to two illustrative examples which are relevant for the stationary states discussed below. Demanding invariance of the flow field to the reflection symmetry $`T_1`$ leads to the following constraints on the variables $`y_i`$:
$`y_1`$ $`=`$ $`y_3=y_5=y_8=y_{15}=0`$ (39)
$`y_6`$ $`=`$ $`y_7y_{10}=y_{12}`$ (40)
$`y_{11}`$ $`=`$ $`y_{13}y_{16}=y_{18}y_{17}=y_{19}.`$ (41)
The non vanishing components, $`y_2`$, $`y_4`$, $`y_6=y_7`$, $`y_9`$, $`y_{10}=y_{12}`$, $`y_{11}=y_{13}`$, $`y_{14}`$, $`y_{16}=y_{18}`$, $`y_{17}=y_{19}`$ thus define a 9 dimensional subspace.
For the combined symmetry $`T_1T_2`$ we find the constraints
$`y_1`$ $`=`$ $`y_3=y_5=y_8=y_{15}=0`$ (42)
$`y_6`$ $`=`$ $`y_7y_{10}=y_{12}`$ (43)
$`y_{11}`$ $`=`$ $`y_{13}y_{16}=y_{18}y_{17}=y_{19}`$ (44)
and again a 9 dimensional subspace with non vanishing components $`y_2`$, $`y_4`$, $`y_6=y_7`$, $`y_9`$, $`y_{10}=y_{12}`$, $`y_{11}=y_{13}`$, $`y_{14}`$, $`y_{16}=y_{18}`$, $`y_{17}=y_{19}`$. The dimensions of the invariant spaces vary from a minimum of 6 (for each a $`T_1T_3`$ and $`T_1T_2T_3`$ invariance) and a maximum of 10 (for $`T_2T_3`$-invariance).
As mentioned, one can classify flows according to their symmetries. The most asymmetric flows are eightfold degenerate as the application of the eight combinations of the symmetries give eight distinct flows. The laminar flow profile is invariant under all the linear transformations and is the only member of the class with highest symmetry. The other stationary states discussed below fall into equivalence classes with eight members or four members if they are invariant under $`T_1`$ or $`T_1T_2`$.
## III Dynamics of perturbations
A stability analysis shows that the laminar flow profile is linearly stable for all Reynolds numbers. The matrix of the linearization is non-normal with a block structure along the diagonal. To bring this structure out more clearly, it is best to order the equations in the sequence 1, 2, 3, 4, 5, 7, 15, 8, 9, 14, 13, 19, 12, 18, 6, 11, 17, 10, 16. The matrix of the linearization then is upper diagonal, with a clear block structure: there are 10 eigenvalues isolated on the diagonal, three $`2\times 2`$ blocks and one $`3\times 3`$ block as well as several couplings between them in the upper right corner. While some eigenvalues can be complex, all of them have negative real part as shown in Fig. 3. For vanishing viscosity, the eigenvalues become zero or purely imaginary.
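The consequence of this non-normality — transient amplification despite strictly negative real parts — is easily demonstrated on a toy two-mode example; a sketch (the matrix is illustrative, not the model’s linearization):

```python
import numpy as np
from scipy.linalg import expm

# stable but non-normal: eigenvalues -0.01 and -0.02, eigenvectors nearly parallel
J = np.array([[-0.01, 1.00],
              [ 0.00, -0.02]])
print("eigenvalues:", np.linalg.eigvals(J))

t = np.linspace(0.0, 300.0, 600)
amplification = [np.linalg.norm(expm(J * ti), 2) for ti in t]
print("max transient amplification:", max(amplification))  # ~25 despite stability
```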
Large amplitude perturbations, however, need not decay. Already in the linear regime the non-orthogonality of the eigenvectors can give rise to intermediate amplifications into a regime where the nonlinear terms become important . In a related study on plane Couette flow we used the lifetime of perturbations to get information on the dynamics in a high-dimensional phase space. As in that case, the amplitude of the velocity field in the $`z`$-direction indicates the survival strength of a perturbation. Linearizing the equations of motion around the base flow $`𝐮_0`$ gives for the perturbation $`𝐮^{}`$ the equation
$$\partial _t𝐮^{}=-(𝐮_0\cdot \nabla )𝐮^{}-(𝐮^{}\cdot \nabla )𝐮_0-\nabla p^{}+\nu \mathrm{\Delta }𝐮^{}.$$
(45)
The second term on the right hand side describes the energy source for the perturbation, and depends, because of $`𝐮_0=u_0(z)𝐞_y`$ and thus
$$(𝐮^{}\cdot \nabla )𝐮_0=u_z^{}\partial _zu_0(z)𝐞_y$$
(46)
in an essential way on the $`z`$-components of the perturbation. Thus, if the amplitudes $`y_8`$, $`y_{10}`$, $`y_{12}`$, $`y_{14}`$, $`y_{16}`$ and $`y_{18}`$ become too small, the decay of the perturbation cannot be stopped any more. These modes account also for most of the off-diagonal block-couplings. A model for sustainable shear flow turbulence has to include some of these modes.
We chose a fixed initial flow field with a random selection of amplitudes $`y_1,\mathrm{},y_{19}`$, scaled it by an amplitude parameter $`A`$ and measured the lifetime as a function of $`A`$ and Reynolds number Re. Fig. 4 shows the time evolution of such a perturbation at $`Re=400`$ with one mode driven and for different amplitudes. For small $`A`$ there is an essentially exponential decay, whereas for larger amplitudes the perturbation swings up to large amplitude and shows no sign of a decline at all. The results for many amplitudes and Reynolds numbers are collected in Fig. 5 in a landscape plot. For small Reynolds number and/or small amplitude the lifetimes of perturbations are short, indicated by the light areas. For Reynolds numbers around 100 isolated black spots appear, indicating the occurrence of lifetimes larger than the integration time (which increases with $`Re`$ so that $`t_{max}/Re=4\pi `$). The spottiness for $`Re`$ between about 100 and 1000 is due to rapid changes in lifetimes from pixel to pixel. For Re above 1000 the long lifetimes dominate. These results are in good agreement with what has been observed in plane Couette flow. Fig. 5b shows a similar plot for the case with two modes driven; it is qualitatively similar, but quantitatively shifted to higher Reynolds numbers.
In connection with the non-normality of the linearized eigenvalue problem it has been argued that the upper limit on the size of perturbations for which the non-linear terms in the dynamics can be neglected decreases algebraically like $`Re^{-\alpha }`$. Different exponents have been proposed, ranging from 1 to 3 . It seems that for large $`Re`$ (where the model is actually less reliable because of the limited spatial resolution) the envelope of the long lived states in the fractal life time plot decays like $`Re^{-1}`$.
The sensitive dependence of lifetimes on initial conditions and parameters is further highlighted in Fig. 6 and 7. The first shows the lifetime in the plane of the amplitudes $`y_{16}`$ and $`y_{17}`$ at Reynolds number $`Re=400`$ with all other components fixed. There is considerable structure on many scales. One notes ‘valleys’ of short lifetimes between ‘plateaus’ of longer lifetimes and a granular structure within both. The striations are reminiscent of features seen near fractal basin boundaries . Fig. 7 shows successive magnifications of lifetime versus amplitude plots at Re=200. Even after a magnification by $`10^7`$ there is no indication of a continuous and smooth variation of lifetime with amplitude.
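The lifetime landscapes of Figs. 5–7 amount to a scan of a single diagnostic; a skeleton of that measurement, assuming a right-hand side like the integration sketch above (the termination criterion follows the discussion of equation (46)):

```python
import numpy as np
from scipy.integrate import solve_ivp

Z_MODES = np.array([7, 9, 11, 13, 15, 17])  # 0-based: y_8, y_10, ..., y_18

def lifetime(rhs, y0, args, t_max, thresh=1e-8):
    """Integrate until the energy in the wall-normal modes -- the ones that
    feed the perturbation via equation (46) -- drops below thresh."""
    def decayed(t, y, *a):
        return float(np.sum(y[Z_MODES]**2) - thresh)
    decayed.terminal, decayed.direction = True, -1
    sol = solve_ivp(rhs, (0.0, t_max), y0, args=args, events=decayed, rtol=1e-8)
    return sol.t_events[0][0] if sol.t_events[0].size else t_max

# a lifetime-versus-amplitude cut as in Fig. 7 would then be
#   [lifetime(rhs, y_lam + a * y_pert, args, 4 * np.pi * Re)
#    for a in np.linspace(0.0, 0.5, 400)]
```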
## IV Stationary states
Motivated by the observation of new stationary structures in plane Couette flow for sufficiently high Reynolds number we searched for non-trivial stationary solutions and studied their generation, evolution and symmetries.
We computed the stationary states with the help of a Monte Carlo algorithm. The initial conditions for the $`y_i`$’s were chosen randomly out of the interval $`[-1/2,1/2]`$ and the Reynolds number was chosen randomly with an exponential bias for small $`Re`$ in the interval $`[10,10000]`$. With these initial conditions we entered a Newton algorithm. If the Newton algorithm converged, we followed the fixed point in Reynolds number as far as possible. We included about 200000 attempts in the Monte Carlo search.
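A skeleton of this search, with scipy’s fsolve standing in for the Newton iteration (an implementation of the right-hand side of equation (30) is assumed):

```python
import numpy as np
from scipy.optimize import fsolve

def find_stationary_states(rhs_of_y, n_trials=200000, seed=0):
    """Monte Carlo search: random initial amplitudes in [-1/2, 1/2], a
    log-biased random Reynolds number in [10, 10000], then Newton's method.
    rhs_of_y(y, Re) must return the 19 time derivatives of equation (30)."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_trials):
        y0 = rng.uniform(-0.5, 0.5, 19)
        Re = 10.0 * 1000.0 ** rng.random()      # biased towards small Re
        y_star, _, ok, _ = fsolve(rhs_of_y, y0, args=(Re,), full_output=True)
        if ok == 1 and np.linalg.norm(rhs_of_y(y_star, Re)) < 1e-10:
            found.append((Re, y_star))          # then continue it in Re
    return found
```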
The stationary states found for a single driven mode are collected in Fig. 8. No stationary states (besides the laminar profile) were found for Reynolds numbers below about 190. Between 190 and about 500 there are eight stationary states which divide into two groups of four symmetry related states each. With increasing Reynolds number more and more stationary states are found and they reach down to smaller and smaller amplitude. The envelope of all states reflects the $`Re^{-1}`$ behaviour found for the borderline where nonlinearity becomes important. For two driven modes (Fig. 9) the situation is similar.
The appearance of the branches of the stationary states and in particular their coalescence near $`Re=190`$ suggests that the states are born out of a saddle-node bifurcation. And indeed, the eigenvalues as a function of $`Re`$ show two eigenvalues moving closer together and collapsing at zero for $`Re=190.41`$ (Fig. 10). However, these eigenvalues are not the leading ones, so that one set of states has three unstable eigenvalues, the other two unstable ones. It is thus a ‘saddle-node’ bifurcation into unstable states.
With increasing $`Re`$ more and more stationary states appear, partly through secondary bifurcations, partly through additional saddle-node bifurcations. Their number increases rapidly with Reynolds number (Fig. 11) and this increase goes in parallel with the increase in density of long lived states, Fig. 5. The detailed structure of the bifurcation diagram is rather complex and has not yet been fully explored. We note here that the various stationary states may be grouped according to their symmetries introduced in section II D and that we found only stationary states which belong to equivalence classes with four or eight members. The stationary states of the classes with four members are invariant under the transformation $`T_1`$ or $`T_1T_2`$. In addition, there are forward directed bifurcations generating two new branches with the same symmetry properties (eight or four member class) and inverse bifurcations of two branches belonging to eight member equivalence classes. We also found a backward directed bifurcation generating branches of an eight member class, which is born out of a four member class branch. The scenarios described above are marked in the bifurcation diagram Fig. 8.
## V Concluding remarks
The few degrees of freedom shear model introduced here lies halfway between the simplest models of non-normality and full simulations. Its dynamics has turned out to be surprisingly rich. There are a multitude of bifurcations introducing new stationary states besides the laminar profile, there are secondary bifurcations, and the distribution of life times shows fractal structures on amazingly small scales. It seems that as one goes from the low-dimensional models via the present one to full simulations one notes not only an increase in numerical complexity but also the appearance of qualitatively new features .
The simplest models with very few degrees of freedom focus on the non-normality of the linearized Navier-Stokes problem and emphasize the amplification of small perturbations. If the non-linearity is included a transition to another kind of dynamics, sometimes as simple as relaxation to a stationary point, is found .
Next in complexity are models like the one presented here that share with the few degree of freedom models the amplification and the transition but the additional degrees of freedom allow for chaos. When nonlinearities become important the dynamics does not settle to a fixed point or a limit cycle but continues irregularly for an essentially unpredictable time. The time is unpredictable because of the fractal life time distribution which seems to persist down to amazingly small scales: tiny variations in Reynolds number or amplitudes of the perturbation can cause major variations in life times. This fractal behaviour is the new quality introduced by the additional degrees of freedom. Indications for this behaviour are seen in the experiments by Mullin on pipe flow . It is interesting to ask just how few degrees of freedom are necessary to obtain this behaviour. Reducing our model to the $`T_1`$ subspace gives one with just 9 degrees of freedom (comparable in number and flow behaviour to the ones of Waleffe ) that still shows these fractal life time distributions. Further reduction, as in the four mode model of , seems to eliminate them.
The full, spatially extended shear flows share essential features with the model but add new problems. Spatially resolved simulations of the present model as well as plane Couette flow with rigid-rigid boundary conditions show the occurrence of additional stationary states at sufficiently high Reynolds number that are unstable. A novel and as yet unexplained feature in spatially extended plane Couette flow, which we believe to be connected to the high dimensionality of phase space, is the difference between Reynolds numbers where the first stationary states are born (about 125 in units of half the gap width and half the velocity difference) and the ones where experiments begin to see long lived states (about 300–350) .
The fractal life time distributions have obvious similarities to chaotic scattering . Drawing on this analogy one would like to identify permanent structures in phase space away from the laminar profile that could sustain turbulence. This has partly been achieved by the search for stationary states. Many have been found but irritatingly only for Reynolds numbers above about 190 while long lived states seem to appear much earlier. The solution to this puzzle must be periodic states and indeed we have found a few periodic states in a symmetry reduced model at lower Reynolds numbers, close to the occurrence of the first long lived states. This suggests that the dynamical system picture that long lived states have to be connected to persistent structures in phase space is tenable.
There are several features of the model that can be studied further. In particular, quantitative characterizations of the fractal life time distribution, visualizations of the flow field, a detailed analysis of the primary and secondary bifurcation, an investigation of the dependence on the aspect ratio of the periodicity cell are required and look promising. We expect the lessons to be learned from this simple model to be useful in understanding the dynamics of full plane Couette and other shear flows. Work along these directions continues.
# Weighing neutrinos: weak lensing approach
## 1 Introduction
It is now well known that the statistical analysis of weak lensing effects on background galaxies due to foreground large scale structure can be used as a probe of cosmological parameters, such as the matter density and the cosmological constant, and the projected mass distribution of the Universe (e.g., Bernardeau et al. 1997; Blandford et al. 1991; Jain & Seljak 1997; Kaiser 1998; Miralda-Escudé 1991; Schneider et al. 1998). Given that weak gravitational lensing results from the projected mass distribution, the statistical properties of weak lensing, such as the two point function of the shear distribution, reflect certain aspects associated with the projected matter distribution. With growing interest in weak gravitational lensing surveys, several studies have now explored the accuracy to which conventional cosmological parameters, such as the mass density of the Universe and the cosmological constant can be determined (e.g., Bartelmann & Schneider 1999; van Waerbeke et al. 1999).
Beyond primary cosmological parameters, such as the mass density, the projected matter distribution is also affected by the presence of neutrinos with non-zero masses. For example, when non-zero mass neutrinos are present, a strong suppression of power in the mass distribution occurs at scales below the time-dependent free streaming scale (see, e.g., Hu & Eisenstein 1998). The detection of such suppression, say in the density power spectrum, allows a direct measurement of the neutrino mass, in contrast to various particle physics based neutrino experiments which only allow measurements of mass differences, or splittings, between different neutrino species (e.g., Super Kamiokande experiment; Fukuda et al. 1998). The direct astrophysical probes of neutrino masses include time-of-flight from a core-collapse supernova (e.g., Totani 1998; Beacom & Vogel 1998; see Beacom 1999 for a review) and the large scale structure power spectrum. The suppression of power at small scales due to neutrinos can easily be investigated with the galaxy power spectrum from wide-field redshift surveys, such as the Sloan Digital Sky Survey (SDSS, http://www.sdss.org/; e.g., Hu et al. 1997), however, such measurements are subject to unknown biases between galaxy and matter distribution and its evolution with redshift. Therefore, an understanding of bias and its evolution may first be necessary before making a reliable measurement of neutrino mass using galaxy power spectra. By contrast, a measurement of the power spectrum unaffected by such effects offers a strong possibility of measuring the neutrino mass. Such a possibility should soon be available with weak gravitational lensing surveys through the measured weak lensing power spectrum directly, which probes the matter power spectrum through a convolution of the redshift distribution of sources and distances. Thus, it is expected that the weak lensing power spectrum can also allow a determination of the suppression due to neutrinos, and hence a direct measurement of the neutrino mass.
In addition to neutrino mass, weak lensing also allows determination of several cosmological parameters, including matter density ($`\mathrm{\Omega }_m`$) and the cosmological constant ($`\mathrm{\Omega }_\mathrm{\Lambda }`$). However, there is a large number of cosmological probes that essentially measure these parameters. For example, luminosity distance measurements to Type Ia supernovae at high redshifts and gravitational lensing statistics allow the determination of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (e.g., Cooray et al. 1998; Perlmutter et al. 1998; Riess et al. 1998), or rather $`\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda }`$, while the location of the first Doppler peak in the cosmic microwave background (CMB) power spectrum allows a determination of these two quantities in the orthogonal direction ($`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }`$; White 1998). When combined (e.g., Lineweaver 1998), these two parameters can be known to a high accuracy reaching a level of $`\sim `$ 5%, when the expected measurements on SNe and CMB experiments over the next decade, such as MAP, are considered (e.g., Tegmark et al. 1998). Since these measurements are not sensitive to neutrinos, it is necessary to return to probes which are strongly sensitive to neutrinos to obtain important cosmological information on their presence. This is the primary motivation of this paper: weak lensing is highly suitable for a neutrino mass measurement when compared to various other probes of cosmology, including CMB. The direct measurement of the galaxy power spectrum also allows a measurement of neutrino mass, as has been discussed in Hu et al. (1997); however, as discussed above, such a measurement can be contaminated by bias and its evolution.
The other motivation for this paper comes from the cosmological importance of neutrinos (see Ma 1999 for a recent review on this subject). Current upper bounds on neutrino masses range from 23 eV, based on SN 1987A neutrino arrival time delays, to 4.4 eV, using recent oscillation experiments and assuming that 3 degenerate neutrino species are present (Vernon Barger, priv. communication; Fogli et al. 1997). Recently, Croft et al. (1999) determined that $`m_\nu <5.5`$ eV at the 95% level using the Ly$`\alpha `$ forest for all $`\mathrm{\Omega }_m`$ values, and $`m_\nu <2.4(\mathrm{\Omega }_m/0.171)`$ eV for $`0.2\lesssim \mathrm{\Omega }_m\lesssim 0.5`$ (95% confidence). A rather definite upper limit on the neutrino mass is 94 eV, which is the mass required to produce a cosmological mass density of 1, while according to Ma (1999) a rather conservative cosmological limit on the neutrino mass at present is $`\sim `$ 5 eV. Apart from these limits, however, several studies still suggest the possibility that the neutrino mass can be as high as 15 eV (e.g., Shi & Fuller 1999); it is therefore safe to say that the neutrino mass, or the limit on it, is not strongly constrained. The neutrino mass is one of the important cosmological parameters, and it is thus necessary that suitable probes which allow this measurement, beyond the mass splitting measurements allowed by particle physics experiments, be studied.
In this paper, we explore the possibility of a neutrino mass measurement with weak lensing surveys and suggest that weak lensing can be used as a strong probe of the neutrino mass, provided that one has adequate knowledge of the uncertainties of the basic cosmological parameters from other techniques. In a recent paper, Hu & Tegmark (1999) explored the full parameter space of wide-field weak lensing surveys combined with future cosmic microwave background (CMB) satellites. A recent review of weak lensing can be found in Mellier (1998). In Sect.2, we discuss the effect of neutrinos on the weak lensing convergence power spectrum and calculate the accuracy to which the neutrino mass can be determined. We follow the conventions that the Hubble constant, $`H_0`$, is 100 $`h`$ kms<sup>-1</sup>Mpc<sup>-1</sup> and that $`\mathrm{\Omega }_i`$ is the fraction of the critical density contributed by the $`i`$th energy component: $`b`$ baryons, $`\nu `$ neutrinos, $`m`$ all matter species (including baryons and neutrinos) and $`\mathrm{\Lambda }`$ the cosmological constant.
## 2 Weak Lensing Power Spectrum
### 2.1 Effective Convergence Power Spectrum
Following Kaiser (1998) and Jain & Seljak (1997), we can write the power spectrum of convergence due to weak gravitational lensing as:
$$P_\kappa (l)=l^4\int d\chi \frac{g^2(\chi )}{r^6(\chi )}P_\mathrm{\Phi }\left(\frac{l}{r(\chi )},\chi \right),$$
(1)
where $`\chi `$ is the radial comoving distance related to redshift $`z`$ through:
$$\chi (z)=\frac{c}{H_0}\int _0^zdz^{}\left[\mathrm{\Omega }_m(1+z^{})^3+\mathrm{\Omega }_k(1+z^{})^2+\mathrm{\Omega }_\mathrm{\Lambda }\right]^{-1/2},$$
(2)
and $`r(\chi )`$ is the comoving angular diameter distance, written as $`r(\chi )=(-K)^{-1/2}\mathrm{sin}(\sqrt{-K}\chi ),\chi ,K^{-1/2}\mathrm{sinh}(\sqrt{K}\chi )`$ for closed, flat and open models respectively, with $`K=(1\mathrm{\Omega }_{\mathrm{tot}})H_0^2/c^2`$. In Eq.1, $`P_\mathrm{\Phi }(k=\frac{l}{r(\chi )},\chi )`$ is the time-dependent three dimensional power spectrum of the Newtonian potential, which is related to the density power spectrum, $`P_\delta (k)`$, through the Poisson equation (e.g., Eq.2.6 of Schneider et al. 1998), and $`g(\chi )`$ weights the background source distribution by the lensing probability:
$$g(\chi )=r(\chi )\int _\chi ^{\chi _H}\frac{r(\chi ^{}-\chi )}{r(\chi ^{})}W_\chi (\chi ^{})d\chi ^{}.$$
(3)
Here, $`\chi _H`$ is the comoving distance to the horizon.
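As an illustration of Eq. 2, the comoving distance can be evaluated with a few lines of numerical quadrature. The following is a minimal sketch, assuming a flat model with illustrative parameter values that are not taken from the surveys discussed below:

```python
# A minimal numerical sketch of Eq. 2 for a flat model (Omega_k = 0);
# parameter values are illustrative only.
import numpy as np
from scipy.integrate import quad

C_KMS = 2.998e5  # speed of light in km/s

def comoving_distance(z, omega_m=0.3, omega_lambda=0.7, h=0.65):
    """Radial comoving distance chi(z) in Mpc."""
    H0 = 100.0 * h  # km/s/Mpc
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_lambda)
    integral, _ = quad(integrand, 0.0, z)
    return (C_KMS / H0) * integral

# In a flat model r(chi) = chi, so this also gives the distances entering
# the lensing weight g(chi) of Eq. 3.
print(comoving_distance(1.0))  # ~3550 Mpc for these parameters
```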
Following Kaiser (1998) and Hu & Tegmark (1999), we can write the expected uncertainties in the weak lensing convergence power spectrum as:
$$\sigma (P_\kappa )(l)=\sqrt{\frac{2}{(2l+1)f_{\mathrm{sky}}}}\left(P_\kappa (l)+\frac{\gamma ^2}{n_{\mathrm{mag}}}\right),$$
(4)
where $`f_{\mathrm{sky}}`$ is the fraction of the sky covered by a survey, $`\sqrt{\gamma ^2}\approx 0.4`$ is the rms intrinsic ellipticity of the background galaxies and $`n_{\mathrm{mag}}`$ is the surface density of galaxies down to the magnitude limit of the survey. Thus, Eq. 4 accounts for three sources of noise in the weak lensing power spectrum: cosmic variance, shot noise in the ellipticity measurements, and the finite number of galaxies available from which the weak lensing properties are derived (see, e.g., Schneider et al. 1998 and Kaiser 1998 for further details). We take $`n_{\mathrm{mag}}`$ to be $`6.5\times 10^8`$ sr<sup>-1</sup> down to an R magnitude of 25 and $`4\times 10^9`$ sr<sup>-1</sup> down to R of 27, values determined from galaxy number counts of deep surveys such as the Hubble Deep Field.
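The noise estimate of Eq. 4 is simple enough to state in code; the sketch below assumes the survey numbers quoted above and treats the convergence power spectrum itself as an external input:

```python
# A rough sketch of Eq. 4; gamma_rms and n_mag follow the values quoted in
# the text, while P_kappa must come from the full calculation.
import numpy as np

def sigma_P_kappa(l, P_kappa, f_sky, gamma_rms=0.4, n_mag=6.5e8):
    """1-sigma uncertainty on the convergence power spectrum at multipole l."""
    return np.sqrt(2.0 / ((2 * l + 1) * f_sky)) * (P_kappa + gamma_rms**2 / n_mag)

# Example: a 625 deg^2 survey (f_sky = 625/41253, with 41253 deg^2 on the
# full sky) at l = 1000, for an assumed P_kappa value
print(sigma_P_kappa(1000, P_kappa=1e-9, f_sky=625.0 / 41253.0))
```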
Following Schneider et al. (1998), we parameterize the source distribution, $`W_\chi (\chi )`$, as a function of redshift, $`W_z(z)`$:
$$W_z(z)=\frac{\beta }{\mathrm{\Gamma }\left[\frac{1+\alpha }{\beta }\right]z_0}\left(\frac{z}{z_0}\right)^\alpha \mathrm{exp}\left[-\left(\frac{z}{z_0}\right)^\beta \right].$$
(5)
Such a distribution has been found to provide a good fit to the observed redshift distribution of galaxies (e.g., Smail et al. 1995).
### 2.2 Linear and Nonlinear Power Spectra
Since we are considering non-zero mass neutrinos, it is necessary that both the linear and the nonlinear power spectrum take such neutrinos into account. In order to obtain the linear power spectrum, we follow Hu & Eisenstein (1998) and Eisenstein & Hu (1999) and consider the MDM (mixed dark matter) transfer and growth functions appropriate for massive neutrinos as well as baryons. We use the fitting formulae presented therein, which agree with numerical calculations at the level of 1%. Both neutrinos and baryons modify the standard power spectrum by suppressing power at small scales below the free-streaming length. The small scale suppression due to neutrinos can be written as:
$$\left(\frac{\mathrm{\Delta }P}{P}\right)\approx -8\frac{\mathrm{\Omega }_\nu }{\mathrm{\Omega }_m}\approx -0.8\left(\frac{m_\nu }{1\mathrm{eV}}\right)\left(\frac{0.1N}{\mathrm{\Omega }_mh^2}\right),$$
(6)
where $`N`$ is the number of degenerate neutrinos. Assuming the standard model for neutrinos, with a temperature $`(4/11)^{1/3}`$ times that of the CMB, we can write $`\mathrm{\Omega }_\nu `$ in terms of the neutrino mass, $`m_\nu `$ (in eV), and the number of degenerate neutrino species, $`N`$, as $`\mathrm{\Omega }_\nu =N(m_\nu /94)h^{-2}`$. We assume an integer number of neutrino species, up to three. The suppression of power is proportional to the ratio of the hot matter density in neutrinos to the cold matter density; in low $`\mathrm{\Omega }_m`$ cosmological models, currently preferred by observations, the suppression of power is much larger than in an Einstein-de Sitter universe with the same amount of neutrinos. In fact, for low $`\mathrm{\Omega }_m`$ models, massive neutrinos with masses of $`\sim `$ 1 eV contribute a suppression of power of order 100% compared with the case of no neutrinos.
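As a rough numerical illustration of Eq. 6 (with hypothetical parameter choices, not fits to data):

```python
# Combining Eq. 6 with Omega_nu = N m_nu / (94 h^2); all inputs are
# illustrative.
def power_suppression(m_nu_eV, N=2, omega_m=0.3, h=0.65):
    omega_nu = N * m_nu_eV / 94.0 / h**2
    return -8.0 * omega_nu / omega_m

for m in (0.1, 0.5, 1.0):
    print(m, power_suppression(m))
# m_nu = 1 eV with N = 2 and Omega_m = 0.3 already gives a suppression of
# order unity, as stated above.
```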
In addition to the linear power spectrum, given the time dependence, it is necessary that the nonlinear evolution of the density power spectrum be fully taken into account when calculating the convergence power spectrum given in Eq.1. The importance of the nonlinear evolution of the power spectrum for weak lensing statistics was first discussed in Jain & Seljak (1997) for standard $`\mathrm{\Lambda }`$CDM cosmological models involving $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. There are several approaches to obtaining the nonlinear evolution; however, for analytical calculations, fitting functions are strongly preferred over detailed numerical work. In Peacock & Dodds (1996), the evolved density power spectrum was related to the linear power spectrum through a function $`F(x)`$, which was calibrated against numerical simulations in standard CDM models. According to Peacock & Dodds (1996), the nonlinear power spectrum $`P_\delta `$ is related to the linear power spectrum, $`P_\delta ^L(k_L)`$, through: $`k^3P_\delta (k)/(2\pi ^2)=F\left[k_L^3P_\delta ^L(k_L)/(2\pi ^2)\right]`$ where $`k_L=\left[1+k^3P_\delta (k)/(2\pi ^2)\right]^{-1/3}k`$. We refer the reader to Peacock & Dodds (1996) for the functional form of $`F(x)`$.
Since we now allow for the presence of massive neutrinos, as well as baryons, it is necessary to consider whether the fitting function given in Peacock & Dodds (1996) is reliable for the present calculation, as these two species were not included in their simulations. Smith et al. (1998) compared the Peacock & Dodds (1996) formulation against MDM numerical simulations and suggested a possible agreement between the two when the spectral index for the Peacock & Dodds (1996) fitting formula was calculated using the MDM power spectrum. However, Ma (1998) recently suggested that this agreement was only due to the poor resolution of the numerical simulations used in Smith et al. (1998). According to Ma (1998), the Peacock & Dodds (1996) formulation disagrees with numerical data at the level of 10% to 50%. Therefore, instead of the Peacock & Dodds (1996) approach, we use the fitting function given in Ma (1998), which has been shown to agree with numerical simulations at a level of 3% to 10% for $`k\lesssim 10h`$ Mpc<sup>-1</sup> out to a redshift of $`4`$. For larger wavenumbers and higher redshifts, agreement is only reached at a level of 15%. For the present calculation, involving a redshift distribution that peaks at redshifts lower than 4 and scales of interest below 10 $`h`$ Mpc<sup>-1</sup>, the fitted formulation is reasonably adequate. Since these fitting formulae are still not widely used, compared to the Peacock & Dodds (1996) formulae, we reproduce them here for interested readers. The nonlinear power spectrum is related to the linear power spectrum through (Ma 1998):
$`{\displaystyle \frac{\mathrm{\Delta }(k)}{\mathrm{\Delta }_L(k_L)}}=G\left({\displaystyle \frac{\mathrm{\Delta }_L(k_L)}{g_0^{1.5}\sigma _8^\beta }}\right),`$
$`G(x)=[1+\mathrm{ln}(1+0.5x)]{\displaystyle \frac{1+0.02x^4+c_1x^8/g^3}{1+c_2x^{7.5}}},`$ (7)
where $`\mathrm{\Delta }(k)\equiv k^3P_\delta (k)/(2\pi ^2)`$ is the density variance in the linear and nonlinear regimes<sup>2</sup><sup>2</sup>2The notations used by Peacock & Dodds (1996) and Ma (1998) differ in that $`P(k)`$ as defined in Ma (1998) refers to $`P(k)/(2\pi )^3`$ in Peacock & Dodds (1996).. Similar to Peacock & Dodds (1996), the nonlinear scale is related to the linear scale through:
$$k_L=\left[1+\mathrm{\Delta }(k)\right]^{-1/3}k.$$
(8)
Instead of the effective spectral index $`n_{\mathrm{eff}}`$ used in Peacock & Dodds (1996), the formalism uses $`\sigma _8`$, the rms linear mass fluctuation on the $`8h^{-1}`$ Mpc scale evaluated at the redshift of interest. The numerical simulations suggest that $`n_{\mathrm{eff}}+3`$ is related to $`\sigma _8`$ through $`n_{\mathrm{eff}}+3\propto \sigma _8^\beta `$, where $`\beta =0.7+10\mathrm{\Omega }_\nu ^2`$. The functions $`g_0=g(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })`$ and $`g=g(\mathrm{\Omega }_m(z),\mathrm{\Omega }_\mathrm{\Lambda }(z))`$ are, respectively, the relative growth factor for the linear density field evaluated at present and at redshift $`z`$, for a model with a present-day matter density $`\mathrm{\Omega }_m`$ and a cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }`$. A fitting formula for $`g(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })`$ is (Carroll et al. 1992):
$`g={\displaystyle \frac{5}{2}}\mathrm{\Omega }_m(z)`$ (9)
$`[\mathrm{\Omega }_m(z)^{4/7}-\mathrm{\Omega }_\mathrm{\Lambda }(z)+\left(1+\mathrm{\Omega }_m(z)/2\right)\left(1+\mathrm{\Omega }_\mathrm{\Lambda }(z)/70\right)]^{-1}.`$
According to Ma (1998), for CDM and LCDM models a good fit is given by $`c_1=1.08\times 10^{-4}`$ and $`c_2=2.10\times 10^{-5}`$, while $`c_1=3.16\times 10^{-3}`$ and $`c_2=3.49\times 10^{-4}`$ for MDM models with $`\mathrm{\Omega }_\nu `$ of $`\sim `$ 0.1, and $`c_1=6.96\times 10^{-3}`$ and $`c_2=4.39\times 10^{-4}`$ for MDM models with $`\mathrm{\Omega }_\nu `$ of $`\sim `$ 0.2. For all other $`\mathrm{\Omega }_\nu `$ values, usually less than 0.1 for neutrino masses of current interest, we interpolate between the published values of $`c_1`$ and $`c_2`$ by Ma (1998). This procedure is only approximate; for higher precision, numerical simulations would be required to determine $`c_1`$ and $`c_2`$ at individual $`\mathrm{\Omega }_\nu `$ values. In general, the weak lensing convergence power spectrum depends on six cosmological parameters: $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, $`\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_\nu `$, $`n_s`$, the primordial scalar tilt, and $`\delta _H`$, the normalization of the density power spectrum. Throughout this paper, we take a flat model in which $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. Such a cosmology is motivated by both inflationary scenarios and current observational data. For $`\delta _H`$ we use the COBE normalization as presented by Bunn & White (1997) and also consider galaxy cluster based normalizations, $`\sigma _8`$, from Viana & Liddle (1998).
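For the interested reader, a minimal sketch of how Eqs. (7)-(9) translate into code is given below; the linear variance, $`\sigma _8(z)`$ and the growth factors are assumed to be supplied by the MDM linear theory described earlier, and the $`c_1`$, $`c_2`$ values used are the published MDM ones for $`\mathrm{\Omega }_\nu `$ of $`\sim `$ 0.1:

```python
# A sketch of the Ma (1998) mapping of Eqs. (7)-(9); inputs such as
# Delta_L(k_L) and sigma_8(z) are assumed to come from linear theory.
import numpy as np

def growth_g(omega_m_z, omega_lambda_z):
    """Carroll et al. (1992) growth fit, Eq. (9)."""
    return 2.5 * omega_m_z / (
        omega_m_z**(4.0 / 7.0) - omega_lambda_z
        + (1.0 + omega_m_z / 2.0) * (1.0 + omega_lambda_z / 70.0)
    )

def G_fit(x, g, c1=3.16e-3, c2=3.49e-4):
    """G(x) of Eq. (7), with the MDM coefficients for Omega_nu ~ 0.1."""
    return (1.0 + np.log(1.0 + 0.5 * x)) * \
           (1.0 + 0.02 * x**4 + c1 * x**8 / g**3) / (1.0 + c2 * x**7.5)

def nonlinear_delta(delta_L, g0, g, sigma8, omega_nu=0.1):
    """Nonlinear density variance Delta(k) from the linear Delta_L(k_L)."""
    beta = 0.7 + 10.0 * omega_nu**2
    return delta_L * G_fit(delta_L / (g0**1.5 * sigma8**beta), g)

# Eq. (8) then gives the nonlinear scale, k = [1 + Delta(k)]**(1/3) * k_L,
# which is solved iteratively at each k_L.
```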
In Fig.1, we show two sets of COBE normalized weak lensing power spectra in the presence of non-zero mass neutrinos. The upper and lower curves represent two cosmological models with high and low $`\mathrm{\Omega }_m`$ values, computed assuming the redshift distribution of background sources given in Eq. 5, with $`\beta =1.5`$ and $`\alpha =2.0`$. As shown, non-zero mass neutrinos suppress power at large $`l`$ values, and this effect is significant for low $`\mathrm{\Omega }_m`$ models. This is primarily due to the fact that the suppression of power is directly proportional to the ratio $`\mathrm{\Omega }_\nu /\mathrm{\Omega }_m`$. In addition, we show the expected 1$`\sigma `$ uncertainty in the power spectrum measurement for a survey of size 625 deg<sup>2</sup> down to a magnitude limit of 25 in R. It is likely that weak lensing surveys down to R of 25 within an area of 100 deg<sup>2</sup> will be available in the near future, and that the area coverage will steadily grow, to as much as several thousand square degrees over the next decade. As shown in Fig.1, reliable measurements of the power spectrum are likely when $`l`$ is between 100 and 3000. This is the same range in which neutrinos suppress power. Such effects do not exist, for example, in the CMB anisotropy power spectrum; low redshift probes of the matter power spectrum provide ideal ways to weigh neutrinos.
### 2.3 Cosmic Confusion?
However, there are alternative possibilities which can mimic neutrinos. In Fig.2, as examples, we illustrate two possibilities which produce a power spectrum similar to that of a model with $`\mathrm{\Omega }_m`$ of 0.35 and $`m_\nu `$ of 0.7 eV: when $`m_\nu `$ is 1 eV, increasing the primordial scalar tilt by 30% can mimic the original power spectrum, while in a model with zero mass neutrinos, increasing the baryon content by 80% can produce essentially the original power spectrum. Such effects are what can be described as cosmic confusion, and thus careful measurements of cosmological parameters are needed to weigh neutrinos even with weak lensing.
## 3 Neutrino Mass Measurement
In order to investigate the possibility for a neutrino mass measurement, we consider the so-called Fisher information matrix (e.g., Tegmark et al. 1997) with six cosmological parameters that define the weak lensing power spectrum. The Fisher matrix $`F`$ can be written as:
$$F_{ij}=-\left\langle \frac{\partial ^2\mathrm{ln}L}{\partial p_i\partial p_j}\right\rangle _𝐱,$$
(10)
where $`L`$ is the likelihood of observing data set $`𝐱`$ given the parameters $`p_1\mathrm{}p_n`$. Following the Cramér-Rao inequality, no unbiased method can measure the $`i`$th parameter with a standard deviation less than $`(F_{ii})^{-1/2}`$ if the other parameters are known, and less than $`[(F^{-1})_{ii}]^{1/2}`$ if the other parameters are estimated from the data as well. Since Eq. 10 is calculated assuming a prior cosmological model, the estimated errors on the parameters of this underlying model can depend on the prior assumptions.
Assuming a Gaussian and uncorrelated distribution for uncertainties, one can easily derive the Fisher matrix for weak lensing as<sup>3</sup><sup>3</sup>3Note the minor correction to Eq.4 of Hu & Tegmark (1999):
$$F_{ij}=\underset{l=l_{\mathrm{min}}}{\overset{l_{\mathrm{max}}}{\sum }}\frac{f_{\mathrm{sky}}(2l+1)}{2\left(P_\kappa (l)+\frac{\gamma ^2}{n_{\mathrm{mag}}}\right)^2}\frac{\partial P_\kappa (l)}{\partial p_i}\frac{\partial P_\kappa (l)}{\partial p_j}.$$
(11)
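Schematically, Eq. 11 can be evaluated with finite-difference derivatives of the convergence power spectrum; in the sketch below, `power_spectrum` is a stand-in for the full calculation of Sect. 2 rather than any specific library routine:

```python
# A schematic evaluation of Eq. 11; power_spectrum(ells, params) is a
# placeholder for the full P_kappa computation.
import numpy as np

def fisher_matrix(power_spectrum, params, f_sky, gamma_rms=0.4, n_mag=6.5e8,
                  l_min=100, l_max=5000, eps=0.01):
    ells = np.arange(l_min, l_max + 1)
    p0 = np.asarray(params, dtype=float)
    P0 = power_spectrum(ells, p0)

    # two-sided numerical derivatives dP_kappa/dp_i
    derivs = []
    for i in range(len(p0)):
        dp = eps * max(abs(p0[i]), 1e-3)
        pp, pm = p0.copy(), p0.copy()
        pp[i] += dp
        pm[i] -= dp
        derivs.append((power_spectrum(ells, pp) - power_spectrum(ells, pm)) / (2 * dp))

    noise = P0 + gamma_rms**2 / n_mag
    weight = f_sky * (2 * ells + 1) / (2.0 * noise**2)
    return np.array([[np.sum(weight * di * dj) for dj in derivs] for di in derivs])

# Marginalized 1-sigma errors are the square roots of the diagonal of the
# inverse Fisher matrix; Gaussian priors from external data are added to the
# diagonal as 1/sigma_prior^2 before inverting.
```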
As illustrated in Fig.2, in order to make a reliable measurement of the neutrino mass, it is necessary to consider external measurements of cosmological parameters. Such measurements can come from a variety of probes such as Type Ia supernovae, galaxy clusters, the CMB, gravitational lensing, etc. Here, we take both a conservative approach, with large uncertainties for the cosmological parameters based on other techniques, and a more optimistic approach motivated by the expected uncertainties from future surveys. In our conservative model, we use the following errors: $`\sigma (\mathrm{\Omega }_m)=0.2`$, $`\sigma (\mathrm{\Omega }_b)=0.1\mathrm{\Omega }_b`$, $`\sigma (h)=0.2`$, $`\sigma (n_s)=0.1`$, $`\sigma (\mathrm{ln}\delta _H)=0.5`$, while in our optimistic model, we use $`\sigma (\mathrm{\Omega }_m)=0.07`$, $`\sigma (\mathrm{\Omega }_b)=0.0025h^2`$, $`\sigma (h)=0.1`$, $`\sigma (n_s)=0.06`$, $`\sigma (\mathrm{ln}\delta _H)=0.3`$. These errors are in fact worse than what is expected to be measured from PLANCK<sup>4</sup><sup>4</sup>4http://astro.estec.esa.nl/Planck/; also, ESA document D/SCI(96)3., but are similar to what could be achieved with a mission such as MAP<sup>5</sup><sup>5</sup>5http://map.gsfc.nasa.gov/. We consider a fiducial model in which $`\mathrm{\Omega }_b=0.019`$, consistent with current observations, $`h=0.65`$ and $`n_s=1.0`$, with normalization based on COBE. In addition, we also consider an alternative normalization of the power spectrum based on measurements of $`\sigma _8`$ ($`=0.56\mathrm{\Omega }_m^{-0.47}`$) following Viana & Liddle (1998). We also consider variations of the above fiducial model and marginalize over the uncertainties to obtain the 2$`\sigma `$ detection limit on neutrinos for various weak lensing surveys. We only use the information in the power spectrum between $`l`$ values of 100 and 5000. At $`l`$ values below 100, cosmic variance dominates the measurement, while at $`l>5000`$ the finite number of galaxies and their ellipticities increase the power spectrum measurement uncertainties.
In Fig.3, we summarize our results: solid lines show the expected 2$`\sigma `$ detection limit for our conservative errors, while dashed lines show the detection limits for the more optimistic errors. The dot-dashed line is for models in which the matter power spectrum is normalized on the 8 h<sup>-1</sup> Mpc scale. The strong dependence of this normalization and its error on $`\mathrm{\Omega }_m`$ causes the $`\sigma _8`$ normalized limits to differ from those in which the power spectra are normalized to the COBE measurements. In Fig.3, we show the limits assuming a survey of 100 $`\times `$ 100 sqr. degrees down to an R band magnitude of 25. However, for surveys with different areas, especially for surveys in the near future with small coverage, the limits can be scaled by the reduction factor in the observed area (see Eq. 12). We assume uncorrelated errors in the weak lensing power spectrum measurement. For low $`\mathrm{\Omega }_m`$ models ($`\lesssim 0.5`$) normalized to COBE, and using our conservative errors, we can write the 2 $`\sigma `$ detection limit on the neutrino mass as:
$$m_\nu ^{2\sigma }\approx 5.5\left(\frac{\mathrm{\Omega }_mh^2}{0.1N}\right)^{0.8}\left(\frac{10}{\theta _s}\right)\mathrm{eV}.$$
(12)
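Plugging representative numbers into Eq. 12, where $`\theta _s`$ is understood as the survey side in degrees (so $`\theta _s=10`$ corresponds to 100 sqr. degrees):

```python
# Illustrative evaluation of Eq. 12; the parameter choices are examples only.
def m_nu_limit(omega_m=0.24, h=0.65, N=1, theta_s=10.0):
    return 5.5 * (omega_m * h**2 / (0.1 * N))**0.8 * (10.0 / theta_s)

print(m_nu_limit())             # ~5.5 eV when Omega_m h^2 ~ 0.1 N
print(m_nu_limit(theta_s=100))  # ~0.55 eV for a 100 x 100 deg^2 survey
```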
This 2 $`\sigma `$ detection limit is comparable to current upper limits, at the 2 $`\sigma `$ level, on the neutrino mass. Using the more optimistic errors decreases this limit by a factor of 2 to 3, depending on $`\mathrm{\Omega }_m`$; however, to obtain such optimistic errors on the cosmological parameters one requires accurate measurements of the CMB power spectrum, at the level of the MAP satellite. In making this prediction we have assumed that the weak lensing power spectrum can be measured to the uncertainty predicted by simple arguments involving ellipticity errors and cosmic variance, and that the measurements are uncorrelated. Also, in order to obtain a reliable measurement of the weak lensing power spectrum, one requires additional knowledge of the redshift distribution of sources. Such information is likely to be adequately obtained with photometric redshift measurements from color data or by the template fitting techniques that have been developed for multicolor surveys (e.g., Hogg et al. 1998). The accuracy to which such measurements can be made should be adequate; however, if no multicolor data are available, then this may not be possible. Therefore, it is likely that such a clean measurement of the weak lensing power spectrum will not be directly possible in the near future. In order to consider such effects, we increased the expected uncertainties in the power spectrum by a factor of 2 beyond what is predicted for a survey of 100 sqr. degrees down to an R band magnitude of 25. The expected neutrino mass limit increases by an amount consistent with what is expected from the Fisher matrix formalism. Even in such a scenario, with a poorly measured power spectrum, one can still put interesting limits on the neutrino mass.
A small area survey, such as 10 $`\times `$ 10 sqr. degrees, is likely to be feasible in the near future with upcoming observations from wide field CCD cameras. There are several such instruments currently in either the design or manufacturing stages: MEGACAM<sup>6</sup><sup>6</sup>6 http://cdsweb.u-strasbg.fr:2001/projects/megacam/ which will make observations from the Canada France Hawaii Telescope (CFHT; Boulade et al. 1998), and the VLT-Survey-Telescope<sup>7</sup><sup>7</sup>7 http://oacosf.na.astro.it/vst/ (VST). Other than these surveys, which are likely to first produce deep weak lensing surveys over small areas, two wide-field shallow surveys are currently ongoing, at optical (SDSS; Stebbins et al. 1997) and radio (FIRST; Kamionkowski et al. 1997) wavelengths; however, it is still unclear to what accuracy these imaging data can be used for weak lensing studies. Still, assuming that SDSS can in fact make weak lensing measurements down to an R band magnitude of 22, we find that, given its wide field coverage, it can be used to detect neutrinos down to a mass limit of $`\sim `$ 3 eV at the 2$`\sigma `$ level, or to put interesting limits at the same mass threshold. For an ultimate survey of $`100\times 100`$ deg<sup>2</sup>, weak lensing allows a detection of neutrinos down to a mass of $`\sim `$ 0.5 eV when $`\mathrm{\Omega }_m\sim 0.3`$ and $`h\sim 0.65`$. With the expected errors from CMB satellites, this limit can be lowered by a factor of 3 to 4, allowing the possibility for weak lensing surveys to probe neutrinos with masses lower than 0.1 eV. These conclusions are generally consistent with what was found by Hu & Tegmark (1999); minor differences are likely to arise from the fact that the present study and Hu & Tegmark (1999) used different fitting functions to describe the nonlinear evolution of the potential power spectrum, and from possibly different fiducial cosmological models. We note here that using MAP or PLANCK data with galaxy redshift surveys such as SDSS, without weak lensing measurements, only allows the determination of the neutrino mass to limits of $`\sim `$ 1 eV and 0.3 eV respectively (Hu et al. 1997).
Returning to much smaller surveys, we have so far only studied the accuracy to which the neutrino mass can be measured. However, in making such measurements one does not lose the ability to make other measurements as well. For example, the conservative errors we assumed on the other cosmological parameters can also be improved by factors of 2 to 3 when information on these parameters is derived from the weak lensing data themselves. Also, one can abandon the assumption of a spatially flat Universe and determine the value of the cosmological constant directly from weak lensing data, while still putting a limit on the neutrino mass. However, if the assumption of a spatially flat Universe is dropped in order to measure $`\mathrm{\Omega }_\mathrm{\Lambda }`$, then the limit to which the neutrino mass can be measured increases by a factor of $`\sim `$ 1.5 for surveys of size 100 sqr. degrees. For now, if one is to measure or improve all the other cosmological parameters that can be studied with weak lensing surveys (listed in Sect.2), then it is safe to say that neutrinos down to a mass limit of $`\sim `$ 8 eV can be measured with weak lensing surveys of size 100 sqr. degrees down to an R band magnitude of 25. Such a possibility will definitely be available with upcoming surveys from MEGACAM. For still smaller surveys, such as 10 sqr. degrees, if one attempts to make all cosmological measurements, such as $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, interesting limits on the neutrino mass can only be obtained at a mass level greater than 25 eV. Since such neutrino masses may already be ruled out, it is safe to ignore the presence of neutrinos when making measurements with much smaller surveys. Such surveys are likely to be first available with wide-field cameras, with the coverage increasing afterwards.
## 4 Discussion & Summary
Here, we have considered the possibility of a neutrino mass measurement using weak gravitational lensing of background sources by foreground large scale structure. For a survey of size 100 deg<sup>2</sup>, neutrinos with masses greater than $`\sim `$ 5.5 eV could easily be detected. This detection limit is comparable to current cosmological limits on the neutrino mass, such as those from the Ly$`\alpha `$ forest. When compared to various ongoing experiments to detect neutrinos, the advantage of weak lensing is that one directly obtains a measure of the mass, rather than the mass difference between two neutrino species. For typical surveys of size $`\sim `$ ten square degrees, ignoring the presence of neutrinos can lead to biased estimates of cosmological parameters; e.g., the cosmological mass density can be underestimated by as much as $`\sim `$ 15% if neutrinos with mass 5 eV are in fact present. However, if such weak lensing surveys are solely used for the derivation of parameters such as the cosmological mass density, then the accuracy to which such derivations can be made is less than the bias produced by neutrinos. Therefore, for small area surveys, the presence of neutrinos can be safely ignored (assuming that their masses are less than $`\sim `$ 5 eV or so). However, armed with cosmological parameters from other complementary techniques, even such small weak lensing surveys offer a strong possibility to investigate the presence of non-zero mass neutrinos.
We acknowledge useful discussions with Wayne Hu and Dragan Huterer. Wayne Hu is also thanked for communicating the fitting code to evaluate the MDM transfer function. We also thank an anonymous referee for comments which led to several improvements in the presentation.
# Pseudogap phase in the U(1) gauge theory with incoherent spinon pairs
## Abstract
The pseudogap effect of underdoped high-$`T_c`$ superconductors is studied in the U(1) gauge theory of the t-J model including the spinon pairing fluctuation. The gauge fluctuation breaks the long range correlation between the spinon pairs. The pairing fluctuation, however, suppresses significantly the low-lying gauge fluctuations and leads to a stable but phase incoherent spin gap phase which is responsible for the pseudogap effects. This quantum disordered spin gap phase emerges below a characteristic temperature $`T^{}`$ which is determined by the effective potential for the spinon pairing gap amplitude. The resistivity is suppressed by the phase fluctuation below $`T^{}`$, consistent with experiments.
The normal state behavior of high-$`T_c`$ cuprates has cast doubt on the conjecture that correlated electron metals are Fermi liquids in the absence of symmetry breaking. One anomalous feature which is difficult to understand is the spin gap or pseudogap behavior observed in underdoped materials. This phenomenon was first observed in the NMR measurement of deoxygenated YBCO, which showed that both the spin susceptibility and the nuclear spin-lattice relaxation rate were suppressed below some temperature above $`T_c`$, indicating an opening of a spin excitation gap. This anomalous behavior of underdoped materials also manifests itself in the resistivity, Hall coefficient, specific heat, optical properties, inelastic neutron scattering, and other thermodynamic or transport quantities.
At present there is no consensus as to the correct theory of the pseudogap effect. One interpretation is that the pseudogap is the energy gap of pre-formed Cooper pairs, and that phase fluctuations in the underdoped regime prevent these pairs from condensing until some lower temperature is reached. This type of theory gives a qualitative account of the similarity between the pseudogap and the superconducting gap. But it is not clear whether the phase fluctuation is really strong enough to suppress $`T_c`$ by one order of magnitude in the extremely underdoped limit.
A different interpretation is given by the resonant valence bond mean field theory based on the notion of charge-spin separation: electrons separate into spinons and holons; spinons have spin but no charge, holons have charge but no spin. In this theory, the spinons are predicted to form singlet pairs well above the superconducting $`T_c`$, with superconductivity setting in only when the holons become phase coherent at $`T_c`$. This theory provides a qualitative description of the high-$`T_c`$ phase diagram. However, the spin gap phase is unstable against the fluctuation of the U(1) gauge field, which is introduced to enforce the constraint of no double occupancy.
In this paper, we study the effect of the spinon pairing fluctuation on the physical properties of the spin gap phase in the U(1) gauge theory of the t-J model. We propose that, instead of the mean field spinon pairing phase with long range phase coherence, a local pairing phase of spinons without long range phase coherence survives the fluctuation of the U(1) gauge field, because it preserves the local U(1) symmetry. Such a description can be viewed as the strong coupling version, or the microscopic origin, of the nodal liquid proposed by Balents, Fisher and Nayak (BFN). The mechanism which quantum disorders the d-wave pairing state is attributed to the fluctuation of the U(1) gauge field, which is absent in BFN's theory. Since the U(1) gauge field is added to ensure the local constraint in the t-J model, the quantum disordering of the spinon pairing state is also a consequence of the local constraint. The spinon pairing state should be regarded as the mean field description of the RVB state. In such a description the phase of the pairing order parameter is fixed; because the phase operator is conjugate to the particle number operator, the fluctuation of the occupation number would then be infinite, by the uncertainty principle. Since this kind of fluctuation is actually very small in the underdoped regime because of the local constraint, when we go beyond the mean field theory by including the fluctuation of the U(1) gauge field it is quite reasonable to obtain a state with large uncertainty in the phase and small uncertainty in the occupation number. The instability of the spinon pairing phase is thus physically caused by the local constraint and should be cured by disordering the phase of the order parameter.
After integrating out the spinon and holon operators, we obtain an effective action for the gauge and pairing fields, $`𝒮=𝒮_\mathrm{\Delta }+𝒮_A`$. Once the pairing phase fluctuation is made explicit below, the total action takes the form $`𝒮=𝒮_\varphi +𝒮_A`$.
The effective action for the gauge field is given by
$$𝒮_A=\frac{T}{2}\underset{i\nu _n}{\sum }\int \frac{d^2q}{(2\pi )^2}\mathrm{\Pi }_{\mu \nu }(𝐪,i\nu _n)A_q^\mu A_{-q}^\nu ,$$
(1)
where $`A^\mu `$ is the U(1) gauge field. The inverse of the gauge field propagator is
$$\mathrm{\Pi }_{\mu \nu }(𝐪,\nu )=\chi _dq^2-i\gamma _F\nu /q,$$
(2)
where $`\gamma _F=2n_e/k_F`$ and $`\chi _d=\chi _F+\chi _B`$. $`\chi _F`$ and $`\chi _B`$ are the Landau diamagnetic susceptibilities of spinons and holons, respectively.
In order to treat the fluctuation of the pairing order parameter, we assume that its dynamics is described by an effective Ginzburg-Landau theory with minimal coupling to the gauge field, namely
$$𝒮_\mathrm{\Delta }=\int dx\left[-\stackrel{~}{\kappa }_\mu \mathrm{\Delta }^{}(x)\left(\partial _\mu -iA_\mu (x)\right)^2\mathrm{\Delta }(x)+\alpha |\mathrm{\Delta }|^2+\beta |\mathrm{\Delta }|^4\right],$$
(3)
where $`\stackrel{~}{\kappa }_0=\frac{1}{2m^{}c^2}`$, $`\stackrel{~}{\kappa }_{1,2}=\frac{1}{2m^{}}`$, and $`m^{}`$ and $`c`$ are the effective mass and velocity of the spinon pairs, respectively. We further assume that $`\alpha =a(T-T_0^{})`$ and that $`\beta `$ is independent of temperature. Here $`a`$ is a constant and $`T_0^{}`$ is the critical temperature of the spin gap phase without gauge fluctuation.
In the work of Ubbens and Lee, the phase fluctuation was ignored. The energy loss due to the gauge fluctuation is proportional to $`\mathrm{\Delta }^{5/3}`$, while the energy gain from spinon pairing is proportional to $`\mathrm{\Delta }^2`$. Thus for small $`\mathrm{\Delta }`$ the first term always dominates and the phase coherent spin gap phase is unstable. This means that to study the pseudogap phase in the U(1) gauge model, the pairing phase fluctuation must be considered.
The spinon pairing field $`\mathrm{\Delta }(x)=\sqrt{\rho }e^{i\phi }`$ contains both amplitude and phase fluctuations. When $`T\ll T_0^{}`$, the amplitude fluctuation is massive and can therefore be omitted. However, when $`T\sim T_0^{}`$, the amplitude fluctuation is important. If we assume that $`\rho (x)`$ varies slowly in both time and space, then $`𝒮_\mathrm{\Delta }`$ is approximately given by
$$𝒮_\mathrm{\Delta }\approx \int d^3x\left[\frac{1}{2}\kappa _\mu (x)\left(\partial _\mu \phi (x)-A_\mu (x)\right)^2+\alpha \rho (x)+\beta \rho (x)^2\right],$$
(4)
where $`\kappa _\mu (x)=\rho (x)\stackrel{~}{\kappa }_\mu `$. To treat the phase fluctuation properly, we introduce a duality transformation, following reference . The phase variable $`\phi `$ is generally multivalued and $`\partial _\mu \phi (x)`$ is not curl-free. Thus there are vortices in the $`\phi `$ field, and the curl of $`\partial _\mu \phi (x)`$ defines a vortex current operator $`j_\mu ^v=ϵ_{\mu \nu \lambda }\partial _\nu \partial _\lambda \phi `$. To treat these vortices, a commonly used approach is to introduce a fictitious gauge field $`a_\mu `$, dual to the phase variable $`\phi `$, via the equation
$$\kappa _\mu (\partial _\mu \phi -A_\mu )=ϵ_{\mu \nu \lambda }\partial _\nu a_\lambda .$$
(5)
Substituting the solution for $`\phi `$ from the above equation into the definition of the vortex current, one can relate $`a_\mu `$ to the vortex current operator
$$j_\mu ^v=ϵ_{\mu \nu \lambda }\partial _\nu \left[\kappa _\lambda ^{-1}ϵ_{\lambda \alpha \beta }\partial _\alpha a_\beta +A_\lambda \right].$$
(6)
In the dual representation of $`\phi `$, vortices are treated as quantized particles and represented by a complex field $`\mathrm{\Phi }`$. A dual Lagrangian of $`\phi `$ with minimal coupling between $`a_\mu `$ and $`\mathrm{\Phi }`$ can be constructed as
$`_\phi `$ $`=`$ $`{\displaystyle \frac{1}{2\kappa _\mu (x)}}(ϵ_{\mu \nu \lambda }\partial _\nu a_\lambda )^2+ϵ_{\mu \nu \lambda }a_\mu \partial _\nu A_\lambda `$ (8)
$`+{\displaystyle \frac{\kappa _\mu (x)}{2}}|(\partial _\mu -ia_\mu )\mathrm{\Phi }|^2-V_2\mathrm{\Phi }^2,`$
where the higher order terms in $`\mathrm{\Phi }`$ are ignored. From the equation of motion of $`_\phi `$, it can be shown that this Lagrangian has the desired property of the vortex current defined above. The bare mass of the vortex field, $`V_2`$, depends on the superfluid density of spinon pairing $`\rho (x)`$ and can be estimated as follows. Close to the transition temperature $`T_0^{}`$, the excitation energy of a single static vortex is approximately $`E_V\approx \sqrt{2V_2/\kappa _0}`$. From the GL theory, we know that $`E_V`$ is also proportional to the spinon superfluid density $`\rho (x)`$ if we assume that the spinon superconductivity is of type II, i.e. $`E_V=g\rho (x)`$ with $`g`$ a constant. Thus $`V_2`$ is approximately equal to
$$V_2\approx \frac{1}{2}g^2\rho ^2\kappa _0=\frac{g^2}{4m^{}c^2}\rho (x)^3,$$
(9)
The Lagrangian $`_\phi `$ is quadratic in $`a_\mu `$, so $`a_\mu `$ can be integrated out rigorously. This leads to a self energy correction to the propagator of the gauge field $`A_\mu `$:
$$\mathrm{\Pi }_{\mu \nu }^{}(𝐪,𝐪^{},\nu )=\mathrm{\Pi }_{\mu \nu }(𝐪,\nu )\delta _{𝐪,𝐪^{}}+\frac{q_\mu q_\nu ^{}}{\frac{1}{2}\sum _k\kappa _0(k)\mathrm{\Phi }^2(k+q^{}-q)}.$$
(10)
By further integrating out the gauge field $`𝐀`$, an effective action for the $`\mathrm{\Phi }(x)`$ and $`\rho (x)`$ fields is obtained as
$`S[\mathrm{\Phi }(x),\rho (x)]`$ $`\approx `$ $`f_0\left[1+{\displaystyle \frac{1}{\int d^2x\frac{1}{2}\chi _d\kappa _0(x)\mathrm{\Phi }^2(x)}}\right]^{\frac{2}{3}}`$ (12)
$`+{\displaystyle \int d^2x\left[V_2\mathrm{\Phi }^2(x)+\alpha \rho (x)+\beta \rho (x)^2\right]},`$
where $`f_0`$ is the free energy of the system without spinon pairing.
In the dual language, a phase incoherent state corresponds to a superfluid state of vortices with $`<\mathrm{\Phi }>=\mathrm{\Phi }_0\ne 0`$, while a long range phase correlated state corresponds to a normal state of vortices with $`<\mathrm{\Phi }>=0`$. Thus, to investigate the properties of the quantum disordered spin gap phase, only the superfluid phase of vortices needs to be considered. In this case the low-lying excitations of the vortex field have a finite energy, and we can therefore take a saddle point approximation for the $`\mathrm{\Phi }`$ field. The saddle point $`\mathrm{\Phi }_0`$ is determined by the equation $`\delta S[\mathrm{\Phi }(x),\rho (x)]/\delta \mathrm{\Phi }(x)=0`$. In the small $`\mathrm{\Phi }(x)`$ limit, and using the slowly varying condition on both $`\kappa _0(x)`$ and $`\mathrm{\Phi }(x)`$, the equation becomes
$$-\frac{4}{3}f_0\left[\frac{1}{2}\chi _d\kappa _0(x)\right]^{\frac{2}{3}}\mathrm{\Phi }_0^{\frac{1}{3}}(x)+2V_2(x)\mathrm{\Phi }_0(x)=0,$$
(13)
Due to the presence of the first term, $`\mathrm{\Phi }(x)=0`$ cannot be the stable solution, which indicates that for any given configuration of $`\rho (x)`$ the fluctuation of the U(1) gauge field causes the condensation of vortices. In the description before duality, this means that the quantum disordered phase is always more stable than the ordered phase. The spinons become ”superconducting” only when $`\chi _d^{}\to \mathrm{}`$, so in our approach the pseudogap phase is the intermediate state between the strange metal phase, in which $`\chi _d^{}`$ remains constant as the temperature decreases, and the mean field pairing phase with infinite $`\chi _d^{}`$. At the same time, the phase transition at $`T_0^{}`$ predicted by the mean field theory becomes a crossover at a temperature $`T^{}`$ below which the local minimum in the effective potential of $`\rho `$ moves away from zero.
Since the saddle point value of $`\mathrm{\Phi }(x)`$ is actually very large in the small $`\rho `$ case, we must consider the saddle point equation in the large $`\mathrm{\Phi }(x)`$ limit, which leads to
$$\mathrm{\Phi }_0^2(x)=\sqrt{\frac{4f_0}{3\kappa _0(x)\chi _dV_2(x)}}=4m^{}c^2\sqrt{\frac{f_0}{3g^2\chi _d}}\rho ^{-2}(x).$$
(14)
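As a consistency check, Eq. (14) can be recovered by minimizing the large-$`\mathrm{\Phi }`$ limit of the action (12), in which the bracket is expanded to first order in the inverse of its argument; the short symbolic computation below is a sketch under that assumption:

```python
# Symbolic check of the saddle point in Eq. (14): minimize the large-Phi
# energy density (2/3) f0 / I + V2 Phi^2 with I = (chi_d/2) kappa0 Phi^2.
import sympy as sp

Phi, f0, chi_d, kappa0, V2 = sp.symbols('Phi f_0 chi_d kappa_0 V_2', positive=True)

energy = sp.Rational(2, 3) * f0 / (sp.Rational(1, 2) * chi_d * kappa0 * Phi**2) \
         + V2 * Phi**2
roots = [s for s in sp.solve(sp.diff(energy, Phi), Phi) if s.is_positive]
Phi0_sq = sp.simplify(roots[0]**2)

# agrees with Phi_0^2 = sqrt(4 f0 / (3 kappa0 chi_d V2)) of Eq. (14)
print(sp.simplify(Phi0_sq - sp.sqrt(4 * f0 / (3 * kappa0 * chi_d * V2))))  # -> 0
```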
It can be shown that this quantum disordered phase is more stable than the ordered phase. In BFN’s theory, the condensation of vortices is obtained by assuming $`V_2<0`$. In our case, however, $`V_2`$ is always positive and the condensation of vortices is caused by the U(1) gauge field fluctuation. In the superconducting phase, where the U(1) gauge field is screened by the superfluid of holons, we find that $`\mathrm{\Phi }_0(x)=0`$.
Taking the saddle point approximation for the $`\mathrm{\Phi }`$ field in $`S[\mathrm{\Phi }(x),\rho (x)]`$, we then find the effective potential for the pairing gap amplitude
$$S\left[\rho (x)\right]\approx f_0+\int dx\left[(\alpha +\alpha _0)\rho +\beta \rho ^2\right].$$
(15)
where $`\alpha _0=2\sqrt{g^2f_0/3\chi _d}`$ is assumed to be weakly temperature dependent. The $`\alpha _0`$ term is the contribution of the gauge fluctuation. An important property revealed by this equation is that the contribution of the gauge fluctuation to the free energy is proportional to $`\rho `$, rather than to $`\rho ^{5/6}`$ as in the case where the phase fluctuation vanishes. This means that the gauge fluctuation is greatly suppressed by the phase fluctuation. Since the contributions from both the gauge fluctuation and the pairing condensation are now proportional to $`\rho `$, the spin gap phase is stable below a characteristic temperature
$$T^{}=T_0^{}-\frac{\alpha _0}{a},$$
(16)
which is the solution of the equation $`(\alpha +\alpha _0)|_{T=T^{}}=0`$.
Well below $`T^{}`$, $`\rho `$ becomes finite and its fluctuation can be omitted. In this case, the system we study is similar to the “Nodal Liquid” phase of BFN. The only difference is that in our model the local pairs are formed not by electrons but by spinons.
The phase fluctuation modifies the Landau diamagnetic susceptibility. Under the saddle point approximation, the renormalized Landau diamagnetic susceptibility is approximately given by
$$\chi _d^{}\approx \chi _d+<\left(\frac{1}{2}\kappa _0\mathrm{\Phi }_0^2\right)^{-1}>=\chi _d(1+u<\rho >),$$
(17)
where $`u=\sqrt{3g^2/\chi _df_0}`$ and $`<\rho >=\int _0^{\mathrm{}}d\rho \rho e^{-\frac{(\alpha +\alpha _0)\rho +\beta \rho ^2}{k_BT}}`$.
In the gauge theory, the electronic resistivity $`R(T)`$ is determined by the transport scattering rate of holons, $`\tau _B^{-1}`$. In the strange metal phase above $`T^{}`$, $`\tau _B^{-1}`$ is determined by the Landau diamagnetic susceptibility $`\chi _d`$:
$$\tau _B^{-1}\sim \frac{k_BT}{m_B\chi _d}.$$
(18)
Since $`\chi _d=\chi _F+\chi _B`$ is mainly determined by $`\chi _F`$, which is nearly temperature independent, $`R(T)`$ depends linearly on $`T`$ in this phase. In the spin gap phase, $`\chi _d`$ in $`\tau _B^{-1}`$ should be replaced by $`\chi _d^{}`$. Since $`<\rho >`$, and consequently $`\chi _d^{}`$, increases rapidly with decreasing temperature in the spin gap phase, the temperature dependence of $`\tau _B^{-1}`$ is changed. Near $`T^{}`$, the deviation of $`R(T)`$ from its high temperature linear dependence is approximately given by
$$\frac{R(T)}{CT}=\frac{\chi _d}{\chi _d^{}}\approx 1-u<\rho >,$$
(19)
where $`C`$ is the slope of $`R(T)`$ at high temperatures. From the previous result for $`<\rho >`$, we find that the leading temperature dependence of $`\frac{R(T)}{CT}`$ calculated here fits the experimental data of $`YBa_2Cu_3O_{7-x}`$ well, as shown in Figure 1.
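The qualitative behavior of Eqs. (17)-(19) near and below $`T^{}`$ is easy to illustrate numerically; in the sketch below $`<\rho >`$ is interpreted as the normalized thermal average, and the parameter values are arbitrary illustrative choices in reduced units:

```python
# Numerical sketch of Eqs. (17)-(19): <rho> from the Boltzmann weight and
# the resulting deviation of R(T) from linearity; a, beta, u, T_star are
# illustrative model parameters, not fitted values.
import numpy as np
from scipy.integrate import quad

a, beta, u, T_star = 1.0, 1.0, 1.0, 1.0

def mean_rho(T):
    coeff = a * (T - T_star)  # alpha + alpha_0 changes sign at T*
    w = lambda rho: np.exp(-(coeff * rho + beta * rho**2) / T)
    num, _ = quad(lambda r: r * w(r), 0, np.inf)
    den, _ = quad(w, 0, np.inf)
    return num / den

for T in (1.2, 1.0, 0.8, 0.6):
    print(T, 1.0 - u * mean_rho(T))  # R(T)/(C T) is suppressed below T*
```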
In summary, we have proposed that, due to the strong fluctuation of the U(1) gauge field, the proper description of the pseudogap phase is a quantum disordered spinon pairing state without Bose condensation of holons. The mean field spinon pairing state is unstable once the U(1) gauge field fluctuation is included, while in our description beyond mean field the local U(1) symmetry, which is broken in the mean field pairing state, is restored by the phase disorder of the spinon pairing order parameter. The phase transition of the mean field description thus becomes a crossover from the weak pairing fluctuation regime ($`T>T^{}`$) to the strong fluctuation regime ($`T<T^{}`$). We further divide the pairing fluctuation into amplitude and phase parts and treat them separately. By integrating out the phase fluctuation through a duality transformation, we obtain the effective potential for the amplitude. The crossover temperature is determined by the point at which the minimum of the effective potential moves away from zero. For temperatures well below $`T^{}`$, the minimum of the effective potential is far from zero and the fluctuation of the amplitude is no longer important. Our result is then quite similar to that of BFN, except that in our case the local pairs are formed by spinons and the U(1) gauge field fluctuation plays an essential role in producing the quantum disordered phase. For temperatures near $`T^{}`$, we can calculate the slope of the resistivity by considering the effect of the amplitude fluctuation, and the result fits the experimental data very well.
Very recently, Y. B. Kim and Z. Q. Wang proposed that the mean field spin gap phase can be stabilized by the strong critical fluctuation of holons. Compared with their approach, ours works better at temperatures near $`T^{}`$, where the diamagnetic susceptibility of the holons $`\chi _B`$ can be viewed as constant.
# Chiral quarks, chiral limit, nonanalytic terms and baryon spectroscopy
## Abstract
It is shown that the principal pattern in baryon spectroscopy, which is associated with the flavor-spin hyperfine interactions, is due to the spontaneous breaking of chiral symmetry in QCD and persists in the chiral limit. All corrections associated with a finite quark (Goldstone boson) mass are suppressed by factors of $`(\mu /\mathrm{\Lambda }_\chi )^2`$ and higher.
In a recent work Thomas and Krein questioned the foundations of the description of the baryon spectrum with a chiral constituent quark model . They claim that “…the leading nonanalytic (LNA) contributions are incorrect in such approaches. The failure to implement the correct chiral behaviour of QCD results in incorrect systematics for the corrections to the masses.” The argument made was that the splitting pattern implied by the short-range Goldstone boson exchange (GBE) <sup>*</sup><sup>*</sup>*Throughout this paper I use pion exchange. The transition to the full GBE exchange within the $`SU(3)_F`$ limit implies a substitution of the $`SU(2)`$ flavor matrices by the $`SU(3)`$ ones. operator
$$-\stackrel{}{\tau }_i\cdot \stackrel{}{\tau }_j\stackrel{}{\sigma }_i\cdot \stackrel{}{\sigma }_j,$$
(1)
should be inconsistent with chiral symmetry because it is inconsistent with the leading nonanalytic contribution to the baryon mass predicted by heavy baryon chiral perturbation theory (HBChPT) . I show here that HBChPT has no bearing on this issue.
It is shown by Jenkins and Manohar that the nonanalytic contributions to octet and decuplet masses, such as $`\mu ^3,\mu ^2\mathrm{ln}\mu ^2,\mathrm{\dots }`$, where $`\mu `$ stands for a Goldstone boson mass, arise within HBChPT in the chiral limit only from the loop diagrams a) and b) of Fig. 1, where only diagonal octet-octet or decuplet-decuplet vertices with respect to the baryon field contribute, and there is no contribution from the diagrams c) and d) of Fig. 1, where the intermediate baryon belongs to a different $`SU(3)`$ multiplet. This argument was used by Thomas and Krein, but it was not emphasized that the statement is valid only in the chiral limit. In this limit the $`\mathrm{\Delta }`$ and $`N`$ (decuplet and octet) are well split, and all the infrared divergences of the diagrams c) and d) disappear. Since it is the infrared contributions which are the origin of the nonanalytic terms, such terms vanish in the chiral limit for c) and d), while they persist for a) and b). Because these nonanalytic terms for $`N`$ and $`\mathrm{\Delta }`$ from a) and b) have exactly the same spin-isospin factors, they cannot split $`N`$ and $`\mathrm{\Delta }`$ in the chiral limit. Beyond the chiral limit there appears a contribution from the diagrams c) and d) as well.
Consider now in detail what happens with the $`\mathrm{\Delta }N`$ splitting in the chiral limit within the chiral constituent quark model. There are two distinct pion contributions to the baryon mass, shown in Fig. 2. Diagram a) is the constituent quark self-energy, while diagram b) represents the pion-exchange interaction between the constituent quarks. Consider the first one. The result is well known and coincides with that for the nucleon in baryon ChPT: it contains, in particular, the nonanalytic terms. All the nonanalytic terms of the constituent quark self-energy diagram are hidden in the constituent quark mass $`m`$ and thus appear in the $`3m`$ contribution to the baryon mass within the quark model. Evidently they do not split $`N`$ and $`\mathrm{\Delta }`$, in agreement with HBChPT<sup>†</sup><sup>†</sup>†Obviously the authors of ref. overlook these nonanalytic terms.. These nonanalytic terms appear at the loop level of the effective pion-nucleon and pion-delta Lagrangians.
There is, however, a small difference in the magnitude of these nonanalytic terms. The quark and nucleon axial coupling constants are related as $`g_A^q=3/5g_A^N`$ within the exact $`SU(6)`$ nucleon wave function. There are three constituent quarks in the nucleon, and thus the total contribution within the quark model is proportional to $`3(g_A^q)^2`$, while at the nucleon level it is given by $`(g_A^N)^2=25/9(g_A^q)^2`$. One source of this small difference is that the exact $`SU(6)`$ is used, which is in fact broken by the interaction (1), so that the nucleon wave function contains an admixture of components from other multiplets.
Now consider the interaction diagram b) of Fig. 2. This diagram is not a loop diagram<sup>‡</sup><sup>‡</sup>‡Only when this diagram is used to evaluate a matrix element perturbatively does it also become a loop diagram, but of a different kind. Its contribution is determined by the $`SU(6)`$ and radial structure of the baryon zero-order wave function, in contrast to diagram a).. All effects related to this process are beyond baryon ChPT, which deals with structureless baryons, and its effect is absorbed into the tree-level baryon mass within the effective baryon-meson Lagrangian. The effects of these meson exchanges can be systematically studied within the large $`N_c`$ approach . Note that the large $`N_c`$ nucleon wave function is a quark model wave function with an infinite number of quarks and with the FS Young diagram consisting of one row (with an infinite number of boxes). What is important is that both the large $`N_c`$ and the simple quark model nucleon wave function with $`N_c=3`$ are described by a one-row Young diagram (i.e. they belong to a completely symmetric $`SU(6)_{FS}`$ representation), and that the pion-exchange diagram b) satisfies all the necessary large $`N_c`$ counting rules. These circumstances are among the origins of the success of the chiral constituent quark model in baryon spectroscopy.
Unfortunately it is notoriously difficult to treat this interaction in a consistent relativistic manner, but since the constituent quark mass is rather large, $`300-400`$ MeV, one hopes that at least the qualitative features can be understood using the $`1/m`$ expansion of the constituent quark spinors. To leading nonvanishing order ($`1/m^2`$) the structure of the $`Q_iQ_j`$ pion exchange interaction in momentum representation is given as
$$V_\chi \propto \stackrel{}{\sigma }_i\cdot \stackrel{}{q}\stackrel{}{\sigma }_j\cdot \stackrel{}{q}\stackrel{}{\tau }_i\cdot \stackrel{}{\tau }_jD(q^2)F^2(q^2),$$
(2)
where $`\stackrel{}{q}`$ is the pion 3-momentum, $`D(q^2)`$ is the dressed pion Green function, which generally includes both the nonlinear terms of the chiral Lagrangian and fermion loops, and $`F(q^2)`$ is a pion-quark form factor, which takes into account the internal structure of both the pion and the constituent quark and thus provides a natural ultraviolet cut-off. This form factor should be normalized to 1 at the time-like momentum $`q^2=\mu ^2`$. For the interaction of two different particles in the static approximation only space-like momenta of the pion are important. As $`\stackrel{}{q}\to 0`$ the pion Green function approaches the free static Klein-Gordon Green function $`D_0=(\stackrel{}{q}^2+\mu ^2)^{-1}`$ and the form factor has no singularity. It then follows from (2) that
$$V_\chi (\stackrel{}{q}=0)=0.$$
(3)
This result is rather general and does not rely on any particular form of the chiral Lagrangian or pion-quark form factor. The only necessary ingredient is that the pion is a pseudoscalar, and hence the pion-quark vertex vanishes with $`\stackrel{}{q}`$. The requirement (3) is equivalent in coordinate representation to
$$\int d\stackrel{}{r}V_\chi (\stackrel{}{r})=0.$$
(4)
The sum rule (4) is trivial for the tensor component of the pseudoscalar exchange interaction, since the tensor force automatically vanishes on averaging over the directions of $`\stackrel{}{r}`$. But for the spin-spin component of the pion-exchange interaction the sum rule (4) indicates that there must be a strong short-range term. Indeed, at large interquark separations the spin-spin component is represented by the Yukawa tail $`\stackrel{}{\tau }_i\cdot \stackrel{}{\tau }_j\stackrel{}{\sigma }_i\cdot \stackrel{}{\sigma }_j\mu ^2\frac{e^{-\mu r}}{r}`$; it then follows from the sum rule (4) that at short interquark separations the spin-spin interaction must be opposite in sign to the Yukawa tail and very strong, of the form (1). The concrete radial form of the interaction (1) is determined by the explicit form of the chiral Lagrangian and the pion-quark form factor, which are unknown. It is this short-range part of the GBE interaction between the constituent quarks which is of crucial importance for baryons: it has the sign appropriate to reproduce the level splittings and it dominates over the Yukawa tail in baryons. Within an oversimplified treatment, with a free Klein-Gordon Green function and without the pion-quark form factor, one obtains the well-known pion-exchange potential
$$V=\frac{g^2}{4\pi }\frac{1}{3}\frac{1}{4m_im_j}\stackrel{}{\tau }_i\cdot \stackrel{}{\tau }_j\stackrel{}{\sigma }_i\cdot \stackrel{}{\sigma }_j\left\{\mu ^2\frac{e^{-\mu r}}{r}-4\pi \delta (\stackrel{}{r})\right\},$$
(5)
where the tensor force component, which is irrelevant to the discussion here, has been dropped.
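The sum rule (4) can be checked directly for the potential (5): the volume integral of the Yukawa tail equals $`4\pi `$ for any $`\mu `$ and is cancelled exactly by the contact term. A minimal numerical sketch:

```python
# Numerical check of the sum rule (4) for the potential (5): the volume
# integral of mu^2 exp(-mu r)/r equals 4*pi, matching the -4*pi*delta(r) term.
import numpy as np
from scipy.integrate import quad

mu = 0.7  # pion mass in fm^-1 (~139 MeV); any positive value works

yukawa_volume, _ = quad(lambda r: 4 * np.pi * r**2 * mu**2 * np.exp(-mu * r) / r,
                        0, np.inf)
print(yukawa_volume / (4 * np.pi))  # -> 1.0, so the integral of (5) vanishes
```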
The pion-exchange interaction makes sense only at momenta below the chiral symmetry breaking scale $`\mathrm{\Lambda }_\chi \sim 1`$ GeV, where both pions and constituent quarks exist as effective quasiparticle degrees of freedom. The ultraviolet cut-off is provided by the pion-quark form factor, so the $`\delta `$-function term in (5) is replaced by a finite function with range $`\mathrm{\Lambda }_\chi ^{-1}`$. Note that a short-range interaction of the same form also comes from $`\rho `$-exchange , which can also be considered as a representation of a correlated two-pion exchange , since the latter has a $`\rho `$-meson pole in the t-channel. There are phenomenological reasons to believe that these contributions are also important .
What happens with the pion-exchange potential in the chiral limit, $`\mu =0`$? In this case the sum rule (3)-(4) is no longer valid, since the $`\stackrel{}{q}^2`$ behaviour of the numerator is exactly cancelled by the pion Green function, which behaves as $`\stackrel{}{q}^{-2}`$. As a result the $`\mu `$-dependent long-range part of the interaction vanishes, while the $`\mathrm{\Lambda }_\chi `$-dependent short-range part survives. Note that while the volume integral (3)-(4) is discontinuous, and in the chiral limit the right-hand side of equations (3)-(4) is no longer zero, the chiral limit of the interaction potential (5) is approached continuously. That this is so can easily be seen from (5) by applying the limit $`\mu =0`$.
Thus the contribution of the interaction (5) via its short-range part appears at leading order, $`m_c^0`$, within chiral perturbation theory, where $`m_c`$ stands for the current quark mass. This simple observation has far-reaching consequences: while the physics of baryons does not change much in the chiral limit (e.g. the $`\mathrm{\Delta }N`$ mass splitting persists), the long-range spin-spin nuclear force vanishes (the tensor interaction in this limit falls as $`r^{-3}`$). Note that approaching the chiral limit does not cause any infrared problems (there are no infrared divergences), and this limit can be safely reached by the substitution $`\mu =0`$ in the pion Green function. It also implies that in the chiral limit there are no contributions to baryon masses nonanalytic in $`\mu `$, and in particular to the $`\mathrm{\Delta }N`$ mass splitting, from the long-range Yukawa tail. The crucial difference between the loop diagram a) and the interaction diagram b), as far as the infrared behaviour is concerned, is obvious.
The leading contribution from the long-range part of the interaction (5) appears at the order $`\mu ^2\sim m_c`$ and is thus suppressed by a small factor $`(\frac{\mu }{\mathrm{\Lambda }_\chi })^2`$ compared to the contribution of the short-range part. The contribution at the order $`\mu ^3\sim m_c^{3/2}`$ is suppressed by the third power of this small factor.
This is perfectly consistent with the large $`N_c`$ analysis up to the order $`1/N_c^2`$ and also with an analysis which in addition incorporates ChPT . These authors find the following relations between the octet and decuplet masses at the tree level (taking into account the $`SU(3)`$ breaking):
$$M_\mathrm{\Delta }M_N=M_\mathrm{\Sigma }^{}M_\mathrm{\Sigma }+\frac{3}{2}(M_\mathrm{\Sigma }M_\mathrm{\Lambda }),$$
$$M_\mathrm{\Xi }^{}M_\mathrm{\Xi }=M_\mathrm{\Sigma }^{}M_\mathrm{\Sigma },$$
$$M_\mathrm{\Omega }M_\mathrm{\Delta }=3(M_\mathrm{\Xi }^{}M_\mathrm{\Sigma }^{}),$$
which are very well satisfied empirically. Note that exactly the same relations have been found within the chiral constituent quark model (see eq. (7.5)). It is also found in ref. that the loop corrections to the relations above appear at the order $`\mu ^2`$, which is consistent with our analysis.
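As a quick illustration (added here, not part of the original text), the three relations can be checked against approximate isospin-averaged baryon masses; the mass values in the snippet below are assumed standard inputs, used only for this sanity check.

```python
# Numerical check of the three octet-decuplet relations above, using
# approximate isospin-averaged baryon masses in MeV (assumed inputs,
# not quoted from the text).
M_N, M_Delta = 939.0, 1232.0
M_Sigma, M_Lambda, M_Sigma_star = 1193.0, 1116.0, 1385.0
M_Xi, M_Xi_star, M_Omega = 1318.0, 1533.0, 1672.0

# M_Delta - M_N = (M_Sigma* - M_Sigma) + (3/2)(M_Sigma - M_Lambda)
print(M_Delta - M_N, M_Sigma_star - M_Sigma + 1.5 * (M_Sigma - M_Lambda))
# M_Xi* - M_Xi = M_Sigma* - M_Sigma
print(M_Xi_star - M_Xi, M_Sigma_star - M_Sigma)
# M_Omega - M_Delta = 3 (M_Xi* - M_Sigma*)
print(M_Omega - M_Delta, 3.0 * (M_Xi_star - M_Sigma_star))
```

With these inputs the left- and right-hand sides agree at the level of a few to ten per cent, which is the sense in which the relations are "very well satisfied".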
The main merit of the hyperfine interaction (1) is not that it is able to explain the octet-decuplet splitting, which can also be explained in other pictures, but that it solves at the same time the long-standing problem of the relative position of the lowest positive- and negative-parity excited states . Interestingly, even analyses of the negative parity states alone, within a careful phenomenological approach or within the large $`N_c`$ study , lend additional credibility to the interaction (1).
There are, nevertheless, two obvious limitations in the use of the potential picture (5): (i) it relies on the leading term in the $`1/m`$ expansion of the constituent quark spinors, and (ii) it uses a static approximation for the pion Green function, and thus all retardation effects are neglected. How important these retardation effects are is an interesting issue and deserves a special study. However, in order to treat the retardation effects in a nonperturbative calculation one would have to solve a Bethe-Salpeter-like equation in the 3-body system. Within the static approximation the nonperturbative treatment is straightforward . It is important to realize, however, that the successes of the GBE interaction in baryon spectroscopy are based not on details of the dynamical space-time treatment, but on the flavor-spin structure and sign of the short-range interaction (1) , which is rather general and persists with any dynamical treatment.
In conclusion, I will summarize. The idea of the chiral constituent quark model is that the main features of the baryon spectrum are supplied by the spontaneous breaking of chiral symmetry, i.e. by the constituent mass of quarks and the interaction (1) between confined constituent quarks. As a consequence the $`N`$ and $`\mathrm{\Delta }`$ are split already in the chiral limit, as it must be. The expressions (in the notation of ref. )
$$M_N=M_0-15P_{00}^\pi ,$$
$$M_\mathrm{\Delta }=M_0-3P_{00}^\pi ,$$
(6)
where $`P_{00}^\pi `$ is positive, arise from the interaction (1). The long-range Yukawa tail, which has the opposite sign, represents only a small perturbation. It is in fact possible to obtain a near perfect fit of the baryon spectrum in a dynamical 3-body calculation neglecting the long-range Yukawa tail contribution, with a quality even better than that of .
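As a rough illustration (added here, not from the original), the expressions (6) tie the scale of $`P_{00}^\pi `$ directly to the empirical splitting:

```python
# Illustrative arithmetic only: from (6), M_Delta - M_N = 12 P_00, so the
# empirical splitting fixes the size of the (positive) matrix element.
# Masses in MeV are assumed standard values, not quoted from the text.
M_N, M_Delta = 939.0, 1232.0
P00 = (M_Delta - M_N) / 12.0
print(f"P_00 ~ {P00:.1f} MeV")   # about 24 MeV
```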
The implication is that baryon ChPT has no bearing on the interaction (1) nor on the expressions (6), which should be considered as leading order contributions ($`m_c^0`$, where $`m_c`$ is the current quark mass) within chiral perturbation theory. This does not mean, however, that the systematic corrections from the finite meson (current quark) mass should be ignored.
A rough idea about the importance of the finite meson mass corrections for the $`N`$ and $`\mathrm{\Delta }`$ can be obtained by comparing the contributions of the first and second terms in (5) in nonperturbative calculations . The former turns out to be much smaller than the latter. This is because of the small matter radius of the $`N`$ and $`\mathrm{\Delta }`$ . For highly excited states, however, the role of the Yukawa tail increases because of the bigger baryon size, and thus the importance of the ChPT corrections should be expected to increase. To consider these corrections systematically one definitely needs to include the loop contributions to the interactions between constituent quarks as well as the couplings to decay channels, which is a rather involved task. This task is one for constituent quark chiral perturbation theory, which is awaiting practical implementation.
I am thankful to D.O. Riska for numerous discussions of the properties of the GBE interaction over the last years. I am also indebted to the nuclear theory groups of KEK-Tanashi and the Tokyo Institute of Technology for their warm hospitality. This work is supported by a foreign guest professorship program of the Ministry of Education, Science, Sports and Culture of Japan.
Figure captions
Fig.1 Pion loop contributions to the baryon mass within the baryon chiral perturbation theory.
Fig.2 Pion loop a) and pion exchange b) contributions to the baryon mass within the chiral constituent quark model.
no-problem/9904/hep-th9904110.html
ar5iv
text
IMSc/99/04/15
hep-th/9904110
Holographic Principle in the Closed Universe :
a Resolution with Negative Pressure Matter
S. Kalyana Rama
Institute of Mathematical Sciences, C. I. T. Campus,
Taramani, CHENNAI 600 113, India.
email: krama@imsc.ernet.in
ABSTRACT
> A closed universe containing pressureless dust, more generally perfect fluid matter with pressure-to-density ratio $`w`$ in the range $`(-\frac{1}{3},\frac{1}{3})`$, violates the holographic principle applied according to the Fischler-Susskind proposal. We show, first for a class of two-fluid solutions and then for the general multifluid case, that the closed universe will obey the holographic principle if it also contains matter with $`w<-\frac{1}{3}`$, and if the present value of its total density is sufficiently close to the critical density. It is possible that such matter can be realised by some form of ‘quintessence’, much studied recently.
PACS numbers: 98.80.Cq, 98.80.Bp
1. The holographic principle implies that the degrees of freedom in a spatial region can all be encoded on its boundary, with a density not exceeding one degree of freedom per Planck cell . Accordingly, the entropy in a spatial region does not exceed its boundary area measured in Planck units. Moreover, the physics of the bulk is describable by the physics on the boundary. This has, indeed, been realised recently for some anti de Sitter spaces .
Fischler and Susskind (FS) have proposed how to apply the holographic principle in cosmology, and showed that our universe, if flat or open, obeys this principle as long as its size is non zero - that is, non-Planckian. If closed, however, it violates this principle in the future even while its size is non zero. In fact, in some cases, the violation occurs while the universe is still expanding. This may indicate that the closed universe is to be excluded as inconsistent, or that some new behaviour must set in to accommodate the holographic principle .
The holographic principle has since been applied in the context of pre big bang scenario , singularity problem , and inflation , mostly for flat universe. Recently, there have been two alternative proposals for the implementation of the holographic principle: by Easther and Lowe, based on second law of thermodynamics ; and, by Bak and Rey, using the ‘cosmological apparent horizon’ instead of particle horizon . In both of these implementations, the closed universe also obeys the holographic principle naturally. Therefore, these proposals are perhaps the more natural ones than the FS proposal.
Nevertheless, it is of interest to study whether or not a closed universe is indeed to be excluded as inconsistent with the holographic principle, applied according to the FS proposal. We study this issue in this paper.
Throughout in the following we consider a closed universe, initially of zero size and expanding. It is assumed to contain more than one type of non interacting perfect fluid matter. The pressure $`p_i`$ and the density $`\rho _i`$ of the $`i^{\mathrm{th}}`$ type of matter is related by the equation of state
$$p_i=w_i\rho _i,\qquad -1\le w_i\le 1.$$
(1)
The parameter $`w`$ denotes the nature of the matter: $`w=0`$ for pressureless dust, $`w=\frac{1}{3}`$ for radiation, etc. Furthermore, we assume that one of the $`w`$’s, say $`w_1`$, lies in the range
$$-\frac{1}{3}<w_1<\frac{1}{3}$$
(2)
so that if the corresponding matter were the only one present, then the universe violates the holographic principle in the future (see ).
We study the explicit solution for the two-fluid case with $`w_1+w_2=-\frac{2}{3}`$. We find that the closed universe obeys the holographic principle throughout its future if and only if the present value of the total density is sufficiently close to the critical density.
Using this solution, we show furthermore that the closed universe, containing at least one matter component with its $`w`$ lying in the range (2), obeys the holographic principle, applied according to the FS proposal, if it also contains at least one other matter component with its $`w`$ satisfying $`w<-\frac{1}{3}`$, and if the present value of its total density is sufficiently close to the critical density. Thus, closed universes need not be excluded as inconsistent, nor does any new behaviour need to set in; the above requirements will suffice.
If these conditions are also necessary then the holographic principle, applied according to the FS proposal, can be said to predict that if the total density at present of our universe exceeds the critical density, no matter by how small an amount, then it is closed and, hence, must also contain matter with $`w<-\frac{1}{3}`$.
We make a few remarks about matter with negative values of $`w`$. No known physical matter is of this type, except the cosmological constant ($`w=-1`$). However, such matter, with $`w\le -\frac{1}{\sqrt{3}}`$, was found necessary in in avoiding the big bang singularity within low energy string theory. Furthermore, Dirichlet 0- and/or $`(-1)`$-branes were envisaged as possible candidates for such matter. Also, the recent discovery, through analyses of distant Supernovae, that the universe is accelerating at present has sparked an enormous interest in the study of matter with $`w<0`$. A realisation of such matter is the so called ‘quintessence’ - a time varying, spatially inhomogeneous component of the energy density of the universe with negative pressure, much studied recently . Some of the references which study various candidates for matter with negative pressure and/or quintessence are given in . It is possible that matter with $`w<-\frac{1}{3}`$, required here, can be realised by one of the above candidates.
In the following, we first outline how the closed universe violates the holographic principle in the future. We then present two-fluid solutions, and obtain the conditions under which the holographic principle is obeyed. We then show that these conditions are also valid in general.
2. The line element for the homogeneous isotropic universe is given by
$$ds^2=dt^2+R^2(t)\left(\frac{dr^2}{1kr^2}+r^2d\mathrm{\Omega }_2^2\right),$$
where $`R`$ is the scale factor, $`d\mathrm{\Omega }_2`$ is the standard line element on the unit two dimensional sphere, and $`k=0,-1`$, or $`+1`$ for flat, open, or closed universe respectively. The above line element can also be written as
$$ds^2=dt^2+R^2(t)(d\chi ^2+r^2d\mathrm{\Omega }_2^2),$$
where $`r=\chi ,\mathrm{sinh}\chi `$, or $`\mathrm{sin}\chi `$ for $`k=0,-1`$, or $`+1`$ respectively. The coordinate size of the horizon is given by the parameter
$$\chi =\int _0^t\frac{dt}{R}.$$
The holographic bound is given, up to numerical factors of $`𝒪(1)`$, by $`S\le A`$ where $`S`$ is the entropy in a given region and $`A`$ is the area of its boundary in Planck units . Applied to the closed universe according to the FS proposal, it implies, up to numerical factors of $`𝒪(1)`$, that
$$\frac{S}{A}=\frac{\sigma (2\chi -\mathrm{sin}2\chi )}{R^2\mathrm{sin}^2\chi }\le 1,$$
(3)
where $`\sigma `$ is the constant comoving entropy density .
For a closed universe, initially of zero size and expanding, and containing only one type of matter with $`w>-\frac{1}{3}`$, one obtains , see below also,
$$R^{1+3w}\propto \mathrm{sin}^2\frac{(1+3w)\chi }{2}.$$
(4)
It can be seen that if $`w\ge \frac{1}{3}`$, then the FS bound (3) will be violated only when $`R\to 0`$, which is the Planckian regime. However, if $`-\frac{1}{3}<w<\frac{1}{3}`$, then the FS bound (3) will be violated as $`\chi \to \pi `$. The violation occurs even while $`R`$ is non zero. In fact, when $`w\le 0`$, the violation occurs while the universe is still expanding. This may indicate that such universes are to be excluded as inconsistent, or that some new behaviour must set in to accommodate the holographic principle, applied according to the FS proposal .
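The scaling (4) can also be verified symbolically; the following sketch (ours, not from the paper) checks that it solves the closed-universe Friedmann equation written in the conformal parameter $`\chi `$, where $`\dot{R}^2=-1+CR^{-(1+3w)}`$ and $`dt=Rd\chi `$.

```python
import sympy as sp

# Check that R(chi) = A*sin(n*chi/2)**(2/n), with n = 1+3w, satisfies
# (dR/dchi)^2 = R^2 * Rdot^2 = -R^2 + C*R**(2-n), where C = A**n fixes the
# normalisation of the fluid density. Here n is set to 1 (dust, w = 0);
# any positive rational value works the same way.
chi, A = sp.symbols('chi A', positive=True)
n = sp.Rational(1)
R = A * sp.sin(n*chi/2)**(sp.Rational(2, 1)/n)
C = A**n
lhs = sp.diff(R, chi)**2
rhs = -R**2 + C * R**(2 - n)
print(sp.simplify(lhs - rhs))   # prints 0
```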
3. The above conclusion is valid if the universe contains one type of matter only. In reality, however, more than one type of matter will be present in various, perhaps subdominant, quantities. It is then important to study such multifluid solutions before excluding closed universes as inconsistent.
The general multifluid solutions are difficult to obtain. In the few cases where they exist , they typically involve elliptic functions and are often not in a useful form. However, for a class of models, we now present general solutions in a form useful for our purposes.
Assume that the universe contains different types of non interacting perfect fluid matter, with equations of state given as in (1). We assume, without loss of generality, that $`w_i\ne -\frac{1}{3}`$ since the effect of such matter is the same as that of the $`k`$-term in equation (5) below. Also, define
$$\mathrm{\Omega }_i\equiv \frac{\rho _{0i}}{\rho _c},\qquad \rho _c\equiv \frac{3H_0^2}{8\pi G},\qquad H_0\equiv \left(\frac{\dot{R}}{R}\right)_0$$
where $`\rho _{0i}`$ ($`\ge 0`$) is the present value of $`\rho _i`$, $`\rho _c`$ is the critical density, $`G`$ is Newton’s constant, and $`H_0`$ is the present value of the Hubble parameter. Einstein’s equations of motion can then be written as
$`\dot{R}^2`$ $`=`$ $`-k+H_0^2R_0^2{\displaystyle \underset{i}{\sum }}\mathrm{\Omega }_i\left({\displaystyle \frac{R}{R_0}}\right)^{-(1+3w_i)}\equiv f(R)`$ (5)
$`\rho _i`$ $`=`$ $`\rho _{0i}\left({\displaystyle \frac{R}{R_0}}\right)^{-3(1+w_i)},`$ (6)
where $`R_0`$ is the present value of $`R`$, and the upper dots denote the time derivatives. The present value of $`\dot{R}`$ gives the relation
$$k=H_0^2R_0^2(\underset{i}{\sum }\mathrm{\Omega }_i-1).$$
(7)
Throughout in the following, let $`w_1`$ lie in the range given in (2). Thus, if the corresponding matter were the only one present, then the universe violates the holographic principle in the future (see ). We now define $`y`$ and $`x`$ as follows:
$`{\displaystyle \frac{R}{R_0}}`$ $`=`$ $`\left(H_0^2R_0^2\mathrm{\Omega }_1\right)^{\frac{1}{1+3w_1}}y^{\frac{2a}{1+3w_1}}`$ (8)
$`{\displaystyle \frac{dt}{R_0}}`$ $`=`$ $`{\displaystyle \frac{2a}{1+3w_1}}\left(H_0^2R_0^2\mathrm{\Omega }_1\right)^{\frac{1}{1+3w_1}}y^{\frac{2aq}{1+3w_1}}dx,`$ (9)
where $`a`$ and $`q`$ are positive constants to be chosen suitably, and we set $`x=0`$ at $`t=\chi =0`$. Clearly, the parameter $`\chi `$ is given by
$$\chi =\int _0^t\frac{dt}{R}=\frac{2a}{1+3w_1}\int _0^x𝑑x\,y^{\frac{2a(q-1)}{1+3w_1}}.$$
(10)
In terms of $`y`$ and $`x`$, equation (5) becomes
$$\left(\frac{dy}{dx}\right)^2=-ky^\alpha +\underset{i}{\sum }c_iy^{\alpha _i}\equiv g(y),$$
(11)
where the exponents $`\alpha `$ and $`\alpha _i`$, and the constants $`c_i`$, are given by
$`\alpha `$ $`=`$ $`2+{\displaystyle \frac{4a(q-1)}{1+3w_1}}`$ (12)
$`\alpha _i`$ $`=`$ $`\alpha -{\displaystyle \frac{2a(1+3w_i)}{1+3w_1}}`$ (13)
$`c_i`$ $`=`$ $`H_0^2R_0^2\mathrm{\Omega }_i\left(H_0^2R_0^2\mathrm{\Omega }_1\right)^{-\frac{1+3w_i}{1+3w_1}}.`$ (14)
Note that $`c_1=1`$. The function $`f(R)`$ in (5), expressed in terms of $`y`$, becomes $`f(R)=y^{-\alpha }g(y)`$.
From now on, we set $`k=+1`$. If $`q=a=1`$ then $`(\alpha ,\alpha _1)=(2,0)`$, and the solution (4) for the single fluid case follows trivially. Consider two-fluid cases: $`i=1,2`$. The boundary conditions, corresponding to an universe, initially of zero size and expanding, are $`R=y=0`$ and $`\dot{R}>0`$ at $`t=x=0`$.
A: Let $`q=a=1`$, and $`(1+3w_1)=2(1+3w_2)`$. To be definite, let $`(w_1,w_2)=(\frac{1}{3},0)`$ and $`(\mathrm{\Omega }_1,\mathrm{\Omega }_2)=(\mathrm{\Omega }_r,\mathrm{\Omega }_d)`$ denoting radiation and pressureless dust respectively. Then $`(\alpha ,\alpha _1,\alpha _2)=(2,0,1)`$, and equation (11) becomes
$$\left(\frac{dy}{dx}\right)^2=1+cyy^2,c=\frac{H_0R_0\mathrm{\Omega }_d}{\sqrt{\mathrm{\Omega }_r}}.$$
The solution for $`R`$ is given, after straightforward algebra, by
$$R=AR_0(\mathrm{sin}(\chi -\alpha )+\mathrm{sin}\alpha ),\qquad t=\int _0^\chi 𝑑\chi \,R,$$
(15)
where the constants $`A`$ and $`\alpha `$ are given by
$$4A^2=H_0^2R_0^2(4\mathrm{\Omega }_r+H_0^2R_0^2\mathrm{\Omega }_d^2),\mathrm{tan}\alpha =\frac{c}{2}.$$
It follows from the above expressions that
$$R|_{\chi =\pi }=2AR_0\mathrm{sin}\alpha =H_0^2R_0^3\mathrm{\Omega }_d.$$
Hence, in a closed universe with both radiation and dust present, the FS bound (3) will be violated even while $`R`$ ($`=R|_{\chi =\pi }`$) is non zero. This is true irrespective of the amount of radiation present, however large it may be. Only when $`\mathrm{\Omega }_d=0`$ exactly will the FS bound (3) be obeyed all the way until $`R=0`$, i.e. until the universe recollapses to zero size.
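The closed form (15) is easy to cross-check numerically; in the sketch below (ours, with hypothetical $`\mathrm{\Omega }`$ values chosen only so that the universe is closed), the integral $`\chi =\int dR/(R\sqrt{f})`$ is compared with the inversion of (15), and $`R|_{\chi =\pi }`$ with $`H_0^2R_0^3\mathrm{\Omega }_d`$.

```python
import numpy as np
from scipy.integrate import quad

# Cross-check of the radiation+dust solution (15); Omega values are
# hypothetical, chosen only so that the total density exceeds critical.
Om_r, Om_d, R0 = 0.3, 1.2, 1.0
H0sq = 1.0 / (R0**2 * (Om_r + Om_d - 1.0))          # closure relation (7), k=+1
c = np.sqrt(H0sq) * R0 * Om_d / np.sqrt(Om_r)
A = 0.5 * np.sqrt(H0sq*R0**2 * (4*Om_r + H0sq*R0**2*Om_d**2))
alpha = np.arctan(0.5 * c)

def f(R):                                           # Rdot^2 from eq. (5), k=+1
    return -1.0 + H0sq*R0**2 * (Om_r*(R/R0)**-2 + Om_d*(R/R0)**-1)

# chi(R) on the expanding branch, once from the integral and once from (15)
R = 0.9 * A * R0 * (1.0 + np.sin(alpha))            # some R below the maximum
chi_int = quad(lambda r: 1.0/(r*np.sqrt(f(r))), 1e-12, R)[0]
chi_ana = alpha + np.arcsin(R/(A*R0) - np.sin(alpha))
print(chi_int, chi_ana)                             # should agree

# the scale factor when chi reaches pi, compared with H0^2 R0^3 Omega_d
print(A*R0*(np.sin(np.pi - alpha) + np.sin(alpha)), H0sq * R0**3 * Om_d)
```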
B: Let $`2a=1`$, and $`2(q-1)=-(1+3w_1)=(1+3w_2)`$. (For example, $`(w_1,w_2)=(0,-\frac{2}{3})`$.) Then $`(\alpha ,\alpha _1,\alpha _2)=(1,0,2)`$, and equation (11) becomes
$$\left(\frac{dy}{dx}\right)^2=1-y+cy^2,\qquad c=H_0^4R_0^4\mathrm{\Omega }_1\mathrm{\Omega }_2.$$
Using equation (7), the constant $`c`$ can be written as
$$c=\frac{\mathrm{\Omega }_1\mathrm{\Omega }_2}{(\mathrm{\Omega }_1+\mathrm{\Omega }_2-1)^2},$$
(16)
where $`\mathrm{\Omega }_1+\mathrm{\Omega }_2>1`$ since $`k=+1`$. The parameter $`\chi `$ and the time $`t`$ are given by
$`\chi `$ $`=`$ $`{\displaystyle \frac{1}{1+3w_1}}\int _0^x{\displaystyle \frac{dx}{\sqrt{y}}}`$ (17)
$`t`$ $`=`$ $`{\displaystyle \frac{R_0}{1+3w_1}}\left(H_0^2R_0^2\mathrm{\Omega }_1\right)^{\frac{1}{1+3w_1}}\int _0^x{\displaystyle \frac{dx}{\sqrt{y}}}y^{\frac{1}{1+3w_1}}.`$ (18)
The details of the solution depend on whether $`f_{\mathrm{Min}}=\mathrm{Min}(\frac{1}{y}-1+cy)=(2\sqrt{c}-1)`$ is negative or positive. Consider now each of these cases.
$`f_{\mathrm{Min}}<0\Leftrightarrow 2\sqrt{c}<1`$
The solution for $`y`$ is given, after straightforward algebra, by
$$y\sqrt{c}\,\mathrm{sinh}\alpha =\mathrm{cosh}\alpha -\mathrm{cosh}(x\sqrt{c}-\alpha ),\qquad \mathrm{tanh}\alpha \equiv 2\sqrt{c}.$$
(19)
Thus, $`y`$ starts from zero at $`x=0`$, expands to a maximum, given by $`y_{\mathrm{max}}\sqrt{c}=\mathrm{tanh}\frac{\alpha }{2}`$, at $`x\sqrt{c}=\alpha `$, and then recollapses to zero at $`x\sqrt{c}=2\alpha `$. It can be seen from equation (18) that the recollapse occurs in a finite time.
The parameter $`\chi `$ is given, after straightforward algebra using equation (17) and the formula (2.464(32)) of , by
$$\frac{1+3w_1}{2}\chi =\sqrt{\frac{2\mathrm{cosh}\alpha }{1+\mathrm{cosh}\alpha }}F(\varphi ,\beta ),$$
(20)
where
$$F(\varphi ,\beta )=_0^\varphi \frac{d\theta }{\sqrt{1\beta ^2\mathrm{sin}^2\theta }}$$
(21)
is the elliptic integral of the first kind. The parameters $`\varphi `$ and $`\beta `$ are given by
$$\mathrm{sin}^2\varphi =\frac{y}{y_{\mathrm{max}}},\beta =\mathrm{tanh}\frac{\alpha }{2}<1.$$
Thus, as $`y`$ expands from zero to $`y_{\mathrm{max}}`$ and then recollapses to zero, $`\varphi `$ increases monotonically from $`0`$ to $`\frac{\pi }{2}`$ to $`\pi `$. Also, the parameter $`\chi `$ increases monotonically with $`\varphi `$ and, since $`\beta ^2<1`$, remains finite always.
It is now important to find out whether or not the value of $`\chi `$ at the time of recollapse, $`\chi _*\equiv \chi |_{\varphi =\pi }`$, exceeds $`\pi `$. Clearly, if $`\chi _*>\pi `$ then the FS bound (3) will be violated even while $`R`$ is non zero. If $`\chi _*\le \pi `$ then the FS bound (3) will be obeyed all the way until $`R=0`$, i.e. until the universe recollapses to zero size. Towards this end, note from equations (20) and (21) that
$$\frac{1+3w_1}{2}\chi >\varphi \mathrm{and},\mathrm{hence},\chi _*>\frac{2\pi }{1+3w_1}.$$
Thus, for $`w_1<\frac{1}{3}`$, $`\chi _*>\pi `$ and, therefore, the FS bound (3) will be violated even while $`R`$ is non zero. In fact, for dust ($`w_1=0`$), the violation occurs while the universe is still expanding; that is, $`\chi =\pi `$ is reached even while $`\varphi <\frac{\pi }{2}`$.
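A quick numerical evaluation of (20) at $`\varphi =\pi `$ (our own check; the values of $`c`$ are illustrative only) confirms that $`\chi _*`$ always exceeds $`2\pi /(1+3w_1)`$, approaching it only as $`c\to 0`$.

```python
import numpy as np
from scipy.special import ellipkinc   # incomplete elliptic integral F(phi, m)

# chi at recollapse, eq. (20) with phi = pi; valid for 0 < c < 1/4
# (the recollapsing case).
def chi_star(c, w1):
    alpha = np.arctanh(2.0 * np.sqrt(c))
    beta2 = np.tanh(alpha / 2.0)**2              # scipy uses m = beta^2
    pref = np.sqrt(2.0 * np.cosh(alpha) / (1.0 + np.cosh(alpha)))
    return (2.0 / (1.0 + 3.0*w1)) * pref * ellipkinc(np.pi, beta2)

for c in (0.01, 0.10, 0.20):
    print(c, chi_star(c, w1=0.0), ">", 2.0 * np.pi)
```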
$`f_{\mathrm{Min}}>0\Leftrightarrow 2\sqrt{c}>1`$
The solution for $`y`$ is given, after straightforward algebra, by
$$y\sqrt{c}\,\mathrm{cosh}\alpha =\mathrm{sinh}(x\sqrt{c}-\alpha )+\mathrm{sinh}\alpha ,\qquad \mathrm{tanh}\alpha \equiv \frac{1}{2\sqrt{c}}.$$
(22)
Thus, $`y`$ starts from zero at $`x=0`$ and expands to infinity as $`x\to \mathrm{}`$. It can be seen from equation (18) that, for $`w_1<\frac{1}{3}`$, the required time $`t`$ also diverges, $`t\to \mathrm{}`$.
The parameter $`\chi `$ is given, after straightforward algebra using equation (17) and the formula (2.464(16)) of , by
$$(1+3w_1)\chi =c^{-\frac{1}{4}}F(\varphi ,\beta ),$$
(23)
with $`F(\varphi ,\beta )`$ as given in (21). The parameters $`\varphi `$ and $`\beta `$ are now given by
$$\mathrm{cos}\varphi =\frac{1-y\sqrt{c}}{1+y\sqrt{c}},\qquad \beta ^2=\frac{1+\mathrm{tanh}\alpha }{2}<1.$$
Thus, as $`y`$ expands from $`0`$ to infinity, $`\varphi `$ increases monotonically from $`0`$ to $`\pi `$. Also, $`\chi `$ increases monotonically with $`\varphi `$ and, since $`\beta ^2<1`$, remains finite always.
For the same reasons as given before, it is now important to find out whether or not the value of $`\chi `$ as $`y\to \mathrm{}`$, $`\chi _*\equiv \chi |_{\varphi =\pi }`$, exceeds $`\pi `$. Clearly, if $`\chi _*>\pi `$ then the FS bound (3) will be violated even while $`R`$ is non zero and, in fact, increasing. If $`\chi _*<\pi `$ then the FS bound (3) will be obeyed for all times $`t`$. Towards this end, note from equation (23) that
$$\chi _*=\frac{2c^{-\frac{1}{4}}}{1+3w_1}K(\beta )$$
(24)
where we have used $`F(\pi ,\beta )=2F(\frac{\pi }{2},\beta )\equiv 2K(\beta )`$. Here, $`K(\beta )`$ is the complete elliptic integral, and is finite since $`\beta ^2<1`$. Thus, it follows from equation (24), the $`c`$-dependence of $`\beta `$, and the properties of $`K(\beta )`$, that $`\chi _*<\pi `$ implies that $`c>c_*`$, where $`c_*`$ is the solution of equation (24) when $`\chi _*=\pi `$. This, in turn, implies that
$$f_{\mathrm{Min}}>f_*\equiv 2\sqrt{c_*}-1$$
and also, from equation (16), that $`\mathrm{\Omega }_2<\mathrm{\Omega }_2^*`$ for a given value of $`\mathrm{\Omega }_1`$.
It also follows from equation (24), the $`c`$-dependence of $`\beta `$, and the properties of $`K(\beta )`$, that $`c_*`$ increases and, hence, $`\mathrm{\Omega }_2^*`$ decreases, as $`w_1`$ decreases. However, an explicit expression for $`c_*`$ is not available, although one can approximately determine $`c_*`$ using the tabulated values of $`K(\beta )`$. Then, $`\mathrm{\Omega }_2^*`$ can be determined for a given $`\mathrm{\Omega }_1`$.
For example, if $`w_1=0`$ then $`c_*\simeq 2.684`$, and
$$(\mathrm{\Omega }_1,\mathrm{\Omega }_2^*)\simeq (0.1,1.103),\;(0.3,1.041),\;(0.5,0.912),\mathrm{}$$
If $`w_1=\frac{1}{3}`$, the highest value possible in the present solution, then $`c_*\simeq 0.416`$, and
$$(\mathrm{\Omega }_1,\mathrm{\Omega }_2^*)\simeq (0.1,1.501),\;(0.3,1.857),\;(0.5,2.082),\mathrm{}$$
In all these cases, we have $`\mathrm{\Omega }_1+\mathrm{\Omega }_2^*=1+𝒪(1)`$. Thus, we see that $`\chi _*<\pi `$ and, hence, the FS bound (3) will be obeyed if matter, with $`w<-\frac{1}{3}`$, is present and if
$$0<\mathrm{\Omega }_1+\mathrm{\Omega }_2-1\lesssim 𝒪(1).$$
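The quoted values of $`c_*`$ are easy to reproduce; the sketch below (ours, not part of the paper) solves $`\chi _*(c)=\pi `$ using equation (24).

```python
import numpy as np
from scipy.special import ellipk       # complete elliptic integral K(m)
from scipy.optimize import brentq

# chi_* from eq. (24): beta^2 = (1 + tanh(alpha))/2 with
# tanh(alpha) = 1/(2 sqrt(c)); the expression requires c > 1/4.
def chi_star(c, w1):
    t = 1.0 / (2.0 * np.sqrt(c))
    m = 0.5 * (1.0 + t)                 # scipy's parameter m equals beta^2
    return 2.0 * c**(-0.25) * ellipk(m) / (1.0 + 3.0*w1)

for w1 in (0.0, 1.0/3.0):
    c_star = brentq(lambda c: chi_star(c, w1) - np.pi, 0.26, 50.0)
    print(w1, c_star)                   # ~2.68 for w1 = 0, ~0.42 for w1 = 1/3
```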
4. The above result is obtained for the two-fluid solutions where $`(1+3w_1)=-(1+3w_2)`$. However, this result is valid for general multifluid solutions also. Namely, the FS bound (3) will be obeyed if
(1) at least one matter component, with $`w<-\frac{1}{3}`$, is present, and
(2) the present value of the total density is sufficiently close to the critical density, i.e. $`(\sum _i\mathrm{\Omega }_i-1)`$, which must be positive since $`k=+1`$, is sufficiently small.
This can be proved as follows. Let the multifluid system contain at least two types of matter, one with its $`w\equiv w_1>-\frac{1}{3}`$, and another with its $`w\equiv w_2<-\frac{1}{3}`$. It may now contain other types of matter also, with no further restrictions on $`\{w_i\}`$, $`i=1,2,\mathrm{}`$. Then, the function
$$h(R)\equiv \underset{i}{\sum }\mathrm{\Omega }_i\left(\frac{R}{R_0}\right)^{-(1+3w_i)}$$
(25)
has its nonzero, and only, minimum at a finite value of $`R`$. That is,
$$h(R)\ge h(R_m)>0,\mathrm{\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}}0<R_m<\mathrm{}.$$
We now consider an auxiliary two-fluid system, with the corresponding function given by
$$\stackrel{~}{h}(R)\equiv \underset{j=1,2}{\sum }\stackrel{~}{\mathrm{\Omega }}_j\left(\frac{R}{R_0}\right)^{-(1+3\stackrel{~}{w}_j)},$$
(26)
where the tildes refer to the auxiliary system. By a simple, but slightly involved, analysis it can be shown that the parameters $`\stackrel{~}{w}_j`$ and $`\stackrel{~}{\mathrm{\Omega }}_j`$, $`j=1,2`$, can be chosen <sup>1</sup><sup>1</sup>1For example, choose $`\stackrel{~}{w}_1`$ such that $`\mathrm{Min}\{|1+3w_i|\}>1+3\stackrel{~}{w}_1>0`$, and $`\stackrel{~}{\mathrm{\Omega }}_j`$’s such that $`\stackrel{~}{h}(R)`$ also has its nonzero, and only, minimum at $`R=R_m`$ and $`h(R_m)>\stackrel{~}{h}(R_m)>0`$, where the inequalities are obeyed by sufficient margins. such that $`(1+3\stackrel{~}{w}_1)=-(1+3\stackrel{~}{w}_2)>0`$ and
$$h(R)>\stackrel{~}{h}(R),\mathrm{\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}}0\le R\le \mathrm{}.$$
The parameter $`H_0R_0`$, taken to be the same for both systems, is given by (see equation (7))
$$H_0^2R_0^2(\underset{i}{\sum }\mathrm{\Omega }_i-1)=1.$$
The solution for the auxiliary system is nothing but the one given in the previous section, where now the parameter $`\stackrel{~}{c}=H_0^4R_0^4\stackrel{~}{\mathrm{\Omega }}_1\stackrel{~}{\mathrm{\Omega }}_2`$.
Note that
$$f(R)=-1+H_0^2R_0^2h(R)\mathrm{and}\chi =\int _0^t\frac{dt}{R}=\int _0^R\frac{dR}{R\sqrt{f}},$$
and similarly for $`\stackrel{~}{f}(R)`$ and $`\stackrel{~}{\chi }`$. Since $`h(R)>\stackrel{~}{h}(R)`$ and $`H_0R_0`$ is the same for both systems, it follows that
$$f(R)>\stackrel{~}{f}(R)\mathrm{and},\mathrm{hence},\chi <\stackrel{~}{\chi }.$$
However, it is shown in the previous section that $`\stackrel{~}{\chi }<\pi `$ if $`\stackrel{~}{c}`$ is sufficiently large. Therefore, it now follows that the parameter
$$\chi <\pi $$
and, hence, the FS bound (3) is obeyed in the future, if $`(\sum _i\mathrm{\Omega }_i-1)`$, which must be positive, is sufficiently small; that is, if the present value of the total density is sufficiently close to the critical density.
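A simple numerical illustration of this argument (ours, with hypothetical density parameters) is the following: for radiation + dust + a $`w=-\frac{2}{3}`$ component, the total $`\chi `$ stays below $`\pi `$ when the density overshoot is small, and exceeds it when the overshoot is large.

```python
import numpy as np
from scipy.integrate import quad

# chi = \int dR/(R sqrt(f)) for a closed three-fluid universe; both density
# choices below keep f(R) > 0 for all R (ever-expanding case).
def chi_total(Om, w, R0=1.0):
    H0sq = 1.0 / (R0**2 * (sum(Om) - 1.0))            # closure relation (7), k=+1
    def f(R):
        h = sum(O * (R/R0)**(-(1.0 + 3.0*wi)) for O, wi in zip(Om, w))
        return -1.0 + H0sq * R0**2 * h                # = Rdot^2
    return quad(lambda R: 1.0/(R*np.sqrt(f(R))), 1e-12, np.inf, limit=500)[0]

w = (1.0/3.0, 0.0, -2.0/3.0)                          # radiation, dust, "quintessence"
print(chi_total((0.05, 0.30, 0.70), w))               # total density 1.05 -> chi < pi
print(chi_total((0.05, 0.30, 1.50), w))               # total density 1.85 -> chi > pi
```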
We have thus shown that the closed universe, containing at least one matter component with its $`w`$ lying in the range (2), obeys the holographic principle, applied according to the FS proposal, if it also contains at least one other matter component with its $`w`$ satisfying $`w<-\frac{1}{3}`$, and if the present value of its total density is sufficiently close to the critical density. Thus, closed universes need not be excluded as inconsistent, nor does any new behaviour need to set in; the above requirements will suffice.
If these conditions are also necessary, then they can be taken as predictions of the holographic principle, applied to the closed universe according to the FS proposal. Thus, if the total density at present of our universe, which certainly contains pressureless dust, exceeds the critical density, no matter by how small an amount, then it is closed, $`k=+1`$. The holographic principle, applied according to the FS proposal, would then require that our universe must also contain matter with $`w<-\frac{1}{3}`$. It is possible that such matter can be realised by some form of quintessence, much studied recently.
no-problem/9904/patt-sol9904006.html
ar5iv
text
# The moment method in general Nonlinear Schrödinger Equations
## Abstract
In this paper we develop a new approximation method valid for a wide family of nonlinear wave equations of Nonlinear Schrödinger type. The result is a reduced set of ordinary differential equations for a finite set of parameters measuring global properties of the solutions, named momenta. We prove that these equations provide exact results in some relevant cases and show how to impose reasonable approximations that can be regarded as a perturbative approach and as an extension of the time dependent variational method.
The solution of nonlinear wave equations representing physically relevant phenomena is a task of the highest interest. However, it is not possible to find exact solutions except for a few simple cases where one is lucky to integrate the equations involved. In particular, in the 70’s some mathematical techniques were discovered which allowed the integration of several relevant nonlinear wave equations . So the development of rigorous approximation methods is of interest in those cases where the equations are known to be non-integrable.
One family of nonlinear wave equations with lots of practical applications is that of Nonlinear Schrödinger Equations (NSE) , which arise in plasma physics, biomolecule dynamics, fundamentals of quantum mechanics, beam physics, etc., but specially in the fields of Nonlinear Optics and Bose–Einstein condensation . In the last two fields a great variety of these equations appear involving different spatial dimensionalities, nonlinear terms (saturation, polynomial, nonlocal, losses, etc.) and number of coupled equations.
One common approximate theoretical approach to the analysis of the dynamics involved in those problems is to assume a fixed shape for the solution with a finite set of time dependent parameters so that the dimensionality is made finite at the cost of loss of information on the solution (many degrees of freedom are lost). This method receives many denominations depending on the context: collective coordinate technique, time-dependent variational method, equivalent particle approach, energy balance equations, etc.
Although not explicitly stated, most of those methods can be reduced to a more elegant formulation, which is the time dependent variational technique, originally developed by Anderson for one dimensional problems based on Ritz’s optimization procedure. This approximate technique is a good tool to study the propagation of distributions having simple shape. If the shape of the actual solution is close to the trial function, the outcome of the variational method will be in good agreement with the real solutions; otherwise it may be very rough or even fail . Despite this fact the technique has been used in many physical situations. In Nonlinear Optics it has been applied to many problems, some of which are listed in Ref. . The method has been applied to many other physical problems where nonlinear wave equations (in particular NSEs) arise, including random perturbations , nonlocal equations , collapse phenomena , propagation and scattering of nonlinear waves , etc. A review of the application of the technique with emphasis on problems with different scales (focused on condensed matter) is given in Ref. . This technique has also been used in the last years in the framework of Bose-Einstein condensation (BEC) applications to explain the low energy excitation spectrum of single and double condensates, collapse dynamics , and many other problems , etc.
In this letter we develop a completely different technique called the moment method . This method is based on the definition of several integral parameters whose evolution can be computed in closed form and has been used to obtain exact results in particular applications . We provide here a general framework for its application as well as several ways to treat it systematically as a perturbative technique.
The general NSE and moment equations.- Let us consider the $`n`$-dimensional NSE
$$i\frac{\partial \psi }{\partial t}=-\frac{1}{2}\mathrm{\Delta }\psi +V(\stackrel{}{r})\psi +g(|\psi |^2,t)\psi -i\sigma (|\psi |^2,t)\psi $$
(1)
where $`\mathrm{\Delta }=\sum _j\partial ^2/\partial x_j^2`$, and $`V(\stackrel{}{r})=\sum _k\frac{1}{2}\omega _k^2(t)x_k^2`$ is a time and spatially dependent parabolic potential, which has been included because it is always present in BEC problems. To aid readability, we will separate the solution into modulus and phase, $`\psi =\sqrt{\rho }e^{i\varphi }`$, and define an interaction energy density $`G(\rho )`$ through $`g=\partial G/\partial \rho `$. The nonlinear terms will be analytic functions of $`\rho `$ such that $`g(\rho ),G(\rho )\to 0`$ as $`\rho \to 0`$.
In Table I we define the so-called momenta of $`\psi `$. Some of them are related to the moments of the distribution $`\rho =|\psi |^2`$, and they all have a physical meaning (see Ref. for their interpretation in Optics). The evolution equations for these quantities are
$`{\displaystyle \frac{dN}{dt}}`$ $`=`$ $`-2{\displaystyle \int \sigma \rho },`$ (3)
$`{\displaystyle \frac{dX_i}{dt}}`$ $`=`$ $`V_i-2{\displaystyle \int \sigma x_i\rho },`$ (4)
$`{\displaystyle \frac{dV_i}{dt}}`$ $`=`$ $`-\omega _iX_i-2{\displaystyle \int \sigma \frac{\partial \varphi }{\partial x_i}\rho },`$ (5)
$`{\displaystyle \frac{dW_i}{dt}}`$ $`=`$ $`B_i-2{\displaystyle \int \sigma x_i^2\rho },`$ (6)
$`{\displaystyle \frac{dB_i}{dt}}`$ $`=`$ $`4K_i-2\omega _i^2W_i-2{\displaystyle \int DG}-4{\displaystyle \int \sigma x_i\frac{\partial \varphi }{\partial x_i}\rho },`$ (7)
$`{\displaystyle \frac{dK_i}{dt}}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}\omega _i^2B_i-{\displaystyle \int DG\frac{\partial ^2\varphi }{\partial x_i^2}}`$ (8)
$`+`$ $`{\displaystyle \int \sigma \left[\sqrt{\rho }\frac{\partial ^2\sqrt{\rho }}{\partial x_i^2}-\rho \left(\frac{\partial \varphi }{\partial x_i}\right)^2\right]},`$ (9)
$`{\displaystyle \frac{dJ}{dt}}`$ $`=`$ $`{\displaystyle \underset{i}{\sum }}{\displaystyle \int DG\frac{\partial ^2\varphi }{\partial x_i^2}}-2{\displaystyle \int \sigma g\rho }+{\displaystyle \int \frac{\partial G}{\partial t}}.`$ (10)
Here $`DG`$ is a shorthand notation for $`G(\rho )-g(\rho )\rho `$. Throughout this paper we will concentrate on the most common case $`\sigma (|\psi |^2,t)=\sigma (t)`$, for which the equations are
$`{\displaystyle \frac{dN}{dt}}`$ $`=`$ $`-2\sigma N,`$ (12)
$`{\displaystyle \frac{dX_i}{dt}}`$ $`=`$ $`V_i-2\sigma X_i,`$ (13)
$`{\displaystyle \frac{dV_i}{dt}}`$ $`=`$ $`-\omega _iX_i-2\sigma V_i,`$ (14)
$`{\displaystyle \frac{dW_i}{dt}}`$ $`=`$ $`B_i-2\sigma W_i,`$ (15)
$`{\displaystyle \frac{dB_i}{dt}}`$ $`=`$ $`4K_i-2\omega _i^2W_i-2{\displaystyle \int DG}-2\sigma B_i,`$ (16)
$`{\displaystyle \frac{dK_i}{dt}}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}\omega _i^2B_i-{\displaystyle \int DG\frac{\partial ^2\varphi }{\partial x_i^2}}-2\sigma K_i,`$ (17)
$`{\displaystyle \frac{dJ}{dt}}`$ $`=`$ $`{\displaystyle \underset{i}{\sum }}{\displaystyle \int DG\frac{\partial ^2\varphi }{\partial x_i^2}}-2\sigma J+{\displaystyle \int \frac{\partial G}{\partial t}}.`$ (18)
As stated before, some of these laws can be found in other treatments, which mostly concentrate on particular cases or treat them as the basis for perturbation methods, choosing one particular shape for the solution. From Eqs. (13)-(14) we find exact closed equations for the zeroth and first order momenta,
$$\frac{d^2X_i}{dt^2}=-\omega _i(t)X_i-2\sigma (t)\frac{dX_i}{dt}-2\dot{\sigma }(t)X_i.$$
(19)
In conservative systems with any type of potential the classical result of Quantum Mechanics, $`d^2\langle x_i\rangle /dt^2=-\langle \partial V/\partial x_i\rangle `$, is obtained as described in .
Once Eqs. (12)-(14) are integrated, one is left with the problem of solving the remaining $`3n+1`$ equations. Typically these equations do not form a closed set but involve integral quantities which are not included in the definitions of the momenta. To close them, i.e. to match the numbers of equations and of unknowns, one must either restrict the problem or impose some kind of approximation.
Exact closure of moment equations.- We have found only two relevant cases in which the closure is exact for the rest of the equations. Both simplified problems correspond to conservative, $`\sigma =0`$, spherically symmetric potentials, $`\omega _i=\omega (t)`$ where
$`{\displaystyle \frac{dR}{dt}}`$ $`=`$ $`B_r,`$ (21)
$`{\displaystyle \frac{dB_r}{dt}}`$ $`=`$ $`4K-2\omega ^2(t)R-2D,`$ (22)
$`{\displaystyle \frac{dK}{dt}}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}\omega ^2(t)B_r-{\displaystyle \frac{dJ}{dt}},`$ (23)
being $`D=\int [G-g\rho ]`$ and $`R=\sqrt{W}`$ the radial width. The first integrable case corresponds to $`n=2`$ and $`G=U\rho ^2`$. For it Eqs. (The moment method in general Nonlinear Schrödinger Equations) simplify to
$$\frac{d^2R}{dt^2}=-\omega (t)R+\frac{M}{R^3}.$$
(24)
Here $`M`$ is a constant that depends only on the initial data and the interaction strength $`U`$. This equation has been used to prove the existence of extended resonances in Ref. .
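A minimal numerical sketch (ours; the trap strength and the constant $`M`$ below are illustrative, not taken from the text) shows the behaviour encoded in (24) for a constant trap: the width oscillates around the stationary value $`R_{\mathrm{eq}}=(M/\omega )^{1/4}`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Width dynamics of eq. (24) with a constant trap coefficient:
# R'' = -w*R + M/R**3. Parameter values are illustrative only.
w, M = 1.0, 2.0

def rhs(t, y):
    R, V = y                       # width and its time derivative
    return [V, -w * R + M / R**3]

sol = solve_ivp(rhs, (0.0, 20.0), [2.0, 0.0], dense_output=True, rtol=1e-9)
print("R_eq =", (M / w)**0.25)
print("R(t) samples:", sol.sol(np.linspace(0.0, 20.0, 5))[0])
```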
Another ample family of systems for which the moment equations are closed and exact is that with a time-independent interaction strength, $`\partial G/\partial t=0`$ (the usual case), and a divergenceless velocity distribution (given by the phase gradient), $`\mathrm{div}\left(\mathrm{}\varphi \right)=\mathrm{\Delta }\varphi =0`$. This condition imposes no restriction on the density distribution $`\rho `$ and is automatically satisfied by the well known vortex-line solutions, which in $`n`$ spatial dimensions read
$`\psi `$ $`=`$ $`\sqrt{\rho _B(x_1,\mathrm{},x_n;t)}\,e^{i\varphi _B},`$ (26)
$`\varphi _B`$ $`=`$ $`\stackrel{}{\alpha }\stackrel{}{x}+\mathrm{arctan}{\displaystyle \frac{x_k-X_k}{x_l-X_l}}.`$ (27)
Here $`\stackrel{}{X}`$ are free, time dependent parameters, and $`\rho _B`$ is arbitrary. With those simple conditions we get an infinite number of constants of evolution named “supermomenta”, $`Q(F)\equiv \int F(\rho )`$, built up from differentiable functions of the modulus, $`F(\rho )`$, that satisfy the regularity conditions. In these cases one can prove that $`D`$ is a constant and that Eqs. (The moment method in general Nonlinear Schrödinger Equations) become equivalent to Eq. (24).
Uniform divergence approximation.- Intuition dictates that the zero-Laplacian phase condition is related to the following: (a) the configuration of the cloud, i.e. $`\rho `$, does not change; (b) the soliton or the wavepacket is either stationary or at most undergoes displacements and rotations; and (c) all of the “supermomenta” depend solely on the norm, $`Q(F)=f(N)`$. General solutions have nonzero divergence of the velocity field, thus the zeroth order approximation, $`\mathrm{\Delta }\varphi =0`$, fails to describe the system. The next possibility is a first order approximation in which the Laplacian of the phase is uniform. As we will see, this extension allows for changes in the shape of the cloud and introduces three new independent variables in the supermomenta $`Q(F)`$. Mathematically, in the first order approximation the phase is
$$\varphi =\varphi _B(\stackrel{}{x},t)+\underset{j}{}\beta _j(t)x_j^2,$$
(28)
where $`\varphi _B(\stackrel{}{x},t)`$ is any function satisfying $`\mathrm{\Delta }\varphi _B=0`$. This approximation will be called the uniform divergence approximation on the phase in what follows. A limited version of this approximation, using a linear function (which has zero divergence) in place of $`\varphi _B`$, was first applied to radially symmetric problems in Ref. and to the study of resonances in general 3D forced NSE problems in Refs. . In that case, where $`\varphi (\stackrel{}{x},t)=\varphi _0(t)+\stackrel{}{\alpha }\stackrel{}{x}+\sum _j\beta _j(t)x_j^2`$, it is evident that the approximation consists of a Taylor expansion of the phase, or even better a polynomial fitting with time dependent parameters. Here we provide a general framework for the application of the technique to arbitrary NSE problems. In a certain sense this is a generalization of the usual time dependent variational method, but now no assumption on the shape of the amplitude of the wave is needed, and the phase is approximated by a least-squares type fitting with time-dependent parameters, with only very general restrictions on its form.
Let us first take the case with $`\varphi _B=\stackrel{}{\alpha }\stackrel{}{x}`$. We will assume for simplicity that the nonlinearity is or can be approximated by a polynomial
$$G(\rho )=\underset{k}{\sum }\alpha _k\rho ^k.$$
(29)
The dissipative terms can be removed from the equations by rescaling the solution with $`\gamma =\int _0^t\sigma (t^{\prime })𝑑t^{\prime }`$, so that $`\stackrel{~}{\rho }=e^\gamma \rho `$. We will denote the momenta obtained using $`\stackrel{~}{\rho }`$ by a tilde, as in $`\stackrel{~}{W}_i`$. It is important to stress that Eqs. (The moment method in general Nonlinear Schrödinger Equations) do not close immediately, and it is necessary to study the monomial supermomenta
$$\stackrel{~}{Q}^{(m)}=\stackrel{~}{\rho }^md^nx.$$
(30)
Their evolution laws are
$$\frac{d\stackrel{~}{Q}^{(m)}}{dt}=-(m-1)\underset{i}{\sum }\beta _i\stackrel{~}{Q}^{(m)}.$$
(31)
Since the parameters in the phase can be expressed as
$$\beta _i=\frac{d}{dt}\mathrm{log}\sqrt{\stackrel{~}{W}_i},$$
(32)
one obtains a closed form for each of the supermomenta
$$\stackrel{~}{Q}^{(m)}=C^{(m)}\left(\sqrt{\stackrel{~}{W}_1\mathrm{}\stackrel{~}{W}_d}\right)^{-(m-1)},$$
(33)
where the constants $`C^{(m)}`$ must be determined from the initial data. We can use this property to estimate all the integrals in which $`G(\rho )`$ appears as functions of powers of the mean square widths. Using this, and defining as before the natural widths, $`\stackrel{~}{R}_i=\sqrt{\stackrel{~}{W}_i}`$, in the general nonsymmetric case we arrive at
$$\frac{d^2\stackrel{~}{R}_i}{dt^2}=-\omega _i\stackrel{~}{R}_i+\frac{M_i}{\stackrel{~}{R}_i^3}+\frac{𝒢(\stackrel{~}{R}_1\mathrm{}\stackrel{~}{R}_d,t)}{\stackrel{~}{R}_i}$$
(34)
where $`M_i`$ is a constant to be determined from the initial data, and $`𝒢(R_1\mathrm{}R_d,t)`$ is a function that may be calculated from Eqs. (29) and (33). Finally, to interpret these results one must remember that the actual width of the cloud is $`R_i(t)=\stackrel{~}{R}_i(t)e^{2\gamma }`$.
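The functional form behind (33) is easy to check numerically; the sketch below (ours) uses one-dimensional Gaussian profiles as an assumed test shape and verifies that $`Q^{(m)}(\sqrt{W})^{m-1}`$ does not depend on the width.

```python
import numpy as np

# Check that Q^(m) * sqrt(W)**(m-1) is width-independent for a family of
# rescaled profiles (here 1D Gaussians, an assumed test shape).
x = np.linspace(-40.0, 40.0, 400001)

def invariant(sigma, m, N=1.0):
    rho = N * np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0*np.pi) * sigma)
    W = np.trapz(x**2 * rho, x)          # second-order moment
    Qm = np.trapz(rho**m, x)             # supermomentum of order m
    return Qm * np.sqrt(W)**(m - 1)

for m in (2, 3):
    print(m, [round(invariant(s, m), 6) for s in (0.5, 1.0, 2.0, 4.0)])
```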
Eq. (34) means that, given initial data, it is possible to compute the evolution of the width (and of all the momenta of the initial datum) provided the parabolic phase is a good description of the solution. The method is very powerful, as it accounts for the evolution of any solution that can be described in terms of a finite number of momenta (since higher order momenta are functions of lower order ones). Though much more general than the collective coordinate method, the one presented here is simpler to apply and extend, since it only involves computing the integrals in Eqs. (The moment method in general Nonlinear Schrödinger Equations).
We must also remark that the phase is not restricted to be a polynomial in $`\stackrel{}{r}`$ as in Ref. . Instead one still obtains information when the solution has many other forms. For example, if the phase is of the type (The moment method in general Nonlinear Schrödinger Equations), then one can combine the two widths from the plane on which the vorticity is present, $`W_k`$ and $`W_l`$, into a radial one, $`R\equiv W_k+W_l`$, and the evolution of $`R`$ can be studied together with the widths from the remaining spatial directions.
Independent moment approximation.- We have seen that the uniform divergence condition leads to a closed expression for every $`Q^{(m)}`$ in terms of the widths $`W_i`$. Although this approach is more powerful than the usual time dependent ansatz, it is possible to improve the moment method to higher precision. The basic idea is to assume that only a finite set of momenta are independent, the number of independent momenta being related to the accuracy of the solution, and to find expressions for higher order momenta as functions of lower order ones. For instance, if only the $`\{N,W_i\}`$ momenta are truly independent (as occurs with the uniform divergence approximation) it is possible to find expressions for the rest of the magnitudes in terms of these momenta. By scaling the wave with respect to one of the coordinates
$$\psi (x_1,\mathrm{},x_i,\mathrm{})\to \frac{1}{\sqrt{1+ϵ}}\psi (x_1,\mathrm{},\frac{x_i}{1+ϵ},\mathrm{}),$$
(35)
and relating the first order changes in $`Q^{(m)}`$ and $`W_k`$, one arrives at
$$(m-1)Q^{(m)}=-2\frac{\partial Q^{(m)}}{\partial W_k}W_k.$$
(36)
This equation has a solution of the form reflected in Eq. (33), and similar expressions may be derived for the rest of the unknown integrals in Eqs. (3). Thus, in the first order case discussed here the method can provide a closed set of equations for any type of nonlinear term. In principle the procedure could be extended to approximate the evolution of initial data to higher precision. Work on this point is in progress and will be reported in detail elsewhere.
Finally we would like to point out that Eqs. (The moment method in general Nonlinear Schrödinger Equations) can be used straightforwardly by replacing $`\psi (\stackrel{}{r})`$ with an appropriate ansatz. In the conservative case, $`\sigma =0`$, this procedure is equivalent to Ritz’s optimization procedure, with the advantages that one need not build a huge Lagrangian integral, and that our central equations (3) can also be applied to dissipative systems that lack a Lagrangian density altogether. It must also be remarked that unless the ansatz has a phase different from (28), one will always arrive at Eq. (34).
In conclusion, we have presented the moment method in a general framework and discussed under which conditions it leads to closed equations for a finite set of parameters. The uniform divergence ansatz has been introduced as a way to improve the usual collective coordinate methods and obtain a closed set of equations for any NSE with polynomial nonlinearity and linear dissipation (with or without a parabolic, time dependent spatial potential). The second method takes the zeroth, first and second order momenta to be the only independent ones and builds analytical approximations for the other magnitudes. It is an extension of the uniform divergence approximation that seems to work for non-polynomial self-interaction energies.
This work has been partially supported by CICYT under grant PB96-0534.
no-problem/9904/hep-ph9904437.html
ar5iv
text
## 1 Full Lorentz Invariance: Proper and Improper
For many decades there was a strong theoretical and/or aesthetic prejudice for fundamental physical laws that were symmetric under both spatial and temporal reflection (parity and time-reversal invariance). When the $`V-A`$ character of weak interactions was established in the late 1950’s, exact parity invariance was apparently empirically falsified. Soon after, the discovery of $`CP`$ violation apparently falsified exact time-reversal invariance. It seemed as if only the Proper Lorentz Transformations were true symmetries of Nature.
However, Lee and Yang recognised from the outset that parity can be retained as an exact symmetry, despite the $`V-A`$ character of weak interactions, provided that the ordinary particle spectrum is doubled. From a modern theoretical and phenomenological perspective this requires every ordinary lepton, quark, gauge boson and Higgs boson to be paired with a mirror analogue. Since the ordinary and mirror particle sectors are but weakly coupled to each other, the resulting scenario is phenomenologically viable. The violation of parity invariance is therefore, remarkably, still an open question. A simple argument to be presented below shows that if the above type of exact parity symmetry exists in Nature, then so necessarily does a form of time reversal invariance. So, it is possible for the full Lorentz Group to be a completely unbroken symmetry of Nature!
Of particular interest is the fact that the observed atmospheric and solar neutrino anomalies may be the first experimental manifestation of the mirror matter sector (mirror neutrinos to be specific). In order to further test this proposal, neutral current based measurements probing both the atmospheric and solar anomalies are vital. Such measurements will determine whether the relevant ordinary neutrinos are transforming into other ordinary neutrinos, or into something more exotic such as mirror or sterile neutrinos.
The gauge theoretic construction of a theory with exact parity symmetry is very easy to understand. Consider a theory defined by a parity violating Lagrangian $`\mathcal{L}`$ which has gauge group $`G`$. This theory may, for instance, be the minimal Standard Model, or, more pertinently, the Standard Model augmented by nonzero neutrino masses and mixings. For every field $`\psi `$ in $`\mathcal{L}`$ introduce a mirror or parity partner $`\psi ^{}`$. For spin-$`1/2`$ fields this requires of course that the $`\psi ^{}`$ have opposite chirality to the $`\psi `$. The fields $`\psi ^{}`$ are singlets under $`G`$ but transform under a gauge group $`G^{}`$ which is isomorphic to $`G`$, while the $`\psi `$’s are correspondingly required to be singlets under $`G^{}`$. The fields $`\psi `$ and $`\psi ^{}`$ are placed into identical multiplets under their respective gauge groups $`G`$ and $`G^{}`$, and the discrete parity symmetry (schematically $`\psi \leftrightarrow \psi ^{}`$) is enforced. The resulting Lagrangian is
$$\mathcal{L}_{\mathrm{total}}(\psi ,\psi ^{})=\mathcal{L}(\psi )+\mathcal{L}^{}(\psi ^{})+\mathcal{L}_{\mathrm{int}}(\psi ,\psi ^{}),$$
(1)
where $`\mathcal{L}^{}`$ is exactly the same function of the $`\psi ^{}`$ fields as $`\mathcal{L}`$ is of the $`\psi `$ fields.<sup>a</sup><sup>a</sup>aThe dependence of the Lagrangian on first derivatives of the fields is of course understood. The extremely important interaction term $`\mathcal{L}_{\mathrm{int}}`$ describes any gauge and parity invariant renormalisable coupling terms between the ordinary and mirror sectors. The above procedure was first carried out for the minimal Standard Model in the first paper quoted under Ref., where it was shown that parity was a symmetry of the vacuum as well as of the Lagrangian for a large region in Higgs potential parameter space. We will focus on this parameter space region from now on.<sup>b</sup><sup>b</sup>bBy extending the Higgs sector, it is possible to spontaneously break the parity symmetry together with the electroweak and mirror-electroweak gauge symmetries. We call the resulting theory the “Exact Parity Model (EPM)”.
The ordinary and mirror sectors are coupled by gravitation and $`\mathcal{L}_{\mathrm{int}}`$. The gravitational coupling is very interesting from the point of view of cosmology and the dark matter problem, but will not be further discussed here. The nongravitational effects in $`\mathcal{L}_{\mathrm{int}}`$ in general feature photon – mirror photon, $`Z`$ – mirror $`Z`$ and Higgs – mirror Higgs mixing. These particles are singled out because they are neutral under the electromagnetic and colour forces and their mirror analogues, so there are no exact conservation laws to prevent mixing. (Electron – mirror electron mixing is forbidden by both ordinary and mirror electric charge conservation, for instance.) Unfortunately, cosmological constraints from Big Bang Nucleosynthesis make it unlikely that these effects will be seen in the laboratory.
We now come to the crux of the matter: since neutrinos and mirror neutrinos are electrically neutral and colourless, they will in general mix if they also have nonzero masses. Furthermore, we will see in the next section that the exact parity symmetry forces the mixing angle between an ordinary neutrino and its mirror partner to be the maximal value of $`\pi /4`$.
I close this section with two brief comments. (i) Let the exact parity symmetry be denoted by $`P^{}`$. Note that it is different from the usual (broken) parity symmetry $`P`$.<sup>c</sup><sup>c</sup>cFor instance, the left-handed electron is transformed into the right-handed mirror-electron by $`P^{}`$, whereas it is transformed into the right-handed electron by $`P`$. However, standard $`CPT`$ is still of course an exact symmetry of the theory. We can therefore define a non-standard time-reversal invariance $`T^{}`$ through $`CPT=P^{}T^{}`$ that must necessarily be exact if $`P^{}`$ is exact. The full Lorentz Group, including all Improper Transformations, is thus a symmetry of the theory. (ii) It is amusing to compare the ‘exact parity’ idea with spacetime supersymmetry. Both extend the Proper Lorentz or Poincaré Group, and both require degree-of-freedom doubling. A crucial difference, though, is that phenomenology forces spacetime supersymmetry to be broken. The resulting proliferation of soft-supersymmetry breaking parameters has no analogue in the EPM.
## 2 Phenomenology of Mirror Neutrinos
Under the exact parity symmetry $`P^{}`$, an ordinary neutrino field $`\nu _\alpha `$ (where $`\alpha =e,\mu ,\tau `$) transforms into its mirror partner field $`\nu _\alpha ^{}`$ as per
$$\nu _{\alpha L}\to \gamma _0\nu _{\alpha R}^{}.$$
(2)
From basic quantum mechanics, we know that the exact $`P^{}`$ symmetry forces the parity eigenstates to also be mass eigenstates. In the absence of interfamily mixing, this means that the two mass eigenstates $`\nu _{\alpha \pm }`$ per family take the form
$$|\nu _{\alpha \pm }=\frac{1}{\sqrt{2}}\left(|\nu _\alpha \pm |\nu _\alpha ^{}\right).$$
(3)
The positive and negative parity states, $`\nu _{\alpha +}`$ and $`\nu _{\alpha -}`$ respectively, in general have arbitrary masses. The oscillation parameter $`\mathrm{\Delta }m_{\alpha +\alpha -}^2\equiv |m_{\nu _{\alpha +}}^2-m_{\nu _{\alpha -}}^2|`$ is therefore free. The mixing angle, however, is forced by $`P^{}`$ symmetry to have the maximal value of $`\pi /4`$.
Interestingly, SuperKamiokande and other experiments have observed a disappearance of atmospheric muon-neutrinos in a manner which favours maximal mixing with another flavour $`\nu _x`$. Current results preclude $`x=e`$, but allow both $`x=\tau `$ and $`x=s`$ (where $`s`$ stands for “sterile”). It is natural in the EPM to identify $`\nu _x`$ with the effectively sterile mirror muon-neutrino $`\nu _\mu ^{}`$. The maximal mixing angle for the $`\nu _\mu \nu _\mu ^{}`$ subsystem is a simple and characteristic prediction of the EPM that is strongly supported by experiment.<sup>d</sup><sup>d</sup>dFor attempts to explain the large mixing angle in the case of $`\nu _x`$ identified as $`\nu _\tau `$ see, for instance, Ref.. The $`\mathrm{\Delta }m_{\mu +\mu -}^2`$ oscillation parameter is adjusted to agree with the measurements. This requires it to be in the $`10^{-3}-10^{-2}`$ eV<sup>2</sup> range.
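For orientation (an illustration added here, not part of the original), with maximal mixing the two-flavour vacuum survival probability is $`P(\nu _\mu \to \nu _\mu )=1-\mathrm{sin}^2(1.27\mathrm{\Delta }m^2L/E)`$, with $`\mathrm{\Delta }m^2`$ in eV<sup>2</sup>, $`L`$ in km and $`E`$ in GeV; the baselines and $`\mathrm{\Delta }m^2`$ used below are illustrative.

```python
import numpy as np

# Two-flavour vacuum survival probability for maximal mixing (sin^2 2theta = 1).
def survival(dm2_eV2, L_km, E_GeV):
    return 1.0 - np.sin(1.27 * dm2_eV2 * L_km / E_GeV)**2

dm2 = 3e-3                                # eV^2, inside the quoted window
for L in (15.0, 500.0, 12800.0):          # down-going, oblique, up-going paths
    print(L, survival(dm2, L, E_GeV=1.0))
```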
The experimental discrimination between $`\nu _x=\nu _s`$ and $`\nu _x=\nu _\tau `$ is a vital further test of this proposal. Hope for progress in this area in the immediate future lies with SuperKamiokande atmospheric neutrino data and the K2K long baseline experiment. The basic requirement is a neutral current measurement, since $`\nu _\tau `$ is sensitive to this interaction while $`\nu _s`$ and $`\nu _\mu ^{}`$ are not. SuperKamiokande has quoted a measured value for the atmospheric neutrino induced $`\pi ^0/e`$ ratio (see Ref. for the present status) that cannot discriminate between the two possibilities because of a large theoretical uncertainty in the $`\pi ^0`$ production cross-section. The measurement of this cross-section by the K2K long baseline experiment is thus of great importance. This may allow a discrimination based on a zenith-angle averaged atmospheric neutrino induced $`\pi ^0/e`$ ratio by SuperKamiokande within about a year from the time of writing. It should be noted, however, that the $`\nu _\mu \to \nu _\mu ^{}(\nu _s)`$ case predicts that the actual $`\pi ^0/e`$ ratio will be about $`0.8`$ times the no-oscillation or $`\nu _\mu \to \nu _\tau `$ expectation. If the SuperKamiokande central value were to be around $`0.9`$, then the remaining systematic error would still be too large to discriminate between the possibilities. A cleaner discrimination, which however requires significantly greater statistics, lies in the future through the $`\pi ^0`$ up-down asymmetry. The K2K experiment could in principle discriminate between the two possibilities on its own by comparing the neutral- to charged-current rates at the near and far detectors, although inadequate statistics at the far detector (SuperKamiokande) may preclude a useful result. Provided $`\mathrm{\Delta }m_{\mu +\mu -}^2`$ is sufficiently large, it should at the very least confirm $`\nu _\mu `$ disappearance. Looking slightly further into the future, the long baseline experiment MINOS and the proposed CERN–Gran-Sasso long baseline experiments should provide important information.
The solar neutrino anomaly provides further motivation for the maximal mixing feature of the EPM. Consider the maximally mixed $`\nu _e\nu _e^{}`$ subsystem in the zero interfamily mixing limit. For the range $`10^{-10}\lesssim \mathrm{\Delta }m_{e+e-}^2/\mathrm{eV}^2\lesssim 10^{-3}`$, the maximal $`\nu _e\to \nu _e^{}`$ oscillations are consistent with disappearance experiments and lead to an energy-independent day-time solar neutrino flux reduction by $`50\%`$. This is consistent with four out of the five solar event rate measurements relative to the latest standard solar model calculations. (The Chlorine experiment sees a greater than $`50\%`$ deficit.) Since this talk was given, Guth et al. have emphasised that the night-time oscillation-affected solar neutrino rate differs from the day-time rate, even if the vacuum mixing angle is maximal. This leads to some energy-dependence in the night-time flux suppression, and provides an interesting further test. Preliminary calculations show that the day-night asymmetry for the $`\nu _e\to \nu _e^{}`$ case (or the $`\nu _e\to \nu _s`$ case with maximal mixing) is potentially observable for the range $`6\times 10^{-8}\lesssim \mathrm{\Delta }m_{e+e-}^2/\mathrm{eV}^2\lesssim 2\times 10^{-5}`$, with the range $`2\times 10^{-7}\lesssim \mathrm{\Delta }m_{e+e-}^2/\mathrm{eV}^2\lesssim 8\times 10^{-6}`$ already disfavoured by the data.
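As a small added illustration (ours), averaging the maximal-mixing survival probability over a broad spread of $`L/E`$, as appropriate for the Sun-Earth baseline and the relevant neutrino energies, reproduces the energy-independent $`50\%`$ day-time suppression; the numbers below are rough assumed values.

```python
import numpy as np

# Averaged day-time survival for maximal mixing. L/E in km/GeV corresponds
# roughly to 1 AU (~1.5e8 km) divided by E ~ 7-15 MeV; rough assumed values.
rng = np.random.default_rng(0)
L_over_E = rng.uniform(1.0e10, 2.0e10, 100000)
dm2 = 1e-6                                   # eV^2, well inside the averaged regime
P = 1.0 - np.sin(1.27 * dm2 * L_over_E)**2
print(P.mean())                              # ~0.5
```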
The future KAMLAND experiment will probe the $`\mathrm{few}\times 10^{-5}\stackrel{<}{}\mathrm{\Delta }m_{e+e}^2/\mathrm{eV}^2\stackrel{<}{}10^{-3}`$ regime by looking for $`\overline{\nu }_e`$ disappearance.<sup>e</sup><sup>e</sup>eThe $`\nu _e\nu _e^{}`$ mode also has potentially observable consequences for atmospheric $`\nu _e`$’s for this parameter range. Another extremely important future test is the neutral to charged current event rate ratio that will be measured by SNO. The mirror electron-neutrino $`\nu _e^{}`$ is effectively a sterile flavour, so SNO should measure the “standard” value for this ratio.
If $`\mathrm{\Delta }m_{e+e}^2/\mathrm{eV}^2`$ is in the $`10^{-10}`$–$`10^{-11}`$ range, then “just-so” oscillations result. One amusing possibility is the following: as $`\mathrm{\Delta }m_{e+e}^2`$ is reduced from the range considered in the previous paragraph into the just-so regime, the energy at which the averaged oscillations give way to coherent just-so behaviour decreases. For some value, this transition will happen within the energy range probed by SuperKamiokande. This could possibly be the origin of the mysterious high-energy spectral feature reported by SuperKamiokande! (This type of idea was first examined in the context of $`\nu _e`$ oscillations into an active flavour in Ref..)
So, putting the above in a nutshell, we have KAMLAND probing the high range for $`\mathrm{\Delta }m_{e+e}^2`$, the day-night asymmetry being used in the intermediate range, and just-so signatures such as seasonal variation and Boron neutrino energy spectrum distortion probing the low $`\mathrm{\Delta }m_{e+e}^2`$ regime. The range between about $`6\times 10^{-8}`$ eV<sup>2</sup> and the beginning of the just-so region appears to have no characteristic signature other than the $`50\%`$ energy independent flux suppression. Furthermore, the crucial neutral current measurement at SNO will test the general idea that solar neutrinos are disappearing into sterile states of some sort for the whole $`\mathrm{\Delta }m_{e+e}^2`$ range of interest.
The above analysis saw interfamily mixing set to zero. Certainly, small interfamily neutrino mixing is well motivated by the small mixing observed for the quark sector. However, it is unlikely that this mixing exactly vanishes. I will now comment on three possible consequences of interfamily mixing.
First, the LSND anomaly can be trivially accommodated within the EPM by switching on $`\nu _e\nu _\mu `$ mixing with the appropriate parameter choices. The LSND parameter regime does not significantly modify the solar and atmospheric neutrino scenario discussed above.
Second, the solar neutrino flux depletion can be due to an amalgam of vacuum $`\nu _e\nu _e^{}`$ oscillations and MSW interfamily transitions. This leads to characteristic energy-dependent flux depletions depending on the precise oscillation parameter range chosen. Further, the neutral to charged current induced event rate ratio to be measured by SNO can take on values intermediate between the extreme cases of $`\nu _e\nu _{\mathrm{active}}`$ only and $`\nu _e\nu _s`$ only.
Third, it turns out that small interfamily mixing is well motivated from cosmology, a topic I very briefly review in the next section.
## 3 Cosmology
The tale of how neutrino oscillations affect early universe cosmology is long and complicated. I will pass over it lightly here, just for the sake of completeness, without much in the way of explanations. Please consult, for example, Refs. for further details.
Cosmology and ordinary-mirror (and ordinary-sterile) neutrino oscillations present challenges to each other. On the one hand, it had long been thought that sterile neutrinos ought to mix but weakly with ordinary neutrinos lest the reasonably successful Big Bang Nucleosynthesis (BBN) predictions be spoiled. In particular, it was thought that a $`\nu _\mu \nu _s`$ solution to the atmospheric neutrino problem would have necessarily implied the thermal equilibration of the sterile flavour prior to the BBN epoch, and thus would have increased the expansion rate of the universe. Recall that the expansion rate of the universe during BBN is driven by the relativistic degrees of freedom in the plasma, with “neutrino flavour number $`N_\nu `$” being a convenient measure. In the minimal Standard Model $`N_\nu =3`$, while one thermally equilibrated sterile flavour in addition to the ordinary neutrinos produces $`N_\nu =4`$. There is some confusion in interpreting primordial element abundance data at present, but it is arguable that $`N_\nu <4`$ is preferred. So, it had been thought that a large region of active-sterile oscillation parameter space was at least disfavoured by BBN. This problem was seen to be much more acute for the EPM than for models with a single extra sterile state, because of the three mirror neutrino flavours as well as the mirror photons, electrons and positrons. Prior wisdom would have concluded that the EPM ruined BBN and was therefore unlikely to be true. Thus cosmology challenged sterile and mirror neutrino models.
On the other hand, the discovery of relic neutrino asymmetry amplification, driven by the ordinary-mirror or ordinary-sterile neutrino transitions themselves, showed that the previous pessimism was misplaced: a very natural mechanism for reconciling BBN with sterile or mirror neutrinos, born out of the apparently problematic neutrino scenario itself, actually existed all along but had been missed. The basic point is that large relic neutrino asymmetries (neutrino chemical potentials) will, in a certain large region of oscillation parameter space, be inevitably created via a positive feedback process from the tiny $`CP`$ asymmetry (baryon asymmetry for instance) of the high-temperature background plasma. The large matter (Wolfenstein) effective potentials so induced then damp further ordinary-mirror or ordinary-sterile transitions and lead to quite acceptable BBN predictions (for the appropriate region of parameter space).
The full story of BBN in the presence of ordinary-mirror/sterile transitions is complicated because many different oscillation modes are in general involved. We have discussed above how the excitation of mirror or sterile neutrinos prior to BBN increases the expansion rate as quantified through $`N_\nu `$. But there is another important effect: a fairly large electron neutrino asymmetry will be created before and during BBN given appropriate oscillation parameters. This asymmetry will directly affect BBN reaction rates and will alter the primordial Helium abundance so as to mimic either a negative or a positive contribution to the effective neutrino number during BBN. A detailed numerical calculation is often necessary to determine the final BBN outcome. Such calculations have been performed for a couple of models featuring a single sterile flavour. They have demonstrated that strong ordinary-sterile neutrino mixing can be reconciled with BBN for realistic sterile neutrino models via the interesting physics just discussed.
The first full analysis of neutrino asymmetry evolution and BBN in the Exact Parity Model was completed after this Symposium. It demonstrated that the EPM scenario outlined above is consistent with primordial element abundance measurements for a large region of oscillation parameter space. It turns out that this parameter space region requires some small interfamily mixing.
The challenge for observational cosmology, then, is to pin down cosmological parameters precisely enough to test early universe neutrino physics in some detail. Continuing primordial element abundance measurements will help, but much dramatic new information is likely from the cosmic microwave background anisotropy measurements promised by the future MAP and PLANCK satellite missions.
## 4 Conclusion
The Exact Parity Model predicts that if ordinary neutrinos mix with their mirror partners, then they mix maximally. This has been proposed as a very natural and simple explanation of the very large mixing angle deduced from atmospheric $`\nu _\mu `$ disappearance measurements. In addition, maximal oscillations of the $`\nu _e`$ into its mirror partner are well motivated by most of the solar neutrino data. With small interfamily mixing switched on, the LSND anomaly can be explained by ordinary $`\nu _e\nu _\mu `$ oscillations. The EPM offers a theoretically elegant solution to all of the neutrino puzzles within a model that had as its original motivation the retention of the full Lorentz Group as an exact symmetry of Nature. The model also has some very interesting consequences for early universe cosmology, particularly the process of Big Bang Nucleosynthesis. In addition to the important results that continue to be produced by SuperKamiokande, we await with interest upcoming experiments – such as K2K, SNO, KAMLAND and others – that will provide further crucial tests of the Exact Parity Model.
## 5 Acknowledgements
I would very much like to thank Professor Milla Baldo Ceolin for organising this stimulating symposium in such an inspirational setting. I would also like to thank the participants of the symposium for interesting discussions and for their informative presentations. I warmly acknowledge my very wise long time collaborator Robert Foot, and my present students Nicole Bell, Roland Crocker and Yvonne Wong for many fruitful scientific discussions.
# The Civ Absorption–Mgii Kinematics Connection in ⟨𝑧⟩∼0.7 Galaxies
## 1. Introduction
The central and complex role of galactic gas in the star formation, dynamical, and chemical evolution of galaxies is well established. Evidence is mounting that, at the present epoch, multiphase gaseous halos are a physical extension of their host galaxy’s interstellar medium (ISM); their physical extent, spatial distribution, ionization conditions, and chemical enrichment are intimately linked to the energy density rate infused into the galaxy’s ISM by stellar winds and ionizing radiation, and by supernovae shock waves (e.g. Dahlem (1998), and references therein). One long–standing question is how the halos and ISM of earlier epoch galaxies compare or relate, in an evolutionary sense, to those of the present epoch.
Normal ($`L^*`$) galaxies at intermediate redshifts ($`0.5\leq z\leq 1.0`$) are seen to give rise to low ionization Mgii $`\lambda \lambda 2796,2803`$ absorption with $`W_r(2796)\geq 0.3`$ Å out to projected distances of $`40h^{-1}`$ kpc (e.g. Steidel (1995)). A key question is whether low ionization gas at large galactocentric distances is due to infall (i.e. satellite accretion, minor mergers, intragroup or intergalactic infall), or to energetic processes in the ISM (galactic fountains, chimneys). Using high resolution Mgii profiles, Churchill, Steidel, & Vogt (1996) found no suggestive trends between low ionization gas properties and galaxy properties at $`z=0.7`$. A next logical step toward addressing this question is to explore the high ionization gas in Mgii absorption selected galaxies at these redshifts.
The Civ $`\lambda \lambda 1548,1550`$ doublet is a sensitive probe of higher ionization gas. Using Civ and Mgii, Bergeron et al. (1994) inferred multiphase ionization structure around a $`z=0.79`$ galaxy. Churchill & Charlton (1999) incorporated the Mgii kinematics and found high metallicity, multiphase absorption in a possible group of three galaxies at $`z=0.93`$. In a survey of the 3C 336 field (Q $`1622+238`$), Steidel et al. (1997) reported that $`W_r(1548)/W_r(2796)`$ appeared to be correlated with galaxy–QSO impact parameter, as expected if halo gas density decreases with galactocentric distance.
In this Letter, we present a study of Civ absorption associated with $`0.4\leq z\leq 1.4`$ Mgii absorption–selected galaxies using Faint Object Spectrograph (FOS) data available in the Hubble Space Telescope (HST) Archive. We compare the Civ strengths to the Mgii strengths and kinematics and (when available) to the galaxy properties.
## 2. The Data
The Mgii absorbers are selected from the HIRES/Keck sample of Churchill (1997) and Churchill et al. (1999a). The HIRES resolution is $`6`$ km s<sup>-1</sup> (Vogt et al. (1994)). The data were processed using IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract to the NSF., as described in Churchill et al. (1999a). The redshifts of the individual Mgii sub–components were obtained using minfit (Churchill (1997)), a Voigt profile (VP) fitter that uses $`\chi ^2`$ minimization. For $`36`$ of the Mgii absorbers, FOS/HST (resolution $`230`$ km s<sup>-1</sup>) spectra covering Civ were available from the HST Archive. For four absorbers, Civ was taken from ground–based spectra of Steidel & Sargent (1992), and Sargent, Boksenberg, & Steidel (1988). The FOS spectra were processed using the techniques of the HST QSO Absorption Line Key Project (Schneider et al. (1993); Jannuzi et al. (1998)). For 13 systems, the absorbing galaxy impact parameters, rest–frame $`K`$ and $`B`$ luminosities, and $`B-K`$ colors are available from Steidel, Dickinson, & Persson (1994), Churchill et al. (1996), and Steidel et al. (1997). We will present a more detailed account in a companion paper (Churchill et al. 1999b ).
## 3. Results
In Figure 1, we present the Mgii and Civ data for each of the 40 systems (note that the velocity scale for Mgii is 500 km s<sup>-1</sup> and for Civ is 3000 km s<sup>-1</sup>). Ticks above the HIRES spectra give the velocities of the multiple VP Mgii sub–components and ticks above the FOS data give the expected location of these components for both members of the Civ doublet. The Mgii profiles are shown in order of increasing kinematic spread from the upper left to lower right. The kinematic spread is the second velocity moment of the apparent optical depth of the Mgii $`\lambda 2796`$ profile, given by $`\omega _v^2=\int \tau _a(v)v^2𝑑v/\int \tau _a(v)𝑑v`$, where $`\tau _a(v)=\mathrm{ln}[I_c(v)/I(v)]`$, and $`I(v)`$ and $`I_c(v)`$ are the observed flux and the fitted continuum flux at velocity $`v`$, respectively. The zero–point velocity is given by the optical depth median of the Mgii $`\lambda 2796`$ profile. The kinematic spreads range from a few to $`120`$ km s<sup>-1</sup>.
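For concreteness, the kinematic spread can be computed directly from a normalized absorption profile. Below is a minimal sketch (our own code, not from the paper), assuming evenly sampled arrays of velocity, observed flux, and fitted continuum, and ignoring noise and saturation handling:

```python
import numpy as np

def kinematic_spread(v, flux, continuum):
    """Second velocity moment of the apparent optical depth:
    omega_v^2 = int tau_a(v) (v - v0)^2 dv / int tau_a(v) dv,
    with the zero point v0 at the optical-depth median."""
    tau = np.log(continuum / flux)        # apparent optical depth tau_a(v)
    tau = np.clip(tau, 0.0, None)         # guard against flux above continuum
    cum = np.cumsum(tau) / np.sum(tau)    # cumulative optical depth
    v0 = np.interp(0.5, cum, v)           # optical-depth median velocity
    w2 = np.trapz(tau * (v - v0)**2, v) / np.trapz(tau, v)
    return np.sqrt(w2)                    # km/s if v is in km/s
```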
In Figure 2$`a`$, we present $`W_r(1548)`$ vs. $`W_r(2796)`$, which exhibits considerable scatter. Solid data points are damped Ly$`\alpha `$ absorbers (DLAs) and candidate DLAs, based upon $`W_r(\text{Ly}\alpha )\geq 8`$ Å or $`W_r(\text{Mg}\text{ii}\lambda 2796)\geq W_r(\text{Fe}\text{ii}\lambda 2600)\geq 1.0`$ Å (Boissé et al. (1998)).
As seen in Figure 2$`b`$, $`W_r(2796)`$ correlates with $`\omega _v`$. Significant scatter arises because $`\omega _v`$ is sensitive to the line–of–sight chance presence and equivalent width distribution of the smaller $`W_r(2796)`$, “outlying velocity” clouds (see Charlton & Churchill (1998)). The DLAs define a “saturation line”; profiles with $`W_r(2796)>0.3`$ Å along this line have saturated cores. As seen in Figure 2$`c`$, there is a tight correlation between $`\omega _v`$ and $`W_r(1548)`$. A Spearman–Kendall test, incorporating limits (LaValley, Isobe, & Feigelson (1992)), yielded a greater than 99.99% confidence level. A weighted least–squares fit to the data (dotted line through the origin and with upper limits excluded) yielded a slope of $`\omega _v\simeq 65`$ km s<sup>-1</sup> per $`1`$ Å of $`W_r(1548)`$. The data exhibit a scatter of $`\sigma _{W_r}(1548)=0.22`$ Å about the fit. An essentially identical maximum likelihood fit is shown as a dashed line.
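The weighted least–squares slope of a line forced through the origin has a simple closed form, which is all the quoted fit requires. A minimal sketch (our own notation; upper limits are assumed to have been excluded beforehand):

```python
import numpy as np

def slope_through_origin(x, y, sigma_y):
    """Weighted least-squares fit of y = a*x (no intercept),
    e.g. omega_v = a * W_r(1548) with weights 1/sigma_y^2."""
    w = 1.0 / sigma_y**2
    a = np.sum(w * x * y) / np.sum(w * x**2)      # best-fit slope
    a_err = np.sqrt(1.0 / np.sum(w * x**2))       # formal 1-sigma error
    return a, a_err
```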
Over the interval $`35\leq \omega _v\leq 55`$ km s<sup>-1</sup>, there are four absorbers that lack the higher ionization phase typical for their Mgii kinematic spread; they are “Civ deficient”. These systems, Q 0117+213 at $`z=1.0479`$, Q 1317+277 at $`z=0.6606`$, Q 0117+213 at $`z=0.7290`$, and Q 1329+274 at $`z=0.8936`$ (in order of decreasing $`\omega _v`$), lie $`3.3`$, $`2.9`$, $`2.5`$, and $`2.3\sigma `$ from the correlation line, respectively. As compared to other DLAs with similar $`\omega _v`$, the DLA at $`z=0.6561`$ in the field of Q $`1622+238`$ (3C 336) also appears to have a slight Civ deficiency.
## 4. Discussion
Three important observational facts are: (1) the scatter in $`W_r(1548)`$ vs. $`W_r(2796)`$ indicates that the strength of Civ absorption is not driven by the strong “central” Mgii component that dominates $`W_r(2796)`$, (2) $`\omega _v`$ is sensitive to the presence of small $`W_r(2796)`$, outlying velocity clouds, and (3) the scatter of $`W_r(2796)`$ vs. $`\omega _v`$ is large, whereas the scatter of $`W_r(1548)`$ vs. $`\omega _v`$ about the correlation line is only $`0.22`$ Å. These facts imply that, independent of the overall Mgii line–of–sight kinematics, the existence and global dynamics of smaller, kinematic “outliers” are intimately linked to the presence and physical conditions of a higher ionization phase.
It would appear that Civ is governed by the same physical processes that give rise to kinematic outlying Mgii clouds. The Civ absorption could arise due to the ionization balance in the Mgii clouds, ionization structure in the clouds, or due to multiphase structure. In most cases, multiphase structure is the likely explanation because the small $`W_r(2796)`$, single phase, outlying velocity clouds with $`b\sim 6`$ km s<sup>-1</sup> cannot produce the large observed $`W_r(1548)`$, as shown by Churchill et al. (1999a) \[also see Churchill & Charlton (1999)\].
If the Mgii clouds are accreted by infall and/or minor mergers, such that the energetics originated gravitationally (e.g. Mo & Miralda–Escudé (1996)), multiphase structure could arise from shock heating and/or merger induced star formation (e.g. Hernquist & Mihos (1995)). If, on the other hand, the absorbing gas is mechanically produced by winds from massive stars, OB associations, or from galactic fountains and chimneys, a dynamic multiphase structure could arise from shock heated ascending material that forms a high ionization layer (corona) and supports a descending lower ionization layer, which then breaks into cool, infalling clouds \[see Avillez (1999), and references therein\]. Both scenarios imply a link between galaxy star formation histories, in particular multiple episodes of elevated star formation \[but not necessarily bursting (Dahlem (1998))\], and the presence of outlying velocity Mgii clouds and strong Civ.
Infall models predict increasing cloud densities with decreasing galactocentric distance (e.g. Mo & Miralda–Escudé (1996)), resulting in an ionization gradient (clouds further out are more highly ionized). In Figure 3$`a`$, we show $`W_r(1548)/W_r(2796)`$ vs. impact parameter for 13 absorbing galaxies. There is no obvious evidence for an ionization gradient (95% confidence). However, $`W_r(1548)/W_r(2796)`$ could be sensitive to halo mass (e.g. Mo & Miralda–Escudé (1996)), to the presence of satellite galaxies (York et al. (1986)), or to the sampling of discrete clouds over a range of galactocentric distances along the line of sight. In Figure 3$`b`$, we show $`W_r(1548)/W_r(2796)`$ vs. galaxy $`B-K`$ color for 11 galaxies. There is a suggested trend ($`2\sigma `$) for red galaxies (those dominated by late–type stellar populations) to have small $`W_r(1548)/W_r(2796)`$. If such a trend is confirmed in a larger data sample, it would not be incompatible with a dynamical multiphase scenario in which absorbing gas properties are linked to the host galaxy stellar populations, and therefore, star formation history.
The tight correlation between Mgii kinematics and Civ absorption may imply a self–regulating process involving both ionization conditions and kinematics in the halos of higher redshift, $`L^*`$ galaxies (e.g. Lanzetta & Bowen (1992)), as explored by Norman & Ikeuchi (1989) and Li & Ikeuchi (1992). Perhaps outflow energetics from supernovae during periods of elevated star formation are balanced by the galactic gravitational potential well, resulting in a fairly narrow range of kinematic and multiphase ionization conditions. Such a balance might set up a high ionization Galactic–like “corona” (Savage, Sembach, & Lu (1997)) in proportion to the kinematics of gravitationally bound, cooling material. All this would imply that galaxy “coronae” have been in place since $`z\sim 1`$, that their nature primarily depends upon the host galaxy’s star formation history, and therefore morphology, environment, and stellar populations (as seen locally, e.g. Dahlem (1998)). Detailed observations of the stellar content and galactic morphologies and space–based UV high–resolution spectroscopy of high ionization absorption lines, would be central to establishing the interactive cycles between stars and gas in higher redshift galaxies.
Support for this work was provided by the NSF (AST–9617185) and NASA (NAG 5–6399 and AR–07983.01–96A), the latter from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5–26555. BTJ acknowledges support from NOAO, which is operated by AURA, Inc., under cooperative agreement with the NSF.
# HIGH ENERGY NEUTRINO ASTRONOMY: WIN 99
## 1 Neutrino Astronomy: Multidisciplinary Science
Using optical sensors buried in the deep clear ice or deployed in deep ocean and lake waters, neutrino astronomers are attacking major problems in astronomy, astrophysics, cosmic ray physics and particle physics by commissioning a first generation of neutrino telescopes. According to estimates covering a wide range of scientific objectives, a neutrino telescope with effective telescope area of 1 kilometer squared is required to address the most fundamental questions. Planning is already underway to instrument a cubic volume of ice, 1 kilometer on the side, as a neutrino detector. This infrastructure provides unique opportunities for yet more interdisciplinary science covering the geosciences and biology.
Among the many problems which high energy neutrino telescopes will address are the origin of cosmic rays, the engines which power active galaxies, the nature of gamma ray bursts (GRB), the search for the annihilation products of halo cold dark matter and, possibly, even the structure of the Earth’s interior. In burst mode they scan the sky for galactic supernovae and, more speculatively, for the birth of supermassive black holes. Coincident experiments with Earth- and space-based gamma ray observatories, cosmic ray telescopes and gravitational wave detectors such as LIGO can be contemplated. With high-energy neutrino astrophysics we are poised to open a new window into space and back in time to the highest-energy processes in the Universe.
Neutrino telescopes can do particle physics. This is often illustrated by their capability to detect the annihilation into high energy neutrinos of neutralinos, the lightest supersymmetric particle which may constitute the cold dark matter. Also, with cosmological sources such as active galaxies and GRBs we will be observing $`\nu _e`$ and $`\nu _\mu `$ neutrinos over a baseline of $`10^3`$ Megaparsecs. Above 1 PeV these are absorbed by charged-current interactions in the Earth before reaching a detector at the opposite surface. In contrast, the Earth never becomes opaque to $`\nu _\tau `$ since the $`\tau `$ produced in a charged-current $`\nu _\tau `$ interaction decays back into $`\nu _\tau `$ before losing significant energy. This penetration of tau neutrinos through the Earth above $`10^2`$ TeV provides an experimental signature for neutrino oscillations. The appearance of a $`\nu _\tau `$ component in a pure $`\nu _{e,\mu }`$ beam would be signalled by a flat angular dependence of a source intensity at the highest neutrino energies. Such a flat zenith angle dependence for the farthest sources is a signature for tau neutrino mixing with a sensitivity in $`\mathrm{\Delta }m^2`$ as low as $`10^{-17}`$ eV<sup>2</sup>. With neutrino telescopes we will also search for ultrahigh-energy neutrino signatures from topological defects and magnetic monopoles; for properties of neutrinos such as mass, magnetic moment, and flavor-oscillations; and for clues to entirely new physical phenomena. The potential of neutrino “telescopes” to do particle physics is evident.
### 1.1 Cosmic Particle Accelerators: Gamma Ray Bursts Take Center Stage
Recently, GRBs may have become the best motivated source for high energy neutrinos . Although neutrino emission may be less copious and less energetic than anticipated in some models of active galaxies, the predicted fluxes can be calculated in a relatively model-independent way. There is increasing observational support for a model where an initial event involving neutron stars, black holes or the collapse of highly magnetized rotating stars, deposits a solar mass of energy into a radius of order 100 km. Such a state is opaque to light. The observed gamma ray display is the result of a relativistic shock which expands the original fireball by a factor $`10^6`$ in 1 second. Gamma rays arise by synchrotron radiation by relativistic electrons accelerated in the shock, possibly followed by inverse-Compton scattering.
It has been suggested that the same cataclysmic events produce the highest energy cosmic rays. This association is reinforced by more than the phenomenal energy and luminosity. Both GRBs and the highest energy cosmic rays are produced in cosmological sources, i.e., distributed throughout the Universe. Also, the average rate $`\dot{E}\simeq 4\times 10^{44}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}`$ at which energy is injected into the Universe as gamma rays from GRBs is similar to the rate at which energy must be injected in the highest energy cosmic rays in order to produce the observed cosmic ray flux beyond the “ankle” in the spectrum at $`10^{18}`$ eV.
The association of cosmic rays with GRBs obviously requires that kinetic energy in the shock is converted into the acceleration of protons as well as electrons. It is assumed that the efficiency with which kinetic energy is converted to accelerated protons is comparable to that for electrons. The production of high-energy neutrinos is inevitably a feature of the fireball model because the protons will photoproduce pions and, therefore, neutrinos in interactions with the gamma rays in the burst. We have a beam dump configuration where both the beam and target are constrained by observation: the beam by the observed cosmic ray spectrum and the photon target by astronomical measurements of the high energy photon flux.
From an observational point of view, the predicted flux can be summarized in terms of the main ingredients of the model:
$$N_\nu (\mathrm{km}^{-2}\,\mathrm{yr}^{-1})\simeq 25\left[\frac{f_\pi }{20\%}\right]\left[\frac{\dot{E}}{4\times 10^{44}\mathrm{Mpc}^{-3}\mathrm{yr}^{-1}}\right]\left[\frac{E_\nu }{700\mathrm{TeV}}\right]^{-1},$$
(1)
i.e., we expect 25 events in a km<sup>2</sup> detector in one year. Here $`f_\pi `$, estimated to be 20%, is the efficiency by which proton energy is converted into the production of pions and $`\dot{E}`$ is the total injection rate into GRBs averaged over volume and time. The energy of the neutrinos is fixed by particle physics and is determined by the threshold for photoproduction of pions by the protons on the GRB photons in the shock. Note that GRBs produce a “burst” spectrum of neutrinos. After folding the falling GRB energy spectrum with the increasing detection efficiency, a burst energy distribution results centered on an average energy of several hundred TeV.
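Since Eq. (1) is just a product of scaling factors, the expected rate is easy to evaluate for other parameter choices. A minimal sketch (the function name and defaults are ours; $`\dot{E}`$ is in the same units as in the text):

```python
def grb_events_per_km2_yr(f_pi=0.20, E_dot=4e44, E_nu_TeV=700.0):
    """Eq. (1): expected neutrino events per km^2 per year from GRBs.
    f_pi     -- efficiency of converting proton energy into pions
    E_dot    -- injection rate, in Mpc^-3 yr^-1 units as in the text
    E_nu_TeV -- neutrino energy in TeV (rate scales as 1/E_nu)"""
    return 25.0 * (f_pi / 0.20) * (E_dot / 4e44) * (700.0 / E_nu_TeV)

# the fiducial values reproduce the quoted 25 events per km^2 per year
print(grb_events_per_km2_yr())  # -> 25.0
```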
Interestingly, this flux may be observable in operating first-generation detectors. The effective area for the detection of 100 TeV neutrinos can approach 0.1 km<sup>2</sup> as a result of the large size of the events. A $`\nu _e`$ of this energy initiates an electromagnetic shower which produces single photoelectron signals in ice over a radius of 250 m. The effective area for a $`\nu _\mu `$ is larger because the muon has a range of 10 km (water-equivalent) and produces single photoelectrons as far as 200 m from the track by catastrophic energy losses. Because these spectacular events arrive with the GRB time-stamp of order 1 second precision and with an unmistakable high-energy signature, background rejection is greatly simplified. Although, on average, we expect much less than one event per burst, a relatively near burst would produce multiple events in a single second. The considerable simplification of observing high energy neutrinos in the burst mode has inspired proposals for highly simplified dedicated detectors.
The importance of making GRB observations cannot be overemphasized:
* The observations are a direct probe of the fireball model of GRBs.
* They may unveil the source of the highest energy cosmic rays.
* The zenith angle distribution of the GRB neutrinos may reveal the appearance of $`\nu _\tau `$ in what was a $`\nu _e,\nu _\mu `$ beam at its origin. The appearance experiment with a baseline of thousands of megaparsecs has a sensitivity to oscillations of $`\mathrm{\Delta }m^2`$ as low as $`10^{-17}`$ eV<sup>2</sup>, as previously discussed.
* The relative timing of photons and neutrinos over cosmological distances will allow unrivaled tests of special relativity.
* The fact that photons and neutrinos should suffer the same time delay travelling through the gravitational field of our galaxy will lead to better tests of the weak equivalence principle.
In response to the evidence that atmospheric neutrinos oscillate, workers in this field have been investigating the possibility of studying neutrino mass with the atmospheric neutrinos which, up to now, are used for calibration only. These studies may significantly reshape the architecture of some detectors. We will return to this topic later on.
## 2 Large Natural Cherenkov Detectors
The study of GRBs is one more example of a science mission that requires kilometer-scale neutrino detectors. This is not a real surprise. The probability to detect a PeV neutrino is roughly $`10^{-3}`$. This is easily computed from the requirement that, in order to be detected, the neutrino has to interact within a distance of the detector which is shorter than the range of the muon it produces. At PeV energy the cosmic ray flux is of order 1 per m<sup>2</sup> per year and the probability to detect a neutrino of this energy is of order 10<sup>-3</sup>. A neutrino flux equal to the cosmic ray flux will therefore yield only a few events per day in a kilometer squared detector. At EeV energy the situation is worse. With a cosmic ray rate of 1 per km<sup>2</sup> per year and a detection probability of 0.1, one can only detect several events per year in a kilometer squared detector provided the neutrino flux exceeds the proton flux by 2 orders of magnitude or more. For the neutrino flux generated by cosmic rays interacting with CMBR photons and for sources like active galaxies and topological defects, this is indeed the case. All above estimates are however conservative and the rates should be higher because absorption of protons in the source is expected, and the neutrinos escape the source with a flatter energy spectrum than the protons. In summary, at least where cosmic rays are part of the beam dump, their flux and the neutrino cross section and muon range define the size of a neutrino telescope. A telescope with kilometer squared effective area represents a neutrino detector of kilometer cubed volume.
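The back-of-envelope rates above follow from multiplying a flux by a detection probability; the PeV estimate, with the numbers rounded as in the text, works out as follows (a sketch, not a rigorous calculation):

```python
# detection probability for a PeV neutrino, from the text: ~1e-3
p_det = 1e-3
# cosmic-ray flux at PeV energies: ~1 per m^2 per year = 1e6 per km^2 per year
flux_per_km2_yr = 1e6
events_per_day = flux_per_km2_yr * p_det / 365.0
print(f"~{events_per_day:.1f} events/day in a km^2 detector")  # a few per day
```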
The first generation of neutrino telescopes, launched by the bold decision of the DUMAND collaboration over 25 years ago to construct such an instrument, are designed to reach a large telescope area and detection volume for a neutrino threshold of order 10 GeV. This relatively low threshold permits calibration of the novel instrument on the known flux of atmospheric neutrinos. The architecture is optimized for reconstructing the Cherenkov light front radiated by an up-going, neutrino-induced muon. Only up-going muons made by neutrinos reaching us through the Earth can be successfully detected. The Earth is used as a filter to screen the fatal background of cosmic ray muons. This makes neutrino detection possible over the lower hemisphere of the detector. Up-going muons must be identified in a background of down-going, cosmic ray muons which are more than $`10^5`$ times more frequent for a depth of 1–2 kilometers. The method is sketched in Fig. 1.
The optical requirements of the detector medium are severe. A large absorption length is required because it determines the spacings of the optical sensors and, to a significant extent, the cost of the detector. A long scattering length is needed to preserve the geometry of the Cherenkov pattern. Nature has been kind and offered ice and water as adequate natural Cherenkov media. Their optical properties are, in fact, complementary. Water and ice have similar attenuation length, with the role of scattering and absorption reversed; see Table 1. Optics seems, at present, to drive the evolution of ice and water detectors in predictable directions: towards very large telescope area in ice exploiting the large absorption length, and towards lower threshold and good muon track reconstruction in water exploiting the large scattering length.
## 3 Baikal, ANTARES, Nestor and NEMO: Northern Water.
Whereas the science is compelling, the real challenge is to develop a reliable, expandable and affordable detector technology. With the termination of the pioneering DUMAND experiment, the efforts in water are, at present, spearheaded by the Baikal experiment. The Baikal Neutrino Telescope is deployed in Lake Baikal, Siberia, 3.6 km from shore at a depth of 1.1 km. An umbrella-like frame holds 8 strings, each instrumented with 24 pairs of 37-cm diameter QUASAR photomultiplier tubes (PMT). Two PMTs in a pair are switched in coincidence in order to suppress background from natural radioactivity and bioluminescence. Operating with 144 optical modules since April 1997, the NT-200 detector has been completed in April 1998 with 192 optical modules (OM). The Baikal detector is well understood and the first atmospheric neutrinos have been identified.
The Baikal site is competitive with deep oceans although the smaller absorption length of Cherenkov light in lake water requires a somewhat denser spacing of the OMs. This does however result in a lower threshold which is a definite advantage, for instance for oscillation measurements and WIMP searches. They have shown that their shallow depth of 1 kilometer does not represent a serious drawback. By far the most significant advantage is the site with a seasonal ice cover which allows reliable and inexpensive deployment and repair of detector elements.
With data taken with 96 OMs only, they have shown that atmospheric muons can be reconstructed with sufficient accuracy to identify atmospheric neutrinos; see Fig. 2. The neutrino events are isolated from the cosmic ray muon background by imposing a restriction on the chi-square of the Cherenkov fit, and by requiring consistency between the reconstructed trajectory and the spatial locations of the OMs reporting signals. In order to guarantee a minimum lever arm for track fitting, they were forced to reject events with a projection of the most distant channels on the track smaller than 35 meters. This does, of course, result in a higher threshold.
In the following years, NT-200 will be operated as a neutrino telescope with an effective area between $`10^3`$ and $`5\times 10^3`$ m<sup>2</sup>, depending on energy. Presumably too small to detect neutrinos from extraterrestrial sources, NT-200 will serve as the prototype for a larger telescope. For instance, with 2000 OMs, a threshold of 10–20 GeV and an effective area of $`5\times 10^4`$–$`10^5`$ m<sup>2</sup>, an expanded Baikal telescope would fill the gap between present underground detectors and planned high threshold detectors of cubic kilometer size. Its key advantage would be low threshold.
The Baikal experiment represents a proof of concept for deep ocean projects. These have the advantage of larger depth and optically superior water. Their challenge is to find reliable and affordable solutions to a variety of technological challenges for deploying a deep underwater detector. Several groups are confronting the problem, both NESTOR and ANTARES are developing rather different detector concepts in the Mediterranean.
The NESTOR collaboration, as part of a series of ongoing technology tests, is testing the umbrella structure which will hold the OMs. They have already deployed two aluminum “floors”, 34 m in diameter, to a depth of 2600 m. Mechanical robustness was demonstrated by towing the structure, submerged below 2000 m, from shore to the site and back. These tests should soon be repeated with fully instrumented floors. The actual detector will consist of a tower of 12 six-legged floors vertically separated by 30 m. Each floor contains 14 OMs with four times the photocathode area of the commercial 8 inch photomultipliers used by AMANDA and ANTARES.
The detector concept is patterned along the Baikal design. The symmetric up/down orientation of the OMs will result in uniform angular acceptance and the relatively close spacings in a low threshold. NESTOR does have the advantage of a superb site, possibly the best in the Mediterranean. The detector can be deployed below 3.5 km relatively close to shore. With the attenuation length peaking at 55 m near 470 nm the site is optically superior to that of all deep water sites investigated for neutrino astronomy.
The ANTARES collaboration is investigating the suitability of a 2400 m deep Mediterranean site off Toulon, France. The site is a trade-off between acceptable optical properties of the water and easy access to ocean technology. Their detector concept requires, for instance, ROV’s for making underwater connections. First results on water quality are very encouraging with an attenuation length of 40 m at 467 nm and a scattering length exceeding 100 m. Random noise exceeding 50 kHz per OM is eliminated by requiring coincidences between neighboring OMs, as is done in the Lake Baikal design. Unlike other water experiments, they will point all photomultipliers sideways or down in order to avoid the effects of biofouling. The problem is significant at the Toulon site, but only affects the upper pole region of the OM. Relatively weak intensity and long duration bioluminescence results in an acceptable deadtime of the detector. They have demonstrated their capability of deploying and retrieving a string.
With the study of atmospheric neutrino oscillations as a top priority, they plan to deploy 10 strings, each instrumented over 400 m with 100 OMs, in 2001–2003. After study of the underwater currents they decided that they can space the strings by 100 m, and possibly by 60 m. The large photocathode density of the array will allow the study of oscillations in the range $`255<L/E<2550\mathrm{km}\,\mathrm{GeV}^{-1}`$ with neutrinos in the energy range $`5<E_\nu <50`$ GeV.
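For orientation, the $`L/E`$ range quoted above maps onto a $`\mathrm{\Delta }m^2`$ range through the standard two-flavour oscillation phase $`1.27\mathrm{\Delta }m^2L/E`$ (with $`\mathrm{\Delta }m^2`$ in eV<sup>2</sup> and $`L/E`$ in km GeV<sup>-1</sup>), whose first maximum sits at $`\pi /2`$. A rough sketch of the implied sensitivity (our own arithmetic, not a detector simulation):

```python
import math

# first oscillation maximum: 1.27 * dm2 * (L/E) = pi/2
for LE in (255.0, 2550.0):          # km/GeV, the quoted range
    dm2 = (math.pi / 2) / (1.27 * LE)
    print(f"L/E = {LE:6.0f} km/GeV -> first maximum at dm2 ~ {dm2:.1e} eV^2")
# roughly 5e-4 to 5e-3 eV^2, i.e. the atmospheric-neutrino regime
```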
A new R&D initiative based in Catania, Sicily has been mapping Mediterranean sites, studying mechanical structures and low power electronics. One must hope that with a successful pioneering neutrino detector of $`10^{-3}\mathrm{km}^3`$ in Lake Baikal, a forthcoming $`10^{-2}\mathrm{km}^3`$ detector near Toulon, the Mediterranean effort will converge on a $`10^{-1}\mathrm{km}^3`$ detector at the NESTOR site. For neutrino astronomy to become a viable science several of these, or other, projects will have to succeed besides AMANDA. Astronomy, whether in the optical or in any other wave-band, thrives on a diversity of complementary instruments, not on “a single best instrument”. When, for instance, the Soviet government tried out the latter method by creating a national large mirror project, it virtually annihilated the field.
## 4 AMANDA: Southern Ice
Construction of the first-generation AMANDA detector was completed in the austral summer 96–97. It consists of 300 optical modules deployed at a depth of 1500–2000 m; see Fig. 3. Here the optical module consists of an 8 inch photomultiplier tube and nothing else. It is connected to the surface by a cable which transmits the HV as well as the anode current of a triggered photomultiplier. The instrumented volume and the effective telescope area of this instrument matches those of the ultimate DUMAND Octogon detector which, unfortunately, could not be completed.
As predicted from transparency measurements performed with strings near 1 km depth, it was found that ice is bubble-free below 1400 m. Its optical properties were a surprise, nevertheless the bubble-free ice turned out to be an adequate Cherenkov medium. We explain this next.
The AMANDA detector was antecedently proposed on the premise that inferior properties of ice as a particle detector with respect to water could be compensated by additional optical modules. The technique was supposed to be a factor 5–10 more cost-effective and, therefore, competitive. The design was based on then current information:
* the absorption length at 370 nm, the wavelength where photomultipliers are maximally efficient, had been measured to be 8 m;
* the scattering length was unknown;
* the AMANDA strategy was to use a large number of closely spaced OM’s to overcome the short absorption length. Simulations indicated that muon tracks triggering 6 or more OM’s were reconstructed with degree accuracy. Taking data with a simple majority trigger of 6 OM’s or more at 100 Hz resulted in an average effective telescope area of $`10^4`$ m<sup>2</sup>, somewhat smaller for atmospheric neutrinos and significantly larger for the high energy signals.
The reality is that:
* the absorption length is 100 m or more, depending on depth;
* the scattering length is $`\sim 25`$ m (preliminary, this number represents an average value which may include the combined effects of deep ice and the refrozen ice disturbed by the hot water drilling);
* because of the large absorption length, OM spacings are similar to, and actually larger than, those of proposed water detectors. Also, in a trigger 20 OM’s report, not 6. Of these more than 6 photons are, on average, “not scattered.” A “direct” photon is typically required to arrive within 25 nsec of the time predicted by the Cherenkov fit. This allows for a small amount of scattering and includes the dispersion of the anode signals over the 2 km cable. In the end, reconstruction is therefore as before, although additional information can be extracted from scattered photons by minimizing a likelihood function which matches their measured and expected time delays.
The most striking demonstration of the quality of natural ice as a Cherenkov detector medium is the observation of atmospheric neutrino candidates with the partially deployed AMANDA detector which consisted of only eighty 8 inch photomultiplier tubes. The up-going muons are separated from the down-going cosmic ray background once a sufficient number of direct photons and a minimum track length guarantee adequate reconstruction of the Cherenkov cone. For details, see Ref. . The analysis methods were verified by reconstructing cosmic ray muon tracks registered in coincidence with a surface air shower array.
After completion of the AMANDA detector with 300 OMs, a similar analysis led to a first calibration of the instrument using the atmospheric neutrino beam. The separation of signal and background is shown in Fig. 4 after requiring, sequentially, 5 direct photons, a minimum 100 m track length, and 6 direct photons per event. The details are somewhat more complicated; see Ref. . A neutrino event is shown in Fig. 5. By requiring the long muon track the events are gold-plated, but the threshold is high, roughly $`E_\nu \simeq 50`$ GeV. This type of analysis will allow AMANDA to harvest of order 100 atmospheric neutrinos per year, adequate for calibration.
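Schematically, the selection just described reduces to a handful of cuts on reconstructed quantities. The toy filter below mirrors the thresholds in the text, but the event data structure and field names are hypothetical:

```python
def is_neutrino_candidate(event):
    """Toy version of the up-going muon selection described above."""
    return (event["n_direct_photons"] >= 6        # unscattered hits, within ~25 ns
            and event["track_length_m"] >= 100.0  # minimum reconstructed track length
            and event["cos_zenith"] < 0.0)        # up-going, i.e. through the Earth
```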
While calibration continues, science has started. Ongoing efforts cover a wide range of goals: indirect search for dark matter, active galaxies, gamma ray bursts and magnetic monopoles.
While water detectors exploit large scattering length to achieve sub-degree angular resolution, ice detectors can more readily achieve large telescope area because of the long absorption length of blue light. By instrumenting a cube of ice, 1 kilometer on the side, the planned IceCube detector will reach the effective telescope area of 1 kilometer squared which is, according to estimates covering a wide range of scientific objectives, required to address the most fundamental questions. A strawman detector with effective area in excess of 1 km<sup>2</sup> consists of 4800 OM’s: 80 strings spaced by $`\sim `$ 100 m, each instrumented with 60 OM’s spaced by 15 m. IceCube will offer great advantages over AMANDA and AMANDA II beyond its larger size: it will have a much higher efficiency to reconstruct tracks, map showers from electron- and tau-neutrinos (events where both the production and decay of a $`\tau `$ produced by a $`\nu _\tau `$ can be identified) and, most importantly, adequately measure neutrino energy.
Acknowledgments: This work was supported in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation, and in part by the U.S. Department of Energy under Grant No. DE-FG02-95ER40896.
# The light curves of the short–period variable stars in $`\omega `$ Centauri
## 1 Introduction
Pulsating variables are being continuously discovered in the course of large–scale projects. The Fourier decomposition describes their light curves in a powerful, synthetic way, supplying information on the pulsational content. As an example, Fourier parameters give the possibility to determine if a Cepheid pulsates in the fundamental or in an overtone mode (see Pardo & Poretti 1997 for an application to double–mode Cepheids) and this could make any Period–Luminosity relationship more clear.
The analysis of the light curve of short–period pulsating variables ($`P<`$0.20 d) was carried out firstly by Antonello et al. (1986); then Poretti et al. (1990) and Musazzi et al. (1998) supplied new observational evidence. All these stars are located in the Galaxy and they do not belong to clusters; we shall call them hereinafter “galactic” variables. They are both Pop. I ($`\delta `$ Sct stars) and Pop. II (SX Phe stars) objects; no clear separation of the light curves as a function of the population was detected.
The OGLE project collected a large amount of photometric data while monitoring the globular cluster NGC 5139 ($`\omega `$ Cen; Kaluzny et al. 1996, 1997). 34 new SX Phe stars were discovered: 24 are presented by Kaluzny et al. (1996), 10 by Kaluzny et al. (1997). These data can supply original results since galactic stars do not display periods shorter than 0.06 d, while in the $`\omega `$ Cen sample this value is rather an upper limit. Therefore, we have an opportunity to verify if there is a straight connection between the two different samples and, if any, to extend the period baseline.
## 2 Period verification and refinement
The time baseline covered by the OGLE monitoring is around 120 days (i.e. a single observing season) for most of the stars, but in 9 cases the available data extend over two seasons. In this case, an improvement of the goodness of the fit could be obtained by calculating a solution for each season and then aligning the mean magnitudes (this procedure was applied to the measurements of OGLEGC 3, 4, 5). As a matter of fact, shifts up to 0.048 mag were observed in Field 5139BC, which are surely due to observational or instrumental problems. In two cases (OGLEGC 42, 45), we did not consider the data obtained in one season, as they were a small part of the total and probably affected by a misalignment which was difficult to quantify. In the remaining four cases (OGLEGC 9, 29, 38, 59) the procedure of the re-alignment did not introduce appreciable effects on the fit.
We made an independent period search. Since the baseline and the number of measurements were appropriate, all the values previously known were confirmed. Only the case of OGLEGC 34 deserves some comment. Kaluzny et al. (1996) suspected a double–mode nature on the basis of the period search carried out with the CLEAN algorithm. We performed the frequency search by using the least–squares iterative method (Vaniĉek 1971) and we obtained the power spectra shown in Fig. 1 (upper panel). The peak at $`f`$=26.1611 cd<sup>-1</sup> is the highest, but the difference with respect to the alias at 25.1611 cd<sup>-1</sup> is very small. When introducing $`f`$ as a known constituent, the power spectrum did not reveal any significant feature in the range that would be expected for a second period (lower panel). The CLEAN algorithm is probably responsible for the result quoted by Kaluzny et al. (1996): because it cannot match the odd noise distribution, the signal is spread at different peaks. OGLEGC 34 is probably monoperiodic, but the period is uncertain and may be either one of the two values reported above; we have a slight preference for $`f`$=26.1611 cd<sup>-1</sup> because it gives a better fit and a better residual power spectrum. Note also in the lower panel of Fig. 1 the increasing noise at very low frequencies, the fingerprint of night–to–night misalignments.
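The least–squares iterative method amounts to fitting, at each trial frequency, a sinusoid on top of the already identified (“known”) constituents and recording the fractional reduction in variance. A schematic version is sketched below (our own implementation, not the original code):

```python
import numpy as np

def ls_power(t, y, freqs, known=()):
    """Least-squares power spectrum in the spirit of Vanicek (1971):
    for each trial frequency, fit a constant plus sinusoids at the
    known frequencies and the trial one, and return the fraction of
    the variance removed by the fit."""
    power = []
    for f in freqs:
        cols = [np.ones_like(t)]
        for fk in (*known, f):
            cols += [np.cos(2*np.pi*fk*t), np.sin(2*np.pi*fk*t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        power.append(1.0 - resid.var() / y.var())
    return np.array(power)
```

Rescanning with the adopted frequency passed as `known`, as done above for OGLEGC 34, shows whether a significant secondary peak survives.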
## 3 Fourier parameters
As a further step, we fitted the $`V`$ magnitudes by means of the formula
$$V(t)=V_o+\sum _iA_i\mathrm{cos}[2\pi if(t-T_o)+\varphi _i]$$
(1)
where $`f`$ is the frequency, measured in cycles per day (cd<sup>-1</sup>). From the least–squares coefficients we calculated the Fourier parameters $`R_{ij}=A_i/A_j`$ (in particular $`R_{21}=A_2/A_1`$) and $`\varphi _{ij}=j\varphi _i-i\varphi _j`$ (in particular $`\varphi _{21}=\varphi _2-2\varphi _1`$). These parameters are reported in Tab. 1; the mean magnitude of OGLEGC 29 is assumed from Kaluzny et al. (1996) as the values listed in the electronic table are shifted up by 2.5 mag. The period values obtained from the least–squares routine are listed, but they do not differ greatly from those reported by Kaluzny et al. (1996, 1997).
Typical error bars are $`\pm `$0.33 rad for the $`\varphi _{21}`$ values and $`\pm `$0.05 for the $`R_{21}`$ ones. Note that the amplitudes quoted hereinafter are those of the cosine terms, i.e. the half–amplitude of the light variation. No significant $`2f`$ term could be evidenced in 12 cases (OGLEGC 7, 24, 34, 35, 37, 40, 46, 59, 60, 63, 66, 70). For these stars the light curves do not deviate appreciably from a sinusoid: that means that if a 2<sup>nd</sup>–order fit is forced on the data, the error bar on the amplitude of the 2$`f`$ term is larger than the amplitude itself. Following the same criterion, in 15 other cases the fit was stopped at the 2<sup>nd</sup>–order, in 6 cases at the 3<sup>rd</sup> and in two cases at the 4<sup>th</sup>.
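For a fixed frequency, the decomposition above is linear in the cosine and sine coefficients, so the Fourier parameters follow from a single least–squares solve. A minimal sketch (our own code; the error estimation and sign conventions of the original fits may differ):

```python
import numpy as np

def fourier_fit(t, V, f, order, T0=0.0):
    """Fit V(t) = V0 + sum_i A_i cos(2 pi i f (t - T0) + phi_i) for
    i = 1..order, and return V0, amplitudes, phases, R21 and phi21."""
    ph = 2*np.pi*f*(t - T0)
    A = np.column_stack([np.ones_like(t)] +
                        [c for i in range(1, order + 1)
                         for c in (np.cos(i*ph), np.sin(i*ph))])
    coef, *_ = np.linalg.lstsq(A, V, rcond=None)
    a, b = coef[1::2], coef[2::2]     # cosine and sine coefficients
    amp = np.hypot(a, b)
    phi = np.arctan2(-b, a)           # A cos(x+phi) = A cos(phi) cos(x) - A sin(phi) sin(x)
    R21 = amp[1] / amp[0]             # requires order >= 2
    phi21 = (phi[1] - 2*phi[0]) % (2*np.pi)
    return coef[0], amp, phi, R21, phi21
```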
Figure 2 shows the $`\varphi _{21}`$–$`P`$ plot: the stars have been subdivided into three groups according to their amplitude and different symbols have been used. As can be noticed, there is a well defined trend in the diagrams. Moreover, the $`\varphi _{21}`$ values (open squares in Fig. 2) related to the galactic stars CY Aqr, ZZ Mic (Antonello et al. 1986) and V831 Tau (Musazzi et al. 1998) are in excellent agreement with those related to stars in $`\omega `$ Cen.
There are some interesting cases:
OGLEGC 26 – The light curve is noisy (rms residual 0.033 mag), but its shape looks quite strange, with a descending branch steeper than the ascending one (Fig. 3, upper panel). The reality of the asymmetry is even more obvious when considering the mean light curve (Fig. 3, lower panel). In the Galaxy, there are two high–amplitude $`\delta `$ Sct stars with a similar light curve: V1719 Cyg (Poretti & Antonello 1988) and V798 Cyg (Musazzi et al. 1998). Both these stars have a double–mode nature. Since the number of measurements of OGLEGC 26 is adequate (231 on 50 nights), a second period should be revealed by the frequency analysis, but we failed to find it.
OGLEGC 29, 36, 39 and 62 – There are a few cases where the $`\varphi _{21}`$ values seem to deviate from the progression described by the others (Fig. 2). When considering the error bars, the $`\varphi _{21}`$ values of the light curves of OGLEGC 29 and 39 (3.72$`\pm `$0.66 rad and 4.02$`\pm `$0.82 rad, respectively) are only marginally deviating; in the case of OGLEGC 62 ($`\varphi _{21}`$=4.13$`\pm `$0.56 rad) the line is just within the error bar of the related point. This discrepancy can be explained by observational scatter, since the amplitude of the $`A_2`$ terms is very small. Moreover, the error bars may be optimistic since they are obtained from the formal error propagation. However, we note that the highest value (4.65$`\pm `$0.49 rad for OGLEGC 36) is the more reliable one and is farther than 3$`\sigma `$ from the others.
## 4 Discussion
The analysis of the 34 short–period variable stars in $`\omega `$ Cen stressed the importance of studying the Fourier parameters. The sample of high–amplitude $`\delta `$ Sct and SX Phe stars is considerably enlarged by these new variables, especially toward the shortest periods. In general, many variable stars show a very small amplitude, below 0.10 mag. Such a small value is probably responsible for the high number of sinusoidal light curves: since the $`R_{21}`$ ratios are usually around 0.1, the amplitude of the $`2f`$ term is very small and observational errors can mask the asymmetry of the light curve.
In spite of that, the $`\varphi _{21}`$ parameters are confined in a narrow strip for periods between 0.042 d and 0.07 d. Toward longer periods, there is an overlapping with the values obtained in the case of galactic stars. Toward shorter periods, the tendency to decreasing $`\varphi _{21}`$ values is also verified. It should be noted that there is a strong difference with the results obtained by analyzing the stars in the Carina dwarf Spheroidal Galaxy (Poretti 1999), where the distribution is not as clear as it is here.
As a general consideration, the progression of the $`\varphi _{21}`$ parameter as a function of the period appears in a clear way. However, a careful analysis should take more details into consideration:
1. Attention should be paid to the scatter in the distribution of the $`\varphi _{21}`$ parameters around 0.050 d in Fig. 2; in that region the mean error is $`\pm `$0.20 rad. Hence, this intriguing feature is on the borderline of being considered a real change in the progression. By analogy to Cepheid light curves (Pardo & Poretti 1997), such a change can be the signature of a resonance between the fundamental mode and a higher overtone.
2. The small bunch of points above the progression at 0.038 d suggests a different light curve family. Since this group of stars shows a very small amplitude, it is possible that they are nonradial pulsators, not necessarily radial pulsators in a higher overtone.
3. The very low $`\varphi _{21}`$ value (1.28$`\pm `$0.31 rad) emphasizes the anomalous light curve of OGLEGC 26. The fact that such a light curve is observed in a Pop. II object is quite surprising, since V1719 Cyg and V798 Cyg (whose light curves are similar) are very probably Pop. I stars having a quite normal metallic content. However, their $`\varphi _{21}`$ values are higher (2.52$`\pm `$0.05 and 2.64$`\pm `$0.06 rad, respectively) and hence the light curves are a little different. In any case, it seems that the phenomenon at the origin of the anomalous brightness increase should be carefully evaluated when dealing with pulsating star models.
It is of paramount importance to obtain very accurate light curves to give more confidence to these results. However, it should be noted that the $`\varphi _{31}`$ and $`\varphi _{41}`$ values (Tab. 1) supply a good confirmation of the reliability of the least–squares fits: indeed, their mean values (1.20 rad and 5.18 rad, respectively) are in excellent agreement with the expected ones on the basis of the results on the galactic variables (see Fig. 2 in Antonello et al. 1986, upper and middle panels).
###### Acknowledgements.
The author wishes to thank J. Vialle for checking the English form of the manuscript.
# On the hyperbolic structure of moduli spaces with 16 SUSYs
## 1 Introduction
In a recent collaboration with W. Fischler , we showed that the space of asymptotic directions in the moduli space of toroidally compactified M-theory had a hyperbolic metric, related to the hyperbolic structure of the $`E_{10}`$ duality group. We pointed out that this could have been anticipated from the hyperbolic nature of metric on moduli space in low energy SUGRA, which ultimately derives from the negative kinetic term for the conformal factor.
An important consequence of this claim is that there are asymptotic regions of the moduli space which cannot be mapped onto either 11D SUGRA (on a large smooth manifold) or weakly coupled Type II string theory. These regions represent true singularities of M-theory at which no known description of the theory is applicable. Interestingly, the classical solutions of the theory all follow trajectories which interpolate between the mysterious singular region and the regions which are amenable to a semiclassical description. This introduces a natural arrow of time into the theory. We suggested that moduli were the natural semiclassical variables that define cosmological time in M-theory and that “the Universe began” in the mysterious singular region.
We note that many of the singularities of the classical solutions can be removed by duality transformations. This makes the special nature of the singular region all the more striking<sup>1</sup><sup>1</sup>1For reference, we note that there are actually two different types of singular region: neither the exterior of the light cone in the space of asymptotic directions, nor the past light cone, can be mapped into the safe domain. Classical solutions do not visit the exterior of the light cone..
In view of the connection to the properties of the low energy SUGRA Lagrangian, we conjectured in that the same sort of hyperbolic structure would characterize moduli spaces of M-theory with less SUSY than the toroidal background. In this paper, we verify this conjecture for 11D SUGRA backgrounds of the form $`K3\times T^6`$, which is the same as the moduli space of heterotic strings compactified on $`T^9`$. A notable difference is the absence of a completely satisfactory description of the safe domains of asymptotic moduli space. This is not surprising. The moduli space is known to have an F-theory limit in which there is no complete semiclassical description of the physics. Rather, there are different semiclassical limits valid in different regions of a large spacetime.
Another difference is the appearance of asymptotic domains with different internal symmetry groups. 11D SUGRA on $`K3\times T^3`$ exhibits a $`U(1)^{28}`$ gauge group in four noncompact dimensions. At certain singularities, this is enhanced to a nonabelian group, but these singularities have finite codimension in the moduli space. Nonetheless, there are asymptotic limits in the full moduli space (i.e. generic asymptotic directions) in which the full heterotic symmetry group is restored. From the heterotic point of view, the singularity removing, symmetry breaking, parameters are Wilson lines on $`T^9`$. In the infinite (heterotic torus) volume limit, these become irrelevant. In this paper we will only describe the subspace of asymptotic moduli space with full $`SO(32)`$ symmetry. We will call this the HO moduli space from now on. The points of the moduli space will be parametrized by the dimensionless heterotic string coupling constant $`g_{het}=\mathrm{exp}p_0`$ and the radii $`R_i=L_{het}\mathrm{exp}p_i`$ where $`i=1,\mathrm{}10d`$ with $`d`$ being the number of large spacetime dimensions and $`L_{het}`$ denoting the heterotic string length. Throughout the paper we will neglect factors of order one.
Apart from these, more or less expected, differences, our results are quite similar to those of . The modular group of the completely compactified theory preserves a Lorentzian bilinear form with one timelike direction. The (more or less) well understood regimes correspond to the future light cone of this bilinear form, while all classical solutions interpolate between the past and future light cones. We interpret this as evidence for a new hyperbolic algebra $`𝒪`$, whose infinite momentum frame Galilean subalgebra is precisely the affine algebra $`\widehat{o}(8,24)`$ of -. This would precisely mirror the relation between $`E_{10}`$ and $`E_9`$. Recently, Ganor has suggested the $`DE_{18}`$ Dynkin diagram as the definition of the basic algebra of toroidally compactified heterotic strings. This is indeed a hyperbolic algebra in the sense that it preserves a nondegenerate bilinear form with precisely one negative eigenvalue<sup>2</sup><sup>2</sup>2Kac’ definition of a hyperbolic algebra requires it to turn into an affine or finite dimensional algebra when one root of the Dynkin diagram is cut. We believe that this is too restrictive and that the name hyperbolic should be based solely on the signature of the Cartan metric. We thank O. Ganor for discussions of this point..
### 1.1 The bilinear form
We adopt the result of with a few changes in notation. First we will use $`d=11k`$ instead of $`k`$ because now we start in ten dimensions instead of eleven. The parameter that makes the parallel between toroidal M-theory and heterotic compactifications most obvious is the number of large spacetime dimensions $`d`$. In , the bilinear form was
$$I=(\underset{i=1}{\overset{k}{}}P_i)^2+(d2)\underset{i=1}{\overset{k}{}}(P_i^2).$$
(1)
where $`P_i`$ (denoted $`p_i`$ in ) are the logarithms of the radii in 11-dimensional Planck units.
Now let us employ the last logarithm $`P_k`$ as the M-theoretical circle of a type IIA description. For the HE theory, which can be understood as M-theory on a line interval, we expect the same bilinear form where $`P_k`$ is the logarithm of the length of the Hořava-Witten line interval. Now we convert (1) to the heterotic units according to the formulae ($`k1=10d`$)
$$P_k=\frac{2}{3}p_0,P_i=p_i\frac{1}{3}p_0,i=1,\mathrm{}10d$$
(2)
where $`p_0=\mathrm{ln}g_{het}`$ and $`p_i=\mathrm{ln}(R_i/L_{het})`$ for $`i=1,\mathrm{}10d`$. To simplify things, we use natural logarithms instead of the logarithms with a large base $`t`$ like in . This corresponds to a simple rescaling of $`p`$’s but the directions are finally the only thing that we study. In obtaining (2) we have used the well-known formulae $`R_{11}=L_{planck}^{eleven}g_{het}^{2/3}`$ and $`L_{planck}^{eleven}=g_{het}^{1/3}L_{het}`$. Substituing (2) into (1) we obtain
$$I=(2p_0+\underset{i=1}{\overset{10d}{}}p_i)^2+(d2)\underset{i=1}{\overset{10d}{}}(p_i^2).$$
(3)
This bilinear form encodes the kinetic terms for the moduli in the $`E_8\times E_8`$ heterotic theory (HE) in the Einstein frame for the large coordinates.
We can see very easily that (3) is conserved by T-dualities. A simple T-duality (without Wilson lines) takes HE theory to HE theory with $`R_1`$ inverted and acts on the parameters like
$$(p_0,p_1,p_2,\mathrm{})(p_0p_1,p_1,p_2,\mathrm{}).$$
(4)
The change of the coupling constant keeps the effective 9-dimensional gravitational constant $`g_{het}^2/R_1=g_{het}^2/R_1^{}`$ (in units of $`L_{het}`$) fixed. In any number of dimensions (4) conserves the quantity
$$p_{10}=2p_0+\underset{i=1}{\overset{10d}{}}p_i$$
(5)
and therefore also the first term in (3). The second term in (3) is fixed trivially since only the sign of $`p_1`$ was changed. Sometimes we will use $`p_{10}`$ instead of $`p_0`$ as the extra parameter apart from $`p_1,\mathrm{}p_{10d}`$.
In fact those two terms in (3) are the only terms conserved by T-dualities and only the relative ratio between them is undetermined. However it is determined by S-dualities, which exist for $`d4`$. For the moment, we ask the reader to take this claim on faith. Since the HE and HO moduli spaces are the same on a torus, the same bilinear form can be viewed in the $`SO(32)`$ language. It takes the form (3) in $`SO(32)`$ variables as well.
Let us note also another interesting invariance of (3), which is useful for the $`SO(32)`$ case. Let us express the parameters in the terms of the natural parameters of the S-dual type I theory
$$p_0=q_0=\mathrm{ln}(g_{typeI}),p_i=q_i\frac{1}{2}q_0,i=1,\mathrm{}10d$$
(6)
where $`q_i=\mathrm{ln}(R_i/L_{typeI})`$. We used $`g_{typeI}=1/g_{het}`$ and $`L_{het}=g_{typeI}^{1/2}L_{typeI}`$, the latter expresses that the tension of the D1-brane and the heterotic strings are equal. Substituing this into (3) we get the same formula with $`q`$’s.
$$I=(2q_0+\underset{i=1}{\overset{10d}{}}q_i)^2+(d2)\underset{i=1}{\overset{10d}{}}(q_i^2)$$
(7)
### 1.2 Moduli spaces and heterotic S-duality
Let us recall a few well-known facts about the moduli space of heterotic strings toroidally compactified to $`d`$ dimensions. For $`d>4`$ the moduli space is
$$_d=^+\times (SO(26d,10d,)\backslash SO(26d,10d,)/SO(26d,)\times SO(10d,)).$$
(8)
The factor $`^+`$ determines the coupling constant $`L_{het}`$. For $`d=8`$ the second factor can be understood as the moduli space of elliptically fibered K3’s (with unit fiber volume), giving the duality with the F-theory. For $`d=7`$ the second factor also corresponds to the Einstein metrics on a K3 manifold with unit volume which expresses the duality with M-theory on K3. In this context, the factor $`^+`$ can be understood as the volume of the K3. Similarly for $`d=5,6,7`$ the second factor describes conformal field theory of type II string theories on K3, the factor $`^+`$ is related to the type IIA coupling constant.
For $`d=4`$, i.e. compactification on $`T^6`$, there is a new surprise. The field strength $`H_{\kappa \lambda \mu }`$ of the $`B`$-field can be Hodge-dualized to a 1-form which is the exterior derivative of a dual 0-form potential, the axion field. The dilaton and axion are combined in the $`S`$-field which means that in four noncompact dimensions, toroidally compactified heterotic strings exhibit the $`SL(2,)`$ S-duality.
$$_4=SL(2,)\backslash SL(2,)/SO(2,)\times (SO(22,6,)\backslash SO(22,6,)/SO(22)\times SO(6)).$$
(9)
Let us find how our parameters $`p_i`$ transform under S-duality. The S-duality is a kind of electromagnetic duality. Therefore an electrically charged state must be mapped to a magnetically charged state. The $`U(1)`$ symmetry expressing rotations of one of the six toroidal coordinate is just one of the 22 $`U(1)`$’s in the Cartan subalgebra of the full gauge group. It means that the electrically charged states, the momentum modes in the given direction of the six torus, must be mapped to the magnetically charged objects which are the KK-monopoles.
The strings wrapped on the $`T^6`$ must be therefore mapped to the only remaining point-like<sup>3</sup><sup>3</sup>3Macroscopic strings (and higher-dimensional objects) in $`d=4`$ have at least logarithmic IR divergence of the dilaton and other fields and therefore their tension becomes infinite. BPS objects available, i.e. to wrapped NS5-branes. We know that NS5-branes are magnetically charged with respect to the $`B`$-field so this action of the electromagnetic duality should not surprise us. We find it convenient to combine this S-duality with T-dualities on all six coordinates of the torus. The combined symmetry $`ST^6`$ exchanges the point-like BPS objects in the following way:
$$\begin{array}{ccc}& & \\ \hfill \text{momentum modes}& & \text{wrapped NS5-branes}\hfill \\ & & \\ \hfill \text{wrapped strings}& & \text{KK-monopoles}\hfill \end{array}$$
(10)
Of course, the distinguished direction inside the $`T^6`$ on both sides is the same. The tension of the NS5-brane is equal to $`1/(g_{het}^2L_{het}^6)`$. Now consider the tension of the KK-monopole. In 11 dimensions, a KK-monopole is reinterpreted as the D6-brane so its tension must be
$$T_{D6}=\frac{1}{g_{IIA}L_{IIA}^7}=\frac{R_{11}^2}{(L_{planck}^{eleven})^9}$$
(11)
where we have used $`g_{IIA}=R_{11}^{3/2}L_{planck}^{eleven}{}_{}{}^{3/2}`$ and $`L_{IIA}=L_{planck}^{eleven}{}_{}{}^{3/2}R_{11}^{1/2}`$ (from the tension of the fundamental string).
The KK-monopole must always be a $`(d5)`$-brane where $`d`$ is the dimension of the spacetime. Since it is a gravitational object and the dimensions along its worldvolume play no role, the tension must be always of order $`(R_1)^2`$ in appropriate Planck units where $`R_1`$ is the radius of the circle under whose $`U(1)`$ the monopole is magnetically charged. Namely in the case of the heterotic string in $`d=4`$, the KK-monopole must be another fivebrane whose tension is equal to
$$T_{KK5}=\frac{R_1^2}{(L_{planck}^{ten})^8}=\frac{R_{1}^{}{}_{}{}^{2}}{g_{het}^2L_{het}^8}$$
(12)
where the denominators express the ten-dimensional Newton’s constant.
Knowing this, we can find the transformation laws for $`p`$’s with respect to the $`ST^6`$ symmetry. Here $`V_6=R_1R_2R_3R_4R_5R_6`$ denotes the volume of the six-torus. Identifying the tensions in (10) we get
$$\frac{1}{R_1^{}}=\frac{V_6}{g_{het}^2R_1L_{het}^6},\frac{R_1^{}}{(L_{het}^{})^2}=\frac{V_6R_1}{g_{het}^2L_{het}^8}$$
(13)
Dividing and multiplying these two equations we get respectively
$$\frac{R_1^{}}{L_{het}^{}}=\frac{R_1}{L_{het}},\frac{1}{L_{het}^{}}=\frac{V_6}{g_{het}^2L_{het}^7}.$$
(14)
It means that the radii of the six-torus are fixed in string units i.e. $`p_1,\mathrm{},p_6`$ are fixed. Now it is straightforward to see that the effective four-dimensional $`SO(32)`$ coupling constant $`g_{het}^2L_{het}^6/V_6`$ is inverted and the four-dimensional Newton’s constant must remain unchanged. The induced transformation on the $`p`$’s is
$$(p_0,p_1,\mathrm{}p_6,p_7,p_8\mathrm{})(p_0+m,p_1,\mathrm{}p_6,p_7+m,p_8+m\mathrm{})$$
(15)
where $`m=(p_1+p_2+p_3+p_4+p_5+p_62p_0)`$ and the form (3) can be checked to be constant. It is also easy to see that such an invariance uniquely determines the form up to an overall normalization i.e. it determines the relative magnitude of two terms in (3).
For $`d=4`$ this $`ST^6`$ symmetry can be expressed as $`p_{10}p_{10}`$ with $`p_1,\mathrm{}p_6`$ fixed which gives the $`_2`$ subgroup of the $`SL(2,)`$. For $`d=3`$ the transformation (15) acts as $`p_7p_{10}`$ so $`p_{10}`$ becomes one of eight parameters that can be permuted with each other. It is a trivial consequence of the more general fact that in three dimensions, the dilaton-axion field unifies with the other moduli and the total space becomes
$$_3=SO(24,8,)\backslash SO(24,8,)/SO(24,)\times SO(8,).$$
(16)
We have thus repaid our debt to the indulgent reader, and verified that the bilinear form (3) is indeed invariant under the dualities of the heterotic moduli space for $`d3`$. For $`d=2`$ the bilinear form is degenerate and is the Cartan form of the affine algebra $`\widehat{o}(8,24)`$ studied by . For $`d=1`$ it is the Cartan form of $`DE_{18}`$ . The consequences of this for the structure of the extremes of moduli space are nearly identical to those of . The major difference is our relative lack of understanding of the safe domain. We believe that this is a consequence of the existence of regimes like F-theory or 11D SUGRA on a large smooth K3 with isolated singularities, where much of the physics is accessible but there is no systematic expansion of all scattering amplitudes. In the next section we make some remarks about different extreme regions of the restricted moduli space that preserves the full $`SO(32)`$ symmetry.
## 2 Covering the $`SO(32)`$ moduli space
### 2.1 Heterotic strings, type I, type IA and $`d9`$
One new feature of heterotic moduli spaces is the apparent possibility of having asymptotic domains with enhanced gauge symmetry. For example, if we consider the description of heterotic string theory on a torus from the usual weak coupling point of view, there are domains with asymptotically large heterotic radii and weak coupling, where the the full nonabelian rank $`16`$ Lie groups are restored. All other parameters are held fixed at what appears from the weak coupling point of view to be “generic”values. This includes Wilson lines. In the large volume limit, local physics is not sensitive to the Wilson line symmetry breaking.
Now, consider the limit described by weakly coupled Type IA string theory on a large orbifold. In this limit, the theory consists of D-branes and orientifolds, placed along a line interval. There is no way to restore the $`E_8\times E_8`$ symmetry in this regime. Thus, even the safe domain of asymptotic moduli space appears to be divided into regimes in which different nonabelian symmetries are restored. Apart from sets of measure zero (e.g. partial decompactifications) we either have one of the full rank $`16`$ nonabelian groups, or no nonabelian symmetry at all. The example of F-theory tells us that the abelian portion of asymptotic moduli space has regions without a systematic semiclassical expansion.
In a similar manner, consider the moduli space of the $`E_8\times E_8`$ heterotic strings on rectilinear tori. We have only two semiclassical descriptions with manifest $`E_8\times E_8`$ symmetry, namely HE strings and the Hořava-Witten (HW) domain walls. Already for $`d=9`$ (and any $`d<9`$) we would find limits that are described neither by HE nor by HW. For example, consider a limit of M-theory on a cylinder with very large $`g_{het}`$ but the radius of the circle, $`R`$, in the domain $`L_PRL_{het}^2/L_{planck}^{eleven}`$, and unbroken $`E_8\times E_8`$. We do not know how to describe this limit with any known semiclassical expansion. We will find that we can get a more systematic description of asymptotic domains in the HO case, and will restrict attention to that regime for the rest of this paper.
For $`d=10`$ there are only two limits. $`p_0<0`$ gives the heterotic strings and $`p_0>0`$ is the type I theory. However already for $`d=9`$ we have a more interesting picture analogous to the figure 1 in . Let us make a counterclockwise trip around the figure. We start at a HO point with $`p_1=0`$ which is a weakly coupled heterotic string theory with radii of order $`L_{het}`$ (therefore it is adjacent to its T-dual region). When we go around the circle, the radius and also the coupling increases and we reach the line $`p_0(p_1p_{10})/2=0`$ where we must switch to the type I description. Then the radius decreases again so that we must perform a T-duality and switch to the type IA description. This happens for $`p_1(p_0/2)=(3p_1+p_{10})/4=0`$; we had to convert $`R_1`$ to the units of $`L_{typeI}=g_{het}^{1/2}L_{het}`$. Then we go on and the coupling $`g_{IA}`$ and/or the size of the line interval increases. The most interesting is the final boundary given by $`p_1=0`$ which guarantees that each of the point of the $`p`$-space is covered precisely by one limit.
We can show that $`p_1>0`$ is precisely the condition that the dilaton in the type IA theory is not divergent. Roughly speaking, in units of $`L_{typeI}=L_{IA}`$ the “gravitational potential” is linear in $`x_1`$ and proportional to $`g_{IA}^2/g_{IA}`$. Here $`g_{IA}^2`$ comes from the gravitational constant and $`1/g_{IA}`$ comes from the tension of the D8-branes. Therefore we require not only $`g_{IA}<1`$ but also $`g_{IA}<L_{typeI}/R_{lineinterval}`$. Performing the T-duality $`L_{typeI}/R_{lineinterval}=R_{circle}/L_{typeI}`$ and converting to $`L_{het}`$ the condition becomes precisely $`R_{circle}>L_{het}`$.
In all the text we adopt (and slightly modify) the standard definition for an asymptotic description to be viable: dimensionless coupling constants should be smaller than one, but in cases without translational invariance, the dilaton should not diverge anywhere, and the sizes of the effective geometry should be greater than the appropriate typical scale (the string length for string theories or the Planck length for M-theory). It is important to realize that in the asymptotic regions we can distinguish between e.g. type I and type IA because their physics is different. We cannot distinguish between them in case the T-dualized circle is of order $`L_{typeI}`$ but such vacua are of measure zero in our investigation and form various boundaries in the parameter space. This is the analog of the distinction we made between the IIA and IIB asymptotic toroidal moduli spaces in
### 2.2 Type IA<sup>2</sup> and $`d=8`$
In $`d=8`$ we will have to use a new desciption to cover the parameter space, namely the double T-dual of type I which we call type IA<sup>2</sup>. Generally, type IA<sup>k</sup> contains 16 D-$`(9k)`$-branes, their images and $`2^k`$ orientifold $`(9k)`$-planes. We find it also useful to perform heterotic T-dualities to make $`p_i`$ positive for $`i=1,\mathrm{},10d`$ and sort $`p`$’s so that our interest is (without a loss of generality) only in configurations with
$$0p_1p_2\mathrm{}p_{10d}$$
(17)
We need positive $`p`$’s for the heterotic description to be valid but such a transformation can only improve the situation also for type I and its T-dual descriptions since when we turn $`p`$’s from negative to positive values, $`g_{het}`$ increases and therefore $`g_{typeI}`$ decreases. For type I we also need large radii. For its T-duals we need a very small string coupling and if we make a T-duality to convert $`R>L_{typeI}`$ into $`R<L_{typeI}`$, the coupling $`g_{IA}`$ still decreases; therefore it is good to have as large radii in the type I limit as possible.
In $`d=8`$ our parameters are $`p_0,p_1,p_2`$ or $`p_{10},p_1,p_2`$ where $`p_{10}=2p_0+p_1+p_2`$ and we will assume $`0<p_1<p_2`$ as we have explained (sets of measure zero such as the boundaries between regions will be neglected). If $`p_0<0`$, the HO description is good. Otherwise $`p_0>0`$. If furthermore $`2p_1p_0>0`$ (and therefore also $`2p_2p_0>0`$), the radii are large in the type I units and we can use the (weakly coupled) type I description. Otherwise $`2p_1p_0<0`$. If furthermore $`2p_2p_0>0`$, we can use type IA strings. Otherwise $`2p_2p_0<0`$ and the type IA<sup>2</sup> description is valid. Therefore we cover all the parameter space. Note that the F-theory on K3 did not appear here. In asymptotic moduli space, the F-theory regime generically has no enhanced nonabelian symmetries.
In describing the boundaries of the moduli space, we used the relations $`L_{het}=g_{typeI}^{1/2}L_{typeI}`$, $`g_{het}=1/g_{typeI}`$. The condition for the dilaton not to diverge is still $`p_1>0`$ for any type IA<sup>k</sup> description. The longest direction of the $`T^k/_2`$ of this theory is still the most dangerous for the dilaton divergence and is not affected by the T-dualities on the shorter directions of the $`T^k/_2`$ orientifold. For $`d=9`$ (and fortunately also for $`d=8`$) the finiteness of the dilaton field automatically implied that $`g_{IA^k}<1`$. However this is not true for general $`d`$. After a short chase through a sequence of S and T-dualities we find that the condition $`g_{IA^k}<1`$ can be written as
$$(k2)p_02\underset{i=1}{\overset{k}{}}p_i<0.$$
(18)
We used the trivial requirement that the T-dualities must be performed on the shortest radii (if $`R_j<L_{typeI}`$, also $`R_{j1}<L_{typeI}`$ and therefore it must be also T-dualized). Note that for $`k=1`$ the relation is $`p_02p_1<0`$ which is a trivial consequence of $`p_1>0`$ and $`p_0>0`$. Also for $`k=2`$ we get a trivial condition $`2(p_1+p_2)<0`$. However for $`k>2`$ this condition starts to be nontrivial. This is neccessary for consistency: otherwise IA<sup>k</sup> theories would be sufficient to cover the whole asymptotic moduli space, and because of S-dualities we would cover the space several times. It would be also surprising not to encounter regimes described by large K3 geometries.
### 2.3 Type IA<sup>3</sup>, M-theory on K3 and $`d=7`$
This happens already for $`d=7`$ where the type IA<sup>3</sup> description must be added. The reasoning starts in the same way: for $`p_0<0`$ HO, for $`2p_1p_0>0`$ type I, for $`2p_2p_0>0`$ type IA, for $`2p_3p_0>0`$ type IA<sup>2</sup>.
However, when we have $`2p_3p_0<0`$ we cannot deduce that the conditions for type IA<sup>3</sup> are obeyed because also (18) must be imposed:
$$p_02(p_1+p_2+p_3)<0$$
(19)
It is easy to see that this condition is the weakest one i.e. that it is implied by any of the conditions $`p_0<0`$, $`2p_1p_0>0`$, $`2p_2p_0>0`$ or $`2p_3p_0>0`$. Therefore the region that we have not covered yet is given by the opposite equation
$$2p_04(p_1+p_2+p_3)=p_{10}3(p_1+p_2+p_3)>0$$
(20)
The natural hypothesis is that this part of the asymptotic parameter space is the limit where we can use the description of M-theory on a K3 manifold. However things are not so easy: the condition that $`V_{K3}>(L_{planck}^{eleven})^4`$ gives just $`p_{10}<0`$ which is a weaker requirement than (20).
The K3 manifold has a $`D_{16}`$ singularity but this is not the real source of the troubles. A more serious issue is that the various typical sizes of such a K3 are very different and we should require that each of them is greater than $`L_{planck}^{eleven}`$ (which means that the shortest one is). In an analogous situation with $`T^4`$ instead of K3 the condition $`V_{T^4}>L_{planck}^{eleven}{}_{}{}^{4}`$ would be also insufficient: all the radii of the four-torus must be greater than $`L_{planck}^{eleven}`$.
Now we would like to argue that the region defined by (20) with our gauge $`0<p_1<p_2<p_3`$ can indeed be described by the 11D SUGRA on K3, except near the $`D_{16}`$ singularity. Therefore, all of the asymptotic moduli space is covered by regions which have a reasonable semiclassical description.
While the fourth root of the volume of K3 equals
$$\frac{V_{K3}^{1/4}}{L_{planck}^{eleven}}=\frac{g_{het}^{1/3}L_{het}^{1/2}}{V_3^{1/6}}=\mathrm{exp}\left(p_0/3(p_1+p_2+p_3)/6\right)=\mathrm{exp}(p_{10}/6),$$
(21)
the minimal typical distance in K3 must be corrected to agree with (20). We must correct it only by a factor depending on the three radii in heterotic units (because only those are the parameters in the moduli space of metric on the K3) so the distance equals (confirming (20))
$$\frac{L_{min.K3}}{L_{planck}^{eleven}}=\mathrm{exp}\left(p_{10}/6(p_1+p_2+p_3)/2\right).$$
(22)
Evidence that (22) is really correct and thus that we understand the limits for $`d=7`$ is the following. We must first realize that 16 independent two-cycles are shrunk to zero size because of the $`D_{16}`$ singularity present in the K3 manifold. This singularity implies a lack of understanding of the physics in a vicinity of this point but it does not prevent us from describing the physics in the rest of K3 by 11D SUGRA. So we allow the 16 two-cycles to shrink. The remaining 6 two-cycles generate a space of signature 3+3 in the cohomology lattice: the intersection numbers are identical to the second cohomology of $`T^4`$. We can compute the areas of those 6 two-cycles because the M2-brane wrapped on the 6-cycles are dual to the wrapped heterotic strings and their momentum modes. Now let us imagine that the geometry of the two-cycles of K3 can be replaced by the 6 two-cycles of a $`T^4`$ which have the same intersection number.
It means that the areas can be written as $`a_1a_2,a_1a_3,a_1a_4`$, $`a_2a_3,a_2a_4,a_3a_4`$ where $`a_1,a_2,a_3,a_4`$ are the radii of the four-torus and correspond to some typical distances of the K3. If we order the $`a`$’s so that $`a_1<a_2<a_3<a_4`$, we see that the smallest of the six areas is $`a_1a_2`$ (the largest two-cycle is the dual $`a_3a_4`$) and similarly the second smallest area is $`a_1a_3`$ (the second largest two-cycle is the dual $`a_2a_4`$). On the heterotic side we have radii $`L_{het}<R_1<R_2<R_3`$ (thus also $`L_{het}^2/R_3<L_{het}^2/R_2<L_{het}^2/R_1<L_{het}`$) and therefore the correspondence between the membranes and the wrapping and momentum modes of heterotic strings tells us that
$$\frac{a_1a_2}{L_{planck}^{eleven}{}_{}{}^{3}}=\frac{1}{R_3},\frac{a_3a_4}{L_{planck}^{eleven}{}_{}{}^{3}}=\frac{R_3}{L_{het}^2},\frac{a_1a_3}{L_{planck}^{eleven}{}_{}{}^{3}}=\frac{1}{R_2},\frac{a_2a_4}{L_{planck}^{eleven}{}_{}{}^{3}}=\frac{R_2}{L_{het}^2}.$$
(23)
As a check, note that $`V_{K3}=a_1a_2a_3a_4`$ gives us $`L_{planck}^{eleven}{}_{}{}^{6}/L_{het}^2`$ as expected (since heterotic strings are M5-branes wrapped on $`K3`$). We will also assume that
$$\frac{a_1a_4}{L_{planck}^{eleven}{}_{}{}^{3}}=\frac{1}{R_1},\frac{a_2a_3}{L_{planck}^{eleven}{}_{}{}^{3}}=\frac{R_1}{L_{het}^2}.$$
(24)
Now we can calculate the smallest typical distance on the K3.
$$a_1=\sqrt{\frac{a_1a_2a_1a_3}{a_2a_3}}=\frac{L_{planck}^{eleven}{}_{}{}^{3/2}L_{het}}{\sqrt{R_1R_2R_3}}$$
(25)
which can be seen to coincide with (22). There is a subtlety that we should mention. It is not completely clear whether $`a_1a_4<a_2a_3`$ as we assumed in (24). The opposite possibility is obtained by exchanging $`a_1a_4`$ and $`a_2a_3`$ in (24) and leads to $`a_1`$ greater than (25) which would imply an overlap with the other regions. Therefore we believe that the calculation in (24) and (25) is the correct way to find the condition for the K3 manifold to be large enough for the 11-dimensional supergravity (as a limit of M-theory) to be a good description.
### 2.4 Type IA<sup>4,5</sup>, type IIA/B on K3 and $`d=6,5`$
Before we will study new phenomena in lower dimensions, it is useful to note that in any dimension we add new descriptions of the physics. The last added limit always corresponds to the “true” S-dual of the original heterotic string theory – defined by keeping the radii fixed in the heterotic string units (i.e. also keeping the shape of the K3 geometry) and sending the coupling to infinity – because this last limit always contains the direction with $`p_0`$ large and positive (or $`p_{10}`$ large and negative) and other $`p_i`$’s much smaller.
* In 10 dimensions, the true S-dual of heterotic strings is the type I theory.
* In 9 dimensions it is type IA.
* In 8 dimensions type IA<sup>2</sup>.
* In 7 dimensions we get M-theory on K3.
* In 6 dimensions type IIA strings on K3.
* In 5 dimensions type IIB strings on K3$`\times S^1`$ where the circle decompactifies as the coupling goes to infinity. The limit is therefore a six-dimensional theory.
* In 4 dimensions we observe a mirror copy of the region $`p_{10}<0`$ to arise for $`p_{10}>0`$. The strong coupling limit is the heterotic string itself.
* In 3 dimensions the dilaton-axion is already unified with the other moduli so it becomes clear that we studied an overly specialized direction in the examples above. Nevertheless the same claim as in $`d=4`$ can be made.
* In 2 dimensions only positive values of $`p_{10}`$ are possible therefore the strong coupling limit does not exist in the safe domain of moduli space.
* In 1 dimension the Lorentzian structure of the parameter space emerges. Only the future light cone corresponds to semiclassical physics which is reasonably well understood. The strong coupling limit defined above would lie inside the unphysical past light cone.
Now let us return to the discussion of how to separate the parameter space into regions where different semiclassical descriptions are valid. We may repeat the same inequalities as in $`d=7`$ to define the limits HO, I, IA, IA<sup>2</sup>, IA<sup>3</sup>. But for M-theory on K3 we must add one more condition to the constraint (20): a new circle has been added and its size should be also greater than $`L_{planck}^{eleven}`$. For the new limit of the type IIA strings on K3 we encounter similar problems as in the case of the M-theory on K3. Furthermore if we use the definition (22) and postulate this shortest distance to be greater than the type IIA string length, we do not seem to get a consistent picture covering the whole moduli space. Similarly for $`d=5`$, there appear two new asymptotic descriptions, namely type IA<sup>5</sup> theory and type IIB strings on $`K3\times S^1`$. It is clear that the condition $`g_{IA^5}<1`$ means part of the parameter space is not understood and another description, most probably type IIB strings on $`K3\times S^1`$, must be used. Unfortunately at this moment we are not able to show that the condition for the IIB theory on K3 to be valid is complementary to the condition $`g_{IA^5}<1`$. A straightforward application of (25) already for the type IIA theory on a K3 gives us a different inequality. Our lack of understanding of the limits for $`d<7`$ might be solved by employing a correct T-duality of the type IIA on K3 but we do not have a complete and consistent picture at this time.
### 2.5 Type IA<sup>6</sup> and S-duality in $`d=4`$
Let us turn to the questions that we understand better. As we have already said, in $`d=4`$ we see the $`_2`$ subgroup of the $`SL(2,)`$ S-duality which acts as $`p_{10}p_{10}`$ and $`p_1,\mathrm{},p_6`$ fixed in our formalism. This reflection divides the $`p`$-space to subregions $`p_{10}>0`$ and $`p_{10}<0`$ which will be exchanged by the S-duality. This implies that a new description should require $`p_{10}>0`$. Fortunately this is precisely what happens: in $`d=4`$ we have one new limit, namely the type IA<sup>6</sup> strings and the condition (18) for $`g_{IA^6}<1`$ gives
$$4p_02\underset{i=1}{\overset{6}{}}p_i=2p_{10}<0$$
(26)
or $`p_{10}>0`$.
In the case of $`d=3`$ we find also a fundamental domain that is copied several times by S-dualities. This fundamental region is again bounded by the condition $`g_{gauge}^{eff.4dim}<1`$ which is the same like $`g_{IA^6}<1`$ and the internal structure has been partly described: the fundamental region is divided into several subregions HO, type I, type IA<sup>k</sup>, M/K3, IIA/K3, IIB/K3. As we have said, we do not understand the limits with a K3 geometry well enough to separate the fundamental region into the subregions enumerated above. We are not even sure whether those limits are sufficient to cover the whole parameter space. In the case of $`E_8\times E_8`$ theory, we are pretty sure that there are some limits that we do not understand already for $`d=9`$ and similar claim can be true in the case of the $`SO(32)`$ vacua for $`d<7`$. We understand much better how the entire parameter space can be divided into the copies of the fundamental region and we want to concentrate on this question.
The inequality $`g_{gauge}^{eff.4dim}<1`$ should hold independently of which of the six radii are chosen to be the radii of the six-torus. In other words, it must hold for the smallest radii and the condition is again (26) which can be for $`d=3`$ reexpressed as $`p_7<p_{10}`$.
So the “last” limit at the boundary of the fundamental region is again type IA<sup>6</sup> and not type IA<sup>7</sup>, for instance. It is easy to show that the condition $`g_{IA^6}<1`$ is implied by any of the conditions for the other limits so this condition is the weakest of all: all the regions are inside $`g_{IA^6}<1`$.
This should not be surprising, since $`g_{gauge}^{eff.4dim}=(g_{IA^6})^{1/2}=g_{IA^6}^{open}`$; the heterotic S-duality in this type IA<sup>6</sup> limit can be identified with the S-duality of the effective low-energy description of the D3-branes of the type IA<sup>6</sup> theory. As we have already said, this inequality reads for $`d=3`$
$$2p_0\underset{i=1}{\overset{6}{}}p_i=p_{10}+p_7<0$$
(27)
or $`p_{10}>p_7`$. We know that precisely in $`d=3`$ the S-duality (more precisely the $`ST^6`$ transformation) acts as the permutation of $`p_7`$ and $`p_{10}`$. Therefore it is not hard to see what to do if we want to reach the fundamental domain: we change all signs to pluses by T-dualities and sort all eight numbers $`p_1,\mathrm{}p_7;p_{10}`$ in the ascending order. The inequality (27) will be then satisfied. The condition $`g_{gauge}^{eff.4dim}<1`$ or (26) will define the fundamental region also for the case of one or two dimensions.
### 2.6 The infinite groups in $`d2`$
In the dimensions $`d>2`$ the bilinear form is positive definite and the group of dualities conserves the lattice $`^{11d}`$ in the $`p`$-space. Therefore the groups are finite. However for $`d=2`$ (and a fortiori for $`d=1`$ because the $`d=2`$ group is isomorphic to a subgroup of the $`d=1`$ group) the group becomes infinite. In this dimension $`p_{10}`$ is unchanged by T-dualities and S-dualities. The regions with $`p_{10}0`$ again correspond to mysterious regions where the holographic principle appears to be violated, as in . Thus we may assume that $`p_{10}=1`$; the overall normalization does not matter.
Start for instance with $`p_{10}=1`$ and
$$(p_1,p_2,\mathrm{}p_8)=(0,0,0,0,0,0,0,0)$$
(28)
and perform the S-duality ($`ST^6`$ from the formula (15)) with $`p_7`$ and $`p_8`$ understood as the large dimensions (and $`p_1\mathrm{}p_6`$ as the 6-torus). This transformation maps $`p_7p_{10}p_8`$ and $`p_8p_{10}p_7`$. So if we repeat $`ST^6`$ on $`p_7,p_8`$, T-duality of $`p_7,p_8`$, $`ST^6`$, $`T^2`$ and so on, $`p_1\mathrm{}p_6`$ will be still zero and the values of $`p_7,p_8`$ are
$$(p_7,p_8)=(1,1)(1,1)(2,2)(2,2)(3,3)\mathrm{}$$
(29)
and thus grow linearly to infinity, proving the infinite order of the group. The equation for $`g_{IA^6}<1`$ now gives
$$2p_0\underset{i=1}{\overset{6}{}}p_i=p_{10}+p_7+p_8<0$$
(30)
or $`p_{10}>p_7+p_8`$. Now it is clear how to get to such a fundamental region with (30) and $`0<p_1<\mathrm{}p_8`$. We repeat the $`ST^6`$ transformation with the two largest radii ($`p_7,p_8`$) as the large coordinates. After each step we turn the signs to $`+`$ by T-dualities and order $`p_1<\mathrm{}<p_8`$ by permutations of radii. A bilinear quantity decreases assuming $`p_{10}>0`$ and $`p_{10}<p_7+p_8`$ much like in , the case $`k=9`$ ($`d=2`$):
$$C_{d=2}=\underset{i=1}{\overset{8}{}}(p_i)^2\underset{i=1}{\overset{8}{}}(p_i)^2+2p_{10}(p_{10}(p_7+p_8))$$
(31)
In the same way as in , starting with a rational approximation of a vector $`\stackrel{}{p}`$, the quantity $`C_{d=2}`$ cannot decrease indefinitely and therefore finally we must get to a point with $`p_{10}>p_7+p_8`$.
In the case $`d=1`$ the bilinear form has a Minkowski signature. The fundamental region is now limited by
$$2p_0\underset{i=1}{\overset{6}{}}p_i=p_{10}+p_7+p_8+p_9<0$$
(32)
and it is easy to see that under the $`ST^6`$ transformation on radii $`p_1\mathrm{}p_6`$, $`p_{10}`$ transforms as
$$p_{10}2p_{10}(p_7+p_8+p_9).$$
(33)
Since the $`ST^6`$ transformation is a reflection of a spatial coordinate in all cases, it keeps us inside the future light cone if we start there. Furthermore, after each step we make such T-dualities and permutations to ensure $`0<p_1<\mathrm{}p_9`$.
If the initial $`p_{10}`$ is greater than $`[(p_1)^2+\mathrm{}+(p_9)^2]^{1/2}`$ (and therefore positive), it remains positive and assuming $`p_{10}<p_7+p_8+p_9`$, it decreases according to (33). But it cannot decrease indefinitely (if we approximate $`p`$’s by rational numbers or integers after a scale transformation). So at some point the assumption $`p_{10}<p_7+p_8+p_9`$ must break down and we reach the conclusion that fundamental domain is characterized by $`p_{10}>p_7+p_8+p_9`$.
### 2.7 The lattices
In the maximally supersymmetric case , we encountered exceptional algebras and their corresponding lattices. We were able to see some properties of the Weyl group of the exceptional algebra $`E_{10}`$ and define its fundamental domain in the Cartan subalgebra. In the present case with 16 supersymmetries, the structure of lattices for $`d>2`$ is not as rich. The dualities always map integer vectors $`p_i`$ onto integer vectors.
For $`d>4`$, there are no S-dualities and our T-dualities know about the group $`O(26d,10d,)`$. For $`d=4`$ our group contains an extra $`_2`$ factor from the single S-duality. For $`d=3`$ they unify to a larger group $`O(8,24,)`$. We have seen the semidirect product of $`(_2)^8`$ and $`S_8`$ related to its Weyl group in our formalism. For $`d=2`$ the equations of motion exhibit a larger affine $`\widehat{o}(8,24)`$ algebra whose discrete duality group has been studied in .
In $`d=1`$ our bilinear form has Minkowski signature. The S-duality can be interpreted as a reflection with respect to the vector
$$(p_1,p_2,\mathrm{},p_9,p_{10})=(0,0,0,0,0,0,1,1,1,+1).$$
(34)
This is a spatial vector with length-squared equal to minus two (the form (3) has a time-like signature). As we have seen, such reflections generate together with T-dualities an infinite group which is an evidence for an underlying hyperbolic algebra analogous to $`E_{10}`$. Indeed, Ganor has argued that the $`DE_{18}`$ “hyperbolic” algebra underlies the nonperturbative duality group of maximally compactified heterotic string theory. The Cartan algebra of this Dynkin diagram unifies the asymptotic directions which we have studied with compact internal symmetry directions. Its Cartan metric has one negative signature direction.
## 3 Conclusions
The parallel structure of the moduli spaces with 32 and 16 SUSYs gives us reassurance that the features uncovered in are general properties of M-theory. It would be interesting to extend these arguments to moduli spaces with less SUSY. Unfortunately, we know of no algebraic characterization of the moduli space of M-theory on a Calabi Yau threefold. Furthermore, this moduli space is no longer an orbifold. It is stratified, with moduli spaces of different dimensions connecting to each other via extremal transitions. Furthermore, in general the metric on moduli space is no longer protected by nonrenormalization theorems, and we are far from a characterization of all the extreme regions. For the case of four SUSYs the situation is even worse, for most of what we usually think of as the moduli space actually has a superpotential on it, which generically is of order the fundamental scale of the theory. <sup>4</sup><sup>4</sup>4Apart from certain extreme regions, where the superpotential asymptotes to zero, the only known loci on which it vanishes are rather low dimensional subspaces of the classical moduli space, .
There are thus many hurdles to be jumped before we can claim that the concepts discussed here and in have a practical application to realistic cosmologies.
###### Acknowledgments.
We are grateful to Ori Ganor for valuable discussions. This work was supported in part by the DOE under grant number DE-FG02-96ER40559.
|
no-problem/9904/astro-ph9904172.html
|
ar5iv
|
text
|
# Brief Note: Analytical Fit to the Luminosity Distance for Flat Cosmologies with a Cosmological Constant
## 1 Introduction
It is presently fashionable to consider spatially flat cosmological models with a cosmological constant (Ostriker and Steinhardt 1995). Several projects are underway to measure the magnitude of the cosmological constant, usually based on the luminosity-distance (Perlmutter et al. 1997), the angular-diameter distance (Pen 1997), or volume-distance (Kochanek 1996). Unfortunately, these distances are only expressible in terms of Elliptic functions (Eisenstein 1997). In order to simplify the repeated computation of difficult transcendental functions or numerical integrals, we present a fitting formula with the following properties:
1. it is exact for $`\mathrm{\Omega }_01^{}`$ and $`\mathrm{\Omega }_00^+`$ at all redshifts.
2. The relative error tends to zero as $`z\mathrm{}`$ for any value of $`\mathrm{\Omega }_0`$.
3. For the range $`0.2\mathrm{\Omega }_01`$, the relative error is less than $`0.4\%`$.
4. For any choice of parameters, the relative error is always less than $`4\%`$.
Without further ado, the luminosity distance is given as:
$`d_L`$ $`=`$ $`{\displaystyle \frac{c}{H_0}}(1+z)\left[\eta (1,\mathrm{\Omega }_0)\eta ({\displaystyle \frac{1}{1+z}},\mathrm{\Omega }_0)\right]`$
$`\eta (a,\mathrm{\Omega }_0)`$ $`=`$ $`2\sqrt{s^3+1}\left[{\displaystyle \frac{1}{a^4}}0.1540{\displaystyle \frac{s}{a^3}}+0.4304{\displaystyle \frac{s^2}{a^2}}+0.19097{\displaystyle \frac{s^3}{a}}+0.066941s^4\right]^{\frac{1}{8}}`$
$`s^3`$ $`=`$ $`{\displaystyle \frac{1\mathrm{\Omega }_0}{\mathrm{\Omega }_0}}.`$ (1)
We have used the Hubble constant $`H_0`$, and the pressureless matter content $`\mathrm{\Omega }_0`$. We recall that an object of luminosity $`L`$ has flux $`F=L/(4\pi d_L^2)`$.
## 2 Approximation
Spatial flatness allows us to rewrite the Friedman-Robertson-Walker metric as a conformally flat spacetime
$$ds^2=a^2(d\eta ^2+dr^2+r^2d\mathrm{\Omega }).$$
(2)
$`\eta `$ is the conformal time, and $`r`$ is the comoving distance. We have set the speed of light $`c=1`$. The scale factor $`a`$ can be normalized in terms of the redshift $`a1/(1+z)`$. The Friedman equations determine
$$\left(\frac{da}{d\eta }\right)^2=a\mathrm{\Omega }_0+a^4\mathrm{\Omega }_\mathrm{\Lambda }$$
(3)
where spatial flatness requires $`\mathrm{\Omega }_0+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. A change of variables to $`u=as`$ where $`s^3=(1\mathrm{\Omega }_0)/\mathrm{\Omega }_0`$ allows us to express (3) parameter free
$$\eta =\sqrt{\frac{s^3+1}{s}}_0^{as}\frac{du}{\sqrt{u^4+u}}.$$
(4)
We can asymptotically approximate (4) as
$$\eta _1=2\sqrt{\frac{s^3+1}{s}}\left[u^n+n(2X)^{2n+1}u^1+(2X)^{2n}\right]^{\frac{1}{2n}}$$
(5)
where $`X(_0^{\mathrm{}}𝑑u/\sqrt{u^4+u})^1=3\sqrt{\pi }/(\mathrm{\Gamma }[\frac{1}{6}]\mathrm{\Gamma }[\frac{1}{3}])0.3566`$. $`n`$ is a free parameter, and at this stage a choice of $`n=3`$ approximates all distances to better than $`14\%`$ relative accuracy. Equation (5) satisfies the following conditions: 1. it converges to (4) as $`u0`$ and $`u\mathrm{}`$. 2. Its derivative converges to the derivative of $`\eta `$ as $`u\mathrm{}`$. To improve on (5) we consider a polynomial expansion of $`u^1`$ in the denominator with two free parameters by setting $`n=4`$:
$$\stackrel{~}{\eta }=2\sqrt{\frac{s^3+1}{s}}\left[u^4+c_1u^3+c_2u^2+4(2X)^9u^1+(2X)^8\right]^{1/8}.$$
(6)
We will now choose the coefficients $`c_1,c_2`$ to minimize the relative error in the approximate luminosity distance $`\stackrel{~}{d}_L`$,
$$e_{\mathrm{}}(\stackrel{~}{d}_Ld_L)/d_L_{\mathrm{}}$$
(7)
where the subscript $`\mathrm{}`$ indicates the infinity norm, i.e. the maximal value over the domain. The error in (7) tends to be dominated by $`z=0`$ (see for example Figure 2), for which we can express
$$e_{\mathrm{}}=\sqrt{u^4+u}\frac{d\stackrel{~}{\eta }}{du}1_{\mathrm{}}.$$
(8)
Globally optimizing (8) allows us to reduce the error to about $`2\%`$. But we choose the following trade-off for current cosmological parameters: We want to minimize (8) over the range $`0.2\mathrm{\Omega }_01`$, which covers the popularly considered parameter space. In that range, we find through a non-linear equation solver that $`c_1=0.1540`$ and $`c_2=0.4304`$. This allows us to trade the global error of $`2\%`$ for a global error of $`4\%`$, while reducing the error in the range of interest to $`0.4\%`$. The global error surface plot is shown in Figure 2. The error at $`z0`$ is shown in Figure 1. At small $`z`$, any errors in $`\stackrel{~}{\eta }`$ are amplified, even though the global errors in $`\eta (z)`$ are generally significantly smaller. Figure (3) shows the fit for the conformal time $`\stackrel{~}{\eta }`$ and its residual, which is accurate to $`0.2\%`$ globally.
One can always express the luminosity distance as a power series expansion around $`z=0`$ using Equations (1) and (4). For a flat universe, one obtains
$$\frac{H_0}{c}d_L=z+\left(1\frac{3}{4}\mathrm{\Omega }_0\right)z^2+(9\mathrm{\Omega }_010)\frac{\mathrm{\Omega }_0}{8}z^3+\mathrm{}$$
(9)
The series converges only slowly for $`z1`$: even when expanded to sixth order in $`z`$, the maximal relative error in the interval $`0<z<1`$ and $`0.2<\mathrm{\Omega }_0<1`$ using the series expansion in (9) is 37%.
## 3 Conclusion
We have presented a simple algebraic approximation to the luminosity distance $`d_L`$ and the proper angular diameter distance $`d_A=d_L/(1+z)^2`$ in a flat universe with pressureless matter and a cosmological constant.
|
no-problem/9904/astro-ph9904052.html
|
ar5iv
|
text
|
# Active-Sterile Neutrino Mixing in the Early Universe and Primordial Nucleosynthesis
## I Introduction
Big Bang Nucleosynthesis (BBN) remains one of the most successful probes of early times in the hot big bang cosmology. Standard BBN (SBBN) assumes the Standard Model proposition of only three massless light neutrinos, leaving the primordial element abundances characterized by only one parameter, the baryon-to-photon ratio $`\eta `$. Now, the evidence for neutrino mass from the Super-Kamiokande atmospheric neutrino observations is overwhelming . In addition, the solar neutrino deficit and the LSND experiment are hinting at the presence of a fourth light neutrino that would necessarily be “sterile” due to the $`Z`$ decay width . The role of neutrino masses and their mixing bring another aspect to the physical evolution of the Early Universe, when neutrinos played a large role. The effects of neutrinos on big bang nucleosynthesis can be in part parameterized by the effective number of neutrinos $`N_\nu `$ (i.e. the expansion rate), and in the weak physics which determine the neutron-to-proton ratio. In many works $`N_\nu `$ is regarded as the sole determinant of neutrino effects in BBN. However, we find that the effect of neutrino mixing on BBN is not well described by this one parameter. Characterizing the effects of neutrino mixing on BBN by $`N_\nu `$ can become overly complicated, since the parameter has been used to describe several distinct physical effects, including lepton number asymmetry and change in energy density. Another feature we find is that matter-enhanced mixing of neutrinos in the Early Universe can alter $`\nu _e`$ spectra to be non-thermal, which uniquely affects the nucleon weak rates by lifting Fermi blocking of neutron decay by neutrinos and suppression of neutrino capture on neutrons.
The observation of the abundances of primordial elements has greatly increased in precision within the past few years . In SBBN, each observed primordial abundance corresponds to a value of the baryon-to-photon ratio $`\eta 2.79\times 10^8\mathrm{\Omega }_b^1h^2`$, where $`\mathrm{\Omega }_b`$ is the fractional contribution of baryon rest mass to the closure density and $`h`$ is the Hubble parameter in units of $`100\mathrm{k}\mathrm{m}\mathrm{s}^1\mathrm{Mpc}^1`$. The theory is apparently successful since the inferred primordial abundances correspond within errors to a single value of $`\eta `$. SBBN took a new turn in 1996 when it was found that the deuterium abundance at high redshifts may be significantly lower than that inferred through the cosmic helium abundance . The advent of high-resolution spectroscopy of quasars allowed the measurement of deuterium abundance in high-redshift clouds between us and distant quasars. The “early days” of these high-redshift spectral deuterium measurements were filled with controversy due to a disparity in the inferred deuterium to hydrogen (D/H) ratio between two observational groups working with different quasars. More recently, the number of quasars with low D/H has grown. The deuterium abundance has converged to $``$10% precision: $`(\mathrm{D}/\mathrm{H})_p(3.4\pm 0.25)\times 10^5`$. The $`\eta `$ value inferred from this observation is $`\eta _D(5.1\pm 0.5)\times 10^{10}`$. The prediction for the corresponding primordial <sup>4</sup>He mass fraction is $`Y_p(D)0.246\pm 0.0014`$.
Based on stellar absorbtion features in metal-poor HII regions, the observed helium abundance has been argued by Olive, Steigman, and Skillman (OSS) to be $`Y_p(OSS)0.234\pm 0.002(\mathrm{stat})\pm 0.005(\mathrm{sys})`$. The corresponding baryon-to-photon ratio is $`\eta _{OSS}(2.1\pm 0.6)\times 10^{10}`$. The discrepancy between the central values of $`Y_p(D)`$ predicted by the Burles and Tytler deuterium observations and $`Y_p(OSS)`$ has led to much speculation on non-standard BBN scenarios which could reconcile the two measurements by decreasing the $`Y_p(D)`$ predicted by BBN. However, the systematic and statistical uncertainty in the $`Y_p(D)`$ and $`Y_p(OSS)`$ puts them well within their $`2\sigma `$ errors. Furthermore, Izotov and Thuan exclude a few potentially tainted HII regions used in OSS from their analysis and arrive at a significantly higher $`Y_p0.244\pm 0.002(\mathrm{stat})`$. The systematic uncertainty claimed in observed $`Y_p`$ may also be underestimated .
It has been suggested that this dubious $`Y_p`$ and D/H discrepancy with standard BBN could be hinting at the presence of sterile neutrino mixing in the Early Universe . However, we find that the overall effect of active-sterile mixing in the Early Universe would only increase $`Y_p`$ and thus increase the possible disparity.
Active-sterile mixing can produce an electron lepton number asymmetry $`L(\nu _e)`$ before and during BBN . If the sign of $`L(\nu _e)`$ is the same throughout the Universe, the helium abundance would change to be more favorable ($`\delta Y_p<0`$) or less favorable ($`\delta Y_p>0`$) with observation. Taking into account the effects of energy density, lepton number generation and alteration of neutrino spectra, we find that $`\delta Y_p`$ can be large in the positive direction, but limited in the negative direction . In particular, we find that matter-enhanced transformations can leave non-thermal $`\nu _e`$ spectra which significantly affect the weak rates through the alleviation of Fermi-blocking and suppression of neutrino capture on neutrons . Furthermore, causality limits the size of the region with a certain sign of $`L(\nu _e)`$ to be within the size of the horizon during these early times ($`10^{10}\mathrm{cm}`$ at weak freeze-out) . The sign of the asymmetry will vary between regions, and the net effect of transformation on $`Y_p`$ will then be the average of both the $`L(\nu _e)>0`$ and $`L(\nu _e)<0`$ BBN yields, which leaves $`\delta Y_p>0`$.
## II Matter-Enhanced Mixing of Neutrinos and Neutrino Spectra <br>in the Early Universe
Two possible schemes exist for producing a nonzero $`L(\nu _e)`$ involving matter-enhanced (MSW) transformation of neutrinos with sterile neutrinos. These two schemes differ greatly in the energy distribution of the asymmetry in the spectra of the $`\nu _e/\overline{\nu }_e`$ the asymmetry. The resonant transformation of one neutrino flavor to another depends on the energy of the neutrino. In the case of BBN, the active neutrinos all start off as thermal, and sterile neutrinos are not present. As the Universe expands, decreases in density and cools, the position of the MSW resonance moves in energy space from lower to higher neutrino energies. The energy of the resonance is
$$\left(\frac{E}{T}\right)_{\mathrm{res}}\frac{|\delta m^2|/\mathrm{eV}^2}{16(T/\mathrm{MeV})^4L(\nu _\alpha )}$$
(1)
where $`(E/T)_{\mathrm{res}}`$ is the neutrino energy normalized by the ambient temperature $`T`$. The asymmetry $`L(\nu _\alpha )`$ is generated as the resonance energy moves up the neutrino spectrum. As the Universe cools, the resonance energy increases. This produces a distortion of the neutrino or anti-neutrino spectrum. There are two plausible cases for producing an asymmetry $`L(\nu _e)`$ that would affect the production of primordial Helium through the weak rates:
* Direct Two Neutrino Mixing: An asymmetry in $`L(\nu _e)`$ is created directly through a $`\nu _e\nu _s`$ or $`\overline{\nu }_e\overline{\nu }_s`$ resonance. In this case, the resonance starts at low temperatures for low neutrino energies, when neutrino-scattering processes are very slow at re-thermalization. This case leaves the $`\nu _e`$ or $`\overline{\nu }_e`$ spectrum distorted, with the spectral distortion from a thermal Fermi-Dirac form residing in the low energy portion of the spectrum. The position of the distortion cut-off moves through the spectrum before and during BBN. In calculating the effects of this scenario on BBN, we included the evolving non-thermal nature of the neutrino spectrum.
* Indirect Three Neutrino Mixing: An asymmetry in $`L(\nu _\tau )`$ or $`L(\nu _\mu )`$ created by a $`\nu _\tau \nu _s(\nu _\mu \nu _s)`$ resonance is later transferred into the $`L(\nu _e)`$ through a $`\nu _\tau \nu _e(\nu _\mu \nu _e)`$ resonance ($`L(\nu _e)>0`$). This will happen for the anti-neutrino flavors for the $`L(\nu _e)<0`$ case. In this scenario, the resonant transitions occur at higher temperatures when neutrino-scattering is more efficient. Both the $`\nu _\tau (\nu _\mu )`$ and $`\nu _e`$ spectra re-thermalize, but since the temperature is below weak decoupling, the asymmetry in neutrino lepton numbers remain. The effect of this scenario on BBN can be described by suppression or enhancement over the entire $`\nu _e/\overline{\nu }_e`$ spectra.
## III Neutron-Proton Interconversion Rates and Neutrino Spectra
BBN can be approximated as the freeze-out of nuclides from nuclear statistical equilibrium in an “expanding box.” Nuclear reactions freeze-out, or fail to convert one nuclide to another when their rate falls below the expansion rate of the box (the Hubble expansion). As different reactions rates freeze-out at different temperatures, the abundances of various nuclides get altered and then become fixed. The nuclide produced in greatest abundance (other than hydrogen) is <sup>4</sup>He. The reactions that affect the <sup>4</sup>He abundance ($`Y_p`$) the most are the weak nucleon interconversion reactions
$$n+\nu _ep+e^{}n+e^+p+\overline{\nu }_enp+e^{}+\overline{\nu }_e.$$
(2)
These reactions all depend heavily on the density of the electron-type neutrinos and their energy distributions . The two neutrino transformation scenarios for altering the $`\nu _e`$ spectrum described above affect $`Y_p`$ through these rates and through the effect of increased energy density on the expansion rate.
### A Direct Two-Neutrino Mixing: $`\nu _s\leftrightarrow \nu _e`$
In this situation, the population of either the $`\nu _e`$ or the $`\overline{\nu }_e`$ energy spectrum is cut off at low energies. The reaction rates that alter the overall $`n\leftrightarrow p`$ rates the most because of the spectral cutoff are
$$\lambda _{npe\nu }=A\int v_eE_\nu ^2E_e^2\,dp_\nu \,[1+e^{-E_\nu /kT_\nu }]^{-1}[1+e^{-E_e/kT_\nu }]^{-1}$$
(3)
$$\lambda _{n\nu pe}=A\int v_eE_e^2p_\nu ^2\,dp_\nu \,[e^{E_\nu /kT_\nu }+1]^{-1}[1+e^{-E_e/kT_\nu }]^{-1}$$
(4)
(the notation in these expressions follows that in Ref. ). The rate in Eq. 3 is modified with the $`\overline{\nu }_e`$ spectra, and the rate in Eq. 4 is modified with the $`\nu _e`$ spectra.
For a spectral cutoff for $`\overline{\nu }_e`$, corresponding to $`\overline{\nu }_s\leftrightarrow \overline{\nu }_e`$ transformations, the Fermi-blocking term $`[1+e^{-E_\nu /kT_\nu }]^{-1}`$ in neutron decay (Eq. 3) becomes unity at low $`\overline{\nu }_e`$ energies. The rate integrand is then greatly enhanced when low energy neutrinos would otherwise block the process (see Figure 1). In the case of a spectral cutoff for $`\nu _e`$, corresponding to $`\nu _s\leftrightarrow \nu _e`$ transformations, the spectral term $`p_\nu ^2[e^{E_\nu /kT_\nu }+1]^{-1}`$ and thus the rate integrand go to zero for low energy neutrinos (see Figure 2).
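To illustrate the effect of the cutoff on the rate in Eq. 4, the sketch below integrates the $`n\nu _e\rightarrow pe`$ integrand numerically with the low-energy part of the $`\nu _e`$ spectrum removed. The kinematics ($`E_e=E_\nu +\mathrm{\Delta }`$ with $`\mathrm{\Delta }=1.293`$ MeV, $`v_e=p_e/E_e`$), the overall normalization $`A=1`$ and the sharp-cutoff model of the distorted spectrum are assumptions made for the illustration, not details taken from the text.

```python
import numpy as np
from scipy.integrate import quad

M_E, DELTA = 0.511, 1.293  # electron mass and n-p mass difference, MeV

def integrand(e_nu, temp):
    e_e = e_nu + DELTA                              # assumed kinematics
    p_e = np.sqrt(e_e**2 - M_E**2)
    v_e = p_e / e_e
    occ_nu = 1.0 / (np.exp(e_nu / temp) + 1.0)      # nu_e occupation
    block_e = 1.0 / (1.0 + np.exp(-e_e / temp))     # electron blocking
    return v_e * e_e**2 * e_nu**2 * occ_nu * block_e

def rate(temp, e_cut=0.0):
    """Eq. (4) with the nu_e spectrum emptied below e_cut (A = 1)."""
    val, _ = quad(integrand, e_cut, 50.0 * temp, args=(temp,))
    return val

T = 1.0  # MeV
for cut in [0.0, 0.5, 1.0, 2.0]:
    print(f"E_cut = {cut:3.1f} MeV :  rate / rate(no cut) = {rate(T, cut)/rate(T):.3f}")
```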
The resultant change in $`Y_p`$ for this “two neutrino” case is shown in Figure 3. For $`\overline{\nu }_e\leftrightarrow \overline{\nu }_s`$ transformations, $`L(\nu _e)>0`$, the neutron decay rate in Eq. 3 is enhanced, fewer neutrons are present, and $`Y_p`$ is decreased. For $`\nu _e\leftrightarrow \nu _s`$ transformations, $`L(\nu _e)<0`$, the $`n\nu _e\rightarrow pe`$ rate in Eq. 4 is suppressed, more neutrons are present, and $`Y_p`$ is increased. The rate in Eq. 4 has a much greater effect on the neutron-to-proton ratio and $`Y_p`$ than the rate in Eq. 3. This causes a larger magnitude change in the neutron-to-proton ratio for the $`L(\nu _e)<0`$ case, and thus a larger magnitude change in $`Y_p`$ in the positive direction than in the negative.
The sign of the asymmetry $`L(\nu _e)`$ is randomly determined . Because of causality, different horizons will have an asymmetry of random sign , fifty-percent positive and fifty-percent negative. The effect of the asymmetry on $`Y_p`$ will then be the average of these two cases, which is also plotted in Figure 3.
### B Indirect Three-Neutrino Mixing
In the case that an asymmetry is produced in $`\nu _e/\overline{\nu }_e`$ indirectly through an initial transformation between $`\nu _\tau \leftrightarrow \nu _s`$ or $`\nu _\mu \leftrightarrow \nu _s`$, the neutrinos can re-thermalize through self-scattering at higher temperatures. The number densities remain asymmetric, but the asymmetry is spread uniformly over the entire energy spectra. The effect on the weak rates is not as drastic as in the above case, but is still present. In this case, the increase in energy density due to the thermalization of sterile neutrinos also affects $`Y_p`$ through the increase in the expansion rate. Thus, for $`L(\nu _e)>0`$, the suppression of $`Y_p`$ is counteracted by the increased expansion rate. The increase in energy density always causes a positive change in $`Y_p`$, while the change caused by the weak nucleon rates depends on the sign of $`L(\nu _e)`$; thus the overall resulting change in $`Y_p`$ is greater in the positive direction than in the negative (see Figure 4). For the $`L(\nu _e)>0`$ case, the negative change $`\delta Y_p`$ has a minimum at $`\delta m^2\sim 100\mathrm{eV}^2`$, since at this point the positive effect of the energy density begins to dominate the negative effect of the modification of the weak rates.
## IV Conclusions
The characterization of the effects of neutrino transformations on BBN through a single parameter $`N_\nu `$ hides varied physical effects of neutrino mass and mixing that are non-trivially related. These effects include an increase in energy density and modification in the nucleon weak rates. The modification of these rates is through either an asymmetry that is distributed over the entire spectrum, or an asymmetry confined to low energy $`\nu _e/\overline{\nu }_e`$. We find that the distribution of the $`L(\nu _e)`$ in the spectra is important to the weak rates, the neutron-to-proton ratio, and the production of <sup>4</sup>He. In addition, the change in the predicted $`Y_p`$ is always greater in the positive direction than in the negative, thus when averaged over causal horizons, $`\delta Y_p`$ is always positive. The changes $`\delta Y_p`$ are comparable to the uncertainty in current $`Y_p`$ measurements, thus resonant neutrino mixing can measurably affect the BBN predictions. It is important to note that there may remain non-trivial portions of parameter space in non-standard BBN—including, but not limited to, neutrino mixing, spatial variations in $`\eta `$, and massive decaying particles—that fit within the observed uncertainties of the primordial abundances.
K. A., X. S. and G. M. F. are partially supported by NSF grant PHY98-00980 at UCSD.
# Error Thresholds on Dynamic Fitness-Landscapes
## Abstract
In this paper we investigate error-thresholds on dynamic fitness-landscapes. We show that there exist both a lower and an upper threshold, representing limits to the copying fidelity of simple replicators. The lower bound can be expressed as a correction term to the error-threshold present on a static landscape. The upper error-threshold is a new limit that only exists on dynamic fitness-landscapes. We also show that for long genomes on highly dynamic fitness-landscapes there exists a lower bound on the selection pressure needed to enable effective selection of genomes with superior fitness, independent of mutation rates, i.e., there are distinct limits to the evolutionary parameters in dynamic environments.
Ever since Eigen’s work on replicating molecules in 1971 , the concept of quasi-species has proven to be a very fruitful way of modeling the fundamental behavior of evolution. A quasi-species is an equilibrium distribution of closely related gene sequences, localized around one or a few sequences with high fitness. The combination of simplicity and mathematical preciseness makes it possible to isolate the effects of different fundamental parameters in the model. It also makes it possible to capture some general phenomena in nature, such as the critical relation between mutation rate and information transmission . The kinetics of these simple systems has been studied in great detail as the formulation has allowed many of the techniques of statistical physics to be applied to replicator and evolutionary systems. See for instance .
The appearance in these models of an error-threshold (or error-catastrophe) as an upper bound on the mutation rate, above which no effective selection can occur, has important implications for biological systems. In particular it places limits on the maintainable amounts of genetic information, which puts strong restrictions on possible theories for the origins of life. It is interesting to note that some RNA-viruses seem to have evolved mutation rates that are close to the error-threshold .
Studies of quasi-species until now have focused on static fitness-landscapes. Many organisms in nature, however, live in a quickly changing environment . This is especially important for viruses and other microbial pathogens that must survive in a host with a highly dynamic immune system, for which there exist only tight and temporary niches with high fitness (for the pathogen).
In this paper we investigate how the critical mutation rate of the error threshold is affected by a dynamic fitness-landscape. We show how the critical mutation rate is lowered by shifts of the fitness-peak. A simple analytical expression for this critical copying fidelity is also presented. It also turns out that if the selection pressure is too small, the fitness-landscape moves too fast, and the fitness-encoding genome is too large, the population will lose the fitness-peak independent of mutation rate. This shows the existence of regions in parameter space where no selection can occur despite possibilities of adjusting the copying-fidelity.
In brief a quasi-species consists of a population of self-replicating genomes represented by a sequence of bases $`s_k`$, $`\left(s_1s_2\mathrm{\dots }s_n\right)`$. Hereafter we will assume binary bases $`\{1,0\}`$ and that all sequences have equal length $`n`$, though these restrictions are easily relaxed. Every genome is then given by a binary string $`\left(011001\mathrm{\dots }\right)`$, which can be represented by an integer $`k`$ ($`0\le k<2^n`$).
To describe how mutations affect a population we define $`W_k^l`$ as the probability that replication of genome $`l`$ gives genome $`k`$ as offspring. For perfect copying accuracy, $`W_k^l`$ equals the identity matrix. Mutations however give rise to off diagonal elements in $`W_k^l`$. Since the genome length is fixed to $`n`$ we will only consider point mutations, which conserve the genome length.
We assume that the point mutation rate $`p=1-q`$ (where $`q`$ is the copying accuracy per base) is constant in time and independent of position in the genome. We can then write an explicit expression for $`W_k^l`$ in terms of the copying fidelity:
$`W_k^l`$ $`=`$ $`p^{h_{kl}}q^{n-h_{kl}}=q^n\left({\displaystyle \frac{1-q}{q}}\right)^{h_{kl}}`$ (1)
where $`h_{kl}`$ is the Hamming distance between genomes $`k`$ and $`l`$, and $`n`$ is the genome length. The Hamming distance $`h_{kl}`$ is defined as the number of positions where genomes $`k`$ and $`l`$ differ.
The equations describing the dynamics of the population now take a relatively simple form. Let $`x_k`$ denote the relative concentration and $`A_k`$ the fitness of genome $`k`$. We then obtain the rate equations:
$`\dot{x}_k`$ $`=`$ $`{\displaystyle \underset{l}{\sum }}W_k^lA_lx_l-ex_k`$ (2)
where $`e=\sum _lA_lx_l`$ and the dot denotes a time derivative. The second term ensures the total normalization of the population (as $`\sum _lx_l=1`$) so that $`x_k`$ describes relative concentrations.
To create a dynamic landscape we consider a single peaked fitness landscape whose peak moves, resulting in different optimal gene sequences at different times. Formally we can write $`A_{k(t)}=\sigma `$ and $`A_l=1`$ for all $`l\ne k(t)`$, where the (changing) genome $`k(t)`$ describes how the peak moves through sequence space. If $`k(t)`$ is constant in time the rate equation \[Eq. 2\] corresponds to the classical (static) theory of quasi-species studied by Eigen and others.
We allow the peak in the fitness landscape to move to one of its closest neighbors (chosen randomly). In this paper we assume that movements occur with a fixed frequency, but one could also consider a probabilistic movement.
The mutation matrix $`W`$ describes point mutations which occur with equal probability independent of position in the genome. This imposes a symmetry on the rate equations, dividing the relative concentrations into error classes $`\mathrm{\Gamma }_i`$ described by their Hamming distance $`i`$ from the master sequence ($`\mathrm{\Gamma }_0`$). This reduces the effective dimension of the sequence space from $`2^n`$ to $`n+1`$, thereby making the problem analytically tractable. The use of asymmetric evolution operators (such as recombination) or fitness landscapes is obviously significantly more problematic and is the subject of ongoing work. When the fitness peak moves, this landscape symmetry will be broken, since one sequence in $`\mathrm{\Gamma }_1`$ will be singled out as the new master sequence. This would only affect the results we present below if the mean time between shifts in the fitness-landscape were small, as there would then be a substantial concentration of the old master sequence present when the peak moves back into this error-class. We assume the dynamics to be slow enough for this not to be a problem.
Moving the fitness peak then corresponds to applying the following co-ordinate transformation to the concentration vector:
$`R`$ $`=`$ $`\left(\begin{array}{ccccc}0& \frac{1}{n}& 0& \mathrm{\cdots }& \\ 1& 0& \frac{2}{n}& \mathrm{\cdots }& \\ 0& \frac{n-1}{n}& 0& \mathrm{\cdots }& \\ \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\vdots }& \mathrm{\ddots }& \end{array}\right)`$ (7)
To study the population dynamics we may divide the dynamics into cycles from time $`0`$ to $`\tau `$, where $`\tau `$ is a parameter determining the number of generations between shifts of the fitness peak when the evolution proceeds as for a static landscape. We then apply the $`R`$ transformation to the concentration vector. The resulting concentration distribution is used as the initial condition for the rate equations from time $`\tau `$ to $`2\tau `$ and so on. These population dynamics \[Eq. 2 and $`R`$\] may be solved numerically as shown in Fig. 1 (after the initial transient) where $`\tau =5`$, $`\sigma =10`$, $`q=0.999`$ and string-length $`n=50`$.
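A minimal sketch of this procedure is given below: the mutation matrix between error classes is built from the per-base fidelity $`q`$ (the binomial class-to-class expression is our own write-out of the symmetry reduction), Eq. (2) is integrated with a simple Euler step (the step size is an assumption), and the shift operator of Eq. (7) is applied every $`\tau `$ generations. Parameter values follow the text ($`\tau =5`$, $`\sigma =10`$, $`q=0.999`$, $`n=50`$).

```python
import numpy as np
from math import comb

n, sigma, q, tau = 50, 10.0, 0.999, 5.0
p = 1.0 - q

# W[i, j]: probability that a parent in error class j produces offspring in
# class i; r of the j "wrong" bits mutate back, f = i - j + r of the n - j
# correct bits mutate away.
W = np.zeros((n + 1, n + 1))
for j in range(n + 1):
    for r in range(j + 1):
        for i in range(n + 1):
            f = i - j + r
            if 0 <= f <= n - j:
                W[i, j] += comb(j, r) * comb(n - j, f) * p**(r + f) * q**(n - r - f)

A = np.ones(n + 1)
A[0] = sigma                      # single peak on the master class

def evolve(x, t_total, dt=0.01):
    """Euler integration of Eq. (2) in the error-class representation."""
    for _ in range(int(t_total / dt)):
        e = A @ x
        x = x + dt * (W @ (A * x) - e * x)
    return x / x.sum()

def shift(x):
    """Eq. (7): redistribute the classes when the peak moves to a neighbour."""
    y = np.zeros_like(x)
    for i in range(n + 1):
        if i > 0:
            y[i] += x[i - 1] * (n - (i - 1)) / n
        if i < n:
            y[i] += x[i + 1] * (i + 1) / n
    return y

x = np.full(n + 1, 1.0 / (n + 1))
for cycle in range(40):
    x = evolve(x, tau)
    x = shift(x)
print("master-class concentration after 40 cycles:", x[0])
```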
A simple approximation of the model presented above enables us to derive analytical expressions for the error-thresholds on a dynamic fitness-landscape. Neglecting back mutations into the master-sequence, we can write the rate equation for the master-sequence on a static fitness-landscape as
$`\dot{x}_{mas}`$ $`=`$ $`Q\sigma x_{mas}-ex_{mas}`$ (8)
where $`Q=q^n`$ is the copying fidelity of the whole genome and $`e=\sigma x_{mas}+1-x_{mas}`$. The asymptotic concentration of master-sequences is
$`x_{mas}\left(t\right)`$ $`\rightarrow `$ $`{\displaystyle \frac{Q\sigma -1}{\sigma -1}}\text{ when }t\rightarrow \mathrm{\infty }`$ (9)
This implies that the error-threshold on a static fitness-landscape occurs when
$`Q^{stat}={\displaystyle \frac{1}{\sigma }}`$ (10)
(see e.g., ). This result is also intuitively clear since the superior fitness (and hence growth rate) of the master-sequence must compensate for the loss of $`\mathrm{\Gamma }_0`$ individuals due to mutations that occur during replication.
The intuitive picture of the error-threshold on a dynamic fitness-landscape is different: what determines the critical mutation rate is whether the master-sequence will have time to regrow between the shifts of the fitness-peak. To find an analytical approximation for the error-threshold we have to expand Eq. (8) to include the dynamics of error-class one as well as the master-sequence. This is necessary since the fitness-peak moves into error-class one every $`\tau `$ time-steps. We can, however, make a large simplification by assuming the growth of the master-sequence to be in the exponential regime, i.e., that we can neglect the non-linearity in Eq. (8). This is a good approximation near the error-threshold as, for these values of $`q`$, the master-sequence will not have time to approach any kind of equilibrium before the peak shifts again. We can thus write an approximation of the rate equations for the master-sequence and a representative member of error-class one:
$`\dot{x}_{mas}`$ $`=`$ $`\left(Q\sigma -1\right)x_{mas}`$ (11)
$`\dot{x}_{1j}`$ $`=`$ $`\stackrel{~}{Q}\sigma x_{mas}+(Q-1)x_{1j}`$ (12)
where mutations into the member of error-class one are neglected and $`\stackrel{~}{Q}=\left(1-q\right)q^{n-1}`$ describes mutation from $`x_{mas}`$ into $`x_{1j}`$. We now assume $`x_{1j}\left(0\right)=0`$, which is a good approximation since $`x_{1j}`$ is (almost always) in $`\mathrm{\Gamma }_2`$ before the shift. The solutions to Eq. (12) using this boundary condition can be written as
$`x_{mas}\left(t\right)`$ $`=`$ $`x_{mas}\left(0\right)e^{\left(q^n\sigma -1\right)t}`$ (13)
$`x_{1j}\left(t\right)`$ $`=`$ $`x_{mas}\left(0\right)\left({\displaystyle \frac{\left(e^{\left(q^n\sigma -1\right)t}-e^{\left(q^n-1\right)t}\right)\left(1-q\right)\sigma }{\left(\sigma -1\right)q}}\right)`$ (14)
The shifting involves the move of the fitness peak to one of the sequences in error-class one at time $`t=\tau `$. The initial concentration of master-sequences at the beginning of a shift cycle is therefore $`x_{mas}\left(0\right)=x_1\left(\tau \right)`$. If the concentration of the master-sequence after the shift is lower than immediately after the previous shift, i.e. $`x_{mas}\left(0\right)>x_1\left(\tau \right)`$, the distribution of concentrations will converge towards a uniform distribution. This is, in effect, a definition of the error-threshold. A condition for effective selection is then given by inserting $`x_{mas}\left(0\right)<x_1\left(\tau \right)`$ into Eq. (14). We then derive a master-sequence growth parameter
$`\kappa \equiv {\displaystyle \frac{\left(e^{\left(q^n\sigma -1\right)\tau }-e^{\left(q^n-1\right)\tau }\right)\left(1-q\right)\sigma }{\left(\sigma -1\right)q}}`$ $`>`$ $`1`$ (15)
It is not possible to find exact analytical solutions for the roots of Eq. (15) and hence for the error-thresholds. Fig. 2 shows the region where Eq. (15) can be expected to hold. The figure also shows the existence of two error-thresholds, $`q_{lo}^{dyn}`$ and $`q_{hi}^{dyn}`$, corresponding to the real roots of $`\kappa =1`$. The lower threshold is a new version of the static error-threshold, with a perturbation resulting from the movement of the fitness-landscape. The upper threshold is a new phenomenon that appears only on dynamic fitness-landscapes. Its existence is intuitively clear: if the mutation rate is very close to zero, there will not be enough individuals present on the new peak position when the shift occurs to maintain a steady occupancy of the master sequence, i.e. the peak moves out from under the quasi-species and the population will not be able to track shifts in the fitness-landscape.
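Since the roots of $`\kappa =1`$ must be found numerically, a small root-finding sketch is given below. The bracketing intervals are assumptions; the upper root is solved in $`u=1-q`$ because $`q_{hi}^{dyn}`$ lies closer to 1 than double precision can resolve in $`q`$ itself.

```python
import numpy as np
from scipy.optimize import brentq

n, sigma, tau = 50, 10.0, 5.0

def kappa(q):
    """Left hand side of Eq. (15)."""
    g = np.exp((q**n * sigma - 1.0) * tau) - np.exp((q**n - 1.0) * tau)
    return g * (1.0 - q) * sigma / ((sigma - 1.0) * q)

def kappa_u(u):
    """Same quantity, parametrized by u = 1 - q for the upper root."""
    qn = np.exp(n * np.log1p(-u))     # accurate q**n for tiny u
    g = np.exp((qn * sigma - 1.0) * tau) - np.exp((qn - 1.0) * tau)
    return g * u * sigma / ((sigma - 1.0) * (1.0 - u))

q_lo = brentq(lambda q: kappa(q) - 1.0, 0.9, 0.999)       # assumed bracket
u_hi = brentq(lambda u: kappa_u(u) - 1.0, 1e-25, 1e-10)   # assumed bracket
print(f"lower threshold: q = {q_lo:.5f}, Q = q^n = {q_lo**n:.4f} "
      f"(Eq. 17 predicts Q = {1/sigma - np.log(sigma**(1/n) - 1)/(tau*sigma):.4f})")
print(f"upper threshold: 1 - q = {u_hi:.2e} "
      f"(Eq. 18 predicts {np.exp(-(sigma - 1.0)*tau):.2e})")
```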
Analytical approximations to the error-thresholds can be found by assuming different dominant terms in the two different regions. To find the lower threshold $`q_{lo}^{dyn}`$ we assume $`q^n`$ to dominate the behavior. Solving for $`q^n`$ gives
$`q^n`$ $`\approx `$ $`{\displaystyle \frac{\tau -\mathrm{ln}\left(\frac{\sigma }{\sigma -1}\frac{1-q}{q}\right)}{\sigma \tau }}`$ (16)
We can use Eq. (16) to find a first order correction in $`1/\tau `$ to the static threshold by putting $`q=\frac{1}{\sigma ^{1/n}}`$ on the right hand side
$`Q_{lo}^{dyn}`$ $`\approx `$ $`{\displaystyle \frac{1}{\sigma }}-{\displaystyle \frac{\mathrm{ln}\left(\sigma ^{1/n}-1\right)}{\tau \sigma }}`$ (17)
where we also made the approximation $`\frac{\sigma }{\sigma -1}\approx 1`$. This is an expression for the lower error-threshold on a dynamic fitness-landscape. Note that $`Q_{lo}^{dyn}\rightarrow Q^{stat}`$ when $`\tau \rightarrow \mathrm{\infty }`$, i.e. we recover the stationary landscape limit.
Fig. 3 shows the mean fitness of a population as a function of the copying-fidelity. When $`q`$ is below $`q_{lo}^{dyn}`$, the concentration of master-sequences is approximately zero and the mean fitness will therefore be 1. The figure is based on numerical simulations of the full rate equations \[Eq. 2\]. Note that the predicted value of $`q_{lo}^{dyn}`$ given by Eq. (17) is quite accurate. Further comparisons to numerical solutions of the full dynamics are shown in Table I.
The qualitative and quantitative behavior of both error thresholds has been verified by computer simulations using large populations to approximate the deterministic dynamics.
The critical copying fidelity $`Q_{lo}^{dyn}`$ depends on the genome-length. This is not surprising since the fitness-peak shifts into a specific member of $`\mathrm{\Gamma }_1`$, which consists of $`n`$ different gene-sequences. It is, however, a direct consequence of the dynamic fitness-landscape, since the static error-threshold is independent of genome-length. This effect is demonstrated in Fig. 4, where $`Q_{lo}^{dyn}`$ versus the genome-length is plotted. The perturbation from the static error-threshold increases with genome-length. The derivative is however decreasing, and for reasonable values of $`\tau \gg 1`$ and $`\sigma \gg 1`$ the static and dynamic error-thresholds are of the same order of magnitude and show the same scaling behaviour.
An analytical approximation to the new upper threshold can be found by assuming $`q`$ to be very close to 1, so that the $`\left(1-q\right)`$-term dominates the behaviour of Eq. (15). Again assuming $`\sigma \gg 1`$ and putting $`q^n=1`$ gives
$`q_{hi}^{dyn}`$ $`\approx `$ $`1-e^{-\left(\sigma -1\right)\tau }`$ (18)
Explicit numerical solutions of the full dynamics confirm that this threshold exists and is predicted by Eq. (18). For most values of $`\sigma `$ and $`\tau `$, $`q_{hi}^{dyn}`$ is very close to 1 (e.g. $`\left(\sigma -1\right)\tau =50`$ gives $`10^{-22}`$ as a lower bound on the mutation rate per base pair). Finite population effects are however significant for the upper error-threshold. In real biological populations this may be important. More detailed studies of these issues are under preparation.
It is important to note that $`q_{hi}^{dyn}`$ is independent of the genome-length. The total copying fidelity $`Q_{hi}^{dyn}=\left(q_{hi}^{dyn}\right)^n`$ will then depend strongly on the genome-length. This means that as the genome-length increases, the evolvable gap between the two error-thresholds narrows.
On a static fitness-landscape it is always possible to find copying fidelities high enough for evolution to be effective. It turns out that this is no longer the case for dynamic fitness-landscapes. There exist regions in parameter-space (spanned by $`\sigma `$, $`\tau `$ and $`n`$) where solutions to Eq. (15) cease to exist. This happens when the upper and lower error-thresholds coincide or, to put it differently, when the maximum (taken over $`q`$) of the left hand side of Eq. (15) becomes less than 1. To find this convergence point it is better to search for a direct approximation of the $`q`$ that maximizes the left hand side of Eq. (15), as the approximations for the upper and lower error-thresholds given above become less accurate when they are close together. To do this we assume the leading behaviour is determined by the factor $`e^{\left(q^n\sigma -1\right)\tau }\left(1-q\right)`$. Taking the derivative of this expression and setting it to zero gives the equation $`q^{n-1}\left(1-q\right)=\frac{1}{n\sigma \tau }`$. Assuming $`q`$ to be very close to 1, and hence $`q^{n-1}\approx 1`$, gives
$`q_{max}`$ $`\approx `$ $`1-{\displaystyle \frac{1}{\sigma \tau n}}`$ (19)
This approximation for $`q_{max}`$ can be substituted into Eq. (15). It is easy to find points in phase space where this inequality starts to hold by fixing two parameters (e.g., $`\tau `$ and $`n`$) and then numerically solving for the third ($`\sigma `$). Table II shows the minimal height of the fitness-peak for different values of $`\tau `$ and $`n`$. The required selective pressure becomes large for fast moving fitness-landscapes and large genome lengths.
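A sketch of this search is given below: for fixed $`\tau `$ and $`n`$, $`q_{max}`$ from Eq. (19) is substituted into $`\kappa `$ of Eq. (15), and the smallest $`\sigma `$ with $`\kappa (q_{max})\ge 1`$ is found by bisection. The bracketing interval and the parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def kappa(q, sigma, tau, n):
    """Left hand side of Eq. (15)."""
    g = np.exp((q**n * sigma - 1.0) * tau) - np.exp((q**n - 1.0) * tau)
    return g * (1.0 - q) * sigma / ((sigma - 1.0) * q)

def kappa_at_qmax(sigma, tau, n):
    q_max = 1.0 - 1.0 / (sigma * tau * n)   # Eq. (19)
    return kappa(q_max, sigma, tau, n)

tau, n = 1.0, 100
sigma_min = brentq(lambda s: kappa_at_qmax(s, tau, n) - 1.0, 1.01, 100.0)
print(f"minimal peak height for tau = {tau}, n = {n}:  sigma ~ {sigma_min:.2f}")
```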
In conclusion, we have shown the existence of, and derived analytic expressions for, two error-thresholds on a simple dynamic fitness-landscape. The lower threshold is a perturbation of the well known error-catastrophe that exists on a static fitness-landscape, accounting for the destabilizing effect of the changing environment. The existence of an upper bound on the copying fidelity is a new phenomenon, existing only in dynamic environments. The presence of this upper bound results in the existence of critical regions of the landscape parameters ($`\sigma `$, $`\tau `$ and $`n`$) where the two thresholds coincide (or cross) and therefore no effective selection can occur. Thus dynamic landscapes place strong constraints on evolvability.
We would like to thank Claes Andersson and Erik van Nimwegen for useful discussions. Thanks are also due to Mats Nordahl who has given valuable comments on the manuscript. Nigel Snoad and Martin Nilsson were supported by SFI core funding grants. N.S. would also like to acknowledge the support of Marc Feldman and the Center for Computational Genetics and Biological Modelling at Stanford University while preparing this manuscript.
# ITEP-15/99 April 1999 A remark on collisions of domain walls in a supersymmetric model
Abstract
The process of collision of two parallel domain walls in a supersymmetric model is studied both in the effective Lagrangian approximation and by numerically solving the exact classical field problem. For small initial velocities we find that the interaction of the walls looks like an elastic reflection with some delay. It is also shown that in this approximation an internal parameter of the wall may be considered as a time-dependent dynamical variable.
Now it is well-known that in a wide class of supersymmetric theories families of the so-called BPS domain walls exist, see e.g. and references therein. These families of domain walls link various supersymmetric vacua. Although the internal structure of each domain wall within the family is different, they all have the same energy. So each family of domain walls is degenerate with respect to at least one parameter transformation, which may be considered as a label of each specific domain wall configuration. To be more concrete, let us consider the theory described by the superpotential
$$W(\mathrm{\Phi },X)=\frac{m^2}{\lambda }\mathrm{\Phi }-\frac{1}{3}\lambda \mathrm{\Phi }^3-\alpha \mathrm{\Phi }X^2,$$
$`(1)`$
where $`m`$ is a mass parameter and $`\alpha `$ and $`\lambda `$ are coupling constants. We assume that $`\alpha `$ and $`\lambda `$ are real and positive. The Lagrangian for the real parts of the scalar fields is given for this theory by the expression
$$L=(\partial \varphi )^2+(\partial \chi )^2-\left(\frac{m^2}{\lambda }-\lambda \varphi ^2-\alpha \chi ^2\right)^2-4\alpha ^2\varphi ^2\chi ^2.$$
$`(2)`$
The potential term of Eq. (2) has four degenerate vacuum states, shown in Fig. 1. This theory possesses a wide class of domain walls, which link different vacua of the theory. These domain wall configurations may be obtained as solutions of the Bogomol’nyi-Prasad-Sommerfield (BPS, Ref. ) equations, which in the case of Eq. (2) look like
$$\frac{df}{dz}=1-f^2-h^2,\qquad \frac{dh}{dz}=-\frac{2}{\rho }fh.$$
$`(3)`$
Here $`\rho =\lambda /\alpha `$, $`m=1`$ and $`z`$ is a space coordinate. As it was shown in Refs. , for the case $`\rho =4`$ solutions for $`f(z)`$ and $`h(z)`$ may be obtained in analytical form:
$$f(z)=\frac{a(e^{2z}-1)}{a+2e^z+ae^{2z}},\qquad h^2(z)=\frac{2e^z}{a+2e^z+ae^{2z}},$$
$`(4)`$
where $`a`$ is a continuous parameter, $`0\le a\le +\mathrm{\infty }`$.
The solution (4) links the vacuum states 1 and 2. Depending on $`a`$, the specific form of $`f(z)`$ and $`h(z)`$ looks quite different. Considering the region $`0<a<1`$, we may introduce a different parameterization:
$$\mathrm{cosh}s=\frac{1}{a}.$$
$`(5)`$
In terms of the parameter $`s`$ the functions $`f(z)`$ and $`h(z)`$ look like
$$f(z)=\frac{1}{2}\left(\mathrm{tanh}\frac{z-s}{2}+\mathrm{tanh}\frac{z+s}{2}\right),$$
$$h^2(z)=\frac{1}{2}\left(1-\mathrm{tanh}\frac{z-s}{2}\mathrm{tanh}\frac{z+s}{2}\right).$$
$`(6)`$
From Eqs. (6) it is clear that at large $`s`$ the functions $`f`$ and $`h`$ split into two elementary walls orthogonal to the z-axis, corresponding to the transitions $`1\rightarrow 3`$ at $`z=-s`$ and $`3\rightarrow 2`$ at $`z=s`$. Thus at large $`s`$ Eqs. (6) describe two far separated domain walls. The purpose of this research is to study the dynamics of the collision between these two separated domain walls $`1\rightarrow 3`$ and $`3\rightarrow 2`$. Evidently, this question is outside of the BPS approximation, and to study this problem we should solve the exact field equations which follow from the Lagrangian (2).
Recently in Ref. the problem of intersection of two domain walls in this model was considered. The authors didn’t solve the exact equations of motion, but used a simpler reasonable approach, considering the parameter $`a`$ as a dynamical effective variable. In what follows we shall use both methods, considering $`a`$ (or $`s`$) as a function of time $`t`$, as well as solving the exact equations of motion for the fields $`f`$ and $`h`$. We shall demonstrate that the effective Lagrangian method is consistent with the solution of the Cauchy problem for the exact field system.
In terms of $`a`$ (5) the energy of the domain wall configuration (4) has the form
$$E=E_0+E_1,\qquad E_1=\frac{1}{2}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\dot{a}^2\frac{a\mathrm{cosh}^3z+\mathrm{cosh}2z}{(a\mathrm{cosh}z+1)^4}dz.$$
$`(7)`$
Here $`E_0`$ does not depend on $`a`$ and is the sum of the energy densities of the two far separated walls $`1\rightarrow 3`$ and $`3\rightarrow 2`$. Its specific form is inessential for our future consideration. Replacing $`a(t)`$ by $`s(t)`$ according to Eq. (5), we get the following effective Lagrangian for the new dynamical variable $`s`$:
$$L_{eff}=\frac{1}{2}m(s)\dot{s}^2,$$
$`(8)`$
where
$$m(s)=2\left[s\mathrm{tanh}s+\frac{5}{3}-\frac{2s}{\mathrm{tanh}s}+\frac{1}{\mathrm{tanh}^2s}\left(\frac{s}{\mathrm{tanh}s}-1\right)\right].$$
$`(9)`$
The effective Lagrangian (8) yields the following differential equation for the function $`s(t)`$:
$$m(s)\ddot{s}+\frac{1}{2}m^{\prime }(s)\dot{s}^2=0.$$
$`(10)`$
To observe the process of the walls collision we have to start with the initial configuration (4) with some $`s(0)\gg 1`$ and $`\dot{s}(0)<0`$. As the parameter $`s`$ has the meaning of the distance between the walls, such a configuration corresponds to the $`1\rightarrow 3`$ and $`3\rightarrow 2`$ walls moving along the $`z`$-axis towards each other. Numerical solving of Eq. (10) shows that $`s(t)`$ decreases to zero. In the range of small $`s`$ the solution can be obtained analytically:
$$m(s)\approx \frac{16}{15}s^2,\qquad s(t)=\sqrt{2s(t_{*})\dot{s}(t_{*})(t-t_{*})+s^2(t_{*})},$$
$`(11)`$
where $`t_{*}`$ denotes the moment of transition from the numerical solving of Eq. (10) to the analytical solution (11). While $`s`$ decreases from $`+\mathrm{\infty }`$ to 0, the parameter $`a`$ changes from 0 to 1. At $`s=0`$ (point $`A`$ in Fig. 2) we have for the fields:
$$f(z)=\mathrm{tanh}(z/2),h(z)=\frac{1}{\sqrt{2}\mathrm{cosh}(z/2)}.$$
$`(12)`$
In the range $`1<a<+\mathrm{\infty }`$ the parameter $`s`$, defined by Eq. (5), becomes pure imaginary. Therefore it is suitable to introduce $`\stackrel{~}{s}`$ as
$$\mathrm{cos}\stackrel{~}{s}=\frac{1}{a}.$$
$`(13)`$
The effective Lagrangian for $`\stackrel{~}{s}`$ is analogous to (8):
$$\stackrel{~}{L}_{eff}=\frac{1}{2}\stackrel{~}{m}(\stackrel{~}{s})\dot{\stackrel{~}{s}}^2$$
$`(14)`$
with
$$\stackrel{~}{m}(\stackrel{~}{s})=2\left[\stackrel{~}{s}\mathrm{tan}\stackrel{~}{s}-\frac{5}{3}+\frac{2\stackrel{~}{s}}{\mathrm{tan}\stackrel{~}{s}}+\frac{1}{\mathrm{tan}^2\stackrel{~}{s}}\left(\frac{\stackrel{~}{s}}{\mathrm{tan}\stackrel{~}{s}}-1\right)\right].$$
$`(15)`$
At the moment when $`a=1`$ (or $`s=0`$) we pass from $`s`$ to $`\stackrel{~}{s}`$ which increases from 0 to $`\pi /2`$. At $`\stackrel{~}{s}=\pi /2`$ (point $`B`$ in Fig. 2) the fields $`f`$ and $`h`$ have the form:
$$f(z)=\mathrm{tanh}z,\qquad h(z)\equiv 0.$$
$`(16)`$
After that, $`\stackrel{~}{s}`$ decreases to 0 and becomes pure imaginary. Therefore at the moment of $`\stackrel{~}{s}=0`$ (point $`C`$ in Fig. 2) we return back to $`s`$, which begins to increase from 0 to $`+\mathrm{\infty }`$. Note that for $`\stackrel{~}{s}`$ we also used analytical solutions at $`\stackrel{~}{s}\ll 1`$ and $`|\stackrel{~}{s}-\pi /2|\ll 1`$. The resulting time dependences of $`(s,\stackrel{~}{s})`$ for the two initial relative velocities $`\dot{s}(0)=-0.05`$ and $`\dot{s}(0)=-0.1`$ are represented in Fig. 3.
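The first stage of this cycle, the approach of the walls along the $`s`$-branch, can be reproduced with a few lines of numerics. The sketch below integrates Eq. (10) with $`m(s)`$ from Eq. (9) for $`s(0)=10`$ and $`\dot{s}(0)=-0.05`$, and hands over to the analytical solution (11) near $`s=0`$; the handover point $`s=0.2`$ and the finite-difference derivative of $`m(s)`$ are our assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def m(s):
    """Effective mass m(s) from Eq. (9)."""
    t = np.tanh(s)
    return 2.0 * (s * t + 5.0 / 3.0 - 2.0 * s / t + (s / t - 1.0) / t**2)

def m_prime(s, eps=1e-6):
    return (m(s + eps) - m(s - eps)) / (2.0 * eps)   # numerical m'(s)

def rhs(t, y):
    s, v = y
    return [v, -0.5 * m_prime(s) * v**2 / m(s)]      # Eq. (10)

stop = lambda t, y: y[0] - 0.2      # switch to Eq. (11) at s = 0.2
stop.terminal = True
sol = solve_ivp(rhs, [0.0, 400.0], [10.0, -0.05], events=stop,
                max_step=0.5, rtol=1e-8)

t_star, (s_star, v_star) = sol.t[-1], sol.y[:, -1]
print(f"reached s = {s_star:.3f} at t = {t_star:.1f}; Eq. (11) then gives "
      f"s(t) = sqrt({2*s_star*v_star:.4f}*(t - t*) + {s_star**2:.4f})")
```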
The dynamical variable approximation developed above seems to be reasonable, but needs numerical confirmation. We performed a calculation of the evolution of the described initial configuration by solving numerically the Cauchy problem for the system of two partial differential equations for the fields $`f`$ and $`h`$. These equations are a consequence of the Lagrangian (2). The results of the numerical solving of the Cauchy problem for the fields $`f`$ and $`h`$ were compared with the field profiles obtained for the current value of the parameter $`(s,\stackrel{~}{s})`$ at all times $`t`$. For not too large values of $`\dot{s}(0)`$ we observed good agreement. This confirms that the parameter $`(s,\stackrel{~}{s})`$ is a good dynamical variable for describing the process of the BPS domain walls collision in the SUSY model that was considered.
Note that as $`\dot{s}(0)`$ is increased in the numerical calculations of the exact field problem, we observe a progressively growing difference from the field profiles restored with the help of the dynamical variable method. The question whether this is a problem of the computational scheme or is related to the excitation of field degrees of freedom other than the zero mode is under consideration now.
Acknowledgments
We are thankful to M. B. Voloshin for useful discussions. We would also like to thank A. A. Panfilov for his help with computer graphics.
This work was supported in part by the Russian Foundation for Basic Research under grants No 98-02-17316 and No 96-15-96578 (both authors). The work of V. A. Gani was also supported by the INTAS Grant No 96-0457 within the research program of the International Center for Fundamental Physics in Moscow. One of the authors (A. E. Kudryavtsev) would like to thank RS Grant for financial support.
Figure captions
Fig. 1. Locations of the vacuum states of the model.
Fig. 2. A scheme of the $`1\rightarrow 3`$ and $`3\rightarrow 2`$ walls collision process. Numbers 1, 2, 3 denote the corresponding vacua.
Fig. 3. The dynamical variable ($`s,\stackrel{~}{s}`$) as a function of time for two different values of the initial velocity $`\dot{s}(0)`$.
# Uniform susceptibility of classical antiferromagnets in one and two dimensions in a magnetic field
In recent years, investigations of two-dimensional antiferromagnets concentrated primarily on the quantum model with $`S=1/2`$. A practical reason for that is its possible relevance for the high-temperature superconductivity. On the other hand, the identification with the quantum nonlinear sigma model (QNL$`\sigma `$M) in the low-energy sector allowed using field-theory methods . Although the QNL$`\sigma `$M results for the $`S=1/2`$ model proved to be in good agreement with quantum Monte Carlo (QMC) simulations (see, e.g., Ref. ), the requirement of low energies confines the validity region of the QNL$`\sigma `$M to rather low temperatures already for $`S\gtrsim 1`$. High-temperature series expansions (HTSE) for $`S\ge 1`$ and QMC simulations for $`S=1`$ in the experimentally relevant temperature range, as well as experiments on model substances with $`1\le S\le 5/2`$, showed much better accord with the pure-quantum self-consistent harmonic approximation (PQSCHA) than with the field-theoretical QNL$`\sigma `$M predictions. In contrast to the QNL$`\sigma `$M, the PQSCHA maps a quantum system on the corresponding classical system on the lattice, which, in turn, can be studied by classical MC simulations or other methods. The parameters of these classical Hamiltonians are renormalized by quantum fluctuations and given by explicit analytical expressions.
The above arguments show that in most cases the classical model can be used as a good starting point for studying quantum systems. In fact, most of the nontrivial features of two-dimensional antiferromagnets, such as the impossibility of ordering at nonzero temperatures in the isotropic case, are universal and appear already at the classical level. The main theoretical problem is that due to Goldstone modes, a simple spin-wave theory at $`T\ll JS^2`$ is inapplicable to two-dimensional magnets.
Despite their importance, classical antiferromagnets received much less attention than the quantum $`S=1/2`$ model. In particular, the initial uniform susceptibility $`\chi (T)`$ for the square lattice, having a flat maximum at $`T\sim J`$, has been simulated for $`S=1/2`$ in Refs. and for $`S=1`$ in Ref. , but there are no results for the classical model yet! For the latter, only the old MC data for the energy are available up to now.
On the other hand, classical magnets can be theoretically studied with the help of the $`1/D`$ expansion, where $`D`$ is the number of spin components . In Ref. , $`\chi (T)`$ has been calculated for the square lattice and linear chain to first order in $`1/D`$ for all temperatures, the solution interpolating between the exact result at $`T=0`$ and the leading terms of the HTSE at high temperatures. In contrast, the low-energy approaches such as “Schwinger-boson mean-field theory” or “modified spin-wave theory” break down at $`T\sim J`$ and fail to reproduce the maximum of $`\chi (T)`$. It should be noted that for quantum magnets there is a method consisting in the expansion in powers of $`1/N`$, where $`N`$ is the number of flavors in the Schwinger-boson technique . This method, which is nonequivalent to the $`1/D`$ expansion in the limit $`S\rightarrow \mathrm{\infty }`$, is supposed to work for all $`T`$, in contrast to the low-energy QNL$`\sigma `$M. Unfortunately, only the results for $`m(T,H)`$ of ferromagnets are available.
The $`1/D`$ expansion also works in the situations with nonzero magnetic field, which are not amenable to the methods of Refs. imposing an external condition $`m=0`$. An especially interesting issue is the singular behavior of $`\chi (H,T)`$ for $`H,T\rightarrow 0`$ for the square-lattice and linear-chain models. For any $`H\ne 0`$, the spins with lowering temperature come into a position nearly perpendicular to the field, thus $`lim_{H\rightarrow 0}lim_{T\rightarrow 0}\chi (H,T)=1/(2J_0)`$, where $`J_0`$ is the zero Fourier transform of the exchange interaction, $`J_0=zJ`$, $`z`$ is the number of nearest neighbors. This value coincides with the susceptibility of the three-dimensional classical antiferromagnets on bipartite lattices in the direction transverse to the spontaneous magnetization. For $`H=0`$, the spins assume all directions, including that along the infinitesimal field, for which the susceptibility tends to zero at $`T\rightarrow 0`$. Thus $`lim_{T\rightarrow 0}lim_{H\rightarrow 0}\chi (H,T)=1/(2J_0)(1-1/D)`$. One can see that the difference between these two results is captured exactly in the first order in $`1/D`$. According to Ref. , for any $`H\ne 0`$ with lowering temperature $`\chi (H,T)`$ increases, goes through the flat maximum, decreases, attains a minimum and then goes up to the limiting value $`1/(2J_0)`$.
The existence of the interesting features described above, which should also be pertinent to quantum antiferromagnets, has never been checked numerically. That is why we have undertaken MC simulations for classical AFMs in square-lattice and linear-chain geometries.
Our systems are defined by a classical Heisenberg Hamiltonian
$$\mathcal{H}=-𝐇\underset{i}{\sum }𝐒_i+\frac{1}{2}\underset{ij}{\sum }J_{ij}𝐒_i𝐒_j$$
(1)
where $`𝐒`$ is a $`D`$-component normalized vector of unit length ($`|𝐒|=1`$), $`𝐇`$ is a magnetic field and the exchange coupling $`J_{ij}`$ is $`J>0`$ for nearest neighbors and zero otherwise. The mean-field transition temperature is given by $`T_c^{\mathrm{MFA}}=J_0/D=zJ/D`$. Although there is no phase transition in our model, it is convenient to choose $`T_c^{\mathrm{MFA}}`$ as the energy scale and to introduce dimensionless temperature, magnetic field, and susceptibilities
$$\theta \equiv T/T_c^{\mathrm{MFA}},\qquad h\equiv H/J_0,\qquad \stackrel{~}{\chi }_\alpha \equiv J_0\chi _\alpha ,$$
(2)
where $`\chi _\alpha \equiv \partial \langle S_\alpha \rangle /\partial H_\alpha `$ and $`\alpha =x,y,z`$.
In the limit $`D\rightarrow \mathrm{\infty }`$, the model Eq. (1) is exactly solvable and equivalent to the spherical model. The solution includes an integral over the Brillouin zone taking into account spin-wave effects in a nonperturbative way. The latter leads to the absence of the phase transition for the spatial dimensionalities $`d\le 2`$.
The $`1/D`$ corrections to the spherical-model solution have been obtained in Refs. . They include double integrals over the Brillouin zone and are responsible for the maximum of the antiferromagnetic susceptibility at $`\theta \sim 1`$ . For small fields and temperatures, $`h,\theta \ll 1`$, the field-induced magnetization $`m`$ for the square-lattice model simplifies to
$$m\approx \frac{h}{2}\left[1-\frac{1}{D}+\frac{\theta }{\pi D}\mathrm{ln}\left(1+\frac{h^2}{16}e^{\pi /\theta }\right)+\frac{\theta }{D}\right],$$
(3)
which follows from Eqs. (4.9) and (2.23) of Ref. . The log term of the above expression is responsible for the singularity of both transverse and longitudinal (with respect to the field) susceptibilities,
$$\stackrel{~}{\chi }_{\perp }\equiv m/h,\qquad \stackrel{~}{\chi }_{\parallel }\equiv \partial m/\partial h,$$
(4)
which was mentioned above. For $`h=0`$ they have the form $`\stackrel{~}{\chi }\approx [1-1/D+\theta /D]/2`$, whereas for $`h\ne 0`$ the limiting value at $`\theta =0`$ and the slope with respect to $`\theta `$ are different: $`\stackrel{~}{\chi }\approx \{1-[\theta /(\pi D)]\mathrm{ln}[16/(e^\pi h^2)]\}/2`$. In the latter case, $`\chi `$ has a minimum at $`\theta \approx \theta ^{*}=\pi /\mathrm{ln}(16/h^2)`$. There are corrections of order $`\theta ^2`$ and $`1/D^2`$ to Eq. (3). The latter renormalize the last, regular term in Eq. (3) (see Eq. (8.2) of Ref. ). The $`1/D^2`$ corrections cannot, however, appear in the log term of Eq. (3), because this would violate the general properties of $`\chi (H,T)`$ discussed above.
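The non-monotonic behavior encoded in Eq. (3) is easy to verify numerically. The sketch below evaluates $`\stackrel{~}{\chi }_{\perp }=m/h`$ for $`D=3`$ and locates its minimum, which should sit near $`\theta ^{*}=\pi /\mathrm{ln}(16/h^2)`$; the grid and the field value are arbitrary choices made for the illustration.

```python
import numpy as np

D = 3.0

def chi_perp(theta, h):
    """Reduced transverse susceptibility m/h from Eq. (3)."""
    log_term = np.log1p(h**2 / 16.0 * np.exp(np.pi / theta))
    return 0.5 * (1.0 - 1.0/D + theta/(np.pi*D) * log_term + theta/D)

h = 0.01
thetas = np.linspace(0.02, 1.0, 2000)
vals = chi_perp(thetas, h)
theta_min = thetas[np.argmin(vals)]
print(f"numerical minimum at theta = {theta_min:.3f}, "
      f"predicted theta* = {np.pi/np.log(16.0/h**2):.3f}")
```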
For the linear chain, the magnetization in the region $`h,\theta \ll 1`$ to first order in $`1/D`$ is given by
$$m\approx \frac{h}{2}\left[1-\frac{\theta }{D\sqrt{h^2+\theta ^2}}+\frac{\theta }{D}+O(\theta ^2)\right].$$
(5)
The transverse susceptibility of the linear chain behaves qualitatively similarly to that of the square lattice. The minimum of $`\chi _{\perp }`$ is attained at $`\theta =h^{2/3}`$, which is smaller than in two dimensions. The longitudinal susceptibility $`\chi _{\parallel }`$ corresponding to Eq. (5) has a minimum at $`\theta \approx 3^{1/3}h^{2/3}\gg h`$ and a maximum at $`\theta \approx 3^{-1/2}h^{3/2}\ll h`$.
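The positions of these extrema follow directly from Eq. (5) and can be checked on a grid. In the sketch below the longitudinal susceptibility is taken as the analytic derivative $`\partial m/\partial h`$ of Eq. (5), $`\stackrel{~}{\chi }_{\parallel }=\{1+\theta /D-\theta ^3/[D(h^2+\theta ^2)^{3/2}]\}/2`$, which is our own intermediate step rather than a formula quoted from the text.

```python
import numpy as np

D, h = 3.0, 0.01
theta = np.linspace(1e-4, 0.5, 200000)

chi_perp = 0.5 * (1.0 - theta / (D * np.sqrt(h**2 + theta**2)) + theta / D)
chi_par = 0.5 * (1.0 + theta / D - theta**3 / (D * (h**2 + theta**2)**1.5))

print("chi_perp minimum at", theta[np.argmin(chi_perp)],
      " ~ h^(2/3) =", h**(2.0/3.0))
print("chi_par  minimum at", theta[np.argmin(chi_par)],
      " ~ 3^(1/3) h^(2/3) =", 3**(1.0/3.0) * h**(2.0/3.0))
small = theta < 0.01
print("chi_par  maximum at", theta[small][np.argmax(chi_par[small])],
      " ~ 3^(-1/2) h^(3/2) =", 3**-0.5 * h**1.5)
```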
For comparison, the zero-field Takahashi’s results for the Heisenberg model on the linear chain and square lattice can, for $`\theta \ll 1`$, be rewritten in the form
$$\stackrel{~}{\chi }\approx \frac{1}{3}\{\begin{array}{cc}[1-\theta /3]^{-1},\hfill & d=1\hfill \\ 2\left[1+\sqrt{1-(4/3)\theta }\right]^{-1},\hfill & d=2,\hfill \end{array}$$
(6)
where the exponentially small terms are neglected. For both lattices the low-temperature expansion is the same to order $`\theta `$: $`\stackrel{~}{\chi }=(1/3)+(1/9)\theta +\mathrm{\cdots }`$, and the results diverge at $`\theta \sim 1`$. The coefficient in front of $`\theta `$ here is at variance with the $`1/D`$-expansion results above for $`D=3`$. It was argued in Ref. that the correct general-$`D`$ form of the low-temperature expansion of the zero-field susceptibility for both the square lattice and the linear chain reads
$$\stackrel{~}{\chi }=\frac{1}{2}\left(1-\frac{1}{D}\right)+\frac{1}{2D}\left(1-\frac{1}{D}\right)\theta +O(\theta ^2),$$
(7)
i.e., it is reproduced to order $`\theta `$ at the second order of the $`1/D`$ expansion. This formula is in accord with Takahashi’s theory.
In order to check the validity of the analytic results from the $`1/D`$ expansion above for the most realistic case of $`D=3`$, we performed Monte Carlo simulations for three-component classical spins on a chain of length $`N`$ as well as on a square lattice of size $`N=L\times L`$, both with periodic boundary conditions. In our Monte Carlo procedure, a spin is chosen randomly and a trial step is made where the new spin direction is taken randomly with equal distribution on the unit sphere. This trial step does not depend on the initial spin direction. The energy change of the system is computed according to Eq. (1) and the step is accepted with the heat-bath probability. One sweep through the lattice, performing the procedure described above once per spin (on average), is called one Monte Carlo step (MCS). We start our simulation at high temperature and cool the system stepwise. For each temperature we wait 6000 MCS (chain) and 4000 MCS (square lattice), respectively, in order to reach equilibrium. After thermalization we compute thermal averages $`\langle \mathrm{\cdots }\rangle `$ for the next 8000 MCS (chain) and 6000 MCS (square lattice), respectively.
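A minimal sketch of this update for the classical Heisenberg chain is given below (the square-lattice case only changes the list of neighbors). The heat-bath acceptance probability $`1/(1+e^{\mathrm{\Delta }E/T})`$ for a uniformly drawn trial direction is the rule described above; the system size, temperature, field and number of sweeps are illustrative. Temperature and field are in units of $`J`$; in the reduced units of Eq. (2) this corresponds to $`\theta =3T/2`$ and $`h=H/2`$ for the chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spin():
    """Uniformly distributed direction on the unit sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def mc_sweep(spins, temp, field):
    n = len(spins)
    for _ in range(n):
        i = rng.integers(n)
        new = random_spin()
        # AFM exchange (J = 1) with the two neighbours plus Zeeman term
        local = spins[(i - 1) % n] + spins[(i + 1) % n]
        d_e = (new - spins[i]) @ local - field * (new[2] - spins[i][2])
        if rng.random() < 1.0 / (1.0 + np.exp(d_e / temp)):  # heat bath
            spins[i] = new

n_spins, temp, field = 100, 0.2, 0.1
spins = np.array([random_spin() for _ in range(n_spins)])
for _ in range(500):                 # thermalization sweeps
    mc_sweep(spins, temp, field)
print("magnetization components:", spins.mean(axis=0))
```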
The relevant quantities we are interested in are the magnetization $`m\equiv m_z=\langle M_z\rangle `$ and the components of the susceptibility $`\chi _\alpha =\frac{N}{T}(\langle M_\alpha ^2\rangle -\langle M_\alpha \rangle ^2)`$, where the $`z`$ axis is directed along $`𝐇`$, $`\alpha =x,y,z`$, and $`M_\alpha \equiv \frac{1}{N}\sum _iS_i^\alpha `$. We have used the formula above for $`\chi _\alpha `$ to simulate the zero-field and longitudinal susceptibility, $`\chi _{\parallel }\equiv \chi _z`$. For the transverse susceptibility, $`\chi _{\perp }\equiv \chi _x=\chi _y`$, at nonzero field it is more convenient to use Eq. (4). For $`h=0`$ the transverse and longitudinal susceptibilities are identical and calculated as $`\chi _{\perp }=\chi _{\parallel }=(\chi _x+\chi _y+\chi _z)/3`$.
With intent to minimize the statistical error and to be able to compute error bars we take averages over $`N_r=100`$ independent Monte Carlo runs. The error bars we show are the mean errors of the averages $`\sigma /\sqrt{N_r}`$, where $`\sigma `$ is the standard deviation of the distribution of thermal averages following from the independent runs.
We start the comparison of theoretical and numerical results with the square lattice. Fig. 1 shows the temperature dependence of the reduced longitudinal susceptibility $`\stackrel{~}{\chi }_{\parallel }`$ and the reduced transverse susceptibility $`\stackrel{~}{\chi }_{\perp }`$ for different values of the magnetic field, both for the system size $`L=64`$. The corresponding results for the spin chain with system size $`L=100`$ are presented in Fig. 2.
We investigated possible finite-size effects by varying the lattice size. However, we did not find any significant change of our data for lattice sizes in the range $`L=16\mathrm{\dots }64`$ (square lattice) and $`L=40\mathrm{\dots }100`$ (linear chain). Also, we did not find any systematic change of our results for longer Monte Carlo runs, so that we believe we present data corresponding to thermal equilibrium.
Note that for all Monte Carlo data shown, the error bars of the transverse susceptibility are smaller than those of the longitudinal one, since the transverse susceptibility follows directly from the $`z`$ component of the magnetization while the longitudinal susceptibility is calculated from the fluctuations of the $`z`$ component of the magnetization. In the case $`h=0`$ the transverse and longitudinal susceptibility are identical and follow from fluctuations of the magnetization, so that the error bars are larger.
For the square lattice as well as for the chain the numerical data confirm the non-analytic behavior of $`\chi `$ in the limit of temperature $`T\rightarrow 0`$, i.e. the limiting values $`\stackrel{~}{\chi }_{\perp }=\stackrel{~}{\chi }_{\parallel }=1/2`$ for $`h\ne 0`$ and $`\stackrel{~}{\chi }_{\perp }=\stackrel{~}{\chi }_{\parallel }=1/3`$ for $`h=0`$.
Especially for the square lattice, the Monte Carlo data agree reasonably with the first-order $`1/D`$ expansion in the whole range of temperatures. On the other hand, at low temperatures agreement with Takahashi’s theory is achieved within the error bars. Our numerical data thus confirm that the coefficient of the linear-$`\theta `$ term in $`\chi `$ in Takahashi’s theory is accurate. For $`h=1`$ and $`\theta \lesssim 1`$, the MC data fall slightly below the $`1/D`$-expansion curve. Both are again in accord with each other for $`\theta \gtrsim 3`$ (not shown).
The maximum of the longitudinal susceptibility of the square-lattice model for $`h=1`$ looks much sharper than that of the theoretical curve. This feature, as well as the hump on the $`h=0.1`$ curve at slightly lower temperature, are possible indications of the Berezinsky-Kosterlitz-Thouless (BKT) transition. The reason for that is an effective reduction of the number of spin components by one at sufficiently low temperatures in the magnetic field (the effect mentioned in the introduction), so that the Heisenberg model becomes effectively $`D=2`$ and it can undergo a BKT transition in two dimensions. We have not, however, studied this point in detail in this work.
For the antiferromagnetic chain our MC simulation data are in a qualitative agreement with the $`1/D`$ expansion, although the discrepancies are stronger.
Unfortunately, we could also not perform simulations for even lower values of the field $`h`$ for the following reason: The singular behavior of $`\chi `$ stems from the fact that for $`h>0`$ the spins tend to come into a position perpendicular to the field. For fields as small as $`h=0.01`$ (curve 4 in Figures 1 and 2) the amount of energy related to this ordering field is 100 times smaller than the exchange interaction energy. Therefore the corresponding relaxation for this energetically favorable state takes very long in a Monte Carlo simulation, especially for these low temperatures, where this effect occurs for low fields.
Our MC simulations showed for the first time the singular behavior of the susceptibility of classical antiferromagnets at low temperatures and magnetic fields. The results are in accord with predictions based on the first-order $`1/D`$ expansion . It would be interesting to try deriving the corresponding low-temperature results \[cf. Eqs. (3) and (5)\] without using the $`1/D`$ expansion. One of the formulas of this type already exists: it is Eq. (7). A candidate among theoretical approaches is the chiral perturbation theory of Ref. , which is applicable to quantum models, as well.
The features manifested here by classical antiferromagnets should be pertinent to quantum models as well. The effects observed here could be checked with the help of QMC simulations, which have recently achieved substantial accuracy (see, e.g., Refs. and ). Another possibility is to map the quantum model on the classical one and to perform classical MC simulations. One should also mention an alternative way of mapping quantum magnetic Hamiltonians on classical ones with the help of the coherent-state cumulant expansion , which is a rigorous expansion in powers of $`1/S`$.
# Electroweak results from the Z resonance Cross-Sections and Leptonic Forward-Backward Asymmetries with the ALEPH detector
## 1 Introduction
From 1990 to 1995 the LEP $`\mathrm{e}^+\mathrm{e}^{-}`$ storage ring was operated at centre of mass energies close to the Z mass, in the range $`|\sqrt{s}-\mathrm{M}_\mathrm{Z}|<3`$ GeV. Most of the data have been recorded at the maximum of the resonance (120 pb<sup>-1</sup> per experiment) and about 2 GeV below and above (40 pb<sup>-1</sup> per experiment).
The measurements of the hadronic and leptonic cross sections as well as the leptonic forward-backward asymmetries performed with the Aleph detector at these energies are presented here. The large statistics allow a precise measurement of these quantities, which are then used to determine the Z lineshape parameters: the Z mass $`\mathrm{M}_\mathrm{Z}`$, the Z width $`\mathrm{\Gamma }_\mathrm{Z}`$, the total hadronic cross section at the pole $`\sigma _{\mathrm{had}}^0`$ and the ratio of hadronic to leptonic pole cross sections $`\mathrm{R}_\mathrm{l}=\sigma _{\mathrm{had}}^0/\sigma _\mathrm{l}^0`$.
Here we will give some details of the Aleph experimental measurement of these quantities. A review of the whole LEP electroweak measurements and a discussion of the results as a test of the Standard Model can be found in another talk of this conference .
## 2 Cross sections and leptonic Forward-Backward asymmetries measurement
The cross section and asymmetries are determined for the s-channel process $`\mathrm{e}^+\mathrm{e}^{-}\rightarrow \mathrm{Z},\gamma \rightarrow \mathrm{f}\overline{\mathrm{f}}`$. The cross section is derived from the number of selected events $`\mathrm{N}_{\mathrm{sel}}`$ with
$$\sigma _{\mathrm{f}\overline{\mathrm{f}}}=\frac{\mathrm{N}_{\mathrm{sel}}(1-\mathrm{f}_{\mathrm{bkg}})}{ϵ}\frac{1}{\mathcal{L}}$$
(1)
where $`\mathrm{f}_{\mathrm{bkg}}`$ is the fraction of background, $`ϵ`$ is the selection efficiency and $`\mathcal{L}`$ is the integrated luminosity. Note that in the case of the $`\mathrm{e}^+\mathrm{e}^{-}`$ final state the irreducible background originating from the exchange of a $`\gamma `$ (Z) in the t-channel is subtracted from $`\mathrm{N}_{\mathrm{sel}}`$ to obtain the s-channel cross section.
The leptonic forward-backward asymmetry ($`\mathrm{A}_{\mathrm{FB}}`$) is derived from a fit to the angular distribution
$$\frac{\mathrm{d}\sigma _{\mathrm{f}\overline{\mathrm{f}}}}{\mathrm{d}\mathrm{cos}\theta ^{*}}\propto 1+\mathrm{cos}^2\theta ^{*}+\frac{8}{3}\mathrm{A}_{\mathrm{FB}}\mathrm{cos}\theta ^{*}$$
(2)
where $`\theta ^{*}`$ is the centre of mass scattering angle between the incoming $`\mathrm{e}^{-}`$ and the outgoing negative lepton.
To achieve a good precision on the cross section, a high efficiency and a low background are necessary, while the asymmetry is insensitive to the overall efficiency and to a background with the same asymmetry as the signal. This justifies the use of different leptonic selections for the cross section and asymmetry measurements.
### 2.1 Hadronic cross sections
Four million $`\mathrm{e}^+\mathrm{e}^{-}\rightarrow \mathrm{q}\overline{\mathrm{q}}`$ events have been recorded at the Z peak, leading to a statistical uncertainty of $`0.05\%`$. The systematic uncertainty has been reduced to the same level.
The hadronic cross section measurement is based on two independent selections. The first selection is based on charged track properties while the second is based on calorimetric energy. Details of these selections can be found in previous publications . These two measurements are in good agreement and are combined to obtain the final result. The systematic uncertainties of these selections are almost uncorrelated because they are mainly based on uncorrelated quantities; therefore the combination of the two selections allows the systematic uncertainty to be reduced.
Figure 1 shows the distribution of charged multiplicity versus the charged track energy for signal and background. These variables are used to separate $`\mathrm{e}^+\mathrm{e}^{-}\rightarrow \mathrm{q}\overline{\mathrm{q}}`$ events from background in the charged track selection. The most dangerous background in both selections is $`\gamma \gamma `$ events, because the Monte Carlo prediction is not fully reliable. Therefore a method to determine this background from the data has been developed. This is achieved by exploiting the different $`\sqrt{s}`$ dependence of the resonant (signal) and non-resonant (background) contributions. The resultant systematic error ($`0.04\%`$) reflects the statistics of the data.
The dominant systematic error in the calorimetric selection comes from the calibration of calorimeters ($`0.09\%`$) and, in the charged track selection from the determination of the acceptance ($`0.06\%`$). Table 1 gives a breakdown of the efficiency, the background and the systematic uncertainties of both selections.
### 2.2 Leptonic cross sections
The statistical uncertainty in the leptonic channel is of the order of $`0.15\%`$. The aim of the analysis is to reduce the systematics to less than $`0.1\%`$. Two analyses were developed for the measurement of the leptonic cross sections. The first one, referred to as exclusive, is based on three independent selections, each aimed at isolating one lepton flavour, and still follows the general philosophy of the analysis procedures described earlier . The second one is new and has been optimised for the measurement of $`\mathrm{R}_\mathrm{l}`$; it is referred to as the global di-lepton selection. The results of these selections agree within the uncorrelated statistical error and have been combined for the final result. These selections are not independent since they make use of similar variables; therefore their combination does not reduce the systematic uncertainty.
We concentrate here on the global analysis. This selection takes advantage of the excellent particle identification capabilities (dE/dx, shower development in the calorimeters and muon chamber information) and the high granularity of the Aleph detector. First, di-leptons are selected within the detector acceptance with an efficiency of $`99.2\%`$, and the background arising from $`\gamma \gamma `$, $`\mathrm{q}\overline{\mathrm{q}}`$ and cosmic events is reduced to the level of $`0.2\%`$. Then the lepton flavour separation is performed inside the di-lepton sample, so that the systematic uncertainties are anti-correlated between two lepton species and no additional uncertainty is introduced on $`\mathrm{R}_\mathrm{l}`$. Table 2 gives a breakdown of the systematic errors obtained with 1994 data. The background from $`\mathrm{q}\overline{\mathrm{q}}`$ and $`\gamma \gamma `$ events affects mainly the $`\tau ^+\tau ^{-}`$ channel; therefore this channel is affected by bigger selection systematics than $`\mathrm{e}^+\mathrm{e}^{-}`$ and $`\mu ^+\mu ^{-}`$.
As an example we consider the systematic errors related to the $`\tau ^+\tau ^{-}`$ selection efficiency. This efficiency is measured on the data: $`\tau ^+\tau ^{-}`$ events are selected using tight selection criteria to flag $`\tau `$-like hemispheres; with the sample of opposite hemispheres, artificial $`\tau ^+\tau ^{-}`$ events are constructed by associating two back-to-back such hemispheres, and the selection cuts are applied. In order to assess the validity of the method and to correct for possible biases of this method, two different Monte Carlo reference samples are used. On the first sample the same procedure of artificial $`\tau ^+\tau ^{-}`$ events is applied; on the second one the selection cuts are applied directly. The uncertainty on the efficiency measured with this method is dominated by the statistics of the artificial events used in the data. This method is applied in order to measure the inefficiency arising from the $`\mathrm{q}\overline{\mathrm{q}}`$ cuts and in the flavour separation.
The dominant systematic in the $`\mathrm{e}^+\mathrm{e}^{}`$ channel arises from the t-channel subtraction and is given by the theoretical uncertainty on the t-channel contribution to the cross section.
This leptonic cross section measurement contributes a systematic uncertainty of $`0.08\%`$ to $`\mathrm{R}_\mathrm{l}`$.
### 2.3 Leptonic Forward-Backward asymmetries
The measurement of the asymmetries is dominated by the statistical uncertainty, equal to 0.0015. Special muon and tau selections have been designed for the asymmetry measurement, while the $`\mathrm{e}^+\mathrm{e}^{}`$ exclusive selection is used for the $`\mathrm{e}^+\mathrm{e}^{}`$ asymmetry. The $`\mathrm{e}^+\mathrm{e}^{}`$ angular distribution needs to be corrected for efficiency before subtracting the t-channel, and therefore relies on Monte Carlo. The $`\mu ^+\mu ^{}`$ and $`\tau ^+\tau ^{}`$ angular distributions, in contrast, need no Monte Carlo correction, since those selections are designed so that the efficiency is symmetric.
Because of the $`\gamma Z`$ interference, the asymmetry varies rapidly with the centre-of-mass energy $`\sqrt{s^{}}`$ around the Z mass. Cuts on energy induce a dependence of the efficiency on $`\sqrt{s^{}}`$ and could therefore bias the measurement, since the efficiency would no longer be symmetric. To minimise these effects the selections are based mainly on particle identification rather than on kinematic variables.
Figure 2 shows the particle identification efficiency as a function of energy.
The asymmetry is extracted by performing a maximum likelihood fit to the differential cross section. The dominant systematic uncertainty arises from the t-channel subtraction in the Bhabha channel (0.0013 on $`\mathrm{A}_{\mathrm{FB}}^{0,\mathrm{e}}`$); all other systematic errors are smaller than 0.0005.
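The sketch below illustrates such a maximum likelihood fit, using the lowest-order form of the differential cross section, a probability density proportional to $`(3/8)(1+\mathrm{cos}^2\theta )+\mathrm{A}_{\mathrm{FB}}\mathrm{cos}\theta `$; the generated sample and the input asymmetry are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(afb, cos_theta):
    """Lowest-order angular distribution, already normalised as a PDF
    over -1 < cos(theta) < 1."""
    pdf = 0.375 * (1.0 + cos_theta**2) + afb * cos_theta
    return -np.sum(np.log(pdf))

# Hypothetical sample of signed scattering angles (accept-reject)
rng = np.random.default_rng(1)
true_afb = 0.017
cands = rng.uniform(-1, 1, 400000)
keep = rng.uniform(0, 1, cands.size) < (0.375*(1 + cands**2)
                                        + true_afb*cands) / 0.8
cos_theta = cands[keep]

res = minimize_scalar(neg_log_likelihood, args=(cos_theta,),
                      bounds=(-0.5, 0.5), method="bounded")
print(f"fitted A_FB = {res.x:.4f} on {cos_theta.size} events")
```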
## 3 Results
Figures 3 and 4 show the measured cross sections and asymmetries. The Z lineshape parameters are fitted to these measurements with the latest version of ZFITTER. The error matrix used in the $`\chi ^2`$ fit includes the experimental statistical and systematic uncertainties, the LEP beam energy measurement uncertainty and the theoretical uncertainties arising from the small angle Bhabha cross section in the luminosity determination and from the t-channel contribution to wide angle Bhabha events. The results are shown in Table 3.
The values of the Z couplings to charged leptons, $`|g_V|`$ and $`|g_A|`$, can be derived from these parameters. The experimental measurement is shown in Figure 5 together with the Standard Model prediction. The data favor a light Higgs.
The value of $`\alpha _s`$ can also be extracted from $`\mathrm{R}_\mathrm{l}`$, $`\mathrm{\Gamma }_\mathrm{Z}`$ and $`\sigma _{\mathrm{had}}^0`$:
$$\alpha _\mathrm{s}=0.115\pm 0.004_{\mathrm{exp}}\pm 0.002_{\mathrm{QCD}}$$
(3)
where the first error is experimental and the second reflects uncertainties on the QCD part of the theoretical prediction. Here the Higgs mass has been fixed to 150 GeV; the dependence of $`\alpha _s`$ on $`\mathrm{M}_\mathrm{H}`$ can be approximately parametrised by $`\alpha _\mathrm{s}(\mathrm{M}_\mathrm{H})=\alpha _\mathrm{s}(\mathrm{M}_\mathrm{H}=150\mathrm{G}\mathrm{e}\mathrm{V})\times (1+0.02\times \mathrm{ln}(\mathrm{M}_\mathrm{H}/150))`$, where $`\mathrm{M}_\mathrm{H}`$ is expressed in GeV.
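This parametrisation is straightforward to evaluate; a minimal helper (the numerical values are exactly those quoted above) could read:

```python
import math

def alpha_s(m_higgs_gev, alpha_ref=0.115, m_ref=150.0):
    """Parametrised Higgs-mass dependence of the alpha_s extraction:
    alpha_s(M_H) = alpha_s(150 GeV) * (1 + 0.02 ln(M_H / 150))."""
    return alpha_ref * (1.0 + 0.02 * math.log(m_higgs_gev / m_ref))

for mh in (90.0, 150.0, 300.0):
    print(f"M_H = {mh:5.0f} GeV  ->  alpha_s = {alpha_s(mh):.5f}")
```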
## 4 Conclusion
The high statistics accumulated by Aleph during LEP 1 running allow the hadronic and leptonic cross sections and the leptonic forward-backward asymmetries to be measured with statistical and systematic precisions of the order of 1 per mille. These measurements are turned into precise determinations of the Z boson properties and into constraints on the Standard Model parameters.
## Acknowledgments
I would like to thank D. Schlatter for his help in preparing this talk. I also would like to thank the organisers of the Lake Louise Winter Institute conference for the interesting program and the nice atmosphere at the conference.
# An Atmospheric Model for the Ion Cyclotron Line of Geminga.
## 1 Introduction
The panorama of optical observations of Isolated Neutron Stars (INS) is limited by the faintness of the vast majority of them. Only one object, the Crab pulsar, has good, medium resolution optical (Nasuti et al. 1996) and near-UV spectral data (Gull et al. 1998), while for PSR0540-69 the synchrotron continuum has been measured by HST (Hill et al. 1997). For a few more cases acceptable multicolour photometry exists, while the rest of the data base (a grand total of less than 10 objects currently) consists of one- or two-wavelength detections (see Caraveo 1998 for a summary of the observational panorama).
Apart from the very young objects, characterized by flat, synchrotron-like spectra arising from energetic electron interactions in their magnetosphere, of particular interest are the middle-aged ones ($`\sim 10^5`$ yrs old). The non-thermal, magnetospheric emission should have faded enough (in the X-ray waveband at least) to render visible the thermal emission from the hot INS surface. Standard cooling calculations predict a surface temperature in the range $`10^5`$–$`10^6\mathrm{K}`$, in excellent agreement with recent X-ray observations of INS with thermal spectra (e.g. Becker & Trümper 1997). It is easy to predict the IR-optical-UV fluxes generated along the $`E^2`$ Rayleigh-Jeans slope of the Planck curve best fitting the X-ray data, and to compare predictions to observations, where available.
In what follows, we shall concentrate on Geminga, which is certainly the most studied object of its class (Bignami & Caraveo 1996 and refs. therein), and possibly of all INSs (except for the Crab). Its IR-optical-UV data (Bignami et al. 1996; Mignani et al. 1998) show the presence of a well defined emission feature. Such a feature is superimposed on the thermal continuum expected from the extrapolation of the black-body X-ray emission detected by ROSAT (e.g. Halpern & Ruderman 1993). The presence of a clear maximum centered on V has been recently questioned by Martin et al. (1998) on the basis of a spectrum which is at the limit of the capability of the Keck telescope. The spectrum of Geminga, detected at a level of just 0.5% of the dark sky, seems fairly flat and, although broadly consistent with earlier measurements, it is definitely above the flux measured in the B-band by the HST/FOC (Mignani et al. 1998). Therefore, in view of the faintness of the target, we shall concentrate on the photometric measurements, which have been repeated using different instrumental set-ups and appear more reliable than the available spectral data.
Recently, pulsations in the B-band have been tentatively detected by Shearer et al. (1998). Also in this case, the faintness of the source limits quite severely the S/N ratio and thus the statistical significance of the result. Indeed, a pulsed signal at just the 3.5 $`\sigma `$ level was found during only one of the three nights devoted to the project. If confirmed, these measurements would have deep implications on the mechanisms responsible for the optical emission of Geminga. However, in view of the rather low statistical significance of these results, we shall stand by the interpretation of Mignani et al. (1998) and propose a phenomenological model interpreting the feature as an atmospheric cyclotron line emission from Geminga’s polar caps.
## 2 The Spectral Distribution
Fig.1 shows the complete I-to-near-UV colours of Geminga. These were obtained both from the ground and from HST during ten years of continuing observational effort (from Mignani et al. 1998). Comparing the repeatedly confirmed V-band magnitude with the relative fluxes of the three FOC points (430W, 342W and 195W) in the B/UV and the upper limit in I, it is easy to recognize an emission feature superimposed on an $`E^2`$ RJ-like continuum. It is thus natural to compare the measured optical fluxes directly to the extrapolation of the soft X-ray blackbody spectrum. However, this apparently trivial step is easier said than done, since different X-ray observations have yielded slightly different best fitting temperatures for Geminga, with significantly different bolometric fluxes. For example, two independent ROSAT observations yielded best fitting temperatures of $`5.77\times 10^5\mathrm{K}`$ (Halpern & Ruderman 1993) and $`4.5\times 10^5\mathrm{K}`$ (Halpern & Wang 1997), respectively, implying, for the same Geminga distance, a factor of 3 difference in the emitting area. As a reference, we show in Fig. 1 the Rayleigh-Jeans extrapolation of the ROSAT X-ray spectrum obtained using the best fitting temperature derived by Halpern & Ruderman (1993) which, at the Geminga distance (Caraveo et al. 1996), yields an emission radius of 10 km. Thus, while the optical, RJ-like continuum is largely consistent with thermal emission from the neutron star surface, the feature around $`6,000\AA `$ requires a different interpretation.
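For orientation, the Rayleigh-Jeans extrapolation used as a reference here can be reproduced with a few lines of code; the temperature, emitting radius and distance below are the Halpern & Ruderman (1993) best fit and the parallax distance quoted in the text, and interstellar absorption is ignored.

```python
import numpy as np

# CGS constants
k_B, c, pc = 1.380649e-16, 2.99792458e10, 3.0857e18

T = 5.77e5          # K, ROSAT best-fit temperature (Halpern & Ruderman)
R = 10.0e5          # cm, emitting radius
d = 157.0 * pc      # cm, parallax distance

def rj_flux(wavelength_angstrom):
    """Rayleigh-Jeans flux density F_nu (erg/s/cm^2/Hz) from a
    blackbody sphere of radius R seen at distance d."""
    nu = c / (wavelength_angstrom * 1e-8)
    B_nu = 2.0 * nu**2 * k_B * T / c**2     # RJ limit of the Planck law
    return np.pi * B_nu * (R / d)**2

for lam in (1950.0, 4300.0, 5500.0):
    print(f"{lam:6.0f} A : F_nu = {rj_flux(lam):.3e} erg/s/cm^2/Hz")
```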
In the following, we propose a phenomenological model interpreting the feature seen in Fig.1 as an atmospheric cyclotron line emission from Geminga’s polar caps.
## 3 The Ion Cyclotron Emission Model.
Comparisons between theory and observed cooling NS spectra have been performed, e.g., by Romani (1987) and Meyer et al. (1994). Basically, the star surface is thought to be surrounded by a colder, partially ionized, atmosphere. Such an atmosphere behaves as a broad-band, absorbing-emitting medium. The observed spectrum is due to a rather complicated process, involving transfer of radiating energy between regions of different depths, temperatures and chemical compositions. These models predict a deviation from the blackbody emission law, marginally observed in cooling neutron star spectra. Models that account for the effects of the star magnetic field $`B`$ foresee absorption lines at the cyclotron frequencies. This is easily explained by the presence of resonant frequencies, created by the magnetic field, at which the radiation emitted from deeper regions is quite efficiently absorbed. On the other hand, a cyclotron emission ”line”, superimposed on the blackbody continuum, requires a thermal inversion of the stellar atmosphere. We will see in the following that a thin hot plasma layer is optically thick at the cyclotron frequency, so that the cyclotron emission is far more efficient than Bremsstrahlung in the same spectral range. For reasonable values of the plasma density, therefore, Bremsstrahlung can be neglected. Geminga’s emission ”line” might then be explained as (ion) cyclotron emission from a magnetoplasma, consisting of a mixture of Hydrogen and Helium, surrounding the star surface. Any feature of the observed radiation spectrum of Geminga in the region of the proton cyclotron frequency is ultimately due to the accelerated motion of charged particles in the magnetic and electric fields of the rotating star. We assume, however, the upper part of the star atmosphere to be a fully ionized ”classical” plasma layer, with a Maxwellian ion distribution function. The plasma layer is immersed in the inhomogeneous dipole magnetic field of the star. Accordingly, we develop the theory of the emission under the condition of applicability of Kirchhoff’s law. To this end it is necessary to consider the full dielectric response of the magnetized plasma, and to address two key questions: (i) identification of an electromagnetic (e.m.) wave which can propagate in vacuo toward the observer, and assessment of its thermal (black body) absorption/emission properties in a single or multiple species plasma; (ii) assessment of the emission line broadening mechanism as a consequence of Doppler effect, collisions, temperature and density inhomogeneity and averaging over the magnetic field.
### 3.1 The Electromagnetic Wave and its Absorption/Emission Properties.
If a medium emits radiation at a given frequency, it also absorbs it at the same frequency. The quantity $`I(\omega )`$, i.e. the radiated power per unit area per unit solid angle per unit angular frequency, is given by the solution of the radiative transfer equation. For a single propagating e.m. mode in a medium of refractive index $`n`$ this is written as:
$$n^2\frac{d}{ds}\left(\frac{I(\omega )}{n^2}\right)=j(\omega )-\alpha (\omega )I(\omega )$$
$`(1)`$
where $`j(\omega )`$ and $`\alpha (\omega )`$ are the emissivity and the absorption coefficient respectively, and $`s`$ is the axis along the radiation path. In the present problem only one normal e.m. mode can propagate in vacuo and be observed. The emissivity, $`j(\omega )`$, is computed from the plasma dispersion relation and Kirchhoff’s law:
$$j(\omega )=\alpha (\omega )K(\omega ,T)$$
$`(2)`$
where $`K(\omega ,T)`$ is the Planck function at the plasma temperature $`T`$. Near the cyclotron frequency $`\omega _{ci}`$ ($`2\pi \nu _{ci}=\omega _{ci}=ZeB/(Am_pc)`$, where $`Z`$ is the atomic charge, $`A`$ the atomic mass number and $`m_p`$ the proton mass), two independent e.m. modes can propagate in the magnetized plasma atmosphere. They are identified by an index of refraction given by the complex roots $`n_\pm =n^{}+in^{\prime \prime }`$ of the complex dispersion relation, written for a real frequency $`\omega \simeq \omega _{ci}`$ in the biquadratic form (Akhiezer et al. 1975):
$$An_\pm ^4+Bn_\pm ^2+C=0$$
$`(3)`$
The coefficients $`A,B`$ and $`C`$ are functions of the full dielectric tensor and depend on the plasma frequency $`\omega _p`$, the ion cyclotron frequency $`\omega _{ci}`$, the ion temperature $`T_i`$ and the angle $`\theta `$ between the propagation wave vector $`k=\frac{\omega }{c}n`$ and the local magnetic field $`B`$. Of these two waves, the slow $`n_+`$ (ordinary) wave diverges at the cyclotron resonance, with a finite bandwidth of anomalous dispersion around the cyclotron frequency (roughly given by the ratio of the thermal velocity to the phase velocity).
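Numerically, once the coefficients $`A`$, $`B`$ and $`C`$ have been assembled from the dielectric tensor, the two branches follow from the quadratic formula in $`n^2`$; a minimal sketch (with purely illustrative complex coefficients standing in for the actual tensor algebra) is:

```python
import numpy as np

def refractive_indices(A, B, C):
    """Solve A n^4 + B n^2 + C = 0 for the two wave branches.
    A, B, C are the (complex) coefficients built from the full
    dielectric tensor; the values used below are placeholders."""
    disc = np.sqrt(B**2 - 4.0*A*C + 0j)
    n2_plus  = (-B + disc) / (2.0*A)   # slow (ordinary) branch
    n2_minus = (-B - disc) / (2.0*A)   # fast (extraordinary) branch
    # n = n' + i n'': the principal square root has Re(n) >= 0
    return np.sqrt(n2_plus), np.sqrt(n2_minus)

# Purely illustrative coefficients near omega ~ omega_ci
n_p, n_m = refractive_indices(1.0+0.0j, -(2.1+0.05j), 1.05+0.02j)
for label, n in (("n+", n_p), ("n-", n_m)):
    print(f"{label}: n' = {n.real:.4f}, n'' = {n.imag:.4f}")
```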
In the region where $`\omega >\omega _{ci}`$ the ordinary wave has an evanescence gap which, even when its bridging by thermal and collisional effects is considered, produces substantial attenuation of the wave propagating through the tenuous plasma toward the observer (Shafranov 1958; Akhiezer et al. 1975). The Geminga plasma atmosphere is characterized by $`\omega _p/\omega _{ci}<1`$. This prevents the use of the customary ion cyclotron approximation of the dielectric tensor, and a numerical solution of the dispersion relation has been performed for both the ordinary and the extraordinary mode. The numerically evaluated absorption and propagation properties of the mode propagating in vacuo are shown in Fig. 2 as a function of the angle between the direction of propagation and the magnetic field. The extraordinary mode, which can propagate in vacuo with frequency $`\omega \simeq \omega _{ci}`$, is the so-called magnetosonic, compressional Alfven or fast wave.
It has a regular index of refraction and a very narrow band of anomalous dispersion, with a correspondingly narrow frequency range of absorption due to thermal effects (Akhiezer et al. 1975). The fast wave is electromagnetic in nature and in the cold plasma limit it is right-handed polarized. It becomes left-handed (i.e. polarized in the direction of the ion motion), thus allowing absorption at the fundamental harmonic, owing to small thermal effects or to the presence in the plasma of small traces of isotopes of the minority species. Assuming an appropriate temperature and density, even in a pure $`H^+`$ plasma the fast wave reaches a sufficiently large absorption coefficient $`\alpha (\omega ,\theta )=2(\omega /c)n^{\prime \prime }(\omega ,\theta )`$. The optical depth, $`\tau `$, can be estimated as
$$\tau (\omega ,\theta )=\int _0^L\alpha (\omega ,\theta )𝑑s^{}\simeq \alpha (\omega ,\theta )L\ge 1$$
$`(4)`$
where $`L`$ is the plasma layer thickness, of the order of a few cm. The refraction index of the fast wave and its absorption coefficient $`\alpha `$ are given in Fig. 2a,b for different $`\theta `$ angles, a plasma density of $`10^{19}\mathrm{cm}^{-3}`$ and a plasma temperature of $`9\times 10^7\mathrm{K}`$. It is worth noticing that, owing to the weak dependence of $`\alpha `$ on $`\theta `$ (see Fig.2b), a layer of thickness less than 1 cm is sufficient to match the blackbody emission, independently of the angle of propagation with respect to the magnetic field at any point on the star surface. We will use this condition to greatly simplify the computation of the intensity and of the broadening of the emitted line.
The intensity, $`i(\omega ,\omega _{ci})`$, radiated from any point of the star surface is then simply given by
$$i(\omega ,\omega _{ci})=K(T,\omega )[1e^{\tau (\omega ,\omega _{ci})}]$$
$`(5)`$
A density of about $`10^{19}`$ particles $`\mathrm{cm}^{-3}`$ represents an upper limit to the plasma density. Above such a limit, bremsstrahlung, with its absorption coefficient $`\alpha \sim 10^{-2}\mathrm{cm}^{-1}`$ (Bekefi 1966) in the range of temperature and frequency of interest here, would introduce severe spectral distortions, not observed in the data.
Blackbody emission conditions could be met in a lower density region of the outer part of the atmosphere, with a slightly larger plasma thickness. This, however, would not change the main issue of the present model: the observed Geminga spectral feature comes from a plasma which belongs to the star atmosphere. One may alternatively think that the cyclotron line was emitted by the low density matter surrounding the star. This would imply that what is observed depends upon a process of photon collection over a wide plasma volume and, consequently, over a wide range of cyclotron frequencies. However, this is contradicted by the comparatively well defined peak now observed.
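The saturation expressed by equations (4) and (5) is easily illustrated: for an absorption coefficient of order unity per cm, as suggested by Fig. 2b, a sub-cm slab already radiates essentially at the blackbody level. The numerical values in the sketch below are assumptions of this kind, not computed ones.

```python
import numpy as np

def line_intensity(alpha_cm, L_cm, planck_K=1.0):
    """Emitted intensity i = K(T, omega) * (1 - exp(-tau)), with
    tau = alpha * L for a homogeneous slab (Kirchhoff's law).
    The Planck function K is normalised to 1 for illustration."""
    tau = alpha_cm * L_cm
    return planck_K * (1.0 - np.exp(-tau))

# alpha ~ O(1) cm^-1 near the resonance (assumed, cf. Fig. 2b)
for L in (0.3, 1.0, 3.0):
    print(f"L = {L:3.1f} cm -> i/K = {line_intensity(1.5, L):.3f}")
```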
### 3.2 Emission Line Broadening Mechanism.
The real part of $`n^2(\omega ,\theta )`$ shown in Fig.2 exhibits a very narrow frequency band of anomalous dispersion. This indicates that the width of the observed emission line cannot be explained in terms of the thermal Doppler broadening ($`\mathrm{\Delta }\omega \simeq \omega |n\mathrm{cos}\theta |\frac{v_{thi}}{c}`$, where $`v_{thi}`$ is the ion thermal velocity), which is by far insufficient, in the range of temperatures required to explain the intensity of the line, to fit the observations. Collisional effects are another possible source of broadening; these, however, have been shown to be insufficient to account for the width of the cyclotron feature. The only mechanism left to account for the observed width is the structure of the dipole-like magnetic field associated with the star. In the case of a perfect magnetic dipole located at the centre of the star, the intensity of the magnetic field increases by a factor of two when moving on the star surface from the (magnetic) equator to the poles. As a consequence, the ion cyclotron frequency should also change by a factor of two, if the emitting plasma were to cover the whole star surface. The shape of the emission line is then determined by the superposition of radiation emitted by regions of the star with different magnetic field values and possibly different plasma temperatures. The intensity of the line in our model is obtained as the integral of the emission from each point of the star surface:
$$I(\omega )=\frac{1}{D^2}\int _\mathrm{\Sigma }i(\omega ,\omega _{ci})\frac{nk}{|k|}d^2\mathrm{\Sigma }$$
$`(6)`$
where $`D`$ is the distance of the star from the observer and the integral is extended to the whole surface, $`\mathrm{\Sigma }`$, taking into account the usual geometric effects ($`n`$ represents the unit vector perpendicular to $`d^2\mathrm{\Sigma }`$, while $`k`$ is the propagation vector pointing toward the observer). The non-homogeneous emission also depends on the relative position of the observer with respect to the star rotation axis and, to properly account for the shape and the intensity of the feature, on the angle $`\gamma `$ between the rotation and magnetic axes. The computed spectrum, of course, will be averaged over the star rotation period. These model assumptions also allow us to determine the modulation depth expected for the observed spectral feature, yielding a clear observational test.
## 4 Model and Interpretation.
In the context of the model described above, the intensity of the emission line depends upon (1) the temperature of the cyclotron-emitting plasma and (2) the emitting fraction of the star surface. The range of the dipole field spanned by the emitting plasma surface determines the frequency width of the observed feature. Clearly, the ion gyrofrequency formula allows us to determine the value of the star’s magnetic field as a function of the frequency of the observed line. Since the ratio A/Z (which defines the chemical composition and the ionization level of the emitting atmosphere) is unknown, the magnetic field value is, in principle, determined only to within a factor of $`\sim `$2. On the other hand, our model yields a clear prediction of a sharp intensity decrease at the frequency value corresponding to the maximum of the B field, located close to the magnetic poles. To compute a B field value, the emitting medium is assumed to be either H or He, yielding a well-defined A/Z ratio. This is consistent with the strong stratification of the elements induced by the huge gravitational field of the neutron star. Hydrogen and Helium differ by a factor of two in their ratio A/Z, so that the cyclotron second harmonic of the heavier element overlaps exactly the fundamental one of the lighter. The position of the polar caps with respect to the star magnetic axis is shown in Fig.3. The polar cap extension, given by the angle $`\beta `$, determines the frequency width of the cyclotron emission. The magnetic axis forms an angle $`\gamma `$ with the rotation axis. The model will be fully determined once the angle, $`\alpha `$, between the observer and the rotation axis is defined (see Fig.4). Fig.5 compares the data (filled circles) to the cyclotron emission model (open circles). The thin continuous line shows the detailed shape of the feature as determined by the model. The open circles are obtained by integrating the model data over the filter passbands. The computed magnetic field ranges from $`3.8\times 10^{11}\mathrm{G}`$ for the case of a pure Hydrogen plasma to $`7.6\times 10^{11}\mathrm{G}`$ in the case of Helium. The model data shown in Fig. 5 have been obtained with the following assumptions: magnetic pole plasma temperature $`T_0=9\times 10^7\mathrm{K}`$; temperature profile along the polar caps assumed to be gaussian-like, $`T=T_0\mathrm{exp}[-(\beta /\beta _0)^4]`$ with $`\beta _0=57^{\circ }`$. With this choice of parameters the plasma temperature drops to 1/10 of its maximum value in about $`60^{\circ }`$. Such an extension of the plasma polar caps is required to explain the width of the line. Of course, the observed line intensity and the plasma temperature are also determined by Geminga’s radius and by its distance from the observer. We assumed a radius $`r_0=10km`$ and the parallax distance of 157 pc. Reasonable geometry uncertainties, however, do not change the order of magnitude of the plasma temperature required to emit such an intense cyclotron line. Fig.6 gives a prediction of our model under the assumption of an oblique rotation geometry ($`\gamma =90^{\circ }`$) and for $`\alpha `$ close to $`20^{\circ }`$: both the line intensities and profiles are seen to vary with the rotation phase. For this choice of parameters the modulation factor is about 15% and the profile is seen to sharpen close to the emission maximum (i.e. when the $`B`$ axis sweeps over the observer direction). Obviously, the region around $`5500\AA `$ should be the ideal one for observing the feature modulation and profile.
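As a numerical check of the field values quoted above, the fundamental cyclotron condition can be inverted for $`B`$ at the wavelength of the feature (we take $`5500\AA `$, the region indicated above; constants are in CGS units):

```python
import math

e, m_p, c = 4.803e-10, 1.6726e-24, 2.9979e10   # CGS (esu, g, cm/s)

def b_field(lambda_angstrom, A, Z):
    """Magnetic field for which the fundamental ion cyclotron line,
    omega_ci = Z e B / (A m_p c), falls at the given wavelength."""
    nu = c / (lambda_angstrom * 1e-8)
    return 2.0 * math.pi * nu * A * m_p * c / (Z * e)

for name, A, Z in (("H+", 1, 1), ("He2+", 4, 2)):
    print(f"{name}: B = {b_field(5500.0, A, Z):.2e} G")
# -> ~3.6e11 G for H and ~7.2e11 G for He, close to the quoted values
```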
## 5 Electron Cyclotron Emission and Absorption.
Geminga’s polar cap atmosphere has been modelled as a fully ionized gas with a density of $`10^{19}\mathrm{cm}^{-3}`$ and a temperature of $`9\times 10^7\mathrm{K}`$. Under these model assumptions, the associated electron-ion energy equipartition time keeps ions and electrons close in temperature. The electron cyclotron emission process is thus quite efficient, and the electron cyclotron line associated with the companion ion cyclotron line is expected to be an observable feature. This is in contradiction with the experimental data (see the combined ROSAT/ASCA X-ray spectrum), which show no line feature at E=4 or 8 keV, i.e. in the X-ray spectral range where the electron line should fall. A possible explanation is based on the observation that, within a blackbody emission approximation, the power emitted at the electron cyclotron frequency is so high that the rate of relaxation between the parallel and perpendicular (to the magnetic field) temperatures is not sufficient to keep the electron distribution function isotropic (Ichimaru 1973; Trubnikov 1965). Following this line of thought, the anisotropy of the electron distribution function would then reach a steady state when the cyclotron emitted power, which is proportional to the perpendicular temperature (Bornatici et al. 1983), becomes lower than the one predicted in the isotropic case.
In order to give a quantitative estimate of the steady state anisotropy and emission, a kinetic calculation is required by means of a relativistic Fokker-Planck equation which includes a quasilinear term of interaction with the electron cyclotron radiation. This is, at present, beyond the scope of this work.
A possible explanation of the fact that the residual emission is not observed is given by the polar cap heating model proposed for Geminga by Halpern & Ruderman (1993). Following this model, a flux of $`e^+e^{}`$ pairs is created on closed field lines lying outside the star and channelled into the polar caps of Geminga with a residual energy of about 6.5 erg each (Halpern & Ruderman 1993). The $`e^\pm `$ cloud, embedded in the dipole magnetic field, which decreases with increasing distance from the star surface, can thus act as a ”second harmonic” resonant absorber of the cyclotron radiation emitted from regions closer to the star surface. The resonant radial position is located at $`r_{2nd}=2^{1/3}r_0`$ and, owing to the electron cyclotron line width $`\mathrm{\Delta }\omega /\omega _{ce}=v_{the}/c`$ (where $`\omega _{ce}`$ is the angular frequency of the electron cyclotron emission and $`v_{the}`$ the thermal velocity of the electrons), extends over about 300 m. It can be shown (Bornatici et al. 1983) that the E=4 keV line is efficiently absorbed (”optical depth” $`\tau \simeq 16`$) by the $`e^\pm `$ cloud, provided that $`n_\pm T_\pm \gtrsim 100`$, where $`n_\pm `$ and $`T_\pm `$ are the density and temperature of the $`e^\pm `$ respectively ($`n_\pm `$ in units of $`10^{15}\mathrm{cm}^{-3}`$ and $`T_\pm `$ in keV). Typical densities $`n_\pm \sim 10^{17}\mathrm{cm}^{-3}`$, corresponding to column densities of $`\sim 10^{22}\mathrm{cm}^{-2}`$, with a temperature of $`2\times 10^6\mathrm{K}`$ (0.2 keV), for instance, will attenuate the electron cyclotron line by a factor of $`10^{-7}`$. Under these model assumptions, the electron cyclotron emission can no longer be observed as a spectral feature.
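For reference, the quoted suppression follows directly from the optical depth:

```python
import math

tau = 16.0                       # 'optical depth' of the e+e- cloud
print(f"attenuation = exp(-tau) = {math.exp(-tau):.1e}")
# -> 1.1e-07, consistent with the quoted suppression factor of ~1e-7
```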
## 6 Conclusions.
The data shown in Fig.1 leave no room for doubt that a wide emission feature exists in the optical region of Geminga’s thermal continuum. The feature falls in the wavelength region where atmospheric ion-cyclotron emission is expected to be located for plasma close to the surface of a magnetic neutron star. Since Geminga is a magnetic neutron star, as witnessed by its periodic $`\gamma `$-ray emission, and most probably has an atmosphere, as witnessed by its soft X-ray emission, we have provided here a semi-quantitative interpretation for such a feature. It is based on the reasonable assumption that the polar cap regions of the NS are covered by a thin plasma layer heated to a temperature higher than that of the global surface atmosphere by, e.g., infalling particles. This is not a new scenario per se. It was foreseen both in the case of INS accretion of ionized matter funnelled towards the poles by the B-field configuration and in that of magnetospheric particles drawn back to the polar surface by the strong E field induced by the oblique rotator.
The plausibility of this emission model at visible frequencies is also supported by an estimate of the power balance performed along the lines proposed by Halpern & Ruderman. In the case of Geminga a pair flux in excess of $`\dot{N}=10^{38}\mathrm{s}^{-1}`$ can release in the emitting plasma layer a linear power density $`\dot{N}dE/dr\sim 10^{28}\mathrm{erg}\mathrm{cm}^{-1}\mathrm{s}^{-1}`$ (Jackson 1975). This power is sufficient to compensate the plasma losses, mainly due to ion cyclotron emission and Bremsstrahlung, over the whole star surface.
What is new here is the excellent fit obtained to the multiple experimental data by our physical model using a minimum of assumptions. In particular, we have shown that the feature could not originate over the whole star surface, because global B-field variations would induce a feature wider than observed. The only free parameter is the geometry of the emission with respect to the observer; note, however, that our geometry is fully compatible with the oblique rotator proposed for Geminga by Halpern & Ruderman (1993). The assumption that the composition of the outer emitting layer is either H or a light, fully ionized element mixture is supported by the estimated value of the magnetic field. Such a value is in good agreement with the standard pulsar magnetic field prediction. It represents, in fact, the first independent measurement of the surface magnetic field of an INS.
###### Acknowledgements.
Useful discussions with Prof. Bruno Bertotti are gratefully acknowledged.
# Newtonian hydrodynamics of the coalescence of black holes with neutron stars II: Tidally locked binaries with a soft equation of state.
## 1 Introduction
Angular momentum losses to gravitational radiation are expected to lead to the coalescence of binary systems containing black holes and/or neutron stars (when the initial binary separation is small enough for the decay to take place in less than the Hubble time). This type of evolution has been suggested in a variety of contexts as possibly giving rise to observable events, such as gamma–ray bursts (GRBs) and bursts of gravitational waves (see e.g. Thorne 1995). Additionally, it could help explain the observed abundances of heavy elements in our galaxy (Lattimer & Schramm 1974; 1976) if the star is tidally disrupted in the encounter (see Wheeler 1971). Study of such events could also provide constraints on the equation of state at supra–nuclear densities.
After the recent measurement of redshifts to their afterglows \[Metzger et al. 1997, Kulkarni et al. 1998, Djorgovski et al. 1998\], it is now generally believed that GRBs originate at cosmological distances. The calculated event rates \[Lattimer & Schramm 1976, Narayan, Piran & Shemi 1991, Tutukov & Yungelson 1993, Lipunov, Postnov & Prokhorov 1997, Portegies Zwart & Yungelson 1998, Bełczyński & Bulik 1999\] for merging compact binaries are compatible with the observed frequency of GRBs (on the order of one per day). The preferred model for the production of a GRB invokes a relativistic fireball from a compact ‘central engine’ that would produce the observable $`\gamma `$–rays through internal shocks (Mészáros & Rees 1992; 1993). This model requires the presence of a relatively baryon–free line of sight from the central engine to the observer along which the fireball can expand at ultrarelativistic speeds. Additionally, the short–timescale variations seen in many bursts (often in the millisecond range) probably arise within the central engine \[Sari & Piran 1997\].
The coalescence of binary neutron star systems or black hole–neutron star binaries was suggested as a mechanism capable of powering the gamma–ray bursts, either during the binary merger itself or through the formation of a dense accretion disk which could survive long enough to accommodate the variable timescales of GRBs (Paczyński 1986; Goodman 1986; Goodman, Dar & Nussinov 1987; Eichler et al. 1988; Paczyński 1991; Mészáros & Rees 1992; Woosley 1993; Jaroszyński 1993; 1996; Witt et al. 1994; Wilson, Mathews & Marronetti 1996; Ruffert, Janka & Schäfer 1996; Lee & Kluźniak 1997; Ruffert, Janka, Takahashi & Schäfer 1997; Katz 1997; Kluźniak & Lee 1998; Ruffert & Janka 1998; Popham, Woosley & Fryer 1998; McFadden & Woosley 1998; Ruffert & Janka 1999). The enormous amount of gravitational energy that would be liberated in such an event could account for the energetics of the observed GRBs, and neutrino–antineutrino annihilation may power the necessary relativistic fireball.
In previous work (Lee & Kluźniak 1995; 1998 (hereafter Paper I); Kluźniak & Lee 1998), we have studied the coalescence of a neutron star with a stellar–mass black hole for a stiff ($`\mathrm{\Gamma }=3`$) polytropic equation of state and a range of mass ratios. We found that the neutron star was not entirely disrupted, but rather remained in orbit (with a greatly reduced mass) about the black hole after a quick episode of mass transfer. Thus the duration of the coalescence process would be extended from a few milliseconds to possibly several tens of milliseconds. The observed outcome seemed favorable for the production of a GRB since in every case we found a baryon–free axis in the system, along the axis of rotation.
In the present paper, we investigate the coalescence of a black hole–neutron star binary for a soft equation of state (with an adiabatic index $`\mathrm{\Gamma }=5/3`$) and a range of mass ratios. Our initial conditions are as in Paper I in that they correspond to tidally locked binaries. Complete tidal locking is not realistically expected \[Bildsten & Cutler 1992\], but it can be considered as an extreme case of the angular momentum distribution in the system. In the future we will explore configurations with varying degrees of tidal locking.
As before, the questions motivating our study are: Is the neutron star tidally disrupted by the black hole and does an accretion torus form around the black hole? If so, how long–lived is it? Is the baryon contamination low enough to allow the formation of a relativistic fireball? Is any significant amount of mass dynamically ejected from the system? What is the gravitational radiation signal like, and how does it depend on the equation of state and the initial mass ratio?
In section 2 we present the method we have used to carry out our simulations. This is followed by a presentation of our results in section 3 and a discussion in section 4.
## 2 Numerical method
For the simulations presented in this paper, we have used the method known as Smooth Particle Hydrodynamics (SPH). Our code is three–dimensional and essentially Newtonian. This method has been described often; we refer the reader to Monaghan \[Monaghan 1992\] for a review of the principles of SPH, and to Paper I and Lee \[Lee 1998\] for a detailed description of our own code, including the tree structure used to compute the gravitational field.
We model the neutron star via a polytropic equation of state, $`P=K\rho ^\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }=5/3`$. For the following, we measure distance and mass in units of the radius R and mass $`M_{\mathrm{NS}}`$ of the unperturbed (spherical) neutron star (13.4 km and 1.4 M respectively), except where noted, so that the units of time, density and velocity are
$`\stackrel{~}{t}=1.146\times 10^{-4}\mathrm{s}\times \left({\displaystyle \frac{R}{13.4\text{km}}}\right)^{3/2}\left({\displaystyle \frac{M_{\mathrm{NS}}}{1.4M_{}}}\right)^{-1/2}`$ (1)
$`\stackrel{~}{\rho }=1.14\times 10^{18}\mathrm{kg}\mathrm{m}^{-3}\times \left({\displaystyle \frac{R}{13.4\text{km}}}\right)^{-3}\left({\displaystyle \frac{M_{\mathrm{NS}}}{1.4M_{}}}\right)`$ (2)
$`\stackrel{~}{v}=0.39c\times \left({\displaystyle \frac{R}{13.4\text{km}}}\right)^{-1/2}\left({\displaystyle \frac{M_{\mathrm{NS}}}{1.4M_{}}}\right)^{1/2}.`$ (3)
and we use corresponding units for derivative quantities such as energy and angular momentum.
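These units follow from setting $`G=M_{\mathrm{NS}}=R=1`$; a short script reproducing the numerical values in equations (1)–(3) is given below (standard CGS constants assumed):

```python
import math

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # CGS

def code_units(R_km=13.4, M_ns_msun=1.4):
    """Time, density and velocity units implied by G = M_NS = R = 1."""
    R, M = R_km * 1e5, M_ns_msun * M_sun
    t_u = math.sqrt(R**3 / (G * M))          # s
    rho_u = M / R**3                         # g cm^-3
    v_u = math.sqrt(G * M / R)               # cm s^-1
    return t_u, rho_u, v_u

t_u, rho_u, v_u = code_units()
print(f"t~   = {t_u:.3e} s")
print(f"rho~ = {rho_u * 1e3:.3e} kg m^-3")   # convert g/cm^3 to kg/m^3
print(f"v~/c = {v_u / c:.2f}")
```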
The black hole (of mass $`M_{\mathrm{BH}}`$) is modeled as a Newtonian point mass, with a potential $`\mathrm{\Phi }_{\mathrm{BH}}(r)=-GM_{\mathrm{BH}}/r`$. We model accretion onto the black hole by placing an absorbing boundary at the Schwarzschild radius ($`r_{Sch}=2GM_{\mathrm{BH}}/c^2`$). Any particle that crosses this boundary is absorbed by the black hole and removed from the simulation. The mass and position of the black hole are continuously adjusted so as to conserve total mass and total momentum.
Initial conditions corresponding to tidally locked binaries in equilibrium are constructed in the co–rotating frame of the binary for a range of separations r and a given value of the mass ratio $`q=M_{\mathrm{NS}}/M_{\mathrm{BH}}`$ (Rasio & Shapiro 1994; Paper I). The binary separation is defined henceforth as the distance between the black hole and the center of mass of the SPH particles. During the construction of these configurations, the specific entropies of all particles are maintained constant, i.e. $`K`$=constant in $`P`$=$`K\rho ^\mathrm{\Gamma }`$. The neutron star is modeled with $`N=17,256`$ particles in every case presented in this paper. To ensure uniform spatial resolution, the masses of the particles were made proportional to the Lane–Emden densities on the initial grid.
To carry out a dynamical run, the black hole and every particle are given the azimuthal velocity corresponding to the equilibrium value of the angular frequency $`\mathrm{\Omega }`$ in an inertial frame, with the origin of coordinates at the center of mass of the system. Each SPH particle is assigned a specific internal energy $`u_i=K\rho ^{(\mathrm{\Gamma }-1)}/(\mathrm{\Gamma }-1)`$, and the equation of state is changed to that of an ideal gas, where $`P=(\mathrm{\Gamma }-1)\rho u`$. The specific internal energy of each particle is then evolved according to the first law of thermodynamics, taking into account the contributions from the artificial viscosity present in SPH. During the dynamical runs we calculate the gravitational radiation waveforms in the quadrupole approximation.
We have included a term in the equations of motion that simulates the effect of gravitational radiation reaction on the components of the binary system. Using the quadrupole approximation, the rate of energy change for a point–mass binary is given by (see Landau & Lifshitz 1975):
$`{\displaystyle \frac{dE}{dt}}=-{\displaystyle \frac{32}{5}}{\displaystyle \frac{G^4(M_{\mathrm{NS}}+M_{\mathrm{BH}})(M_{\mathrm{NS}}M_{\mathrm{BH}})^2}{(cr)^5}}`$ (4)
and the rate of angular momentum loss by
$`{\displaystyle \frac{dJ}{dt}}=-{\displaystyle \frac{32}{5c^5}}{\displaystyle \frac{G^{7/2}}{r^{7/2}}}M_{\mathrm{BH}}^2M_{\mathrm{NS}}^2\sqrt{M_{\mathrm{BH}}+M_{\mathrm{NS}}}.`$ (5)
From these equations a radiation reaction acceleration for each component of the binary can be obtained as
$`𝒂^{}={\displaystyle \frac{1}{q(M_{\mathrm{NS}}+M_{\mathrm{BH}})}}{\displaystyle \frac{dE}{dt}}{\displaystyle \frac{𝒗^{}}{(v^{})^2}}`$ (6)
$`𝒂^{\mathrm{BH}}={\displaystyle \frac{q}{M_{\mathrm{NS}}+M_{\mathrm{BH}}}}{\displaystyle \frac{dE}{dt}}{\displaystyle \frac{𝒗^{\mathrm{BH}}}{(v^{\mathrm{BH}})^2}}`$ (7)
where $`v^{}`$ is the velocity of the neutron star and $`v^{\mathrm{BH}}`$ that of the black hole.
We have used this formula for our calculations to simulate the effect of gravitational radiation reaction on the system. Clearly, the application of equation (7) to the black hole in our calculations is trivial, since we always treat it as a point mass. For the neutron star, we have chosen to apply the same acceleration to all SPH particles. This value is that of the acceleration at the center of mass of the SPH particles, so that equation (6) now reads:
$`𝒂^i={\displaystyle \frac{1}{q(M_{\mathrm{NS}}+M_{\mathrm{BH}})}}{\displaystyle \frac{dE}{dt}}{\displaystyle \frac{𝒗_{cm}^{}}{(v_{cm}^{})^2}},`$ (8)
This formulation of the gravitational radiation reaction has been used in SPH simulations by others \[Davies et al. 1994, Zhuge, Centrella & McMillan 1996, Rosswog et al. 1999\] in the case of merging neutron stars, and it is usually switched off once the stars come into contact, when the point–mass approximation clearly breaks down. We are assuming, then, that the polytrope representing the neutron star can be considered as a point mass for the purposes of including radiation reaction. If the neutron star is disrupted during the encounter with the black hole, this radiation reaction must be turned off, since our formula would no longer give meaningful results. We have adopted a switch for this purpose, as follows: the radiation reaction is turned off if the center of mass of the SPH particles comes within a prescribed distance of the black hole (effectively a tidal disruption radius). This distance is set to $`r_{tidal}=CR(M_{\mathrm{BH}}/M_{\mathrm{NS}})^{1/3}`$, where C is a constant of order unity.
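A minimal sketch of this radiation reaction term and its switch is given below; the code-unit value of $`c`$ and the example velocities are illustrative, and the actual implementation details of our code are not reproduced here.

```python
import numpy as np

G, c = 1.0, 2.56  # code units; c ~ 1/0.39 here, cf. equation (3)

def radiation_reaction(r, v_ns_cm, v_bh, m_ns, m_bh, C=1.0, R_ns=1.0):
    """Point-mass quadrupole radiation-reaction accelerations
    (eqs. 4-8), switched off inside the tidal-disruption radius.
    v_ns_cm is the centre-of-mass velocity of the SPH particles."""
    if r < C * R_ns * (m_bh / m_ns)**(1.0/3.0):
        return np.zeros(3), np.zeros(3)      # switch: star disrupted
    q = m_ns / m_bh
    dEdt = -(32.0/5.0) * G**4 * (m_ns + m_bh) * (m_ns*m_bh)**2 / (c*r)**5
    a_ns = dEdt / (q*(m_ns + m_bh)) * v_ns_cm / np.dot(v_ns_cm, v_ns_cm)
    a_bh = dEdt * q / (m_ns + m_bh) * v_bh / np.dot(v_bh, v_bh)
    return a_ns, a_bh

# Example: equal-mass binary at r = 2.7 on a tidally locked orbit
a_ns, a_bh = radiation_reaction(2.7, np.array([0.0, 0.43, 0.0]),
                                np.array([0.0, -0.43, 0.0]), 1.0, 1.0)
print(a_ns, a_bh)
```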
## 3 Results
We now describe our results. First, we present the initial conditions that were used to perform the dynamical runs. We then describe the general morphology of the coalescence events, the detailed structure of the accretion disks that form as a result of the tidal disruption of the neutron star, and the gravitational radiation signal.
### 3.1 Evolution of the Binary
To allow comparisons of results for differing equations of state, we have run simulations with the same initial binary mass ratios as previously explored (Paper I), namely $`q`$=1, $`q`$=0.8 and $`q`$=0.31. Additionally we have examined the case with mass ratio $`q`$=0.1. Equilibrium sequences of tidally locked binaries were constructed for a range of initial separations, terminating at the point where the neutron star overflows its Roche Lobe (at $`r=r_{RL}`$). In Figure 1
we show the variation of total angular momentum J in these sequences as a function of binary separation for the four values of the mass ratio (solid lines). Following Lai, Rasio & Shapiro \[Lai, Rasio & Shapiro 1993b\], we have also plotted the variation in J that results from approximating the neutron star as compressible tri–axial ellipsoid (dashed lines) and as a rigid sphere (dotted lines).
In all cases, the SPH results for the $`\mathrm{\Gamma }=5/3`$ polytrope are very close to the ellipsoidal approximation until the point of Roche–Lobe overflow. This result is easy to understand if one considers that the softer the equation of state, the more centrally condensed the neutron star is and the less susceptible to tidal deformations arising from the presence of the black hole. For $`\mathrm{\Gamma }=3`$ (Paper I), the variation in angular momentum as a function of binary separation was qualitatively different (for high mass ratios) from our present findings. For $`q`$=1 and $`q`$=0.8, total angular momentum attained a minimum at some critical separation before Roche–Lobe overflow occurred. This minimum indicated the presence of a dynamical instability, which made the binary decay on an orbital timescale. This purely Newtonian effect arose from the tidal interactions in the system \[Lai, Rasio & Shapiro 1993a\]. In the present study, we expect all orbits with initial separations $`rr_{RL}`$ to be dynamically stable.
There is a crucial difference between the two polytropes considered in Paper I and here. For polytropes, the mass–radius relationship is $`R\propto M^{(\mathrm{\Gamma }-2)/(3\mathrm{\Gamma }-4)}`$. For $`\mathrm{\Gamma }`$=3 this becomes $`R\propto M^{1/5}`$, while for $`\mathrm{\Gamma }`$=5/3, $`R\propto M^{-1/3}`$. Thus, the polytrope considered in Paper I responded to mass loss by shrinking. The $`\mathrm{\Gamma }=5/3`$ polytrope, considered here, responds to mass loss by expanding, as do neutron stars modeled with realistic equations of state \[Arnett & Bowers 1977\]; the dynamical disruption of the star reported below seems to be related to this effect. For the polytropic index considered in Paper I, the star was not disrupted (see also Lee & Kluźniak 1995; 1997; Kluźniak & Lee 1998), but we find no evidence in any of our dynamical calculations for a steady mass transfer in the binary, such as the one suggested in the literature (e.g. Blinnikov et al. 1984; Portegies Zwart 1998).
Using equations (4) and (5) one can compute the binary separation as a function of time for a point–mass binary in the quadrupole approximation, and obtain
$`r=r_i\left(1-t/t_0\right)^{1/4},`$ (9)
with $`t_0^{-1}=256G^3M_{\mathrm{BH}}M_{\mathrm{NS}}(M_{\mathrm{BH}}+M_{\mathrm{NS}})/(5r_i^4c^5)`$. Here $`r_i`$ is the separation at $`t`$=0. For the black hole–neutron star binaries studied in this paper, the timescale for orbital decay because of angular momentum loss to gravitational radiation, $`t_0`$, is on the order of the orbital period $`P`$ (for $`q`$=1, at an initial separation $`r_i`$=2.7 we find $`t_0=56.81\times \stackrel{~}{t}`$=6.5 ms and $`P=19.58\times \stackrel{~}{t}`$=2.24 ms), so one must analyze whether hydrodynamical effects will drive the coalescence on a comparable timescale. We have performed a dynamical simulation for $`q`$=1, with an initial separation on the verge of Roche Lobe overflow, at $`r`$=2.7, without including radiation reaction in the equations of motion. We show in Figure 2a
the binary separation as a function of time for this calculation (solid line) as well as the separation for a point–mass binary decaying in the quadrupole approximation, using equation (9) (dashed line). In the dynamical simulation the orbital separation remains approximately constant, and begins to decay rapidly around $`t`$=110 (in the units defined in equation 1), when mass loss from the neutron star becomes important. Clearly at this stage hydrodynamical effects are dominant, but one must include radiation reaction in the early stages of the process. There is an added (practical) benefit derived from including radiation reaction in these calculations. As seen in Figure 2a, it takes a full 15 ms for the orbit to become unstable. Simulating the behavior of the system at high resolution (practically no SPH particles have been accreted at this stage) for such a long time is computationally expensive, whereas accretion in the early stages of the simulation allows us to perform, in general, more calculations at higher resolution. We have thus included radiation reaction in all the runs (A through E) presented in this paper, and adopted a switch as described in section 2.
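The quoted decay time is easily verified from equation (9); the sketch below evaluates $`t_0`$ in physical units for the $`q`$=1 case (small differences with respect to the value quoted above come only from the adopted physical constants):

```python
import math

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # CGS
R = 13.4e5                                    # cm, neutron star radius

def decay_time(r_i_code, m_ns=1.4, m_bh=1.4):
    """Point-mass decay time t0 from eq. (9), in seconds."""
    r_i = r_i_code * R
    m1, m2 = m_ns * M_sun, m_bh * M_sun
    return 5.0 * c**5 * r_i**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

t0 = decay_time(2.7)
print(f"t0 = {t0 * 1e3:.2f} ms")   # ~6.3 ms, cf. 56.81 t~ = 6.5 ms
```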
### 3.2 Run parameters
In Table 1 we present the parameters distinguishing each dynamical run we performed. All times are in units of $`\stackrel{~}{t}`$ (equation 1) and all distances in units of R, the unperturbed (spherical) stellar radius. The runs are labeled with decreasing mass ratio (increasing black hole mass), from $`q`$=1 down to $`q`$=0.1. All simulations were run for the same length of time, $`t_{final}=200`$, equivalent to 22.9 ms (this covers on the order of ten initial orbital periods for the mass ratios considered).
The fifth column in Table 1 shows the value of $`t_{rad}`$, when radiation reaction is switched off according to the criterion described in section 2. In Figure 3 we show density contours in the orbital plane for runs A, C, D and E at times very close to $`t=t_{rad}`$. The corresponding plot for run B is very similar to that for run A. Note that runs C and D differ only in the corresponding value of $`t_{rad}`$. For run D there is little doubt that approximating the neutron star as a point–mass is still reasonable at this stage, while for run C this is clearly not the case. We can then use these two runs to gauge the effect of our simple radiation reaction formulation on the outcome of the coalescence event. We note here that run E is probably beyond the limit of what should be inferred from a Newtonian treatment of such a binary system. The black hole is very large compared to the neutron star, and the initial separation ($`r_i=5.05`$, equivalent to 67.87 km) is such that the neutron star is within the innermost stable circular orbit of a test particle around a Schwarzschild black hole of the same mass ($`r_{ms}=9.17`$, equivalent to 123.26 km). Thus we present in Appendix A a dynamical run with initial mass ratio $`q=0.1`$ making use of a pseudo–Newtonian potential for the black hole. For the following, we will at times omit a discussion of run B, as it is qualitatively and quantitatively very similar to run A (both of these have a relatively high mass ratio).
### 3.3 Morphology of the mergers
The initial configurations are close to Roche Lobe overflow, and mass transfer from the neutron star onto the black hole starts within one orbital period for all runs, A through E. Once accretion begins, the total number of particles decreases. Since this compromises resolution, we modified the code for run E to prevent the number of particles from dropping below 9,000. This is done simply by splitting a given fraction $`N_{split}`$ of the particles and creating $`2N_{split}`$ particles from them. Total mass and momentum are conserved during this procedure, and it can be shown that the numerical noise introduced into the smoothed density $`\rho `$ by doing this is of the order of the accuracy of the SPH method itself, $`𝒪`$($`h^2`$), where $`h`$ is the smoothing length \[Meglicki, Wickramasinghe & Bicknell 1993\].
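One possible implementation of such a splitting step is sketched below; the offset prescription (a random displacement of a fraction of the smoothing length, applied symmetrically so that mass, momentum and the centre of mass are preserved) is an assumption for illustration, not necessarily the one used in our code.

```python
import numpy as np

def split_particles(m, x, v, h, idx, eps=0.3):
    """Split the particles listed in idx into two halves of equal mass,
    offset by a fraction eps of the smoothing length; total mass,
    momentum and centre of mass are conserved (velocities are copied)."""
    rng = np.random.default_rng(0)
    d = rng.normal(size=(len(idx), 3))
    d *= (eps * h[idx] / np.linalg.norm(d, axis=1))[:, None]
    m_new = np.concatenate([m, 0.5 * m[idx]]); m_new[idx] *= 0.5
    x_new = np.concatenate([x, x[idx] + d]);   x_new[idx] -= d
    v_new = np.concatenate([v, v[idx]])
    h_new = np.concatenate([h, h[idx]])
    return m_new, x_new, v_new, h_new
```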
In every run the binary separation (solid lines in Figure 2b) initially decreases due to gravitational radiation reaction. For high mass ratios (runs A, B) the separation decays faster than what would be expected for a point–mass binary. This is also the case for a stiff equation of state, in black hole–neutron star mergers (Paper I) as well as in binary neutron star mergers \[Rasio & Shapiro 1994\], and merely reflects the fact that hydrodynamical effects are playing an important role. For the soft equation of state studied here, there is the added effect of ‘runaway’ mass transfer because of the mass–radius relationship (see section 3.1). For runs C and D, the solid and dashed lines in Figure 2b follow each other very closely, indicating that the orbital decay is primarily driven by angular momentum losses to gravitational radiation. For run E, the orbit decays more slowly than one would expect for a point–mass binary. This is explained by the fact that there is a large amount of mass transfer in the very early stages of the simulation (10% of the initial neutron star mass has been accreted by $`t=t_{rad}`$ in this case), substantially altering the mass ratio in the system (the dashed curves in Figure 2 are computed for a fixed mass ratio). From the expression for the timescale for orbital decay, $`t_0`$, in equation (9), it is apparent that at constant total mass, lowering the mass ratio slows the orbital decay when $`q<0.5`$.
The general behavior of the system is qualitatively similar in every run. Figures 4, 5 and 6 show density contours in the orbital plane (left columns) and in the meridional plane containing the black hole (right columns) for runs A, D and E respectively, at $`t=50`$ and $`t=t_f=200`$ (equivalent to 5.73 ms and 22.9 ms). The corresponding plots for runs B and C are very similar to those for runs A and D, respectively. The neutron star initially becomes elongated along the binary axis and an accretion stream forms, transferring mass to the black hole through the inner Lagrange point. The neutron star responds to mass loss and tidal forces by expanding, and is tidally disrupted. An accretion torus forms around the black hole as the initial accretion stream winds around it. A long tidal tail is formed as the material furthest from the black hole is stripped from the star. Most of the mass transfer occurs in the first two orbital periods, and peak accretion rates reach values between 0.04 and 0.1, equivalent to 0.49 and 1.22 M/ms (see Figure 7).
We show in Figure 8 the various energies of the system (kinetic, internal, gravitational potential and total) for runs A, D and E.
The dramatic drop in total internal energy reflects the intense mass accretion that takes place within the first couple of orbits. Figure 8 also shows \[in panel (d)\] the total angular momentum of the system (the only contribution to the total angular momentum not plotted is the spin angular momentum of the black hole, see below). The angular momentum decreases for two reasons. First, if gravitational radiation reaction is still acting on the system, it decreases approximately according to equation (5). Second, whenever matter is accreted by the black hole, the corresponding angular momentum is removed from our system, which shows up as a decrease in the total value of J. In reality, the angular momentum of the accreted fluid would increase the spin of the black hole; we keep track of this accreted angular momentum and exhibit its value in Table 2 as the Kerr parameter of the black hole. It is clear from runs C and D that the value of $`t_{rad}`$ influences the peak accretion rate and the mass of the black hole (particularly immediately after the first episode of heavy mass transfer). The maximum accretion rate differs between runs C and D by about a factor of 1.4. This is easy to understand, since radiation reaction increases the angular momentum losses, and hence more matter is accreted per unit time when it is present.
### 3.4 Accretion disk structure
In Table 2 we show several parameters pertaining to the final accretion structure around the black hole for every run. The disk settles down to a fairly azimuthally symmetric structure within a few initial orbital periods (except for the long tidal tail, which always persists as a well–defined structure), and there is a baryon–free axis above (and below) the black hole in every case (Figure 9). We have calculated the mass of the remnant disk, $`M_{disk}`$, by searching for the amount of matter that has sufficient specific angular momentum j at the end of the simulation to remain in orbit around the black hole (as in Ruffert & Janka 1999). This material has $`j>j_{crit}=\sqrt{6}GM_t/c`$, where $`M_t`$ is the total mass of the system. The values shown in Table 2 are equivalent to a few tenths of a solar mass, and again the effect of $`t_{rad}`$ can be seen by comparing runs C and D, where the disk masses differ by a factor of 1.2. By the end of the simulations, between 70% and 80% of the neutron star has been accreted by the black hole. It is interesting to note that the final accretion rate (at $`t=t_f`$) appears to be rather insensitive to the initial mass ratio, and lies between $`2\times 10^{-4}`$ and $`5\times 10^{-4}`$ (equivalent to 2.4 and 6.1 $`M_{}\mathrm{s}^{-1}`$ respectively). From this final accretion rate we have estimated a typical timescale for the evolution of the accretion disk, $`\tau _{disk}=M_{disk}/\dot{M}_{final}`$ (for reference, $`\tau =100`$ in the units of equation (1) corresponds to $`11.5`$ ms, and thus the values of $`\tau _{disk}`$ given in Table 2 are between 47 and 63 ms). Despite the differences in the initial mass ratios and in the typical sizes of the disks ($`r_0`$ is the radial distance from the black hole to the density maximum at $`t=t_f`$), the similar disk masses and final accretion rates make the lifetimes comparable for every run.
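The disk-mass criterion and the black hole spin-up bookkeeping described above amount to a few lines of post-processing; in the sketch below the particle arrays are placeholders, with positions and velocities measured relative to the black hole in CGS units.

```python
import numpy as np

G, c = 6.674e-8, 2.998e10   # CGS

def disk_mass(m, x, v, m_total):
    """Mass with enough specific angular momentum to stay in orbit:
    j > j_crit = sqrt(6) G M_t / c (cf. Ruffert & Janka 1999)."""
    j = np.linalg.norm(np.cross(x, v), axis=1)   # specific ang. momentum
    j_crit = np.sqrt(6.0) * G * m_total / c
    return m[j > j_crit].sum()

def kerr_parameter(J_accreted, m_bh):
    """Spin-up of an initially non-rotating hole: a = J c / (G M^2)."""
    return J_accreted * c / (G * m_bh**2)
```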
We have plotted azimuthally averaged density and internal energy profiles in Figure 10 for runs A, D and E. The specific internal energy is greater towards the center of the disk, and flattens out, at $`u\simeq 2\times 10^{-2}`$, at a distance from the black hole roughly corresponding to the density maximum. This value corresponds to $`2.74\times 10^{18}\mathrm{erg}\mathrm{g}^{-1}`$ or 2.9 MeV/nucleon and is largely independent of the initial mass ratio. The inner regions of the disks have specific internal energies that are greater by approximately one order of magnitude.
Additionally, panel (d) in the same figure shows the azimuthally averaged distribution of specific angular momentum j in the orbital plane for all runs. The curves terminate at $`r_{in}=2r_{Sch}`$. Pressure support in the inner regions of the accretion disks makes the rotation curves sub–Keplerian, while the flattening of the distribution marks the outer edge of the disk and the presence of the long tidal tail (see Figure 11), which has practically constant specific angular momentum.
The Kerr parameter of the black hole, given by $`a=J_{\mathrm{BH}}c/GM_{\mathrm{BH}}^2`$, is also shown in Table 2. We have calculated it from the amount of angular momentum lost via accretion onto the black hole (see Figure 8d), assuming that the black hole is not rotating at $`t=0`$. The final specific angular momentum of the black hole is smaller for lower mass ratios simply because the black hole is initially more massive when q is smaller. The difference in the value of a for runs C and D is important (almost a factor of 2), and again reflects the influence of gravitational radiation reaction (for a larger value of $`t_{rad}`$ the black hole is spun up to a greater degree because of the larger amount of accreted mass).
It is of crucial importance for the production of GRBs from such a coalescence event that there be a baryon–free axis in the system along which a fireball may expand with ultrarelativistic velocities (Mészáros & Rees 1992; 1993). We have calculated the baryon contamination for every run as a function of the half–angle $`\mathrm{\Delta }\theta `$ of a cone directly above (and below) the black hole and along the rotation axis of the binary that contains a given amount of mass $`\mathrm{\Delta }M`$. Table 2 shows these angles (in degrees) for $`\mathrm{\Delta }M=10^{-3},10^{-4},10^{-5}`$ (equivalent to $`1.4\times 10^{-3},1.4\times 10^{-4},1.4\times 10^{-5}`$ M respectively). There is a greater amount of pollution for high mass ratios (the disk is geometrically thicker compared to the size of the black hole), but in all cases only modest angles of collimation are required to avoid contamination. We note here that the values for $`\theta _5`$ are rough estimates at this stage since they are at the limit of our numerical resolution in the region directly above the black hole. This can be seen by inspection in Figure 9
where we show the enclosed mass as a function of half–angle $`\mathrm{\Delta }\theta `$ for all runs at $`t=t_f`$.
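The contamination measure itself is simple to evaluate from particle data; a hedged sketch (the SPH array layout below is assumed by us):

```python
import numpy as np

def enclosed_mass(pos, m, bh_pos, dtheta_deg):
    """Mass inside the double cone of half-angle dtheta about the
    rotation (z) axis, directly above and below the black hole."""
    r = pos - bh_pos
    # polar angle measured from the nearer of the +z / -z directions
    theta = np.degrees(np.arccos(np.abs(r[:, 2]) / np.linalg.norm(r, axis=1)))
    return m[theta < dtheta_deg].sum()

# Scanning dtheta and inverting Delta-M(Delta-theta) yields the
# half-angles quoted in Table 2 for Delta-M = 1e-3, 1e-4, 1e-5.
```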
### 3.5 Ejected mass and r–process
To calculate the amount of dynamically ejected mass during the coalescence process, we look for matter that has a positive total energy (kinetic+gravitational potential+internal) at the end of each simulation. Figure 11 shows a large–scale view of the system at $`t=t_f`$ for runs D and E. The thick black line running across the tidal tail in each case divides matter that is bound to the black hole from that which may be on outbound trajectories. This matter comes from the part of the neutron star that was initially furthest from the black hole and was ejected through the outer Lagrange point in the very early stages of mass transfer. We find that a mass between $`4.4\times 10^{-3}`$ M and $`2.2\times 10^{-2}`$ M can potentially be ejected in this fashion (see Table 2). This is very similar to what has recently been calculated for binary neutron star mergers for a variety of initial configurations \[Rosswog et al. 1999\]. Since it is believed that the event rate for binary neutron star mergers is comparable to that of black hole–neutron star mergers, this could prove to be a sizable contribution to the amount of observed r–process material. This result appears to be strongly dependent on the equation of state, since we previously observed no significant amount of matter being ejected for a system with a stiff equation of state (Paper I). For binary neutron star mergers, Rosswog et al. (1999) have obtained the same qualitative result. We also note that there is a significant difference in the amount of ejected mass for runs C and D (approximately a factor of four), due to the difference in the values of $`t_{rad}`$ (in run D the system loses less angular momentum and thus matter escapes with greater ease).
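The unbound-matter criterion translates directly into code; a minimal sketch (names are ours):

```python
import numpy as np

def ejected_mass(m, v, phi, u):
    """Mass on potentially outbound trajectories at t = t_f: a fluid
    element is flagged unbound when its total specific energy, i.e.
    kinetic plus gravitational potential (phi < 0 when bound) plus
    internal, is positive."""
    e_tot = 0.5 * np.sum(v * v, axis=1) + phi + u
    return m[e_tot > 0.0].sum()
```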
### 3.6 Gravitational radiation waveforms and luminosities
The emission of gravitational radiation is calculated in all our models in the quadrupole approximation (see e.g. Finn 1989; Rasio & Shapiro 1992), and can be obtained directly from the hydrodynamical variables of the system. The calculation of the gravitational radiation luminosity then requires only an additional numerical differentiation. Figure 12 shows the computed waveforms and luminosities, along with what the waveforms would be for a point–mass binary, also calculated in the quadrupole approximation. It is apparent that hydrodynamical effects play an important role particularly for high mass ratios in the early stages of the coalescence process (see panel (a) in Figure 12). When the neutron star is tidally disrupted, the amplitude of the waveform drops abruptly and practically to zero, since a structure that is almost azimuthally symmetric has formed around the black hole. This is in stark contrast to what occurred for a stiff equation of state (Lee 1998; Paper I; Kluźniak & Lee 1998), when the binary system survived the initial episode of mass transfer and a stable binary was the final outcome. In fact, these waveforms resemble more the case of a double neutron star merger with a soft equation of state \[Rasio & Shapiro 1992\], in which the coalescence resulted in a compact, azimuthally symmetric object surrounded by a dense halo and spiral arms. Table 3 shows the maximum amplitude $`h_{max}`$ for an observer located a distance $`r_0`$ away from the system along the axis of rotation, the maximum luminosity $`L_{max}`$ and the total energy $`\mathrm{\Delta }E_{GW}`$ emitted during the event. This last number should be taken only as an order of magnitude estimate since it depends on the choice of the origin of time. The peak luminosities are $`(R/M_{\mathrm{NS}})^5(L_{max}/L_0)=0.37`$ for run A, $`(R/M_{\mathrm{NS}})^5(L_{max}/L_0)=1.50`$ for run D and $`(R/M_{\mathrm{NS}})^5(L_{max}/L_0)=5.90`$ for run E (equivalent to $`1.12\times 10^{55}`$ erg s<sup>-1</sup>, $`4.55\times 10^{55}`$ erg s<sup>-1</sup> and $`1.79\times 10^{56}`$ erg s<sup>-1</sup> respectively). We note that although the waveforms for runs C (not plotted) and D (panel (b) in Figure 12) are very similar, the maximum amplitudes and luminosities differ by about 1.3% and 3.4% respectively, a small but non–negligible amount. This is again a reflection of the way in which the radiation reaction was formulated, and indicates that a more rigorous treatment of this effect is necessary.
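For the point–mass comparison curves, the quadrupole-approximation waveform and luminosity of a circular binary reduce to closed forms; a sketch in geometrized units $`G=c=1`$ (an observer on the rotation axis is assumed, and all names are ours):

```python
import numpy as np

def point_mass_binary(m1, m2, a, t, r0):
    """Quadrupole waveform and luminosity of a circular point-mass
    binary of separation a, for an observer at distance r0 on the
    rotation axis (face-on)."""
    mu, m_t = m1 * m2 / (m1 + m2), m1 + m2
    omega = np.sqrt(m_t / a**3)                  # Keplerian frequency
    amp = 4.0 * mu * a**2 * omega**2 / r0
    h_plus = amp * np.cos(2.0 * omega * t)
    h_cross = amp * np.sin(2.0 * omega * t)
    lum = (32.0 / 5.0) * mu**2 * m_t**3 / a**5   # standard circular-orbit result
    return h_plus, h_cross, lum
```

In the hydrodynamical case the quadrupole tensor is instead summed over the SPH particles and differentiated numerically, as described above.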
## 4 Summary and Discussion
We have presented results of hydrodynamical simulations of the binary coalescence of a black hole with a neutron star. We have used a polytropic equation of state (with index $`\mathrm{\Gamma }=5/3`$) to model the neutron star, and a Newtonian point mass with an absorbing surface at the Schwarzschild radius to represent the black hole. All our computations are strictly Newtonian, but we have included a term that approximates the effect of gravitational radiation reaction in the system. We have also calculated the emission of gravitational radiation in the quadrupole approximation.
We have found that for every mass ratio investigated ($`M_{\mathrm{NS}}/M_{\mathrm{BH}}`$=1, 0.8, 0.31 and 0.1) the $`\mathrm{\Gamma }=5/3`$ polytrope (‘neutron star’) is entirely disrupted by tidal forces, and a dense accretion torus, containing a few tenths of a solar mass, forms around the black hole. The maximum densities and specific internal energies in the tori are on the order of 10<sup>11</sup> g cm<sup>-3</sup> and 10<sup>19</sup> erg g<sup>-1</sup> (or 10 MeV/nucleon) respectively (all simulations were run for approximately 22.9 ms). The final accretion rate is between 2 and 6 solar masses per second, and hence the expected lifetime of the torus $`\tau _{disk}=M_{disk}/\dot{M}_{final}`$ is between 40 and 60 milliseconds.
The rotation axis of the system remains free of matter to a degree that would not hinder the production of a relativistic fireball, possibly giving rise to a gamma ray burst. Although the duration of such a burst would still be too short to power the longest GRBs, the present scenario could well account for the subclass of short bursts \[Kouveliotou et al. 1995\]. A significant amount of matter (between $`10^{-2}`$ and $`10^{-3}`$ solar masses) is dynamically ejected from the system, and could contribute significantly to the observed abundances of r–process material in our galaxy. The gravitational radiation signal is very similar to that of a point–mass binary until the beginning of mass transfer, particularly for low mass ratios. After mass transfer starts, the amplitude of the waveforms drops dramatically on a dynamical timescale when the accretion torus is formed. In every aspect, the results are dramatically different from what occurs for a stiff equation of state.
## Acknowledgments
We gratefully acknowledge financial support from DGAPA–UNAM and KBN grant 2P03D01311. W.L. thanks Craig Markwardt for helpful discussions concerning the effect of a soft equation of state on the system. It is a pleasure to thank the referee for his most helpful comments.
## Appendix A Dynamical evolution for a pseudo–Newtonian potential
Since, as stated in section 3.2, the system with mass ratio $`q=0.1`$ (run E) is at an initial separation that is within the marginally stable orbit for test particles around a Schwarzschild black hole, we have performed an additional run, altering the form of the potential produced by the black hole. We will present a detailed description of the results elsewhere (Lee 1999). Here we only give a comparison to the results of run E. We have chosen the form proposed by Paczyński & Wiita (1980), namely:
$`\mathrm{\Phi }_{\mathrm{BH}}^{PW}(r)=-GM_{\mathrm{BH}}/(r-r_{Sch}).`$ (10)
This potential correctly reproduces the positions of the marginally bound and marginally stable orbits for test particles. A few modifications need to be made to the SPH code to accommodate this new potential. First, the absorbing boundary of the black hole is now placed at a distance $`r_{boundary}=1.5r_{Sch}`$ from the position of the black hole; second, the total gravitational force exerted by the neutron star on the black hole is symmetrized so that total linear momentum is conserved in the system.
A tidally locked equilibrium configuration for a given separation $`r`$ can be constructed with these modifications in the same manner as described in section 3.1. For a test particle in orbit about a Schwarzschild black hole, the marginally stable orbit appears at a separation $`r_{ms}=6GM_{\mathrm{BH}}/c^2`$ because the total angular momentum exhibits a minimum at that point. In our case (where the neutron star has a finite size and mass), the turning point in the curve of total angular momentum as a function of binary separation occurs at approximately $`r=9.1R_{\mathrm{NS}}`$, and so we have chosen this value for the initial separation $`r_i`$ to be used in the dynamical simulation. Gravitational radiation reaction has been implemented as described in section 2, with a slight modification to the definition of $`r_{tidal}`$ to account for the increased strength of the gravitational interactions, so that now $`r_{tidal}=CR(M_{\mathrm{BH}}/M_{\mathrm{NS}})^{1/3}+r_{Sch}`$.
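The test-particle part of this statement is easy to verify numerically: for the Paczyński–Wiita potential the circular-orbit specific angular momentum is $`j(r)=\sqrt{GM_{\mathrm{BH}}r^3}/(r-r_{Sch})`$, and its minimum sits at $`6GM_{\mathrm{BH}}/c^2`$. A minimal sketch (for the binary itself, the turning point of the total angular momentum along the equilibrium sequence is located instead, as described above):

```python
import numpy as np

G = C = M_BH = 1.0
R_SCH = 2.0 * G * M_BH / C**2

def j_circ(r):
    """Specific angular momentum of a circular test-particle orbit in
    Phi = -G M / (r - r_Sch), from v^2 / r = G M / (r - r_Sch)^2."""
    return np.sqrt(G * M_BH * r**3) / (r - R_SCH)

r = np.linspace(2.5, 12.0, 200001) * G * M_BH / C**2
print(r[np.argmin(j_circ(r))])   # -> 6 G M / c^2, the marginally stable orbit
```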
Since gravitational interactions are stronger with the modified form of the gravitational potential, the overall encounter is more violent. The neutron star is tidally disrupted into a long tidal tail in a way similar to that exhibited in run E. The accretion episode is very brief, with a peak accretion rate onto the black hole of $`\dot{M}_{max}=0.7`$, equivalent to 8.5 M/ms and thus substantially higher than that for run E (see Table 2).
We followed the dynamical evolution from $`t=0`$ to $`t_f=200`$, and show final density contour plots in the orbital and meridional plane containing the black hole in Figure 13. By the end of the simulation, the fluid has not formed a quasi–static accretion structure around the black hole as for the Newtonian runs, and 99.2% of the initial neutron star mass has been accreted ($`M_{acc}=0.992`$). The thick black line in Figure 13a divides material that is bound to the black hole from that which is on outbound trajectories (see Figure 11 for a comparison with runs D and E). Overall, a smaller amount of mass is left over after the initial episode of heavy mass transfer (approximately an order of magnitude less than for run E), but a larger fraction ($`M_{ejected}=6.8\times 10^{-3}`$, equivalent to $`9.6\times 10^{-3}`$ M) may be dynamically ejected from the system. The region above and below the black hole is devoid of matter to an even greater extent than for the Newtonian case as can be seen in Figure 14,
where we plot the enclosed mass $`\mathrm{\Delta }M`$ as a function of the half angle $`\mathrm{\Delta }\theta `$ of a cone directly above (and below) the black hole at $`t=t_f`$, as in Figure 9. Thus in this scenario the production of a relativistic fireball that could give rise to a gamma–ray burst would require an even more modest degree of beaming in order to avoid baryon contamination.
# X–ray spectra transmitted through Compton–thick absorbers
## 1 Introduction
The presence of large amounts of “cold” (i.e. not much ionized, and substantially opaque in X–rays) matter around Active Galactic Nuclei is now a well established fact. For all Seyfert 2 galaxies observed in X–rays so far there is evidence for absorption in excess of the Galactic one. In a significant fraction of them, the column density of the absorbing matter exceeds 10<sup>24</sup> cm<sup>-2</sup> (Maiolino et al. 1998), and the matter is therefore optically thick to Compton scattering. In a few objects, like NGC 1068 (Matt et al. 1997), the column density is so high that the X–ray photons cannot escape even in hard X–rays, being trapped in the matter, downscattered to energies where photoelectric absorption dominates, and eventually destroyed. In other cases, like NGC 4945 (Iwasawa et al. 1993; Done et al. 1996), Mrk 3 (Cappi et al. 1999) and the Circinus Galaxy (Matt et al. 1999), the column density is a few$`\times `$10<sup>24</sup> cm<sup>-2</sup>, so permitting the transmission of a significant fraction of X–ray photons, many of them escaping after one or more scatterings. To properly model transmission through absorbers with these intermediate column densities, it is therefore necessary to take Compton scattering fully into account. This has been done in an analytical, approximated way by Yaqoob (1997), but his model is valid only below $`\sim `$15 keV, a painful limitation after the launch of BeppoSAX, which carries a sensitive hard X–ray (15-200 keV) instrument, and in view of future missions like Astro-E and Constellation-X.
We have therefore calculated transmitted spectra by means of Monte Carlo simulations. The code is, as far as physical processes are concerned, basically that described in Matt, Perola & Piro (1991). A spherical geometry, with the X–ray source in the centre, has been assumed, while the element abundances are those tabulated in Morrison & McCammon (1983). Photoelectric absorption, Compton scattering (in a fully relativistic treatment) and fluorescence (for iron atoms only) are included in the code. Photon paths are followed until either the photon is photoabsorbed (and not re–emitted as iron fluorescence) or escapes from the cloud.
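The structure of such a simulation is worth sketching. The following toy version (ours, not the code of Matt, Perola & Piro 1991) uses a crude $`E^{-3}`$ photoelectric cross section and isotropic Thomson scattering off electrons at rest in place of the relativistic treatment, and omits fluorescence, but it reproduces the qualitative interplay of absorption, downscattering and escape:

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA_T, ME_KEV = 6.65e-25, 511.0   # Thomson cross section [cm^2], m_e c^2

def iso_dir():
    mu, phi = 2.0 * rng.random() - 1.0, 2.0 * np.pi * rng.random()
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

def sigma_ph(e_kev):
    # toy photoelectric cross section per H atom, a stand-in for the
    # Morrison & McCammon opacities
    return 2.0e-22 * e_kev**-3

def escape_fraction(e0_kev, n_h, n_phot=20000):
    """Photons escaping a uniform sphere with a central source and
    radial column density n_h [cm^-2]."""
    n_esc = 0
    for _ in range(n_phot):
        e, pos, d = e0_kev, np.zeros(3), iso_dir()
        while True:
            sig_tot = sigma_ph(e) + SIGMA_T
            pos = pos + (-np.log(rng.random()) / (sig_tot * n_h)) * d
            if pos @ pos > 1.0:        # left the cloud (radius = 1)
                n_esc += 1
                break
            if rng.random() < sigma_ph(e) / sig_tot:
                break                  # photoabsorbed: photon destroyed
            d_new = iso_dir()          # Compton scattering: new direction,
            e /= 1.0 + (e / ME_KEV) * (1.0 - d @ d_new)  # downscattered energy
            d = d_new
    return n_esc / n_phot
```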
Spectra have been calculated for 31 different column densities, ranging from 10<sup>22</sup> to 4$`\times 10^{24}`$ cm<sup>-2</sup>. In order to be independent of the shape of the primary radiation, transmitted spectra for monochromatic emission have been calculated, with a step of 0.1 keV below 20 keV, and 1 keV above. A grid has then been constructed, which can be folded with the chosen spectral shape.
## 2 Transmitted spectra. Comparison with simple absorption models
To illustrate the effects of including the Compton scattering in the transmission spectrum, we show in Figs 1-5 (which refer to column densities of 10<sup>23</sup>, 3$`\times 10^{23}`$, 10<sup>24</sup>, 3$`\times 10^{24}`$ and 10<sup>25</sup> cm<sup>-2</sup>, respectively), the results of the Monte Carlo simulations (solid lines), along with the results when only photoelectric absorption (dotted lines) or both photoelectric and Compton absorption without scattering (dashed lines) are included. The first case is unphysical, and it is shown here only for the sake of illustration; the second case would correspond to absorption by matter with a negligible covering factor to the primary source (i.e. a small cloud on the line of sight), a physically possible but highly unlikely situation, at least for Seyfert galaxies (the fraction of Compton–thick sources is estimated to be at least 30%, Maiolino et al. 1998, so the covering fraction of the matter must be significant). The injected spectrum is a power law with a photon index of 2 and an exponential cut–off at 500 keV, as typical for Seyfert galaxies (even if the latter parameter is at present poorly known). As expected, our curves lie below the dotted curves (because of the larger absorption especially above $`\sim `$10 keV, where the Compton cross section starts dominating over the photoelectric cross section), and above the dashed curves (because of the extra radiation provided by the scattering of photons into the line of sight). The effect is dramatic, especially for large column densities and high energies.
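In the toy setup above, the dotted and dashed curves are simple exponentials, which makes the ordering of the three cases explicit (again with our stand-in cross sections, not the actual opacities):

```python
import numpy as np

SIGMA_T = 6.65e-25                                  # cm^2
sigma_ph = lambda e_kev: 2.0e-22 * e_kev**-3        # toy opacity, as above

n_h, e = 3.0e24, 30.0                               # cm^-2, keV
t_dotted = np.exp(-sigma_ph(e) * n_h)               # photoelectric only
t_dashed = np.exp(-(sigma_ph(e) + SIGMA_T) * n_h)   # + Compton, no scattering
# The Monte Carlo value (escape_fraction above) falls in between:
# t_dashed < t_solid < t_dotted.
print(t_dashed, t_dotted)
```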
## 3 Applications. I. The Circinus Galaxy
As a first application, let us discuss the case of the Circinus Galaxy. Matt et al. (1999) analyzed the BeppoSAX observation of this source and found a clear excess in hard X–rays (i.e. in the PDS instrument) with respect to the best fit medium energy (i.e. LECS and MECS) spectrum (which, in turn, was in good agreement with the ASCA result, Matt et al. 1996). The excess is best explained assuming that the nuclear emission is piercing through material with a moderately Compton–thick (i.e. between $`10^{24}`$ and $`10^{25}`$ cm<sup>-2</sup>, see previous section) column density. Compton scattering must therefore be taken into account in modeling the emerging spectrum, not only in absorption but also in emission, as there is clear evidence of a large amount of reflection too, suggesting a fairly large solid angle subtended by the cold matter to the primary source. The fit with the model described here yields the parameters of the transmitted component reported in Table 1 (model 1). Model 2 in the same table refers to the fit with a pure absorption model (photoelectric plus Compton). Both models are statistically acceptable (reduced $`\chi ^2\simeq 1`$), but the differences in the best fit parameters are significant, leading to dramatically different (i.e. two orders of magnitude) X–ray nuclear luminosities.
## 4 Applications. II. The hard X–ray Background
The origin of the thermal–like, $`\sim `$40 keV spectrum (Marshall et al. 1980) of the hard Cosmic X–ray background (XRB) has remained unexplained for many years. In 1989, Setti & Woltjer proposed an explanation in terms of a mixture of obscured (i.e. Seyfert 2s) and unobscured (i.e. Seyfert 1s) AGN. Following this idea, many authors developed synthesis models for the XRB (e.g. Madau, Ghisellini & Fabian 1993, 1994; Matt & Fabian 1994; Comastri et al. 1995), and nowadays this explanation is widely considered as basically correct.
To model the spectrum of the XRB it is necessary to include all the relevant ingredients, and a correct transmission spectrum is one of them because, as remarked above, Compton–thick sources are a significant fraction of all Seyfert 2s. To our knowledge, out of the many papers devoted to fitting the XRB, the transmission component has been properly included only by Madau, Ghisellini & Fabian (1994). Here we do it again, to highlight and discuss the differences with models in which only absorption is included. In Fig. 6, we show the integrated local spectrum of Seyfert 1 galaxies (dotted curve), of Seyfert 2 galaxies (lower dashed and solid curves) and of the sum of Seyfert 1 and 2 galaxies (upper dashed and solid curves). The spectrum of Seyfert 1 galaxies is described by a power law with a photon spectral index of 1.9 and an exponential cut–off with $`e`$-folding energy of 400 keV; a Compton reflection component, corresponding to an isotropically illuminated accretion disk observed at an inclination angle of 60°, is also included. According to unification models, the spectrum of Seyfert 2 galaxies is assumed to be intrinsically identical to that of Seyfert 1s, but seen through obscuring matter. The solid lines in the figure refer to a synthesis model in which the transmitted component is included, while in the dashed ones only absorption is considered. Type 2 sources are assumed to outnumber type 1 sources by a factor of 4, independently of the luminosity. The adopted distribution of column densities of the absorbing matter for the Seyfert 2s is: $`\frac{dN}{dLog(N_\mathrm{H})}\propto Log(N_\mathrm{H})`$, from 10<sup>21</sup> to 4$`\times `$10<sup>25</sup> cm<sup>-2</sup>. The fraction of Compton–thick sources is then about 1/3, in agreement with the estimate of Maiolino et al. (1998). The two total spectra differ significantly above 10 keV, the spectrum including the transmission component being about 20% higher at 30 keV.
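The mixing itself is a short sum over the column-density distribution; a sketch of the local synthesis (our discretization; `transmitted` stands for the transmission grid of Sect. 1, and the reflection hump is omitted here):

```python
import numpy as np

log_nh = np.linspace(21.0, np.log10(4.0e25), 40)
w = log_nh / log_nh.sum()          # dN/dLogN_H ~ LogN_H, normalized weights

def intrinsic(e_kev):              # Gamma = 1.9, e-folding energy 400 keV
    return e_kev**-1.9 * np.exp(-e_kev / 400.0)

def local_xrb(e_kev, transmitted):
    """transmitted(e, n_h) -> transmitted fraction at energy e through a
    column n_h; Seyfert 2s outnumber Seyfert 1s by a factor of 4."""
    sy1 = intrinsic(e_kev)
    sy2 = 4.0 * sum(wi * intrinsic(e_kev) * transmitted(e_kev, 10.0**l)
                    for wi, l in zip(w, log_nh))
    return sy1 + sy2

# With these weights, columns above ~1.5e24 cm^-2 carry roughly a third
# of the Seyfert 2 population, consistent with the fraction quoted above.
```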
The best fit spectrum to the XRB (HEAO-1 data, Marshall et al. 1980), obtained after evolving the local spectrum of Seyfert galaxies to cosmological distances, following Boyle et al. (1994), is shown in Fig. 7. Different descriptions of the pure luminosity evolution scenario do not change significantly the results. The study of both the spectral shape of the XRB and the source counts in different scenarios, including e.g. density evolution, is beyond the scope of the present work, and is deferred to a forthcoming paper (Pompilio et al., in preparation). Apart from the highest energy part of the spectrum, where the fit is not very good (suggesting either that an exponential cut–off is not a good description of the spectrum of Seyfert galaxies, or that there is not a universal value of such a parameter, as is actually emerging from BeppoSAX observations: see e.g. Matt 1998), and the lowest part (where contributions from other classes of sources, like Clusters of Galaxies, may be relevant), the agreement between the data and the model is acceptable. The soft X–ray source counts are also well reproduced, while the hard (5–10 keV) counts (Fiore et al., in preparation; Comastri et al. 1999) are somewhat underestimated, but still marginally consistent with the data. The complete model and the detailed analysis will be discussed elsewhere (Pompilio 1999; Pompilio et al., in preparation).
# Non-explosive hydrogen and helium burnings: Abundance predictions from the NACRE reaction rate compilation

(An electronic version of this paper, with colour figures, is available at http://astro.ulb.ac.be.)
## 1 Introduction
The evolution of a star is made of a succession of “controlled” thermonuclear burning stages interspersed with phases of gravitational contraction. The latter stages are responsible for a temperature increase, the former ones producing nuclear energy and composition changes.
As is well known, hydrogen and helium burning in the central regions or in peripheral layers of a star are key nuclear episodes, and leave clear observables, especially in the Hertzsprung-Russell diagram, or in the stellar surface composition. These photospheric abundance signatures may result from so-called “dredge-up” phases, which are expected to transport the H- or He-burning ashes from the deep production zones to the more external layers. This type of surface contamination is encountered especially in low- and intermediate-mass stars on their first or asymptotic branches, where two to three dredge-up episodes have been identified by stellar evolution calculations. Nuclear burning ashes may also find their way to the surface of non-exploding stars by rotationally-induced mixing, which has started to be investigated in some detail (Heger 1998), or by steady stellar winds, which have their most spectacular effects in massive stars of the Wolf-Rayet type.
The confrontation between the wealth of observed elemental or isotopic compositions and calculated abundances can provide essential clues on the stellar structure from the main sequence to the red giant phase, and much has indeed been written on this subject. Of course, the information one can extract from such a confrontation is most astrophysically useful if the discussion is freed from nuclear physics uncertainties to the largest possible extent.
Thanks to the impressive skill and dedication of some nuclear physicists, remarkable progress has been made over the years in our knowledge of reaction rates at energies which are as close as possible to those of astrophysical relevance (e.g. Rolfs & Rodney 1988). Despite these efforts, important uncertainties remain. This relates directly to the enormous problems the experiments have to face in this field, especially because the energies of astrophysical interest for charged-particle-induced reactions are much lower than the Coulomb barrier energies. As a consequence, the corresponding cross sections can dive into the nanobarn to picobarn abyss. In general, it has not been possible yet to measure directly such small cross sections. Theoreticians are thus requested to supply reliable extrapolations from the lowest energies attained experimentally to those of most direct astrophysical relevance.
Recently, a new major challenge has been taken up by a consortium of European laboratories with the build-up of well documented and evaluated sets of experimental data or theoretical predictions for a large number of astrophysically interesting nuclear reactions (Angulo et al. 1999). This compilation of reaction rates, referred to as NACRE (Nuclear Astrophysics Compilation of REaction rates; see Sect. 2 for some details), comprises in particular the rates for all the charged-particle-induced nuclear reactions involved in the “cold” pp-, CNO, NeNa and MgAl chains, the first two burning modes being essential energy producers, all four being important nucleosynthesis agents. It also includes the main reactions involved in non-explosive helium burning.
The aim of this paper is to calculate with the help of the NACRE data the abundances of the different isotopes of the elements from C to Al involved in the non-explosive H (Sects. 3 - 5) and He (Sect. 6) burnings, special emphasis being put on the impact of the reported remaining rate uncertainties on the derived abundances. The yields from the considered burning modes are calculated by combining in all possible ways the lower and upper limits of all the relevant reaction rates. One “reference” abundance calculation is also performed with all the recommended NACRE rates. Note that the pp-chains are not considered here. A solar neutrino analysis based on preliminary NACRE data for the pp reactions can be found in Castellani et al. (1997).
Our extensive abundance uncertainty analysis is performed in the framework of a parametric model assuming that H burning takes place at a constant density $`\rho =100`$ g cm<sup>-3</sup> and at constant temperatures between $`T_6=10`$ and 80 ($`T_n`$ is the temperature in units of $`10^n\mathrm{K}`$). The corresponding typical values adopted for He burning are $`\rho =10^4`$ g cm<sup>-3</sup> and $`T_8=1.5`$ and 3.5. These ranges encompass typical burning conditions in a large variety of realistic stellar models. For the study of H-burning, initial abundances are assumed to be solar (Anders & Grevesse 1989). For He-burning, we adopt the abundances resulting from H burning at $`T_6=60`$ and $`\rho =100`$ g cm<sup>-3</sup> calculated with the use of the NACRE recommended rates. The H- and He-burning nucleosynthesis is followed until the H and He mass fractions drop to $`10^5`$.
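The parametric model boils down to integrating a reaction network at fixed $`(T,\rho )`$ until the fuel mass fraction drops to $`10^5`$, and repeating the integration for every combination of the rate limits. A minimal sketch with a two-reaction toy cycle standing in for the full CNO/NeNa/MgAl networks (the cycle, the rate values and the initial abundances below are purely illustrative, not NACRE numbers):

```python
import numpy as np
from itertools import product
from scipy.integrate import solve_ivp

def burn(lam1, lam2, rho=100.0):
    """Constant-(T, rho) H burning of a toy cycle a(p,g)b, b(p,alpha)a;
    lam1, lam2 are rates N_A<sigma v> in cm^3 mol^-1 s^-1."""
    def rhs(t, y):
        yp, ya, yb = y
        f1 = rho * lam1 * yp * ya          # a(p,g)b
        f2 = rho * lam2 * yp * yb          # b(p,alpha)a closes the cycle
        return [-f1 - f2, f2 - f1, f1 - f2]
    stop = lambda t, y: y[0] - 1.0e-5      # follow burning until X_H = 1e-5
    stop.terminal = True
    sol = solve_ivp(rhs, (0.0, 1.0e15), [0.70, 1.0e-3, 0.0],
                    events=stop, method="LSODA", rtol=1.0e-8)
    return sol.y[1:, -1]                   # (Y_a, Y_b) at H exhaustion

# Abundance spread at H exhaustion: rerun with every combination of the
# lower and upper rate limits, as done here for the full networks.
spans = [burn(l1, l2) for l1, l2 in product([3e-6, 3e-5], [1e-6, 1e-5])]
```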
In spite of its highly simplistic character, this analysis provides results that are of reasonable qualitative value, as testified by their confrontation with detailed stellar model predictions. Most significantly, these parametric calculations have the virtue of identifying the rate uncertainties whose impact may be of significance on abundance predictions at temperatures of stellar relevance. They thus serve as a guide in the selection of the nuclear uncertainties that have to be duly analyzed in detailed model stars, particularly in order to perform meaningful confrontations between abundance observations and predictions. They are also hoped to help nuclear astrophysicists pinpoint the rate uncertainties that have to be reduced most urgently.
## 2 The NACRE compilation in a nutshell
Detailed information about the procedure adopted to evaluate each of the NACRE reaction rates and about the derived values can be found in Angulo et al. (1999), or in electronic form at http://astro.ulb.ac.be, which also offers the possibility of generating interactively tables of reaction rates for networks and temperature grids selected by the user (this electronic address also provides many other nuclear data of nuclear astrophysics interest). It is clearly impossible to go here into the details of the NACRE procedure. Let us just emphasize some of its specificities:
(1) For each reaction, the non-resonant and broad-resonance contributions to its rate are evaluated numerically in order to avoid the approximations which are classically made (see Fowler et al. 1975 for details) in order to allow analytical rate evaluations;
(2) Narrow or subthreshold resonances are in general approximated by Breit-Wigner shapes, and their contributions to the reaction rates are approximated in the usual analytical way (e.g. Fowler et al. 1975; a one-line sketch of this standard formula is given after this list). However, in some cases, the resonance data are abundant enough to allow a numerical calculation avoiding these approximations;
(3) For each reaction, NACRE provides a recommended “adopted” rate, along with realistic lower and upper limits. The adopted values of, and the limits on the resonance contributions are derived from weighted averages duly taking into account the uncertainties on individual measurements, as well as the different measurements that are sometimes available for a given resonance \[see Eq. (15) of Angulo et al. 1999\]. For non-resonant contributions, $`\chi ^2`$-fits to available data provide the recommended values along with the lower and upper limits, as the experimental uncertainties on one set of data and the differences between various sets, if available, are taken into account in the $`\chi ^2`$-procedure. It is worth stressing at this point that enough information is provided by NACRE to help the user tailor his own preferred rates if he wants.
The procedure just sketched in (1) - (3) is the selected standard methodology, and has the advantage of being easily reproducible and of avoiding any subjective renormalization of different experimental data sets. Quite clearly, however, the large variety of different situations makes some slight modifications of the standard procedure unavoidable in some cases. These specific adjustments are clearly identified and discussed in Angulo et al. (1999);
(4) A theoretical (Hauser-Feshbach) evaluation of the contribution to each rate of the thermally populated excited states of the target is also provided. It has to be noted that the widely used compilation of Caughlan & Fowler (1988, hereafter referred to as CF88) provides uncertainties for some rates only, while the contribution of excited target states is derived in most cases from a rough (referred to as “equal strength”) approximation;
(5) It has to be emphasized that the major goal of the NACRE compilation is to provide numerical reaction rates in tabular form (see http://astro.ulb.ac.be). This philosophy differs markedly from the one promoted by the previous widely used compilations (CF88, and references therein), and is expected to lead to more accurate rate data. However, for completeness, NACRE also provides analytical approximations (Angulo et al. 1999) that differ in several respects from the classically used expressions (CF88, and references therein).
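As referred to in point (2), the analytical contribution of a single narrow resonance to a rate takes a standard closed form; a sketch (the formula is the classic one of, e.g., Fowler et al. 1975; the input numbers below are placeholders):

```python
import numpy as np

def narrow_resonance_rate(t9, mu_amu, e_r_mev, omega_gamma_mev):
    """N_A<sigma v> [cm^3 mol^-1 s^-1] contributed by one narrow
    resonance of energy E_r and strength omega*gamma (both in MeV),
    for a reduced mass mu in amu:
        1.5399e11 (mu T9)^(-3/2) wg exp(-11.605 E_r / T9)."""
    return (1.5399e11 * (mu_amu * t9)**-1.5 * omega_gamma_mev
            * np.exp(-11.605 * e_r_mev / t9))

# Total rate = numerically integrated non-resonant part + the sum of
# such terms, each with its adopted strength and lower/upper limits.
print(narrow_resonance_rate(0.05, 0.93, 0.2, 1.0e-9))  # placeholder inputs
```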
## 3 The CNO Cycles
The reactions of the CNO cycles are displayed in Fig. 1. As is well known, their net result is the production of $`{}_{}{}^{4}\mathrm{He}`$ from H, and the transformation of the C, N and O isotopes mostly into $`{}_{}{}^{14}\mathrm{N}`$ as a result of the relative slowness of $`{}_{}{}^{14}\mathrm{N}(\mathrm{p},\gamma ){}_{}{}^{15}\mathrm{O}`$ with respect to the other involved reactions. This $`{}_{}{}^{14}\mathrm{N}`$ build-up is clearly seen in Fig. 2.
As shown in Fig. 1, three nuclides are important branching points for the CNO cycles. The first one is $`{}_{}{}^{15}\mathrm{N}`$. At $`T_6=25`$, $`{}_{}{}^{15}\mathrm{N}(\mathrm{p},\alpha ){}_{}{}^{12}\mathrm{C}`$ is 1000 times faster than $`{}_{}{}^{15}\mathrm{N}(\mathrm{p},\gamma ){}_{}{}^{16}\mathrm{O}`$, and the CN cycle reaches equilibrium already before $`10^{-3}`$ of the initial protons have been burned. The second branching is at $`{}_{}{}^{17}\mathrm{O}`$. The competing reactions $`{}_{}{}^{17}\mathrm{O}(\mathrm{p},\alpha ){}_{}{}^{14}\mathrm{N}`$ and $`{}_{}{}^{17}\mathrm{O}(\mathrm{p},\gamma ){}_{}{}^{18}\mathrm{F}`$ determine the relative importance of cycle II over cycle III (Fig. 1). The uncertainties on these rates have been strongly reduced in the last years. The rate of $`{}_{}{}^{17}\mathrm{O}(\mathrm{p},\alpha ){}_{}{}^{14}\mathrm{N}`$ recommended by NACRE is larger than the CF88 one by factors of 13 and 90 at $`T_6=20`$ and 80, respectively. Smaller deviations, though reaching a factor of 9 at $`T_6=50`$, are found for the $`{}_{}{}^{17}\mathrm{O}(\mathrm{p},\gamma ){}_{}{}^{18}\mathrm{F}`$ rate.
The oxygen isotopic composition is shown in Fig. 3. As is well known, it depends drastically on the burning temperature. In particular, $`{}_{}{}^{17}\mathrm{O}`$ is produced at $`T_6\lesssim 25`$, but is destroyed at higher temperatures. This has the important consequence that the amount of $`{}_{}{}^{17}\mathrm{O}`$ emerging from the CNO cycles and eventually transported to the stellar surface is a steep function of the stellar mass. This conclusion could get some support from the observation of a large spread in the oxygen isotopic ratios at the surface of red giant stars of somewhat different masses (Dearborn 1992, and references therein). Fig. 3 also demonstrates that the oxygen isotopic composition cannot be fully reliably predicted yet at a given temperature as a result of the cumulative uncertainties associated with the different production and destruction rates.
Finally, the leakage from cycle III is determined by the ratio of the $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\gamma ){}_{}{}^{19}\mathrm{F}`$ and $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\alpha ){}_{}{}^{15}\mathrm{N}`$ rates (Fig. 1). At the temperatures of relevance, $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\gamma ){}_{}{}^{19}\mathrm{F}`$ is roughly 1000 times slower than $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\alpha ){}_{}{}^{15}\mathrm{N}`$, in relatively good agreement with CF88 (Fig. 4), undermining the path leading to the production of $`{}_{}{}^{19}\mathrm{F}`$. However, at low temperatures, large uncertainties still affect the $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\gamma ){}_{}{}^{19}\mathrm{F}`$ rate. In fact, its upper bound could be comparable to the $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\alpha ){}_{}{}^{15}\mathrm{N}`$ rate, and at the same time larger than the $`{}_{}{}^{19}\mathrm{F}(\mathrm{p},\alpha ){}_{}{}^{16}\mathrm{O}`$ rate at $`T_6\lesssim 20`$. As a result, some $`{}_{}{}^{19}\mathrm{F}`$ might be produced, in contradiction with the conclusion drawn from the adoption of the CF88 rates. Fig. 3 indeed confirms that fluorine could be overproduced (with respect to solar) by up to a factor of 100 at H exhaustion when $`T_6\lesssim 15`$. However, Fig. 3 also reveals that the maximum $`{}_{}{}^{19}\mathrm{F}`$ yields that can be attained remain very poorly predictable as a result of the rate uncertainties. In fact, some hint of a non-negligible production of fluorine by the CNO cycles might come from the observation of fluorine abundances slightly larger than solar at the surface of red giant stars considered to be in their post-first dredge-up phase (Jorissen et al. 1992; Mowlavi et al. 1996).
As far as $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\alpha ){}_{}{}^{15}\mathrm{N}`$ is concerned, let us also mention that Huss et al. (1997) have speculated that its rate could be about 1000 times larger than the one adopted by CF88 and NACRE at temperatures of about $`15\times 10^6`$ K. This proposal has been made in order to explain the N isotopic composition measured in some presolar grains. It is clearly fully incompatible with the NACRE analysis.
Finally, let us note that $`{}_{}{}^{19}\mathrm{F}(\mathrm{p},\alpha ){}_{}{}^{16}\mathrm{O}`$ is always much faster than $`{}_{}{}^{19}\mathrm{F}(\mathrm{p},\gamma ){}_{}{}^{20}\mathrm{Ne}`$. Any important leakage out of the CNO cycles to $`{}_{}{}^{20}\mathrm{Ne}`$ is thus prevented, this conclusion being independent of the remaining rate uncertainties.
## 4 The NeNa Chain
The NeNa chain is illustrated in Fig. 5, while Fig. 6 displays some relevant NACRE reaction rates, and their, sometimes quite large, uncertainties. These affect in particular the proton captures by $`{}_{}{}^{21}\mathrm{Ne}`$, $`{}_{}{}^{22}\mathrm{Ne}`$ and $`{}_{}{}^{23}\mathrm{Na}`$. In contrast, the $`{}_{}{}^{20}\mathrm{Ne}(\mathrm{p},\gamma ){}_{}{}^{21}\mathrm{Na}`$ rate may be considered as relatively well determined. Some of these rates may also deviate strongly from the CF88 proposed values.
The NACRE rates are used to compute the abundances shown in Fig. 7. A slight alteration of the initial $`{}_{}{}^{20}\mathrm{Ne}`$ abundance is visible only for $`T_6\gtrsim 50`$. However, an unnoticeable $`{}_{}{}^{20}\mathrm{Ne}`$ destruction is sufficient to lead to a significant increase of the abundance of the rare $`{}_{}{}^{21}\mathrm{Ne}`$ isotope through $`{}_{}{}^{20}\mathrm{Ne}(\mathrm{p},\gamma ){}_{}{}^{21}\mathrm{Na}(\beta ^+){}_{}{}^{21}\mathrm{Ne}`$ at $`T_6\lesssim 40`$. At higher temperatures, $`{}_{}{}^{21}\mathrm{Ne}(\mathrm{p},\gamma ){}_{}{}^{22}\mathrm{Na}(\beta ^+){}_{}{}^{22}\mathrm{Ne}`$ destroys $`{}_{}{}^{21}\mathrm{Ne}`$. As a result, the $`{}_{}{}^{21}\mathrm{Ne}`$ abundance at H exhaustion is maximum when H burns in the approximate $`30<T_6\lesssim 35`$ range. This conclusion may, however, be altered if the upper limit of the $`{}_{}{}^{21}\mathrm{Ne}(\mathrm{p},\gamma ){}_{}{}^{22}\mathrm{Na}`$ rate is adopted instead.
The $`{}_{}{}^{23}\mathrm{Na}`$ yield has raised much interest recently, following the discovery at the surface of globular cluster red giant stars of moderate sodium overabundances which correlate or anti-correlate with the amount of other elements (like C, N, O, Mg and Al) also involved in cold H burning (Denissenkov et al. 1998; Kraft et al. 1998, and references therein). This situation may be the signature of the dredge-up to the stellar surface of the ashes of the NeNa chain. The $`{}_{}{}^{23}\mathrm{Na}`$ production results from $`{}_{}{}^{22}\mathrm{Ne}(\mathrm{p},\gamma ){}_{}{}^{23}\mathrm{Na}`$, while $`{}_{}{}^{23}\mathrm{Na}(\mathrm{p},\gamma ){}_{}{}^{24}\mathrm{Mg}`$ and $`{}_{}{}^{23}\mathrm{Na}(\mathrm{p},\alpha ){}_{}{}^{20}\mathrm{Ne}`$ are responsible for its destruction, which can be substantial at $`T_6\gtrsim 60`$. Unfortunately, our knowledge of these three reaction rates remains very poor, with uncertainties that can amount to factors of about 100 to $`10^4`$ in certain temperature ranges (see Fig. 6). As indicated in Fig. 7, this situation prevents an accurate prediction of the $`{}_{}{}^{23}\mathrm{Na}`$ yields when $`T_6\gtrsim 50`$. More precisely, the spread in the $`{}_{}{}^{23}\mathrm{Na}`$ abundance at H exhaustion reaches a factor of 100 at these temperatures.
The possible cycling character of the NeNa chain is determined by the ratio of the rates of $`{}_{}{}^{23}\mathrm{Na}(\mathrm{p},\alpha ){}_{}{}^{20}\mathrm{Ne}`$ and of $`{}_{}{}^{23}\mathrm{Na}(\mathrm{p},\gamma ){}_{}{}^{24}\mathrm{Mg}`$. Fig. 6 indicates that the former reaction is predicted to be faster than the latter one at $`T_6\lesssim 50`$ only. In this case, the NeNa chain is indeed a cycle. However, at higher temperatures, an important leakage to the MgAl chain can be expected, unless future experiments confirm the lower bound of the uncertain $`{}_{}{}^{23}\mathrm{Na}(\mathrm{p},\gamma ){}_{}{}^{24}\mathrm{Mg}`$ rate.
## 5 The MgAl Chain
The MgAl chain is depicted in Fig. 5. It involves in particular $`{}_{}{}^{26}\mathrm{Al}`$. Its long-lived ($`t_{1/2}=`$ $`7.05\times 10^5`$ y) $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ ground state and its short-lived ($`t_{1/2}=6.35`$ s) $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{m}}`$ isomeric state are out of thermal equilibrium at the temperatures of relevance for the non-explosive burning of hydrogen (Coc & Porquet 1998). They have thus to be considered as separate species in abundance calculations.
The status of our present knowledge of some important reactions of the MgAl chain is depicted in Fig. 8, while the yield predictions for the species involved in this chain are presented in Fig. 9. Let us first discuss the situation resulting from the use of the NACRE adopted rates. The most abundant nuclide is $`{}_{}{}^{24}\mathrm{Mg}`$, the concentration of which remains unaffected, at least for $`T_6\lesssim 60`$. In contrast, $`{}_{}{}^{25}\mathrm{Mg}`$ is significantly transformed by proton captures into $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ at $`T_6\gtrsim 30`$. At $`T_6\gtrsim 50`$, the leakage from the NeNa cycle starts affecting the MgAl nucleosynthesis through a slight increase of the $`{}_{}{}^{24}\mathrm{Mg}`$ abundance, followed by a modest enhancement of the $`{}_{}{}^{25}\mathrm{Mg}`$, $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ and $`{}_{}{}^{27}\mathrm{Al}`$ concentrations (Fig. 9). At temperatures $`T_6\gtrsim 70`$, the $`{}_{}{}^{24}\mathrm{Mg}`$ accumulation starts turning into a depletion by proton captures, which contributes to a further increase in the $`{}_{}{}^{25}\mathrm{Mg}`$, $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ and $`{}_{}{}^{27}\mathrm{Al}`$ abundances. This build-up cannot be significantly hampered by the destruction of these species by proton captures, as a result of their relative slowness. Among these reactions, $`{}_{}{}^{27}\mathrm{Al}(\mathrm{p},\alpha ){}_{}{}^{24}\mathrm{Mg}`$ and $`{}_{}{}^{27}\mathrm{Al}(\mathrm{p},\gamma ){}_{}{}^{28}\mathrm{Si}`$ are of special interest, as the ratio of their rates determines in particular the leakage out of the MgAl chain. The adopted NACRE rate of the former reaction is 20 to 100 times slower than the CF88 one in the considered temperature range, and turns out to be slower than the (p,$`\gamma `$) channel for $`T_6\gtrsim 60`$ (Fig. 8), so that no cycling back is possible in these conditions.
It is noticeable that the $`{}_{}{}^{26}\mathrm{Mg}`$ abundance at H exhaustion is almost temperature independent. This trend differs from the behaviour of the concentrations of the other Mg and Al isotopes, and results from two factors. First, the adopted $`{}_{}{}^{26}\mathrm{Mg}`$ proton capture is slow enough (about ten times slower than prescribed by CF88) to prevent $`{}_{}{}^{26}\mathrm{Mg}`$ from being destroyed at the considered temperatures. Second, $`{}_{}{}^{26}\mathrm{Mg}`$ is bypassed by the nuclear flow associated with the leakage from the NeNa chain at $`T_6\gtrsim 50`$. The reaction $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}(\mathrm{p},\gamma ){}_{}{}^{27}\mathrm{Al}`$ is indeed predicted to be faster than the $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ $`\beta `$-decay in this temperature domain.
Various aspects of the above analysis may be affected by remaining rate uncertainties. In fact, the only proton captures whose rates are now put on safe grounds are $`{}_{}{}^{24}\mathrm{Mg}(\mathrm{p},\gamma ){}_{}{}^{25}\mathrm{Al}`$ (for which NACRE and CF88 are in good agreement) and $`{}_{}{}^{25}\mathrm{Mg}(\mathrm{p},\gamma ){}_{}{}^{26}\mathrm{Al}`$ (for which the NACRE adopted rate is about 5 times slower than the CF88 one at $`T_6<80`$). In spite of much recent effort, the other proton capture rates of the MgAl chain still show more or less large uncertainties in the considered temperature range, as illustrated in Fig. 8.
Due consideration of these uncertainties indicates in particular (see Fig. 9) that, for $`T_6>\mathrm{\hspace{0.17em}\hspace{0.17em}50}`$, $`{}_{}{}^{24}\mathrm{Mg}`$ could be more strongly destroyed than stated above, while $`{}_{}{}^{26}\mathrm{Mg}`$ could be substantially transformed into $`{}_{}{}^{27}\mathrm{Al}`$ if the NACRE upper limits on the $`{}_{}{}^{24}\mathrm{Mg}(\mathrm{p},\gamma ){}_{}{}^{25}\mathrm{Al}`$ and $`{}_{}{}^{26}\mathrm{Mg}(\mathrm{p},\gamma ){}_{}{}^{27}\mathrm{Al}`$ rates were selected. It is also important to note that the abundances at H exhaustion of $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ and $`{}_{}{}^{27}\mathrm{Al}`$ are not drastically affected by the uncertainties left in their proton capture rates, even if these uncertainties can be quite large (for example, the $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}(\mathrm{p},\gamma ){}_{}{}^{27}\mathrm{Si}`$ rate is uncertain by more than a factor of $`10^3`$ at $`T_6>\mathrm{\hspace{0.17em}\hspace{0.17em}50}`$). This situation results from the fact that even the highest NACRE proton capture rates are not fast enough for leading to a substantial destruction of the two Al isotopes by the time H is consumed<sup>2</sup><sup>2</sup>2Arnould et al. (1995) have reached a different conclusion due to a trivial mistake in the $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}(\mathrm{p},\gamma ){}_{}{}^{27}\mathrm{Si}`$ rate used in their calculations. In contrast, the exact conditions under which the MgAl chain is cycling cannot be reliably specified yet in view of the large uncertainties still affecting the $`{}_{}{}^{27}\mathrm{Al}(\mathrm{p},\alpha ){}_{}{}^{24}\mathrm{Mg}`$ and $`{}_{}{}^{27}\mathrm{Al}(\mathrm{p},\gamma ){}_{}{}^{28}\mathrm{Si}`$ rates.
The possibility for the MgAl chain to produce substantial amounts of $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$ is of high interest in view of the prime importance of this radionuclide in cosmochemistry and $`\gamma `$-ray line astronomy. There is now ample observational evidence that $`{}_{}{}^{26}\mathrm{Al}`$ has been injected live in the forming solar system before its in situ decay in various meteoritic inclusions (MacPherson et al. 1995). Its presence in extinct form is also demonstrated in various types of presolar grains of supposedly circumstellar origin identified in primitive meteorites (e.g. Zinner 1995). The present-day galactic plane also contains $`{}_{}{}^{26}\mathrm{Al}_{}^{\mathrm{g}}`$, as shown by the observation of a 1.8 MeV $`\gamma `$-ray line associated with its $`\beta `$-decay (e.g. Prantzos & Diehl 1996).
The MgAl chain has also a direct bearing on the puzzling Mg-Al anticorrelation observed in globular cluster red giants. Denissenkov et al. (1998) have speculated that a strong low-energy resonance could dominate the rate of $`{}_{}{}^{24}\mathrm{Mg}(\mathrm{p},\gamma ){}_{}{}^{25}\mathrm{Al}`$ at typical cold H-burning temperatures, and could help explain these observations. There is at present no support of any sort for such a resonant enhancement of this rate.
## 6 Helium burning
The NACRE compilation also provides recommended rates and their lower and upper limits for most of the $`\alpha `$-captures involved in the non-explosive burning of helium. The impact of the remaining rate uncertainties on the abundances of the elements up to Al affected by He burning is evaluated in our parametric model for two sets of conditions: (i) $`\rho =10^4`$ g cm<sup>-3</sup> and $`T_8=1.5`$, adopted to characterize the central or shell He-burning phases of intermediate-mass stars ($`M\lesssim 6`$ M), and (ii) $`\rho =10^4`$ g cm<sup>-3</sup> and $`T_8=3.5`$, which can be encountered at the end of the He burning phase in the core of massive stars or in AGB thermal pulses. The initial abundances used in these calculations are adopted as described in Sect. 1.
In contrast to the H-burning case, the abundances during He burning exhibit some sensitivity to density, as the density enters the $`3\alpha `$ reaction rate differently from the other $`\alpha `$-capture rates. Consequently, the results presented here should not be used to infer abundances resulting from He burning in specific stellar models, where the time evolution of the temperature and the density may play an important role on the final He-burning composition. It has also to be noted that the neutrons produced by $`{}_{}{}^{13}\mathrm{C}(\alpha ,\mathrm{n}){}_{}{}^{16}\mathrm{O}`$ or $`{}_{}{}^{22}\mathrm{Ne}(\alpha ,\mathrm{n}){}_{}{}^{25}\mathrm{Mg}`$ during He burning lead us to extend the nuclear network to all (about 500) s-process nuclides up to Bi.
Figs. 10 and 11 illustrate the evolution during He burning in the two situations mentioned above of the abundances of all the stable nuclides between $`{}_{}{}^{12}\mathrm{C}`$ and $`{}_{}{}^{27}\mathrm{Al}`$ (plus $`{}_{}{}^{26}\mathrm{Al}`$). At low temperature ($`T_81.5`$; Figs. 10a and 11a), the main reaction flows are
a) $`2\alpha (\alpha ,\gamma )^{12}\mathrm{C}`$, followed by $`{}_{}{}^{12}\mathrm{C}(\alpha ,\gamma ){}_{}{}^{16}\mathrm{O}`$ at the very end of He burning. The factor of 2 uncertainty in the rate of $`{}_{}{}^{12}\mathrm{C}(\alpha ,\gamma ){}_{}{}^{16}\mathrm{O}`$ (Fig. 12) is responsible for the error bars on the $`{}_{}{}^{16}\mathrm{O}`$ abundance;
b) $`{}_{}{}^{14}\mathrm{N}(\alpha ,\gamma )^{18}\mathrm{F}(\beta ^+)^{18}\mathrm{O}`$, followed by $`{}_{}{}^{18}\mathrm{O}(\alpha ,\gamma )^{22}\mathrm{Ne}`$ at the end of He burning. The resulting $`{}_{}{}^{22}\mathrm{Ne}`$ does not burn at the considered low temperature (in detailed stellar models, the temperature increases to values in excess of $`T_8=3`$ towards the end of core, or shell, He-burning; this may lead to the destruction of $`{}_{}{}^{22}\mathrm{Ne}`$ by $`(\alpha ,n)`$ reactions, with a concomitant production of neutrons, or by $`(\alpha ,\gamma )`$ reactions, as illustrated on Fig. 11b). The uncertainties of a factor of 1.5 and 5 at $`T_8=1.5`$ in the NACRE rates of $`{}_{}{}^{14}\mathrm{N}(\alpha ,\gamma )^{18}\mathrm{F}`$ and $`{}_{}{}^{18}\mathrm{O}(\alpha ,\gamma )^{22}\mathrm{Ne}`$, respectively (Fig. 12), are responsible for the wide range of predicted $`{}_{}{}^{18}\mathrm{O}`$ and $`{}_{}{}^{22}\mathrm{Ne}`$ abundances. A much larger $`{}_{}{}^{18}\mathrm{O}`$ abundance at the end of He burning would result if use were made of the CF88 rate, which is about 220 times smaller than the NACRE one (Fig. 12).
The neutron density resulting from $`{}_{}{}^{13}\mathrm{C}(\alpha ,\mathrm{n}){}_{}{}^{16}\mathrm{O}`$ is shown in Fig. 13, along with its associated uncertainty. Albeit small, this neutron irradiation is responsible for the $`{}_{}{}^{15}\mathrm{N}`$ and $`{}_{}{}^{19}\mathrm{F}`$ abundance peaks seen in Fig. 10a. They result from $`{}_{}{}^{14}\mathrm{N}(\alpha ,\gamma )^{18}\mathrm{F}(\beta ^+)^{18}\mathrm{O}(\mathrm{p},\alpha )^{15}\mathrm{N}(\alpha ,\gamma )^{19}\mathrm{F}`$, the protons originating from $`{}_{}{}^{14}\mathrm{N}(\mathrm{n},\mathrm{p})^{14}\mathrm{C}`$. Towards the end of He burning, $`{}_{}{}^{19}\mathrm{F}`$ is destroyed by $`{}_{}{}^{19}\mathrm{F}(\alpha ,\mathrm{p})^{22}\mathrm{Ne}`$. Shell He burning in AGB stars or central He burning in Wolf-Rayet stars have been proposed as a major site for the galactic production of $`{}_{}{}^{19}\mathrm{F}`$ (Goriely et al. 1989; Meynet & Arnould 1996, 1999; Mowlavi et al. 1998). For AGB stars, these predictions have been confirmed by the observation of $`{}_{}{}^{19}\mathrm{F}`$ overabundances in some of these objects (Jorissen et al. 1992). Incomplete He-burning (e.g. in Wolf-Rayet stars) may also contribute to the galactic enrichment in primary $`{}_{}{}^{15}\mathrm{N}`$, as required by the observations of this nuclide in the interstellar medium (Güsten & Ungerechts 1985).
The large $`{}_{}{}^{26}\mathrm{Al}`$ abundance seen on Fig. 11a results from the particular choice of initial conditions (see Sect. 1), since $`{}_{}{}^{26}\mathrm{Al}`$ is not produced in the conditions prevailing during He-burning. Its rapid drop close to the end of He burning results from the combined effect of $`\beta `$-decay and $`{}_{}{}^{26}\mathrm{Al}(\mathrm{n},\mathrm{p})^{26}\mathrm{Mg}`$ making use of the few neutrons liberated by $`{}_{}{}^{22}\mathrm{Ne}(\alpha ,\mathrm{n})^{25}\mathrm{Mg}`$.
At higher temperatures (Figs. 10b and 11b), the He-burning nucleosynthesis of the elements up to about Al is essentially the same as in the low temperature case. The major differences are observed for $`{}_{}{}^{18}\mathrm{O}`$, $`{}_{}{}^{19}\mathrm{F}`$, $`{}_{}{}^{21}\mathrm{Ne}`$, $`{}_{}{}^{22}\mathrm{Ne}`$, $`{}_{}{}^{25}\mathrm{Mg}`$, $`{}_{}{}^{26}\mathrm{Mg}`$ and $`{}_{}{}^{26}\mathrm{Al}`$, and are mainly due to a larger neutron production by $`{}_{}{}^{13}\mathrm{C}(\alpha ,\mathrm{n}){}_{}{}^{16}\mathrm{O}`$, $`{}_{}{}^{18}\mathrm{O}(\alpha ,\mathrm{n}){}_{}{}^{21}\mathrm{Ne}`$ and $`{}_{}{}^{22}\mathrm{Ne}(\alpha ,\mathrm{n}){}_{}{}^{25}\mathrm{Mg}`$. Note that $`{}_{}{}^{18}\mathrm{O}(\alpha ,\mathrm{n}){}_{}{}^{21}\mathrm{Ne}`$ is about 150 times slower than $`{}_{}{}^{18}\mathrm{O}(\alpha ,\gamma )^{22}\mathrm{Ne}`$ in these conditions, but is fast enough to keep the neutron density above $`N_\mathrm{n}=10^9\mathrm{cm}^{-3}`$ (Fig. 13). These neutrons allow protons to be produced by the reactions $`{}_{}{}^{14}\mathrm{N}(\mathrm{n},\mathrm{p})^{14}\mathrm{C}`$ and $`{}_{}{}^{18}\mathrm{F}(\mathrm{n},\mathrm{p})^{18}\mathrm{O}`$. Additional protons come from $`{}_{}{}^{18}\mathrm{F}(\alpha ,\mathrm{p})^{21}\mathrm{Ne}`$. As a result, $`{}_{}{}^{15}\mathrm{N}`$ is produced via $`{}_{}{}^{18}\mathrm{F}(\mathrm{n},\alpha )^{15}\mathrm{N}`$, $`{}_{}{}^{18}\mathrm{O}(\mathrm{p},\alpha )^{15}\mathrm{N}`$,
$`{}_{}{}^{14}\mathrm{N}(\mathrm{p},\gamma )^{15}\mathrm{O}(\beta ^+)^{15}\mathrm{N}`$ and $`{}_{}{}^{18}\mathrm{F}(\mathrm{p},\alpha )^{15}\mathrm{O}(\beta ^+)^{15}\mathrm{N}`$. The production of $`{}_{}{}^{19}\mathrm{F}`$ follows from $`{}_{}{}^{15}\mathrm{N}(\alpha ,\gamma )^{19}\mathrm{F}`$. Since most of the involved reactions have better known rates at $`T_8=3.5`$ than at $`T_8=1.5`$, the corresponding error bars on the abundances are smaller at higher temperature. Neutrons are also responsible for the destruction of any $`{}_{}{}^{26}\mathrm{Al}`$ that may survive the former H-burning episode.
The operation of $`{}_{}{}^{22}\mathrm{Ne}(\alpha ,\mathrm{n})^{25}`$Mg at the end of He burning leads to a non-negligible neutron irradiation which triggers a weak s-process leading to the overproduction of the $`70<A\lesssim 90`$ s-nuclei. Unfortunately, the rate of $`{}_{}{}^{22}\mathrm{Ne}(\alpha ,\mathrm{n})^{25}`$Mg remains quite uncertain (Fig. 12), even at temperatures as high as $`T_8=3.5`$ (in this case by a factor of 25). The resulting uncertainty on the neutron density amounts to a factor of 10 (Fig. 13), while the total neutron exposure spans the range 0.1 – 0.3 mbarn<sup>-1</sup>. Finally, the $`\alpha `$-captures by the Ne isotopes are fast enough at temperatures $`T_8>3`$ to alter the Mg isotopic composition. This may provide a direct observational signature of the operation of the $`{}_{}{}^{22}\mathrm{Ne}(\alpha ,\mathrm{n})^{25}`$Mg neutron source in stars (e.g. Malaney & Lambert 1988). Large uncertainties remain, however, in these reaction rates at He-burning temperatures, except for the relatively well-determined $`{}_{}{}^{20}\mathrm{Ne}(\alpha ,\gamma ){}_{}{}^{24}\mathrm{Mg}`$ rate.
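For orientation, the neutron exposure quoted here is the time integral of $`N_\mathrm{n}v_T`$, with $`v_T`$ the thermal neutron velocity; a minimal sketch (the constant-$`N_\mathrm{n}`$ example is ours):

```python
import numpy as np

def neutron_exposure(t_s, n_n_cm3, t9):
    """tau = integral of n_n * v_T dt, in mbarn^-1 (1 mbarn = 1e-27 cm^2),
    with v_T = sqrt(2 kT / m_n) the thermal neutron velocity."""
    v_t = 2.998e10 * np.sqrt(2.0 * 0.08617 * t9 / 939.565)   # cm s^-1
    return np.trapz(n_n_cm3 * v_t, t_s) * 1.0e-27

# e.g. a constant N_n = 1e9 cm^-3 at T_8 = 3.5 sustained for ~25 yr:
t = np.linspace(0.0, 25.0 * 3.156e7, 1000)
print(neutron_exposure(t, np.full_like(t, 1.0e9), 0.35))  # ~0.19 mbarn^-1
```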
## 7 Conclusions
As an aid to the confrontation between spectroscopic observations and theoretical expectations, the nucleosynthesis associated with the cold CNO, NeNa and MgAl modes of H burning, as well as with He burning, is studied with the help of the recent NACRE compilation of nuclear reaction rates. Special attention is paid to the impact on the derived abundances of the carefully evaluated uncertainties that still affect the rates of many reactions. In order to isolate this nuclear effect in an unambiguous way, a very simple constant temperature and density model is adopted.
It is shown that large spreads in the abundance predictions for several nuclides may result not only from a change in temperature, but also from nuclear physics uncertainties. This additional intricacy has to be kept in mind when trying to interpret the observations and when attempting to derive constraints on stellar models from these data.
###### Acknowledgements.
This work has been supported in part by the European Commission under the Human Capital and Mobility network contract ERBCHRXCT930339 and the PECO-NIS contract ERBCIPDCT940629.
# Gravitating monopoles and black holes in Einstein-Born-Infeld-Higgs model
## 1 Introduction
Some time ago monopoles in Einstein-Yang-Mills-Higgs(EYMH) model , for $`SU(2)`$ gauge group with Higgs field in adjoint representation, were studied as a generalization of the ’t Hooft-Ployakov monopole to see the effect of gravity on it. In particular, it was found that solutions exist up to some critical value of a dimensionless parameter $`\alpha `$, characterising the strength of the gravitational interaction, above which there is no regular solution. The existance of these solutions were also proved analytically for the case of infinite Higgs mass. Also, non Abelian magnetically charged black hole solutions were shown to exist in this model for both finite as well as infinite value of the coupling constant for Higgs field. The Abelian black holes exists for $`r_h\alpha `$ and non Abelian black holes exist in a limited region of the $`(\alpha ,r_h)`$ plane.
Recently Born-Infeld theory has received wide publicity, especially in the context of string theory. Bogomol’nyi-Prasad-Sommerfield (BPS) saturated solutions were obtained in Abelian Higgs model as well as in $`O(3)`$ sigma model in $`2+1`$ dimensions in presence of Born-Infeld term. Different models for domain wall, vortex and monopole solutions, containing the Born-Infeld Lagrangian were constructed in such a way that the self-dual equations are identical with the corresponding Yang-Mills-Higgs model. Recently non self-dual monopole solutions were found numerically in non Abelian Born-Infeld-Higgs theory.
In this paper we consider the Einstein-Born-Infeld-Higgs(EBIH) model and study the monopole and black hole solutions. The solutions are qualitatively similar to those of EYMH model. The black hole configurations have nonzero non Abelian field strength and hence they are called non Abelian black holes. In Sec. II we consider the model and find the equations of motion for static spherically symmetric fields. In Sec III we find the asymptotic behaviours and discuss the numerical results. Finally we conclude the results in Sec. IV.
## 2 The Model
We consider the following Einstein-Born-Infeld-Higgs action for $`SU(2)`$ fields with the Higgs field in the adjoint representation
$`S={\displaystyle d^4x\sqrt{g}\left[L_G+L_{BI}+L_H\right]}`$ (1)
with
$`L_G`$ $`=`$ $`{\displaystyle \frac{1}{16\pi G}},`$
$`L_H`$ $`=`$ $`{\displaystyle \frac{1}{2}}D_\mu \varphi ^aD^\mu \varphi ^a{\displaystyle \frac{e^2g^2}{4}}\left(\varphi ^a\varphi ^av^2\right)^2`$
and the non Abelian Born-Infeld Lagrangian,
$`L_{BI}=\beta ^2Str\left(1\sqrt{1+{\displaystyle \frac{1}{2\beta ^2}}F_{\mu \nu }F^{\mu \nu }{\displaystyle \frac{1}{8\beta ^4}}\left(F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }\right)^2}\right)`$
where
$`D_\mu \varphi ^a=_\mu \varphi ^a+eϵ^{abc}A_\mu ^b\varphi ^c,`$
$`F_{\mu \nu }=F_{\mu \nu }^at^a=\left(_\mu A_\nu ^a_\nu A_\mu ^a+eϵ^{abc}A_\mu ^bA_\nu ^c\right)t^a`$
and the symmetric trace is defined as
$`Str(t_1,t_2\mathrm{},t_n)={\displaystyle \frac{1}{n!}}{\displaystyle tr\left(t_{i_1}t_{i_2}\mathrm{}t_{i_n}\right)}.`$
Here the sum is over all permutations on the product of the $`n`$ generators $`t_i`$. Here we are interested in purely magnetic configurations, hence we have $`F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }=0`$. Expanding the square root in powers of $`\frac{1}{\beta ^2}`$ and keeping up to order $`\frac{1}{\beta ^2}`$ we have the Born-Infeld Lagrangian
$`L_{BI}={\displaystyle \frac{1}{4}}F_{\mu \nu }^aF^{a\mu \nu }+{\displaystyle \frac{1}{96\beta ^2}}\left[\left(F_{\mu \nu }^aF^{a\mu \nu }\right)^2+2F_{\mu \nu }^aF_{\rho \sigma }^aF^{b\mu \nu }F^{b\rho \sigma }\right]+O({\displaystyle \frac{1}{\beta ^4}}).`$
For static spherical symmetric solutions, the metric can be parametrized as
$`ds^2=e^{2\nu (R)}dt^2+e^{2\lambda (R)}dR^2+r^2(R)(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2)`$ (2)
and we consider the following ansatz for the gauge and scalar fields
$`A_t^a(R)=0=A_R^a,A_\theta ^a=e_\phi ^a{\displaystyle \frac{W(R)1}{e}},A_\phi ^a=e_\theta ^a{\displaystyle \frac{W(R)1}{e}}\mathrm{sin}\theta ,`$ (3)
and
$`\varphi ^a=e_R^avH(R).`$ (4)
Putting the above ansatz in Eq.1, defining $`\alpha ^2=4\pi Gv^2`$ and rescaling $`RR/ev,\beta \beta ev^2`$ and $`r(R)r(R)/ev`$ we get the following expression for the Lagrangian
$`{\displaystyle 𝑑Re^{\nu +\lambda }\left[\frac{1}{2}\left(1+e^{2\lambda }\left((r^{})^2+\nu ^{}(r^2)^{}\right)\right)\alpha ^2\left(e^{2\lambda }V_1e^{4\lambda }V_2+V_3\right)\right]},`$ (5)
where
$`V_1=(W^{})^2+{\displaystyle \frac{1}{2}}r^2(H^{})^2(W^{})^2{\displaystyle \frac{(W^21)^2}{6\beta ^2r^4}},`$ (6)
$`V_2={\displaystyle \frac{(W^{})^4}{3\beta ^2r^2}}`$ (7)
and
$`V_3={\displaystyle \frac{(W^21)^2}{2r^2}}+W^2H^2+{\displaystyle \frac{g^2r^2}{4}}(H^21)^2{\displaystyle \frac{(W^21)^4}{8\beta ^2r^6}}.`$ (8)
Here the prime denotes differentiation with respect to $`R`$. The dimensionless parameter $`\alpha `$ can be expressed as the mass ratio
$`\alpha =\sqrt{4\pi }{\displaystyle \frac{M_W}{eM_{Pl}}}`$ (9)
with the gauge field mass $`M_W=ev`$ and the Planck mass $`M_{Pl}=1/\sqrt{G}`$ . Note that the Higgs mass $`M_H=\sqrt{2}gev`$. In the limit of $`\beta \mathrm{}`$ the above action reduces to that of the Einstein-Yang-Mills-Higgs model. For the case of $`\alpha =0`$ we must have $`\nu (R)=0=\lambda (R)`$ which corresponds to the flat space Born-Infeld-Higgs theory. We now consider the gauge $`r(R)=R`$, corresponding to the Schwarzschild-like coordinates and rename $`R=r`$. We define $`A=e^{\nu +\lambda }`$ and $`N=e^{2\lambda }`$. Varying the matter field Lagrangian with respect to the metric we find the energy-momentum tensor. Integrating the $`tt`$ component of the energy-momentum we get the mass of the monopole equal to $`M/evG`$ where
$`M=\alpha ^2{\displaystyle _0^{\mathrm{}}}𝑑r\left(NV_1N^2V_2+V_3\right)`$ (10)
Following ’t Hooft the electromagnetic $`U(1)`$ field strength $`_{\mu \nu }`$ can be defined as
$`_{\mu \nu }={\displaystyle \frac{\varphi ^aF_{\mu \nu }^a}{\varphi }}{\displaystyle \frac{1}{e\varphi ^3}}ϵ^{abc}\varphi ^aD_\mu \varphi ^bD_\nu \varphi ^c.`$
Then using the ansatz(3) the magnetic field
$`B^i={\displaystyle \frac{1}{2}}ϵ^{ijk}_{jk}`$
is equal to $`e_r^i/er^2`$ with a total flux $`4\pi /e`$ and unit magnetic charge.
The $`tt`$ and $`rr`$ components of Einstein’s equations are
$`{\displaystyle \frac{1}{2}}\left(1(rN)^{}\right)=\alpha ^2\left(NV_1N^2V_2+V_3\right)`$ (11)
$`{\displaystyle \frac{A^{}}{A}}={\displaystyle \frac{2\alpha ^2}{r}}\left(V_12NV_2\right).`$ (12)
The equations for the matter fields are
$`\left(ANV_4\right)^{}=AW\left({\displaystyle \frac{2}{r^2}}(W^21)+2H^2{\displaystyle \frac{(W^21)^3}{\beta ^2r^6}}{\displaystyle \frac{2N(W^{})^2}{3\beta ^2r^4}}(W^21)\right)`$ (13)
$`(ANr^2H^{})^{}=AH\left(2W^2+g^2r^2(H^21)\right)`$ (14)
with
$`V_4=2W^{}{\displaystyle \frac{W^{}}{3\beta ^2r^4}}(W^21)^2{\displaystyle \frac{4N}{3\beta ^2r^2}}(W^{})^3`$ (15)
It is easy to see that $`A`$ can be elliminated from the matter field equations using Eq.(12). Hence we have to solve three differential equations Eqs. (11),(13) and (14) for the three fields $`N,W`$ and $`H`$.
## 3 Solutions
### 3.1 Monopoles
For finite $`g`$, demanding the solutions to be regular and the monopole mass to be finite gives the following behaviour near the origin
$`H=ar+O(r^3),`$ (16)
$`W=1br^2+O(r^4),`$ (17)
$`N=1cr^2+O(r^4),`$ (18)
where $`a`$ and $`b`$ are free parameters and $`c`$ is given by
$`c=\alpha ^2\left(a^2+4b^2+{\displaystyle \frac{g^2}{6}}{\displaystyle \frac{20b^4}{3\beta ^2}}\right).`$
In general, with these initial conditions $`N`$ can be zero at some finite $`r`$ where the solutions become singular. In order to avoid this singularity we have to adjust the parameters $`a`$ and $`b`$ suitably.
For $`r\mathrm{}`$ we require the solutions to be asymptotically flat. Hence we impose
$`N=1{\displaystyle \frac{2M}{r}}`$ (19)
Then for finite mass configuration we have the following expressions for the gauge and the Higgs fields
$`W=Cr^Me^r\left(1+O({\displaystyle \frac{1}{r}})\right)`$ (20)
$`H=\{\begin{array}{cc}1Br^{\sqrt{2}gM1}e^{\sqrt{2}gr},\hfill & for0<g\sqrt{2}\hfill \\ 1\frac{C^2}{g^22}r^{2M2}e^{2r},\hfill & forg=0andg>\sqrt{2}.\hfill \end{array}`$ (23)
Note that the fields have similar kind of asymptotic behaviour in the EYMH model. We have solved the equations of motion numerically with the boundary conditions given by Eqs.(16-21). For $`\alpha =0`$, $`g=0`$ and $`\beta \mathrm{}`$ they corresponds to the exact Prasad-Sommerfield solution. For nonzero $`\alpha ,g`$ and finite $`\beta `$ the qualitative behaviour of the solutions are similar to the corresponding solutions of EYMH model. For large $`r`$ these solutions converges to their asymptotic values given as in Eqs.(19-21). For a fixed value of $`g`$ and $`\beta `$ we solved the equations increasing the value of $`\alpha `$. For small value of $`\alpha `$ the solutions are very close to flat space solution. As $`\alpha `$ is increased the minimum of the metric function $`N`$ was found to be decreasing. The solutions cease to exist for $`\alpha `$ greater then certain critical value $`\alpha _{max}`$. For $`g=0`$ and $`\beta =3`$ we find $`\alpha _{max}2`$. The profile for the fields for different values of $`\alpha `$ with $`g=0`$ and $`\beta =3`$ are given in Figs.1,2 and 3. The profile for the fields for $`g=.1,\alpha =1.0`$ and $`\beta =3`$ are given in Fig. 4. We find numerically the mass $`M=0.7865`$ of the monopole for $`g=.1,\alpha =1.0`$ and $`\beta =3`$.
### 3.2 Black holes
Apart from the regular monopoles, magnetically charged black holes can also exist in this model. Black hole arises when the field $`N`$ vanishes for some finite $`r=r_h`$ . Demanding the solutions to be regular near horizon $`r_h`$ we find the following behaviour of the fields
$`N(r_h+\rho )=N_h^{}\rho +O(\rho ^2),`$ (24)
$`H(r_h+\rho )=H_h+H_h^{}\rho +O(\rho ^2),`$ (25)
$`W(r_h+\rho )=W_h+W_h^{}\rho +O(\rho ^2)`$ (26)
with
$`N_h^{}={\displaystyle \frac{1}{r_h}}\left[1\alpha ^2\left\{{\displaystyle \frac{(W_h^21)^2}{r_h^2}}+2W_h^2H_h^2+{\displaystyle \frac{g^2r_h^2}{2}}(H_h^21)^2{\displaystyle \frac{(W_h^21)^4}{4\beta ^2r_h^2}}\right\}\right]`$ (27)
$`H_h^{}={\displaystyle \frac{H_h}{N_h^{}r_h^2}}\left\{2W_h^2+g^2r_h^2(H_h^21)\right\}`$ (28)
$`W_h^{}={\displaystyle \frac{\frac{W_h}{r_h^2}(W_h^21)+W_hH_h^2\frac{W_h}{2\beta ^2r_h^6}(W_h^21)^3}{N_h^{}\left[1\frac{(W_h^21)^2}{6\beta ^2r_h^4}\right]}}.`$ (29)
Here $`r_h,W_h(W(r_h))`$ and $`H_h(W(r_h))`$ are arbitrary. For $`r\mathrm{}`$ the behaviour of the fields is same as the regular monopole solution as given by Eqs.(19-21). The black hole has unit magnetic charge with nontrivial gauge field strength. We found numerical solutions to the non Abelian black hole for different $`r_h`$. For a fixed value of $`r_h`$ we find the solutions for $`r>r_h`$ adjusting the parameters $`W_h`$ and $`H_h`$. For $`r_h`$ close to zero the solutions approach the regular monopole solutions. The profile for the fields are given in Fig.5. We found the mass of the black hole equals to be $`0.6796`$ for $`\alpha =1.0,g=0`$ and $`\beta =3`$.
## 4 Conclusion
In this paper we have investigated the effect of gravity on the Born-Infeld-Higgs monopole. We found that solutions exist only up to some critical value $`\alpha _{max}`$ of the parameter $`\alpha `$. In the limit $`\beta \mathrm{}`$ these solutions reduces to those of EYMH monopoles. We also found numerically magnetically charged non Abelian black hole solutions in this model. It would be interesting to prove analytically the existence of these solutions for finite value of the parameters. Recently dyons and dyonic black holes were found in EYMH model numerically and the existence of critical value for $`\alpha `$ was also proved analytically . It may be possible to generalize these solutions to find dyons and dyonic black holes in EBIH model. We hope to report on this issue in future.
## 5 Acknowledgements
I am indebted to Avinash Khare for many helpful discussions as well as for a careful manuscript reading.
|
no-problem/9904/astro-ph9904111.html
|
ar5iv
|
text
|
# Kilohertz Quasi-Periodic Oscillations, Magnetic Fields and Mass of Neutron Stars in Low-Mass X-Ray Binaries
## 1 Introduction
Recent observations with Rossi X-ray Timing Explorer (RXTE) have revealed kilohertz quasi-periodic oscillations (QPOs) in at least eighteen low-mass X-ray binaries (LMXBs; see Van der Klis 1998a,b for a review; also see Eric Ford’s QPO web page at http://www.astro.uva.nl/ecford/qpos.html for updated information). These kHz QPOs are characterized by their high levels of coherence (with $`\nu /\mathrm{\Delta }\nu `$ up to $`100`$), large rms amplitudes (up to $`20\%`$), and wide span of frequencies ($`5001200`$ Hz). In almost all sources, the X-ray power spectra show twin kHz peaks moving up and down in frequency together as a function of photon count rate, with the separation frequency roughly constant (The clear exceptions are Sco X-1 and 4U 1608-52, van der Klis et al. 1997, Mendez et al. 1998a; see also Psaltis et al. 1998. In Aql X-1, only a single QPO has been detected.). Moreover, in several sources, a third, nearly coherent QPO has been detected during one or more X-ray bursts, at a frequency approximately equal to the frequency difference between the twin peaks or twice that value. (An exception is 4U 1636-53, Mendez et al. 1998b.) The observations suggest a generic beat-frequency model where the QPO with the higher frequency is associated with the orbital motion at some preferred orbital radius around the neutron star, while the lower-frequency QPO results from the beat between the Kepler frequency and the neutron star spin frequency. It has been suggested that this preferred radius is the magnetosphere radius (Strohmayer et al. 1996) or the sonic radius of the disk accretion flow (Miller, Lamb and Psaltis 1998; see also Kluźniak et al. 1990). The recent observational findings (e.g., the variable frequency separations for Sco X-1 and 4U 1608-52) indicate that the “beat” is not perfect, so perhaps a boundary layer with varying angular frequencies, rather than simply the neutron star spin, is involved.
This paper is motivated by recent RXTE observation of the bright globular cluster source 4U 1820-30 (Zhang et al. 1998), which has revealed that, as a function of X-ray photon count rate, $`\dot{C}`$, the twin QPO frequencies increase roughly linearly for small photon count rates ($`\dot{C}16002500`$ cps) and become independent of $`\dot{C}`$ for larger photon count rates ($`\dot{C}25003200`$ cps). (The QPOs become unobservable for still higher count rates.) It was suggested that the $`\dot{C}\mathrm{independent}`$ maximum frequency ($`\nu _{\mathrm{max}}=1060\pm 20`$ Hz) of the upper QPO corresponds to the orbital frequency of the disk at the inner-most stable circular orbit (ISO) as predicted by general relativity. This would imply that the NS has mass of $`2.2M_{}`$ (assuming a spin frequency of $`275`$ Hz). It has also been noted earlier (Zhang et al. 1997), based on the narrow range of the maximal QPO frequencies ($`\nu _{\mathrm{max}}11001200`$ Hz) in at least six sources (which have very different X-ray luminosities), that these maximum frequencies correspond to the Kepler frequency at the ISO, which then implies that the neutron star masses are near $`2M_{}`$ (see also Kaaret et al. 1997).
The neutron star masses inferred from identifiying $`\nu _{\mathrm{max}}`$ with the Kepler frequency at the ISO would, if confirmed, be of great importance for constraining the properties of neutron stars and for understanding the recycling processes leading to the formation of millisecond pulsars. However, while it is tempting to identify $`\nu _{\mathrm{max}}`$ with the the orbital frequency at the ISO, this seemingly natural interpretation may not be true. One clue that this identification may not be correct is that the inferred neutron star masses are substantially above the masses of those neutron stars for which accurate determinations are available (Thorsett & Chakrabarty 1999) even though spin-up to $`\nu _s300\mathrm{Hz}`$ only requires accretion of a very small amount of material ($`M_{}`$; §2). The cause of the flattening of the $`\nu _{\mathrm{QPO}}\dot{C}`$ correlation, and the value of the maximum frequency, are still not understood (and the existence of a plateau in $`\nu _{\mathrm{QPO}}`$ with increasing $`\dot{M}`$ is debatable; e.g. Mendez et al. 1998c). We suggest in §3 that the steepening of the magnetic field, expected near the accreting neutron star, together with general relativistic effect, naturally leads to the flattening in the $`\nu _{\mathrm{QPO}}`$ -$`\dot{M}`$ correlation. In §4 we advocate two alternative interpretations of the maximum QPO frequency without invoking excessively large neutron star masses.
## 2 Possible Problems with Neutron Star Masses $`>2M_{}`$
The most important concern for the inferred neutron star mass of $`>2M_{}`$ is an empirical one. LMXBs have long been thought (e.g., Alpar et al. 1982) to be the progenitors of binary millisecond radio pulsars. The recent discovery of binary X-ray pulsar SAX J1808-3658 (with spin period 2.5 ms and orbital period 2 hrs; Wijnands & van der Klis 1998; Chakrabarty & Morgan 1998) appears to confirm this link. Measurements of neutron star masses in radio pulsar binaries give values in a narrow range around $`M1.4M_{}`$; the data are consistent with a neutron star mass function that is flat between $`>1.1M_{}`$ and $`<1.6M_{}`$ at 95% CL (Thorsett & Chakrabarty 1999, Finn 1994). The masses of neutron stars in X-ray binaries are also consistent with $`M1.4M_{}`$ (e.g., van Kerkwijk et al. 1995). Of particular interest is the $`5.4`$ ms recycled pulsar B1855+09 with a white dwarf companion: this system is thought to have gone through a LMXB phase (Phinney & Kulkarni 1994), and contains a neutron star with $`M=1.41\pm 0.10M_{}`$ (Thorsett & Chakrabarty 1999; earlier Kaspi et al. 1994 estimated $`M=1.50\pm _{0.14}^{0.26}M_{}`$). The 23 ms pulsar PSR B1802-07, which is in a white dwarf binary that is also thought to have gone through the LMXB phase, has an inferred mass $`M=1.26\pm _{0.67}^{0.15}M_{}`$ ($`95\%`$ confidence; Thorsett & Chakrabarty 1999).
If $`1.4M_{}`$ is the mass of the neutron star immediately after its formation in core collapse, then to make a $`2.2M_{}`$ object would require accretion of material of at least $`0.8M_{}`$. Such large accretion mass may be problematic. If we neglect torques on the star due to the interaction of its magnetic field and the accretion disk, the added mass needed to spin up the NS to a spin frequency $`\nu _s=\mathrm{\Omega }_s/(2\pi )`$ is
$$\mathrm{\Delta }M\frac{I\mathrm{\Omega }_s}{\sqrt{GMr_{\mathrm{in}}}}0.07M_{}\frac{I_{45}}{\sqrt{M_{1.4}r_6}}\left(\frac{\nu _s}{300\mathrm{Hz}}\right),$$
(1)
where $`I=10^{45}I_{45}`$ g cm<sup>2</sup> is the moment of inertia, $`M=1.4M_{1.4}M_{}`$ is the neutron star mass and and $`r_{\mathrm{in}}=10r_6`$ km is the radius of the inner edge of the accretion disk, which could correspond to either the stellar surface (radius $`R`$) or the inner-most stable orbit (ISO) in the absence of a magnetic field strong enough to influence the flow substantially (see Cook et al. 1994). When the neutron star magnetic field is strong enough, the inner radius $`r_{\mathrm{in}}`$ corresponds to the Alfvén radius. (We note that the positions of all known millisecond pulsars and binary pulsars in the $`P\dot{P}`$ diagram for radiopulsars are consistent with spinup via accretion onto neutron stars with dipolar surface fields $`>10^{89}`$ G.) For magnetic accretion, we expect
$$I\dot{\mathrm{\Omega }}_s=\dot{M}\sqrt{GMr_{\mathrm{in}}}f(\omega _s),$$
(2)
where $`\omega _s=\mathrm{\Omega }_s/\mathrm{\Omega }_K(r_{\mathrm{in}})`$, with $`\mathrm{\Omega }_K(r_{\mathrm{in}})`$ the Kepler frequency at $`r_{\mathrm{in}}`$. The dimensionless function $`f(\omega _s)`$ includes contributions to the angular momentum transport from magnetic stresses and accreting material. It is equal to zero at some equilibrium $`\omega _s`$, but the actual form of $`f(\omega _s)`$ depends on details of the magnetic field – disk interaction. Treating $`r_{\mathrm{in}}`$ as a constant, we find
$`\mathrm{\Delta }M`$ $`=`$ $`{\displaystyle \frac{I\mathrm{\Omega }_s}{\sqrt{GMr_{\mathrm{in}}}}}\left[{\displaystyle \frac{1}{\omega _s}}{\displaystyle _0^{\omega _s}}{\displaystyle \frac{d\omega _s^{}}{f(\omega _s^{})}}\right]`$ (3)
$`=`$ $`0.07M_{}{\displaystyle \frac{I_{45}}{\sqrt{M_{1.4}r_6}}}\left[\omega _s^1\mathrm{ln}\left({\displaystyle \frac{1}{1\omega _s}}\right)\right]\left({\displaystyle \frac{\nu _s}{300\mathrm{Hz}}}\right)`$
$`=`$ $`0.04M_{}I_{45}M_{1.4}^{2/3}\left[\omega _s^{4/3}\mathrm{ln}\left({\displaystyle \frac{1}{1\omega _s}}\right)\right]\left({\displaystyle \frac{\nu _s}{300\mathrm{Hz}}}\right)^{4/3},`$
where, in the last two lines, we have adopted a simple functional form $`f(\omega _s)=1\omega _s`$; generically,
$$\mathrm{\Delta }M0.07M_{}\frac{I_{45}}{\sqrt{M_{1.4}r_6}}\left(\frac{\nu _s}{300\mathrm{Hz}}\right)\psi (\omega _s),$$
(4)
where $`\psi (\omega _s)1`$ for $`\omega _s\omega _{s,c}`$, assuming that the torque tends to zero at a critical value $`\omega _s=\omega _{s,c}`$.
Large $`\mathrm{\Delta }M`$ is possible if there is a lengthy phase of accretion with nearly zero net torque (e.g. accreting $`0.8M_{}`$ at a mean accretion rate of $`\dot{M}=10^{17}\mathrm{g}\mathrm{s}^1`$ would require about 400 Myr) following a much shorter phase of spin-up to $`\omega _s\omega _{s,c}`$ (e.g. accreting $`0.05M_{}`$ at $`\dot{M}=10^{17}\mathrm{g}\mathrm{s}^1`$ would require 30 Myr). If magnetic field decays during accretion (e.g. Taam & van den Heuvel 1986, Shibazaki et al. 1989), then the spin-up phase would have been even shorter. (Spin diffusion due to alternating or stochastic epsiodes of spin-up and spin-down \[e.g. Bildsten et al. 1997, Nelson et al. 1997\] might be allowed – but constrained – in such a picture.) To accomodate masses as large as $`2M_{}`$, these LMXBs must be rather old and must have spun up rapidly at first, and then not at all for $`>90\%`$ of their lifetimes. Gravitational radiation might provide a mechanism for enforcing virtually zero net torque during the bulk of accretion (Bildsten 1998, Andersson et al. 1999). But equations (1) and (3) show that only very small $`\mathrm{\Delta }M`$ is required to achieve $`\nu _s300\mathrm{Hz}`$, irrespective of the mechanism responsible for halting spin-up at such frequencies.
## 3 Steepening Magnetic Fields Near the Accreting Neutron Star
We shall adopt, as a working hypothesis, that the upper QPO frequency is approximately equal to the Kepler frequency at a certain critical radius of the disk (Strohmayer et al. 1996; Miller, Lamb & Psaltis 1998; van der Klis 1998)<sup>1</sup><sup>1</sup>1In the model of Titarchuk et al. (1998) , the QPO corresponds of vertical oscillation of the disk boundary layer, but the oscillation frequency is equal to the local Kepler frequency. Even in the “non-beat” frequency model of Stella and Vietri (1998), the upper QPO frequency still corresponds to the orbital frequency. that is determined by the combined effects of general relativity and stellar magnetic field. For sufficiently strong magnetic fields, the disk may be truncated near this radius, where matter flows out of the disk and is funneled toward the neutron star. This critical radius then corresponds to the usual Alfvén radius (Strohmayer et al. 1996). Even if the fields are relatively weak ($`10^710^8`$ G) and the field geometry is such that matter remains in the disk, the magnetic stress can still slow down the orbital motion in the inner disk by taking away angular momentum from the flow, and accreting gas then plunges toward the star at supersonic speed – a process that is also accelerated by relativistic instability. In this case, the critical radius would correspond to the sonic point of the flow (Lai 1998). We neglect the possible role of radiative forces discussed by Miller et al. (1998). As emphasized by van der Klis (1998a), the fact that similar QPO frequencies ($`5001200`$ Hz) are observed in sources with vastly different average luminosities (from a few times $`10^3L_{\mathrm{Edd}}`$ to near $`L_{\mathrm{Edd}}`$) suggests that radiative effects cannot be the only factor that induces the correlation of the QPO frequency and the X-ray flux for an individual source.
Despite many decades of theoretical studies (e.g., Pringle & Rees 1972; Lamb et al. 1973; Ghosh & Lamb 1979; Arons 1987; Spruit & Taam 1990; Aly 1991; Sturrock 1991; Shu et al. 1994; Lovelace, Romanova & Bisnovatyi-Kogan 1995, 1999; Miller & Stone 1997), there remain considerable uncertainties on the nature of the stellar magnetic field – disk interactions. Among the issues that are understood poorly are the transport of magnetic field in the disk, the configuration of the field threading the disk, and the nature of outflows from the disk. To sidestep these complicated questions, we adopt a simple phenomenological prescription for the vertical and azimuthal components of the magnetic field on the disk,
$$B_z=B_0\left(\frac{R}{r}\right)^n,B_\varphi =\beta B_z,$$
(5)
where $`B_\varphi `$ is evaluated at the upper surface of the disk, and $`\beta `$ is the azimuthal pitch angle of the field. If we neglect the GR effect, the critical radius $`r_{\mathrm{in}}`$ is located where the magnetic field stress dominates the angular momentum transport in the disk, and it is approximately given by the condition
$$\dot{M}\frac{d\sqrt{GMr}}{dr}=r^2B_zB_\varphi ;$$
(6)
using the ansatz equation (5), and assuming Keplerian rotation (which may break down in a boundary layer near $`r_{\mathrm{in}}`$; e.g. Lovelace et al. 1995), we find
$$r_{\mathrm{in}}=R\left(2\beta \frac{B_0^2R^3}{\dot{M}\sqrt{GMR}}\right)^{2/(4n5)}.$$
(7)
and the Kepler frequency at $`r_{\mathrm{in}}`$ is
$$\nu _K(r_{\mathrm{in}})\dot{M}^{3/(4n5)}.$$
(8)
For a “dipolar” field configuration, $`n=3`$ and $`\nu _K(r_{\mathrm{in}})\dot{M}^{3/7}`$, as is well-known, but for smaller values of $`n`$, the dependence steepens; for example, $`\nu _K(r_{\mathrm{in}})\dot{M}`$ for a “monopole” field, $`n=2`$. The observed correlation $`\nu _{\mathrm{QPO}}\dot{C}`$ may require $`n<3`$, although the relationship between $`\dot{C}`$ and $`\dot{M}`$ is unclear (Mendez et al. 1998c).
Unusual field topologies are possible as the disk approaches the surface of the neutron star. Values of $`n3`$ (and even violation of power-law scaling) might occur naturally, for open field configurations, which may be prevalent because of differential rotation between the star and the disk (e.g. Lovelace et al. 1995). MHD winds driven off a disk could also result in $`n3`$ (e.g. Lovelace et al. 1995, Blandford & Payne 1982). Disks that are fully (Aly 1980, Riffert 1980) or partially (Arons 1993) diamagnetic will also have non-dipolar variation in field strength near their inner edges (see also §3.1 below). None of these possibilities requires the field to be substantially non-dipolar at the stellar surface, although for disks that penetrate close to the star (at $`r_{\mathrm{in}}R<R`$) any non-dipolar field components, if strong enough, would be significant.
A particular field configuration that could explain the observed variation of $`\nu _{\mathrm{QPO}}`$ with $`\dot{C}`$ might have $`n<3`$ at moderate values of $`r_{\mathrm{in}}`$, leading to a strong correlation between $`\nu _{\mathrm{QPO}}`$ and $`\dot{M}`$ (and hence $`\dot{C}`$). As $`\dot{M}`$ rises, the disk approaches the star, and the field topology could become more complex, resulting in additional, non-power-law radial steepening of the field strength. As is argued below, this could happen even if the field is dipolar at the surface of the star, particularly if the disk is diamagnetic. This steepening of the field results in a flattening of the $`\nu _{\mathrm{QPO}}\dot{M}`$ relation. Additional flattening results from incipient general relativistic instability at the inner edge of the disk.
### 3.1 A Specific Ansatz: Diamagnetic Disk
An illustration of the field steepening discussed above is as follows. Consider a vacuum dipole field produced by the star $`|B_z|=\mu /r^3`$ (in the equatorial plane perpendicular to the dipole axis). Imagine inserting a diamagnetic disk in the equatorial plane with inner radius $`r_{\mathrm{in}}`$. Flux conservation requires $`\pi (r_{\mathrm{in}}^2R^2)|\overline{B}_z|=2\pi \mu /R`$, which gives the mean vertical field inside between $`R`$ and $`r_{\mathrm{in}}`$:
$$|\overline{B}_z(r_{\mathrm{in}})|=\frac{2\mu }{R(r_{\mathrm{in}}^2R^2)}.$$
(9)
This field has scaling $`|\overline{B}_z|1/r^2`$ for large $`r`$, which would result in $`\nu _K(r_{\mathrm{in}})\dot{M}`$ (see eq. ), and stiffens as the disk approaches the stellar surface.
The actual field at $`r=r_{\mathrm{in}}`$ is difficult to calculate. Aly (1980) found the magnetic field of a point dipole in the presence of a thin diamagnetic disk (thickness $`Hr`$ at radius $`r`$), and demonstrated that the field strength at $`r_{\mathrm{in}}`$ is enhanced by a factor $`(r_{\mathrm{in}}/H)^{1/2}`$. (See also Riffert 1980 and Arons 1993.) However, the situation is different for a finite-sized dipole (a conducting sphere of radius $`R`$) in the presence of a diamagnetic disk. This can be seen by considering a simpler problem, where we replace the disk by a diamagnetic sphere (with radius $`r_{\mathrm{in}}`$)<sup>2</sup><sup>2</sup>2 In replacing the disk with a spherical surface, we lose the square-root divergence found by Aly (1980) for infinitesmal $`H/r`$. But note that for small fields, the disk penetrates near the star, and $`H`$ may not be very small compared with $`r_{\mathrm{in}}R`$. In assuming a point dipole, Aly (1980) (and Riffert 1980) exacerbated the divergence, and their results probably apply only when $`r_{\mathrm{in}}R`$..The magnetic field at radius $`r`$ (between $`R`$ and $`r_{\mathrm{in}}`$) is given by (in spherical coordinates with the magnetic dipole along the $`z`$-axis):
$`B_r(r,\theta )`$ $`=`$ $`\left({\displaystyle \frac{2\mu }{r_{\mathrm{in}}^3}}+{\displaystyle \frac{2\mu }{r^3}}\right){\displaystyle \frac{\mathrm{cos}\theta }{1\alpha ^3}},`$ (10)
$`B_\theta (r,\theta )`$ $`=`$ $`\left({\displaystyle \frac{2\mu }{r_{\mathrm{in}}^3}}+{\displaystyle \frac{\mu }{r^3}}\right){\displaystyle \frac{\mathrm{sin}\theta }{1\alpha ^3}},`$ (11)
where $`\alpha =R/r_{\mathrm{in}}`$. Thus the vertical magnetic field at the inner edge of the disk ($`r=r_{\mathrm{in}}`$) is
$$|B_z(r_{\mathrm{in}})|=\frac{3\mu }{r_{\mathrm{in}}^3R^3}.$$
(12)
We see that the magnetic field steepens as $`r_{\mathrm{in}}`$ approaches the stellar surface. In reality, some magnetic field will penetrate the disk because of turbulence in the disk and Rayleigh-Taylor instabilities (Kaisig, Tajima & Lovelace 1992); however, some steepening of the field may remain.
Adopting the magnetic field ansatz (9) and $`B_\varphi =\beta B_z`$, we can use (6) to calculate $`r_{\mathrm{in}}`$; this gives
$$2b^2\frac{x_c^{2.5}}{(x_c^21)^2}=1,$$
(13)
where $`x_c=r_{\mathrm{in}}/R`$ and
$$b^2=\frac{\beta B_0^2R^3}{\dot{M}\sqrt{GMR}}=0.07\left(M_{1.4}^{1/2}R_{10}^{5/2}\right)\left(\frac{\beta B_7^2}{\dot{M}_{17}}\right),$$
(14)
$`\mu =B_0^2R^3/2`$ ($`B_0=10^7B_7`$ G is the polar field strength at the neutron star surface), $`M_{1.4}=M/(1.4M_{})`$, $`R_{10}=R/(10\mathrm{km})`$, and $`\dot{M}_{17}=\dot{M}/(10^{17}\mathrm{g}\mathrm{s}^1)`$. Alternatively, if we adopt (12), we find
$$\frac{9}{2}b^2\frac{x_c^{2.5}}{(x_c^31)^2}=1.$$
(15)
Figure 1 shows the Kepler frequency at $`r_{\mathrm{in}}`$ as a function the scaled mass accretion rate, $`\dot{M}_{17}M_{1.4}^{1/2}R_{10}^{5/2}M_{17}/\beta B_7^2=0.07/b^2`$. Clearly, for small $`\dot{M}`$, $`\nu _K(r_{\mathrm{in}})`$ depends on $`\dot{M}`$ through a power-law, but the dependence weakens as $`\dot{M}`$ becomes large, in qualitative agreement with the observed $`\nu _{\mathrm{QPO}}`$-$`\dot{M}`$ correlation. General relativistic effects also flatten the $`\nu _{\mathrm{QPO}}`$-$`\dot{M}`$ relation, as we discuss next.
### 3.2 General Relativistic Effects
General relativity (GR) introduces two effects on the location of the inner edge of the disk. First, the space-time curvature modifies the vacuum dipole field. For example, in Schwarzschild metric, the locally measured magnetic field in the equatorial plane is given by
$$B^{\widehat{\theta }}=\frac{\mu }{r^3}\left[6y^3(1y^1)^{1/2}\mathrm{ln}(1y^1)+\frac{6y^2(1y^1/2)}{(1y^1)^{1/2}}\right],$$
(16)
where $`y=rc^2/(2GM)`$ (Petterson 1974; Wasserman & Shapiro 1983). The GR effect steepens the field only at small $`r`$. For $`r=6GM/c^210GM/c^2`$, we find the approximate scaling $`B^{\widehat{\theta }}r^{3ϵ}`$, with $`ϵ0.30.4`$. We shall neglect such a small correction to the dipole field given the much larger uncertainties associated with the magnetic field – disk interaction.
A more important effect of GR is that it modifies the the dynamics of the accreting gas around the neutron star. Without magnetic field, the inner edge of the disk is given by the condition $`dl_K/dr=0`$, where $`l_K`$ is the specific angular momentum of a test mass:
$$l_K=\left(\frac{GMr^2}{r3GM/c^2}\right)^{1/2}.$$
(17)
This would give the usual the ISO at $`r_{\mathrm{iso}}=6GM/c^2`$, where no viscosity is necessary to induce accretion<sup>3</sup><sup>3</sup>3When viscosity and radial pressure force is taken into account, the flow is transonic, with the sonic point located close to $`r_{\mathrm{iso}}`$.. Since magnetic fields take angular momentum out of the disk, we can determine the inner edge of the disk using an analogous expression<sup>4</sup><sup>4</sup>4Note that in the limit of perfect conductivity, it is possible to express the Maxwell stress tensor in terms of a magnetic field four-vector $`𝐁`$ that is orthogonal to the fluid velocity four-vector $`𝐔`$ (e.g. Novikov & Thorne 1973, pp. 366-367). The field components $`B_\varphi `$ and $`B_z`$ in eq. (18) and below are actually the projections of $`𝐁`$ onto a local orthonormal basis (i.e. $`B_\varphi \stackrel{}{𝐞}_{\widehat{\varphi }}𝐁`$ and $`B_z\stackrel{}{𝐞}_{\widehat{z}}𝐁`$) even though we have retained the nonrelativistic notation for these field components. No additional relativistic corrections are required with these identifications understood.
$$\dot{M}\frac{dl_K}{dr}=r^2B_zB_\varphi ,$$
(18)
(see eq. ). In Lai (1998) it was shown that this equation determines the limiting value of the sonic point of the accretion flow (although a Newtonian pseudo-potential was used in that paper). Adopting the magnetic field ansatz (9), we find
$$2b^2\frac{x_c^{2.5}}{(x_c^21)^2}=\left(1\frac{6GM}{c^2r_{\mathrm{in}}}\right)\left(1\frac{3GM}{c^2r_{\mathrm{in}}}\right)^{3/2}.$$
(19)
Similarly, using (12), we have
$$\frac{9}{2}b^2\frac{x_c^{2.5}}{(x_c^31)^2}=\left(1\frac{6GM}{c^2r_{\mathrm{in}}}\right)\left(1\frac{3GM}{c^2r_{\mathrm{in}}}\right)^{3/2}.$$
(20)
It is clear that for $`b1`$, eq. (19) or (20) reduces to the Newtonian limit (see §3.1), while for $`b=0`$ we recover the expected $`r_{\mathrm{in}}=r_{\mathrm{iso}}=6GM/c^2`$. For small $`b`$, the GR effect can modify the inner disk radius signficantly. In Fig. 1 we show the orbital frequency at $`r_{\mathrm{in}}`$ (as a function of the “effective” accretion rate) as obtained from (19) and (20). We see that the GR effect induces additional flattening in the correlation between $`\nu _K(r_{\mathrm{in}})`$ and $`\dot{M}`$ as $`r_{\mathrm{in}}`$ approaches $`r_{\mathrm{iso}}`$.
We emphasize the phenomenological nature of eqs. (18)-(20): they are not derived from a self-consistent MHD calculation, and take account of the dynamics of the disk under a prescribed magnetic field configuration. However, we believe that they indicate the combined effects of dynamically altered magnetic field and GR on the inner region of the accretion disk. By measuring the correlation between the QPO frequency and the mass accretion rate, one might be able to constrain the magnetic field structure in accreting neutron stars, and reach quantitative conclusions about the nature of the interaction of the accretion disk and magnetic field.
## 4 Where are the QPOs Produced?
Implicit in the discussion of magnetic fields and $`\nu _{\mathrm{QPO}}`$ in the preceding sections were the assumptions that the QPO arises at a radius outside the star that coincides with the inner radius of the accretion disk. Here, we examine two ways in which these assumptions might be violated, and show how the relatively small measured values of $`\nu _{\mathrm{max}}`$ might be consistent with neutron star masses near $`1.4M_{}`$.
### 4.1 Disk Termination at the Neutron Star Surface
For the model discussed in §3, the steepening magnetic field and general relativity produce the flattening in the correction between the QPO frequency $`\nu _{\mathrm{QPO}}=\nu _K(r_{\mathrm{in}})`$ and the mass accretion rate $`\dot{M}`$. But $`\nu _{\mathrm{QPO}}`$ becomes truly independent of $`\dot{M}`$ only when $`r_{\mathrm{in}}`$ approaches $`r_{\mathrm{iso}}`$ or the stellar radius $`R`$. It has been suggested (see §1) that the $`\dot{M}`$-independent QPO frequency corresponds to the Kepler frequency at $`r_{\mathrm{iso}}`$. But it is also possible that the inner disk radius reaches the stellar surface, which is outside the ISO, as $`\dot{M}`$ increases. We note that observationally it is difficult to distinguish the flattening of $`\nu _{\mathrm{QPO}}`$ and a true plateau. It is not clear that the flattening feature at $`\nu _{\mathrm{QPO}}1100`$ Hz observed in 4U 1820-30 (Zhang et al. 1998) corresponds the maximum QPO frequency, but we shall assume it does and explore the consequences.
The maximum QPO frequency, $`\nu _{\mathrm{max}}`$, is given by the orbital frequency at the larger of $`r_{\mathrm{iso}}`$ and $`R`$. To linear order in $`\nu _s`$ (the spin frequency), the ISO is located at $`r_{\mathrm{iso}}=(6GM/c^2)(10.544a)`$, and the orbital frequency at ISO is
$$\nu _K(r_{\mathrm{iso}})=\frac{1571}{M_{1.4}}(1+0.748a)\mathrm{Hz},$$
(21)
with the dimensionless spin parameter
$$a0.099\frac{R_{10}^2}{M_{1.4}}\left(\frac{\nu _s}{300\mathrm{Hz}}\right),$$
(22)
where we have adopted $`I=(2/5)\kappa MR^2`$ for the moment of inertia of the neutron star, with $`\kappa 0.815`$ (appropriate for a $`n=0.5`$ polytrope). The orbital frequency at the stellar surface can be written, to linear order in $`\nu _s`$, as
$$\nu _K(R)=2169M_{1.4}^{1/2}R_{10}^{3/2}\left[10.094a\left(\frac{M_{1.4}}{R_{10}}\right)\right]\mathrm{Hz}.$$
(23)
Note that in the above equations, $`R`$ refers to the equatorial radius of the (spinning) neutron star, which is related to the radius, $`R_0`$, of the corresponding nonrotating star by:
$$\frac{RR_0}{R_0}0.4\frac{\mathrm{\Omega }_s^2R_0^3}{GM}0.0078M_{1.4}^1R_{10}^3\left(\frac{\nu _s}{300\mathrm{Hz}}\right)^2,$$
(24)
where we have again adopted the numerical parameters appropriate for a $`n=0.5`$ polytrope (Lai et al. 1994). One may appeal to numerical calculations (e.g., Miller, Lamb & Cook 1998) for more accurate results, but the approximate expressions given above are adequate.
Figure 2 shows the contours of constant $`\nu _{\mathrm{max}}=\mathrm{min}[\nu _K(r_{\mathrm{iso}}),\nu _K(R)]`$ in the $`M`$-$`R_0`$ plane. For large $`M`$ and small $`R_0`$, the contours are specified by $`\nu _K(r_{\mathrm{iso}})`$, while for larger $`R_0`$ and small $`M`$, the contours are specified by $`\nu _K(R)`$. We see that to obtain the maximum QPO frequency of order $`11001200`$ Hz, one can either have a $`M>2M_{}`$ neutron star (with $`R_0<16`$ Km), or have a $`M1.4M_{}`$ neutron star with $`R_01415`$ km. Here we focus on the latter interpretation, in which the accretion disk terminates at the stellar surface before reaching the ISO. A boundary layer forms in which the angular velocity of the accreting gas changes from near the Keplerian value (at the outer edge of the boundary layer) to the stellar rotation rate. Depending on the thickness of the boundary layer, the inferred the NS radius may be somewhat smaller. Moreover, the peak rotation frequency may be below $`\nu _K(R)`$, which would also allow smaller values of $`R_0`$.
In addition to avoiding a large neutron star mass (see §2), the identification of $`\nu _{\mathrm{max}}`$ with the Kepler frequency near the stellar surface may allow a plausible explanation of the observed correlation between the QPO amplitude and the X-ray flux. While the mechanism of producing X-ray modulation in a kHz QPO is uncertain, in many models (e.g., Miller et al. 1998; see also Kluźniak et al. 1990) the existence of a supersonic “accretion gap” between the stellar surface and the accretion disk is crucial for generating the observed the X-ray modulation. If we interperate $`\nu _{\mathrm{max}}`$ as the Kepler frequency at the ISO, which is always outside the stellar surface, then the “accretion gap” always exists, and there is no qualitative change in the flow behavior as the inner disk approaches ISO. It is therefore difficult to explain why the QPO amplitude decreases and eventually vanishes as the X-ray flux increases. The situation is different if $`\nu _{\mathrm{max}}=\nu _K(R)`$, since the gap disappears when the mass accretion rate becomes sufficiently large. At small $`\dot{M}`$ there is a gap (induced by a combination of magnetic and GR effects) between the inner edge of the disk and the stellar surface. Since the impact velocity of the gas blob at the stellar surface is larger for a wider accretion gap, we expect the modulation amplitude to be larger for small accretion rates <sup>5</sup><sup>5</sup>5When $`\dot{M}`$ is too low (for a given $`B_0`$) so that $`r_{\mathrm{in}}`$ is far away from the stellar surface, the accreting gas can be channeled out of the disk plane by the magnetic field toward the magnetic poles. The detail of the channeling process depends on the magnetic field geometry in the disk (such as the radial pitch angle of the field line). This may quench the kHz QPOs and give rise to X-ray pulsation (as in X-ray pulsars). The pulsating X-ray transient system SAX J1808.4-3658 may just be such an example.. As $`\dot{M}`$ increases, the inner disk edge approaches the stellar surface, and we expect the QPO amplitude to decrease. The maximum QPO frequency signifies the closing of the accretion gap and the formation of a boundary layer. Since there is no supersonic flow in this case, one might expect the QPO amplitude to vanish. In addition, there may be changes in the spectral properties of the system as the gap closes.
The large neutron star radius ($`15`$ km for a $`1.4M_{}`$ star) required if $`\nu _{\mathrm{max}}=\nu _K(R)`$ is only allowed for a handful of very stiff nuclear equations of state (see Fig. 2); most recent microscopic calculations give $`R_010`$ km (e.g., Wiringa et al. 1988). Is such a large radius consistent with observations? No neutron star radii are known with the accuracy that has been achieved for numerous neutron star mass determinations, but several methods have been tried:
1. Observations of X-ray bursts have been used to determine empirical $`MR`$ relations, but these are hampered by the need for model-dependent assumptions regarding the total luminosity and its time history, anisotropy of the emission, radiated spectrum and surface composition, even when the source distance is known (e.g. van Paradijs et al. 1990, Lewin, van Paradijs & Taam 1995).
2. X-ray and optical observations of the (apparently nonrotating) isolated neutron star RX J185635-3754 (Walter, Wolk & Neuhäuser 1996, Walter & Matthews 1997), combined with limits on the source distance, $`D`$, imply a blackbody radius $`R(1+z)<14(D/130\mathrm{pc})`$ km, where $`z`$ is the surface redshift of the star.
3. Ray tracing and lightbending may be used to derive limits on $`R/M`$ for periodically modulated X-ray emission. For two isolated neutron stars (PSR B1929+10 and B0950+08; Yancopoulos, Hamilton & Helfand 1994, Wang & Halpern 1997) and one millisecond pulsar (J0437-4715; Zavlin and Pavlov 1997, Pavlov & Zavlin 1998), the results are broadly consistent with $`Rc^2/2GM2.02.5`$, but the results depend on geometry (angles between rotation and magnetic axes, and rotation axis and the line of sight) as well as on the spectrum and (energy-dependent) anisotropy of the polar cap emission. The rather large observed pulsed fractions appear to rule out two polar cap hot spots unless $`Rc^2/2GM`$ is rather large (e.g. $`4.3`$ for PSR B1929+10; Wang & Halpern 1997). Similar considerations may prove fruitful for periodically modulated flux from X-ray bursts (e.g. Miller & Lamb 1998); the pulse fractions observed so far are large, suggesting non-compact sources (e.g. Strohmayer et al. 1999, who find $`Rc^2/2GM5`$ for 4U1636-54, corresponding to an implausibly large radius of 21 km for $`M=1.4M_{}`$).
4. Burderi & King (1998) have argued that requiring the Alfvén radius to be intermediate between $`R`$ and the corotation radius, $`R_{co}=(GM/\mathrm{\Omega }_s^2)^{1/3}`$, for the 2.5 ms pulsating source SAX J1808.4-3658 (discovered by Wijnands & van der Klis 1998) implies an upper bound of $`R<13.8(M/M_{})^{1/3}`$ km, since the pulsations are detected at the same frequency for X-ray count rates spanning an order of magnitude. However, their bound depends on the model-dependent assumptions that the count rate is strictly proportional to $`\dot{M}`$ and the field strength in the disk is dipolar ($`Br^3`$). (See also Psaltis & Chakrabarty 1999.)
Taken together, the evidence neither supports nor excludes the possibility that $`R15`$ km for $`M1.4M_{}`$ (or $`Rc^2/2GM3.6`$) definitively, although most of the estimates listed above favor more compact models ($`R10`$ km for $`M1.4M_{}`$) nominally.
### 4.2 QPOs from $`r>r_{\mathrm{in}}`$?
QPOs are identified in the Fourier spectra of photon counts from X-ray sources, so it may be that most of the spectral power comes from radii outside $`r_{\mathrm{in}}`$, possibly from the disk radius at which the differential photon emission rate is maximum. For example, if the QPO arises from a radius $`r=(1+\lambda )r_{\mathrm{in}}`$, then $`\nu _{\mathrm{QPO}}=(1+\lambda )^{3/2}\nu _K(r_{\mathrm{in}})`$. As $`r_{\mathrm{in}}6GM/c^2`$, the ISO in the slow-rotation limit, $`\nu _{\mathrm{QPO}}2200\mathrm{Hz}/(M/M_{})(1+\lambda )^{3/2}`$, so observations that give $`\nu _{\mathrm{max}}1060\mathrm{Hz}`$ asymptotically may actually require $`M(1+\lambda )^{3/2}=2.1M_{}`$, or $`1+\lambda 1.3`$ if $`M1.4M_{}`$. rather than $`M2.1M_{}`$.
To obtain a simple realization of this idea, consider a Shakura-Sunyaev (1973) disk, for which the emitted flux from one face is
$$F(r)=\sigma _{\mathrm{SB}}T_e^4(r)=\frac{3GM\dot{M}f(r)}{8\pi r^3};$$
(25)
in the Newtonian limit (which we shall employ here for giving a simplified illustration). The function $`f(r)=1\beta \sqrt{r_{\mathrm{in}}/r}`$, where $`\beta 1`$ parametrizes the rate of accretion of angular momentum from the disk onto the star relative to $`\dot{M}\sqrt{GMr_{\mathrm{in}}}`$ (e.g. Shapiro & Teukolsky 1983. eq. \[14.5.17\]; see also Frank, King & Raine 1992, §5.3); if “imperfect” fluid stresses vanish at $`r_{\mathrm{in}}`$, then $`\beta =1`$ (as in black hole accretion; see Page & Thorne 1974, Novikov & Thorne 1973). If the color temperature of the emission equals the effective temperature $`T_e(r)`$, then the “bolometric flux” of photons is $`F(r)/kT_e(r)`$ at radius $`r`$, and the rate at which photons are emitted from radii between $`r`$ and $`r+dr`$ is of order
$$\frac{2\pi rF(r)}{kT_e(r)}\frac{\dot{M}^{3/4}[f(r)]^{3/4}}{r^{5/4}}.$$
(26)
Differentiating equation (26) implies a maximum emission rate at $`\sqrt{r/r_{\mathrm{in}}}=1.3\beta `$, consistent with $`r>r_{\mathrm{in}}`$ provided that $`\beta >0.77`$. Assuming that the QPO frequency is the Kepler frequency at the radius of peak (bolometric) photon emission,
$$\nu _{\mathrm{QPO}}=\frac{\nu _K(r_{\mathrm{in}})}{(1.3\beta )^3}\frac{1000\mathrm{Hz}}{\beta ^3M/M_{}},$$
(27)
where the limiting result is for $`r_{\mathrm{in}}6GM/c^2`$. In order for the maximum value of $`\nu _{\mathrm{QPO}}`$ to be $`\nu _{\mathrm{max}}1060`$ Hz, we require $`M=1.4M_{}/(\beta /0.88)^3`$.
Real disk emission profiles for small $`r_{\mathrm{in}}`$, and the determination of $`\nu _{\mathrm{QPO}}`$, are not this simple for several reasons. A detailed calculation of the X-ray spectrum is needed, since the QPOs are found for counts in particular energy bands; the bolometric count rate is not a good approximation in general. (But note that Comptonization by hot coronal gas above the disk conserves photon number, so the approximation may be better than it appears at first sight.) In particular, the color temperature is not usually the same as the effective temperature, since electron scattering is the dominant opacity at relevant disk radii. The composition of the disk is also important; at low enough $`\dot{M}`$, the disk will be matter-dominated, but at larger $`\dot{M}`$, radiation-dominated. (Less important, but still significant, is the dependence of opacity on the element abundances in the accreting gas.) In addition, relativistic effects alter $`f(r)`$ (e.g. Page & Thorne 1974, Novikov & Thorne 1973), and hence $`\nu _{\mathrm{QPO}}`$. Moreover, the angular momentum carried away by photons may not be insignificant once $`r_{\mathrm{in}}`$ approaches the ISO (Page & Thorne 1974, Epstein 1985). Instabilities associated with the transition from matter to radiation domination (Lightman & Eardley 1974) or the inner boundary layer (e.g. Epstein 1985) might also play a role in determining $`\nu _{\mathrm{QPO}}`$. These and other issues associated with the termination of disks at $`r_{\mathrm{in}}`$ and QPOs will be explored more fully elsewhere. However, the simplified example presented here indicates that $`\nu _{\mathrm{QPO}}`$ might plausibly arise from $`r>r_{\mathrm{in}}`$.
## 5 Conclusion
In this paper we have presented a phenomenological model of the inner region of the accretion disk for weakly magnetized neutron stars such as those in LMXBs. A notable feature of these systems is that both magnetic field and general relativity are important in determining the inner disk radius. Our result suggests that the combined effects of a steepening magnetic field – which is likely for disk accretion onto a neutron star – and general relativity can produce the flattening of the QPO frequency $`\nu _{\mathrm{QPO}}`$ as the mass accretion rate $`\dot{M}`$ increases. If the field steepens fast enough with decreasing inner disk radius, $`\nu _{\mathrm{QPO}}`$ may vary little over a fairly substantial range of $`\dot{M}`$ at values considerably below the Kepler frequency at the ISO due to general relativity. Observationally, the correlation between $`\nu _{\mathrm{QPO}}`$ and the RXTE photon count rate has been well-established, but the scaling between $`\nu _{\mathrm{QPO}}`$ and $`\dot{M}`$ is ambiguous (Mendez et al. 1998c). An observational or phenomenological determination of this scaling would be quite useful in constraining the magnetic field structure in LMXBs.
Currently it is not clear whether the plateau behavior in the QPO frequency has been observed. But even if $`\nu _{\mathrm{QPO}}11001200`$ Hz represents the maximum possible QPO frequency, we argue that a massive neutron star ($`M>2M_{}`$) is not necessaily implied. Instead, a $`M1.4M_{}`$, $`R_01415`$ km neutron star may be a better solution, and is within the range allowed by some nuclear equations of state. If this is the case, the maximum QPO frequency signifies the closing of the accretion gap and the formation of a boundary layer. Alternatively, the QPO frequency might be associated with the Kepler frequency at a radius somewhat larger than the inner radius of the disk, thus allowing lower mass for the accreting neutron star. In either case, better theoretical and phenomenological understanding of the termination of magnetized accretion disks is needed before observations of maximal kHz QPOs can be interpreted as purely general relativistic in origin, and used to deduce neutron star masses.
D.L. is supported by a Alfred P. Sloan Foundation Fellowship. R.L. acknowledges support from NASA grant NAG 5-6311. I.W. acknowledges support from NASA grants NAG 5-3097 and NAG 5-2762.
|
no-problem/9904/quant-ph9904098.html
|
ar5iv
|
text
|
# On energy transfer by detection of a tunneling atom
## I Introduction
Tunneling is one of the most striking predictions of quantum mechanics, and continues to provoke some of its most heated controversies. Despite the appearance of tunneling phenomena in numerous physical and technological areas (and in first-year physics courses), certain aspects of the effect remain poorly understood. This is seen most clearly in the debate over how long a particle takes to traverse a tunnel barrier, and in particular, whether or not it can do so faster than light .
The confusion over these issues can be traced to certain common elements of quantum “paradoxes.” For one, definite trajectories cannot be assigned to particles in general, and in this sense it is not even clear how to rigorously phrase a question about how much time a transmitted particle spent in a forbidden region– in fact, it may not even be necessary that a particle “traverse” a region in order to be found on the far side.
Of course, at one level quantum mechanics is merely a wave theory, and quite thoroughly understood. In many physical situations, more controversial, interpretational, issues (related to “collapse” or other alternatives) may easily be skirted without loss of predictive power. In tunneling, however, it is quite natural to look for a description of transmitted particles, as distinct from reflected ones (or from the ensemble as a whole) . But such a description is impossible without an attempt to model the detection itself, because without the detection event, transmitted and reflected packets necessarily coexist. Detection naturally raises other interesting questions. What is the nature of a detection process which occurs inside a forbidden region (cf. )? According to the collapse postulate, if a particle is found to be in the barrier region, it is subsequently described by a new wave function, confined to that region. Any such wave function has $`E>V_0`$, and suddenly, the problem is no longer one of tunneling.
Our plans to observe tunneling of laser-cooled Rubidium atoms, and to perform “weak measurements” in order to study the behaviour of a transmitted subensemble, have been presented at length elsewhere. Here we repeat only the essential elements, to provide context for the present discussion.
## II Tunneling in atom optics
Laser-cooled atoms offer a unique tool for studying quantum phenomena such as tunneling through spatial barriers. They can routinely be cooled into the quantum regime, where their de Broglie wavelengths are on the order of microns, and their time evolution takes place in the millisecond regime. They can be directly imaged, and if they are made to impinge on a laser-induced tunnel barrier, transmitted and reflected clouds should be spatially resolvable. With various internal degrees of freedom (hyperfine structure as well as Zeeman sublevels), they offer a great deal of flexibility for studying the various interaction times and nonlocality-related issues. In addition, extensions to dissipative interactions and questions related to irreversible measurements and the quantum-classical boundary are easy to envision.
In our work, we prepare a sample of laser-cooled Rubidium atoms in a MOT, and cool them in optical molasses to approximately 6 $`\mu K`$. As explained below, further cooling techniques are under investigation for achieving yet lower temperatures .
We plan to use a tightly focussed beam of intense light detuned far to the blue of the D2 line to create a dipole-force potential for the atoms. In this intense beam, the atom becomes polarised, and the polarisation lags the field by $`90^{}`$ when the light frequency exceeds that of the atomic resonance. This polarisation out of phase with the local electric field constitutes an effective repulsive potential, proportional to the intensity of the perturbing light beam. It can also be thought of in terms of the new (position-dependent) energy levels of the atom dressed by the intense laser field. Using a 500 mW laser at 770 nm, we will be able to make repulsive potentials with maxima on the order of the Doppler temperature of the Rubidium vapour. Acousto-optical modulation of the beam will let us shape these potentials with nearly total freedom, such that we can have the atoms impinge on a thin plane of repulsive light, whose width would be on the order of the cold atoms’ de Broglie wavelength. This is because the beam may be focussed down to a spot several microns across (somewhat larger than the wavelength of atoms in a MOT, but of the order of that of atoms just below the recoil temperature, and hence accessible by a combination of cooling and selection techniques). This focus may be rapidly displaced by using acousto-optic modulators and motorized mirrors. As the atomic motion is in the mm/sec range, the atoms respond only to the time-averaged intensity, which can be arranged to have a nearly arbitrary profile.
As a second stage of cooling, we follow the MOT and optical molasses with an improved variant of a proposal termed “delta-kick cooling”. In our version, the millimetre-sized cloud is allowed to expand for about ten milliseconds, to several times its initial size. This allows individual atoms’ positions $`x_i`$ to become strongly correlated with atomic velocity, $`x_i\approx v_it_{\mathrm{free}}`$. Magnetic field coils are then used in either a quadrupole or a harmonic configuration to provide a position-dependent restoring force for a short period of time. By proper choice of this impulse, one can greatly reduce the rms velocity of the atoms. So far, we have achieved a one-dimensional temperature of about 700 nK, corresponding to a de Broglie wavelength of about half a micron. We are currently working on improving this temperature by producing stronger, more harmonic potentials, and simultaneously providing an antigravity potential in order to increase the interaction time.
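A minimal one-dimensional simulation of this idea is sketched below (Python). The cloud size, temperature, and expansion time are assumptions of the same order as the numbers quoted above; the optimal harmonic impulse is the one that removes the position-velocity correlation built up during free flight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, order-of-magnitude parameters (illustrative only):
N = 100_000
m = 1.44e-25                  # mass of a Rb atom (kg)
kB = 1.38e-23                 # Boltzmann constant (J/K)
T0 = 6e-6                     # temperature after molasses (K)
sigma_x0 = 0.1e-3             # initial rms cloud radius (m)
t_free = 10e-3                # free-expansion time (s)

x = rng.normal(0.0, sigma_x0, N)
v = rng.normal(0.0, np.sqrt(kB * T0 / m), N)

# Free flight correlates position with velocity, x_i ~ v_i * t_free.
x = x + v * t_free

# Harmonic "delta kick": an impulse dv = -k * x, with k chosen to cancel
# the correlated part of the velocity, k = <x v> / <x^2>.
k = np.mean(x * v) / np.mean(x * x)
v = v - k * x

print(f"final 1D temperature: {m * np.var(v) / kB * 1e9:.0f} nK")
```

With these assumed numbers the residual temperature comes out below 1 µK, of the same order as the 700 nK quoted above; longer expansion (hence the interest in an antigravity potential) reduces it further.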
However, the tunneling probability through a 5-micron focus will still be negligible at these temperatures. Furthermore, the exponential dependence of the tunneling rate on barrier height will be difficult to distinguish from the exponential tail of a thermal distribution at high energies. We will therefore follow the delta-kick with a velocity-selection phase. By using the same beam which is to form a tunnel barrier, but increasing the width to many microns, we will be able to “sweep” the lowest-energy atoms from the center of the magnetic trap off to the side, leaving the hotter atoms behind. Our simulations suggest that we will be able to transfer about 7% of the atoms into the one-dimensional ground state of this auxiliary trap. This new, smaller sample will have a thermal de Broglie wavelength of approximately $`3.5\mu `$m, leading to a significant tunneling probability through a 10-micron barrier. We expect rates on the order of 1% per secular period, causing the auxiliary trap to decay via tunneling on a timescale of the order of 100 ms.
## III Measuring tunneling atoms
A weak measurement is one which does not significantly disturb the particle being studied (nor, consequently, does it provide much information on any single occasion). Why not perform a strong measurement? Simply because if one can tell with certainty that a particle is in a given region, one has also determined that the particle has enough energy to be in that region; one is no longer studying tunneling. The measurement has too strongly disturbed the unitary evolution of the wave function.
At the 6th Symposium on Laser Spectroscopy in Taejon, I made the above glib assertion as I had frequently done in the past, and went on to discuss weak measurements. Afterwards, however, Bill Phillips raised the question of where exactly the energy comes from to turn a forbidden region into an allowed one. The imaging of an atom involves a small transfer of momentum, and typically the only energy exchange is the more-or-less negligible recoil shift. But in this scenario, an atom observed under an arbitrarily high tunnel barrier must, merely by being observed, acquire enough energy to ride on top of the barrier. Why should the effect of a weak probe beam (in fact, the interaction with a single resonant photon) scale with the completely unrelated height of a potential barrier?
The situation envisioned is shown in schematic form in Fig. 1. The wave function of the atom decays exponentially into the barrier region over a characteristic length $`1/\kappa `$. If this length is greater than the resolution of the imaging lens, then it is possible for the appearance of a spot of focussed fluorescence on an appropriate point on the screen to indicate that an atom is in the barrier region. This atom, having scattered perhaps only a single photon, must according to quantum mechanics have acquired an energy of at least $`V_0`$ to exist confined to the barrier region. This energy depends not on the wavelength or intensity of the imaging light, but only on the height of the barrier created by the dipole-force beam, which may greatly exceed the recoil energy associated with the momentum transfer involved in elastic scattering of a photon. An interesting point is that it is unnecessary to actually focus and detect the photon in question. The very possibility that some future observer could use the scattered light to determine that an atom was in the forbidden region is sufficient to decohere spatially separated portions of the atomic wavefunction, causing some fraction of the atoms to “collapse” (if you will) into the barrier region.
At first, one might think that the energy comes from the interaction between the atom and the dipole-force beam. A little thought suffices, however, to demonstrate that this cannot be the solution. Even in the absence of a potential, an imaging beam may localize a previously unlocalized particle, increasing its momentum uncertainty and hence its energy. The energy must come from the imaging beam. Why, then, does the quantity of energy transferred depend on the barrier height?
A partial answer comes from carefully considering the energy levels of the atom. Inside the barrier region, the presence of the dipole beam couples the atomic eigenstates, creating an AC Stark shift (which is the effective repulsive potential). An atom which makes a transition from a state primarily outside the barrier (of energy $`E_g+P^2/2m`$) to a state localized in the barrier is simultaneously making a transition to a new, higher-energy electronic state ($`E_g+V_0+P^{\prime 2}/2m`$). Energy conservation will be enforced by the time-integral in perturbation theory, causing the amplitudes for this process to interfere destructively unless the scattered photon energy plus the final energy of the atom equals the initial photon energy plus the initial energy of the atom. In other words, the presence of the dipole beam makes possible inelastic (Raman) transitions between different atomic states. When an elastic scattering event occurs, the atom is left in the original state, and cannot be localized to the barrier. Only when an inelastic scattering event occurs can the atom be transferred to the state dressed by the dipole field, and localized in the formerly forbidden region.
Can one then determine that an atom is in the barrier without imaging, by merely measuring the energy of the scattered photon? Unfortunately, no. Recall that this argument hinges on an imaging resolution
$$\delta l<1/\kappa ,$$
(1)
where
$$\hbar ^2\kappa ^2=2m(V_0-E).$$
(2)
A particle localized to within $`\delta l`$ has a momentum uncertainty
$$\mathrm{\Delta }P\approx \hbar /2\delta l.$$
(3)
This means that it will only remain within the resolution volume for a time
$$t\approx \frac{\delta l}{\mathrm{\Delta }P/m}=2m\delta l^2/\hbar .$$
(4)
This in turn implies
$$t<2m/\hbar \kappa ^2.$$
(5)
Unless the imaging light is time-resolved to better than this limit, it is impossible to maintain the spatial resolution necessary to conclude with certainty that the particle is in the barrier. (Strictly speaking, it would suffice to image to better than the barrier width. However, a particle in the barrier is most likely to be within the first exponential decay length $`1/\kappa `$. While with lower resolution, one might still conclude that the particle was deeper in the barrier, the likelihood will be exponentially suppressed. Thus on some occasions, the photon energy will be shifted by an amount greater than its rms spectrum, but the low amplitude of this frequency component will be matched by the low probability of finding the atom so deep in the barrier, and the present arguments may easily be generalized.) This implies that the energy uncertainty of the scattered photon must be
$$\mathrm{\Delta }E\gtrsim \hbar /t>\hbar ^2\kappa ^2/2m.$$
(6)
But this is $`V_0-E`$, just the energy required to excite the tunneling atom above the barrier. So the only way to image an atom in the forbidden region is to use light with sufficient energy uncertainty that it can boost the atom above the barrier without a significant change in its own spectrum.
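A quick numerical check of this chain of inequalities, Eqs. (1)-(6), is sketched below (Python). The barrier margin $`V_0-E`$ is an assumption chosen purely for illustration; the point is only that the final energy uncertainty reproduces $`V_0-E`$ exactly, since $`\hbar /t=\hbar ^2\kappa ^2/2m`$ when $`\delta l=1/\kappa `$.

```python
import numpy as np

hbar = 1.0546e-34             # J*s
kB = 1.38e-23                 # J/K
m = 1.44e-25                  # mass of a Rb atom (kg)

# Assumed barrier margin, illustrative only: V0 - E ~ kB * 1 nK.
V0_minus_E = kB * 1e-9

kappa = np.sqrt(2.0 * m * V0_minus_E) / hbar   # Eq. (2)
dl = 1.0 / kappa                               # resolution needed, Eq. (1)
t = 2.0 * m * dl**2 / hbar                     # time window, Eqs. (4)-(5)
dE = hbar / t                                  # photon-energy uncertainty, Eq. (6)

print(f"decay length 1/kappa : {dl * 1e6:.2f} um")
print(f"time window t        : {t * 1e3:.1f} ms")
print(f"Delta E / kB         : {dE / kB * 1e9:.2f} nK (= V0 - E)")
```

With a nanokelvin margin the decay length comes out on the micron scale resolvable by realistic imaging optics, and the required spectral width of the light is, as the argument demands, exactly the energy needed to lift the atom above the barrier.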
## IV Detection without observation, or observation without detection?
Just as these issues were beginning to make themselves clear to us, Terry Rudolph suggested an even more confounding extension. His idea is outlined in Figure 2. Suppose one decides to determine the location of the tunneling atom in a more indirect manner. Specifically, suppose a nearly ideal imaging system is devised (relying, for example, on $`\pi `$-pulses of probe light), but that a beam stop is imaged onto the barrier region. In this way, any atom in the classically allowed region will be imaged, but an atom which finds itself in the forbidden region will be out of the reach of probe light, and no photon will be scattered. When no scattered photon is observed, we can conclude with near certainty that the atom is in the forbidden region, and has therefore made a transition to a higher-energy dressed state. But now where did the energy come from? After all, it appears that the “detected” atom became localized without ever undergoing an interaction.
This picture is ill-founded, however. The atom cannot be considered in isolation; it is in fact the entire system, composed of atom, dipole-force beams, and imaging photons, which undergoes a transition and must conserve energy. Under the influence of a probe pulse, the atom’s quantum state becomes entangled with the state of the imaging light. There is some amplitude for a photon to travel along its original path, unscattered, but this amplitude is correlated with an atomic state localized to the barrier region. For the time-integral of this amplitude to lead to a real probability for detecting an unscattered photon, the detected photon will necessarily lose enough energy to boost the atom above the barrier, just as in the case previously discussed.
Once more, the situation becomes less startling when we observe that (1) there is indeed a mechanism for energy exchange between the (unscattered) imaging beam and the atom; and (2) this energy exchange never exceeds the intrinsic uncertainty in the initial photon energy. The interaction between imaging light and an atom comprises not only the possibility of scattering, but also the real part of the atomic polarizability, which is to say the index of refraction experienced by the light due to the presence of the atom. For a near-resonant photon with a probability $`\eta `$ of being scattered by an atom, the extra optical path introduced by the presence of the atom, $`\int [n(z)-1]𝑑z`$, is of the order of an optical wavelength times $`\eta `$, corresponding to an optical phase shift approximately equal to $`\eta `$. If an atom is found to have appeared in the dark region enclosing the barrier, this implies that it left the region of interaction with the probe light, causing the light to experience a time-varying index of refraction. If the atom’s departure from the illuminated region is known to have occurred within a time $`t`$, then the phase of the light was modulated by an amount $`\eta `$ in a time smaller than $`t`$, producing a frequency shift of the order $`\eta /t`$. Each photon’s energy can in this way be altered by the “disappearance” of the atom, by the quantity $`\hbar \eta /t`$. Since on the order of $`1/\eta `$ photons are necessary to detect atoms with near-unit probability in such a scenario, this phase-modulation effect is automatically sufficient to produce an energy exchange of up to $`\hbar /t`$ between the moving atom and the probe beam, even when no photons are scattered.
As in the original discussion of bright imaging of the barrier region, we know that $`\hbar /t\gtrsim \hbar ^2\kappa ^2/2m`$, and this energy exchange is enough to propel the particle above the barrier. Furthermore, the same argument concerning the duration of a probe pulse remains intact. If the pulse lasts long enough that even a particle localized to the barrier would have time to escape while the light was on, then one will never completely avoid fluorescence, and thus never be able to conclude with certainty that the particle is in the barrier region. One might instead envision a CW probe but time-gated photodetection; in this case, the argument is similar, but it is the detected photon whose energy can no longer be determined precisely enough to be certain that energy exchange has taken place on any individual occasion. Nevertheless, by studying an ensemble of particles, one should be able to build up enough statistics to confirm the shift in the mean photon frequency.
## V Conclusion
We see that tunneling is just one more prototypical example of the way in which observation may disturb a quantum system. It is instructive to consider the mechanisms which allow the necessary energy transfer to take place, along with the requisite uncertainties behind which this transfer hides. Ultracold atoms in Bose condensates, and at temperatures achievable through related laser-cooling techniques as well, have long enough de Broglie wavelengths that tunneling effects should soon be observed in a regime where these questions become more than purely academic. Particularly intriguing is the possibility of modifying the barrier-traversal rate by the application of a probe beam which could in principle be used to image an atom in the forbidden region. Even if no attempt is made to actually perform the imaging, the simple possibility that one could do so should be enough to turn the quantum amplitude for an atom to be within about $`1/\kappa `$ of the edge of the barrier into an actual probability, in the sense of a real fraction of atoms localized into that region of the barrier. These atoms have enough energy to traverse the barrier classically in either direction, and may therefore be observed on the far side.
## VI Acknowledgments
This discussion would remain purely academic if not for the hard work of Stefan Myrskog, Jalani Fox, Phillip Hadley, and Ana Jofre on our laser-cooling experiment. I would also like to acknowledge Jung Bog Kim and his students Han Seb Moon and Hyun Ah Kim for their collaboration on this project. I want to thank Jung Bog Kim and the organizers of the Symposium on Laser Spectroscopy for their invitation and for their hospitality during the meeting, which proved quite stimulating. Finally, I would like to thank Bill Phillips for the question which prompted this short paper, and for fascinating discussions concerning it; and Terry Rudolph for following it up with an even harder question just as I thought I was beginning to understand something.
Figure Captions
1. In this setup, a tunneling atom is illuminated by a plane wave, and the scattered fluorescence may be imaged on a screen to determine whether or not the particle was in the barrier region.
2. Here, a beam block is imaged onto the barrier region, in such a way that an image may be observed on a screen unless the particle is in the process of tunneling.
no-problem/9904/hep-ph9904511.html
# Electroproduction of strangeness above the resonance region.
## Abstract
A simple and elegant model, based on Reggeized $`t`$-channel exchanges, is successful in reproducing the main features of all existing data for the reactions $`ep\to e^{\prime }K^+\mathrm{\Lambda }`$ and $`ep\to e^{\prime }K^+\mathrm{\Sigma }`$. In particular, the original way gauge invariance is taken into account is found to be essential to describe the ratio between the Coulomb and the transverse cross-sections at large $`Q^2`$ that has been measured recently at JLab.
Strangeness production is undergoing a renewed interest in view of the numerous data which are currently coming out of electron accelerators like CEBAF at Jefferson Lab and ELSA at Bonn. It offers an original way to probe hadronic matter. Not only is the study of the Hyperon-Nucleon interaction a mandatory complement to the study of the Nucleon-Nucleon interaction, but the implantation of an impurity (the strange quark or a hyperon) in a hadronic system is a formidable tool to study its properties. First, however, the elementary processes of photo- and electroproduction of kaons off the nucleon must be mastered. In particular, we will show that the determination of the kaon form factor depends on the model used.
At low energy, within about 1 GeV above threshold, many resonances may contribute in the $`s`$-channel, and fits to the scarce data are generally obtained at the expense of many free parameters. At higher energy, Regge phenomenology provides us with an elegant and simple way to account for the analyticity and unitarity of the amplitude with almost no free parameters.
Our Regge model is fully described in Refs. for the photoproduction of pions and kaons on the nucleon above the resonance region (above $`E_\gamma \simeq 2`$ GeV), and in Ref. for the electroproduction of pions. It is based simply on two “Reggeized” $`t`$-channel exchanges ($`\pi `$ and $`\rho `$ trajectories for pion production, $`K`$ and $`K^{*}`$ trajectories for kaon production). An original and essential feature of this model is the way gauge invariance is restored for the $`\pi `$ ($`K`$) $`t`$-channel exchanges by proper “reggeization” of the $`s`$-channel nucleon pole contribution. This was found to be the key element in describing numerous features of the experimental data (for instance, the $`\pi ^+/\pi ^{-}`$ ratio, the forward peaking of the charged pion differential cross section, the photon asymmetry, etc.).
As in Ref., we extend the model of kaon photoproduction to electroproduction by multiplying the separately gauge-invariant $`K`$ and $`K^{*}`$ $`t`$-channel diagrams by a monopole form factor:
$$F_{K,K^{*}}(Q^2)=\left[1+Q^2/\mathrm{\Lambda }_{K,K^{*}}^2\right]^{-1},$$
(1)
with $`Q^2=-q^2`$, where $`q`$ is the spacelike virtual photon four-momentum. The mass scales $`\mathrm{\Lambda }_K`$ and $`\mathrm{\Lambda }_{K^{*}}`$ are chosen to be $`\mathrm{\Lambda }_K^2=\mathrm{\Lambda }_{K^{*}}^2=1.5`$ GeV<sup>2</sup>, in order to fit the high $`Q^2`$ behavior of $`\sigma _L`$ and $`\sigma _T`$ in Fig. 1. We keep the same coupling constants at the ($`K,(\mathrm{\Lambda },\mathrm{\Sigma }),N`$) and ($`K^{*},(\mathrm{\Lambda },\mathrm{\Sigma }),N`$) vertices as in the photoproduction study, which were found to describe all existing high energy data, i.e.:
$`{\displaystyle \frac{g_{K\mathrm{\Lambda }N}^2}{4\pi }}=10.6,g_{K^{*}\mathrm{\Lambda }N}=-23.0,\kappa _{K^{*}\mathrm{\Lambda }N}=2.5,`$ (2)
$`{\displaystyle \frac{g_{K\mathrm{\Sigma }N}^2}{4\pi }}=1.6,g_{K^{*}\mathrm{\Sigma }N}=-25.0,\kappa _{K^{*}\mathrm{\Sigma }N}=1.0`$ (3)
The magnitudes and signs ($`g_{K\mathrm{\Lambda }N}<0`$ and $`g_{K\mathrm{\Sigma }N}>0`$) of the $`K`$ strong coupling constants are in agreement with SU(3) constraints (broken at the level of about 20$`\%`$). The signs of the $`K^{*}`$ strong coupling constants are also in accordance with SU(3).
It turns out that the recent measurement at Jefferson Lab of the ratio between the Coulomb ($`\sigma _L`$) and the transverse ($`\sigma _T`$) cross-sections of the $`p(e,e^{\prime }K^+)\mathrm{\Lambda }`$ reaction clearly favors the Regge model over the resonance models, already in the CEBAF energy range. Fig. 1 shows the comparison of the Regge model with the data. Particularly interesting is the behavior at large $`Q^2`$ of the $`\sigma _L/\sigma _T`$ ratio, which decreases like the data, in contrast to two recent theoretical resonance models. Our Regge model suitably reproduces the trend of this ratio. The reason is that, due to gauge invariance, the $`t`$-channel kaon exchange and the $`s`$-channel nucleon pole terms are inseparable and must be treated on the same footing. In our model, they are Reggeized in the same way and multiplied by the same electromagnetic form factor. This approach clearly differs from traditional ones, where different electromagnetic form factors are assigned to the $`t`$- and $`s`$-channel diagrams (in general, monopole and dipole forms, respectively, with different mass scales as well). This explicitly breaks gauge invariance, and the introduction of a purely phenomenological, ad hoc counter-term is needed to restore it. Whatever particular way this procedure has been implemented in the existing literature, it produces a linearly rising ratio $`\sigma _L/\sigma _T`$ with $`Q^2`$, in contrast to the data. It is important to note that the $`Q^2`$ dependence of this ratio is relatively insensitive to the particular shape or mass scale taken for the electromagnetic form factors of the $`K`$ and $`K^{*}`$, and that the decreasing trend observed here is intrinsically linked to the assignment of the same electromagnetic form factor to the $`t`$- and $`s`$-channel Born diagrams (which is clearly the simplest way to keep gauge invariance in electroproduction).
The Regge model also reproduces fairly well the scarce data prior to Jefferson Lab. Fig. 2 shows the $`t`$-dependence of the $`\gamma ^{*}+p\to K^++\mathrm{\Lambda }`$ and $`\gamma ^{*}+p\to K^++\mathrm{\Sigma }^0`$ differential electroproduction cross section $`2\pi d^2\sigma /dtd\mathrm{\Phi }`$ for different $`Q^2`$ values. The latest and older Bonn data for photoproduction are also shown for reference. At $`Q^2`$=0.06 GeV<sup>2</sup>, there is essentially no influence of the form factors. Therefore, without any additional parameter, a straightforward extension of the photoproduction model gives the correct $`t`$-dependence and magnitude of the data. As in the photoproduction study, the $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }`$ channels show a different behavior at forward angles: the differential cross section decreases towards 0 for the latter, whereas it tends to peak for the former. This “peaking” for the $`\mathrm{\Lambda }`$ channel is due to the dominance of the gauge-invariant $`K`$ exchange at small $`t`$. Because of the weaker $`g_{K\mathrm{\Sigma }N}`$ coupling constant relative to the $`g_{K\mathrm{\Lambda }N}`$ coupling constant, the $`K^{*}`$ exchange contribution, which has to vanish at forward angles due to angular momentum conservation, dominates the $`\mathrm{\Sigma }`$ channel, which is reflected in a decrease of the differential cross section at forward angles. This decrease at small $`t`$ is attenuated at larger $`Q^2`$ due to the “shift” of $`t_{min}`$ with $`Q^2`$.
The Brauel et al. data of Fig. 2 were integrated in $`\mathrm{\Phi }`$ between 120<sup>o</sup> and 240<sup>o</sup>; so was the model, in order to correctly take into account the influence of the $`\sigma _{TT}`$ and $`\sigma _{TL}`$ terms, which is found not to be negligible. Fig. 2 furthermore shows the destructive interference between the $`K`$ and $`K^{*}`$ exchange mechanisms for the $`\mathrm{\Sigma }`$ channel found at large angles, which was also noticed in the photoproduction study.
This $`Q^2`$ dependence is confirmed by Fig. 3, which shows the differential cross section $`d\sigma /d\mathrm{\Omega }`$ at $`\theta _{c.m.}`$=8<sup>o</sup> as a function of $`Q^2`$ for two energy bins. In fact, this observable has commonly been plotted at a single averaged $`W`$ value ($`<W>`$=2.15 GeV), where a $`p_K^{*}/W/(s-m_p^2)`$ dependence was used for the extrapolation of the lower and higher measured $`W`$ values. Fig. 3 shows that this procedure is approximately right for the $`\mathrm{\Lambda }`$ channel, which shows roughly a $`\frac{1}{s}`$ behavior, but is not appropriate for the $`\mathrm{\Sigma }`$ channel, which shows a rather constant behavior in this energy domain. Indeed, it is well known that a Regge amplitude proportional to $`s^{\alpha (t)}`$ leads to a differential cross-section $`\frac{d\sigma }{dt}\propto s^{2\alpha (t)-2}`$ and therefore $`\frac{d\sigma }{d\mathrm{\Omega }}\propto s^{2\alpha (t)-1}`$. For a $`K`$-meson exchange dominated mechanism (such as the $`\mathrm{\Lambda }`$ channel at forward angles, see Fig. 2) with $`\alpha _K(0)\simeq -0.17`$, this implies $`\frac{d\sigma }{d\mathrm{\Omega }}\propto s^{-1.34}`$. And for a $`K^{*}`$-meson exchange dominated mechanism (such as the $`\mathrm{\Sigma }`$ channel) with $`\alpha _{K^{*}}(0)\simeq 0.25`$, we have $`\frac{d\sigma }{d\mathrm{\Omega }}\propto s^{-0.5}`$.
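As a minimal numerical illustration of these power laws, the sketch below (Python) tabulates how the two trajectory intercepts quoted above translate into very different energy dependences; the $`s`$ range is an assumption chosen only to mimic the span of the older data.

```python
import numpy as np

# Trajectory intercepts at t = 0 quoted in the text.
alpha = {"K": -0.17, "K*": 0.25}

s = np.array([4.0, 5.0, 6.0, 8.0])      # s in GeV^2, illustrative range
for name, a in alpha.items():
    # d(sigma)/d(Omega) ~ s**(2*alpha(0) - 1), normalized to the first point
    ratio = (s / s[0]) ** (2.0 * a - 1.0)
    print(f"{name:3s} falls like s^{2 * a - 1:+.2f}:", np.round(ratio, 3))
```

Over this range the $`K`$-dominated $`\mathrm{\Lambda }`$ channel drops by roughly a factor of 2.5, while the $`K^{*}`$-dominated $`\mathrm{\Sigma }`$ channel stays within about 30% of its initial value, i.e. it is “rather constant”.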
Note that the photoproduction point has been renormalized. This point was taken at $`\theta _{c.m.}`$=25<sup>o</sup> ($`t\simeq -0.15`$ GeV<sup>2</sup>). It has been extrapolated to 8<sup>o</sup> ($`t\simeq -0.06`$ GeV<sup>2</sup>) to allow a consistent comparison with the other data. We did so by using the $`t`$-dependence of our model. From Fig. 2, it can be seen that this implies an upscaling (by a factor of about 1.2) of the $`\mathrm{\Lambda }`$ photoproduction point and a downscaling (by a factor of about 2.4) of the $`\mathrm{\Sigma }`$ photoproduction point. This leads to a very different figure and conclusion than in Ref. (see Figs. 6 and 7 therein): firstly, the $`Q^2`$ dependence of the $`\mathrm{\Sigma }`$ channel is not steeper than for the $`\mathrm{\Lambda }`$ channel and, secondly, there is no particular evidence of a rise with $`Q^2`$ of the $`\mathrm{\Lambda }`$ cross-section, which would have indicated a strong contribution of $`\sigma _L`$. This latter contribution is seen to account for less than half of the cross section.
It is remarkable that the value of the cut-off mass, $`\mathrm{\Lambda }_K^2`$=$`\mathrm{\Lambda }_{K^{*}}^2`$=1.5 GeV<sup>2</sup>, deduced from the Jefferson Lab experiment also leads to a correct $`Q^2`$ dependence for the world set of previous data, both in the $`\mathrm{\Lambda }`$ and the $`\mathrm{\Sigma }`$ channels. It appears, however, to be quite large, resulting in a rather flat form factor. Indeed, the effective charge radius corresponding to $`\mathrm{\Lambda }_K^2`$=1.5 GeV<sup>2</sup> is: $`<r_K^2>=-6\frac{dF_K}{dQ^2}|_{Q^2=0}=\frac{6}{\mathrm{\Lambda }^2}\simeq 0.16`$ fm<sup>2</sup>. This has to be compared to the value measured by direct scattering of kaons on atomic electrons at the CERN SPS, which yielded $`<r_K^2>`$=0.34 fm<sup>2</sup>. However, we previously found a good agreement between the pion form factor mass scale ($`\mathrm{\Lambda }_\pi ^2=`$0.462 GeV<sup>2</sup>, i.e. $`<r_\pi ^2>`$= 0.52 fm<sup>2</sup>) deduced from a study similar to the one in this article and the value measured by direct scattering of pions on atomic electrons at the CERN SPS: about 0.44 fm<sup>2</sup>. An interpretation of this discrepancy for the kaon case can be that, the kaon pole being far from the physical region, the form factor used in this kind of model does not represent the properties of the kaon itself but rather the properties of the whole trajectory. Instead of being sensitive to the kaon form factor, one might in fact measure a transition form factor between the kaon and an orbitally excited state lying on the kaon Regge trajectory. This has to be kept in mind when trying to extract the kaon electromagnetic form factor from these electroproduction reactions.
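The conversion between the monopole mass scale and the effective charge radius is a one-liner; the sketch below (Python) just checks the numbers quoted above, using $`\hbar c\simeq 0.19733`$ GeV fm to convert GeV<sup>-2</sup> into fm<sup>2</sup>.

```python
# Minimal numeric check of <r^2> = 6/Lambda^2 for a monopole form factor
# F(Q^2) = 1/(1 + Q^2/Lambda^2); Lambda^2 in GeV^2, <r^2> in fm^2.
HBARC = 0.19733                      # GeV*fm

def monopole_radius_sq(lambda_sq):
    return 6.0 * HBARC**2 / lambda_sq

print(round(monopole_radius_sq(1.5), 3))     # kaon scale used here   -> 0.156
print(round(monopole_radius_sq(0.462), 3))   # pion scale quoted above -> 0.506
```

Both outputs are consistent, to rounding, with the 0.16 fm<sup>2</sup> and 0.52 fm<sup>2</sup> values quoted in the text.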
It is clear that the extracted form factor mass scale depends strongly on the extrapolation from $`t_{min}`$ to the pole. Indeed, the standard procedure uses a $`\frac{1}{t-m_K^2}`$ dependence reflecting the traditional $`t`$-channel kaon propagator. However, in our present approach, this propagator is proportional to $`s^{\alpha (t)}`$, which leads to a steeper (exponential) $`t`$ dependence. It is therefore intuitively no surprise that the mass scale of the electromagnetic form factor in this latter approach is softer than when using a standard Feynman propagator for the extrapolation.
To understand this crucial point better, we compare in Fig. 4 the two approaches: the plots are extracted from Figs. 2 and 3, and we compare our Regge model (solid line) with a standard Born model based on the usual $`\frac{1}{t-m_K^2}`$ propagator (dashed line). Our Regge model contains $`K`$ and $`K^{*}`$ trajectory exchanges but, as can be seen from Fig. 2, only the $`K`$ trajectory contributes significantly. Our Born model here uses only a (gauge-invariant) $`K`$ exchange rescaled in order to match the photoproduction result in the forward direction. We left out the $`K^{*}`$ exchange as it is well known that it diverges with rising energy, due to derivative couplings for exchanged high-spin particles. The upper left plot of Fig. 4 shows the $`t`$ dependence of the differential cross section at (almost) the photoproduction point. We see that the Born model based on $`K`$ exchange alone produces a flatter $`t`$-dependence at larger $`t`$ than the Regge model and the data. This could in principle be corrected by introducing an extra hadronic form factor at the $`K\mathrm{\Lambda }N`$ vertex, but at the expense of one additional free parameter for the corresponding mass scale. However, this will not give the correct high energy Regge dependence ($`\frac{d\sigma }{d\mathrm{\Omega }}\propto s^{2\alpha (t)-1}`$ and the associated “shrinkage”), as was illustrated previously for the pion case where more data are available. A decent $`Q^2`$ dependence for this Born model is obtained with an electromagnetic monopole form factor with $`\mathrm{\Lambda }_K^2`$=0.68 GeV<sup>2</sup>, which is exactly the value corresponding to the kaon charge radius, and, in any case, much smaller than the value needed for the Regge model ($`\mathrm{\Lambda }_K^2`$=1.5 GeV<sup>2</sup>). It is clear that it is at small $`Q^2`$ values that one is most sensitive to the mass scale of the electromagnetic form factor as, at larger $`Q^2`$, both form factors show a $`\frac{1}{Q^2}`$ asymptotic behavior. The conclusions here are clear: a traditional Born model (with a $`\frac{1}{t-m_K^2}`$ standard Feynman propagator) seems at first sight to lead to a mass scale for the kaon electromagnetic form factor ($`\mathrm{\Lambda }_K^2`$=0.68 GeV<sup>2</sup>) compatible with the kaon charge radius. However, such a Born model is unable to reproduce the correct energy and $`t`$ dependences (unless, for the latter, corrected by an extra hadronic form factor). Furthermore, it is unable to take into account the role of the $`K^{*}`$ exchange, which would diverge (properly taking into account the exchange of higher-spin particles was actually one of the main motivations of our Regge model). Let us also note that a recent direct experimental determination of the proton electric form factor at Jefferson Lab clearly shows that the cut-off mass needed to fit the large $`Q^2`$ data is not consistent with the cut-off mass determined by the proton charge radius.
We now turn to polarization observables. We first show in Fig. 5 that the $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }`$ single recoil polarizations recently measured at Bonn in photoproduction are reasonably predicted by our model at forward angles (at larger angles, other contributions besides $`t`$-channel exchanges are expected: $`u`$-channel exchanges, resonance contributions at low energies, etc.). It is the interference between the $`K`$ and $`K^{*}`$ trajectories, arising from the $`e^{-i\pi \alpha _{K,K^{*}}(t)}`$ signature term in the $`K`$ and $`K^{*}`$ Regge propagators (see Ref.), which produces the non-zero polarization in the present model. The negative (positive) $`\mathrm{\Lambda }`$ ($`\mathrm{\Sigma }`$) recoil polarization is directly related to the relative sign of the $`g_{K^{*}N\mathrm{\Lambda }}`$ couplings with respect to the $`K`$ ones ($`g_{K\mathrm{\Lambda }N}<0`$ and $`g_{K\mathrm{\Sigma }N}>0`$ whereas $`g_{K^{*}(\mathrm{\Lambda },\mathrm{\Sigma })N}`$ are both negative).
At Jefferson Lab, with electron beams, double polarization observables will soon be accessed for the first time. These will put further stringent constraints on the models and will allow one to disentangle the contributions of the $`K`$ and the $`K^{*}`$ exchange. Typical behaviors for kinematics accessible at Jefferson Lab ($`E_e`$ = 6 GeV, $`\theta _e`$ = 13<sup>o</sup>, $`E_{\gamma ^{*}}`$ = 2.643 GeV) are shown in Fig. 6 for $`\mathrm{\Phi }`$=0<sup>o</sup> and 180<sup>o</sup> (the definitions and the notations are given in Appendix A of Ref., and the $`z`$-axis is chosen along the direction of the virtual photon). If only one Regge trajectory is retained, the induced polarization $`P_Y^{\prime }`$ (unpolarized electrons) vanishes: it is different from zero when the two trajectories interfere. It is worth noting that $`P_Y^{\prime }`$, which is the “extension” to electroproduction of the photoproduction recoil polarization of Fig. 5, is now positive (at $`\mathrm{\Phi }`$=0<sup>o</sup>). Indeed, in photoproduction, this observable is sensitive only to $`\sigma _T^Y`$ whereas in electroproduction the additional $`\sigma _L^Y`$, $`\sigma _{TT}^Y`$ and $`\sigma _{TL}^Y`$ cross sections enter. These latter contributions come with an opposite sign with respect to $`\sigma _T^Y`$ here. The sideways ($`P_X^{\prime }`$) and longitudinal ($`P_Z^{\prime }`$) transferred polarizations indicate the relative amount of $`K`$ and $`K^{*}`$ exchanges. The strong $`\mathrm{\Phi }`$ dependence evidenced in Fig. 6 can also be used to disentangle the various contributions. At large transfers, it was noted that, in Deep Inelastic Scattering, the strange quark can be used to follow the spin transfer from the probe to the emitted $`\mathrm{\Lambda }`$. It will be interesting to compare such approaches to our Regge calculations.
To conclude, this simple and elegant Regge trajectory exchange model accounts fairly well for the whole set of available data, including the rather accurate first measurement at Jefferson Lab. Since the model depends on very few parameters, the forthcoming data from Jefferson Lab will constitute a stringent test (that may eventually call for a more fundamental partonic description). Nevertheless, it already provides a good starting point to compute and analyze strangeness production in nuclei.
We would like to acknowledge useful discussion with P. Bydzovsky. This work was supported in part by the French CNRS/IN2P3, the French Commissariat à l’énergie Atomique and the Deutsche Forschungsgemeinschaft (SFB443).
no-problem/9904/cond-mat9904181.html
# Electronic structure and dimerization of a single monatomic gold wire
## Abstract
The electronic structure of a single monatomic gold wire is presented for the first time. It has been obtained with state-of-the-art ab-initio full-potential density-functional (DFT) LMTO (linearized muffin-tin orbital) calculations taking into account relativistic effects. For stretched structures in the experimentally accessible range the conduction band is exactly half-filled, whereas the band structures are more complex for the optimized structure. By studying the total energy as a function of unit-cell length and of a possible bond-length alternation we find that the system can lower its total energy by letting the bond lengths alternate, leading to a structure containing separated dimers with bond lengths of about 2.5 Å, largely independent of the stretching. However, only for fairly large unit cells (above roughly 7 Å) does the total-energy gain upon this dimerization become comparable with the energy cost of stretching. We propose that this, together with band-structure effects, is the reason for the larger interatomic distances observed in recent experiments. We find also that although spin-orbit couplings lead to significant effects on the band structure, the overall conclusions are not altered, and that finite Au<sub>2</sub>, Au<sub>4</sub>, and Au<sub>6</sub> chains possess electronic properties very similar to those of the infinite chain.
Also at: Centro Internacional de Fisica Teorica, Apartado Aereo 49490, Santafé de Bogotá, Colombia. Corresponding author: Michael Springborg, Universität Konstanz, Fakultät für Chemie, Universitätsstraße 10, Postfach 5560 - M 722, D-78457 Konstanz, Germany; fax: +49 7531 883139; email: mcs@chclu.chemie.uni-konstanz.de
Very recently, single chains of suspended gold atoms have been produced between two crystallographically oriented tips in transmission electron microscope (TEM) and in mechanically controllable break-junction (MCB) experiments. In the TEM experiment, conductance measurements were performed simultaneously with electron-microscope images of the atomic-size contact as the tips were separated. Before breaking, the contact consists of a monatomic nanowire made up of four gold atoms and shows a conductance of $`2e^2/h`$. The MCB experiment, although lacking direct imaging of the system, also gives evidence of the formation of a monatomic wire upon stretching.
The formation of such a monatomic chain structure for gold and its properties upon stretching were studied theoretically by Sørensen $`et`$ $`al.`$ by means of classical molecular-dynamics simulations, although for contacts between differently oriented tips. More recently, Torres $`et`$ $`al.`$ studied the thermodynamical foundations of the spontaneous thinning process as well as the stability of the monatomic gold wire for the tip orientation of the experiments, using classical many-body force simulations as well as ab-initio local-density (LDA) and generalized-gradient (GGA) density-functional electronic-structure calculations. They found that for bond lengths above 2.8 Å, the system will break into isolated Au<sub>2</sub> dimers. However, in the experimental studies, the finite monatomic wire shows an overwhelming stability upon stretching and is stable for bond lengths up to between $`3`$ and $`4`$ Å.
In the present work the problem of the stability of a single monatomic gold wire is addressed. The electronic structure of the system is presented for the first time and its behaviour upon stretching used to rationalize the whole scenario. In particular we show that, upon stretching, the ordering of the energy levels leads to a situation with one exactly half-filled electronic band so that dimerization (i.e., bond-length alternation) becomes favoured. We show how, as the system is stretched, the energy gain upon dimerization increases, but the nearest-neighbour interatomic distance at equilibrium stays constant at approximately $`2.5`$ Å. Finally, the importance of relativistic effects, not considered in any previous study, is addressed and the results for the infinite chains are compared with similar ones for finite Au<sub>2</sub>, Au<sub>4</sub>, and Au<sub>6</sub> chains.
We performed full-potential density-functional LMTO (linearized muffin-tin orbital) calculations with a local-density approximation (LDA) on isolated periodic infinite gold chains using the method described in Refs. This method is specifically targeted at isolated, infinite, periodic, helical polymers and chain compounds and has been applied successfully to a wide series of systems. The basis set consists of two sets of $`s,p`$ and $`d`$ functions on all sites; each function is defined numerically inside non-overlapping atom-centered spheres and in terms of spherical Hankel functions in the interstitial region. The two sets differ mainly in the decay constants of the latter. Scalar relativistic (SR) corrections were included in all the calculations presented here; in addition the effects of spin-orbit (SO) couplings were also considered.
For the calculations on the isolated, undimerized gold chain we assume that the nuclei lie in the $`(x,z)`$ plane with the $`z`$ axis parallel to the chain axis. We use 31 equidistant points in one half of the first Brillouin zone in order to ensure the appropriate convergence of all the physically relevant quantities and to properly describe the metallicity of the system. We also use the level-broadening scheme described in Ref. with an electronic temperature of $`0.068`$ eV ($`5\times 10^{-3}`$ Ry). For the dimerized chain we use 16 points in one half of the first Brillouin zone.
The structure of the undimerized chain can be described with the single bond length $`a`$, whereas the dimerized chain has two alternating bond lengths $`a_1`$ and $`a_2`$. From these we define an average bond length
$$\overline{a}=\frac{1}{2}(a_1+a_2)$$
(1)
and a dimerization coordinate
$$\delta =\frac{1}{2}(a_1-a_2).$$
(2)
The electronic band structures of a single monatomic gold wire with an interatomic distance of $`a=`$3.5 Å are shown in Fig. 1.a. This bond length is representative of those observed experimentally. The system is metallic with a half-filled band of symmetry $`\sigma `$. Below this, a broader occupied $`\sigma `$ band as well as (doubly degenerate) narrower $`\pi `$ and $`\delta `$ bands are found. Analyzing the orbitals, we find that the $`\sigma `$ bands have contributions from both $`6s`$ and $`5d_{z^2}`$ functions, whereas the $`\pi `$ and $`\delta `$ bands are largely due to $`d`$ functions.
Including the spin-orbit (SO) couplings leads to the band structures of Fig. 1.b. Except for the fact that the lower symmetry leads to a splitting of the doubly degenerate bands as well as to avoided crossings between various pairs of bands, the overall picture is not altered and, most important, the occurrence of one exactly half-filled band remains.
Finally, Fig. 1.c shows the energy levels for finite Au<sub>N</sub> chains consisting of $`N=2`$, $`4`$, and $`6`$ atoms, respectively. In these calculations we set all bond lengths equal to 3.5 Å, but we stress that these do $`not`$ correspond to the optimized values (for the dimer, the optimized bond length is in fact much shorter, as we shall see below). Instead, they are similar to the ones observed experimentally.
In Fig. 1.c it can be seen that although increasing the number of atoms leads to some broadening of the energy regions spanned by the orbitals, most of the features of the infinite systems are recovered already for these fairly small systems. Most notably, the fact that $`\sigma `$ orbitals are those appearing closest to the Fermi level and that $`\pi `$ and $`\delta `$ orbitals appear at deeper energies is true also for the finite chains. Furthermore, the Fermi energies of the finite systems are very similar to those of the infinite chains.
In Fig. 2 the total energy of the nanowire is shown as a function of the interatomic distance $`a`$. Empty squares (triangles) indicate values of the cohesive energy for which the relativistic contributions were included up to the SR (SO) level. At the SR level, an equilibrium distance of about $`2.65`$ Å is predicted, together with a cohesive energy at the minimum of $`1.35`$ eV/atom, in good agreement with the results of the other ab-initio calculations. The inclusion of the SO coupling leads to a reduction of the equilibrium length to $`2.55`$ Å and an increase of the cohesive energy by approximately $`0.10`$ eV/atom. Such contractions due to relativistic effects are often observed.
In both curves of Fig. 2 the energy values between $`2.7`$ and $`2.9`$ Å are not shown. For these, numerical problems obscured the calculations, i.e., the highest occupied $`\pi `$ band at $`k=0`$ (cf. Fig. 1.a) was lifted to such high energies that it became partly empty. This placed the Fermi level very close to a van Hove singularity in the density of states, leading to small discontinuities in the total-energy curve. Although the effects were very tiny, they were observable. In addition, they show how sensitive the system is to external perturbations. Below 2.7 Å the doubly degenerate $`\pi `$ band at $`k=0`$ in Fig. 1.a is lifted even further, so that this band is only partially filled and the broad $`\sigma `$ band is no longer exactly half-filled.
An exactly half-filled band, as found for $`a`$ above 2.9 Å, strongly favours a (Peierls) dimerization. Therefore, in Fig. 3 the energy gain upon dimerization is shown as a function of the dimerization coordinate $`\delta `$ for some selected values of the average bond length $`\overline{a}`$ that lie in the experimentally accessible region. In contrast to previous calculations, we find that the system possesses a stable structure consisting of alternating shorter and longer bonds, where the shorter bonds have lengths of about 2.5 Å. Furthermore, by comparing with the total-energy curves of Fig. 2 we see that the energy gain upon dimerization becomes comparable with the energy costs upon stretching only for $`\overline{a}`$ above about 3.5 Å.
Trans polyacetylene (CH)<sub>x</sub> is a well-known example of a material that possesses a Peierls dimerization, leading to alternating single and double bonds between the carbon atoms of the backbone. For this system, however, both the amplitude of the bond-length alternation and the related total-energy gain are much smaller than those found here for the gold chain.
In Fig. 4 we show the band structures for two representative values of $`\overline{a}`$, and for each case we show them for two values of the dimerization coordinate $`\delta `$, i.e., a very small one and the optimized value in Fig. 2. The occurrence of a band gap at the Fermi level due to the dimerization is readily recognized for the smaller value of $`\delta `$. For larger values of $`\delta `$, at least for the larger values of $`\overline{a}`$, the highest occupied orbital is no longer derived from the $`\sigma `$ band crossing the Fermi level for the undimerized structure, but from bands of $`\pi `$ or $`\delta `$ symmetry. Moreover, for these larger values of $`\overline{a}`$, the bands become fairly flat for the optimized structure which indicates that at those distances the electronic interactions between the dimers are only weak. For the sake of comparison we show in the figure also the single-particle energies for the isolated dimer with a bond length of 2.5 Å. These are seen to lie fairly close to the band regions for the optimized structures supporting that this structure essentially consists of weakly interacting Au<sub>2</sub> units.
The fact that the nature of the band gap changes from a direct gap between the two $`\sigma `$ bands to an indirect gap for larger values of $`\delta `$ is seen in Fig. 5, which shows the band gap as a function of $`\delta `$ for different larger values of $`\overline{a}`$. For smaller values of $`\overline{a}`$, the above-mentioned fact that the $`\sigma `$ band is no longer exactly half-filled and that the $`\pi `$ bands are partly emptied makes the smallest band gap vanish up to some $`\overline{a}`$-dependent threshold in $`\delta `$.
It is remarkable that for the smallest values of the dimerization coordinate $`\delta `$ all curves lie on top of each other. Assuming that the $`\sigma `$ band crossing the Fermi level for the undimerized structure can be described with a single Wannier function per atom and that only nearest-neighbour hopping integrals need to be taken into account, the band gap is
$$E_{\mathrm{gap}}=2|t_1-t_2|,$$
(3)
where $`t_1`$ and $`t_2`$ are the two hopping integrals. These may in turn be assumed to depend linearly on the bond lengths,
$$t_{1,2}=t_0-\alpha (a_{1,2}-\overline{a}).$$
(4)
$`\alpha `$ is an electron-phonon coupling constant. For any average bond length $`\overline{a}`$ one would expect both $`t_0`$ and $`\alpha `$ to depend on $`\overline{a}`$. However, since Eqs. (3) and (4) imply that
$$E_{\mathrm{gap}}=4\alpha |\delta |,$$
(5)
we obtain that the electron-phonon coupling constant is independent of $`\overline{a}`$, at least in the range considered here. Since the tendency towards dimerization is largely determined by the size of the electron-phonon coupling, this result implies that the strength of this tendency is independent of the unit-cell length. Finally, the fact that the curves of Fig. 5 for larger $`\delta `$ do not lie on top of each other is due to the above-mentioned change of the nature of the gap.
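A minimal tight-binding check of Eqs. (3)-(5) is sketched below (Python); the hopping scale $`t_0`$ and the coupling $`\alpha `$ are illustrative assumptions, not fitted values. The Bloch Hamiltonian of the dimerized chain gives bands $`\pm |t_1+t_2e^{-ik}|`$, whose minimum splitting, reached at $`k=\pi `$, is $`2|t_1-t_2|=4\alpha |\delta |`$.

```python
import numpy as np

def gap(t1, t2, nk=2001):
    """Band gap of a two-band chain with alternating hoppings t1, t2.

    The 2x2 Bloch Hamiltonian H(k) = [[0, h(k)], [conj(h(k)), 0]] with
    h(k) = t1 + t2*exp(-1j*k) has eigenvalues +/-|h(k)|; the gap is the
    minimum of 2|h(k)| over the Brillouin zone, reached at k = pi.
    """
    k = np.linspace(-np.pi, np.pi, nk)
    h = t1 + t2 * np.exp(-1j * k)
    return 2.0 * np.abs(h).min()

t0, alpha = 1.0, 0.8          # illustrative hopping scale and coupling
for delta in (0.01, 0.05, 0.10):
    t1, t2 = t0 - alpha * delta, t0 + alpha * delta
    print(delta, round(gap(t1, t2), 4), 4.0 * alpha * delta)  # both agree
```

The printed gap matches $`4\alpha |\delta |`$ for every $`\delta `$, independent of $`t_0`$, which is the single-Wannier-function statement made above.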
Experimentally, it is found that the finite chains stay stable up to bond lengths of about 4 Å, after which the chains break. Compared with the systems we have studied here, there are a number of differences. First, the experimentally studied systems consist of only about four atoms but, as Fig. 1 shows, already such systems possess electronic properties close to those of the infinite chain. Second, the experimental systems are not isolated but suspended between two tips, which may give further support for studying infinite chains. Third, and more importantly, the experimental systems are not static but subjected both to mechanical (stretching) forces and to electrostatic forces (due to the voltage between the tips). Here, we have only considered the static parts of the mechanical forces, which is justified, however, by the very different time scales of the structural relaxations and the applied forces. On the other hand, the applied electrostatic forces may influence the physical properties since they lead to an overall asymmetric potential along the chain, although the potential is weak (in the 10 meV range).
In total, our results suggest the following for the experimental systems. For a given structure with a certain average bond length $`\overline{a}`$ obtained through stretching, the system may attempt to lower its total energy upon structural relaxation. Here, our results show that the overall driving mode for this is to split the system into more or less strongly interacting dimers, although band-structure effects imply that this happens only for $`\overline{a}`$ above around 3 Å. In competition with this, the system may attempt to relax towards the shorter, optimized, undimerized structure. Although the latter is prohibited by the external mechanical forces, we may still compare the two relaxation modes; only for average bond lengths above about 3.5 Å is the former energetically preferred. Furthermore, due to the external voltage, the dimerization mode is weakened. Therefore, we suggest that only when $`\overline{a}`$ is so large that the energy gains of the two relaxation modes are comparable will the system change structure, i.e., split into fairly well separated dimers. This offers an explanation for the unusually long average bond lengths that are observed experimentally.
In conclusion, we have shown how it is necessary to analyze the electronic band structures of a single monatomic gold chain in order to gain more insight into the puzzling problem of its stability. In particular, we have shown that the dimerization is the most relevant structural relaxation to be considered for these chains. Furthermore, we have shown that relativistic corrections have significant effects on the band structures although, perhaps surprisingly, without changing the general picture. To our knowledge, relativistic effects have not been considered previously for such systems. Finally, we demonstrated that the finite chains have properties very similar to those of the infinite chain.
The authors want to thank Dr. Karla Schmidt for very useful comments about the relativistic corrections and valuable help and guidance concerning the use of the programs. This project was supported by the Deutsche Forschungsgemeinschaft (DFG) through project No. Sp439/6–1. Finally, the authors are grateful to Fonds der Chemischen Industrie for very generous support.
no-problem/9904/cond-mat9904289.html
# Geometry of fully coordinated, two-dimensional percolation
## I Introduction
The geometrical phase transition known as percolation (see, for a review, Stauffer and Aharony ) is appreciated by many to be an elegant and simply defined yet fully featured example of a second order phase transition. A number of variations of the original percolation problem were proposed as better models of some physical phenomena in the past. This includes the backbone percolation for studying electrical conduction through random media, polychromatic percolation for multi-component composites, and four-coordinated bond percolation for hydrogen-bonded water molecules. In particular, Blumberg et al and Gonzalez and Reynolds studied a random bond, site-correlated percolation problem they call four-coordinated percolation on the square lattice. They conclude that this problem belongs to the same universality class as the ordinary random percolation with the same set of (static) exponents.
In this paper, we revisit a problem in this realm, though not exactly the same one. We define fully coordinated percolation as the site percolation problem where only the occupied sites all of whose neighboring sites are also occupied can transmit connectivity. Since the random element is the site, this problem is slightly different from the bond problem referred to above. Thus, after generating a random site configuration with the independent site occupation probability p, we only select those occupied sites with all 4 neighbors also occupied on the square lattice and study the clusters formed by nearest neighbor connections among those sites. It should be noted that this problem is distinct from the so-called bootstrap percolation (see, e.g., ) where sites of less connectivity are iteratively removed. In our problem, no iterative procedures are involved; rather, sites of less than full connectivity are marked first and then all of them removed at one time.
This problem arose in the context of studying the vibrational properties of fractal structures tethered at their boundaries. In that problem, scaling was observed in the normal mode spectrum whose origin may lie in the ratio of two length scales, one of which is the size of highly connected regions of a cluster. In this context, we have embarked on revisiting the characteristics of randomly generated, but highly connected geometrical structures.
In the next section, we summarize the Monte Carlo and finite size scaling analyses of the static critical properties of fully coordinated percolation. In Section 3, we discuss the normal modes of the transition probability matrix for tracer diffusion on the structure using the methods of Arnoldi and Saad (see, e.g., ). Then in Section 4, we describe the classification of the cluster sites into external boundary, internal boundary, and interior ones and use these to show the major distinctions between the critical clusters of ordinary and fully coordinated percolation. We summarize the results in the final section.
## II STATIC CRITICAL BEHAVIOR
To determine the static critical behavior of fully coordinated percolation we first performed Monte Carlo simulations on a square lattice in two dimensions. Each site is occupied with probability $`p`$ independently, and the fully coordinated sites are then marked and their connectivity searched. Lattices of $`L^2`$ sites with $`L=256`$, $`512`$, $`1024`$, and $`2048`$ were constructed. For each lattice size we further made a thousand realizations wherein a different random number seed was used on every run. The unnormalized susceptibilities, i.e., $`\mathrm{\Xi }(L)={\sum _s}^{}s^2\widehat{n}_s`$ where $`\widehat{n}_s`$ is the number of clusters of size $`s`$, are calculated on each run and are then summed at the end of the thousand realizations. The average susceptibilities $`\chi `$ are calculated by dividing the sum by the number of realizations and the lattice size. The prime on the summation indicates that the contribution of the largest cluster to $`\chi `$ near and above what we perceived to be the critical probability $`p_c`$ has been subtracted, as usual.
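A compact sketch of one way to implement this measurement is given below (Python with numpy/scipy). The open-boundary treatment and the unconditional subtraction of the largest cluster are simplifications assumed here for brevity, not necessarily the exact choices behind the data discussed in the text.

```python
import numpy as np
from scipy.ndimage import label

def fc_susceptibility(L, p, rng):
    """One realization of the (unnormalized) susceptibility for fully
    coordinated site percolation on an L x L square lattice."""
    occ = rng.random((L, L)) < p
    # A site transmits connectivity only if it and all 4 neighbours are
    # occupied; with open boundaries, edge sites never qualify.
    fc = occ.copy()
    fc[1:, :] &= occ[:-1, :]
    fc[:-1, :] &= occ[1:, :]
    fc[:, 1:] &= occ[:, :-1]
    fc[:, :-1] &= occ[:, 1:]
    fc[0, :] = fc[-1, :] = fc[:, 0] = fc[:, -1] = False
    labels, n = label(fc)                    # nearest-neighbour clusters
    if n == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]  # drop the empty background
    return float(np.sum(sizes ** 2) - sizes.max() ** 2)

rng = np.random.default_rng(1)
chi = np.mean([fc_susceptibility(256, 0.88, rng) for _ in range(10)]) / 256 ** 2
print(chi)
```

The default 4-connectivity of `scipy.ndimage.label` matches the nearest-neighbour cluster definition used here; averaging over realizations and dividing by $`L^2`$ gives $`\chi `$ as defined above.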
In Fig. 1 we plot the average susceptibilities against the probability $`p`$ for the corresponding lattice sizes. The data correspond to the values of $`L=256`$, $`512`$, $`1024`$, and $`2048`$ from the lowest to highest. We can see that the effects due to the finite sizes of the lattices are exhibited clearly. In particular, there are well-defined peaks which scale with lattice sizes as
$$\chi (p_{max},L)\propto L^{\gamma /\nu }$$
(1)
where the known exact value of $`\gamma /\nu `$ for ordinary percolation is $`\frac{43}{24}\approx 1.7917`$. To demonstrate the precision of our calculations, we plot $`\chi (p_{max},L)`$ against the corresponding lattice sizes in the inset of Fig. 1. Notice that the data follow an excellent power law, leading to a least squares fit of $`\chi (p_{max},L)\propto L^{1.7911}`$. The value of $`\gamma /\nu `$ found is identical with the ordinary percolation value to within about $`0.03\%`$. This result confirms previous work stating that fully coordinated percolation and ordinary percolation belong to the same static universality class.
The critical behavior of susceptibility is known to scale as
$$\chi (p,\mathrm{\infty })\propto |p-p_c|^{-\gamma }$$
(2)
where for ordinary percolation $`\gamma _o=\frac{43}{18}\approx 2.3889`$. Notice however that in Fig. 1 the peaks lie very near $`p=1.0`$, so that data to the right of the peaks exist in only a small probability interval. In our simulations, we therefore use $`\chi `$ only to the left of the peaks.
Since the scaling relation in Eq. (2) is expected only for infinite lattices, we use only the data taken from $`L=2048`$ to test it. Since there are two unknowns in Eq. (2), we first choose a particular $`p_c`$ and make a fit to see what value of $`\gamma _{exp}`$ is obtained. If we choose $`p_c=0.886`$ we get $`\gamma _{exp}=2.4004`$. The correlation coefficient, $`|R|`$, for this fit is $`0.99999`$. The discrepancy between $`\gamma _{exp}`$ and $`\gamma _o`$ is around $`0.481\%`$. Choosing $`p_c=0.8858`$ we obtain $`\gamma _{exp}=2.3864`$. The discrepancy this time is around $`0.10\%`$ and $`|R|=1`$. So we have an exact fit for this value of $`p_c`$ and the $`\gamma _{exp}`$ found is very close to the $`\gamma _o`$. Choosing $`p_c=0.885`$ we obtain $`\gamma _{exp}=2.3302`$ with an $`|R|=0.99999`$. The discrepancy for this value of $`p_c`$ is $`2.46\%`$. Fits done with $`p_c`$ between $`0.885`$ and $`0.886`$ gave $`|R|=1`$; however, the $`\gamma _{exp}`$ found when $`p_c=0.8858`$ gave the closest value to $`\gamma _o`$. This allows us to conclude that $`\gamma `$ for fully coordinated percolation is the same as that for ordinary percolation while also giving an estimate for the value of $`p_c`$ close to $`0.8858`$. (We will state the experimental uncertainty for $`p_c`$ after all our analyses are presented.)
From the fit done to examine the scaling in Eq. (1) we could further conclude that $`\nu `$ for fully coordinated percolation should be the same as that for ordinary percolation. This again confirms the statement that fully coordinated percolation is in the same static universality class as ordinary percolation. Another universal constant often used to characterize ordinary percolation is the amplitude ratio $`C_+/C_{}`$ of susceptibility $`\chi `$ (whose value is about 200 in $`d=2`$). In fully coordinated percolation, this quantity is unfortunately difficult to calculate accurately because the critical region for $`p>p_c`$ is very small (see below). When we constrain the exponent $`\gamma `$ to be close to $`\gamma _o`$ and use the $`p_c`$ estimated in this work, however, we find that $`C_+/C_{}`$ is of $`𝒪(10^2)`$, which is consistent with the above observation as well.
The contribution of the largest cluster to the susceptibility is not significant when $`p<p_c`$. However, when $`p\approx p_c`$ a significant number of sites will belong to the largest cluster, and when $`p>p_c`$ the largest cluster is dominant in the whole lattice. The average susceptibility contribution due to this largest cluster is $`\chi _1=\sum s_{max}^2/(L^2N)`$, where the summation is over the $`N=1000`$ realizations and $`s_{max}`$ is the size of the largest cluster. The fractal dimension, $`d_f`$, can be obtained from $`s_{max}`$ by
$$\overline{s}_{max}(p_c)\sim L^{d_f}$$
(3)
where $`\overline{s}_{max}(p_c)`$ is the mean size of the largest cluster at $`p_c`$. $`\chi _1`$ should therefore scale as
$$\chi _1\sim L^{2d_f-2}.$$
(4)
For ordinary percolation on a two-dimensional lattice it is known that $`d_f=91/48`$ and $`y=2d_f-2\approx 1.7917`$. For fully coordinated percolation, the scaling in Eq. (4) has two unknowns, $`p_c`$ and $`d_f`$. Similar to what we have done when examining the scaling in Eq. (2), we choose trial values for $`p_c`$ and then perform a least squares fit to obtain the corresponding $`y=2d_f-2`$. By looking for the range of trial $`p_c`$ that maximizes the regression coefficient $`|R|`$, we arrive at an estimate of $`p_c`$ close to $`0.8845`$, where $`|R|=1`$ and $`y=1.7855`$. The variation of $`|R|`$ is about 2 parts in $`10^5`$ if $`p_c`$ is varied by 0.0002, always with less than 1% deviation from the ordinary percolation value of $`y`$. From these results we conclude that $`d_f`$ for fully coordinated percolation is the same as that for ordinary percolation, and we obtain an estimate of about $`0.8845`$ for $`p_c`$.
In addition to the above, we have also performed the scaling analysis of the quantity $`\chi _1(p,L)`$, as both $`p`$ and $`L`$ are varied, in the form of
$$\chi _1=L^{2d_f-2}g(|p-p_c|^\nu L)$$
(5)
where $`g(|p-p_c|^\nu L)`$ is a scaling function. Using the exactly known ordinary percolation values of the exponents $`d_f`$ and $`\nu `$ (as they have been shown to be the same for fully coordinated percolation above), we obtained the maximum data collapsing in the range of $`0.884<p_c<0.885`$.
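A sketch of such a collapse test, with `curves` a hypothetical container of the measured $`(p,\chi _1)`$ points for each lattice size, could read:

```python
import numpy as np
import matplotlib.pyplot as plt

D_F, NU = 91.0 / 48.0, 4.0 / 3.0     # ordinary percolation exponents

def collapse(curves, p_c):
    """Plot chi_1(p, L) in the scaled form of Eq. (5)."""
    for L, (p, chi1) in curves.items():
        x = np.abs(p - p_c) ** NU * L    # scaling variable |p - p_c|^nu L
        y = chi1 / L ** (2 * D_F - 2)    # remove the leading L dependence
        plt.loglog(x, y, 'o', label=f'L={L}')
    plt.legend()
    plt.show()

# Trying p_c over a grid such as np.arange(0.883, 0.887, 0.0005) and
# judging the overlap of the curves localizes the critical probability.
```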
Independently of the above analyses based on the fully coordinated clusters obtained by Monte Carlo simulations of fixed-size square grids, we have also performed Monte Carlo simulations by growing fully coordinated clusters starting from a seed site, using a variant of the breadth-first search algorithm. This latter approach has the advantage that there are no obvious finite-size effects and that statistics taken while a cluster is still growing automatically represent a partial sum. That is, we start growing such clusters 10,000 times at each of $`p=0.880`$, 0.884, 0.885, and 0.890, and keep track of how many of them are still growing at predetermined size checkpoints ($`2^n`$ where $`n=1`$, 2, … , 15 in our case). This number, $`N_s`$ say, represents the partial sum
$$N_s/N_1=\frac{1}{p}\sum _{s^{\prime }\geq s}s^{\prime }n_{s^{\prime }}$$
(6)
Since the normalized number of size-$`s`$ clusters, $`n_s`$, scales as $`s^{-\tau }f(ϵs^\sigma )`$ where $`\tau =187/91`$ and $`\sigma =36/91`$, we expect that $`N_s`$ scales as
$$N_s\sim s^{-(\tau -2)}\widehat{f}(ϵs^\sigma )$$
(7)
near $`p_c`$ and for large $`s`$. In particular, at $`p_c`$, this quantity should be essentially constant, independent of (large) $`s`$, the residual power $`\tau -2=5/91`$ being small. The numerical results are shown in Fig. 2, where the data correspond, from highest to lowest, to $`p=0.890`$, 0.885, 0.884, and 0.880. The horizontal dashed line drawn to guide the eye makes it clear that the data for $`p=0.885`$ best approximate a horizontal line as $`s\mathrm{}`$, suggesting that a good estimate of $`p_c`$ would be 0.885.
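The growth procedure itself can be sketched as follows (a simplified version: site occupations are decided lazily with probability $`p`$, and a run that reaches the largest checkpoint is counted as still growing at all checkpoints):

```python
import random
from collections import deque

def grow_fc_cluster(p, s_max=2**15):
    """Grow one fully coordinated cluster from a seed by breadth-first
    search; return its size, capped at s_max."""
    occ = {}                                 # lazily decided occupations
    def occupied(site):
        if site not in occ:
            occ[site] = random.random() < p
        return occ[site]
    def neighbours(site):
        x, y = site
        return ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
    def fully_coordinated(site):
        return occupied(site) and all(occupied(nb) for nb in neighbours(site))
    seed = (0, 0)
    if not fully_coordinated(seed):
        return 0
    cluster, queue = {seed}, deque([seed])
    while queue and len(cluster) < s_max:
        for nb in neighbours(queue.popleft()):
            if nb not in cluster and fully_coordinated(nb):
                cluster.add(nb)
                queue.append(nb)
    return len(cluster)

sizes = [grow_fc_cluster(0.885) for _ in range(10000)]
N_s = {2**n: sum(s >= 2**n for s in sizes) for n in range(1, 16)}
```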
We now consider all the above results together. The results from scaling in Eq. (2) indicate a range $`0.8850.886`$, and those from scaling in Eq. (4) indicate $`0.88440.8847`$, while those from scaling in Eq. (5) hints $`p_c`$ to be in the $`0.8840.885`$ interval. Another result that could also be used are the values of $`p`$ for the peaks in Fig. 1, which vary from $`0.8841`$ to $`0.8844`$ (with the peak for the largest grid $`L=2048`$ occurring at $`p_{peak}=0.8844`$). Combining all these results, our final estimate is $`p_c=0.885\pm 0.001`$.
## III DYNAMIC CRITICAL BEHAVIOR
By dynamic critical behavior here we simply mean the asymptotic long-time behavior of diffusion taking place on an incipient infinite cluster of fully coordinated percolation, or its equivalent scalar elastic behavior. This represents the simplest kind of dynamics associated with these complicated geometrical objects and is mainly reflected in the two dynamic critical exponents called $`d_s`$ (spectral dimension) and $`d_w`$ (walk dimension).
It is well known that the return-to-the-starting-point probability of the random walk, $`P(t)`$, in the long-time limit obeys the power law,
$$P(t)\sim t^{-d_s/2},$$
(8)
where $`d_s`$ is the spectral dimension of the walk. In a fractal medium, $`d_s`$ is less than the space dimension $`d`$, because the progressive displacement of the random walker further from the starting point is hampered by its encounter with the irregularities of the medium at all scales. Thus, $`d_s`$ is expected to be greater for environments that provide higher connectivity at large length scales, independently of the fractal dimension itself, which mainly measures the overall size scaling, i.e., how many sites are connected, not how well those sites are connected to each other.
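A direct estimate of $`d_s`$ from Eq. (8) follows from simulating blind-ant walks (the rule is defined below) on a given cluster and recording the fraction of walkers back at the starting site after $`t`$ steps; a minimal sketch, with `cluster` a hypothetical set of $`(x,y)`$ lattice sites, is:

```python
import numpy as np

def return_probability(cluster, t_max=4096, n_walk=20000, seed=0):
    """Estimate P(t): probability that a blind-ant walker on `cluster`
    is back at its starting site after t steps."""
    rng = np.random.default_rng(seed)
    start = next(iter(cluster))
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    counts = np.zeros(t_max + 1)
    for _ in range(n_walk):
        x, y = start
        for t in range(1, t_max + 1):
            dx, dy = steps[rng.integers(4)]
            if (x + dx, y + dy) in cluster:   # blind ant: stay put otherwise
                x, y = x + dx, y + dy
            if (x, y) == start:
                counts[t] += 1
    return counts / n_walk

# d_s then follows from the asymptotic slope of log P(t) versus log t,
# since P(t) ~ t**(-d_s / 2).
```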
For media with long-range loops, $`d_f`$ (fractal dimension), $`d_s`$, and $`d_w`$ are not independent but are expected to obey the well-known Alexander and Orbach scaling law
$$d_s=2d_f/d_w.$$
(9)
For this reason, we only calculate $`d_s`$ here, though both $`d_s`$ and $`d_w`$ can be conveniently calculated by numerically studying the transition probability matrix W which represents the random walk on a specific fractal medium. Our calculation in this work is only one aspect of such an analysis: finite size scaling of the dominant non-trivial eigenvalue, which describes the longest finite time scale of the Brownian process. This approach has already been described in detail elsewhere, and thus we merely state the main feature and then immediately report our specific numerical results.
The matrix W is constructed with elements $`W_{ij}`$ equal to the hopping probability per step (equal to $`\frac{1}{4}`$ here) for available nearest neighbor sites $`i`$ and $`j`$. For each neighbor site which is not present, a probability of $`\frac{1}{4}`$ is added to the probability of not taking a step during one time period; this is called the blind ant rule. Many large matrices W are obtained by Monte Carlo simulation (by growing a fully coordinated percolation cluster from a seed site and stopping the growth when a predetermined desired size is reached), and their largest eigenvalues are numerically obtained by the so-called Arnoldi-Saad method. The dominant non-trivial eigenvalue $`\lambda _1`$ is the largest eigenvalue just below the stationary eigenvalue 1, and it is known to satisfy the following finite size scaling law:
$$|\mathrm{ln}\lambda _1|\approx 1-\lambda _1\sim S^{-2/d_s}.$$
(10)
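The construction of W and the extraction of $`\lambda _1`$ can be sketched as below; here the ARPACK routine in scipy, an implicitly restarted Arnoldi iteration, stands in for the Arnoldi-Saad method:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigs

def blind_ant_matrix(cluster):
    """Transition matrix W of the blind-ant walk on a cluster,
    given as a list of (x, y) sites."""
    index = {site: i for i, site in enumerate(cluster)}
    W = lil_matrix((len(cluster), len(cluster)))
    for site, i in index.items():
        x, y = site
        stay = 1.0
        for nb in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if nb in index:
                W[index[nb], i] = 0.25   # hop to an available neighbour
                stay -= 0.25
        W[i, i] = stay                   # missing neighbours add to staying put
    return W.tocsr()

def lambda_1(cluster):
    """Dominant non-trivial eigenvalue: the eigenvalue just below the
    stationary eigenvalue 1."""
    vals = eigs(blind_ant_matrix(cluster), k=2, which='LR',
                return_eigenvectors=False)
    return sorted(vals.real)[0]
```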
Shown in Fig. 3 are our results from such an analysis. We have generated at least 1000 independent realizations of the underlying fully coordinated percolation clusters for sizes $`S=1250`$, 2500, 5000, 10000, and 20000 at each of the three nominal probabilities $`p=0.883`$, 0.885, and 0.887, and numerically obtained $`\lambda _1`$ for each cluster. The main part of Fig. 3 shows the data from $`p=0.885`$, the value shown to be closest to $`p_c`$ in this work. The figure shows an excellent power law fit (regression coefficient of -0.99993) to Eq. (10) with the exponent $`2/d_s=1.486\pm 0.01`$. This power translates to $`d_s=1.346\pm 0.011`$, which is close to but definitely larger than the corresponding ordinary percolation value of $`d_s^{(O)}=1.30\pm 0.02`$ estimated by many independent calculations. For comparison, a loopless variant of percolation has exactly the same static exponents as ordinary percolation but has $`d_s\approx 1.22`$ in two dimensions, about twice as much deviation from ordinary percolation, in the opposite direction, as the present fully coordinated percolation problem.
In the inset for Fig. 3, we show a normalized $`\lambda _1`$ by plotting $`(1-\lambda _1)S^{2/d_s^{(o)}}`$ where the circles are for $`p=0.887`$, squares for $`p=0.885`$, diamonds for $`p=0.883`$ and the solid line is a horizontal line (for ordinary percolation) to guide the eye. In all cases, the standard errors of the mean for each set of data are substantially smaller than the size of the symbols used in the figure. The distribution of $`\lambda _1`$ in each case appears to be Gaussian with the standard deviations scaling in the same way as the means.
From these results, we conclude that, though the numerical differences are small, there is a high likelihood that the fully coordinated percolation clusters are significantly different from the ordinary percolation counterparts even at long length scales. In the next section, we show that this analysis is vindicated by exposing one dramatic difference in the cluster morphology which will not be obvious to an uncritical observer.
## IV CLUSTER GEOMETRY
In this section we examine the geometry of fully coordinated percolation clusters more closely. First, we present Fig. 4, which shows in grey scale the sites of (a) a fully coordinated percolation cluster and (b) an ordinary percolation cluster at their respective $`p_c`$. The overall visual impression is that they are very similarly shaped even down to the details of the boundaries and internal holes. Their shapes are also essentially independent of the underlying lattice anisotropy (as Fig. 4(a) has actually been rotated by 45 degrees with respect to the coordinate axes of the square lattice). However, the number and distributions of the especially dark points are evidently quite distinct in (a) and (b). They cluster more and are much more abundant in (a) than in (b). These sites are actually the interior or fully coordinated sites in the internal part of the cluster. The remaining sites (shaded grey) are either the external hull sites or internal boundary sites.
In Fig. 5 a quantitative examination is made of the different classes of sites of the two kinds of clusters. In the main part of the figure, the average numbers of two kinds of sites are shown, interior (diamonds for fully coordinated percolation, crosses for ordinary percolation) and external (squares for fully coordinated percolation, plusses for ordinary percolation). It is clear that the interior sites are more than 3 times as abundant in fully coordinated percolation, as is visually suggested by Fig. 4. Though this is primarily a local effect due to the full coordination rule, they do have a multiplicative effect on long-range connectivity and thus may well be the source of the small difference in the value of $`d_s`$.
Of course just the fact that there are more than 3 times as many interior sites (and, correspondingly, far fewer hull sites) in fully coordinated percolation must have quantitative consequences (even if not qualitative) for any process on the cluster which depends on the degree of connectivity rather than just on the number of connected sites. An example of the effect of the different numbers of hull sites may be in oxidation or catalysis of a material through the external embedding phase, or even an irregularly shaped breakwater in the form of the external boundary of a percolation cluster. The external sites are those which are sometimes called hull sites, and they are known to scale with an exact exponent in ordinary percolation as
$$N_{hull}\sim S^{d_h/d_f}$$
(11)
where $`d_h^{(o)}=7/4`$ and thus the exponent is $`84/91=0.923\mathrm{}`$ for ordinary percolation. Since this is less than 1, these sites comprise a smaller and smaller fraction of the cluster as $`s\mathrm{}`$, and the remaining sites (i.e., interior and internal boundary sites) eventually dominate the whole cluster. This is already evident from the greater slopes, close to 1, for the interior sites in Fig. 5. The linear regression fits for the hull sites in Fig. 5 indicate slopes of about 0.913 for ordinary percolation and 0.922 for fully coordinated percolation with essentially perfect fits, again reinforcing the conclusion that they show the same static critical behavior.
## V SUMMARY AND CONCLUSION
In summary, we have studied both static and dynamic critical behaviors associated with a model of the highly connected regions of a disordered cluster. The model is a site variant of four-coordinated percolation on the square lattice, which we call fully coordinated percolation. While the bond version was studied for static critical behavior, neither the bond nor the site version had previously been studied for the dynamic behavior, to the best of our knowledge. We have used various methods such as Monte Carlo simulations, finite-size scaling and Arnoldi-Saad approximate diagonalization of large random matrices for this purpose.
Though all indications are that the static behavior of this model is exactly the same as the ordinary percolation (as previous work suggested), the dynamic behavior shows a small but significant difference in the values of the universal critical exponents. We have looked for the cause of this difference and found a three-fold increase in the number and significantly enhanced clustering of the interior sites (i.e., those not on the exterior or internal boundaries) and the associated decrease in the number of boundary sites. Thus, although the deviations from ordinary percolation in terms of the values of the dynamic critical exponents are not large, there will be rather significant differences in any processes that depend sensitively on those numbers. Possible examples of such processes include the oxidation of a material through the external embedding phase and the vibrational normal modes with boundary conditions such as clamping or tethering of the external boundaries (through the contrast in elastic constants of embedding and embedded materials, for example).
## ACKNOWLEDGMENTS
One of us (JHK) wishes to thank the Purdue University Department of Physics for the hospitality during his visit there when part of the work was done. We are also grateful to D. Stauffer and R. Ziff for insightful remarks.
# Test of high–energy interaction models using the hadronic core of EAS
## 1 Proem
The interpretation of extensive air shower (EAS) measurements in the PeV domain and above relies strongly on the hadronic interaction model applied when simulating the shower development in the Earth’s atmosphere. Such models are needed to describe the interaction processes of the primary particles with the air nuclei and the production of secondary particles.
In the EAS Monte Carlo codes the electromagnetic and weak interactions can be calculated with good accuracy. Hadronic interactions, on the other hand, are still uncertain to a large extent. A wealth of data exists on particle production from $`p\overline{p}`$ colliders up to energies which correspond to 2 PeV/c laboratory momentum and from heavy ion experiments up to energies of 200 GeV/nucleon. However, almost all collider experiments do not register particles emitted in the very forward direction where most of the energy flows. These particles carry the preponderant part of the energy and, therefore, are of utmost importance for the shower development of an EAS. Since most of these particles are produced in interactions with small momentum transfer, QCD is at present not capable of calculating their kinematic parameters.
Many phenomenological models have been developed to reproduce the experimental results. Extrapolations to higher energies, to small angles, and to nucleus–nucleus collisions have been performed under different theoretical assumptions. The number of participant nucleons in the latter case is another important parameter which influences the longitudinal development of a shower. Many EAS experiments have used specific models to determine the primary energy and to extract information about the primary mass composition. Experience shows that different models can lead to different results when applied to the same data.
Therefore, it is of crucial importance to verify the individual models experimentally as thoroughly as possible. When planning the KASCADE experiment, one of the principal motivations to build the hadron calorimeter was the intention to verify available interaction models by studying the hadronic central core. In the Monte Carlo code CORSIKA five different interaction codes have been implemented and placed at the users’ disposal. By examining the hadron distribution in the very centre these interaction models are tested. The propagation code itself, viz. hadron transport, decay modes, scattering etc., is checked by looking at the hadron lateral distribution further outside, up to distances of 100 m from the core.
## 2 The apparatus
The KASCADE experiment consists of an array of 252 stations for electron and muon detection and a calorimeter in its centre for hadron detection and spectroscopy. It has been described in detail elsewhere. The muon detectors in the array are positioned directly below the scintillators of the electron detectors and are shielded by slabs of lead and iron corresponding to 20 radiation lengths in total. The absorber imposes an energy threshold of about 300 MeV for muon detection.
The calorimeter is of the sampling type, the energy being absorbed in an iron stack and sampled in eight layers by ionisation chambers. Its performance is described in detail by Engler et al. A sketch of the set–up is shown in Fig. 1. The iron slabs are 12–36 cm thick, becoming thicker in deeper parts of the calorimeter. Therefore, the energy resolution does not scale as $`1/\sqrt{E}`$, but is rather constant, varying slowly from $`\sigma /E=20\%`$ at 100 GeV to 10% at 10 TeV. The concrete ceiling of the detector building is the last part of the absorber, and the ionisation chamber layer below acts as tail catcher. In total, the calorimeter thickness corresponds to 11 interaction lengths $`\lambda _I`$ for vertical hadrons. On top, a 5 cm lead layer filters off the electromagnetic component to a sufficiently low level.
The liquid ionisation chambers use the room temperature liquids tetramethylsilane (TMS) and tetramethylpentane (TMP). A detailed description of their performance can be found elsewhere. Liquid ionisation chambers exhibit a linear signal behaviour with a very large dynamic range. The latter is limited only by the electronics to about $`5\times 10^4`$ of the amplifier rms–noise, i.e., the signal of one to more than $`10^4`$ passing muons, equivalent to 10 GeV deposited energy, is read out without saturation. This ensures that the energy of individual hadrons is measured linearly up to 20 TeV. At this energy, containment losses are at a level of two percent. They rise with energy, and at 50 TeV signal losses of about 5% have to be taken into account. The energy calibration is performed by means of through–going muons, taking their energy deposition as standard. Electronic calibration is repeated at regular intervals of six months by injecting a calibration charge at the amplifier input. A stability of better than 2% over two years of operation has been attained. The detector signal is shaped to a slow signal with $`10\mu `$s risetime in order to reduce the amplifier noise to a level less than that of a passing muon. On the other hand, this makes a fast external trigger necessary.
The principal trigger of KASCADE is formed by a coincident signal in at least five stations in one subgroup of 16 stations of the array. This sets the energy threshold to a few times $`10^{14}`$ eV depending on zenith angle and primary mass. An alternative trigger is generated by a layer of plastic scintillators positioned below the third iron layer at a depth of $`2.2\lambda _I`$. These scintillators cover two thirds of the calorimeter surface and deliver timing information with 1.5 ns resolution.
## 3 Simulations
EAS simulations are performed using the CORSIKA versions 5.2 and 5.62. The interaction models chosen in the tests are VENUS version 4.12, QGSJET and SIBYLL version 1.6. We have chosen two models which are based on the Gribov Regge theory because their solid theoretical grounding best allows extrapolation from collider measurements to higher energies, forward kinematical regions, and nucleus–nucleus interactions. The DPMJET model, at the time of these investigations, was not available in CORSIKA in a stable version. In addition, SIBYLL was used, a minijet model that is widely used in EAS calculations, especially as the hadronic interaction model in the MOCCA code. A sample of 2000 proton and iron–induced showers was simulated with SIBYLL and 7000 p and Fe events with QGSJET. With VENUS, 2000 showers were generated for each of the p, He, O, Si and Fe primaries. The showers were distributed in the energy range of 0.1 PeV up to 31.6 PeV according to a power law with a differential index of -2.7 and were equally spread in the interval of $`15^o`$ to $`20^o`$ zenith angle. In addition, the change of the index to -3.1 at the knee position, which is assumed to be at 5 PeV, was taken into account. The shower axes were spread uniformly over the calorimeter surface extended by 2 m beyond its boundary.
In order to determine the signals in the individual detectors, all secondary particles at ground level are passed through a detector simulation program using the GEANT package. By these means, the instrumental response is taken into account and the simulated events are analysed in the same way as the experimental data, an important aspect to avoid systematic biases by pattern recognition and reconstruction algorithms.
## 4 Shower size determination
The data evaluation proceeds in three steps. In a first step the shower core and its direction of incidence are reconstructed and, using the single muon calibration of the array detectors, their energy deposits are converted into numbers of particles. In the next stage, iterative corrections for electromagnetic punch–through in the muon detectors and muonic energy deposits in the electron detectors are applied. The particle densities are fitted with a likelihood function to the Nishimura Kamata Greisen (NKG) formula. A radius parameter of 89 m and 420 m is used for electrons and muons, respectively. Because of limited statistics, the radial slope parameter (age) is fixed for the muons. The radius parameters deviate from the parameters originally proposed, but have been found to yield the best agreement with the data. The muon fit extends from 40 m to 200 m, the lower cut being imposed by the strong hadronic and electromagnetic punch–through near the shower centre. The upper boundary reflects the geometrical acceptance. In a final step, the muon fit function is used to correct the electron numbers and vice versa.
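For illustration, a minimal sketch of such a likelihood fit, using the standard NKG parametrization and a Poisson likelihood, is given below; the detector response, the punch-through corrections and the exact fit ranges of the actual analysis are omitted, and the function names are ours:

```python
import numpy as np
from scipy.special import gamma

def nkg_density(r, N, s, r_m):
    """Standard NKG lateral density (particles per m^2) at core distance
    r for total size N, age s and radius parameter r_m."""
    c = gamma(4.5 - s) / (2 * np.pi * gamma(s) * gamma(4.5 - 2 * s))
    x = r / r_m
    return (N / r_m**2) * c * x**(s - 2.0) * (1.0 + x)**(s - 4.5)

def neg_log_likelihood(params, r_i, n_i, area):
    """Poisson likelihood for particle counts n_i observed in detectors
    of a given area at core distances r_i (electron fit, r_m = 89 m)."""
    N, s = params
    mu = nkg_density(r_i, N, s, r_m=89.0) * area   # expected counts
    return np.sum(mu - n_i * np.log(mu))

# Minimizing neg_log_likelihood (e.g. with scipy.optimize.minimize) yields
# the size N_e and age s; integrating the fitted muon function between
# 40 m and 200 m gives the truncated muon number discussed below.
```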
The electromagnetic and muonic sizes $`N_e`$ and $`N_\mu `$ are obtained by integrating the final NKG fit functions. For the muons, alternatively, integration within the range of the fit results in a truncated muon number $`N_\mu ^{}`$. This observable has the advantage of being free of systematic errors caused by the extrapolation outside the experimental acceptance. As demonstrated in Fig. 2, it yields a good estimate of the primary energy irrespective of primary mass. To a certain extent, it is an integral variable indicating the sum of particles produced in the atmosphere independently of longitudinal cascade development. In the lefthand graph, the simulated values for the QGSJET model are plotted together with fitted straight lines. They show that in the $`N_\mu ^{}`$ range given, the primary energy is proportional to the muon number, $`E_0\propto N_\mu ^{0.98}`$, with an error in the exponent of 0.06. This holds for the selected showers hitting the central detector with their axes. (For all showers falling into the area of the array a slightly higher exponent of 1.10 is found.)
It has been checked that the particle numbers are evaluated correctly up to values of $`\mathrm{lg}N_\mu ^{}=5`$. At the highest energy of 100 PeV simulations indicate that $`N_\mu ^{}`$ is overestimated by about 10%. Experimental studies of $`N_\mu `$ sizes at this energy show irregularities in the muon size distribution which may indicate an overestimation of 20%. How well the different models agree with each other is shown on the righthand part of Fig. 2, where the corresponding fitted lines are presented. It is seen that the SIBYLL model lies above the two others; in other words, it generates fewer muons, with consequences that will be discussed below. It is this truncated muon number $`N_\mu ^{}`$ which we shall use throughout this article to classify events according to the muon number, that is, approximately according to the primary energy.
The accuracy of the reconstructed shower sizes is estimated to be 5% for $`N_e`$ and 10% for $`N_\mu ^{}`$ around the knee position.
## 5 Hadron reconstruction
The raw data of the central detector are passed through a pattern recognition program which traces a particle in the detector and reconstructs its position, energy and incident angle. Two algorithms exist. One of them is optimized to reconstruct unaccompanied hadrons and to determine their energy and angle with best resolution. The second is trained to resolve as many hadrons as possible in a shower core and to reconstruct their proper energies and angles of incidence. This algorithm has been used for the analyses presented in the following. Grosso modo, the pattern recognition proceeds as follows: Clusters of energy are searched for and lined up to form a track, from which an approximate angle of incidence can be inferred. Then, in the lower layers, patterns of cascades are looked for, since these penetrating and late-developing cascades can be reconstructed most easily. Going upwards in the calorimeter, clusters are formed from the remaining energy and lined up into showers according to the direction already found. The uppermost layer is not used for hadron energy determination, to avoid hadron signals that are too strongly distorted by the electromagnetic component; nor is the trigger layer used, because of its limited dynamic range.
Due to a fine lateral segmentation of 25 cm, the minimal distance to separate two equal–energy hadrons with 50% probability amounts to 40 cm. This causes the reconstructed hadron density to flatten off at about 1.5 hadrons/m<sup>2</sup>. The reconstruction efficiency with respect to the hadron energy is presented in Fig. 4. At 50 GeV an efficiency of 70% is obtained. This energy is taken as the threshold in most of the following analyses, if not mentioned otherwise. We present the values on a logarithmic scale in order to demonstrate how often high–energy radiating muons can mimic a hadron. Their reconstructed hadronic energy, however, is much lower, typically by a factor of 10. The fraction of non–identified hadrons above 100 GeV typically amounts to 5%. This value holds for a 1 PeV shower hitting the calorimeter at its centre and rises to 30% at 10 PeV. This effect is taken into account automatically, because it appears in the same way in the simulation.
## 6 Event selection
About $`10^8`$ events were recorded from October 1996 to August 1998. In $`6\times 10^6`$ events, at least one hadron was reconstructed. Events accepted for the present analysis have to fulfill the following requirements: More than two hadrons are reconstructed, the zenith angle of the shower is less than $`30^{\circ }`$ and the core, as determined by the array stations, hits the calorimeter or lies within 1.5 m distance outside its boundary. For shower sizes corresponding to energies of more than about 1 PeV, the core can also be determined in the first calorimeter layer by the electromagnetic punch–through. The fine sampling of the ionisation chambers yields 0.5 m spatial resolution for the core position. For events with such a precise core position, the core has to lie within the calorimeter, at least 1 m from its boundary. After all cuts, 40 000 events were left for the final analysis.
For non–centric showers, hadronic observables like the number of hadrons have been corrected for the missing calorimeter surface by requiring rotational symmetry. On the other hand, some variables are used for which such a correction is not obvious, e.g., the minimum–spanning–tree (see Section 8.4). In these cases, only a square of $`8\times 8\text{m}^2`$ of the calorimeter with the shower core in its centre is used, and the rest of the calorimeter information is neglected. This treatment ensures that all events are analysed on the same footing.
## 7 Tests at large distances
Studying hadron distributions at large core distances checks mainly the overall performance of the shower simulation program CORSIKA. In the regions far away from the shower axis of an EAS, the Monte Carlo calculations can be verified with respect to the transport of particles, their decay characteristics, etc. If the hadrons are well described, this signifies that the shower propagation is treated properly. In these outer regions, where lower hadron energies and larger scattering angles dominate, the underlying physics is sufficiently well known from accelerator experiments, and the code itself can be tested.
As an example of such a test, the hadron lateral distribution is presented in Fig. 4 for $`N_\mu ^{}`$ sizes corresponding to the primary energy interval around and above the knee: $`3\text{PeV}\leq E_0<10\text{PeV}`$. The distributions of the number of hadrons and of the hadronic energy are given. In the very centre of the former, a saturation as mentioned in Section 5 can be noticed. Several functions have been tried to fit the data points, among others exponentials as suggested by Kempa. However, by far the best fit was obtained when applying the NKG formula, represented by the curves shown in the graph. This finding is not particularly surprising because hadrons of an energy of approximately 100 GeV, when passing through the atmosphere, generate the electromagnetic component, and the NKG formula has been derived for electromagnetic cascades. In addition, multiple scattering of electrons, which determines the Molière radius, resembles the scattering character of hadrons with a mean transverse momentum of 400 MeV/c irrespective of their energy. Replacing the mean multiple scattering by the latter and the radiation length by the interaction length, one arrives at a radius $`R_H`$ of about 10 m. We expect this value to take the place of the Molière radius appearing in the NKG formula for electron measurements. Indeed, values of this order are found experimentally.
Lateral hadron distributions compared with CORSIKA simulations are shown in Fig. 5 for primary energies below and above the knee. In the diagrams the hadronic energy density is plotted for muon numbers corresponding to the primary energy intervals of $`1\text{PeV}\leq E_0<3\text{PeV}`$ and $`3\text{PeV}\leq E_0<10\text{PeV}`$. The data points are compared to primary proton and iron simulations applying the QGSJET model. These two extreme assumptions about the masses result in nearly identical hadron densities, and the measured data coincide with the simulations, thereby verifying the calculations. Similar good agreement is found for the VENUS and SIBYLL models. Simulations and data agree well up to 100 m distance from the core. Only in the very inner region, within 10 m of the core, do the simulations yield deviating hadron densities for different primary masses. Nevertheless, the measurements here lie well in between the two extreme primary compositions of pure protons or pure iron nuclei.
## 8 Tests at shower core
### 8.1 Hadron lateral distribution
To begin with, the lateral distributions are compared with values published in the literature. Hadron distributions in the core of EAS have been measured at Ooty by Vatcha and Sreekantan and at Tien Shan by Danilova et al. Results of earlier experiments have been examined and discussed by Sreekantan et al. In the experiments different techniques for hadron detection have been applied: a cloud chamber at Ooty, long gaseous ionisation tubes at Tien Shan and liquid ionisation chambers in the present experiment. Therefore, it is of interest to compare the respective results.
The experiments were performed at different altitudes, and a priori they are expected to deliver deviating results. However, when compared at the same electromagnetic shower size, hadron distributions should be similar because electrons and hadrons, the latter of about 100 GeV, are closely related to each other in an EAS when the shower passes through the atmosphere. A sort of equilibrium develops, as has been pointed out by Kempa. Indeed, Fig. 6 demonstrates for electron numbers $`5.25\leq \mathrm{lg}N_e<5.5`$ that the lateral hadron distributions agree reasonably well. In particular, the measurements of the Ooty group at an atmospheric depth of $`800\text{g}/\text{cm}^2`$ coincide with the present findings. The grey shaded band represents CORSIKA simulations using the hadronic interaction model QGSJET, the lower curve representing primary protons and the upper curve primary iron nuclei. The curves are fits to the simulated density of hadrons according to $`\rho _H(r)\propto \mathrm{exp}((r/r_0)^\kappa )`$, with values for $`\kappa `$ found to be between 0.7 and 0.9. The data lie well between these two boundaries. The graph on the righthand side represents hadron densities with a threshold of 1 TeV. Bearing this high threshold in mind, the similarity in both distributions, Tien Shan at $`690\text{g}/\text{cm}^2`$ and KASCADE at sea–level, is astonishing. In conclusion, it can be stated that hadron densities, despite being measured with different techniques, agree reasonably well among different experiments.
When classifying hadron distributions according to muonic shower sizes, differences among the interaction models emerge. This becomes apparent in Fig. 7, where the central density is plotted for truncated muon numbers which correspond to a mean energy of about 1.2 PeV. On the left graph, the VENUS calculations enclose the data points, leaving the elemental composition somewhere between pure proton and pure iron primaries. On the right graph, the measured data points follow the lower boundary of the SIBYLL calculations, suggesting that all primaries are iron nuclei, at this energy obviously an improbable result.
The lateral distribution demonstrates, as do other observables reported previously, that the SIBYLL code generates too few muons, thereby entailing a comparison at a different estimate of the primary energy. A hint of this has already been observed in Fig. 2, where the SIBYLL lines lie above those of QGSJET and VENUS. When hadronic observables are classified according to electromagnetic shower sizes, the disagreement vanishes, as will be discussed in the following.
### 8.2 Hadron energy distribution
The energy distribution of hadrons is shown in Fig. 9 for a fixed electromagnetic shower size. Plotted is the number of hadrons in an area of $`8\times 8\text{m}^2`$ around the shower core. As already mentioned, in this way all showers are treated in the same manner, independent of their point of incidence. To avoid a systematic bias, the loss in statistics has to be accepted; the number of showers is reduced to about 5000. The shower size bin of $`5.5\leq \mathrm{lg}N_e<5.75`$ corresponds approximately to a mean primary energy of 6 PeV. The lines represent fits to the simulations according to $`\mathrm{exp}(((\mathrm{lg}E_Ha)/b)^c)`$. Usually, $`c=1`$ is assumed in the literature; however, the present data, due to their large dynamic range, yield values for $`c`$ from 1.3 to 1.6. As can be inferred from the graph, all three interaction models reproduce the measured data reasonably well, elucidating the fact that electrons closely follow the hadrons in EAS propagation. But if the same data are classified according to the muon number, again SIBYLL seems to generate too many hadrons and thereby mimics a primary composition of pure iron nuclei. For this reason SIBYLL will not be utilized any further. The figure also shows the energy spectrum measured with the Maket–ANI calorimeter by Ter–Antonian et al. As already mentioned above, distributions are expected to coincide when taken at the same electron number even if they have been measured at different altitudes. In the present case the data have been taken at sea level and at $`700\text{g}/\text{cm}^2`$ on Mount Aragats. The energy distributions, indeed, agree rather well with each other, indicating that in both data sets the patterns of hadrons are well recognized and the energies correctly determined.
It was seen that SIBYLL encounters difficulties when the data are classified according to muonic shower sizes. The model VENUS, on the other hand, cannot reproduce hadronic observables convincingly well when they are binned into electron number intervals. An example is given in Fig. 9. It shows the number of hadrons, i.e. the hadronic shower size $`N_h`$, as a function of the electromagnetic shower size $`N_e`$. The experimental points match the primary proton line well, as expected from the QGSJET predictions. This phenomenon is easily understood by the steeply falling flux spectrum and the fact that primary protons induce larger electromagnetic sizes at observation level than heavy primaries. Hence, when grouping in $`N_e`$ bins, showers from primary protons will be enriched, and we expect to have predominantly proton showers in our sample. This fact reduces any ambiguities in the results due to the absence of direct information on primary composition. For the VENUS model, the predicted hadron numbers are too high, and the two lines which mark the region between primary protons and iron nuclei cannot explain the data. The point at the lowest shower size is still influenced by the trigger efficiency of the array counters.
### 8.3 Hadron energy fraction
A suitable test of the interaction models consists of investigating the granular structure of the hadronic core with respect to spatial as well as energy distributions. As variables we have chosen the energy fraction of hadrons and the distances in the minimum–spanning–tree between them. Both will be dealt with in the following sections.
For each hadron its energy fraction with respect to the most energetic hadron in that particular shower is calculated. For primary protons, the leading particle effect is expected to produce one particularly energetic hadron accompanied by hadrons with a broad distribution of lower energies. Hence, we expect to find a rather large dispersion of hadronic energies for primary protons, whereas for primary iron nuclei the hadron energies should be more equally distributed. The simulated distributions, indeed, confirm this expectation, as is shown in Fig. 10. The lines, drawn to guide the eye, represent fits to the simulations using two modified exponentials as in the preceding section, which are connected to each other at the maximum. On the lefthand graph, the data seem to corroborate the simulations. They are shown for a muon number range corresponding to a primary energy of approximately 2 PeV, i.e., below the knee position. On the righthand side, the results are shown for an interval above the knee, for muonic shower sizes corresponding to a primary energy of 12 PeV. One observes that the data cannot be explained by the simulations, neither by primary protons nor by iron nuclei. On a logarithmic scale the data exhibit a symmetric distribution around the value $`\mathrm{lg}(E_H/E_H^{max})\approx 1.5`$, even more symmetric than would be expected for a pure iron composition. In particular, energetic hadrons resulting from the leading particle effect seem to be missing. They would shift the distribution to smaller values. This absence of energetic hadrons in the observations will be confirmed later when investigating other observables.
### 8.4 Minimum–spanning–tree
When constructing the minimum–spanning–tree (MST), all hadrons are connected to each other in a plane perpendicular to the shower axis. The MST is the configuration for which the sum of all connections, each weighted by the inverse energy sum of the two hadrons it joins, is minimal. The $`1/E`$ weighting has been found to give the best separation between iron and proton induced showers. Fig. 11 shows as an example the central shower core of an event. Plotted are the points of incidence on the calorimeter. The sizes of the points mark the hadron energies on a logarithmic scale. The shower centre and the fiducial area of $`8\times 8\text{m}^2`$ around it are indicated as well. For each event the distribution of distances is formed; a sketch of the construction is given below. Average distributions from many events are given in Fig. 12. As in Fig. 10, the muonic shower sizes correspond to primary energy intervals below and above the knee. It is observed that for the former, the data lie well within the bounds of the primary composition, but that above the knee the measurements yield results which are not in complete agreement with the model, although they are close to the simulated iron data. The distributions of Figs. 10 and 12 have also been calculated analysing the full calorimeter surface and not only the $`8\times 8\text{m}^2`$ around the shower centre. No remarkable difference was found.
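A compact way to build such a tree is to minimize edge lengths $`d_{ij}`$ weighted as $`d_{ij}/(E_i+E_j)`$, one reading of the $`1/E`$ weighting described above; a sketch:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_distances(xy, E):
    """Geometric lengths of the energy-weighted MST edges.

    xy: (n, 2) hadron positions in the plane perpendicular to the
    shower axis; E: the corresponding hadron energies."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    w = d / (E[:, None] + E[None, :])        # 1/E-weighted edge lengths
    tree = minimum_spanning_tree(w).tocoo()  # tree of minimal total weight
    return d[tree.row, tree.col]

# The per-event distribution of these distances, averaged over many
# events, yields the MST distributions of Fig. 12.
```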
In both observables – energy fraction and MST – the data for higher primary energies cannot be interpreted by the simulations. As a check, the knee in the primary energy distribution was additionally omitted in the M.C. calculations; again, no remarkable change in the distributions showed up. In fact, when investigating the distributions as a function of muon number, the deviation between M.C. values and the measured data develops smoothly with increasing energy.
Regarding the righthand graph in Fig. 12, the question arises whether the interaction model produces too small distances or too energetic hadrons, or both. In agreement with the observation in Fig. 10, one has to conclude that the simulations generate hadrons which are too energetic compared to the data. Whether the distances between the hadrons in the MSTs, in other words the transverse momenta, are also underestimated cannot be decided at the moment. Also the number of hadrons plays a role. This issue is under further investigation.
### 8.5 Hadronic energy in large showers
Deviations between measurement and simulations as in the preceding sections are also observed when investigating the hadronic energy in large showers. With rising muon numbers $`N_\mu ^{}`$, the experiment reveals an increasing amount of missing hadronic energy in the shower core. Fig. 13 (left) shows the number of hadrons versus the muonic shower size. At muon numbers corresponding to about 5 PeV primary energy, the hadron numbers turn out to be smaller than predicted for iron by both interaction models VENUS and QGSJET. Again, one observes that the latter model describes the experimental points somewhat better. The conclusion that QGSJET reproduces the data best in the PeV region is also confirmed by a recent model comparison performed by Erlykin and Wolfendale. The authors classify the models on the basis of consistency checks among different observables, e.g. the depth of shower maximum $`X_{max}`$ and the $`N_\mu /N_e`$ ratio.
The righthand graph of Fig. 13 presents the maximum hadron energy found in showers with the indicated muon number. The open symbols represent the QGSJET simulations; again, QGSJET and VENUS yield similar results. Measurement and simulation also disagree to some extent in this variable at large shower sizes. The overestimation of muon numbers mentioned in Section 4 cannot account for the discrepancies. On a logarithmic scale it starts to be noticeable at $`\mathrm{lg}N_\mu ^{}=5.5`$ and amounts to $`\mathrm{\Delta }\mathrm{lg}N_\mu ^{}=0.1`$. A shift of this size does not ameliorate the situation. The data have been checked independently in the reduced fiducial area of $`8\times 8\text{m}^2`$. But in this analysis, too, the data seem to mimic pure iron primaries at $`\mathrm{lg}N_\mu ^{}=4.3`$ and are below that boundary for larger muonic sizes. The fact is that we do not observe the energetic hadrons expected from the M.C. calculations. In the energy region 10 to 100 PeV even QGSJET fails to describe the measurements.
Obviously, the question arises whether these experimentally detected effects are artifacts caused, for instance, by saturation effects in the calorimeter or by insufficient pattern recognition performing differently on simulated and experimental data. After all, the high–energy values correspond to primary energies of about 100 PeV, where 400 hadrons have to be reconstructed. At this point, it may be noted again that the experimental and simulated data are always compared with each other at the detector signal level; hence, a possible hadron misidentification applies to both data sets. As already pointed out in Section 2, individual hadrons up to 50 TeV have been reconstructed and their saturation effects have been examined thoroughly.
Some misallocation of energy to individual hadrons might occur, though, if lateral distributions of hadrons in the core differ markedly between simulations and reality. There may be indications of this from emulsion experiments. However, from the results shown in Figs. 6 and 7 we would not expect any dramatic effect.
Fig. 14 demonstrates that for large electromagnetic shower sizes, the number of hadrons compares well with other experiments as well as with CORSIKA simulations. In the diagram the number of hadrons above the indicated thresholds is presented with respect to the shower size. The values obtained for hadrons above 1 TeV can be related to two other experiments, performed at Kiel by Fritze et al. and at Chacaltaya by C. Aguirre et al. It is observed that up to shower sizes which correspond to about 20 PeV for primary protons all high–energy hadrons are reconstructed, i.e., more than 70 TeV of energy is found in the calorimeter. When compared to QGSJET simulations, the data lie within the physical boundaries, as shown for the 1 TeV line. On closer inspection the data indicate an increase of the mean mass with rising energy. Also in Fig. 9 it has been seen that the hadron numbers are well reproduced by QGSJET up to the highest electromagnetic shower sizes. In conclusion, it can be stated that the hadron component compares well between different experiments and with M.C. calculations when classified according to electromagnetic shower sizes, and that the deviations observed in muon number binning cannot be accounted for by experimental imperfections.
## 9 Conclusion and outlook
Three interaction models have been tested by examining the hadronic cores of large EAS. It turned out that QGSJET reproduces the data best, but at large muonic shower sizes, i.e., at energies above the knee, even this model fails to reproduce certain observables. Most importantly, the model predicts more hadrons than are observed experimentally.
The current investigation is a first approach with a first data sample of the KASCADE experiment. Better statistics, both in the data and in the Monte Carlo calculations, are imperative, especially above the knee in the 10 PeV region, and are expected from the further operation of the experiment. In addition, other experimental methods have to be developed to check the simulation codes even more rigorously. Such a stringent check consists of verifying absolute particle fluxes at ground level at energies where the primary flux is reasonably well known. Improvements in the interaction models are also under way. NEXUS is in statu nascendi, a joint enterprise by the authors of VENUS and QGSJET. It has become evident that a very precise description of the shower development in the atmosphere is needed if the mass of the primaries is to be estimated by means of ground level particle distributions.
## 10 Acknowledgments
The authors would like to thank the members of the engineering and technical staff of the KASCADE collaboration who contributed with enthusiasm and engagement to the success of the experiment.
The Polish group gratefully acknowledges support by the Polish State Committee for Scientific Research (grant No. 2 P03B 16012). The work has been partly supported by a grant of the Rumanian Ministry of Research and Technology and by the research grant no. 94964 of the Armenian Government and ISTC project A 116. The support of the experiment by the Ministry for Research of the German Federal Government is gratefully acknowledged.
# Event Shapes and Power Corrections in e⁺e⁻ Annihilations
## 1 Introduction
When studying observables in the process $`\mathrm{e}^+\mathrm{e}^{}\mathrm{Hadrons}`$, it is found that the perturbative QCD predictions have to be complemented by non-perturbative corrections of the form $`1/Q^p`$, where $`Q`$ is the centre-of-mass energy $`E_{CM}`$, and the power $`p`$ depends on the particular observable. For fully inclusive observables such as the total cross section, $`p=4`$; however, for less inclusive ones such as event shape variables the power is much smaller, typically $`p=1`$. So for those observables non-perturbative effects can be sizeable, e.g., at LEP1 energies corrections of 5-10% are found. Since these variables are extensively used for $`\alpha _s`$ determinations, non-perturbative effects have to be well understood. Until recently they have been determined from QCD-inspired Monte Carlo (MC) models of hadronization; however, this method leads to model dependence and thus limitations in the precision of the $`\alpha _s`$ measurements. The new approach of power law corrections to event shapes, pioneered by Dokshitzer and Webber, could lead to improvements in this respect.
The event shape variables studied are Thrust, C-parameter, Heavy Jet Mass and Total and Wide Jet Broadening. These are infrared and collinear safe variables, and perturbative predictions are known up to second order in $`\alpha _s`$, as well as the resummations of leading and next-to-leading logarithms to all orders.
The results presented in the following are mostly based on the study of Ref., where data from $`\mathrm{e}^+\mathrm{e}^{}`$ annihilations at $`E_{CM}=14`$ GeV up to 161 GeV have been analyzed. In addition, some preliminary results from the LEP2 runs up to 189 GeV have been employed.
## 2 Power Corrections
Power corrections are supposed to have their origin in infrared divergences (renormalons) in the perturbative expansions when the overall energy scale $`Q`$ approaches the Landau pole $`\mathrm{\Lambda }`$. The first approaches were based on the assumption of the existence of a universal non-singular behaviour of an effective strong coupling at small scales, parametrized by a non-perturbative parameter
$$\alpha _0(\mu _I)=\frac{1}{\mu _I}\int _0^{\mu _I}dk\,\alpha _s(k),$$
(1)
with $`\mathrm{\Lambda }\ll \mu _I\ll Q`$, where $`\mu _I`$ separates the perturbative from the non-perturbative region; typically $`\mu _I=2`$ GeV. $`\alpha _0(\mu _I)`$ is assumed to be universal.
### 2.1 Mean Values
Using the Ansatz described above, the following prediction is obtained for the mean value of an event shape variable $`f`$ :
$$\langle f\rangle =\langle f\rangle ^{pert}+\langle f\rangle ^{pow},$$
(2)
where $`\langle f\rangle ^{pert}`$ is the full second order prediction of the form
$$\alpha _s(\mu ^2)A_f+\alpha _s^2(\mu ^2)\left[B_f+A_fb_0\mathrm{ln}\frac{\mu ^2}{s}\right].$$
(3)
Here $`\mu ^2`$ is the renormalization scale, $`s=E_{CM}^2`$, and $`b_0=(332n_f)/(12\pi )`$, $`n_f`$ being the number of active flavours.
The power correction term is given by $`\langle f\rangle ^{pow}=a_f𝒫`$, where $`a_f`$ is 2 for Thrust, 1 for the Heavy Jet Mass and $`3\pi `$ in the case of the C-parameter. $`𝒫`$ is a universal function of the form $`𝒫\propto \mu _I\alpha _0(\mu _I)/Q`$ (up to a constant and corrections of order $`\alpha _s`$ and $`\alpha _s^2`$). The Milan factor $`\approx 1.8`$ takes into account two-loop effects. Recently it has been found that in the case of the Jet Broadening variable the power correction is of a more complicated type than the above, namely of the form $`1/(Q\sqrt{\alpha _s(Q)})`$.
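As an illustration of Eq. (2), the sketch below evaluates the perturbative part at $`\mu ^2=s`$ with a one-loop running coupling and adds the power term; the coefficients $`A_f`$, $`B_f`$ and the overall normalization of $`𝒫`$ are approximate assumptions of this sketch, and the perturbative subtraction terms in $`𝒫`$ are omitted:

```python
import numpy as np

B0 = (33 - 2 * 5) / (12 * np.pi)            # b_0 for n_f = 5 flavours

def alpha_s(Q, alpha_mz=0.118, mz=91.187):
    # One-loop running coupling, sufficient for illustration.
    return alpha_mz / (1 + alpha_mz * B0 * np.log(Q**2 / mz**2))

def power_term(Q, alpha0=0.5, mu_I=2.0, milan=1.8, c=16 / (3 * np.pi**2)):
    # P ~ c * (Milan factor) * (mu_I / Q) * alpha_0(mu_I); the O(alpha_s)
    # and O(alpha_s^2) subtractions mentioned above are left out, and the
    # normalization c is an assumption of this sketch.
    return milan * c * (mu_I / Q) * alpha0

def mean_shape(Q, A_f, B_f, a_f, alpha0=0.5):
    # <f> = alpha_s * A_f + alpha_s^2 * B_f + a_f * P, at scale mu^2 = s.
    a = alpha_s(Q)
    return A_f * a + B_f * a**2 + a_f * power_term(Q, alpha0=alpha0)

# For <1-T>: a_f = 2; A_f, B_f are the known perturbative coefficients
# (approximate values used here purely for illustration).
print(mean_shape(Q=91.2, A_f=0.335, B_f=1.02, a_f=2.0))
```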
DELPHI have measured mean values for Thrust, Wide Jet Broadening and Heavy Jet Mass from the LEP1 and LEP2 data and combined their results with measurements from low energy $`\mathrm{e}^+\mathrm{e}^{}`$ experiments in order to extract $`\alpha _s(M_Z)`$ and $`\alpha _0`$ from a fit of the power law Ansatz to these data. The fits are displayed in Fig. 1. Very good fits are obtained with $`\alpha _s(M_Z)`$ between 0.118 and 0.120, and $`\alpha _0(2\mathrm{G}\mathrm{e}\mathrm{V})`$ between 0.40 and 0.55. Similar results have been found in the analysis of Ref.
### 2.2 Distributions
For distributions of event shape observables it has been shown that the non-perturbative corrections lead to a shift in the distribution, i.e,
$$\frac{1}{\sigma _{tot}}\frac{d\sigma (f)^{corr}}{df}=\frac{1}{\sigma _{tot}}\frac{d\sigma (f\mathrm{\Delta }f)^{pert}}{df}$$
(4)
where in the cases of Thrust, Heavy Jet Mass and C-parameter the shift $`\mathrm{\Delta }f`$ is given by exactly the same terms as the correction for the mean values, i.e., $`\mathrm{\Delta }f=a_f𝒫`$. An improved calculation for the Jet Broadening variable has shown that in this case the distribution is not only shifted, but also squeezed, since the shift is of the form $`\mathrm{\Delta }B\propto 𝒫\mathrm{ln}(1/B)`$. The perturbative distribution is obtained from a matching of the full next-to-leading order prediction to the resummation of all leading and next-to-leading logarithms $`\mathrm{ln}f`$. Theoretical uncertainties on the $`\alpha _s`$ and $`\alpha _0`$ determinations are estimated from variations of the renormalization scale and the scheme applied for matching the fixed order and resummed calculations. Central values are given for $`\mu ^2=s`$. In Fig. 2 the fits to the Wide Jet Broadening are displayed, for various centre-of-mass energies. Good fits are obtained for the power law Ansatz as well as for the more traditional approach of hadronization corrections from MC models. Similar fits to Thrust and Heavy Jet Mass work well at high energies; however, some deviations are found at very small energies. There, the limit of applicability of the power law approach is probably reached. Furthermore, mass effects could play a role there.
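Numerically, the shift of Eq. (4) simply amounts to evaluating the perturbative distribution at $`f\mathrm{\Delta }f`$, e.g. by interpolation on a grid (a sketch):

```python
import numpy as np

def shifted_distribution(f_grid, pert_dist, delta_f):
    """Non-perturbatively corrected distribution of Eq. (4): the
    perturbative one evaluated at f - delta_f."""
    return np.interp(f_grid - delta_f, f_grid, pert_dist,
                     left=0.0, right=0.0)

# For Thrust, Heavy Jet Mass and C-parameter, delta_f = a_f * P is a
# constant shift; for the Jet Broadening, the shift grows like ln(1/B),
# so an array proportional to np.log(1.0 / f_grid) would be passed,
# which both shifts and squeezes the distribution.
```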
In Fig. 3 a summary of the results can be found. A combination of the results gives $`\alpha _s(M_Z)=0.1082\pm 0.0021`$, $`\alpha _0(2\mathrm{G}\mathrm{e}\mathrm{V})=0.504\pm 0.042`$. For $`\alpha _0`$, universality is found at the level of 20%. The value of the strong coupling turns out to be lower than the one obtained from a similar fit using MC models for the hadronization corrections, namely $`\alpha _s(M_Z)=0.1232\pm 0.0040`$. This difference still has to be understood.
## 3 Non-Perturbative Shape Functions
Recently it has been shown that all leading power corrections of the type $`1/(fQ)`$, $`f`$ being the event shape variable, can be resummed when folding the perturbative distribution with a non-perturbative shape function. The form of the shape function depends on the observable, and new non-perturbative parameters are introduced. Fits have been tried for Thrust and Heavy Jet Mass. For the former a good fit quality over a large energy range is obtained, and $`\alpha _s(M_Z)`$ values close to the world average are found. However, in the case of the latter no satisfactory fits could be achieved. This should be followed up in future analyses.
## 4 Conclusions
Significant progress has been made in the understanding of power corrections to event shape variables in $`\mathrm{e}^+\mathrm{e}^{}`$ annihilations. Universality of the non-perturbative parameter $`\alpha _0`$ is observed at the level of 20%. Some open questions remain, such as the difference of $`\alpha _s`$ values obtained with power laws and MC corrections. Also the effects of quark or hadron masses should be studied. Power law predictions for other variables, such as the differential two-jet rate as well as the energy-energy correlations, are awaited.
A new approach based on non-perturbative shape functions looks very promising, but some further investigations are required.
## 5 Acknowledgements
I would like to thank H. Stenzel for providing me with the results of his power law studies, and G. Salam and G.P. Korchemsky for helpful discussions.
Figure 1 (caption): Illustrating the variation of the dijet mass distribution with variation in the scale $`M_S`$ at the Tevatron. The solid histogram shows the SM NLO prediction; dashed histograms show the prediction for $`M_S=800`$ GeV and 1 TeV (upper and lower line, respectively). Data are taken from the D0 collaboration.
TIFR/TH/99-13
TIFR-HECR-99-03
April 1999
Testing TeV Scale Quantum Gravity Using Dijet Production at the Tevatron
Prakash Mathews<sup>1</sup> (prakash@theory.tifr.res.in), Sreerup Raychaudhuri<sup>2</sup> (sreerup@iris.hecr.tifr.res.in), K. Sridhar<sup>1</sup> (sridhar@theory.tifr.res.in)
1) Department of Theoretical Physics, Tata Institute of Fundamental Research,
Homi Bhabha Road, Bombay 400 005, India.
2) Department of High Energy Physics, Tata Institute of Fundamental Research,
Homi Bhabha Road, Bombay 400 005, India. <sup>§</sup>Address after May 1, 1998: Department of Physics, Indian Institute of Technology,
Kanpur 208 016, India.
ABSTRACT
Dijet production at the Tevatron including effects of virtual exchanges of spin-2 Kaluza-Klein modes in theories with large extra dimensions is considered. The experimental dijet mass and angular distributions are exploited to obtain stringent limits ($`\sim 1.2`$ TeV) on the effective string scale $`M_S`$.
There have recently been major breakthroughs in the understanding of string theories at strong coupling in the framework of what is now known as $`M`$-theory . In particular, unification of gravity with other interactions now seems possible in the $`M`$-theoretic framework. But of tremendous interest to phenomenology is the possibility that the effects of gravity could become large at very low scales ($`\sim `$ TeV), because of the effects of large extra compact dimensions where gravity can propagate . Starting from a higher-dimensional theory of open and closed strings , the effective low-energy theory is obtained by compactifying to 3+1 dimensions, in such a way that $`n`$ of these extra dimensions are compactified to a common scale $`R`$ which is large, while the remaining dimensions are compactified to extremely tiny scales which are of the order of the inverse Planck scale. In such a scenario, the Standard Model (SM) particles correspond to open strings, which end on a 3-brane and are, therefore, confined to the $`(3+1)`$-dimensional spacetime. On the other hand, the gravitons (corresponding to closed strings) propagate in the $`(4+n)`$-dimensional bulk. The relation between the scales in $`(4+n)`$ dimensions and in $`4`$ dimensions is given by
$$M_\mathrm{P}^2=M_S^{n+2}R^n,$$
(1)
where $`M_S`$ is the low-energy effective string scale. This equation has the interesting consequence that we can choose $`M_S`$ to be of the order of a TeV and thus get around the hierarchy problem. For such a value of $`M_S`$, it follows that $`R=10^{32/n-19}`$ m, and so we find that $`M_S`$ can be arranged to be a TeV for any value $`n>1`$. Effects of non-Newtonian gravity can become apparent at these surprisingly low values of energy. For example, for $`n=2`$ the compactified dimensions are of the order of 1 mm, just below the experimentally tested region for the validity of Newton’s law of gravitation and within the possible reach of ongoing experiments . In fact, it has been shown that it is possible to construct a phenomenologically viable scenario with large extra dimensions, which can survive the existing astrophysical and cosmological constraints. For some early papers on large Kaluza-Klein dimensions, see Ref. and for recent investigations on different aspects of the TeV scale quantum gravity scenario and related ideas, see Ref. .
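As a numerical check of Eq. 1, one can solve it for the compactification radius; the short sketch below does this (conventions differ by factors of order unity, e.g. whether $`M_P`$ or the reduced Planck mass appears, and such factors are ignored here).

```python
# Compactification radius from Eq. 1: M_P^2 = M_S^(n+2) * R^n.
hbar_c = 1.973e-16     # GeV*m, converts 1/GeV to metres
m_planck = 1.22e19     # GeV
m_string = 1.0e3       # GeV, i.e. M_S ~ 1 TeV

for n in range(2, 7):
    r = (m_planck**2 / m_string**(n + 2)) ** (1.0 / n)   # in 1/GeV
    print(f"n = {n}: R = {r * hbar_c:.2e} m")
# n = 2 gives R of the order of a millimetre, as quoted in the text.
```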
Below the scale $`M_S`$, we have an effective theory with an infinite tower of massive Kaluza-Klein states, which contain spin-2, spin-1 and spin-0 excitations. The spin-1 couplings to the SM particles in the low-energy effective theory are not important, whereas the scalar modes couple to the trace of the energy-momentum tensor, which vanishes for massless particles. Other particles related to brane dynamics (for example, the $`Y`$ modes which are related to the deformation of the brane) have effects which are subleading, compared to those of the graviton. The only states, then, that contribute to low-energy phenomenology are the spin-2 Kaluza-Klein states. For graviton momenta smaller than the scale $`M_S`$, the effective description reduces to one where the gravitons in the bulk propagate in the flat background and couple to the SM fields via a (four-dimensional) induced metric $`g_{\mu \nu }`$. The interactions of the SM particles with the graviton, $`G_{\mu \nu }`$, can be derived from the following Lagrangian:
$$\mathcal{L}=-\frac{1}{\overline{M}_P}G_{\mu \nu }^{(j)}T^{\mu \nu },$$
(2)
where $`j`$ labels the Kaluza-Klein mode, $`\overline{M}_P=M_P/\sqrt{8\pi }`$ and $`T^{\mu \nu }`$ is the energy-momentum tensor. Since the effective Lagrangian in Eq. 2 is suppressed by $`1/\overline{M}_P`$, it may seem that the effects at colliders will be hopelessly suppressed. However, in the case of real graviton production, the phase space for the Kaluza-Klein modes cancels the dependence on $`\overline{M}_P`$ and, instead, provides a suppression of the order of $`M_S`$. For the case of virtual production, we have to sum over the whole tower of Kaluza-Klein states, and this sum, when properly evaluated, provides the correct order of suppression ($`M_S`$). The summation of time-like propagators and that of space-like propagators yield exactly the same form for the leading terms in the expansion of the sum, and this shows that the low-energy effective theories for the $`s`$\- and $`t`$-channels are equivalent.
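The way the Planck-mass suppression turns into an $`M_S`$ suppression can be illustrated by a simple mode count: the number of KK states lighter than a process energy $`E`$ grows like $`(ER)^n`$, and Eq. 1 then trades $`R`$ for $`M_S`$. The sketch below checks this numerically; $`𝒪(1)`$ angular and convention factors are deliberately dropped, so this is an illustration of the scaling, not the full propagator sum.

```python
m_planck = 1.22e19   # GeV
m_string = 1.0e3     # GeV
energy = 500.0       # GeV, a typical parton-level scale

for n in (2, 4, 6):
    r = (m_planck**2 / m_string**(n + 2)) ** (1.0 / n)  # Eq. 1, in 1/GeV
    n_modes = (energy * r) ** n        # KK states lighter than `energy`
    # each mode couples ~ 1/M_P (Eq. 2); summing the tower of propagators:
    summed = n_modes / m_planck**2     # should scale as E^n / M_S^(n+2)
    print(n, summed, energy**n / m_string**(n + 2))
```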
Recently, several papers have explored the consequences of the above effective Lagrangian for experimental observables at high-energy colliders. In particular, direct searches for graviton production at $`e^+e^-`$, $`p\overline{p}`$ and $`pp`$ colliders, leading to spectacular single photon + missing energy or monojet + missing energy signatures, have been suggested . The virtual effects of graviton exchange in $`e^+e^-\to f\overline{f}`$ and in high-mass dilepton production , in $`t\overline{t}`$ production at the Tevatron and the LHC, and in deep-inelastic scattering at HERA have been studied. The bounds on $`M_S`$ obtained from direct searches depend on the number of extra dimensions. Non-observation of the Kaluza-Klein modes yields bounds which are around 500 GeV to 1.2 TeV at LEP2 and around 600 GeV to 750 GeV at the Tevatron (for $`n`$ between 2 and 6) . Indirect bounds from virtual graviton exchange in dilepton production at the Tevatron yield a bound of around 950 GeV . Virtual effects in $`t\overline{t}`$ production at the Tevatron yield a bound of about 650 GeV , while from deep-inelastic scattering a bound of 550 GeV results . At the LHC, it is expected that $`t\overline{t}`$ production can be used to explore a range of $`M_S`$ values up to 4 TeV . More recently, these studies have been extended to the case of $`e^+e^-`$ and $`\gamma \gamma `$ collisions at the NLC . There have also been papers discussing the implications of the large dimensions for Higgs production and electroweak precision observables . Astrophysical constraints, like bounds from energy loss of supernovae cores, have also been discussed .
In the present work, we study the effect of the virtual graviton exchange on the dijet production cross-section in $`p\overline{p}`$ collisions at the Tevatron. The presence of the new couplings from the low-energy effective theory of gravity leads to new diagrams for dijet production. Using the couplings given in Refs. , and summing over all the graviton modes, we have calculated the sub-process cross-sections due to the new physics <sup>1</sup><sup>1</sup>1The explicit expressions for the subprocess cross-sections will appear in a future publication . The graviton-induced cross-sections involve two new parameters: the effective string scale $`M_S`$ and $`\lambda `$, which is the effective coupling at $`M_S`$. $`\lambda `$ is expected to be of $`𝒪(1)`$, but its sign is not known a priori. In our work we will explore the sensitivity of our results to the choice of the sign of $`\lambda `$.
Significant changes in the angular distribution of jets are expected when spin-2 particle exchanges are added to the spin-1 exchange of the SM. With this in mind, we study the normalised $`\chi `$ distribution, $`1/NdN/d\chi `$, where the variable $`\chi `$ is defined as
$$\chi =\frac{\widehat{u}}{\widehat{t}}=\mathrm{exp}|\eta _1-\eta _2|,$$
(3)
with $`\eta _1`$ and $`\eta _2`$ being the pseudo-rapidities of the two jets, so as to be able to compare with the experimental results from the CDF and the D0 collaborations. The $`\chi `$ distributions in both experiments have been measured in different mass bins, and we have used the same binning as the two experiments. Using the same kinematic cuts as the experimentalists (insofar as they can be implemented in a parton-level analysis), we study the normalised $`\chi `$ distribution as a function of the effective string scale, $`M_S`$, and obtain the 95% C.L. limits on the string scale by doing a $`\chi ^2`$ fit to the data in each bin and to the data integrated over the entire mass range. For our computations, we have used CTEQ4M parton densities taken from PDFLIB . The 95% C.L. limits on $`M_S`$ derived from the CDF and the D0 $`\chi `$ distributions, respectively, are displayed in Tables 1 and 2 for the cases $`\lambda =\pm 1`$. We find that the $`\chi `$ distribution integrated over the entire mass range yields a limit of 1070 (1108) GeV for $`\lambda =1(-1)`$ for CDF and a limit of 1160 (1159) GeV for $`\lambda =1(-1)`$ for D0. These bounds are the most stringent bounds obtained from processes involving virtual graviton exchange. Interestingly, the bounds obtained by considering the highest mass bin are almost as good as those obtained by comparing with all the data. This tells us that the deviations from the SM grow as the invariant mass increases. We, therefore, consider the data in the invariant mass distribution as well.
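Schematically, the limit-setting procedure amounts to a $`\chi ^2`$ scan in $`M_S`$; the sketch below shows the logic with invented flat "data", errors and a placeholder $`1/M_S^4`$ new-physics term (the real analysis uses the measured CDF/D0 $`\chi `$ distributions and the full graviton-induced cross-sections).

```python
import numpy as np

chi_bins = np.linspace(1.5, 14.5, 14)     # bin centres of the chi spectrum
data = np.full_like(chi_bins, 0.07)       # toy normalised distribution
err = np.full_like(chi_bins, 0.005)

def prediction(ms, lam=1.0):
    # SM part taken flat; the graviton-exchange term enhances small chi
    # (central scattering).  The 1e10 coefficient is purely illustrative.
    return 0.07 + lam * 1.0e10 / ms**4 / chi_bins

def chi2(ms):
    return float(np.sum(((data - prediction(ms)) / err) ** 2))

ms_grid = np.linspace(600.0, 2000.0, 281)
vals = np.array([chi2(ms) for ms in ms_grid])
allowed = ms_grid[vals < vals.min() + 3.84]   # delta chi^2 = 3.84 (95% C.L.)
print("95% C.L. lower limit on M_S ~", allowed.min(), "GeV (toy numbers)")
```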
Recently, dijet mass distributions from the D0 experiment have become available . We have studied these (using the cuts of the D0 experiment) and obtain, as before, the 95% C.L. limits on $`M_S`$. In Fig. 1, we have plotted the mass distribution for different $`M_S`$ values and compared it to the experimental and the SM numbers. We find again that very stringent bounds for both signs of the $`\lambda `$ coupling are obtained. For $`\lambda =1`$, we find that the 95% C.L. limit on $`M_S`$ is 1123 GeV, whereas for $`\lambda =-1`$ it is 1131 GeV. Since the effect of the new physics is larger for larger values of the dijet mass, we find that using a lower cut of 500 GeV on the dijet mass in the $`\chi ^2`$ fit can yield a better limit on $`M_S`$.
We have studied the implications of large extra dimensions and a low effective quantum gravity scale for dijet production at the Tevatron. Virtual exchange of the Kaluza-Klein states is considered, and the sensitivity of the experimental cross-sections to this interesting new physics is studied. We find that this process allows us to put very stringent limits on the effective string scale $`M_S`$ – in fact, of all processes with virtual graviton exchanges considered so far, these bounds are by far the best. To obtain these bounds, we have considered the angular distributions and the mass distributions. The resulting limits from either of these observables are quite similar. Jet production at higher energies is able to probe the physics of large extra dimensions to much higher scales. These results will be presented in a future publication .
Acknowledgements: It is a pleasure to thank T. Askawa for help with the experimental data and Dilip K. Ghosh for discussions.
# Origin of companion galaxies in QSO hosts
## 1 Introduction
Since faint nebulosity around quasars was discovered (Matthews & Sandage 1963; Sandage & Miller 1966), morphological studies of QSO host galaxies have revealed the evolutionary link between the formation of QSO hosts and the activation of the QSO nucleus (Hutchings et al. 1982; Malkan 1984; Margon, Downes, & Chanan 1984; Smith et al. 1986; Heckman et al. 1991). Photometric and spectroscopic studies of QSO hosts have furthermore provided valuable clues to the nature of the stellar populations of QSO hosts (MacKenty & Stockton 1984; Boroson, Persson, & Oke 1985; Stockton & Ridgway 1991; Dunlop et al. 1993; McLeod & Rieke 1994). One of the most remarkable pieces of observational evidence is that galaxy interaction and merging can trigger the nuclear activities of QSOs (Stockton 1982; Hutchings & Campbell 1983; Stockton & MacKenty 1983; Hutchings & Neff 1992; Bahcall et al. 1997). In particular, recent high-resolution morphological studies of QSO host galaxies by the Hubble Space Telescope (HST) and large ground-based ones found that a sizable fraction of QSO hosts have close companion galaxies likely to be interacting or merging with the hosts (Bahcall et al. 1995; Disney et al. 1995). Although these observational studies strongly suggest that close companion galaxies in QSO hosts play a vital role in triggering QSO activities (Bahcall et al. 1995), it is still theoretically unclear why QSO host galaxies so frequently have companions and how QSO activities are physically associated with the formation and evolution of such companion galaxies.
In this Letter, we numerically investigate both gas fueling to the seed black holes located in the central part of the two disks in a gas-rich merger and the morphological evolution of the merger, in order to present a plausible interpretation of the origin of the small companion galaxies frequently observed in QSO host galaxies. We demonstrate that the observed QSO companion galaxies are formed in the outer part of strong tidal tails during gas-rich major galaxy merging and then become self-gravitating compact galaxies orbiting the elliptical galaxies formed by merging. We furthermore demonstrate that such companion galaxies are located within a few tens of kpc of the ellipticals while efficient gas fueling to the central seed QSO black holes continues. We thus suggest that both the formation of QSO companion galaxies and the activation of the QSO nucleus result from one physical process: gas-rich major galaxy merging. We furthermore discuss whether such companion galaxies formed in QSO hosts can finally become the compact elliptical galaxies that are frequently observed around present-day bright massive galaxies.
## 2 Model
We construct models of galaxy mergers between gas-rich disk galaxies of equal mass by using the Fall & Efstathiou (1980) model. The total mass and the size of a progenitor disk are $`M_\mathrm{d}`$ and $`R_\mathrm{d}`$, respectively. From now on, all masses and lengths are measured in units of $`M_\mathrm{d}`$ and $`R_\mathrm{d}`$, respectively, unless specified otherwise. Velocity and time are measured in units of $`v`$ = $`(GM_\mathrm{d}/R_\mathrm{d})^{1/2}`$ and $`t_{\mathrm{dyn}}`$ = $`(R_\mathrm{d}^3/GM_\mathrm{d})^{1/2}`$, respectively, where $`G`$ is the gravitational constant, assumed to be 1.0 in the present study. If we adopt $`M_\mathrm{d}`$ = 6.0 $`\times `$ $`10^{10}`$ $`\mathrm{M}_{\odot }`$ and $`R_\mathrm{d}`$ = 17.5 kpc as fiducial values, then $`v`$ = 1.21 $`\times `$ $`10^2`$ km/s and $`t_{\mathrm{dyn}}`$ = 1.41 $`\times `$ $`10^8`$ yr. In the present model, the rotation curve becomes nearly flat at 0.35 $`R_\mathrm{d}`$ with the maximum rotational velocity $`v_\mathrm{m}`$ = 1.8 in our units. The corresponding total mass $`M_\mathrm{t}`$ and halo mass $`M_\mathrm{h}`$ are 5.0 and 4.0 in our units, respectively. The radial ($`R`$) and vertical ($`Z`$) density profiles of a disk are assumed to be proportional to $`\mathrm{exp}(-R/R_0)`$ with scale length $`R_0`$ = 0.2 and to $`\mathrm{sech}^2(Z/Z_0)`$ with scale length $`Z_0`$ = 0.04 in our units, respectively. The Toomre parameter (Binney & Tremaine (1987)) for the initial disks is set to 1.2. The collisional and dissipative nature of the interstellar medium is modeled by the sticky particle method (Schwarz (1981)). Star formation is modeled by converting the collisional gas particles into collisionless new stellar particles according to the Schmidt law (Schmidt 1959) with an exponent of 2.0. The initial gas mass fraction ($`f_\mathrm{g}`$) is considered a free parameter ranging from 0.1 (corresponding to a gas-poor disk) to 0.5 (a very gas-rich one). We here present the result of the model with $`f_\mathrm{g}=0.5`$, because this model most clearly shows the typical behavior of QSO companion formation. The dependence of the details of QSO companion formation on $`f_\mathrm{g}`$ will be described in a future paper (Bekki 1999).
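For reference, the conversion of the model units into physical ones follows directly from the two definitions above; the short sketch below reproduces the quoted numbers for the fiducial disk.

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
m_sun = 1.989e30         # kg
kpc = 3.086e19           # m
yr = 3.156e7             # s

m_d = 6.0e10 * m_sun     # fiducial disk mass
r_d = 17.5 * kpc         # fiducial disk size

v = math.sqrt(G * m_d / r_d)              # velocity unit
t_dyn = math.sqrt(r_d**3 / (G * m_d))     # time unit

print(f"v     = {v / 1e3:.3g} km/s")      # ~1.21e2 km/s, as quoted
print(f"t_dyn = {t_dyn / yr:.3g} yr")     # ~1.41e8 yr, as quoted
```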
The orbital plane of the galaxy merger is assumed to be the same as the $`xy`$ plane, and the initial distance between the centers of mass of the merger progenitor disks is 8.0 in our units (140 kpc). The two disks in the merger are assumed to encounter each other parabolically with a pericentric distance of 1.0 in our units (17.5 kpc). The intrinsic spin vector of one galaxy in the merger is exactly parallel with the $`z`$ axis, whereas that of the other is tilted by $`30^{\circ }`$ from the $`z`$ axis. The present study describes QSO companion formation only for a nearly prograde-retrograde merger, in which only one intrinsic spin vector of a merger progenitor galaxy is nearly parallel with the orbital spin vector of the merger. The dependence of the details of the QSO companion formation processes on the initial orbital configurations of galaxy mergers will be given in Bekki (1999). The number of particles used in a simulation is 20000 for the dark halo components, 20000 for the stellar ones, and 20000 for the gaseous ones. All the calculations, including the dissipative and dissipationless dynamics and star formation, have been carried out on the GRAPE board (Sugimoto et al. (1990)) at the Astronomical Institute of Tohoku University. The gravitational softening parameter is fixed at 0.03 in all the simulations.
By using this merger model, we first investigate the morphological and dynamical evolution of a gas-rich major galaxy merger with a particular emphasis on the formation of close small companions (dwarf-like galaxies) in the merger. Second, we investigate when and how QSO activities are triggered by major galaxy merging by counting the total mass of interstellar gas accumulated within the central 100 pc of a galaxy merger. In order to estimate the gas mass in an explicitly self-consistent manner, we initially place a collisionless particle with a mass equal to $`3.0\times 10^6`$ in the mass center of a disk and regard this particle as a ‘seed black hole’. We then investigate both the time evolution of the orbit of the seed black hole and the total gas mass transferred to the central 100 pc around the black hole. Here we hypothetically assume that interstellar gas transferred to the central 100 pc around the seed black hole can be further fueled to the central sub-pc region, where a massive black hole gravitationally dominates and utilizes gas falling onto the accretion disk for a QSO activity. The reason for adopting this assumption is that we regard a certain mechanism for gas fueling to the sub-pc region, such as the so-called ‘bars within bars’ proposed by Shlosman, Frank, & Begelman (1989), as occurring naturally in the high-density self-gravitating central regions of galaxy mergers. The above two-fold investigation allows us to address the questions of when and how galaxy merging not only forms small companions but also triggers QSO nuclear activities.
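The fueling diagnostic itself is simple bookkeeping: at each output time, sum the gas mass of all particles within 100 pc of the seed black hole. A minimal sketch, with randomly generated stand-in particle data in place of actual simulation output, could look as follows.

```python
import numpy as np

R_FUEL = 0.1 / 17.5      # 100 pc in model units (R_d = 17.5 kpc)

def central_gas_mass(gas_pos, gas_mass, bh_pos):
    """Gas mass within R_FUEL of the seed black hole.
    gas_pos: (N,3), gas_mass: (N,), bh_pos: (3,), all in model units."""
    dist = np.linalg.norm(gas_pos - bh_pos, axis=1)
    return gas_mass[dist < R_FUEL].sum()

# toy snapshot: 20000 gas particles, centrally concentrated
rng = np.random.default_rng(1)
pos = rng.normal(scale=0.02, size=(20000, 3))
mass = np.full(20000, 0.5 / 20000)        # f_g = 0.5 of one disk mass
print(central_gas_mass(pos, mass, np.zeros(3)))
```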
## 3 Result
Figure 1 describes how a QSO companion galaxy is formed by gas-rich major galaxy merging. As the two gas-rich disks merge and form a tidal tail composed of gas and stars (time $`T`$ = 1.1 Gyr), the stellar components in the tail first collapse to form a self-gravitating dwarf-like object. Gaseous components are then swept into the deep gravitational potential well of the dwarf galaxy to form a massive gaseous clump, owing to the enhanced gaseous dissipation in the shocked region of the tidal tail and the dwarf. Star formation proceeds very efficiently in the high-density gas clump, and consequently new stellar components are formed in the dwarf galaxy ($`T`$ = 1.7 Gyr). The physical processes of dwarf galaxy formation in the present star-forming galaxy merger are essentially the same as those described by Barnes & Hernquist (1992). This self-gravitating dwarf galaxy can then orbit the elliptical galaxy formed by galaxy merging without significant radial orbital decay due to dynamical friction between the dwarf and the host elliptical, and without tidal destruction by the elliptical ($`T`$ = 2.3 and 2.8 Gyr). The total mass of the dwarf at $`T`$ = 2.3 Gyr is roughly estimated to be $`2.7\times 10^9\mathrm{M}_{\odot }`$, corresponding to 4.5 % of the initial disk mass. The gas mass fraction of the dwarf is rather large ($`25\%`$), which reflects the fact that the dwarf was formed in the gas-rich tidal tail. About 45 % of the stellar components of the dwarf are very young stars formed from gaseous components of the tidal tail, which implies that this dwarf galaxy will be observed to show very blue colors until its hot and massive stars die out. Considering that the present gas-rich star-forming merger model also shows efficient gas fueling to the central seed black holes (as described later), we regard the above results as demonstrating clearly that the dwarf galaxy formed in galaxy merging can be observed as a companion galaxy of a QSO host galaxy.
Figure 2 shows the star formation history of the merger and the time evolution of the gas mass located within the central 100 pc around the seed black holes of the merger. The star formation rate reaches its maximum ($`378\mathrm{M}_{\odot }/\mathrm{yr}`$) at $`T`$ = 1.3 Gyr, when the two disks finally merge to form an elliptical galaxy and the efficient redistribution of angular momentum and gaseous dissipation by cloud-cloud collisions cooperate to form extremely high-density gaseous regions in the central part of the merger. After the intense secondary starburst, star formation rapidly declines owing to the efficient gas consumption by the starburst. Gas fueling to the central seed black holes also reaches its maximum ($`6.5\times 10^8\mathrm{M}_{\odot }`$) at $`T`$ = 1.3 Gyr, coinciding with the maximum starburst of the merger. The gas supply to the seed black holes is largely controlled by the rapid gas consumption by star formation, and consequently gas fueling gradually declines after the completion of the secondary starburst. The gas fueling in the present study tends to be more efficient in the late phase of galaxy merging ($`T>1.3`$ Gyr) than in the early one ($`T<1.3`$ Gyr). Assuming that all of the gas transferred to the central 100 pc around the seed black holes can be directly accreted onto the accretion disk of the black holes, we estimate that the mean accretion rate in the late merger phase (1.3 Gyr $`<T<`$ 2.3 Gyr) is $`6.3\mathrm{M}_{\odot }/\mathrm{yr}`$. The derived accretion rate is sufficient to trigger the typical magnitude of QSO activity (e.g., Rees 1984). These results imply that a secondary massive starburst and QSO nuclear activity (AGN) can be observed to coexist in a QSO host galaxy, which is consistent with the observational evidence that some QSO host galaxies show very blue colors and spectroscopic properties indicative of a past starburst (MacKenty & Stockton 1984; Boroson, Persson, & Oke 1985; Stockton & Ridgway 1991).
Thus Figures 1 and 2 clearly demonstrate that gas-rich major galaxy merging not only contributes to the formation of a companion galaxy orbiting the merger remnant but also triggers QSO nuclear activities. Accordingly, our numerical study can naturally explain why QSO host galaxies, some of which are actually observed to be ongoing mergers and elliptical galaxies (e.g., Bahcall et al. 1997), are more likely to have close small companion galaxies; this is essentially because both QSO host galaxies with pronounced nuclear activities and their companions result from $`one`$ physical process of major galaxy merging. Our numerical studies furthermore provide the following three predictions on the physical properties of QSO companions and hosts. The first prediction is that the luminosity of a QSO companion galaxy is roughly proportional to that of the QSO host, principally because the mass of the tidal debris that is the progenitor of a QSO companion depends strongly on the initial mass of a galaxy merger. The second is that a QSO companion has a very young stellar population formed in the secondary starburst of galaxy merging and thus shows photometric and spectroscopic properties indicative of a starburst or post-starburst. The third is that not all galaxy mergers can create QSO companion galaxies, essentially because nearly retrograde-retrograde mergers cannot produce the strong tidal tails indispensable for the formation of companion galaxies, owing to their weaker tidal perturbation (the details of the physical conditions required for the formation of QSO companions will be described in Bekki (1999)). We suggest that future observational studies on the dependence of the luminosity ratio of QSO hosts to QSO companions on QSO host luminosity, on the age and metallicity distribution of the stellar populations of QSO companions, and on the probability that QSO host galaxies have companion galaxies physically associated with them can verify the above three predictions and thereby determine whether major galaxy merging is a really plausible model of QSO companion formation.
## 4 Discussion and Conclusion
The fate of QSO companion galaxies is an interesting problem for the present merger scenario of QSO companion formation. We here propose that some of the companions finally evolve into compact elliptical galaxies (cE), which have typical blue magnitudes $`M_\mathrm{B}`$ ranging from -18 mag to -14 mag, truncated de Vaucouleurs luminosity profiles, the color-magnitude relation of giant ellipticals, typically solar metallicity, and a higher degree of global rotation (Faber 1973; Wirth & Gallagher 1984; Nieto & Prugniel 1987; Freedman 1989; Bender & Nieto 1990; Burkert 1994). The essential reason for this proposal is as follows. Burkert (1994) numerically investigated the dynamical evolution of proto-galaxies experiencing an initial strong starburst and the subsequent violent relaxation in the external tidal gravitational field of a massive elliptical galaxy, and revealed that the observed peculiar properties of cEs are due to the external tidal field around the progenitor proto-galaxies of cEs. Burkert (1994) accordingly proposed a scenario in which satellite proto-galaxies revolving initially around a bright elliptical galaxy eventually form cEs after violent cold collapse and a strong starburst around the galaxy. Although his model of cE formation is not directly related to the physical processes of gas-rich major galaxy merging, the physical environment of cE formation in his model is very similar to that of gas-rich galaxy merging; in the present study, tidal debris collapses to form a self-gravitating small galaxy in the rapidly changing external gravitational field of two merging disk galaxies. Accordingly it is not unreasonable to consider that some of the companion galaxies created in tidal tails finally become cEs orbiting the elliptical galaxies formed by major galaxy merging. The observational fact that cEs exist almost exclusively as satellites of bright massive galaxies (Faber 1973; Burkert 1994) strengthens the validity of the proposed evolutionary link between QSO companions and cEs. Furthermore, the larger degree of global rotation observed in the kinematics of cEs (e.g., Bender & Nieto 1990) seems to be consistent with the proposed scenario, since QSO companions are created in the tidal debris of rotationally supported disk galaxies in this scenario. The present numerical study unfortunately cannot investigate in detail the structural and kinematical properties of companion galaxies formed in galaxy mergers because of the very small particle number of the simulated companion ($`800`$ particles). Our future high-resolution simulations with a total particle number of $`10^7`$ will enable us to compare the numerical results on the structural, kinematical, and chemical properties of QSO companions formed in major mergers with the observed ones of cEs located near giant ellipticals in an explicitly self-consistent manner, and thereby answer the question of the evolutionary link between intermediate- and high-redshift QSO companions and the present-day cEs.
The most important observational test to assess the validity of the proposed formation scenario of QSO companion galaxies is to investigate whether a QSO companion galaxy has younger stellar populations formed by a secondary starburst and thus shows photometric and spectroscopic properties indicative of a starburst or post-starburst. Canalizo & Stockton (1997) recently investigated the spectroscopic properties of companion galaxies in three QSOs (3CR 323.1, PG 1700+518, PKS 2135-147) and found that the spectrum of a companion galaxy of QSO PG 1700+518 shows both strong Balmer absorption lines from a relatively young stellar population, and the Mg I $`b`$ absorption feature and the 4000 $`\mathrm{\AA }`$ break from an old stellar population. Stockton, Canalizo, & Close (1998) furthermore demonstrated that the time that has elapsed since the end of the most recent major starburst event in the companion of QSO PG 1700+518 is roughly 0.085 Gyr, based on the spectral energy distribution derived from adaptive-optics images in the $`J`$ and $`H`$ bands. These observational results on the post-starburst signature of QSO companions agree reasonably well with the proposed scenario, which predicts that a QSO companion galaxy contains both relatively old stellar populations previously located in the merger progenitor disks and very young stellar populations formed in gas-rich tidal tails. Detailed spectroscopic studies of QSO companion galaxies, such as Canalizo & Stockton (1997) and Stockton, Canalizo, & Close (1998), are still scarce. Future extensive spectroscopic studies of companions of intermediate- and high-redshift QSOs will clarify the age distribution of the stellar populations of the companions and thus determine whether most QSO companions are really formed in major galaxy mergers.
We conclude that gas-rich major galaxy merging can naturally explain the prevalence of small companion galaxies around QSO hosts; the essential reason for the origin of QSO companions is that the strong tidal gravitational field of major galaxy merging both triggers the formation of companions and provides efficient fuel for QSO nuclear activities. This explanation of QSO companion formation agrees reasonably well with the observational fact that QSO nuclei are already activated while the companions are still located in the vicinity of the QSO hosts (a few tens of kpc from the centers of the hosts). Our numerical simulations accordingly suggest that the observed companion galaxies of QSO hosts are not the direct $`cause`$ of QSO nuclear activities but the $`result`$ of gas-rich major galaxy merging. Although minor galaxy merging between small companion galaxies and giant elliptical or disk galaxies has been demonstrated to be closely associated with secondary massive starbursts in disks (Mihos & Hernquist 1995) and strong starbursts in shell galaxies (Hernquist & Weil 1992), the present study implies that such minor merging is probably less important in the activation of QSO nuclei and the formation of QSO companions. The present study provides only one scenario of QSO companion formation; we thus lastly stress that the physical processes related to companion formation are likely to be more varied and complicated than described in the present study.
K.B. thanks the Japan Society for the Promotion of Science (JSPS) Research Fellowships for Young Scientists.
# The metal-rich globular clusters of the Milky Way
## 1 Introduction
The globular cluster system of the inner Milky Way is still not well understood. This is particularly true for the clusters’ classification with respect to the galactic population structure. Because reliably determined parameters, e.g. metallicity, reddening, distance and age, are the basic requirement of any discussion, we present new photometry in (V, I) of the metal-rich globular clusters (GC’s) NGC 5927, 6316, 6342, 6441 and 6760. We also re-discuss NGC 6528 and NGC 6553, for which the data have already been published (Richtler et al. RTL98 (1998), Sagar et al. SAG98 (1998)).
As there has been evidence for a correlation between metallicity and spatial distribution of the GC’s since the late 1950’s, Zinn (ZIN85 (1985)) classified the clusters via their kinematics, spatial distribution and metallicity into two subsystems: the disk-system with clusters of metallicity $`[\text{M}/\text{H}]>-0.8`$ dex and the halo-system with $`[\text{M}/\text{H}]<-0.8`$ dex. The disk-system shows a high rotational velocity and a small velocity dispersion, the halo-system vice versa. Armandroff (ARM89 (1989)) derived a scale height of $`1.1`$ kpc for the disk-system, which he identified with the galactic thick disk via rotational velocities and velocity dispersions. By comparing the metal-rich GC’s of the inner $`3`$ kpc with the underlying stellar population, Minniti (MIN95 (1995)) assigned these objects to the bulge rather than to the disk. Burkert & Smith (BUR97 (1997)) used kinematical arguments and the masses of the clusters to divide the metal-rich subsystem of Zinn (ZIN85 (1985)) into a bulge, a bar and a disk-group.
What these subdivisions have in common is that they refer to the entire system of clusters: their criteria are formulated by identifying subsystems within the whole system. For the reverse task, i.e. classifying observed objects into any of these subgroups, accurate parameters are needed. The halo clusters are well discernible from any other subsystem, but the metal-rich clusters near the galactic center are not. The determination of their parameters encounters observational difficulties, as their low galactic latitudes lead to strong contamination with field stars and to strong (differential) reddening. These effects have to be taken care of.
A variety of photometry exists for the program clusters. Recent studies on NGC 5927 were done by Fullton et al. (FUL96 (1996)) and Samus et al. (SAM96 (1996)). Armandroff (ARM88 (1988)) presented and discussed CMDs including NGC 6316, 6342 and 6760. There is (B,V) photometry of NGC 6441 by Hesser & Hartwick (HES76 (1976)) and a more recent study by Rich et al. (RIC97 (1997)). CMDs of NGC 6528 have been discussed by Ortolani et al. (ORT90 (1990)) and Richtler et al. (RTL98 (1998)). Guarnieri et al. (GUA98 (1998)) as well as Sagar et al. (SAG98 (1998)) present (V,I) photometry of NGC 6553. Zinn (ZIN85 (1985)), Armandroff (ARM89 (1989)), Richtler et al. (RTL94 (1994)), Minniti (MIN95 (1995)) and Burkert & Smith (BUR97 (1997)) discuss the subdivision of the GC system into a halo, disk and/or bulge component.
As the data and their reduction will be published in a forthcoming paper, section 2 deals only briefly with this subject. In sections 3 and 4, the derived CMDs are presented and the effects of differential reddening are discussed and removed. Section 5 contains the methods and results of the parameter determination, and in section 6 we discuss the resulting classification and its problems.
## 2 Observations and reduction
The observations in V and I were carried out at La Silla/Chile between July 16th and 19th 1993. We used the 2.2m telescope with CCD ESO #19, which, with $`1024\times 1024`$ pixels, covers an area of $`5.7^{\prime }\times 5.7^{\prime }`$ on the sky. The seeing was $`1.1^{\prime \prime }`$. In addition to the ground-based data, we used data of the Hubble Space Telescope (HST) for NGC 5927. These data have already been published by Fullton et al. (FUL96 (1996)).
To reduce the data, we used the DAOPHOT-package (Stetson STE87 (1987), STE92 (1992)), together with the ESO-MIDAS-system (version 1996). The calibrating equations (1), determined via Landolt standard stars (Landolt 1992), are
$`V_{st}=V_{inst}-(1.38\pm 0.05)+(0.057\pm 0.003)(V-I)_{st}`$
$`I_{st}=I_{inst}-(2.62\pm 0.05)-(0.060\pm 0.003)(V-I)_{st}`$ (1)
Together with the error of the photometry and of the PSF-aperture-shift, we get an absolute error for a single measurement of $`\pm 0.06`$ mag. For a more extensive treatment of the data and their reduction, see Heitsch & Richtler (HEI99 (1999)). To calibrate the HST-data, we used the relations and coefficients as described by Holtzman (HOL95 (1995)). In agreement with Guarnieri et al. (GUA98 (1998)), we detected a systematic shift between calibrated ground-based and HST-magnitudes of about $`0.2`$ mag with the HST-magnitudes being fainter. This difference might be due to crowding influences on the calibration stars in the ESO-frame, as explained by Guarnieri et al.
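Because the calibrated colour enters on the right-hand side of equations (1), the system has to be inverted before it can be applied; since both equations are linear in $`(V-I)_{st}`$, this can be done in closed form. A sketch of that inversion, using the central values of the coefficients quoted above:

```python
def calibrate(v_inst, i_inst):
    """Invert equations (1) for the standard magnitudes.
    Zero points and colour terms are the central fitted values."""
    zp_v, c_v = 1.38, 0.057
    zp_i, c_i = 2.62, -0.060
    # V - I = (v - i) - zp_v + zp_i + (c_v - c_i)(V - I)
    vi = ((v_inst - i_inst) - zp_v + zp_i) / (1.0 - (c_v - c_i))
    v_st = v_inst - zp_v + c_v * vi
    i_st = i_inst - zp_i + c_i * vi
    return v_st, i_st

print(calibrate(15.0, 12.0))   # example instrumental magnitudes
```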
## 3 Colour-Magnitude-Diagrams
### 3.1 NGC 5927
The unselected CMD for NGC 5927 is shown in Fig. 7. The cluster’s HB and RGB are clearly distinguishable, with the HB overlapping the RGB; the stars of a field population to the blue of the cluster’s structures cover the TOP-region. Some $`0.5`$ mag below the HB, the RGB-bump is discernible. The elongation of the HB and the broadening of the RGB are due to differential reddening, as we now argue. As the HB of metal-rich GCs generally is rather clumped and the HB-stars all have the same luminosity, differential reddening should cause an elongation of the HB parallel to the reddening vector.
Fig. 3 shows the coordinates of the radially selected cluster stars with special markings for the HB-stars as given by Fig. 1. If differential reddening is indeed responsible for the observed elongation, we do not expect to find any red faint stars in areas where blue bright stars are found, unless the reddening is very (!) patchy. In the case of NGC 5927, we note (Fig. 3) that the blue bright stars are located in an area west of the cluster’s center, and thus differential reddening is indeed responsible for the elongated HB structure.
In Fig. 21 we present the calibrated HST-CMDs of NGC 5927. They all show a slightly broadened lower RGB as well as a slightly tilted HB. The TOP is well resolved. However, the CMDs do not extend to the bright stars of the AGB/RGB due to pixel overflow on the exposures. As mentioned above, the HST-CMDs are shifted with respect to the ESO-CMDs to fainter magnitudes. Richtler et al. (RTL98 (1998)) argue that the ground-based calibration is not erroneous. Thus, we will use the ground-based calibrated data for the further analysis. For a detailed discussion see Heitsch & Richtler (HEI99 (1999)).
### 3.2 NGC 6316
The unselected CMD (Fig. 9) shows, besides the cluster, a strong contribution from the field population. The field main sequence is striking. The cluster RGB is broadened, but since the clumpy HB indicates only small differential reddening, the RGB width is probably largely due to the field contamination. Determining a correlation between HB-stars and coordinates as in Fig. 3 led to no convincing results, because the field does not contain enough stars.
### 3.3 NGC 6342
NGC 6342 (Fig. 11) shows a sparsely populated AGB/RGB due to the small size of the cluster. The TOP-region and upper MS are reached. The location of the HB-stars is shown in the same way as in Fig. 3. Blue, bright HB-stars are found in an area to the south of the cluster’s center.
### 3.4 NGC 6441
NGC 6441 (Fig. 13) is located behind a dense field population, the stars of which can be found between $`1.0\le V-I\le 1.5`$ mag. This population covers the lower part of the RGB of NGC 6441 as well. Its TOP is not reached. We mention some special features. First, we find some stars between $`1.8\le V-I\le 2.0`$ mag and above the HB of NGC 6441. These could be HB-stars of a population which is similar to NGC 6441 and is located between the cluster and ourselves, as the stars are shifted in V only. In this case, we would have to assume that the absolute reddening is caused by some cloud between this population and the observer. Otherwise they would have to be shifted in $`V-I`$ as well. Second, we find some stars to the blue of the clumpy, tilted HB of NGC 6441. These stars seemingly belong to the cluster, as they are still visible when selecting for small radii. Probably they belong to the blue HB of NGC 6441, which has been discovered by Rich et al. (RIC97 (1997)). The difference in star density (Fig. 5) is due to the fact that the calibration exposures were shifted by around 360 pix to the south. Taking this into account, we find that the blue bright stars lie mostly in the western two thirds of the cluster.
### 3.5 NGC 6760
Fig. 15 not only shows the already discussed structures such as HB, RGB and field population, but also the RGB-bump below the HB. The RGB is rather broadened. The HB and RGB-bump are elongated and tilted with the same slope. The HB-stars, marked according to colour and brightness, are shown in Fig. 5.
### 3.6 NGC 6528 and NGC 6553
The CMDs for NGC 6528 and 6553 (Fig. 17, 19) have already been published (Richtler et al. RTL98 (1998), Sagar et al. SAG98 (1998)). As described in their papers, NGC 6528 not only shows a broadened RGB and a tilted and elongated HB, but also some background population below the AGB/RGB. The field population covers the TOP-region of the cluster. Moreover, the RGB-bump of NGC 6528 is clearly visible some $`0.5`$ mag below the HB. The CMD of NGC 6553 shows the same characteristics as NGC 6528, though even more distinctly. Here we clearly see the background population, with its RGB and AGB/RGB strongly differentially reddened.
## 4 Correction for differential reddening
In order to correct the CMDs for differential reddening, we used a refined version of the method described by Grebel et al. (GRE95 (1995)). The entire frame is divided into subframes. These are determined by covering the whole frame with a regular subgrid and dividing the grid cells further until the number of stars in one cell becomes too small to define the CMD structure. The CMDs are shifted along the reddening vector (described below) with respect to the CMDs from neighbouring cells. The shift in colour supplies the differential reddening. If two neighbouring subframes have the same reddening, these subframes are merged. There are two problems with this method: first, one has to be careful in using the HB as a means for comparing two CMDs, as the HB may be intrinsically elongated. Useful results can only be achieved by comparing the RGBs and TOPs, as far as they are accessible. Second, the size of the subfields must be large enough to render meaningful CMDs. Fig. 22 shows the resulting extinction maps for the seven clusters. The smallest subfields have a size of about $`28^{\prime \prime }\times 28^{\prime \prime }`$. But as some of them still showed differential reddening, the scale of the structures responsible for the differential reddening is expected to be even smaller. The smallest scales we derived from a comparison of the coordinates of stars with different reddening amounted to about $`4^{\prime \prime }`$.
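In code, the core of the subframe method reduces to measuring a per-cell colour offset of the RGB and shifting each star back along the reddening vector. The sketch below assumes the star arrays and the RGB selection already exist; the grid size and the slope value (here the 2.4 quoted later for NGC 6528) are example choices.

```python
import numpy as np

def deredden(x, y, v, vi, rgb_mask, n_cell=8, r_vi=2.4):
    """Differential dereddening: per-cell median RGB colour offset,
    referenced to the bluest (least reddened) cell."""
    xi = np.clip((x / x.max() * n_cell).astype(int), 0, n_cell - 1)
    yi = np.clip((y / y.max() * n_cell).astype(int), 0, n_cell - 1)
    offs = np.full((n_cell, n_cell), np.nan)
    for i in range(n_cell):
        for j in range(n_cell):
            sel = (xi == i) & (yi == j) & rgb_mask
            if sel.sum() > 20:                 # need enough RGB stars
                offs[i, j] = np.median(vi[sel])
    offs = np.nan_to_num(offs - np.nanmin(offs))   # reference cell -> 0
    dvi = offs[xi, yi]                             # per-star E_(V-I) excess
    return v - r_vi * dvi, vi - dvi                # shift along A_V = R * E
```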
For the correction of the CMDs we need the extinction
$$A_V=R_V^BE_{B-V}=R_V^IE_{V-I}\text{.}$$
(2)
However, assuming a uniform reddening law led to CMDs which in some cases showed corrected HBs having larger or smaller slopes than the uncorrected HBs. Moreover, as there is some uncertainty in the literature regarding $`R_V^B`$, with values varying between $`R_V^B=3.1`$ (Savage & Mathis SAV79 (1979)) and $`R_V^B=3.6`$ (Grebel & Roberts GRR95 (1995)), we determined the slope of the reddening vector via the tilted HBs of our CMDs. This leads to reasonable results only if the HBs are intrinsically clumpy. This assumption is corroborated by the fact that the well dereddened CMDs (Fig. 24, 28, 29) indeed have clumpy HBs. Table 1 shows the slopes $`R_V^I`$ for each cluster. In Fig. 21, the slopes are plotted against galactic longitude. These variations, although at the margin of the errors, confirm earlier observations by Meyer & Savage (MEY81 (1981)) and Turner (TUR94 (1994)). Meyer & Savage determined, via two-colour diagrams, the deviation of the extinction behaviour of single stars from the galactic mean extinction law. Turner demonstrated the inapplicability of a mean galactic reddening law for objects lying close to the galactic plane.
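Operationally, the slope follows from a straight-line fit of V against V-I for the stars in the HB selection; the toy example below generates an artificially reddened clump and recovers the input slope.

```python
import numpy as np

def reddening_slope(hb_vi, hb_v):
    """dV/d(V-I) = A_V/E_(V-I) from a linear fit to the tilted HB."""
    slope, _ = np.polyfit(hb_vi, hb_v, 1)
    return slope

# toy HB clump at (V-I, V) = (1.5, 16.5), reddened with slope 2.4
rng = np.random.default_rng(0)
e = rng.uniform(0.0, 0.4, 300)                 # differential reddening
vi = 1.5 + e + rng.normal(0.0, 0.02, 300)
v = 16.5 + 2.4 * e + rng.normal(0.0, 0.05, 300)
print(reddening_slope(vi, v))                  # recovers ~2.4
```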
To correct the diagrams for differential reddening, we referred all sub-CMDs of one cluster to the one with the smallest detected reddening and shifted all other sub-CMDs onto it. As we thus use the minimal absolute reddening as a point of reference, the absolute reddening determined later on will be smaller than the values given in the literature. The differentially dereddened CMDs are shown in Figs. 7 through 19. As the correction led to a clearly improved appearance for all clusters, the corrected versions of the CMDs will be used for further investigation.
## 5 The Globular Cluster Parameters
This section deals with the determination of the metallicity, reddening and distance using the differentially dereddened CMDs. There are two possible ways to achieve the goal. In the first, theoretical models are compared with the CMDs, in the second, empirical relations between parameters and loci in the CMDs are used.
### 5.1 Isochrone fitting
To derive metallicity, distance and absolute reddening via isochrone fitting, we used the Padova-tracks (Bertelli et al. BER94 (1994)) with a fixed age of $`14.5`$ Gyr ($`\mathrm{log}(age)=10.160`$). Isochrones with different ages ($`10.120\le \mathrm{log}(age)\le 10.200`$) led to identical results. To avoid systematic errors, we used the middle of the broadened structures to fit the isochrones by eye. These loci are easily determined for the ascending part of the RGB, as it runs more or less perpendicular to the reddening vector. Regarding the upper part of the RGB, we take into account that we cannot distinguish between the AGB and the RGB in our diagrams. Hence, the densest regions of the AGB/RGB lie between the model’s tracks. The parameters resulting from the isochrone fit are given in Table 2.
Figures 24 to 29 show the differentially dereddened CMDs with the fitted isochrones. For a discussion and comparison of these parameters with the literature, see paragraph 5.3.
### 5.2 Metallicity and reddening: relations
#### 5.2.1 Metallicity
The luminosity difference between the HB and the turnover of the AGB/RGB in $`(V,V-I)`$-CMDs is very sensitive to metallicity in the metal-rich domain (e.g. Ortolani et al. ORT97 (1997)). Moreover, it is a differential metallicity indicator and thus independent of absolute colour or luminosity, in contrast to the $`[\text{M}/\text{H}]-(V-I)_{0,g}`$ method (see e.g. Sarajedini SAR94 (1994)). We present a preliminary linear calibration of this method,
$$[\text{M}/\text{H}]=a(V_{HB}-V_{RGB}^{max})+b,$$
(3)
as there has not been any so far. Because there are still only very few $`(V,V-I)`$-CMDs which clearly show both the HB and the turnover of the AGB/RGB and which have reliable metallicity determinations, we used the Padova-isochrones and a CMD of NGC 6791 (Garnavich et al. GAR94 (1994)) to set up a calibration. NGC 6791 is one of the richest old open clusters with a good metallicity determination, and it is therefore suitable to serve as a zero-point check.
As the form of the RGB depends slightly on age as well (e.g. Stetson et al. STE96 (1996)), we have to check this dependence before applying our calibration. Fig. 30 shows the linear relation between $`[\text{M}/\text{H}]`$ and $`\mathrm{\Delta }V\equiv V_{HB}-V_{max}`$ for four GC-ages. Table 3 contains the respective coefficients. As the metal-poorest isochrones of the Padova-sample ($`[\text{M}/\text{H}]=-1.70,-1.30`$ dex) do not show a maximum of the AGB/RGB, they have not been used. Fig. 30 makes clear that the age has only a minor influence on the resulting metallicity. To be consistent with the isochrone-fit, we used the relation for $`\mathrm{log}(age)=10.160`$.
To estimate the metallicities of our clusters, we now only have to measure the relevant luminosities. The results are given in Table 4. The value for NGC 5927 given in column $`[M/H]_2`$ relates to a single star (François FRA91 (1991)); NGC 6528 and NGC 6553 are from Richtler et al. (RTL98 (1998)) and Sagar et al. (SAG98 (1998)).
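Applying the calibration is then a one-liner; note that the coefficients a and b below are placeholders standing in for the Table 3 values (which are not reproduced here), not the actual fit results.

```python
def metallicity(v_hb, v_rgb_max, a=-0.5, b=-0.5):
    """Eq. (3): [M/H] = a * (V_HB - V_RGB^max) + b.
    a, b are placeholder values standing in for Table 3."""
    return a * (v_hb - v_rgb_max) + b

print(metallicity(v_hb=16.6, v_rgb_max=15.9))   # illustrative magnitudes
```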
#### 5.2.2 Reddening
It should be remembered that we used the differentially dereddened CMDs to determine the parameters. Thus, the given reddenings are minimal ones.
As mentioned above, the absolute colour of the RGB at the level of the HB can be used to estimate the metallicity. Conversely (Armandroff ARM88 (1988)), if we know the metallicity, we can determine the absolute colour $`(V-I)_{0,g}`$ and thus the absolute reddening of the cluster.
These relations between the colour $`(V-I)_{0,g}`$ and metallicity are well calibrated for the metal-poor to intermediate regime. However, it is difficult to set up a calibration for the metal-rich regime of our clusters. Linear calibrations have been provided by e.g. Sarajedini (SAR94 (1994)). A more recent calibration by Caretta & Bragaglia (1998a ) uses a 2nd order polynomial. To set up a calibration for the metal-rich regime we again used the Padova-tracks together with NGC 6791 to derive the coefficients for a relation of the form
$$(V-I)_{0,g}=a+b[\text{M}/\text{H}]+c[\text{M}/\text{H}]^2+d[\text{M}/\text{H}]^3$$
(4)
In addition, we used the $`[\text{M}/\text{H}]`$ and $`(V-I)_{0,g}`$ values for M67 given by Montgomery et al. (MON93 (1993)) to check the zero point. Taking into account that M67 is even younger than NGC 6791 by 3 to 5 Gyr, the measured quantities fit reasonably well. Table 5 contains the calibration coefficients, Fig. 31 the graphic relations, again for different ages. As above, we used the relation for $`\mathrm{log}(age)=10.160`$.
Using the metallicities listed in Table 4, column $`[\text{M}/\text{H}]`$, we obtain the absolute reddenings given in Table 6. Metallicities as well as reddenings agree very well with the values derived via isochrone-fitting, but are significantly lower than the values given in the literature. This is partly explained by the fact that we take the minimal reddening from the reddening map. Another part of the explanation may be that previous isochrone fits tend to use the red ridge of the RGB and thus overestimate the reddening.
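In practice the reddening follows by subtracting the intrinsic colour predicted by Eq. (4) from the measured RGB colour at the HB level; the cubic coefficients below are placeholders for the Table 5 values, chosen for illustration only.

```python
COEFFS = (1.10, 0.50, 0.20, 0.03)   # a, b, c, d -- stand-ins for Table 5

def intrinsic_colour(mh):
    """(V-I)_0,g as a function of [M/H], following Eq. (4)."""
    a, b, c, d = COEFFS
    return a + b * mh + c * mh**2 + d * mh**3

def reddening(observed_vi_g, mh):
    """E_(V-I) = (V-I)_g - (V-I)_0,g([M/H])."""
    return observed_vi_g - intrinsic_colour(mh)

print(reddening(1.55, -0.5))        # illustrative inputs
```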
Sarajedini (SAR94 (1994)) proposed a method to simultaneously determine metallicity and reddening. For this, he used the (linear) $`[\text{M}/\text{H}]-(V-I)_{0,g}`$ relation and the dependence of metallicity on the luminosity of the RGB at the absolute colour of $`V-I=1.2`$ mag, in linear form as well. He calibrated both relations for a metallicity range of $`2.2\le [\text{M}/\text{H}]\le 0.70`$ dex with negative bounds, i.e. $`-2.2\le [\text{M}/\text{H}]\le -0.70`$ dex. We recalibrated these relations, using NGC 6791 and the Padova-tracks, in order to use them for our clusters.
For a discussion and new calibration of Sarajedini’s method see Caretta & Bragaglia (1998a ). We did not make use of this method, as the extrapolation of Sarajedini’s calibration did not seem to be advisable, with reference to Figs. 32 and 33.
#### 5.2.3 Distance
The brightness $`M_V^{HB}`$ of the horizontal branch is the best distance indicator for GCs. However, there is a lively discussion on how this brightness depends on the metallicity of the cluster.
We take the LMC distance as the fundamental distance for calibrating the zero point in the relation between metallicity and horizontal branch/RR Lyrae brightness. The third fundamental distance determination, besides trigonometric parallaxes and stellar stream parallaxes, is the method of Baade-Wesselink parallaxes. It has been applied to the LMC in its modified form known as Barnes-Evans parallaxes. So far, it has been applied to Cepheids in NGC 1866 (Gieren et al. GIE94 (1994)), and the most accurate LMC distance until now stems from the period-luminosity relation of LMC Cepheids by Gieren et al. (GIE98 (1998)). We adopt the distance modulus from the latter work, which is $`18.46\pm 0.06`$ mag and in very good agreement with most other work (e.g. Tanvir TAN96 (1996)).
If we adopt the apparent magnitude of RR Lyrae stars in the LMC from Walker (WAL92 (1992)), $`18.94\pm 0.1`$ mag for a metallicity of $`[\mathrm{Fe}/\mathrm{H}]=-1.9`$ dex, and the metallicity dependence from Caretta et al. (1998b ), one gets
$$M_V(RR)=(0.18\pm 0.09)([\mathrm{Fe}/\mathrm{H}]+1.6)+0.53\pm 0.12$$
(5)
This zero-point is in excellent agreement with the one derived from HB-brightnesses of old LMC globular clusters, if the above metallicity dependence is used (Olszewski et al. OLS91 (1991)).
With relation 5, with the reddenings (as shown in Table 6, column $`E_{V-I}^{rel}`$) and with the extinction $`A_V=R_V^IE_{V-I}`$ we can calculate the distance moduli
$$(m-M)_0=V_{HB}-A_V-M_V^{HB}$$
(6)
The values for $`M_V^{HB}`$ and $`[\text{M}/\text{H}]`$ are listed in Table 4, and the results are given in Table 8. $`R_V^I`$ comes from Table 1.
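Chaining relations 5 and 6 gives the distance directly; the sketch below does this, with the input values in the example call chosen for illustration only (the adopted cluster values are those of Tables 1, 4 and 6).

```python
def distance_kpc(v_hb, e_vi, feh, r_vi):
    """HB magnitude from relation 5, extinction A_V = R_V^I * E_(V-I),
    then the distance modulus of Eq. (6), converted to kpc."""
    m_v_hb = 0.18 * (feh + 1.6) + 0.53
    mu0 = v_hb - r_vi * e_vi - m_v_hb
    return 10.0 ** (0.2 * mu0 - 2.0)   # (m-M)_0 = 5 log10(d/kpc) + 10

print(distance_kpc(v_hb=17.66, e_vi=0.60, feh=-0.5, r_vi=2.4))
```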
### 5.3 Comparison
The distances determined via the $`M_{HB}-[\text{M}/\text{H}]`$ relation are larger than those determined by the isochrone-fitting (Table 9). However, as the related reddenings do not show any significant differences, this effect is attributed to the $`M_{HB}-[\text{M}/\text{H}]`$ relation and the isochrone-fitting itself. As described above, the isochrone fitting lacks the desired accuracy, especially because the TOP cannot be resolved for most of the clusters. Moreover, the fact that the AGB and RGB cannot be distinguished in our CMDs leads to a systematic error in the isochrone distances, in the sense that the isochrones tend to have been fitted too bright. In the following, we discuss some possible explanations for differences between the distances taken from the literature and this work. It should be remembered that the distance errors amount to about 10%.
1. The distance to NGC 6528 increases by nearly 30% compared to Richtler et al. (RTL98 (1998)). Taking into account that the isochrone (Fig. 28) might have been fitted slightly too low, we still get a distance of about $`8.1`$ kpc. Moreover, Richtler et al. determine the absolute reddening via the differentially reddened CMD, which leads to larger values ($`0.6\le E_{V-I}\le 0.8`$) compared to our $`E_{V-I}=0.46`$. Thus the distance modulus decreases by about $`0.4`$ mag, as equation 6 is corrected more strongly for reddening. Finally, the different slopes of the reddening vector have to be regarded. Richtler et al. assume $`A_V/E_{V-I}=2.6`$, whereas our slope, which we determined via the slope of the HB, amounts to $`A_V/E_{V-I}=2.4`$. On the whole, we get a difference between Richtler et al. and this work of $`0.7`$ mag in the distance modulus.
2. In the CMDs of NGC 5927 and 6760 the differential reddening becomes noticeable especially along the steep part of the RGB, as this runs nearly perpendicular to the reddening vector. Around the turnover of the AGB/RGB and for its redder part, it leads to an elongation, but not to a broadening, of the structures. Fitting an isochrone to the broadened RGB, one would generally use the middle of the RGB as an orientation, as one cannot distinguish between reddening effects and photometric errors in the outer regions. However, the red part of the AGB/RGB approximately keeps its unextinguished brightness. Thus the differential reddening might be overestimated, which leads to decreased distances. A similar point can be made for the determination of $`E_{V-I}`$ via the $`(V-I)_{0,g}-[\text{M}/\text{H}]`$ relation. Measuring the colour $`(V-I)_g`$ in the differentially reddened diagram is again best done at the middle of the broadened RGB. This leads to an increased reddening, i.e. the distance modulus will be corrected too strongly for extinction. Overestimating the colour by $`0.1`$ mag leads to a decrease in distance of about 10%.
3. For NGC~6441, Harris (HAR96 (1996)) cites a value of $`V_{HB}=17.10`$ mag. From our CMD we get $`17.66`$ mag. This lower brightness is supported by (V,B-V)-CMDs of Rich et al. (RIC97 (1997)), especially as Harris’ value comes from a CMD by Hesser & Hartwick (HES76 (1976)), whose lower limiting brightness is around $`17.3`$ mag.
4. The distances as determined via the $`[\text{M}/\text{H}]\mathrm{\Delta }V`$\- and the $`(VI)_{0,g}[\text{M}/\text{H}]`$-relation relate to the differentially dereddened CMDs, i.e. to the minimal absolute reddening. However, the papers from which we obtained the cited values (Table 6) do not take differential reddening into account (e.g. Armandroff (ARM88 (1988)) for NGC~6342 and 6760, Ortolani et al. (ORT90 (1990), ORT92 (1992)) for NGC~6528 and 6553). So their absolute reddenings are systematically larger and the distances smaller. Interestingly, the absolute reddening for NGC~6316 of $`E_{VI}=0.63`$ mag as determined in this work fits very well the value of $`E_{VI}=0.61`$ mag given by Davidge et al. (DAV92 (1992)); NGC~6316 shows the smallest differential reddening ($`\delta E_{VI}=0.07`$ mag) of our cluster sample.
5. Finally, the distances depend on the assumed extinction law. The value varies in the range $`3.1\le R_V^B\le 3.6`$ (Savage & Mathis SAV79 (1979), Grebel & Roberts GRE95 (1995), see Fig 21 and discussion). This effect should have the strongest influence on the distances as determined in this work. Taking an absolute reddening of $`0.5`$ mag, the variation between the above cited values results in a difference of $`0.25`$ mag in the distance modulus. This corresponds to about 25% of the distance in kpc.
The distance error mostly depends on the absolute reddening used. The errors in the metallicities have only a minor influence on the distances (see Table 5). They amount to around 3% of the total distance in kpc. In conclusion, the increased distances $`r_{rel}`$ (Table 9) are due to the fact that we determine the distance-relevant parameters using the differentially dereddened CMDs.
### 5.4 Masses
To classify the clusters according to Burkert & Smith (BUR97 (1997)), we have to determine the masses from the total absolute brightnesses. Because we could not measure the apparent total brightness, we used the values given by Harris (HAR96 (1996)). With the extinctions and distance moduli given above (Table 8), we get the absolute total brightnesses via
$$M_V^{total}=V^{total}-A_V-(m-M)_0$$
(7)
We determined the masses using a mass-to-light-ratio of $`\left(\frac{M}{L}\right)_V=3`$ (Chernoff & Djorgovski CHE89 (1989)). Table 10 shows the results. Thus, NGC~6441 is one of the most massive clusters of the galaxy. $`\omega `$ Cen/NGC 5139 has $`\mathrm{log}(M/M_{\odot })=6.51`$ (Harris HAR96 (1996)).
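A sketch of this mass estimate in code, assuming the usual solar value $`M_{V,\mathrm{sun}}=4.83`$ mag; the cluster numbers are placeholders, not those of Tables 8 and 10.

```python
# Sketch of eq. 7 plus the M/L conversion. Inputs are placeholders.
import math

M_V_SUN = 4.83  # adopted solar absolute V magnitude (an assumption here)

def log_mass(v_total, a_v, dist_mod, ml_ratio=3.0):
    m_v_total = v_total - a_v - dist_mod          # equation 7
    lum_v = 10 ** (-0.4 * (m_v_total - M_V_SUN))  # L_V in solar units
    return math.log10(ml_ratio * lum_v)           # log(M / M_sun)

print(log_mass(v_total=7.15, a_v=1.2, dist_mod=15.7))  # ~6.3 for toy input
```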
## 6 Classification and assignment
After having determined the parameters of our cluster sample, we now discuss each cluster’s possible affiliation with the galactic structure components, i.e. halo, disk or bulge. The necessary criteria are introduced in the following subsection.
### 6.1 The assignment criteria
#### 6.1.1 Disk and Halo: Zinn (1985)
Zinn (ZIN85 (1985)) divided the GC-system into a metal-poor ($`[\text{M}/\text{H}]\le -0.8`$ dex) halo- and a metal-rich ($`[\text{M}/\text{H}]\ge -0.8`$) disk-subsystem. This distinction also correlated with the kinematics and spatial distribution of their objects. The resulting criteria are listed in Table 11. Equation 8 gives the orbital velocity $`v_c`$ of a cluster depending on its observed radial velocity $`v_{rad}`$. $`v_c`$ can be compared to the net rotation as given in Table 11.
#### 6.1.2 Bulge and (thick) disk: Minniti (1995,1996)
Minniti (MIN95 (1995), MIN96 (1996)) divided Zinn’s metal-rich disk system further into GCs belonging to the (thick) disk on the one hand and to the bulge on the other. Comparing the GCs with their corresponding field population, he assigned the GCs with galactocentric distances $`R_{gc}\le 3`$ kpc to the bulge and the ones with $`R_{gc}\ge 3`$ kpc to the thick disk.
#### 6.1.3 Inner halo, bar and disk: Burkert & Smith
Burkert & Smith (BUR97 (1997)) used the masses of the metal-rich GCs to distinguish between a group belonging to the inner halo and a group which can be further divided into a bar- and a ring-system using the kinematics and spatial distribution of the clusters (see Table 12).
#### 6.1.4 Radial velocities
Unfortunately, there do not exist any data on proper motions of our clusters. The only kinematic information available consists of radial velocities, catalogued by Harris (HAR96 (1996)). Thus, we can only check whether a disk orbit is compatible with a given radial velocity. This is possible by comparing the measured radial velocity $`v_{rad}`$ with the expected one, calculated via equation 8 assuming that disk clusters move on circular orbits in the galactic plane.
$$v_{rad}=v_c\mathrm{sin}\left(l+\mathrm{arctan}\left(\frac{y}{R_s-x}\right)\right)-v_s\mathrm{sin}(l),$$
(8)
where $`l`$ is the galactic longitude, and $`x`$ and $`y`$ are the heliocentric coordinates. We used $`R_s=8.0`$ kpc and $`v_s=220`$ km/s. $`v_c`$ gives the velocities of the clusters in the plane, corresponding to the galactic rotational velocity $`v_{rot}(R_s-x)`$ with the values taken from Fich & Tremaine (FIC91 (1991)).
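A sketch of equation 8 in code; a flat rotation curve stands in for the Fich & Tremaine values, and the coordinates are illustrative.

```python
# Sketch of equation 8: expected radial velocity for a circular disk orbit.
import math

R_S, V_S = 8.0, 220.0  # kpc, km/s

def v_rot(r):
    return 220.0       # flat rotation curve assumed for this sketch

def expected_v_rad(l_deg, x, y):
    l = math.radians(l_deg)
    v_c = v_rot(R_S - x)
    return v_c * math.sin(l + math.atan2(y, R_S - x)) - V_S * math.sin(l)

print(expected_v_rad(l_deg=30.0, x=4.0, y=2.0))  # km/s, toy coordinates
```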
#### 6.1.5 Metallicity gradient
The metallicity gradient of the disk is an uncertain criterion insofar as it is defined for the outer ranges of the galactic disk. We use a metallicity gradient referring to the population of old open clusters. The oldest of these objects have ages similar to the youngest GCs (Phelps et al. PHE94 (1994)). Their scale height is comparable to other thick disk objects. Assuming that they are related to a possible disk population of GCs (Friel FRI95 (1995)), we can use their metallicity gradient
$$\frac{\partial [\text{Fe}/\text{H}]}{\partial R_{gc}}=-0.091\pm 0.014$$
(9)
(Friel FRI95 (1995)) as a criterion for whether our GCs belong to the galactic thick disk or not.
### 6.2 The assignment
Using the above criteria, we assigned the clusters of our sample according to Table 13. The values of the parameters necessary to decide on group membership are listed in Table 14.
As for the metallicities of our clusters, they all belong to the disk system according to Zinn, which is obvious as the sample had been selected in this way. Not so obvious is the comparison with the net rotation of Zinn’s disk group. Only NGC~5927 shows a value of $`v_{rot}`$ which is not totally off the net rotation as given in Table 11.
The clusters belonging to the bulge according to Minniti’s criterion are members of the bar following the arguments of Burkert & Smith (BUR97 (1997)). Binney et al. (BIN97 (1997)) quote a value of $`20^{\circ }`$ for the angle between the x-axis in galactocentric coordinates and the major semiaxis of the bulge structure. Its end lying nearer to the sun is located at small galactic longitudes ($`y>0`$ in cartesian coordinates). Fig 34 shows the spatial distribution of our cluster sample. The coordinates of the ’bar’ clusters NGC~6342, 6528 and 6553 according to Burkert & Smith seem to be consistent with a structure described by Binney et al. (BIN97 (1997)). However, as the referee pointed out, we do not know how long-lived the Milky-Way bar is, and other tracers of old populations such as RR Lyrae stars do not follow the bar (Alcock et al. ALC98 (1998)). Moreover, the distance between the ’bar’ clusters NGC 6528 and NGC 6553 is about $`5`$ kpc, which is much larger than the length of the Milky-Way bar according to most authors (e.g. Binney et al. BIN97 (1997)). Also note in Fig 34 that the errors in the x-coordinate are larger than those in y and z.
There are only two ’disk’ clusters remaining, assuming Burkert & Smith’s definition of disk clusters: NGC~5927 and NGC~6760. However, the radial velocities corroborate this result for NGC~5927 only. For any other cluster, the radial velocities seem to exclude an assignment to the disk.
The metallicity gradient of the old open clusters leads to the conclusion that none of our clusters is to be assigned to the thick disk. Taking the whole sample of metal-rich clusters (i.e. clusters with $`[\text{M}/\text{H}]\ge -0.8`$ dex according to Zinn ZIN85 (1985), see Tables 16, 15), we find only three objects which could be disk clusters according to the metallicity gradient criterion.
Some of the clusters do not meet any of the criteria. Interestingly, they are the most massive, but most metal-poor objects of the sample. These objects are NGC~6316, 6760, and 6441 as well as (in Table 16) NGC 104, 6356 and 6388. Although NGC 104 seems to be a disk cluster and is mostly referred to as such, the large distance to the galactic plane (3 kpc) does not support this assignment. These objects probably belong to the halo, being its most metal-rich clusters. Zinn (ZIN85 (1985)) and Armandroff (ARM93 (1993)) point to the fact that the division into metal-rich and metal-poor clusters is by no means an exact one, but that there is a metal-rich sample of halo clusters as well as a metal-poorer one of disk objects. Richtler et al. (RTL94 (1994)) discussed the existence of a subgroup of disk clusters according to Zinn (ZIN85 (1985)), based on an analysis of the metallicities and the minimum inclination angles derived from $`z/R_{gc}`$-values for these clusters. They conclude that the clusters NGC 6496, 6624 and 6637 might not be disk clusters after all, but belong to the halo. Adding their argument to the above discussion, we end up with 3 probable disk members (NGC~5927, plus Liller 1 and Pal 10 from Table 16; Pal 11 is excluded because of its large minimum inclination angle) and 9 clusters (NGC 104, 6316, 6356, 6388, 6441, 6496, 6624, 6637 and 6760) that more likely belong to the halo than to the (thick) disk. The rest of the clusters (NGC~6342, 6528, 6553 and the remaining ones of Table 16) fall in with the bulge/bar-group of Minniti (MIN95 (1995)) and Burkert & Smith (BUR97 (1997)).
## 7 Conclusions
We derived the parameters for five GCs near the galactic center in a uniform manner, employing a new calibration of methods which relate structures in the CMDs to the parameters. Taking the differential reddening into account and correcting the CMDs for it leads to more accurately determined parameters and a decreased absolute reddening. There might be a systematic effect on distances if the differential reddening is not taken care of.
With the $`[\text{M}/\text{H}]\mathrm{\Delta }V`$\-method we present an accurate way to differentially estimate metallicities of metal-rich GCs. It might prove especially useful for surveys of clusters in V, V-I, as their CMDs only need to contain the HB and the turnover of the AGB/RGB.
The metallicities of our program clusters all lie in the range of the clusters constituting the classical disk-system of GCs in the Milky Way. However, different criteria defining subgroups of the GC-system partly lead to differing results. Most of the metal-rich GCs seem to belong to a bar/bulge-structure, and only a minority can clearly be classified as ’disk’-clusters. So the classical disk-system is more likely to be a mixture of a halo- and a bulge-component.
###### Acknowledgements.
We are indebted to E.K. Grebel for the most interesting and valuable discussions, especially on the technique of differential dereddening. We would like to thank the referee D. Minniti for helpful comments and criticism.
# A Computational Memory and Processing Model for Prosody
## 1. Introduction
Ask any lay person to imitate computer speech and you will be treated to an utterance delivered in melodic and rhythmic monotone, possibly accompanied by choppy articulation and a voice quality that is nasal and strained. In fact, current synthesized speech is far superior. Yet few would argue that synthetic and natural speech are indistinguishable. The difference, as popular impression suggests, is the relative lack of interesting and natural variability in the synthetic version. It may be traced in part to the lack of a common causal account of pitch, timing, articulation and voice quality. Intonation and stress are usually linked to the linguistic and information structure of text. Features such as pause location and word duration are linked mainly to the speaker’s cognitive and expressive capacities, and pitch range, intensity, voice quality and articulation to her physiological and affective state.
In this paper, I describe a production model that attributes pitch and timing to the essential operations of a speaker’s working memory – the storage and retrieval of information. Simulations with this model produce synthetic speech in three of the prosodic styles likely to be associated with attentional and memory differences: a child-like exaggerated prosody for limited recall; a more adult but still expressive style for mid-range capacities; and a knowledgeable style for maximum recall. The same model also produces individual differences within each style, owing to its stochastic storage algorithm. The ability to produce both individual and genre variations supports its eventual use in prosthetic, entertainment and information applications, especially in the production of reading materials for the blind and the use of computer-based autonomous and communicative agents.
## 2. A Memory Model for Prosody
Prosody organizes spoken text into phrases, and highlights its most salient components with pitch accents, distinctive pitch contours applied to the word. Pitch accents are both attentional and propositional. Their very use indicates salience; their particular form conveys a proposition about the words they mark. For example, speakers typically use a high pitch accent (denoted as H\*) to mark salient information that they believe to be new to the addressee. Conversely, when they believe the addressee is already aware of the information, they will typically de-accent it or, if it is salient, apply a low pitch accent (L\*). Re-stated as a commentary on working memory, the H\* accent conveys the speaker’s belief that the addressee can not retrieve the accented information from working memory. De-accenting implicitly conveys the opposite expectation. The L\* accent does so explicitly. This view predicts different speaking styles as a consequence of the speaker’s beliefs about an addressee’s storage and retrieval capacities. For example, it ascribes the exaggerated intonation that adults use with infants and young children, to the adults’ belief that the child’s knowledge and attention are extremely limited; therefore, he needs clear and explicit prosodic instructions as to how to process language and interaction.
The model of working memory I use shows how retrieval limits can determine the information status of an item as either given or new, and therefore, its corresponding prosody. It was developed and implemented by Thomas Landauer and models working memory as a periodic three dimensional Cartesian space, the focus of attention via a moving search and storage pointer that traverses the space in a slow random walk, and retrieval ability via a search radius that defines the size of a region whose center is the pointer’s current location. Search for familiar items proceeds outward from the pointer, one city block per time step, up to the distance specified by the search radius.
As a consequence of the random walk, incoming stimuli are stored in a spatial pattern that is locally random but globally coherent. That is, temporal proximity in the stimuli begets spatial proximity in the model. It contrasts with stack models of memory that are strictly chronological, and semantic spaces in which distance is conceptual rather than temporal. Most importantly, it is a valid computational model of attention and working memory (AWM, from here on). Landauer used it to reproduce the well-known learning phenomena of recency and frequency, in which subjects tend to recall stimuli encountered most recently or most frequently. It has since been used by Walker to show that resource-bound dialog partners will make a proposition explicit when it is not retrievable or inferable, despite having been previously mentioned.
Retrieval in AWM is the process of matching the current stimulus to the contents of the region centered around the pointer. The search radius determines the size of this region and therefore is the main AWM simulation parameter. If a match is found within the search region, the stimulus is classified as given, otherwise, it is new. Figure 1 illustrates this with the simple example of filled and unfilled circles, a 4x4 AWM space, and a search radius of one. At the center of the search region is the current stimulus, a filled circle. Because the region contains no other filled circles, the stimulus is classed as new. Had the stimulus been an unfilled circle, it would have instead been classed as given because a match is retrievable within the search radius. Or, alternatively, had the search radius been two instead of one, a matching filled circle would have been found, and the stimulus again classed as given.
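The store-and-search cycle just described can be sketched as follows; this is a simplified illustration (plain equality matching, an unordered scan of the region) rather than the Loq implementation.

```python
# Sketch of the AWM store-and-search cycle. Names are invented for this
# illustration; the outward, one-block-per-step search order is not kept.
import random

class AWM:
    def __init__(self, size=4, radius=1, dims=2):
        self.size, self.radius, self.dims = size, radius, dims
        self.space = {}                  # location tuple -> stored item
        self.pointer = tuple(0 for _ in range(dims))

    def _step(self):
        # Slow random walk: one cell along one random dimension,
        # wrapping around because the space is periodic.
        d = random.randrange(self.dims)
        p = list(self.pointer)
        p[d] = (p[d] + random.choice((-1, 1))) % self.size
        self.pointer = tuple(p)

    def _region(self):
        # All locations within `radius` city blocks of the pointer.
        def ball(center, r):
            if not center:
                yield ()
                return
            for off in range(-r, r + 1):
                c = (center[0] + off) % self.size
                for rest in ball(center[1:], r - abs(off)):
                    yield (c,) + rest
        return ball(self.pointer, self.radius)

    def process(self, stimulus):
        """Classify the stimulus as 'given' or 'new', then store it."""
        status = 'given' if any(self.space.get(loc) == stimulus
                                for loc in self._region()) else 'new'
        self.space[self.pointer] = stimulus
        self._step()
        return status

awm = AWM(size=4, radius=1)
for word in "the cat saw the cat".split():
    print(word, awm.process(word))
```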
The ability to identify given and new items makes AWM a useful producer of prosody based on this distinction. Ostensibly, it shows how a speaker’s processing affects her prosody. However, although the working memory belongs to the speaker, its operation and determinations may reflect the speaker’s own retrieval capacities, her estimate of those of the addressee, or a mixture of both. That is, a speaker can always adapt her style (prosodic and lexical) to the needs of a less knowledgeable or capable addressee. A cooperative and communicative speaker will usually do this. However, she cannot model a retrieval capacity greater than her own – her own knowledge and attentional limits always constitute the upper bounds on her performance.
## 3. System design
The AWM component is embedded in a software implementation, Loq, that takes a text-to-speech approach. As shown in Figure 2, the input to AWM is text, the output is speech. Therefore, Loq models read rather than spontaneous speech. Text comprehension is the process of searching for a match. Uttering the text is a question of mapping the search process and its results to prosodic features and sending the prosodically annotated text to the synthesizer.
Like many commercial text-to-speech synthesizers, the text structure is analyzed before prosody is assigned. However, the Loq analysis is richer. It takes advantage of on-line linguistic databases to approximate the speaker’s knowledge of English semantics, pronunciation and usage. The structural analysis is richer as well, providing both grammatical structure (subject, verb, object), empty categories (ellipses, for example) and information about clausal attachment. The main qualitative difference is that Loq interposes a model of limited attention and working memory between the text analysis and prosodic mapping components.
### 3.1. Matching
For the example in Figure 1, the matching criterion is binary and simple – a circle is either filled or unfilled. However, language is many times more complex, and matches may occur for a variety of features, some of which are more informative than others. The matching criteria used in Loq attempt to distill from the literature (e.g., ) the most relevant and prevalent ways that items in memory prime for the current stimulus, and by the same token, the ways in which the current stimulus can function as a retrieval cue. In other words, they gauge the mutual information between the current stimulus and previously stored items.
Altogether, Loq tests for matches on twenty-four semantic, syntactic, collocation, grammatical and acoustical features. Each test contributes to the total match score, which is then compared to a threshold. If it is below, the search continues; if above, it stops. As shown in Figure 3, matches on any criterion express priming, and scores above the threshold constitute a match sufficient to stop the search even before it reaches the edge of the search region. Because some tests are more informative than others, a high score can reflect the positive outcome of many un-informative tests, or of one that is definitive. Thus, in the current ordering, co-reference ensures a match, while structural parallelism in and of itself does not.
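Schematically, the scoring works as below; the feature names, weights and threshold are invented for the example and do not reproduce Loq's twenty-four criteria or their ordering.

```python
# Toy weighted-score matcher in the spirit of the criteria described above.
WEIGHTS = {
    'coreference': 1.0,         # definitive: ensures a match by itself
    'same_lemma': 0.5,
    'synonym': 0.4,
    'parallel_structure': 0.2,  # priming only, never sufficient alone
    'alliteration': 0.1,
}
THRESHOLD = 1.0

def match_score(stimulus, stored):
    """Sum the weights of all features on which the two items agree."""
    return sum(w for feat, w in WEIGHTS.items()
               if stimulus.get(feat) and stimulus.get(feat) == stored.get(feat))

def classify(stimulus, region_items):
    """Stop at the first stored item whose score reaches the threshold."""
    for item in region_items:   # ideally ordered outward from the pointer
        if match_score(stimulus, item) >= THRESHOLD:
            return 'given'
    return 'new'
```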
### 3.2. Input
The matching criteria determine the form and kind of information in the text input. As with commercial synthesizers, this includes part of speech tagging. Loq uses the output of Lingsoft’s ENGCG (English Constraint Grammar) software which provides both tags and phrase structure information. However, reliable automatic means for identifying other information, such as grammatical clauses, empty categories, attachment and co-reference do not yet exist. Therefore, this information was entered by hand.
The Loq software turns the parsed and annotated text into a sequence of tokens that assembles clauses in a bottom up fashion, starting with the word and followed by the syntactic and grammatical clauses to which it belongs. This models the reader’s assembly of the words into meaningful syntactic and grammatical groupings.<sup>1</sup><sup>1</sup>1Adapting this for a spontaneous speaker would proceed in reverse, from the concept, to grammatical roles, syntactic phrases and finally, the words.
To facilitate the matching process, the text is also augmented with information from the WordNet database for semantic comparisons, a pronunciation database for acoustical comparisons and the Thorndike-Lorge and Kucera-Francis for word frequency counts<sup>2</sup><sup>2</sup>2As provided in the Oxford Psycholinguistic Database. to scale the match score by the prior probability for the language. The WordNet synonym indices were assigned by hand. However, all subsequent semantic comparisons using WordNet are automatic as required by the matching process.
### 3.3. Mapping
I have described how AWM produces the L\* accent (or none) for retrievable items, and H\* for new ones. However, there are more than two pitch accents – Pierrehumbert et al. identify six<sup>3</sup><sup>3</sup>3L\*, H\*, L+H\*, L\*+H, H+L\*, H\*+L. – and more components to prosody. Obtaining them from one model first requires an adjustment such that given or new status is determined from the effect of the stimulus on the region as a whole, as follows: The result of any one comparison affects the “state” of the item to which the stimulus is compared. State is simply defined – a L annotation records a match on almost any criterion,<sup>4</sup><sup>4</sup>4Some criteria are parasitic and only contribute to the score in combination with other criteria. and a H annotation records a match score of zero. Thus, the comparison process registers both priming and a true match. Both receive L annotations, but only a match whose score exceeds the threshold stops the search.
A pitch accent is then derived by comparing the contents of the search radius before and after the matching process. Majority rules apply such that the annotation with the higher count becomes the defining tone. If both the before and after configurations are composed mainly of L annotations, the accent form is L+L, which becomes the L\* accent. However, if there is a change, for example, from a L to H majority, the accent form is L+H. The interpretation of L+L is, roughly, that a familiar item was expected and provided. Likewise, the interpretation for L+H is that a familiar item was expected but an unexpected one provided.
To complete the bitonal derivation, Loq treats the location of the main tone as a categorical reflection of the magnitude of the effect of the stimulus. If the stimulus changes the annotations for the majority of items in the search region, the second tone is the main tone. Otherwise, it is the first. This schema produces the six pitch accents identified by Pierrehumbert et al. More generally, the annotation schema provides the model with a simple form of feedback – the results of prior processing persist and contribute to a bias that affects future processing.
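A schematic rendering of this derivation, with the annotation bookkeeping simplified for illustration:

```python
# Sketch of the bitonal derivation: compare the majority annotation in the
# region before and after matching; if a majority of items changed, the
# second tone carries the star, otherwise the first.
def majority(annotations):
    return 'L' if annotations.count('L') >= annotations.count('H') else 'H'

def pitch_accent(before, after):
    """before/after: lists of 'L'/'H' annotations for the region's items."""
    first, second = majority(before), majority(after)
    changed = sum(1 for b, a in zip(before, after) if b != a)
    starred = 1 if changed > len(before) // 2 else 0
    if first == second:
        return first + '*'            # e.g. L+L collapses to L*
    tones = [first, second]
    tones[starred] += '*'
    return '+'.join(tones)

print(pitch_accent(['L', 'L', 'L'], ['L', 'H', 'H']))  # -> L+H*
```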
The pitch accent mapping illustrates the main features of the prosodic mapping in general. First, all mappings reflect the activity and state within the region defined by the search radius. Second, they express some aspects of prosody as a plausible consequence of search and storage. For example, storage and search times are mapped to word and pause duration. However, others – for example, the bitonal derivation – are, at best, coherent with the operation and purpose of the model and not contradicted by the current (sparse) data on the relation of cognitive capacity to the prosody of read speech. In all, the mapping from AWM activity and state produces intonational categories (pitch accent, phrase accent and boundary tone) and their prominence, word duration, pause duration and the pitch range of an intonational phrase.
## 4. Results
Although simulations were run using text from three different genres (fiction, radio broadcast, rhymed poetry), two and three dimensional AWM spaces and three memory sizes (small, mid-range and large), most of the prosodic output was correlated with the search radius. Therefore, the results reported here are for the mid-range two-dimensional memory (22x22) and for the news report text only (one paragraph, 68 words). Five simulations were run for each radius.
True to the attentional predictions, Figure 5 shows that as the search radius increases, the mean number of unaccented words increases as well, while the number of H\* accents decreases. Under the current mapping, pitch accent prominence is a function of the distance at which the search stops and the number of comparisons performed prior to stopping. This produces a decrease in the mean prominence as the search radius increases (Figure 6). These patterns contribute to the lively and child-like intonation produced for the smallest radii (1 and 2), the expressive but more subdued intonation for the mid-range radii (3-8) and the flatter intonation of the higher radii.
The naturalness of synthetic prosody is difficult to evaluate in perceptual tests. However, informal comments from listeners revealed that while the three styles were recognizable and the prosody more natural-sounding than the commercial default, it was best for shorter sections rather than for the passages as a whole. A comparison with the natural prosody for the same text (the BU Corpus radio newscasts) showed that when the simulations agreed on pitch accent location and type, they tended to disagree on boundary location and type, mostly because the Loq simulations produced many more phrase breaks than the natural speaker.
## 5. Conclusion and Future Directions
Loq is a production model. It produces prosody as the consequence of cognitive processing as modeled by the AWM component. Its focus on retrieval makes it a performance model as well, demonstrating that prosody is not determined solely by the text. It produces three recognizable styles that appear to correlate with retrieval capacities as defined by the search radius: child-like (for radii of 1 and 2), adult expressive (for radii between 3 and 8) and knowledgeable (for radii higher than 8). This is a step towards producing prosody that is both expressive and natural and, in addition, specific to the speaker.
Currently, the main problem is that the prosody is not entirely cohesive within one text. Therefore, one next step is to explore variations on the mapping of AWM activity and state to prosodic features. More distant work includes extending the model to incorporate other influences, especially the influence of physiology. This may be the key to producing more than three styles, and to incorporating both the dynamics and constraints that will produce consistently natural-sounding speech.
# What caused the onset of the 1997–1998 El Niño?
## The Problem
The 1997–1998 El Niño was one of the strongest on record. Unfortunately, its onset was not predicted as well as had been hoped (Pearce, 1997). In spite of claims that an El Niño could be predicted a year in advance, most predictions (Stockdale et al., 1998; Ji et al., 1996; Huang and Schneider, 1999; Kleeman et al., 1995) only started to indicate a weak event six months ahead of time. There have therefore been suggestions that El Niño depends not only on internal factors, but also on external noise in the form of weather events in the western Pacific.
The classical picture of El Niño (Bjerknes, 1966; Philander, 1990) is that the usual temperature difference between the warm water near Indonesia and the ‘cold tongue’ in the eastern equatorial Pacific causes an intensification of the trade winds. These keep the eastern region cool by drawing cold water to the surface. This positive feedback loop is kept in check by nonlinear effects. During an El Niño the loop is broken: a decreased temperature difference causes a slackening or reversal of the trade winds over large parts of the Pacific. This prevents cold water from reaching the surface, keeping the surface waters warm and sustaining the El Niño.
This picture leaves open the question of how an El Niño event is triggered and terminated. A variety of mechanisms has been proposed. On long time scales an unstable mode of the nonlinear coupled ocean-atmosphere system may be responsible (Neelin, 1991), either oscillatory or chaotic. Other authors stress the importance of a ‘recharge’ mechanism (Wyrtki, 1975; Jin, 1997), with a build-up of warm water in the western Pacific preceding an El Niño. Another description on shorter time scales is in terms of reflections of equatorial Rossby and Kelvin waves in the thermocline (the interface between warm surface water and the cold water below at about 100 m depth). These would provide the negative feedback that sustains oscillations (Suarez and Schopf, 1988; Battisti and Hirst, 1989; Kessler and McPhaden, 1995). However, short-scale atmospheric ‘noise’ in the form of westerly wind events in the western Pacific may also be essential in triggering an El Niño (Wyrtki, 1985; Kessler et al., 1995).
Here we trace the causes of the onset of last year’s El Niño in May 1997 over the six months from 1 December 1996. This is the time scale over which predictions are currently skillful. Although El Niño is an oscillation of the coupled ocean-atmosphere system, the analysis can be simplified by first studying the response of the ocean to forcing with observed wind stress and heat flux fields. This response contains all time delays. The other part of the loop, the dependence of the wind stress and heat flux on the ocean surface temperature will be discussed separately.
The ocean model used is the Hamburg Ocean Primitive Equation Model, HOPE (Frey et al., 1997; Wolff et al., 1997) version 2.3, which is very similar to the ocean component of the European Centre for Medium-range Weather Forecasts (ECMWF) seasonal prediction system (Stockdale et al., 1998), but restricted to the Pacific Ocean. It is a general circulation model with a horizontal resolution of $`2.8^{\circ }`$, increased to $`0.5^{\circ }`$ along the equator, and a vertical resolution of 25 m in the upper ocean. It traces the evolution of temperature $`T`$, salinity $`S`$, horizontal velocities $`u,v`$ and sea level $`\zeta `$.
This ocean model is forced with daily wind stress $`(\tau _x,\tau _y)`$ and heat flux $`Q`$ from the ECMWF analysis, which in turn uses the excellent system of buoys (McPhaden et al., 1997) that observed this El Niño. Evaporation and precipitation are only implemented as a relaxation to climatological surface salinity. The initial state conditions are ECMWF analysed ocean states. To suppress systematic model errors we subtract a run starting from an average 1 December ocean state forced with average wind and heat fluxes (both 1979–1996 averages (Gibson et al., 1997)).
The model simulates the onset of the 1997–1998 El Niño quite well. We use the nino3 index $`N_3`$, which is a common measure of the strength of El Niño (the anomalous sea surface temperature in the area 5S–5N, 90W–150W). In Fig. 1 the weekly observed nino3 index (Reynolds and Smith, 1994) is shown together with the index in the model run, compared to the same period one year earlier. The model overreacts somewhat to the forcing and simulates a nino3 index of 2.3 K at 1 June 1997, whereas in reality the index reached this value one month later. In 1995–1996 the simulation follows reality very well.
## The Adjoint Model
The value of the nino3 index at the end of a model run can be traced back to the model input (initial state, forcing) with an *adjoint model*. The normal ocean model is a (complicated) function $``$ that takes as input the state of the ocean at some time $`t_0`$ (temperature $`T_0`$, salinity $`S_0`$, etc.). Using the wind stress $`\stackrel{}{\tau }_i`$ and heat flux $`Q_i`$ for each day $`i`$ for six months it then produces a final state temperature $`T_n`$. The adjoint model (or backward derivative model) is the related function that takes as input the derivative of a scalar function of the final state, here the nino3 index, $`\partial N_3/\partial T_n`$. It goes backward in time and uses the chain rule of differentiation (Giering and Kaminski, 1998) to compute from these (and the forward trajectory) the derivatives $`\partial N_3/\partial T_0`$, $`\partial N_3/\partial S_0`$, $`\partial N_3/\partial \stackrel{}{\tau }_i`$ and $`\partial N_3/\partial Q_i`$. These derivatives can be interpreted as *sensitivity fields*, giving the effect of a perturbation in the initial state or forcing fields. We can use them to make a Taylor expansion of the nino3 index in all the input variables:
$$N_3\approx \frac{\partial N_3}{\partial T_0}\delta T_0+\frac{\partial N_3}{\partial S_0}\delta S_0+\sum _{\mathrm{days}\,i}\left(\frac{\partial N_3}{\partial \stackrel{}{\tau }_i}\delta \stackrel{}{\tau }_i+\frac{\partial N_3}{\partial Q_i}\delta Q_i\right)$$
(1)
This means that the value of the index is explained as a sum of the influences of initial state temperature and salinity, and the wind and heat forcing during the six months of the run. These influences are each a dot product of the sensitivity to this variable (computed with the adjoint model) with its deviation from the normal state (extracted from the ECMWF analyses). To minimize higher order terms we take the average derivative from the simulation and the climatology run. We have checked with actual perturbations that the accuracy of the linear approximation Eq. 1 is usually better than about 30% (within the model). Details can be found in van Oldenborgh et al. (1999).
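Schematically, the attribution in Eq. 1 is a sum of inner products of sensitivity fields with anomaly fields; the sketch below uses random toy arrays in place of the adjoint-model and analysis fields.

```python
# Toy rendering of Eq. 1. Shapes and values are placeholders, not output
# of the adjoint HOPE model or the ECMWF analyses.
import numpy as np

def contribution(sensitivity, anomaly):
    return float(np.sum(sensitivity * anomaly))

rng = np.random.default_rng(0)
dN3_dT0 = rng.normal(size=(25, 64, 64))         # sensitivity to T_0
dT0 = rng.normal(scale=0.1, size=(25, 64, 64))  # initial temperature anomaly

n3 = contribution(dN3_dT0, dT0)
for day in range(182):                           # six months of daily forcing
    dN3_dtaux = rng.normal(size=(64, 64))        # sensitivity to wind stress
    dtaux = rng.normal(scale=0.01, size=(64, 64))
    n3 += contribution(dN3_dtaux, dtaux)
print(f"linearized nino3 index: {n3:.2f} K")
```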
## The 1997–1998 El Niño
For the value of the nino3 index on 1 June 1997 the linearization Eq. 1 gives a value of 1.8 K, compared with the 2.3 K simulated (and 1.3 K observed); this is within the expected error. The high value is mainly due to the influence of the westerly wind anomalies (1.0 K) and the initial state temperature on 1 December 1996 (1.1 K). The salinity contributes $`-0.3`$ K, with a large uncertainty.
The spatial structure of the influence of the initial state temperature is shown in Fig. 2. The top panel gives the temperature anomaly $`\delta T_0`$ along the equator at the beginning of the run (Dec 1996), showing an unusually deep thermocline in the western Pacific and a shallower thermocline in the eastern Pacific. The second frame depicts the sensitivity of the June nino3 index to temperature anomalies six months earlier, $`\partial N_3/\partial T_0`$. The third frame is just the product of the previous two; the integral of this over the whole ocean gives the 1.1 K contribution to the nino3 index mentioned before. The contribution is concentrated in the deeper layer of warm water along the equator in the western Pacific, in agreement with a ‘recharge’ mechanism.
Fig. 3 shows the time structure of the influence of the zonal wind stress. The area under the solid graph gives the total influence, 1.0 K. The main causes of warming are the three peaks in zonal wind stress (dashed line) at the beginning of March, the end of March and the beginning of April, contributing about 0.6 K, 0.3 K and 0.5 K respectively. The peaks correspond to (very) strong westerly wind events in the western Pacific. These generated downwelling Kelvin waves in the thermocline that travelled east and deepened the layer of warm water in the eastern Pacific 2–3 months later, increasing the surface temperature. There was also a strong wind event in December, contributing about 0.4 K over a negative baseline. From Fig. 3 it seems likely that it increased the strength of the later wind events by heating the eastern Pacific in March. The heating effect of the March wind event also gave rise to an increase of the wind stress $`\delta \tau _x`$ in May, but this reversal of the trade winds does not yet influence the nino3 index $`\partial N_3/\partial \tau _x\,\delta \tau _x`$, justifying the uncoupled analysis.
The structure of the peaks in Fig. 3 can be seen more clearly in spatial views. In Fig. 4a the zonal wind stress anomaly $`\delta \tau _x`$ is plotted for the second week of March. The westerly wind event corresponds to the large localized westerly anomaly around 150E. Fig. 4b shows the sensitivity of the nino3 index in June to the zonal wind stress during this week, $`\partial N_3/\partial \tau _x`$. This sensitivity consists of two main parts, both equatorially confined. In the western and central Pacific extra westerly wind stress would excite a downwelling Kelvin wave, raising the nino3 index three months later. In the eastern Pacific the response would be in the form of a Rossby wave. The product of the anomaly and sensitivity fields is shown in Fig. 4c. This gives the influence of zonal wind stress during this week on the nino3 index; the integral of this field gives the corresponding value (0.22 K) in Fig. 3. The influence is contained in the intersection of the westerly wind event and the equatorial wave guide, and very localized in time and space.
The question remains whether the big influence of these wind events was due to their strength $`\delta \tau _x`$ or to an increased sensitivity of the ocean $`\partial N_3/\partial \tau _x`$. We therefore repeated the analysis for the same months one year earlier, when the temperature in the eastern Pacific stayed below normal (Fig. 1). The adjoint model gives a nino3 index of $`-0.6`$ K, equal to the simulated index (the observed index was $`-0.7`$ K). This index is built up by a large negative influence of the wind stress, $`-1.5`$ K, and a positive influence of the heat flux, $`+0.9`$ K. The influence of the initial state temperature is also positive, but weaker than in the 1996–1997 case: $`+0.6`$ K; the salinity contributes $`-0.5`$ K.
Although the build-up of warm water is also less pronounced, the largest difference is in the influence of the zonal wind stress. The sensitivity to zonal wind stress $`\partial N_3/\partial \tau _x`$ (over the area where its variability is largest) is compared for these two years in Fig. 5. During the time of the strong early March wind event the sensitivity was not very different between the two years, but it was a factor of two higher in April 1997 than in April 1996, and lower during the first two months. In all, these differences cannot explain more than a few tenths of a degree difference in the nino3 index on 1 June.
The difference between an El Niño in 1997 and no El Niño in 1996 can be attributed for about 30% to an even stronger build-up of warm water in the western Pacific, and for about 90% to the absence of strong westerly wind events in the western Pacific in the 1995–1996 rain season. A successful prediction scheme will have to predict the intensity of the westerly wind events correctly. However, the year-to-year variability of these wind events does not seem to depend on the state of the Pacific ocean (Slingo et al., 1999), and at the moment is not predictable.
## Conclusions
Using an adjoint ocean model we have shown that a successful prediction of the strong onset of the 1997–1998 El Niño required a successful prediction of strong westerly wind events in March–April, which in our model contributed about 90% to the strength of the El Niño on 1 June 1997 compared to the situation one year earlier. The sensitivity to these wind events was not significantly different from the year before. The build-up of warm water contributed about 30% of the difference. The strong dependence on the westerly wind events would explain the relatively short lead time for correct predictions of the strong onset of this El Niño.
### Acknowledgments
I would like to thank the ecmwf seasonal prediction group for their help and support and Gerrit Burgers for his part in the construction of the adjoint model. This research was supported by the Netherlands Organization for Scientific Research (NWO).
Table 1: Estimate of the theoretical error of TOPAZ0 for the full Bhabha cross section at $`10^{\circ }`$ maximum acollinearity. $`\sigma ^T`$, $`\sigma _s^T`$ and $`\sigma _{ns}^T`$ are the full, $`s`$ and non-$`s`$ parts of the TOPAZ0 cross section. $`\sigma _{ns}^A`$ is the non-$`s`$ part of the ALIBABA cross section. $`\delta \sigma _s^T`$ and $`\delta \sigma _{ns}^T`$ are the absolute theoretical errors of the $`s`$ and non-$`s`$ parts of the TOPAZ0 cross section, as obtained according to the procedure given in the text. $`\delta \sigma /\sigma `$ is the total relative error.
FNT/T-99/05
On Large-Angle Bhabha Scattering at LEP
Guido MONTAGNA<sup>a,b</sup>, Oreste NICROSINI<sup>b,a</sup> and Fulvio PICCININI<sup>b,a</sup>
<sup>a</sup> Dipartimento di Fisica Nucleare e Teorica, Università di Pavia,
Via A. Bassi 6, 27100, Pavia, Italy
<sup>b</sup> INFN, Sezione di Pavia, Via A. Bassi 6, 27100, Pavia, Italy
## Abstract
The theoretical accuracy of the program TOPAZ0 in the large-angle Bhabha channel is estimated. The physical error associated with the full Bhabha cross section and its forward and backward components separately is given for some event selections and several energy points of interest for LEP1 physics, both for the $`s`$ and non-$`s`$ contributions to the cross section.
E-mail:
montagna@pv.infn.it
nicrosini@pv.infn.it
piccinini@pv.infn.it
FNT/T-99/05
April 19, 1999
One of the open issues of precision physics at LEP is the determination of the accuracy of the theoretical predictions for the large-angle Bhabha scattering cross section. At present, several computer codes developed for large-angle Bhabha scattering studies can be found in the literature, ranging from semi-analytical to truly Monte Carlo ones. A detailed account of them has been presented in refs. . In particular, in ref. several comparisons have been performed, both for “academic” and realistic event selections (ES’s), both for LEP1 and LEP2 energies.
After the publication of ref. , a new analysis concerning specifically two codes, namely ALIBABA and TOPAZ0 , has been performed , where a very detailed comparison between the two programs is developed and the estimate of the theoretical error associated with their predictions is given. If, on the one hand, the comparison is very careful, on the other hand the study is, in the opinion of the authors of the present note, lacking in the following respects: it considers, as the main source of information on large-angle Bhabha observables, only ALIBABA and TOPAZ0; it is based on a comparison for a “bare” ES, which is far from being realistic, and ignores more realistic ES’s such as the ones considered in ref. ; it does not fully exploit the detailed comparisons for $`s`$-channel annihilation processes, that can be found in the literature and give valuable pieces of information on a significant part of the full Bhabha cross section; it considers only centre of mass energies around the $`Z^0`$ resonance, leaving aside LEP2 energies, from which additional information can be extracted concerning the accuracy of the non-$`s`$ component of the full Bhabha cross section; it addresses the problem of assigning a theoretical error to the full Bhabha cross section both for ALIBABA and TOPAZ0, but the error for the non-$`s`$ part of the Bhabha cross section is given for ALIBABA only; moreover no information is given concerning the forward and backward components of the cross section itself.
The aim of the present study is to critically analyze as much as possible of the available literature on large-angle Bhabha scattering, in order to give a reliable estimate of the theoretical error associated with TOPAZ0, both for full cross sections and for the forward and backward components, both for the $`s`$ and non-$`s`$ parts. The numerical results presented in the following are obtained mostly by elaborating on the ALIBABA and TOPAZ0 predictions shown in ref. .
Let us consider first the problem of assigning a theoretical error to the full $`s+t`$ Bhabha cross section. It can be decomposed into $`s`$ and non-$`s`$ contributions. As far as the $`s`$ part is concerned, TOPAZ0 includes exact $`O(\alpha )`$ electroweak corrections plus all the relevant and presently under control higher order contributions. From several tuned comparisons discussed in recent literature , one can see that the overall difference between TOPAZ0 and ZFITTER is at the scale of $`0.01\%`$ for $`s`$ channel QED convoluted cross sections, for both extrapolated and realistic set-ups. While for completely inclusive $`s`$-channel cross sections, or for $`s`$-channel cross sections with an $`s^{\prime }`$ cut, TOPAZ0 includes $`O(\alpha ^3L^3)`$ and $`O(\alpha ^2L)`$ hard photon corrections according to ref. , these are not taken into account for $`s`$-channel processes with angular acceptance cuts and, in particular, for the $`s`$ part of the full Bhabha cross section. When taking into account the theoretical error due to neglecting them, and the one due to other minor sources such as the approximate treatment of additional light pairs, one can conclude that the overall theoretical error of the $`s`$ part of the Bhabha cross section in TOPAZ0 is $`0.1\%`$. In setting the theoretical error for the $`s`$ part, no information coming from ALIBABA is considered, because it is known that the code is not accurate for $`s`$-channel processes at the 0.1% level as, for instance, TOPAZ0 and ZFITTER are. As far as the non-$`s`$ part is concerned, the theoretical error of TOPAZ0 is dominated by missing $`O(\alpha )`$ non-logarithmic QED corrections, which, on the contrary, are present in ALIBABA. A way of estimating such an error is to consider the comparisons between TOPAZ0 and BHWIDE performed in ref. at LEP2 energies. Actually, in the LEP2 energy regime Bhabha scattering is essentially a $`t`$-channel dominated process. Since BHWIDE contains exact $`O(\alpha )`$ QED corrections for the $`s`$ and non-$`s`$ contributions to the cross section, such a comparison sets the size of the missing non-log contributions in the non-$`s`$ part of the TOPAZ0 cross section, which is at the 1% level. In order to be as conservative as possible, and not to lose the information contained in ALIBABA for the non-$`s`$ contributions, a reliable recipe for setting the theoretical error of TOPAZ0 for the non-$`s`$ part of the Bhabha cross section is to take it as the maximum between 1% of the non-$`s`$ part of the cross section and the absolute deviation from ALIBABA.
Following the above recipe for the estimate of the theoretical error of TOPAZ0 for the full Bhabha cross section, one obtains tables 1 and 2. As already stated, the numerical results shown are elaborated from ref. , where all the details of the ES and input parameters adopted can be found.
The estimate of the theoretical error for the F+B cross section given in tabs. 1 and 2 refers to a BARE ES. Anyway, it is worth considering also the results of the comparison between BHWIDE and TOPAZ0 shown in ref. for the LEP1 energy range, for both BARE and CALO ES’s. Actually, it is known that ALIBABA does not contain the bulk of the $`O(\alpha ^2L)`$ corrections, while BHWIDE and TOPAZ0 do, by virtue of their factorized formulation . Such missing corrections are, for instance, responsible for part of the theoretical error of ALIBABA, which above the $`Z`$ peak for $`s`$ channel processes can be of the order of several 0.1%. When considering the comparison between BHWIDE and TOPAZ0 for LEP1 energies, one realizes that the difference between the two programs for a realistic CALO ES is generally smaller than the errors quoted above<sup>1</sup><sup>1</sup>1The authors of BHWIDE consider the program as more reliable for realistic ES’s (CALO) rather than for BARE ones .. Hence, the estimate of the theoretical error of tabs. 1 and 2 has to be considered as a conservative one, and has its origin in the fact that the recipe adopted aims at using as much information as possible, and in particular the piece of information given by ALIBABA. In the light of the above comments, the error estimate of tabs. 1 and 2 can be considered as a conservative error estimate also for CALO ES’s. Less conservative error estimates for the full Bhabha cross section can be found in refs. and .
Besides a reliable estimate of the theoretical error for the full $`s+t`$ Bhabha cross section, it is also interesting to give the forward (F) and backward (B) parts of the cross section, together with their theoretical error, for both the $`s`$ and non-$`s`$ components. For the program TOPAZ0, the first part of the task can be accomplished by solving the system
$`\sigma =\sigma _F+\sigma _B,`$
$`\sigma A_{FB}=\sigma _F-\sigma _B,`$ (1)
where $`\sigma `$ and $`A_{FB}`$ are the cross section and the forward-backward asymmetry, respectively, even though TOPAZ0 has been designed for computing cross sections and asymmetries directly. For the second part, i.e. assigning a theoretical error to the F/B components, one should notice that a naive error propagation can lead to artificially overestimated errors. Hence, in the following an alternative procedure is proposed.
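Inverting the system is elementary; a minimal sketch with placeholder numbers (not TOPAZ0 output):

```python
# Recovering the F/B components from the integrated cross section and the
# forward-backward asymmetry: sigma_F = sigma (1 + A_FB) / 2 and
# sigma_B = sigma (1 - A_FB) / 2.
def forward_backward(sigma, a_fb):
    return 0.5 * sigma * (1.0 + a_fb), 0.5 * sigma * (1.0 - a_fb)

sf, sb = forward_backward(sigma=1.2, a_fb=0.25)
print(sf, sb, sf + sb, (sf - sb) / (sf + sb))  # consistency check
```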
First of all, by using the $`s`$ components of cross section and asymmetry as quoted in ref. one can compute the $`s`$ components of the forward and backward cross sections. Since in general the $`s`$ component of the F/B asymmetry is small, one can attribute to the F/B components of the $`s`$-channel cross section the same theoretical error as the one attributed to the integrated $`s`$-channel cross section, namely 0.1%. From this recipe, tabs. 3 and 4 follow.
For the non-$`s`$ component of the cross section, it is still desirable to exploit the information provided by ALIBABA. To this aim, it has to be noticed that two procedures can be followed. The first one consists in solving the system above as done for TOPAZ0 (ALIBABA1). The second one consists in computing directly the F/B components of the cross section, both for the full and $`s`$ parts (ALIBABA2). For the first procedure, the results of ref. have been used. For the second one, ALIBABA has been re-run with high numerical precision in order to neglect the integration error. The theoretical error to be attributed to the non-$`s`$ part of the F/B cross section, similarly to what has been done for the F+B cross section, has then been defined as the maximum among 1% of the corresponding cross section and the absolute deviations from ALIBABA1 and ALIBABA2. Following the recipe described here, one obtains tabs. 5 and 6.
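In code, the recipe amounts to the following (placeholder inputs):

```python
# Error recipe for the non-s part of the F/B cross sections: the maximum
# among 1% of the TOPAZ0 value and its absolute deviations from the two
# ALIBABA determinations.
def non_s_error(topaz0, alibaba1, alibaba2):
    return max(0.01 * abs(topaz0),
               abs(topaz0 - alibaba1),
               abs(topaz0 - alibaba2))
```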
Two technical comments are in order here. The first one is that the recipe adopted for the estimate of the non-$`s`$ theoretical error is sensible, since at some energy points the error is fixed to be 1% of the corresponding cross section (typically in the region below the $`Z`$ peak), whereas at other ones it is fixed by one of the absolute differences (typically in the region at and above the $`Z`$ peak, where the F/B components are numerically small). The second comment concerns the fact that in reconstructing the full theoretical error of the F+B cross section by adding its F/B components from tabs. 3–6, for the $`s`$ and non-$`s`$ parts, one obtains values that are equal to or slightly larger than the ones obtained directly in tabs. 1 and 2, as expected. Of course, the same remarks concerning the conservativeness of the error estimate for the F+B cross section apply to the F/B components separately, also.
The present results correspond to an angular acceptance of $`40^{\circ }`$–$`140^{\circ }`$ for the scattered electron. Since the sharing between the $`s`$ and non-$`s`$ component of the cross section depends on the angular acceptance, the above estimate of the theoretical error can be considered valid for the presently adopted angular cuts; anyway, an important change in the angular cuts would require a reanalysis of the situation, bearing in mind that for larger/narrower angular acceptances the total error can be expected to increase/decrease, respectively.
To summarize, the estimate of the theoretical error of the Bhabha cross section derived in the present letter is based upon the following pieces of information: tuned comparisons between TOPAZ0 and ZFITTER for $`s`$-channel observables ; estimate of missing higher-order QED corrections ; comparisons between BHWIDE and TOPAZ0 for the full Bhabha cross section in the LEP1 and LEP2 energy range ; comparisons between ALIBABA and TOPAZ0 for the non-$`s`$ part of the cross section. The present paper updates the existing literature for the theoretical error of the full F+B Bhabha cross section , and improves it by providing additional information on the theoretical uncertainty to be associated with the F/B components of the $`s`$/non-$`s`$ parts of the Bhabha cross section.
Acknowledgements The authors are indebted to Marta Calvi and Giampiero Passarino for stimulating discussions on the subject.
# Phase diagram and Debye mass in thermally reduced QCD
## 1 Introduction
One of the main challenges of thermal QCD is to get reliable numbers. Though the gauge coupling may be small, Linde’s argument tells us that perturbation theory will fail. The powerlike infrared divergences one meets in perturbation theory will offset the powers of the coupling constant. At what order in perturbation theory this will happen depends on the observable in question. For the free energy this happens when the static sector starts to dominate, and a simple dimensional argument shows this will happen at $`O(g^6)`$. For the Debye mass Linde’s phenomenon starts already at next to leading order. So the problem is certainly not academic! One should bear in mind that Linde’s argument does not deny the existence of a perturbation series. It says that from a certain order on the coefficients are no longer obtained by evaluating a finite number of diagrams of a given loop order.
So we are faced with evaluating non-perturbative effects from the three dimensional sector defined by the static configurations. It was realized some time ago that one could take the static part of the 4D action combined with effects induced by the non-static configurations. This theory gives at large distances the same physics as the 4D theory, and has the advantage of relatively straightforward lattice simulations. In section 2 we discuss the relation between the 4D and the 3D theory. In particular we show how the phase diagram of the 3D theory has a remarkable property: the curve of 4D physics and the critical curve as determined by perturbation theory coincide to one and two loop order. However, perturbation theory has no reason to be trustworthy in determining the critical curve, and this is probably the reason why the fit to the numerical determination is problematic.
In section 3 we discuss the physics of the domain wall in some detail.
## 2 Effective 3D action and symmetry in 4D
Construction of the effective action proceeds along familiar lines. In the case of QCD with $`n_f`$ quarks its form is given by integrating out the heavy modes of $`O(T)`$:
$$S_{3D}=S_{YM,n=0}+S_{ind}$$
(1)
The first term is the static sector of the pure Yang-Mills theory in 4D with coupling constant $`g_3=g\sqrt{T}`$.
The second term in eq. 1 must contain the symmetries of the original QCD action, as long as they are respected by the reduction process.
So we expect the induced action to be of the form:
$$S_{ind}=V(A_0)+\text{ terms involving derivatives}$$
(2)
$`V(A_0)`$ should be invariant under static gauge transformations, C and CP ($`A_0\to -A_0^T`$), and this reduces it to a sum of traces of even powers of $`A_0`$:
$$V(A_0)=m^2TrA_0^2+\lambda _1(TrA_0^2)^2+\lambda _2TrA_0^4+\mathrm{}.$$
(3)
Only one independent quartic coupling survives for SU(2) and SU(3). We take it to be $`(TrA_0^2)^2`$. Note that we lost a symmetry present in the 4D action for gluons alone, and less and less conserved when quarks get lighter and lighter: Z(N) symmetry.
Remember from the lattice formulation of pure Yang-Mills that, at a given time slice of the original 4D action, one can multiply all links in the time direction by a factor $`\mathrm{exp}\pm i\frac{2\pi }{3}`$. This does not change the form of the action, but changes by the same factor the value of the Wilson line $`P`$ wrapping around the periodic time direction:
$$P(A_0)=𝒫\mathrm{exp}i\int A_0𝑑\tau $$
(4)
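To make this explicit (a standard observation, sketched here for convenience): under multiplication of all temporal links at a fixed time slice $`\tau _0`$ by $`z\in Z(3)`$, every time–space plaquette contains the two affected links as $`U_0`$ and $`U_0^{}`$, so the factors $`z`$ and $`z^{}`$ cancel and the action is unchanged, whereas the ordered product in eq. 4 picks up the factor exactly once:

$$U_0(\tau _0,\stackrel{}{x})\to z\,U_0(\tau _0,\stackrel{}{x})\Rightarrow P\to zP,\qquad S\to S.$$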
Clearly in eq. 3 this symmetry has gone. Apparently the reduction process does not respect $`Z(3)`$ symmetry! The reason for this is twofold:
i) the reduction process does not include the static modes;
ii) the values of $`A_0/T`$ in the effective action are of order $`g`$, whereas the Z(N) symmetry relates the free energy at $`A_0/T`$ to that at $`A_0/T`$ shifted by phases of order $`2\pi /3`$.
To understand this better – and to prepare the way for the discussion of the domain wall observable in the last section 3 – we recall some familiar facts in 4D for SU(3).
### 2.1 Z(3) symmetry and domain walls in 4D gauge theory
The free energy $`U`$ as a function of the Wilson line invariants $`TrP,TrP^2`$ is naturally defined through:
$$\mathrm{exp}\left(-\frac{VU(t_1,t_2)}{T}\right)=\int DA_0D\stackrel{}{A}\,\delta (t_1-\overline{TrP})\,\delta (t_2-\overline{TrP^2})\,\mathrm{exp}\left(-\frac{S(A)}{g^2}\right)$$
(5)
where $`\overline{TrP}`$ is the normalized space average of the trace over the volume $`V`$. A natural parametrization of the parameters $`t_1`$ and $`t_2`$ suggests itself: define the phase matrix $`\mathrm{exp}iC`$, with $`C`$ a traceless diagonal 3x3 matrix with entries $`C_i(i=1,2,3)`$ and $`\sum _iC_i=0`$, because we have SU(N), not U(N).
Consider pure Yang-Mills. A gauge transformation that is periodic modulo a phase in Z(3) will only change the arguments in the delta functions in eq. 5. Hence the potential $`U`$ has degenerate minima in all points of the C-plane, where $`\mathrm{exp}iC=1`$, or $`\mathrm{exp}\pm i2\pi /3`$. This is called Z(3) symmetry (and the degeneracy is lifted by the presence of quarks).
This statement is independent of perturbation theory. In fact the potential in eq. 5 has been computed in perturbation theory up to and including two loop order, and this potential includes the static modes. Propagators acquire a mass proportional to the phases $`C`$, because $`C`$ acts like a VEV of the adjoint Higgs $`A_0`$.
Hence, for small $`C`$, eventually Linde’s argument will apply and the perturbative evaluation becomes impossible.
For SU(3) the direction in which the Wilson line phase causes minimal breaking is the hypercharge direction $`C=\frac{1}{3}diag(q,q,-2q)`$. Minimal breaking means the maximal number of unbroken massless excitations, which do not contribute to the potential. Hence this is at the same time the valley through which the system tunnels from one minimum to the next. In this “q-valley” the combined one and two loop result is exceedingly simple:
$$U^{(1)}+U^{(2)}=\frac{4\pi ^2}{3}T^4(N-1)\left(1-5\frac{g^2N}{(4\pi )^2}\right)q^2(1-q)^2$$
(6)
For use in the reduced theory we isolate the static part of the one and two loop contribution in the q-valley from eq. 6:
$$\left(U^{(1)}+U^{(2)}\right)_{(n=0)}=-T^4(N-1)\frac{4\pi ^2}{3}\left(2q^3+3\frac{g^2N}{(4\pi )^2}q^2\right)$$
(7)
Note that the two loop contribution is quadratic in q in contrast to the one loop which is cubic. The two-loop cubic part in eq. 6 comes from a combination of static and non-static modes.
If we prepare the 4D system conveniently this symmetry will give rise to domain walls. Profile and energy of these walls were computed semi-classically a long time ago. The method of twisted boundary conditions triggers walls and is computationally the most economical. We will discuss them in the context of the lattice formulation in section 3. Suffice it to mention that these boundary conditions force the Wilson lines to change by a Z(N) phase in going from one side of the volume to the other in some a priori fixed space direction. This triggers a wall profile for the loop in this direction.
It is the long range behaviour of this profile that contains the information on the Debye mass. To one loop order this behaviour comes entirely from the slope of the potential, see above. But at two loop order we have to take into account the one-loop renormalization of the gradient part of the Wilson line phase, and this suffers from the Linde effect: there is an infinity of many-loop diagrams contributing to the gradient part. So at next to leading order there are already nonperturbative effects in the long range tail of the wall, and hence in the Debye mass, as we mentioned earlier.
On the other hand we know that the effective 3D action correctly reproduces the large distance behaviour of the 4D theory. So a 3D projection of the twist should produce a wall with the same tail as the 4D one. The inside of the wall may be quite different in the two formulations, but the inside is anyway computable by perturbation theory.
### 2.2 3D action and 4D physics
The parameters of the 3D theory ($`m^2`$ and $`\lambda \equiv \lambda _1+\lambda _2`$ for SU(3)) in eq. 3 can be calculated in perturbation theory by integrating out all modes in the path integral except the mode $`A_\mu (\stackrel{}{x},n=0)`$. To one loop order we have the well known results for the Debye mass and for the four point coupling $`\lambda `$. All higher order terms have vanishing coefficients . To two loop order one has to take care not only of the two loop graphs, but also of the one-loop renormalization of the three dimensional gauge coupling $`g_3`$ and of the renormalization of the $`A_0`$ field in the gradient terms. The latter renormalization takes care of the gauge dependence of the two loop graphs.
The result in the $`\overline{MS}`$ scheme is that both parameters are expressed in terms of the renormalized 4D coupling $`g(\mu )`$, where $`\mu `$ is the subtraction point. Eliminating the 4D coupling gives for the dimensionless quantities $`x\equiv \frac{\lambda }{g_3^2}`$ and $`y=\frac{m^2}{g_3^4}`$ the result for N=3:
$$xy_{4D}=\frac{3}{8\pi ^2}(1+\frac{3}{2}x)$$
(8)
whereas for N=2:
$$xy_{4D}=\frac{2}{9\pi ^2}(1+\frac{9}{8}x)$$
(9)
Note the absence of explicit $`\mu `$ dependence in this relation. The variable $`x`$ has a $`\frac{\mu }{T}`$ dependence such that as T becomes large $`x`$ becomes small.
In conclusion, it is along this line that we have to simulate the 3D system, in order to get information about the 4D theory. Before we do this, we still have to settle an important question: where are – in the $`xy`$ versus $`x`$ diagram – possible phase transitions?
### 2.3 Phase diagram of the 3D theory
To get the phase diagram we must first decide what order parameters to take. In the case of SU(3) there are two: $`TrA_0^2`$ and $`TrA_0^3`$. Strictly speaking, only the latter is an order parameter, since it flips sign under C. We will study the analogue of eq.5:
$$\mathrm{exp}\left(-VS_{eff}(D,E)\right)=\int DA\,\delta \left(g_3^2D-\overline{TrA_0^2}\right)\delta \left(g_3^3E-\overline{TrA_0^3}\right)\mathrm{exp}\left(-S\right)$$
(10)
Again as for the Wilson line we parametrize D and E in terms of $`D=Tr\left[C^2\right]`$ and $`E=Tr\left[C^3\right]`$ respectively. Let us first state the result one gets for $`S_{eff}`$ to one and two loop order:
$$S_{eff}=\frac{U(n=0)}{T}\quad \text{(one and two loop only)}$$
(11)
The one and two loop result equals the static part of the 4D Z(3) potential, eq. 5! This static part was explicitly written in the q-valley, eq. 7. It has to be added to the tree result, and one gets, in terms of the dimensionless variables x and y for N=2 or 3 colours, absorbing a factor $`2\pi `$ in q:
$$\frac{S_{eff}}{g_3^6}=y\left(\frac{N-1}{N}\right)q^2+x\left(\frac{N-1}{N}\right)^2q^4-(N-1)\left(\frac{1}{3\pi }q^3+\frac{N}{(4\pi )^2}q^2\right)$$
(12)
The question is now: for what values of x and y do we have degenerate minima in q? Keeping only the one loop result, cubic in q, we see that it must be of the order of magnitude of the quartic term of the tree result to produce a second degenerate minimum. So q must be $`O(1/x)`$ in that minimum. Thus the quadratic two loop result is suppressed by $`O(x)`$ relative to these.
From eq. 12 we find that the potential develops two degenerate minima for N=3 when:
$$xy_c=\frac{3}{8\pi ^2}(1+\frac{3}{2}x)$$
(13)
For N=2:
$$xy_c=\frac{2}{9\pi ^2}(1+\frac{9}{8}x)$$
(14)
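As a quick cross-check (our own algebra, not taken from the references): write the q-valley action 12 as $`V(q)=Aq^2-Bq^3+Cq^4`$, which for N=3 has $`A=\frac{2}{3}y-\frac{3}{8\pi ^2}`$, $`B=\frac{2}{3\pi }`$ and $`C=\frac{4}{9}x`$. The minima at $`q=0`$ and $`q_0=B/2C`$ are degenerate precisely when the bracket $`A-Bq+Cq^2`$ has a double zero, i.e. when

$$B^2=4AC,$$

and inserting the coefficients above reproduces eq. 13; the N=2 coefficients of eq. 12 give eq. 14 in the same way.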
This is important: slope and intercept of the physics line 8 are identical to those of the critical line 13, at least if we can take the low order loop results for the critical line seriously. This was found numerically in ref. . The equality of the intercepts is simply due to the Z(N) potential in 4D and the effective potential $`S_{eff}`$ in 3D being identical at one loop. But at two loop order this simple explanation is no longer true. The cubic term in eq. 6 appears also in the two loop result, but not in the two loop result for the 3D effective action. It is however true that also at two loops the leading contribution is the static part of the Z(N) potential, eq. 7.
### 2.4 Saddle point of the effective potential in 3D
In this subsection we investigate in more detail the computation of the 3D effective potential. The saddle point is found by letting $`A_0`$ fluctuate around a diagonal, constant background B:
$$A_0=B+Q_0$$
(15)
whereas the spatial gauge fields fluctuate around zero:
$$A_i=Q_i.$$
(16)
One then goes through the usual procedure of expanding the effective action 10. The equations of motion fix the background B to be equal to the matrix C, and the part quadratic in the fluctuations contains no reference to the Higgs potential $`V(A_0)`$. This is clear because the quadratic constraint forbids the mass term from fluctuating. Only the Higgs component parallel to C, $`TrCQ_0`$, has a mass term due to the Higgs potential, $`4\lambda TrC^2`$. So apart from this the quadratic part comes entirely from the static part of the 4D action. We can make a convenient gauge choice, namely the static form of the covariant gauge fixing:
$$S_{gf}=Tr\left([ig_3B,Q_0]+\partial _kQ_k\right)^2$$
(17)
This gives propagators which are precisely the static version of the propagators appearing in the Wilson line potential 5. Only the component of $`Q_0`$ parallel to C is the exception: its propagator has a mass from the Higgs potential and can be written as the sum of the static propagator and a remaining (“massive”) part containing the mass term:
$$\frac{1}{\stackrel{}{p}^2}+\left(\frac{1}{\stackrel{}{p}^2+4\lambda TrC^2}-\frac{1}{\stackrel{}{p}^2}\right)$$
(18)
In diagrams the static propagator dominates over the rest. The massive propagator gives rise to half-integer powers of x in the perturbative expansion of the potential; gauge couplings contribute $`O(1)`$ in dimensionless units, whereas Higgs couplings contribute $`O(x)`$.
As long as we are interested in intercept and slope of the critical curve, it follows that only the static part of the Feynman rules contributes.
Hence the result 11.
Let’s from now on work in the q-valley where we evaluate the effective action 12.
Then two remarks are crucial:
i) The broken minimum occurs for $`q=O(1/x)`$. Power counting then reveals that from $`O(x^{3/2})`$ on, an infinite number of diagrams contributes at each order.
ii) From five loop order on, the potential starts to develop poles at $`q=0`$.
We bring this up because insisting on the low order result 13 and fitting numerically the coefficients of $`x^{3/2}`$ and higher orders gives an unexpected result: the numerical coefficients are orders of magnitude larger than the first two in 13. In fig. 1, taken from ref. , the situation is shown. Only for very small x are the critical line and the 4D physics line allowed to become tangent. It seems that this constraint affects the quality of the fit. Dropping it altogether necessitates the numerical determination of transition points at $`x\lesssim 0.04`$.
## 3 Debye mass from a 3D domain wall
After this long discussion of where the physics line lies with respect to the critical curve we have to come to grips with the domain wall method.
The idea here is extremely simple and has been explained elsewhere. Twisted boundary conditions in 4D have a very simple and intuitive form in the reduced theory. Remember that a twisted plaquette in the time–space direction is of the form $`Tr(1-\mathrm{\Omega }U(P))`$, with $`\mathrm{\Omega }=\mathrm{exp}i2\pi /N`$.
Thus intuitively one would say that all one has to do in the reduced action is to modify the kinetic part of the Higgs field by the twist, because that is what the plaquette in the time–space direction reduces to.
In the next subsection we work out this idea in more detail.
### 3.1 Construction of the wall
In this section we want to make more precise the action that defines the wall.
We follow the notation of ref. , specifically that of hep-lat/9811004, and write the kinetic part of the action as:
$`_{kin}`$ $`=`$ $`{\displaystyle \frac{36}{\beta }}{\displaystyle \underset{\stackrel{}{x}}{\sum }}Tr\left[A^2(\stackrel{}{x})\right]-{\displaystyle \frac{12}{\beta }}{\displaystyle \underset{\stackrel{}{x},j}{\sum }}Tr\left[A(\stackrel{}{x})U_j(\stackrel{}{x})A(\stackrel{}{x}+a\stackrel{}{e}_j)U_j^+(\stackrel{}{x})\right]`$
$`=`$ $`{\displaystyle \frac{12}{\beta }}{\displaystyle \underset{\stackrel{}{x},j}{\sum }}Tr\left[{\displaystyle \frac{1}{2}}\left(A^2(\stackrel{}{x})+A^2(\stackrel{}{x}+a\stackrel{}{e}_j)\right)-A(\stackrel{}{x})U_j(\stackrel{}{x})A(\stackrel{}{x}+a\stackrel{}{e}_j)U_j^+(\stackrel{}{x})\right]`$
where $`\stackrel{}{x}`$ is a vector with three components $`(x,y,z)`$.
Consider the following expression:
$`𝒳=1-{\displaystyle \frac{1}{\mathrm{N}}}\mathrm{Re}\,Tr\left[e^{\mathrm{i}\alpha A(x,y,0)}Ue^{-\mathrm{i}\alpha A(x,y,1)}U^+\right]`$
If $`\alpha A`$ is small we get:
$$𝒳=\frac{1}{\mathrm{N}}\alpha ^2Tr\left[\frac{1}{2}A^2(0)+\frac{1}{2}A^2(1)-A(0)UA(1)U^+\right]$$
This is precisely the kind of expression that appears in eq. (19); note that the term linear in $`\alpha A`$ is purely imaginary and is removed by taking the real part. From this follows the expression for the modified kinetic energy in the plane $`(x,y,0)`$:
$`_{kin}^{mod}={\displaystyle \frac{12}{\beta }}{\displaystyle \underset{\genfrac{}{}{0pt}{}{\stackrel{}{x},j}{(z,j)\ne (0,3)}}{\sum }}Tr\left[{\displaystyle \frac{1}{2}}\left(A^2(\stackrel{}{x})+A^2(\stackrel{}{x}+a\stackrel{}{e}_j)\right)-A(\stackrel{}{x})U_j(\stackrel{}{x})A(\stackrel{}{x}+a\stackrel{}{e}_j)U_j^+(\stackrel{}{x})\right]`$
$`+{\displaystyle \frac{12}{\beta }}{\displaystyle \frac{\mathrm{N}}{\alpha ^2}}{\displaystyle \underset{x,y}{\sum }}\left\{1-{\displaystyle \frac{1}{\mathrm{N}}}\mathrm{Re}\,Tr\left[e^{\mathrm{i}\alpha A(x,y,0)}U_3(x,y,0)e^{-\mathrm{i}\alpha A(x,y,1)}U_3^+(x,y,0)\right]\right\}`$
So all we need is to put a twist $`\mathrm{\Omega }\in Z(\mathrm{N})`$ in order to get a wall:
$`_{kin}^{wall}={\displaystyle \frac{12}{\beta }}{\displaystyle \underset{\genfrac{}{}{0pt}{}{\stackrel{}{x},j}{(z,j)\ne (0,3)}}{\sum }}Tr\left[{\displaystyle \frac{1}{2}}\left(A^2(\stackrel{}{x})+A^2(\stackrel{}{x}+a\stackrel{}{e}_j)\right)-A(\stackrel{}{x})U_j(\stackrel{}{x})A(\stackrel{}{x}+a\stackrel{}{e}_j)U_j^+(\stackrel{}{x})\right]`$
$`+{\displaystyle \frac{12\mathrm{N}}{\beta \alpha ^2}}{\displaystyle \underset{x,y}{\sum }}\left\{1-{\displaystyle \frac{1}{\mathrm{N}}}\mathrm{Re}\left(\mathrm{\Omega }Tr\left[e^{\mathrm{i}\alpha A(x,y,0)}U_3(x,y,0)e^{-\mathrm{i}\alpha A(x,y,1)}U_3^+(x,y,0)\right]\right)\right\}`$
What is now the actual value of $`\alpha `$ to use? We recover the kinetic term in the continuum if we relate the field $`A`$ on the lattice to the field $`A_{cont}`$ in the continuum by the relation:
$`A={\displaystyle \frac{A_{cont}}{g_3}}`$
This is not the usual normalization for the lattice fields. Usually we have $`A_{latt}=ag_3A_{cont}`$, so that $`A_{latt}\to 0`$ in the continuum limit.
Here this is no longer the case. Remember that to expand the modified action we had to assume that $`\alpha A`$ is small. To enforce this condition it seems natural to put $`\alpha =ag_3^2`$; in this manner terms of the kind $`e^{\mathrm{i}\alpha A}`$ become $`e^{\mathrm{i}ag_3^2A}`$, that is to say, of the usual sort: $`e^{\mathrm{i}ag_3A_{cont}}`$.
With this choice the term in the exponential indeed goes to zero as the lattice spacing goes to zero, so:
$`\alpha \equiv ag_3^2={\displaystyle \frac{6}{\beta }}`$
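This is just the standard relation between the dimensionless lattice coupling and the dimensionful 3D continuum coupling (quoted here for completeness):

$$\beta =\frac{2\mathrm{N}}{ag_3^2},$$

which for N=3 gives exactly $`ag_3^2=6/\beta `$.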
In the end we obtain as final expression for the kinetic part of the action supporting the wall:
$`_{kin}^{wall}={\displaystyle \frac{12}{\beta }}{\displaystyle \underset{\genfrac{}{}{0pt}{}{\stackrel{}{x},j}{(z,j)\ne (0,3)}}{\sum }}Tr\left[{\displaystyle \frac{1}{2}}\left(A^2(\stackrel{}{x})+A^2(\stackrel{}{x}+a\stackrel{}{e}_j)\right)-A(\stackrel{}{x})U_j(\stackrel{}{x})A(\stackrel{}{x}+a\stackrel{}{e}_j)U_j^+(\stackrel{}{x})\right]`$
$`+\beta {\displaystyle \underset{x,y}{\sum }}\left\{1-{\displaystyle \frac{1}{\mathrm{N}}}\mathrm{Re}\left(\mathrm{\Omega }Tr\left[e^{\mathrm{i}\frac{6}{\beta }A(x,y,0)}U_3(x,y,0)e^{-\mathrm{i}\frac{6}{\beta }A(x,y,1)}U_3^+(x,y,0)\right]\right)\right\}`$
(22)
### 3.2 Excitations of the wall
Now the system with the wall is defined by adding the 3D gauge field action and the Higgs potential V(A) to eq. 22. Let us call the resulting twisted action $`S_t`$.
Both the twisted and the untwisted action have periodic boundary conditions. To compute the average of an observable $`O`$ in the twisted box (action $`S_t`$), we average the observable over the $`(x,y)`$ plane at the point $`z`$, written as $`\overline{O(z)}`$. It is quite trivial to relate this average to the correlation of the wall and $`O`$ in the untwisted box (action $`S`$):
$$\overline{O(z)}_{S_t}=\mathrm{exp}\left(-(S_t-S)\right)\overline{O(z)}_S$$
(23)
There is no difference between the two actions except at $`z=0`$, at the location of the wall.
The twist is C and P odd, but T even. This means we can expect a signal for the Debye mass by taking any observable $`O`$ that is C odd (a necessary condition). Whichever operator gives the lowest mass in the correlation 23 is the preferred one. Thus one and the same updating of the twisted box can be used for various operators.
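In practice (our paraphrase of the method): the Debye mass is then extracted from the exponential falloff of the wall profile far from the twisted plane — the approach of $`\overline{O(z)}`$ to its bulk value is governed by

$$\mathrm{exp}(-m_D|z|)\quad \text{at large }|z|,$$

which is precisely the long range tail discussed in section 2.1.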
## 4 Conclusions
Once we know the 4D physics line we can do a simulation of the twisted box with some convenient observable, and measure the mass through eq. 23. Care should be taken, as emphasized by Kajantie et al. , that we start in the symmetric phase and then move to the 4D physics line. In so doing we stay on the physical branch of the hysteresis curve for the mass, which we will meet when crossing the transition curve.
Nevertheless our discussion of the location of the critical curve underlines the importance of knowing whether the 4D physics line lies, for small x, in the symmetric phase or in the broken phase.
## Acknowledgments
One of us (C.P.K.A.) thanks the organizers of this conference for their hospitality and for the opportunity to present this material.
|
no-problem/9904/astro-ph9904079.html
|
ar5iv
|
text
|
# The Mass-to-Light Ratio of Binary Galaxies
## 1 Introduction
The mass of a galaxy is a fundamental quantity for understanding its dynamics and structure. The mass distribution in galaxies has been extensively studied with optical and HI rotation curves. Several studies revealed that spiral galaxies have flat rotation curves even at the outermost observed points, indicating the existence of extended dark halos (e.g., Sancisi & van Albada 1987). The extent and total mass of dark halos are, however, not well understood and have yet to be studied in detail. For further investigation of extended halos, different approaches are required to trace the mass distribution beyond the HI disk, where rotation curves cannot be measured.
Binary galaxies are useful for determining the total mass or mass-to-light ratio ($`M/L`$) of galaxies, just as stellar masses are measured from the motions of binary stars. Unlike stellar binaries, however, the total mass of an individual galaxy pair cannot be directly determined, because of the long orbital periods. Instead, statistical treatment is necessary to obtain the average mass or $`M/L`$ of the sample galaxies. Many efforts have been made to determine the $`M/L`$ ratio of binary galaxies statistically (e.g., Page 1952; Karachentsev 1974; Turner 1976a, b; Peterson 1979a, b; White 1981; van Moorsel 1987; Schweizer 1987; Chengalur et al. 1993; Soares 1996). In binary galaxy studies, a careful selection of binary galaxies is very important, since the biases introduced in selecting pairs must be corrected for in determining the mass or $`M/L`$ ratio. Turner (1976a) proposed well-defined selection criteria based only on the positions and magnitudes of galaxies. Later investigations (e.g., Peterson 1979a) also made use of similar selection criteria independent of radial velocities, yielding so-called ‘velocity-blind’ pairs. According to such velocity-blind selection criteria, two galaxies are regarded as a pair if they have no close companion compared to their projected separation. A ‘velocity-blind’ selection criterion is simple and convenient for pair selection, but its problem is that it can introduce a strong bias toward pairs with small separations: for pairs with wider separations, companion galaxies are searched for in a larger region, leading to the exclusion of widely-separated pairs with higher probability. In fact, the average separations of selected pairs in these studies were 50–100 kpc (see Peterson 1979b). Since dark halos could extend beyond this range, it is important to study binary galaxies further based on pairs with wider separations.
The other major problem of a velocity-blind sample is that it suffers from contamination by ‘optical pairs’, which consist of two unrelated galaxies projected close to each other by chance. In order to reduce the contamination by optical pairs, it is better to select binary galaxies based not only on positions but also on radial velocities. Fortunately, the number of radial velocity measurements is rapidly increasing thanks to recent large-scale redshift surveys. Moreover, the observational uncertainty has been significantly reduced by the recent development of observational instruments, which enables us to estimate $`M/L`$ with better accuracy than in previous studies. Therefore, it is interesting to study binary galaxies again by utilizing such a large body of data.
For these reasons, in this paper we study the mass-to-light ratio of binary galaxies by making use of such databases. The plan of this paper is as follows. In section 2, we will describe how to select widely-separated pairs effectively, while reducing the contamination by optical pairs. The selection criteria and the basic data for the selected pairs will be presented in section 2. In section 3 we will perform a maximum-likelihood analysis based on orbital models of binary galaxies, and determine $`M/L`$. We will also consider the dependence of $`M/L`$ on galaxy type. A discussion of the dark halo extent will be given in section 4.
## 2 Selection of Pairs
### 2.1 Basic Idea, Selection Criteria, and Sample
The observable quantities for the orbital motion of a pair are the projected separation $`r_\mathrm{p}`$ and the radial velocity difference $`v_\mathrm{p}`$. A set of $`r_\mathrm{p}`$ and $`v_\mathrm{p}`$ can be used to estimate the total mass of a pair through a mass estimator, for example $`r_\mathrm{p}v_\mathrm{p}^2/G`$. However, the mass of galaxies varies by about 3 orders of magnitude from dwarf galaxies to giant ellipticals. A better quantity to represent the mass content of galaxies is the mass-to-light ratio, $`M/L`$. The mass-to-light ratio of galaxies is expected to vary much less than the mass itself, and hence we focus on the mass-to-light ratio of pairs in this paper.
What can be obtained through binary galaxy analysis is the total-mass to total-light ratio of a pair, $`(M_1+M_2)/(L_1+L_2)`$, but in the rest of this paper we denote this ratio as $`M/L`$ for simplicity. Note that if $`M_1/L_1=M_2/L_2`$, the total mass-to-light ratio $`M/L`$ is equal to $`M_1/L_1`$ and $`M_2/L_2`$. For convenience in estimating $`M/L`$, we define the luminosity-corrected separation $`R_\mathrm{p}`$ and the luminosity-corrected velocity difference $`V_\mathrm{p}`$ as
$$R_\mathrm{p}\equiv r_\mathrm{p}/L^{1/3},$$
(1)
$$V_\mathrm{p}\equiv |v_\mathrm{p}|/L^{1/3}.$$
(2)
A combination of $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$ can give an estimator of the mass-to-light ratio of pairs. This estimator, which we call the projected mass-to-light ratio, is defined as
$$(M/L)_\mathrm{p}\equiv \frac{r_\mathrm{p}v_\mathrm{p}^2}{GL}=\frac{R_\mathrm{p}V_\mathrm{p}^2}{G}.$$
(3)
If a bound pair of galaxies is separated so widely that the two galaxies can be approximated as point masses, the law of energy conservation gives
$$\frac{rv^2}{2GL}\le M/L,$$
(4)
because the total energy of a bound pair is always negative. A combination of equation (3) with inequality (4) gives
$$(M/L)_\mathrm{p}\le 2(M/L).$$
(5)
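Spelled out (a standard two-body step, added here for clarity): for the relative motion of two point masses with reduced mass $`\mu =M_1M_2/(M_1+M_2)`$, a bound orbit has

$$E=\frac{1}{2}\mu v^2-\frac{GM_1M_2}{r}\le 0\Rightarrow v^2\le \frac{2G(M_1+M_2)}{r},$$

so dividing by $`2GL/r`$ gives inequality (4); combining it with $`r_\mathrm{p}\le r`$ and $`v_\mathrm{p}^2\le v^2`$ in the definition (3) yields inequality (5).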
If all pairs are bound and have the same $`M/L`$, the binary population in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space lies below the envelope which corresponds to $`2(M/L)`$. In practice, the number of pairs is limited and insufficient to reveal the true envelope corresponding to $`2(M/L)`$ in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ space. Detailed calculations of the probability distribution show that pairs are likely to concentrate at small $`V_\mathrm{p}`$, and thus at smaller $`(M/L)_\mathrm{p}`$, due to the projection effect (e.g., Noerdlinger 1975). We will discuss this in a later section by calculating the probability distribution based on a Monte-Carlo simulation. In any case, the pair distribution in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space can be used to test whether or not bound pairs are efficiently selected: while bound pairs are likely to have small $`V_\mathrm{p}`$, optical pairs can have extremely high $`V_\mathrm{p}`$ and $`(M/L)_\mathrm{p}`$.
Here we describe the selection criteria for pairs. For convenience, we define the total luminosity of a pair normalized by $`10^{10}L_{\odot }`$ in the B band as $`L_{10}\equiv (L_1+L_2)/(10^{10}L_{\odot })`$. Note that this luminosity roughly corresponds to that of the Milky Way Galaxy. We re-define $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$, normalizing with the luminosity $`L_{10}`$, as
$$R_\mathrm{p}=\frac{r_\mathrm{p}}{L_{10}^{1/3}}(\mathrm{kpc}),$$
(6)
and
$$V_\mathrm{p}=\frac{|v_\mathrm{p}|}{L_{10}^{1/3}}(\mathrm{km}\mathrm{s}^{-1}).$$
(7)
As the first step of pair selection, we have to select pairs of galaxies that are relatively close to each other both in the sky plane and in redshift space, so that they are likely to be bound. Since the average separations in previous studies were around 100 kpc, and since we are interested in widely-separated pairs, we set the maximum projected separation of a pair in the sky plane as
$$1:R_\mathrm{p}\le 400\mathrm{kpc}.$$
The maximum velocity difference must also be large enough to include pairs orbiting around each other at high velocity. Since galaxies as luminous as $`10^{10}L_{\odot }`$ have rotation velocities of about 200 km s⁻¹, the velocity difference of a pair could be as high as a few hundred km s⁻¹. Hence, we set the maximum velocity difference of pairs as
$$2:V_\mathrm{p}\le 400\mathrm{km}\mathrm{s}^{-1}.$$
Note that the radial velocity difference is usually quite small compared to the true velocity difference, due to the projection effect (Noerdlinger 1975). Thus, physical pairs are unlikely to have radial velocity differences larger than the maximum value given above.
The observational data set cannot be complete for faint galaxies, and hence we limit the application of our analysis to sufficiently bright galaxies. Since nearby galaxies are cataloged almost completely down to 15.5 magnitude (e.g., de Vaucouleurs et al. 1991), we set a criterion for the B band magnitude as
$$3:m_1,m_2\le 15.0\mathrm{mag},$$
and also set an upper limit for the total magnitude of a pair in the B band, $`m_{1+2}`$, as
$$4:m_{1+2}\le 13.5\mathrm{mag}.$$
In order for the selected pairs to be likely bound, the pair galaxies must be well isolated. We regarded two galaxies as a pair if all of their companion galaxies brighter than $`m_{1+2}+2.0\mathrm{mag}`$ satisfy both
$$5:\frac{r_i}{L_{10}^{1/3}}\ge a\times 400\mathrm{kpc},$$
and
$$6:\frac{|v_i|}{L_{10}^{1/3}}\ge b\times 400\mathrm{km}\mathrm{s}^{-1}.$$
Here $`r_i`$ and $`v_i`$ are the projected separation and the radial velocity difference of the $`i`$th companion galaxy with respect to the luminosity center of the pair. Note that we set the upper limit of the total magnitude $`m_{1+2}`$ to be 13.5 mag (criterion 4). The faintest galaxies that need to be considered are, hence, at 15.5 mag, down to which magnitude galaxies are cataloged almost completely.
Parameters $`a`$ and $`b`$ determine the volume for the companion search, and so determine the degree of isolation. Note that the volume depends only on the total luminosity of a pair, $`L_{10}`$, and is independent of the separation $`R_\mathrm{p}`$ of the pair. Therefore, as far as pairs with the same luminosity are concerned, companions are searched for in the same volume, and thus the criterion does not introduce a bias toward pairs with small separations. We set $`b=1.5`$ throughout this paper, but tested three values of $`a`$ (1.5, 2.0, and 2.5) to seek a value that selects bound pairs effectively.
We applied the criteria described above to the sample of galaxies that we compiled for this study using NED (NASA Extragalactic Database). The sample consists of bright nearby galaxies with redshift less than 4,500 km s⁻¹. The upper limit on redshift is introduced because the number of bright pairs which satisfy criteria 3 and 4 becomes small at large redshift. The data for positions, heliocentric velocities, and B band magnitudes were mainly taken from NED, and supplemented with the RC3 catalog (de Vaucouleurs et al. 1991). The distances to the pairs are obtained using the redshift of the luminosity center ($`H_0=50`$ km s⁻¹ Mpc⁻¹ is assumed). In order to avoid errors in distance due to local deviations from the Hubble flow, galaxies with redshift smaller than 1,000 km s⁻¹ are excluded from the sample. Galaxies in and close to clusters may deviate from the Hubble flow even beyond the redshift of 1,000 km s⁻¹, but this effect is expected to be small because the pairs selected with criteria 5 and 6 are likely to be field binary galaxies. Galaxies with $`b\le 20^{\circ }`$ are also excluded from the sample, since objects at low galactic latitude may be significantly obscured by galactic extinction. The sample we compiled consists of 6475 galaxies with magnitude brighter than 15.5 mag and redshift between 1,000 km s⁻¹ and 4,500 km s⁻¹. The uncertainty in the magnitude is typically 0.2 mag, which leads to an uncertainty in $`L`$ of 20%. The corrections for intrinsic absorption and galactic extinction were made according to de Vaucouleurs et al. (1991).
The sample is, of course, incomplete in terms of redshift, because redshift measurements have not been made for all galaxies. This incompleteness leads to possible misidentification of pairs if the criteria described above are applied only to a sample of redshift-known galaxies. To correct for the effect of redshift incompleteness, primary binary candidates are first searched for in the sample of redshift-known galaxies, and then a redshift-blind search is performed around the primary binary candidates. If there is any redshift-unknown companion which is brighter than $`m_{1+2}+2.0`$ mag and is so close to the pair that criterion 5 is violated, the pair is rejected from the binary candidates. About 30% of the pairs in the primary binary candidates were rejected through this procedure.
The sample of binary galaxies after the correction for redshift incompleteness still contains some pairs that are not appropriate for this study. For instance, the basic data for the analysis, such as $`v_\mathrm{p}`$, $`m_1`$, and $`m_2`$, could be quite uncertain for some pairs. In particular, the uncertainty in the radial velocity is crucial for the $`M/L`$ determination, as the $`M/L`$ estimator depends on $`V_\mathrm{p}^2`$. Therefore, if a redshift uncertainty is not reported for either of the two galaxies of a pair, the pair is excluded. This process reduced the number of pairs by 7%. If the magnitude uncertainty and the absorption-corrected magnitude are not available, the pair is also excluded; in this process 7% of the primary binary candidates were rejected. Moreover, a galaxy could appear in the binary sample twice or more with different partners. This can happen if one of the pair galaxies is a bright galaxy, like a cD galaxy, with several companion galaxies around it. In this case, however, these galaxies should be regarded as a cluster or group rather than as binary galaxies. Therefore, we also excluded possible clusters or groups of galaxies that appear in the binary sample twice or more. We found only two possible groups among the primary binary candidates.
### 2.2 Results
Figure 1 shows the distribution of the selected binary galaxies in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space. Two cases for the isolation parameter, $`a`$=1.5 and 2.5, are shown. The number of selected pairs is 109 and 57, respectively. In the case of $`a`$=1.5, the pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ space show only weak concentration toward small $`V_\mathrm{p}`$, and a large number of galaxies have high $`(M/L)_\mathrm{p}`$, exceeding a few hundred $`M_{\odot }/L_{\odot }`$. Even if their true $`M/L`$ were a few hundred, it is unlikely that so many galaxies would appear to have such large $`(M/L)_\mathrm{p}`$ in the projected phase space, as $`(M/L)_\mathrm{p}`$ is expected to be significantly smaller than the true $`M/L`$ due to the projection effect (Noerdlinger 1975; see also Section 3 of the present paper). This indicates that they are probably optical pairs, and that the degree of isolation is not strong enough to select bound pairs effectively. On the other hand, for $`a=2.5`$, the concentration of pairs toward $`V_\mathrm{p}`$=0 is much clearer than for $`a=1.5`$. Most of the 57 pairs are distributed below $`(M/L)_\mathrm{p}`$ of 20 in solar units, and there are only a few galaxies that have high $`(M/L)_\mathrm{p}`$. This correlation between $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$ is naturally explained if the separations of the pairs are larger than the extent of the halos, so that the pairs can be approximated as point masses (but note that even in the case of extended halos such an envelope would appear in the projected phase space, as in the point-mass case; see Soares 1990). In the rest of this paper, we use the binary galaxy samples selected with $`a=1.5`$ and 2.5 for the $`M/L`$ determination. We call the sample selected with $`a=2.5`$ sample I, and the one selected with $`a=1.5`$ sample II. Table 1 summarizes the basic data for the 57 pairs in sample I.
## 3 $`M/L`$ determination
In this section, we estimate the $`M/L`$ ratio of the sample pairs selected above. We construct orbital models for physical pairs, and calculate the probability distribution of pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space, considering the contamination by optical pairs. We then compare the models with the observational data, and determine the $`M/L`$ ratio based on a maximum-likelihood analysis.
### 3.1 Distribution of Bound Pairs
First we construct models for the orbital populations of binaries. For simplicity, binary galaxies are treated as point masses in the following analysis. As can be seen in figure 1b, pairs show a strong concentration toward small $`V_\mathrm{p}`$ in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ space, just as expected from the point-mass assumption. Further tests of the validity of this assumption will be made in the next section.
A well-mixed ensemble of binary populations satisfies the Jeans equation (Binney and Tremaine 1987),
$$\frac{d(\nu \overline{v_\mathrm{r}^2})}{dr}+\frac{2\nu \beta \overline{v_\mathrm{r}^2}}{r}=-\frac{GM}{r^2}\nu .$$
(8)
Here $`\nu `$ denotes the separation distribution of pairs, and $`\beta `$ is the anisotropy parameter defined as
$$\beta =1-\frac{\overline{v_\theta ^2}}{\overline{v_\mathrm{r}^2}},$$
(9)
where $`\overline{v_\theta ^2}`$ and $`\overline{v_\mathrm{r}^2}`$ denote the respective components of the velocity ellipsoid. Note that $`\beta =-\infty `$ for circular orbits, $`\beta =0`$ for isotropic orbits, and $`\beta =1`$ for radial orbits. We may rewrite equation (8) by normalizing with luminosity as
$$\frac{d(\nu \overline{V_r^2})}{dR}+\frac{2\nu \beta \overline{V_r^2}}{R}=-\frac{GM}{R^2L}\nu ,$$
(10)
where $`R=r/L^{1/3}`$ and $`\overline{V_r^2}=\overline{v_r^2}/L^{2/3}`$. Note that these are the true separation and velocity, not the projected ones.
For the separation distribution $`\nu `$, we assume a power law with an inner cutoff radius $`R_{\mathrm{min}}`$:
$$\nu (R)\propto R^{-\gamma }\quad \mathrm{for}\ R\ge R_{\mathrm{min}}.$$
(11)
We introduced the cutoff radius because galaxies have finite sizes, and pairs that are too close are not likely to exist. The model used here is, therefore, not exactly a scale-free model (cf. White 1981).
In order to model the distribution of pairs, one should choose suitable values for the parameters $`\beta `$, $`\gamma `$, and $`R_{\mathrm{min}}`$. The parameters related to the separation distribution can be obtained directly from the observed separation distribution, because the probability distribution for the projection effect can be written analytically as
$$p[R_\mathrm{p}|R]=\frac{2R_\mathrm{p}}{\pi R(R^2-R_\mathrm{p}^2)^{1/2}}\quad (\mathrm{for}\ R_\mathrm{p}\le R).$$
(12)
We compared the separation distributions of the observed and model pairs, and obtained the best-fit values $`\gamma =2.6`$ and $`R_{\mathrm{min}}=10`$ kpc. In the rest of this paper we adopt these best-fit values for $`\gamma `$ and $`R_{\mathrm{min}}`$, but we note that the results are not sensitive to changes in the assumed values. Once the separation distribution is obtained, the distribution of the velocity difference follows by solving the Jeans equation [eq. (8)]. Then one can calculate the probability distribution of pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space, $`p_{\mathrm{bin}}[R_\mathrm{p},V_\mathrm{p}|(M/L)]`$, by taking the projection effect into consideration.
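As an illustration of this step (our own check, valid far from the cutoff $`R_{\mathrm{min}}`$ for point masses, constant $`\beta `$, and $`\nu \propto R^{-\gamma }`$): the ansatz $`\overline{V_r^2}\propto 1/R`$ solves eq. (10), giving

$$\overline{V_r^2}(R)=\frac{GM}{L(\gamma +1-2\beta )R},\qquad \gamma +1-2\beta >0,$$

so for the isotropic case $`\beta =0`$ with $`\gamma =2.6`$ one has $`\overline{V_r^2}=GM/(3.6LR)`$.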
### 3.2 Selection Bias and Contamination of Optical Pair
In addition to the orbital models for true pairs, here we consider the selection effect for pairs and the contamination by optical pairs. As described in the previous section, the isolation criteria are independent of the separation or velocity difference of a pair, and hence the sample is free from biases both in the separation in the sky plane and in the separation along the line of sight. The isolation criteria may, however, cause the exclusion of true pairs due to the chance projection of another companion galaxy that is not physically related to the pair. According to the isolation criteria, the maximum velocity difference of a pair is 400 km s⁻¹ for galaxies with $`L\simeq 10^{10}L_{\odot }`$. Therefore, a system of a true pair plus any foreground or background galaxy within 8 Mpc of the pair cannot be regarded as a true pair, because criterion 6 is violated ($`H_0=50`$ km s⁻¹ Mpc⁻¹ assumed). Unfortunately, there is no way to determine whether the observed velocity difference of two galaxies is due to the Hubble flow or due to binary orbital motion, and so this kind of exclusion of true pairs is unavoidable. Furthermore, two galaxies which are well separated along the line of sight and are not physically associated could be regarded as a pair because of misidentification of the redshift difference as binary orbital motion. For these reasons, the sample selected in the present paper is far from perfect and is likely to contain unphysical pairs, which would lead to wrong estimates of $`M/L`$. Therefore, the exclusion of true pairs and the contamination by optical pairs must be taken into consideration in the $`M/L`$ determination.
Fortunately, the possibility of true-pair exclusion is independent of $`R_\mathrm{p}`$ or $`V_\mathrm{p}`$, as the selection criteria do not depend on them. Hence, the probability distribution of pairs in $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ space, which is to be compared with the observed pairs, can be expressed as
$$p[R_\mathrm{p},V_\mathrm{p}|M/L,f]=fp_{\mathrm{bin}}[R_\mathrm{p},V_\mathrm{p}|(M/L)]+(1f)p_{\mathrm{opt}}[R_\mathrm{p},V_\mathrm{p}],$$
(13)
where $`f`$ is a constant corresponding to the fraction of true pairs out of observed pairs. Clearly the first term on the right side expresses the contribution of true binaries, and the second term describes the contamination from optical pairs. Probabilities $`p`$, $`p_{\mathrm{bin}}`$, and $`p_{\mathrm{opt}}`$ are normalized so that
$$\int p𝑑R_\mathrm{p}𝑑V_\mathrm{p}=\int p_{\mathrm{bin}}𝑑R_\mathrm{p}𝑑V_\mathrm{p}=\int p_{\mathrm{opt}}𝑑R_\mathrm{p}𝑑V_\mathrm{p}=1,$$
(14)
where the integrations are performed from 0 to 400 kpc for $`R_\mathrm{p}`$, and from 0 to 400 km s⁻¹ for $`V_\mathrm{p}`$.
The possibility of mis-identification of optical pairs is proportional to the number density of galaxies. It is generally known that the distribution of galaxies in the Universe is not uniform but shows strong clustering, which is usually described in terms of the two-point correlation function (e.g. Peebles 1993). With this function the probability distribution of optical pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space can be written as
$$p_{\mathrm{opt}}[R_\mathrm{p},V_\mathrm{p}]dR_\mathrm{p}dV_\mathrm{p}\propto \left[1+\xi (r)\right]R_\mathrm{p}dR_\mathrm{p}dV_\mathrm{p},$$
(15)
where $`\xi (r)`$ is the two-point correlation function, and this is usually written in the form of
$$\xi (r)=\left(\frac{r_0}{r}\right)^q.$$
(16)
The two-point correlation function is well determined on scales of 10 Mpc, but less certain on scales of 1 Mpc. Hence in the following analysis we consider two cases, a no-clustering case with $`q=0`$ and a clustering case with $`q=1.8`$ and $`r_0=10`$ Mpc (Peebles 1993), and see how the $`M/L`$ estimates depend on the clustering effect. The separation $`r`$ can be calculated from the projected separation and the velocity difference by assuming a Hubble constant of 50 km s⁻¹ Mpc⁻¹. Note that in any case the probability distribution of optical pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ space is independent of the $`M/L`$ ratio of galaxies.
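Presumably (this reconstruction is ours, not stated explicitly in the text) the three-dimensional separation entering $`\xi (r)`$ combines the projected separation with a line-of-sight distance obtained from the velocity difference via the Hubble law:

$$r=\left[r_\mathrm{p}^2+\left(\frac{v_\mathrm{p}}{H_0}\right)^2\right]^{1/2}.$$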
### 3.3 $`M/L`$ Determination
To evaluate the mass-to-light ratio and the fraction of true pairs we make use of the maximum-likelihood method for $`M/L`$ and $`f`$. The probability of finding a pair at $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$ in the projected phase space is proportional to $`p[R_\mathrm{p},V_\mathrm{p}|M/L,f]`$, and hence the logarithmic likelihood of finding all observed pairs at their observed positions in the projected phase space is the sum over pairs of the logarithm of the probability of finding each pair at its position. Therefore, the logarithmic likelihood of $`M/L`$ for the observed pairs can be written as
$$\mathrm{log}\mathcal{L}(M/L,f)=\sum n(R_\mathrm{p},V_\mathrm{p})\mathrm{log}p[R_\mathrm{p},V_\mathrm{p}|M/L,f],$$
(17)
where $`n(R_\mathrm{p},V_\mathrm{p})`$ denotes the observed number of pairs having $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$, and the summation is done over the whole projected phase space ($`R_\mathrm{p}`$ less than 400 kpc and $`V_\mathrm{p}`$ less than 400 km s⁻¹). Evidently $`n(R_\mathrm{p},V_\mathrm{p})`$ is an integer as long as the values of $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$ are determined with sufficient accuracy. However, for the pairs considered here $`R_\mathrm{p}`$ and $`V_\mathrm{p}`$ have uncertainties, and the uncertainty in $`V_\mathrm{p}`$ is particularly crucial for the $`M/L`$ determination because the $`M/L`$ estimator depends on $`V_\mathrm{p}^2`$. Therefore, we treated each observed pair as a Gaussian distribution spread in the direction of $`V_\mathrm{p}`$, and then $`n(R_\mathrm{p},V_\mathrm{p})`$ is given as
$$n(R_\mathrm{p},V_\mathrm{p})dR_\mathrm{p}dV_\mathrm{p}\propto \underset{i}{\sum }g_idR_\mathrm{p}dV_\mathrm{p},$$
(18)
where
$$g_i=(2\pi \sigma _i^2)^{-1/2}\mathrm{exp}\left(-\frac{(V_\mathrm{p}-V_i)^2}{2\sigma _i^2}\right).$$
(19)
Here $`V_i`$ and $`\sigma _i`$ denote the observed $`V_\mathrm{p}`$ and its uncertainty for the $`i`$th pair, respectively. Note that $`n`$ is normalized so that $`\int n𝑑R_\mathrm{p}𝑑V_\mathrm{p}=N_{\mathrm{tot}}`$, where $`N_{\mathrm{tot}}`$ is the total number of pairs.
Since the probability distribution of physical pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space, $`p_{\mathrm{bin}}[R_\mathrm{p},V_\mathrm{p}|(M/L)]`$, cannot be expressed analytically due to the projection effect, we performed Monte-Carlo simulations to evaluate $`p_{\mathrm{bin}}`$. The distribution of one million pairs in the projected phase space was simulated assuming random orientations of the orbital planes with respect to the line of sight, and then the logarithmic likelihood (equation 17) was calculated in the parameter space of $`f`$ and $`M/L`$.
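As a rough illustration of this Monte-Carlo step, a minimal sketch follows (our own code, not the authors’: it assumes the isotropic $`\beta =0`$ case with the $`1/R`$ velocity-dispersion solution quoted above, point masses, a single luminosity with $`L_{10}=1`$ so that $`(R_\mathrm{p},V_\mathrm{p})`$ coincide with $`(r_\mathrm{p},|v_\mathrm{p}|)`$, and no optical-pair contamination; all variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameters quoted in the text (illustrative run for a pair with L_10 = 1)
GAMMA, R_MIN, R_MAX = 2.6, 10.0, 400.0   # nu(R) ~ R**-GAMMA, separations in kpc
G = 4.30e-6                              # gravitational constant, kpc (km/s)^2 / M_sun
ML, L = 35.0, 1.0e10                     # trial M/L and total luminosity (solar units)
N = 1_000_000                            # number of simulated pairs

# 1) true separations R drawn from the power law via inverse-CDF sampling
e = 1.0 - GAMMA
u = rng.uniform(size=N)
R = (R_MIN**e + u * (R_MAX**e - R_MIN**e)) ** (1.0 / e)

# 2) isotropic (beta = 0) radial dispersion from the 1/R Jeans solution above
sigma = np.sqrt(G * ML * L / ((GAMMA + 1.0) * R))

# 3) projection: random orientation of the separation vector, and a Gaussian
#    line-of-sight velocity component for an isotropic velocity ellipsoid
mu = rng.uniform(size=N)                 # cos(theta) of the separation vector
R_p = R * np.sqrt(1.0 - mu**2)           # separation projected on the sky
V_p = np.abs(rng.normal(0.0, sigma))     # |line-of-sight velocity difference|

# 4) binned density -> estimate of p_bin on the (R_p, V_p) grid
p_bin, r_edges, v_edges = np.histogram2d(
    R_p, V_p, bins=40, range=[[0.0, 400.0], [0.0, 400.0]], density=True)
```

The normalized 2D histogram then plays the role of $`p_{\mathrm{bin}}[R_\mathrm{p},V_\mathrm{p}|(M/L)]`$ in the likelihood of equation 17.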
Figure 2 shows the likelihood contours for sample I (57 pairs) for the case of $`q=0`$ (no clustering for optical pairs). The thick lines are for $`\beta =0`$ (isotropic orbits) and the dotted lines are for $`\beta =-\infty `$ (circular orbits). The figures show that the $`M/L`$ estimates are not strongly affected by the assumed orbital parameters. The best estimates of $`M/L`$ are $`35_{-5}^{+7}`$ for $`\beta =0`$ and $`28_{-3}^{+5}`$ for $`\beta =-\infty `$, with a true binary fraction $`f`$ of $`0.88_{-0.1}^{+0.07}`$ for both cases (the error bars denote the 68% confidence level). The results for the true binary fraction $`f`$ indicate that most pairs in sample I are likely to be bound. The expected number of optical pairs is about 7 out of 57, which is comparable to the number of pairs that appear in Figure 1b above the envelope corresponding to $`(M/L)_\mathrm{p}`$ of 20.
On the other hand, figure 3 shows the likelihood contours for sample I as in figure 2, but for $`q=1.8`$ (clustering for optical pairs). The contours for the two orbital models are shown, and again one can see the weak dependence of $`M/L`$ on the orbital parameter. However, the true pair fractions $`f`$ are quite different from those for the no-clustering case: we obtained $`f=0.71_{-0.15}^{+0.14}`$ ($`\beta =0`$) and $`f=0.73_{-0.15}^{+0.14}`$ ($`\beta =-\infty `$) for the clustering case. This is because the expected number of optical pairs with small $`V_\mathrm{p}`$ is much larger than for the no-clustering case, and hence more galaxies with small $`V_\mathrm{p}`$ are regarded as optical pairs. However, the best estimates of $`M/L`$ are $`36_{-4}^{+8}`$ for $`\beta =0`$ and $`30_{-3}^{+6}`$ for $`\beta =-\infty `$, which are fairly close to those for the no-clustering case.
In figures 2 and 3 the best estimates of $`M/L`$ change little with $`f`$. This is because $`M/L`$ is essentially determined by the pairs which lie below the envelope in figure 1b: in most of the range of $`f`$ (e.g. $`f`$ less than 0.95) the pairs far above the envelope are likely to be optical pairs, and have little effect on the $`M/L`$ determination. On the other hand, if $`f`$ is set to almost unity, the most likely $`M/L`$ could become as high as 100, to explain the pairs with high $`(M/L)_\mathrm{p}`$ without optical pairs. However, the likelihood of finding $`f`$ of almost unity is very small compared to that for $`f`$ between 0.6 and 0.9, for in that case the concentration of pairs at small $`(M/L)_\mathrm{p}`$ seen in figure 1 cannot be explained at all.
In order to test whether the $`M/L`$ and $`f`$ obtained above can reproduce the distribution of observed pairs, we simulated the distribution of model pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space using the best-fit parameters. Figure 4 shows the simulated distribution of 57 model pairs with $`f=0.88`$ and $`M/L=35`$, assuming isotropic orbits for true pairs and no clustering for optical pairs. The figure closely resembles the observed distribution in figure 1b in many respects: the small fraction of pairs with extremely high $`(M/L)_\mathrm{p}`$, the concentration of pairs at small $`V_\mathrm{p}`$, and the envelope corresponding to $`(M/L)_\mathrm{p}\simeq 20`$. This simulation confirms the validity of the results.
In order to see if these results depend strongly on the sample used, we also performed the same analysis for sample II (109 pairs), which was selected with the weaker isolation criterion ($`a=1.5`$). As seen in Figure 1a, this sample is likely to contain more optical pairs than sample I due to the weaker selection criterion. In fact, the resultant value of $`f`$ is $`0.80_{-0.08}^{+0.07}`$ for the no-clustering case ($`q=0`$) and $`0.62_{-0.11}^{+0.10}`$ for the clustering case ($`q=1.8`$), with $`\beta =0`$ for both cases. However, the best estimates of $`M/L`$ are $`35_{-3}^{+5}`$ ($`\beta =0`$) and $`28_{-2}^{+5}`$ ($`\beta =-\infty `$) for the no-clustering case, and $`36_{-3}^{+12}`$ ($`\beta =0`$) and $`30_{-3}^{+8}`$ ($`\beta =-\infty `$) for the clustering case. These results agree remarkably well with those for sample I, indicating that the results do not depend strongly on the sample. The results for sample II as well as those for sample I are listed in Table 2.
However, we note that the results might change if we take the other limit of $`\beta `$: $`\beta =1`$, corresponding to radial orbits. In this case the best $`M/L`$ for sample I was found to be $`42_{-7}^{+34}`$ ($`q=0`$), and so $`M/L`$ could exceed 50 within the 68% confidence level. Yet the assumption of $`\beta =1`$ seems too radical, because in this case the pairs suffer direct encounters that would probably lead to mergers. In order for bound pairs to survive for many orbital periods, their orbits must be elliptical at least to some degree, and so $`\beta `$ cannot be too close to unity. On the other hand, perfectly circular orbits ($`\beta =-\infty `$) are also unlikely, because they require fine tuning of the orbital parameters. Therefore, the results with $`\beta =0`$ presumably best represent the mass-to-light ratio of true pairs.
### 3.4 $`M/L`$ for pure spiral pairs
In the $`M/L`$ determination above, we did not consider variations of $`M/L`$ among the sample galaxies. The $`M/L`$ of galaxies is, however, usually considered to vary with galaxy type. Indeed, previous binary galaxy studies claimed a larger $`M/L`$ for ellipticals than for spirals (e.g., Schweizer 1987). Therefore, it is interesting to study the $`M/L`$ ratio of galaxies for a specific type.
The 57 pairs in sample I consist of spirals, S0s, ellipticals, and some others such as peculiars. The dominant type among them is spirals, which make up 70% of sample I, and hence we estimate $`M/L`$ for spiral galaxies. In particular we concentrate on ‘pure’ spiral pairs consisting of two spiral galaxies later than Sa, because a pair of a spiral and an S0 or elliptical does not necessarily reflect the $`M/L`$ of spiral galaxies. We selected 30 pure spiral pairs out of the 57 pairs in sample I (hereafter sample III), and performed the same analysis described above. The likelihood contours for the no-clustering case ($`q=0`$) and for the clustering case ($`q=1.8`$) are plotted in figures 5 and 6, respectively. The most likely value for the no-clustering case is found to be $`15_{-3}^{+5}`$ for $`\beta =0`$, and $`12_{-3}^{+4}`$ for $`\beta =-\infty `$, to be compared with the results for the 57 pairs of mixed types, 35 for $`\beta =0`$ or 28 for $`\beta =-\infty `$. The best estimates of $`M/L`$ for the clustering case are similar to those for the no-clustering case, whereas the true pair fraction $`f`$ is somewhat smaller (see table 2). In both cases the difference in $`M/L`$ between samples I and III is significant, being above the $`3\sigma `$ level. Therefore, we conclude that the difference is real, and that the $`M/L`$ of spirals is smaller than that of ellipticals or S0s. This is consistent with previous studies of binary galaxies, although the $`M/L`$ obtained here is somewhat smaller than in those studies.
## 4 Discussion
### 4.1 $`M/L`$ Dependence on Separation
In the previous sections, pairs were approximated as two point masses. However, real galaxies have finite sizes, which could be as large as 100 kpc if dark halos extend well beyond the optical disks. In this case the point-mass approximation is not valid, and the $`M/L`$ obtained above could be an underestimate, particularly for pairs with small separations. Here we investigate the dependence of $`M/L`$ on the separation of pairs, and test whether the point-mass assumption is reasonable for the present samples.
We divided the 57 pairs in sample I into three subgroups depending on the separation: 27 pairs with $`0<R_\mathrm{p}<100`$ kpc, 12 with $`100<R_\mathrm{p}<200`$ kpc, and 18 with $`R_\mathrm{p}>200`$ kpc. The $`M/L`$ ratios for the three subsamples were obtained in the same manner as described in the previous section. Assuming $`\beta =0`$ and $`q=0`$, we obtained best estimates of $`M/L`$ with $`1\sigma `$ errors of $`36_{-5}^{+10}`$, $`37_{-17}^{+31}`$, and $`25_{-13}^{+29}`$, respectively. The error bars are larger than those in Section 3 because the number of galaxies in each subsample is reduced. Figure 7 shows the $`M/L`$ dependence on the mean separations of the pairs. The figure demonstrates that the $`M/L`$ ratio is almost constant, with variations within the $`1\sigma `$ error bars. If the density distribution of the dark halo were proportional to $`r^{-2}`$ at large radii, $`M/L`$ would increase linearly with radius, and if it were proportional to $`r^{-3}`$, as suggested by recent simulations (Navarro et al. 1996), $`M/L`$ would increase logarithmically with radius. However, figure 7 shows no tendency of increasing $`M/L`$ at large radii. Therefore, the halos of galaxies as luminous as the Milky Way may be truncated within 100 kpc. The indication of constant $`M/L`$ beyond 100 kpc is consistent with previous studies of binary galaxies. According to Peterson (1979b), the $`M/L`$ ratio gradually increases with radius at $`R<100`$ kpc, but remains constant beyond it. Schweizer (1987) also showed that $`M/L`$ does not increase beyond 100 kpc.
The $`M/L`$ estimate for spiral galaxies is also consistent with previous studies. Schweizer (1987) obtained $`M/L`$ of $`21\pm 5`$ (V band, absorption corrected) with a sample whose mean separation is about 90 kpc. Peterson (1979b) obtained a spiral-galaxy $`M/L`$ of $`35\pm 13`$ ($`H_0=50`$ km s⁻¹ Mpc⁻¹) based on 39 pairs with a mean separation of 110 kpc, and Turner (1976b) also obtained an $`M/L`$ for spiral galaxies of about 35. Note that in the 70’s the corrections for galactic and internal absorption were not usually made, and this partly explains the smaller $`M/L`$ in the present paper. The mean absorption correction for our sample galaxies is about 0.4 mag, which reduces $`M/L`$ by about 30%. Therefore, if similar absorption corrections were made, the studies by Peterson (1979b) and Turner (1976b) would give $`M/L`$ of 23–27, which is close to our results. Although the $`M/L`$ obtained in the present paper is not significantly different from previous studies, we would like to emphasize that the mean of the absolute separations (not the luminosity-corrected separation $`R_\mathrm{p}`$) of sample III in this paper is about 206 kpc, almost twice that of previous studies. Nevertheless, the $`M/L`$ ratio shows no tendency to increase with separation when compared with previous studies.
### 4.2 Dark Halo Extent of Spiral Galaxies
Since the mass of a spiral galaxy’s optical disk can be estimated from its rotation curve, we can compare the total $`M/L`$ with the optical disk $`M/L`$, and discuss the extent of the dark halos of spiral galaxies. We define $`(M/L)_{R25}`$ as the ratio of the enclosed mass within $`R_{25}`$ to the total luminosity, where $`R_{25}`$ is the radius at which the surface brightness falls to 25 mag per arcsec². The enclosed mass can be estimated from the HI velocity line width $`W_{\mathrm{HI}}`$. Assuming that $`W_{\mathrm{HI}}`$ corresponds to twice the rotation velocity, we obtain
$$(M/L)_{R25}=\frac{R_{25}W_{\mathrm{HI}}^2}{4GL}.$$
(20)
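As a quick numerical check (illustrative values of our own choosing, with $`G=4.30\times 10^{-6}`$ kpc (km s⁻¹)² $`M_{\odot }^{-1}`$): for $`R_{25}=15`$ kpc, $`W_{\mathrm{HI}}=350`$ km s⁻¹ and $`L=10^{10}L_{\odot }`$,

$$(M/L)_{R25}=\frac{15\times (350)^2}{4\times 4.30\times 10^{-6}\times 10^{10}}\simeq 11\,M_{\odot }/L_{\odot },$$

well inside the range of 2 to 18 quoted in the next paragraph.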
The central surface brightness of spiral galaxy disks is constant, about 22 mag per arcsec² (Freeman 1970). If this applies to the spiral galaxies in the binary sample, $`R_{25}`$ corresponds to about 3 times the disk scale length $`d`$, and $`(M/L)_{R25}`$ roughly approximates the mass-to-light ratio of the disk. We calculated $`(M/L)_{R25}`$ for the spiral galaxies in sample I for which $`R_{25}`$ and $`W_{\mathrm{HI}}`$ are available. $`R_{25}`$ and $`W_{\mathrm{HI}}`$ were taken from de Vaucouleurs et al. (1991) and Huchtmeier and Richter (1989), respectively. The values of $`(M/L)_{R25}`$ range from 2 to 18 with an average of 7. Note that this $`M/L`$ is consistent with previous studies of the disk $`M/L`$; for example, Faber & Gallagher (1979) obtained an $`M/L`$ of about 5 ($`H_0=50`$ km s⁻¹ Mpc⁻¹). This is to be compared with the total $`M/L`$ obtained in section 3, $`M/L`$ of 12–16. The total $`M/L`$ is somewhat larger than the disk $`M/L`$, the difference being at almost the $`3\sigma `$ level (see figures 5 and 6). This difference is of course due to the dark halo, and it indicates that under the maximum disk assumption the contributions of the dark halo and the optical disk to the total mass of galaxies are comparable. However, the maximum disk assumption is still controversial. If the disk mass is smaller than that indicated by the maximum rotation velocity within the optical disk, the dark halo could contribute dominantly to the total mass.
We can also estimate the extent of dark halos by comparing the total $`M/L`$ with the disk $`M/L`$. If a flat rotation curve is assumed, the value of $`M/L`$ increases linearly with radius. In this case, the resultant $`M/L`$ of 15 implies that the typical halo extends to $`15/7\simeq 2`$ times $`R_{25}`$. If we adopt a disk $`M/L`$ of 5 according to Faber & Gallagher (1979), the halo extent is $`15/5=3`$ times $`R_{25}`$. Therefore, the typical dark halo size may be about 6 to 9 times the disk scale length, if $`R_{25}`$ is $`3`$ times the disk scale length $`d`$. This is to be compared with the size of the optical disk, which is about 4$`\sim `$5 times the disk scale length (van der Kruit & Searle 1981). Hence, if the rotation curve is perfectly flat out to the radius at which the halo mass distribution is truncated, the halo size may be about 2 times larger than that of optical disks. This is, of course, not exactly true when the rotation curve is not completely flat, as preferred by recent simulations. In this case, the halo size may be somewhat larger, but it cannot exceed a few hundred kpc, as indicated by the constancy of $`M/L`$.
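Explicitly, for a flat rotation curve $`M(r)\propto r`$, so the truncation radius follows from simple proportionality:

$$\frac{r_{\mathrm{halo}}}{R_{25}}=\frac{(M/L)_{\mathrm{total}}}{(M/L)_{\mathrm{disk}}}=\frac{15}{7}\simeq 2\qquad \mathrm{or}\qquad \frac{15}{5}=3,$$

i.e. $`r_{\mathrm{halo}}\simeq 6d`$ to $`9d`$ for $`R_{25}\simeq 3d`$.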
The halo size indicated here is somewhat smaller than those in previous studies, but quite consistent with recent investigations. For instance, a number of declining rotation curves, some of which may be fitted even with a Keplerian, have been found recently (e.g., Jörsäter & van Moorsel 1995; Olling 1996; Honma & Sofue 1996). Honma & Sofue (1997) showed that such declining rotation curves are not uncommon once the observational uncertainty is considered. These rotation-curve studies are generally based on HI observations, which usually extend out to 5$`\sim `$10 times the disk scale length. Therefore, the fact that declining parts of rotation curves were found is consistent with the present results.
We are grateful to Y. Sofue for his supervision, and to Y. Tutui and J. Koda for fruitful discussion. This work was financially supported by the Japan Society for the Promotion of Science.
## Figure Captions
Figure 1. Distribution of selected pairs in the $`R_\mathrm{p}`$-$`V_\mathrm{p}`$ phase space. Figure 1a is for the pairs selected with $`a=1.5`$, and 1b is for the pairs selected with $`a=2.5`$. The dotted curve in figure 1b corresponds to the constant $`(M/L)_\mathrm{p}`$ of 20.
Figure 2. Likelihood contours in the parameter space of $`M/L`$ and $`f`$ for sample I. As for the optical pair distribution, no clustering is assumed ($`q=0`$). Solid lines are for $`\beta =0`$ (isotropic orbits), and dotted lines are for $`\beta =\mathrm{\infty }`$ (circular orbits). The three contours for each case correspond to the 68, 95 and 99% levels, and the crosses denote the peak of the likelihood.
Figure 3. Likelihood contours, same as figure 2, but for $`q=1.8`$ (clustering case for optical pairs).
Figure 4. Simulated distribution of 57 pairs, to be compared with figure 1b. The distribution is calculated with the best-fit parameters obtained in Section 3. The dotted curve corresponds to $`(M/L)_\mathrm{p}`$ of 20.
Figure 5. Likelihood contours, same as figure 2 (no-clustering case), but for sample III (30 pure spiral pairs).
Figure 6. Likelihood contours, same as figure 3 (clustering case), but for sample III (30 pure spiral pairs).
Figure 7. $`M/L`$ for three subgroups against the mean separation (see text for the sample). The error bar for $`R`$ denotes the $`1\sigma `$ deviation of pairs in the sample.
# The Planck Low Frequency Instrument
## Abstract
ABSTRACT – The Low Frequency Instrument (LFI) of the “Planck Surveyor” ESA mission will perform high-resolution imaging of the Cosmic Microwave Background anisotropies at four frequencies in the 30–100 GHz range. We review the main scientific objectives of the LFI, the current status of the instrument design and the ongoing effort to develop software simulations of the LFI observations. In particular we discuss the design status of the Planck telescope, which is critical for reaching adequate effective angular resolution.
<sup>1</sup>Istituto TeSRE, CNR, Bologna, Italy; <sup>2</sup>IFC, CNR, Milano, Italy.
KEYWORDS: Cosmic Microwave Background – Space Missions – Telescopes.
1. INTRODUCTION

The Planck LFI represents the third generation of mm-wave instruments designed for space observations of CMB anisotropies, following the COBE Differential Microwave Radiometer (DMR) and the Microwave Anisotropy Probe (MAP). The DMR, launched in 1989, detected structure in the CMB angular distribution at angular scales $`>7\mathrm{deg}`$. The LFI will produce images of the sky at four frequencies between 30 and $`100`$GHz, with an unprecedented combination of sky coverage, calibration accuracy, freedom from systematic errors, stability and sensitivity (including polarized components). The LFI will produce full-sky maps at 30, 44, 70 and 100 GHz, with angular resolutions of $`33^{\prime }`$, $`23^{\prime }`$, $`14^{\prime }`$ and $`10^{\prime }`$, respectively, and with an average sensitivity per resolution element $`\mathrm{\Delta }T/T`$ of a few $`\times 10^{-6}`$. This unprecedented combination of angular resolution and sensitivity will uncover the wealth of cosmological information encoded in the anisotropy pattern at degree and sub-degree angular scales.
In the LFI frequency range the contaminating effect of the galactic emission, which dominates the astrophysical foreground noise on angular scales $`\gtrsim 30^{\prime }`$, is minimum at around $`60`$GHz, while the confusion noise due to extragalactic sources, which dominates on smaller angular scales, is minimum in the range 100–$`200`$GHz, where it is primarily due to radio sources (De Zotti & Toffolatti 1998). The 70 and $`100`$GHz channels are therefore optimal to get the cleanest possible view of primordial CMB fluctuations over the full range of angular scales. In both channels the astrophysical foreground noise is expected to be well below the cosmological signal on all observed angular scales. At $`100`$GHz the LFI will then accurately measure the power spectrum of CMB anisotropies up to multipoles $`\mathrm{\ell }\sim 1300`$ with an accuracy of the order of, or better than, 1%. Little cosmological information is left at angular scales smaller than 10 arcminutes, if standard inflationary models hold; in fact, anisotropies at such scales are quasi-exponentially erased by photon diffusion.
The LFI measurements will determine the primary cosmological parameters (Hubble constant, deceleration parameter, curvature of space, baryon density, dark matter densities including neutrinos, amplitude and spectral index of the primordial scalar density perturbations, and the gravity-wave content of the Universe) to an accuracy of a few percent (see, e.g., Bond et al. 1997). The LFI data can test models for the origin of primordial perturbations, i.e. whether they are due to topological defects or to quantum fluctuations, and constrain the global properties of the Universe (topology, rotation, shear, etc.) and the theories of particle physics at energies $`\sim 10^{16}`$ GeV. Polarization measurements, which will be possible with an accuracy of a few $`\mu `$K towards the ecliptic caps, will independently confirm these findings and help in breaking degeneracies in the determination of cosmological parameters (Zaldarriaga et al. 1997). Also, constraints on (or possible detections of) deviations of the CMB spectrum from a Planckian shape can be accurately studied by Planck by analyzing the dipole signature, thus providing interesting information on cosmological and astrophysical processes at high redshifts (Danese & De Zotti 1981, Burigana et al. 1998b).
The LFI will also detect the Sunyaev-Zeldovich effect (Sunyaev & Zeldovich 1970) towards a few hundred clusters of galaxies, allowing an independent determination of the Hubble constant (Cavaliere et al. 1979; Myers et al. 1997) and providing information on the intracluster medium complementary to that from X-rays (Rephaeli 1995). The LFI four-frequency all-sky surveys will also be unique in providing complete samples comprising from several hundred to a few thousand extragalactic sources, selected in an essentially unexplored frequency range, including familiar “flat-spectrum” radio sources, sources with strongly inverted spectra and possibly new classes of sources (Toffolatti et al. 1998, De Zotti & Toffolatti 1998).
Moreover, the LFI maps will provide a rich database for studies of Galactic evolution, the interstellar medium, and discrete Galactic sources, including supernova remnants, Sagittarius A, and sources self-absorbed up to high frequencies such as some symbiotic stars and planetary nebulae.
The combination of data from its two instruments, LFI and HFI (High Frequency Instrument, see Puget et al. 1998), gives the Planck Surveyor the imaging power, the redundancy and the control of systematic effects and foreground emissions needed to achieve the extraordinarily exciting scientific goals of this mission in a broad spectral range, from 30 to 857 GHz. This, in turn, is crucial for improving the accuracy in the determination of the cosmological parameters. LFI and HFI will deal primarily with different astrophysical processes, radio and dust emission respectively, both coexisting in real astrophysical sources. The LFI data are crucial to separate the cosmic signal from the contaminating effect of extragalactic radio sources, which dominate the foreground fluctuations on small angular scales in the cosmologically cleanest frequency range, at least up to 200 GHz. On the other hand, the HFI maps will be useful to subtract the Galactic dust emission, which is important on intermediate to large angular scales down to $`100`$GHz. Also, the full set of Planck data will be essential to address a number of important astrophysical problems, such as elucidating the physical and evolutionary connections between nuclear activity (responsible for the radio emission) and the processes governing the abundance and properties of the interstellar material (responsible for the sub-mm dust emission).
2. PROGRAMMATICS

Planck was formerly called COBRAS/SAMBA (Bersanelli et al. 1996), a combination of the two CMB proposals “COBRAS” and “SAMBA” submitted to ESA in 1993 in response to the call for mission ideas for the medium-size M3 mission, expected to be launched in 2003. After the Assessment Study and Phase A Study, COBRAS/SAMBA was selected and approved in late 1996; it was renamed in honor of Max Planck, and the launch was then planned for 2004. The ESA Announcement of Opportunity (AO) for the instruments of the FIRST/Planck Programme was issued in 1997, announcing a launch in 2006. Budgetary pressures within ESA’s scientific programme have forced a reconsideration of the original implementation plan for Planck. Between 1997 and 1998 several studies were carried out to determine how Planck will be implemented. After we replied to the AO, the launch date was shifted to 2007. The preferred option at the present time is to launch Planck together with the FIRST mission; in this solution (known as the “Carrier” configuration because of the launch arrangement) both Planck and FIRST will be placed in separate orbits around the second Lagrangian point of the Earth-Sun system. The LFI data will be transmitted to the LFI Data Processing Center (see Pasian & Gispert 1998 for a detailed description).
3. INSTRUMENT DESCRIPTION

A schematic overview of the LFI front-end unit is shown in Figure 1. The front-end unit is located at the focus of the Planck off-axis telescope, in a circular configuration around the High Frequency Instrument. The front-end unit is the heart of the instrument, and includes 28 modules, each containing one feed horn, one orthomode transducer (OMT), two hybrid couplers, two phase switches, and four cryogenic amplifiers.
The radiation focused by the telescope is coupled to the radiometers by conical, corrugated feedhorns (Bersanelli et al. 1998, Villa et al. 1997, 1998a). The radiation patterns of the horns must be highly symmetric, with low sidelobes and a beam width that matches the telescope edge-taper requirement ($`<-30`$ dB at 22$`\mathrm{deg}`$). In addition, the electromagnetic field inside the horn must propagate with low attenuation and low return loss. The OMTs separate the orthogonal polarizations with minimal losses and cross-talk. The return-loss ($`<-25`$ dB) and insertion-loss ($`<0.1`$ to 0.3 dB, depending on frequency) requirements must be met over the whole 20% bandwidth.
Each OMT is coupled to an integrated front-end module containing the hybrid couplers (two for each module) and the amplifier chains, including phase switches and output hybrids. The front-end modules are mounted on the 20 K plate. Each hybrid has two inputs, one of which sees the sky through one of the OMT arms and the feed horn, while the other looks at the 4 K reference load through a small horn fabricated into the hybrid block. Four amplifiers are contained in each amplifier block, with multiple inputs and outputs on a single flange to minimize size. The hybrid coupler combines the signals from the sky and the cold load with a fixed phase offset of either $`90^{\circ }`$ or $`180^{\circ }`$ between them. More details on the LFI receiver concept and potential systematics are given by Bersanelli & Mandolesi (1998).
The LFI Front End Unit is cooled to 20 K by a hydrogen sorption cooler (Wade & Levy 1997). In addition to meeting temperature and capacity requirements, the sorption cooler has major advantages for Planck: it is highly efficient; there are no cold moving parts, no vibration, and very low EMI; and integration with the instruments and spacecraft is simple, since the only part of the cooler in the focal assembly is a J-T valve and associated tubing while the compressor and control electronics are located remotely on the spacecraft. The sorption cooler also provides 18 K precooling to the HFI helium J-T cooler. The Planck thermal design allows efficient radiative cooling of the payload and an optimized design of the combined LFI and HFI front-end assemblies in the focal plane.
Following amplification, the signals are passed through a cryogenic, low-power phase switch, which adds $`90^{\circ }`$ or $`180^{\circ }`$ of phase lag to the signals, thus selecting the input source at the radiometer output as either the sky or the reference load, at a rate of about 1 kHz. The phase-lagged pair of signals is then passed into a second hybrid coupler, which separates the signals again. The inclusion of the phase switches and second hybrids in the front-end blocks eliminates the need for phase matching of the long transmission paths to the back-end, greatly simplifying integration and testing.
Each cryogenic front-end module is connected with the room temperature section with four waveguides or cables, grouped together. This results in a total of 20 coax cables and 92 waveguide sections running between the front-end unit and the back-end unit. The transmission lines are further grouped into four bundles and configured to allow the required flexibility and clearance for integration of HFI in the central portion of the assembly.
Each back-end module comprises two parallel chains of amplification, filtering, detection, and integration. The detected signals are amplified, and a low-pass filter or integrator reduces the variance of the random signal, providing in each channel a DC output voltage related to its average value. Post-detection amplifiers are integrated into the modules to avoid data-transmission problems between the radiometer and the electronics box. The sky and reference signals are at different levels, which are equalized after detection and integration by modulating the gain synchronously with the phase switch.
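To make the role of the gain modulation concrete, the following is an idealized toy model of this differencing scheme (a sketch only: the receiver temperatures, drift amplitude and noise level are illustrative numbers, not LFI specifications):

```python
import numpy as np

# Toy model of the phase-switched differencing: the detector alternately
# sees sky and reference power, and a gain-modulation factor r equalizes
# the two levels so that slow gain drifts cancel in the difference.
rng = np.random.default_rng(1)
n = 10_000
t_sky, t_ref, t_noise = 2.73, 4.0, 10.0               # K (illustrative)
gain = 1.0 + np.cumsum(rng.normal(0.0, 1e-4, n))      # slow 1/f-like drift
sigma = 0.02                                          # white noise, K

p_sky = gain * (t_sky + t_noise + rng.normal(0, sigma, n))
p_ref = gain * (t_ref + t_noise + rng.normal(0, sigma, n))
r = np.mean(p_sky) / np.mean(p_ref)                   # gain-modulation factor

raw = p_sky - np.mean(p_sky)          # undifferenced output: drift dominates
diff = p_sky - r * p_ref              # balanced difference: drift cancels
print(np.std(raw), np.std(diff))      # residual close to the white-noise level
```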
The thermal design is based on three principles: a) minimize the power dissipated in the focal assembly; b) maximize the effectiveness of radiative cooling by providing good views of cold space, and by intercepting conductive heat loads and radiating them away; c) segregate the warm and cold components, and keep them as far from each other as possible.
4. MISSION PERFORMANCE AND SIMULATIONS
A large set of detailed simulation codes is being developed by both the LFI and HFI Consortia, in close contact with the theoretical and hardware progress, with the aim of testing, and contributing to improve, the mission design. We briefly summarize here the basic concepts behind several issues investigated through simulations.
A crucial effect is introduced by straylight, i.e., contamination from off-axis sources through the sidelobes of the instrument beam. Several sources of contamination, both instrumental and astrophysical, may introduce spurious signals. The primary environmental sources of error for the LFI are those due to imperfect off-axis rejection by the optical system of radiation from the Sun, Earth, Moon, planets, and Galaxy. Sidelobe structure sweeping across the Galaxy can produce artifacts in any direction. Sidelobe contributions are dominated by features relatively near the optical axis, and typical maximum ratios between the sidelobe and central-beam signals reach levels of $`10\%`$ (Mandolesi et al. 1998, section 2.1.2); we require the level of galactic contamination to be below the noise level with a factor-of-two margin, to allow for uncertainties in the level of galactic emission. The exact contamination levels depend on the details of the sidelobe pattern, typically in the range $`-60`$ to $`-70`$ dB (for a more detailed study see Polegre et al. 1998). In addition, emission from near-field objects, such as the warm parts of the spacecraft or fluctuations of the mirror and shield temperatures, may affect the anisotropy measurements.
Transistor gain fluctuations in the receivers introduce amplifier noise-temperature fluctuations, which dominate in generating instrumental drifts with a $`1/f`$ noise spectrum (Seiffert et al. 1997). The LFI radiometer design minimizes this effect, but residual stripes may be present. Destriping algorithms (Delabrouille 1998, Burigana et al. 1997), based on the idea that the same position in the sky must give the same observed temperature for different satellite spin-axis positions, are quite efficient in further reducing these stripes, provided that the crossings between pointing positions obtained with different spin-axis positions are spread widely enough over the sky. In terms of the noise added with respect to the case of pure white receiver noise, these destriping techniques reduce the contribution of the stripes from a few tens of percent to a few percent. Even in the case in which the angle $`\alpha `$ between the telescope optical axis and the satellite spin axis is kept constantly at $`90^{\circ }`$, the worst case from this point of view, we find a good destriping efficiency for the majority of the LFI beams, located at $`2^{\circ }÷4^{\circ }`$ from the telescope optical axis but not close to the sky scanning direction. For on-axis beams (or, equivalently, for beams located close to the sky scanning direction), reducing the angle $`\alpha `$ significantly improves the efficiency of these destriping methods. Working with maps at resolutions close to the Planck FWHM’s, we find that an angle of $`85^{\circ }`$ is a good compromise for having efficient destriping and, considering the spread of the projected beam positions on the sky, practically full sky coverage.
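The following is a minimal illustration of the destriping idea just described (a toy Python sketch: one unknown offset per scan circle, solved iteratively from pixels revisited by different circles; the geometry and noise figures are invented for the example):

```python
import numpy as np

# Toy destriper: model each scan circle's 1/f drift as one unknown
# offset, and solve for the offsets using pixels seen by many circles.
rng = np.random.default_rng(2)
npix, ncirc, nhit = 500, 50, 200
sky = rng.normal(0.0, 1.0, npix)                 # toy sky map
pix = rng.integers(0, npix, (ncirc, nhit))       # pointing: circle -> pixels
offsets = rng.normal(0.0, 5.0, ncirc)            # true drift per circle
tod = sky[pix] + offsets[:, None] + rng.normal(0.0, 0.1, (ncirc, nhit))

# Alternate between binning a map from offset-corrected data and
# re-estimating each circle's offset against that map.
a = np.zeros(ncirc)
for _ in range(20):
    m = np.zeros(npix); h = np.zeros(npix)
    np.add.at(m, pix.ravel(), (tod - a[:, None]).ravel())
    np.add.at(h, pix.ravel(), 1.0)
    m /= np.maximum(h, 1.0)                      # binned map estimate
    a = np.mean(tod - m[pix], axis=1)            # per-circle offset estimate
    a -= a.mean()                                # fix the global offset degeneracy

print(np.std(offsets - offsets.mean() - a))      # residual stripes << input drift
```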
Stripes in the observed maps can also be introduced by thermal instabilities. The closer the spin axis remains to the Sun-Planck direction, the smaller are the temperature variations induced by departures from perfect cylindrical symmetry. The thermal design aims both at low temperatures, for sensitivity, and at thermal stability, for reducing drifts. Destriping algorithms can also be applied to reduce residual stripes due to thermal instabilities, which typically show a noise spectrum close to $`1/f^2`$ (Delabrouille 1998).
The amplitude of the stripes introduced by thermal and gain fluctuations, and the extent to which they can be reduced in the data analysis, are related to the Planck observational strategy. Sinusoidal oscillations of the spin axis may produce significant variations of the solar illumination, thus introducing unwanted thermal instabilities. On the other hand, other kinds of spin-axis “oscillations”, in which the angle between the spin axis and the Sun-Planck direction is kept constant (for example a precession of the spin axis around an axis kept on the ecliptic plane), minimise this effect and can be useful in reducing the stripes even for on-axis beams in the case $`\alpha =90^{\circ }`$.
On the other hand, the scanning strategy controls another important issue: only when $`\alpha `$ is constant is the distribution on the sky of the sensitivity per pixel smooth, with the global integration time per pixel increasing continuously from the ecliptic equator to the ecliptic poles. In the other cases, there can be large areas of sky where the sensitivity varies significantly from one pixel to another, even for small changes of position on the sky. This is exactly what we want to avoid, in order not to complicate the data analysis, particularly in the presence of foreground contamination.
As discussed in the introduction, the most important LFI channels from the cosmological point of view are those at 70 and 100 GHz; in particular, the 100 GHz channel has the best LFI resolution. In the new symmetric configuration of the Focal Plane Unit (FPU), the LFI feeds are located in a ring around the telescope optical axis, at about $`2^{\circ }÷4^{\circ }`$ from it. Therefore, the issue of main-beam optical distortions is crucial, particularly at 100 GHz, the LFI channel where we want to reach a FWHM angular resolution of $`10^{\prime }`$, necessary for the cosmological goals, and where the optical aberrations are largest, since they increase with frequency.
5. TELESCOPE DESIGN

The optimization of the Planck telescope is one of the goals of the Planck Teams. For the present optical study we have considered the 100 GHz channel.
The baseline design (report on Phase A, TICRA yellow report, etc.) is a 1292.4 mm projected-aperture Gregorian off-axis telescope satisfying the Dragone-Mizuguchi condition (Dragone 1978; Mizuguchi et al. 1978; Rush et al. 1990). This condition sets the tilt of the subreflector axis with respect to the main reflector axis so as to cancel the cross-polarization. Unfortunately, only at the center of the focal surface (null scan angle) does this kind of design show symmetrical beams. Beam aberrations (especially coma) arise when the feedhorn is located away from the center of the focal surface, and increase with frequency and with the distance from the optical axis. Since one of the most crucial effects of beam distortions is a degradation of the angular resolution with respect to the central beam, we have studied two configurations with increased main-reflector apertures in order to improve the resolution of the beams. The first one has a projected aperture of 1492.4 mm, while the projected aperture of the second configuration is 1750.0 mm. All these configurations have the same subreflector as the 1292.4 mm baseline design, as well as the same overall focal ratio. This means that the angular geometry is preserved in all the designs and no relevant changes of the FPU arrangement are needed.
Among other possible design configurations, an alternative to the Dragone-Mizuguchi Gregorian off-axis solution (in short, “Standard”) is represented by the aplanatic Gregorian design (in short, “Aplanatic”), first proposed by Mark Dragovan and the LFI Consortium in order to reduce the coma and spherical aberration over a large portion of the focal surface (Villa et al. 1998b). This new solution is obtained by changing the conic constants of both mirrors (both ellipsoids of rotation) so as to satisfy the aplanatic condition. Two such configurations have been studied, with 1292.4 mm and 1492.4 mm projected apertures respectively. Details of all the considered configurations, sketched in Figure 2, are reported in Table 1.
To analyze a general dual-reflector system, a dedicated software package has been implemented at Istituto TeSRE/CNR (Villa et al. 1998b). The code calculates the amplitude and phase distribution on a regular grid of points on the tilted aperture plane (normal to the azimuth and elevation directions on the sky).
The amplitude is calculated starting from the feed pattern and takes into account the space attenuation.
The phase is derived by calculating the path length of each ray, from the corresponding point on the aperture-plane grid up to the focal point calculated beforehand (by minimizing the wave-front error of the spot diagram). By Fourier transforming the spatial phase-amplitude distribution on the aperture plane, the far-field radiation pattern is readily obtained. For each configuration, the contour plots of the normalized patterns as functions of the sky-pointing scan angles (elevation, azimuth) have been calculated. Figure 3 shows our results for a typical beam position. Diffraction effects at the reflector rims are not considered, but for the main-beam response they are expected to be quite small.
All simulations have been done by considering a $`\mathrm{cos}^N(\theta )`$ primary pattern with $`N=91`$ (for the Standard 1.3 m configuration this gives an edge taper of $`-30`$ dB at an angle of $`22^{\circ }`$).
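A simplified version of this computation can be scripted directly. The sketch below assumes a circular aperture, the $`\mathrm{cos}^{91}(\theta )`$ power taper and a flat (aberration-free) phase, with a linear mapping between aperture radius and feed angle; it stands in for, and is much cruder than, the actual ray-traced code:

```python
import numpy as np

# Far-field beam from the FFT of the aperture-plane field distribution.
# Circular aperture, cos^91 power taper (-30 dB at 22 deg), flat phase.
ngrid, d_m, freq = 512, 1.2924, 100e9          # grid, aperture (m), 100 GHz
lam = 3e8 / freq
x = (np.arange(ngrid) - ngrid / 2) * (4 * d_m / ngrid)   # padded plane (m)
xx, yy = np.meshgrid(x, x)
rho = np.hypot(xx, yy)

theta_edge = np.deg2rad(22.0)                  # feed angle at the aperture edge
theta = np.clip(rho / (d_m / 2) * theta_edge, 0.0, np.pi / 2)
amp = np.where(rho <= d_m / 2, np.cos(theta) ** 45.5, 0.0)  # field ~ cos^(91/2)
field = amp * np.exp(1j * 0.0)                 # zero phase error: no aberrations

beam = np.fft.fftshift(np.abs(np.fft.fft2(field)) ** 2)
beam /= beam.max()

dtheta = lam / (4 * d_m)                       # angular size of one FFT bin (rad)
cut = beam[ngrid // 2]
half = np.where(cut >= 0.5)[0]
print((half[-1] - half[0]) * np.degrees(dtheta) * 60)  # FWHM ~ 10 arcmin
```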
In order to quantify the impact of the beams on the effective angular resolution, FWHM<sub>eff</sub>, in CMB anisotropy measurements, we have compared convolutions of a CDM anisotropy sky with the simulated beams and with a suitable grid of symmetric gaussian beams (see Burigana et al. 1998a and Mandolesi et al. 1997, section 3.2, for further details on the method). In Figure 4 we summarize our results for the five considered configurations. Note that the August 1998 ESA baseline Planck telescope is a 1492.4 mm aperture Gregorian telescope with the secondary (position, shape and size) still optimized for the 1.3 m Standard configuration: its main-beam resolution is then equivalent to that of the 1.3 m Standard telescope.
The averages of the FWHM<sub>eff</sub> in the relevant regions (between $`-2.5`$ and $`2.5`$ degrees for the 1.3 m telescopes and between $`-2`$ and $`2`$ degrees for the 1.5 m telescopes) are similar for the Standard and Aplanatic configurations with the same aperture ($`\sim 10`$ arcmin for the $`1.5`$ m telescopes and even better for the 1.75 m telescope). On the other hand, the FWHM<sub>eff</sub> of beams located at angular distances from the center roughly equal to or larger than $`1^{\circ }`$, where typical Planck (LFI and also HFI) feeds are located, is somewhat better, and the spread of the FWHM<sub>eff</sub>’s of the different beams is smaller, for the Aplanatic configuration (Villa et al. 1998b).
We find that the beam shapes are more regular for the Aplanatic configuration and, although elliptical, closer to gaussian shapes, owing to the strong reduction of the coma. This could also help the in-flight reconstruction of the beam pattern.
In addition, the Aplanatic configuration leaves the edge taper at the bottom edge of the main reflector essentially unchanged ($`-30`$ dB for the central feed) compared to the Standard telescope, while it improves the edge taper at the top edge of the main reflector ($`-40`$ dB for the central feed), where the spillover radiation is not shielded. This will most probably lead to an improvement of the top-edge straylight.
This preliminary study suggests that the Aplanatic configuration can represent a significant improvement of the main-beam properties compared to the Standard configuration, possibly also decreasing the sidelobe contamination. Further studies, including straylight, optimization of the focal surface and feed positioning, and mirror shapes, still need to be done.
REFERENCES
Bersanelli, M., et al. 1996, ESA, COBRAS/SAMBA Report on the Phase A Study, D/SCI(96)3
Bersanelli, M., et al. 1998, Experimental Astronomy, in press
Bersanelli, M., Mandolesi, N. 1998, this Conference
Bond, R.J., Efstathiou, G., Tegmark, M. 1997, MNRAS 291, L33
Burigana, C., et al. 1997, Int. Rep. TeSRE/CNR 198/1997
Burigana, C., et al. 1998a, A&ASS 130, 551
Burigana, C., et al. 1998b, this Conference
Cavaliere, A., Danese, L., De Zotti, G. 1979, A&A 75, 322
Delabrouille, J. 1998, A&ASS 127, 555
De Zotti, G., Toffolatti, L. 1998, this Conference
Dragone, C. 1978, B.S.T.J., Vol. 57, No. 7, 2663
Mandolesi, N., et al. 1997, Int. Rep. TeSRE/CNR 199/1997
Mandolesi, N., et al. 1998, Planck LFI, A Proposal Submitted to the ESA.
Mizuguchi, Y., Akagawa, M., Yokoi, H. 1978, Electronics & Comm. in Japan, Vol. 61-B, No. 3, 58
Myers, S.T., Baker, J.E., Readhead, A.C.S., Leitch, E.M., Herbig, T. 1997, ApJ 485, 1
Pasian, F., Gispert, R. 1998, this Conference
Polegre, A.M., et al. 1998, this Conference
Puget, J.-L., et al. 1998, HFI for the Planck Mission, A Proposal Submitted to the ESA.
Rephaeli, Y., 1995, ARA&A 33, 541
Rush, W.V.T., et al. 1990, IEEE Trans. AP., Vol. 38, No. 8, 1141
Seiffert, M., et al. 1997, The Review of Scientific Instruments, submitted
Toffolatti, L., et al. 1998, MNRAS 297, 117
Villa, F., Bersanelli, M., Mandolesi, N. 1997, Int. Rep. TeSRE/CNR 188/1997
Villa, F., Bersanelli, M., Mandolesi, N. 1998a, Int. Rep. TeSRE/CNR 206/1998
Villa, F., Mandolesi, N., Burigana, C. 1998b, Int. Rep. TeSRE/CNR 221/1998
Wade, L.A., Levy, A.R. 1997, Cryocoolers, 9, 587
Zaldarriaga, M., Spergel, D., Seljak, U. 1997, ApJ 488, 1
## 1 Introduction
As approximation schemes for gauge dynamics, instanton calculus and ’t Hooft’s $`1/N`$ expansion do not seem to combine in a useful fashion. Since effects of a charge-$`k`$ instanton sector are of $`𝒪(e^{-8\pi ^2k/g^2})=𝒪(e^{-N})`$, it would seem that they are always irrelevant in the large-$`N`$ limit unless they control the leading contribution to some observable (for instance, because of supersymmetry non-renormalization arguments), or somehow the integral over instanton moduli space is ill-defined. Such non-commutativity of the large-$`N`$ limit and the instanton sum is assumed to be behind well-known instances of theta-angle dependence at perturbative order in the $`1/N`$ expansion, notably in the context of large-$`N`$ chiral dynamics .
On the other hand, it is known that some toy models completely suppress instanton-like excitations once the large-$`N`$ limit has been taken. In other words, the ‘effective action’ resulting after large-$`N`$ diagram summation does not support instantons any more. So, one may wonder whether the large-$`N`$ ‘master field’ always loses its discrete topological structure. Recently, the AdS/CFT correspondence of has provided a new set of non-trivial master fields for some gauge theories. In particular, the theta-angle dependence of $`𝒩=4`$ Super Yang–Mills $`(\mathrm{SYM}_4)`$ in four dimensions can be studied in the large-$`N`$ expansion via perturbative Type IIB string theory in $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$. It is saturated by instantons, which appear in the supergravity description as D-instantons . So, the radical view that an instanton gas is incompatible with the large-$`N`$ limit is not vindicated in this case.
In this paper, we investigate these questions in a QCD cousin introduced by Witten , which admits a supergravity description of its master field while removing the constraints of extended supersymmetry and conformal invariance. More precisely, we would like to learn under what conditions some kind of topological configurations (instantons) still give the leading semiclassical effects of $`𝒪(e^{-N})`$, even after the planar diagrams have been summed over. We shall focus on the most elementary case, the dilute limit, i.e. the single-instanton sector.
One description of this theory is in terms of $`N`$ D4-branes wrapped on a circle $`𝐒_\beta ^1`$ of length $`\beta `$, with thermal boundary conditions. At weak coupling, the low-energy theory on the D4-branes is a perturbative five-dimensional Super Yang–Mills theory $`(\mathrm{SYM}_{4+1})`$, which reduces to non-supersymmetric, $`SU(N)`$ Yang–Mills theory in four euclidean dimensions $`(\mathrm{YM}_4)`$ at distance scales much larger than the inverse temperature $`T^{-1}=\beta `$. Since five-dimensional gauge fields originate from massless open strings, their coupling scales as $`g_5^2\sim g_s\sqrt{\alpha ^{\prime }}`$, with $`g_s=\mathrm{exp}(\varphi _{\mathrm{\infty }})`$ denoting the string coupling constant. Therefore, the four-dimensional coupling at the cut-off scale $`T`$ is given by $`g^2\sim g_sT\sqrt{\alpha ^{\prime }}`$.
The weak-coupling description of instantons in this set-up is in terms of D0/D4-brane bound states. Due to the Wess–Zumino coupling between the type IIA Ramond–Ramond (RR) one-form and the gauge fields on the D4-brane world-volume, $`\mathcal{L}_{\mathrm{WZ}}=C_{\mathrm{D0}}\wedge F\wedge F`$, a D0-brane ‘inside’ a D4-brane carries the instanton charge. The action of an euclidean world-line wrapped on a circle $`𝐒_\beta ^1`$ is
$$S_{\mathrm{D0}}=M_{\mathrm{D0}}\beta =\frac{\beta }{\sqrt{\alpha ^{}}g_s}=\frac{8\pi ^2}{g^2}\frac{8\pi ^2N}{\lambda }.$$
(1.1)
Incidentally, this relation also fixes the numerical conventions in the definition of the four-dimensional coupling $`g`$. We have also introduced the standard notation for the large-$`N`$ ’t Hooft coupling $`\lambda g^2N`$.
The moduli of these instantons are encoded in the quantum-mechanical zero-modes of the D0–D0 and D0–D4 strings. For a standard compactification, the D0-branes (i.e. the ‘instanton particles’ of the gauge theory) describe standard instantons of $`𝒩=4`$ $`\mathrm{SYM}_4`$ (see for some generalizations). If the circle breaks supersymmetry, the instanton fermionic zero modes are lifted accordingly, acquiring masses of $`𝒪(T)`$, and one should get essentially a Yang–Mills instanton with no fermionic zero modes. Other one-loop effects would incorporate the perturbative running of the coupling constant in the standard way.
The supergravity framework for $`\mathrm{SYM}_{4+1}`$ at finite temperature is given by the black D4-brane solution . The full metric in the string frame is:<sup>1</sup><sup>1</sup>1See, for example, and references therein for a review of metrics relevant to this paper.
$$ds^2=H_4^{-\frac{1}{2}}(h\,d\tau ^2+d\vec{y}^{\,2})+H_4^{\frac{1}{2}}\left(dr^2/h+r^2d\mathrm{\Omega }_4^2\right)$$
(1.2)
where
$$H_4=1+(r_{Q4}/r)^3,\qquad h=1-(r_0/r)^3.$$
(1.3)
There are two length scales associated with this metric: the Schwarzschild radius, $`r_0`$, related to the Hawking temperature $`T`$ by $`T^{-1}=\beta =(4\pi /3)r_0[H_4(r_0)]^{1/2}`$, and the charge radius $`r_{Q4}`$, given by
$$r_{Q4}^3=-\frac{1}{2}r_0^3+\sqrt{\frac{1}{4}r_0^6+\alpha ^{\prime \,3}(\pi g_sN)^2}.$$
(1.4)
In the Maldacena or gauge-theory limit, one scales $`\alpha ^{\prime }\to 0`$ with $`r/\alpha ^{\prime }=u`$ and $`r_0/\alpha ^{\prime }=u_0`$ fixed. The new coordinate $`u`$ has dimensions of energy and the scaling properties of a Higgs expectation value. In this limit, only the combination
$$\alpha ^{\prime \,2}H_4\to \frac{\pi g_sN\sqrt{\alpha ^{\prime }}}{u^3}=\frac{\lambda \beta }{8\pi u^3}$$
(1.5)
is relevant. In the supergravity picture, the D4-branes have disappeared in favour of the ‘throat geometry’ $`𝐗_{\mathrm{bb}}`$ (1.2), i.e. we have no open strings and the description is fully gauge invariant. The black-brane manifold $`𝐗_{\mathrm{bb}}`$, with topology $`𝐑^2\times 𝐑^4\times 𝐒^4`$, has a boundary at $`u=\mathrm{\infty }`$ of topology $`𝐒^1\times 𝐑^4`$, which is interpreted as the $`\mathrm{SYM}_5`$ space-time (the $`(\tau ,\vec{y})`$ space). The physical interpretation is that asymptotic boundary conditions for the supergravity fields at $`u=\mathrm{\infty }`$ represent coupling constants of microscopic operators in the gauge theory .
The same boundary conditions are satisfied by the extremal D4-brane metric with thermal boundary conditions. This is the ‘vacuum’ manifold, denoted $`𝐗_{\mathrm{vac}}`$, with topology $`𝐒^1\times 𝐑^5\times 𝐒^4`$, obtained from (1.2) by setting $`u_0=0`$, with fixed $`\beta `$. However, one can show that $`𝐗_{\mathrm{vac}}`$ is suppressed by a relative factor of $`𝒪(e^{-N^2})`$ with respect to $`𝐗_{\mathrm{bb}}`$ in the large-$`N`$ limit. In other words, the $`𝒪(N^2)`$ actions satisfy
$$I(𝐗_{\mathrm{bb}})-I(𝐗_{\mathrm{vac}})=-KN^2\lambda VT^4<0$$
(1.6)
for any $`T>0`$, with $`K`$ a positive constant, i.e. there is no Hawking-Page transition .
Unlike the case of $`𝒩=4`$ $`\mathrm{SYM}_4`$, the dilaton is not constant in the supergravity description. It becomes strongly coupled at radial coordinates of order $`u\sim 𝒪(N^{4/3}/\beta \lambda )`$, where one has $`e^\varphi =g_s(H_4)^{-1/4}=𝒪(1)`$. Beyond this point, one should use a dual picture in terms of a wrapped M5-brane in M-theory, i.e. a quotient of $`\mathrm{𝐀𝐝𝐒}_7\times 𝐒^4`$. For the purposes of the discussions in this paper, we are studying the theory at fixed energy scales of $`𝒪(1)`$ in ’t Hooft’s large-$`N`$ limit, with fixed $`\lambda =g^2N`$. Therefore, such non-perturbative thresholds effectively decouple in the regime of interest, and we shall formally extend the D4-brane manifold all the way up to $`u=\mathrm{\infty }`$.
From a physical point of view, $`\alpha ^{\prime }`$-corrections to the classical geometry pose a more serious limitation to the supergravity description. The curvature at the horizon scales as $`(u_0g_sN\sqrt{\alpha ^{\prime }})^{-1/2}\sim \lambda ^{-1}`$ in string units, so that the supergravity description is accurate only for large bare ’t Hooft coupling $`\lambda \gg 1`$. On the other hand, the glueball mass gap in this theory is of order $`M_{\mathrm{glue}}\sim \beta ^{-1}=T`$, while inspection of the Wilson loop expectation value gives a four-dimensional string tension of order $`\sigma \sim \lambda T^2`$, i.e. hierarchically larger in the supergravity regime. This lack of scaling indicates that the supergravity picture is far from the ‘continuum limit’ of the $`\mathrm{YM}_4`$ theory, a suspicion already clear from the existence of non-QCD states of Kaluza–Klein origin at the same mass scale as the glueballs: $`M_{\mathrm{KK}}\sim T\sim M_{\mathrm{glue}}`$.
## 2 The Localized Instanton
The natural candidate for an instanton excitation in the large-$`N`$ supergravity picture is a D0-brane probe wrapped around the thermal circle. For the supersymmetric case, this is indeed the T-dual configuration to the D-instantons in $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$ discussed in . Wrapped D0-branes have the correct quantum numbers to be interpreted as Yang–Mills instantons in the effective four-dimensional theory. The topological charge is interpreted as the wrapping number on the thermal circle $`𝐒_\beta ^1`$. In the large-$`N`$ limit, it is justified to treat the D0-brane as a probe, neglecting its back-reaction on the supergravity fields, since its gravitational radius is sub-stringy, $`(r_{\mathrm{probe}})^7\sim \alpha ^{\prime \,7/2}e^\varphi \sim 𝒪(1/N)`$; however, for instanton numbers of $`𝒪(N)`$ with identical moduli we may need a supergravity description of the instanton dynamics in terms of the near-horizon geometry of the D0-branes (i.e. a T-dual of the limit in , or the solution of section 3 below). We shall postpone these interesting complications by working in the single-instanton sector, and with instanton moduli of $`𝒪(1)`$ in the ’t Hooft large-$`N`$ limit.
One important ingredient of the instanton/D0-brane mapping is a physical interpretation, in gauge-theory language, of the radial position of the wrapped D0 world-line. For this purpose, we use the generalized UV/IR connection as discussed in . According to this, a radial coordinate $`u`$ is associated to a length scale in the $`\mathrm{SYM}_{p+1}`$ gauge theory of order $`\mathrm{\ell }\sim \sqrt{g_{p+1}^2N/u^{5-p}}`$. Thus, in our case, the size parameter $`\rho `$ of the instanton satisfies:
$$\rho ^2=\frac{\beta \lambda }{u}.$$
(2.1)
We will assume this relation as the definition of the instanton’s size modulus.
We will now discuss both manifolds with $`𝐒^1\times 𝐑^4`$ boundary conditions at $`u=\mathrm{\infty }`$, in spite of the fact that eq. (1.6) ensures the dynamical dominance of $`𝐗_{\mathrm{bb}}`$. The reason for considering the ‘vacuum’ manifold as well is, first, that we find interesting differences between the two manifolds and, second, that $`𝐗_{\mathrm{vac}}`$ is the only relevant manifold for supersymmetric compactifications, through which we can make contact with the $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$ case.
### 2.1 Vacuum Manifold
The $`𝐒^1`$ factor on the boundary extends to the bulk of $`𝐗_{\mathrm{vac}}`$, becoming singular as $`u\to 0`$, since $`\mathrm{Vol}(𝐒_u^1)=\beta \sqrt{g_{\tau \tau }}=\beta (H_4)^{-1/4}\to 0`$. However, the action of the instanton is constant, due to the dilaton dependence in the Dirac–Born–Infeld action:
$$S_{\mathrm{D0}}=M_{\mathrm{D0}}\int _0^\beta d\tau \,(g_se^{-\varphi })\sqrt{g_{\tau \tau }}=\frac{8\pi ^2}{g^2}.$$
(2.2)
Thus, the size $`\rho `$ is an exact modulus in the supergravity description on $`𝐗_{\mathrm{vac}}`$. On general grounds, the path integral of a D-particle in a curved background $`𝐗`$ contains an ultralocal term in the measure of the form $`𝒟X^\mu [\mathrm{det}(g_{\mu \nu })]^{1/2}`$, to ensure invariance under target-space diffeomorphisms. In the description of instantons on the manifold $`𝐗`$, we concentrate on the zero-mode part which then leads to a single-instanton measure
$$d\mu (𝐗)=C_{N,\lambda }\,d\eta \int _{𝐒_\beta ^1\times 𝐒_\mathrm{\Omega }^4}(\alpha ^{\prime })^{-5}\,d\mathrm{Vol}(𝐗),$$
(2.3)
where $`d\eta `$ is the measure over fermionic zero-modes (sixteen in the supersymmetric case), and $`C_{N,\lambda }`$ is a constant to be determined by matching to the perturbative measure. We have produced a measure on physical space-time and scale-parameter space by integrating over $`𝐒_\beta ^1\times 𝐒_\mathrm{\Omega }^4`$. The result for $`𝐗_{\mathrm{D4}}=𝐗_{\mathrm{vac}}`$, using the UV/IR connection (2.1), is:
$$d\mu (𝐗_{\mathrm{D4}})=C_{N,\lambda }\,\lambda ^5(\rho T)^{-6}\rho ^{-5}\,d\rho \,d\vec{y}\,d\eta .$$
(2.4)
We see that the presence of the dimensionful scale $`T`$ explicitly violates the conformal invariance of the measure, which we must take as a concrete prediction of the supergravity approach. As such, it is valid at large $`N`$ and $`\lambda `$.
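For completeness, a sketch of the power counting behind (2.4): up to numerical constants and the $`𝐒_\beta ^1\times 𝐒^4`$ volumes, the vacuum metric gives

$$(\alpha ^{\prime })^{-5}\int _{𝐒_\beta ^1\times 𝐒_\mathrm{\Omega }^4}d\mathrm{Vol}(𝐗_{\mathrm{vac}})\propto \beta \,u^4\,du\,d\vec{y},\qquad \beta \,u^4\,du\propto \lambda ^5\beta ^6\rho ^{-11}\,d\rho =\lambda ^5(\rho T)^{-6}\rho ^{-5}\,d\rho ,$$

where the second step substitutes $`u=\beta \lambda /\rho ^2`$ from (2.1), reproducing the exponents in (2.4).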
The singularity of $`𝐗_{\mathrm{vac}}`$ as $`u\to 0`$ is not relevant. At $`u\sim u_s=T\lambda ^{1/3}`$ the size of the world-line is of $`𝒪(1)`$ in string units. So, for $`u\lesssim u_s`$ we must use the T-dual metric of $`N`$ D3-branes smeared over the dual circle of coordinate length $`\stackrel{~}{\beta }=4\pi ^2\alpha ^{\prime }/\beta `$:
$$ds^2(𝐗_{\stackrel{~}{\mathrm{D3}}})=H_4^{-\frac{1}{2}}d\vec{y}^{\,2}+H_4^{\frac{1}{2}}\left(d\stackrel{~}{\tau }^2+dr^2+r^2d\mathrm{\Omega }_4^2\right),$$
(2.5)
with $`\stackrel{~}{\tau }\equiv \stackrel{~}{\tau }+\stackrel{~}{\beta }`$. In the T-dual metric,<sup>2</sup><sup>2</sup>2Notice that the UV/IR connection (2.1) remains unchanged by T-duality, as the new metric only differs by $`g_{\tau \tau }\to 1/g_{\tau \tau }`$, with the $`u,\vec{y}`$ components of the metric unaffected. the size of $`𝐒_u^1`$ grows with decreasing $`u`$.
In fact, the metric (2.5) is unstable if any small amount of energy is added. It collapses to the array solution of localized D3-branes:
$$ds^2(𝐗_{\mathrm{D3}})=H_3^{-\frac{1}{2}}d\vec{y}^{\,2}+H_3^{\frac{1}{2}}\left(d\stackrel{~}{\tau }^2+dr^2+r^2d\mathrm{\Omega }_4^2\right),\quad \mathrm{with}\quad H_3=1+\underset{n}{\sum }\frac{4\pi \stackrel{~}{g}_sN\alpha ^{\prime \,2}}{|r^2+(\stackrel{~}{\tau }-n\stackrel{~}{\beta })^2|^2}.$$
By the T-duality rules and our coupling conventions, $`4\pi \stackrel{~}{g}_s=8\pi ^2g_s\sqrt{\alpha ^{\prime }}/\beta =g^2`$. In the regime $`r\gg \stackrel{~}{\beta }`$ we can approximate the discrete sum over images by a continuous integral, and we recover the smeared metric (2.5) as an approximation. On the other hand, for $`r\ll \stackrel{~}{\beta }`$ we can instead neglect the images and approximate the sum by the $`n=0`$ term. The result is of course the standard $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$ metric corresponding to D3-branes at strong coupling. Indeed, the UV/IR relation for D-instantons in D3-branes , $`\rho =\sqrt{\lambda }/u`$, matches the five-dimensional one (2.1) precisely at $`u=u_{\mathrm{loc}}=1/\beta `$, which is equivalent to $`r=r_{\mathrm{loc}}=\stackrel{~}{\beta }/4\pi ^2`$.
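Explicitly, equating the two UV/IR relations gives a one-line check of the matching point:

$$\frac{\beta \lambda }{u}=\frac{\lambda }{u^2}\quad \Longrightarrow \quad u_{\mathrm{loc}}=\frac{1}{\beta },\qquad \rho (u_{\mathrm{loc}})=\sqrt{\lambda }\,\beta .$$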
The instanton measure (2.4) matches across these finite-size transitions to the corresponding measures for the new manifolds $`𝐗_{\stackrel{~}{\mathrm{D3}}}`$ and $`𝐗_{\mathrm{D3}}`$, because the definition (2.3) applies in general and the volume form matches across the transitions at $`u=u_s`$ and $`u=u_{\mathrm{loc}}`$. The resulting measures are (both up to $`𝒪(1)`$ numerical factors):
$`d\mu (𝐗_{\stackrel{~}{\mathrm{D3}}})`$ $`=`$ $`C_{N,\lambda }\lambda ^4(\rho T)^{-3}\rho ^{-5}d\rho d\vec{y}d\eta ,`$
$`d\mu (𝐗_{\mathrm{D3}})`$ $`=`$ $`C_{N,\lambda }\lambda ^{5/2}\rho ^{-5}d\rho d\vec{y}d\eta .`$ (2.6)
This last measure is conformally invariant, and coincides with that of refs. for D-instantons in $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$.
Finally, as pointed out in the introduction, the validity of the supergravity picture is limited by the requirement that we can control the $`\alpha ^{\prime }`$-corrections. The curvature of the D4-brane metrics is of $`𝒪(1)`$ in string units at the ‘correspondence line’ $`u_c\sim (\beta \lambda )^{-1}`$ . For the D3-brane metrics, the condition is simply $`\lambda \gg 1`$. This implies that, for $`\rho <\beta `$, we have a correspondence line at instanton sizes $`\rho =\rho _c=\beta \lambda `$. For $`\rho >\beta `$ the correspondence line is independent of $`\rho `$ and lies at $`\lambda \sim 1`$. Below the correspondence line the system is better described in Yang–Mills perturbation theory, although we lose analytic control over the $`1/N`$ expansion.
The geometrical D-instanton measure in $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$ has been matched to the perturbative instanton measure of the $`𝒩=4`$ $`\mathrm{SYM}_4`$ theory in great detail, including multi-instanton terms . In particular, this allows us to fix the coupling-dependent constant as $`C_{N,\lambda }=N^{7/2}\lambda ^{3/2}`$. This is rather remarkable, since the geometrical measure holds at large $`\lambda `$, whereas the perturbative measure is derived in Yang–Mills perturbation theory, valid for $`\lambda \ll 1`$. This robustness of the instanton measure might be due, in this case, to the high degree of supersymmetry and/or conformal symmetry. For instance, the analogous matching between the D4-brane supergravity measure (2.4) and the perturbative description of the ‘instanton particles’ of $`\mathrm{SYM}_5`$, through the correspondence line $`\rho =\rho _c=\beta \lambda `$, fails by one power of $`\lambda `$. This means that the very precise matching of for $`\mathrm{𝐀𝐝𝐒}_5`$ D-instantons is probably a consequence of conformal invariance.
This discussion may be summarized in Fig. 1, where the finite-size transitions, as well as the correspondence lines are depicted as a function of the ’t Hooft coupling and the instanton size.
### 2.2 Black-brane Manifold
Although the wrapping charge of D0-branes is well defined in $`𝐗_{\mathrm{vac}}`$, the thermal circle being non-contractible, this is not the case for $`𝐗_{\mathrm{bb}}`$, whose $`(\tau ,u)`$ subspace has $`𝐑^2`$ topology. Therefore, the thermal circle at fixed radial coordinate, $`𝐒_u^1`$, is contractible, being the boundary of a disc: $`𝐒_u^1=\partial 𝐃_u`$, i.e. we can ‘unwrap’ the D0-brane instanton through the horizon. Thus, while exact instanton charges can be identified in the supersymmetric case, no quantized topological charge seems to survive in the non-supersymmetric case, due to the dynamical dominance of $`𝐗_{\mathrm{bb}}`$ in the large-$`N`$ limit (1.6).
Still, we can talk of approximate or ‘constrained’ instantons, provided the probe D0-brane world-line wraps far away from the horizon. In this case the un-wrapping costs a large action. In order to estimate the action as a function of $`u`$ (or the instanton size $`\rho `$), we calculate the Dirac–Born–Infeld action of the probe D0-brane:
$$S_{\mathrm{D0}}=M_{\mathrm{D0}}\int _0^\beta d\tau \,(g_se^{-\varphi })\sqrt{g_{\tau \tau }}=\frac{8\pi ^2}{g^2}\sqrt{h}=\frac{8\pi ^2}{g^2}\sqrt{1-(\rho /\beta )^6},$$
(2.7)
where we have used the UV/IR relation (2.1) in the last step. Thus, $`\rho `$ is not an exact modulus, as instantons tend to grow. For an instanton of the order of the glueball’s Compton wave-length, $`\rho \sim \beta `$, the action is comparable to the vacuum action, and the instanton has disappeared (un-wrapped).
In the far ultraviolet, we can use the approximate instantons of very small size $`\rho \ll \beta `$ to measure a ‘running effective theta angle’, by requiring that the approximate instanton be weighed by a phase $`\mathrm{exp}(i\theta _{\mathrm{eff}})`$, with $`\theta _{\mathrm{eff}}(u=\mathrm{\infty })=\theta `$, the bare theta angle of the four-dimensional $`\mathrm{YM}_4`$ theory. Following Witten , a bare theta angle is associated to a RR two-form
$$f_{\mathrm{D0}}=dC_{\mathrm{D0}}=\overline{\theta }\frac{3}{\pi \zeta ^7}d\zeta \wedge d\psi ,$$
(2.8)
where, in the notation of , $`\zeta ^2=u/u_0`$ and $`\psi =2\pi \tau /\beta `$. The bare theta angle, measured at $`u=\mathrm{\infty }`$, is $`\theta =\overline{\theta }(\mathrm{mod}\mathrm{\hspace{0.33em}2}\pi )`$, due to the multiplicity of meta-stable vacua described in , i.e. $`f_{\mathrm{D0}}\propto \overline{\theta }=(\theta +2\pi n)`$ in the $`n`$-th vacuum (see also for another geometric approach to this question). In what follows, we shall obviate this technicality by working in the $`n=0`$ vacuum, so that $`\theta =\overline{\theta }`$. The effective theta angle at throat radius $`u`$ is
$$\theta _{\mathrm{eff}}(u)=\int _{𝐒_u^1}C_{\mathrm{D0}}=\int _{𝐃_u}f_{\mathrm{D0}}=\theta \left(1-6\int _{\zeta (u)}^{\mathrm{\infty }}d\zeta \,\zeta ^{-7}\right)=\theta h(u)=\theta \left(1-(\rho /\beta )^6\right).$$
(2.9)
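The middle step in (2.9) is the elementary flux integral (using $`\zeta ^2=u/u_0`$):

$$6\int _{\zeta (u)}^{\mathrm{\infty }}\frac{d\zeta }{\zeta ^7}=\zeta (u)^{-6}=\left(\frac{u_0}{u}\right)^3,$$

so that $`\theta _{\mathrm{eff}}(u)=\theta \left(1-(u_0/u)^3\right)=\theta h(u)`$, with the last form in (2.9) following from the UV/IR identification (2.1).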
The ‘correspondence line’ $`u_c\sim (\beta \lambda )^{-1}`$ , controlling $`\alpha ^{\prime }`$-corrections, is also defined in $`𝐗_{\mathrm{bb}}`$. In terms of instanton sizes, for $`\rho <\beta `$, we have a correspondence line at $`\rho _c=\beta \lambda `$. Since no instantons survive for $`\rho >\beta `$ in the supergravity picture, the finite-size effects related to T-duality in $`𝐒_\beta ^1`$ and localization effects are absent for $`𝐗_{\mathrm{bb}}`$, i.e. there is no phase of D-instantons in $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$. The situation is summarized in Fig. 2.
## 3 The Smeared Instanton Solution
In the previous section we have seen that probe D0-branes wrapping the thermal circle of a black D4-brane in the far ultraviolet are dual to (unstable) small-size instantons. Vice-versa, there exists a different supergravity solution dual to a field-theory configuration which can be interpreted as containing a condensate of large instantons.
Indeed, the smeared, black D0/D4-brane solution is interpreted (as in ref. for the supersymmetric T-dual case) as dual to a Yang-Mills theory with a non-vanishing self-dual background. The self-duality of the background implies that it can be related to instantons, and the smeariness of the D0-branes can be interpreted, very heuristically, as indicating that the instantons are ‘smooth’ and therefore ‘large’.
In fact, in real time, the D0-branes are smeared over the D4-branes as soon as they ‘fall behind’ the horizon, due to the no-hair property (this corresponds to $`u=u_0`$ or, using (2.1), to $`\rho =\beta `$). This statement has only heuristic value because, in the euclidean-time configurations we are considering, space-time effectively ends at $`u=u_0`$. Still, the effects of the source D0-branes can be detected in the long-range fields such as the metric, dilaton, and RR fields. In this section, we pursue this view of the smeared D0-branes not as probes, as in the previous section, but as background data.
The string-frame metric outside a system of $`k`$ D0-branes smeared over the volume of $`N`$ D4-branes differs from that in (1.2) by one more harmonic function $`H_0`$:
$$ds^2=H_0^{-\frac{1}{2}}H_4^{-\frac{1}{2}}h\,d\tau ^2+H_0^{\frac{1}{2}}H_4^{-\frac{1}{2}}d\vec{y}^{\,2}+H_0^{\frac{1}{2}}H_4^{\frac{1}{2}}\left(dr^2/h+r^2d\mathrm{\Omega }_4^2\right).$$
(3.1)
In the gauge-theory limit, this function is given by
$$H_0(u)=1+(u_{Q0}/u)^3=1-\frac{1}{2}(u_0/u)^3+\sqrt{\frac{1}{4}(u_0/u)^6+(u_k/u)^6}.$$
(3.2)
It depends on a new energy scale $`u_k`$, related to the number density of D0-branes $`k/V\kappa `$ by
$$u_k^3=\kappa \frac{2\pi ^3\beta \lambda }{N}.$$
(3.3)
The new scale is small $`(u_k^3=𝒪(1/N))`$ in the large-$`N`$ limit with fixed instanton charge density. In this paper, we are interested in the physics at energies of $`𝒪(1)`$ in the large-$`N`$ limit, so that $`u_k\ll u_0`$ and $`H_0`$ may be approximated by <sup>3</sup><sup>3</sup>3At very low energies, $`u_0\ll u_k`$, the smeared solution is $`𝐗_6\times 𝐓^4`$, with $`𝐗_6`$ conformal to $`\mathrm{𝐀𝐝𝐒}_2\times 𝐒^4`$ in the sense of . It is presumably related to quantum mechanics on the large-$`k`$ instanton moduli space . $`H_0=1+(u_k^2/u_0u)^3+𝒪(1/N^4).`$ The dilaton profile also receives $`\kappa `$-dependent corrections, $`g_se^{-\varphi }=(H_4/H_0^3)^{1/4}`$, as well as the Hawking temperature:
$$T^{-1}=\beta =\frac{4\pi }{3}r_0\sqrt{H_0(r_0)H_4(r_0)}=\left(\frac{2\pi \lambda \beta }{9u_0}H_0(u_0)\right)^{1/2}.$$
(3.4)
This yields an equation for $`u_0`$ that can be solved iteratively in powers of $`(u_k/\lambda T)^6`$.
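For illustration, the iteration is trivial to carry out numerically (a sketch; the values of $`\lambda `$, $`T`$ and $`u_k`$ below are arbitrary):

```python
import numpy as np

# Solve (3.4) for u0 by fixed-point iteration:
# u0 = (2*pi*lam*T/9) * H0(u0), with H0 from (3.2).
lam, T, u_k = 50.0, 1.0, 0.5          # illustrative values

def H0(u0):
    # H0(u0) = 1 - 1/2 + sqrt(1/4 + (u_k/u0)^6)
    return 0.5 + np.sqrt(0.25 + (u_k / u0) ** 6)

u0 = 2 * np.pi * lam * T / 9          # zeroth order (H0 = 1)
for _ in range(10):                   # corrections are O((u_k/lam*T)^6)
    u0 = 2 * np.pi * lam * T / 9 * H0(u0)

print(u0, H0(u0) - 1.0)
```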
The relation between the smeared D0-brane number density $`\kappa `$ and the running theta angle is obtained from the supergravity solution for the RR two form:
$$f_{\mathrm{D0}}=\frac{c\kappa }{u^4}\frac{1}{(H_0)^2}\,du\wedge d\tau ,$$
(3.5)
with $`c`$ a known numerical constant. As before, a wrapped D0-brane probe can be used to measure an effective theta angle whose value at $`u=\mathrm{}`$ defines the bare theta angle. Plugging (3.5) into (2.9) we obtain:
$$\theta _{\mathrm{eff}}(u)=\int _{𝐃_u}f_{\mathrm{D0}}=\theta \,\frac{u^3-u_0^3}{u^3+u_{Q0}^3}=\theta \,\frac{h(u)}{H_0(u)},\quad \mathrm{where}\quad \theta =\frac{\beta c\kappa }{3}\,\frac{1}{u_0^3+u_{Q0}^3}.$$
(3.6)
The two-form solution (2.8) found by Witten corresponds to the $`u_0\gg u_k`$ regime of (3.5). This provides a relation between the number density $`\kappa `$ of smeared D0-branes and the bare theta angle, valid in the large-$`N`$ limit with fixed $`\kappa `$:
$$\theta =\frac{\beta c\kappa }{3u_0^3H_0(u_0)}=\kappa \frac{9c}{\lambda ^3T^4}\left(\frac{3}{2\pi }\right)^3+𝒪(1/N^2),$$
(3.7)
where we have used $`u_0=2\pi \lambda T/9+𝒪(1/N^2)`$, from (3.4). In interpreting this relation, it is important to remember that we are working in the $`n=0`$ vacuum, out of the $`𝒪(N)`$ metastable vacua mentioned in section 2.2; i.e. the actual values of the parameters are such that the r.h.s. of (3.7) is smaller than $`2\pi `$.
Equation (3.7) is a very suggestive relation, holographic in nature, in which the bare theta angle is obtained in a ‘mean-field’ picture from the parameters of a kind of ‘instanton condensate’. We should stress that (3.7) is only valid in the non-supersymmetric case. The extremal (supersymmetric) solution has a non-contractible $`𝐒^1`$ so that we can add an arbitrary harmonic piece to $`C_{\mathrm{D0}}`$, thereby changing the asymptotic value of $`\theta `$ independently of $`k`$ and $`\beta `$ (i.e. we cannot use Stokes’s theorem as we do in (2.9) and (3.6)).
An interesting application of this connection is the computation of topological charge correlations to leading order in the large-$`N`$, large-$`\lambda `$ limit. In view of (3.7), this can be done by studying the $`\kappa `$-dependence of the vacuum energy of the $`\mathrm{YM}_4`$ theory (or, equivalently, the thermal free energy of the $`\mathrm{SYM}_5`$ theory). For example, the action can be calculated as $`I=\beta E_{\mathrm{YM}}-S_{\mathrm{BH}}`$, with $`S_{\mathrm{BH}}=A_{\mathrm{horizon}}/4G_{10}`$ the black-brane entropy and $`E_{\mathrm{YM}}=M_{\mathrm{ADM}}-NVT_{\mathrm{D4}}`$ the ADM mass above extremality. One obtains
$$I=\frac{3\mathrm{Vol}(𝐒^4)\beta V}{16\pi G_{10}}r_0^3\left(H_0(r_0)-\frac{7}{6}\right)=N^2\frac{4VT}{\pi \lambda ^2}u_0^3\left(H_0(u_0)-\frac{7}{6}\right).$$
(3.8)
Solving $`\theta `$ from (3.7) and using the relation
$$\left(\frac{u_k}{u_0}\right)^3=\frac{6\pi ^3}{c}H_0(u_0)\frac{\lambda \theta }{N},$$
(3.9)
combined with (3.2) and (3.4), we learn that the functional form of the $`n=0`$ vacuum energy is given by
$$I(\theta )_{n=0}=N^2VT^4\lambda f(\lambda \theta /N),$$
(3.10)
with $`f(x)`$ an even function (as expected from considerations of CP symmetry), whose Taylor expansion around $`\theta =0`$ may be determined by solving (3.4) iteratively. These selection rules determine the large-$`N`$ and large $`\lambda `$ scaling of the topological charge correlators at $`\theta =0`$:
$$\langle (Q_{\mathrm{top}})^{2m}\rangle _{\mathrm{connected}}^{\theta =0}=\left(\frac{d}{d\theta }\right)_{\theta =0}^{2m}I(\theta )\sim VT^4\frac{\lambda ^{2m+1}}{N^{2m-2}}.$$
(3.11)
For the standard topological susceptibility, $`m=1`$, the scaling agrees with ref. .
## 4 Concluding Remarks
Within the AdS/CFT correspondence, the large-$`N`$ master field of the gauge theory is encoded in the gravitational saddle-points of the supergravity description, subject to boundary conditions.
In the model of ref. , which has a good supergravity description for large $`N`$ and large ’t Hooft coupling $`\lambda =g^2N`$, there are two ‘master fields’, or generalized large-$`N`$ saddle-points, given by the two manifolds $`𝐗_{\mathrm{vac}}`$ and $`𝐗_{\mathrm{bb}}`$, with $`𝐒^1\times 𝐑^4`$ boundary. We find that $`𝐗_{\mathrm{vac}}`$ supports instantons in the form of wrapped D0-branes, and leads to exponentially suppressed theta-angle dependence, very much like in the $`\mathrm{𝐀𝐝𝐒}_5\times 𝐒^5`$ case, to which it is dual through a set of T-duality and localization transitions that we discuss in some detail, including the matching of the single-instanton measure.
However, $`𝐗_{\mathrm{vac}}`$ is only the dominant master field in the supersymmetric case. The large-$`N`$ dynamics in the non-supersymmetric case is dominated by $`𝐗_{\mathrm{bb}}`$, which does not support finite-action topological excitations with the instanton charge. Therefore, the dominant master field shows perturbative (in $`1/N`$) theta-angle dependence, but has no ‘instanton topology’, very much like in the two-dimensional toy models of refs. . Instead, we can identify approximate (constrained) instantons of size $`\rho \ll \beta `$, merging with the vacuum at sizes of the order of the glueball’s Compton wave-length $`\rho \sim \beta `$, which for this model coincides with the Kaluza–Klein threshold.
The approximate equivalence of $`𝐗_{\mathrm{vac}}`$ and $`𝐗_{\mathrm{bb}}`$ in the ultraviolet regime $`u\mathrm{}`$, poses the question of whether the approximate small instantons of $`𝐗_{\mathrm{bb}}`$ are really artifacts of the regularization of the Yang–Mills theory by a hot five-dimensional supersymmetric theory. Unfortunately, this question cannot be settled with present techniques, since $`M_{\mathrm{glue}}\sim M_{\mathrm{KK}}`$ in the supergravity approximation, $`\lambda \gg 1`$, and we lack a regime in which we could follow the instantons as genuine four-dimensional configurations. It would be very interesting to see if the non-supersymmetric gravity duals based on Type 0 D-branes provide a better vantage point from which to study this question.
Heuristically, according to the UV/IR relation, an instanton of size $`\rho \gg \beta `$ would be associated to a D0-brane ‘inside’ the horizon of the black D4-brane. Because of the no-hair properties, such a configuration would have the D0-charge completely de-localized over the horizon (see for a recent discussion in the extremal case). Therefore, such configurations should be interpreted as homogeneous self-dual backgrounds in the gauge theory, and the supergravity description involves the ‘smeared’ D0/D4 solution. Although this picture cannot be held literally in the euclidean solutions, which lack an ‘interior region’ behind the horizon, we can still identify the RR two-form generated by the D0-branes ‘dissolved’ in the D4-brane horizon. This RR flux is in turn responsible for the generation of a theta angle, via the AdS/CFT rules of . Therefore, we obtain a holographic relation between the theta angle and the smeared instanton charge. Although the general relation between background fields and theta angle is not new (see for explicit two-dimensional examples), we find it interesting that in our case the background field is explicitly associated to an instanton condensate, with quantized topological charge (equal to the number $`k`$ of smeared D0-branes). This is reminiscent of the instanton liquid models, where the instanton density is fixed self-consistently (see for instance ).
## Acknowledgements
We would like to thank Margarita García Pérez, Yaron Oz and Kostas Skenderis for useful discussions. This work is partially supported by the European Commission TMR programme ERBFMRX-CT96-0045 in which J.L.F.B. is associated to the University of Utrecht and A.P. is associated to the Physics Department, University of Milano. A.P. would like to thank CERN for its hospitality while part of this work was carried out.
# Vortex stability of interacting Bose-Einstein condensates confined in anisotropic harmonic traps
## Abstract
Vortex states of weakly-interacting Bose-Einstein condensates confined in three-dimensional rotating harmonic traps are investigated numerically at zero temperature. The ground state in the rotating frame is obtained by propagating the Gross-Pitaevskii (GP) equation for the condensate in imaginary time. The total energies of states with and without a vortex are compared, yielding critical rotation frequencies that depend on the anisotropy of the trap and the number of atoms. Vortices displaced from the center of nonrotating traps are found to have long lifetimes for sufficiently large numbers of atoms. The relationship between vortex stability and bound core states is explored.
The recent experimental achievement of Bose-Einstein condensation (BEC) in trapped ultracold atomic vapors has provided a unique opportunity to investigate the superfluid properties of weakly-interacting dilute Bose gases. Mean-field theories, which are usually based on the Bogoliubov approximation or its finite-temperature extensions , yield an excellent description of both the static and dynamic properties of the confined gases . These theories also predict that a continuum Bose condensate with repulsive interactions should be a superfluid, which can exhibit second sound, quantized vortices, and persistent currents. While there exists some evidence for second sound in trapped condensates , vortices in these systems have never been observed despite considerable experimental effort . Numerous techniques for the generation of vortices have been suggested, including stirring the condensate with a blue detuned laser , adiabatic population transfer via a Raman transition into an angular momentum state , spontaneous vortex formation during evaporative cooling , and rotation of anisotropic traps .
Several studies of vortex stability recently have been carried out . The free energy of a singly-quantized vortex attains a local maximum when the vortex is centered in a stationary trap . In the presence of dissipation, such a vortex would migrate to the edge of the trap and eventually disappear . It has been suggested that this mechanical instability may be related to a bound state in the vortex core, corresponding to a negative-energy ‘anomalous’ dipole mode found numerically in the vortex state at low densities . It is possible to stabilize singly-quantized vortices by rotating the trap at an angular frequency $`\mathrm{\Omega }`$. When $`\mathrm{\Omega }`$ is larger than the ‘metastability’ frequency $`\mathrm{\Omega }_0`$, infinitesimal displacements of the vortex no longer decrease the system’s free energy, and the vortex becomes locally stable; above the critical frequency $`\mathrm{\Omega }_c\ge \mathrm{\Omega }_0`$, the vortex state becomes the ground state of the condensate .
While rotation of the confining potential has been proposed as a method to both nucleate and stabilize vortices in trapped Bose gases, the relevant critical angular frequencies are not presently known. In the present work, numerical results are obtained for Bose-condensed atoms confined in three-dimensional rotating anisotropic traps at zero temperature. The critical frequency $`\mathrm{\Omega }_c`$ is found to increase with the degree of anisotropy in the plane of rotation. In order to nucleate vortices, however, the trapped gas must be rotated either more rapidly than $`\mathrm{\Omega }_c`$, or at temperatures above the BEC transition . While vortices in nonrotating traps are found to be always unstable, for large numbers of atoms their lifetimes can be very long compared with a trap period.
The trapped Bose condensate, comprised of $`N_0`$ repulsively-interacting Rb atoms with mass $`M=1.44\times 10^{-25}`$ kg and scattering length $`a\approx 100a_0=5.29`$ nm , obeys the time-dependent Gross-Pitaevskii (GP) equation in the rotating reference frame :
$$i\partial _\tau \psi (𝐫,\tau )=\left[-\frac{1}{2}\nabla ^2+V_\mathrm{t}+V_\mathrm{H}-\mathrm{\Omega }L_z\right]\psi (𝐫,\tau ),$$
(1)
where the trap potential is $`V_\mathrm{t}=\frac{1}{2}\left(x^2+\alpha ^2y^2+\beta ^2z^2\right)`$, the Hartree term is $`V_\mathrm{H}=4\pi \eta |\psi |^2`$, and the condensate is rotated about the $`z`$-axis at the trap center. The effects of gravity (along $`\widehat{z}`$) are presumed negligible. The constant rotation at frequency $`\mathrm{\Omega }`$ induces angular momentum per particle given by the expectation value of $`L_z=i\left(y\partial _x-x\partial _y\right)`$. The trapping frequencies are $`(\omega _x,\omega _y,\omega _z)=\omega _x(1,\alpha ,\beta )`$ with $`\omega _x=2\pi \times 132`$ rad/s, $`\alpha \ge 1`$, and $`\beta =\sqrt{8}`$ . Choosing the condensate to be normalized to unity yields the scaling parameter $`\eta =N_0a/d_x`$. Note that energy, length, and time are given throughout in scaled harmonic oscillator units $`\hbar \omega _x`$, $`d_x=\sqrt{\hbar /M\omega _x}\approx 0.94\mu `$m, and $`\mathrm{T}=2\pi /\omega _x\approx 7.6`$ ms, respectively.
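As a quick numerical cross-check of the quoted scales, the following sketch uses only the constants given above (the value of $`\hbar `$ is assumed):

```python
# Cross-check of the scaled units quoted above (hbar value assumed).
import math

hbar = 1.0546e-34                       # J s
M = 1.44e-25                            # kg, Rb atom mass
a = 5.29e-9                             # m, s-wave scattering length
omega_x = 2 * math.pi * 132.0           # rad/s

d_x = math.sqrt(hbar / (M * omega_x))   # harmonic-oscillator length
print(f"d_x = {d_x * 1e6:.2f} um")      # -> 0.94 um
print(f"T   = {2 * math.pi / omega_x * 1e3:.1f} ms")   # -> 7.6 ms
print(f"eta(N0=1e5) = {1e5 * a / d_x:.0f}")            # ~ 560
```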
The ground state of the GP equation is found within a discrete-variable representation (DVR) by imaginary time propagation using an adaptive stepsize Runge-Kutta integrator. A total of between $`\mathrm{40\hspace{0.17em}000}`$ and $`\mathrm{130\hspace{0.17em}000}`$ DVR points of a Gauss-Hermite quadrature are used, and all calculations are performed on a standard workstation. The stationary ground state in the rotating frame is found by setting $`\stackrel{~}{\tau }\equiv i\tau `$ and solving the diffusion equation:
$$\partial _{\stackrel{~}{\tau }}\psi (𝐫,\stackrel{~}{\tau })=-\left(H-\mu \right)\psi (𝐫,\stackrel{~}{\tau }),$$
(2)
where $`H`$ is the GP operator appearing on the right side of Eq. (1) and $`\mu `$ is the chemical potential. The condensate wavefunction is assumed to be even under inversion of $`z`$, and is initially taken to be the vortex-free Thomas-Fermi result, which is the time-independent solution of Eq. (2), neglecting the kinetic energy operator and $`L_z`$. A vortex is generated by imposing one quantum of circulation $`\kappa `$, a $`2\pi `$-winding of the phase around the $`z`$-axis, on the condensate wavefunction at $`\stackrel{~}{\tau }=0`$. At each imaginary timestep, the chemical potential $`\mu `$ is readjusted in order to preserve the norm of the wavefunction (i.e. the value of $`N_0`$). The propagation continues until the right side of Eq. (2) falls below a tolerance $`\delta \sim 10^{-10}`$ defining the error in the dimensionless chemical potential. Stationary solutions are verified by subsequently integrating in real time; any deviations from self-consistency would be made manifest by collective motion.
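The relaxation scheme is easy to illustrate in a deliberately simplified setting. The sketch below is a one-dimensional toy version (our own, not the production code): a uniform grid and an explicit Euler step stand in for the Gauss-Hermite DVR and adaptive Runge-Kutta integrator, and the wavefunction is renormalized after each step, which plays the role of readjusting $`\mu `$:

```python
# Toy 1D illustration of the imaginary-time relaxation of Eq. (2);
# grid, step size, and eta are illustrative choices only.
import numpy as np

n, L, dt, eta = 256, 16.0, 1e-4, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                        # vortex-free initial guess
psi /= np.sqrt((psi**2).sum() * dx)            # normalize to unity

def H(p):
    # kinetic + trap + Hartree terms, mirroring Eq. (1) without rotation
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx**2
    return -0.5 * lap + 0.5 * x**2 * p + 4 * np.pi * eta * p**3

for _ in range(50000):
    psi = psi - dt * H(psi)                    # Euler step of Eq. (2)
    psi /= np.sqrt((psi**2).sum() * dx)        # rescaling readjusts mu

mu = (psi * H(psi)).sum() * dx                 # chemical potential estimate
print(f"mu = {mu:.2f} (scaled units)")
```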
The solution to the GP equation for a vortex located at the center of a nonrotating anisotropic trap containing $`N_0=10^5`$ atoms is shown in Fig. 1. In general, the vortex core is found to become decreasingly anisotropic as $`N_0`$ increases. Furthermore, the condensate density preserves its overall vortex-free shape far from the origin except for a slight overall bulge in order to preserve the norm. The structure of the vortex indicates that the healing length $`\xi `$ is governed largely by the local density and is only weakly dependent on trap geometry. In the Thomas-Fermi (TF) approximation, which is valid for large $`N_0`$, the healing length scales with the TF $`\widehat{x}`$-axis radius $`R=(15\alpha \beta \eta )^{1/5}d_x`$ as $`\xi \sim (d_x/R)d_x`$. Indeed, the numerics clearly indicate that the mean vortex core radius (approximately $`d_x`$ at low densities) shrinks very slowly as the TF limit is approached.
A superfluid subjected to a torque will remain purely irrotational until the critical frequency $`\mathrm{\Omega }_c`$ is reached, at which point it becomes globally favorable for the system to contain a vortex with a single quantum of circulation $`\kappa `$. In cylindrically-symmetric systems where the Hamiltonian commutes with $`L_z`$, the circulation and angular momentum (with quantum $`m`$) are identical; in the rotating frame, the free energies of the $`m\ne 0`$ states are shifted by $`-m\mathrm{\Omega }`$, and $`\mathrm{\Omega }_c`$ is simply the difference in energy between the $`m=1`$ and $`m=0`$ states (divided by $`\mathrm{}`$). In fully anisotropic traps, however, even the $`\kappa =0`$ state is shifted, so the applied $`\mathrm{\Omega }`$ in Eq. (1) must be increased until the free energy curves cross. It is straightforward to extend the TF estimate of $`\mathrm{\Omega }_c`$ to include a small deviation from cylindrical symmetry ; neglecting the shift of the vortex-free chemical potential (valid for $`\alpha \approx 1`$), one obtains:
$$\mathrm{\Omega }_c\approx \frac{5\alpha }{2}\left(\frac{d_x^2}{R^2}\right)\mathrm{ln}\left(\frac{R}{\xi }\right)\omega _x.$$
(3)
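For reference, Eq. (3) is simple to evaluate numerically; the sketch below reuses the TF radius $`R=(15\alpha \beta \eta )^{1/5}d_x`$ and healing length $`\xi \sim (d_x/R)d_x`$ quoted above (the resulting numbers are only TF estimates):

```python
# Evaluate the TF estimate (3) for the critical frequency.
import math

A_OVER_DX = 5.29e-9 / 0.94e-6          # a / d_x from the values quoted above

def omega_c_tf(N0, alpha=1.0, beta=math.sqrt(8)):
    eta = N0 * A_OVER_DX                         # eta = N0 a / d_x
    R2 = (15 * alpha * beta * eta) ** 0.4        # (R/d_x)**2
    return 2.5 * alpha / R2 * math.log(R2)       # ln(R/xi) = ln(R^2/d_x^2)

for N0 in (1e4, 1e5, 1e6):
    print(f"N0 = {N0:.0e}:  Omega_c / omega_x ~ {omega_c_tf(N0):.3f}")
```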
Fig. 2 shows the critical frequencies for the global stability of a vortex with $`\kappa =1`$ at the trap center. For all geometries, the critical frequency drops monotonically as $`N_0`$ is increased. For a given number of atoms, the value of $`\mathrm{\Omega }_c`$ increases with in-plane anisotropy, similar to the behavior found for liquid helium in rotating elliptical containers . The energy of vortex formation must exceed that of the irrotational velocity field, which is finite for a vortex-free condensate in a rotating anisotropic trap. The TF result (3) agrees well with the numerical data in its regime of validity $`\alpha 1`$, though it tends to slightly overestimate the value of $`\mathrm{\Omega }_c`$.
While $`\mathrm{\Omega }_c`$ provides the criterion for the global stability of a vortex, it does not necessarily indicate the critical frequency for vortex nucleation. When initially vortex-free condensates are placed in anisotropic traps rotating at a frequency $`\mathrm{\Omega }<\omega _x`$, the velocity field of the stationary solution is found to be irrotational even for $`\mathrm{\Omega }\gg \mathrm{\Omega }_c`$. In a harmonic trap with smooth edges, it is not clear if there exists any suitable locus for vortex formation. The vortices are most likely to originate at the condensate surfaces normal to the axis of weak confinement, where the local critical velocity is small but the tangential superfluid velocity in the laboratory frame is largest . While these issues are beyond the scope of the present study of vortex stability, there is evidence that multiple vortices appear at higher frequencies . For smaller $`N_0`$, it would likely be easier to generate a vortex experimentally by rotating the anisotropic trap before the condensate is cooled below the BEC transition .
When $`\alpha >1`$, the angular momentum per particle $`l_\kappa `$ is a nontrivial function of $`N_0`$, $`\alpha `$, and $`\mathrm{\Omega }`$. In a nonrotating system with unit vorticity, $`l_\kappa `$ increases with $`N_0`$. In the absence of a vortex, $`l_\kappa `$ is finite for a given $`\mathrm{\Omega }`$, and increases with $`\alpha `$; the superfluid velocity $`𝐯_s`$ can be locally appreciable but still remain irrotational, $`\nabla \times 𝐯_s=0`$. At the critical frequency, the difference between $`l_1`$ and $`l_0`$ is always less than unity; for the most extreme case considered here, a system with $`N_0=10^6`$ and $`\alpha =3`$ rotating at $`\mathrm{\Omega }_c=0.14\omega _x`$, one obtains $`l_1=2.63\hbar `$ and $`l_0=1.77\hbar `$. As $`\alpha \to \infty `$, the angular momentum approaches that of a non-superfluid TF cloud $`l_0\to I_{\mathrm{sb}}\mathrm{\Omega }`$ with ‘solid-body’ moment of inertia $`I_{\mathrm{sb}}=\frac{1}{7}MR^2`$.
An anisotropic harmonic oscillator potential becomes unconfining when it is rotated at a frequency between the smallest and largest trapping frequencies. Since $`\mathrm{\Omega }_c`$ exceeds $`\omega _x`$ for sufficiently large $`\alpha `$, there exists a critical minimum number of condensed atoms $`N_c`$ able to support a vortex. The value of $`N_c`$ increases with $`\alpha `$ and is given by the intercept of the $`\mathrm{\Omega }/\omega _x=1`$ line in Fig. 2. In cylindrically-symmetric systems $`N_c=1`$, since in the rotating frame the free energies for all the $`m`$-states become degenerate at $`\mathrm{\Omega }=\omega _x`$. In the limit of extreme anisotropy $`\alpha \to \infty `$, vortices can never be stabilized.
It should be noted that states with vortices at the center of anisotropic harmonic traps are found to be stationary solutions of Eq. (2) for all values of $`N_0`$ and $`\mathrm{\Omega }\ge 0`$ considered; such configurations do not appear to decay in either real or imaginary time. A vortex at the center of a nonrotating cylindrical trap increases the system’s free energy, but is stationary because both the vorticity and angular momentum commute with the Hamiltonian; in principle, the angular momentum can be eliminated, and the free energy reduced, only if this symmetry is broken by displacing the vortex from the center. Since angular momentum is not conserved in anisotropic traps, the apparent vortex stability is likely due to the free energy maximum at the trap center . In the absence of an external pinning mechanism, any such configuration should be unstable against infinitesimal displacements.
In order to further explore the issue of vortex stability in nonrotating traps, the initial condensate phase is wound by $`2\pi `$ a small distance $`x_0\approx 0.2d_x`$ from the origin of a trap with $`\alpha =1`$. For all values of $`N_0\le 10^6`$, the condensate wavefunction rapidly (by $`\stackrel{~}{\tau }\sim `$ T) converges to a metastable solution with a vortex, where the fluctuations in $`\mu `$ become smaller than $`\delta \sim 10^{-7}`$ per timestep $`\mathrm{\Delta }\stackrel{~}{\tau }\sim 10^{-3}`$T. This wavefunction subsequently decays to the true ground state, but both the real and imaginary time required to do so are found to increase with $`N_0`$ . To an excellent approximation, the total time diverges like $`\stackrel{~}{\tau }\sim N_0^{2/5}`$T; for $`N_0\gtrsim 10^5`$, the time required ($`\gtrsim 30`$T) becomes computationally inaccessible and the vortex state becomes numerically indistinguishable from stationary. The numerics suggest that while vortices in nonrotating traps are always unstable against off-center displacements, they may be very long-lived.
The observed $`x_0>0`$ instability of the vortex state is likely due to the existence of an ‘anomalous’ collective mode $`\omega _a`$ at low densities . This dipole mode, which has positive norm but negative energy (or vice versa), is associated with a zero angular momentum bound state in the vortex core ; its value corresponds to the precession frequency of the vortex relative to the cloud . Previous numerical calculations found $`|\omega _a|>0`$ for all $`N_0\le 10^4`$. As the core radius shrinks with larger $`N_0`$, however, the anomalous energy might be pushed to zero, yielding long-lived or even stable vortices in the TF limit.
The low-lying excitation frequencies of a nonrotating condensate in the vortex state are calculated using the Bogoliubov equations . For completely anisotropic geometries, however, the Bogoliubov operator is too large to diagonalize explicitly. Calculations are therefore restricted to the cylindrical case $`\alpha =1`$, where the vortex condensate is $`\psi =\psi _1(\rho ,z)e^{i\varphi }`$ and the quasiparticle amplitudes $`u`$ and $`v`$ are labeled by $`m`$, the projection of the angular momentum operator $`L_z`$. The Bogoliubov equations are then
$$\left(\begin{array}{cc}\widehat{O}& -V_\mathrm{H}\\ V_\mathrm{H}& -\widehat{O}^{}\end{array}\right)\left(\begin{array}{c}u_m\\ v_{2-m}\end{array}\right)=ϵ_m\left(\begin{array}{c}u_m\\ v_{2-m}\end{array}\right),$$
(4)
where $`\widehat{O}\equiv -\frac{1}{2\rho }\frac{\partial }{\partial \rho }\rho \frac{\partial }{\partial \rho }-\frac{1}{2}\frac{\partial ^2}{\partial z^2}+\frac{m^2}{2\rho ^2}+V_\mathrm{t}+2V_\mathrm{H}-\mu `$, $`\widehat{O}^{}\equiv \widehat{O}+\frac{2(1-m)}{\rho ^2}`$, and $`u_1=v_1=\psi _1`$ when $`ϵ_1=0`$. In the $`\widehat{\rho }`$-direction, the points of the DVR grid correspond to those of Gauss-Laguerre quadrature, and the kinetic energy matrix elements are obtained using the prescription of Baye and Heenen .
The anomalous mode $`\omega _a`$, which is labeled by $`m=2`$, is shown as a function of $`N_0`$ in Fig. 2. The results indicate that $`0<|\omega _a|\le \mathrm{\Omega }_c`$ for all $`N_0\le 10^6`$ considered. Our calculations suggest that for large numbers of atoms, $`\omega _a`$ coincides with the metastability rotation frequency $`\mathrm{\Omega }_0`$ discussed above; the numerical value of $`\omega _a`$ is consistent with the TF result $`\mathrm{\Omega }_0=\frac{3}{5}\mathrm{\Omega }_c`$ . Indeed, in the frame of a condensate rotating at $`\mathrm{\Omega }=\omega _a`$, the frequency of the vortex oscillation would be Doppler shifted to zero. Alternatively, it can be shown in both the weakly-interacting and TF limits that $`\mathrm{\Omega }_0`$ is also the frequency at which the chemical potentials $`\mu `$ for the vortex and vortex-free states become equal; in the TF limit, $`\omega _a`$ vanishes when the vacua (or energy zero) for quasiparticle excitations for both states coincide.
Since the anomalous mode corresponds to the precession of the vortex about the trap origin, one may make a crude estimate of the vortex lifetime $`\tau `$. In the presence of dissipation, the vortex will spiral out of the condensate after a few orbit periods $`\omega _a^{-1}`$. Assuming that $`\omega _a=\frac{3}{5}\mathrm{\Omega }_c`$ (the TF relation quoted above), then with Eq. (3) one obtains $`\tau \sim N_0^{2/5}`$T in the TF limit neglecting logarithmic factors. This result is consistent with the imaginary time $`\stackrel{~}{\tau }`$ required to yield the vortex-free ground state in the fully three-dimensional numerical calculations discussed above. Similar decay times have been obtained for solitons and vortices in the presence of a small noncondensate component .
In summary, we have obtained numerically the critical frequencies $`\mathrm{\Omega }_c`$ for the stabilization of a vortex at the center of a rotating anisotropically-trapped Bose condensate. Since $`\mathrm{\Omega }_c`$ increases with the in-plane anisotropy $`\alpha =\omega _y/\omega _x`$ and the condensate becomes unconfined for $`\mathrm{\Omega }>\omega _x`$, there is a minimum number of atoms able to support a vortex state. Vortices in nonrotating traps are found to be unstable against small off-center displacements, but their decay time diverges with the total number of atoms.
###### Acknowledgements.
The authors are grateful for many stimulating discussions with M. Brewczyk, J. Denschlag, M. Edwards, A. L. Fetter, D. Guery-Odelin, D. A. W. Hutchinson, M. Matthews, W. D. Phillips, W. Reinhardt, and C. J. Williams. This work was supported by the U.S. Office of Naval Research.
# The Effects of Clumping and Substructure on ICM Mass Measurements
## 1 Introduction
Studies of the intracluster medium (ICM) address many important cosmological and astrophysical questions. Precision measurements of individual ICM spectra and metallicities can be used to constrain models of massive star formation. The slope and evolution of cluster scaling relations (e.g. the luminosity vs. temperature or ICM mass vs. temperature) can provide insight into the nature and importance of galactic winds, as well as the background cosmology in which the ICM develops. In addition, current models of structure formation suggest that the mass components of clusters are representative of the universe as a whole; for this reason there has been much effort expended towards measuring the contributions of dark matter, ICM, and galaxies in the population. The mean ICM mass fraction is therefore a useful lower limit on the global baryon fraction, and can also be used to constrain $`\mathrm{\Omega }_0`$ if combined with primordial nucleosynthesis calculations.
Many studies have been carried out recently which attempt to measure the form of the ICM density profile using X–ray images (e.g. Mohr, Mathiesen, & Evrard 1999 (MME); White, Jones, & Forman 1997; Loewenstein & Mushotzky 1996; David, Jones, & Forman 1995, White & Fabian 1995). Following the lead of early work in the field (Forman & Jones 1982), such studies typically create an analytic model for the azimuthally averaged ICM density profile, and use the X–ray surface brightness to constrain its parameters. These models generally assume that the ICM is uniform at a given radius, that the density profile decreases monotonically from the center, and that the cluster is spherically symmetric.
These assumptions break down in real clusters; even apparently relaxed clusters tend to exhibit small asphericities. The prevalence of accretion events and major mergers in the local population is also well established, which calls into question the validity of a monotonic density profile. The assumption of a uniform ICM is a reasonable approximation, but it too must fail on sufficiently small scales. A certain level of random density fluctuations is to be expected, and even minor accretion events are found in simulations to produce persistent mild shocks and acoustic disturbances in the ICM. The small scatter in observed scaling relations, for example the ICM mass and mass fraction vs. temperature (MME), implies that these assumptions produce results with at least 20% precision; this fact inspires confidence. It is important, however, to directly measure the systematic errors incurred by these inaccuracies.
In accordance with the increasing quality of X–ray observations, the scientific community has recently begun to address these issues. A number of groups are investigating the possibility of a multiphase medium (Thomas 1998, Lima Neto et al. 1997, Waxman & Miralda-Escude 1995), although most of the work to date focuses on cooling flows rather than global ICM properties. A notable exception is the work of Gunn & Thomas (1996), who examine the effect of a multiphase ICM on baryon fraction measurements. There have also been attempts to get around the assumption of spherical symmetry by using the isophotal area (Mohr & Evrard 1997) or elliptical isophotes (Knopp & Henry 1996, Buote & Canizares 1996) to constrain a density model. All of these methods, however, still fall prey to one or more of the assumptions mentioned above.
Hydrodynamic simulations of cluster evolution are ideally suited to the task of estimating the systematic errors incurred by these assumptions. In this study, we produce realistic X–ray surface brightness images from a set of simulations and measure ICM masses by fitting the emission profile to a spherically symmetric beta model for the density. Our independent knowledge of the three-dimensional structure allows us to observe the effects of gross substructure and small-scale density fluctuations on the derived ICM mass directly, and correlate these errors with the kind of substructure present. In §2 of this paper, we present some detail on the nature of the simulations and how we create our X–ray images. In §3, we describe how we analyze these images and extract ICM density profiles. In §4, we compare the “observed” masses to the simulations and correlate the errors with structural properties of the ICM. Finally, in §5 we restate our results and remark on how they can be applied to real observations. Our results are phrased throughout in a manner independent of the Hubble constant $`H_0`$.
## 2 Data
We use an ensemble of 48 hydrodynamical cluster simulations, divided among four different cold dark matter (CDM) cosmological models. These models are (i) SCDM ($`\mathrm{\Omega }_0=1`$, $`\sigma _8=0.6`$, $`h=0.5`$, $`\mathrm{\Gamma }=0.5`$); (ii) $`\tau `$CDM ($`\mathrm{\Omega }_0=1`$, $`\sigma _8=0.6`$, $`h=0.5`$, $`\mathrm{\Gamma }=0.24`$); (iii) OCDM ($`\mathrm{\Omega }_0=0.3`$, $`\sigma _8=1.0`$, $`h=0.8`$, $`\mathrm{\Gamma }=0.24`$); and (iv) $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_0=0.3`$, $`\lambda _0=0.7`$, $`\sigma _8=1.0`$, $`h=0.8`$, $`\mathrm{\Gamma }=0.24`$). Here the Hubble constant is $`100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, and $`\sigma _8`$ is the power spectrum normalization on $`8h^1`$ Mpc scales. The initial conditions are Gaussian random fields consistent with a CDM transfer function with the specified $`\mathrm{\Gamma }`$ (Davis et al. 1985). The baryon density is set in each case to a fixed fraction of the total density ($`\mathrm{\Omega }_b=0.2\mathrm{\Omega }_0`$). The simulation scheme is P3MSPH; first a P<sup>3</sup>M (DM only) simulation is used to find cluster formation sites in a large volume, then a hydrodynamic simulation is performed on individual clusters to resolve their DM halo and ICM structure in detail. The resulting cluster sample covers a little more than a decade in total mass, ranging from about $`10^{14}`$ to $`2\times 10^{15}M_{\odot }`$.
We work with X–ray surface brightness maps derived from the simulations according to procedures described by Evrard (1990a,b). The particles representing pieces of the ICM are treated as bremsstrahlung emitters at the local temperature and density, and this emission is distributed over the image pixels according to a two-dimensional Gaussian with a width equal to that of the particle’s SPH smoothing kernel. Emission is collected in the energy band \[0.1,2.4\] keV, and the resulting surface brightness in each pixel is converted to counts per second according to an approximate ROSAT PSPC energy conversion factor. We then “observe” this model image using an effective area map of the PSPC and an exposure sufficient to yield $`10^4`$ cluster photons, comparable to ROSAT exposures of clusters in the Edge sample (Edge et al. 1990). Finally, each pixel is given a Poisson uncertainty based on the number of photons and the whole image is smoothed on a scale of $`24.4^{\prime \prime }`$ to improve the signal-to-noise ratio. Angular distances are calculated as $`d_A=cz/H_0`$ with $`z=0.06`$, so the physical smoothing scale is either 42.6 or 26.6 kpc. Note that because we are trying to isolate the systematic errors due to substructure, we do not add Poisson noise to the image; the error bars are only used to assign appropriate weights to the data in the analysis.
It should be noted that the simulations do not include certain processes known to be present in real clusters, such as radiative cooling and the injection of gas and energy by galactic supernovae. The cooling time for at least 99% of the SPH particles is much longer than the Hubble time, and would have little effect on the structure of the ICM; nevertheless, our simulations cannot develop cooling flows. Detailed studies of real cooling flows find mass deposition rates no larger than hundreds of solar masses per year, suggesting that they comprise only a small fraction of the ICM mass (White et al. 1998). Simulations which include galactic winds create clusters with more realistic density profiles, but the winds are not found to greatly affect the temperature structure of the gas (Metzler & Evrard 1994). It is conceivable that the presence of galactic winds would also create more clumping in the ICM, but this effect is probably negligible next to the variations caused by accretion events.
The clusters display great morphological diversity, ranging from examples which appear almost perfectly spherical and relaxed to others which are undergoing a three-way merger event. Each cluster is imaged in three orthogonal projections, allowing us to compare the biases caused by substructure along the line-of-sight and substructure in the plane of the sky. We use all three projections in general, but confine ourselves to one projection per cluster when making statistical comparisons to insure that the probabilities are derived from statistically independent data. Further details on the nature of these simulations can be found in both MME and Mohr & Evrard (1997).
## 3 Image Analysis
We fit the emission to a beta model, with three-dimensional density profile $`\rho (r)=\rho _0[1+(r/r_c)^2]^{-3\beta /2}`$ (Cavaliere & Fusco-Femiano, 1978). In this model $`r_c`$ and $`\beta `$ are free parameters to be constrained by the X–ray emission profile, and $`\rho _0`$ is found by normalizing the emission integral to the bolometric luminosity of the cluster.
Producing a cluster emission profile appropriate for fitting requires three steps. First, the emission center is found by sliding a circular aperture of radius 10 pixels over the image and minimizing the distance between the center of the aperture and the centroid of the X-ray photon distribution within the aperture. This approach converges even in cases where the cluster emission is significantly skew (Mohr, Fabricant, & Geller 1993). Second, we calculate the azimuthally averaged radial profile. Finally, each point in the profile is assigned a Poisson uncertainty based on the number of photons in the annulus; this allows the fitting program to assign appropriate relative weights to the datapoints.
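A schematic implementation of the center-finding step is given below (a brute-force scan for clarity; `img` is a hypothetical 2D photon-count array):

```python
# Sketch of the emission-center search: slide a circular aperture over
# the image and return the pixel where the aperture center is closest
# to the enclosed photon centroid.
import numpy as np

def find_center(img, r_ap=10):
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    best, center = np.inf, None
    for cy in range(r_ap, ny - r_ap):
        for cx in range(r_ap, nx - r_ap):
            mask = (xx - cx)**2 + (yy - cy)**2 <= r_ap**2
            w = img[mask].astype(float)
            if w.sum() == 0:
                continue
            gx = np.average(xx[mask], weights=w)   # photon centroid in x
            gy = np.average(yy[mask], weights=w)   # photon centroid in y
            d = np.hypot(gx - cx, gy - cy)         # aperture-centroid offset
            if d < best:
                best, center = d, (cx, cy)
    return center
```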
The PSPC point spread function (PSF) reduces a cluster’s central intensity and increases the apparent value of $`r_c`$. This effect changes the best-fit values of our model parameters and biases the derived ICM mass if left untreated. The incurred error is potentially significant for clusters with small angular diameter or pronounced cooling flows, and is simulated here by the image smoothing mentioned in §2. Thus, rather than fitting the beta model directly to our profile, we first convolve it with a PSF appropriate for our smoothing kernel. Details on the mathematics of this technique can be found in MME and Saglia et al. (1993).
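A sketch of the convolve-then-fit step follows, using the standard projected form of the beta model, $`\mathrm{\Sigma }(R)=\mathrm{\Sigma }_0[1+(R/r_c)^2]^{1/2-3\beta }`$, and approximating the PSF by a single Gaussian of width `psf_sigma` (grid sizes and variable names are illustrative):

```python
# Sketch: build a PSF-convolved beta-model profile and fit it to data.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

def convolved_profile(r, S0, rc, beta, psf_sigma=1.0, half=64):
    yy, xx = np.mgrid[-half:half, -half:half] + 0.5    # pixel centers
    rr = np.hypot(xx, yy)
    img = S0 * (1 + (rr / rc)**2) ** (0.5 - 3 * beta)  # projected beta model
    img = gaussian_filter(img, psf_sigma)              # apply the Gaussian PSF
    bins = np.arange(half)                             # azimuthal average
    prof = np.array([img[(rr >= b) & (rr < b + 1)].mean() for b in bins])
    return np.interp(r, bins + 0.5, prof)

# hypothetical usage, with (r_data, S_data, S_err) a measured profile:
# popt, _ = curve_fit(convolved_profile, r_data, S_data, sigma=S_err,
#                     p0=(S_data[0], 5.0, 0.7))
```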
The fitting was performed from the center of the cluster out to $`r_{500}`$, the radius at which the mean interior density is 500 times the critical density. This measure was chosen to probe the easily visible region of each cluster given typical backgrounds and sensitivities, as well as for physical reasons which will become apparent in the next section. By fitting out to a fixed fraction of the virial radius rather than a fixed metric radius, we insure that we are probing regions with similar dynamical gravitational time scales. The ratio $`\mathrm{{\rm Y}}`$ between cluster baryon fractions and the universal baryon fraction has also been calibrated by simulations; a bias factor of $`\mathrm{{\rm Y}}=0.9\pm 0.1`$ was found at $`r_{500}`$ (Evrard 1997). Recent work with an independent simulation has corroborated this result, finding $`\mathrm{{\rm Y}}=0.92\pm 0.06`$ at $`r_{200}`$ (Frenk et al. 1999). The radius $`r_{500}`$ is found in our simulations to scale as $`1.4h^{-1}(T/10\mathrm{keV})^{1/2}`$ Mpc, regardless of cosmology.
Finally, we checked our process by fitting 100 Monte Carlo images of true beta models with similar smoothing and individual realizations of Poisson noise, and found a chi-squared distribution consistent with a perfect match between the fitting function and the underlying model. This confirms that the high chi-squared values obtained in some images result from real physical deviations from a spherical beta model. A more complete discussion of our method, as applied to real PSPC data, is given in MME.
## 4 Results
We measure the variance of density fluctuations in spherical shells around the ICM particle with the lowest gravitational potential. This variance is expressed in terms of a clumping factor $`C`$,
$$C\equiv \langle \rho ^2\rangle /\langle \rho \rangle ^2.$$
(1)
Each ICM particle in the simulation has mass $`m_{\mathrm{gas}}`$. For a given shell lying between radii $`r_1`$ and $`r_2`$ with $`N_{12}`$ gas particles, $`\langle \rho \rangle `$ is defined as the total mass in gas particles divided by the volume $`V_{12}`$ of the shell, $`m_{\mathrm{gas}}N_{12}/V_{12}`$. The numerator is calculated as
$$\langle \rho ^2\rangle \equiv \frac{1}{V_{12}}\int _{r_1}^{r_2}d^3r\rho ^2\approx \frac{m_{\mathrm{gas}}}{V_{12}}\sum _i^{N_{12}}\rho _i,$$
(2)
with the right-hand term being the Lagrangian limit of the integral and $`\rho _i`$ the SPH gas density of particle $`i`$ (Evrard 1988).
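In code, the shell-by-shell measurement amounts to the following sketch (array names are ours; `rho_sph` holds the SPH densities $`\rho _i`$):

```python
# Sketch of the clumping measurement of Eqs. (1)-(2): <rho> from particle
# counts, <rho^2> from the Lagrangian sum over SPH densities.
import numpy as np

def clumping_profile(r, rho_sph, m_gas, r_edges):
    """r: particle radii from the potential minimum; rho_sph: SPH densities."""
    C = np.full(len(r_edges) - 1, np.nan)
    for i, (r1, r2) in enumerate(zip(r_edges[:-1], r_edges[1:])):
        sel = (r >= r1) & (r < r2)
        if not sel.any():
            continue                                     # empty shell
        V12 = 4.0 * np.pi / 3.0 * (r2**3 - r1**3)
        rho_bar = m_gas * sel.sum() / V12                # <rho>
        rho2_bar = m_gas * rho_sph[sel].sum() / V12      # <rho^2>, Eq. (2)
        C[i] = rho2_bar / rho_bar**2                     # Eq. (1)
    return C
```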
The results of this analysis are displayed in Figure 1. Although more complicated structure is visible in some of our clusters, the average magnitude of density fluctuations interior to $`r_{500}`$ seems to be a slowly increasing function of radius. The radial bins are chosen such that they represent fixed logarithmic intervals in the density contrast $`\delta _c`$. Dramatic increases in clumping are occasionally seen between $`r_{500}`$ and the more traditional virial radius; this is another reason for choosing to work within a density contrast of 500. Extending our analysis to the virial radius would return essentially the same results with lower statistical significance.
The presence of these fluctuations enhances the cluster luminosity over what would be expected for a smooth, single-phase ICM. The emissivity in a given shell is given by
$$\epsilon (r)=r^2dr\int d\mathrm{\Omega }\frac{\rho ^2(r,\mathrm{\Omega })}{\mu _e\mu _Hm_p^2}\mathrm{\Lambda }(T,\mathrm{\Omega }).$$
(3)
If the density fluctuations are modest, the temperature and ionization structure of the gas in a given shell will be approximately uniform. Taking these factors outside the integral and using equation (1) to incorporate our clumping factor, we have
$$\epsilon (r)=4\pi r^2dr\frac{\mathrm{\Lambda }(T)}{\mu _e\mu _Hm_p^2}\langle \rho (r)\rangle ^2C(r).$$
(4)
$`C`$, therefore, represents the factor by which a shell’s emissivity is enhanced over the single-phase case. This formalism can be invoked to describe a true multiphase medium under arbitrary constraints by choosing an appropriate function $`C(r)`$. Nagai, Evrard, & Sulkanen (1999) look at such distributions in more detail, investigating the observable consequences of applying isobaric multiphase models.
For each cluster we calculate the mass-weighted mean $`\overline{C}`$ over all shells as an approximation to the overall bias on its total luminosity and inferred squared core density $`\rho _0^2`$. These values, averaged again over all clusters within a cosmological model, are as follows: $`\overline{C}_o=1.40`$, $`\overline{C}_s=1.32`$, $`\overline{C}_\mathrm{\Lambda }=1.29`$, and $`\overline{C}_\tau =1.38`$. The mean values of $`M_\beta /M_{\mathrm{true}}`$ for the four cosmologies are 1.34 (OCDM), 1.13 (SCDM), 1.11 ($`\mathrm{\Lambda }`$CDM), and 1.14 ($`\tau `$CDM). The mass error distributions for the four cosmologies are mutually consistent, so we combine them to determine a mean ICM bias for the entire ensemble of $`1.18\pm 0.02`$. We also calculate the ensemble mean of $`\overline{C}`$, arriving at $`1.34\pm 0.02`$. Since the total luminosity is proportional to the square of the central gas density, we expect an overall bias in our ICM mass estimates of $`\overline{C}^{1/2}`$, or $`1.16\pm 0.01`$. The error bars given here are one standard deviation of the mean, so this is excellent agreement. We do not, however, find a strong correspondence between the level of clumping and the ICM mass error in individual clusters. Variations in the shape of the mass profile and large-scale asymmetries, which contribute a random component to the error, mask this relationship.
It is difficult to judge what weighting scheme is most appropriate for this calculation, since the pivotal regions of the emission profile vary from cluster to cluster. The central points are very accurate and do much to constrain the fit, but are few in number; the outer regions have large error bars but many more data points. We began our analysis with a mass-weighted clumping factor under the simple reasoning that it would greatly favor neither region. We also performed the exercise of weighting the shells by their luminosity (which gives more importance to the core) and volume (which emphasizes the outskirts). The resulting ensemble mean clumping factors do not vary greatly under the different weighting schemes: we found averages of $`\overline{C}=1.27`$ for luminosity-weighted shells and $`\overline{C}=1.39`$ for volume-weighted shells.
We also attempt to identify correlations of mass error with large-scale substructure signatures. A surprising result is that we find no correlation of $`M_\beta /M_{\mathrm{true}}`$ with centroid shift, which has been found to be a good indicator of a cluster’s dynamical state (Mohr, Evrard, Fabricant, & Geller 1995). Even those clusters with the highest centroid shifts don’t show a significantly different gas mass bias from the rest of the sample. Apparently, azimuthally averaging the cluster profile compensates for mild asphericities very well. We also looked for correlations of mass error with various bulk properties of the ICM such as mass, emission-weighted temperature, and luminosity, finding none. There is evidence for a weak correlation of mass error with the model parameters $`\beta `$ and $`r_c`$, but this can be entirely attributed to the tendency for strongly bimodal clusters to have profiles which are not well fit by the model. Although our clusters typically have steeper profiles than are observed in reality (c.f.), the lack of shape dependence indicates that this is probably not a problem. In none of the above tests were there discernible differences among the four cosmologies.
In fact, we found only one ICM property to be correlated with $`\delta M`$. Defining a “secondary peak” as a local maximum in the surface brightness at least 1% of the global maximum, we find that the most important property is the existence of secondary peaks within $`r_{500}`$. We attempted to further quantify the degree of asymmetry caused by subclumping, but the mass error turned out to be uncorrelated with the number and strength of the subpeaks. We therefore divide our ensemble into two subsets on this basis, hereafter referred to as “regular” and “bimodal” clusters for ease of language. Note that the subset of “regular” clusters contains examples with large centroid shifts or asymmetries, and the “bimodal” subset includes a few trimodal clusters as well. The two subsets comprise 62% and 38% of the total, respectively. Figure 2 displays a histogram of ICM mass errors for all the images in our ensemble. The shaded region corresponds to the regular clusters, while the remaining subset represents the bimodal sample.
The bimodal population has a distribution of mass errors with mean $`\overline{\delta M}=0.188`$ ($`\delta M=M_\beta /M_{\mathrm{true}}-1`$) and standard deviation $`\sigma =0.084`$. The remaining population has a distribution with $`\overline{\delta M}=0.093`$ and $`\sigma =0.041`$. Applying the K–S test, we find that the two populations are drawn from different distributions with very high confidence (greater than 5$`\sigma `$) and that the subset of non-bimodal images has an error distribution consistent with the Gaussian form. In applying the K–S tests we worked with a subset of the data consisting of just one image from each simulation, in order to maintain statistical independence. The means and standard deviations just quoted, however, reflect the distribution of the entire image ensemble. The smaller samples in each category have distributions entirely consistent with the complete sample, but we feel that the stated values better reflect the size of real uncertainties by taking advantage of multiple projections.
Within our sample there are also 17 (out of 144) images which contain a major subclump along the line of sight to the primary cluster, indistinguishable as a secondary peak. (A major subclump is one which produces a secondary peak when the cluster is viewed from a different angle.) These images, which occur in both the regular and bimodal populations, have a distribution in $`\delta M`$ consistent with the remaining set of bimodal images. This is reasonable given the similarity of their three-dimensional physical structures. Their partial membership in the set of regular images, however, does not change the shape or mean of that distribution significantly. Such occurrences seem to be sufficiently uncommon that our heuristic classification system remains useful. Since strongly bimodal clusters are more uncommon than our simulations indicate, real samples will probably suffer even less from this contamination.
## 5 Conclusions
Assuming a density profile which is uniform at a given radius introduces a significant bias into measurements of the ICM mass, because it ignores the presence of luminosity enhancements from overdense regions. Our simulations indicate that these small-scale fluctuations produce a mean overestimate of $`\sim 10\%`$ when we confine our analysis to regular clusters. The advent of spatially resolved X–ray spectral imaging should allow us to test for irregularity in ICM structures and constrain the level of these fluctuations with direct observations; in the meantime it seems prudent to apply a correction of this scale to current measurements, as is done in MME. The relationship between these fluctuations and the cluster accretion history remains an open question, but this analysis suggests that there is no strong connection.
Applying our spherically symmetric model to a cluster’s azimuthally averaged surface brightness profile returns reasonably accurate ICM mass measurements even in clusters exhibiting significant asymmetries, so long as there are no significant secondary peaks in the image. The assumption of spherical symmetry, while formally invalid in most cases, seems to be borne out in practice in that the process of azimuthal averaging returns a mean density profile which is unbiased by the presence of moderate substructure. Such deviations from spherical symmetry contribute a random error of $`\sim 5\%`$ to ICM mass measurements under our method. The subset of bimodal (or multi-peaked) clusters has a much larger bias and dispersion, and is perhaps best excluded from population studies of the ICM.
This research was supported by NASA grants NAG5-2790, NAG5-3401, and NAG5-7108, as well as NSF grant AST-9803199. JJM is supported through Chandra Fellowship grant PF8-1003, awarded through the Chandra Science Center. The Chandra Science Center is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.
# EDGE-BANDWIDTH OF GRAPHS
Tao Jiang
University of Illinois, Urbana, IL 61801-2975
Dhruv Mubayi
Georgia Institute of Technology, Atlanta, GA 30332-0160
Aditya Shastri
Banasthali University, Rajasthan 304 022, India
Douglas B. West
University of Illinois, Urbana, IL 61801-2975, west@math.uiuc.edu
Research supported in part by NSA/MSP Grant MDA904-93-H-3040. Running head: EDGE-BANDWIDTH. AMS codes: 05C78, 05C35. Keywords: bandwidth, edge-bandwidth, clique, biclique, caterpillar. Written June, 1997.
Abstract. The edge-bandwidth of a graph is the minimum, over all labelings of the edges with distinct integers, of the maximum difference between labels of two incident edges. We prove that edge-bandwidth is at least as large as bandwidth for every graph, with equality for certain caterpillars. We obtain sharp or nearly-sharp bounds on the change in edge-bandwidth under addition, subdivision, or contraction of edges. We compute edge-bandwidth for $`K_n`$, $`K_{n,n}`$, caterpillars, and some theta graphs.
1. INTRODUCTION
A classical optimization problem is to label the vertices of a graph with distinct integers so that the maximum difference between labels on adjacent vertices is minimized. For a graph $`G`$, the optimal bound on the differences is the bandwidth $`B(G)`$. The name arises from computations with sparse symmetric matrices, where operations run faster when the matrix is permuted so that all entries lie near the diagonal. The bandwidth of a matrix $`M`$ is the bandwidth of the corresponding graph whose adjacency matrix has a 1 in those positions where $`M`$ is nonzero. Early results on bandwidth are surveyed in and .
In this paper, we introduce an analogous parameter for edge-labelings. An edge-numbering (or edge-labeling) of a graph $`G`$ is a function $`f`$ that assigns distinct integers to the edges of $`G`$. We let $`B^{}(f)`$ denote the maximum of the difference between labels assigned to adjacent (incident) edges. The edge-bandwidth $`B^{}(G)`$ is the minimum of $`B^{}(f)`$ over all edge-labelings. The term “edge-numbering” is used because we may assume that $`f`$ is a bijection from $`E(G)`$ to the first $`|E(G)|`$ natural numbers.
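For very small graphs, $`B^{}(G)`$ can be computed directly from the definition by trying every edge-numbering; the sketch below does exactly this (feasible only for a handful of edges, since there are $`m!`$ orderings):

```python
# Brute-force computation of B'(G) for tiny graphs: B'(G) is the
# bandwidth of the line graph, so try every labeling of the m edges.
from itertools import permutations

def edge_bandwidth(edges):
    m = len(edges)
    # pairs of incident edges = adjacencies in the line graph
    incident = [(i, j) for i in range(m) for j in range(i + 1, m)
                if set(edges[i]) & set(edges[j])]
    return min(max((abs(p[i] - p[j]) for i, j in incident), default=0)
               for p in permutations(range(m)))

K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(edge_bandwidth(K4))   # 6 edges -> 720 orderings, instant
```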
We use the notation $`B^{}(G)`$ for the edge-bandwidth of $`G`$ because it is immediate that the edge-bandwidth of a graph equals the bandwidth of its line graph. Thus well-known elementary bounds on bandwidth can be applied to line graphs to obtain bounds on edge-bandwidth. We mention several such bounds. We compute edge-bandwidth on a special class where all these bounds are arbitrarily bad.
The relationship between edge-bandwidth and bandwidth is particularly interesting. Always $`B(G)\le B^{}(G)`$, with equality for caterpillars of diameter more than $`k`$ in which every vertex has degree 1 or $`k+1`$. Among forests, $`B^{}(G)\le 2B(G)`$, which is almost sharp for stars. More generally, if $`G`$ is a union of $`t`$ forests, then $`B^{}(G)\le 2tB(G)+t-1`$.
Chvátalová and Opatrny studied the effect on bandwidth of edge addition, contraction, and subdivision (see for further results on edge addition). We study these for edge-bandwidth. Adding or contracting an edge at most doubles the edge-bandwidth. Subdividing an edge decreases the edge-bandwidth by at most a factor of $`1/3`$. All these bounds are sharp within additive constants. Surprisingly, subdivision can also increase edge-bandwidth, but at most by 1, and contraction can decrease it by 1.
Because the edge-bandwidth problem is a restriction of the bandwidth problem, it may be easier computationally. Computation of bandwidth is NP-complete , remaining so for trees with maximum degree 4 and for several classes of caterpillar-like graphs. Such graphs generally are not line graphs (they contain claws). It remains open whether computing edge-bandwidth (computing bandwidth of line graphs) is NP-hard.
Due to the computational difficulty, bandwidth has been studied on various special classes. Bandwidth has been determined for caterpillars and for various generalizations of caterpillars (), for complete $`k`$-ary trees , for rectangular and triangular grids (higher dimensions ), for unions of pairwise internally-disjoint paths with common endpoints (called “theta graphs” ), etc. Polynomial-time algorithms exist for computing bandwidth for graphs in these classes and for interval graphs . We begin analogous investigations for edge-bandwidth by computing the edge-bandwidth for cliques, for equipartite complete bipartite graphs, and for some theta graphs.
2. RELATION TO OTHER PARAMETERS
We begin by listing elementary lower bounds on edge-bandwidth that follow from standard arguments about bandwidth when applied to line graphs.
PROPOSITION 1. Edge-bandwidth satisfies the following. a) $`B^{}(H)\le B^{}(G)`$ when $`H`$ is a subgraph of $`G`$. b) $`B^{}(G)=\mathrm{max}\{B^{}(G_i)\}`$, where $`\{G_i\}`$ are the components of $`G`$. c) $`B^{}(G)\ge \mathrm{\Delta }(G)-1`$.
Proof: (a) A labeling of $`G`$ contains a labeling of $`H`$. (b) Concatenating labelings of the components achieves the lower bound established by (a). (c) The edges incident to a single vertex induce a clique in the line graph. The lowest and highest among these labels are at least $`\mathrm{\Delta }(G)-1`$ apart.
PROPOSITION 2. $`B^{}(G)\ge \mathrm{max}_{H\subseteq G}\frac{e(H)-1}{\mathrm{diam}(L(H))}`$.
Proof: This is the statement of Chung’s “density bound” for line graphs. Every labeling of a graph contains a labeling of every subgraph. In a subgraph $`H`$, the lowest and highest labels are at least $`e(H)-1`$ apart, and the edges receiving these labels are connected by a path of length at most $`\mathrm{diam}(L(H))`$, so by the pigeonhole principle some consecutive pair of edges along the path have labels differing by at least $`(e(H)-1)/\mathrm{diam}(L(H))`$.
Subgraphs of diameter 2 include stars, and a star in a line graph is generated from an edge of $`G`$ with its incident edges at both endpoints. The size of such a subgraph is at most $`d(u)+d(v)-1`$, yielding the bound $`B^{}(G)\ge [d(u)+d(v)]/2-1`$ for $`uv\in E(G)`$. This is at most $`\mathrm{\Delta }(G)-1`$, the lower bound from Proposition 1. Nevertheless, because of the way in which stars in line graphs arise, they can yield a better lower bound for regular or nearly-regular graphs. We develop this next.
PROPOSITION 3. For $`F\subseteq E(G)`$, let $`\partial (F)`$ denote the set of edges not in $`F`$ that are incident to at least one edge in $`F`$. The edge-bandwidth satisfies $`B^{}(G)\ge \mathrm{max}_k\mathrm{min}_{|F|=k}|\partial (F)|`$.
Proof: This is the statement of Harper’s “boundary bound” for line graphs. Some set $`F`$ of $`k`$ edges must be the set given the $`k`$ smallest labels. If $`m`$ edges outside this set have incidences with this set, then the largest label on the edges incident to $`F`$ is at least $`k+m`$, and the difference between the labels on this and its incident edge in $`F`$ is at least $`m`$.
COROLLARY 4. $`B^{}(G)\ge \mathrm{min}_{uv\in E(G)}\{d(u)+d(v)\}-2`$.
Proof: We apply Proposition 3 with $`k=1`$. Each edge $`uv`$ is incident to $`d(u)+d(v)-2`$ other edges. Some edge must have the least label, and this establishes the lower bound.
Although these bounds are often useful, they can be arbitrarily bad. The theta graph $`\mathrm{\Theta }(l_1,\dots ,l_m)`$ is the graph that is the union of $`m`$ pairwise internally-disjoint paths with common endpoints and lengths $`l_1,\dots ,l_m`$. The name “theta graph” comes from the case $`m=3`$. The bandwidth is known for all theta graphs, but settling this was a difficult process finished in . When the path lengths are equal, the edge-bandwidth and bandwidth both equal $`m`$, using the density lower bound and a simple construction. The edge-bandwidth can be much higher when the lengths are unequal. Our example showing this will later demonstrate sharpness of some bounds.
Our original proof of the lower bound was lengthy. The simple argument presented here originated with Dennis Eichhorn and Kevin O’Bryant. It will be generalized in to compute edge-bandwidth for a large class of theta graphs.
Example A. Consider $`G=\mathrm{\Theta }(l_1,\dots ,l_m)`$ with $`l_m=1`$ and $`l_1=\dots =l_{m-1}=3`$. Let $`a_i,b_i,c_i`$ denote the edges of the $`i`$th path of length 3, and let $`e`$ be the edge incident to all $`a_i`$’s at one end and to all $`c_i`$’s at the other end. Since $`\mathrm{\Delta }(G)=m`$, Proposition 1c yields $`B^{}(G)\ge m-1`$. Proposition 2 also yields $`B^{}(G)\ge m-1`$. For $`1\le k\le 2m-2`$, the first $`k`$ edges in the list $`a_1,\dots ,a_{m-1},b_1,\dots ,b_{m-1}`$ are together incident to exactly $`m`$ other edges, and larger sets are incident to at most $`m-1`$ other edges. Thus the best lower bound from Proposition 3 is at most $`m`$.
Nevertheless, $`B^{}(G)=(3m-3)/2`$. For the upper bound, we assign the $`3m-2`$ labels in order to $`a`$’s, $`b`$’s, and $`c`$’s, inserting $`e`$ before $`b_{m/2}`$. The difference between labels of incident edges is always at most $`m`$ except for incidences involving $`e`$, which are at most $`(3m-3)/2`$ since $`e`$ has the middle label.
$$a_1,\dots ,a_{m-1},b_1,\dots ,b_{m/2-1},e,b_{m/2},\dots ,b_{m-1},c_1,\dots ,c_{m-1}.$$
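This construction is easy to verify mechanically. The sketch below builds the ordering (placing $`e`$ so that it receives the middle label; for odd $`m`$ this is the insertion point before $`b_{(m+1)/2}`$) and reports the resulting $`B^{}(f)`$, which equals $`(3m-3)/2`$, rounded up when $`m`$ is even:

```python
# Check of the Example A labeling on Theta(3,...,3,1): build the ordering
# above with e at the middle label, then compute the maximum label
# difference over all incident edge pairs.
import math

def theta_edge_bandwidth(m):
    a = [("a", i) for i in range(1, m)]
    b = [("b", i) for i in range(1, m)]
    c = [("c", i) for i in range(1, m)]
    k = (m + 1) // 2 - 1                      # insert e before b_{ceil(m/2)}
    order = a + b[:k] + [("e", 0)] + b[k:] + c
    f = {edge: lab for lab, edge in enumerate(order)}
    pairs = [(("a", i), ("b", i)) for i in range(1, m)]
    pairs += [(("b", i), ("c", i)) for i in range(1, m)]
    pairs += [(("e", 0), x) for x in a + c]            # e meets all a's, c's
    pairs += [(x, y) for x in a for y in a if x < y]   # a's share a vertex
    pairs += [(x, y) for x in c for y in c if x < y]   # c's share a vertex
    return max(abs(f[x] - f[y]) for x, y in pairs)

for m in range(3, 10):
    print(m, theta_edge_bandwidth(m), math.ceil((3 * m - 3) / 2))
```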
To prove the lower bound, consider a numbering $`f:E(G)\to Z`$, and let $`k=B^{}(f)`$. Let $`\alpha =\mathrm{max}\{f(e),\mathrm{max}_i\{f(a_i)\}\}`$ and $`\alpha ^{}=\mathrm{min}\{f(e),\mathrm{min}_i\{f(c_i)\}\}`$. Comparing the edges with labels $`\alpha ,f(e),\alpha ^{}`$ yields $`\alpha -k\le f(e)\le \alpha ^{}+k`$. Let $`I`$ be the interval $`[\alpha -k,\alpha ^{}+k]`$. By construction, $`I`$ contains the labels of all $`a`$’s, all $`c`$’s, and $`e`$. If $`f(a_i)<\alpha ^{}`$ and $`f(c_i)>\alpha `$, then also $`f(b_i)\in I`$. By the choice of $`\alpha ,\alpha ^{}`$, avoiding this requires $`\alpha ^{}<f(a_i)\le \alpha `$ or $`\alpha ^{}\le f(c_i)<\alpha `$. Since each label is assigned only once and the label $`f(e)`$ cannot play this role, only $`\alpha -\alpha ^{}`$ of the $`b`$’s can have labels outside $`I`$. Counting the labels we have forced into $`I`$ yields $`\left|I\right|\ge (2m-1)+(m-1-\alpha +\alpha ^{})`$. On the other hand, $`\left|I\right|=2k+\alpha ^{}-\alpha +1`$. Thus $`k\ge (3m-3)/2`$, as desired.
3. EDGE-BANDWIDTH VS. BANDWIDTH
In this section we prove various best-possible inequalities involving bandwidth and edge-bandwidth. The proof that $`B(G)\le B^{}(G)`$ requires several steps. All steps are constructive. When $`f`$ or $`g`$ is a labeling of the edges or vertices of $`G`$, we say that $`f(e)`$ or $`g(v)`$ is the $`f`$-label or $`g`$-label of the edge $`e`$ or vertex $`v`$. An $`f`$-label on an edge incident to $`u`$ is an incident $`f`$-label of $`u`$.
LEMMA 5. If a finite graph $`G`$ has minimum degree at least two, then $`B(G)B^{}(G)`$.
Proof: From an optimal edge-numbering $`f`$ (such that $`B^{}(f)=B^{}(G)=m`$), we define a labeling $`g`$ of the vertices. The labels used by $`g`$ need not be consecutive, but we show that $`|g(u)g(v)|m`$ when $`u`$ and $`v`$ are adjacent.
We produce $`g`$ in phases. At the beginning of each phase, we choose an arbitrary unlabeled vertex $`u`$ and call it the active vertex. At each step in a phase, we select the unused edge $`e`$ of smallest $`f`$-label among those incident to the active vertex. We let $`f(e)`$ be the $`g`$-label of the active vertex, mark $`e`$ used, and designate the other endpoint of $`e`$ as the active vertex. If the new active vertex already has a label, we end the phase. Otherwise, we continue the phase.
When we examine a new active vertex, it has an edge with least incident label, because every vertex has degree at least 2 and we have not previously reached this vertex. Each phase eventually ends, because the vertex set is finite and we cannot continue reaching new vertices. The procedure assigns a label $`g(u)`$ for each $`uV(G)`$, since we continue to a new phase as long as an unlabeled vertex remains.
It remains to verify that $`|g(u)g(v)|m`$ when $`uvE(G)`$. Suppose that $`g(u)=a=f(e)`$ and $`g(v)=b=f(e^{})`$. Since each vertex is assigned the $`f`$-label of an incident edge, we have $`e,e^{}`$ incident to $`u,v`$, respectively. If the edge $`uv`$ is one of $`e,e^{}`$, then $`e`$ and $`e^{}`$ are incident, which implies that $`|g(u)g(v)|=|f(e)f(e^{})|m`$.
Otherwise, we have $`f(uv)=c`$ for some other value $`c`$. We may assume that $`a<b`$ by symmetry. If $`a<c`$ and $`b<c`$, then $`|g(u)g(v)|=ba<ca=f(uv)f(e)m`$. Thus we may assume that $`b>c`$. In particular, $`g(v)`$ is not the least $`f`$-label incident to $`v`$.
The algorithm assigns $`v`$ a label when $`v`$ first becomes active, using the least $`f`$-label among unused incident edges. When $`v`$ first becomes active, only the edge of arrival is a used incident edge. Thus $`g(v)`$ is the least incident $`f`$-label except when $`v`$ is first reached via the least-labeled incident edge. In this case, $`g(v)`$ is the second smallest incident $`f`$-label. Thus $`c`$ is the least $`f`$-label incident to $`v`$ and $`v`$ becomes active by arrival from $`u`$. This requires $`g(u)=c`$, which contradicts $`g(u)=a`$ and eliminates the bad case.
LEMMA 6. If $`G`$ is a tree, then $`B(G)B^{}(G)`$.
Proof: Again we use an optimal edge-numbering $`f`$ to define a vertex-labeling $`g`$ whose adjacent vertices differ by at most $`B^{}(f)`$. We may assume that the least $`f`$-label is 1, occurring on the edge $`e=uv`$. Assign (temporarily) $`g(u)=g(v)=f(e)`$. View the edge $`e`$ as the root of $`G`$. For each vertex $`x\{u,v\}`$, let $`g(x)`$ be the $`f`$-label of the edge incident to $`x`$ along the path from $`x`$ to the root.
If $`xyE(G)`$ and $`xyuv`$, then we may assume that $`y`$ is on the path from $`x`$ to the root. We have assigned $`g(x)=f(xy)`$, and $`g(y)`$ is the $`f`$-label of an edge incident to $`y`$, so $`|g(x)g(y)|B^{}(f)`$.
Our labeling $`g`$ fails to be the desired labeling only because we used 1 on both $`u`$ and $`v`$. Observe that the largest $`f`$-label incident to $`uv`$ occurs on an edge incident to $`u`$ or on an edge incident to $`v`$ but not both; we may assume the latter. Now we change $`g(u)`$ to 0. Because the differences between $`f(uv)`$ and $`f`$-labels on edges incident to $`u`$ were less than $`B^{}(f)`$, this produces the desired labeling $`g`$.
THEOREM 7. For every graph $`G`$, $`B(G)B^{}(G)`$.
Proof: By Proposition 1b, it suffices to consider connected graphs. Let $`f`$ be an optimal edge-numbering of $`G`$; we produce a vertex labeling $`g`$. Lemma 6 applies when $`G`$ is a tree. Otherwise, $`G`$ contains a cycle, and iteratively deleting vertices of degree 1 produces a subgraph $`G^{}`$ in which every vertex has degree at least 2. The algorithm of Lemma 5, applied to the restriction of $`f`$ to $`G^{}`$, produces a vertex labeling $`g`$ of $`G^{}`$ in which (1) adjacent vertices have labels differing by at most $`B^{}(f)`$, and (2) the label on each vertex is the $`f`$-label of some edge incident to it in $`G^{}`$.
To obtain a vertex labeling of $`G`$, reverse the deletion procedure. This iteratively adds a vertex $`x`$ adjacent to a vertex $`y`$ that already has a $`g`$-label. Assign to $`x`$ the $`f`$-label of the edge $`xy`$ in the full edge-numbering $`f`$ of $`G`$. Now $`g(x)`$ and $`g(y)`$ are the $`f`$-labels of two edges incident to $`y`$ in $`G`$, and thus $`|g(x)g(y)|B^{}(f)`$. The claims (1) and (2) are preserved, and we continue this process until we replace all vertices that were deleted from $`G`$.
A caterpillar is a tree in which the subtree obtained by deleting all leaves is a path. One of the characterizations of caterpillars is the existence of a linear ordering of the edges such that each prefix and each suffix forms a subtree. We show that such an ordering is optimal for edge-bandwidth and use this to show that Theorem 7 is nearly sharp.
PROPOSITION 8. If $`G`$ is a caterpillar, then $`B^{}(G)=\mathrm{\Delta }(G)1`$. Let $`G`$ be the caterpillar of diameter $`d`$ in which every vertex has degree $`k+1`$ or 1. If $`dk`$, then $`B(G)=B^{}(G)=k`$.
Proof: Let $`G`$ be a caterpillar. Let $`v_1,\mathrm{},v_{d1}`$ be the non-leaf vertices of the dominating path. The diameter of $`G`$ is $`d`$. Number the edges by assigning labels in the following order: first the pendant edges incident to $`v_1`$, then $`v_1v_2`$, then the pendant edges incident to $`v_2`$, then $`v_2v_3`$, etc. Since edges are incident only at $`v_1,\mathrm{},v_{d1}`$, this ordering places all pairs of incident edges within $`\mathrm{\Delta }(G)1`$ positions of each other. Since $`B^{}(G)\mathrm{\Delta }(G)1`$ for all $`G`$, equality holds.
For a caterpillar $`G`$ with order $`n`$ and diameter $`d`$, Chung’s density bound yields $`B(G)(n1)/d`$. Let $`G`$ be the caterpillar of diameter $`d`$ in which every vertex has degree $`k+1`$ or 1. We have $`d1`$ vertices of degree $`k+1`$, so $`n=(d1)k+2`$ and $`B(G)>kk/d`$. When $`dk`$, we have $`B(G)k`$.
On the other hand, we have observed that $`B^{}(G)\mathrm{\Delta }(G)1=k`$ for caterpillars. By Theorem 7, equality holds throughout for these special caterpillars.
Theorem 7 places a lower bound on $`B^{}(G)`$ in terms of $`B(G)`$. We next establish an upper bound. The arboricity is the minimum number of forests needed to partition the edges of $`G`$.
THEOREM 9. If $`G`$ has arboricity $`t`$, then $`B^{}(G)2tB(G)+t1`$. When $`t=1`$, the inequality is almost sharp; there are caterpillars with $`B^{}(G)=2B(G)1`$.
Proof: Given an optimal number $`g`$ of $`V(G)`$, we construct a labeling $`f`$ of $`E(G)`$. Let $`G_1,\mathrm{},G_t`$ be a decomposition of $`G`$ into the minimum number of forests. In each component of each $`G_i`$, select a root. Each edge of $`G_i`$ is the first edge on the path from one of its endpoints to the root of its component in $`G_i`$; for $`eE(G_i)`$, let $`v(e)`$ denote this endpoint. Define $`f(e)=tg(v(e))+i`$.
Each vertex of each forest heads toward the root of its component in that forest along exactly one edge, so the $`f`$-labels of the edges are distinct. Each $`f`$-label arises from the $`g`$-label of one of its endpoints. Thus the $`f`$-labels of two incident edges arise from the $`g`$-labels of vertices separated by distance at most 2 in $`G`$. Also the indices of the forests containing these edges differ by at most $`t1`$. Thus when $`e,e^{}`$ are incident we have $`|f(e)f(e^{})|t2B(g)+t1`$.
The bandwidth of a caterpillar is the maximum density (#edges/diameter) over subtrees . This equals $`\mathrm{\Delta }(G)/2`$ whenever the vertex degrees all lie in $`\{\mathrm{\Delta }(G),2,1\}`$ and the vertices of degree $`\mathrm{\Delta }(G)`$ are pairwise nonadjacent. When $`\mathrm{\Delta }(G)`$ is even, Proposition 8 yields $`B^{}(G)=2B(G)1`$. (Without , this still holds explicitly for stars.)
4. EFFECT OF EDGE OPERATIONS
In this section, we obtain bounds on the effect of local edge operations on the edge-bandwidth. The variations can be linear in the value of the edge-bandwidth, and our bounds are optimal except for additive constants. We study addition, subdivision, and contraction of edges.
THEOREM 10. If $`H`$ is obtained from $`G`$ by adding an edge, then $`B^{}(G)B^{}(H)2B^{}(G)`$. Furthermore, for odd $`k`$ there are examples of $`H=G+e`$ such that $`B^{}(G)=k`$ and $`B^{}(H)2k1`$.
Proof: The first inequality holds because $`G`$ is a subgraph of $`H`$. For the second, let $`g`$ be an optimal edge-numbering of $`G`$; we produce an edge-numbering $`f`$ of $`H`$ such that $`B^{}(f)2B^{}(g)`$.
If $`e`$ is not incident to an edge of $`G`$, form $`f`$ from $`g`$ by giving $`e`$ a new label higher than the others. If only one endpoint of $`e`$ is incident to an edge $`e^{}`$ of $`G`$, form $`f`$ by leaving the $`g`$-labels less than $`g(e^{})`$ unchanged, augmenting the remaining labels by 1, and letting $`f(e)=g(e^{})+1`$. We have $`B(f)B(g)+1`$.
Thus we may assume that the new edge $`e`$ joins two vertices of $`G`$. Our construction for this case modifies an argument in . Let $`e_i`$ be the edge such that $`g(e_i)=i`$, for $`1iB(g)`$. Let $`p,q`$ be the smallest and largest indices of edges of $`G`$ incident to $`e`$, respectively, and let $`r=(p+q)/2`$.
The idea in defining $`f`$ from $`g`$ is to “fold” the ordering at $`r`$, renumbering out from there so that $`e_p`$ and $`e_q`$ receive consecutive labels, and inserting $`e`$ just before this. The renumbering of the old edges is as follows
$$f(e_j)=\{\begin{array}{cc}2(jr)\hfill & \text{if }r<j<q\hfill \\ 2(jr)+1\hfill & \text{if }qj\hfill \\ 2(rj)+1\hfill & \text{if }p<jr\hfill \\ 2(rj)+2\hfill & \text{if }jp\hfill \end{array}$$
Finally, let $`f(e)=\mathrm{min}\{f(e_p),f(e_q)\}1=qp`$. After the edges with $`g`$-labels higher than $`q`$ or lower than $`p`$ are exhausted, the new numbering leaves gaps. For edges $`e_i,e_jE(G)`$, we have $`|f(e_i)f(e_j)|2|ij|+1`$, where the possible added 1 stems from the insertion of $`e`$. When $`r`$ is between $`i`$ and $`j`$, the actual stretch is smaller.
It remains to consider incidences involving $`e`$. Suppose that $`e^{}=e_j`$ is incident to $`e`$. Note that $`1f(e^{})qp+2=f(e)+2`$; we may assume that $`1f(e^{})<f(e)`$. If $`e_p`$ and $`e_q`$ are incident to the same endpoint of $`e`$, then $`1f(e)f(e^{})qp+1B(g)+1`$. If $`e_p`$ and $`e_q`$ are incident to opposite endpoints of $`e`$, then $`e^{}`$ is incident to $`e_p`$ or $`e_q`$. In these two cases, we have $`pjp+B(g)`$ or $`qB(g)jq`$. Since $`j`$ differs from $`p`$ or $`q`$, respectively, by at most $`B(g)`$, we obtain $`1f(e)f(e^{})2B(g)`$.
The bound is nearly sharp when $`k`$ is odd. Let $`G`$ be the caterpillar of diameter $`k+1`$ with vertices of degree $`k+1`$ and 1 (see Proposition 8). We have $`e(G)=k^2+1`$ and $`B^{}(G)=B(G)=k`$. The graph $`H`$ formed by adding the edge $`v_1v_k`$ is a cycle of length $`k`$ plus pendant edges; each vertex of the cycle has degree $`k+1`$ except for two adjacent vertices of degree $`k+2`$. The diameter of $`L(H)`$ is $`k/2+1=(k+1)/2`$, and $`H`$ has $`k^2+2`$ edges. By Proposition 2, we obtain $`B^{}(H)\frac{k^2+1}{(k+1)/2}=2k2+\frac{4}{k+1}=2k1`$.
Subdividing an edge $`uv`$ means replacing $`uv`$ by a path $`u,w,v`$ passing through a new vertex $`w`$. If $`H`$ is obtained from $`G`$ by subdividing one edge of $`G`$, then $`H`$ is an elementary subdivision of $`G`$. Edge subdivision can reduce the edge-bandwidth considerably, but it increases the edge-bandwidth by at most one.
THEOREM 11. If $`H`$ is an elementary subdivision of $`G`$, then $`(2B^{}(G)+\delta )/3B^{}(H)B^{}(G)+1`$, where $`\delta `$ is 1 if $`B^{}(H)`$ is odd and 0 if $`B^{}(H)`$ is even, and these bounds are sharp.
Proof: Suppose that $`H`$ is obtained from $`G`$ by subdividing edge $`e`$. From an optimal edge-numbering $`g`$ of $`G`$, we obtain an edge-numbering of $`H`$ by augmenting the labels greater than $`g(e)`$ and letting the labels of the two new edges be $`g(e)`$ and $`g(e)+1`$. This stretches the difference between incident labels by at most 1.
To show that this bound is sharp, compare $`G=\mathrm{\Theta }(1,2,\mathrm{},2)`$ and $`G^{}=\mathrm{\Theta }(1,3,\mathrm{},3)`$, where each has $`m`$ paths with common endpoints. In Example A, we proved that $`B^{}(G^{})=3(m1)/2`$. In $`G`$, let the $`i`$th path have edges $`a_i,b_i`$ for $`i<m`$, with $`e`$ the extra edge. The ordering $`a_1,\mathrm{},a_{m1},e,b_1,\mathrm{},b_{m1}`$ yields $`B^{}(G)m`$. The graph $`G^{}`$ is obtained from $`G`$ by a sequence of $`m1`$ elementary subdivisions, roughly half of which must increase the edge-bandwidth. The desired graph $`H`$ is the first where the bandwidth is $`m+1`$.
To prove the lower bound on $`B^{}(H)`$, we consider an optimal edge-numbering $`f`$ of $`H`$ and obtain an edge-numbering of $`G`$. For the edges $`e^{},e^{\prime \prime }`$ introduced to form $`H`$ after deleting $`e`$, let $`p=f(e^{})`$ and $`q=f(e^{\prime \prime })`$. We may assume that $`p<q`$. Let $`r=(p+q)/2`$. Define $`g`$ by leaving the $`f`$-labels below $`p`$ and in $`[r+1,q1]`$ unchanged, decreasing those in $`[p+1,r]`$ and above $`q`$ by one, and setting $`g(e)=r`$. The differences between labels on edges belonging to both $`G`$ and $`H`$ change by at most one and increase only when the difference is less than $`B^{}(f)`$. For incidences involving $`e`$, the incident edge $`ϵ`$ was incident in $`H`$ to $`e^{}`$ or $`e^{\prime \prime }`$. The difference $`|g(e)g(ϵ)|`$ exceeds $`B^{}(f)`$ only if $`g(ϵ)<p`$ or $`g(ϵ)>q`$. In the first case, the difference increases by $`rp=(qp)/2`$. In the second, it increases by $`qr1=(qp)/21`$. We obtain $`B^{}(G)B^{}(H)+\frac{qp}{2}\frac{3B^{}(H)}{2}`$. Whether $`B^{}(H)`$ is even or odd, this establishes the bound claimed.
To show that this bound is sharp, compare $`G=\mathrm{\Theta }(1,3\mathrm{},3)`$ and $`H=\mathrm{\Theta }(2,3\mathrm{},3)`$. In $`H`$ let the $`i`$th path have edges $`a_i,b_i,c_i`$ for $`i<m`$, with $`d,e`$ the remaining path. The ordering $`a_1,\mathrm{},a_{m1},d,b_1,\mathrm{},b_{m1},e,c_1,\mathrm{},c_{m1}`$ yields $`B^{}(H)m`$. From Example A, $`B^{}(G)=3(m1)/2`$. Whether $`m`$ is odd or even, this example achieves the lower bound on $`B^{}(H)`$.
Contracting an edge $`uv`$ means deleting the edge and replacing its endpoints by a single combined vertex $`w`$ inheriting all other edge incidences involving $`u`$ and $`v`$. Contraction tends to make a graph denser and thus increase edge-bandwidth. In some applications, one restricts attention to simple graphs and thus discards loops or multiple edges that arise under contraction. Such a convention can discard many edges and thus lead to a decrease in edge-bandwidth. In particular, contracting an edge of a clique would yield a smaller clique under this model and thus smaller edge-bandwidth.
For the next result, we say that $`H`$ is an elementary contraction of $`G`$ if $`H`$ is obtained from $`G`$ by contracting one edge and keeping all other edges, regardless of whether loops or multiple edges arise. Edge-bandwidth is a valid parameter for multigraphs.
THEOREM 12. If $`H`$ is an elementary contraction of $`G`$, then $`B^{}(G)1B^{}(H)2B^{}(G)1`$, and these bounds are sharp for each value of $`B^{}(G)`$.
Proof: Let $`e`$ be the edge contracted to produce $`H`$. For the upper bound, let $`g`$ be an optimal edge-numbering of $`G`$, and let $`f`$ be the edge-numbering of $`H`$ produced by deleting $`e`$ from the numbering. In particular, leave the $`g`$-labels below $`g(e)`$ unchanged and decrement those above $`g(e)`$ by 1. Edges incident in $`H`$ have distance at most two in $`L(G)`$, and their distance in $`L(G)`$ is two only if $`e`$ lies between them. Thus the difference between their $`g`$-labels is at most $`2B^{}(g)`$, with equality only if the difference between their $`f`$-labels is $`2B^{}(G)1`$.
Equality holds when $`G`$ is the double-star (the caterpillar with two vertices of degree $`k+1`$ and $`2k`$ vertices of degree 1) and $`e`$ is the central edge of $`G`$, so $`H`$ is the star $`K_{1,2k}`$. We have observed that $`B^{}(G)=k`$ and $`B^{}(H)=2k1`$.
For the lower bound, let $`f`$ be an optimal edge-numbering of $`H`$, and let $`g`$ be the edge-numbering of $`G`$ produced by inserting $`e`$ into the numbering just above the edge $`e^{}`$ with lowest $`f`$-label among those incident to the contracted vertex $`w`$ in $`H`$. In particular, leave $`f`$-labels up to $`f(e^{})`$ unchanged, augment those above $`f(e^{})`$ by 1, and let $`g(e)=f(e^{})+1`$. The construction and the argument depend on the preservation of loops and multiple edges. Edges other than $`e`$ that are incident in $`G`$ are also incident in $`H`$, and the difference between their labels under $`g`$ is at most one more than the difference under $`f`$. Edges incident to $`e`$ in $`G`$ are incident to $`e^{}`$ in $`H`$ and thus have $`f`$-label at most $`f(e^{})+B^{}(f)`$. Thus their $`g`$-label differs from that of $`e^{}`$ by at most $`B^{}(f)`$.
The lower bound must be sharp for each value of $`B^{}(G)`$, because successive contractions eventually eliminate all edges and thus reduce the bandwidth.
5. EDGE-BANDWIDTH OF CLIQUES AND BICLIQUES
We have computed edge-bandwidth for caterpillars and other sparse graphs. In this section we compute edge-bandwidth for classical dense families, the cliques and equipartite complete bipartite graphs. Give the difficulty of bandwidth computations, the existence of exact formulas is of as much interest as the formulas themselves.
THEOREM 13. $`B^{}(K_n)=n^2/4+n/22`$.
Proof: Lower bound. Consider an optimal numbering. Among the lowest $`\left(\genfrac{}{}{0pt}{}{n/21}{2}\right)+1`$ values there must be edges involving at least $`n/2`$ vertices of $`K_n`$. Among the highest $`\left(\genfrac{}{}{0pt}{}{n/2}{2}\right)+1`$ values there must be edges involving at least $`n/2+1`$ vertices of $`K_n`$. Since $`n/2+n/2+1>n`$, some vertex has incident edges with labels among the lowest $`\left(\genfrac{}{}{0pt}{}{n/21}{2}\right)+1`$ and among the highest $`\left(\genfrac{}{}{0pt}{}{n/2}{2}\right)+1`$. Therefore,
$$\begin{array}{cc}\hfill B^{}(K_n)& \left[\left(\genfrac{}{}{0pt}{}{n}{2}\right)\left(\genfrac{}{}{0pt}{}{n/2}{2}\right)\right]\left[\left(\genfrac{}{}{0pt}{}{n/21}{2}\right)+1\right]\hfill \\ & =(\frac{n}{2}1)(\frac{n}{2})+n11\hfill \\ & =\frac{n^2}{4}+\frac{n}{2}2\hfill \end{array}$$
Upper bound. To achieve the bound above, let $`X,Y`$ be the vertex partition with $`X=\{1,\mathrm{},n/2\}`$ and $`Y=\{n/2+1,\mathrm{},n\}`$. We assign the lowest $`\left(\genfrac{}{}{0pt}{}{n/2}{2}\right)`$ values to the edges within $`X`$. We use reverse lexicographic order, listing first the edges with higher vertex 2, then higher vertex 3, etc. We assign the highest $`\left(\genfrac{}{}{0pt}{}{n/2}{2}\right)`$ values to the edges within $`Y`$ by the symmetric procedure. Thus
$$\begin{array}{cccccccccccccccc}u& 1& 1& 2& 1& 2& 3& \mathrm{}& \mathrm{}& n3& n3& n3& n2& n2& n1& \\ v& 2& 3& 3& 4& 4& 4& \mathrm{}& \mathrm{}& n2& n1& n& n1& n& n& \\ f(uv)& 1& 2& 3& 4& 5& 6& \mathrm{}& \mathrm{}& \left(\genfrac{}{}{0pt}{}{n}{2}\right)5& & & & & \left(\genfrac{}{}{0pt}{}{n}{2}\right)& \end{array}$$
Note that the lowest label on an edge incident to vertex $`n/2`$ is $`1+\left(\genfrac{}{}{0pt}{}{n/21}{2}\right)`$.
The labels between these ranges are assigned to the “cross-edges” between $`X`$ and $`Y`$. The cross-edges involving the vertex $`n/2X`$ receive the highest of the central labels, and the cross-edges involving $`n/2+1Y`$ (but not $`n/2`$) receive the lowest of these labels. Since the highest cross-edge label is $`\left(\genfrac{}{}{0pt}{}{n}{2}\right)\left(\genfrac{}{}{0pt}{}{n/2}{2}\right)`$ and the lowest label of an edge incident to $`n/2`$ is $`1+\left(\genfrac{}{}{0pt}{}{n/21}{2}\right)`$, the maximum difference between labels on edges incident to $`n/2`$ is precisely the lower bound on $`B^{}(K_n)`$ computed above. This observation holds symmetrically for the edges incident to $`n/2+1`$.
$$\genfrac{}{}{0pt}{}{1}{2}\genfrac{}{}{0pt}{}{1}{3}\genfrac{}{}{0pt}{}{2}{3}\genfrac{}{}{0pt}{}{1}{4}\genfrac{}{}{0pt}{}{2}{4}\genfrac{}{}{0pt}{}{3}{4}\genfrac{}{}{0pt}{}{5}{1}\genfrac{}{}{0pt}{}{5}{2}\genfrac{}{}{0pt}{}{5}{3}\genfrac{}{}{0pt}{}{6}{1}\genfrac{}{}{0pt}{}{6}{2}\genfrac{}{}{0pt}{}{7}{1}\genfrac{}{}{0pt}{}{8}{1}\genfrac{}{}{0pt}{}{7}{2}\genfrac{}{}{0pt}{}{8}{2}\genfrac{}{}{0pt}{}{6}{3}\genfrac{}{}{0pt}{}{7}{3}\genfrac{}{}{0pt}{}{8}{3}\genfrac{}{}{0pt}{}{5}{4}\genfrac{}{}{0pt}{}{6}{4}\genfrac{}{}{0pt}{}{7}{4}\genfrac{}{}{0pt}{}{8}{4}\genfrac{}{}{0pt}{}{5}{6}\genfrac{}{}{0pt}{}{5}{7}\genfrac{}{}{0pt}{}{5}{8}\genfrac{}{}{0pt}{}{6}{7}\genfrac{}{}{0pt}{}{6}{8}\genfrac{}{}{0pt}{}{7}{8}$$
We now procede iteratively. On the high end of the remaining gap, we assign the values to the remaining edges incident to $`n/21`$. Then on the low end, we assign values to the remaining edges incident to $`n/2+2`$. We continue alternating between the top and the bottom, completing the edges incident to the more extreme labels as we approach the center of the numbering. We have illustrated the resulting order for $`K_8`$. Each time we insert the remaining edges incident to a vertex of $`X`$, the rightmost extreme moves toward the center at least as much from the previous extreme as the leftmost extreme moves toward the left. Thus the bound on the difference is maintained for the edges incident to each vertex. The observation is symmetric for edges incident to vertices of $`Y`$.
For equipartite complete bipartite graphs, we have a similar construction involving low vertices, high vertices, and cross-edges.
THEOREM 14. $`B^{}(K_{n,n})=\left(\genfrac{}{}{0pt}{}{n+1}{2}\right)1`$.
Proof: Lower bound. We use the boundary bound of Proposition 3 with $`k=n^2/4+1`$. Every set of $`k`$ edges is together incident to at least $`n+1`$ vertices, since a bipartite graph with $`n`$ vertices has at most $`k1`$ edges. Since $`K_{n,n}`$ has $`2n`$ vertices, at most $`(n1)^2/4`$ edges remain when these vertices are deleted. Thus when $`\left|F\right|=k`$, we have
$$B^{}(K_{n,n})\left|(F)\right|n^2\frac{(n1)^2}{4}\frac{n^2}{4}1=\left(\genfrac{}{}{0pt}{}{n+1}{2}\right)1.$$
We construct an ordering achieving this bound. Let $`X=\{x_1,\mathrm{},x_n\}`$ and $`Y=\{y_1,\mathrm{},y_n\}`$ be the partite sets. Order the vertices as $`L=x_1,y_1,\mathrm{},x_n,y_n`$. We alternately finish a vertex from the beginning of $`L`$ and a vertex from the end. When finishing a vertex from the beginning, we place its incident edges to vertices earlier in $`L`$ at the end of the initial portion of the numbering $`f`$ that has already been determined. When finishing a vertex from the end of $`L`$, we place its incident edges to vertices later in $`L`$ at the beginning of the terminal portion of $`f`$ that has been determined. We do not place an edge twice. When we have finished each vertex in each direction, we have placed all edges in the numbering. For example, this produces the following edge ordering for $`K_{6,6}`$:
$$\genfrac{}{}{0pt}{}{X}{Y}\genfrac{}{}{0pt}{}{1}{1}\genfrac{}{}{0pt}{}{2}{1}\genfrac{}{}{0pt}{}{1}{2}\genfrac{}{}{0pt}{}{2}{2}\genfrac{}{}{0pt}{}{3}{1}\genfrac{}{}{0pt}{}{3}{2}\genfrac{}{}{0pt}{}{1}{3}\genfrac{}{}{0pt}{}{2}{3}\genfrac{}{}{0pt}{}{3}{3}\genfrac{}{}{0pt}{}{4}{1}\genfrac{}{}{0pt}{}{4}{2}\genfrac{}{}{0pt}{}{4}{3}\genfrac{}{}{0pt}{}{1}{4}\genfrac{}{}{0pt}{}{2}{4}\genfrac{}{}{0pt}{}{5}{1}\genfrac{}{}{0pt}{}{1}{5}\genfrac{}{}{0pt}{}{1}{6}\genfrac{}{}{0pt}{}{6}{1}\genfrac{}{}{0pt}{}{2}{5}\genfrac{}{}{0pt}{}{2}{6}\genfrac{}{}{0pt}{}{5}{2}\genfrac{}{}{0pt}{}{6}{2}\genfrac{}{}{0pt}{}{3}{4}\genfrac{}{}{0pt}{}{3}{5}\genfrac{}{}{0pt}{}{3}{6}\genfrac{}{}{0pt}{}{5}{3}\genfrac{}{}{0pt}{}{6}{3}\genfrac{}{}{0pt}{}{4}{4}\genfrac{}{}{0pt}{}{4}{5}\genfrac{}{}{0pt}{}{4}{6}\genfrac{}{}{0pt}{}{5}{4}\genfrac{}{}{0pt}{}{6}{4}\genfrac{}{}{0pt}{}{5}{5}\genfrac{}{}{0pt}{}{5}{6}\genfrac{}{}{0pt}{}{6}{5}\genfrac{}{}{0pt}{}{6}{6}$$
It suffices to show that for the $`j`$th vertex $`v_jL`$, there are at least $`n^2\left(\genfrac{}{}{0pt}{}{n+1}{2}\right)=\left(\genfrac{}{}{0pt}{}{n}{2}\right)`$ edges that come before the first edge incident to $`v`$ or after the last edge incident to $`v`$. For $`j=n+1`$, there are exactly $`n^2/4`$ edges before the first appearance of $`v_j`$ and exactly $`(n1)^2/4`$ edges after its last appearance, which matches the argument in the lower bound. As $`j`$ decreases, the leftmost appearance of $`v_j`$ moves leftward no more quickly than the rightmost appearance; we omit the numerical details. The symmetric argument applies for $`jn`$.
References
S.F. Assmann, G.W. Peck, M.M. Sysło, and J. Zak, The bandwidth of caterpillars with hairs of length $`1`$ and $`2`$, SIAM J. Algeb. Disc. Meth. 2(1981), 387–393.
P.Z. Chinn, J. Chvátalová, A.K. Dewdney, and N.E. Gibbs, The bandwidth problem for graphs and matrices - a survey, J. Graph Theory 6(1982), 223–254.
F.R.K. Chung, Labelings of graphs, in Selected Topics in Graph Theory, III (L. Beineke and R. Wilson, eds.), (Academic Press 1988), 151–168.
J. Chvátalová, Optimal labelling of a product of two paths, Discrete Math. 11 (1975), 249–253.
J. Chvátalová and J.Opatrný, The bandwidth problem and operations on graphs, Discrete Math. 61 (1986), 141–150.
J. Chvátalová and J. Opatrný, The bandwidth of theta graphs, Utilitas Math. 33 (1988), 9–22.
D. Eichhorn, D. Mubayi, K. O’Bryant, and D.B. West, The edge-bandwidth of theta graphs (in preparation).
M.R. Garey, R.L. Graham, D.S. Johnson, and D.E. Knuth, Complexity results for bandwidth minimization. SIAM J. Appl. Math. 34(1978), 477–495.
L.H. Harper, Optimal assignments of numbers to vertices, J. Soc. Indust. Appl. Math. 12(1964), 131–135.
R. Hochberg, C. McDiarmid, and M. Saks, On the bandwidth of triangulated triangles, (Proc. 14th Brit. Comb. Conf. - Keele, 1993), Discrete Math. 138 (1995), 261–265.
L.T.Q. Hung, M.M. Sysło, M.L. Weaver, and D.B. West, Bandwidth and density for block graphs, Discrete Math. 189 (1998), 163–176.
D.J. Kleitman and R.V. Vohra, Computing the bandwidth of interval graphs, SIAM J. Discr. Math. 3(1990), 373–375.
J. H. Mai, The bandwidth of the graph formed by $`n`$ meridian lines on a sphere (Chinese, English summary), J. Math. Res. Exposition 3 (1983), 55–60.
Z. Miller, The bandwidth of caterpillar graphs, Proc. Southeastern Conf., Congressus Numerantium 33(1981), 235–252.
H.S. Moghadam, Compression operators and a solution to the bandwidth problem of the product of $`n`$ paths, PhD Thesis, University of California—Riverside (1983).
B. Monien, The bandwidth minimization problem for caterpillars with hair length 3 is NP-complete, SIAM J. Algeb. Disc. Meth. 7(1986), 505–512.
C.H. Papadimitriou, the NP-completeness of the bandwidth minimization problem. Computing 16(1976), 263–270.
G.W. Peck and A. Shastri, Bandwidth of theta graphs with short paths. Discrete Math. 103 (1992), 177–187.
L. Smithline, Bandwidth of the complete $`k`$-ary tree, Discrete Math. 142(1995), 203–212.
A.P. Sprague, An $`O(n\mathrm{log}n)`$ algorithm for bandwidth of interval graphs, SIAM J. Discr. Math. 7(1994), 213–220.
M.M. Sysło and J. Zak, The bandwidth problem: critical subgraphs and the solution for caterpillars, Annals Discr. Math. 16(1982), 281–286. See also Comp. Sci. Dept. Report CS-80-065, Washington State Univ. (1980).
J.F. Wang, D.B. West, and B. Yao, Maximum bandwidth under edge addition, J. Graph Theory 20(1995), 87–90.
|
no-problem/9904/hep-ph9904484.html
|
ar5iv
|
text
|
# Contents
## 1 Introduction
CPT symmetry has an impressive theoretical pedigree as an almost inescapable consequence of Lorentz invariant quantum field theories. Observation of a CP asymmetry is therefore usually seen as tantamount to the discovery of a violation of time reversal invariance T. However the experimental verification of CPT invariance is much less impressive. Furthermore the emergence of superstring theories has opened – by their fundamentally non-local structure – a theoretical backdoor through which CPT breaking might slip in. This asks for carefully analysing the empirical basis of CPT invariance and the degree to which an observable can establish T violation directly, i.e. without invoking the CPT theorem. In addressing this issue, we will rely on as few other theoretical principles as possible: since we view the observation of CPT violation as a rather exotic possibility, we believe we should accept other theoretical restrictions very reluctantly only.
Data from the CPLEAR collaboration have provided direct evidence for T violation. In this note we want to address the following questions:
* To which degree and in which sectors of $`\mathrm{\Delta }S0`$ dynamics is T violated?
* How accurately is the validity of CPT invariance established experimentally?
* Which conclusions can be drawn without invoking the Bell-Steinberger relation.
* Which is the most promising – or the least hopeless – observable for finding CPT violations in kaon decays?
The reader might wonder why we are insisting on analyzing T symmetry without assuming the Bell-Steinberger relation. After all, it is viewed as just a consequence of unitarity. Yet the following has to be kept in mind: when contemplating the possibility of CPT violation – a quite remote and exotic scenario – we should not consider the Bell-Steinberger relation sacrosanct. The latter is based on the assumption that all relevant decay channels are known. Since the major branching fractions have been measured with at best an error of 1%, some yet undetermined decay mode with a branching fraction of $`10^3`$ can easily be hidden . We are not arguing that this is a likely scenario – it is certainly not! However we do not view it to be more exotic than CPT violation. Then it does not make a lot of sense to us to allow for the latter while forbidding the former.
The paper will be organized as follows: after briefly reviewing the formalism relevant for $`K^0\overline{K}^0`$ oscillations in Sect. 2 we list the direct evidence for T being violated in Sect. 3; in Sect. 4 we analyse the phases of $`\eta _+`$ and $`\eta _{00}`$; after evaluating what can be learnt from $`K_L\pi ^+\pi ^{}e^+e^{}`$ in Sect. 5, we give our conclusions in Sect. 6.
## 2 Formalism
To introduce our notation and make the paper self-contained we shall record here the standard formalism for the neutral $`K`$ meson system.
### 2.1 $`\mathrm{\Delta }S=2`$ Transitions
The time dependence of the state $`\mathrm{\Psi }`$, which is a linear combination of $`K^0`$ and $`\overline{K}^0`$, is given by
$$i\mathrm{}\frac{}{t}\mathrm{\Psi }(t)=\mathrm{\Psi }(t),\mathrm{\Psi }(t)=\left(\begin{array}{c}K^0(t)\\ \overline{K}^0(t)\end{array}\right).$$
(1)
The $`2\times 2`$ matrix $``$ can be expressed through the identity and the Pauli matrices
$$𝐌\frac{i}{2}𝚪=E_1\sigma _1+E_2\sigma _2+E_3\sigma _3iD\mathrm{𝟏}.$$
(2)
with
$`E_1`$ $`=`$ $`\text{Re}M_{12}{\displaystyle \frac{i}{2}}\text{Re}\mathrm{\Gamma }_{12},E_2=\text{Im}M_{12}+{\displaystyle \frac{i}{2}}\text{Im}\mathrm{\Gamma }_{12}`$
$`E_3`$ $`=`$ $`{\displaystyle \frac{1}{2}}(M_{11}M_{22}){\displaystyle \frac{i}{4}}(\mathrm{\Gamma }_{11}\mathrm{\Gamma }_{22}),D={\displaystyle \frac{i}{2}}(M_{11}+M_{22})+{\displaystyle \frac{1}{4}}(\mathrm{\Gamma }_{11}+\mathrm{\Gamma }_{22}).`$ (3)
It is often convenient to use instead complex numbers $`E,\theta `$, and $`\varphi `$ defined by
$`E_1=E\mathrm{sin}\theta \mathrm{cos}\varphi ,E_2`$ $`=`$ $`E\mathrm{sin}\theta \mathrm{sin}\varphi ,E_3=E\mathrm{cos}\theta `$
$`E`$ $`=`$ $`\sqrt{E_1^2+E_2^2+E_3^2}.`$ (4)
The mass eigenstates are given by
$`|K_S`$ $`=`$ $`p_1|K^0+q_1|\overline{K}^0`$
$`|K_L`$ $`=`$ $`p_2|K^0q_2|\overline{K}^0`$ (5)
with the convention $`\mathrm{𝐂𝐏}|K^0=|\overline{K}^0`$ and
$`p_1`$ $`=`$ $`N_1\mathrm{cos}{\displaystyle \frac{\theta }{2}},q_1=N_1e^{i\varphi }\mathrm{sin}{\displaystyle \frac{\theta }{2}}`$
$`p_2`$ $`=`$ $`N_2\mathrm{sin}{\displaystyle \frac{\theta }{2}},q_2=N_2e^{i\varphi }\mathrm{cos}{\displaystyle \frac{\theta }{2}}`$
$`N_1`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{|\mathrm{cos}\frac{\theta }{2}|^2+|e^{i\varphi }\mathrm{sin}\frac{\theta }{2}|^2}}}`$
$`N_2`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{|\mathrm{sin}\frac{\theta }{2}|^2+|e^{i\varphi }\mathrm{cos}\frac{\theta }{2}|^2}}}.`$ (6)
The discrete symmetries impose the following constraints:
$`\mathrm{𝐂𝐏𝐓}\mathrm{or}\mathrm{𝐂𝐏}\mathrm{invariance}`$ $`\mathrm{cos}\theta =0,M_{11}=M_{22},\mathrm{\Gamma }_{11}=\mathrm{\Gamma }_{22}`$
$`\mathrm{𝐂𝐏}\mathrm{or}𝐓\mathrm{invariance}`$ $`\varphi =0,\text{Im}M_{12}=0=\mathrm{Im}\mathrm{\Gamma }_{12}`$ (7)
### 2.2 Nonleptonic Amplitudes
We write for the amplitudes describing decays into final states with isospin $`I`$:
$`T(K^0[\pi \pi ]_I)`$ $`=`$ $`A_Ie^{i\delta _I},`$
$`T(\overline{K}^0[\pi \pi ]_I)`$ $`=`$ $`\overline{A}_Ie^{i\delta _I}`$ (8)
where the strong phases $`\delta _I`$ have been factored out and find:
$`\mathrm{𝐂𝐏𝐓}\mathrm{invariance}`$ $`A_I=\overline{A}_I^{}`$
$`\mathrm{𝐂𝐏}\mathrm{invariance}`$ $`A_I=\overline{A}_I`$
$`𝐓\mathrm{invariance}`$ $`A_I=A_I^{}`$ (9)
The expressions for $`\eta _+`$ and $`\eta _{00}`$
$`\eta _+`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\mathrm{\Delta }_0{\displaystyle \frac{1}{\sqrt{2}}}\omega e^{i(\delta _2\delta _0)}(\mathrm{\Delta }_0\mathrm{\Delta }_2)\right),`$
$`\eta _{00}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\mathrm{\Delta }_0+\sqrt{2}\omega e^{i(\delta _2\delta _0)}(\mathrm{\Delta }_0\mathrm{\Delta }_2)\right),`$
$`\mathrm{\Delta }_I`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(1{\displaystyle \frac{q_2}{p_2}}{\displaystyle \frac{\overline{A}_I}{A_I}}\right),|\omega |\left|{\displaystyle \frac{A_2}{A_0}}\right|{\displaystyle \frac{1}{20}},`$ (10)
are valid irrespective of CPT symmetry.
### 2.3 Semileptonic Amplitudes
The general amplitudes for semileptonic $`K`$ decays can be expressed as follows:
$`l^+\nu \pi ^{}|_W|K^0`$ $`=`$ $`F_l(1y_l)`$
$`l^+\nu \pi ^{}|_W|\overline{K}^0`$ $`=`$ $`x_lF_l(1y_l)`$
$`l^{}\overline{\nu }\pi ^+|_W|K^0`$ $`=`$ $`\overline{x}_l^{}F_l^{}(1+y_l^{})`$
$`l^{}\overline{\nu }\pi ^+|_W|\overline{K}^0`$ $`=`$ $`F_l^{}(1+y_l^{}).`$ (11)
with the selection rules
| $`\mathrm{\Delta }S=\mathrm{\Delta }Q`$ rule: | $`x_l=\overline{x}_l=0`$ |
| --- | --- |
| CP invariance: | $`x_l=\overline{x}_l^{};F_l=F_l^{};y_l=y_l^{}`$ |
| T invariance: | $`\mathrm{Im}F=\mathrm{Im}y_l=\mathrm{Im}x_l=\mathrm{Im}\overline{x}_l=0`$ |
| CPT invariance: | $`y_l=0,x_l=\overline{x}_l`$. |
## 3 Direct Evidence for T Violation
The so-called Kabir test represents a quantity that probes T violation without reference to CPT symmetry:
$$A_𝐓\frac{\mathrm{\Gamma }(K^0\overline{K}^0)\mathrm{\Gamma }(\overline{K}^0K^0)}{\mathrm{\Gamma }(K^0\overline{K}^0)+\mathrm{\Gamma }(\overline{K}^0K^0)}$$
(12)
A nonvanishing $`A_𝐓`$ requires
$$M_{12}\frac{i}{2}\mathrm{\Gamma }_{12}M_{21}\frac{i}{2}\mathrm{\Gamma }_{21}.$$
(13)
which constitutes CP as well as T violation. Associated production flavor-tags the initial kaon. The flavor of the final kaon is inferred from semileptonic decays; i.e., we measure the CP asymmetry
$$A_{\mathrm{𝐂𝐏}}\frac{\mathrm{\Gamma }(Kl^{}\nu \pi ^+)\mathrm{\Gamma }(\overline{K}l^+\nu \pi ^{})}{\mathrm{\Gamma }(Kl^{}\nu \pi ^+)+\mathrm{\Gamma }(\overline{K}l^+\nu \pi ^{})}$$
(14)
Yet a violation of CPT invariance and/or of the $`\mathrm{\Delta }S=\mathrm{\Delta }Q`$ rule can produce an asymmetry in the latter – $`A_{\mathrm{𝐂𝐏}}0`$ – without one being present in the former – $`A_𝐓=0`$. These issues have to be tackled first. There is nothing new in our remarks on this subject; we add them for clarity and completeness.
Analysing the asymmetries in $`\mathrm{\Gamma }(\overline{K}^0(t)l^+\nu K^{})`$ vs. $`\mathrm{\Gamma }(K^0(t)l^{}\overline{\nu }K^+)`$ and $`\mathrm{\Gamma }(\overline{K}^0(t)l^{}\overline{\nu }K^+)`$ vs. $`\mathrm{\Gamma }(K^0(t)l^+\nu K^{})`$ for large times $`t`$ CPLEAR has found
$$\text{Re}\mathrm{cos}\theta =(6.0\pm 6.6\pm 1.2)\times 10^4.$$
(15)
¿From the decay rate evolution they have inferred
$`\text{Im}\mathrm{cos}\theta =(3.0\pm 4.6\pm 0.6)\times 10^2,`$
$`{\displaystyle \frac{1}{2}}\text{Re}(x_l\overline{x}_l)=(0.2\pm 1.3\pm 0.3)\times 10^2,`$
$`{\displaystyle \frac{1}{2}}\text{Im}(x_l+\overline{x}_l)=(1.2\pm 2.2\pm 0.3)\times 10^2.`$ (16)
While there is no sign of CPT violation in any of these observables, the bounds of Eq.(16) are not overly restrictive.
Another input is provided by the charge asymmetry in semileptonic $`K_L`$ decays for which the general expression reads as follows:
$`\delta _{\mathrm{Lept}}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Gamma }(K_Ll^+\nu \pi ^{})\mathrm{\Gamma }(K_Ll^{}\nu \pi ^+)}{\mathrm{\Gamma }(K_Ll^+\nu \pi ^{})+\mathrm{\Gamma }(K_Ll^{}\nu \pi ^+)}}`$ (17)
$`=`$ $`\mathrm{Im}\varphi \mathrm{Re}\mathrm{cos}\theta \mathrm{Re}(x_l\overline{x}_l)2\mathrm{R}\mathrm{e}y_l.`$
CPT violation, if it exists, is most likely to surface in $`M_{12}`$, which is of second order in the weak interactions. It is then natural to assume semileptonic decay amplitudes to conserve CPT , which is fully consistent with Eq.(16), but not confirmed to the required level:
$$x_l\overline{x}_l=0,\mathrm{or}y_l=0.$$
(18)
With this assumption, and from the data
$$\delta _{\mathrm{Lept}}=(3.27\pm 0.12)\times 10^3.$$
(19)
one obtains
$$\mathrm{Im}\varphi \mathrm{Re}\mathrm{cos}\theta =(3.27\pm 0.12)\times 10^3$$
(20)
and infers from Eq.(15)
$$\text{Im}\varphi =(3.9\pm 0.7)\times 10^3,$$
(21)
showing that T is violated in kaon dynamics.
This result can be stated more concisely as follows :
$$A_TA_{\mathrm{𝐂𝐏}}=(6.6\pm 1.3\pm 1.0)\times 10^3.$$
(22)
In order to get a result independent of the assumption that direct semileptonic kaon decays obey CPT symmetry, the CPLEAR collaboration has employed constraints from the Bell-Steinberger relation to deduce the bound
$$\frac{1}{2}\mathrm{Re}(x_l\overline{x}_l)\mathrm{Re}y_l=(0.4\pm 0.6)\times 10^3,$$
(23)
which again is fully consistent with CPT invariance of the semileptonic decays. This results in establishing violation of T symmetry – provided the assumption mentioned above is valid.
## 4 Phases of $`\eta _+`$ & $`\eta _{00}`$ and CPT
### 4.1 Basic Expressions
Manipulating Eq.(10) we obtain through $`𝒪(\varphi )`$ and $`𝒪`$(cos$`\theta `$)
$$|\eta _+|\frac{\mathrm{\Delta }\mathrm{\Phi }}{\mathrm{sin}\varphi _{SW}}=\left(\frac{M_{\overline{K}}M_K}{2\mathrm{\Delta }M}+R_{direct}\right)$$
(24)
$`\mathrm{\Delta }\mathrm{\Phi }`$ $``$ $`{\displaystyle \frac{2}{3}}\varphi _++{\displaystyle \frac{1}{3}}\varphi _{00}\varphi _{SW}`$
$`R_{direct}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\text{Re}r_A{\displaystyle \frac{ie^{i\varphi _{SW}}}{\mathrm{sin}\varphi _{SW}}}{\displaystyle \underset{f[2\pi ]_0}{}}ϵ(f)`$
$`r_A`$ $``$ $`{\displaystyle \frac{\overline{A}_0}{A_0}}1,\varphi _{SW}\mathrm{tan}^1{\displaystyle \frac{2\mathrm{\Delta }M}{\mathrm{\Delta }\mathrm{\Gamma }}}`$
$`ϵ(f)`$ $`=`$ $`e^{i\varphi _{SW}}i\mathrm{cos}\varphi _{SW}{\displaystyle \frac{\text{Im}\mathrm{\Gamma }_{12}(f)}{\mathrm{\Delta }\mathrm{\Gamma }}}.`$ (25)
Since CPT symmetry predicts $`M_K=M_{\overline{K}}`$ and $`\text{Re}r_A=𝒪(\xi _0^2)`$, where $`\xi _0=\mathrm{arg}A_0`$, it implies $`|\mathrm{\Delta }\mathrm{\Phi }|=0`$ to within the uncertainty given by $`|_{f[2\pi ]_0}ϵ(f)|`$; the latter sum thus represents the theoretical ‘noise’.
### 4.2 Estimating $`ϵ(f)`$
The major kaon decay modes fall into two classes, namely flavor-nonspecific or flavor-specific channels.
* With $`A_f=f|H_W|K^0`$ and $`\overline{A}_f=f|H_W|\overline{K}^0`$, we have, to first order in CP violation,
$$\text{Im}\mathrm{\Gamma }_{12}(f)=i\eta _f\mathrm{\Gamma }(Kf)\left(1\eta _f\frac{\overline{A}_f}{A_f}\right),$$
(26)
for CP eigenstates with eigenvalue $`\eta _f`$. Im $`\mathrm{\Gamma }_{12}(f)0`$ can hold only if $`\overline{A}_f\eta _fA_f`$, i.e. if there is direct CP violation in the channel $`f`$.
Using data on $`ϵ^{}`$, Br($`K_{L,S}3\pi `$) and
$$\mathrm{Im}\eta _{+0}=\left(2\pm 9\genfrac{}{}{0pt}{}{+2}{1}\right)\times 10^3,\text{Im}\eta _{000}=0.07\pm 0.16\text{[10, 11]},$$
(27)
where
$$\eta _{+0,000}\frac{1}{2}\left(1+\frac{q_1}{p_1}\frac{\overline{A}(\pi ^+\pi ^{}\pi ^0,3\pi ^0)}{A(\pi ^+\pi ^{}\pi ^0,3\pi ^0)}\right),$$
(28)
we obtain
$`|ϵ(3\pi ^0)|`$ $`<`$ $`1.1\times 10^4`$
$`|ϵ([2\pi ]_2)|`$ $``$ $`0.28\times 10^6`$
$`|ϵ((\pi ^+\pi ^{}\pi ^0)_{\mathrm{𝐂𝐏}[+]})|`$ $`<`$ $`5[0.2]\times 10^6,`$ (29)
* Allowing for a violation of the $`\mathrm{\Delta }Q=\mathrm{\Delta }S`$ rule in semileptonic decays as expressed by $`x_l\frac{l^+\nu \pi ^{}|_W|\overline{K}}{l^+\nu \pi ^{}|_W|K}`$, we find
$$|ϵ(\pi l\nu )|4\times 10^7.$$
(30)
### 4.3 Quantifying CPT Tests
With the measured values for the phases $`\varphi _+,\varphi _{00}\varphi _+`$, and $`\varphi _{SW}`$ we arrive at a result quite consistent with zero :
$$\mathrm{\Delta }\mathrm{\Phi }=0.01^o\pm 0.7^o|_{exp.}\pm 1.5^o|_{theor.},$$
(31)
i.e., the phases $`\varphi _+`$ and $`\varphi _{00}`$ agree with their CPT prescribed values to within $`2^o`$ . CPT invariance is thus probed to about the $`\delta \varphi /\varphi _{SW}5\%`$ level. The relationship between $`\varphi _+`$, $`\varphi _{00}`$ on one side and $`\varphi _{SW}`$ on the other is a truly meaningful gauge; yet the numerical accuracy of that test is not overwhelming. The theoretical error can be reduced significantly by making quite reasonable assumptions on CP violation; however, we refrain from doing so based on our belief that assuming observable CPT breaking is not very reasonable to start with.
In Eq.(31), the theoretical uncertainty $`_fϵ(f)`$ provides the limiting factor for this test; it is dominated by $`K3\pi ^0`$. Future experiments could reduce the uncertainty by a factor of up to two .
Alternatively we can state
$$\frac{M_{\overline{K}}M_K}{2\mathrm{\Delta }M}+\frac{1}{2}\mathrm{Re}r_A=\left(0.06\pm 4.0|_{exp}\pm 9|_{theor}\right)\times 10^5.$$
(32)
Yet $`\mathrm{\Delta }M`$ does not provide a meaningful calibrator; for it arises basically unchanged even if CP were conserved while the latter would imply $`M_{\overline{K}}M_K=0`$ and $`r_A=0`$ irrespective of CPT breaking.
The often quoted truly spectacular bound (for $`R_{direct}=0`$)
$$\frac{M_{\overline{K}}M_K}{M_K}=(0.08\pm 5.3|_{exp})\times 10^{19}$$
(33)
definitely overstates the numerical degree to which CPT invariance has been probed. $`M_K`$ is not generated by weak interactions and thus cannot serve as a meaningful yardstick.
In summary: while no hint has has found indicating a limitation to CPT symmetry, the experimental evidence for it is far from overwhelming:
* Comparing the phases of $`\eta _+`$ and $`\eta _{00}`$ with the superweak phase constitutes a meaningful test of CPT symmetry. Yet there is a ‘noise’ level of about $`2^o`$ that cannot be reduced significantly .
* Relating the bound on the difference $`|M_{\overline{K}}M_K|`$ to the kaon mass itself is extremely impressive numerically – yet meaningless.
* When entertaining the idea of CPT violation, we should not limit our curiosity to a single quantity like $`\mathrm{\Delta }\mathrm{\Phi }`$ (or equivalently $`M_{\overline{K}}M_K`$).
* Finally, the reader should be reminded that CPT symmetry implies $`\mathrm{\Delta }\mathrm{\Phi }\varphi _{SW}`$ but the converse does not follow.
## 5 Consequences in a T Conserving World
### 5.1 Reproducing $`\eta _+`$
Assuming nature to conserve T, which implies $`\varphi =0`$, see Eq.(7), we have:
$`{\displaystyle \frac{|\eta _+|\mathrm{\Delta }\mathrm{\Phi }}{\mathrm{sin}\varphi _{SW}}}={\displaystyle \frac{M_{11}M_{22}}{2\mathrm{\Delta }M}}+{\displaystyle \frac{1}{2}}r_A,`$
$`{\displaystyle \frac{|\eta _+|}{\mathrm{cos}\varphi _{SW}}}={\displaystyle \frac{\mathrm{\Gamma }_{11}\mathrm{\Gamma }_{22}}{4\mathrm{\Delta }M}}\mathrm{tg}\varphi _{SW}{\displaystyle \frac{1}{2}}r_A.`$
$`\text{Re}\mathrm{cos}\theta ={\displaystyle \frac{M_{11}M_{22}}{\mathrm{\Delta }M}}\mathrm{sin}^2\varphi _{SW}+{\displaystyle \frac{1}{2}}{\displaystyle \frac{\mathrm{\Gamma }_{11}\mathrm{\Gamma }_{22}}{\mathrm{\Delta }M}}\mathrm{sin}\varphi _{SW}\mathrm{cos}\varphi _{SW}.`$ (34)
Inserting the values of $`\eta _+`$, $`\varphi _{SW}`$ and Eq.(15) we can solve for the three unknowns:
$`{\displaystyle \frac{M_{11}M_{22}}{\mathrm{\Delta }M}}`$ $``$ $`r_A(3.9\pm 0.7)\times 10^3`$
$`{\displaystyle \frac{\mathrm{\Gamma }_{11}\mathrm{\Gamma }_{22}}{\mathrm{\Delta }M}}`$ $``$ $`(5.01.4)\times 10^3.`$ (35)
The solution is very unnatural – Eq.(35), for example, requires cancellation between CPT violating $`\mathrm{\Delta }S=1`$ and 2 amplitudes. Yet however unnatural they may be, we must entertain this possibility unless we can exclude it empirically.
As a side remark, we mention that if we invoke the Bell-Steinberger relation in its usual form – meaning that kaon decays are effectively saturated by the $`K2\pi ,\mathrm{\hspace{0.17em}3}\pi ,l\nu \pi `$ channels, then we have an additional relation :
$$\frac{1}{2}r_A\frac{\mathrm{\Gamma }_{11}\mathrm{\Gamma }_{22}}{4\mathrm{\Delta }M};$$
(36)
i.e, Eq.(34) then implies $`\eta _+0`$. This is not surprising since these known modes do not exhibit any sign of CPT violation. But, as we have remarked before, in testing CPT we want to stay away from invoking saturation by the known channels.
### 5.2 $`K\pi \pi `$
Where should such a large CPT violation show its face? Imposing $`r_A0`$ raises the prospects of unacceptably large direct CP violation in $`K_L\pi \pi `$. Eq.(10) can be reexpressed as follows:
$$ϵ\frac{1}{\sqrt{1+(\frac{\mathrm{\Delta }\mathrm{\Gamma }}{2\mathrm{\Delta }M})^2}}e^{i\varphi _{SW}}\left(\frac{\mathrm{Im}M_{12}}{\mathrm{\Delta }M_K}+\xi _0\right)$$
(37)
$$ϵ^{}=\frac{1}{2\sqrt{2}}\omega e^{i(\delta _2\delta _0)}\frac{q_2}{p_2}\left(\frac{\overline{A}_0}{A_0}\frac{\overline{A}_2}{A_2}\right)$$
(38)
If T is conserved, $`\frac{q_2}{p_2}\left(\frac{\overline{A}_0}{A_0}\frac{\overline{A}_2}{A_2}\right)`$ is real and Eq.(38) then tells us
$$\mathrm{arg}\left(\frac{ϵ^{}}{ϵ}\right)=\delta _2\delta _0\varphi _{SW}(85.5\pm 4)^{}.$$
(39)
Therefore
$`\mathrm{Re}{\displaystyle \frac{ϵ^{}}{ϵ}}`$ $``$ $`\mathrm{cos}(\delta _2\delta _0\varphi _{SW}){\displaystyle \frac{|\omega |}{2\sqrt{2}|\eta _+|}}|\mathrm{\Delta }_0\mathrm{\Delta }_2|`$ (40)
$`=`$ $`0.035\left(0.087_{0.078}^{+0.061}\right)\left|{\displaystyle \frac{r_A^{}}{r_A}}1\right|=\left(3.0_{2.7}^{+2.2}\right)10^3\left|{\displaystyle \frac{r_A^{}}{r_A}}1\right|`$
where
$$r_A^{}\frac{\overline{A}_2}{A_2}1$$
(41)
Some remarkable features can be read off from this expression:
* For
$$\delta _2\delta _0\varphi _{SW}=90^{}$$
(42)
which is still allowed by the data, one obtains
$$\mathrm{Re}\frac{ϵ^{}}{ϵ}=0.$$
(43)
As far as $`K\pi \pi `$ is concerned this amounts to a superweak scenario!
* The empirical landscape of CP violation has changed qualitatively: KTeV, confirming earlier observations of NA 31, has conclusively established the existence of direct CP violation :
$$\mathrm{Re}\frac{ϵ^{}}{ϵ}=\left(2.80\pm 0.30\pm 0.28\right)10^3$$
(44)
Including previous data and preliminary results from NA 48 one arrives at a world average of
$$\mathrm{Re}\frac{ϵ^{}}{ϵ}=\left(2.12\pm 0.28\right)10^3$$
(45)
This can be reproduced with a ‘canonical’ $`r_A^{}=0`$, but only for a very narrow slice in the phase of $`ϵ^{}/ϵ`$, namely
$$\delta _2\delta _0\varphi _{SW}(86.5\pm 0.5)^{}.$$
(46)
* The dominant uncertainty here enters through the phase shifts $`\delta _{0,2}`$. If $`\delta _2\delta _0\varphi _{SW}`$ falls outside the range of Eq.(46), then $`r_A^{}0`$ is needed to reproduce Re$`(ϵ^{}/ϵ)`$. As an illustration consider $`\delta _2\delta _0\varphi _{SW}=80^{}`$. In that case $`1/2r_A^{}/r_A5/6`$ had to hold to obtain $`110^3\mathrm{Re}(ϵ^{}/ϵ)310^3`$. Hence $`r_A^{}(2÷4)10^3`$. More generally if
$$\delta _2\delta _0\varphi _{SW}83^{}$$
(47)
then the observed value of Re$`(ϵ^{}/ϵ)`$ would imply
$$r_A^{}10^3$$
(48)
if T is conserved.
* This would have a dramatic impact on $`K^\pm \pi ^\pm \pi ^0`$ decays. For Eq.(48) implies a sizeable CPT asymmetry there
$$\frac{\mathrm{\Gamma }(K^+\pi ^+\pi ^0)\mathrm{\Gamma }(K^{}\pi ^{}\pi ^0)}{\mathrm{\Gamma }(K^+\pi ^+\pi ^0)+\mathrm{\Gamma }(K^{}\pi ^{}\pi ^0)}>10^3$$
(49)
With CPT symmetry we predict here a direct CP asymmetry of at most $`𝒪(10^6)`$ due to electromagnetic corrections. Thirty year old data yield $`(0.8\pm 1.2)10^2`$. Upcoming experiments will produce a much better measurement.
### 5.3 $`K_L\pi ^+\pi ^{}e^+e^{}`$
If the photon polarization $`\stackrel{}{ϵ}_\gamma `$ in $`K_L\pi ^+\pi ^{}\gamma `$ were measured, we could form the CP and T odd correlation $`P_{}^\gamma \stackrel{}{ϵ}_\gamma (\stackrel{}{p}_{\pi ^+}\times \stackrel{}{p}_\pi ^{})`$. A more practical realization of this idea is to analyze $`K_L\pi ^+\pi ^{}e^+e^{}`$ which proceeds like $`K_L\pi ^+\pi ^{}\gamma ^{}\pi ^+\pi ^{}e^+e^{}`$. It allows to determine a CP and T odd moment $`A`$ related to $`P_{}^\gamma `$ by measuring the correlation between the $`\pi ^+\pi ^{}`$ and $`e^+e^{}`$ planes. This effect was predicted to be
$$A=(14.3\pm 1.3)\%$$
(50)
and observed by KTeV :
$$A=(13.6\pm 2.5\pm 1.2)\%$$
(51)
It is mainly due to the interference between the bremsstrahlung process $`K_LK_{\mathrm{𝐂𝐏}+}\pi ^+\pi ^{}\pi ^+\pi ^{}\gamma ^{}`$ and a one-step M1 reaction $`K_L\pi ^+\pi ^{}\gamma ^{}`$. The former is CP violating and described by $`\eta _+`$ irrespective of the theory underlying CP violation.
It is a remarkable measurement since it has revealed a huge CP asymmetry in a rare channel that had not been observed before. While T odd correlations have been seen before in production processes and in nonleptonic hyperon decays, those – due to their sheer magnitude – had to be blamed on final state interactions; such an explanation turned out to be consistent with what we know about those. The quantity $`A`$ on the other hand is a T odd correlation sui generis since it has a chance to be generated by microscopic T violation.
Yet the most intriguing question is what does this measurement teach us about T violation without reference to CPT symmetry? The answer is: Nothing really! For we have just shown – by giving a concrete example – that if we are sufficiently determined we can dial CPT violation in such a way that both the modulus and phase of $`\eta _+`$ are reproduced even with T invariant dynamics, and it is $`\eta _+`$ that controls $`A`$.
#### 5.3.1 A Comment on the Intricacies of Final State Interactions
It is well-known that a non-vanishing T-odd correlation does not necessarily establish T violation since final state interactions can induce it even if T is conserved. Yet even so the reader might be surprised by our findings that a value of $`A`$ as large as 10% does not establish T violation. For it would be tempting to argue that in the case at hand final state interactions could not induce an effect even within an order of magnitude of the observed size. The argument might proceed as follows: $`A`$ reflects the correlation between the $`\pi ^+\pi ^{}`$ and the $`e^+e^{}`$ planes; their relative orientation can be affected by final state interactions – but only of the electromagnetic variety; then $`A1\%`$ could not arise.
If nothing else, our brute force scenario shows that such an argument is fallacious. This can be seen also more directly. As stated above there are two different contributions to $`K_L\pi ^+\pi ^{}e^+e^{}`$, namely the M1 amplitude which is CP neutral, and the bremsstrahlung one due to the presence of CP violation. One should note that the presence of the this second amplitude requires neither T violation nor final state interactions!
Let us assume for the moment that arg $`\eta _+=0`$ were to hold. Ignoring final state interactions both in the M1 and the bremsstrahlung amplitudes one obtains $`A=0`$, since the former is imaginary and the latter real now. When the final state interactions are switched back on, they affect the two amplitudes differently. Interference can take place, and one finds (with arg $`\eta _+=0`$) $`A8\%`$. How can the orientation of the $`\pi ^+\pi ^{}`$ and the $`e^+e^{}`$ planes get shifted so much by strong final state interactions? The fallacy of the intuitive argument sketched above derives from its purely classical nature. In quantum mechanics it is not surprising at all that phase shifts between coherent amplitudes change angular correlations.
## 6 Summary
In this note we have listed the information we can infer on T and CPT invariance from the data on kaon decays. Our reasoning was guided by the conviction that once we contemplate CPT breaking the notion of a reasonable or natural assumption starts to resemble an oxymoron.
Our findings can be summarized as follows:
* The presence of T violation in $`\mathrm{\Delta }S0`$ dynamics has been shown without invoking CPT symmetry through the Kabir test performed by CPLEAR. Yet their analysis had to assume semileptonic kaon decays to be CPT symmetric or it had to impose the Bell-Steinberger relation in its conventional form. We do not view either assumption as qualitatively more sacrosanct than CPT symmetry.
* $`\varphi _{+,00}`$ lie within $`2^o`$ of what is expected from CPT symmetry.
* A meaningful yardstick for calibrating bounds on limitations to CPT symmetry is provided by CP asymmetries. CPT breaking forces could – empirically – still be as large as few percent of CP violating forces.
* It is grossly misleading to calibrate the bound on $`M_\frac{}{K}M_K`$ inferred from $`\varphi _+`$, $`\varphi _{00}`$ and $`\varphi _{SW}`$ to the kaon mass.
* The measured values of $`\eta _+`$ and $`\eta _{00}`$ provide us with little information on the level of T versus CPT violation. More specifically $`\eta _+`$ – both its modulus as well as its phase – can be reproduced with T invariant dynamics (unless one imposes the Bell-Steinberger relation):
+ This is achieved by carefully adjusting CPT violation in $`\mathrm{\Delta }S=1\&2`$ transitions.
+ The observed level of direct CP violation – $`\eta _+\eta _{00}`$ – is not a natural consequence of such a scenario. However it could arise due to a fine-tuning of $`\delta _2\delta _0\varphi _{SW}`$ – which had to be viewed as completely accidental – or to a compensation of direct CPT violation in $`K_L[\pi \pi ]_0`$ and $`K_L[\pi \pi ]_2`$.
+ In the latter subscenario one is stuck with a CPT asymmetry in $`K^\pm \pi ^\pm \pi ^0`$ that could be up to few$`\times 10^3`$ without upsetting any known empirical bound.
+ The KTeV observation of a large CP and T odd correlation in $`K_L\pi ^+\pi ^{}e^+e^{}`$ in agreement with theoretical predictions is highly intriguing, yet does not constitute an unequivocal signal for T violation. This has also been noted before using a different line of reasoning.
* We are fully aware that our construction is purely ad-hoc without any redeeming theoretical feature. Nevertheless we do not view it as l’art pour l’art (or more appropriately non-art pour non-art):
+ We have shown by constructing an explicit counter-example that the T odd correlation observed in $`K_L\pi ^+\pi ^{}e^+e^{}`$ does not establish T violation without invoking the CPT theorem.
+ As a by-product we have found that $`K^\pm \pi ^\pm \pi ^0`$ could exhibit a CPT asymmetry large enough to become observable soon.
Finally we would like to add the remark that even negative searches for CPT violation in kaon transitions will not free us from the obligation to probe for such effects in beauty meson decays at the $`B`$ factories.
Acknowledgements We thank Y. Nagashima for discussions. We are grateful to P. Bloch and his colleagues for pointing out several relevant errors and omissions in the original version of this paper. The work of I.I.B. has been supported by the NSF under the grant PHY 96-0508 and that of A.I.S. by Grant-in-Aid for Special Project Research (Physics of CP violation).
|
no-problem/9904/astro-ph9904176.html
|
ar5iv
|
text
|
# IRAS 03313+6058: an AGB star with 30 micron emission Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with participation of ISAS and NASA
## 1 Introduction
The 30 micron emission was first observed in the bright AGB star, IRC$`+`$10216 (Low et al 1973). Observations of several carbon stars by Goebel & Moseley (1985) found that the emission also occurs in AFGL 3068. Recent ISO observations of some carbon stars turned up a few more candidates: AFGL 2256, AFGL 2155 and IRC$`+`$40540 (Yamamura et al. 1998). This feature is seen not only in extreme C–rich AGB stars, but also in C–rich proto–planetary nebulae (PPNe; called also post–AGB objects) and planetary nebulae (PNe). Omont et al. (1995) observed five C–rich PPN objects with an unidentified emission feature at 21 micron in their IRAS LRS spectra (Kwok et al. 1989) and detected the 30 micron feature in all of them. In the case of PNe a similar feature is observed for IC 418 and NGC 6752 (Forrest et al. 1981). A few suggestions for the carrier of this emission have been proposed. Because the feature has never been detected in O–rich objects, and is the broadest known feature in the mid–infrared, solid species that form in the absence of oxides are taken into consideration. Omont et al. (1995) suggested iron atoms bound to PAH molecules as a possible emitter for the 30 micron feature. But Chan et al. (1997) raise doubt about this suggestion after the detection of this feature in direction of the Galactic center, where the existence of PAH molecules is questionable. A more acceptable candidate is solid magnesium sulfide (MgS), first suggested by Goebel & Moseley (1985). The laboratory spectra of MgS samples showed very good agreement of band turn–on and cut–off wavelengths, as well as the overall band shape, with the observed feature seen in AFGL 3068 and IRC$`+`$10216. A reasonable fit was achieved to the 30 micron feature of the PPN object IRAS 22272$`+`$5435 (Szczerba et al. 1997) by means of MgS grains with a distribution of shapes. In addition, MgS is one of the molecules which condensate at low temperature when no oxides are present. This is consistent with the detection of the 30 micron feature only in objects with cold dust shells.
IRAS 03313$`+`$6058 was classified as a candidate extreme carbon star based on the similarity of its IRAS LRS spectrum to those of AFGL 3068 and IRC$`+`$10216 (Volk et al. 1992). It has no optical counterpart in the POSS plates (Jiang & Hu 1992). In the near infrared, the source was detected at K$`=`$15.6 mag and was not detected in the J and H bands, with upper limits of magnitudes 17 and 16, respectively (Jiang et al. 1997). The color index between 12 and 25 micron, based on the IRAS PSC catalogue, indicates a color temperature of about 250 K. Therefore, this is an object with a cold and optically thick circumstellar envelope. Though the IRAS LRS type of this object is 22, no clear silicate feature is seen (note that Kwok et al. 1997 classified the LRS spectrum of this object as unusual). The detection of the HCN (1–0) line (Omont et al. 1993) indicated the possibility of a C–rich nature. O–rich maser lines such as OH (Le Squeren et al. 1992, Galt et al. 1989), H<sub>2</sub>O (Wouterloot et al. 1993) or SiO (Jiang et al. 1996) have not been detected at all, and this is an indirect indication of a C–rich circumstellar envelope. The CO (2–1) line profile suggests that this object is a late–type star rather than a young stellar object, and gives an expansion velocity of the shell of 13.9 km/s (Volk et al. 1993).
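As an aside, the quoted color temperature can be reproduced with a simple blackbody flux ratio; the short Python sketch below is our own illustration (it neglects IRAS color corrections and any dust emissivity law, so it is only indicative):

```python
# Solve for the blackbody temperature reproducing the IRAS 25/12 micron flux
# ratio of IRAS 03313+6058 (F12 = 30.87 Jy, F25 = 43.44 Jy; see Sect. 3.1).
import math

H_K = 1.4388e4                    # h*c/k in micron*K
F12, F25 = 30.87, 43.44           # IRAS PSC fluxes in Jy

def bnu(lam_um, T):
    """Planck B_nu up to a constant factor, with lam in microns."""
    return (1.0 / lam_um) ** 3 / math.expm1(H_K / (lam_um * T))

target = F25 / F12                # observed flux ratio, ~1.41
lo, hi = 50.0, 2000.0             # bracketing temperatures in K
for _ in range(60):               # bisection; the ratio decreases with T
    mid = 0.5 * (lo + hi)
    if bnu(25.0, mid) / bnu(12.0, mid) > target:
        lo = mid                  # too cold, ratio too large
    else:
        hi = mid
print(f"color temperature ~ {0.5 * (lo + hi):.0f} K")  # close to the ~250 K quoted
```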
## 2 Observation and data reduction
The spectroscopic observation was carried out using the SWS spectrometer (de Graauw et al. 1996) of the Infrared Space Observatory (ISO) satellite on 31 July 1997, with the fastest scan speed covering the full wavelength range from 2.3 $`\mu `$m to 45 $`\mu `$m (AOT 01, speed 1). The achieved resolution is about 200–300, with S/N higher than 100 at wavelengths longer than 5 $`\mu `$m.
The original pipeline data were corrected for dark current, up–down scan difference, flat–field and flux calibration by using the SWS Interactive Analysis (IA) at MPE Garching <sup>1</sup><sup>1</sup>1We acknowledge support from the ISO Spectrometer Data Centre at MPE Garching, funded by DARA under grant 50 QI 9402 3. The deglitching and averaging to equidistant spectral points in wavelength across the scans and detectors was done using the ISAP package.
The ISOCAM imaging was performed on 31 July 1997 with the CAM camera in the mode AOT 01 with the filter LW3, centered at 15 micron. The data were reduced by using CIA (a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matiere, C.E.A., France; Cesarsky et al. 1996). The object is still point-like at this angular resolution of 1.5 arcsec/pixel. The flux through the filter LW3 is 59.03 Jy as measured by the aperture photometry method, about twice the 12 $`\mu `$m flux (30.87 Jy) given by the IRAS PSC catalogue. This difference could be caused by the strong emission in the mid–infrared range of the spectrum or by the object being variable. However, the flux ($`\sim `$39 Jy) at 15 micron of the SWS spectrum, which was taken on the same day as the ISOCAM image, does not match the photometric result from the ISOCAM image, while it is in rough agreement with the 12 $`\mu `$m IRAS flux. By calculating the IRAS fluxes at 12 and 25 micron from the SWS spectrum, correction factors of 1.11 and 1.06 should be applied to the corresponding SWS bands, respectively, to agree with the photometric data from the IRAS PSC. After correction, the spectrum is smooth and the shape is similar to its IRAS LRS spectrum. At the same time, the IRAS LRS spectrum should be multiplied by a factor of 1.27 to agree with the IRAS PSC data. These factors are within the calibration uncertainties of the ISO–SWS and the IRAS LRS. Such agreement may show that the flux calibration of ISO–SWS is reliable, and that the object is non–variable (the variability index from the IRAS observation is 50, at the border between variable and nonvariable indices). Therefore, we suspect that the discrepancy between the ISO–SWS and ISOCAM results is related to the uncertainties in the flux calibration of the image in the LW3 band, but the reason for this is not clear to us. Nevertheless, this ISOCAM result is shown in Fig. 1 as an open circle.
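The cross-check between spectroscopy and photometry described above amounts to synthetic photometry; a minimal sketch (our own, with an idealised top-hat band in place of the true IRAS/ISOCAM relative spectral response curves) is:

```python
# Band-averaged synthetic flux from a spectrum, and the resulting correction
# factor against a catalogue flux.  The top-hat band limits are placeholders.
import numpy as np

def synthetic_flux(wave_um, f_nu_jy, band=(8.5, 15.0)):
    sel = (wave_um >= band[0]) & (wave_um <= band[1])
    return np.trapz(f_nu_jy[sel], wave_um[sel]) / (band[1] - band[0])

# With (wave, flux) holding the SWS spectrum and 30.87 Jy the IRAS 12 um flux:
# factor = 30.87 / synthetic_flux(wave, flux)   # ~1.11 according to the text
```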
## 3 Modeling
### 3.1 Spectral Energy Distribution
The overall spectral energy distribution of this object is observationally determined from the near infrared to the far infrared. The K band magnitude at 2.2 micron is 15.6 (Jiang et al. 1997), i.e., the flux is 3.7$`\times `$10<sup>-4</sup> Jy. The fluxes at 12, 25, 60 and 100 $`\mu `$m are 30.87, 43.44, 15.08 and 6.40 Jy, respectively, from the IRAS photometric observations ( quality index = 3 in all four IRAS bands). Combination of these photometric results with the ISO SWS 01 spectrum defines the observed spectral energy distribution. In Fig. 1, the observational data are shown where the dots represent the photometric results and the thin solid line shows the ISO SWS 01 spectrum. The spectrum of ISO–SWS from 2.3 micron to 3.52 micron (band 1A, 1B and 1D of ISO–SWS) is not shown because it is too noisy.
The model we used to fit the observational data is described in detail by Szczerba et al. (1997). In brief, the frequency–dependent radiative transfer equations are solved under the assumption of spherically–symmetric geometry simultaneously with the thermal balance equation for a dusty envelope. The radiation of the central star is assumed to be a blackbody. The mass loss rate is taken to be constant and the envelope is assumed to expand at the velocity derived from the CO (2–1) line observation. The dust opacity is represented by amorphous carbon grains of AC type (Bussoletti et al. 1987, Rouleau & Martin 1991).
The appropriate values of the important parameters for fitting the observational data are listed in Table 1. The symbols have their usual meanings, but more details can be found in Szczerba et al. (1997). The dynamical time t<sub>dyn</sub> means the time required for the matter to reach the outer radius of the envelope.
The result from the model calculation with the values listed in Table 1 is plotted in Fig. 1 by a heavy solid line, while the long–dashed line represents the radiation from the central star, which radiates as a blackbody of effective temperature 2500 K. The luminosity and distance of the star depend on each other, as do the assumed dust–to–gas mass ratio and mass loss rate. We adopted the value of 8000 L<sub>⊙</sub> for the luminosity. The corresponding distance is 4.25 kpc. Since the outer radius R<sub>out</sub> is 7.0 $`\times `$ 10<sup>17</sup>cm, the angular diameter of the circumstellar envelope is predicted to be 22 arcseconds at this distance. This size is big enough to be resolved by the ISOCAM at the resolution mode of 1.5<sup>′′</sup>. However, the ISOCAM observations centered at 15 $`\mu `$m are most sensitive to dust at the peak temperature of about 200 K which, according to the modeling results, corresponds to a radius of 4.1$`\times `$10<sup>15</sup> cm and reflects a much smaller angular size of about 0.13 arcseconds. So the ISOCAM result that no extension is seen may be attributed to the large distance of the object. The value of R<sub>out</sub> is determined from the model–fitting to the observational results and its choice affects mainly the flux intensity in the mid– and far–infrared. A reasonable fit can be achieved with R<sub>out</sub> values ranging from 5.0 $`\times `$ 10<sup>17</sup>cm to 9.0 $`\times `$ 10<sup>17</sup>cm assuming a constant mass loss rate. For an increasing mass loss rate (which means a density distribution steeper than r<sup>-2</sup>), the outer radius should be larger to compensate for the smaller far infrared emission of the outer envelope layers, but the range of the allowed changes in R<sub>out</sub> is quite similar. Note, however, that such a density distribution cannot be much different from that corresponding to the constant mass loss rate, due to strong constraints from the 60 and 100 $`\mu `$m flux densities, unless we assume that the dust optical properties at far infrared wavelengths have a less steep slope than in the case of the amorphous carbon used here.
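The geometry quoted above is a one-line computation; for transparency (our own check, using the Table 1 values):

```python
# Angular sizes implied by the adopted distance of 4.25 kpc.
D_CM = 4.25 * 3.086e21            # distance in cm
RAD2ARCSEC = 206265.0

theta_env = 2.0 * 7.0e17 / D_CM * RAD2ARCSEC    # full envelope, R_out = 7e17 cm
theta_200K = 2.0 * 4.1e15 / D_CM * RAD2ARCSEC   # ~200 K dust layer
print(f"{theta_env:.0f} arcsec, {theta_200K:.2f} arcsec")  # ~22 and ~0.13 arcsec
```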
The adopted dust–to–gas mass ratio of 5.0$`\times `$ 10<sup>-3</sup> corresponds to a mass loss rate of 8.0 $`\times `$ 10<sup>-5</sup> M<sub>⊙</sub>/yr. This mass loss rate is higher than the 5.8 $`\times `$ 10<sup>-6</sup> M<sub>⊙</sub>/yr deduced from the CO (1–0) line (Loup et al. 1993) or the 1.4 $`\times `$ 10<sup>-5</sup> M<sub>⊙</sub>/yr inferred from the CO (2–1) line (Omont et al. 1993). Note, in addition, that to get a velocity of the shell around 14 km/s dynamical considerations would suggest an even smaller value of the dust–to–gas mass ratio (see Steffen et al. 1998), and in consequence a larger mass loss rate and a larger total mass of the envelope. However, the assumption of a density distribution slightly steeper than r<sup>-2</sup> would cancel the increase in the total mass of the envelope. The calculation of the mass loss rate from the CO line suffers mainly from the uncertainty of the distance and of the mass fraction factor for CO molecules. For example, the mass loss rate of W Hya derived from infrared water lines lies a factor of about 30 above the estimates based on the CO line observation (Neufeld et al. 1996). The case of Y CVn is similar in that the mass loss rate derived from the interpretation of far infrared ISOPHOT images is 2 orders of magnitude higher than that found from the CO line (Izumiura et al. 1996). Izumiura et al. (1996) explained the result for Y CVn by suggesting that the far infrared and CO observations represent different epochs of mass loss. Another possibility is that the mass fraction of CO molecules is overestimated, so that the mass loss rate is underestimated. The mass loss rate of IRAS 03313$`+`$6058 from our modeling lies a little above that estimated from the flux at 60 micron, 4.8 $`\times `$ 10<sup>-5</sup> M<sub>⊙</sub>/yr (Omont et al. 1993). Derivation of the mass loss rate from the flux in the IRAS bands depends on the distance and on the bolometric correction factor, which induces some uncertainty. Though the mass loss rate derived from our modeling depends on the value of the dust–to–gas ratio and on the density distribution, the result is relatively stable (probably better than a factor of two) because of the constraints required to fit the spectral energy distribution over the wide range of wavelengths. The object then experiences a quite strong wind, and has a circumstellar envelope perhaps as massive as 1.282 M<sub>⊙</sub>. By considering that the outer radius R<sub>out</sub> can vary in the range from 5.0 $`\times `$ 10<sup>17</sup> to 9.0 $`\times `$ 10<sup>17</sup>cm, the mass of the circumstellar envelope may be in the range of 0.92 M<sub>⊙</sub> to 1.65 M<sub>⊙</sub> under the assumed dust–to–gas ratio of 0.005. Since the mass of the circumstellar envelope appears to be above one solar mass, the star could very possibly be an intermediate–mass AGB star.
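The envelope masses given above follow from the constant mass loss rate acting over the dynamical time t<sub>dyn</sub> = R<sub>out</sub>/v<sub>exp</sub>; a quick check (our own arithmetic):

```python
# Envelope mass M = mdot * R_out / v_exp for the three R_out values discussed.
YR_S = 3.156e7                    # seconds per year
mdot = 8.0e-5                     # Msun/yr from the model fit
v_exp = 13.9e5                    # cm/s from the CO (2-1) line
for r_out in (5.0e17, 7.0e17, 9.0e17):          # cm
    m_env = mdot * (r_out / v_exp / YR_S)
    print(f"R_out = {r_out:.1e} cm -> M_env = {m_env:.2f} Msun")
# -> about 0.9, 1.3 and 1.6 Msun, consistent with the range quoted above
```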
### 3.2 The 30 micron emission
As can be seen in Fig. 1, the object shows emission around 30 micron which is superimposed on the continuum radiation from the star and the circumstellar envelope. The emission starts at 20 micron, peaks at about 30 micron and extends longward of 40 micron. Because of another unidentified emission around 43.6 micron (see Discussion), it is difficult to define the cut–off position of the 30 micron emission band from this spectrum, though the emission may extend to the long–wavelength limit of the ISO–SWS.
As described in the Introduction, MgS is regarded as a reasonable candidate to be the carrier of the 30 micron emission. We tried to model this with the optical constants taken from the tables based on laboratory measurements of a MgS(90%)–FeS(10%) mixture (Begemann et al. 1994). For our computations, we used two shapes for the grains, i.e. the CDE (Continuous Distribution of Ellipsoids) and Mie theory (spherical grains). Assumptions and method used are described in detail by Szczerba et al. (1997).
Because the temperature structure of the circumstellar envelope is determined from modeling of the spectral energy distribution, and because there is little difference in the dust temperature structure between the largest and smallest AC grains (a<sub>-</sub> = 0.005 and a<sub>+</sub> = 0.25 $`\mu `$m) used in the calculations, the only free parameter after taking MgS into account is the number ratio between Sulfur atoms in MgS and total Hydrogen atoms n(S)/n(H). Under the CDE approximation a value of 3.0$`\times `$ 10<sup>-6</sup> for n(S)/n(H) gives a fit to the observed feature at the long–wavelength wing, though there is a little inadequacy in the short–wavelength wing of the band. On the other hand, the Mie theory (spherical) grains can make up the emission at the short–wavelength wing. This means that a combination of MgS grains shapes may account for the observed emission. In Fig. 2, the observation and modeling of this band are shown.
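For readers unfamiliar with the two shape treatments, the Rayleigh (small-particle) limit makes the contrast explicit; the sketch below uses the standard Bohren & Huffman expressions and is only schematic (the dielectric function values must come from the Begemann et al. 1994 tables, and whether this matches the exact computation used here is an assumption on our part):

```python
# Absorption cross sections per unit grain volume in the Rayleigh limit,
# for spheres (Mie limit) and a continuous distribution of ellipsoids (CDE).
import numpy as np

def cabs_over_v_sphere(eps, lam_um):
    k = 2.0 * np.pi / lam_um
    return k * np.imag(3.0 * (eps - 1.0) / (eps + 2.0))

def cabs_over_v_cde(eps, lam_um):
    k = 2.0 * np.pi / lam_um
    return k * np.imag(2.0 * eps / (eps - 1.0) * np.log(eps))

eps = 4.0 + 2.0j                  # placeholder dielectric function near 30 um
print(cabs_over_v_sphere(eps, 30.0), cabs_over_v_cde(eps, 30.0))
# The CDE average redistributes the band oscillator strength over shapes,
# broadening the feature and strengthening its long-wavelength wing.
```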
## 4 Discussion
It is interesting to compare the 30 micron feature of this object with those of other AGB and post–AGB stars. Fig. 3 exhibits the spectrum of IRAS 03313$`+`$6058 together with the profiles of this feature for another AGB star, AFGL 3068, and for the post–AGB object IRAS 22272$`+`$5435. The data for AFGL 3068 are taken from Yamamura et al. (1998) while the data for IRAS 22272$`+`$5435 are from Omont et al. (1995). Because these two objects are much brighter than IRAS 03313$`+`$6058, their spectra are scaled downward to agree at about 20 $`\mu `$m with the spectrum of IRAS 03313$`+`$6058. Apart from the fact that IRAS 22272$`+`$5435 has another feature at 21 micron, the most evident difference is that the emission of IRAS 03313$`+`$6058 is much weaker than that of IRAS 22272$`+`$5435. On the other hand, AFGL 3068 exhibits a similar strength of the 30 $`\mu `$m band as does IRAS 03313$`+`$6058, which seems, however, to show more fine structure in this wavelength range.
The 30 micron emission of IRAS 22272$`+`$5435 was previously modeled by means of MgS grains (Szczerba et al. 1997). That fit resulted in an estimate of n(S)/n(H) = 4 $`\times `$ 10<sup>-6</sup> for the highest dust temperature distribution and 1.6 $`\times `$ 10<sup>-5</sup> for the lowest dust temperature distribution considered. The value of 3.0 $`\times `$ 10<sup>-6</sup> for IRAS 03313$`+`$6058 is very close to the case of the highest dust temperature distribution of IRAS 22272$`+`$5435. Thus there is probably little difference in the sulfur abundance between these two objects, and the strength of the 30 micron emission band may be influenced more strongly by other factors. The temperature may be one of the important factors, in the sense that a lower temperature favors the excitation of this band. As the dust temperature of post–AGB envelopes is generally lower than that of AGB ones, this emission is stronger in post–AGB stars. For example IRAS 22272$`+`$5435 has colder dust than IRAS 03313$`+`$6058 and much stronger emission in this band. From laboratory experiments, MgS is one of the molecules which condenses at low temperature in an environment without oxides. Up to now, this band emission has been detected only in C–rich AGB stars with cold and optically thick dust shells. This may indicate the existence of a critical temperature to form the carrier of the 30 micron emission band, and that such low temperatures together with appropriate chemical conditions are only possible in the extreme carbon stars. On the other hand, it is still not clear if carbon stars with smaller mass loss rates could form a carrier of this band. The existence of the 30 $`\mu `$m emission in AFGL 2155 and IRC$`+`$40540 (Yamamura et al. 1998), neither of which was classified as an extreme C–star by Volk et al. (1992), suggests that the formation of the appropriate chemical material is much more common, and that the higher envelope temperature in other (not extreme) carbon stars does not allow us to detect this emission feature. But it could be as well that some special chemical reactions responsible for the formation of the 30 $`\mu `$m band carrier are efficient only when the temperature is low enough and/or the chemical composition is appropriate. Unfortunately, since without optical spectra the determination of the chemical abundances is hardly possible, other methods of investigation should be elaborated; in particular, the better statistics provided by the ISO data could help solve the problem of the formation of the 30 $`\mu `$m carrier.
Besides this feature around 30 micron, some other features, e.g. absorptions around 7.5 and 14 micron and emissions around 41 and 43.5 $`\mu `$m, are present in the SWS spectrum of this object but are not discussed here; they are currently under investigation. We note only that the absorptions are probably related to the C<sub>2</sub>H<sub>2</sub> and/or HCN molecular bands, while the emissions could be related to crystalline silicates (especially as the enstatite mass absorption coefficient matches these emissions well - see Jäger et al. 1998). If the crystalline silicate emissions are confirmed, then it will allow us to deduce the evolutionary status of IRAS 03313$`+`$6058, which could bring some more information on the exciting transition phase between the oxygen– and carbon–rich parts of the AGB evolution.
###### Acknowledgements.
B.W.J. thanks the people in the N. Copernicus Astronomical Center, Torun, for their help and support. We also express our gratitude to Dr. Kevin Volk for his careful reading of the manuscript and useful suggestions. This work has been partly supported by grant 2.P03D.002.13 of the Polish State Committee for Scientific Research.
|
no-problem/9904/cond-mat9904187.html
|
ar5iv
|
text
|
# Learning, competition and cooperation in simple games
## Abstract
The minority model was introduced to study the competition between agents with limited information. It has the remarkable feature that, as the amount of information available increases, the collective gain made by the agents is reduced. This crowd effect arises from the fact that only a minority can profit at each moment, while all agents make their choices using the same input. We show that the properties of the model change drastically if the agents make choices based on their individual histories, keeping all remaining rules unaltered. This variation reduces the intrinsic frustration of the model, and improves the tendency towards cooperation and self organization. We finally study the stable mixing of individual and collective behavior.
The minority game was first introduced in the analysis of decision making by agents with bounded rationality, based on the “El Farol” bar problem . A number of agents must make a choice between two alternatives. The choice proves beneficial to a given agent if the total number of agents making that choice is below a given threshold. The game was formulated in a precise way by D. Challet and Y.-C. Zhang . The bounded rationality of the agents is modeled by assuming that each agent can only process information about the outcomes in the $`m`$ previous time steps. Given the $`2^m`$ possible states an agent can process, there are $`2^{2^m}`$ strategies. Each agent has $`s`$ strategies, taken at random from the total pool, and for making the next decision selects the best-performing one of her own set. The choice is successful if the agent is in the minority group, which means that the “comfort” threshold is set at 50% of the total number of agents. Finally, the agents assign a score to each strategy at their disposal. The score of the strategies which, at a given time, have predicted the correct outcome is increased by one point.
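For concreteness, the rules just described fit in a few lines of code; the following Python sketch is our own minimal rendering (parameter values are arbitrary), not the implementation used for the figures:

```python
# Canonical minority game: N agents, memory m, s strategies each.
import numpy as np
rng = np.random.default_rng(0)

N, m, s, T = 101, 3, 2, 2000
P = 2 ** m                                        # number of possible histories
strategies = rng.integers(0, 2, size=(N, s, P))   # each maps a history to {0,1}
scores = np.zeros((N, s))
history = int(rng.integers(P))                    # encoded m-bit history

attendance = []
for _ in range(T):
    best = scores.argmax(axis=1)                      # best strategy per agent
    choices = strategies[np.arange(N), best, history]
    winner = 0 if choices.sum() > N / 2 else 1        # minority side wins
    scores += (strategies[:, :, history] == winner)   # +1 for correct predictions
    history = ((history << 1) | winner) % P           # slide the m-bit window
    attendance.append(choices.sum())

print("sigma^2 / N =", np.var(attendance) / N)
```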
The game has by now been extensively studied. Particular emphasis has been devoted to the mean square deviation of the number of agents making a given choice, $`\sigma `$, which measures the efficiency of the system. When the fluctuations are large (larger $`\sigma `$), the number of agents in the majority side (the number of losers) increases. In this way, the variance measures the degree of cooperation, or mutual benefit of the agents. It has been shown that it scales with $`\rho \equiv 2^m/N`$ , where $`N`$ is the number of agents and $`2^m`$ is the number of different configurations that the agents are capable of processing (or states of the world, see ). When $`\rho \gg 1`$, the amount of information available to the agents is so large that they cannot manage and exploit it, and agents take decisions like coin tossing, so that in this limit $`\sigma ^2/N\to 1/4`$. In the opposite limit, $`\rho \ll 1`$, the sets of strategies of different agents overlap significantly. The agents tend to make similar choices, which puts them often in the majority group. Then $`\sigma ^2`$ scales with $`N^2`$, instead of $`N`$. This regime is highly inefficient from the point of view of the whole population. The agents manage, however, to arbitrage away all information in the collective history. The value of $`\sigma `$ has a minimum for intermediate values of $`\rho `$ which can be appreciated for not too large values of $`s`$. At this minimum, the agents perform better than random, and some degree of cooperation is established. This minimum can be understood as a critical point in an effective spin model with frustrated interactions and an applied field .
A crucial ingredient in the model is the fact that all agents act on the same information, irrespective of how it has been generated. Similar results are obtained when the histories are replaced by successions of random numbers , which allows for interesting analytical analyses . Evolutionary variations, in which agents have different numbers of strategies, $`s`$, different capabilities to analyze the time series (as given by $`m`$), or additional adjustable parameters, have also been studied . The $`\rho \ll 1`$ regime leads not only to large values of $`\sigma `$ but also to complex probability distributions with a rich structure .
The model has been used to describe the interactions of agents competing for scarce resources in different contexts . However, it is unlikely that the rules by which the agents make their choices define an evolutionarily stable strategy, in the sense commonly used in theoretical biology . The low global gain in the limit $`\rho \ll 1`$ implies that alternative rules can easily improve the performance of the agents. This hypothesis has been verified in different variations of the minority game as defined above. Competition between agents with different memories was first analyzed in . The rules were extended using an additional parameter to improve the chance that the agents use anticorrelated strategies. The value of this parameter was set using an evolution scheme which favors the agent’s performance . It has been shown that two populations of agents with different memories, $`m`$, perform better than the pure populations taken separately . Renewal of the strategies available to the agents also leads to improvements in the performance . In a different context, the global gain made by the agents can increase by adding randomness to the decision making process .
We analyze the simplest extension of the model which preserves the basic structure of the agents’ decision process. Each agent has the same number of strategies, $`s`$, defined in the usual way, which process information from the $`m`$ preceding time intervals. Unlike in the usual definition of the game, the agents do not analyze the succession of best choices from the collective point of view, but respond to the history of the individual choices made by each of them. Each agent updates the scores of the strategies according to which strategy, when applied to the individual succession of choices made by that agent, leads to a successful outcome. This is the only difference from the usual case. The processing power of the agents is exactly the same. The model provides a simple way in which agents can avoid a frustrating situation by ignoring or distorting the information that has led them to it.
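In code, the extension changes only the input fed to the strategies and to the scoring; a sketch of the modified loop (again ours, for illustration):

```python
# Individual-history variant: each agent processes her own m past choices.
import numpy as np
rng = np.random.default_rng(0)

N, m, s, T = 101, 3, 2, 2000
P = 2 ** m
strategies = rng.integers(0, 2, size=(N, s, P))
scores = np.zeros((N, s))
own_hist = rng.integers(0, P, size=N)             # one private history per agent

attendance = []
for _ in range(T):
    best = scores.argmax(axis=1)
    choices = strategies[np.arange(N), best, own_hist]
    winner = 0 if choices.sum() > N / 2 else 1
    # strategies are scored on each agent's individual input:
    outputs = strategies[np.arange(N)[:, None], np.arange(s)[None, :],
                         own_hist[:, None]]       # shape (N, s)
    scores += (outputs == winner)
    own_hist = ((own_hist << 1) | choices) % P    # record the agent's own choice
    attendance.append(choices.sum())

print("sigma^2 / N =", np.var(attendance) / N)
```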
We compare the values of the mean square deviations of the attendances, $`\sigma `$, in the present version of the minority game and the canonical results in fig. 1.
In the limit when the information available to the agents is too large, we find that $`\sigma ^2\simeq N/4`$, the same result as if the agents made their choices at random, but only in the case of high values of $`s`$. For the case $`s=3`$ shown in the figure, $`\sigma ^2\simeq N/5`$, and this value is even lower for $`s=2`$. The case $`s=1`$ is, as in the standard game, highly sensitive to the initial conditions, and averaging over them gives a dispersion equal to $`N/4`$ independently of $`\rho `$. In the limit $`\rho \to 0`$, the values of $`\sigma `$ are significantly lower in the “individual” version of the game presented here, and comparable to, or lower than, those found in other extensions of the model. There is a significant spreading as a function of $`m`$ and $`N`$, meaning that the scaling with $`\rho `$ is not too well satisfied. The scaling with $`\rho `$ implicitly assumes that all possible histories appear with the same probability in the collective history . In the present version of the model, if the individual histories used by the agents are replaced by random series, $`\sigma `$ takes values close to the random case, irrespective of the value of $`\rho `$. Thus, the main hypothesis used to justify the scaling in the minority game in its usual form does not hold in this case.
The group which was on the winning side can be inferred from the “comfort” that the agent gained after each outcome. This information is used in updating the score of the strategies, which, however, act on a different input. As this input is not the same for all agents, they have no obstacle in following anticorrelated dynamics, even when all use similar strategies. The measure of that correlation can be analyzed explicitly by taking the average Hamming distance between agents’ histories . We have further analyzed this point by calculating the average number of histories processed by the agents. The number of histories is always significantly below that in the canonical model ($`P=2^m`$), implying that the system tends to be locked into situations where agents generate a relatively small number of possibly anticorrelated individual histories. This $`P`$ is also a function of $`m`$, $`N`$, and $`s`$, in such a way that it decreases monotonically when increasing $`N`$ and decreasing $`m`$. When $`s`$ is small, the limit for large $`m`$ and small $`N`$ is not $`2^m`$, but some lower value. This would explain the limit $`\sigma ^2/N<1/4`$ for large $`\rho `$ discussed above.
The present version of the model need not define an evolutionarily stable strategy. If there is information available in the series of global minority groups, an agent playing according to the canonical rules will benefit from doing so. We have analyzed the competition between these two types of behavior by allowing each agent to have a dual scoring system for its strategies, following the two sets of rules. Each agent plays the strategy with the highest score at a given time step. Thus, the population can be divided into those using collective rules and those using individual rules. The values of $`\sigma `$ obtained in this way, and the fraction of agents using a collective strategy, are shown in fig. 2.
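Schematically, the dual bookkeeping can be organised as follows (variable names are ours):

```python
# Each strategy carries two scores; an agent plays the (strategy, rule) pair
# with the highest score, which also classifies her as "collective" (0) or
# "individual" (1) at that time step.
import numpy as np

def pick(scores_coll, scores_indiv):
    both = np.stack([scores_coll, scores_indiv], axis=-1)   # (N, s, 2)
    flat = both.reshape(both.shape[0], -1).argmax(axis=1)
    return flat // 2, flat % 2      # strategy index, behavior flag per agent
```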
In the limit when the information available is large, $`\rho \to \mathrm{\infty }`$, we recover the random value for $`\sigma `$. Then, the agents are indifferent between the two behaviors, and use each of them 50% of the time. The fraction of agents which use a collective behavior has a maximum near the value of $`\rho `$ for which $`\sigma `$ has a minimum in the usual version of the model. Finally, the number of agents using collective rules strongly decreases as $`\rho \to 0`$. In this limit, the preferable behavior is the individual one outlined here, although a small fraction of agents using a collective approach survives. The global efficiency, however, is decreased. Thus, although a mixed population is the stable situation, the small fraction of agents which follow collective rules behave in a parasitic way, lowering the overall gain.
The most striking difference with the usual version of the minority game takes place when only one strategy is available to each agent, $`s=1`$. This case is trivial in the minority game, as the agents have no way to learn or to adapt. The same applies if each agent uses a purely individual set of rules. When the agents can use the best of the two behaviors, the strategy of each agent can be used to process two inputs: the collective history of winning sides, or the succession of prior choices made by that agent. This is shown in fig. 3.
The global performance of a hybrid set of agents using both collective and individual rules is best when $`s=1`$ for a large range of values of $`\rho `$. A qualitative explanation of the adaptability of the agents in this extreme limit can be obtained by noting that, when a given agent repeatedly makes an incorrect choice, its individual history is anticorrelated with the sequence of collective best choices. Thus, if the strategy at its disposal gives a different outcome when presented with the two inputs, the agent will tend to give the opposite answer to that used, unsuccessfully, before. There is a self-correcting mechanism built into the model, which tends to prevent very negative performances. On the other hand, if the agents are locked into a situation where each of them obtains about 50% of the points, a stable situation can be achieved, where the agents remain anticorrelated by alternating between the two inputs at the disposal of each of them. This is consistent with the result that the fractions of agents using collective and individual behavior are comparable for all values of $`\rho `$.
In conclusion, we have discussed the simplest extension of the minority game which preserves the basic parameters of the model. We show that agents with the same processing power as in the usual model can perform much better if they use their individual histories as input, instead of the evolution of the global system. An evolutionary stable situation arises with agents which can use both collective and individual rules. The capability of the agents to adapt and increase the global performance is significantly enhanced, and herd effects disappear. These emergent features change qualitatively even the simplest and most trivial version of the minority game, that in which each agent disposes of a single strategy.
Financial support from the European Union through grant ERB4061PL970910 and from the Spanish Administration through grant PB97/0875 is gratefully acknowledged.
We acknowledge useful conversations with Damien Challet, Matteo Marsili and Yi-Cheng Zhang.
|
no-problem/9904/cond-mat9904073.html
|
ar5iv
|
text
|
# Phase transitions in finite systems = topological peculiarities of the microcanonical entropy surface
## Abstract
It is discussed how phase transitions of first order (with phase separation and surface tension), continuous transitions and (multi)-critical points can be defined and classified for finite systems from the topology of the energy surface $`e^{S(E,N)}`$ of the mechanical N-body phase space or, more precisely, of the curvature determinant $`D(E,N)=\partial ^2S/\partial E^2\cdot \partial ^2S/\partial N^2-(\partial ^2S/\partial E\partial N)^2`$, without taking the thermodynamic limit. The first calculation of the entire entropy surface $`S(E,N)`$ for a $`q=3`$-states Potts lattice gas on a $`50\times 50`$ square lattice is shown. There are two lines where $`S(E,N)`$ has a largest curvature $`\geq 0`$. One is the border between the regions in {$`E,N`$} with $`D(E,N)>0`$ and with $`D(E,N)<0`$; the other line is critical, starting as a valley in $`D(E,N)`$ running from the continuous transition in the ordinary $`q=3`$-Potts model and converting at $`P_m`$ into a flat ridge/plateau (maximum) deep inside the convex intruder of $`S(E,N)`$ which characterizes the first order liquid–gas transition. The multi-critical point $`P_m`$ is their crossing.
Boltzmann’s gravestone has the famous epigraph:
$`S=k\cdot \mathrm{ln}W`$
which puts thermodynamics on the ground of mechanics . It relates the entropy $`S`$ to the volume $`W(E,N,V)=\delta ϵ\,tr\delta (E-H_N)`$ of the energy ($`E`$) surface of the N-body phase space at given volume ($`V`$), the microcanonical partition sum. Here $`\delta ϵ`$ is a suitable small energy constant, $`H_N`$ is the $`N`$-particle Hamiltonian, and
$$tr\delta (E-H_N)=\int \frac{d^{3N}p\,d^{3N}q}{(2\pi \hbar )^{3N}}\delta (E-H_N).$$
(1)
The set of points on this surface defines the microcanonical ensemble (ME).
Today conventional thermodynamics is based on the canonical statistical mechanics as introduced by Gibbs. In the thermodynamic limit ThL ($`N\to \mathrm{\infty }|_{N/V=const}`$) the canonical ensemble (CE) is equivalent to the fundamental (ME) if the system is in a pure phase and the ThL exists.
The fundamental difference of microcanonical thermodynamics (MT) to conventional thermodynamics is that no non-mechanical quantities like temperature, heat, pressure have to be introduced a priori.
The link between ME and CE is established by Laplace transform. E.g. the usual grand canonical partition sum is the double Laplace transform of ME:
$$Z(T,\mu ,V)=\int _0^{\mathrm{\infty }}dE\,dN\,e^{-(E-\mu N)/T}\,tr\delta (E-H_N).$$
(2)
This excludes all inhomogeneous situations, especially phase separations. There the entropy is non-extensive and the CE contains several Gibbs states at the same temperature. Consequently, the statistical fluctuations do not disappear in the CE even in the thermodynamic limit. This is the reason why Gibbs himself excluded phase separations in chapter VII of in a footnote, page 75. At phase transitions the ME and the CE describe different physical situations. If one combines a small system of water at the specific energy $`ϵ_1`$ of boiling water with a large heat bath at $`100^{\circ }`$C it will remain at $`100^{\circ }`$C but may convert into steam in $`50\%`$ of the cases. I.e. the fundamental assumption used e.g. by Einstein that our “system changes only by infinitely little” does not hold.
It is important to notice that Boltzmann’s and also Einstein’s formulation allows for defining the entropy by $`S_{micro}:=ln[W(E,N,V)]`$ (in the following we use $`S_{micro}`$ for $`S(E,N)`$ if it is not clear) as a single-valued, non-singular, and in the classical case differentiable, function of all “extensive”, conserved dynamical variables. No thermodynamic limit must be invoked and the theory applies to non-extensive systems as well. Of course this is achieved by avoiding Gibbs states, “equilibrium states” or “most random” states . On the other hand fluctuations become then important and must be simulated by Monte Carlo methods. The microcanonical ensemble is the entire microcanonical N-body phase space without any exception. In MT the entropy is not “a measure of randomness” , it is simply the volume $`e^{S(E,N,V)}`$ of the energy surface. The latter point is extremely important as it allows us to address even thermodynamically unstable systems like collapsing gravitating systems (for a recent application of MT to thermodynamically unstable, collapsing systems under high angular momentum see ). In so far it is the most fundamental formulation of equilibrium statistics. From here the whole thermostatics may be deduced. MT describes how $`e^{S(E,N,V)}`$ depends on the dynamically conserved energy, number of particles etc.. Of course we must assume that the system can be found in each phase-space cell of $`e^S`$ with the same probability.
Following Lee and Yang , phase transitions are indicated by singularities in $`Z(T,\mu ,V)`$. Singularities of $`Z(T,\mu ,V)`$, however, can occur in formula (2) in the thermodynamic limit only ($`V\to \mathrm{\infty }|_{N/V=\varrho ,E/N=\epsilon }`$). For finite volume $`Z(T,\mu ,V)`$ is a finite sum of exponentials and everywhere analytical. Only at points where $`S(E,N)`$ has a curvature $`\geq 0`$ will the integral eq. 2 diverge in the thermodynamic limit. At these points, the Laplace integral 2 does not have a stable saddle point. Here van Hove’s concavity condition for the entropy $`S(E,N,V)`$ of a stable phase is violated. Consequently we define phase transitions also for finite systems topologically by the points of non-negative curvature of the entropy surface $`S(E,N,V)`$ as a function of the mechanical, conserved “extensive” quantities like energy, mass, angular momentum etc..
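The connection between curvature and divergence can be spelled out with a Gaussian saddle-point evaluation of eq. 2 (our notation; $`S_{EE}`$ etc. denote second partial derivatives at the saddle point fixed by $`\partial S/\partial E=1/T`$, $`\partial S/\partial N=-\mu /T`$):

```latex
% Gaussian approximation to the double Laplace transform (2):
Z(T,\mu,V)\;\simeq\; e^{\,S(E^*,N^*)-(E^*-\mu N^*)/T}
  \int dE\,dN\;
  \exp\!\Big[\tfrac{1}{2}\big(S_{EE}\,\delta E^2+2S_{EN}\,\delta E\,\delta N
  + S_{NN}\,\delta N^2\big)\Big],
% which converges only if the curvature matrix is negative definite, i.e.
% S_{EE}<0 and D=S_{EE}S_{NN}-S_{EN}^2>0.  Wherever D <= 0 the saddle
% point is unstable, and Z develops the Yang-Lee singularity in the ThL.
```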
Experimentally one identifies phase transitions of course not by the singularity of $`Z(T,\mu )`$ but by the interfaces separating coexisting phases, e.g. liquid and gas, i.e. by the inhomogeneities. The interfaces have three effects on the entropy:
1. There is an entropic gain by putting a part ($`N_1`$) of the system from the majority phase (e.g. liquid) into the minority phase (bubbles, e.g. gas), but this is connected with an energy-loss due to the higher specific energy of the “gas”-phase,
2. an entropic loss proportional to the interface area, caused by the correlations between the particles in the interface, which leads to the convex intruder in $`S(E,N,V)`$ and is the origin of surface tension ,
3. and an additional mixing entropy for distributing the $`N_1`$-particles in various ways into bubbles.
At a (multi-) critical point two (or more) phases become indistinguishable and the interface entropy (surface tension) disappears.
Microcanonical thermostatics was introduced in great detail for atomic clusters and nuclei in . In two further papers we showed how the surface tension can quantitatively be determined from the microcanonical $`S(E,N,V)`$ for realistic systems like small liquid metal drops . Needless to say, only with our extension of the definition of phase transitions to finite systems does it make sense to ask the question: “How many particles are needed to establish a phase transition?” The results we are going to show here will again demonstrate that often $`N`$ does not need to be large in order to see realistic values (close to the known bulk values) for the characteristic parameters. Other examples were shown in our earlier papers .
In the following we investigate the 3-state Potts lattice-gas model on a 2-dim $`L^2=50^2`$ square lattice. We will see how the total microcanonical entropy surface $`S(E,N)`$ uncovers even the most sophisticated thermostatic features such as first order phase transitions, continuous phase transitions, critical and multicritical points, even for finite and non-extensive systems. This demonstrates that microcanonical statistics is, in contrast to Schrödinger’s claim , quite able to handle not only gases but also phase transitions and critical phenomena. Important details like the separation of phases and the origin of surface tension can be treated. The singularity of the canonical partition sum at a transition of first order can be traced back to the loss of entropy due to the correlations between the surface atoms at phase boundaries .
Briefly, a few words about our method, which will be published in detail in : The Hamiltonian is $`H=-\frac{1}{2}\sum _{i,j}\delta _{\sigma _i,\sigma _j}`$ ($`i,j`$ nearest neighbors). We covered the whole space $`\{E=ϵL^2,N=nL^2\}`$ by a mesh with about $`1000`$ knots with distances of $`\mathrm{\Delta }ϵ=0.04`$ and $`\mathrm{\Delta }n=0.02`$. Due to our limited computational resources (DEC-Alpha workstation) we could not use a significantly denser mesh. At each knot $`\{ϵ_i,n_k\}`$ we accumulated by microcanonical simulations ($`2\cdot 10^8`$ events) a histogram of the probabilities $`P(ϵ_i,n_k)`$ for the system to be in the narrow region $`(E_i\pm 4)\times (N_k\pm 4)`$ of phase space. Local derivatives $`\beta =\left(\partial S(E,N)/\partial E\right)_N`$, $`\beta \mu =-\left(\partial S(E,N)/\partial N\right)_E`$ in each histogram give the “intensive” quantities, so that the entire surfaces of $`S(E,N)`$, $`\beta (E,N)`$, $`\beta \mu (E,N)`$ can be interpolated. The first derivatives of the interpolated (smoothed) $`\beta (E,N)`$ and $`\beta \mu (E,N)`$ give the curvatures.
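Since the details of the sampling are deferred to a forthcoming publication, the sketch below is only one conceivable realisation of the procedure just outlined: a random walk over configurations that accepts single-site updates as long as $`(E,N)`$ stays inside the narrow window, so that all visited configurations carry equal microcanonical weight (the acceptance rule and window handling are our assumptions):

```python
# Microcanonical-window sampling for the q=3 Potts lattice gas on an L x L
# square lattice; site value 0 means a vacancy, 1..q a Potts state.
import numpy as np
rng = np.random.default_rng(0)

L, q = 50, 3

def energy_change(lat, x, y, new):
    """dE for H = -(1/2) sum_{i,j} delta(s_i, s_j) when site (x,y) -> new."""
    old, dE = int(lat[x, y]), 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = int(lat[(x + dx) % L, (y + dy) % L])
        if nb:                                  # only occupied neighbours count
            dE += int(nb == old and old != 0) - int(nb == new and new != 0)
    return dE

def sweep(lat, E, N, E_i, N_k, width=4):
    """One lattice sweep keeping (E, N) inside (E_i +- width, N_k +- width)."""
    for _ in range(L * L):
        x, y = rng.integers(L, size=2)
        new = int(rng.integers(q + 1))          # 0 empties the site
        dE = energy_change(lat, x, y, new)
        dN = int(new != 0) - int(lat[x, y] != 0)
        if abs(E + dE - E_i) <= width and abs(N + dN - N_k) <= width:
            lat[x, y], E, N = new, E + dE, N + dN
    return lat, E, N
```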
Figure 1 shows some of our recent results for $`S(E/L^2,N/L^2)`$ for the case of the diluted $`q=3`$ Potts model. Grid lines run in the directions $`[E-E_0(N)]/[E_{max}(N)-E_0(N)]=`$const. resp. $`N/L^2=`$const. The black region is the intruder at the first-order condensation transition (“liquid–gas coexistence”) with positive largest curvature of $`S(E,N)`$. This corresponds to the similar region in the Ising lattice gas, respectively the original Ising model as a function of the magnetization. At the light grey strip $`S(E,N)`$ is critical, with vanishing largest curvature. The line from point $`C`$ over the multicritical point $`P_m`$ to $`D`$ corresponds, from $`C`$ to $`P_m`$, to the familiar continuous transition in the ordinary $`q=3`$ Potts model. At $`P_m`$ this line crosses the rim of the intruder from $`A`$ to $`B`$, which is the border of the first order transition. This crossing determines the multicritical point $`P_m`$ quite well, at $`\beta _m=1.48\pm 0.03`$, $`\beta \mu _m=2.67\pm 0.02`$ or $`ϵ_m\simeq -1`$, $`n_m\simeq 0.7`$. From here the largest curvature starts to become $`\gtrsim 0`$. Naturally, $`P_m`$ spans a much broader region in {$`ϵ,n`$} than in {$`\beta ,\beta \mu `$}; remember that here $`S(E,N)`$ is flat.
If one plots the entropy $`s_{micro}(\beta ,\beta \mu )`$ as a function of the “intensive” variables $`\beta \mu =-\partial S/\partial N`$ and $`\beta =\partial S/\partial E`$, we obtain picture 2. This corresponds to the conventional grand-canonical representation, if we had calculated the grand canonical entropy from the Laplace transform $`Z(T,\mu ,V)`$, eq. 2. As there are several points $`E_i,N_k`$ with identical $`\beta ,\beta \mu `$, $`s_{micro}(\beta ,\beta \mu )`$ is a multivalued function of $`\beta ,\beta \mu `$. Here the entropy surface $`S(E,N)`$ is folded onto itself, see fig. 3, and in fig. 2 these points show up as a black critical line (dense region). The backfolded branches of $`S(E,N)`$ are jumped over in eq. 2 and consequently get lost in $`Z(T,\mu )`$. This demonstrates the far more detailed insight one obtains into phase transitions and critical phenomena by microcanonical thermostatics, which is not accessible by the canonical treatment.
In figure 4 the determinant of curvatures of $`S(E,N)`$:
$$D(E,N)=\left|\begin{array}{cc}\frac{\partial ^2S}{\partial E^2}& \frac{\partial ^2S}{\partial N\partial E}\\ \frac{\partial ^2S}{\partial E\partial N}& \frac{\partial ^2S}{\partial N^2}\end{array}\right|$$
(3)
is shown. On the diagonal we have the ground-state of the $`2`$-dim Potts lattice-gas with $`ϵ=-2n`$; the upper-right end is the completely random configuration (not shown), with the maximum allowed excitation $`ϵ_{rand}=-\frac{2n^2}{q}`$. In the upper right (white) $`D>0`$; both curvatures are negative. In this region the Laplace integral eq. 2 has a stable saddle point. This region corresponds to pure phases.
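Numerically, the determinant (3) follows from the interpolated $`\beta (E,N)`$ and $`\beta \mu (E,N)`$ surfaces by finite differences; a minimal sketch (assuming the sign convention $`\beta \mu =-\partial S/\partial N`$ used above) is:

```python
# Curvature determinant D = S_EE * S_NN - S_EN^2 on a regular (E, N) mesh,
# given beta = dS/dE and betamu = -dS/dN tabulated on that mesh.
import numpy as np

def curvature_det(beta, betamu, dE, dN):
    S_EE = np.gradient(beta, dE, axis=0)      # d beta / dE
    S_EN = np.gradient(beta, dN, axis=1)      # mixed second derivative of S
    S_NN = np.gradient(-betamu, dN, axis=1)   # d(dS/dN) / dN
    return S_EE * S_NN - S_EN ** 2
```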
In the light gray region we have $`D\simeq 0`$. This is the critical region. Here the largest eigenvalue of the curvature matrix is $`\simeq 0`$. Two branches cross here: One goes roughly parallel to the ground state ($`E\simeq -2N`$) from $`A`$ to $`B`$. This is a rim in $`D(E,N)`$, the border line between the region with $`D(E,N)>0`$ and the region with $`D(E,N)<0`$ (black), where we have the first order liquid–gas transition of the lattice-gas. The Laplace integral (2) has no stable saddle point there, and in the ThL the grand canonical partition sum (2) diverges. Here we have a separation into coexisting phases, e.g. liquid and gas. Due to the surface tension, or the negative surface entropy of the phase boundaries, $`S(E,N)`$ has a convex intruder with positive largest curvature.
The other branch, from $`C`$ to $`P_m`$, is a valley in $`D(E,N)`$. Here the largest curvature of $`S(E,N)`$ has a local minimum and $`D\simeq 0`$ (it would be $`D=0`$ with a higher precision of the simulation); it runs from the point (near $`C`$) of the continuous phase transition at $`n=1`$ and $`ϵ=-1.57`$ of the ordinary $`q=3`$-Potts model downwards to $`P_m`$. Below the crossing point $`P_m`$ it converts into a flat ridge inside the convex intruder of the first order lattice-gas transition. The area of the crossing of the two critical branches $`C-P_m-D`$ and $`A-P_m-B`$ is the multi-critical region $`P_m`$ of the $`q=3`$ Potts lattice gas model.
Conclusion: Microcanonical thermostatics (MT) describes how the entropy $`S(E,N)`$, as defined entirely in mechanical terms by Boltzmann, depends on the conserved “extensive” mechanical variables: energy $`E`$, particle number $`N`$, angular momentum $`L`$ etc. This allows us to study phase transitions also in small and in non-extensive systems. If we define phase transitions in finite systems by the topological properties of the determinant of curvatures $`D(E,N)`$ (eq. 3) of the microcanonical entropy-surface $`S(E,N)`$: a single stable phase by $`D(E,N)>0`$, a transition of first order with phase separation and surface tension by $`D(E,N)<0`$ , a continuous (“second order”) transition with $`D(E,N)=0`$, and a multi-critical point where more than two phases become indistinguishable by the branching of several lines with $`D(E,N)=0`$, then there are remarkable similarities with the corresponding properties of the bulk transitions.
The advantage of MT compared to CT is clearly demonstrated: About half of the whole phase space, the intruder of $`S(E,N)`$ or the non-white region in fig. 4, gets lost in conventional canonical thermodynamics. Without any doubt this contains the most sophisticated physics of this system. Due to limited computer resources this could be demonstrated with only limited precision. We are convinced that our conclusions will be verified by more extensive – and more expensive – calculations.
Acknowledgment: D.H.E.G. thanks M.E. Fisher for the suggestion to study the Potts-3 model and to test how the multicritical point is described microcanonically. We are grateful to the DFG for financial support.
|
no-problem/9904/quant-ph9904080.html
|
ar5iv
|
text
|
# Noise perturbations in the Brownian motion and quantum dynamics
## Abstract
The third Newton law for mean velocity fields is utilised to generate anomalous (enhanced) or non-dispersive diffusion-type processes which, in particular, can be interpreted as a probabilistic counterpart of the Schrödinger picture quantum dynamics.
If we consider a fluid in thermal equilibrium as the noise carrier, a kinetic theory viewpoint amounts to visualizing the constituent molecules that collide not only with each other but also with the tagged (colloidal) particle, so enforcing and maintaining its observed erratic motion. The Smoluchowski approximation takes us away from those kinetic theory intuitions by projecting the phase-space theory of random motions into its configuration space image which is a spatial Markovian diffusion process, whose formal infinitesimal encoding reads:
$$d\vec{X}(t)=\frac{\vec{F}}{m\beta }dt+\sqrt{2D}d\vec{W}(t).$$
$`(1)`$
In the above $`m`$ stands for the mass of a diffusing particle, $`\beta `$ is a friction parameter, $`D`$ is a diffusion constant and $`\vec{W}(t)`$ is a normalised Wiener process. The Smoluchowski forward drift can be traced back to a presumed selective action of the external force $`\vec{F}=-\nabla V`$ on the Brownian particle: it has a negligible effect on the thermal bath but, in view of frictional resistance, imparts to a particle the mean velocity $`\vec{F}/m\beta `$ on the $`\beta ^{-1}`$ time scale, . The noise carrier (a fluid in the present considerations) statistically remains in the state of rest, with no intrinsic mean flows.
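A direct way to see Eq. (1) at work is to integrate it numerically; the Euler-Maruyama sketch below (our illustration, for a harmonic force $`F=-kx`$ in one dimension) relaxes, as it should, to the Boltzmann-like density $`\rho \propto \mathrm{exp}[-V/(m\beta D)]`$:

```python
# Euler-Maruyama integration of dX = (F/(m*beta)) dt + sqrt(2D) dW
# for F = -k x; the stationary variance should approach m*beta*D/k.
import numpy as np
rng = np.random.default_rng(0)

k, m_beta, D, dt, steps = 1.0, 1.0, 1.0, 1e-3, 10_000
x = np.zeros(5000)                # an ensemble of particles at the origin
for _ in range(steps):
    x += (-k * x) / m_beta * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
print(np.var(x))                  # ~ m_beta * D / k = 1.0 here
```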
An implicit phase-space scenario of the Brownian motion refers to minute acceleration/deceleration events which modify (infinitely gently, at a generic rate of $`10^{21}`$ times per second) velocities of realistic particles. Clearly, the microscopic energy-momentum conservation laws need to be respected in each separate collision event. In contrast to derivations based on the Boltzmann equation, this feature is completely alien to the Brownian motion theory. That energy-momentum deficit is one of the “forgotten” or tacitly “overlooked” problems in the standard theory of the Brownian motion, see e.g. .
An issue of the origin of general isothermal flows in a noisy environment is far from being settled . Namely, the standard theory does not account (or refers to phenomena for which that is not necessary) for generic perturbations of the random medium as a reaction to the enforced (exclusively due to the impacts coming from the medium) particle motion. An ability of the medium to perform work, i.e. to give kinetic energy to Brownian particles (strictly speaking, in the local mean, hence on the ensemble average), appears to be a universal feature of the thermal bath even in the absence of any external force.
Then, there arise problems with thermal equilibrium . The tagged particle always propagates “at the expense” of the bath, which nonetheless remains “close” to its thermal equilibrium. In view of the involved local cooling and heating phenomena, which are necessary to keep in conformity with the second law of thermodynamics, the bath should develop local flows and thus in turn actively react back to what is happening to the particle in the course of its random propagation. Such an environmental reaction, while interpreted as a feedback effect, surely can be neglected in an individual particle propagation sample. However, an obvious Brownian particle energy-momentum “deficit” on the ensemble average (kinetic energy and momentum are carried locally by diffusion currents) indicates that the medium reaction may give observable contributions on the ensemble average, as a statistically accumulated feature. For a statistical ensemble of weakly-out-of-equilibrium systems, an effective isothermal scenario can be used again, but then mean flows in the noise carrier are “born” as a consequence of the averaging .
It is well known that a spatial diffusion (Smoluchowski) approximation of the phase-space process allows one to reduce the number of independent local conservation laws (cf. -) to two only. Therefore the Fokker-Planck equation can always be supplemented by another (independent) partial differential equation to form a closed system.
If we assign a probability density $`\rho _0(\vec{x})`$ with which the initial data $`\vec{x}_0=\vec{X}(0)`$ for Eq. (1) are distributed, then the emergent Fick law would reveal a statistical tendency of particles to flow away from higher probability residence areas. This feature is encoded in the corresponding Fokker-Planck equation:
$$\partial _t\rho =-\nabla \cdot (\vec{v}\rho )=-\nabla \cdot \left[\left(\frac{\vec{F}}{m\beta }-D\frac{\nabla \rho }{\rho }\right)\rho \right]$$
$`(2)`$
where the diffusion current velocity is $`\vec{v}(\vec{x},t)=\vec{b}(\vec{x},t)-D\frac{\nabla \rho (\vec{x},t)}{\rho (\vec{x},t)}`$ while the forward drift reads $`\vec{b}(\vec{x},t)=\frac{\vec{F}}{m\beta }`$. Clearly, the local diffusion current (a local flow that might be experimentally observed for a cloud of suspended particles in a liquid) $`\vec{j}=\vec{v}\rho `$ gives rise to a non-negligible matter transport on the ensemble average, .
It is interesting to notice that the local velocity field $`\vec{v}(\vec{x},t)`$ obeys a natural (local) momentum conservation law which directly originates from the rules of the Itô calculus for Markovian diffusion processes , and from the first moment equation in the diffusion approximation (!) of the Kramers theory :
$$\partial _t\vec{v}+(\vec{v}\cdot \nabla )\vec{v}=\nabla (\mathrm{\Omega }-Q).$$
$`(3)`$
An effective potential function $`\mathrm{\Omega }(\vec{x})`$ can be expressed in terms of the forward drift $`\vec{b}(\vec{x})=\frac{\vec{F}(\vec{x})}{m\beta }`$ as follows: $`\mathrm{\Omega }=\frac{\vec{F}^2}{2m^2\beta ^2}+\frac{D}{m\beta }\nabla \cdot \vec{F}`$.
Let us emphasize that it is the diffusion (Smoluchowski) approximation which makes the right-hand side of Eq. (3) substantially different from the usual moment equations appropriate for the Brownian motion. In particular, the force $`\vec{F}`$ presumed to act upon an individual particle does not give rise in Eq. (3) to the expression $`-\frac{1}{m}\nabla V`$, which might be expected on the basis of kinetic theory intuitions and moment identities directly derivable from the Kramers equation, but to the term $`+\nabla \mathrm{\Omega }`$.
Moreover, instead of the standard pressure term, there appears a contribution from a probability density $`\rho `$-dependent potential $`Q(\vec{x},t)`$. It is given in terms of the so-called osmotic velocity field $`\vec{u}(\vec{x},t)=D\nabla \mathrm{ln}\rho (\vec{x},t)`$ (cf. ): $`Q(\vec{x},t)=\frac{1}{2}\vec{u}^2+D\nabla \cdot \vec{u}`$, and is generic to the local momentum conservation law respected by isothermal Markovian diffusion processes, cf. -.
The Smoluchowski drift does not refer to any flows in the noise carrier. To analyze perturbations of the medium and the resulting intrinsic (mean) flows, a more general function $`\vec{b}(\vec{X}(t),t)`$ must replace the Smoluchowski drift in Eqs. (1), (2). Forward drifts modify additively the pure noise (Wiener process entry) term in the Itô equations. Under suitable restrictions, we can relate probability measures corresponding to different (in terms of forward drifts!) Fokker-Planck equations and processes by means of the Cameron-Martin-Girsanov theory of measure transformations. The Radon-Nikodym derivative of measures is here involved, and for suitable forward drifts which are gradient fields that yields the most general form of an auxiliary potential $`\mathrm{\Omega }(\vec{x},t)`$ in Eq. (3):
$$\mathrm{\Omega }(\vec{x},t)=2D\left[\partial _t\varphi +\frac{1}{2}\left(\frac{\vec{b}^2}{2D}+\nabla \cdot \vec{b}\right)\right].$$
$`(4)`$
We denote $`\vec{b}(\vec{x},t)=2D\nabla \varphi (\vec{x},t)`$.
Eq. (4) is a trivial identity if we take for granted that all drifts are known from the beginning, as in the case of typical Smoluchowski diffusions where the external force $`\vec{F}`$ is a priori postulated. We can proceed otherwise: one can instead depart from a suitably chosen space-time dependent function $`\mathrm{\Omega }(\vec{x},t)`$. From this point of view, while developing the formalism, one should decide which quantity is of primary physical interest: the field of drifts $`\vec{b}(\vec{x},t)`$ or the potential $`\mathrm{\Omega }(\vec{x},t)`$.
Mathematical features of the formalism appear to depend crucially on the properties (like continuity, local and global boundedness, Rellich class) of the potential $`\mathrm{\Omega }`$, see e.g. . Let us consider a bounded from below (local boundedness from above is useful as well), continuous function $`\mathrm{\Omega }(\vec{x},t)`$ (cf. ). Then, by means of the gradient field ansatz for the diffusion current velocity ($`\vec{v}=\nabla S`$, so that $`\partial _t\rho =-\nabla \cdot [(\nabla S)\rho ]`$) we can transform the momentum conservation law (3) of a Markovian diffusion process to the universal Hamilton-Jacobi form:
$$\mathrm{\Omega }=\partial _tS+\frac{1}{2}|\nabla S|^2+Q$$
$`(5)`$
where $`Q(\stackrel{}{x},t)`$ was defined before. By applying the gradient operation to Eq. (5) we recover (3). In the above, the contribution due to $`Q`$ is a direct consequence of an initial probability measure choice for the diffusion process, while $`\mathrm{\Omega }`$ via Eq. (4) does account for an appropriate forward drift of the process.
Thus, in the context of Markovian diffusion processes, we can consider a closed system of partial differential equations which comprises the continuity equation $`\partial _t\rho =-\nabla \cdot (\vec{v}\rho )`$ and the Hamilton-Jacobi equation (5), plus suitable initial (and/or boundary) data. The underlying diffusion process is specified uniquely, cf. .
Since the pertinent system of nonlinearly coupled equations looks discouraging, it is useful to mention that a linearisation of this problem is provided by a time-adjoint pair of generalised diffusion equations in the framework of the so-called Schrödinger boundary data problem . The standard heat equation appears as a very special case in this formalism.
The local conservation law (3) acquires a direct physical meaning (the rate of change of momentum carried by a volume locally co-moving with the flow, ) only if averaged with respect to $`\rho (\vec{x},t)`$ over a simply connected spatial area. If $`V`$ stands for a volume enclosed by a two-dimensional outward oriented surface $`\partial V`$, we define a co-moving volume on small time scales by deforming the boundary surface in accordance with the local current velocity field values. Then, let us consider at time $`t`$ the displacement of the boundary surface $`\partial V(t)`$ defined as follows: $`\vec{x}\in \partial V\to \vec{x}+\vec{v}(\vec{x},t)\mathrm{\Delta }t`$ for all $`\vec{x}\in \partial V`$. Up to first order in $`\mathrm{\Delta }t`$ this guarantees the conservation of mass (probability measure) contained in $`V`$ at time $`t`$, i.e. $`\int _{V(t+\mathrm{\Delta }t)}\rho (\vec{x},t+\mathrm{\Delta }t)d^3x-\int _{V(t)}\rho (\vec{x},t)d^3x\simeq 0`$.
The corresponding (to leading order in $`\mathrm{\Delta }t`$) quantitative momentum rate-of-change measure reads, cf. , $`\int _V\rho \stackrel{}{\nabla }(\mathrm{\Omega }-Q)d^3x`$.
For a particular case of the free Brownian expansion of an initially given $`\rho _0(\stackrel{}{x})=\frac{1}{(\pi \alpha ^2)^{3/2}}exp(-\frac{x^2}{\alpha ^2})`$, where $`\alpha ^2=4Dt_0`$, we would have $`\int _V\rho \stackrel{}{\nabla }Qd^3x=\oint _{\partial V}Pd\stackrel{}{\sigma }`$, where $`Q(\stackrel{}{x},t)=\frac{\stackrel{}{x}^2}{8(t+t_0)^2}-\frac{3D}{2(t+t_0)}`$, while the ”osmotic pressure” contribution reads $`P(\stackrel{}{x},t)=-\frac{D}{2(t+t_0)}\rho (\stackrel{}{x},t)`$ for all $`\stackrel{}{x}\in R^3`$ and $`t\ge 0`$.
The current velocity $`\stackrel{}{v}(\stackrel{}{x},t)=\stackrel{}{\nabla }S(\stackrel{}{x},t)=\frac{\stackrel{}{x}}{2(t+t_0)}`$ is linked to the Hamilton-Jacobi equation $`\partial _tS+\frac{1}{2}|\stackrel{}{\nabla }S|^2+Q=0`$ whose solution reads: $`S(\stackrel{}{x},t)=\frac{\stackrel{}{x}^2}{4(t+t_0)}+\frac{3}{2}Dln[4\pi D(t+t_0)]`$.
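These closed-form statements are easy to verify with a computer algebra system. The following sketch (ours, not part of the original argument) checks the continuity equation, the Hamilton-Jacobi equation (5) with $`\mathrm{\Omega }=0`$, the quoted form of $`Q`$, and the mean kinetic energy used below, all for the free Brownian expansion in radial variables.

```python
import sympy as sp

r, t, t0, D = sp.symbols('r t t0 D', positive=True)
s = t + t0

rho = (4*sp.pi*D*s)**sp.Rational(-3, 2)*sp.exp(-r**2/(4*D*s))
S = r**2/(4*s) + sp.Rational(3, 2)*D*sp.log(4*sp.pi*D*s)
v = sp.diff(S, r)                                      # current velocity, radial

lap = lambda f: sp.diff(f, r, 2) + 2*sp.diff(f, r)/r   # radial Laplacian, 3 dim.
Q = 2*D**2*lap(sp.sqrt(rho))/sp.sqrt(rho)

print(sp.simplify(sp.diff(rho, t) + sp.diff(v*rho, r) + 2*v*rho/r))  # continuity -> 0
print(sp.simplify(sp.diff(S, t) + v**2/2 + Q))                       # Eq. (5), Omega=0 -> 0
print(sp.simplify(Q - (r**2/(8*s**2) - 3*D/(2*s))))                  # quoted Q -> 0

Ekin = sp.integrate(rho*v**2/2*4*sp.pi*r**2, (r, 0, sp.oo))
print(sp.simplify(Ekin - 3*D/(4*s)))                                 # kinetic energy -> 0
```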
Let us observe that the initial data $`\stackrel{}{v}_0=-D\stackrel{}{\nabla }ln\rho _0=-\stackrel{}{u}_0`$ for the current velocity field indicate that we have totally ignored a crucial preliminary stage of the dynamics on the $`\beta ^{-1}`$ time scale, when the Brownian expansion of an initially static ensemble was ignited and the particles were ultimately set in motion.
Notice also that our ”osmotic expansion pressure” $`P(\stackrel{}{x},t)`$ is not positive definite, in contrast to the familiar kinetic theory (equation of state) expression for the pressure, $`P(\stackrel{}{x})=\alpha \rho ^\beta (\stackrel{}{x}),\alpha >0`$, appropriate for gases. The admissibility of a negative ”pressure” function encodes the fact that the evolving Brownian concentration of particles generically decompresses (blows up), instead of being compressed by the surrounding medium.
The loss (in view of the ”osmotic” migration) of momentum stored in a control volume at a given time may here be interpreted in terms of an acceleration $`-\int _V\rho \stackrel{}{\nabla }Qd^3x`$ induced by a fictitious ”attractive force”.
By invoking the explicit Hamilton-Jacobi connection (5), we may attribute to a diffusing Brownian ensemble the mean kinetic energy per unit of mass $`\int _V\rho \frac{1}{2}\stackrel{}{v}^2d^3x`$. In view of $`<\stackrel{}{x}^2>=6D(t+t_0)`$, we have also $`\int _{R^3}\rho \frac{1}{2}\stackrel{}{v}^2d^3x=\frac{3D}{4(t+t_0)}`$. Notice that the mean energy $`\int _V\rho (\frac{1}{2}\stackrel{}{v}^2+Q)d^3x`$ need not be positive. Indeed, this expression identically vanishes after extending integrations from $`V`$ to $`R^3`$. On the other hand the kinetic contribution, initially equal to $`\int _{R^3}\frac{1}{2}\rho v^2d^3x=3D^2/\alpha ^2`$ and evidently coming from nowhere, continually diminishes and is bound to disappear in the asymptotic $`t\to \infty `$ limit, when the Brownian particles become uniformly distributed in space.
Normally, diffusion processes yielding a nontrivial matter transport (diffusion currents) are observed for a non-uniform concentration of colloidal particles which are regarded as independent (non-interacting). We can however devise a thought (numerical) experiment that gives rise to a corresponding transport in terms of an ensemble of sample (and thus independent) Brownian motion realisations on a fixed finite time interval, instead of considering a multitude of them (migrating swarm of Brownian particles) simultaneously.
Let us assume that ”an effort” (hence, an energy loss/gain, implying local deviations from thermal equilibrium conditions) of the random medium, on the $`\beta ^{-1}`$ scale, to produce a local Brownian diffusion current $`(\rho \stackrel{}{v})(\stackrel{}{x},t_0)`$ out of the initially static ensemble, and thus to decompress (lower the blow-up tendency of) an initial non-uniform probability distribution, results in an effective osmotic reaction of the random medium. This is the Brownian recoil effect of Ref. .
In that case, the particle swarm propagation scenario becomes entirely different from the standard Brownian one. First of all, the nonvanishing forward drift $`\stackrel{}{b}=\stackrel{}{u}`$ is generated as a dynamical (effective, statistical here!) response of the bath to the particle transport enforced by the bath, with the local velocity $`\stackrel{}{v}=-\stackrel{}{u}`$. Second, we need to account for a parallel inversion of the pressure effects (the compression $`+\stackrel{}{\nabla }Q`$ should replace the decompression $`-\stackrel{}{\nabla }Q`$) in the respective local momentum conservation law.
Those features can be secured through an explicit realization of the action-reaction principle which we promote to the status of the third Newton law in the mean.
On the level of Eq. (3), once averaged over a finite volume, we interpret the momentum per unit of mass rate-of-change $`\int _V\rho \stackrel{}{\nabla }(\mathrm{\Omega }-Q)d^3x`$, which occurs exclusively due to the Brownian expansion mechanism, to generate a counterbalancing rate-of-change tendency in the random medium. To account for the emerging forward drift and an obvious modification of the subsequent expansion of the particle ensemble, we re-define Eq. (3) by setting $`-\int _V\rho \stackrel{}{\nabla }(\mathrm{\Omega }-Q)d^3x`$ in its right-hand side instead of $`+\int _V\rho \stackrel{}{\nabla }(\mathrm{\Omega }-Q)d^3x`$ . That amounts to an instantaneous implementation of the third Newton law in the mean (action-reaction principle) in Eq. (3).
Hence, the momentum conservation law for the process with a recoil (where the reaction term replaces the decompressive ”action” term) would read:
$$\partial _t\stackrel{}{v}+(\stackrel{}{v}\cdot \stackrel{}{\nabla })\stackrel{}{v}=\stackrel{}{\nabla }(Q-\mathrm{\Omega })$$
$`(6)`$
so that
$$\partial _tS+\frac{1}{2}|\stackrel{}{\nabla }S|^2-Q=-\mathrm{\Omega }$$
$`(7)`$
stands for the corresponding Hamilton-Jacobi equation, cf. , instead of Eq. (5). A suitable adjustment (re-setting) of the initial data is here necessary.
In the coarse-grained picture of motion we thus deal with a sequence of repeatable scenarios realised on the Smoluchowski process time scale $`\mathrm{\Delta }t`$: the Brownian swarm expansion build-up is accompanied by a parallel counterflow build-up, which in turn modifies the subsequent stage of the Brownian swarm migration (being interpreted to modify the forward drift of the process) and the corresponding counterflow built up anew.
The new closed system of partial differential equations refers to Markovian diffusion-type processes again, . The link is particularly obvious if we observe that the new Hamilton-Jacobi equation (7) can be formally rewritten in the previous form (5) by introducing:
$$\mathrm{\Omega }_r=\partial _tS+\frac{1}{2}|\stackrel{}{\nabla }S|^2+Q$$
$`(8)`$
where $`\mathrm{\Omega }_r=2Q-\mathrm{\Omega }`$ and $`\mathrm{\Omega }`$ represents the previously defined potential function of any Smoluchowski (or more general) diffusion process.
It is $`\mathrm{\Omega }_r`$ which via Eq. (4) would determine forward drifts of the Markovian diffusion process with a recoil. They must obey the Cameron-Martin-Girsanov identity $`\mathrm{\Omega }_r=2Q-\mathrm{\Omega }=2D[\partial _t\varphi +\frac{1}{2}(\frac{\stackrel{}{b}^2}{2D}+\stackrel{}{\nabla }\cdot \stackrel{}{b})]`$.
Our new closed system of equations is badly nonlinear and coupled, but its linearisation can be immediately given in terms of an adjoint pair of Schrödinger equations with a potential $`\mathrm{\Omega }`$, . Indeed, $`i\partial _t\psi =-D\mathrm{\Delta }\psi +\frac{\mathrm{\Omega }}{2D}\psi `$ with a solution $`\psi =\rho ^{1/2}exp(iS)`$ and its complex adjoint does the job, if we regard $`\rho `$ together with $`S`$ to remain in conformity with the previous notations. The choice of $`\psi (\stackrel{}{x},0)`$ gives rise to a solvable Cauchy problem. (Notice that by setting $`D=\frac{\hbar }{2m}`$ we recover the standard quantum mechanical notation.)
This feature we shall exploit below. Notice that, for time-independent $`\mathrm{\Omega }`$, the total energy $`\int _{R^3}(\frac{v^2}{2}-Q+\mathrm{\Omega })\rho d^3x`$ of the diffusing ensemble is a conserved quantity.
The general existence criterions for Markovian diffusion processes of that kind, were formulated in Ref. , see also .
Let us consider a simple one-dimensional example. In the absence of external forces, we solve the equations (in space dimension one) $`\partial _t\rho =-\partial _x(v\rho )`$ and $`\partial _tv+v\partial _xv=+\partial _xQ`$, with an initial probability density $`\rho _0(x)`$ chosen in correspondence with the previous free Brownian motion example. We denote $`\alpha ^2=4Dt_0`$. Then, $`\rho (x,t)=\frac{\alpha }{[\pi (\alpha ^4+4D^2t^2)]^{1/2}}exp[-\frac{x^2\alpha ^2}{\alpha ^4+4D^2t^2}]`$ and $`b(x,t)=v(x,t)+u(x,t)=-\frac{2D(\alpha ^2-2Dt)x}{\alpha ^4+4D^2t^2}`$ are the pertinent solutions. Notice that $`u(x,0)=-\frac{2Dx}{\alpha ^2}=b(x,0)`$ amounts to $`v(x,0)=0`$, while in the previous free Brownian case the initial current velocity was equal to $`-D\partial _xln\rho _0`$. This re-adjustment of the initial data can be interpreted in terms of the counterbalancing (recoil) phenomenon: the would-be initial Brownian ensemble current velocity $`v_0=-u_0`$ is here completely saturated by the emerging forward drift $`b_0=u_0`$, see e.g. also . We deal also with a fictitious ”repulsive” force, which corresponds to the compression (pressure upon) of the Brownian ensemble due to the counter-reaction of the surrounding medium. We can write things more explicitly. Namely, now: $`Q(x,t)=\frac{2D^2\alpha ^2}{\alpha ^4+4D^2t^2}(\frac{\alpha ^2x^2}{\alpha ^4+4D^2t^2}-1)`$ and the corresponding pressure term ($`\partial _xQ=\frac{1}{\rho }\partial _xP`$) reads $`P(x,t)=-\frac{2D^2\alpha ^2}{\alpha ^4+4D^2t^2}\rho (x,t)`$, giving a positive contribution $`+\partial _xQ`$ to the local conservation law. The related Hamilton-Jacobi equation $`\partial _tS+\frac{1}{2}(\partial _xS)^2=+Q`$ is solved by $`S(x,t)=\frac{2D^2x^2t}{\alpha ^4+4D^2t^2}-Darctan(\frac{2Dt}{\alpha ^2})`$. With the above form of $`Q(x,t)`$ one can readily check that the Cameron-Martin-Girsanov constraint equation for the forward drift of the Markovian diffusion process with a recoil is automatically valid for $`\varphi =\frac{1}{2}ln\rho +S`$: $`2Q=2D[\partial _t\varphi +\frac{1}{2}(\frac{b^2}{2D}+\partial _xb)]`$.
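Again these formulas can be verified mechanically. The sketch below (ours) checks the continuity and Hamilton-Jacobi equations, the quoted forward drift, and the Schrödinger linearisation of the preceding section; the phase of $`\psi `$ is normalised here as $`S/2D`$, which is our choice of units.

```python
import sympy as sp

x, t, D, alpha = sp.symbols('x t D alpha', positive=True)
den = alpha**4 + 4*D**2*t**2

rho = alpha/sp.sqrt(sp.pi*den)*sp.exp(-x**2*alpha**2/den)
S = 2*D**2*x**2*t/den - D*sp.atan(2*D*t/alpha**2)

v = sp.diff(S, x)                                 # current velocity
u = D*sp.diff(sp.log(rho), x)                     # osmotic velocity
Q = 2*D**2*sp.diff(sp.sqrt(rho), x, 2)/sp.sqrt(rho)

print(sp.simplify(sp.diff(rho, t) + sp.diff(v*rho, x)))      # continuity -> 0
print(sp.simplify(sp.diff(S, t) + v**2/2 - Q))               # HJ with +Q  -> 0
print(sp.simplify(v + u - (-2*D*(alpha**2 - 2*D*t)*x/den)))  # forward drift b -> 0

# linearisation: psi = sqrt(rho)*exp(i S / 2D) solves the free Schroedinger equation
psi = sp.sqrt(rho)*sp.exp(sp.I*S/(2*D))
print(sp.simplify(sp.I*sp.diff(psi, t) + D*sp.diff(psi, x, 2)))  # -> 0
```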
In analogy with our free Brownian motion discussion, let us observe that presently $`<x^2>=\frac{\alpha ^2}{2}+\frac{2D^2t^2}{\alpha ^2}`$. It is easy to demonstrate that the quadratic dependence on time persists for arbitrarily shaped initial choices of the probability distribution $`\rho _0(x)>0`$. That signals an anomalous behaviour (enhanced diffusion) of the pertinent Markovian process when $`\mathrm{\Omega }=0`$, i.e. $`\mathrm{\Omega }_r=2Q`$.
We can evaluate the kinetic energy contribution $`\int _R\rho \frac{v^2}{2}dx=\frac{4D^4t^2}{\alpha ^2(\alpha ^4+4D^2t^2)}`$ which, in contrast to the Brownian case, shows a continual growth up to the terminal (asymptotic) value $`\frac{D^2}{\alpha ^2}`$. This value was in turn the initial kinetic contribution, per spatial dimension, in the previous free Brownian expansion example. In contrast to that case, the total energy integral is now finite (finite energy diffusions of Ref. ) and reads $`\int _R(\frac{1}{2}v^2-Q)\rho dx=\frac{D^2}{\alpha ^2}`$ (it is a conservation law). The asymptotic value of the current velocity, $`v\to \frac{x}{t}`$, is twice as large as that appropriate for the Brownian motion, $`v\to \frac{x}{2t}`$.
It is easy to produce an explicit solution to (7), (8) in the case of $`\mathrm{\Omega }(x)=\frac{1}{2}\gamma ^2x^2-D\gamma `$, with exactly the same initial probability density $`\rho _0(x)`$ as before. The forward drift of the corresponding diffusion-type process does not show any obvious contribution from the harmonic Smoluchowski force; it is completely eliminated by the Brownian recoil scenario. One may check that $`b(x,0)=-\frac{2Dx}{\alpha ^2}=u(x,0)`$, while obviously $`b=\frac{F}{m\beta }=-\gamma x`$ would hold true for all times in the case of the Smoluchowski diffusion process. Because of the harmonic attraction and a suitable initial probability measure choice, we have here wiped out all the previously discussed enhanced diffusion features. Now the dispersion is attenuated, and in fact a non-dispersive diffusion-type process is realised: $`<x^2>`$ does not spread at all despite the intrinsically stochastic nature of the dynamics (finite-energy diffusions of Ref. ).
It is clear that stationary processes are the same both in case of the standard Brownian motion and the Brownian motion with a recoil. The respective propagation scenarios substantially differ in the non-stationary case only.
One may possibly argue that a sign inversion of the right-hand side of Eq. (3), which implies (6), can be accomplished by means of an analytic continuation in time (cf. the Euclidean quantum mechanics discussion in Ref. ). An examination of the free dynamics cases given above proves that the third Newton law ansatz is an independent procedure.
Acknowledgements: I would like to thank Professor Eric Carlen for discussion about the scaling limits of the Boltzmann equation and related conservation laws.
# Geometric Solutions for the Neutrino Oscillation Length Resonance
## Abstract
We give a geometric interpretation of the neutrino “oscillation length resonance ” recently discovered by Petcov. We use this picture to identify two new solutions for oscillation length resonances in a 3-layer earth model.
In a recent study of the expected day-night asymmetry in the observed solar neutrino spectrum, Petcov pointed out that neutrinos propagating through regions of varying density may undergo what has been called an “oscillation length resonance” . This has interesting consequences for the day-night asymmetry. In particular it suggests a region of vacuum mixing angle $`\theta _0`$ and mass-squared difference $`\delta m^2`$ parameter space where the asymmetry may be enhanced. Significantly, this region coincides with the values of $`\mathrm{sin}^2(2\theta _0)`$ and $`\delta m^2`$ that best explain the Kamiokande and SuperKamiokande data . As Petcov describes, this suggests the possibility of detecting a signature of this resonance and hence a clear indication of neutrino oscillations.
In this brief paper we give a geometric interpretation of the resonance. These geometric considerations permit an easy derivation of the resonance properties and allow us to find two new resonance solution regions. We use our results to explain some interesting features of the calculated day-night asymmetry and also to clarify some confusion which has arisen from the more difficult algebraic approach.
In the following we work in the flavor basis and consider mixing between two neutrino species. We choose $`|\nu _e\rangle =\left(\genfrac{}{}{0pt}{}{1}{0}\right)`$ and $`|\nu _\mu \rangle =\left(\genfrac{}{}{0pt}{}{0}{1}\right)`$. The method we use is based on the well known observation that in this basis the MSW Hamiltonian
$$H=\frac{\mathrm{\Delta }_0}{2}\left(\begin{array}{cc}-\mathrm{cos}2\theta _0+\sqrt{2}G_fN_e/\mathrm{\Delta }_0& \mathrm{sin}2\theta _0\\ \mathrm{sin}2\theta _0& \mathrm{cos}2\theta _0-\sqrt{2}G_fN_e/\mathrm{\Delta }_0\end{array}\right)$$
(1)
can be written as $`H=-\frac{1}{2}\mathrm{\Delta }_n\overline{\sigma }\cdot \widehat{n}`$ and so has the same form as the generator of spatial rotations for two-spinors . In the above $`\overline{\sigma }`$ are the Pauli matrices, $`N_e`$ is the electron number density, $`\mathrm{\Delta }_n=2\sqrt{H_{11}^2+H_{12}^2}`$ , $`\widehat{n}`$ is a unit vector with the cartesian components $`(1/\mathrm{\Delta }_n)(-\mathrm{\Delta }_0\mathrm{sin}2\theta _0,0,\mathrm{\Delta }_0\mathrm{cos}2\theta _0-\sqrt{2}G_fN_e)`$ , $`\mathrm{\Delta }_0=\delta m^2/2E_\nu `$, and $`\widehat{n}\cdot \widehat{z}=\mathrm{cos}2\theta _n`$ where $`\theta _n`$ is the matter mixing angle. Let us represent a state $`|\nu \rangle =\left(\genfrac{}{}{0pt}{}{\alpha }{\beta }\right)`$ which satisfies $`\overline{\sigma }\cdot \widehat{p}\left(\genfrac{}{}{0pt}{}{\alpha }{\beta }\right)=\left(\genfrac{}{}{0pt}{}{\alpha }{\beta }\right)`$ for some unit 3-vector $`\widehat{p}`$ by $`|\nu \rangle =|\widehat{p}\rangle `$. The ambiguity in the overall phase will not be of concern here.
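As a quick numerical consistency check of this decomposition (our own sketch; the overall sign of $`H`$ and the orientation of $`\widehat{n}`$ should be treated as our reading of the conventions above):

```python
import numpy as np

def msw_axis(Delta0, theta0, A):
    """Bloch decomposition H = -(Delta_n/2) sigma.n_hat; A = sqrt(2) G_f N_e."""
    H = (Delta0/2)*np.array([[-np.cos(2*theta0) + A/Delta0, np.sin(2*theta0)],
                             [np.sin(2*theta0), np.cos(2*theta0) - A/Delta0]])
    Delta_n = 2*np.hypot(H[0, 0], H[0, 1])
    n_hat = np.array([-Delta0*np.sin(2*theta0), 0.0,
                      Delta0*np.cos(2*theta0) - A])/Delta_n
    sx = np.array([[0., 1.], [1., 0.]]); sz = np.array([[1., 0.], [0., -1.]])
    assert np.allclose(H, -(Delta_n/2)*(n_hat[0]*sx + n_hat[2]*sz))
    return Delta_n, n_hat

Dn, n = msw_axis(1.0, 0.1, 0.5)      # arbitrary test values
print(Dn, n, np.arccos(n[2])/2)      # matter mixing angle from n.z = cos(2 theta_n)
```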
The following points are easily verified.
i) A neutrino which begins in a state represented by $`|\widehat{a}\rangle `$ evolves, after passing for a time $`t`$ through a region with constant electron number density $`N_e`$, into a state $`|\widehat{b}\rangle `$ where $`\widehat{b}`$ is obtained by rotating $`\widehat{a}`$ about $`\widehat{n}(N_e)`$ through an angle $`\varphi =\mathrm{\Delta }_n(N_e)t`$. In an obvious notation this is represented by $`|\widehat{a}\rangle \to U(t)|\widehat{a}\rangle =|\widehat{b}\rangle =|R\{\widehat{n}(N_e),\varphi \}\widehat{a}\rangle `$, where $`R`$ is a 3x3 rotation matrix.
ii) The transition probability is given by,
$$|\langle \widehat{a}|\widehat{b}\rangle |^2=(1/2)(1+\widehat{a}\cdot \widehat{b}).$$
(2)
These points simply serve to verify that the evolution of the neutrino can be represented by a 3-vector precessing in a cone about $`\widehat{n}(N_e)`$. The case of a neutrino propagating through regions of different density $`(N_e^{(1)},N_e^{(2)},\dots )`$ can be represented by a vector rotating first about $`\widehat{n}(N_e^{(1)})`$, then about $`\widehat{n}(N_e^{(2)})`$ after the appropriate time, and so on.
We now use this picture to solve the oscillation length resonance problem for neutrinos passing through the earth. In this scenario a neutrino passes through a low density mantle characterized by electron number density $`N_e^m`$ and length $`L_m`$, a higher density core with $`N_e^c`$ and length $`L_c`$, and the lower density mantle again ($`N_e^m`$,$`L_m`$). For the $`\delta m^2`$ and neutrino energies we consider, a neutrino which begins in the sun as a combination of the mass eigenstates $`|\nu _1\rangle `$ and $`|\nu _2\rangle `$ arrives at the earth with these components well separated spatially, owing to the difference in their propagation velocities . The relevant conversion probability in this case is $`P_{\nu _1\nu _\mu }`$. The aim is to find, for a given $`N_e^c,N_e^m`$ (or equivalently matter mixing angles $`\theta _c`$ and $`\theta _m`$), the $`L_c`$ and $`L_m`$ which maximize this transition probability. For illustration we first consider the easier problem of finding the resonances for the transition probability between flavor eigenstates, $`P_{\nu _e\nu _\mu }`$. This corresponds to finding the $`\varphi _1`$ and $`\varphi _2`$ which maximize $`-\widehat{z}^TR(\widehat{n}_m,\varphi _1)R(\widehat{n}_c,\varphi _2)R(\widehat{n}_m,\varphi _1)\widehat{z}`$. Clearly, $`R(\widehat{n}_m,\varphi _1)\widehat{z}=R(\widehat{n}_c,\varphi ^{\prime })\widehat{p}`$ for some $`\varphi ^{\prime }`$ and $`\widehat{p}`$, where $`\widehat{p}`$ is in the x-z plane and satisfies $`1\ge \widehat{p}\cdot \widehat{z}\ge \mathrm{cos}(2\theta _c-|2(\theta _c-2\theta _m)|)`$. Therefore, the quantity we wish to maximize is $`-\widehat{p}^TR(\widehat{n}_c,\varphi _2+2\varphi ^{\prime })\widehat{p}`$. This takes a maximum when $`\varphi _2+2\varphi ^{\prime }=\pi `$, and when $`\widehat{p}\cdot \widehat{n}_c`$ is as close to zero as allowed by the above constraint on $`\widehat{p}`$. This and some geometry completely solves the problem of finding the resonances for $`P_{\nu _e\nu _\mu }`$. Similar considerations solve the $`P_{\nu _1\nu _\mu }`$ problem.
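The rotation composition just described is easy to explore numerically. The sketch below is our own and not part of the original derivation: the axis parametrization $`\widehat{n}=(-\mathrm{sin}2\theta ,0,\mathrm{cos}2\theta )`$ and the brute-force grid are assumptions, and the sense of each precession only shifts the resonant phases, not the attainable maximum.

```python
import numpy as np

def rot(axis, phi):
    """Rodrigues formula: rotation by angle phi about the unit 3-vector `axis`."""
    a = np.asarray(axis, float)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(phi)*K + (1 - np.cos(phi))*(K @ K)

def n_hat(theta):
    """precession axis at angle 2*theta from the z axis, in the x-z plane."""
    return np.array([-np.sin(2*theta), 0.0, np.cos(2*theta)])

def P_e_mu(theta_m, theta_c, phi1, phi2):
    """nu_e -> nu_mu probability for the mantle-core-mantle sandwich via Eq. (2);
    nu_e is the +z Bloch vector and nu_mu the -z vector."""
    z = np.array([0.0, 0.0, 1.0])
    final = rot(n_hat(theta_m), phi1) @ rot(n_hat(theta_c), phi2) \
            @ rot(n_hat(theta_m), phi1) @ z
    return 0.5*(1.0 - final @ z)

# brute-force search for the resonant phases (illustrative mixing angles)
phis = np.linspace(0.0, 2*np.pi, 361)
P = np.array([[P_e_mu(0.2, 0.6, p1, p2) for p2 in phis] for p1 in phis])
i, j = np.unravel_index(P.argmax(), P.shape)
print(P.max(), phis[i], phis[j])
```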
The results divide into three cases. In the following $`P_{\nu _1\nu _\mu }^{res}`$ is the resonance (maximum) value of the transition probability, obtained when $`L_m=L_m^{res}`$ and $`L_c=L_c^{res}`$.
I) For case I we have $`2\theta _c-\theta _0<\pi /2`$,
$$P_{\nu _1\nu _\mu }^{res}=\mathrm{sin}^2(2\theta _c-\theta _0),$$
(3)
$$L_m^{res}=\frac{c}{\mathrm{\Delta }_n^{(mantle)}}2n\pi ,$$
(4)
$$L_c^{res}=\frac{c}{\mathrm{\Delta }_n^{(core)}}(2m+1)\pi ,$$
(5)
for $`m,n=0,1,2,\dots `$. In this case there is no enhancement of the transition probability, as $`P_{\nu _1\nu _\mu }`$ can take this value in the core alone.
II) For case II we have $`|2\theta _c-4\theta _m+\theta _0|<\pi /2<2\theta _c-\theta _0`$,
$$P_{\nu _1\nu _\mu }^{res}=1,$$
(6)
$$L_m^{res}=\frac{c}{\mathrm{\Delta }_n^{(mantle)}}(\rho +2n\pi ),$$
(7)
$$L_c^{res}=\frac{c}{\mathrm{\Delta }_n^{(core)}}((2m+1)\pi -\gamma -\eta ),$$
(8)
for $`n,m=0,1,2,\dots `$, and where
$$\mathrm{cos}\rho =\frac{\mathrm{cot}(2(\theta _c-\theta _m))(\mathrm{cos}2\theta _m+\mathrm{cos}(2(\theta _m-\theta _0)))}{(\mathrm{sin}2\theta _m+\mathrm{sin}2(\theta _m-\theta _0))},$$
(9)
$$\mathrm{tan}\gamma =\frac{\mathrm{sin}\rho }{(\mathrm{cot}2\theta _m\mathrm{sin}2(\theta _c-\theta _m)+\mathrm{cos}\rho \mathrm{cos}2(\theta _c-\theta _m))},$$
(10)
$$\mathrm{tan}\eta =\frac{\mathrm{sin}\rho }{(\mathrm{sin}2(\theta _c-\theta _m)\mathrm{cot}2(\theta _m-\theta _0)+\mathrm{cos}\rho \mathrm{cos}2(\theta _c-\theta _m))},$$
(11)
Note that if $`\rho `$ is a solution then $`2\pi -\rho `$ is also, with $`\gamma \to -\gamma ,\eta \to -\eta `$.
III) Petcov’s original solution: $`|2\theta _c-4\theta _m+\theta _0|>\pi /2`$,
$$P_{\nu _1\nu _\mu }^{res}=\mathrm{sin}^2(2\theta _c+\theta _0-4\theta _m),$$
(12)
$$L_m^{res}=\frac{c}{\mathrm{\Delta }_n^{(mantle)}}(2n+1)\pi ,$$
(13)
$$L_c^{res}=\frac{c}{\mathrm{\Delta }_n^{(core)}}(2m+1)\pi ,$$
(14)
for n,m=0,1,2,…. This is the case discussed by Petcov in Ref. .
Note that equations \[12a,12b\] of that paper, which give the conditions that must be satisfied in order for the transition probability to take a local maximum, differ from the conditions above because we are discussing global maxima here.
An interesting quantity is $`P^{res}-P_{max}^{onelayer}`$, the difference between the parametric resonance transition probability and the maximum that the transition probability can take in a single layer (defined to be max$`\{\mathrm{sin}^2[2\theta _{inner}-\theta _0],\mathrm{sin}^2[2\theta _{outer}-\theta _0]\}`$). This is plotted in Figure 1, where we have chosen the outer layers of the sandwich to have density $`\rho =4.5\mathrm{gcm}^{-3}`$ and electron fraction $`Y_e=.49`$, and the inner layer to have $`\rho =11.5\mathrm{gcm}^{-3}`$ and $`Y_e=.46`$. These are the average values given by the Stacey earth model and those used by Petcov in .
To illustrate these ideas we consider a particular case. Suppose that $`\theta _0=.05`$ and that for the neutrino energy we are interested in we satisfy $`\mathrm{log}[\delta m^2/2E_\nu (\mathrm{eV})]=-12.55`$. Then by calculating the matter mixing angles we find that we are in Case II above and that we can choose the lengths of the inner and outer layers in such a way that the transition probability is unity. For this case $`L^{(outer)}=11629`$ km and $`L^{(inner)}=2080`$ km. This is to be contrasted with the maximum that the transition probability could take in a single layer, which for this case is $`.53`$. Note that for a given $`\theta _0`$, this type of resonance only has an interesting effect in a narrow region about the MSW resonance energies in the mantle and core.
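A rough numerical cross-check of this example can be built from the rotation machinery above. This is our own sketch: the matter-potential coefficient ($`7.63\times 10^{-14}`$ eV per unit of $`Y_e\rho [\mathrm{gcm}^{-3}]`$), the conversion constant, and the axis conventions are assumptions, and the scan determines each layer phase only modulo a full precession period, so the lengths printed should be compared with the quoted values up to such periods.

```python
import numpy as np

HBARC = 1.97e-5      # eV cm
VCOEF = 7.63e-14     # eV per unit of Y_e * rho[g/cm^3]; stands in for sqrt(2) G_f N_e

def axis_rate(Delta0, theta0, rho_Ye):
    A = VCOEF*rho_Ye
    nx, nz = -Delta0*np.sin(2*theta0), Delta0*np.cos(2*theta0) - A
    Dn = np.hypot(nx, nz)
    return np.array([nx, 0.0, nz])/Dn, Dn

def rot(a, phi):
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(phi)*K + (1 - np.cos(phi))*(K @ K)

theta0, Delta0 = 0.05, 10**(-12.55)              # the case quoted above
nm, Dm = axis_rate(Delta0, theta0, 4.5*0.49)     # outer (mantle) layer
nc, Dc = axis_rate(Delta0, theta0, 11.5*0.46)    # inner (core) layer

nu1 = np.array([-np.sin(2*theta0), 0.0, np.cos(2*theta0)])   # Bloch vector of nu_1
phis = np.linspace(0.0, 2*np.pi, 400)
P = np.array([[0.5*(1 - (rot(nm, p1) @ rot(nc, p2) @ rot(nm, p1) @ nu1)[2])
               for p2 in phis] for p1 in phis])
i, j = np.unravel_index(P.argmax(), P.shape)
# phases -> lengths in km, defined only up to multiples of a full 2*pi period
print(P.max(), phis[i]*HBARC/Dm/1e5, phis[j]*HBARC/Dc/1e5)
```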
The answer to the question of whether or not an oscillation length resonance actually occurs in the earth depends on the actual core and mantle lengths and the extent to which this simple model of the earth mirrors reality. The question is most easily answered by simply examining $`P^{earth}-P^{onelayer}`$. An examination of the dependence of $`P^{earth}-P^{onelayer}`$ on $`\theta _0`$ and $`E_\nu `$ reveals that this quantity is positive in only a small region. We conclude that there is destructive interference as often as constructive. A different way of describing the phenomenon in the earth, then, is to say not that it leads to an enhancement of the transition probability, but that the transition probability is more sensitive to the neutrino energy and to the phase of the neutrino state at the mantle/core boundary. Consequently, the transition probability for core-crossing neutrinos has a sharp energy dependence. It is characterized, for $`2\theta _0\lesssim .1`$, by a peak centered at $`E_{\mathrm{res}}^{\mathrm{mantle}}(1+.5)`$, with a bump on each side. With a detector like SNO, with an expected energy resolution of $`15\%`$, this sharp structure may be useful for determining the mixing parameters. Of course, the clearer signature of MSW resonance in the mantle would first be observed.
In summary, we have shown how an analogy with the simple rotational geometry of spin 1/2 systems can be used to picture and solve certain neutrino oscillation problems. We have illustrated the method by finding the core and mantle lengths in a 3 layer earth model which maximize the $`\nu _1\nu _\mu `$ transition probability.
We note that two papers by Chizhov and Petcov recently appeared which independently addressed some of the same issues investigated here. In contrast to our geometric arguments, they have algebraically derived the results for cases I-III.
Acknowledgements. This work was supported in part by NSF Grant PHY98-00980 at UCSD. G.M.F. acknowledges stimulating conversations with S. Petcov and A.B. Balantekin at the 1998 Neutrino Workshop at the Aspen Center for Physics.
Figure Captions: Figure 1. Contour plot of $`P^{res}-P^{onelayer}`$. The vertical axis is $`2\theta _0`$, the horizontal is $`\mathrm{log}\delta m^2/2E`$, and the argument of the log is in eV.
# Gravity waves goodbye
## Abstract
The detection of a stochastic background of long-wavelength gravitational waves (tensors) in the cosmic microwave background (CMB) anisotropy would be an invaluable probe of the high energy physics of the early universe. Unfortunately a combination of factors now makes such a detection seem unlikely: the vast majority of the CMB signal appears to come from density perturbations (scalars), with detailed fits to current observations indicating a tensor-to-scalar quadrupole ratio of $`T/S<0.5`$ for the simplest models; and on the theoretical side the best-motivated inflationary models seem to require very small $`T/S`$. Unfortunately CMB temperature anisotropies can only probe a gravity wave signal down to $`T/S\sim 10\%`$, and optimistic assumptions about polarization of the CMB lower this by only another order of magnitude.
Inflation is the only known mechanism for producing an almost scale-invariant spectrum of adiabatic scalar (density) fluctuations, a prediction which is steadily gaining observational support. The simplest models of inflation also predict an almost scale-invariant spectrum of gravity waves. A definitive detection of these waves would constitute a window onto physics at higher energies than have ever been probed before. Indeed, it has been realized for some time that, for monomial inflation models within the slow-roll approximation, measurement of this spectrum could allow a reconstruction of the inflaton potential itself . Unfortunately, a combination of factors now makes this seem unlikely: the vast majority of the CMB signal probably comes from scalar perturbations, for reasons we shall now describe.
Theoretically one could imagine that whatever mechanism produces the initial fluctuations which seeded structure formation would produce all three types of perturbations (scalar, vector and tensor) roughly equally. If this happens early enough, the vector modes – representing fluid vorticity – would decay with the expansion of the universe leaving only scalar and tensor perturbations today. Unfortunately we live in a “special” universe in which the perturbations appear to be both adiabatic and close to scale-invariant (equal contribution to the metric perturbation per logarithmic interval in wavelength). Our best paradigm for producing such fluctuations is amplification of quantum fluctuations by a period of accelerated expansion, i.e. inflation. Within this paradigm we shall see that $`T/S`$ is expected to be considerably smaller than the naive $`1:1`$ ratio.
While some simple models of inflation predict $`T/S\sim 1`$, we would hope that inflation would one day find a home in modern particle physics theories. From this perspective, there is currently a “considerable theoretical prejudice against the likelihood” of an observable gravity wave signal in the CMB anisotropy. While our knowledge of physics above the electroweak scale is extremely uncertain, a large $`T/S`$ requires two unlikely events. First, the scale of variation of the inflaton field during inflation would need to be $`𝒪(m_{\mathrm{Pl}})`$ or greater, which is inconsistent with an ordinary extension of the standard model . Second, the size of the inflaton potential would necessarily be $`V^{1/4}\sim 10^{-2}m_{\mathrm{Pl}}`$, orders of magnitude larger than generically expected from particle theory . Thus inflation, in a particle physics context, predicts that the scalar perturbations will be dramatically enhanced over the tensor perturbations. A separate line of argument, involving a description of inflation motivated by quantum gravity, leads to a prediction of $`T/S\sim 1.7\times 10^{-3}`$ .
What is the situation on the observational side? The measurement of the large-angle CMB anisotropies by the COBE satellite has been followed by ground- and balloon-based observations of the smaller scale regions of the CMB power spectrum. Measurements on a range of scales are needed to constrain gravity waves, since tensors are expected to contribute only to angular scales greater than about $`1^{\circ }`$. Thus large-scale power greater than that expected from the extrapolation of small-scale scalar power can be attributed to tensors. This small-scale power can be measured from the CMB itself or additionally from matter fluctuations in more recent epochs, allowing a large lever arm in scale.
We have recently placed upper limits on $`T/S`$ using a variety of observations . We used CMB anisotropy data as well as information about the matter fluctuations from galaxy correlation, cluster abundance, and Lyman $`\alpha `$ forest measurements. Our limits include a variety of additional constraints (such as the age of the universe, cluster baryon density, and recent supernova measurements), in all cases marginalizing over the relevant but as yet imprecisely determined cosmological parameters. We placed constraints on exponential and polynomial inflaton potential models; these “large-field” models predict substantial $`T/S`$ and are therefore of interest here. We found $`T/S<0.5`$ at 95% confidence, with the small-angle CMB data providing the bulk of the constraint (see Fig. 1).
In the next several years a pair of satellite missions should dramatically improve our picture of the CMB. MAP and especially the Planck Surveyor will map the CMB at unprecedented precision. What can we expect from these missions regarding gravity wave constraints? With a cosmic variance limited experiment capable of determining only the anisotropy in the CMB, but with all other parameters known, one can measure $`T/S`$ only if it is larger than about 10% . To additionally measure the tensor spectral index and check the inflationary consistency relation requires $`T/S`$ to be a factor of several larger. This is in conflict with current theoretical prejudice, and realistically also in conflict with the experimental limits.
One can potentially improve sensitivity to $`T/S`$ by measuring the polarization of the CMB. With more observables the error bars on parameters are tightened. In addition, polarization breaks the degeneracy between reionization and a tensor component, both of which affect the relative amplitude of large and small angular scales, allowing extraction of smaller levels of signal . Model dependent constraints on a tensor perturbation mode as low as $`1\%`$ appear to be possible with the Planck satellite , though numerical inaccuracies plagued earlier work making these numbers somewhat soft.
Scalar modes have no “handedness” and hence they generate only parity even, or $`E`$-mode polarization . A detection of $`B`$-mode polarization would thus indicate the presence of other modes, with tensors being more likely since vector modes decay cosmologically. Unfortunately the detection of a $`B`$-mode polarization will be a formidable experimental challenge. The level of the signal is expected to be very small: only a few tens of nK. One can regain some signal-to-noise ratio by concentrating on the sign of the correlation of polarization with temperature anisotropies on large-angular scales (scalar polarization is tangential around hot spots, while tensor polarization is radial). Unfortunately only a small fraction of the signal is correlated, so again the signal is extremely small (less than $`1\mu `$K), and the correlation is swamped by the scalar signal unless $`T/S`$ is significant.
The ground-based laser interferometers LIGO and VIRGO, the proposed space-based interferometer LISA, and millisecond pulsar timing offer another conceivable route to primordial gravity wave detection. However, the long lever arm from the horizon scale to the scales probed by these experiments makes direct detection infeasible .
All of these arguments combine to considerably reduce the optimism for the detailed reconstruction of the inflaton potential through cosmological gravity wave measurement. However, since inflation appears to predict low $`T/S`$, it is good news that observations support a small tensor contribution. The apparent demise of the significance of tensors for the CMB has one important consequence: detection of even a modest contribution of gravity waves would profoundly affect our view of early universe particle physics.
###### Acknowledgements.
This research was supported by the Natural Sciences and Engineering Research Council of Canada and by the U. S. National Science Foundation.
|
no-problem/9904/hep-ex9904023.html
|
ar5iv
|
text
|
# Search for Second Generation Leptoquark Pairs Decaying to 𝜇𝜈+𝑗𝑒𝑡𝑠 in 𝑝𝑝̄ Collisions at √𝑠 = 1.8 TeV
## Abstract
We report on a search for second generation leptoquarks (LQ) produced in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV using the DØ detector at Fermilab. Second generation leptoquarks are assumed to be produced in pairs and to decay to either $`\mu `$ or $`\nu `$ and either a strange or a charm quark ($`q`$). Limits are placed on $`\sigma (p\overline{p}LQ\overline{LQ}\mu \nu +jets)`$ as a function of the mass of the leptoquark. For equal branching ratios to $`\mu q`$ and $`\nu q`$, second generation scalar leptoquarks with a mass below 160 GeV/$`c^2`$, vector leptoquarks with anomalous minimal vector couplings with a mass below 240 GeV/$`c^2`$, and vector leptoquarks with Yang-Mills couplings with a mass below 290 GeV/$`c^2`$, are excluded at the 95% confidence level.
Leptoquarks (LQ) are hypothetical particles that carry color, fractional electric charge, and both lepton and baryon number. They appear in several extended gauge theories and composite models beyond the standard model. Leptoquarks with universal couplings to all lepton flavors would give rise to flavor-changing neutral currents, and are therefore tightly constrained by experimental data. To satisfy experimental constraints on flavor-changing neutral currents, leptoquarks that couple only to second generation leptons and quarks are considered.
This Letter reports on a search for second generation leptoquark pairs produced in $`p\overline{p}`$ interactions at a center-of-mass energy $`\sqrt{s}`$ = 1.8 TeV. They are assumed to be produced dominantly via the strong interaction, $`p\overline{p}\to g+X\to LQ\overline{LQ}+X`$. The search is conducted for the signature where one of the leptoquarks decays via LQ $`\to `$ muon + quark and the other via LQ $`\to `$ neutrino + quark, where the quark may be either a strange or a charm quark. The corresponding experimental cross section is $`2\beta (1-\beta )\times \sigma (p\overline{p}\to LQ\overline{LQ})`$ with $`\beta `$ the unknown branching fraction to a charged lepton ($`e,\mu ,\tau `$) and a quark (jet) and ($`1-\beta `$) the branching fraction to a neutrino ($`\nu `$) and a jet. The search considers leptoquarks with scalar or vector couplings in the $`\mu \nu +jets`$ final state. Additional details on this analysis may be found in reference mywork . Previous studies by the DØ and CDF collaborations have considered the $`\mu \mu +jets`$ final state for scalar couplings, resulting in limits of 140 GeV/$`c^2`$ and 160 GeV/$`c^2`$ respectively for $`\beta `$ = 1/2.
The DØ detector consists of three major components: an inner detector for tracking charged particles, a uranium–liquid argon calorimeter for measuring electromagnetic and hadronic showers, and a muon spectrometer consisting of a magnetized iron toroid and three layers of drift tubes. Jets are measured with a fractional energy resolution of approximately $`\sigma (E)/E=0.8/\sqrt{E}`$ ($`E`$ in GeV). Muons are measured with a momentum resolution $`\sigma (1/p)=0.18(p-2)/p^2\oplus 0.003`$ ($`p`$ in GeV/$`c`$).
Event samples are obtained from triggers requiring the presence of a muon candidate with transverse momentum $`p_T^\mu >`$ 5 GeV/$`c`$ in the fiducial region $`|\eta _\mu |<1.7`$ ($`\eta \equiv -\mathrm{ln}[\mathrm{tan}(\frac{1}{2}\theta )]`$, where $`\theta `$ is the polar angle of the track with respect to the $`z`$ axis taken along the proton beam line), and at least one jet candidate with transverse energy $`E_T^j>`$ 8 GeV and $`|\eta _j|<`$ 2.5. The data used for this analysis correspond to an integrated luminosity of 94$`\pm `$5 pb<sup>-1</sup> collected during the 1993–1995 and 1996 Tevatron collider runs at Fermilab.
In the final event sample, muon candidates are required to have a reconstructed track originating from the interaction region consistent with a muon of $`p_T^\mu >`$ 25 GeV/$`c`$ and $`|\eta _\mu |<0.95`$. To reduce backgrounds from heavy quark production, muons must be isolated from jets ($`\mathrm{\Delta }\mathcal{R}(\mu ,jet)>0.5`$ for $`E_T^j>`$ 15 GeV, where $`\mathrm{\Delta }\mathcal{R}(\mu ,jet)`$ is the separation between the muon and the jet in the $`\eta `$-$`\varphi `$ plane), and have energy deposition in the calorimeter consistent with that of a minimum ionizing particle. Events are required to have one muon satisfying these requirements. Events containing a second muon which satisfies these requirements, with the fiducial requirement relaxed to $`|\eta _\mu |<1.7`$, are rejected.
Jets are measured in the calorimeters and are reconstructed using a cone algorithm with a radius $`\mathcal{R}=0.5`$ ($`\mathcal{R}\equiv \sqrt{\mathrm{\Delta }\varphi ^2+\mathrm{\Delta }\eta ^2}`$). Jets must be produced within $`|\eta _j|<2.0`$ and have $`E_T^j>15`$ GeV, with the most energetic jet in each event required to have $`|\eta _j|<1.5`$.
The transverse energy of the neutrino is not directly measured, but is inferred from the energy imbalance in the calorimeters and the momentum of the reconstructed muon. Events are required to have missing transverse energy $`E\text{/}_T>30`$ GeV. To ensure that $`E\text{/}_T`$ is not dominated by mismeasurement of the muon $`p_T`$, events having $`E\text{/}_T`$ within $`\pi \pm 0.1`$ radians of the muon track in azimuth are rejected.
To provide further rejection against dimuon events in which one of the muons was not identified in the spectrometer, muons are identified by a pattern of isolated energy deposited in the longitudinal segments of the hadronic calorimeter . Any event where such deposited energy lies along a track originating from the interaction vertex in the region $`|\eta |<1.7`$ and is within 0.25 radians in azimuth of the direction of the $`E\text{/}_T`$ vector is rejected.
Each candidate event is required to pass a selection based on the expected LQ event topology. Since the decay products of the LQ are $`\mu q`$ or $`\nu q`$, the muon and neutrino in LQ pair decays come from different parent particles nearly at rest and are therefore uncorrelated. For the primary background events (e.g. $`W+jets`$), the two leptons have the same parent. Similar reasoning holds for the jets. Correlated backgrounds are rejected with the requirement of significant separation between the muon and $`E\text{/}_T`$ ($`|\mathrm{\Delta }\varphi (\mu ,E\text{/}_T)|>0.3`$) and between the two leading jets ($`\mathrm{\Delta }\mathcal{R}(j_1,j_2)>1.4`$).
The ISAJET Monte Carlo event generator is used to simulate the scalar leptoquark ($`S_{LQ}`$) signal, and PYTHIA is used for the vector leptoquark ($`V_{LQ}`$) signal. The efficiencies for $`V_{LQ}`$ and $`S_{LQ}`$ are consistent within differences due to the choice of generator. This is verified by choosing a test point at which both scalar and vector Monte Carlo events from the same generator are compared. Therefore, efficiencies obtained from the two simulations are not distinguished. In addition, the efficiencies for vector leptoquarks are insensitive to differences between minimal vector ($`\kappa _G`$ = 1; $`\lambda _G`$ = 0) and Yang-Mills ($`\kappa _G`$ = 0; $`\lambda _G`$ = 0) couplings at large mass ($`M_{V_{LQ}}>`$ 200 GeV/$`c^2`$). The leptoquark production cross sections used for the $`S_{LQ}`$ are from next-to-leading order (NLO) calculations with a renormalization scale $`\mu =M_{S_{LQ}}`$ and uncertainties determined from variation of the renormalization/factorization scales from $`2M_{S_{LQ}}`$ to $`\frac{1}{2}M_{S_{LQ}}`$. The $`V_{LQ}`$ cross sections are leading order (LO) calculations at a scale $`\mu =M_{V_{LQ}}`$.
The dominant backgrounds, from $`W+jets`$ and $`Z+jets`$, are simulated using VECBOS for parton level generation and HERWIG for parton fragmentation. Background due to $`WW`$ production is simulated with PYTHIA. Additional background from $`t\overline{t}`$ decays into one or more muons and two or more jets, is simulated using the HERWIG Monte Carlo program for a top quark mass of 170 GeV/$`c^2`$. Monte Carlo samples are processed through a detector simulation program based on the GEANT package.
With the initial data selection described above, there are 107 events, consistent with a background of $`106\pm 30`$ events (see Fig. 1). The dominant background is $`W+jets`$ with 100$`\pm `$30 events. Other backgrounds are 2.7$`\pm `$0.7 ($`Z+jets`$), 2.4$`\pm `$0.8 ($`t\overline{t}`$), and 1.5$`\pm `$0.6 ($`WW`$) events. The uncertainty in the background is dominated by the statistical uncertainty in the $`W+jets`$ simulation and the systematic uncertainty in the $`W+jets`$ cross section. The expected signal for 160 GeV/$`c^2`$ scalar leptoquarks is $`4.8\pm 0.7`$ events. Signal estimations are shown for a $`S_{LQ}`$ mass of 160 GeV/$`c^2`$ using the NLO cross section with a scale of $`2M_{S_{LQ}}`$.
To separate any possible signal from the backgrounds, a neural network (NN) with four inputs ($`E_T^{j_1}`$, $`E_T^{j_2}`$, $`p_T^\mu `$, and $`E\text{/}_T`$) and nine nodes in a single hidden layer is used. The network is trained on a mixture of $`W+jets`$, $`Z+jets`$ and $`t\overline{t}`$ background Monte Carlo events, and an independently generated signal Monte Carlo sample at a mass of 160 GeV/$`c^2`$. Figure 1 shows distributions of the four input quantities and Fig. 2 the network output (referred to as the discriminant, $`D_{\text{NN}}`$). No evidence of a signal is observed in either the discriminant distribution or any of the kinematic distributions. For setting limits, the selection on $`D_{\text{NN}}`$ is optimized by maximizing a measure of sensitivity defined by
$$S(D_{\text{NN}})\equiv \underset{k=0}{\overset{n}{\sum }}P(k,b)M_A^{95\%}(k,b,s(M_{LQ}))$$
where $`P(k,b)=e^{-b}b^k/k!`$ is a Poisson coefficient, with $`k`$ being any possible number of observed events, $`b`$ the expected mean number of background events, and $`s(M_{LQ})`$ the expected signal for a given leptoquark mass. $`M_A^{95\%}`$ is an approximate mass limit at the 95% confidence level for a given $`k`$, $`s`$ and $`b`$. $`S(D_{\text{NN}})`$ is the sum of the approximate mass limits, weighted by the probability of observing $`k=0,1,2,\mathrm{},n`$ ($`P(n,b)<0.05`$) events, for a particular choice of the $`D_{\text{NN}}`$ selection criterion.
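In code, the optimization amounts to scanning cut values and evaluating the weighted sum above for each. A minimal sketch follows; the stand-in for $`M_A^{95\%}`$, the simple Poisson exclusion criterion, and the truncation of the sum are our simplifications rather than the exact construction used in the analysis.

```python
from math import exp, factorial

def poisson(k, mu):
    return exp(-mu)*mu**k/factorial(k)

def mass_limit(k, b, s_of_m, masses):
    """Stand-in for M_A^95%: the largest mass still excluded at 95% CL, i.e. for
    which observing <= k events over background b is less than 5% probable,
    assuming s_of_m decreases with mass."""
    excluded = [m for m in masses
                if sum(poisson(i, s_of_m(m) + b) for i in range(k + 1)) < 0.05]
    return max(excluded) if excluded else min(masses)

def sensitivity(b, s_of_m, masses):
    """S = sum_k P(k|b) * M_A^95%(k, b, s); truncate once P(n|b) < 0.05 past the peak.
    Fine for the small backgrounds relevant here."""
    total, k = 0.0, 0
    while k <= b or poisson(k, b) >= 0.05:
        total += poisson(k, b)*mass_limit(k, b, s_of_m, masses)
        k += 1
    return total

# usage: for each candidate D_NN cut, estimate (b, s_of_m) from Monte Carlo and
# keep the cut with the largest sensitivity(b, s_of_m, masses).
```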
By maximizing the value of $`S(D_{\text{NN}})`$ a discriminant selection of $`D_{\text{NN}}>0.9`$ is obtained. With this selection, no events remain in the data, which is consistent with an expected background of $`0.7\pm 0.9`$ events. The remaining background is dominated by $`t\overline{t}`$ (0.6$`\pm `$0.2 events). The uncertainty on the total background is dominated by the statistical and systematic uncertainties from $`W+jets`$.
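To make the architecture concrete, here is a minimal sketch of a network with the quoted topology (four kinematic inputs, nine hidden nodes, one output) and of how the optimized requirement $`D_{\text{NN}}>0.9`$ would be applied. This is our own toy code, with inputs assumed pre-scaled to order one, and not the analysis software.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (9, 4)), np.zeros(9)   # 4 inputs -> 9 hidden nodes
W2, b2 = rng.normal(0, 0.5, 9), 0.0                # 9 hidden nodes -> 1 output

def sig(z):
    return 1.0/(1.0 + np.exp(-z))

def discriminant(x):
    """x = (E_T^j1, E_T^j2, p_T^mu, missing E_T), pre-scaled to O(1)."""
    return sig(W2 @ sig(W1 @ np.asarray(x) + b1) + b2)

def train(X, y, lr=0.05, epochs=2000):
    """Batch gradient descent on cross-entropy; y = 1 for signal MC, 0 for background."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        H = sig(X @ W1.T + b1)                 # hidden activations, shape (n, 9)
        D = sig(H @ W2 + b2)                   # discriminant, shape (n,)
        g = (D - y)/len(y)                     # dLoss/d(output pre-activation)
        gh = np.outer(g, W2)*H*(1 - H)         # back-propagated to hidden layer
        W2 -= lr*(g @ H); b2 -= lr*g.sum()
        W1 -= lr*(gh.T @ X); b1 -= lr*gh.sum(axis=0)

# after training, candidate events would be kept when discriminant(x) > 0.9
```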
Table I shows the signal detection efficiencies and upper limits on the cross section at the 95% confidence level as a function of the leptoquark mass. The dominant systematic uncertainty on the signal efficiency is due to the simulation (initial and final state radiation, parton distribution function, renormalization scale, choice of generator), with a 10% uncertainty. The systematic uncertainties shown include approximately equal contributions from the uncertainty in the jet energy scale and from the trigger efficiency/spectrometer resolution for high $`p_T`$ muons (6.6% and 6.4%, respectively). The overall systematic uncertainty on the signal efficiency is 15%.
The limits on the observed cross section are shown in Fig. 3, and are compared with the theoretical cross section times branching ratio for scalar and vector leptoquark production for $`\beta =\frac{1}{2}`$. Mass limits of 160 GeV/$`c^2`$ for scalar leptoquarks and 290 (240) GeV/$`c^2`$ for vector leptoquarks with Yang-Mills (minimal vector) couplings, are obtained at the 95% confidence level.
In conclusion, we have performed a search for second generation leptoquarks in the $`\mu \nu +jets`$ decay channel using $`94\pm 5`$ pb<sup>-1</sup> of data collected with the DØ detector at the Fermilab Tevatron. No evidence for a signal is seen and limits are set at the 95% confidence level on the mass of second generation leptoquarks. For equal branching fractions to $`\mu q`$ and $`\nu q`$ ($`\beta =\frac{1}{2}`$) limits of 160 GeV/$`c^2`$, 240 GeV/$`c^2`$, and 290 GeV/$`c^2`$ for $`S_{LQ}`$, minimal vector, and Yang-Mills vector couplings, respectively, are obtained.
We thank the Fermilab and collaborating institution staffs for contributions to this work and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L’Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), and CONICET and UBACyT (Argentina).
# THEORY AND TESTS OF CPT AND LORENTZ VIOLATION

Talk presented at CPT 98, Bloomington, Indiana, November 1998
## 1 Introduction
Nature appears to be covariant both under the discrete transformation CPT, formed from the product of charge conjugation C, parity inversion P, and time reversal T, and under the continuous Lorentz transformations including rotations and boosts. The CPT theorem links these symmetries, stating that under mild technical assumptions CPT is an exact symmetry of local Lorentz-covariant field theories of point particles.
High-precision tests of both CPT and Lorentz invariance exist. According to the Particle Data Group the best figure of merit for CPT tests involves the kaon particle-antiparticle mass difference, which has been bounded by experiments at Fermilab and CERN to
$$\frac{|m_K-m_{\overline{K}}|}{m_K}\lesssim 10^{-18}.$$
(1)
Indeed, at present CPT is the only combination of C, P, T observed as an exact symmetry of nature at the fundamental level.
The existence of high-precision experimental tests and of the general CPT theorem for Lorentz-covariant particle theories means that the observation of CPT or Lorentz violation would be a sensitive signal for unconventional physics beyond the standard model. It is therefore interesting to consider possible theoretical mechanisms through which CPT or Lorentz symmetry might be violated. Most suggestions along these lines in the literature either have physical features that seem unlikely to be realized in nature or involve radical revisions of conventional quantum field theory, or both.
Nonetheless, there does exists at least one promising theoretical possibility, based on spontaneous breaking of CPT and Lorentz symmetry in an underlying theory, that appears to be compatible both with experimental constraints and with established quantum field theory. It suggests that apparent breaking of CPT and Lorentz symmetry might be observable in existing or feasible experiments, and it leads to a general phenomenology for CPT and Lorentz violation at the level of the standard model and quantum electrodynamics (QED). The formulation and experimental implications of this theory are briefly described in this talk.
## 2 Framework
In principle, one can attempt to circumvent the difficult issue of developing a satisfactory microscopic theory allowing CPT and Lorentz breaking by adopting a purely phenomenological approach. This can be done by identifying and parametrizing observable quantities that allow for CPT or Lorentz violation.
A well-known example is the phenomenology of CPT violation in oscillations of neutral kaons. In the neutral-kaon system, linear combinations of the strong-interaction eigenstates $`K^0`$ and $`\overline{K^0}`$ form the physical eigenstates $`K_S`$ and $`K_L`$. These combinations contain two complex parameters, $`ϵ_K`$ and $`\delta _K`$, parametrizing CP violation. One, $`ϵ_K`$, governs T violation with CPT symmetry while the other, $`\delta _K`$, governs CPT violation with T symmetry. The standard model of particle physics has a mechanism for T violation, and so $`ϵ_K`$ is in this context nonzero and in principle calculable. However, CPT is a symmetry of the standard model and so $`\delta _K`$ is expected to vanish. The possibility of a nonzero value of $`\delta _K`$ is from this perspective only a phenomenological choice. It has no grounds in a microscopic theory and $`\delta _K`$ is therefore not calculable. Indeed, in the absence of a microscopic theory, it is even unclear whether this parametrization makes physical sense. Moreover, without a microscopic origin $`\delta _K`$ cannot be linked to other phenomenological parameters for CPT tests in different experiments.
Evidently, it is more attractive theoretically to develop an explicit microscopic theory for CPT and Lorentz violation. With a theory of sufficient scope, a general and quantitative phenomenology for CPT and Lorentz violation could then be extracted at the level of the standard model. This would allow calculation of phenomenological parameters, direct comparisons between experiments, and perhaps the prediction of signals.
The development of a microscopic theory of this type is feasible within the context of spontaneous CPT and Lorentz breaking. The idea is that the underlying theory of nature has a Lorentz- and CPT-covariant action, but apparent violations of these symmetries could result from their spontanteous violation in solutions to the theory. It appears that this mechanism is viable from the theoretical viewpoint and is an attractive way to violate CPT and Lorentz invariance.
Since spontaneous breaking is a property of the solution rather than the dynamics of a theory, the broken symmetry plays an important role in establishing the physics. In the case of CPT and Lorentz violation, spontaneous breaking has the advantage that many of the desirable properties of a Lorentz-covariant theory can be expected. This is in sharp distinction to other types of CPT and Lorentz breaking, which often are inconsistent with theoretical notions such as causality or probability conservation.
The physics of a particle in a vacuum with spontaneous Lorentz violation is in some respects similar to that of a conventional particle moving inside a biaxial crystal. This system typically breaks Lorentz covariance both under rotations and under boosts. However, instead of leading to fundamental problems, the lack of Lorentz covariance is merely a result of the presence of the background crystal fields, which leaves unaffected features such as causality. Indeed, one can explicitly confirm microcausality in certain simple models arising from spontaneous CPT and Lorentz breaking.
In a Lorentz-covariant theory, certain types of interaction among Lorentz-tensor fields could trigger spontaneous breaking of Lorentz symmetry. The idea is that these interactions could destabilize the naive vacuum and generate nonzero Lorentz-tensor expectation values, which fill the true vacuum and cause spontaneous Lorentz breaking. This also induces spontaneous CPT violation whenever the expectation values involve tensor fields with an odd number of spacetime indices. Provided components of the expectation values lie along the four macroscopic spacetime dimensions, apparent violations of CPT and Lorentz symmetry could arise at the level of the standard model. This could lead to observable effects, some of which are described in the following sections.
Conventional four-dimensional renormalizable gauge theories such as the standard model lack the necessary destabilizing interactions to trigger spontaneous Lorentz violation. However, the mechanism may be realized in some string (M) theories because suitable Lorentz-tensor interactions occur. This can be investigated using string field theory in the special case of the open bosonic string, where the action and equations of motion can be analytically derived for particle fields below some fixed level number $`N`$. Obtaining and comparing solutions for different $`N`$ allows the identification of solutions that persist as $`N`$ increases. For some cases this procedure has been performed to a depth of over 20,000 terms in the static potential. The solutions remaining stable as $`N`$ increases include ones spontaneously breaking Lorentz symmetry.
In standard field theories, spontaneous breaking of a continuous global symmetry is accompanied by the appearance of massless modes, ensured by the Nambu-Goldstone theorem. Promoting a global spontaneously broken symmetry to a local gauge symmetry leads to the Higgs mechanism: the massless modes disappear and a mass is generated for the gauge boson. Similarly, spontaneous breaking of a continuous global Lorentz symmetry would also lead to massless modes. However, although the inclusion of gravity promotes Lorentz invariance to a local symmetry, no analogue to the Higgs effect occurs. The dependence of the connection on derivatives of the metric rather than the metric itself ensures that the graviton propagator is affected in such a way that no graviton mass is generated when local Lorentz symmetry is spontaneously broken.
## 3 Standard-Model and QED Extensions
Assuming spontaneous CPT and Lorentz violation does occur, then any apparent breaking at the level of the SU(3)$`\times `$SU(2)$`\times `$U(1) standard model and QED must be highly suppressed to remain compatible with established experimental bounds. If the appropriate dimensionless suppression factor is determined by the ratio of a low-energy (standard-model) scale to the (Planck) scale of an underlying fundamental theory, then relatively few observable effects of Lorentz or CPT violation would arise. To study these, it is useful to develop an extension of the standard model obtained as the low-energy limit of the fundamental theory.
To gain insight about the construction of such an extension, consider as an example a possible coupling between one or more bosonic tensor fields and fermion bilinears in the low-energy limit of the underlying theory. When the tensors acquire expectation values $`T`$, the low-energy theory gains additional terms of the form
$$\frac{\lambda }{M^k}T\overline{\psi }\mathrm{\Gamma }(i)^k\chi +h.c..$$
(2)
Here, the gamma-matrix structure $`\mathrm{\Gamma }`$ and the $`k`$ spacetime derivatives $`i\partial `$ determine the Lorentz properties of the bilinear in the fermion fields $`\psi `$, $`\chi `$ and hence fix the type of apparent CPT and Lorentz violation in the low-energy theory. The effective coupling involves an expectation value $`\langle T\rangle `$ together with a dimensionless coupling $`\lambda `$ and a suitable power of a large (Planck or compactification) scale $`M`$ associated with the fundamental theory.
Proceeding along these lines, one can determine all possible terms arising at the level of the standard model from spontaneous CPT and Lorentz breaking in any underlying theory (not necessarily string theory). This leads to a general Lorentz-violating extension of the standard model that includes both CPT-even and CPT-odd terms. It contains all possible allowed hermitian terms preserving both SU(3)$`\times `$SU(2)$`\times `$U(1) gauge invariance and power-counting renormalizability. It appears at present to be the sole candidate for a consistent extension of the standard model based on a microscopic theory of Lorentz violation.
Despite the apparent CPT and Lorentz breaking, the standard-model extension exhibits several desirable properties of conventional Lorentz-covariant field theories by virtue of its origin in spontaneous symmetry breaking from a covariant underlying theory. Thus, the usual quantization methods are valid and features like microcausality and positivity of the energy are to be expected. Also, energy and momentum are conserved provided the tensor expectation values are independent of spacetime position (no soliton solutions). Even one type of Lorentz symmetry remains: the theory is covariant under rotations or boosts of the observer’s inertial frame (observer Lorentz transformations). The apparent Lorentz violations appear only when (localized) fields are rotated or boosted (particle Lorentz transformations) relative to the vacuum tensor expectation values.
In the case of the conventional standard model, one can obtain the usual versions of QED by taking suitable limits. For the standard-model extension, it can be shown that the usual gauge symmetry breaking to the electromagnetic U(1) occurs, and taking appropriate limits yields generalizations of the usual versions of QED. It turns out that the apparent CPT and Lorentz breaking can arise in both the photon and fermion sectors. These extensions of QED are of particular interest because many high-precision QED tests of CPT and Lorentz symmetry exist.
An explicit and relatively simple example is the restriction of the standard-model extension to an extension of QED involving only photons, electrons, and positrons. The usual lagrangian is:
$$\mathcal{L}^{\mathrm{QED}}=\overline{\psi }\gamma ^\mu (\frac{1}{2}i\overleftrightarrow{\partial }_\mu -qA_\mu )\psi -m\overline{\psi }\psi -\frac{1}{4}F_{\mu \nu }F^{\mu \nu }.$$
(3)
Apparent Lorentz violation can occur in both the fermion and photon sectors, and it can be CPT even or CPT odd. The CPT-violating terms are:
$$\mathcal{L}_e^{\mathrm{CPT}}=-a_\mu \overline{\psi }\gamma ^\mu \psi -b_\mu \overline{\psi }\gamma _5\gamma ^\mu \psi ,$$
$$\mathcal{L}_\gamma ^{\mathrm{CPT}}=\frac{1}{2}(k_{AF})^\kappa ϵ_{\kappa \lambda \mu \nu }A^\lambda F^{\mu \nu }.$$
(4)
The CPT-preserving terms are:
$$\mathcal{L}_e^{\mathrm{Lorentz}}=c_{\mu \nu }\overline{\psi }\gamma ^\mu (\frac{1}{2}i\overleftrightarrow{\partial }^\nu -qA^\nu )\psi +d_{\mu \nu }\overline{\psi }\gamma _5\gamma ^\mu (\frac{1}{2}i\overleftrightarrow{\partial }^\nu -qA^\nu )\psi -\frac{1}{2}H_{\mu \nu }\overline{\psi }\sigma ^{\mu \nu }\psi ,$$
$$\mathcal{L}_\gamma ^{\mathrm{Lorentz}}=-\frac{1}{4}(k_F)_{\kappa \lambda \mu \nu }F^{\kappa \lambda }F^{\mu \nu }.$$
(5)
The reader is referred to the literature for details of the notation and conventions and for information about the properties of the extra terms. Note, however, that all these terms are invariant under observer Lorentz transformations, whereas the expressions in Eqs. (4) and (5) violate particle Lorentz invariance: the coefficients of the extra terms behave as (minuscule) Lorentz- and CPT-violating couplings. Note also that not all the components of the coefficients appearing are physically observable. For example, field redefinitions can be used to eliminate some coefficients of the type $`a_\mu `$ in the standard-model extension. It turns out that these can be directly detected only in flavor-changing experiments, and so they are unobservable at leading order in experiments restricted to electrons, positrons, and photons.
## 4 Experimental Tests
The standard-model extension described above forms a quantitative framework within which various experimental tests of CPT and Lorentz symmetry can be studied and compared. Moreover, potentially observable signals can be deduced in some cases. Evidently, any tests seeking to establish nonzero CPT- and Lorentz-violating terms in the standard-model extension must contend with the expected heavy suppression of physical effects.
Although many tests of CPT and Lorentz symmetry lack the necessary sensitivity to possible signals, a few special ones can already place useful constraints on some of the new couplings in the standard-model extension. Several of these tests are discussed elsewhere in these proceedings. Among the ones investigated to date are experiments with neutral-meson oscillations, comparative tests of QED in Penning traps, spectroscopy of hydrogen and antihydrogen, measurements of cosmological birefringence, and observations of the baryon asymmetry. The remainder of this talk provides a brief outline of some of these studies. Other work is in progress, including an investigation of constraints from clock-comparison experiments.
### 4.1 Neutral-Meson Oscillations
Flavor oscillations occur or are anticipated in a variety of neutral-meson systems, including $`K`$, $`D`$, $`B_d`$, and $`B_s`$. A neutral-meson state evolves in time according to a non-hermitian two-by-two effective hamiltonian in the meson-antimeson state space. The effective hamiltonian involves complex parameters $`ϵ_P`$ and $`\delta _P`$ that govern (indirect) CP violation, where the neutral meson is denoted by $`P`$. In the $`K`$ system, $`ϵ_K`$ and $`\delta _K`$ are the same phenomenological quantities mentioned in section 2. The parameter $`ϵ_P`$ governs T violation, while $`\delta _P`$ governs CPT violation. Bounds on CPT violation can be obtained by constraining the magnitude of $`\delta _P`$ in experiments with meson oscillations.
In the context of the usual standard model, which preserves CPT, $`\delta _P`$ is necessarily zero. In contrast, in the context of the standard-model extension $`\delta _P`$ is a derivable quantity. It turns out that at leading order $`\delta _P`$ depends only on a single type of extra coupling in the standard-model extension. This type of coupling has the form $`-a_\mu ^q\overline{q}\gamma ^\mu q`$, where $`q`$ represents one of the valence quark fields in the $`P`$ meson and the quantity $`a_\mu ^q`$ is spacetime constant but depends on the quark flavor $`q`$.
Since Lorentz symmetry is broken in the standard-model extension, the derived expression for $`\delta _P`$ varies with the boost and orientation of the $`P`$ meson. Denoting by $`\beta ^\mu =\gamma (1,\stackrel{}{\beta })`$ the four-velocity of the $`P`$-meson in the frame in which the quantities $`a_\mu ^q`$ are specified, it can be shown that $`\delta _P`$ is given at leading order in all coupling coefficients in the standard-model extension by
$$\delta _P\approx i\mathrm{sin}\widehat{\varphi }\mathrm{exp}(i\widehat{\varphi })\gamma (\mathrm{\Delta }a_0-\stackrel{}{\beta }\cdot \mathrm{\Delta }\stackrel{}{a})/\mathrm{\Delta }m.$$
(6)
For simplicity, subscripts $`P`$ have been omitted on the right-hand side. In Eq. (6), $`\mathrm{\Delta }a_\mu =a_\mu ^{q_2}-a_\mu ^{q_1}`$, where $`q_1`$ and $`q_2`$ are the valence-quark flavors for the $`P`$ meson. Also, $`\widehat{\varphi }=\mathrm{tan}^{-1}(2\mathrm{\Delta }m/\mathrm{\Delta }\gamma )`$, where $`\mathrm{\Delta }m`$ and $`\mathrm{\Delta }\gamma `$ are the mass and decay-rate differences, respectively, between the $`P`$-meson eigenstates.
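As a minimal numerical sketch of Eq. (6), one can insert approximate $`K`$-system parameters together with hypothetical values for $`\mathrm{\Delta }a_\mu `$ and the meson boost (all CPT-violating inputs below are assumptions for illustration only):

```python
import numpy as np

# Sketch evaluating Eq. (6) for the K system.  The CPT-violating inputs
# Delta a_mu and the boost are hypothetical; kaon parameters are approximate.
hbar   = 6.582e-25             # GeV s
dm     = 3.48e-15              # GeV, K_L - K_S mass difference (approx.)
dgamma = hbar / 0.8954e-10     # GeV, ~ Gamma_S (Gamma_L neglected)

phi_hat = np.arctan(2.0 * dm / dgamma)

da0 = 1.0e-19                              # hypothetical Delta a_0 (GeV)
da_vec = np.array([1.0e-19, 0.0, 0.0])     # hypothetical spatial Delta a (GeV)
beta = np.array([0.9, 0.0, 0.0])           # illustrative meson velocity
gamma = 1.0 / np.sqrt(1.0 - beta @ beta)

delta_P = (1j * np.sin(phi_hat) * np.exp(1j * phi_hat)
           * gamma * (da0 - beta @ da_vec) / dm)
print(f"phi_hat = {np.degrees(phi_hat):.1f} deg,  |delta_P| = {abs(delta_P):.1e}")
```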
This result has several implications. One is that tests of CPT and Lorentz symmetry with neutral mesons are independent at leading order of all other types of tests mentioned here. This is because $`\delta _P`$ is sensitive only to $`a_\mu ^q`$ and because this sensitivity arises from flavor-changing effects. None of the other experiments described here involve flavor changes, which can be shown to imply that none are sensitive to any $`a_\mu ^q`$.
The result (6) also makes predictions about signals in experiments with neutral mesons. For example, the real and imaginary parts of $`\delta _P`$ are predicted to be proportional. Similarly, Eq. (6) suggests that the magnitude of $`\delta _P`$ may be different for different $`P`$ due to the flavor dependence of the coefficients $`a_\mu ^q`$. For example, if the coefficients $`a_\mu ^q`$ grow with mass as do the usual Yukawa couplings, then the heavier neutral mesons such as $`D`$ or $`B_d`$ may exhibit the largest CPT-violating effects.
The dependence of the result (6) on the meson boost magnitude and orientation implies several notable effects in the signals for CPT and Lorentz violation. For example, two different experiments may have inequivalent CPT and Lorentz reach despite having comparable statistical sensitivity. This could arise if the mesons for one experiment are well collimated while those for the other have a $`4\pi `$ distribution, or if the mesons involved in the two experiments have very different momentum spectra. Another interesting effect is the possibility of diurnal variations in the data, arising from the rotation of the Earth relative to the orientation of the coupling coefficients. This issue may be of some importance because the data in neutral-meson experiments are typically taken over many days.
At present, the tightest clean experimental constraints on CPT violation come from observations of the $`K`$ system. Some experimental results are now also available for the heavier neutral-meson systems. Two collaborations at CERN have performed analyses to investigate whether existing data suffice to bound CPT violation. The OPAL collaboration has published the measurement $`\text{Im}\delta _{B_d}=-0.020\pm 0.016\pm 0.006`$, while the DELPHI collaboration has announced a preliminary result of $`\text{Im}\delta _{B_d}=-0.011\pm 0.017\pm 0.005`$. Further theoretical and experimental studies are underway.
### 4.2 QED Experiments
High-precision measurements of properties of particles and antiparticles can be obtained by trapping individual particles for extended time periods. Comparison of the results provides sensitive CPT tests. Such experiments can constrain the couplings in the fermion sector of the QED extension.
Penning traps can be used to obtain comparative measurements of particle and antiparticle anomaly and cyclotron frequencies. The QED extension predicts direct signals and also effects arising from diurnal variations in the Earth-comoving laboratory frame. Appropriate figures of merit for the various signals have been defined and the attainable experimental sensitivity estimated.
As one example, comparing the anomalous magnetic moments of electrons and positrons would generate an interesting bound on the spatial components of the coefficient $`b_\mu ^e`$ in the laboratory frame. Available technology could place a limit of order $`10^{-20}`$ on the associated figure of merit. A related test involves the search for diurnal variations of the electron anomaly frequency, for which a new experimental result with a figure of merit bounded at $`6\times 10^{-21}`$ is presented in these proceedings by Mittleman, Ioannou, and Dehmelt. Analogous experiments with protons and antiprotons may be feasible.
Particle and antiparticle cyclotron frequencies can also be compared. In these proceedings, Gabrielse and coworkers present the results of an experiment comparing the cyclotron frequencies of $`H^{-}`$ ions and antiprotons in the same trap. The leading-order effects in this experiment provide a test of Lorentz violation in the context of the standard-model extension, with an associated figure of merit bounded at $`4\times 10^{-25}`$.
Tests of CPT and Lorentz symmetry are also possible via high-precision comparisons of spectroscopic data from trapped hydrogen and antihydrogen. An investigation of the possible experimental signals within the context of the standard-model and QED extensions has been performed. Direct sensitivity to CPT- and Lorentz-violating couplings, without suppression factors associated with the fine-structure constant, arises for certain specific 1S-2S and hyperfine transitions in magnetically trapped hydrogen and antihydrogen. In principle, theoretically clean signals might be observed for particular types of CPT and Lorentz violation.
The photon sector of the QED extension can also be tightly constrained from a combination of theoretical considerations and terrestrial, astrophysical, and cosmological experiments on electromagnetic radiation. It is known that the pure-photon CPT-violating term in Eq. (4) can generate negative contributions to the energy, which may limit its viability and suggests the coefficient $`(k_{AF})^\kappa `$ should be zero. In contrast, the CPT-even term in Eq. (5) maintains a positive conserved energy.
The solutions of the extended Maxwell equations with CPT- and Lorentz-breaking effects involve two independent propagating degrees of freedom, as usual. Unlike the conventional propagation of electromagnetic waves in vacuum, however, in the extended Maxwell case the two modes have different dispersion relations. This means the vacuum is birefringent. Indeed, the effects of the CPT and Lorentz violation on an electromagnetic wave traveling in the vacuum are closely analogous to those exhibited by an electromagnetic wave in conventional electrodynamics that is passing through a transparent optically anisotropic and gyrotropic crystal with spatial dispersion of the axes.
The sharpest experimental limits on the extra coefficients in the extended Maxwell equations can be obtained by constraining the birefringence of radio waves on cosmological distance scales. Considering first the CPT-odd coefficient $`(k_{AF})_\mu `$, one finds a bound of order $`10^{-42}`$ GeV on its components. A disputed claim exists for a nonzero effect at the level of $`|\stackrel{}{k}_{AF}|\sim 10^{-41}`$ GeV.
For the CPT-even dimensionless coefficient $`(k_F)_{\kappa \lambda \mu \nu }`$, the single rotation-invariant irreducible component is constrained to less than about $`10^{-23}`$ by the existence of cosmic rays and other tests. Rotation invariance is broken by all the other irreducible components of $`(k_F)_{\kappa \lambda \mu \nu }`$. Although in principle it might be feasible to constrain these coefficients with existing techniques for measuring cosmological birefringence, no limits presently exist. It is plausible that a bound at the level of about $`10^{-27}`$ could be placed on components of $`(k_F)_{\kappa \lambda \mu \nu }`$.
The sharp experimental constraints obtained on $`(k_{AF})_\mu `$ are compatible with the zero value needed to avoid negative-energy contributions. However, no symmetry protects a zero tree-level value of $`(k_{AF})_\mu `$. It might therefore seem reasonable to expect $`(k_{AF})_\mu `$ to acquire a nonzero value from radiative corrections involving CPT-violating couplings in the fermion sector. Nonetheless, this does not occur: an anomaly-cancellation mechanism can ensure that the net sum of all one-loop radiative corrections is finite. The situation is technically involved because the contribution from each individual radiative correction is ambiguous, but the anomaly-cancellation mechanism can hold even if one chooses to define the theory such that each individual radiative correction is nonzero. Thus, a tree-level CPT-odd term is unnecessary for one-loop renormalizability. Similar effects may occur at higher loops. This ability to impose the vanishing of an otherwise allowed CPT-odd term represents a significant check on the consistency of the standard-model extension.
For the CPT-even Lorentz-violating pure-photon term there is no similar mechanism, and in fact calculations have explicitly demonstrated the existence of divergent radiative corrections at the one-loop level. This therefore leaves open the interesting possibility of future detection of a nonzero effect via measurements of cosmological birefringence.
Various other possible observable CPT effects have been identified. For example, under suitable conditions the observed baryon asymmetry can be generated in thermal equilibrium through CPT- and Lorentz-violating bilinear terms. A relatively large baryon asymmetry produced at grand-unified scales would eventually become diluted to the observed value through sphaleron or other effects. This mechanism represents one possible alternative to the conventional scenarios for baryogenesis, in which nonequilibrium processes and C- and CP-breaking interactions are required.
## Acknowledgments
I thank Orfeu Bertolami, Robert Bluhm, Chuck Lane, Don Colladay, Roman Jackiw, Rob Potting, Neil Russell, Stuart Samuel, and Rick Van Kooten for collaborations. This work was supported in part by the United States Department of Energy under grant number DE-FG02-91ER40661.
|
no-problem/9904/cond-mat9904071.html
|
ar5iv
|
text
|
# Phase Transition and Symmetry Breaking in the Minority Game
## Abstract
We show that the Minority Game, a model of interacting heterogeneous agents, can be described as a spin system and that it displays a phase transition between a symmetric phase and a symmetry-broken phase where the game's outcome is predictable. As a result a “spontaneous magnetization” arises in the spin formalism.
Market interactions among economic agents give rise to fluctuation phenomena which are attracting much interest in statistical physics. The search for a toy system to study agents with market-like interactions has led to the definition of the Minority Game (MG), a model inspired by Arthur’s “El Farol” problem, which embodies some basic market mechanisms while keeping the mathematical complexity to a minimum.
In short, the MG is a repeated game where $`N`$ agents have to decide which of two actions (such as buy or sell) to take. With $`N`$ odd, this procedure identifies a minority action as the one chosen by the minority. Agents who took the minority action are rewarded with one payoff unit, whereas the majority of agents loses one unit. Agents do not communicate with one another and they have access to “public information” – related to past game outcomes – represented by one of $`P`$ possible patterns.
The strategic point of view of game theory may require, in a case like this, a prohibitive computational task from each of the agents. That is especially true if $`N`$ and $`P`$ are very large and agents do not have complete information on the detailed mechanism which determines their payoffs, the identity of their opponents or even their number $`N`$. In such complex strategic situations – which are similar to those that agents face in stock markets – agents may prefer to simplify their decision task by looking for simple behavioral rules which prescribe an action for each of the $`P`$ possible patterns. This may be particularly advantageous if computational costs exist.
This behavior, called inductive reasoning in ref. , is the basis of the MG: each agent has a pool of $`S`$ rules which prescribe an action for each of the $`P`$ patterns. At each time, she follows her best rule (see below for a more precise definition). These rules, called strategies below, are initially drawn at random among all possible rules, independently for each agent in order to model agents’ heterogeneity of beliefs and behaviors.
Numerical simulations have shown that this system displays a cooperative phase for large values of the ratio $`\alpha =P/N`$: With respect to the simple “random agent” state – where each agent just tosses a coin to choose her action – agents are better off because they manage to establish a sort of coordination. For small values of $`\alpha `$ agents receive, on average, poorer payoffs than in the random agent state, a behavior which has been related to crowd effects in markets. A qualitative understanding of this behavior has been given in terms of geometric considerations.
In this Letter we show that the model can be described as a spin system and, as $`\alpha =P/N`$ varies, it undergoes a dynamical phase transition with symmetry breaking. The symmetry which gets broken is the equivalence between the two actions: in the symmetric phase ($`\alpha <\alpha _c`$) both actions are taken by the minority with the same frequency (e.g. there are, on average, as many buyers as sellers). For $`\alpha >\alpha _c`$, in each of the $`P`$ possible states, the minority takes one action more frequently than the other, i.e. the game’s outcome is asymmetric. An asymmetry in the game’s outcome is an opportunity that an agent could in principle exploit to gain. This is called an arbitrage in economics and it bears a particularly relevant meaning (see discussions in ). The asymmetry for $`\alpha >\alpha _c`$ naturally suggests an order parameter and is related to a “phase separation” in the population of agents: while for $`\alpha <\alpha _c`$ all agents use all of their strategies, for $`\alpha >\alpha _c`$ a finite fraction $`\varphi `$ of the agents ends up using only one strategy which, in the spin formalism, is the analog of spontaneous magnetization. The point $`\alpha _c`$ also marks the transition from persistence (for $`\alpha >\alpha _c`$) to anti-persistence ($`\alpha <\alpha _c`$) of the game’s time series.
Let us start from a sharp definition of the model: We use $`+`$ and $`-`$ to denote the two possible actions, so that a generic action is a sign. At each time $`t`$, the information available to each agent is the string $`\mu _t=(\chi _{t-1},\mathrm{},\chi _{t-M})`$ of the last $`M`$ actions taken by the minority. This, in our notation, is a string of $`M`$ minority signs $`\chi _{t-k}\in \{\pm 1\}`$. There are $`P=2^M`$ possible such strings, which we shall label by an index $`\mu =1,\mathrm{},P`$. The index $`\mu _t`$ corresponding to $`(\chi _{t-1},\mathrm{},\chi _{t-M})`$ shall be called the present history, for short. For each history $`\mu `$, a strategy $`a`$ specifies a fixed action $`a^\mu `$. Each agent $`i=1,\mathrm{},N`$ has $`S=2`$ strategies, denoted by $`a_{\pm ,i}`$, which are randomly drawn from the set of all $`2^P`$ possible strategies (the generalization to $`S>2`$ strategies will be discussed below). We define
$`\omega _i^\mu ={\displaystyle \frac{a_{+,i}^\mu +a_{-,i}^\mu }{2}},\xi _i^\mu ={\displaystyle \frac{a_{+,i}^\mu -a_{-,i}^\mu }{2}}`$
so that the strategies of agent $`i`$ can be written as $`a_{s_i,i}^\mu =\omega _i^\mu +s_i\xi _i^\mu `$ with $`s_i=\pm 1`$. If $`\omega _i^\mu \ne 0`$, then $`\xi _i^\mu =0`$ (and vice versa) and the player always takes the decision $`\omega _i^\mu `$ whenever the history is $`\mu `$. The current best strategy of agent $`i`$, which she shall adopt at time $`t`$, is that which has the highest cumulated payoff. Let us define $`\mathrm{\Delta }_{i,t}=U_{i,t}^{(+)}-U_{i,t}^{(-)}`$ as the difference between the cumulated payoffs $`U_{i,t}^{(\pm )}`$ of strategies $`+`$ and $`-`$ for agent $`i`$ at time $`t`$. Therefore her choice is given by
$$s_i=\text{sign}\mathrm{\Delta }_{i,t}$$
(1)
where ties ($`\mathrm{\Delta }_{i,t}=0`$) are broken by coin tossing. The difference in the population of agents choosing the $`+`$ and the $``$ sign, at time $`t`$, is then
$$A_t=\sum _{i=1}^{N}a_{s_i,i}^{\mu _t}=\mathrm{\Omega }^{\mu _t}+\sum _{i=1}^{N}\xi _i^{\mu _t}s_i$$
(2)
where $`\mathrm{\Omega }^\mu =\sum _i\omega _i^\mu `$. The sign chosen by the minority gives the minority sign at time $`t`$
$$\chi _t=-\text{sign}A_t$$
(3)
and this determines the new history $`\mu _{t+1}`$ which corresponds to the string $`(\chi _t,\mathrm{},\chi _{tM+1})`$. Finally, each agent $`i`$ rewards those of her strategies which have predicted the right sign ($`a_{s,i}^{\mu _t}=\chi _t`$) updating the cumulated payoffs $`U_{i,t+1}^{(\pm )}=U_{i,t}^{(\pm )}+a_{\pm ,i}^{\mu _t}\chi _t`$. This implies that the cumulated payoff difference $`\mathrm{\Delta }_{i,t}`$ is updated according to
$$\mathrm{\Delta }_{i,t+1}=\mathrm{\Delta }_{i,t}+2\chi _t\xi _i^{\mu _t}.$$
(4)
Eqs. (1)–(4) update the state $`\{\mu _t,\mathrm{\Delta }_{i,t}\}`$ of the system from $`t`$ to $`t+1`$. With an initial condition (e.g. $`\mu _0=1`$, $`\mathrm{\Delta }_{i,0}=0`$ for all $`i`$) the dynamics of the MG is completely specified. The “quenched” variables $`\{\mathrm{\Omega }^\mu ,\xi _i^\mu \}`$ play here the same role as disorder in statistical mechanics.
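These update rules are straightforward to simulate directly. The following is a minimal sketch of the dynamics of Eqs. (1)–(4); the system sizes, number of steps and random seed are illustrative choices, not values from the text:

```python
import numpy as np

# Minimal sketch of the MG dynamics of Eqs. (1)-(4).  N must be odd so
# that A_t never vanishes.
rng = np.random.default_rng(0)
N, M, T = 101, 5, 5000
P = 2 ** M
a = rng.choice([-1, 1], size=(N, 2, P))    # a[i, s, mu]; s=0 <-> s_i=-1, s=1 <-> s_i=+1
Delta = np.zeros(N)                        # payoff differences Delta_{i,t}
mu, A_hist = 0, []
for t in range(T):
    coin = rng.choice([-1, 1], size=N)
    s = np.where(Delta != 0, np.sign(Delta), coin)      # Eq. (1), ties by coin
    acts = a[np.arange(N), ((s + 1) // 2).astype(int), mu]
    A = acts.sum()                                      # Eq. (2)
    chi = -np.sign(A)                                   # Eq. (3), minority sign
    Delta += chi * (a[:, 1, mu] - a[:, 0, mu])          # Eq. (4): 2 chi xi_i^mu
    mu = (2 * mu + (1 if chi > 0 else 0)) % P           # shift the history string
    A_hist.append(A)
print(f"alpha = P/N = {P/N:.2f},  sigma^2/N = {np.var(A_hist)/N:.2f}")
```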
An important quantity in the MG is the variance $`\sigma ^2=\langle A^2\rangle `$ of the difference $`A`$ in the sizes of the two populations, where $`\langle \cdot \rangle `$ is a time average in the stationary state of the process specified by Eqs. (1)–(4). The number of winners, at each time step, is $`(N-|A|)/2`$, which is on average $`\simeq (N-\sigma )/2`$, so that smaller fluctuations $`\sigma ^2`$ correspond to larger global gain. A population of random agents would yield $`\sigma ^2=N`$. Numerical simulations (see Fig. 1) show that, for $`\alpha =P/N`$ large enough, agents with inductive reasoning manage to behave globally better (i.e. $`\sigma ^2<N`$) than random agents whereas $`\sigma ^2>N`$ for small $`\alpha `$ (see Fig. 1). However no singularity (and no order parameter) has yet been identified in order to locate a phase transition.
As shown in ref. , to a good approximation one can neglect the coupling of the dynamics of $`\mathrm{\Delta }_{i,t}`$ and $`\mu _t`$ and replace the dynamics of the latter by random sampling of the history space, i.e. $`\mathrm{Prob}(\mu _t=\mu )=1/P`$ for all $`\mu `$. This considerably simplifies our discussion since then
$$\sigma ^2\simeq \frac{1}{P}\sum _{\mu =1}^{P}\left(\mathrm{\Omega }^\mu \right)^2+2\sum _{i=1}^{N}h_i\langle s_i\rangle +\sum _{i,j=1}^{N}J_{i,j}\langle s_is_j\rangle ,$$
(5)
where $`\langle \cdot \rangle `$ stands for a time average and
$$h_i=\frac{1}{P}\sum _{\mu =1}^{P}\mathrm{\Omega }^\mu \xi _i^\mu ,J_{i,j}=\frac{1}{P}\sum _{\mu =1}^{P}\xi _i^\mu \xi _j^\mu .$$
(6)
The field $`h_i`$ measures the difference of correlation of the two strategies with $`\mathrm{\Omega }^\mu `$ whereas the coupling $`J_{i,j}`$ accounts for the interaction between agents as well as for agents’ self-interaction ($`J_{i,i}`$). The structure of the couplings (6) is reminiscent of neural network models where $`\xi _i^\mu `$ play the role of memory patterns. This similarity confirms the conclusion of refs. that the relevant parameter is the ratio $`\alpha =P/N`$ between the number of patterns and the number of spins.
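As a minimal sketch of this spin mapping, one can draw random strategies and evaluate the fields and couplings of Eq. (6) directly; the sizes below are illustrative, and the printed checks correspond to the estimates $`J_{i,i}\simeq 1/2`$ and $`\mathrm{Var}(h_i)\simeq 1/(4\alpha )`$ used further below:

```python
import numpy as np

# Sketch of the spin mapping: draw random strategies, build omega and xi,
# and evaluate h_i and J_ij of Eq. (6).
rng = np.random.default_rng(1)
N, P = 101, 32
a_plus  = rng.choice([-1, 1], size=(N, P))
a_minus = rng.choice([-1, 1], size=(N, P))
omega = (a_plus + a_minus) / 2
xi    = (a_plus - a_minus) / 2
Omega = omega.sum(axis=0)                 # Omega^mu = sum_i omega_i^mu

h = (xi * Omega).mean(axis=1)             # h_i  = (1/P) sum_mu Omega^mu xi_i^mu
J = xi @ xi.T / P                         # J_ij = (1/P) sum_mu xi_i^mu xi_j^mu
print(f"<J_ii> = {np.diag(J).mean():.2f}  (expected ~ 1/2)")
print(f"Var(h_i) = {h.var():.2f},  1/(4 alpha) = N/(4P) = {N/(4*P):.2f}")
```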
The key element which is at the origin of the behavior of the model is the fact that for each history $`\mu `$, there are agents which always take the same decision. This gives rise to the time independent contribution $`\mathrm{\Omega }^\mu `$ in $`A`$ which produces a bias in the value of $`\chi _t`$ whenever $`\mu _t=\mu `$. A measure of this bias is given by the parameter
$$\theta =\sqrt{\frac{1}{P}\sum _{\mu =1}^{P}\langle \chi |\mu \rangle ^2}$$
(7)
where $`\langle \chi |\mu \rangle `$ is the conditional average of $`\chi _t`$ given that $`\mu _t=\mu `$. Loosely speaking, $`\theta `$ measures the presence of information or arbitrages in the signal $`\chi _t`$. If $`\theta >0`$ an agent with strategies of “length” $`M=\mathrm{log}_2P`$, can detect and exploit this information if one of her strategies is more correlated with $`\langle \chi |\mu \rangle `$ than the other. More precisely, we observe that if $`v_i\equiv \langle \mathrm{\Delta }_{i,t+1}-\mathrm{\Delta }_{i,t}\rangle \ne 0`$ then $`\mathrm{\Delta }_{i,t}\simeq v_it`$ grows linearly with time, and the agent’s spin will always take the value $`s_i=\text{sign}v_i`$. We shall call this a frozen agent, since her spin variable is frozen. We find
$$v_i=\langle \chi _t\xi _i^{\mu _t}\rangle =\frac{1}{P}\sum _{\mu =1}^{P}\langle \chi |\mu \rangle \xi _i^\mu \simeq -h_i-\sum _{j=1}^{N}J_{i,j}\langle s_j\rangle $$
(8)
where the last equation relies on an expansion of $`\langle \chi |\mu \rangle `$ to linear order in $`A`$.
It is instructive to consider first the case where the other agents choose by coin tossing (i.e. $`\langle s_j\rangle =0`$ for $`j\ne i`$) so that $`v_i\simeq -h_i-J_{i,i}\langle s_i\rangle `$. If $`v_i\ne 0`$ then $`s_i=\text{sign}v_i=-\text{sign}(h_i+J_{i,i}s_i)`$. But this last equation has a solution only if $`|h_i|>J_{i,i}`$ whereas otherwise $`|\langle s_i\rangle |<1`$ and $`v_i=0`$. Note that $`J_{i,i}\simeq 1/2`$ and that $`h_i`$ can be approximated by a gaussian variable with zero average and variance $`(4\alpha )^{-1}`$. This means that $`|h_i|\ll J_{i,i}`$ for $`\alpha \gg 1`$, which implies that most agents have $`\langle s_i\rangle \simeq 0`$ in this limit and we can indeed neglect the agent–agent interaction. This allows us to compute the probability for an agent to be frozen
$$\varphi =P\{|h_i|>J_{i,i}\}\simeq e^{-\alpha /2},$$
(9)
for $`\alpha \gg 1`$. Numerical simulations show that $`\varphi \sim e^{-(0.37\pm 0.02)\alpha }`$ indeed decays exponentially. As $`\alpha \to \mathrm{\infty }`$, the random agents limit is attained because $`\langle s_i\rangle \to 0`$ for all $`i`$ and $`\langle s_is_j\rangle =\langle s_i\rangle \langle s_j\rangle `$ for $`i\ne j`$. By Eq. (5) we find $`\sigma ^2=\sum _\mu (\mathrm{\Omega }^\mu )^2/P+\sum _iJ_{i,i}\simeq N`$.
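This estimate is easy to check numerically: treating $`h_i`$ as a gaussian variable of variance $`1/(4\alpha )`$ and taking $`J_{i,i}=1/2`$, the frozen fraction follows from a single tail-probability evaluation (a sketch; the agreement with $`e^{-\alpha /2}`$ holds up to an algebraic prefactor in $`\alpha `$):

```python
import numpy as np
from scipy.stats import norm

# Check of Eq. (9): P(|h_i| > J_ii) with h_i gaussian of variance 1/(4 alpha)
# and J_ii = 1/2.
for alpha in (2.0, 4.0, 8.0):
    sigma_h = np.sqrt(1.0 / (4.0 * alpha))
    phi_frozen = 2.0 * norm.sf(0.5 / sigma_h)   # two-sided tail probability
    print(f"alpha = {alpha}:  phi = {phi_frozen:.2e},  "
          f"exp(-alpha/2) = {np.exp(-alpha / 2.0):.2e}")
```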
The same argument applies in general, with the difference that the “bare” field $`h_i`$ must be replaced by the “effective” field $`\stackrel{~}{h}_i=h_i+\sum _{j\ne i}J_{i,j}\langle s_j\rangle `$. In order for agent $`i`$ to get frozen, her effective field $`\stackrel{~}{h}_i`$ must overcome the self-interaction $`J_{i,i}`$, i.e. $`|\stackrel{~}{h}_i|>J_{i,i}\simeq 1/2`$. If this condition is met, $`s_i=-\text{sign}\stackrel{~}{h}_i`$. It can also be shown that a frozen agent will, on average, receive a larger payoff than an unfrozen agent. Loosely speaking, one can say that a frozen agent has a good and a bad strategy and the good one remains better than the bad one even when she actually uses it. On the contrary, unfrozen agents have two strategies each of which seems better than the other when it is not adopted. In this sense, symmetry breaking in $`\langle \chi |\mu \rangle `$ induces a sort of breakdown in the a priori equivalence of agents’ strategies.
A quantitative analysis of the fully interacting system shall be presented elsewhere. For the time being we shall discuss the behavior of the system on the basis of extensive numerical simulations. Fig. 1 reports the behavior of $`\theta `$, $`\varphi `$ and $`\sigma ^2`$ as functions of $`\alpha `$ for several values of $`P`$. As $`\alpha `$ decreases, i.e. as more and more agents join the game, the arbitrage opportunities, as measured by $`\theta `$, decrease. In loose words, agents’ exploitation of the signal $`\mathrm{\Omega }^\mu `$ weakens its strength by screening it with their adaptive behavior. If the number $`N`$ of agents is small compared to the signal “complexity” $`P=2^M`$, agents exploit only partially the signal $`\mathrm{\Omega }^\mu `$ whereas if $`N\gg P`$ then $`\mathrm{\Omega }^\mu `$ is completely screened by agents’ behavior and $`\theta =0`$. As Figure 1 shows, the parameter $`\theta `$ displays the characteristic behavior of an order parameter with a singularity at $`\alpha _c\simeq 0.34`$. Accordingly the fraction $`\varphi `$ of frozen agents also drops to zero as $`\alpha \to \alpha _c^+`$. The comparison between different system sizes in Fig. 1 strongly suggests that $`\varphi `$ drops discontinuously to zero at $`\alpha _c`$ (and it also gives the value of $`\alpha _c`$). The vanishing of $`\varphi `$ is clearly a consequence of the fact that $`\theta `$ also vanishes at $`\alpha _c`$. Indeed if $`\langle \chi |\mu \rangle =0`$ for all $`\mu `$, by Eq. (8), also $`v_i=0`$ for all $`i`$, so that $`\mathrm{\Delta }_{i,t}`$ remains bounded and $`|\langle s_i\rangle |<1`$.
The transition can also be understood in terms of the variables $`\mathrm{\Delta }_{i,t}`$ as an “unbinding” transition at $`\alpha _c`$: For $`\alpha <\alpha _c`$ a “bound state” exists with finite $`\mathrm{\Delta }_{i,t}`$, which corresponds to the fact that the equations $`v_i=0`$, $`i=1,\mathrm{},N`$ admit a solution with $`|\langle s_i\rangle |<1`$ for all $`i`$ (only $`P`$ of the equations $`v_i=0`$ are linearly independent). For $`\alpha >\alpha _c`$ this is no longer true and the population separates: a fraction $`\varphi `$ of the variables $`\mathrm{\Delta }_{i,t}`$ acquires a constant “velocity” $`v_i\ne 0`$ (with $`|\langle s_i\rangle |=1`$) whereas for the remaining agents $`v_i=0`$, $`\mathrm{\Delta }_{i,t}`$ remains bounded and $`|\langle s_i\rangle |<1`$.
It is suggestive to observe that $`v_i\propto -\frac{\partial \sigma ^2}{\partial \langle s_i\rangle }`$ so that the dynamics of the minority game is actually similar to a spin dynamics with hamiltonian $`\sigma ^2`$. Indeed either the spin is frozen in the direction which minimizes $`-s_iv_i(s_i)`$, or its average $`\langle s_i\rangle `$ is such that $`v_i=0`$. This then explains why cooperation occurs in the MG. A closer analysis, to be reported elsewhere, reveals that indeed the stationary state of the MG is described by the ground state properties of a Hamiltonian very similar to $`\sigma ^2`$. Finite size scaling suggests that $`\sigma ^2`$ has a minimum at $`\alpha _c`$ with a discontinuity in its derivative (see Fig. 1). These conclusions are indeed confirmed by exact results. It is worth stressing, however, that the qualitative aspects of the transition are already captured at the simple level of approximation of Eq. (8).
Let us go back to Fig. 1. Above $`\alpha _c`$ agents do not fully exploit the information $`\mathrm{\Omega }^\mu `$ and, as a result, $`\langle \chi |\mu \rangle \ne 0`$. Figure 2 shows that $`\chi _t`$ exhibits persistence in time, in the sense that when $`\mu _t=\mu _{t+\tau }`$ the minority signs $`\chi _t`$ and $`\chi _{t+\tau }`$ tend to be the same. This persistence disappears, $`\langle \chi _t\chi _{t+\tau }|\mu _t=\mu _{t+\tau }\rangle \to 0`$, as $`\alpha `$ decreases and it turns into anti-persistence for smaller $`\alpha `$. The oscillatory behavior in Fig. 2 has indeed period $`2P`$ which means that typically when the population comes back to the same history $`\mu `$ it tends to do the opposite of what it did the time before. Even if finite size effects do not allow a definite conclusion, it is quite likely that this change in time correlations also occurs at $`\alpha _c`$. Time correlations, even though of opposite nature, are present both above and below $`\alpha _c`$. These are like arbitrages in a market which could be exploited by agents. In this sense the market is efficient, i.e. arbitrage free, only for $`\alpha =\alpha _c`$.
The same qualitative behavior is expected when agents have $`S>2`$ strategies. Again for a given history $`\mu `$ it may happen that all $`S`$ of agent $`i`$’s strategies prescribe the same action: agent $`i`$ will take that action no matter which strategy she has chosen. As $`S`$ increases, this will occur for a smaller and smaller number of histories (more precisely with a probability $`2^{1-S}`$). This shall correspond to a weaker signal $`\mathrm{\Omega }^\mu `$ which is in complete agreement with the observation of shallower features for larger $`S`$. Note that for each agent it would be rewarding to increase the number of strategies because they would have more chances to outguess $`\chi _t`$. At the same time, if all agents increase $`S`$ the game becomes less rewarding for all of them, at least for $`\alpha >\alpha _c`$. This situation is typical of games, such as the tragedy of the commons, where many agents interact through a global resource.
The condition $`v_i=0`$ for the bound state in the symmetric phase involves $`P`$ equations with $`(S-1)N`$ variables. This suggests that in general the scaling parameter is $`\alpha =P/[(S-1)N]`$. The curves of $`\sigma ^2/N`$ as a function of $`\alpha =P/[(S-1)N]`$ collapse remarkably well onto one another for $`\alpha \le \alpha _c`$ (especially for $`S>2`$) but not for $`\alpha >\alpha _c`$ (e.g. in the large $`\alpha `$ behavior $`\varphi \sim e^{-C(S)\alpha }`$ we found $`C(2)\simeq 0.37`$, $`C(3)\simeq 1.50`$ and $`C(4)\simeq 2.90`$).
Our approach also implies that no coordination is possible if agents have $`S=2`$ opposite strategies ($`a_{+,i}^\mu =-a_{-,i}^\mu `$) because then $`\mathrm{\Omega }^\mu =0`$. Numerical simulations show that indeed $`\sigma ^2\ge N`$ for all $`\alpha >0`$ in this case.
The same qualitative behavior also occurs in a wide range of related models. First, total freezing occurs in majority models. Note indeed that changing the sign of Eq. (3) would also change the sign in Eq. (8). In particular the self-interaction $`J_{i,i}`$ changes sign so that it becomes favorable for each agent to stick to only one strategy anyway. The model is therefore trivial. More interesting models are obtained keeping the “frustration” effects of the MG but changing the definition of payoffs in Eq. (4). It can be shown that the phase transition and the large $`\alpha `$ behavior are quite robust features of minority games (see e.g. ).
In summary we find that a phase transition occurs in the minority game. The cooperative phase ($`\alpha >\alpha _c`$) is characterized by the presence of a fraction $`\varphi `$ of frozen agents (who use only one strategy), unexploited arbitrages ($`\langle \chi |\mu \rangle \ne 0`$) and persistence in the global signal $`\chi _t`$. In the symmetric phase ($`\alpha <\alpha _c`$) inductive dynamics is inefficient: agents adopt strategies when they are no longer good. There is no arbitrage (for strategies of length $`M`$) to exploit and the signal shows anti-persistence.
We acknowledge Y.-C. Zhang for enlightening discussions, useful suggestions and for introducing us to the Minority Game. This work was partially supported by Swiss National Science Foundation Grant Nr 20-46918.98.
|
no-problem/9904/nucl-th9904082.html
|
ar5iv
|
text
|
# Elliptic Flow Based on a Relativistic Hydrodynamic Model
## Abstract
Based on the (3+1)-dimensional hydrodynamic model, the space-time evolution of hot and dense nuclear matter produced in non-central relativistic heavy-ion collisions is discussed. The elliptic flow parameter $`v_2`$ is obtained by Fourier analysis of the azimuthal distribution of pions and protons which are emitted from the freeze-out hypersurface. As a function of rapidity, the pion and proton elliptic flow parameters both have a peak at midrapidity.
Department of Physics, Waseda University, Tokyo 169-8555, Japan
One of the main goals in relativistic heavy-ion physics is the creation of a quark-gluon plasma (QGP) and the determination of its equation of state (EoS) . It is therefore very important to study collective flow in non-central collisions, such as directed or elliptic flow . Recently experimental data concerning collective flow in semi-central collisions at SPS energies has been reported . This data should be analysed using various models. Some groups have used their microscopic transport models to analyse the collective flow obtained by the NA49 Collaboration . In this paper we investigate collective flow, especially elliptic flow, in terms of a relativistic hydrodynamic model.
In non-central collisions elliptic flow arises due to the fact that the spatial overlap region of two colliding nuclei in the transverse plane has an “almond shape”. That is, the hydrodynamical flow becomes larger along the short axis than along the long axis because the pressure gradient is larger in that direction. Therefore this spatial anisotropy causes the nuclear matter to also have momentum anisotropy. Consequently, the azimuthal distribution may carry information about the pressure of the nuclear matter produced in the early stage of the heavy-ion collisions .
The relativistic hydrodynamical equations for a perfect fluid represent energy-momentum conservation
$`\partial _\mu T^{\mu \nu }`$ $`=`$ $`0,`$ (1)
$`T^{\mu \nu }`$ $`=`$ $`(E+P)u^\mu u^\nu -Pg^{\mu \nu }`$ (2)
and baryon density conservation
$`\partial _\mu n_\mathrm{B}^\mu `$ $`=`$ $`0,`$ (3)
$`n_\mathrm{B}^\mu `$ $`=`$ $`n_\mathrm{B}u^\mu ,`$ (4)
where $`E`$, $`P`$, $`n_\mathrm{B}`$ and $`u^\mu `$ are, respectively, the energy density, pressure, baryon density and local four velocity. We numerically solve these equations without assuming cylindrical symmetry by specifying the model EoS and we obtain the space-time dependent thermodynamical variables and the four velocity.
We use the following models of the EoS with a phase transition. Hagedorn’s statistical bootstrap model with Hagedorn temperature $`T_\mathrm{H}=155`$ MeV is employed for the hadronic phase. We directly use the integral representation of the solution of the bootstrap equation instead of using the well-known hadronic mass spectrum, $`\mathrm{exp}(m/T_\mathrm{H})`$, which is the asymptotic solution of this equation. It is well known that this model has a limited temperature range, i.e., the energy density and pressure diverge at $`T_\mathrm{H}`$. This singularity, however, disappears when an excluded-volume approximation (with a bag constant $`B^{\frac{1}{4}}=230`$ MeV) is associated with the Hagedorn model. In the QGP phase, we use massless free u, d and s-quarks and the gluon gas model for simplicity. The two equations of state are matched by imposing Gibbs’ condition for phase equilibrium. Consequently we obtain a first order phase transition model which has a critical temperature $`T_\mathrm{C}=159`$ MeV and a mixed phase pressure of $`P_{\mathrm{mix}}=70.9`$ MeV/fm<sup>3</sup> at zero baryon density.
We mention our numerical algorithm for the relativistic hydrodynamic model. It is known that the Piecewise Parabolic Method (PPM) is a very robust scheme for the non-relativistic gas equation with a shock wave. We have extended the PPM scheme of Eulerian hydrodynamics to the relativistic hydrodynamical equation. Note that this is a higher order extension of the piecewise linear method .
Assuming non-central Pb+Pb collisions at SPS energy, we choose very simple formulas for the initial condition at the initial (or passage) time $`t_0=2r_0/(\gamma v)\simeq 1.4`$ fm ($`r_0`$, $`\gamma `$ and $`v`$ are, respectively, the nuclear radius, Lorentz factor and the velocity of a spectator in the center of mass system)
$`E(x,y,z)`$ $`=`$ $`E_1(z)\theta (\stackrel{~}{z}_0-z)\theta (z+\stackrel{~}{z}_0)\rho (r_\mathrm{p})\rho (r_\mathrm{t}),`$ (5)
$`n_\mathrm{B}(x,y,z)`$ $`=`$ $`n_{\mathrm{B1}}(z)\theta (\stackrel{~}{z}_0-z)\theta (z+\stackrel{~}{z}_0)\rho (r_\mathrm{p})\rho (r_\mathrm{t}),`$ (6)
$`v_\mathrm{z}(x,y,z)`$ $`=`$ $`v_0\mathrm{tanh}(z/z_0)`$ (7)
$`\times `$ $`\theta (\stackrel{~}{z}_0-z)\theta (z+\stackrel{~}{z}_0)\rho (r_\mathrm{p})\rho (r_\mathrm{t}),`$
where $`\theta (z)`$ is the step function, $`\rho (r)`$ is the Woods-Saxon parameterization in the transverse direction,
$`\rho (r)`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{exp}\left(\frac{r-r_0}{\delta _\mathrm{r}}\right)+1}},`$ (8)
$`E_1(z)`$ is Bjorken’s solution and the $`z`$ dependence of the baryon density $`n_{\mathrm{B1}}(z)`$ is taken from Ref.
$`E_1(z)`$ $`=`$ $`E_0\times \left({\displaystyle \frac{\sqrt{t_0^2-z^2}}{t_0}}\right)^{-\frac{4}{3}},`$ (9)
$`n_{\mathrm{B1}}(z)`$ $`=`$ $`\kappa \times 0.17{\displaystyle \frac{\sqrt{t_0^2-z^2}}{t_0}}.`$ (10)
See also Fig. 1. We have employed Bjorken’s longitudinal solution just as an initial condition. This is in contrast to Ref. , in which Bjorken’s boost-invariant solution was used as an assumption and the hydrodynamical equation was numerically solved only in the transverse plane.
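For concreteness, the following sketch sets up this initial energy-density profile of Eqs. (5)–(10) on a grid. The Pb radius $`r_0`$ and the identification $`\stackrel{~}{z}_0=t_0`$ are assumptions made here for illustration; they are not values fixed in the text:

```python
import numpy as np

# Initial energy-density profile of Eqs. (5)-(10) on a grid.
r0, delta_r, b = 7.1, 0.3, 7.0     # fm; r0 ~ 1.2 A^(1/3) for Pb (assumption)
t0, E0 = 1.4, 2500.0               # fm, MeV/fm^3

def rho(r):                        # Woods-Saxon profile, Eq. (8)
    return 1.0 / (np.exp((r - r0) / delta_r) + 1.0)

x, y, z = np.meshgrid(np.linspace(-10, 10, 81), np.linspace(-10, 10, 81),
                      np.linspace(-1.3, 1.3, 27), indexing="ij")
r_p = np.sqrt((x - b / 2) ** 2 + y ** 2)     # transverse distance, projectile
r_t = np.sqrt((x + b / 2) ** 2 + y ** 2)     # transverse distance, target
E1 = E0 * (np.sqrt(t0 ** 2 - z ** 2) / t0) ** (-4.0 / 3.0)       # Eq. (9)
E = E1 * rho(r_p) * rho(r_t) * (np.abs(z) < t0)  # theta cutoffs, z0 = t0 assumed
print(f"peak energy density on the grid: {E.max():.0f} MeV/fm^3")
```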
At relativistic energies the Lorentz-contracted spectators leave the interaction region after $`\sim 1`$ fm; we therefore assume the hydrodynamical description is valid only in the overlap region and neglect the interaction between the spectators and the fluid. Therefore we can say that our model gives a good description only in the vicinity of the midrapidity region and fails to reproduce directed flow at present. It may be possible to treat this problem if we use a hadronic cascade model for both spectators and particles emitted from the freeze-out hypersurface, together with the hydrodynamic model.
There are four initial (and adjustable) parameters in our hydrodynamic model: 1) the energy density at $`z=0`$, $`E_0=2500`$ MeV/fm<sup>3</sup>, 2) the factor in the baryon density distribution $`\kappa =2.5`$, 3) the initial longitudinal factor $`\epsilon =0.9`$ and 4) the “diffuseness parameter” $`\delta _\mathrm{r}=0.3`$ fm. In the present analysis we select these values ‘by hand’, i.e., we guess them.
These parameters, however, should be chosen so as to reproduce the experimental data for the (pseudo-)rapidity and the transverse momentum distribution. To make our analysis more quantitative, we need this experimental data. We would like the experimental group to analyze the centrality dependence of the hadron spectra, especially, the (pseudo-)rapidity distribution. For this reason we wish to emphasize that our numerical results presented below are only preliminary.
Figure 2 shows our numerical results for the temporal behavior of the pressure (left column) and the baryonic flow (right column) at $`z=0`$ in the non-central Pb+Pb collision with impact parameter $`b=7`$ fm at SPS energy. Initially almost all matter in this plane is in the QGP phase and there is no transverse flow anywhere by definition. At $`t=t_0+0.5`$ fm we see the shell structure corresponding to the mixed phase with the same pressure $`\simeq `$ 70 MeV/fm<sup>3</sup>, and the initial pressure gradient gives the baryons transverse flow. The QGP phase disappears at $`t=t_0+1.0`$ fm and after that the mixed phase occupies the central region. There is still no transverse flow near the origin due to the absence of a pressure gradient. At about $`t=t_0+5.0`$ fm all the nuclear matter initially in the QGP phase has gone through the phase transition and is in the hadronic phase. We can see from these figures that the shape of the nuclear matter is changing from almond (top figure on page 5) to round (bottom figure on page 6), and the elliptic flow reduces the initial geometric deformation.
The numerical results of the hydrodynamical simulation give us the momentum distribution through the Cooper-Frye formula with freeze-out temperature $`T_\mathrm{f}=140`$ MeV. The elliptic flow parameter $`v_2`$, as a function of rapidity $`y`$, is obtained from the momentum distribution
$`v_2(y)`$ $`=`$ $`\left\langle \left({\displaystyle \frac{p_x}{p_t}}\right)^2-\left({\displaystyle \frac{p_y}{p_t}}\right)^2\right\rangle `$ (11)
$`=`$ $`{\displaystyle \frac{\int _0^{2\pi }d\varphi \mathrm{cos}(2\varphi )\int _{p_{-}}^{p_+}p_tdp_tE\frac{d^3N}{dp^3}}{\int _0^{2\pi }d\varphi \int _{p_{-}}^{p_+}p_tdp_tE\frac{d^3N}{dp^3}}}.`$
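In practice Eq. (11) amounts to averaging $`\mathrm{cos}(2\varphi )`$ over the emitted particles in the chosen rapidity and transverse-momentum window. The following toy sketch illustrates the estimator on a sampled azimuthal distribution; the input distribution and the value of $`v_2`$ are assumptions for illustration only:

```python
import numpy as np

# Toy check of the estimator in Eq. (11): sample an azimuthal distribution
# dN/dphi ~ 1 + 2 v2 cos(2 phi) and recover v2 as <cos(2 phi)>.
rng = np.random.default_rng(2)
v2_true, n = 0.05, 200_000
phi = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
# accept-reject against the envelope 1 + 2 v2_true
keep = rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) \
       < 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)
phi = phi[keep][:n]
v2_measured = np.mean(np.cos(2.0 * phi))
print(f"v2 measured = {v2_measured:.4f}  vs  input = {v2_true}")
```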
Before calculating $`v_2`$ in non-central collisions with impact parameter $`b=7`$ fm, we checked the numerical error in our hydrodynamic model in central collisions. Since there is no special direction in the transverse plane for head-on collisions, ideally the elliptic flow vanishes in the infinite particle limit. Performing the numerical simulation with $`b=0`$ fm, we obtain a value of $`v_2`$ of less than 10<sup>-1</sup> percent; we can therefore safely neglect the numerical error. Note that the numerical error in the energy and baryon density conservation of the fluid is less than one percent in our analysis.
Figure 3 shows our results for the rapidity dependence of elliptic flow for pions in different transverse momentum regions. These results show that elliptic flow rises with transverse momentum $`p_t`$ and has a peak at midrapidity.
This seems to be in contrast with the experimental data obtained by the NA49 Collaboration . Their data appears to be slightly peaked at medium-high rapidity.
Our results for $`v_2`$ for protons are shown in Fig. 4. We see the same behavior as for the pion case. We obtain a larger $`v_2`$ for protons than for pions because we are integrating over a larger transverse momentum region. Since the initial parameters in our hydrodynamic model have been chosen by hand, we would ask readers not to take these results quantitatively.
In summary, we reported our preliminary analysis of elliptic flow in non-central heavy-ion collisions using the hydrodynamic model. We numerically simulated the hydrodynamic model without assuming cylindrical symmetry or Bjorken’s boost-invariant solution, using the extended version of the Piecewise Parabolic Method which is known as a robust scheme for the non-relativistic gas equation with a shock wave. We presented the temporal behavior of high temperature and high density nuclear matter produced in Pb+Pb collisions with $`b=7`$ fm at SPS energy. Our preliminary results showed that the elliptic flow parameter $`v_2`$ has a peak at midrapidity for both pions and protons and increases with transverse momentum. Since there are some ambiguities in the initial parameters of our hydrodynamical model, we should fix these parameters using experimental data for the rapidity distribution in non-central collisions. If we regard the hydrodynamical model as a predictive one, we can choose initial parameters using results from a parton cascade model, such as VNI . The study of these issues is left for future work.
The author is much indebted to Prof. I. Ohba, Prof. H. Nakazato, Dr. Y. Yamanaka and Prof. S. Muroya for their helpful comments, and to Dr. H. Nakamura, Dr. C. Nonaka and Dr. S. Nishimura for many interesting discussions. The numerical calculations were performed on workstations of the Waseda Univ. high-energy physics group.
|
no-problem/9904/cond-mat9904076.html
|
ar5iv
|
text
|
# Tunnelling Spectroscopy of Localized States near the Quantum Hall Edge
## Acknowledgements
The authors would like to thank J. Fröhlich for stimulating discussions. V.C. is grateful to B. I. Halperin for important remarks and for hospitality at the Lyman Lab. We would like to thank L. Levitov and A. Chang for valuable comments and information about their work.
|
no-problem/9904/astro-ph9904394.html
|
ar5iv
|
text
|
# Intermittency as a possible underlying mechanism for solar and stellar variability
## 1 Introduction
It is now well established that middle-aged solar-type stars show variability on a wide range of time scales, including the intermediate time scales of $`10^0`$–$`10^4`$ years (Weiss 1990). The evidence for the latter comes from a variety of sources, including observational, historical and proxy records. Many solar-type stars seem to show cyclic types of behaviour in their mean magnetic fields (e.g. Weiss 1994, Wilson 1994), which in the case of the Sun have a period of nearly 22 years. Furthermore, the studies of the historical records of the annual mean sunspot data since 1607 AD show the occurrence of epochs of suppressed sunspot activity, such as the Maunder minimum (Eddy 1976, Foukal 1990, Wilson 1994, Ribes & Nesme-Ribes 1993, Hoyt & Schatten 1996). Further research, employing $`{}_{}{}^{14}C`$ (Eddy 1980, Stuiver & Quay 1980, Stuiver & Braziunas 1988, 1989) and $`{}_{}{}^{10}Be`$ (Beer et al. 1990, 1994a,b, Weiss & Tobias 1997) as proxy indicators, has provided strong evidence that the occurrence of such epochs of reduced activity (referred to as grand minima) has persisted in the past with similar time scales, albeit irregularly.
These latter, seemingly irregular, variations are important for two reasons. Firstly, the absence of naturally occurring mechanisms in solar and stellar settings, with appropriate time scales (Gough, 1990), makes the explanation of such variations theoretically challenging. Secondly, the time scales of such variations makes them of potential significance in understanding the climatic variability on similar time scales (e.g. Friis-Christensen & Lassen 1991, Beer at al. 1994b, Lean 1994, Stuiver, Grootes & Braziunas 1995, O’Brien et al. 1995, Baliunas & Soon 1995, Butler & Johnston 1996, White et al. 1997). In view of this, a great deal of effort has gone into trying to understand the mechanism(s) underlying such variations by employing a variety of approaches.
Our aim here is to give a brief account of some recent results that may throw some new light on our understanding of such variations.
## 2 Theoretical frameworks
Theoretically there are essentially two frameworks within which such variabilities could be studied: stochastic and deterministic.
Here we mainly concentrate on the deterministic approach and recall that given the usual length and nature of the solar and stellar observational data, it is in practice difficult to distinguish between these two frameworks (Weiss 1990). Nevertheless, even if the stochastic features play a significant role in producing such variations, the deterministic components will still be present and are likely to play an important role.
The original attempts at understanding such variabilities were made within the linear theoretical framework. An important example is that of linear mean-field dynamo models (Krause & Rädler 1980) which succeeded in reproducing the nearly 22 year cyclic behaviour. Unfortunately such linear models cannot easily and naturally<sup>1</sup><sup>1</sup>1It is worth bearing in mind that one can always produce complicated looking behaviour within the linear framework, by combining many simpler behaviours. The crucial point is that in this case complexity in behaviour requires a complicated underlying mechanism. Furthermore, there are qualitative differences, in terms of spectra and other dynamical indicators, between complicated dynamical behaviours produced by linearly complex and nonlinearly chaotic systems. account for the complicated, irregular looking solar and stellar variability.
The developments in nonlinear dynamical systems theory, over the last few decades, have provided an alternative framework for understanding such variability. Within this nonlinear deterministic framework, irregularities of the grand minima type are probably best understood in terms of various types of dynamical intermittency, characterised by different statistics over different intervals of time. The idea that some type of dynamical intermittency may be responsible for understanding the Maunder minima type variability in the sunspot record goes back at least to the late 1970’s (e.g. Tavakol 1978, Ruzmaikin 1981, Zeldovich et al. 1983, Weiss et al. 1984, Spiegel 1985, Feudel et al. 1994). We shall refer to the assumption that grand minima type variability in solar-type stars can be understood in terms of some type of dynamical intermittency as the intermittency hypothesis.
To test this hypothesis one can proceed by adopting either a quantitative or a qualitative approach.
### 2.1 Quantitative approach
Given the complexity of the underlying equations, the most direct approach to the study of dynamo equations is numerical. Ideally one would like to start with the full 3–D dynamo models with the least number of simplifying assumptions and approximations. There has been a great deal of effort in this direction over the last two decades (e.g. Gilman 1983, Nordlund et al. 1992, Brandenburg et al. 1996, Tobias 1998). The difficulty of dealing with small scale turbulence has meant that a detailed fully self-consistent model is beyond the range of the computational resources currently available, although important attempts have been made to understand turbulent dynamos in stars (e.g. Cattaneo, Hughes & Weiss 1991, Nordlund et al. 1992, Moss et al. 1995, Brandenburg et al. 1996, Cattaneo & Hughes 1996) and accretion discs (e.g. Brandenburg et al. 1995, Hawley et al. 1996). Such studies have had to be restricted to the geometry of a Cartesian box, which in essence makes them local dynamos, whereas magnetic fields in astrophysical objects are observed to exhibit large scale structure, related to the shape of the object, and thus can only be captured fully by global dynamo models (Tobias 1998). Furthermore, despite great advancements in numerical capabilities, these models still involve approximations and parametrisations and are extremely expensive numerically, especially if the aim is to make a comprehensive search for possible ranges of dynamical modes of behaviours as a function of control parameters<sup>2</sup><sup>2</sup>2Which at times would require extremely long runs to transcend transients.
An alternative approach, which is much cheaper numerically, has been to employ mean-field dynamo models. Despite their idealised nature, these models reproduce some features of more complicated models and allow us to analyse certain global properties of magnetic fields in the Sun. For example, the dependence of various outcomes of these models (such as parity, time dependence, cycle period, etc.) on global properties, including boundary conditions, have been shown to be remarkably similar to those produced by full three-dimensional simulations of turbulent models (Brandenburg 1999a,b). This gives some motivation for using these models for our studies below.
A number of attempts have recently been made to numerically study such models, or their truncations, to see whether they are capable of producing the grand minima type behaviours. There are a number of problems with these attempts. Firstly, the developments in dynamical systems theory over the last two decades have uncovered a number of theoretical mechanisms for intermittency, each with their dynamical and statistical signatures. Secondly, the simplifications and approximations involved in these models make it difficult to decide whether a particular type of behaviour obtained in a specific model is in fact generic. And finally, the characterisation of such numerically obtained behaviours as “intermittent” is often phenomenological and based on simple observations of the resulting time series (e.g. Zeldovich et al. 1983, Jones et al. 1985, Schmalz & Stix 1991, Feudel et al. 1993, Covas et al. 1997a,b,c, Tworkowski et al. 1998, and references therein), rather than a concrete dynamical understanding coupled with measurements of the predicted dynamical signatures and scalings. There are, however, examples where the presence of various forms of intermittency has been established concretely in such dynamo models, by using various signatures and scalings (Brooke 1997, Covas & Tavakol 1997, Covas et al. 1997c, Brooke et al. 1998, Covas & Tavakol 1998, Covas et al. 1999b).
### 2.2 Qualitative approach
Given the inevitable approximations and simplifications involved in dynamo modelling (especially given the turbulent nature of the regimes underlying such dynamo behaviours and hence the parametrisations necessary for their modelling in practice), a great deal of effort has recently gone into the development of approaches that are in some sense generic. The main idea is to start with various qualitative features that are thought to be commonly present in such settings and then to study the generic dynamical consequences of such assumptions.
Such attempts essentially fall into the following categories. Firstly, there are the low dimensional ODE models that are obtained using the Normal Form approach (Spiegel 1994, Tobias et al. 1995, Knobloch et al. 1996). These models are robust and have been successful in accounting for certain aspects of the dynamos, such as several types of amplitude modulation of the magnetic field energy, with potential relevance for solar variability of the Maunder minima type.
The other approach is to single out the main generic ingredients of such models and to study their dynamical consequences. For axisymmetric dynamo models, these ingredients consist of the presence of invariant subspaces, non-normal parameters and the non-skew property. The dynamics underlying such systems has recently been studied in (Covas et al., 1997c, 1999b; Ashwin et al. 1999). This has led to a number of novel phenomena, including a new type of intermittency, referred to as in–out intermittency, which we shall briefly discuss in section 4.
## 3 Models
The standard mean-field dynamo equation is given by
$$\frac{\partial 𝐁}{\partial t}=\nabla \times \left(𝐮\times 𝐁+\alpha 𝐁-\eta _t\nabla \times 𝐁\right),$$
(1)
where $`𝐁`$ and $`𝐮`$ are the mean magnetic field and mean velocity respectively and the turbulent magnetic diffusivity $`\eta _t`$ and the coefficient $`\alpha `$ arise from the correlation of small scale turbulent velocities and magnetic fields (Krause & Rädler, 1980). In axisymmetric geometry, eq. (1) is solved by splitting the magnetic field into meridional and azimuthal components, $`𝐁=𝐁_𝐩+𝐁_\varphi `$, and expressing these components in terms of scalar field functions $`𝐁_𝐩=\nabla \times A\widehat{\varphi }`$, $`𝐁_\varphi =B\widehat{\varphi }`$.
In the following we shall also employ a family of truncations of the one-dimensional version of equation (1), along with a time dependent form of $`\alpha `$, obtained by using a spectral expansion of the form:
$`{\displaystyle \frac{dA_n}{dt}}`$ $`=`$ $`-n^2A_n+{\displaystyle \frac{D}{2}}(B_{n-1}+B_{n+1})+{\displaystyle \sum _{m=1}^{N}}{\displaystyle \sum _{l=1}^{N}}\mathcal{F}(n,m,l)B_mC_l,`$
$`{\displaystyle \frac{dB_n}{dt}}`$ $`=`$ $`-n^2B_n+{\displaystyle \sum _{m=1}^{N}}𝒢(n,m)A_m,`$ (2)
$`{\displaystyle \frac{dC_n}{dt}}`$ $`=`$ $`-\nu n^2C_n-{\displaystyle \sum _{m=1}^{N}}{\displaystyle \sum _{l=1}^{N}}\mathcal{H}(n,m,l)A_mB_l.`$
where $`A_n`$, $`B_n`$ and $`C_n`$ are derived from the spectral expansion of the magnetic field $`𝐁`$ and of $`\alpha `$ respectively, $`\mathcal{F}`$, $`\mathcal{H}`$ and $`𝒢`$ are coefficients expressible in terms of $`m,n`$ and $`l`$, $`N`$ is the truncation order, $`D`$ is the dynamo number and $`\nu `$ is the Prandtl number (see Covas et al. 1997a,b,c for details).
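To make the structure of this truncated system concrete, the following minimal sketch integrates a low-order version of Eqs. (2). The truncation order, dynamo number, Prandtl number, and — most importantly — the coupling coefficients are illustrative assumptions here ($`\mathcal{F}`$, $`\mathcal{H}`$ and $`𝒢`$ are simply set to unity); the actual coefficients of Covas et al. (1997a,b,c) must be substituted for any quantitative use.

```python
# Minimal sketch of integrating a truncation of Eqs. (2).
# ASSUMPTIONS: F = H = G = 1, and the N, D, nu values below are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

N, D, nu = 4, -50.0, 0.5     # truncation order, dynamo number, Prandtl number

def rhs(t, y):
    A, B, C = y[:N], y[N:2*N], y[2*N:]
    n2 = np.arange(1, N + 1) ** 2
    Bpad = np.concatenate(([0.0], B, [0.0]))       # B_0 = B_{N+1} = 0
    dA = -n2 * A + 0.5 * D * (Bpad[:-2] + Bpad[2:]) + B.sum() * C.sum()
    dB = -n2 * B + A.sum()
    dC = -nu * n2 * C - A.sum() * B.sum()
    return np.concatenate([dA, dB, dC])

rng = np.random.default_rng(0)
sol = solve_ivp(rhs, (0.0, 20.0), 1e-3 * rng.standard_normal(3 * N),
                rtol=1e-8, atol=1e-10)
energy = (sol.y[:2 * N] ** 2).sum(axis=0)          # magnetic-energy proxy
print(f"final energy: {energy[-1]:.3e}")
```

A run of this kind, scanned over $`D`$, is the cheapest way to map out the ranges of dynamical behaviour referred to above.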
## 4 Different forms of intermittency in ODE and PDE dynamo models
Recent detailed studies of axisymmetric mean field dynamo models have produced concrete evidence for the presence of various forms of dynamical intermittency in such models. We shall give a brief overview of these results in this section.
### 4.1 Crisis (or attractor merging) intermittency
A particular form of this type of intermittency, discovered by Grebogi, Ott & Yorke (Grebogi et al. 1982, 1987), is the so called “attractor merging crisis”, where as a system parameter is varied, two or more chaotic attractors merge to form a single attractor. There is both experimental and numerical evidence for this type of intermittency (see for example Ott (1993) and references therein). We have found concrete evidence for the presence of such a behaviour in a 6-dimensional truncation of a mean-field dynamo model of the type (2) (Covas & Tavakol 1997) and, more recently, in a PDE model of type (1) (see Covas & Tavakol (1999) for details). Fig. 1 shows an example of the latter which clearly demonstrates the merging of two attractors, with different time averages for energy and parity. For a concrete characterisation and scaling, see Covas & Tavakol (1999).
### 4.2 Type I-Intermittency
This form of intermittency, first discovered by Pomeau and Manneville in the early 1980’s (Pomeau & Manneville 1980), has been extensively studied analytically, numerically and experimentally (see Bussac & Meunier 1982, Richter et al. 1994 and references therein). It is identified by long almost regular phases interspersed by (usually) shorter chaotic bursts. In particular, this type of intermittency has been found in a 12–D truncated dynamo model of type (2) (Covas et al. 1997c), and more recently in a PDE dynamo model of type (1) (Covas & Tavakol 1999). Fig. 2 gives an example of such time series, where the irregular interruptions of the laminar phases by chaotic bursts can easily be seen. For a concrete characterisation, including the scaling for the average length of laminar phases see Covas & Tavakol (1999).
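A sketch of the phenomenological first step in such a characterisation — extracting the lengths of the laminar phases from a time series, whose mean length should then obey the Type-I scaling — is given below. The toy signal and the threshold are assumptions for illustration; in practice one would use the magnetic energy series from the dynamo model itself.

```python
# Sketch: measure laminar-phase lengths in a time series.
# ASSUMPTION: the signal and threshold below are toy placeholders.
import numpy as np

def laminar_lengths(x, threshold):
    """Lengths (in samples) of runs where |x| stays below threshold."""
    quiet = np.abs(x) < threshold
    edges = np.flatnonzero(np.diff(quiet.astype(int)))   # state changes
    bounds = np.concatenate(([0], edges + 1, [len(x)]))
    return np.array([b - a for a, b in zip(bounds[:-1], bounds[1:])
                     if quiet[a]])

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000).cumsum() * 1e-3          # toy signal
x -= x.mean()
lengths = laminar_lengths(x, threshold=0.5 * x.std())
print(f"mean laminar length: {lengths.mean():.1f} samples (max {lengths.max()})")
```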
### 4.3 On-Off and In-Out Intermittency
An important feature of systems with symmetry (as in the case of solar and stellar dynamos) is the presence of invariant submanifolds. It may happen that attractors in such invariant submanifolds become unstable in transverse directions. When this happens, one possible outcome is that trajectories can come arbitrarily close to this submanifold but also have intermittent large deviations from it. This form of intermittency is referred to as on-off intermittency (Platt et al. 1993a,b). Examples of this type of intermittency have been found in dynamo models, both phenomenologically (Schmitt et al., 1996) and concretely in truncated dynamo models of the type (2) (Covas et al. 1997c).
A generalisation of on-off intermittency, the in-out intermittency, discovered recently (Ashwin et al. 1999) is expected to be generic for axisymmetric dynamo settings. The crucial distinguishing feature of this type of intermittency is that, as opposed to on-off intermittency, there can be different invariant sets associated with the transverse attraction and repulsion to the invariant submanifold, which are not necessarily chaotic. This gives rise to identifiable signatures and scalings (Ashwin et al. 1999).
Concrete evidence for the occurrence of this type of intermittency has been found recently in both PDE and truncated dynamo models of the types (1) and (2) respectively (see Covas et al. (1999a,b) for details).
## 5 Intermittency hypothesis: theory and observation
In the previous section, we have summarised concrete evidence for the presence of four different types of dynamical intermittency in both truncated and PDE mean-field dynamo models. From a theoretical point of view, the intermittency hypothesis may therefore be said to have been established, at least within this family of mean-field models. What remains to be seen is whether these types of intermittency persist in more realistic models. An encouraging development in this connection is the discovery of a type of intermittency which is expected to occur generically in axisymmetric dynamo settings, independently of the details of specific models. Despite these developments, testing the intermittency hypothesis poses a number of difficulties in practice:
1. Observationally, any precise dynamical characterisation of solar and stellar variability is constrained by the length and the quality of the available observational data. This is particularly true of the intermediate (and of course longer) time scale variations. Such a characterisation is further constrained by the fact that some of the indicators of such mechanisms, such as scalings, require very long and high quality data.
2. Theoretically, there is now a large number of such mechanisms, some of which share similar signatures and scalings, which could potentially complicate the process of differentiation between the different mechanisms.
3. An important feature of real dynamo settings is the inevitable presence of noise. This calls for a theoretical and numerical study of effects of noise on the dynamics, on the one hand (e.g. Meinel & Brandenburg 1990, Moss et al. 1992, Ossendrijver & Hoyng 1996, Ossendrijver, Hoyng & Schmitt 1996) and on the signatures and scalings of various mechanisms of intermittency on the other.
These issues raise a number of interesting questions. Is, for example, the intermittency hypothesis operationally decidable at present? Will it be operationally decidable in the foreseeable future?
In this connection it is worth bearing in mind that some types of intermittency do possess signatures that are rather easily identifiable. Nevertheless, we believe the answer to these difficult questions can only be realistically contemplated once a more clear picture has emerged of all the possible types of intermittency that can occur in more realistic solar-type dynamo models (and ultimately real dynamos) and once their precise signatures and scalings, in presence of noise, have been identified.
###### Acknowledgements.
We would like to thank the organisers of this meeting for their kind hospitality and for bringing about the opportunity for many fruitful exchanges. We would also like to thank Peter Ashwin, Axel Brandenburg, John Brooke, David Moss, Ilkka Tuominen and Andrew Tworkowski for the work we have done together and Edgar Knobloch, Steve Tobias, Alastair Rucklidge, Michael Proctor and Nigel Weiss for many stimulating discussions. EC is supported by grant BD/5708/95 – PRAXIS XXI, JNICT. EC thanks the Astronomy Unit at QMW for support to attend the conference. RT benefited from PPARC UK Grant No. L39094. This research also benefited from the EC Human Capital and Mobility (Networks) grant “Late type stars: activity, magnetism, turbulence” No. ERBCHRXCT940483.
# Optimized local modes for lattice dynamical applications
## I Introduction
Although the translational symmetry of a crystalline solid imposes a delocalized basis of Hamiltonian eigenstates (Bloch’s functions), it is sometimes advantageous to consider a transformation to a new set of basis functions with a local character. Beyond the mathematical equivalence (both sets span the same space of states), a local viewpoint is better suited for the analysis of concepts such as bonding which are eminently local in character. Recent work on electronic Wannier functions has shown the usefulness of a local representation in the chemical characterization of a given band subspace, in the analysis of bonding topology in a disordered system, and in more formal developments.
The lattice dynamical problem is formally very similar to the electronic one: a set of Bloch eigenstates (normal modes) represents the collective vibrations of the atoms in the crystal. A basis change to a set of local displacement patterns (lattice Wannier functions or local modes) can in principle be achieved. So far, the main application for these local modes has been in the field of structural phase transitions. Typically, the behavior of a given dispersion branch or set of branches determines the essential instabilities of the system, and the associated degrees of freedom enter into the construction of an effective Hamiltonian which reproduces the relevant physics. Through the use of a localized basis set, the number of coupling terms in the effective Hamiltonian can be relatively small, easing the statistical mechanical treatment and the interpretation of the results. In particular, the anharmonic terms in the effective Hamiltonian can be kept local (on-site), in contrast with what happens in a reciprocal-space description.
This local mode approach has been used extensively in the past to gain an understanding of the behavior of complex systems, but until recently the local variables were treated as dummy degrees of freedom in a semi-empirical model, with their interactions fitted to reproduce the observed phenomena. In the last few years, a new approach, in which the effective Hamiltonian is parametrized on the basis of first principles calculations, has had great success in studies of the phase transition sequences in perovskite oxides. Central to the parametrization process is the explicit construction of lattice Wannier functions, and two schemes have been proposed to carry it out. Zhong, Vanderbilt, and Rabe (ZVR) used the structure of the zone-center soft mode in perovskite BaTiO<sub>3</sub> to construct symmetry-adapted highly localized local modes. Subsequently, Rabe and Waghmare (RW) generalized this approach to reproduce the normal modes at several (typically, high symmetry) points of the Brillouin Zone.
While both the ZVR and RW approaches have been broadly successful in the specific problems for which they were conceived, in this paper we will argue that they are not completely satisfying in some respects. We will present a new procedure to generate lattice Wannier functions, an approach which makes use of the available symmetry information, produces local modes with a high degree of localization, enables a systematic improvement of their quality, and is straightforward to implement.
## II Method
We are interested in describing a relevant subspace $``$ of the full $`3Np`$-dimensional configuration space of a crystal with $`p`$ atoms per unit cell. Typically, we can choose $``$ as a complete band of dispersion branches (complete in the sense that it is invariant under the action of the space group of the crystal). Associated with a branch $`j`$ is a set of normal modes ($`3Np`$-dimensional vectors) $`\{u_j^𝐤\}`$ which are eigenvectors of the Fourier transform of the force-constant matrix. (The displacement in the $`\alpha `$ cartesian direction of the atom $`\kappa `$ in cell $`𝐥`$ is given explicitly by $`u_j^𝐤(𝐥,\kappa ,\alpha )`$.) The normal modes transform according to irreducible representations of the little groups $`G^𝐤`$. These representations, considered over the whole BZ, determine the band symmetry. The relevant subspace $``$ is spanned by all the $`\{u_j^𝐤\}`$ in the band, but it is clear that any transformation
$$\stackrel{~}{u}_j^𝐤=\sum _{i=1}^{n}M_{ji}^𝐤u_i^𝐤$$
(1)
will lead to a new basis of extended states which we will call Bloch modes. Here $`n`$ is the band dimension, the number of dispersion branches in the band.
Having thus specified the relevant subspace by means of Fourier space variables $`\{u_j^𝐤\}`$, the problem we tackle is the construction of a new basis $`\{w_j^𝐧\}`$ which is local, as opposed to extended, in character. Mathematically, the $`𝐤`$ label should be exchanged by a local label $`𝐧`$ associated to the different unit cells in the crystal. Translational symmetry takes the form:
$$w_j^𝐧(𝐥,\kappa ,\alpha )=w_j^{𝐧+𝐭}(𝐥+𝐭,\kappa ,\alpha ),$$
(2)
which is trivially satisfied by the standard Wannier function form :
$$w_j^𝐧=\frac{1}{\mathrm{\Omega }}_{BZ}\mathrm{exp}(i\mathrm{𝐤𝐧})\stackrel{~}{u}_j^𝐤𝑑𝐤$$
(3)
in which $`\mathrm{\Omega }`$ is the volume of the BZ. A high degree of localization means that the displacement $`w_j^𝐧(𝐥,\kappa ,\alpha )`$ should be very small or zero when $`𝐥`$ is a few lattice constants away from $`𝐧`$. The arbitrariness implicit in their definition (Eq. 1) means that the Wannier functions are non-unique, and a relatively large latitude then exists to tune their properties. In particular, the degree of localization has traditionally been the focus of great interest, and recently, Marzari and Vanderbilt have succeeded in optimizing the matrices appearing in Eq. 1 to construct very localized electronic Wannier functions starting from the Bloch states. A restriction to unitary matrices resulted in an orthonormal basis of Wannier functions, and the optimization process led to symmetric-looking functions, even though no symmetry conditions were explicitly imposed. In principle, such an approach should work for the vibrational problem, too. However, we prefer to take an alternate route which takes advantage of the knowledge of the band symmetry.
### A Symmetry requirements
As studied extensively in the literature, one should supplement the translational constraints of Eq. 2 with another set of conditions which represent the transformational properties of the $`w_j^𝐧`$ under the effect of the point symmetry of the crystal. These are most easily discussed by introducing a symmetry-based definition of the center of a mode. Consider a Wyckoff set with representative site $`𝐫`$ and the set $`\widehat{G}_𝐫`$ of operations in $`G`$ that leave $`𝐫`$ invariant. Given an irreducible representation $`\tau `$ of $`\widehat{G}_𝐫`$ with dimension $`d_\tau `$, any $`d_\tau `$ displacement patterns transforming with $`\tau `$ under the action of $`\widehat{G}_𝐫`$ are said to be centered in $`𝐫`$. It is then notationally more convenient to use a double index to label these patterns: $`w_{𝐫,s}`$ where $`s`$ ranges from 1 to $`d_\tau `$. The action of the elements of the space group $`G`$ on this set generates images at the rest of the positions in the Wyckoff set, i.e., $`d_rd_\tau `$ patterns $`w_{𝐫_i,s}^𝐧`$ per cell, where $`i`$ ranges from 1 to $`d_r`$ (the multiplicity of the Wyckoff set). This set of lattice functions is represented by the pair $`(𝐫,\tau )`$ and defines a representation of $`G`$ which is called a band representation.
A necessary condition for the description of a relevant band subspace by means of these symmetry-adapted local modes is the equivalence of the band symmetry of $``$ and the band representation $`(𝐫,\tau )`$ (in particular this implies $`n=d_rd_\tau `$). More details about the choice of the correct $`(𝐫,\tau )`$ for a given $``$ are presented in the Appendix, where we also discuss the transformation properties of the corresponding $`\{\stackrel{~}{u}_j^𝐤\}`$. Incidentally, since Eq. 3 establishes a correspondence between lattice Wannier functions and Bloch modes, in what follows the latter can also be labeled by the site and representation indexes: $`\stackrel{~}{u}_{𝐫,s}^𝐤`$.
### B Practical criterion for localization
A straightforward scheme to obtain lattice Wannier functions can be based on a direct use of Eq. 3, performing the BZ sum by means of any of the standard “special k-points” methods. The quality of the subspace description can thus be systematically improved by simply using denser k-point sets. This approach can incorporate information about the normal modes throughout the whole Brillouin zone, as opposed to at just one point (as in the ZVR method), or at a very special set of high-symmetry k-points (as in the RW scheme).
As stated in the Introduction, it is highly desirable that the local mode basis functions be as localized as possible, in order to permit the consideration of only a few coupling terms in the effective Hamiltonian. From the point of view of real applications, a basis of Wannier functions which are not localized is not efficient, even if it spans $``$ perfectly. The form of Eq. 3 suggests a very simple heuristic criterion to achieve a high degree of localization for the lattice Wannier functions: choose the $`M^𝐤`$ matrices in such a way that the $`\stackrel{~}{u}^𝐤`$ Bloch vectors at different $`𝐤`$ add their contributions coherently at the center of the Wannier function. Interference effects can then be counted on to automatically dampen the amplitude of the displacements at sites away from the center.
Both the symmetry requirements and the localization condition can be formulated in the following way. Assume a $`(𝐫,\tau )`$ pair has been determined on the basis of band symmetry, and that we focus on the construction of local modes at cell $`𝐧`$=$`\mathrm{𝟎}`$. Consider a set of $`d_\tau `$ ($`3Np`$-dimensional) orthonormal vectors $`\{x_{𝐫,s}\}`$ which are centered in $`𝐫`$, transform with irrep $`\tau `$ and involve atoms in an orbit as close to $`𝐫`$ as possible. The localization criterion is implemented by requiring that $`\stackrel{~}{u}_{𝐫,s}^𝐤`$ be orthogonal to $`x_{𝐫,t}`$ if $`s\ne t`$, and “parallel” (meaning that their scalar product is positive) if $`s=t`$. It can be seen that this condition fixes the form of the $`M^𝐤`$ matrices, up to overall normalization factors. In general, the $`M^𝐤`$ will not be unitary, with the result that two lattice Wannier functions $`w_{𝐫,s}^𝐧`$ and $`w_{𝐫^{},s^{}}^𝐧^{}`$ at different cells will not be orthogonal if the pairs $`(𝐫,s)`$ and $`(𝐫^{},s^{})`$ are not equal. In the next section we will provide a simple worked example of the new construction scheme and will compare its results to those of other methods.
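The mechanism behind this criterion can be illustrated on a 1-D, single-branch toy band, where Eq. 3 reduces to a special-k-point sum. The smooth Bloch amplitude assumed below is purely illustrative; the point is that choosing all amplitudes real and positive on the central site makes the contributions interfere destructively away from the center.

```python
# Toy evaluation of Eq. (3) with the "coherent addition" phase choice.
# ASSUMPTION: c(k) is an arbitrary smooth, positive Bloch amplitude.
import numpy as np

nk = 8
kpts = (np.arange(nk) + 0.5) * 2 * np.pi / nk    # uniform special-point set
c = 1.0 + 0.3 * np.cos(kpts)                     # real and positive at the center

cells = np.arange(-4, 5)
# w(l) = (1/N_k) * sum_k exp(i k l) c(k), a Wannier function centered at n = 0
w = np.array([np.mean(c * np.exp(1j * kpts * l)) for l in cells]).real
for l, amp in zip(cells, w):
    print(f"l = {l:+d}:  w = {amp:+.3f}")
# only l = 0 and l = +-1 survive: interference localizes the mode
```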
## III Examples and Discussion
In order to illustrate the scheme presented in the previous section, we will employ a two-dimensional model crystal with two different atoms which occupy the $`1a`$ (white) and $`1b`$ (black) Wyckoff positions of the plane group $`p4mm`$ (See Fig. 1 a)). A simple harmonic model for the force constants (with the couplings among the white atoms considered up to fourth nearest neighbors and the rest to first nearest neighbors, which corresponds to 6 independent parameters) gives the dispersion branches of panel b) in the figure. We will focus our attention on the two optical branches, which form a single band since they are essentially degenerate at the $`\mathrm{\Gamma }`$ and M points. These optical branches transform according to the decompositions
$$\begin{array}{ccc}\mathrm{\Gamma }\hfill & (4mm):\hfill & E\hfill \\ \mathrm{X}\hfill & (2mm):\hfill & B_1+B_2\hfill \\ \mathrm{M}\hfill & (4mm):\hfill & E,\hfill \end{array}$$
(4)
in irreducible representations of the little co-groups at the high symmetry points. A simple application of the procedure spelled out in the Appendix shows that the band representation compatible with the above band symmetry is that represented by the pair $`(,E)`$, in which $`E`$ is a two-dimensional irreducible representation which turns out to be the vector representation of the point symmetry group at $``$. The set of $`\{x\}`$ vectors is then trivial to construct: as the “$``$” Wyckoff position is occupied, it is just enough to make $`x_{,1}`$ and $`x_{,2}`$ unit vectors attached to the central atom and pointing in the $`x`$ and $`y`$ cartesian directions, respectively. For this crystal structure, the simplest non-trivial set of special k-points is given by $`\{(1/8,1/8);(1/8,3/8);(3/8,3/8)\}`$. The explicit application of the localization criterion proceeds as follows. At each k-point in the set the normal modes are computed and the $`M^𝐤`$ matrices constructed. For example, at $`(1/8,3/8)`$, the normal modes are
$$\begin{array}{cc}u_1^𝐤=\hfill & (\mathrm{};0.23,0.93;\mathrm{})\hfill \\ u_2^𝐤=\hfill & (\mathrm{};0.84,-0.19;\mathrm{}),\hfill \end{array}$$
(5)
where the “$`\mathrm{}`$” refer to displacements on atoms other than the one at the center. The “coherent addition at the center” condition then becomes:
$$\begin{array}{c}0.23M_{11}+0.84M_{12}>0\hfill \\ 0.23M_{21}+0.84M_{22}=0\hfill \\ 0.93M_{11}-0.19M_{12}=0\hfill \\ 0.93M_{21}-0.19M_{22}>0,\hfill \end{array}$$
(6)
and is satisfied by
$$M=\left(\begin{array}{cc}0.200& 0.980\\ 0.964& -0.264\end{array}\right),$$
(7)
uniquely defined but for row-specific arbitrary factors. Since M is not unitary, the two optical Bloch vectors $`\stackrel{~}{u}_{,s}`$ at this k-point will not be orthogonal (although they can of course still be chosen to be normalized).
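The conditions of Eq. 6 are simple enough to solve numerically. The following sketch takes the central-atom amplitudes of Eq. 5, builds each row of $`M`$ as the normalized vector orthogonal to the appropriate projected mode with the sign fixed by the “parallel” condition, and recovers the matrix of Eq. 7.

```python
# Numerical check of Eqs. (5)-(7).
import numpy as np

U = np.array([[0.23, 0.93],       # central-atom (x, y) amplitudes of u_1^k
              [0.84, -0.19]])     # central-atom (x, y) amplitudes of u_2^k
x = np.eye(2)                     # x_{*,1}, x_{*,2}: unit displacements along x, y

M = np.zeros((2, 2))
for s in range(2):
    t = 1 - s
    a = U @ x[t]                              # orthogonality: M[s] . a = 0
    row = np.array([a[1], -a[0]]) / np.hypot(a[0], a[1])
    if row @ (U @ x[s]) < 0:                  # "parallel" condition fixes the sign
        row = -row
    M[s] = row

print(np.round(M, 3))   # [[ 0.2  0.98], [ 0.964 -0.264]], matching Eq. (7)
```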
Once this procedure has been performed at every k-point in the set, the integral (sum) in Eq. 3 can be carried out to give the components of the lattice Wannier functions. Since the $`\{\stackrel{~}{u}^𝐤\}`$ determined by the localization criterion also satisfy the symmetry compatibility relations (see Appendix), the Wannier functions are symmetry-adapted. In Fig. 2 c) we show the displacements associated to the local mode $`w_{,1}`$, which transform as the first component of the vector representation $`(E)`$ of $`4mm`$. The degree of localization of these lattice Wannier functions can be gauged by computing the contribution to the total norm from a given shell around the center atom, as presented on Table I. Less than one per cent of the norm is outside the fourth shell (which corresponds roughly to the second-neighbor unit cells). If the integral in Eq. 3 is computed using a denser special-point set, the degree of localization is maintained, as can be seen by comparing the columns labeled “this work (3 k)” and “this work (10 k)” on Table I. This means that the quality of the local modes can be systematically improved while retaining a high degree of localization.
It is enlightening to compare this scheme to that of Rabe and Waghmare. In the latter the analysis of the symmetry compatibility relations proceeds in the same way, and once the right $`(𝐫,\tau )`$ set has been identified, a series of orthonormal $`x`$ sets is constructed at successive shells centered on $`𝐫`$. The extent of the outermost shell fixes the localization of the Wannier functions by construction, and the actual atomic displacements are determined by fitting to the normal modes computed at a few high-symmetry points of the Brillouin Zone. In essence, the normal-mode information determines the weight assigned to each symmetry-adapted shell, so there is a tradeoff between the extent of the lattice Wannier functions and the amount of information from the real dispersion relations that can be used in the construction procedure. For example, in the PbTiO<sub>3</sub> work, Rabe and Waghmare found that adding information about the normal modes at the X point resulted in a less localized local mode than if only the $`(\mathrm{\Gamma },\mathrm{M},\mathrm{R})`$ set was used. In contrast, our scheme can deal with the extra k-point without loss of localization: our local modes for PbTiO<sub>3</sub> using four high-symmetry points are more localized than the best (three point) RW lattice Wannier functions.
It is clear that localization cannot be the main quality criterion for the construction of local modes. If it were, then the ZVR scheme, which uses only one high-symmetry k-point to construct a (very localized) lattice Wannier function, would be the method of choice. In fact, the real test for local mode sets is the degree to which they reproduce the energetics of the relevant subspace $``$. That is, in our case, the degree to which the dispersion relations of the effective Hamiltonian
$$H_{\mathrm{eff}}=H_{\mathrm{eff}}(Q_1,Q_2,\mathrm{},Q_{Nn})$$
(8)
match the real dispersion branches associated with $``$.
In Eq. 8, the variables $`Q_i`$ are the amplitudes of the local mode variables, so that $`H_{\mathrm{eff}}`$ can be thought of as the “projection” of the complete Hamiltonian into the relevant subspace $``$ (which is typically considered as energetically decoupled from the rest of the configuration space of the crystal). The explicit form of $`H_{\mathrm{eff}}`$ will depend on the detailed structure of the lattice Wannier functions. In particular, the number of distinct coupling coefficients (representing the interaction of modes at different sites) which one should take into account in $`H_{\mathrm{eff}}`$ is determined by the spatial extent of the local modes.
We have constructed effective Hamiltonians for the model crystal for each of the three local-mode construction schemes discussed above (we obtain the coupling between $`w_{,s}^𝐧`$ and $`w_{,s^{}}^𝐧^{}`$ by calculating the energy of the crystal when it is distorted by just these modes). The original crystal Hamiltonian involved interactions up to fourth nearest neighbors for white atoms. Since the local modes basically involve displacements of the central white atom, we have kept the same range of interaction in $`H_{\mathrm{eff}}`$, but now referring, of course, to fourth nearest-neighbor local modes. This amounts to using ten independent coupling coefficients; a larger number of parameters would not be reasonable in a practical application.
ZVR-style local modes are very localized and do not couple beyond the fourth neighbor shell, so the considered $`H_{\mathrm{eff}}`$ includes all the existing interactions. This can be seen on panel a) of Figure 3: the dispersion branches computed from $`H_{\mathrm{eff}}`$ match the exact ones at the $`\mathrm{\Gamma }`$ point. However, $`H_{\mathrm{eff}}`$ gives a poor description of the dispersion branches away from $`\mathrm{\Gamma }`$, as should be expected in view of the construction procedure. (Incidentally, the inverse of Eq. 3 leads to Bloch modes which are not normalized to unity, except at the $`\mathrm{\Gamma }`$ point. The standard analysis of $`H_{\mathrm{eff}}`$ as given would lead to the low-lying dispersion branches in the figure. The higher branches are obtained by considering the corresponding generalized eigenvalue problem.) Panel b) shows that the $`H_{\mathrm{eff}}`$ constructed on the basis of RW local modes gives a good qualitative overall description of the dispersion, but fails to match the exact branches at the $`\mathrm{\Gamma }`$ point (even though it should, given that this point was used in the construction scheme). The reason is that the local modes are more extended, and it is necessary to include couplings to further shells (at least up to seventh nearest neighbors) for the match to be essentially perfect. This means that the RW scheme does not lead to efficient local modes, in the sense stated above. The situation gets worse if more accuracy is needed in the overall description of the dispersion branches: the local modes turn out to be more extended, and even more coupling terms are needed in $`H_{\mathrm{eff}}`$.
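The comparison step itself is straightforward. The sketch below builds the Fourier transform of the coupling matrix of a quadratic $`H_{\mathrm{eff}}`$ for the two local modes, together with an overlap matrix (needed because the modes are not orthonormal), and solves the generalized eigenvalue problem for the dispersion branches. All coupling and overlap values are placeholders, not the fitted coefficients discussed in the text.

```python
# Sketch: dispersion branches of a quadratic H_eff with non-orthonormal modes.
# ASSUMPTION: J(d) and S(d) below are illustrative placeholder values.
import numpy as np
from scipy.linalg import eigh

J = {(0, 0): np.diag([4.0, 4.0]),                 # on-site force constants
     (1, 0): np.array([[-1.0, 0.0], [0.0, -0.5]]),
     (0, 1): np.array([[-0.5, 0.0], [0.0, -1.0]]),
     (1, 1): -0.1 * np.eye(2)}
S = {(0, 0): np.eye(2), (1, 0): 0.05 * np.eye(2), (0, 1): 0.05 * np.eye(2)}

def branches(kx, ky):
    Jk = np.zeros((2, 2), dtype=complex)
    Sk = np.zeros((2, 2), dtype=complex)
    for mats, out in ((J, Jk), (S, Sk)):
        for (dx, dy), m in mats.items():
            ph = np.exp(1j * (kx * dx + ky * dy))
            out += m * ph
            if (dx, dy) != (0, 0):
                out += m.conj().T * np.conj(ph)   # Hermitian counterpart
    w2 = eigh(Jk, Sk, eigvals_only=True)          # J(k) v = w^2 S(k) v
    return np.sqrt(np.abs(w2))

for kx in np.linspace(0.0, np.pi, 5):             # Gamma -> X
    print(branches(kx, 0.0))
```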
In contrast, the local modes constructed following our heuristic criterion for localization do exhibit good efficiency (the dispersion branches do not change much when couplings to more than fourth nearest neighbors are included) and provide a very good qualitative match of the true branches throughout the BZ (Fig. 3 c)). (It should be noted that our construction scheme does not involve any high-symmetry points, hence the offset of the branches at $`\mathrm{\Gamma }`$ and M. We trade an overall good match for perfect accuracy at a few points.) Since a few coupling terms are enough to take into account the structure of the local modes, and the resulting $`H_{\mathrm{eff}}`$ provides a good fit to the true branches, our lattice Wannier functions are well suited for the local representation of the relevant subspace $``$. Moreover, they can be improved if needed by including more k-points in the integration set, with only a minor sacrifice in the compactness of the effective Hamiltonian.
We find these general conclusions to remain valid when more complicated interaction models are considered.
The practical application of our method of local mode construction to real materials requires the knowledge of the normal modes at general points of the Brillouin Zone. This information is easily obtained with modern linear-response codes without the need for large supercells. In the field of phase transitions, the use of this new scheme should enable the study of more complicated situations than those considered up to now. Competition of instabilities associated to different regions of the BZ or complications derived from anti-crossing phenomena are examples in which this method is bound to be useful. On the other hand, this work might provide an illustration of some of its theoretical underpinnings: the physical interpretation of the band representation associated to a dispersion band or the symmetry-induced continuity of phonon spectra are two instances of this.
## IV Conclusions
We have presented a straightforward scheme for the construction of very localized lattice Wannier functions, with explicit consideration of crystal symmetry. The new localization procedure enables a systematic improvement in the description of the relevant physics (by simply using denser sets of special k-points in the BZ integration) while still being quite efficient in regard to the number of coupling parameters needed in the effective Hamiltonian. Besides, the present method is straightforward to implement.
## Acknowledgements
We thank Karin Rabe, Philippe Ghosez, and David Vanderbilt for useful comments. This work was supported in part by the UPV research grant 060.310-EA149/95 and by the Spanish Ministry of Education grant PB97-0598. J.I. acknowledges fellowship support from the Basque regional government and thanks Agustin Válgoma for comments on the manuscript.
## Appendix
Let us study in more detail the equivalence between the band symmetry emerging from the transformation properties of the Bloch modes and the band representation associated to a $`(𝐫,\tau )`$ set. This can be done by considering the action of $`G`$ on the $`(𝐫,\tau )`$ set of local modes and, consequently, on the associated Bloch modes $`\stackrel{~}{u}_{𝐫_i,s}^𝐤`$.
In order to proceed, we need to formulate the transformation properties of the modes $`w_{𝐫,s}^\mathrm{𝟎}`$ under the action of $`\{\overline{R}|\overline{𝐯}\}\in \widehat{G}_r`$. Since we consider $`\{\overline{R}|\overline{𝐯}\}`$ acting on the modes themselves and not on their components, we denote symmetry operations by the associated operators $`O\{\overline{R}|\overline{𝐯}\}`$. We have
$$O\{\overline{R}|\overline{𝐯}\}w_{𝐫,s}^\mathrm{𝟎}=\sum _{h=1}^{d_\tau }D_{hs}^\tau (\{\overline{R}|\overline{𝐯}\})w_{𝐫,h}^\mathrm{𝟎}$$
(9)
where $`𝐃^\tau (\{\overline{R}|\overline{𝐯}\})`$ is the matrix associated to $`\{\overline{R}|\overline{𝐯}\}`$ by irrep $`\tau `$. Now, we consider the rest of elements in the $`(𝐫,\tau )`$ set. They can be mathematically defined as
$$w_{𝐫_i,s}^𝐧:=O\{E|𝐧\}O\{R_i|𝐯_i\}w_{𝐫,s}^\mathrm{𝟎}$$
(10)
where $`\{E|𝐧\}`$ is a lattice translation and $`\{R_i|𝐯_i\}`$ is one of the $`d_r`$ elements in $`G/G_r`$, which are chosen so that all the $`𝐫_i:=\{R_i|𝐯_i\}𝐫`$ lie in the same cell. The action of any $`\{R|𝐯\}G`$ on an arbitrary $`w_{𝐫_i,s}^𝐧`$ can be decomposed in: a lattice translation, a change of the center and a local transformation. Mathematically, this is expressed as
$`O\{R|𝐯\}(O\{E|𝐧\}O\{R_i|𝐯_i\}w_{𝐫,s}^\mathrm{𝟎})=`$ (11)
$`O\{E|\{R|𝐯\}(𝐧+𝐫_i)-𝐫_j\}O\{R_j|𝐯_j\}O\{\overline{R}|\overline{𝐯}\}w_{𝐫,s}^\mathrm{𝟎}`$ (12)
where $`\{R_j|𝐯_j\}\in G/G_r`$ and $`\{\overline{R}|\overline{𝐯}\}\in \widehat{G}_𝐫`$ are uniquely determined. Together with Eq. 9, this expression defines the band representation and, by using the inverse of Eq. 3, it can be written in the basis of Bloch modes. We obtain
$`O\{R|𝐯\}\stackrel{~}{u}_{𝐫_i,s}^𝐤=`$ (13)
$`\mathrm{exp}(iR𝐤\cdot (\{R|𝐯\}𝐫_i-𝐫_j)){\displaystyle \sum _{h=1}^{d_\tau }}D_{hs}^\tau (\{\overline{R}|\overline{𝐯}\})\stackrel{~}{u}_{𝐫_j,h}^{R𝐤}`$ (14)
By examining the representations this equation defines in the high symmetry k-stars, it can be easily checked whether the band representation $`(𝐫,\tau )`$ is equivalent to the band symmetry we want to describe.
Once a convenient $`(𝐫,\tau )`$ set is chosen, Eq. 14 fixes the requirements on Bloch modes so that they lead to symmetry adapted local modes $`w_{𝐫_i,s}^𝐧`$. For pure translations, Eq. 14 reduces to Bloch theorem. Point symmetry determines the transformation properties of the $`d_\tau d_r`$ Bloch modes in each k-point and establishes the relationship of these with those in the rest of the k-star. However, Eq. 14 does not determine the form of the $`M^𝐤`$ matrices completely. For instance, in a general k-star ($`\overline{G}^𝐤=\{E\}`$) no condition is imposed on the choice of Bloch modes in a representative $`𝐤`$, though, once this is done, the modes in the rest of the star are fixed. This is the freedom we use in our construction procedure to get the localization of the modes.
# Discovery of a Third Harmonic Cyclotron Resonance Scattering Feature in the X-ray Spectrum of 4U 0115+63
## 1. Introduction
The transient X–ray source 4U 0115+63 is an accreting X–ray pulsar in an eccentric 24 day orbit (Bildsten et al. (1997)) with the O9e star, V635 Cassiopeiae (Unger et al. (1998)). X–ray outbursts have been observed from 4U 0115+63 with *Uhuru* (Forman, Jones & Tananbaum (1976)), HEAO-1 (Wheaton et al. (1979); Rose et al. (1979)), Ginga (e.g. Tamura et al. (1992)), and CGRO/BATSE (Bildsten et al. (1997)).
A cyclotron resonance scattering feature (CRSF) in 4U 0115+63 was first noted near 20 keV by Wheaton, et al. (1979) with the UCSD/MIT hard X–ray (10-100 keV) experiment aboard HEAO-1. White, Swank & Holt (1983) analyzed concurrent data from the lower energy (2-50 keV) HEAO-1/A2 experiment and found an additional feature at $``$12 keV. Two outbursts of 4U 0115+63 were observed with Ginga, in 1990 February and 1991 April (Nagase et al. (1991); Tamura et al. (1992); Mihara (1995)). The pattern of absorption features differed dramatically in the two outbursts. A pair of features similar to the HEAO-1 results in the 1990 outburst gave way to a single feature near 17 keV in 1991 (Mihara, Makishima & Nagase (1998)).
We discuss here spectral and timing analyses of observations of the 1999 March outburst (Wilson, Harmon & Finger (1999); Heindl & Coburn (1999)) obtained with the *Rossi X-Ray Timing Explorer* (RXTE).
## 2. Observations and Analysis
Observations were made with the Proportional Counter Array (PCA) (Jahoda et al. (1996)) and High Energy X-ray Timing Experiment (HEXTE) (Rothschild et al. (1998)) on board RXTE. The PCA is a set of 5 Xenon proportional counters sensitive in the energy range 2–60 keV with a total effective area of $``$7000 $`\mathrm{cm}^2`$. HEXTE consists of two arrays of 4 NaI(Tl)/CsI(Na) phoswich scintillation counters (15-250 keV) totaling $``$1600 $`\mathrm{cm}^2`$. The HEXTE alternates between source and background fields in order to measure the background. The PCA and HEXTE fields of view are collimated to the same 1° full width half maximum (FWHM) region.
Beginning on 1999 March 3, daily, short ($``$1 ks) monitoring observations were carried out. In addition, we performed four long pointings (duration $``$15-35 ks, labeled A – D in Fig. 1) to search for CRSFs. Observation B, on 1999 March 11.87-12.32, spanned periastron passage at March 11.95 (Bildsten et al. (1997)). Figure 1 shows the RXTE/All Sky Monitor (ASM, 1.5–12 keV) light curve of 4U 0115+63 together with the times of the pointed observations. In this work, we concentrate on observation B.
### 2.1. Spectral Analysis
The spectrum of 4U 0115+63 varies significantly with neutron star rotation phase (Nagase et al. (1991)), making fits to the average spectrum difficult to interpret. In order to study the evolution of the spectrum through the pulse, we corrected photon arrival times to both the solar system and the binary system barycenters using the ephemeris of Bildsten, et al. (1997). We then applied a Z<sup>2</sup> period search (Buccheri et al. (1983)) to the HEXTE data to determine the pulse period. Figure 2 shows folded light curves derived from spectra in 50 pulse phase bins for observation B, where the period was 3.614512(33)s. The folded light curve has a sharp main peak, followed by a broader, softer second peak, similar to earlier reports (White, Swank & Holt (1983); Bildsten et al. (1997)). In searching for CRSFs, we followed the spectral analysis methods described in Kreykenbohm, et al. (1998).
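A minimal sketch of this kind of search is shown below: the $`Z^2`$ statistic (here with two harmonics) evaluated on a grid of trial periods for a list of already barycenter-corrected event times. The event list is simulated — an assumption for illustration only.

```python
# Sketch of a Z_n^2 period search on barycenter-corrected event times.
# ASSUMPTION: the event list is simulated, not real HEXTE data.
import numpy as np

def z2(times, period, nharm=2):
    """Z_n^2 statistic (Buccheri et al. 1983) for one trial period."""
    phase = 2 * np.pi * np.mod(times / period, 1.0)
    z = sum(np.cos(k * phase).sum() ** 2 + np.sin(k * phase).sum() ** 2
            for k in range(1, nharm + 1))
    return 2.0 * z / len(times)

rng = np.random.default_rng(0)
true_p = 3.6145                                   # s
t = np.sort(rng.uniform(0.0, 2000.0, 20_000))
keep = rng.uniform(size=t.size) < 0.5 * (1 + 0.4 * np.sin(2 * np.pi * t / true_p))
events = t[keep]                                  # pulsed toy event list

trials = np.linspace(3.60, 3.63, 601)
stats = np.array([z2(events, p) for p in trials])
print(f"best period: {trials[np.argmax(stats)]:.5f} s")
```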
Because CRSFs at $``$12 and $``$22 keV are known in 4U 0115+63 (White, Swank & Holt (1983); Nagase et al. (1991)), we first concentrated on the HEXTE data where higher harmonics might appear. We fit a “cut-off power law” (a power law times an exponential) to the HEXTE spectra in 20 phase bins. The reduced $`\chi ^2`$ of these fits ranged from 1.3 to 12.3 (62 degrees of freedom, “dof”). Significant residuals resembling absorption features near 20–25 keV were concentrated at the main pulse and through the rise and peak of the second. In the fall of the second peak, residuals appeared between 30–40 keV. We then fit for an absorption feature near 20–25 keV, resulting in reduced $`\chi ^2`$s between 0.7 and 2.0 (59 dof). We used a simple Gaussian model for the optical depth profile. Fig. 2 shows the result of an F-Test for adding this line. In the cases where no line was allowed by the fit, the points are plotted with a value of $`10^0`$. Next, we allowed a CRSF between 28–45 keV. This significantly improved the fits in about half of the phase bins, including some near the main peak where large $``$20 keV residuals in the initial fits masked the presence of this line. The corresponding F-test results are also plotted in Fig. 2. Although there is strong evidence for multiple lines at other phases, the phase range 0.70–0.76 showed both a significant $``$20 keV line and the most clearly line-like residuals in the 30–40 keV range in the no-lines fit. We therefore chose to concentrate on this phase range in this *Letter*. We plan to perform detailed analysis of all phases and all four long (A–D) observations in a future paper.
Next, we jointly fit the HEXTE and PCA data for phase 0.70–0.76. To account for uncertainties in the response matrix, 1% systematic errors were applied to the PCA data. None of the continuum models (high energy cut-off power law, Fermi-Dirac cut-off (FDCO) times a power law, and Negative and Positive power law Exponential (NPEX); see Kreykenbohm, et al. 1998) typically used for accreting pulsar spectra provided an acceptable fit without the inclusion of absorption features. A black body with kT$``$0.4 keV and photoelectric absorption were required to describe the data. No Fe-K line was required. Ultimately, it was necessary to include CRSFs at $``$12, $``$21, and $``$34 keV in the joint spectrum. The fitted line parameters were insensitive to the details of the continuum model used. The results given here used a Fermi-Dirac cut-off (FDCO) times a power law, given by $`F(E)\propto (1+e^{(E-E_c)/E_f})^{-1}\times E^{-\mathrm{\Gamma }}`$. F is the photon flux, $`\mathrm{E}_\mathrm{c}`$ the cutoff energy, $`\mathrm{E}_\mathrm{f}`$ the folding energy, and $`\mathrm{\Gamma }`$ the photon index. $`\mathrm{E}_\mathrm{c}`$ was fixed at zero. An F-Test for the significance of adding the $``$34 keV line to a model with only two absorption features gave a chance probability of $`10^{-17}`$.
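The F-test quoted here compares the $`\chi ^2`$ of nested fits. A minimal sketch, with placeholder $`\chi ^2`$ values rather than those of our actual fits, is:

```python
# Sketch of the F-test for adding spectral-model components.
# ASSUMPTION: the chi^2 and dof values below are placeholders.
from scipy.stats import f as f_dist

def ftest(chi2_1, dof_1, chi2_2, dof_2):
    """Chance probability that model 2 (more parameters) improves by luck."""
    dnu = dof_1 - dof_2
    F = ((chi2_1 - chi2_2) / dnu) / (chi2_2 / dof_2)
    return f_dist.sf(F, dnu, dof_2)

print(ftest(chi2_1=300.0, dof_1=77, chi2_2=123.0, dof_2=74))
```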
### 2.2. Temporal Variability
Along with standard Fourier techniques, we analyzed the data in the time domain using the linear state space model (LSSM) formalism described by König & Timmer (1997) and Pottschmidt, et al. (1998). Parameters of the LSSM are related to dynamical timescales of the system such as oscillation periods, decay times of damped oscillators and stochastic noise. As shown in Figure 4, the data are well described by an LSSM of order 2. This model, based on an auto-regressive process, is dominated by a stochastically driven sinusoid of period 552 s which exponentially decays with an e-folding time of $`P_{\mathrm{fold}}=282`$ s. This short $`P_{\mathrm{fold}}`$ accounts for the broad QPO peak seen in the PSD (Fig. 5). A Kolmogoroff-Smirnoff test shows that the difference between the data and the LSSM is purely attributable to white noise. The light curve is thus consistent with a single exponentially decaying sinusoid, driven by a white noise process.
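The connection between the fitted order-2 LSSM and the quoted timescales can be sketched through the underlying AR(2) process: its pair of complex-conjugate characteristic roots encodes both the oscillation period and the e-folding time. The 1 s sampling time below is an assumption for illustration.

```python
# Sketch: an AR(2) process as a stochastically driven, damped oscillator.
# ASSUMPTION: dt = 1 s; coefficients chosen to match P = 552 s, P_fold = 282 s.
import numpy as np

dt, P, tau = 1.0, 552.0, 282.0
r = np.exp(-dt / tau)               # modulus of the complex AR roots
w = 2 * np.pi * dt / P              # argument of the complex AR roots
a1, a2 = 2 * r * np.cos(w), -r**2   # x_t = a1*x_{t-1} + a2*x_{t-2} + eps_t

# invert: recover P and P_fold from (a1, a2)
z = max(np.roots([1.0, -a1, -a2]), key=lambda q: q.imag)
print(f"P = {2*np.pi*dt/np.angle(z):.1f} s, "
      f"P_fold = {-dt/np.log(np.abs(z)):.1f} s")
```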
## 3. Results and Discussion
### 3.1. Spectrum and CRSFs
Fig. 3 shows the best fit joint count spectrum from pulse phase 0.70–0.76. Also plotted is the inferred incident photon spectrum. Best fit parameters are given in Table 1, and the reduced $`\chi ^2`$ of the fit is 1.66 (74 dof). This is the first time that a fundamental and two harmonic CRSFs have been detected in a single accreting X–ray pulsar spectrum. Previously, at most a fundamental and second harmonic have been seen: 4U 1907+09 (Cusumano et al. (1998)); or suggested: Vela X-1 (Kreykenbohm et al. (1998)), A0535+25 (Kendziorra et al. (1994)), 1E 2259+586 (Iwasawa, Koyama & Halpern (1992)). Those earlier observations lacked the broad-band sensitivity of RXTE. Furthermore, simple fits to the phase-resolved HEXTE spectra show that the X–ray spectrum varies rapidly with neutron star rotation. We observe significant variations between consecutive phase bins which cover only 2% of the pulse phase. This suggests complex spatial variations of conditions near the neutron star polar cap. Since the 35 keV CRSF is only present in about half of the pulse (predominantly during the fall of the second, weaker pulse), and both the 22 and 35 keV lines are only present together in 3 of 20 coarse phase bins, averaging over large phase angles would likely have washed out the line in the variable continuum.
The fundamental energy of 12.4 keV implies a neutron star surface field of $`1.1\times 10^{12}(1+\mathrm{z})`$ G. Contrary to simple Landau theory, the observed line spacing is not quite harmonic. The ratios of the $`2^{\mathrm{nd}}`$ and $`3^{\mathrm{rd}}`$ harmonics to the fundamental are $`1.73\pm 0.08`$ and $`2.71\pm 0.13`$ respectively. We tried fits with the $`2^{\mathrm{nd}}`$ and $`3^{\mathrm{rd}}`$ lines constrained to be exact harmonics of the first, whose energy was allowed to vary. The resulting fit was unacceptable (reduced $`\chi ^2`$ of 6.7 with 76 dof). We did, however, find a reasonable fit with the $`3^{\mathrm{rd}}`$ harmonic tied to the first but the second free to vary. Nevertheless, an F-test comparing this fit to our best model gives a chance improvement probability of $`2\times 10^{-4}`$ for allowing the non-integer energy ratio of the first and third lines.
In addition to relativistic shifts, line energies may deviate from harmonic for a number of reasons. The main scattering for the harmonics may take place in regions of different magnetic field, either resulting from optical depth effects in the mound of accreting matter, or for lines primarily produced at opposite magnetic poles. It is interesting to note (see Fig. 2) that the second and third harmonics are most significant in the main and secondary pulses, respectively, possibly indicating origins at opposite poles.
In our initial fits to the HEXTE data alone, we observed that the 20 keV CRSF varied in both strength and energy (by 20%) through the pulse phase. This was first observed with Ginga by Nagase, et al. (1991) in the 1990 Feb. outburst. The line is strongest and the energy highest ($``$24 keV), on the falling edge of the main pulse, which is similar to the behavior of the CRSFs in Cen X-3 and 4U 1626-67 (Heindl & Chakrabarty (1999)). The Ginga data showed significant $``$11 and $``$22 keV lines at all 8 pulse phases analyzed (Mihara (1995)). The HEXTE data find that the $``$20 keV line is not required just before the rise of the main pulse (Fig. 2). However, with the addition of the PCA data, which constrain the continuum and fundamental line, it is possible that this line will be required at all phases. In any case, strong long term variability of the lines is known (Mihara (1995)), so differences between these results and earlier observations are not surprising.
### 3.2. Temporal Variability: a 2 mHz QPO
Figures 4 and 5 show the PCA light curve of observation B and its power spectral density (PSD), respectively. Strong variability with an $``$500 s period is obvious. At frequencies above 5 mHz, the PSD can be described by an overall power law $`f^{-1}`$ plus peaks at the neutron star rotational frequency and its multiples. Some accreting pulsars have shown a QPO at the beat frequency between the neutron star rotation and the Keplerian orbit at the inner edge of the accretion disk (Finger, Wilson & Harmon (1996)). Using the relations in Finger, Wilson & Harmon (1996) with a surface B field strength of $`1.3\times 10^{12}`$ G, a distance of 6 kpc (Negueruela (1999)), and a total flux of $`2.1\times 10^{-8}`$ $`\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, the expected beat QPO frequency is 0.8 Hz. Overtones of the rotational frequency confuse the search for QPOs in this region (see Fig. 5), and no obvious peaks apart from the pulsation were seen in the 1 ks short pointings or in observation B.
Below 5 mHz, the PSD is dominated by a broad QPO feature from the 500 s oscillation. The shape of this feature is complex and asymmetric; it can be described neither by a single Lorentzian line nor by the superposition of two Lorentzian lines. The feature peaks at $``$1.5 mHz and has a FWHM of 1 mHz ($`Q=f_0/\mathrm{\Delta }f\approx 1.5`$). The excess power of the QPO with respect to the underlying red noise component in the range from 1 mHz to 4 mHz is $``$5% RMS. The $``$500 s period of this oscillation is longer than that of any X–ray QPO previously reported from an accreting X–ray pulsar. Soong & Swank (1989) reported a broad 0.062 Hz QPO in HEAO-1 observations of a flaring state in 4U 0115+63 that also did not fit into the beat frequency model.
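A sketch of this excess-RMS estimate: integrate the rms-normalized PSD over 1–4 mHz after subtracting a continuum level estimated outside the feature. The light curve, sampling, and flat-continuum approximation below are illustrative placeholders.

```python
# Sketch of a fractional-rms estimate for a low-frequency QPO.
# ASSUMPTIONS: toy light curve; red-noise continuum approximated as flat.
import numpy as np

dt, n = 16.0, 4096                              # bin size (s), number of bins
rng = np.random.default_rng(2)
t = np.arange(n) * dt
rate = 100.0 * (1 + 0.05 * np.sqrt(2) * np.sin(2 * np.pi * t / 552.0))
rate += rng.normal(0.0, 1.0, n)                 # measurement noise

freq = np.fft.rfftfreq(n, dt)[1:]
power = (np.abs(np.fft.rfft(rate - rate.mean())[1:]) ** 2
         * 2 * dt / (n * rate.mean() ** 2))     # (rms/mean)^2 per Hz

band = (freq > 1e-3) & (freq < 4e-3)
cont = np.median(power[(freq > 4e-3) & (freq < 2e-2)])
rms = np.sqrt(np.sum(power[band] - cont) * (freq[1] - freq[0]))
print(f"fractional rms in 1-4 mHz: {100 * rms:.1f}%")
```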
The QPO was probably present in several of the short ($``$1 ks) observations as well, as there was apparent, slow variability on a several-hundred-second timescale. In observation A, the QPO was at most weakly present, as no clear peak is evident in the PSD. Two possible explanations for the QPO are: 1) modulation of the accretion flow, and 2) occultation of the beam by intervening matter in an accretion disk. It seems unlikely that the variability is due to modulation of the accretion flow itself. The timescales at the neutron star pole (milliseconds) and the inner edge of the disk (seconds) are too fast. Furthermore, the rotation period (days) of V635 Cas is too long.
We compared PCA spectra at minima of the 500 s cycle to the spectra of the following maxima. The spectral shape is unchanged from 2.5 – 5 keV, and only $``$20% deviations appear at higher energies. Because the spectrum below 5 keV is steady through the QPO and the flux varies by a factor of two from peak to minimum, the QPO mechanism cannot be absorption in *cold* material. If it were, the low energy spectrum would be highly modified by photoelectric absorption. It is possible that Thomson scattering in ionized matter causes the variability. We suggest two possible mechanisms, which both require that the accretion disk be viewed nearly edge on. Both are consistent with a second order LSSM process. First, an azimuthal warp propagating around the disk could cause the ionized disk surface to intervene in the line of sight. In this case, the 500 s timescale is the time for the wave to circle the disk (assuming a single-peaked warp). In the second picture, the absorption takes place in a lump in the disk which orbits at a Kepler period of 500 s.
## 4. Summary
We have made two discoveries in the RXTE observations of the 1999 March outburst of 4U 0115+63. The HEXTE data have revealed for the first time in any pulsar a third harmonic CRSF. The line spacing between the fundamental and second harmonic and between the second and third harmonics are not equal, and are not multiples of the fundamental line energy. We have also discovered the slowest (2 mHz) QPO yet observed from an accreting pulsar. It was most pronounced during an observation spanning periastron passage of the neutron star around its massive companion. Based on the timescale, amplitude, and energy spectrum of the oscillation, it is most likely due to obscuration of the neutron star by hot accretion disk matter. The longest QPO previously observed had a timescale of 100 s in SMC X-1 (Angelini, Stella & White (1991)).
We thank E. Smith, J. Swank, and C. Williams-Heikkila for rapidly scheduling the observations and supplying the realtime data. ASM data are provided by the RXTE/ASM teams at MIT and at the RXTE SOF and GOF at NASA’s GSFC. This work was supported by NASA grants NAS5-30720 and NAG5-7339, DFG grant Sta 173/22, and a travel grant from the DAAD.
# Temperature-scaling behavior of the Hall conductivity for Hg-based superconducting thin films
## I INTRODUCTION
The Hall effect in the mixed state of high-temperature superconductors (HTS) is one of the most interesting and controversial problems related to vortex dynamics. Many experiments have shown that the Hall anomaly occurs not only in HTS, such as YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO), Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> (Bi-2212) , and Tl<sub>2</sub>Ba<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> (Tl-2212), but also in conventional superconductors, for example, in the thin-film and the single-crystalline forms of Nb, V, and In-Pb alloys.
So far, two kinds of sign reversals have been observed. The first is a simple, single sign reversal as observed in YBCO and La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>(LSCO), and the second is a double sign reversal from positive to negative and then to positive again as the temperature decreases. The distinction between the two is that the single sign reversal is observed for the case of relatively low anisotropy while the double sign reversal is observed in the case of relatively high anisotropy, such as Bi-, Tl-, and Hg-based compounds. The anisotropy ratio is known to be on the order of $`10^4`$ for Tl-2212, $`10^210^3`$ for LSCO, and $`1010^2`$ for YBCO, and the ratio for Hg-based superconductors is between the values for Tl-2212 and YBCO.
Recently, even a third sign reversal was observed in the low-temperature region for heavy-ion-irradiated Hg-based compounds. This observation is quite meaningful because this multiple sign reversal was predicted by Kopnin and depended on the behaviors of the density of states and of the gap of the superconductor. This multiple sign reversal is possible if there are localized or almost localized energy states in the superconducting state.
Just after the detection of the Hall effect in Nb crystals, Bardeen and Stephen derived the flux-flow resistivity and the Hall resistivity due to vortex motion. However, in their theory, the sign of the Hall resistivity is always positive. Quite a few other theories, based on the flux-backflow, two-band, or induced-pinning phenomena, have also been developed to explain the Hall effect in mixed states. However, the origin of the Hall anomaly is still not well understood.
In this paper, we report the magnetic-field dependence of the Hall conductivity in the mixed state of HgBa<sub>2</sub>CaCu<sub>2</sub>O<sub>6+δ</sub> (Hg-1212) and HgBa<sub>2</sub>Ca<sub>2</sub>Cu<sub>3</sub>O<sub>8+δ</sub> (Hg-1223) thin films. As expected, a double sign reversal is observed in these highly anisotropic superconductors. The measured Hall conductivities in the mixed states are better fitted by the form $`\sigma _{xy}=C_1/H+C_2+C_3H`$, which is different from the case of the less anisotropic YBCO superconductor where $`C_2`$ is negligible. In this Hg-based superconductor, $`C_1`$ and $`C_2`$ depend strongly on the temperature, but that is not the case for $`C_3`$. $`C_1`$ scales as $`(1-t)^n`$, which is partially understood from the temperature dependence of the gap and the coupling constant, but this understanding is not rigorous. Here, we claim that $`C_2`$, which appears only for highly anisotropic materials, scales as $`C_2\propto (1-t)^{n^{\prime }}`$. The critical exponent $`n^{\prime }`$ is $`2.0\pm 0.2`$ for Hg-1223 and $`3.2\pm 0.1`$ for Hg-1212. We observe, for the first time to the best of our knowledge, this scaling behavior of $`C_2`$ for Hg-based superconductors. A similar behavior was previously observed in LSCO, but was not analyzed. $`C_3`$ for these Hg-based thin films depends only weakly on the temperature, which is different from the behavior observed for YBCO and LSCO.
## II THEORETICAL BACKGROUD
Kopnin et al. and Dorsey obtained the Hall conductivity by using the time-dependent Ginzburg-Landau (TDGL) theory in which the relaxation time of the order parameter was taken to be complex. Kopnin et al. claimed that the negative Hall effect was very much related to the energy derivative of the density of states. According to their theory, the Hall conductivity can be expressed by two contributions. The first contribution due to the vortex motion is proportional to $`1/H`$ and is dominant in the low-field region. The second contribution, which originates from quasiparticles, is proportional to $`H`$.
An analysis of the Hall conductivity, based on the TDGL theory, for the YBCO single crystal was reported by Ginsberg and Manson. In their paper, the Hall conductivity $`\sigma _{xy}(H)`$ was well explained by the sum of $`H`$- and $`1/H`$-dependent parts. However, the behavior of $`\sigma _{xy}(H)`$ varies on a case-by-case basis. For example, the $`H`$-dependent part for YBCO is replaced by a field-independent part for Tl-2212. In the case of LSCO, the Hall conductivity is expressed as the sum of three terms: a $`1/H`$-dependent term, an $`H`$-dependent term, and an $`H`$-independent term. The temperature dependence of the coefficient of each component in the Hall conductivity was also investigated. The coefficient of the $`1/H`$ term varies as $`(1-t)^n`$, where $`t=T/T_c`$ is the reduced temperature. $`n`$ is observed to be 2 for YBCO and 2–3 for LSCO.
Recently, Kopnin et al. calculated the Hall conductivity based on the kinetic equations and the TDGL theory. Their approach included an additional force due to the kinetic effects of changing the quasiparticle densities in the normal core and in the superconducting state. The total Hall conductivity is given by
$$\sigma _{xy}(H)=\sigma _H^{(L)}+\sigma _H^{(D)}+\sigma _H^{(A)},$$
(1)
where $`\sigma _H^{(L)}`$ comes from localized excitations in the vortex cores, $`\sigma _H^{(D)}`$ is from delocalized quasiparticles above the gap, and $`\sigma _H^{(A)}`$ is from the additional force due to the kinetic effects of charge imbalance relaxation. In the vicinity of $`T_c`$, this term can be expressed as
$$\sigma _H^{(A)}\propto \frac{1}{H\lambda }\left(\frac{d\nu }{d\zeta }\right)\mathrm{\Delta }^2,$$
(2)
where $`d\nu /d\zeta `$ is the energy derivative of the density of states at the Fermi surface, $`\lambda `$ is the coupling constant, and $`\mathrm{\Delta }`$ is the superconducting energy gap. $`\sigma _H^{(L)}`$ and $`\sigma _H^{(A)}`$ depend on $`1/H`$, while $`\sigma _H^{(D)}`$ is proportional to $`H`$. One important thing to notice is that the first two terms on the right-hand side of Eq. (1) are always positive. For the dirty case, $`\sigma _H^{(L)}`$ is very small; hence, it can be neglected near $`T_c`$. A sign reversal can occur when $`\sigma _H^{(A)}`$ dominates over $`\sigma _H^{(D)}`$.
## III EXPERIMENT
High-quality Hg-1212 and Hg-1223 thin films were grown by using the pulsed laser deposition and post-annealing method. The details are reported elsewhere. The onset transition temperatures, $`T_c`$, are 127 K for Hg-1212 and 132 K for Hg-1223. The sizes of the specimens were 3 mm $`\times `$ 10 mm $`\times `$ 1 $`\mu `$m. A 20-T superconducting magnet system (Oxford Inc.) was used for the dc magnetic fields, and a two-channel nanovoltmeter (HP34420A) was used to measure the Hall resistivity ($`\rho _{xy}`$) and the longitudinal resistivity ($`\rho _{xx}`$) by using the standard dc five-probe method. The external magnetic field was applied parallel to the $`c`$ axis of the thin films, and the transport current density was 200–250 A/cm<sup>2</sup>. Both the Hall resistivity and the longitudinal resistivity showed Ohmic behavior, i.e., corresponding to the flux-flow region, at the current used in this study.
## IV RESULTS and DISCUSSION
We measured the longitudinal resistivities and the Hall resistivities of Hg-1212 and Hg-1223 thin films in the magnetic field region 0 T $`\le H\le `$ 18 T, and the results for Hg-1212 are shown in Fig. 1 while those for Hg-1223 are shown in Fig. 2. Compared to most previous experiments, performed at lower fields, we extended the magnetic field up to 18 T. The motivation for doing this was to check whether the previous analysis of the field dependence of the Hall conductivity based on the TDGL theory was valid even at this high field. Figures 1(a) and 2(a) show the field dependences of $`\rho _{xx}(H)`$ for various temperatures. $`\rho _{xx}(H,T)`$ increases monotonically with increasing temperature. In Figures 1(b) and 2(b), $`\rho _{xy}(H)`$ is plotted and has a nearly linear dependence on the field in the high-field region. The sign of the Hall resistivity in the low-field region near the transition temperature becomes negative, which is opposite to the positive sign of the Hall resistivity for the normal state. The range of the field in which sign reversal is observed for Hg-1223 is narrower than that for Hg-1212. The insets of Figs. 1 and 2 show detailed representations of the low-field region.
The Hall conductivity is typically defined as $`\sigma _{xy}\approx \rho _{xy}/\rho _{xx}^2`$, which assumes $`\rho _{xx}\gg \rho _{xy}`$. In Fig. 3, the field dependences of the Hall conductivities of Hg-1212 (Fig. 3(a)) and Hg-1223 (Fig. 3(b)) are shown for various temperatures. Based on the theoretical prediction of Kopnin et al., we analyze $`\sigma _{xy}(H)`$ by using
$$\sigma _{xy}(H)=\frac{C_1}{H}+C_2+C_3H,$$
(3)
which is plotted with solid lines in Fig. 3. Compared to YBCO, the component $`C_2`$ is added for a better fit. The data are well fitted in the region 115 K $`\le T\le `$ 125 K for Hg-1212 and 125 K $`\le T\le `$ 130 K for Hg-1223. In this figure, the curves that bend downward as they approach zero field show a sign reversal, but the upward-bending curves do not. If the curve bends downward, $`C_1`$ is negative; if it bends upward, $`C_1`$ is positive.
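For readers who want to reproduce this kind of analysis, a minimal least-squares fit of Eq. (3) could look like the following sketch. All arrays, seeds, and initial guesses here are hypothetical illustrations, not our measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hall_conductivity(H, C1, C2, C3):
    """Eq. (3): sigma_xy = C1/H + C2 + C3*H."""
    return C1 / H + C2 + C3 * H

# Hypothetical data: field in tesla, conductivity in arbitrary units.
rng = np.random.default_rng(0)
H_data = np.linspace(0.5, 18.0, 40)
sigma_data = hall_conductivity(H_data, -2.0, 0.8, 0.05) + 0.02 * rng.normal(size=40)

# Least-squares fit; p0 is a rough initial guess for (C1, C2, C3).
popt, pcov = curve_fit(hall_conductivity, H_data, sigma_data, p0=(-1.0, 1.0, 0.1))
print("C1, C2, C3 =", popt, "+/-", np.sqrt(np.diag(pcov)))

# Sign reversals of sigma_xy occur at the positive real roots of
# C3*H**2 + C2*H + C1 = 0.
roots = np.roots([popt[2], popt[1], popt[0]])
print("sign-reversal fields:", roots[np.isreal(roots) & (roots.real > 0)].real)
```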
The temperature dependences of $`C_1`$ and $`C_2`$ for Hg-1212 and Hg-1223 are shown in Fig. 4. Experiment shows that $`C_1`$ scales with temperature near $`T_c`$ as
$$C_1\propto (1-t)^n,$$
(4)
where $`n`$ is $`2.3\pm 0.2`$ for Hg-1212 and $`1.8\pm 0.3`$ for Hg-1223, as shown in Table I. The scaling form of $`C_1`$ can be partially understood from the temperature dependences of the gap and of the coupling constant in Eq. (2), based on the theoretical prediction by Kopnin. However, this has not yet been proven rigorously. Scaling of $`C_1`$ has also been reported for YBCO and LSCO, with $`n`$ values of 2 for YBCO and 2–3 for LSCO. These are not significantly different from those for Hg-1212 and Hg-1223. Compared to several other critical exponents, such as those of the magnetization scaling and the irreversibility lines, $`n`$ does not depend critically on the anisotropy.
As shown in Fig. 4(b), $`C_2`$ steeply increases with decreasing temperature. Therefore, we can extract the following scaling form for $`C_2`$ near $`T_c`$ :
$$C_2\propto (1-t)^{n^{\prime }},$$
(5)
where $`n^{\prime }`$ is $`3.2\pm 0.1`$ for Hg-1212 and $`2.0\pm 0.2`$ for Hg-1223. Unlike the case of YBCO, in which $`C_2=0`$, we find that $`C_2`$ is not negligible for Hg-based superconductors. $`C_2`$ seems to be associated with the anisotropy ratio of the material because the $`C_2`$ part of the conductivity becomes more explicit for highly anisotropic superconductors, such as LSCO and Tl-2212. Specifically, a tendency similar to that shown in Fig. 4(b) for $`C_2`$ was observed in data previously reported for LSCO, but the temperature-scaling behavior was not determined. Not much information has been reported for Tl-2212; however, its Hall conductivity is well fitted by $`\sigma _{xy}=C_1/H+C_2`$. As explained before, the origins of the $`1/H`$ and $`H`$ dependences can be explained by the TDGL theory or the microscopic theory, but neither of them can explain the scaling behavior of $`C_2`$.
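As an aside, the critical exponents of Eqs. (4) and (5) can be extracted from fitted coefficients by a straight-line fit in log-log space; a minimal sketch with hypothetical numbers (the same recipe applies to $`C_1`$ and $`n`$):

```python
import numpy as np

# Hypothetical fitted coefficients C2(t) at reduced temperatures t = T/Tc.
t = np.array([0.91, 0.93, 0.95, 0.97])
C2 = np.array([0.60, 0.38, 0.20, 0.07])        # arbitrary units

# Eq. (5): C2 ~ (1 - t)**n'  =>  ln C2 = n' * ln(1 - t) + const.
n_prime, _ = np.polyfit(np.log(1.0 - t), np.log(C2), deg=1)
print("critical exponent n' =", n_prime)
```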
The coefficient $`C_3`$ of the term linear in $`H`$ shows a weak temperature dependence in both Hg-1212 and Hg-1223. This is different from the cases of underdoped and slightly overdoped LSCO, in which $`C_3`$ decreases as the temperature decreases. On the other hand, $`C_3`$ in YBCO decreases linearly with temperature.
In order to investigate the effect of anisotropy on the coefficients $`C_1`$, $`C_2`$, and $`C_3`$ and on the powers $`n`$ and $`n^{\prime }`$, we summarize our results along with previous results for HTS in Table I. As the anisotropy increases, the absolute values of $`C_1`$ and $`C_3`$ decrease while $`C_2`$ increases. These values are evaluated at $`t\approx 0.92`$. According to this tendency, $`C_2`$ is very small for the case of low anisotropy, whereas $`C_3`$ is very small for the highly anisotropic case. As a result, in Eq. (3), the second term is negligible for YBCO, and the third term is negligible for Tl-2212. In the case of Hg-based superconductors, however, since the anisotropy ratio lies between that of YBCO and that of Tl-2212, all terms in Eq. (3) are required, just as in the case of LSCO. The Hall conductivities measured up to very high magnetic fields (0 T $`\le H\le `$ 18 T) for Hg-based superconductors are still well described by Eq. (3), but the temperature dependences of these coefficients have not yet been explained theoretically.
## V SUMMARY
We investigated the Hall effects for Hg-1212 and Hg-1223 thin films as functions of the magnetic field up to 18 T. The Hall conductivity in the mixed state is expressed well by $`\sigma _{xy}(H)=C_1/H+C_2+C_3H`$. The coefficient $`C_1`$ scales with temperature as $`(1-t)^n`$ with $`n\approx 2.3`$ and 1.8 for Hg-1212 and Hg-1223, respectively; these values of $`n`$ are comparable to the values observed for YBCO and LSCO. We find that $`C_2`$ is more important for highly anisotropic compounds. $`C_2`$ is observed to follow the same scaling form, but with exponents $`n^{\prime }\approx 3.2`$ and 2.0 for Hg-1212 and Hg-1223, respectively. These scaling behaviors of $`C_1`$ and $`C_2`$ have not yet been explained theoretically.
###### Acknowledgements.
This work is supported by the Ministry of Science and Technology of Korea through the Creative Research Initiative Program.
# Abundance Ratios in Early-Type Galaxies
## 1. Introduction
The distribution of element abundances in galaxies is an important fossil record of their formation and evolution. Of primary importance is the metallicity (the mass fraction of all elements heavier than He), which contains important information about the past star formation history, which could well be strongly influenced by mergers and interactions. For early-type galaxies, the metallicity is an excellent estimator of the luminosity, or even the mass. Also very important are metallicity gradients, which are among the few parameters that can tell us something about the orbital structure in galaxies.
In recent years the quality of observational data has become so good that one can also start thinking of measuring abundances of individual elements in external galaxies. Individual abundances will greatly help to understand their chemical evolution. In particular, one will be able to understand better the way in which the ISM of galaxies is enriched by metals, and what the relevant time-scales are. In the last two decades it has become clear that the abundance distribution in stars is not always the same as in the Sun. This was discovered first in individual stars in our galaxy (see Wheeler, Sneden & Truran 1989) and later in integrated spectra of elliptical galaxies (Peletier 1989, Worthey, Faber & González 1992). In this review I will discuss what we currently know about abundance ratios in galaxies, and what we can learn from this about the formation and evolution of galaxies. This paper is in part based on the excellent paper by Worthey (1998), but also includes some new high-quality data, which might shed new light on some issues in this rapidly evolving field.
The paper starts in Section 2 discussing some global relations for galaxies as a function of luminosity or velocity dispersion. In Section 3 the central abundances and abundance ratios are discussed, and in Section 4 their gradients. In Section 5 it is briefly discussed how we can understand non-solar abundance ratios. The need for stellar models with non-solar abundance ratios is mentioned in Section 6, after which some conclusions are given.
## 2. Global Relations
It has been known for some time that the stellar populations of early-type galaxies are strongly linked to their other properties. Two examples are the colour-magnitude relation (see e.g. Sandage & Visvanathan 1978) and the relation between Mg<sub>2</sub> and velocity dispersion ($`\sigma `$) (Terlevich et al. 1981). Both relations can be understood well if the average metallicity of a galaxy is larger when the galaxy is brighter. The usefulness of these relations in understanding galaxies strongly increased when Schweizer & Seitzer (1992) showed that the residuals of the colour/Mg<sub>2</sub>–$`\sigma `$ relation correlate with a parameter indicating recent mergers. This correlation implies that the scatter for undisturbed early-type galaxies is smaller than the amount due to observational uncertainties, while colours of disturbed galaxies are somewhat bluer for a given $`\sigma `$, due to increased star formation during the merger. This interpretation was confirmed by the work of Bower, Lucey & Ellis (1992), who found a very low scatter in the colour-magnitude relation (CMR) in the Coma cluster, showing no sign of any recent star formation in the early-type galaxies in this cluster. Later, using the more sensitive H$`\gamma `$ absorption line index, Caldwell et al. (1993) showed that many early-type galaxies in the SW part of the cluster show signs of small amounts of young stars.
Recently, A. Terlevich (1998) redid the study of Bower et al. (1992) with a significantly larger sample in the Coma cluster. He finds that the intrinsic scatter in the elliptical galaxies in $`U-V`$ is 0.036 mag (consistent with Bower et al.). No difference was found in the slope of the CMR between ellipticals and S0 galaxies. The slope of the CMR was also the same in different areas of the cluster. Most galaxies blue-ward of the CMR are either late-type galaxies (Andreon et al. 1996) or seem to have late-type morphologies. In the outer parts of the cluster the residuals about the CMR become somewhat bluer, in agreement with the results of Caldwell et al. (1993). These detailed colour studies indicate that the colour of an early-type galaxy is determined by its luminosity to a high accuracy. The same can be said of the Mg<sub>2</sub> absorption line (Bender, Burstein & Faber 1993, Schweizer & Seitzer 1992). Although a colour of a galaxy, and also an absorption line, depends on many parameters, like metallicity, age and Initial Mass Function (IMF) slope, the fact that the CMR has the same slope across various factors of 10 in luminosity implies that the relations described above are almost certainly driven by metallicity: fainter galaxies have lower metallicities, and because of that bluer colours. This behaviour has been successfully reproduced in Galactic enrichment models (e.g. Arimoto & Yoshii 1987). Starting with a proto-galactic cloud, several generations of stars are formed, until the rate of Supernovae is so large that all the gas is expelled from the galaxy, and the star formation stops. Being able to model the CMR has been very important for our understanding of galaxy formation. We are now however in a position that we can go one step further, and ask ourselves how the abundances of individual elements vary as a function of $`\sigma `$ or luminosity.
We know that the strength of the Mg<sub>2</sub> feature depends strongly on the Mg abundance (Worthey 1998), so the fact that there is a good correlation between Mg<sub>2</sub> and $`\sigma `$ or luminosity tells us that the Mg-abundance increases with that parameter. Colours don’t contain much information about individual element-abundances, except that blue colours, through line blanketing, depend very much on the abundance of Fe-peak elements, which would imply that the Fe-peak abundances are a strong function of $`\sigma `$ as well. Do we know whether the abundances of other elements also correlate strongly with $`\sigma `$?
High-quality measurements of elements other than Mg are scarce, because of the high signal-to-noise required, and the difficulty associated with calibrating indices like ⟨Fe⟩ and H$`\beta `$ onto the Lick system (Faber et al. 1985, Gorgas et al. 1993, Worthey et al. 1994). Recently Fisher, Franx & Illingworth (1996) and Jørgensen (1997) published ⟨Fe⟩ and $`\sigma `$ data for a reasonably large number of galaxies. Although in both cases the scatter in the ⟨Fe⟩–$`\sigma `$ relation is large, it is smaller in the data of Jørgensen (1997). In Fig. 1a we show her Figure 2, displaying Mg<sub>2</sub>, ⟨Fe⟩ and H$`\beta `$ as a function of $`\sigma `$. If, as she claims, the scatter is larger than the instrumental scatter, it would mean that ⟨Fe⟩ cannot be a simple function of metallicity, like Mg<sub>2</sub>, but that there has to be a second parameter, most likely the Mg/Fe abundance ratio. This parameter cannot be the age, because the galaxies in Jørgensen (1997) are early-type galaxies in clusters, which fall well onto their CMR. Her interpretation however might not be correct. Recently, Kuntschner (1998) published some high-quality data of the Fornax cluster (see also Kuntschner & Davies 1998). As a comparison we have plotted them in Fig. 1b. He showed that for $`\mathrm{log}\sigma _0`$ ≳ 1.9 the scatter in Fe3′, an index very similar to ⟨Fe⟩, and in H$`\beta `$ is smaller than or comparable to the scatter in Mg<sub>2</sub>. This is a very important result, implying that in Fornax ⟨Fe⟩, just like Mg<sub>2</sub>, depends only on $`\sigma `$, not on a second parameter. Since Kuntschner’s signal-to-noise is much higher than Jørgensen’s, it looks as if the observational scatter in Jørgensen (1997) has been underestimated. If this is not the case, it would mean that in some galaxy clusters the stellar populations are affected by a second parameter, while in others (Fornax) this would not be the case.
## 3. Abundance Ratios in the Centres of Galaxies
### 3.1. Mg and Fe
The largest dataset of nuclear line strengths in ellipticals and bulges has been the Lick sample, which was finally published by Trager et al. in 1998. It consists of measurements of almost 400 galaxies. The observations were performed with the Lick IDS, the same instrument with which the stars defining the Lick system have been observed. Worthey (1998) shows ⟨Fe⟩ vs. Mg<sub>2</sub> for a subset of this sample in his Figure 1. In this Figure no difference can be seen between bulges of S0s and ellipticals. Generally, objects with central velocity dispersion below about 200 km/s seem to have solar Mg/Fe ratios, while the others are over-abundant in Mg. Since the Lick IDS detector suffered from several instrumental problems, the errors in the individual measurements are larger than one would like. For that reason I have made a different compilation, with data from more recent papers with high-quality line strength measurements, and shown that in Fig. 2. It includes a sample of spiral bulges (Jablonka, Martin & Arimoto 1996) subdivided into a group of early-type bulges (type 0-2) and later-type objects (type 3-5), a sample of bulges of S0 galaxies (Fisher et al. 1996), a mixed sample of ellipticals and S0 galaxies (Kuntschner 1998), a sample of mainly faint elliptical galaxies (Halliday 1998) and two samples of bright ellipticals. The models plotted here are from Vazdekis et al. (1996). The conclusion from Fig. 2 is again that bulges and ellipticals are indistinguishable. The models of Vazdekis et al. (1996) are compatible with those of Worthey (1994), and also for these models galaxies start deviating from their locus at Mg<sub>2</sub> $`>`$ 0.25. Bulges of Sa-Sc spirals, which were not included in the Lick sample, seem to have solar Mg/Fe ratios. Although there might be some small systematic offsets between the individual samples, they generally agree well with each other. There might be a few objects which do not follow the general trend. NGC 4458 and NGC 4464 (large filled dots, from Halliday 1998) have a very low Mg/Fe ratio, compared to other galaxies with the same Mg<sub>2</sub>. Both galaxies are faint ellipticals; NGC 4458 rotates very slowly, and has a small kinematically decoupled core (Simien & Prugniel 1998), while NGC 4464 has a v/$`\sigma `$ of about 0.5 for an ellipticity of 0.3, so $`(v/\sigma )^{*}`$ is close to 1 (Davies et al. 1983), so that it is probably an oblate, rotating elliptical.
The situation for our own Galactic Bulge is in agreement with external galaxies. It is estimated from high-resolution spectra that the mean \[Fe/H\] in Baade’s window is somewhat lower than solar (–0.25, McWilliam & Rich 1994), but that \[Mg/Fe\] there is 0.3–0.4. For such a Mg-abundance, one would expect the Mg<sub>2</sub> index to be about 0.29, assuming an age of 17 Gyr, if one uses the Vazdekis et al. (1996) models, or less if the bulge were younger. Assuming that the bulge is indeed old, it will lie in Fig. 2 in the region where galaxies are over-abundant in Mg w.r.t. Fe.
### 3.2. Other elements
Worthey (1998) extensively summarises our knowledge about the relative abundance of metals other than Mg in giant ellipticals and in the Galactic Bulge. Although it appears that the situation concerning the elements Sc, V and Ti is very complicated and confusing, there are about half a dozen other elements for which we know something about their behaviour in giant elliptical galaxies. In Table 1 I have schematically summarised our knowledge about them. All the information has been obtained from various Lick indices, by comparing measured line strengths with the values that one expects based on stellar population models. As Worthey (1998) mentions, there are several differences in element abundances between giant ellipticals and our Bulge, which implies that there must have been differences in their formation processes. Especially in the case of N this is striking: while in the Bulge the abundance of CN is generally lower than solar, in giant ellipticals this is the opposite. It seems that \[C/Fe\] $`\approx `$ 0 in ellipticals (from the C<sub>2</sub>4668 line) and in our Bulge (Worthey 1998), so that it is thought that N is depleted in our Galaxy and overabundant in giant ellipticals. It is also peculiar that \[O/Fe\] $`\approx `$ 0 (McWilliam & Rich 1994), since one would expect that O, an $`\alpha `$-element, would follow Mg. McWilliam & Rich (1994) however warn us that the only stars for which they could measure O are at the tip of the RGB, where their abundances might not be representative of the Bulge because of metal-enrichment in the star itself. More measurements of O-abundances would be very welcome.
Here I would like to revisit our ideas about the \[Ca/Fe\] abundance ratio in giant ellipticals. The Lick system has two indices which can be used to measure the Ca abundances in galaxies: Ca 4227 and possibly Ca 4455. Both are faint, narrow features that are difficult to measure in giant ellipticals, because of the large correction for velocity broadening that one has to apply to measure them. Vazdekis et al. (1997), using high signal-to-noise spectra of three giant early-type galaxies, found that the measured Ca 4227 in all of them was much lower than expected from their own models and also from Worthey’s (1994) stellar population model. Their Fig. 13 nicely illustrates how enormous the velocity dispersion correction is here. An independent confirmation of this result comes from observations of the NIR Ca II triplet (CaT) of the same 3 galaxies. We observed them using 2d-FIS, an Integral Field Spectrograph on the 4.2m WHT at La Palma, using a fibre bundle to feed the light from the Cassegrain focus to the slit of the ISIS double spectrograph (Peletier et al. 1999). Details about the instrument are given in Arribas, Mediavilla, & Rasilla (1991). In Fig. 3 our central measurements are plotted. Using the system of Díaz, Terlevich & Terlevich (1989) to define the band-passes, it was found that the CaT equivalent width is weaker than predicted by the models of Vazdekis et al. (1996) with solar Ca/Fe ratios (see also Fig. 3). Although those models were based on the stellar library of Díaz et al. (1989), which doesn’t fully cover the range of metallicities of giant ellipticals, the main conclusions will not change. Also, the same result is obtained if the models of García-Vargas, Mollá & Bressan (1998) are used. Our observations of the CaT however are in good agreement with Terlevich, Díaz & Terlevich (1990).
How to explain this apparent under-abundance of Ca? It is possible that the effect is entirely caused by a problem in calculating the models. Idiart, Thevenin & de Freitas Pacheco (1997) and also Borges et al. (1995) point out that most models calculate integrated spectra by summing linear combinations of observed spectra of standard stars, assuming that \[Ca/Fe\] = 0 for those standard stars. Idiart et al. however obtained a separate library of standard stars, determined Ca, Mg and Fe abundances for each star, and calculated integrated indices using those individual abundances. Applying their models (Fig. 3) they find that \[Ca/Fe\] in the three galaxies is solar. This is however still peculiar, since Ca is an $`\alpha `$-element, whose properties should closely follow those of Mg. Since the library of Idiart et al. (1997) is also rather limited, it is very important to obtain a stellar library of the size of the Lick library in the region of the CaT, to be able to interpret this important line index, which can also be used to constrain the IMF in galaxies. Together with the group at the Universidad Complutense in Madrid we are currently working on providing such a library (Cenarro et al. 1999, in preparation).
## 4. Line Strength Gradients in Galaxies
Line strength gradients have been presented by various authors (e.g. González 1993, Davies, Sadler & Peletier 1993, Carollo, Danziger & Buson 1993, Fisher, Franx & Illingworth 1995, 1996, Vazdekis et al. 1997), but only for the Mg<sub>2</sub> and Mg<sub>b</sub> indices are reasonably high-quality measurements available in the literature for a sufficiently large sample, and possibly also for ⟨Fe⟩ and H$`\beta `$. Gradients in elliptical galaxies in Fe and Mg indices generally follow tracks with constant \[Mg/Fe\], which means that for many galaxies the gradients are steeper than the line linking galaxy nuclei (e.g. Worthey et al. 1992, Davies et al. 1993). New, excellent-quality data by Halliday (1998, Figure 4) confirm this also for low-luminosity ellipticals. Current data seem to imply that Mg/Fe within all galaxies is constant. The behaviour of Mg vs. Fe seems to be the same in bulges and disks. Fisher et al. (1996) find that in S0 galaxies the radial gradients in Mg<sub>2</sub> and ⟨Fe⟩ are smaller along the major axis than along the minor axis, implying that the gradients in the disk are smaller than those in the bulge. However, in the Mg<sub>2</sub> vs. ⟨Fe⟩ diagram the galaxies have the same slope, on both the major and minor axes.
Information about abundance gradients in other elements is limited. Vazdekis et al. (1997) present gradients in about 20 lines for the three above mentioned galaxies. They find that the radial gradients can be explained well by gradients in the overall metallicity. Much more work however has to be done to investigate gradients in, e.g., bulges and smaller elliptical galaxies.
Recently it has become possible to make reliable line strength maps in two dimensions, using Integral Field Spectrography. This offers an exciting range of new possibilities. For example, in galaxies with kinematically decoupled cores it can be investigated whether for example Mg/Fe in the decoupled core is different from the ratio in the rest of the galaxy, giving clues to the origin of the decoupled core. In Peletier et al. (1999) one galaxy with a decoupled core is included, the Sombrero galaxy, which has a rapidly rotating inner disk (e.g. Wagner, Bender & Dettmar 1989). In Fig. 5 we show the Mg<sub>2</sub> and ⟨Fe⟩ maps in an inner field of 8.2<sup>′′</sup> $`\times `$ 11<sup>′′</sup>. The continuum intensity is shown as well. The figure shows that Mg is enhanced in the inner disk, compared to the bulge, and that Fe is almost not enhanced (see also Emsellem et al. 1996). The noise in the ⟨Fe⟩ image is however still so large that it cannot be established whether the Mg/Fe ratio itself in the inner disk has gone up. This galaxy shows, together with other cases in the literature (e.g. NGC 4365, Surma & Bender 1995, NGC 7626, Davies et al. 1993) that the central Mg abundance in galaxies with kinematically decoupled cores increases sharply, but up to now Mg/Fe seems to remain constant.
The inner disk also shows an increased H$`\gamma `$ line strength, while the CaT has a similar behaviour as $``$Fe$``$ (Peletier et al. 1999). Although these data are still rather noisy for this purpose, it is expected that much better data will become available soon from the new, wide-field (33<sup>′′</sup> $`\times `$ 41<sup>′′</sup>) integral field spectrograph SAURON (PIs R. Bacon, P.T. de Zeeuw and R.L. Davies) on the WHT.
## 5. What determines the Abundance Ratios in Galaxies?
From the previous sections we can draw the following conclusions:
* Small galaxies (Mg<sub>2,c</sub> $`<`$ 0.22, $`\sigma _c`$ $`<`$ 150 km/s) have solar Mg/Fe ratios, large galaxies (Mg<sub>2,c</sub> $`>`$ 0.28, $`\sigma _c`$ $`>`$ 225 km/s) are overabundant in Mg.
* There is no difference in the Mg/Fe ratio between ellipticals and bulges of the same velocity dispersion (or Mg<sub>2,c</sub> value).
* Within a galaxy Mg/Fe appears to remain constant.
It is thought that the reason the Mg/Fe ratio varies from galaxy to galaxy is that the ratio of the number of supernovae of Type II vs. Type Ia can vary. SNe Type II occur in massive stars, and produce large amounts of light elements. SNe Type Ia come from binary accretion onto white dwarfs, and produce relatively more Fe-peak elements (see e.g. Pagel 1998). Since the lifetime of the progenitors of SNe Type II is very short, there is a period of a few times 10<sup>8</sup> years from initial star formation in which enrichment through SNe Type II dominates (Worthey 1998). The most popular scenarios to vary this ratio of the two SNe types as a function of galaxy size (or velocity dispersion) are
* the formation time-scale, that should be shorter in large galaxies
* a variation in the IMF, such that large galaxies have a relatively larger fraction of massive stars.
Although it is difficult to reject the first scenario, there are various reasons why I would prefer the second. If the Mg/Fe ratio were determined by the formation time-scale, then bulges and ellipticals would have had to form the majority of their stars on the same time-scale. Clearly this would predict that disks, in which the star formation is slow, would have lower Mg/Fe ratios. Although no good measurements of disks are available at present, the measurements by Fisher et al. (1996), which show no difference in Mg/Fe between the major and minor axes in S0 galaxies, do not favour this scenario. Central disks (e.g. in the Sombrero) have high, rather than low Mg/Fe ratios. Secondly, it would be very difficult to form the brightest galaxies (with high Mg/Fe ratios) in a hierarchical way, on time-scales of Gyrs, since SNe Type Ia would lower the Mg/Fe of the gas, and make it very difficult to reach large Mg/Fe ratios. For example, the central disk of the Sombrero should have formed very fast from gas that was not very metal rich before the last merger event.
The second option seems more favourable, although it also has its difficulties. If Mg/Fe were a function of just one variable, e.g., velocity dispersion (or escape velocity, as suggested by Franx & Illingworth 1990), then this scenario would not be able to explain the line strength gradients in galaxies, since it seems that for the same Mg abundance Mg/Fe in the outer parts of bright galaxies is generally lower than Mg/Fe in the centres of faint galaxies (gradients are steeper than the line connecting nuclei). To also be able to explain the gradients we will modify this scenario: the IMF, which mainly determines the enrichment of the elements, has to be dependent only on the mass of the galaxy as a whole. How realistic is this scenario? Although Elmegreen (these proceedings) claims that the IMF is generally universal, there are indications in some starburst galaxies that the IMF there is skewed towards high-mass stars (Kennicutt 1998, p. 71). At present we can neither confirm nor rule out this second scenario. It is however capable of explaining the current observations rather easily, much better than the first scenario.
The hot gas fraction, both in clusters of galaxies and in individual objects, might further constrain chemical enrichment models of galaxies. The measurements in the ISM of elliptical galaxies however indicate very low Fe-abundances (0.1–0.4 times solar, Arimoto et al. 1997), which seems inconsistent with the measurements from stellar spectra. For that reason we have to wait until the abundances from X-ray emission lines are better understood (see also Barnes, these proceedings).
## 6. The Need for Better Stellar Models
Since abundance ratios in galaxies can only be obtained through detailed comparisons with stellar models, it is crucial that the models are up to date. At present, there are several stellar population models that predict integrated line strength indices on the Lick system. The models do not differ too much from each other (Worthey 1994, Bruzual & Charlot (see Leitherer et al. 1996), Vazdekis et al. 1996, Tantalo et al. 1996, Borges et al. 1995). Most of them use the stellar tracks of the Padova group (Bressan, Chiosi & Fagotto 1994). None of them however takes into account the fact that if the abundance ratios in the stars are non-solar, the stellar parameters, like effective temperature and gravity, might be different. There have been some papers studying how stellar isochrones change as a function of the Mg/Fe ratio for solar or larger metallicities (Weiss, Peletier & Matteucci 1995, Barbuy 1994). The conclusion of Weiss et al. is that the isochrones basically do not change, if the total metallicity (i.e. the mass fraction of elements heavier than He) remains constant. This is consistent with Barbuy (1994). If this result is reliable, isochrones with non-solar abundance ratios are not necessary, as long as the abundance ratios are taken into account in the fitting functions, which are used to calculate the line indices. This result is in agreement with Salaris, Chieffi & Straniero (1993) for lower metallicities. However, in a new paper Salaris & Weiss (1998), using new opacities, find that for Z=0.01 (their largest metallicity) $`\alpha `$-enhancement, even while keeping the total metallicity constant, does change properties like the Main Sequence Turnoff and the RGB colour significantly. It is important to find out why this result is in contradiction with previous work.
The results of Idiart et al. (1997), who showed that very different Ca-abundances are obtained for models of integrated stellar populations if the Ca-abundance of individual stars is taken into account in the fitting functions, indicate that accurate fitting functions are crucial. It is not expected that the models for ⟨Fe⟩ and Mg<sub>2</sub> will change much, since in Weiss et al. (1995) it is shown that if one takes solar neighbourhood stars to calculate the fitting functions for Mg<sub>2</sub> and ⟨Fe⟩, the integrated indices are very similar to the ones that one obtains when determining the fitting functions from Galactic Bulge stars. It is clear however, that the next generation of stellar population models will have to include fitting functions determined from as many stars as possible, using abundance ratios calculated for each star individually. Using 8m telescopes, it should be possible to obtain spectra of stars covering the parameter space of temperature, gravity, metallicity and some abundance ratios that would be required to do this.
## 7. Summary
I discuss the evidence for abundance ratios in galaxies, supplementing the recent review by Worthey (1998). My main conclusions are:
* The scatter for early-type galaxies in the relations of ⟨Fe⟩ and H$`\beta `$ vs. velocity dispersion is small, comparable to the scatter in the Mg<sub>2</sub> and colour vs. $`\sigma `$ relations.
* Small galaxies (Mg<sub>2,c</sub> $`<`$ 0.22, $`\sigma _c`$ $`<`$ 150 km/s) have solar Mg/Fe ratios, large galaxies (Mg<sub>2,c</sub> $`>`$ 0.28, $`\sigma _c`$ $`>`$ 225 km/s) are overabundant in Mg. There is no difference in the Mg/Fe ratio between ellipticals and bulges of the same velocity dispersion (or Mg<sub>2,c</sub> value). Within a galaxy Mg/Fe appears to remain constant.
* Very little is known about the behaviour of other elements (see Worthey 1998), although \[Ca/Fe\] in giant ellipticals appears to be solar, contrary to what one would expect for an $`\alpha `$-element.
* Improved stellar population models, using stellar evolutionary tracks with non-solar abundance ratios and fitting functions using abundance ratios determined for each standard star separately, would be very welcome to calculate accurate abundance ratios.
#### Acknowledgments.
I would like to thank Claire Halliday, Alejandro Terlevich and Harald Kuntschner for communicating results in advance of publication, and John Beckman for organising an interesting meeting.
## References
Andreon, S., Davoust, E., Michard, R., Nieto, J.L. & Poulain, P., 1996, A&AS, 116, 429
Arimoto, N., & Yoshii, Y. 1987, A&A, 173, 23
Arimoto, N., Matsushita, K., Ishimaru, Y., Ohashi, T. & Renzini, A., 1997, ApJ, 477, 128
Arribas, S., Mediavilla, E. & Rasilla, J.L., 1991, ApJ, 369, 260
Barbuy, B., 1994, ApJ, 430, 218
Bender, R., Burstein, D., & Faber, S. M. 1993 ApJ, 411, 153
Bower, R.G., Lucey, J.R. & Ellis, R.S., 1992, MNRAS, 254, 601
Borges, A.C. Idiart, T.P., de Freitas Pacheco, J.A. & Thevenin, F., 1995, AJ, 110, 2408
Bressan, A., Chiosi, C & Fagotto, F., 1994, ApJS, 94, 63
Caldwell, N., Rose, J.A., Sharples, R.M., Ellis, R.S. & Bower, R.G., 1993, AJ, 106, 473
Carollo, C.M., Danziger, I.J. & Buson, L., 1993, MNRAS, 265, 553
Davies, R.L., Efstathiou, G., Fall, S.M., Illingworth, G. & Schecter, P.L., 1983, ApJ, 266, 41
Davies, R.L., Sadler, E.M. & Peletier, R.F., 1993, MNRAS, 262, 650
Díaz, A.I., Terlevich, E. & Terlevich, R., 1989, MNRAS, 239, 325
Emsellem, E., Bacon, R., Monnet, G. & Poulain, P., 1996, A&A, 312, 777
Faber, S.M., Friel, E.D., Burstein, D. & Gaskell, C.M., 1985, ApJS, 57, 711
Fisher, D., Franx, M., & Illingworth, G. D. 1995, ApJ, 438, 539
Fisher, D., Franx, M., & Illingworth, G. D. 1996, ApJ, 459, 110
Franx, M. & Illingworth, G.D., 1990, ApJ, 359, L41
García-Vargas, M.L., Mollá, M. & Bressan, A., 1998, A&A, 130, 513
González, J.J., 1993, Ph.D. Thesis, University of California, Santa Cruz
Gorgas, J., Faber, S.M., Burstein, D., González, J.J, Courteau, S. & Prosser, C., 1993, ApJS, 86, 153
Halliday, C., 1998, Ph.D. Thesis, University of Durham
Idiart, T.P., Thevenin, F., de Freitas Pacheco, J.A., 1997, AJ, 113, 1066
Jablonka, P., Martin, P. & Arimoto, N, 1996, AJ, 112, 1415
Jørgensen, I., 1997, MNRAS, 288, 161
Kennicutt, R., 1998, in Galaxies: Interactions and Induced Star Formation, eds. D. Friedli, L. Martinet & D. Pfenniger (Springer, Berlin), p. 1
Kuntschner, H., 1998, Ph.D. Thesis, University of Durham
Kuntschner, H. & Davies, R.L., 1998, MNRAS, 295, L29
Leitherer, C., et al., 1996, PASP, 108, 996
McWilliam, A., & Rich, R.M., 1994, ApJS, 91, 749
Pagel, B.E.J., 1998, Nucleosynthesis and Chemical Evolution of Galaxies, Cambridge Univ. Press
Peletier, R.F., 1989, Ph.D. Thesis, University of Groningen
Peletier, R.F., Vazdekis, A., Arribas, S., del Burgo, C., García-Lorenzo, B., Gutiérrez, C., Mediavilla, E. & Prada, F., 1999, submitted to MNRAS
Salaris, M., Chieffi, A. & Straniero, O., 1993, ApJ, 414, 580
Salaris, M. & Weiss, A., 1998, A&A, 335, 943
Sandage, A.R. & Visvanathan, N., 1978, ApJ, 225, 742
Schweizer, F. & Seitzer, P., 1992, AJ, 104, 1039
Simien, F. & Prugniel, Ph., 1998, A&AS, 131, 287
Surma, P. & Bender, R., 1995, A&A, 298, 405
Tantalo, R., Chiosi, C., Bressan, A. & Fagotto, F., 1996, A&A, 311, 361
Terlevich, A., 1998, Ph.D. Thesis, University of Durham
Terlevich, E., Díaz, A.I. & Terlevich, R., 1990, MNRAS, 242, 271
Terlevich, R., Davies, R.L., Faber, S.M. & Burstein, D., 1981, MNRAS, 196, 381
Trager, S.C., Worthey, G., Faber, S.M., Burstein, D. & González, J.J., 1998, ApJS, 116, 1
Vazdekis, A., Casuso, E., Peletier, R.F. & Beckman, J.E., 1996, ApJS, 106, 307
Vazdekis, A., Peletier, R.F., Beckman, J. & Casuso, E., 1997, ApJS, 111, 203
Wagner, S.J., Bender, R. & Dettmar, R.-J., 1989, A&A, 215, 243
Weiss, A., Peletier, R. F., & Matteucci, F. 1995, A&A, 296, 73
Wheeler, J.C., Sneden, C., & Truran, J.W. 1989, ARA&A, 27, 279
Worthey, G., 1994, ApJS, 95, 107
Worthey, G. 1998, PASP, 110, 888
Worthey, G., Faber, S. M., González, J. J., & Burstein, D. 1994, ApJS, 94, 687
Worthey, G., Faber, S. M., González, J. J. 1992, ApJ, 398, 69
# Inverted spectroscopy and interferometry for quantum-state reconstruction of systems with SU(2) symmetry
## I Introduction
The last few years were marked by an outburst of research devoted to the problem of reconstruction of quantum states for various physical systems (see, e.g., Ref. for an extensive list of the literature on the subject). The problem, as stated already in the fifties by Fano and Pauli, is to determine the density matrix $`\rho `$ from information obtained by a set of measurements performed on an ensemble of identically prepared systems. Significant theoretical and experimental progress has been achieved during the last decade in the reconstruction of quantum states of the light field. Also, numerous works were devoted to reconstruction methods for other physical systems. Most recently, a general theory of quantum-state reconstruction for physical systems with Lie-group symmetries was developed.
In the present work we consider state-reconstruction methods for some quantum systems possessing SU(2) symmetry. The principal procedure for the reconstruction of spin states was recently presented by Agarwal. A similar approach was also proposed by Dodonov and Man’ko, while the basic idea underlying this method goes back to the pioneering work by Royer. In brief, one applies a phase-space displacement \[specifically, a rotation in the SU(2) case\] to the initial quantum state and then measures the probability to find the displaced system in a specific state (the so-called “quantum ruler” state). Repeating this procedure with identically prepared systems for many phase-space points \[many rotation angles in the SU(2) case\], one determines a function on the phase space (the so-called operational phase-space probability distribution). In particular, by measuring the population of the ground state, one obtains the so-called $`Q`$ function. The information contained in the operational phase-space probability distribution is sufficient to completely reconstruct the unknown density matrix of the initial quantum state. A general group-theoretical description of this method and some examples, including SU(2), are presented in Ref.
The aim of the present paper is to study how the general state-reconstruction procedure outlined above can be implemented in practice for a number of specific physical systems with SU(2) symmetry. Three systems are considered: a collection of two-level atoms, a two-mode quantized radiation field with a fixed total number of photons, and a single laser-cooled ion in a two-dimensional harmonic trap with a fixed total number of vibrational quanta. We show that a simple rearrangement of conventional spectroscopic and interferometric schemes enables one to measure unknown quantum states of these systems.
## II Reconstruction of quantum states for systems with SU(2) symmetry
We start with some basic properties of SU(2) which is the dynamical symmetry group for the angular momentum or spin and for many other systems (e.g., a collection of two-level atoms, the Stokes operators describing the polarization of the quantized light field, two light modes with a fixed total photon number, etc.). The su(2) simple Lie algebra consists of the three operators $`\{J_x,J_y,J_z\}`$,
$$[J_p,J_r]=iϵ_{prt}J_t.$$
(1)
The Casimir operator is a constant times the unit operator, $`𝐉^2=j(j+1)I`$, for any unitary irreducible representation of the SU(2) group; so the representations are labeled by the single index $`j`$ that takes the values $`j=0,1/2,1,3/2,\mathrm{}`$. The representation Hilbert space $`ℋ_j`$ is spanned by the complete orthonormal basis $`\{|j,\mu ⟩\}`$ (where $`\mu =j,j-1,\mathrm{},-j`$):
$`𝐉^2|j,\mu ⟩=j(j+1)|j,\mu ⟩,J_z|j,\mu ⟩=\mu |j,\mu ⟩.`$
In the following we assume that the state $`|\psi ⟩`$ of the system belongs to $`ℋ_j`$ (or, for mixed states, that the density matrix $`\rho `$ is an operator on $`ℋ_j`$). Group elements can be parametrized using the Euler angles $`\alpha ,\beta ,\gamma `$:
$$g=g(\alpha ,\beta ,\gamma )=e^{i\alpha J_z}e^{i\beta J_y}e^{i\gamma J_z}.$$
(2)
We will employ two very useful concepts: the phase space (which is the group coset space of maximum symmetry) and the coherent states (each point of the phase space corresponds to a coherent state). For SU(2), the phase space is the unit sphere $`𝕊^2=\mathrm{SU}(2)/\mathrm{U}(1)`$, and each coherent state is characterized by a unit vector
$$𝐧=(\mathrm{sin}\theta \mathrm{cos}\varphi ,\mathrm{sin}\theta \mathrm{sin}\varphi ,\mathrm{cos}\theta ).$$
(3)
Specifically, the coherent states $`|j;𝐧⟩`$ are given by the action of the group element
$$g(𝐧)=e^{i\varphi J_z}e^{i\theta J_y}$$
(4)
on the highest-weight state $`|j,j⟩`$:
$$|j;𝐧⟩=g(𝐧)|j,j⟩=\sum _{\mu =-j}^{j}\left(\genfrac{}{}{0pt}{}{2j}{j+\mu }\right)^{1/2}\mathrm{cos}^{j+\mu }(\theta /2)\mathrm{sin}^{j-\mu }(\theta /2)e^{i\mu \varphi }|j,\mu ⟩.$$
(6)
An important property of the coherent states is the resolution of the identity:
$$\frac{2j+1}{4\pi }\int _{𝕊^2}𝑑𝐧|j;𝐧⟩⟨j;𝐧|=I,$$
(7)
where $`d𝐧=\mathrm{sin}\theta d\theta d\varphi `$.
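As an illustration of Eqs. (6) and (7), the following sketch builds the coherent-state amplitudes numerically and checks the resolution of the identity by brute-force quadrature over the sphere. The function names, grid sizes, and the choice $`j=1`$ are our own illustrative choices:

```python
import numpy as np
from math import comb

def coherent_state(j, theta, phi):
    """Amplitudes of |j;n>, Eq. (6), in the basis mu = j, j-1, ..., -j (integer j)."""
    mus = np.arange(j, -j - 1, -1)
    return np.array([np.sqrt(comb(2 * j, j + m))
                     * np.cos(theta / 2) ** (j + m)
                     * np.sin(theta / 2) ** (j - m)
                     * np.exp(1j * m * phi) for m in mus])

# Brute-force check of the resolution of the identity, Eq. (7), for j = 1.
j, N = 1, 200
thetas = np.linspace(0.0, np.pi, N)
phis = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dth, dph = np.pi / (N - 1), 2.0 * np.pi / N
acc = np.zeros((2 * j + 1, 2 * j + 1), dtype=complex)
for th in thetas:
    for ph in phis:
        v = coherent_state(j, th, ph)
        acc += np.outer(v, v.conj()) * np.sin(th) * dth * dph
print(np.round((2 * j + 1) / (4 * np.pi) * acc, 2))  # approximately the identity matrix
```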
A possible procedure for the quantum-state reconstruction is as follows. First, the system, whose initial state is described by the density matrix $`\rho `$, is displaced in the phase space:
$$\rho \to \rho (𝐧)=g^{-1}(𝐧)\rho g(𝐧),𝐧\in 𝕊^2.$$
(8)
Then one measures the probability to find the displaced system in one of the states $`|j,\mu ⟩`$ (e.g., in the highest state $`|j,j⟩`$). This probability
$$p_\mu (𝐧)=⟨j,\mu |\rho (𝐧)|j,\mu ⟩$$
(9)
(which is sometimes called the operational phase-space probability distribution) can be formally considered as the expectation value
$$p_\mu (𝐧)=\mathrm{Tr}[\rho \mathrm{\Gamma }_\mu (𝐧)]$$
(10)
of the so-called displaced projector
$$\mathrm{\Gamma }_\mu (𝐧)=g(𝐧)|j,\mu ⟩⟨j,\mu |g^{-1}(𝐧).$$
(11)
Repeating this procedure (with a large number of identically prepared systems) for a large number of phase-space points $`𝐧`$, one can determine the function $`p_\mu (𝐧)`$.
Knowledge of the function $`p_\mu (𝐧)`$ is sufficient for the reconstruction of the initial density matrix $`\rho `$. We can use the following expansion for the density matrix (such an expansion exists for any operator on $`ℋ_j`$):
$$\rho =\sum _{l=0}^{2j}\sum _{m=-l}^{l}ℛ_{lm}D_{lm},ℛ_{lm}=\mathrm{Tr}(\rho D_{lm}^{\dagger }).$$
(12)
Here, $`D_{lm}`$ are the so-called tensor operators (also known in the context of angular momentum as the Fano multipole operators),
$$D_{lm}=\sqrt{\frac{2l+1}{2j+1}}\sum _{k,q=-j}^{j}⟨j,k;l,m|j,q⟩|j,q⟩⟨j,k|,$$
(13)
where $`⟨j_1,m_1;j_2,m_2|j,m⟩`$ are the Clebsch-Gordan coefficients. Now, one can reconstruct the density matrix by using the relation
$$ℛ_{lm}=\frac{\sqrt{(2j+1)/4\pi }}{⟨j,\mu ;l,0|j,\mu ⟩}\int _{𝕊^2}𝑑𝐧p_\mu (𝐧)Y_{lm}^{*}(𝐧),$$
(14)
where $`Y_{lm}(𝐧)`$ are the spherical harmonics. Other ways to deduce the density matrix from the measured probabilities $`p_\mu (𝐧)`$ were also proposed .
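For concreteness, a discretized version of Eq. (14) might look like the sketch below. The grid layout, the helper name, and the use of sympy for the Clebsch-Gordan coefficients are our illustrative choices (integer $`j`$ and a choice of $`\mu `$ for which the Clebsch-Gordan coefficient does not vanish, e.g. $`\mu =j`$, are assumed):

```python
import numpy as np
from scipy.special import sph_harm
from sympy.physics.wigner import clebsch_gordan

def reconstruct_coefficients(p, thetas, phis, j, mu):
    """Discretized Eq. (14): estimate R_lm from p_mu(n) sampled on a regular grid.

    p[i, k] = p_mu(theta_i, phi_k); integer j is assumed for simplicity.
    """
    TH, PH = np.meshgrid(thetas, phis, indexing="ij")
    dth = thetas[1] - thetas[0]
    dph = phis[1] - phis[0]
    R = {}
    for l in range(2 * j + 1):
        cg = float(clebsch_gordan(j, l, j, mu, 0, mu))  # <j,mu;l,0|j,mu>
        for m in range(-l, l + 1):
            # scipy's sph_harm signature is (m, l, azimuth, polar)
            Ylm = sph_harm(m, l, PH, TH)
            integral = np.sum(p * np.conj(Ylm) * np.sin(TH)) * dth * dph
            R[(l, m)] = np.sqrt((2 * j + 1) / (4 * np.pi)) / cg * integral
    return R
```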
Let us also consider the useful concept of phase-space quasiprobability distributions (QPDs). In the SU(2) case, one can introduce an $`s`$-parametrized family of the QPDs
$$P(𝐧;s)=\sum _{l=0}^{2j}\sum _{m=-l}^{l}\frac{\sqrt{4\pi /(2j+1)}}{⟨j,j;l,0|j,j⟩^s}ℛ_{lm}Y_{lm}(𝐧).$$
(15)
For $`s=0`$, we have the SU(2) equivalent of the Wigner function,
$$W(𝐧)=\sqrt{\frac{4\pi }{2j+1}}\sum _{l=0}^{2j}\sum _{m=-l}^{l}ℛ_{lm}Y_{lm}(𝐧).$$
(16)
For $`s=1`$, we obtain the SU(2) equivalent of the Glauber-Sudarshan function (also known as Berezin’s contravariant symbol), $`P(𝐧)`$, whose defining property is
$$\rho =\frac{2j+1}{4\pi }\int _{𝕊^2}𝑑𝐧P(𝐧)|j;𝐧⟩⟨j;𝐧|.$$
(17)
The function which is probably the most important for the reconstruction problem is the SU(2) equivalent of the Husimi function (also known as Berezin’s covariant symbol),
$$Q(𝐧)=⟨j;𝐧|\rho |j;𝐧⟩,$$
(18)
obtained for $`s=-1`$. As is seen from Eq. (9), the function $`Q(𝐧)`$ gives the probability to find the displaced system in the highest spin state $`|j,j⟩`$,
$$Q(𝐧)=p_j(𝐧).$$
(19)
Also, one can see that the probability $`p_{-j}(\theta ,\varphi )`$ to find the displaced system in the lowest spin state $`|j,-j⟩`$ is equal to $`Q(\theta +\pi ,\varphi )`$. More generally, any one of the QPDs can be reconstructed using the relation
$`P(𝐧;s)=\frac{2j+1}{4\pi }\int _{𝕊^2}𝑑𝐧^{\prime }K_{\mu ,s}(𝐧,𝐧^{\prime })p_\mu (𝐧^{\prime }),`$ (20)
$`K_{\mu ,s}(𝐧,𝐧^{\prime })=\sum _{l=0}^{2j}\frac{2l+1}{2j+1}\frac{⟨j,j;l,0|j,j⟩^s}{⟨j,\mu ;l,0|j,\mu ⟩}P_l(𝐧\cdot 𝐧^{\prime }),`$ (21)
where $`P_l(x)`$ are the Legendre polynomials. For $`s=-1`$ and $`\mu =j`$ we recover the relation (19).
## III General description of experimental schemes
### A Spectroscopy and interferometry
Quantum transformations which constitute the basic operations in spectroscopic and interferometric measurements can be conveniently described as rotations in an abstract 3-dimensional space. In this description, the system is characterized by the vector $`𝐉=(J_x,J_y,J_z)^T`$, where the three operators $`J_x`$, $`J_y`$, and $`J_z`$ satisfy the su(2) algebra (1).
A spectroscopic or interferometric process is usually described in the Heisenberg picture as a unitary transformation
$$𝐉_{\mathrm{out}}=U(\vartheta _1,\vartheta _2,\phi )𝐉U^{\dagger }(\vartheta _1,\vartheta _2,\phi )=𝖴(\vartheta _1,\vartheta _2,\phi )𝐉,$$
(22)
where $`𝖴(\vartheta _1,\vartheta _2,\phi )`$ is a $`3\times 3`$ transformation (rotation) matrix, and $`\vartheta _1`$, $`\vartheta _2`$, $`\phi `$ are transformation parameters (rotation angles). A standard transformation consists of three steps:
1. rotation around the $`\widehat{𝐲}`$ axis by $`\vartheta _1`$, with the transformation matrix $`𝖱_y(\vartheta _1)`$,
2. rotation around the $`\widehat{𝐳}`$ axis by $`\phi `$, with the transformation matrix $`𝖱_z(\phi )`$,
3. rotation around the $`\widehat{𝐲}`$ axis by $`\vartheta _2`$, with the transformation matrix $`𝖱_y(\vartheta _2)`$.
The overall transformation performed on $`𝐉`$ is
$$𝖴(\vartheta _1,\vartheta _2,\phi )=𝖱_y(\vartheta _2)𝖱_z(\phi )𝖱_y(\vartheta _1).$$
(23)
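A quick numerical check of Eq. (23) is easy to set up; the matrix conventions below (sense of rotation of $`𝖱_x`$, $`𝖱_y`$, $`𝖱_z`$) are our own choice and may differ from the authors':

```python
import numpy as np

def Ry(a):
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def Rx(a):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])

phi = 0.3
U = Ry(np.pi / 2) @ Rz(phi) @ Ry(-np.pi / 2)   # Eq. (23) with theta_2 = -theta_1 = pi/2
assert np.allclose(U, Rx(phi))                 # the sequence reduces to a rotation about x
```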
This transformation is slightly more general than those routinely made in spectroscopy and interferometry. The usual choice is $`\vartheta _2=-\vartheta _1=\pm \pi /2`$, so $`𝖴=𝖱_x(\pm \phi )`$, respectively, while $`\phi `$ is the parameter to be estimated in the experiment. In the Schrödinger picture, the density matrix of the system transforms as
$$\rho _{\mathrm{out}}=U^{\dagger }(\vartheta _1,\vartheta _2,\phi )\rho U(\vartheta _1,\vartheta _2,\phi ),$$
(24)
where the transformation operator is
$$U(\vartheta _1,\vartheta _2,\phi )=e^{i\vartheta _1J_y}e^{i\phi J_z}e^{i\vartheta _2J_y}.$$
(25)
Now, the aim is to measure the value of $`\phi `$ which is proportional to the transition frequency in a spectroscopic experiment or to the optical path difference between the two arms of an interferometer. The information on $`\phi `$ is inferred from the measurement of the observable $`J_z`$ at the output. The quantum uncertainty in the estimation of $`\phi `$ is
$$\mathrm{\Delta }\phi =\frac{\mathrm{\Delta }J_{z\mathrm{out}}}{|\partial ⟨J_{z\mathrm{out}}⟩/\partial \phi |},$$
(26)
where the expectation values are taken over the initial quantum state of the system. This state is assumed to be known, so one can estimate the value of $`\phi `$ and the corresponding uncertainty.
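For orientation, a standard worked example (a textbook result quoted here for reference, not derived in this paper, and written up to pulse-phase sign conventions): for an initial coherent spin state of $`N`$ uncorrelated two-level atoms one has
$$⟨J_{z\mathrm{out}}⟩=\frac{N}{2}\mathrm{cos}\phi ,\mathrm{\Delta }J_{z\mathrm{out}}=\frac{\sqrt{N}}{2}|\mathrm{sin}\phi |,$$
so that Eq. (26) gives the standard quantum limit $`\mathrm{\Delta }\phi =1/\sqrt{N}`$, independent of $`\phi `$.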
### B Reconstruction of the initial state
In this paper we consider how to use the spectroscopic or interferometric arrangement for the inverse purpose, i.e., for the measurement of an unknown initial quantum state by means of a large number of transformations with known parameters.
As discussed in Sec. II, the first part of the reconstruction procedure is the phase-space displacement of Eq. (8). With the phase space being the sphere, this displacement is just a rotation produced by the operator $`g(𝐧)`$ of Eq. (4). Now, compare this rotation with the one made during a spectroscopic or interferometric experiment, as given by Eqs. (24) and (25). One can immediately conclude that the phase-space displacement needed for the SU(2) state-reconstruction procedure can be neatly implemented by means of the spectroscopic and interferometric techniques. One only needs to omit the first rotation (i.e., take $`\vartheta _1=0`$), and recognize the two spherical angles as:
$$\theta =\vartheta _2,\varphi =\phi .$$
(27)
After the rotation $`g(𝐧)`$ is made, one should measure the probability $`p_\mu (𝐧)`$ to find the displaced system in the state $`|j,\mu ⟩`$. Perhaps the most convenient choice is to measure the population of the lowest state $`|j,-j⟩`$, which is usually the ground state of the system (e.g., this state corresponds to the case where all the atoms are unexcited; in the atomic case such a measurement can be made by monitoring the resonant fluorescence for an auxiliary dipole transition). This procedure should be repeated for many phase-space points $`𝐧`$ with a large number of identically prepared systems, thereby determining the function $`p_\mu (𝐧)`$ (e.g., for $`\mu =j`$ or $`\mu =-j`$). According to the formalism presented in Sec. II, this information is sufficient to reconstruct the initial quantum state.
## IV Collections of two-level atoms
In Ramsey spectroscopy one deals with a collection of $`N`$ two-level systems (usually atoms or ions) interacting with classical light fields. One can equivalently describe this physical situation as the interaction of $`N`$ spin-$`\frac{1}{2}`$ particles with classical magnetic fields. Denoting by $`𝐒_i`$ the spin of the $`i`$th particle, one can use the collective spin operators:
$$𝐉=\sum _{i=1}^{N}𝐒_i.$$
(28)
The orthonormal basis $`\{|j,\mu ⟩\}`$ consists of the symmetric Dicke states:
$$|j,\mu ⟩=\left(\genfrac{}{}{0pt}{}{N}{p}\right)^{-1/2}\sum \prod _{k=1}^{p}|+⟩_{l_k}\prod _{l\ne l_k}|-⟩_l,$$
(29)
where $`|+⟩_l`$ and $`|-⟩_l`$ are the upper and lower states, respectively, of the $`l`$th atom, and the summation is over all possible permutations of $`N`$ atoms. If only symmetric states are considered, then the “cooperative number” $`j`$ is equal to $`N/2`$ and $`p=\mu +j`$ is just the number of excited atoms.
In the spin formulation (see, e.g., Ref. for a very good description), the magnetic moment $`𝝁=\mu _0𝐒`$ is associated with each particle. If a uniform external magnetic field $`𝐁_0=B_0\widehat{𝐳}`$ is applied, the Hamiltonian for each particle is given by
$$H_0=-𝝁\cdot 𝐁_0=ℏ\omega _0S_z,$$
(30)
where $`ℏ\omega _0=-\mu _0B_0`$ is the separation in energy between the two levels. The corresponding Heisenberg equation for the collective spin operator is
$$\partial 𝐉/\partial t=𝝎_0\times 𝐉,$$
(31)
where $`𝝎_0=\omega _0\widehat{𝐳}`$. Then one applies the so-called clock radiation which is a classical field of the form
$$𝐁_{\perp }=B_{\perp }\left(\widehat{𝐲}\mathrm{cos}\omega t-\widehat{𝐱}\mathrm{sin}\omega t\right),$$
(32)
where $`\omega \approx \omega _0`$ and we assume $`\omega _0>0`$. In the reference frame that rotates at frequency $`\omega `$, the collective spin $`𝐉`$ interacts with the effective field
$$𝐁=B_r\widehat{𝐳}+B_{\perp }\widehat{𝐲},$$
(33)
where $`B_r=B_0(\omega _0-\omega )/\omega _0`$. In the rotating frame, the Hamiltonian is $`H=-\mu _0𝐉\cdot 𝐁`$, and the Heisenberg equation for $`𝐉`$ is
$$\partial 𝐉/\partial t=𝝎^{\prime }\times 𝐉,$$
(34)
where $`𝝎^{\prime }=(\omega _0-\omega )\widehat{𝐳}+\omega _{\perp }\widehat{𝐲}`$ and $`\omega _{\perp }=-\mu _0B_{\perp }/ℏ`$.
The Ramsey method breaks the evolution time of the system into three parts. In the first part, $`B_{\perp }`$ is nonzero and constant with value $`B_1`$ during the time interval $`0\le t\le t_\vartheta `$. During this period (the first Ramsey pulse), $`𝐁=B_r\widehat{𝐳}+B_1\widehat{𝐲}\approx B_1\widehat{𝐲}`$, where we made the assumption $`|B_1|\gg |B_r|`$, i.e., $`|\omega _1|\gg |\omega _0-\omega |`$, with $`\omega _1=-\mu _0B_1/ℏ`$. Therefore, in the rotating frame of Eq. (34), $`𝐉`$ rotates around the $`\widehat{𝐲}`$ axis by the angle $`\vartheta _1=\omega _1t_\vartheta `$. During the second period, of duration $`T`$ (usually $`T\gg t_\vartheta `$), $`B_{\perp }`$ is zero, so $`𝐁=B_r\widehat{𝐳}`$, and $`𝐉`$ rotates around the $`\widehat{𝐳}`$ axis by the angle $`\phi =(\omega _0-\omega )T`$. The third period is exactly like the first one, but with the field $`B_{\perp }=B_2`$ and the corresponding angular frequency $`\omega _2=-\mu _0B_2/ℏ`$. This gives a rotation around the $`\widehat{𝐲}`$ axis by the angle $`\vartheta _2=\omega _2t_\vartheta `$. These three Ramsey pulses provide the rotations we described in Sec. III A (usually, $`\vartheta _1=\vartheta _2=\pi /2`$).
The aim of spectroscopic experiments is to measure the transition frequency $`\omega _0`$ (which is equivalent to the measurement of $`\phi `$, as $`\omega `$ and $`T`$ are determined by the experimenter). Usually, one measures the number of atoms in the upper state $`|+`$,
$$N_{+\mathrm{out}}=J_{z\mathrm{out}}+N/2,$$
(35)
and thus obtains the information about the angle $`\phi `$ or, equivalently, about the frequency $`\omega _0`$. Of course, in order to infer this information one should know the initial quantum state of the system. The measurement sensitivity, as seen from Eq. (26), also depends on the initial quantum state.
In the state-reconstruction procedure, the first Ramsey pulse should be omitted, while the second and third pulses produce the desired phase-space displacement $`g^{}(𝐧)`$. After the displacement is completed, one should measure the probability to find the system in one of the states $`|j,\mu `$, for example, measure the population of the ground state $`|j,j`$ or of the most excited state $`|j,j`$. This measurement can be made by driving a dipole transition to an auxiliary atomic level and then observing the resonance fluorescence. The phase space is scanned by repeating the measurement with many identically prepared systems for various durations of the Ramsey pulses, $`T`$ and $`t_\vartheta `$. Of course, the apparatus should be first calibrated by measuring the transition frequency $`\omega _0`$.
## V Two-mode light fields
The basic device employed in a passive optical interferometer is a beam splitter (a partially transparent mirror). A Mach-Zehnder interferometer consists of two beam splitters and its operation is as follows. Two light modes (with boson annihilation operators $`a_1`$ and $`a_2`$) are mixed by the first beam splitter, accumulate phase shifts $`\phi _1`$ and $`\phi _2`$, respectively, and then they are once again mixed by the second beam splitter. Photons in the output modes are counted by two photodetectors. In fact, a Michelson interferometer works in the same way, but due to its geometric layout the two beam splitters may coincide.
Each beam splitter has two input and two output ports. Let $`𝐚=(a_1,a_2)^T`$ and $`𝐛=(b_1,b_2)^T`$ be the column-vectors of the boson operators of the input and output modes, respectively. Then, in the Heisenberg picture, the action of the beam splitter is described by the transformation
$$𝐛=𝖡𝐚,$$
(36)
where $`𝖡`$ is a $`2\times 2`$ matrix. For a lossless beam splitter $`𝖡`$ must be unitary, thereby assuring the energy (photon number) conservation. A possible form of $`𝖡`$ is
$$𝖡(\vartheta )=\left(\begin{array}{cc}\mathrm{cos}(\vartheta /2)& \mathrm{sin}(\vartheta /2)\\ \mathrm{sin}(\vartheta /2)& \mathrm{cos}(\vartheta /2)\end{array}\right),$$
(37)
with $`T=\mathrm{cos}^2(\vartheta /2)`$ and $`R=\mathrm{sin}^2(\vartheta /2)`$ being the transmittance and reflectivity, respectively. When the two light modes accumulate phase shifts $`\phi _1`$ and $`\phi _2`$, respectively, the corresponding transformation is
$$𝐛=𝖯𝐚,𝖯=\left(\begin{array}{cc}e^{i\phi _1}& 0\\ 0& e^{i\phi _2}\end{array}\right).$$
(38)
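As a small numerical check (an added sketch; the phase-sign convention simply follows Eq. (38) as printed), the complete Mach-Zehnder action is the matrix product of Eqs. (37) and (38):

```python
# Sketch: classical transfer matrix of a Mach-Zehnder interferometer,
# composed from the beam-splitter and phase-shift matrices above.
import numpy as np

def bs(theta):                                   # Eq. (37)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, s], [-s, c]], dtype=complex)

def phases(phi1, phi2):                          # Eq. (38)
    return np.diag([np.exp(1j * phi1), np.exp(1j * phi2)])

phi = 0.4
M = bs(np.pi / 2) @ phases(0.0, phi) @ bs(np.pi / 2)   # two 50-50 splitters
assert np.allclose(M.conj().T @ M, np.eye(2))    # lossless => unitary
# intensity routed to output 1 for light entering input 1: sin^2(phi/2)
print(abs(M[0, 0]) ** 2, np.sin(phi / 2) ** 2)
```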
The group-theoretic description of the interferometric process is based on the Schwinger realization of the su(2) algebra:
$`J_x=(a_1^{}a_2+a_2^{}a_1)/2,`$ (39)
$`J_y=i(a_1^{}a_2a_2^{}a_1)/2,`$ (40)
$`J_z=(a_1^{}a_1a_2^{}a_2)/2.`$ (41)
Actions of the interferometer elements (mixing by the beam splitters and phase shifts) can be represented as rotations of the column-vector $`𝐉=(J_x,J_y,J_z)^T`$. The beam-splitter transformation of Eq. (37) is represented by rotation $`𝖱_y(\vartheta )`$ around the $`\widehat{𝐲}`$ axis by the angle $`\vartheta `$, and the phase shift of Eq. (38) is represented by rotation $`𝖱_z(\phi )`$ around the $`\widehat{𝐳}`$ axis by the angle $`\phi =\phi _2\phi _1`$. Now, if the transmittances of the two beam splitters are $`T_1=\mathrm{cos}^2(\vartheta _1/2)`$ and $`T_2=\mathrm{cos}^2(\vartheta _2/2)`$, respectively, then the interferometer action is given by the three rotations described in Sec. III A. (Usually, one uses 50-50 beam splitters, so $`\vartheta _1=\vartheta _2=\pi /2`$.)
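This realization is easy to verify numerically in a truncated two-mode Fock space; the following sketch (an added illustration, with the truncation dimension chosen arbitrarily) checks the su(2) commutation relation $`[J_x,J_y]=iJ_z`$ on a state far from the truncation boundary:

```python
# Sketch: Schwinger realization of su(2) from two truncated boson modes.
import numpy as np

D = 6                                            # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1.0, D)), k=1)     # annihilation operator
I = np.eye(D)
a1, a2 = np.kron(a, I), np.kron(I, a)            # mode operators

Jx = 0.5 * (a1.T @ a2 + a2.T @ a1)
Jy = -0.5j * (a1.T @ a2 - a2.T @ a1)
Jz = 0.5 * (a1.T @ a1 - a2.T @ a2)

comm = Jx @ Jy - Jy @ Jx - 1j * Jz
# exact away from the truncation boundary: test on |n1=2, n2=1> (j = 3/2)
state = np.kron(np.eye(D)[2], np.eye(D)[1])
assert np.allclose(comm @ state, 0.0)
```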
Interferometers are constructed to measure the relative phase shift $`\phi `$, which is proportional to the optical path difference between the two arms. Usually, one measures the difference between the photocurrents due to the two output light beams. This quantity is proportional to the photon-number difference at the output, $`q_{\mathrm{out}}=2J_{z\mathrm{out}}`$. If the input state of light is known, then the measurement of $`q_{\mathrm{out}}`$ can be used to infer the phase shift $`\phi `$ and estimate the measurement error due to the quantum fluctuations of the light field.
A simple calculation gives $`𝐉^2=(N/2)(1+N/2)`$, where $`N=a_1^{}a_1+a_2^{}a_2`$ is the total number of photons in the two modes. If $`N`$ has a fixed value for the input state of the two-mode light field, then this state belongs to the Hilbert space $`_j`$ of a specific SU(2) representation with $`j=N/2`$. Because $`N`$ is the SU(2) invariant, this state will remain in $`_j`$ during the interferometric process. Such input states of the two-mode light field can be reconstructed using a rearrangement of the interferometric scheme, according to the general procedure described in Secs. II and III B.
The phase-space displacement $`g^{}(𝐧)`$ needed for the state-reconstruction procedure can be implemented by using an interferometer without the first beam splitter. Then one should measure the probability $`p_\mu (𝐧)`$ to find the output light in one of the states $`|j,\mu `$. Note that these states are given by
$$|j,\mu =|j+\mu _1|j-\mu _2$$
(42)
in terms of the Fock states of the two light modes. So, $`\mu `$ is just one half of the photon-number difference measured at the output. Averaging over many measurements, one obtains the probabilities $`p_\mu (𝐧)`$. For example, $`p_j(𝐧)`$ is the probability that all photons exit in the first output beam while the number of photons in the second output beam is zero. The measurement should be repeated with identically prepared input light beams for many phase-space displacements. This means that one needs a well-calibrated apparatus which can be tuned for various values of the relative phase shift $`\phi `$. These phase shifts can be conveniently produced by moving a mirror with a precise electro-mechanical system. Various values of the angle $`\vartheta _2`$ can be realized using a collection of partially transparent mirrors with different reflectivities for the second beam splitter. An alternative possibility is to use the dependence of the reflectivity on the angle of incidence for light polarized in the plane of incidence.
In general, the state reconstruction for two-mode light fields is a tedious task, because the corresponding Hilbert space is very large. Obviously, this task can be greatly simplified for the subclass of two-mode states with a fixed total number of photons, by means of the reconstruction method presented here. However, this method is in principle also suitable for other two-mode states. In general, the whole Hilbert space of the two-mode system can be decomposed as
$$=\underset{j}{}_j.$$
(43)
The method of inverted interferometry enables one to reconstruct the part of the density matrix corresponding to each irreducible subspace $`_j`$. One case for which our method is applicable is the subclass of states whose density matrices are block-diagonal in terms of the decomposition (43). This means that the corresponding operator can be written as
$$\rho =\underset{j}{}\rho _j,$$
(44)
where $`\rho _j`$ is an operator on $`_j`$. Each component $`\rho _j`$ evolves independently during the phase-space displacement; hence the state of the whole system can be measured by reconstructing all invariant components $`\rho _j`$. The other case for which our method works is the subclass of pure states,
$$|\psi =\underset{j}{}|\psi _j,|\psi _j=\underset{\mu =j}{\overset{j}{}}c_{j\mu }|j,\mu .$$
(45)
Then the density matrix can be written as
$$\rho =\underset{j}{}|\psi _j\psi _j|+\underset{jj^{}}{}|\psi _j\psi _j^{}|.$$
(46)
The populations of the states $`|j,\mu `$ are unaffected by the second term in (46), and one can reconstruct all invariant components $`\rho _j=|\psi _j\psi _j|`$. This gives information about the state $`|\psi `$ of the whole system, except for relative phases between different $`|\psi _j`$. From the technical point of view, each measurement of the photon-number difference $`2\mu `$, needed to determine the probabilities $`p_\mu (𝐧)`$, should be accompanied by a measurement of the photon-number sum $`N=2j`$, in order to determine to which invariant subspace $`_j`$ the detected value of $`\mu `$ corresponds. Consequently, one needs to make many more measurements in order to accumulate enough data for each value of $`j`$. A technical problem is that quantum efficiencies of realistic photodetectors are always less than unity. While this problem is not too serious for the measurement of the photon-number difference (as long as both detectors have the same efficiency), it puts a serious limitation on the accuracy of the measurement of the total number of photons.
## VI Two-dimensional vibrations of a trapped ion
As was recently demonstrated by Wineland *et al.* , a single laser-cooled ion in a harmonic trap can be used to simulate various interactions governing many well-known optical processes. In particular, one can simulate transformations produced by elements of a Mach-Zehnder optical interferometer.
Consider a single ion confined in a two-dimensional harmonic trap, with angular frequencies of oscillations in two orthogonal directions $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$. Two internal states of the ion, $`|+`$ and $`|`$, are separated in energy by $`\mathrm{}\omega _0`$. The internal and motional degrees of freedom can be coupled by applying classical laser beams, with electric fields of the form
$`𝐄(𝐱,t)=𝐄_0\mathrm{cos}(𝐤𝐱\omega t+\mathrm{\Phi }).`$
For example, one can apply two laser beams to produce stimulated Raman transitions. We denote by $`\omega =\omega _1\omega _2`$, $`𝐤=𝐤_1𝐤_2`$, and $`\mathrm{\Phi }=\mathrm{\Phi }_1\mathrm{\Phi }_2`$ the differences between the angular frequencies, the wave vectors, and the phases, respectively, of the two applied fields. Then, in the rotating-wave approximation, the interaction Hamiltonian reads
$$H_I=\mathrm{}\kappa \mathrm{exp}[i(𝐤𝐱\delta t+\mathrm{\Phi })]+\mathrm{H}.\mathrm{c}.,$$
(47)
where $`\delta =\omega -\omega _0`$ is the frequency detuning, $`𝐱`$ is the ion’s position relative to its equilibrium, and $`\kappa `$ is the coupling constant (the Rabi frequency). Each of the two modes of the ion’s motion can be modelled by a quantum harmonic oscillator:
$$x_r=x_{0r}(a_r+a_r^{}),x_{0r}=\sqrt{\mathrm{}/(2M\mathrm{\Omega }_r)},$$
(48)
where $`r=1,2`$ and $`M`$ is the ion’s mass. Also, let $`\eta _r=k_rx_{0r}`$ ($`r=1,2`$) be the Lamb-Dicke parameters for the two oscillatory modes. It is convenient to use the interaction picture for the ion’s motion:
$`\stackrel{~}{H}_I`$ $`=`$ $`\mathrm{exp}(iH_0t/\mathrm{})H_I\mathrm{exp}(iH_0t/\mathrm{})`$ (49)
$`=`$ $`\mathrm{}\kappa e^{i(\mathrm{\Phi }\delta t)}{\displaystyle \underset{r=1,2}{}}\mathrm{exp}[i\eta _r(\stackrel{~}{a}_r+\stackrel{~}{a}_r^{})]+\mathrm{H}.\mathrm{c}.,`$ (50)
where $`H_0`$ is the free Hamiltonian for the ion’s motion,
$$H_0=\mathrm{}\mathrm{\Omega }_1\left(a_1^{}a_1+\frac{1}{2}\right)+\mathrm{}\mathrm{\Omega }_2\left(a_2^{}a_2+\frac{1}{2}\right),$$
(51)
and $`\stackrel{~}{a}_r=a_r\mathrm{exp}(i\mathrm{\Omega }_rt)`$, $`r=1,2`$.
If the coupling constant $`\kappa `$ is small enough and $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$ are incommensurate, one can resonantly excite only one spectral component of the possible transitions. For a particular resonance condition $`\delta =\mathrm{\Omega }_2-\mathrm{\Omega }_1`$ (and in the Lamb-Dicke limit of small $`\eta _1`$ and $`\eta _2`$), the product in Eq. (50) will be dominated by the single term $`(i\eta _1a_1)(i\eta _2a_2^{})`$. Therefore, one obtains
$$\stackrel{~}{H}_I\mathrm{}\kappa \eta _1\eta _2\left(e^{i\mathrm{\Phi }}a_1a_2^{}+e^{i\mathrm{\Phi }}a_1^{}a_2\right).$$
(52)
Returning to the Schrödinger picture, the total evolution operator reads:
$`U(t)`$ $`=`$ $`\mathrm{exp}(iH_0t/\mathrm{})\mathrm{exp}(i\stackrel{~}{H}_It/\mathrm{})`$ (53)
$`=`$ $`\mathrm{exp}[i(\mathrm{\Omega }_1+\mathrm{\Omega }_2)(N+1)t/2]\mathrm{exp}[i(\mathrm{\Omega }_2\mathrm{\Omega }_1)J_zt]`$ (55)
$`\times \mathrm{exp}(2i\kappa \eta _1\eta _2J_\mathrm{\Phi }t).`$
Here, $`N=a_1^{}a_1+a_2^{}a_2`$ is the total number of vibrational quanta in the two modes, $`J_\mathrm{\Phi }=J_x\mathrm{cos}\mathrm{\Phi }+J_y\mathrm{sin}\mathrm{\Phi }`$, and we used the Schwinger realization (39) for the SU(2) generators.
Now, let us consider only such motional states of the ion for which $`N`$ has a fixed value, i.e., which belong to the irreducible Hilbert space $`_j`$ (with $`j=N/2`$). For these states, the first exponential in (55) will just produce an unimportant phase factor and can be omitted. Clearly, the evolution operator (55) can be used to simulate the action of an optical interferometer, with two vibrational modes of a trapped ion employed instead of two light beams. In order to simulate the action of a beam splitter, one should apply the interaction (52) during time $`t_\theta `$ and ensure that $`|2\kappa \eta _1\eta _2|\gg |\mathrm{\Omega }_2-\mathrm{\Omega }_1|`$, so the effect of the free evolution can be neglected. Then, for $`\mathrm{\Phi }=\pi /2`$, the evolution operator reads
$$U_y(\theta )=\mathrm{exp}(i\theta J_y),\theta =2\kappa \eta _1\eta _2t_\theta .$$
(56)
A relative phase shift between the two modes can be produced just by using the free evolution, i.e., with no external laser fields applied. Letting the system evolve freely during time $`T`$, one obtains
$$U_z(\varphi )=\mathrm{exp}(i\varphi J_z),\varphi =(\mathrm{\Omega }_2-\mathrm{\Omega }_1)T.$$
(57)
Applying the transformations (57) and (56) consecutively, one produces the phase-space displacement $`g^{}(𝐧)`$ employed in the state-reconstruction procedure. The whole phase space can be scanned by repeating the procedure with identically prepared systems for various durations $`T`$ and $`t_\theta `$. Each phase-space displacement should be followed by the measurement of the probability $`p_\mu (𝐧)`$ to find the system in one of the states $`|j,\mu `$. For example, $`p_j(𝐧)`$ is the probability that the first oscillatory mode is excited to the $`N`$th level ($`N=2j`$) while the second mode is in the ground state. Such a measurement can be made with the method used recently by the NIST group to reconstruct the one-dimensional motional state of a trapped ion. The principle of this method is as follows. One of the oscillatory modes is coupled to the internal transition $`|+|`$. This is done by applying one classical laser field, so single-photon transitions are excited. This results in an interaction of the Jaynes-Cummings type between the oscillatory mode and the internal transition. Then the population $`P_{}(t)`$ of the lower internal state $`|`$ is measured for various values of the interaction time $`t`$ (as we already mentioned, this measurement can be made by monitoring the resonant fluorescence produced in an auxiliary dipole transition). If $`|`$ is the internal state at $`t=0`$, then the signal averaged over many measurements is
$`P_{}(t)={\displaystyle \frac{1}{2}}\left[1+{\displaystyle \underset{n=0}{\overset{\mathrm{}}{}}}P_n\mathrm{cos}(2\mathrm{\Omega }_{n,n+1}t)e^{\gamma _nt}\right],`$
where $`\mathrm{\Omega }_{n,n+1}`$ are the Rabi frequencies and $`\gamma _n`$ are the experimentally determined decay constants. This relation allows one to determine the populations $`P_n`$ of the motional eigenstates $`|n`$. By virtue of Eq. (42), this gives the populations $`p_\mu `$ of the SU(2) states $`|j,\mu `$ (with $`\mu =nj`$ for the first mode and $`\mu =jn`$ for the second mode). For example, $`p_j`$ and $`p_j`$ are given by $`P_0`$ for the first and second modes, respectively.
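Since the signal is linear in the unknown populations, the $`P_n`$ can be recovered by a linear least-squares fit once the Rabi frequencies and decay constants are known. A hedged sketch follows; all numerical values in it are made-up placeholders:

```python
# Sketch: recover motional populations P_n from the averaged signal
# P_-(t) = 0.5 * [1 + sum_n P_n cos(2 Omega_n t) exp(-gamma_n t)],
# assuming Omega_n and gamma_n are known from calibration.
import numpy as np

t = np.linspace(0.0, 50.0e-6, 200)                 # measurement times [s]
omega = 2 * np.pi * np.array([50e3, 70e3, 86e3])   # Omega_{n,n+1} (assumed)
gamma = np.array([2e3, 4e3, 6e3])                  # decay constants (assumed)

# design matrix: column n is cos(2 Omega_n t) exp(-gamma_n t)
A = np.cos(2 * omega * t[:, None]) * np.exp(-gamma * t[:, None])

p_true = np.array([0.5, 0.3, 0.2])                 # populations to recover
signal = 0.5 * (1 + A @ p_true)                    # noiseless P_-(t)

p_fit, *_ = np.linalg.lstsq(A, 2 * signal - 1, rcond=None)
print(p_fit)                                       # ~ [0.5, 0.3, 0.2]
```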
## VII Conclusions
In this paper we presented practical methods for the reconstruction of quantum states for a number of physical systems with SU(2) symmetry. All these methods employ the same basic idea—the measurement of displaced projectors—which in principle is applicable to any system possessing a Lie-group symmetry. Practical realizations, of course, vary for different physical systems. In our approach, we exploited the fact that transformations applied in conventional spectroscopic and interferometric schemes are, from the mathematical point of view, just rotations. In the context of the SU(2) group, these rotations constitute phase-space displacements needed to implement a part of the reconstruction procedure. Therefore, the spectroscopic and interferometric measurements can be easily rearranged in order to enable one to determine unknown quantum states for an ensemble of identically prepared systems. As the spectroscopic and interferometric measurements are known for their high accuracy, we hope that the corresponding rearrangements will allow accurate reconstructions of unknown quantum states.
###### Acknowledgements.
This work was supported by the Fund for Promotion of Research at the Technion and by the Technion VPR Fund.
# CCD Photometry and Astrometry for Visual Double and Multiple Stars of the HIPPARCOS Catalogue

Based on observations made at La Silla (ESO, Chile - Key Programme 7-009-49K), Observatoire de Haute-Provence (OHP), Calar Alto (CLA), La Palma (LPL) and Jungfraujoch (JFJ) Observatories.
## 1 Introduction
### 1.1 A new project
In mid-1990 a large-scale project was started, combining the efforts of scientists from ten laboratories in six European countries, with the goal of obtaining accurate ground-based photometric and astrometric information on visual double and multiple stars: the European Network of Laboratories ”Visual Double Stars” was founded. Both hemispheres would be covered (Oblak et al. 1992a ). In 1992 a key programme was introduced at the European Southern Observatory (ESO) aiming at obtaining the photometry and the astrometry of visual double stars in the southern hemisphere: photoelectric and CCD observations would be obtained to complement the HIPPARCOS space observations on such systems (Oblak et al. 1992c ). This paper is meant to be a general introduction to this vast observational effort: its aim is to report extensively on the scientific goals (Sect. 1) and the technical aspects (Sects. 2, 3, 4) as well as to introduce a series of forthcoming data papers (Sect. 5). Our programme is defined in Sect. 2. A large part consists of the description of the observational protocol (Sect. 3) and the introduction of an original reduction method specifically developed for this programme (Sect. 4), some of the more important aspects to consider in order to finally obtain data of a quality that can bear comparison with space projects (such as HIPPARCOS). General conclusions and future prospects are formulated at the end.
### 1.2 The scientific goals
It is common knowledge that (visual) binary and multiple stars are prime targets for determining and calibrating basic stellar physics in general. In the first place, they serve to determine the masses and to calibrate the mass-luminosity relation. This is possible with sufficient accuracy, say better than 10%, under optimum conditions only, i.e. for a visual binary that is both sufficiently nearby and orbiting with a short period. For example, making use of the new Hipparcos absolute parallaxes, only 55 of the more than 1000 previously known orbital pairs satisfy the condition that each component mass is determined to better than 15% (Lampens et al. 1997b ). Since angular separations are generally below 1″, these are the ”close” visual binaries. The photometry of the individual components for these systems almost completely relies on visual estimates of their magnitude differences. In the second place, visual binaries also serve to calibrate other stellar parameters since their common formation implies a common origin (same overall metallicity) and the same age (sharing a common isochrone), conditions that may hold even more strictly than for open cluster members. Elimination of those generally badly known parameters allows one to focus on the remaining ones that can therefore be investigated in an independent way. Such test cases may also be found among wider or ”intermediate” visual binaries that have longer periods, separations larger than 1″, with high confidence that their components could not have influenced each other’s evolution. For these systems, however, the physical association should be clearly established (i.e. pairs that are too wide should be excluded). The existing photometry of the components with separations less than 10″ still relies mostly on visual estimates of $`\mathrm{\Delta }m`$ but also on the area-scanning technique (Franz fra66 (1966), Rakos et al. rak82 (1982)) for some hundreds of visual double stars. The conventional photoelectric technique cannot be trusted in this range of separations.
In general, the vast majority of stars are members of binary or multiple systems. Recent surveys of high astrometric quality, both from space (Lindegren lin97 (1997)) and from the ground, show clear evidence that improving the resolution of the instruments generates an increasing number of new detections and that the frequency of binaries is probably underestimated. The correct determination of the frequency as a function of stellar parameters such as spectral type, luminosity, population is a major constraint for the modeling of, for example, the galactic content and structure. Although more poorly known than their ”single” counterparts - partly because the research is easily biased by the employed techniques and by the complication of the observational data’s interpretation due to the presence of the companion stars - the binary stars deserve to be studied ”in their own right”.
It is therefore still a crucial matter to investigate the fundamental properties and the typical characteristics of double and multiple stars of all classes. The determination of the distribution functions of a minimal set of basic parameters such as true separations (linked with angular separations), mass and luminosity ratios (linked with differences of magnitudes) and differences of temperatures (linked with differences of colour indices) defines the context of this observational work.
The aim of our programme is, more specifically, to acquire and analyse the astrometric and astrophysical information of the individual components of visual binaries measured by the HIPPARCOS satellite mission. Regarding the astrometry, confrontation of recent versus past astrometric observations may lead to the detection, or the rejection, of orbital motions (e.g. Brosche et al. br92 (1992), Bauer et al. bau94 (1994)). This is especially important for the intermediate and wide pairs since a study of their relative proper motions is the only way to discriminate between physical and optical pairs. Regarding the photometry, we believe that complementary accurate photometric multi-colour data with reliable astrophysical content are needed for a well-chosen sample of binaries and multiple systems for which good quality astrometric data already exist. Indeed, although detection of new double and multiple stars from astrometric programmes is continuing at a high rate due to their improving resolution, this high quality astrometry is generally coupled with scarce or, at best, poor-quality photometry of the individual components compared to the joint photometry of the system. The reason is that accurate brightnesses and colours of the components of visual systems can be obtained by conventional photoelectric aperture photometry in good conditions only if the separation is larger than the size of the diaphragms used (typically 11-13″) and if sufficient care is also taken when measuring the sky contribution. At closer separations the photometric information on the components is often either inaccurate or lacking: global photometry may exist but it has to be combined with visual estimates of the differential magnitudes (such estimates are poorly known since they can be as much as 0.5 mag off) to obtain component magnitudes. At even closer separations and at the decimag accuracy level, speckle differential magnitudes are not easily obtainable either (Carbillet et al. car96 (1996); Ten Brummelaar et al. ten96 (1996)). The ($`B_T`$, $`V_T`$) photometry of the vast majority of the stars in the HIPPARCOS catalogue was obtained in the course of the TYCHO programme (ESA 1997a ), at least for systems with separations wider than 3″. Unfortunately, the Tycho photometry is not very accurate (the median precision is only 0.10 mag in $`B_TV_T`$), and, moreover, the magnitude measurements of the double star components are contaminated by the companions (Halbwachs et al. halb97 (1997)).
More generally, photographic magnitudes are found in very large double star catalogues such as the Washington Double Star Catalogue at USNO (Worley & Douglass wor97 (1997)) and the Catalogue of Components of Double and Multiple Stars at ROB (CCDM, Dommanget & Nys dom95 (1995)). The lack of accurate photometric data is reflected by the simple fact that much less than $`10\%`$ of the systems listed in the CCDM (Dommanget & Nys dom95 (1995)) and $`10\%`$ of the systems catalogued in the Annex of Double and Multiple Stars of the Hipparcos Input Catalogue (Turon et al. tur92 (1992)) have photoelectric photometry for both components (see the ’Catalogue Photométrique des Systèmes Doubles et Multiples’, CPSDM, Oblak obl88 (1988)).
Nowadays, observations made with CCD detectors make it possible to obtain accurate individual photometric data in a separation range where previous techniques failed (Sinachopoulos & Seggewiss sin89 (1989); Argue et al. arg92 (1992); Nakos et al. nak97 (1997)). We applied this observational technique to obtain the relevant data for each component of ”intermediate” visual pairs (defined as having angular separations between 1″ and 15″) in parallel with the conventional photoelectric technique for the ”wide” visual pairs (with angular separations larger than 12″).
## 2 The observational programme
### 2.1 General description
A large programme for the systematic acquisition of accurate, homogeneous photometric colour indices of the components of several thousands of double and multiple systems was thus set up in both hemispheres with the following principal aims:
\- to construct a basic sample of nearby double stars with complete astrometric and photometric information for each component of the system. By choosing those systems that belong to the HIPPARCOS programme, we made sure that the full astrometric information would be measured in space. The HIPPARCOS proper motions and parallaxes may now help to define ”clean” samples (by filtering out systems that are most probably optical (Dommanget dom55 (1955), dom56 (1956), Brosche et al. br92 (1992)) as a function of parallax (i.e. distance-limited samples). The new photometric data will supplement the Hipparcos magnitudes with astrophysically significant colours that, once calibrated, will hopefully provide us with additional information such as temperature, gravity or metallicity (Oblak & Lampens 1992b ).
\- to improve the accuracy of the component photometric data for a large sample of visual double stars in view of applications that concern the distributions of true separations, mass ratios and colour differences. The usefulness of accurate component photometry is furthermore also evident in several other previously described applications, e.g. luminosity calibrations, age and evolution determinations, etc.
In addition, we also provide accurate astrometric and photometric data for components that, for one reason or another, were not successfully measured or were ”missed” by HIPPARCOS. This may refer to components with angular separations larger than 10″ not included in the Input Catalogue, to components with angular separations comparable to or larger than the half-width of the ’instantaneous field of view’ (IFOV) ($``$ 10″, i.e. a two-pointing double) for which the resulting astrometry/photometry may be perturbed, to components fainter than the companion star by more than approx. 3.5 mag, or to components of those systems that were too difficult to treat and were left without a solution.
### 2.2 Selection of programme stars
The selection of the programme was made starting from 11434 double systems, 1960 triple systems, 536 quadruple and 237 multiple systems of the Annex of Double and Multiple Stars, containing a majority of objects within a distance of 500 pc (Turon et al. tur92 (1992)). Systems for which the component photometric information was lacking or poor have been selected by cross-examination with the CPSDM (Oblak obl88 (1988)). This catalogue contains all information in three photometric systems (UBV, Geneva, Strömgren) for visual double and multiple systems, with indications on which components have been observed. We eliminated a small number of systems for which all the known components already have complete and precise photometric measurements: this is the case for $``$ 11% out of the 11853 systems listed in this catalogue. This concerns the measurements of 237 systems in the Geneva photometric system, 948 systems in the Strömgren photometric system and 1540 systems in the UBV system (Oblak and Mermilliod, om88 (1988); Oblak et al., 1993c ). The bulk of the data regarding these wide visual double stars (separations larger than 10-12″) comes from works such as those by Lindroos (lin81 (1981), lin83 (1983), lin85 (1985)), Olsen (1982a , 1982b ), Sinachopoulos (si89 (1989), sin90 (1990)) and Wallenquist (wal81 (1981)).
We selected all systems with angular separations $`>1`$″ for which either not all components had been observed or whose differential magnitudes were insufficiently precise for extraction of astrophysical quantities. We found that differential colour indices are almost nonexistent in the separation range 1″ - 12″. Some 10% of visual double stars, generally the ones with separations larger than 10-12″, have colour indices for both components. Our programme consisted of northern ($`\delta _A>10\mathrm{°}`$) and southern samples ($`\delta _A+10\mathrm{°}`$) to be measured in various photometric campaigns and in both hemispheres. The overlapping zone in declination was observed once according to feasibility.
Since both conventional (CVT) and CCD photometry were used, the samples were also split with respect to angular separation, with a common intersection between 12″ and 15″ for calibration purposes. Systems on the CCD observational programme had to satisfy the following criteria:
* $`1\mathrm{}<\mathrm{separation}15\mathrm{}`$,
* $`0\mathrm{\Delta }\mathrm{m}<3`$ mag,
* lacking component photometry.
Our goal was thus to obtain accurate magnitudes and colours for the components of some 3000 HIPPARCOS double stars and some 600 multiple stars using both techniques (Oblak & Lampens 1992b ).
It is relevant to recall here the Hipparcos observational strategy in the case of adjacent stars (Turon et al. tur92 (1992)). With respect to angular separations, systems with separations $`<10`$″ represent one entry only, with the 35″ wide IFOV pointing at either the primary, the photocentre or the geometric centre, depending on separation and $`\mathrm{\Delta }\mathrm{Hp}`$. Systems with (maximum) separations $`\ge 10`$″ have two or more entries in the Input Catalogue (e.g. a two-pointing double). In such cases an alternating observing strategy has often been used, again depending on $`\mathrm{\Delta }\mathrm{Hp}`$. Some well-separated components had to be included for the purpose of correction only. With respect to magnitudes, the bulk of the stars are brighter than Hp = 10 mag, with an upper limit Hp = 12.4 mag (corresponding to V magnitude equal to 12.1 or 12.5 mag depending on the star’s colours (Grenon et al. gre92 (1992)). On the other hand, the Survey is essentially complete within the following magnitude limits:
V $`7.9+1.1\mathrm{𝑠𝑖𝑛}|\mathrm{b}|`$ for spectral types earlier than or equal to G5,
V $`7.3+1.1\mathrm{𝑠𝑖𝑛}|\mathrm{b}|`$ for spectral types later than G5.
Some results of this first astrometric space mission with respect to double stars deserve to be mentioned: from the systematic monitoring of a sample of 118,000 stars over 3 years, 3000 newly resolved doubles and several thousand suspected doubles have been detected. In the ($`\rho `$, $`\mathrm{\Delta }\mathrm{m}`$) plane, the distribution of these new discoveries shows a high concentration in the practically unexplored regime ($`\rho <1`$″, $`\mathrm{\Delta }\mathrm{Hp}<4`$ mag)(Fig. 1 in Lindegren et al., lin97a (1997)). It is also the first time that such a vast body of differential magnitudes (in the Hp passband) with a precision of 0.1 mag has been obtained for double stars of close and intermediate separation.
In terms of physical parameters, what kinds of double stars are actually considered by our programme? Limitations on the V magnitude of the primary components are obviously set by the above-cited mission constraints. Such limitations imply that, with respect to main sequence primaries,
\- there are no faint M dwarfs (with absolute magnitudes $``$ 16 mag) in the sample,
\- solar-type analogues (absolute magnitudes $``$ 5 mag) are included up to some 200 pc,
\- dwarfs with spectral types earlier than F5 (absolute magnitudes $``$ 4 mag) are included up to 500 pc.
Limitations on the V magnitudes of the secondary component, on the other hand, are governed by the observational restriction $`\mathrm{\Delta }\mathrm{m}<3`$ mag. This limits the range of the observable mass ratios. Depending on the spectral type, observable values range from unity up to a factor of 2 to 3.
By performing all-sky photometry with the CCD technique, we are entering a regime in angular separation where individual components are now photometrically measured with the same accuracy as with the conventional (photoelectric) technique currently used for joint systems and for individual components of wide pairs. The working area in the ($`\rho `$, $`\mathrm{\Delta }\mathrm{m}`$) plane is illustrated by Fig. 1. At the distance of 25 pc, conventionally adopted for the nearby stars, angular separations between 1″ and 15″ represent true separations ranging from 25 to 375 A.U. Taking an upper distance-limit of 500 pc, these values represent separations beyond 500 A.U. Therefore - though a wide range is covered - our sample specifically addresses that part of the distribution in true separation that is longward of the peak value near 50 A.U. (cf. Fig. 1 in Dommanget and Lampens dom93 (1993)). It will furthermore be adequate to investigate the natural ”drop-off” for separations larger than some 2000-3000 A.U.
### 2.3 Summary of campaigns
Observations by this group have been performed in various observatories situated in both hemispheres. In the North we observed at Calar Alto (CAL, Spain), Jungfraujoch (JFJ, Switzerland), Observatoire de Haute-Provence (OHP, France), La Palma (LPL, Canary Islands). In the South the La Silla observatory was our unique facility (ESO, Chile). A three-year ESO Key Programme was dedicated to this project for the years 1992-1995 (7-009-49K: Periods 49 to 54). Tables 1 and 2 summarize all the campaigns and the instrumentation used for South and North respectively. N.N. is the number of nights. Abbreviation codes for observers can be found in Table 3.
The status of the overall project is presented in Table 4. For these statistics, use was also made of the first large-scale results for CCD astrometry and photometry of double stars of the Hipparcos Input Catalogue during 1986 and 1987 (Argue et al. arg92 (1992); referred to as ”the” La Palma observations (LPA)). The CCD part of our programme has been completed to 56% of our initial objective, with a global contribution of 26% coming from LPA data, another 34% coming from observations by this group and 4% of data in common.
In the North, the main contribution of 35% comes from LPA observations while the ESO/OHP/CLA/LPL observations represent 25% and common stars from both sites contribute another 5%. The part of not observed northern systems represents another 35%.
In the South, the ESO observations represent a majority (35%) while 14% comes from LPA observations and common stars contribute another 4%. The part of not observed southern systems represents 47%.
Also to be found in Table 4 is the number of observations of multiple systems: coverage was achieved for 41% in the South and only 16% in the North. At the request of the Hipparcos Double Star Working Group in 1993 an additional set of 81 triple systems with maximum separations between 1″ and 15″, but minimum separations (almost) always below 1″ and no restriction in $`\mathrm{\Delta }\mathrm{m}`$ (called ’Multiple WG’ in Table 4) was observed with a partial coverage of only 31%. Another one hundred double stars of the Catalogue of Nearby Stars (Jahreiß and Gliese, jah91 (1991)) that were not in the Annex of Double and Multiple Stars have been introduced in the programme later on, with results to be presented on an independent basis. The overall percentage of absolute photometry for all our southern runs (ESO only) spanning the period Oct. 91 - Jan. 95 is about 40% of the truly observed time.
Moreover, the conventional photometric part of our programme at ESO also suffered heavily from poor weather conditions: this part has been carried out to as little as 24% of our initial goal.
Some programme stars have been deliberately observed twice while some just happened to be in common between our observations and ”the” La Palma (LPA) ones. For the CCD programme, this amounts to 297 cases out of 1698 for the doubles and to 16 cases out of 162 for the multiples. These data are very useful in the assessment of the computed errors for both the astrometry and the photometry.
## 3 The observational method
### 3.1 The common protocol
Two different techniques were employed, each of which required a specific protocol to be taken into account by all observers. Multi-colour observations have been obtained with the V (occasionally R) and I passbands of the Cousins system or sufficiently close to it. At ESO, CCD observations at the Dutch telescope were gathered through a Bessel V and a Gunn i filter.
The conventional photometric protocol is a classical one: programme objects and standard stars of the Cousins system taken from a list compiled by M. Grenon (gre91 (1991)) on the basis of lists prepared by Menzies et al. (men89 (1989), men91 (1991)) and Taylor & Joner (tay89 (1989)) were observed alternately in all filters under photometric sky conditions only. At ESO, the same V(R)I filters were used. At Jungfraujoch (Switzerland), the Geneva photometric system with filters UBV$`\mathrm{B}_1\mathrm{B}_2\mathrm{V}_1`$G was employed. The standard star measurements were regularly spaced both in time and in colour. Special care was taken when measuring the sky contribution for each component (specifically if the angular separation was of the same order as the diaphragm size): we systematically tried to measure it diametrically opposite the other component and at about the same angular distance. Observations are being reduced in a standard way and results on this part of the programme will be reported later on (after the presentation of all our CCD results).
A strict protocol was set up concerning the use of the CCD technique:
\- we systematically avoided binning;
\- sky flat-fields in each filter were taken at the beginning and at the end of each observing night; bias frames were taken more regularly during the night;
\- focus sequences were made for each filter at the start of the night and, because of the focus problems (continuous shifts not always related to temperature effects), we frequently had to monitor and adjust the focus during the night in all our runs (compared to the short exposure times used for the acquisition of a single frame, the time needed for adjustment was not negligible, especially for the 0.9m Dutch telescope);
\- for the double star programme we used whenever possible a 200x200 pixels window on the CCD chip for quick readout and fast file transfer on tape; the full chip was used specifically for the flat-fields, the multiple star programme and for the calibration (see Sect. 3.2 below);
\- the 16-bits dynamic range was used whenever possible (requiring e.g. a one bit change option at the Dutch telescope);
\- neutral density filters (with magnitude reductions of 2.5 or 5 mag) were employed on both programme and standard stars throughout the night if the former objects would have required exposure times below one second without them;
\- for each object and each filter, a sequence of multiple exposures was defined with exposure times up to 30 s (sometimes 60 s) adapted so as to have maximum efficiency without overexposing the primary component. Mean exposure times are of the order of 10 s. The number of exposures in the sequence was evaluated as $`10^{0.4\mathrm{\Delta }m}`$, a function of the catalogued magnitude difference $`\mathrm{\Delta }m_{AB}=m_{V,B}m_{V,A}`$, with a maximum of 17 for $`\mathrm{\Delta }\mathrm{m}=3`$ mag.
\- as for the conventional programme, standard stars from the list (Grenon gre91 (1991)) were taken at regular intervals for the extinction calculation and to transform the data to the standard V(R)I system. Two possibilities were considered:
a) under photometric conditions: insertion of few standards (about two per hour) permits to obtain standard magnitudes and colours and their differences;
b) under non-photometric conditions: no standard stars observations; instrumental differences of magnitudes and colours only were acquired, implying some loss of accuracy.
### 3.2 The astrometric calibration
A by-product of our CCD observations is the relative geometric configuration of the objects. But different campaigns mean different CCD’s, so varying scales and also varying orientations. In order to determine the orientation of the CCD, stars were trailed over the full length of the chip, and a number of ”wide” double stars of fixed configuration (separation larger than 10″) from the lists by Brosche & Sinachopoulos (br88 (1988), br89 (1989)), called ”astrometric standards”, were observed for a first determination of the scale. Later we realized that much higher accuracy could be obtained by the inclusion of open star cluster observations (Sinachopoulos et al. sin93 (1993)). Finally, homogenization of the astrometric data was achieved through direct comparison with the Hipparcos results. The transformation from the local coordinates to Hipparcos coordinates was done by minimizing the squares of all positional differences. Stars with large discrepancies (at the 3$`\sigma `$-level) were not included in the final calculation of the transformation. This allowed the determination of the orientation and of the scale at the 0.07° and 0.01% levels, respectively (Oblak et al. obl97 (1997)).
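A schematic version of this calibration step might look as follows (our own sketch; the actual reduction may differ in detail, e.g. in the weighting of the stars): a four-parameter similarity transform (scale, orientation and two offsets) from CCD pixel coordinates to Hipparcos-based reference coordinates is fitted by least squares, with iterative rejection of outliers at the 3$`\sigma `$ level.

```python
# Hedged sketch: fit u = a*x - b*y + c, v = b*x + a*y + d (a similarity
# transform) from pixel coordinates (x, y) to reference coordinates (u, v),
# rejecting outliers at the 3-sigma level on each iteration.
import numpy as np

def fit_similarity(xy, uv, n_iter=5):
    keep = np.ones(len(xy), dtype=bool)
    for _ in range(n_iter):
        x, y = xy[keep, 0], xy[keep, 1]
        A = np.zeros((2 * keep.sum(), 4))
        A[0::2, 0], A[0::2, 1], A[0::2, 2] = x, -y, 1.0
        A[1::2, 0], A[1::2, 1], A[1::2, 3] = y, x, 1.0
        p, *_ = np.linalg.lstsq(A, uv[keep].ravel(), rcond=None)
        a, b, c, d = p
        model = np.column_stack([a * xy[:, 0] - b * xy[:, 1] + c,
                                 b * xy[:, 0] + a * xy[:, 1] + d])
        r = np.hypot(uv[:, 0] - model[:, 0], uv[:, 1] - model[:, 1])
        keep = r < 3.0 * (r[keep].std() + 1e-12)   # 3-sigma rejection
    scale = np.hypot(a, b)                         # relative plate scale
    angle = np.degrees(np.arctan2(b, a))           # orientation of the CCD
    return scale, angle, (c, d), keep
```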
In the near future, we plan to provide a revised list of wide double stars of steady configuration because they serve well for a first-order approximation of the scale and orientation values and because they are more widely dispersed on the sky than the open clusters available until now.
### 3.3 Remarks on the Charge Coupled Devices
A summary is given in Table 5. Mentioned are the observatory and telescope, the date, the general features of the CCD’s used in the various campaigns such as identification number, pixel width, saturation level. Scale values and zero-points for the orientation will be given later. Worth mentioning is the fact that the stellar images obtained at the Dutch telescope always showed ellipticity. This is also taken into account in our reductions.
## 4 The reduction method
### 4.1 Pre-reduction
The raw CCD images in the various filters are treated in a standard way, i.e. bias subtracted and flat-fielded. Since our programme stars are bright, the results are not very sensitive to the choice of the flat-fields used. Median (sky) flats normalized to unit intensity over a couple of nights within one mission serve their purpose well as long as care is taken that identical observing conditions prevailed (no insertion/removal of neutral density filters; no CCD dismounting; no filter cleaning operation, etc.)
### 4.2 Reduction of differential measurements
Various packages exist for the accurate astrometric and photometric reduction of CCD images in crowded fields: they take advantage of some well-exposed, isolated stars in the field of interest. The larger the number of isolated stars, the better the accuracies (e.g. DAOPHOT developed and distributed by P. Stetson (ste87 (1987))). However, they cannot be applied here: the limited chip size, coupled with the brightness of the objects, means that the majority of our frames show two overlapping profiles, usually without any other star in the field. A direct profile-fitting method that allows the separation of the individual profiles of the closest pairs, i.e. the pairs with separations somewhat larger than the width of the seeing disk, is thus desirable.
A one-dimensional profile-fitting method has been used by Sinachopoulos (sin92 (1992)): it involves fitting a Franz profile to a row and a column projection for each frame via a least squares method supported by an expert system. It was applied to double stars having separations generally larger than 5″. The reduction of closer pairs observed at La Palma was done by applying another method based on centroids and isophotes (Irwin irw85 (1985)). Our approach is different: a specific two-dimensional profile-fitting method was developed within the MIDAS software package (Cuypers cuy97 (1997)). This reduction method fits, with a least squares technique, a bidimensional Moffat profile (Moffat mo82 (1982)) with elliptical isophotes to all the components simultaneously. Since the angular separation of the components is generally less than the size of the isoplanatic area, the shape of the point spread function can indeed be considered identical for all components. This method compared favourably with DAOPHOT and was successfully applied to systems with up to 10 ”components” (Lampens & Seggewiss lam95 (1995)). The data obtained consist, after sky subtraction, of relative positions and differential magnitudes in each of the filters.
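The essence of the fitting step can be sketched as follows (a simplified Python illustration with made-up parameter names, not the MIDAS implementation itself); since all components share one profile shape, the magnitude difference follows directly from the fitted amplitude ratio.

```python
# Simplified sketch of a simultaneous multi-component Moffat fit.
import numpy as np
from scipy.optimize import least_squares

def moffat(x, y, x0, y0, amp, axx, ayy, axy, beta):
    # elliptical isophotes via a quadratic form in (x - x0, y - y0)
    u, v = x - x0, y - y0
    return amp * (1.0 + axx * u**2 + ayy * v**2 + axy * u * v) ** (-beta)

def model(p, x, y):
    sky, axx, ayy, axy, beta = p[:5]             # shared shape + background
    img = np.full(x.shape, sky)
    for x0, y0, amp in p[5:].reshape(-1, 3):     # one triple per component
        img = img + moffat(x, y, x0, y0, amp, axx, ayy, axy, beta)
    return img

def fit_components(image, p0):
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    res = least_squares(lambda p: (model(p, x, y) - image).ravel(), p0)
    return res.x

# p0 = [sky, axx, ayy, axy, beta, xA, yA, ampA, xB, yB, ampB]; because all
# components share one profile, Delta m = -2.5 log10(ampB / ampA).
```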
### 4.3 Reduction of magnitudes in a standard photometric system
We already reported that standard stars in the Cousins system were observed during nights of good photometric quality (see Sect. 3). These stars were used in a least squares model to derive extinction values for each night and transformation coefficients from the instrumental to the standard system. The differences between both pairs of standard passbands, (V,i) (Bessell/Gunn) vs. (V,I) (Cousins), are not negligible but they are safely handled by the transformation equations.
If possible, several nights of the same observing campaign were reduced simultaneously in order to obtain more accurate values for the zeropoints and transformation coefficients. Nights with neutral density filters were always treated separately. Colour corrections were linear. A breakpoint in colour was introduced when necessary, allowing different colour corrections to be used for blue and red stars.
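In schematic form (an added sketch; the actual model also allowed colour breakpoints and simultaneous multi-night solutions), the nightly calibration amounts to a linear least-squares problem:

```python
# Sketch: solve v_inst - V_std = Z + k * X + c * (B - V) for the zero
# point Z, extinction coefficient k and colour term c from standard stars.
import numpy as np

def solve_night(v_inst, V_std, airmass, BV):
    A = np.column_stack([np.ones_like(airmass), airmass, BV])
    (Z, k, c), *_ = np.linalg.lstsq(A, v_inst - V_std, rcond=None)
    return Z, k, c

def standardize(v_inst, airmass, BV, Z, k, c):
    return v_inst - Z - k * airmass - c * BV     # extinction-corrected V
```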
In general, transformation errors were shown to be of the order of 0.02-0.03 mag (Lampens et al. 1997a ).
Instrumental global magnitudes (for the system) have also been computed for all observed programme stars. Individual component magnitudes were then derived from these and the differential magnitudes obtained earlier. All component magnitudes have been subsequently transformed into the Cousins standard system, after correction for the extinction. Errors on the CCD global magnitudes are comparable to those from photoelectric photometry (a few millimag in good conditions): they come from the photon statistics. On the other hand, errors on the differential magnitudes are introduced through the fitting procedure and are somewhat larger: they are much reduced by taking a series of exposures (and depend on angular separation as well as on the difference itself, Lampens et al. 1997a ). These two error sources contribute differently to the errors on the component magnitudes that will be listed in the forthcoming papers of the series.
## 5 Prospects and conclusions
### 5.1 First results and prospects
First but preliminary results have already been presented on various occasions (Oblak et al. 1993b ; Oblak et al. obl96 (1996); Lampens et al. 1997a ; Oblak et al. obl97 (1997)). The data papers should soon follow. A detailed comparison with the recently published Hipparcos data will be done after completion of the reduction for all missions. In addition, we will publish a list of astrometric calibration double stars: these are the wide double stars of our lists with steady configuration that can be used anywhere on the sky for estimating the scale and the orientation of CCD cameras.
### 5.2 General conclusions
1. We obtained high-quality astrometry and all-sky multi-colour photometry for the components of intermediate visual double stars using small telescopes of the 1m class equipped with a CCD. The final accuracy level of our data matches that of the Hipparcos mission for systems with angular separations in the range 2″ to 12″. This has been possible thanks to the introduction and application of a strict protocol combined with the common usage of a dedicated reduction procedure.
2. We used this specific reduction tool for images containing up to 10 ”components” and for separations ranging from very large (over 30″) to very close, i.e. down to the limit imposed by the seeing disk. The method proved to be superior to a reduction by means of the cluster reduction package DAOPHOT (also available in the MIDAS software).
3. We were able to provide a priori ground-based values for the geometric configuration of hundreds of ”intermediate” visual double and multiple systems for the HIPPARCOS double star reduction: for several systems with angular separations above 1″, this new information, given along with the individual colour indices of the components, made it possible to remove the ambiguities generated by the grid step ($``$ 1.2″) and thus to improve the reliability of the Hipparcos Catalogue solution. These results were also valuable for providing a good starting point during the double star re-reduction by the Hipparcos Reduction Consortia (Falin & Mignard fal98 (1998)).
4. Such observations allow one to deduce a very sparsely known quantity among close visual binaries: the colour difference between individual components. This information has been very useful for comparison with the data from the Hipparcos mission. Component colour indices are important astrophysical parameters that are missing in too many studies of double and multiple stars: however, they are easily and accurately obtained with simple means, in excellent photometric conditions, for those systems with separations in the intermediate range. The determination of the distribution functions of true separations, mass and luminosity ratios, and differences of temperatures for a significant sample of double stars are obvious future applications, profitable in the domains of e.g. stellar formation and modeling of double stars. Of course, we will include the $``$ 11% of (wide) systems already having good and complete photometry in such studies.
###### Acknowledgements.
We gratefully acknowledge the allocation of telescope time by the European Southern Observatory as well as the Observatoire de Haute-Provence and the Calar Alto, La Palma and Jungfraujoch Observatories during the full length of the programme. The network Réseau Européen des Laboratoires: Etoiles Doubles Visuelles was supported in 1992 by the French Ministère de la Recherche et de la Technologie. We particularly thank J.L. Falin of the FAST reduction team for communicating some preliminary Hipparcos results. We appreciate the help of our colleagues N. Argue, P. Brosche, J. Dommanget, A. Duquennoy, G. Jasniewicz, M. Geffert, M. Grenon, J.C. Mermilliod and F. Mignard for helpful discussions within the Network. We thank the referee, Prof. P. Brosche, for many helpful suggestions. EO acknowledges financial support from the French Ministère des Affaires Etrangères for the programmes Alliance with the United Kingdom and Procope with Germany. PL and JC acknowledge funding by project G.0265.97 of the Fonds voor Wetenschappelijk Onderzoek (FWO).
# CCD PHOTOMETRY OF FAINT VARIABLE STARS IN THE GLOBULAR CLUSTER NGC 6752

Based on observations collected at the Las Campanas Observatory of the Carnegie Institution of Washington.
## 1 Introduction
NGC 6752 is a medium-rich globular cluster whose proximity, low reddening and relatively high galactic latitude ($`r3.8`$ kpc, $`E(BV)=0.04`$, $`b=25.6`$; Harris 1996) make it an excellent object for detailed studies. The cluster was selected as one of the targets of an ongoing survey for eclipsing binaries in globular clusters (Kaluzny, Thompson & Krzeminski 1997). The ultimate goal of the project is to use observations of detached eclipsing binaries to determine the ages and distances of globular clusters (Paczyński 1997), and to study the binary star fraction in these clusters.
Until recently only 3 variable stars were known in NGC 6752 (Hogg 1973; Clement 1996 and references therein). One of these is a population II Cepheid and there is insufficient information on the other two to define their types. The horizontal branch of NGC 6752 is blue and the cluster contains no RR Lyr stars. Three photometric studies of variable and binary stars based on HST data have been published during the last three years. Shara et al. (1995) reported null results in a search for short period variables in the core region of NGC 6752, while Bailyn et al. (1996) identified 3 candidate cataclysmic variables, also in the core of the cluster. From an analysis of the broadened, asymmetric main sequence in NGC 6752, Rubenstein & Bailyn (1997) determined that the binary fraction is probably in the range 15%–38% within the core radius.
In this contribution we present an analysis of CCD photometry of NGC 6752. This data set is best suited for a search for variable stars in the outer parts of the cluster. A separate paper will be devoted to the analysis of photometry for the central part of the cluster based on data obtained in 1998 with the 2.5-m du Pont telescope (Kaluzny et al., in preparation).
## 2 Observations and Data Reduction
Time-series photometry of NGC 6752 was obtained during the interval 1996 June 23 – 1997 September 15 with the 1.0m Swope telescope at Las Campanas Observatory. In 1996 a Loral CCD was used as the detector, with a scale of 0.435 arcsec/pixel and a field of view of $`14.8\times 14.8`$ arcmin. Two cameras were used in 1997. The first (LCO camera SITe1) has a field of view $`23.8\times 23.8`$ arcmin with a scale of 0.70 arcsec/pixel. The second (LCO camera SITe3) has a field of view of $`14.8\times 22.8`$ arcmin with a scale of 0.435 arcsec/pixel. In all cases the observations were approximately centered on the cluster. A total of 539 $`V`$-band images, 42 $`B`$-band images, and 2 $`U`$-band images were collected on 14 nights. Exposure times were sufficiently long – ranging from 100 to 300 s for the $`V`$-band, depending on the seeing – to ensure accurate photometry for stars located 2-3 mag below the cluster turnoff. The cluster was monitored for 28h35m, 13h30m and 12h00m with the SITe3, Loral and SITe1 cameras, respectively.
Instrumental photometry was extracted using DoPHOT (Schechter, Mateo & Saha 1993). We used DoPHOT in the fixed-position mode, with the stellar positions measured on ”template” images (either the best images obtained during a given run or a combination of the 2-3 best images). For each one of the CCD cameras used in this survey a separate data base was constructed using procedures described in detail in Kaluzny et al. (1996). The total number of stars included in the $`V`$ filter data bases for the SITe1, SITe3 and Loral cameras was 45049, 43882 and 22957, respectively. The quality of the derived photometry is illustrated in Fig. 1 in which we have plotted the $`rms`$ deviation versus average magnitude for stars measured with the SITe3 camera. This plot includes 31442 stars with $`12.1<V<20.8`$ with at least 74 observations. Photometry for stars with $`V<13.5`$ is poor since these stars were frequently over-exposed. To select potential variables we employed three methods, as described in some detail in Kaluzny et al. (1996). $`V`$-band light curves showing possible periodic signals or smooth changes on time scales of weeks were selected for further examination. Eleven certain variables were identified in this way. Note that the three variables listed in Clement (1996) are all saturated on our CCD frames.
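One simple selection criterion of this kind can be sketched as follows (an illustration assuming a robust scatter estimate, not the exact algorithms used): flag stars whose light-curve rms lies well above the typical rms of stars of similar magnitude.

```python
# Hedged sketch: select candidate variables as stars whose rms scatter
# exceeds the median rms at their magnitude by several robust sigmas.
import numpy as np

def candidate_variables(mag, rms, nbins=30, nsigma=4.0):
    edges = np.linspace(mag.min(), mag.max(), nbins + 1)
    idx = np.clip(np.digitize(mag, edges) - 1, 0, nbins - 1)
    flagged = np.zeros(mag.size, dtype=bool)
    for b in range(nbins):
        sel = idx == b
        if sel.sum() < 5:                        # too few stars in the bin
            continue
        med = np.median(rms[sel])
        sig = 1.4826 * np.median(np.abs(rms[sel] - med))   # robust sigma
        flagged[sel] = rms[sel] > med + nsigma * sig
    return flagged
```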
The instrumental photometry was transformed to the standard $`BV`$ system using observations of standard stars from Landolt (1992), leading to relations of the form:
$`v=a_1+V+a_2\times (BV)`$ (1)
$`b=a_3+B+a_4\times (BV)`$ (2)
The linear coefficients $`a_2`$ and $`a_4`$ were separately derived for each of the CCD cameras. The additive constants $`a_1`$ and $`a_3`$ were derived based on $`BV`$ photometry of 12 secondary standards selected from Cannon & Stobie (1973). Average values of the $`BV`$ color were used when transforming the observations of the variables from instrumental $`v`$ magnitudes to standard $`V`$ magnitudes. This procedure leads to systematic errors not exceeding 0.003 mag.<sup>2</sup><sup>2</sup>2The coefficient $`a_2`$ in Eq. 1 had values from –0.02 to 0.03 depending on the CCD. None of the detected variables has an observed variation of $`BV`$ exceeding 0.1 mag. Hence, systematic errors due to the adoption of an average $`BV`$ color in Eq. 1 do not exceed 0.003 mag.
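The coefficients entering Eqs. 1–2 can be obtained from the standard-star observations by ordinary linear least squares; a minimal sketch (array names hypothetical) is:

```python
import numpy as np

def fit_v_transformation(v_inst, V_std, B_std):
    """Fit Eq. 1, v = a1 + V + a2*(B-V), i.e. regress the residual
    (v - V) of the Landolt standards on their (B-V) color."""
    color = B_std - V_std
    A = np.vstack([np.ones_like(color), color]).T
    (a1, a2), *_ = np.linalg.lstsq(A, v_inst - V_std, rcond=None)
    return a1, a2
```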
On the night of Jun 26, 1997 we used the SITe1 camera to secure two $`600`$-sec $`U`$-band frames centered on the cluster. These frames, supplemented with a pair of short $`BV`$ exposures ($`V`$ 25-sec, $`B`$ 35-sec) and a pair of long $`BV`$ exposures ($`V`$ 120-sec, $`B`$ 150-sec), were used to derive $`UBV`$ photometry for a large sample of stars from the cluster field. The color-magnitude diagram based on these data is discussed in Sec. 4. The transformation from the instrumental to the standard $`UBV`$ system was determined using observations of several Landolt (1992) fields obtained over the whole observing sub-run:
$`v=c_1+V+0.04\times (BV)`$ (3)
$`bv=c_2+0.912\times (BV)`$ (4)
$`ub=c_3+0.957\times (UB)`$ (5)
The zero points of our $`UBV`$ photometry of NGC 6752 were derived from observations of secondary standards from Cannon & Stobie (1973).
## 3 Results for variables
In Table 1 we list equatorial coordinates of the 11 newly identified variables. Approximate angular distances of the variables from the cluster center are given in the 4th column. The last column gives a variability type for each object (ECL – eclipsing binary; EW – contact binary; EA – detached eclipsing binary; SX – SX Phe variable). The limiting radius of the cluster is estimated to be 31 $`arcmin`$ (Webbink 1985) and all variables from Table 1 are located within this radius.
Variables V4, V5 and V11, which are located at large radii from the cluster center, were outside the field of view of the SITe3 and Loral cameras and were observed only with the SITe1 camera. Variable V10 was outside the field of view of the Loral camera. Variables V12 and V13, which are located relatively close to the projected cluster center, are absent in the data base for the SITe1 camera because of crowding. All other variables are present in all data sets. Finding charts for the 11 variables are given in Figs. 2 and 3.
Variable V9 is the brighter component of an unresolved blend of two images. However, this close visual pair could be resolved on images obtained with the du Pont telescope. The fainter component has a $`V`$-magnitude of $`V=17.65`$ and color $`BV=0.45`$, and we used these values to decompose the observations of the combined image obtained with the Swope telescope.
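The decomposition amounts to subtracting the companion's flux from the combined flux before converting back to a magnitude; schematically (using the companion magnitude quoted above):

```python
import numpy as np

def deblend(v_blend, v_companion=17.65):
    """V magnitude of the brighter star after removing a companion
    of known magnitude from an unresolved blend."""
    f_blend = 10.0 ** (-0.4 * v_blend)
    f_comp = 10.0 ** (-0.4 * v_companion)
    return -2.5 * np.log10(f_blend - f_comp)
```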
Figure 4 shows the location of all of the newly identified variables on the cluster color-magnitude diagram (CMD). For the SX Phe stars the plotted positions correspond to the intensity-averaged magnitudes. For the eclipsing variables the magnitudes at maximum light are plotted. The phased light curves of variables V4-9 and V11-14 are shown in Fig. 5. The periods of variability were derived using an algorithm based on an analysis of variance statistic introduced by Schwarzenberg-Czerny (1989, 1991). Variable V10 showed flat light curves on all but one night. On the night of June 7 1997 that variable showed an eclipse event lasting more than 5 hours. Figure 6 shows time-domain light curves of V10 obtained on the nights of June 2 1997 and June 7 1997.
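A phase-binned analysis-of-variance statistic in the spirit of Schwarzenberg-Czerny can be sketched as follows (a simplified illustration, not the exact statistic used in the actual reductions):

```python
import numpy as np

def aov_statistic(t, mag, period, nbins=10):
    """Between-bin to within-bin variance ratio of the phased light
    curve; a large value flags a good trial period."""
    phase = (t / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    grand = mag.mean()
    between = within = 0.0
    for b in range(nbins):
        m = mag[idx == b]
        if m.size == 0:
            continue
        between += m.size * (m.mean() - grand) ** 2
        within += ((m - m.mean()) ** 2).sum()
    return (between / (nbins - 1)) / (within / (mag.size - nbins))

def best_period(t, mag, trial_periods):
    scores = [aov_statistic(t, mag, p) for p in trial_periods]
    return trial_periods[int(np.argmax(scores))]
```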
### 3.1 SX Phe stars
The periods, average colors, intensity averaged $`V`$ brightnesses and full amplitudes of the 3 identified SX Phe stars are listed in Table 2. All three variables are candidate blue straggler stars. The observed luminosities of these SX Phe stars are consistent with membership in NGC 6752. This is demonstrated in Fig. 7 which shows the positions of the variables in a period $`vs.`$ absolute magnitude diagram. The standard relation for SX Phe stars from McNamara (1997) is also shown. Absolute magnitudes for the SX Phe stars were calculated assuming a distance modulus of $`(m-M)_V=13.02`$ (Harris 1996).
While V7 is clearly a fundamental mode pulsator, based on its amplitude and asymmetric light curve, a classification of V12 and V13 is more problematic. Observational error and the low amplitudes of these stars combine to complicate a determination of the asymmetry of the light curves. In addition, as McNamara (1997) points out, classification of a star as a first overtone pulsator based only on a low amplitude and a symmetrical light curve is itself questionable. We note here simply that the properties of these three SX Phe stars are consistent with pulsation in the fundamental mode and membership in NGC 6752.
### 3.2 Eclipsing binaries
Our sample of newly identified variables includes 8 eclipsing binaries. Table 3 lists some basic photometric characteristics of their light curves. Seven objects can be classified as contact binaries (EW type eclipsing binaries according to the GCVS scheme) and one – variable V10 – is most likely a detached eclipsing binary. We have managed to catch just one eclipse-like event for V10. Our data indicate that the orbital period of that binary is longer than 2 days. The position of V10 on the cluster CMD (see Fig. 4) does not support its membership in NGC 6752 unless the error in $`BV`$ is unusually large. In addition, the variable is located relatively far from the cluster center. Determination of the radial velocity and/or proper motion of V10 is necessary in order to clarify the membership status of this potentially important binary.
Of the seven identified contact binaries only V8 is located among the blue stragglers on the cluster CMD. Variable V6 occupies a position slightly below the cluster main-sequence. V11 is located about 0.2 mag to the red of the subgiant branch. Its position on the CMD indicates that it does not belong to the cluster. The remaining 3 EW systems are located to the red of the cluster main-sequence. We have applied the absolute brightness calibration established by Rucinski (1995) to estimate $`M_V`$ for the newly identified contact binaries<sup>3</sup><sup>3</sup>3Rucinski & Duerbeck (1997) have derived a new version of the calibration $`M_V=M_V(logP,BV)`$ using a sample of EW systems with $`HIPPARCOS`$ parallaxes. However, that new calibration is based on stars from the solar neighbourhood and does not include a metallicity term.. That calibration gives $`M_V`$ as a function of period, unreddened color and metallicity:
$`M_V=-2.38logP+4.26(BV)_0+0.28-0.3[\mathrm{Fe}/\mathrm{H}]`$ (6)
where $`P`$ is the period in days.
We adopt $`[\mathrm{Fe}/\mathrm{H}]=-1.61`$ and $`E(BV)=0.04`$ for NGC 6752 (Harris 1996). The formal errors of the estimated values of $`M_V`$ are about 0.5 magnitude. Figure 8 shows a period versus apparent distance modulus diagram for the 7 contact binaries from the cluster field. The apparent distance modulus was calculated as the difference between $`V_{max}`$ and $`M_V^{cal}`$ for each system. The data presented in Fig. 8 indicate that only V14 can be considered a possible cluster member. The remaining 6 EW systems are most likely background/foreground objects.
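In practice the comparison in Fig. 8 reduces to evaluating Eq. 6 for each system and subtracting the result from $`V_{max}`$; a sketch with the adopted cluster parameters:

```python
import numpy as np

def apparent_modulus(P, BV, Vmax, EBV=0.04, FeH=-1.61):
    """Apparent distance modulus Vmax - M_V from the Rucinski (1995)
    calibration of Eq. 6, for a contact binary of period P (days)."""
    BV0 = BV - EBV                                    # unreddened color
    MV = -2.38 * np.log10(P) + 4.26 * BV0 + 0.28 - 0.3 * FeH
    return Vmax - MV
```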
Although V9 is classified as a contact binary with $`P=0.36`$ d we cannot exclude the possibility that the true period is $`P=0.72`$ d and that the variable is a single spotted star, presumably of type RS CVn. Its position on the cluster CMD suggests that in this case the star is a subgiant belonging to the cluster (see Fig. 4).
## 4 The color-magnitude diagrams
Since the pioneering studies by Alcaino (1972) and Cannon & Stobie (1973) it has been known that the horizontal branch of NGC 6752 is strongly dominated by stars located to the blue side of the instability strip. The CMD of the cluster was studied in detail by Buonanno et al. (1986) based on deep photographic photometry. They showed that the horizontal branch of NGC 6752 spans about 4 magnitudes in $`V`$, reaching $`V\sim 18.0`$ ($`M_V\sim 5.0`$) at its faint end. Detailed studies of the properties of the hot subdwarfs forming the extended horizontal branch (EHB) of NGC 6752 have been published by Heber et al. (1986) and Moehler, Heber & Rupprecht (1997).
As a by-product of our variability program we have obtained medium-deep CMDs for the surveyed fields. In Fig. 9 we present a $`V/BV`$ CMD based on the pair of ”template” images obtained with the SITe3 camera. The photometry was extracted using the Daophot/Allstar package (Stetson 1987). The CMD shows several features of the EHB discussed in some detail by Buonanno et al. (1986). In particular, we confirm the presence of stars in the EHB gap between $`V\sim 16.0`$ and $`V\sim 17.0`$, and an apparent faint limit to the EHB stars at $`V\sim 18.0`$. We comment briefly on two features of this CMD that bear on the origin and evolution of EHB stars. The first is that there are several stars located slightly to the red of the EHB in Fig. 9. Although we cannot exclude the possibility that some of them are field objects, we find that these stars are strongly concentrated toward the cluster center. These stars are candidate composite systems, consisting of an EHB star plus a red dwarf. We are planning a more detailed discussion of these systems in a forthcoming paper based on data obtained with the 2.5m du Pont telescope (Kaluzny et al. in preparation). The CMD in Fig. 9 also shows two blue stars with $`BV\sim -0.25`$ which form an apparent faint extension of the EHB of the cluster. These stars are marked with triangles in Fig. 9 and their coordinates as well as $`UBV`$ photometry are listed in Table 4. In Fig. 10 we present a $`V/UB`$ CMD based on the data collected with the SITe1 camera. The positions of the two faint blue stars in Fig. 9 are marked. Faint blue stars located below the EHB have been observed in other stellar clusters. Kaluzny & Rucinski (1995) noted the presence of several $`UV`$-bright stars with $`M_V>8`$ in the center of NGC 6791 and more recently Cool et al. (1998) identified similar objects in the core of NGC 6397. They argue that these stars are either low-mass helium white dwarfs or very-low-mass core-He-burning stars. We note that both faint blue objects discovered in the field of NGC 6752 are good targets for spectroscopic follow-up with large ground-based telescopes.
Photometry and equatorial coordinates of all stars plotted in Figs. 9 and 10 are available on request from the second author of this paper.
## 5 Conclusions
We have used time series CCD observations to identify eleven new variables in the direction of the globular cluster NGC 6752. Three of these variables are SX Phe stars which are likely to be cluster members. Six out of the seven identified contact binaries are probably field objects. One candidate detached eclipsing binary has been discovered and follow-up observations are planned to get complete light curves for this potentially important variable. As a side-result we obtained $`UBV`$ photometry for a large sample of stars from the cluster field, and we note the presence of two faint blue objects located below the apparent cut-off of the EHB of the cluster.
JK, WP and WK were supported by the Polish Committee of Scientific Research through grant 2P03D-011-12 and by NSF grant AST-9528096 to Bohdan Paczyński. We are indebted to Dr. B. Paczyński for his long-term support to this project. Thanks are due to Randy Phelps for taking some data which were used in this paper. Dr. Dona Dinescu kindly provided us with positional data for stars from the NGC 6752 field.
# On the estimate of the spin-gap in quasi-1D Heisenberg antiferromagnets from nuclear spin-lattice relaxation
\[
## Abstract
We present a careful analysis of the temperature dependence of the nuclear spin-lattice relaxation rate $`1/T_1`$ in gapped quasi-1D Heisenberg antiferromagnets. It is found that in order to estimate the value of the gap correctly from $`1/T_1`$ the peculiar features of the dispersion curve for the triplet excitations must be taken into account. The temperature dependence of $`1/T_1`$ due to two-magnon processes is reported for different values of the ratio $`r=J_{\perp }/J_{\parallel }`$ between the superexchange constants in a 2-leg-ladder. As an illustrative example we compare our results to the experimental findings for <sup>63</sup>Cu $`1/T_1`$ in the dimerized chains and 2-leg-ladders contained in Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub>.
\]
The many peculiar aspects of quasi one-dimensional quantum Heisenberg antiferromagnets (1DQHAF) have stimulated an intense research activity during the last decade . Moreover, the recent observation of superconductivity in the 2-leg-ladder compound (Sr,Ca)<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub> and the occurrence of a phase separation in high temperature superconductors (HTSC) in hole-rich and hole-depleted regions analogous to spin-ladders , have brought a renewed interest in 1DQHAF. One of the relevant issues is whether the spin-gap observed in some of these 1DQHAF is related to the one observed in the normal state of HTSC . For these reasons many NMR groups working on HTSC have focused their attention on these systems and on the determination of the spin-gap values in pure and hole-doped compounds . However, since the early measurements, a clear discrepancy between the values for the gap ($`\mathrm{\Delta }`$) estimated by means of nuclear spin-lattice relaxation ($`1/T_1`$) and susceptibility (or Knight shift) measurements has emerged . In many compounds the gap estimated by means of $`1/T_1`$ using the activated form $`1/T_1\propto exp(-\mathrm{\Delta }/T)`$ derived by Troyer et al. , turned out to be about $`1.5`$ times larger than the one estimated by using susceptibility or inelastic neutron scattering measurements (see Tab. 1). Several models, theoretical or phenomenological , have tried to explain these differences; however, while they were able to describe the findings for some compounds, they were not able to explain the results obtained in other gapped 1DQHAF. In fact, as can be observed in Tab. 1, while for certain 2-leg-ladders an agreement between the gap estimated from $`T_1`$ and through other techniques is found, in several other systems it is not . It is interesting to observe that the 1DQHAF where the agreement is observed are the ones in the strong coupling limit, namely either dimerized chains or 2-leg-ladders with a superexchange coupling along the rungs much larger than the one along the chains. Therefore, one can conclude that the disagreement is not always present and has to be associated with the peculiar properties of the spin excitations in each system, i.e. with the form of the dispersion curve for the triplet excitations. In this manuscript we will show that the discrepancy relies essentially on the use for $`1/T_1`$ of an expression which is valid in general only at very low temperatures ($`T\lesssim 0.2\mathrm{\Delta }`$), and whose applicability at higher temperatures depends on the form of the dispersion curve for the triplet spin excitations. In particular, for dimerized chains the validity of a simple activated expression extends to higher temperatures than for a 2-leg-ladder. As an illustrative example we will analyse the temperature dependence of $`1/T_1`$ for the <sup>63</sup>Cu nuclei in the dimer chains (Cu(1)) and in the 2-leg-ladders (Cu(2)) contained in Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub> .
In the following we will consider the contribution to nuclear relaxation arising from 2-magnon Raman processes only. Namely, we will assume that although the system is not in the very low temperature limit ($`T\ll \mathrm{\Delta }`$), the temperature is low enough ($`T\lesssim \mathrm{\Delta }`$) so that 3-magnon processes as well as the spin damping can be neglected. If the large value of the gap derived by means of $`1/T_1`$ was due to these contributions, which are proportional to $`exp(-2\mathrm{\Delta }/T)`$, one should observe some discrepancy also for the 1DQHAF in the strong coupling limit, at variance with the experimental findings (see Tab. 1). The approach we use follows exactly the same steps outlined in the paper by Troyer et al. where, by assuming a quadratic dispersion for the triplet excitations (valid for $`T\ll \mathrm{\Delta }`$), namely
$$E(k_x)=1+\alpha (k_x-\pi )^2$$
(1)
in units of $`\mathrm{\Delta }`$, they found that
$$1/T_1=\frac{3\gamma ^2A_o^2}{4\alpha \pi ^2}\frac{\hbar }{k_B\mathrm{\Delta }}e^{\omega _o/2T}e^{-\mathrm{\Delta }/T}(0.80908-\mathrm{ln}(\omega _o/T))$$
(2)
with $`\omega _o`$ the resonance frequency and $`A_o`$ the hyperfine coupling constant. We remark that there is a factor 4 difference with respect to the equation reported by Troyer et al. , which is related to a different definition of the hyperfine hamiltonian and of the dispersion curve. The values of the hyperfine constants are $`A_o=120`$ kOe for <sup>63</sup>Cu(2) and $`A_o=29`$ kOe for <sup>63</sup>Cu(1) . In the case of a general form for the dispersion relation, by considering that the low-energy processes are the ones corresponding to an exchanged momentum $`q\simeq 0`$ and $`q\simeq 2k_x`$, one can write the contribution related to 2-magnon Raman processes in the form
$$1/T_1=\frac{3\gamma ^2A_o^2}{\pi ^2}\frac{\hbar }{k_B\mathrm{\Delta }}\int _0^\pi dk_x\frac{e^{-E(k_x)/T}}{\sqrt{v^2(k_x)+2\omega _o\frac{\partial v(k_x)}{\partial k_x}}}$$
(3)
where $`E(k_x)`$ is the dispersion relation for the triplet spin excitations, normalized to the gap value, whereas $`v(k_x)=\partial E(k_x)/\partial k_x`$. For a 2-leg-ladder a general form describing $`E(k_x)`$ is
$$E(k_x)^2=E(k_x=0)^2cos^2(\frac{k_x}{2})+sin^2(\frac{k_x}{2})+c_osin^2(k_x)$$
(4)
which is strongly dependent on the ratio $`r=J_{\perp }/J_{\parallel }`$ between the superexchange coupling along the rungs and the one along the legs. We have taken the dispersion curves derived by Oitmaa et al. from an extensive series study and estimated the parameters $`E(k_x=0)`$ and $`c_o`$ accordingly. Then, starting from Eqs. 3 and 4, by means of a numerical integration one can derive directly $`1/T_1`$ for a 2-leg-ladder for different values of $`r`$. It should be remarked that for $`r`$ of the order of unity the dispersion curve for the triplet excitations has a maximum around a wave-vector $`k_m`$ (see Fig. 1) and also low-energy processes from $`k_m-k_x`$ to $`k_m+k_x`$ could contribute to the relaxation. However, these processes should become relevant only at $`T\sim \mathrm{\Delta }`$, where also 3-magnon processes and the damping of the spin excitations become relevant.
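As an illustration, the numerical evaluation of Eq. (3) with the dispersion (4) can be sketched as follows (Python, in units $`\hbar =k_B=\mathrm{\Delta }=1`$, dropping the overall prefactor; the values of $`E(k_x=0)`$ and $`c_o`$ below are purely illustrative, and a small floor regularizes the integrable singularities at the band extrema):

```python
import numpy as np
from scipy.integrate import quad

def dispersion(k, E0=3.0, c0=1.0):
    """E(k)^2 = E(0)^2 cos^2(k/2) + sin^2(k/2) + c0 sin^2(k), Eq. (4),
    in units of the gap E(pi) = 1; E0 and c0 are illustrative only."""
    return np.sqrt(E0**2 * np.cos(k / 2)**2 + np.sin(k / 2)**2
                   + c0 * np.sin(k)**2)

def t1_inverse(T, omega0=1e-4, dk=1e-4):
    """Relative 1/T1 from Eq. (3) at temperature T."""
    def integrand(k):
        v = (dispersion(k + dk) - dispersion(k - dk)) / (2 * dk)
        dv = (dispersion(k + dk) - 2 * dispersion(k)
              + dispersion(k - dk)) / dk**2
        # floor keeps the denominator positive near zeros of v(k)
        den = np.sqrt(max(v**2 + 2 * omega0 * dv, omega0**2))
        return np.exp(-dispersion(k) / T) / den
    value, _ = quad(integrand, 0.0, np.pi, limit=400)
    return value
```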
In Fig. 2 we report the results obtained on the basis of Eqs. 3 and 4 for <sup>63</sup>Cu(2) for different values of the superexchange anisotropy $`r`$. One observes that while for the dimerized chains, corresponding to the limit $`r\gg 1`$, $`1/T_1`$ follows an activated behavior as the one given in Eq. 2, for the 2-leg-ladders with $`r\sim 1`$ one observes some differences with respect to the simple activated behavior already at temperatures $`T\gtrsim \mathrm{\Delta }/4`$. This analysis points out that for a 2-leg-ladder with $`r`$ of the order of unity it is not correct to estimate the gap from $`1/T_1`$ by using Eq. 2, at least for $`T\gtrsim \mathrm{\Delta }/4`$. In fact, it is noticed that the quadratic approximation for the dispersion curve becomes valid for a more restricted range of $`k_x`$ around $`\pi /a`$ as $`r`$ decreases (see Fig. 1). This seems to contradict the results reported in Fig. 2a, where the departure from the quadratic approximation is found more pronounced for $`r=1`$ than for $`r=0.5`$. However, this artifact is related to the choice of the horizontal scale, namely to have reported $`1/T_1`$ vs. $`\mathrm{\Delta }/T`$, since $`\mathrm{\Delta }`$ increases with $`r`$ . In fact, if we report $`1/T_1`$ vs $`J_{\parallel }/T`$ (Fig. 2b), with $`J_{\parallel }`$ independent of $`r`$, one immediately notices that the deviation from the quadratic approximation starts at lower temperatures for the lowest value of $`r`$.
One can then analyse the experimental data on the basis of Eq. 3 by taking the value for the gap estimated by other techniques and check if there is an agreement. We have fit the experimental data for <sup>63</sup>Cu(2) (Fig. 3b) and <sup>63</sup>Cu(1) (Fig.3a) in Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub> by taking $`\mathrm{\Delta }=450`$ K and $`\mathrm{\Delta }=120`$ K, respectively, as estimated from susceptibility or NMR shift data . In both cases we find a good agreement between theory and experiment by taking $`1\ge r\ge 0.5`$ for the ladder site and $`r\gg 1`$ for the chain site. If the data for <sup>63</sup>Cu(2) were fitted according to Eq. 2 one would derive a value for the gap around $`650`$ K, a factor $`\simeq 1.5`$ larger than the actual value (see Tab. 1).
For $`r=1`$ also a quantitative agreement with the experimental data for <sup>63</sup>Cu(2) is found. However, this fact seems to be at variance with the estimates by Johnston based on the analysis of DC susceptibility data and with the recent findings by Imai et al. based on the study of <sup>17</sup>O NMR shift anisotropy, where a value $`r\simeq 0.5`$ was derived. If we take this value for $`r`$ we find that the experimental data are a factor $`\simeq 8`$ larger than expected. This disagreement could originate, at least partially, from having considered for the $`q_x=2k_x`$ processes the values for the $`|\langle -k_x|S_z|k_x\rangle |^2`$ matrix elements estimated by Troyer for the case $`r=1`$ . One has also to mention that the estimate of the hyperfine coupling constants could suffer from some uncertainties, particularly the contribution from the transferred hyperfine interaction with the neighbouring Cu<sup>2+</sup> spins. This contribution should be particularly relevant for the <sup>63</sup>Cu(1) nuclei while it should be small for <sup>63</sup>Cu(2). However, it must be recalled that since $`1/T_1`$ depends quadratically on the hyperfine coupling constant, even for <sup>63</sup>Cu(2) sizeable corrections can be expected. Finally it has to be observed that in these systems the low-frequency divergence of $`1/T_1`$ is cut because of the finite coupling among the ladders (or chains), introducing another correction to the absolute value of $`1/T_1`$.
The low-frequency divergence of $`1/T_1`$ was found to follow the logarithmic behavior reported by Troyer et al. (see also Eq. 2) and does not change upon varying the anisotropy factor $`r`$, for $`r>0`$. In fact, the form of this divergence is related to the shape of the dispersion curve close to $`k_x=\pi /a`$, where it is always correctly approximated by a quadratic form for $`r>0`$.
In conclusion we have presented a careful analysis of the problem of estimating the spin-gap from nuclear spin-lattice relaxation measurements in 1DQHAF. It is found that in order to estimate the gap correctly one should either perform the experiments at temperatures $`T\lesssim 0.2\mathrm{\Delta }`$, where in many cases other contributions to the relaxation process emerge , or use an appropriate expression for $`1/T_1`$ which takes into account the form of the dispersion curve for the triplet excitations. Then a good agreement for the gap value estimated by means of $`1/T_1`$ and other techniques is found, allowing also to derive information on the anisotropy of the superexchange constants.
We would like to thank D. C. Johnston for useful discussions. The research was carried out with the financial support of INFM and of INFN.
# State Vector Collapse Probabilities and Separability of Independent Systems in Hughston’s Stochastic Extension of the Schrödinger Equation
## Abstract
We give a general proof that Hughston’s stochastic extension of the Schrödinger equation leads to state vector collapse to energy eigenstates, with collapse probabilities given by the quantum mechanical probabilities computed from the initial state. We also show that for a system composed of independent subsystems, Hughston’s equation separates into similar independent equations for each of the subsystems, correlated only through the common Wiener process that drives the state reduction.
preprint: IASSNS-HEP-99/36 April, 1999
Send correspondence to:
Stephen L. Adler
Institute for Advanced Study
Olden Lane, Princeton, NJ 08540
Phone 609-734-8051; FAX 609-924-8399; email adler@ias.edu
A substantial body of work has addressed the problem of state vector collapse by proposing that the Schrödinger equation be modified to include a stochastic process, presumably arising from physics at a deeper level, that drives the collapse process. Although interesting models have been constructed, there so far has been no demonstration that for a generic Hamiltonian, one can construct a stochastic dynamics that collapses the state vector with the correct quantum mechanical probabilities. Part of the problem has been that most earlier work has used stochastic equations that do not preserve state vector normalization, requiring additional ad hoc assumptions to give a consistent physical interpretation.
Various authors have proposed rewriting the Schrödinger equation as an equivalent dynamics on projective Hilbert space, i.e., on the space of rays, a formulation in which the imposition of a state vector normalization condition is not needed. Within this framework, Hughston has proposed a simple stochastic extension of the Schrödinger equation, constructed solely from the Hamiltonian function, and has shown that his equation leads to state vector reduction to an energy eigenstate, with energy conservation in the mean throughout the reduction process. In the simplest spin-1/2 case, Hughston exhibits an explicit solution that shows that his equation leads to collapse with the correct quantum mechanical probabilities, but the issue of collapse probabilities in the general case has remained open. In this Letter, we shall give a general proof that Hughston’s equation leads to state vector collapse to energy eigenstates with the correct quantum mechanical probabilities, using the martingale or “gambler’s ruin” argument pioneered by Pearle . We shall also show that Hughston’s equation separates into independent equations of similar structure for a wave function constructed as the product of independent subsystem wave functions.
We begin by explaining the basic elements needed to understand Hughston’s equation, working in an $`n+1`$ dimensional Hilbert space. We denote the general state vector in this space by $`|z\rangle `$, with $`z`$ a shorthand for the complex projections $`z^1,z^2,\mathrm{},z^{n+1}`$ of the state vector on an arbitrary fixed basis. Letting $`F`$ be an arbitrary Hermitian operator, and using the summation convention that repeated indices are summed over their range, we define
$$(F)\equiv \frac{\langle z|F|z\rangle }{\langle z|z\rangle }=\frac{\overline{z}^\alpha F_{\alpha \beta }z^\beta }{\overline{z}^\gamma z^\gamma },$$
(2)
so that $`(F)`$ is the expectation of the operator $`F`$ in the state $`|z\rangle `$, independent of the ray representative and normalization chosen for this state. Note that in this notation $`(F^2)`$ and $`(F)^2`$ are not the same; their difference is in fact the variance $`[\mathrm{\Delta }F]^2`$,
$$[\mathrm{\Delta }F]^2=(F^2)-(F)^2.$$
(3)
We shall use two other parameterizations for the state $`|z\rangle `$ in what follows. Since $`(F)`$ is homogeneous of degree zero in both $`z^\alpha `$ and $`\overline{z}^\alpha `$, let us define new complex coordinates $`t^j`$ by
$$t^j=z^j/z^0,\overline{t}^j=\overline{z}^j/\overline{z}^0,j=1,\mathrm{},n.$$
(4)
Next, it is convenient to split each of the complex numbers $`t^j`$ into its real and imaginary part $`t_R^j,t_I^j`$, and to introduce a $`2n`$ component real vector $`x^a,a=1,\mathrm{},2n`$ defined by $`x^1=t_R^1,x^2=t_I^1,x^3=t_R^2,x^4=t_I^2,\mathrm{},x^{2n-1}=t_R^n,x^{2n}=t_I^n`$. Clearly, specifying the projective coordinates $`t^j`$ or $`x^a`$ uniquely determines the unit ray containing the unnormalized state $`|z\rangle `$, while leaving the normalization and ray representative of the state $`|z\rangle `$ unspecified.
As discussed in Refs. , projective Hilbert space is also a Riemannian space with respect to the Fubini-Study metric $`g_{\alpha \beta }`$, defined by the line element
$$ds^2=g_{\alpha \beta }d\overline{z}^\alpha dz^\beta \equiv 4\left(1-\frac{|\langle z|z+dz\rangle |^2}{\langle z|z\rangle \langle z+dz|z+dz\rangle }\right).$$
(6)
Abbreviating $`\overline{z}^\gamma z^\gamma \equiv \overline{z}z`$, a simple calculation gives
$$g_{\alpha \beta }=4(\delta _{\alpha \beta }\overline{z}z-z^\alpha \overline{z}^\beta )/(\overline{z}z)^2=4\frac{\partial }{\partial \overline{z}^\alpha }\frac{\partial }{\partial z^\beta }\mathrm{log}\overline{z}z.$$
(7)
Because of the homogeneity conditions $`\overline{z}^\alpha g_{\alpha \beta }=z^\beta g_{\alpha \beta }=0`$, the metric $`g_{\alpha \beta }`$ is not invertible, but if we hold the coordinates $`\overline{z}^0,z^0`$ fixed in the variation of Eq. (3a) and go over to the projective coordinates $`t^j`$, we can rewrite the line element of Eq. (3a) as
$$ds^2=g_{jk}d\overline{t}^jdt^k,$$
(9)
with the invertible metric
$$g_{jk}=\frac{4[(1+\overline{t}^{\ell }t^{\ell })\delta _{jk}-t^j\overline{t}^k]}{(1+\overline{t}^mt^m)^2},$$
(10)
with inverse
$$g^{jk}=\frac{1}{4}(1+\overline{t}^mt^m)(\delta _{jk}+t^j\overline{t}^k).$$
(11)
Reexpressing the complex projective coordinates $`t^j`$ in terms of the real coordinates $`x^a`$, the line element can be rewritten as
$`ds^2=`$ $`g_{ab}dx^adx^b,`$ (12)
$`g_{ab}=`$ $`{\displaystyle \frac{4[(1+x^dx^d)\delta _{ab}-(x^ax^b+\omega _{ac}x^c\omega _{bd}x^d)]}{(1+x^ex^e)^2}},`$ (13)
$`g^{ab}=`$ $`{\displaystyle \frac{1}{4}}(1+x^ex^e)(\delta _{ab}+x^ax^b+\omega _{ac}x^c\omega _{bd}x^d).`$ (14)
Here $`\omega _{ab}`$ is a numerical tensor whose only nonvanishing elements are $`\omega _{a=2j1b=2j}=1`$ and $`\omega _{a=2jb=2j1}=1`$ for $`j=1,\mathrm{},n`$. As discussed by Hughston, one can define a complex structure $`J_a^b`$ over the entire projective Hilbert space for which $`J_a^cJ_b^dg_{cd}=g_{ab},`$ $`J_a^bJ_b^c=-\delta _a^c`$, such that $`\mathrm{\Omega }_{ab}=g_{bc}J_a^c`$ and $`\mathrm{\Omega }^{ab}=g^{ac}J_c^b`$ are antisymmetric tensors. At $`x=0`$, the metric and complex structure take the values
$`g_{ab}=`$ $`4\delta _{ab},g^{ab}={\displaystyle \frac{1}{4}}\delta _{ab},`$ (15)
$`J_a^b=`$ $`\omega _{ab},\mathrm{\Omega }_{ab}=4\omega _{ab},\mathrm{\Omega }^{ab}={\displaystyle \frac{1}{4}}\omega _{ab}.`$ (16)
Returning to Eq. (1a), we shall now derive some identities that are central to what follows. Differentiating Eq. (1a) with respect to $`\overline{z}^\alpha `$, with respect to $`z^\beta `$, and with respect to both $`\overline{z}^\alpha `$ and $`z^\beta `$, we get
$`\langle z|z\rangle {\displaystyle \frac{\partial (F)}{\partial \overline{z}^\alpha }}=`$ $`F_{\alpha \beta }z^\beta -(F)z^\alpha ,`$ (18)
$`\langle z|z\rangle {\displaystyle \frac{\partial (F)}{\partial z^\beta }}=`$ $`\overline{z}^\alpha F_{\alpha \beta }-(F)\overline{z}^\beta ,`$ (19)
$`\langle z|z\rangle ^2{\displaystyle \frac{\partial ^2(F)}{\partial \overline{z}^\alpha \partial z^\beta }}=`$ $`\langle z|z\rangle [F_{\alpha \beta }-\delta _{\alpha \beta }(F)]+2z^\alpha \overline{z}^\beta (F)-\overline{z}^\gamma F_{\gamma \beta }z^\alpha -\overline{z}^\beta F_{\alpha \gamma }z^\gamma .`$ (20)
Writing similar expressions for a second operator expectation $`(G)`$, contracting in various combinations with the relations of Eq. (6a), and using the homogeneity conditions
$$\overline{z}^\alpha \frac{\partial (F)}{\partial \overline{z}^\alpha }=z^\beta \frac{\partial (F)}{\partial z^\beta }=\overline{z}^\alpha \frac{\partial ^2(F)}{\partial \overline{z}^\alpha \partial z^\beta }=z^\beta \frac{\partial ^2(F)}{\partial \overline{z}^\alpha \partial z^\beta }=0$$
(21)
to eliminate derivatives with respect to $`\overline{z}^0,z^0`$, we get the following identities,
$`i(FG-GF)`$ $`=i\langle z|z\rangle \left({\displaystyle \frac{\partial (F)}{\partial z^\alpha }}{\displaystyle \frac{\partial (G)}{\partial \overline{z}^\alpha }}-{\displaystyle \frac{\partial (G)}{\partial z^\alpha }}{\displaystyle \frac{\partial (F)}{\partial \overline{z}^\alpha }}\right)=2\mathrm{\Omega }^{ab}\partial _a(F)\partial _b(G),`$ (23)
$`(FG+GF)-2(F)(G)`$ $`=\langle z|z\rangle \left({\displaystyle \frac{\partial (F)}{\partial z^\alpha }}{\displaystyle \frac{\partial (G)}{\partial \overline{z}^\alpha }}+{\displaystyle \frac{\partial (G)}{\partial z^\alpha }}{\displaystyle \frac{\partial (F)}{\partial \overline{z}^\alpha }}\right)=2g^{ab}\partial _a(F)\partial _b(G),`$ (24)
$`(FGF)-(F^2)(G)`$ $`-(F)(FG+GF)+2(F)^2(G)`$ (26)
$`=\langle z|z\rangle ^2{\displaystyle \frac{\partial (F)}{\partial z^\alpha }}{\displaystyle \frac{\partial ^2(G)}{\partial \overline{z}^\alpha \partial z^\beta }}{\displaystyle \frac{\partial (F)}{\partial \overline{z}^\beta }}=2\nabla ^a(F)\nabla ^b(F)\nabla _a\nabla _b(G),`$
with $`\nabla _a`$ the covariant derivative with respect to the Fubini-Study metric. It is not necessary to use the detailed form of the affine connection to verify the right hand equalities in these identities, because since $`(G)`$ is a Riemannian scalar, $`\nabla _a\nabla _b(G)`$$`=\nabla _a\partial _b(G)`$, and since projective Hilbert space is a homogeneous manifold, it suffices to verify the identities at the single point $`x=0`$, where the affine connection vanishes and thus $`\nabla _a\nabla _b(G)=\partial _a\partial _b(G)`$. Using Eqs. (7a) and the chain rule we also find
$$\nabla ^a[(F^2)-(F)^2]\nabla _a(G)=\frac{1}{2}(F^2G+GF^2)-(F^2)(G)-(F)(FG+GF)+2(F)^2(G),$$
(27)
which when combined with the final identity in Eq. (7a) gives
$$\nabla ^a(F)\nabla ^b(F)\nabla _a\nabla _b(G)-\frac{1}{2}\nabla ^a[(F^2)-(F)^2]\nabla _a(G)=-\frac{1}{4}([F,[F,G]]),$$
(28)
the right hand side of which vanishes when the operators $`F`$ and $`G`$ commute .
Let us now turn to Hughston’s stochastic differential equation, which in our notation is
$$dx^a=[2\mathrm{\Omega }^{ab}\partial _b(H)-\frac{1}{4}\sigma ^2\nabla ^aV]dt+\sigma \nabla ^a(H)dW_t,$$
(30)
with $`W_t`$ a Brownian motion or Wiener process, with $`\sigma `$ a parameter governing the strength of the stochastic terms, with $`H`$ the Hamiltonian operator and $`(H)`$ its expectation, and with $`V`$ the variance of the Hamiltonian,
$$V=[\mathrm{\Delta }H]^2=(H^2)-(H)^2.$$
(31)
When the parameter $`\sigma `$ is zero, Eq. (8a) is just the transcription of the Schrödinger equation to projective Hilbert space. For the time evolution of a general function $`G[x]`$, we get by Taylor expanding $`G[x+dx]`$ and using the Itô stochastic calculus rules
$$[dW_t]^2=dt,[dt]^2=dtdW_t=0,$$
(33)
the corresponding stochastic differential equation
$$dG[x]=\mu dt+\sigma \nabla _aG[x]\nabla ^a(H)dW_t,$$
(34)
with the drift term $`\mu `$ given by
$$\mu =2\mathrm{\Omega }^{ab}\partial _aG[x]\partial _b(H)-\frac{1}{4}\sigma ^2\nabla ^aV\nabla _aG[x]+\frac{1}{2}\sigma ^2\nabla ^a(H)\nabla ^b(H)\nabla _a\nabla _bG[x].$$
(35)
Hughston shows that with the $`\sigma ^2`$ part of the drift term chosen as in Eq. (8a), the drift term $`\mu `$ in Eq. (9c) vanishes for the special case $`G[x]=(H)`$, guaranteeing conservation of the expectation of the energy with respect to the stochastic evolution of Eq. (8a). But referring to Eq. (7c) and the first identity in Eq. (7a), we see that in fact a much stronger result is also true, namely that $`\mu `$ vanishes [and thus the stochastic process of Eq. (9b) is a martingale] whenever $`G[x]=(G)`$, with $`G`$ any operator that commutes with the Hamiltonian $`H`$.
Let us now make two applications of this fact. First, taking $`G[x]=V=(H^2)-(H)^2`$, we see that the contribution from $`(H^2)`$ to $`\mu `$ vanishes, so the drift term comes entirely from $`-(H)^2`$. Substituting this into $`\mu `$ gives $`-2(H)`$ times the drift term produced by $`(H)`$, which is again zero, plus an extra term
$$-\sigma ^2\nabla ^a(H)\nabla ^b(H)\nabla _a(H)\nabla _b(H)=-\sigma ^2V^2,$$
(37)
where we have used the relation $`V=\nabla _a(H)\nabla ^a(H)`$ which follows from the $`F=G=H`$ case of the middle identity of Eq. (7a). Thus the variance $`V`$ of the Hamiltonian satisfies the stochastic differential equation, derived by Hughston by a more complicated method,
$$dV=-\sigma ^2V^2dt+\sigma \nabla _aV\nabla ^a(H)dW_t.$$
(38)
This implies that the expectation $`E[V]`$ with respect to the stochastic process obeys
$$E[V_t]=E[V_0]-\sigma ^2\int _0^tdsE[V_s^2],$$
(39)
which using the inequality $`0\le E[\{V-E[V]\}^2]=E[V^2]-E[V]^2`$ gives the inequality
$$E[V_t]\le E[V_0]-\sigma ^2\int _0^tdsE[V_s]^2.$$
(40)
Since $`V`$ is necessarily positive, Eq. (10d) implies that $`E[V_{\mathrm{\infty }}]=0`$, and again using positivity of $`V`$ this implies that $`V_s`$ vanishes as $`s\to \mathrm{\infty }`$, apart from a set of outcomes of probability measure zero. Thus, as concluded by Hughston, the stochastic term in his equation drives the system, as $`t\to \mathrm{\infty }`$, to an energy eigenstate.
As our second application of the vanishing of the drift term $`\mu `$ for expectations of operators that commute with $`H`$, let us consider the projectors $`\mathrm{\Pi }_e\equiv |e\rangle \langle e|`$ on a complete set of energy eigenstates $`|e\rangle `$. By definition, these projectors all commute with H, and so the drift term $`\mu `$ vanishes in the stochastic differential equation for $`G[x]=(\mathrm{\Pi }_e)`$, and consequently the expectations $`E[(\mathrm{\Pi }_e)]`$ are time independent; additionally, by completeness of the states $`|e\rangle `$, we have $`\sum _e(\mathrm{\Pi }_e)=1`$. But these are just the conditions for Pearle’s gambler’s ruin argument to apply. At time zero, $`E[(\mathrm{\Pi }_e)]=(\mathrm{\Pi }_e)\equiv p_e`$ is the absolute value squared of the quantum mechanical amplitude to find the initial state in energy eigenstate $`|e\rangle `$. At $`t=\mathrm{\infty }`$, the system always evolves to an energy eigenstate, with the eigenstate $`|f\rangle `$ occurring with some probability $`P_f`$. The expectation $`E[(\mathrm{\Pi }_e)]`$, evaluated at infinite time, is then
$$E[(\mathrm{\Pi }_e)]=1\times P_e+\underset{f\ne e}{\sum }0\times P_f=P_e;$$
(41)
hence $`p_e=P_e`$ for each $`e`$ and the state collapses into energy eigenstates at $`t=\mathrm{\infty }`$ with probabilities given by the usual quantum mechanical rule applied to the initial wave function .
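The gambler’s-ruin mechanism can be checked directly in the simplest case: for a two-level system with $`H=\mathrm{diag}(E_1,E_2)`$, the middle identity of Eq. (7a) reduces the stochastic coefficient for $`p=(\mathrm{\Pi }_1)`$ to $`\sigma p(1-p)(E_1-E_2)`$, so $`p`$ obeys the driftless diffusion $`dp=\sigma p(1-p)(E_1-E_2)dW_t`$. A minimal Euler–Maruyama sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_fraction(p0=0.3, sigma=1.0, dE=1.0,
                      dt=1e-3, n_steps=20000, n_traj=2000):
    """Simulate dp = sigma*p*(1-p)*dE*dW and return the fraction of
    trajectories absorbed near p=1; the martingale property implies
    this fraction equals the Born-rule probability p0."""
    p = np.full(n_traj, p0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_traj)
        p += sigma * p * (1.0 - p) * dE * dW
        np.clip(p, 0.0, 1.0, out=p)  # keep the process in [0,1]
    return (p > 0.5).mean()

print(collapse_fraction())  # ~0.3 up to statistical error
```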
Let us now examine the structure of Hughston’s equation for a Hilbert space constructed as the direct product of independent subsystem Hilbert spaces, so that
$`|z\rangle =`$ $`{\displaystyle \underset{\ell }{\bigotimes }}|z_{\ell }\rangle ,`$ (43)
$`H=`$ $`{\displaystyle \underset{\ell }{\sum }}H_{\ell },`$ (44)
with $`H_{\ell }`$ acting as the unit operator on the states $`|z_k\rangle ,k\ne \ell `$. Then a simple calculation shows that the expectation of the Hamiltonian $`(H)`$ and its variance $`V`$ are both additive over the subsystem Hilbert spaces,
$`(H)=`$ $`{\displaystyle \underset{\ell }{\sum }}(H_{\ell })_{\ell },`$ (45)
$`V={\displaystyle \underset{\ell }{\sum }}V_{\ell }=`$ $`{\displaystyle \underset{\ell }{\sum }}[(H_{\ell }^2)_{\ell }-(H_{\ell })_{\ell }^2],`$ (46)
with $`(F_{\ell })_{\ell }`$ the expectation of the operator $`F_{\ell }`$ formed according to Eq. (1a) with respect to the subsystem wave function $`|z_{\ell }\rangle `$. In addition, the Fubini-Study line element is also additive over the subsystem Hilbert spaces, since
$`1-ds^2/4=`$ $`{\displaystyle \frac{|\langle z|z+dz\rangle |^2}{\langle z|z\rangle \langle z+dz|z+dz\rangle }}={\displaystyle \underset{\ell }{\prod }}{\displaystyle \frac{|\langle z_{\ell }|z_{\ell }+dz_{\ell }\rangle |^2}{\langle z_{\ell }|z_{\ell }\rangle \langle z_{\ell }+dz_{\ell }|z_{\ell }+dz_{\ell }\rangle }}`$ (47)
$`=`$ $`{\displaystyle \underset{\ell }{\prod }}[1-ds_{\ell }^2/4]=1-[{\displaystyle \underset{\ell }{\sum }}ds_{\ell }^2]/4+\mathrm{O}(ds^4).`$ (48)
As a result of Eq. (13), the metric $`g^{ab}`$ and complex structure $`\mathrm{\Omega }^{ab}`$ block diagonalize over the independent subsystem subspaces. Equation (12b) then implies that Hughston’s stochastic extension of the Schrödinger equation given in Eq. (8a) separates into similar equations for the subsystems, that do not refer to one another’s $`x^a`$ coordinates, but are correlated only through the common Wiener process $`dW_t`$ that appears in all of them. Under the assumption that $`\sigma \sim M_{\mathrm{Planck}}^{-1/2}`$ in microscopic units with $`\hbar =c=1`$, these correlations will be very small; it will be important to analyze whether they can have observable physical consequences on laboratory or cosmological scales .
To summarize, we have shown that Hughston’s stochastic extension of the Schrödinger equation has properties that make it a viable physical model for state vector reduction. This opens the challenge of seeing whether it can be derived as a phenomenological approximation to a fundamental pre-quantum dynamics. Specifically, we suggest that since Adler and Millard have argued that quantum mechanics can emerge as the thermodynamics of an underlying non-commutative operator dynamics, it may be possible to show that Hughston’s stochastic process is the leading statistical fluctuation correction to this thermodynamics.
###### Acknowledgements.
This work was supported in part by the Department of Energy under Grant #DE–FG02–90ER40542. One of us (S.L.A.) wishes to thank J. Anandan for conversations introducing him to the Fubini-Study metric. The other (L.P.H.) wishes to thank P. Leifer for many discussions on the properties of the complex projective space.
# Statistical mechanics of systems with heterogeneous agents: Minority Games
\[
## Abstract
We study analytically a simple game theoretical model of heterogeneous interacting agents. We show that the stationary state of the system is described by the ground state of a disordered spin model which is exactly solvable within the simple replica symmetric ansatz. Such a stationary state differs from the Nash equilibrium where each agent maximizes her own utility. The latter turns out to be characterized by a replica symmetry broken structure. Numerical results fully agree with our analytic findings.
\]
Statistical mechanics of disordered systems provides analytical and numerical tools for the description of complex systems, which have found applications in many interdisciplinary areas. When the precise realization of the interactions in an heterogeneous system is expected not to be crucial for the overall macroscopic behavior, then the system itself can be modeled as having random interactions drawn from an appropriate distribution. Such an approach appears to be very promising also for the study of systems with many heterogeneous agents, such as markets, which have recently attracted much interest in the statistical physics community . Indeed it provides a workable alternative to the so called representative agent approach of micro-economic theory, where assuming that agents are identical, one is lead to a theory with one single (representative) agent.
In this Letter we present analytical results for a simple model of heterogeneous interacting agents, the so called minority game (MG), which is a toy model of $`N`$ agents interacting through a global quantity representing a market mechanism. Agents aim at anticipating market movements by following a simple adaptive dynamics inspired at Arthur’s inductive reasoning. This is based on simple speculative strategies that take advantage of the available public information concerning the recent market history, which can take the form of one of $`P`$ patterns. Numerical studies have shown that the model displays a remarkably rich behavior. The relevant control parameter turns out to be the ratio $`\alpha =P/N`$ between the “complexity” of information $`P`$ and the number $`N`$ of agents, and the model undergoes a phase transition with symmetry breaking independently of the origin of information.
We shall limit the discussion on the interpretation of the model – which is discussed at some length in refs. – to a minimum and rather focus on its mathematical structure and to the analysis of its statistical properties for $`N1`$. Our main aim is indeed to show that the model can be analyzed within the framework of statistical mechanics of disordered system.
We find that dynamical steady states can be mapped onto the ground state properties of a model very similar to that proposed in ref. in the context of optimal dynamics for attractor neural networks. There one shows that the minimization of the interference noise is equivalent to maximizing the dynamical stability of each device composing the system. Conversely, we show that the individual utility maximization in interacting agents systems is equivalent to the minimization of a global function. We also find that different learning models lead to different patterns of replica symmetry breaking.
The model is defined as follows: Agents live in a world which can be in one of $`P`$ states. These are labelled by an integer $`\mu =1,\mathrm{},P`$ which encodes all the information available to agents. For the time being, we follow ref. and assume that this information concerns some external system so that $`\mu `$ is drawn from a uniform distribution $`\varrho ^\mu =1/P`$ in $`\{1,\mathrm{},P\}`$. Each agent $`i=1,\mathrm{},N`$ can choose between one of two strategies, labeled by a spin variable $`s_i\in \{\pm 1\}`$, which prescribes an action $`a_{s_i,i}^\mu `$ for each state $`\mu `$. Strategies may be “look up tables”, behavioral rules or information processing devices. The actions $`a_{s,i}^\mu `$ are drawn from a bimodal distribution $`P(a_{s,i}^\mu =\pm 1)=1/2`$ for all $`i,s`$ and $`\mu `$ and they will play the role of quenched disorder. Hence there are only two possible actions – such as “do something” ($`a_{s,i}^\mu =1`$) or “do the opposite” ($`a_{s,i}^\mu =-1`$). It is convenient to make the dependence on $`s`$ explicit in $`a_{s,i}^\mu `$, introducing $`\omega _i^\mu `$ and $`\xi _i^\mu `$ so that $`a_{s,i}^\mu =\omega _i^\mu +s\xi _i^\mu `$. If agent $`i`$ chooses strategy $`s_i`$ and her opponents choose strategies $`s_{-i}\equiv \{s_j,j\ne i\}`$, in state $`\mu `$, she receives a payoff
$$u_i^\mu (s_i,s_{-i})=-a_{s_i,i}^\mu G(A^\mu ),$$
(1)
where, defining $`\mathrm{\Omega }^\mu =\sum _j\omega _j^\mu `$,
$$A^\mu =\underset{j}{\sum }a_{s_j,j}^\mu =\mathrm{\Omega }^\mu +\underset{j}{\sum }\xi _j^\mu s_j.$$
(2)
The function $`G(x)`$, which describes the market mechanism, is such that $`xG(x)>0`$ for all $`x`$ so that the total payoff to agents is always negative: the majority of agents receives a negative payoff whereas only the minority of them gain. Note that the agent–agent interaction, which comes from the aggregate quantity $`G(A^\mu )`$, is of mean-field character.
The game defined by the payoffs in Eq. (1) can be analyzed along the lines of game theory by looking for its Nash equilibria in the strategies space $`\{s_j,j=1,\mathrm{},N\}`$. Before doing this, we prefer to discuss the dynamics of inductive agents following refs. : There, the game is repeated many times and agents try to estimate empirically which of the two strategies they have is the best one, using past observations. More precisely, each agent $`i`$ assigns a score $`U_{s,i}(t)`$ to her $`s^{\mathrm{th}}`$ strategy at time $`t`$, and we assume, as in ref. , that she chooses that strategy with probability
$$\pi _{s,i}(t)\equiv \mathrm{Prob}\{s_i(t)=s\}=Ce^{\mathrm{\Gamma }U_{s,i}(t)}$$
(3)
with $`C^{-1}=\sum _{s^{}}e^{\mathrm{\Gamma }U_{s^{},i}(t)}`$ and $`\mathrm{\Gamma }>0`$. The scores are initially set to $`U_{s,i}(0)=0`$, and they are updated as
$$U_{s,i}(t+1)=U_{s,i}(t)-a_{s,i}^{\mu (t)}G(A^{\mu (t)})/P.$$
(4)
The idea is that if a strategy $`s`$ has predicted the right sign, i.e. if $`a_{s,i}^\mu =-\mathrm{sign}G(A^\mu )`$, its score, and hence its probability of being used, increases. Note that $`-a_{s,i}^\mu G(A^\mu )`$ in Eq. (4) is not the payoff $`u_i^\mu (s,s_{-i})`$ which agent $`i`$ would have received if she had actually played strategy $`s\ne s_i(t)`$. Indeed $`G(A^\mu )`$ depends on the strategy $`s_i(t)`$ that agent $`i`$ has actually played through $`A^\mu `$. Agents in the MG neglect this effect and behave as if they were facing an external process $`G(A^\mu )`$ rather than playing against other $`N-1`$ agents. This may seem reasonable for $`N\gg 1`$ since the relative dependence of aggregate quantities on each agent’s choice is expected to be small. We shall see below (see Eq. 11) that this is not true: if agents consider the impact of their actions on $`A^\mu `$, the collective behavior changes considerably.
We focus on the linear case $`G(x)=x`$, which allows for a simple treatment. Other choices, such as the original one $`G(x)=\text{sign}x`$, lead to similar conclusions, as will be discussed elsewhere. With this choice, the total loss of agents is $`\sum _iu_i^\mu =-(A^\mu )^2`$. The time average $`\sigma ^2`$ of $`(A^\mu )^2`$ is shown in Fig. 1, as a function of $`\alpha \equiv P/N`$. The system shows a complex behavior characterized, among other things, by a phase transition at $`\alpha _c\simeq 0.34`$ where $`\sigma ^2`$ shows a cusp and a small $`\alpha `$ phase where $`\sigma ^2`$ increases with $`\mathrm{\Gamma }`$.
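The behavior shown in Fig. 1 is easy to reproduce numerically; a minimal simulation of Eqs. (3–4) with $`G(x)=x`$ in the $`\mathrm{\Gamma }\to \mathrm{\infty }`$ limit, where each agent deterministically plays her highest-score strategy, might read (parameter values illustrative):

```python
import numpy as np

def minority_game(N=101, alpha=0.5, T=10000, seed=0):
    """Return sigma^2/N for the linear minority game."""
    rng = np.random.default_rng(seed)
    P = max(1, int(alpha * N))
    a = rng.choice([-1, 1], size=(N, 2, P))   # a[i, s, mu]
    U = np.zeros((N, 2))                      # strategy scores
    A2 = 0.0
    for _ in range(T):
        mu = rng.integers(P)                  # exogenous information
        s = U.argmax(axis=1)                  # best strategy of each agent
        A = a[np.arange(N), s, mu].sum()
        A2 += A * A
        U -= a[:, :, mu] * A / P              # score update, Eq. (4)
    return A2 / (T * N)
```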
In order to uncover this behavior, let us focus on the long time behavior of the dynamics. The key observation is that, in the long run, the score of a strategy depends on its performance in all $`P`$ states. Hence, the behavior of agents will change systematically only on time-scales of order $`P`$. This suggests to introduce the rescaled time $`\tau =t/P`$. As $`P\to \mathrm{\infty }`$, any finite interval $`d\tau =\mathrm{\Delta }t/P`$ is made of infinitely many time steps and we can use the law of large numbers to approximate time averages with statistical averages over the variables $`\mu (t)`$ and $`s_i(t)`$ from their respective distributions $`\varrho ^\mu `$ and $`\pi _{s,i}`$. We henceforth use the notation $`\overline{o}=\sum _\mu \varrho ^\mu o^\mu `$ for averages over $`\mu `$ and $`\langle \mathrm{}\rangle `$ for averages on $`s_i(t)`$ and we define $`m_i(\tau )\equiv \langle s_i(t)\rangle `$. With this notations, $`\sigma ^2`$ reads:
$$\sigma ^2=\overline{\langle A^2\rangle }=\overline{\mathrm{\Omega }^2}+\underset{i}{\sum }\left[\overline{\xi _i^2}+2\overline{\mathrm{\Omega }\xi _i}m_i\right]+\underset{i\ne j}{\sum }\overline{\xi _i\xi _j}m_im_j$$
(5)
where we have used statistical independence of the $`s_i`$, i.e. $`\langle s_is_j\rangle =m_im_j+(1-m_i^2)\delta _{i,j}`$. The evolution of scores $`U_{s,i}`$ in continuum time $`\tau `$, is obtained iterating Eq. (4) for $`\mathrm{\Delta }t=Pd\tau `$ time steps. Using Eq. (3) in the form $`m_i=\mathrm{tanh}[\mathrm{\Gamma }(U_{+1,i}-U_{-1,i})]`$, we find
$$\frac{dm_i}{d\tau }=-2\mathrm{\Gamma }(1-m_i^2)\left[\overline{\mathrm{\Omega }\xi _i}+\underset{j}{\sum }\overline{\xi _i\xi _j}m_j\right].$$
(6)
This can be easily written as a gradient descent dynamics $`\frac{dm_i}{d\tau }=-\mathrm{\Gamma }(1-m_i^2)\frac{\partial H}{\partial m_i}`$ which minimizes the Hamiltonian
$$H=\overline{\langle A\rangle ^2}=\sigma ^2-\underset{i}{\sum }\overline{\xi _i^2}(1-m_i^2).$$
(7)
As a function of the $`m_i`$, $`H`$ is a positive definite quadratic form, which has a unique minimum. This implies that the stationary state of the MG is described by the ground state properties of $`H`$. It is easy to see that $`H`$ is closely related to the order parameter $`\theta =\sqrt{\overline{\langle \text{sign}A\rangle ^2}}`$ introduced in , which is a measure of the system predictability. Indeed $`H\propto \theta ^2`$ when $`\theta `$ is small, suggesting that inductive agents actually minimize predictability rather than their collective losses $`\sigma ^2`$.
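Accordingly, the stationary state can be obtained by integrating the gradient-descent dynamics (6) directly for a given disorder realization; a sketch (step size and system size are illustrative):

```python
import numpy as np

def minimize_H(N=101, alpha=0.5, steps=5000, Gamma=0.1, seed=0):
    """Relax dm_i/dtau = -2*Gamma*(1-m_i^2)*[bar(Omega xi_i)
    + sum_j bar(xi_i xi_j) m_j] and return H = bar(<A>^2)."""
    rng = np.random.default_rng(seed)
    P = max(1, int(alpha * N))
    a = rng.choice([-1, 1], size=(N, 2, P))   # strategies mapped to s = +-1
    omega = 0.5 * (a[:, 0] + a[:, 1])         # omega_i^mu
    xi = 0.5 * (a[:, 0] - a[:, 1])            # xi_i^mu
    Omega = omega.sum(axis=0)                 # Omega^mu
    m = np.zeros(N)
    for _ in range(steps):
        A_mean = Omega + m @ xi               # <A>^mu
        grad = (xi * A_mean).mean(axis=1)     # bar(<A> xi_i)
        m = np.clip(m - 2 * Gamma * (1 - m**2) * grad, -1.0, 1.0)
    return ((Omega + m @ xi) ** 2).mean()     # H = bar(<A>^2)
```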
It is possible to study the ground state properties of $`H`$ in Eq. (7) using the replica method . First we introduce an inverse temperature $`\beta `$ and compute the average over the disorder variables $`\mathrm{\Xi }=\{a_{s,i}^\mu \}`$ of the partition function of $`n`$ replicas of the system, $`\langle Z^n\rangle _\mathrm{\Xi }`$. Next we perform an analytic continuation for non-integer values of $`n`$, thus obtaining $`\langle \mathrm{ln}Z\rangle _\mathrm{\Xi }=lim_{n\to 0}\frac{\langle Z^n\rangle _\mathrm{\Xi }-1}{n}`$. The ‘free energy’ $`F_{ID}=-\langle \mathrm{ln}Z\rangle _\mathrm{\Xi }/\beta `$ depends on the overlap matrix $`Q_{a,b}=\frac{1}{N}\sum _im_i^am_i^b`$ ($`a,b=1,\mathrm{}n`$, $`a\ne b`$) and on the order parameter $`Q_a=\frac{1}{N}\sum _i(m_i^a)^2`$, together with their Lagrange multipliers $`r_{a,b}`$ and $`R_a`$ respectively. $`F_{ID}`$ can be calculated using a saddle point method that, within the replica symmetric (RS) ansatz $`Q_{a,b}=q`$, $`r_{a,b}=r`$ (for all $`a<b`$), and $`Q_a=Q`$, $`R_a=R`$ (for all $`a`$), leads to
$`F_{ID}`$ $`=`$ $`{\displaystyle \frac{\alpha }{2}}{\displaystyle \frac{1+q}{\alpha +\beta (Q-q)}}+{\displaystyle \frac{\alpha }{2\beta }}\mathrm{log}\left[1+{\displaystyle \frac{\beta (Q-q)}{\alpha }}\right]`$
$`+`$ $`{\displaystyle \frac{\beta }{2}}(RQ-rq)-{\displaystyle \frac{1}{\beta }}{\displaystyle \int d\mathrm{\Phi }(\zeta )\mathrm{log}\int _{-1}^1dse^{-\beta V(s|\zeta )}}`$
where $`V(x|\zeta )=\beta (r-R)\frac{x^2}{2}-\sqrt{r}\zeta x`$ and $`\mathrm{\Phi }`$ is the normal distribution. The ground state properties of $`H`$ are obtained solving the saddle point equations in the limit $`\beta \to \mathrm{\infty }`$. Fig. 1 compares the analytic and numerical findings for $`\sigma ^2`$. For $`\alpha >\alpha _c=0.33740\mathrm{}`$, the solution leads to $`Q=q<1`$ and a ground state energy $`H_0>0`$. $`H_0\to 0`$ as $`\alpha \to \alpha _c^+`$ and $`H_0=0`$ for $`\alpha \le \alpha _c`$.
This confirms the conclusion $`\langle A^\mu \rangle =0`$ for all $`\mu `$ (or $`\theta =0`$) for $`\alpha \le \alpha _c`$ and it implies the relation
$$\sigma ^2=\underset{i}{\sum }\overline{\xi _i^2}(1-m_i^2)\simeq \frac{N}{2}(1-Q),\alpha \le \alpha _c.$$
(8)
The RS solution is stable against replica symmetry breaking (RSB) for any $`\alpha `$, as expected from positive definiteness of $`H`$. Following ref., we compute the probability distribution of the strategies, which for $`\alpha >\alpha _c`$ is bimodal and it assumes the particularly simple form
$$𝒫(m)=\varphi (z)[\delta (m-1)+\delta (m+1)]+\frac{z}{\sqrt{2\pi }}e^{-(zm)^2/2}$$
(9)
with $`z=\sqrt{\alpha /(1+Q)}`$ ($`Q`$ taking its saddle point value) and where $`\varphi (z)=(1-\mathrm{Erf}(z/\sqrt{2}))/2`$ is the fraction of frozen agents (those who always play one and the same strategy). Below $`\alpha _c`$, $`𝒫(m)`$ is continuous, i.e. $`\varphi =0`$, in agreement with numerical findings.
At the transition the spin susceptibility $`\chi =lim_{\beta \to \mathrm{\infty }}\beta (Q-q)`$ diverges as $`\alpha \to \alpha _c^+`$ and it remains infinite for all $`\alpha \le \alpha _c`$. This is because the ground state is degenerate in many directions (zero modes) and an infinitesimal perturbation can cause a finite shift in the equilibrium values of $`m_i`$. This implies that in the long run, the dynamics (6) leads to an equilibrium state which depends on the initial conditions $`U_{s,i}(t=0)`$. The under-constrained nature of the system is also responsible for the occurrence of anti-persistent effects for $`\alpha <\alpha _c`$. The periodic motion in the subspace $`H=0`$ is probably induced by inertial terms $`d^2U_{s,i}/d\tau ^2`$ which we have neglected, and which require a more careful study of dynamical solutions of Eqs. (3,4). It is however clear that the amplitude of the excursion of $`U_{+1,i}(t)-U_{-1,i}(t)`$ decreases with $`\mathrm{\Gamma }`$, by the smoothing effect of Eq. (3). When this amplitude becomes of the same order of $`1/\mathrm{\Gamma }`$ anti-persistence is destroyed, which explains the sudden drop of $`\sigma ^2`$ with $`\mathrm{\Gamma }`$ found in ref. .
A natural question arises: is this state individually optimal, i.e. is it a Nash equilibrium of the game where agents maximize the expected utility $`\overline{u_i}=-\overline{a_{s_i,i}A}`$? One way to find the Nash equilibria is to consider stationary solutions of the multi-population replicator dynamics. This takes the form of an equation for the so called mixed strategies, i.e. for the probabilities $`\pi _{s,i}`$ with which agent $`i`$ plays strategy $`s`$. In terms of $`m_i=\pi _{+,i}-\pi _{-,i}`$, with a little algebra, these equations read
$$\frac{dm_i}{d\tau }=(1-m_i^2)\frac{\partial \overline{u_i}}{\partial m_i}.$$
(10)
Observing that $`\frac{\partial \overline{u_i}}{\partial m_i}=-\frac{\partial \sigma ^2}{\partial m_i}`$, we can rewrite Eq. (10) as a gradient descent dynamics which minimizes a global function which is exactly the total loss $`\sigma ^2`$ of agents. Nash equilibria then correspond to the local minima of $`\sigma ^2`$ in the domain $`[-1,1]^N`$. The quadratic form $`\sigma ^2`$ is not positive definite, which means that there shall be many local minima and the Nash equilibrium is not unique. It is easy to see that Nash equilibria are in pure strategies, i.e. $`m_i^2=1`$ for all $`i`$, which implies $`\sigma ^2=H`$, by Eq. (7). A detailed characterization of the Nash equilibria shall be given elsewhere. The best Nash equilibrium can be studied applying the replica method to $`\sigma ^2`$ for $`\beta \to \mathrm{\infty }`$. The multiplicity of Nash equilibria (meta-stable states) manifests itself in the occurrence of replica symmetry breaking for any $`\alpha >0`$ with a non-vanishing $`\sigma ^2/N`$ . The simple RS solution, though incorrect, provides a close lower bound $`F_{NE}^{(RS)}=F_{ID}+\frac{1}{2}(1-Q)`$ to $`\sigma ^2/N`$ for $`\beta \to \mathrm{\infty }`$ (see Fig. 1). For $`\alpha >1/\pi `$, we have $`Q=q=1`$ and $`F_{NE}^{(RS)}(\beta =\mathrm{\infty })=[1-1/\sqrt{\pi \alpha }]^2`$ positive, whereas $`q<Q=1`$ and $`F_{NE}^{(RS)}=0`$ for $`\alpha <1/\pi `$.
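A (local) Nash equilibrium can be reached numerically by greedy single-agent best-response flips over the pure strategies $`s_i=\pm 1`$; a sketch:

```python
import numpy as np

def nash_descent(N=101, alpha=0.5, sweeps=200, seed=0):
    """Flip one strategy at a time whenever the flip lowers
    sigma^2 = bar(A^2); stop at a local minimum (a Nash equilibrium)."""
    rng = np.random.default_rng(seed)
    P = max(1, int(alpha * N))
    a = rng.choice([-1, 1], size=(N, 2, P))
    omega = 0.5 * (a[:, 0] + a[:, 1])
    xi = 0.5 * (a[:, 0] - a[:, 1])
    s = rng.choice([-1, 1], size=N).astype(float)
    A = omega.sum(axis=0) + s @ xi            # A^mu for current strategies
    for _ in range(sweeps):
        changed = False
        for i in range(N):
            dA = -2.0 * s[i] * xi[i]          # effect of flipping s_i
            if ((A + dA) ** 2 - A ** 2).mean() < 0:
                A += dA
                s[i] = -s[i]
                changed = True
        if not changed:
            break
    return (A ** 2).mean() / N                # sigma^2/N at the equilibrium
```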
Fig. 1 shows that in a Nash equilibrium agents perform way better than in the MG. This is the consequence of the fact that agents do not take into account their impact on the market (i.e. on $`A^\mu `$) when they update the scores of their strategies by Eq. (4). It is indeed known that reinforcement-learning dynamics based on Eq. (3) is closely related to the replicator dynamics and hence it converges to rational expectation outcomes, i.e. to Nash equilibria. More precisely, ref. suggests that this occurs if Eq. (4) is replaced with
$$U_{i,s}(t+1)=U_{i,s}(t)+u_i^{\mu (t)}(s,s_{-i}(t))/P$$
(11)
Now $`U_{s,i}(t)`$ is proportional to the cumulated payoff that agent $`i`$ would have received had she always played strategy $`s`$ (with other agents playing what they actually played) until time $`t`$. As Fig. 1 again shows, this leads to results which coincide with those of the Nash equilibrium. It is remarkable that the (relative) difference between Eqs. (4) and (11) is small, i.e. of order $`1/A^\mu \sim 1/\sqrt{N}`$. Yet, it is not negligible because, when averaged over all states $`\mu `$, it produces a finite effect, especially for $`\alpha <\alpha _c`$, and it affects considerably the nature of the stationary state. This term has the same origin as the cavity reaction term in spin glasses. In order to follow Eq. (11) agents need to know the payoff they would have received for any strategy $`s`$ they could have played. That may not be realistic in complex situations where agents know only the payoffs they receive and are unable to disentangle their contribution from $`G(A^\mu )`$. However agents can account approximately for their impact on the market by adding a cavity term $`+\eta \delta _{s,s_i(t)}`$ to Eq. (4) which “rewards” the strategy $`s_i(t)`$ used with respect to those $`s\ne s_i(t)`$ not used. The most striking effect of this new term, as discussed elsewhere in detail, is that for $`\alpha <\alpha _c`$ an infinitesimal $`\eta >0`$ is sufficient to cause RSB and to reduce $`\sigma ^2/N`$ by a finite amount.
So far, the information $`\mu (t)`$ was randomly and independently drawn at each time $`t`$ from the distribution $`\varrho ^\mu =1/P`$. In the original version of the MG, $`\mu `$ is instead endogenously determined by the collective dynamics of agents: $`\mu (t)`$ indeed labels the sequence of the last $`M=\mathrm{log}_2P`$ “minority” signs – i.e. $`\mu (t+1)=[2\mu (t)+1]_{\mathrm{mod}P}`$ if $`A^{\mu (t)}>0`$ and $`\mu (t+1)=[2\mu (t)]_{\mathrm{mod}P}`$ otherwise. The idea is that the information refers to the recent past history of the market, and agents try to guess trends and patterns in the time evolution of the process $`G(A^{\mu (t)})`$. We may say that $`\mu (t)`$ is endogenous information, since it refers to the market itself, as opposed to the exogenous information case discussed above.
Numerical simulations show that the collective behavior of the MG – based on Eq. (4) – under endogenous information is the same as that under exogenous information. Within our approach, the relevant feature of the dynamics of $`\mu (t)`$ is its stationary state distribution $`\varrho ^\mu `$. The key point is that a finite fraction $`1-\varphi `$ of agents behave stochastically ($`m_i^2<1`$) because $`Q<1`$. As a consequence, $`A^\mu `$ has stochastic fluctuations of order $`\sqrt{N(1-Q)}`$ which are of the same order as its average $`A^\mu \sim \sqrt{H}`$. With endogenous information, these fluctuations of $`A^\mu `$ induce a dynamics of $`\mu (t)`$ which is ergodic in the sense that typically each $`\mu `$ is visited with a frequency $`\varrho ^\mu \simeq 1/P`$ in the stationary state. The situation changes completely when agents follow Eq. (11). Indeed the system converges to a Nash equilibrium where agents play in a deterministic way, i.e. $`m_i^2=1`$ (or $`Q=\varphi =1`$). The noise due to the stochastic choice of $`s_i`$ by Eq. (3) is totally suppressed. The system becomes deterministic and the dynamics of $`\mu (t)`$ locks into some periodic orbit. The ergodicity assumption then breaks down: only a small number $`\stackrel{~}{P}\ll P`$ of patterns $`\mu `$ are visited in the stationary state of the system, whereas the others never occur ($`\varrho ^\mu =0`$). This leads to an effective reduction of the parameter $`\alpha \to \stackrel{~}{\alpha }=\stackrel{~}{P}/N`$, which further diminishes $`\sigma ^2`$. Numerical simulations show that $`\stackrel{~}{P}\sim \sqrt{P}`$, which implies that $`\stackrel{~}{\alpha }\to 0`$ in the limit $`P=\alpha N\to \infty `$, i.e. $`\sigma ^2/N\to 0`$.
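The history update itself is a simple bit-shift register; in this sketch (ours), a random sign stands in for the actual minority sign of $`A^{\mu (t)}`$, which makes the walk visit essentially all $`P`$ states — substituting a deterministic sign sequence is what locks $`\mu (t)`$ onto a short periodic orbit instead:

```python
import random

P = 2**8                                   # P = 2^M information states
mu, visited = 0, set()
for t in range(100000):
    visited.add(mu)
    up = random.random() < 0.5             # placeholder for "A^mu(t) > 0"
    mu = (2 * mu + (1 if up else 0)) % P   # append the latest minority sign
print(f"visited {len(visited)} of {P} states")
```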
In summary we have shown how methods of statistical physics of disordered systems can successfully be applied to study models of interacting heterogeneous agents. Our results extend easily to more general models and, more importantly, the key ideas can be applied to more realistic models of financial markets, where heterogeneities arise e.g. from asymmetric information.
We acknowledge J. Berg, A. De Martino, S. Franz, F. Ricci-Tersenghi, S. Solla, M. Virasoro and Y.-C. Zhang for discussions and useful suggestions. This work was partially supported by Swiss National Science Foundation Grant Nr 20-46918.98.
# SUMMARY OF EXPERIMENTAL ELECTROWEAK PHYSICS (for the proceedings of the 17th International Workshop on Weak Interactions and Neutrinos (WIN99), Cape Town, South Africa, January 24–30, 1999)
## 1 Introduction
Electroweak physics had its experimental beginning in neutral-current measurements with inelastic neutrino scattering. Particular measurements may be interpreted directly or combined in global fits to constrain the Higgs mass and possibilities for new physics. The increasingly precise measurement of the muon magnetic moment at Brookhaven will limit nonstandard possibilities.
The $`Z`$ mass measurement has developed a precision in the same league as $`G_F^\mu `$ and $`\alpha _{EM}`$, and the shape and decays show that we understand the decay process as well as what states are available. For example, there is room for only three neutrinos. The various $`Z`$ asymmetries from LEP and SLC give the strongest indirect constraint on the Higgs mass. With $`e^+e^{-}\to \mathrm{hadrons}`$ measurements at BES and Novosibirsk complementing or confirming PQCD calculations, the precision of $`\alpha _{EM}`$ evolved to the $`Z`$ mass has improved, and this constraint is becoming stronger.
The combination of $`W`$ mass and top quark mass is more of a check at the moment. The Tevatron analyses for $`W`$ and top masses with existing data are becoming mature, and substantial improvement will come with data from the next run, which should start in 2000. Considerable improvement on the $`W`$ mass is anticipated from the LEP collaborations with recent data, and more data at higher energy is coming.
New strategies in neutrino scattering make neutral current measurements interesting, and deep inelastic scattering at HERA is becoming of interest from the electroweak point of view. The absence of the Higgs particle in direct searches is becoming as significant an influence on what possibilities remain as the indirect limits.
Precision electroweak studies are continuing on many fronts including $`\tau `$ studies, and the pending observation of $`\nu _\tau `$ interactions. None of these efforts has allowed us to break out of the standard framework. I summarize results as presented, but update to Moriond 99 numbers.
## 2 BNL 821 Muon g-2
The study of the magnetic moment of the muon at CERN was precise enough to demonstrate the presence of hadronic corrections. The goal of the ongoing program at Brookhaven is to become precise enough to demonstrate the electroweak corrections. More accurate calculations of the hadronic corrections are helping to make this realistic.
The experiment is a muon storage ring consisting of a continuous, finely adjustable, iron-dominated superconducting magnet. Mapping and adjusting the field has been an ongoing program. The momentum of the muons is adjusted to minimize the effect of the embedded electrostatic quadrupoles on muon spin precession. Decays are observed at instrumented windows around the ring, with high energy decay electrons acting to spin analyze the muons.
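For reference (a standard textbook relation, not a result quoted in the talk), the anomaly precession in combined magnetic and electric fields is

$$\vec{\omega }_a=-\frac{e}{m}\left[a_\mu \vec{B}-\left(a_\mu -\frac{1}{\gamma ^2-1}\right)\frac{\vec{\beta }\times \vec{E}}{c}\right],$$

so running at the “magic” momentum, $`\gamma =\sqrt{1+1/a_\mu }\approx 29.3`$ ($`p\approx 3.09`$ GeV/c), cancels the electric-field term from the quadrupoles; this is the momentum adjustment described above.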
A measurement from an initial run with pion decay injection approaches the CERN accuracy. The numbers are listed in Table 1. This result was limited by low intensity and detector effects, particularly due to pion injection, as well as the field quality. In two more recent runs, muon injection was established, detectors improved, and the field much improved. The available data should produce a $`\pm 1`$ ppm measurement. The fringe field of the inflector magnet will be improved in order for future runs to reach the goal of $`\pm 0.3`$ ppm for both signs.
## 3 Measurements of the $`Z`$
The precision $`Z`$ line shape has been an adventure story with significant implications. The measured values are
$$m(Z)=91.1867\pm 0.0021\mathrm{GeV}/\mathrm{c}^2$$
(1)
$$\mathrm{\Gamma }(Z)=2.4939\pm 0.0024\mathrm{GeV}.$$
(2)
The mass is precise enough to rank with the weak and electromagnetic couplings as precision input. That nothing seems missing in $`Z`$ decay places serious constraints on new physics possibilities. Heavy flavor decay rates were a problem but the popular interpretation was otherwise ruled out even before the deviation went away. Agreement continues in the tail of the $`Z`$ at LEP2.
One small residual discrepancy remains, in the asymmetry in $`b`$ decays; it is seen at LEP, and less so in the SLC results at Moriond. At 2.6 $`\sigma `$ or less, depending on input details, the effect is not convincing.
The $`Z`$ asymmetries from LEP have had some updates in $`\tau `$ polarization, and though some further fine tunings are expected, most of the analyses are pretty much complete. SLD has gotten a lot more data recently, and the preliminary results for the last two years’ data dominate the measurement. The discrepancy between the SLD $`A_{LR}`$ and the average of the LEP effective weak mixing continues but has become a lot less jarring, as seen in Table 2. Note that the overall average is pulled up a bit by the LEP $`b`$ asymmetry, and down a bit by SLD $`A_{LR}`$. A slight residual discrepancy seems historically appropriate.
The impact of the effective weak mixing measurement is improving as $`\alpha _{EM}(m(Z))`$ is better determined with PQCD calculations. There is a discrepancy with old SPEAR hadron rates, but new points from BES agree. Both PQCD and data-driven calculations are agreeing and improving.
## 4 Measurements of the $`W`$
The Tevatron Collider experiments have advanced the program begun at the CERN $`S\overline{p}pS`$ collider, including $`W`$ mass measurements using leptonic decay transverse mass. Updates at Moriond had DØ adding plug electrons, and CDF adding the most recent electron sample. Beyond that, further improvement will come with the next run, expected to start in 2000.
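For orientation, the transverse-mass variable behind these fits is elementary; a minimal sketch (ours, with made-up event kinematics; the Jacobian edge of the $`m_T`$ distribution near $`m(W)`$ is what carries the mass information):

```python
import math

def transverse_mass(pt_lep, pt_miss, dphi):
    """m_T = sqrt(2 pT(lepton) pT(missing) (1 - cos dphi)), in GeV/c^2."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# hypothetical W -> e nu event: 38 GeV electron, 40 GeV missing ET, nearly back to back
print(transverse_mass(38.0, 40.0, math.radians(170.0)))   # ~77.7 GeV/c^2
```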
The LEP experiments have threshold $`W`$ mass measurements, and increasingly precise direct reconstruction measurements. Possible QCD systematics in the four quark mode are being confronted. The large sample at $`\sqrt{s}=189`$ GeV should allow a precision of $`\pm `$40–45 MeV/c<sup>2</sup> when fully analyzed; the ALEPH and L3 analyses included this sample in the Moriond update. Further data will continue to be collected through 2000. The measurements are listed in Table 3.
Searches for nonstandard $`W`$ and $`Z`$ couplings are now dominated by LEP measurements, although DØ makes a notable contribution. The large new data sample at LEP should improve coupling limits by about a factor of three when fully analyzed.
## 5 Measurements of the Top Quark
The impact of improving the $`W`$ precision will be limited by the precision of the top quark mass measurement. The Tevatron data analyses are largely complete with the two experiments in the different channels consistent, as can be seen in Table 4. A couple of years of new data should improve the precision by a factor of at least two.
Detailed studies of top production and decay, including limits on nonstandard decays, have begun. The expected increase in statistics at the Tevatron once the upgraded collider and detectors get going should have a salutary effect on these.
## 6 Deep Inelastic Scattering
The NuTeV group has revived the contribution of neutrinos to the electroweak program. By using a carefully designed beam (a neutrino beam free of antineutrinos, and vice versa), the difference between neutral- and charged-current rates can be used to measure the weak mixing angle. The new data has similar statistics to CCFR but much improved systematics. The electroweak physics implications are illustrated in Fig. 1.
The HERA experiments still see a small excess at high $`Q^2`$, but not so dramatic as it seemed a year ago. The data are sufficient to see propagator effects in NC and CC events. The $`t`$ channel $`W`$ mass derived is compatible with direct measurements, and an electroweak program is getting started.
A neutrino beam at Fermilab for the NUMI project will have space available for a near detector. There are some possibilities for PDF studies.
## 7 The Higgs Search
The Higgs particle of the minimal standard model has given no direct sign of its presence. The lower limit on its mass is growing with the LEP energy as searches are made for the process $`e^+e^{-}\to Z^{*}\to ZH`$. Most signatures involve $`H\to \overline{b}b`$, so that $`Z`$ pairs give an irreducible background. Fortunately $`Z\to \overline{b}b`$ is now well understood. The 189 GeV data has now been analyzed by the individual collaborations giving limits as high as
$$m(H)>95.2\ \mathrm{GeV}/\mathrm{c}^2\quad (95\%\ \mathrm{CL}).$$
(3)
The combination of experiments will give some improvement. With data at 200 GeV, a limit or discovery reach up to $`109`$ GeV/c<sup>2</sup> is in prospect.
The recent PDG global fits to electroweak data, using the data as of Vancouver 98, give a most favored Higgs mass of 107 GeV/c<sup>2</sup>, so the search is covering quite interesting territory. The limit is threatening the viability of various popular scenarios for extending the standard model.
If the Higgs is still missing at the end of the LEP program, with enough luminosity the Tevatron detectors could extend the Higgs search. Eventually LHC detectors will make the search comprehensive.
## 8 Tau Physics
The detailed study of $`\tau `$ decays involves precise decay parameters, neutrino mass limits, and rare decay searches including lepton number violation and CP violation in the $`K_s^0\pi \nu `$ angular distribution. There is plenty of room for non-standard model physics. The program being pursued at CLEO will soon be joined by BaBar and Belle.
The E872 collaboration is searching for evidence of $`\nu _\tau `$ interactions in emulsion at Fermilab. With part of the data measured and analyzed, they have six candidates. This corresponds well to expectations; although systematic studies to eliminate background possibilities are still pending, an announcement that interactions have been observed is expected soon.
## 9 Conclusions
The simplest scenario for the standard model, with one residual Higgs particle, remains viable. In the global fits, the strongest constraint comes from measurements of $`Z`$ asymmetries. These account for the thinness of the indirect allowed region of Fig. 1. Some updates on these measurements are pending, but more progress seems likely from improvement in $`\alpha _{EM}(m(Z))`$.
The $`W`$ mass measurement is improving considerably; further improvement, along with an improved top mass measurement, as will come with the next Tevatron run, is needed to compete with the $`Z`$ asymmetries.
The direct Higgs search is beginning to cut into the indirect fit allowed region. Perhaps a positive finding will come soon, but it seems like LHC will be needed to create a contradiction. Perhaps a contradiction, which would break us out of our mold, will come from one of the many electroweak studies which do not directly contribute to the Higgs picture.
## Acknowledgments
I am grateful to all the speakers whose material I am summarizing, and to our wonderful hosts. The work was supported in part by the United States Department of Energy, Division of High Energy Physics, Contract W-31-109-ENG-38.
# An Intrinsic Smoothing Mechanism For Gamma-Ray Burst Spectra in the Fireball Model
## 1 Introduction
It is remarkable that the simple fireball model for cosmological gamma-ray bursts can explain many of the major features of the gamma-ray bursts and their afterglows (Rees & Mészáros 1992, 1994; Paczyński & Rhoads 1993; Wijers, Rees, & Mészáros 1997; Vietri 1997a,b; Waxman 1997a,b; Reichart 1997; Katz & Piran 1997; Sari 1997). It is noted, however, that the fireball model may produce multiple spectral components due to the existence of both the forward and reverse shocks during at least some time period (e.g., Mészáros, Rees, & Papathanassiou 1994). Moreover, reprocessing of the primary spectrum by, for example, inverse Compton scattering (e.g., Pilla & Loeb 1998), introduces additional features to the spectrum, even if it is initially featureless. Finally, resonant line features or recombination lines due to heavy elements, for example, in dense blobs as discussed in Mészáros & Rees (1998), may add additional features to the continuum. In this Letter it is pointed out that spectral smoothing due to differentially varying Doppler shift for different patches of the fireball at varying angles to the line of sight provides an intrinsic, unavoidable smoothing mechanism. While this effect is fairly well known, a more quantitative analysis focusing on the case of GRBs is useful. This Letter presents such a semi-quantitative analysis.
## 2 Differential Doppler Shift Across a Fireball Front
For the present illustration I assume that the GRB fireball is spherical and homogeneous, for which a single parameter, $`\theta `$, the angle to the burster-observer vector, is sufficient to characterize the direction of a traveling patch of shock heated, radiation emitting material. It is convenient to use the time measured in the rest-frame of the burster, $`t`$, as the independent variable to express other quantities.
First, one has to find the relation between time $`t`$ for a fireball patch at $`\theta `$ and the time measured by the observer on the Earth (called “O” hereafter), $`t_{obs}`$, i.e., the arrival time. The apparent perpendicular traveling speed of a patch at $`\theta `$ seen by O is
$$\beta _{\perp }(t)=\frac{\beta (t)\mathrm{sin}\theta }{1-\beta (t)\mathrm{cos}\theta }$$
(1)
in units of the speed of light, where $`\beta (t)`$ is the spherical expansion speed of the fireball in the rest-frame of the burster. By definition, $`\beta _{\perp }(t)`$ is
$$\beta _{\perp }(t)=\frac{dr}{dt_{obs}}\mathrm{sin}\theta .$$
(2)
Combining equations (1, 2) gives
$$\frac{dr}{dt_{obs}}=\frac{\beta (t)}{1-\beta (t)\mathrm{cos}\theta }.$$
(3)
Since one also has the following relation
$$\frac{dr}{dt}=\beta (t),$$
(4)
one finds the equation relating $`t_{obs}`$ to $`t`$:
$$\frac{dt_{obs}}{dt}=1-\beta (t)\mathrm{cos}\theta .$$
(5)
Next, to have a tractable treatment, it is assumed that the Lorentz factor, $`\mathrm{\Gamma }(t)\equiv 1/\sqrt{1-\beta (t)^2}`$, has the following simplified evolution: it is constant (equal to $`\mathrm{\Gamma }_i`$) at $`t\le t_{dec}`$ and decays at $`t>t_{dec}`$ as
$$\mathrm{\Gamma }=\mathrm{\Gamma }_i(\frac{t}{t_{dec}})^{-\alpha },$$
(6)
where $`\mathrm{\Gamma }_i`$ is the initial Lorentz factor and $`t_{dec}`$ (measured in the rest-frame of the burster) characterizes the transition time after which deceleration of the fireball expansion becomes significant (hence the fireball kinetic energy can be converted into radiation) and can be expressed approximately as $`t_{dec}=(\frac{3E}{4\pi \mathrm{\Gamma }_i^2c^5m_pn})^{1/3}`$ (Blandford & McKee 1976), where $`E`$ is the initial fireball energy, $`n`$ is the density of the circumburster medium (which is assumed to be uniform, for simplicity) and other notations are conventional. Note that $`\alpha =3`$, if the fireball cools radiatively efficiently, and $`\alpha =3/2`$, if the fireball cools only adiabatically.
To make the point in a simple way it is assumed that the total luminosity per unit frequency (in the comoving frame) of the shock front at a fixed time $`t`$ is a delta function in frequency (i.e., a monochromatic spectrum):
$$L_\nu (\nu ^{\prime },t)=C(t)\delta [\nu ^{\prime }-\nu _0(t)]$$
(7)
in the frame comoving with the blastwave. The characteristic frequency $`\nu _0(t)`$ at time $`t`$ in the comoving frame is parameterized as
$$\nu _0(t)=A(\frac{\mathrm{\Gamma }}{\mathrm{\Gamma }_i})^\psi ,$$
(8)
where $`A`$ is a constant. Note that, in the case of synchrotron radiation and assuming that both the electron thermal energy density and the magnetic energy density are fixed fractions of the post-shock nucleon thermal energy density, one has $`\psi =3`$. $`C(t)`$ is expressed as
$$C(t)=B(\frac{t}{t_{dec}})^\xi ,$$
(9)
where $`B`$ is another constant and $`\xi `$ parameterizes the temporal profile of the amplitude of the radiation.
At a given time O receives radiation from different parts across the fireball surface, emitted at varying times in the burster frame; i.e., radiation from regions of varying $`\theta `$ at varying $`t`$ \[see equation (5) for the relation between $`t`$ and $`t_{obs}`$\] is seen by O at the same time (Sari 1998). The received frequency of the radiation at $`t_{obs}`$ from region with $`\theta `$ emitted at $`t`$ is
$$\nu (t_{obs})=\nu _0(t)D(\theta ,t),$$
(10)
where $`D(\theta ,t)`$ is the Doppler factor for regions with $`\theta `$ at time $`t`$:
$$D(\theta ,t)=\frac{1}{\mathrm{\Gamma }(t)[1-\beta (t)\mathrm{cos}\theta ]}.$$
(11)
The flux density observed by O at frequency $`\nu `$ at time $`t_{obs}`$ is
$`S(\nu ,t_{obs})={\displaystyle \frac{1}{8\pi d^2}}{\displaystyle \int _{-1}^1}L_\nu (\nu ^{\prime },t)D^3(\theta ,t)d\mu `$ (12)
where $`\nu ^{\prime }`$ ($`=\nu /D`$) is the frequency in the blastwave frame, $`d`$ is the distance of the GRB from $`O`$ and $`\mu \equiv \mathrm{cos}\theta `$. Combining equations (10,11) gives
$$\mathrm{d}\mu =\left(\frac{1-\beta \mu }{\beta }\right)\frac{\mathrm{d}\nu }{\nu }.$$
(13)
Inserting equations (7,13) into equation (12) and integrating over $`\nu `$ give
$`S(\nu ,t_{obs})={\displaystyle \frac{C(t)}{8\pi d^2\nu _0(t)}}D^3(\theta ,t){\displaystyle \frac{1-\beta (t)\mathrm{cos}\theta }{\beta (t)}}.`$ (14)
Note that $`S(\nu ,t_{obs})`$ is in a parametric form; given $`t_{obs}`$, one can determine the burster frame time $`t`$ for a given patch at $`\theta `$ using equation (5). Then, one determines $`S(\nu ,t_{obs})`$ using equation (14) combined with equations (6,8,9,11), given $`t`$ and $`\theta `$. Meantime, $`\nu (t_{obs})`$ is related to $`t`$ and $`\theta `$ by equation (10). Thus, one can find $`S(\nu ,t_{obs})`$ as a function of $`\nu (t_{obs})`$ at $`t_{obs}`$. Let us consider a few simple but relevant cases to illustrate the effect.
Case 1: $`\alpha =3/2`$, $`\psi =0`$ and $`\xi =0`$. This case may have some bearing on such radiation features as atomic line features whose intrinsic frequencies are independent of $`t`$ (i.e., $`\psi =0`$).
Case 2: $`\alpha =3`$, $`\psi =3`$ and $`\xi =1`$. This case may be related to the epoch where radiative cooling is efficient and intensity at the frequency in question is still rising, which could be relevant for radiation at frequencies lower than the peak of the synchrotron radiation spectrum (due to a truncated power-law electron distribution) at early times of a GRB event.
Case 3: $`\alpha =3`$, $`\psi =3`$ and $`\xi =-1`$. This may be related to the epoch where radiative cooling is efficient and intensity at the frequency in question has started to decrease, which could be relevant for radiation at frequencies higher than the peak frequency of the spectrum at early times of a GRB event.
Case 4: $`\alpha =3/2`$, $`\psi =3`$ and $`\xi =1`$. This case is similar to Case 2, with the primary difference that radiative cooling is unimportant here. This may be relevant for GRB afterglows such as the radio afterglows, when the electron cooling time is likely to be significantly longer than the dynamic time of the expanding fireball, at frequencies lower than the peak frequency of the spectrum.
Case 5: $`\alpha =3/2`$, $`\psi =3`$ and $`\xi =-1`$. This case is similar to Case 4 but for frequencies higher than the peak frequency of the spectrum.
While it is convenient to express various quantities using $`t`$ as the independent time variable, one needs to express the final observables using $`t_{obs}`$, which is related to $`t`$ (for $`\theta =0`$; see equation 5) as
$$t_{obs}=\frac{t_{dec}}{2\mathrm{\Gamma }_i^2}\left[1+\frac{1}{2\alpha +1}\left(\frac{t}{t_{dec}}\right)^{2\alpha +1}\right]$$
(15)
for the simplified solution of $`\mathrm{\Gamma }(t)`$ given by equation (6). Figure (1a) shows the flux density as a function of frequency at $`t_{obs}=t_{dec}/\mathrm{\Gamma }_i^2`$ for the five cases. The frequency is normalized such that unity corresponds to the radiation from regions with $`\theta =0`$ and the flux density is normalized to be unity at unity frequency. The sharp turns to the left for cases (ii,iii,iv,v) correspond to the sharp turn of the evolution of the Lorentz factor at $`t=t_{dec}`$. Figure (1b) shows the flux density as a function of frequency at $`t_{obs}=5000t_{dec}/\mathrm{\Gamma }_i^2`$ for the five cases. Also shown in both panels are two straight lines in the upper right corner indicating the spectral slope of $`0.75`$ and $`1.25`$, respectively, which bracket the range of the spectral slope for various cases shown. Note that sharp turns as seen in (1a) are not visible simply because they appear at much lower intensity level than the displayed range in the figure. Note that $`t_{dec}=16\left(\frac{E}{10^{52}\mathrm{erg}}\right)^{1/3}\left(\frac{\mathrm{\Gamma }_i}{300}\right)^{2/3}\left(\frac{n}{1\mathrm{c}\mathrm{m}^3}\right)^{1/3}\mathrm{days}`$. Therefore, for typical values of $`E`$, $`\mathrm{\Gamma }_i`$ and $`n`$, $`t_{obs}`$ is of order a second and a day, respectively, after the fireball explosion for the two cases shown in (1a) and (1b). These two cases may respectively be relevant for bursts in gamma-ray and afterglows at lower energy bands. It should be noted that, although the external shock model is used to illustrate the smoothing magnitude, the results should be applicable to the internal shock model as well.
Recall that in all cases a delta function spectrum at $`\nu `$ is assumed in the comoving frame at a given time. One sees that this delta function spectrum is smoothed out to appear as a broad spectrum with a half-width-half-maximum (HWHM) of (0.6–2.0)$`\nu `$ for all the cases considered, except for Case 1, where the HWHM is about $`0.1\nu `$. An immediate implication from Figure 1 (see the dot-long-dashed curves in the upper right corner of the two panels) is that the observed spectra of GRBs or their afterglows should roughly go as $`\nu ^{-1}`$, if the electron distribution function power index $`p`$ is equal to or greater than $`3`$. In other words, the observed spectra cannot be steeper than $`\nu ^{-1}`$ regardless of the value of $`p`$.
It is of interest to understand where the radiation corresponding to different frequencies in Figure 1 comes from. Figure 2 shows the frequency seen by O at a given time as a function of the emission shell radius from the burster, $`r`$, expressed in units of $`ct_{dec}`$. It is seen that, for the realistic cases (ii,iii,iv,v) that correspond to the continuum radiation of the GRBs and afterglows, higher frequency radiation comes from earlier time $`t`$ (in the burster frame) with larger angle $`\theta `$ up to $`t_{dec}`$, after which there is a downturn to lower frequency at still earlier times due to the assumed constancy of $`\mathrm{\Gamma }`$ and thus constancy of $`\nu _0(t)`$. For case (i) lower frequency radiation comes from earlier time $`t`$ with larger $`\theta `$. In both panels (a,b), also shown along the solid curves using solid dots are the corresponding $`\theta `$ values in degrees. Dotted and dashed curves are also punctuated by open circles and open squares, corresponding to the same $`\theta `$ values.
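For readers who wish to reproduce curves like those in Figure 1, the parametric recipe of equations (5)–(14) is straightforward to code. The sketch below (our illustration, with the arbitrary normalizations $`A=B=1`$ and $`8\pi d^2=1`$, and time in units of $`t_{dec}`$) tabulates the smoothed spectrum for Case 2 at $`t_{obs}=t_{dec}/\mathrm{\Gamma }_i^2`$:

```python
import math

Gi, alp, psi, xi = 300.0, 3.0, 3.0, 1.0      # Case 2: radiative, nu0 ~ Gamma^3, C ~ t
t_dec = 1.0
t_obs_target = t_dec / Gi**2                  # as in Figure (1a)

def gamma(t):
    return Gi if t <= t_dec else Gi * (t / t_dec) ** (-alp)     # eq. (6)

def beta(t):
    return math.sqrt(1.0 - 1.0 / gamma(t) ** 2)

def emission_time(cos_th):
    """Integrate eq. (5), dt_obs/dt = 1 - beta(t) cos(theta), up to t_obs_target."""
    t, t_obs, dt = 1e-6, 0.0, 1e-4
    while t_obs < t_obs_target:
        t_obs += (1.0 - beta(t) * cos_th) * dt
        t += dt
    return t

for k in range(0, 200, 40):
    theta = 3.0 * k / (200.0 * Gi)            # only theta up to a few / Gamma matters
    ct = math.cos(theta)
    t = emission_time(ct)
    g = gamma(t)
    D = 1.0 / (g * (1.0 - beta(t) * ct))                     # eq. (11)
    nu0 = (g / Gi) ** psi                                    # eq. (8)
    S = (t / t_dec) ** xi / nu0 * D**3 * (1.0 - beta(t) * ct) / beta(t)  # eqs. (9), (14)
    print(f"theta*Gamma_i = {theta * Gi:.2f}   nu = {nu0 * D:.3e}   S = {S:.3e}")
```

Sweeping $`\theta `$ on a finer grid and binning in $`\nu `$ yields the smoothed spectral shape; the other cases follow by changing the three exponents.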
## 3 Discussion
It is shown that the differentially varying Doppler boost of different patches of the fireball front provides an intrinsic, unavoidable smoothing mechanism for the spectra of gamma-ray bursts and their afterglows. The detailed smoothing patterns are complicated, depending upon various factors such as the evolution of the Lorentz factor and the evolution of the intrinsic (i.e., comoving frame) radiation spectrum. Nonetheless, for plausible ranges of model parameters of interest, a comoving frame delta function spectrum at $`\nu `$ is smoothed to have a HWHM of (0.6–2.0)$`\nu `$, assuming that the time evolution of the characteristic frequency $`\nu (t)`$ is proportional to some positive power of the shock front Lorentz factor, $`\mathrm{\Gamma }^\psi `$ (where $`\psi >0`$; see equation 8). This type of smoothing may be applicable to continuous spectra such as those from the synchrotron mechanism.
In the case $`\psi =0`$ (appropriate for atomic line features, which are independent of the blastwave dynamics), the spectral smoothing is smaller, with a HWHM of roughly $`0.1\nu `$. Thus, a sharp linelike emission feature (in the comoving frame) would be smoothed out to have an equivalent width of about $`0.1\nu `$. Furthermore, the spectral profile of such a feature will be asymmetrical with a sharp cutoff at the high end (see the two solid curves in Figure 1).
Two interesting and natural consequences arise due to the differential Doppler smoothing. First, the observed spectra of GRBs or their afterglows cannot be steeper than $`\nu ^{-\alpha }`$ with $`\alpha =0.75`$–$`1.25`$ (Figure 1), even though the intrinsic spectra in the comoving shock frame may be much steeper, if the circumburster medium is uniform and electron and magnetic energies are fixed fractions of the total post-shock energy with time. Second, a generic fast-rise-slow-decay type temporal profile of the GRB bursts is expected, not necessarily reflecting the intrinsic temporal profiles of the bursts in the comoving frame. This can be easily seen by considering the case where the intrinsic (blastwave frame) spectrum is a bivariate delta function in both time and frequency. In this case the fast rise occurs when the radiation from the region around $`\theta =0`$ enters the observer’s finite band. Subsequently, radiation from regions with gradually increasing $`\theta `$ is received at decreasing amplitudes in the same observer’s finite band, with a decaying time scale of $`t_{dec}/2\mathrm{\Gamma }^2`$ (in the observer’s frame).
The work is supported in part by grants AST9318185 and ASC9740300. I thank Bohdan Paczyński for discussion.
# Towards standard methods for benchmark quality ab initio thermochemistry — W1 and W2 theory
## I Introduction
Thermochemical data such as molecular heats of formation are among the most crucial quantitative chemical data. Thanks to great progress made in recent years in both methodology and computer technology, a broad range of empirical, semiempirical, density functional, and ab initio schemes now exist for this purpose (for a recent collection of reviews, see Ref.).
At present, only ab initio-based methods can claim ‘chemical accuracy’ ($`\pm `$1 kcal/mol). The most popular such schemes are undoubtedly the G2 and G3 theories of Pople and coworkers (which are based on a combination of additivity approximations and empirical corrections applied to relatively low-level calculations), followed by the CBS-Q and CBS/APNO methods which are intricate combinations of extrapolation and empirical correction schemes. With the exception of CBS/APNO (which allows for 0.5 kcal/mol accuracy, on average, but is restricted to first-row compounds), all these schemes allow for mean absolute errors of about 1 kcal/mol, although errors for some individual molecules (e.g. SO<sub>2</sub>, SiF<sub>4</sub>) can be much larger (e.g. about 8–12 kcal/mol for SiF<sub>4</sub> using G2 theory, and 4 kcal/mol using G3 theory).
In fact, many of the experimental data in the ”enlarged G2 set” employed in the parametrization of several of these methods (notably G3 theory and several of the more recent density functional methods) themselves carry experimental uncertainties larger than 1 kcal/mol.
The aim pursued in the present work is a more ambitious one than chemical accuracy. In light of the prevalent use of kJ/mol units in the leading thermochemical tables compendia (JANAF and CODATA), we shall arbitrarily define a mean absolute error of one such unit, i.e. 0.24 kcal/mol, as ‘calibration accuracy’ — with the additional constraint that no individual error be larger than the ‘chemical accuracy’ goal of 1 kcal/mol.
One of us has recently shown that this goal is achievable for small polyatomics using present technology. The approach followed employed explicit treatment of inner-shell correlation, coupled cluster calculations in augmented basis sets of $`spdfg`$ and $`spdfgh`$ quality, and extrapolation of the valence correlation contribution to the atomization energy using formulas based on the known asymptotic convergence behavior of pair correlation energies. In this manner, total atomization energies (TAE<sub>e</sub>) of about 15 first-row diatomics and polyatomics for which experimental data are known to about 0.1 kcal/mol could be determined to within 0.25 kcal/mol on average without any empirical parameters. (Upon introducing an empirical correction for A–N bonds, this could be improved to 0.13 kcal/mol, clearly within the target.) In fact, using this method, an experimental controversy concerning the heat of formation of gaseous boron — a quantity that enters any ab initio or semiempirical calculation of the heat of formation of any boron compound — could be resolved by a benchmark calculation of the total atomization energy of BF<sub>3</sub>.
Benchmark studies along similar lines by several other groups (e.g. those of Helgaker, Bauschlicher, Dunning) point in the same direction. Among those, Bauschlicher was the first to suggest that the inclusion of scalar relativistic corrections may in fact be essential for accurate results on second-row molecules.
High-accuracy results on second-row compounds can only be achieved in this manner — as has been shown repeatedly — if high-exponent $`d`$ and $`f`$ functions are added to the basis set. As shown by one of us, these ‘inner shell polarization functions’ address an SCF-level effect which bears little relationship to inner-shell correlation, and actually dwarfs the latter in importance (contributions as high as 10 kcal/mol having been reported).
All these approaches carry a dual disadvantage: their extravagant computational cost and their reliance on the quantum chemical expertise of the operator.
The target of the present study was to develop computational procedures that meet the following requirements:
* they should have mean absolute errors on the order of 0.25 kcal/mol or less, and problem molecules (if any) should be readily identifiable;
* the method should be applicable to at least first- and second-row molecules;
* it should be robust enough to be applicable in a fairly ‘black-box’ fashion by a nonspecialist;
* it should rely as little as possible (preferably not at all) on empirical parameters, empirical additivity corrections, or other ‘fudges’ derived from experimental data;
* relatedly, it should explicitly include all the physical effects that are liable to affect molecular binding energies of first- and second-row compounds, rather than rely upon absorbing them in empirical parametrization;
* last but not least, it should be sufficiently cost-effective that a molecule the size of, e.g., benzene should be treatable on a workstation computer.
In the course of this work, we will present two schemes which we shall denote W1 and W2 (for Weizmann-1 and Weizmann-2) theories. W2 theory yields about 0.2 kcal/mol (or better) accuracy for first- and second-row molecules with up to four heavy atoms, and involves no empirical parameters. W1 theory is applicable to larger systems (we shall present benzene and trans-butadiene as examples), yet still yields a mean absolute error of about 0.30 kcal/mol and includes only a single, molecule-independent, empirical parameter which moreover is derived from calculated rather than experimental results.
## II Computational details
Most electronic structure calculations reported in this work were carried out using MOLPRO 97.3 and MOLPRO 98.1 running on a Silicon Graphics (SGI) Octane workstation and on the SGI Origin 2000 of the Faculty of Chemistry. The full CCSDT (coupled cluster with all connected single, double, and triple substitutions) calculations were carried out using ACES II running on a DEC Alpha 500/500 workstation.
SCF and valence correlation calculations were carried out using correlation consistent polarized $`n`$-tuple zeta (cc-pV$`n`$Z, or V$`n`$Z for short) ($`n`$=D, T, Q, 5, 6) and augmented correlation consistent polarized $`n`$-tuple zeta (aug-cc-pV$`n`$Z, or AV$`n`$Z for short) ($`n`$=D, T, Q, 5, 6) basis sets of Dunning and coworkers. The maximum angular momentum parameter $`l`$, which occurs in the extrapolation formulas for the correlation energy, is identified throughout with the $`n`$ in V$`n`$Z and AV$`n`$Z. Except for the calculation of the electron affinity of hydrogen, regular V$`n`$Z basis sets were used throughout on hydrogen atoms.
Most valence correlation calculations were carried out using the CCSD (coupled cluster with all single and double substitutions) and CCSD(T) (i.e. CCSD followed by a quasiperturbative estimate of the effect of connected triple excitations) electron correlation methods. The CCSD(T) method is known to be very close to an exact solution within the given one-particle basis set if the wave function is dominated by dynamical correlation.
Where possible, imperfections in the treatment of connected triple excitations were estimated by comparing with full CCSDT calculations. The effects of connected quadruple and higher excitations were estimated by small basis set FCI (full configuration interaction) calculations — which represent exact solutions with a finite basis set.
Inner-shell correlation contributions were evaluated by taking the difference between valence-only and all-electron CCSD(T) calculations in special core-correlation basis sets. For first-row compounds, both Dunning’s ACVQZ (augmented correlation consistent core-valence quadruple zeta) basis set and the Martin-Taylor (MT) core correlation basis sets were considered; for second-row compounds only the MT basis sets. The latter are generated by completely decontracting a CV$`n`$Z or ACV$`n`$Z basis set, and adding one tight $`p`$ function, three high-exponent $`d`$ functions, two high-exponent $`f`$ functions, and (in the case of the MTv5z basis set) one high-exponent $`g`$ function to the basis set. The additional exponents were derived from the highest ones already present for the respective angular momenta, successively multiplied by 3.0. The smallest such basis set, MTvtz (based on VTZ) is also simply denoted MT.
Scalar relativistic corrections were calculated at the ACPF (averaged coupled pair functional) level as expectation values of the first-order Darwin and mass-velocity terms. An idea of the reliability of this approach is given by comparing a very recent relativistic (Douglas-Kroll) coupled cluster calculation of the relativistic contribution to TAE\[SiH<sub>4</sub>\], $`-`$0.67 kcal/mol, with the identical value of $`-`$0.67 kcal/mol obtained by means of the present approach. For GaCl, GaCl<sub>2</sub>, and GaCl<sub>3</sub> — where relativistic effects are an order of magnitude stronger than even in the second-row systems considered here — Bauschlicher found that differences between Douglas-Kroll calculations and the presently followed approach amounted to 0.12 kcal/mol or less on the binding energy.
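Explicitly, the quantities evaluated are the standard first-order one-electron expectation values (atomic units; $`\alpha `$ here is the fine-structure constant, not an extrapolation exponent)

$$\mathrm{\Delta }E_{MV}=-\frac{\alpha ^2}{8}\sum _i\langle p_i^4\rangle ,\qquad \mathrm{\Delta }E_D=\frac{\pi \alpha ^2}{2}\sum _{i,A}Z_A\langle \delta (\mathbf{r}_{iA})\rangle ,$$

taken over the ACPF wave function.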
Spin-orbit coupling constants were evaluated at the CASSCF-CI level using the $`spdf`$ part of the MTav5z basis set. (For a recent review of the methodology involved, see Ref..)
Density functional calculations for the purposes of obtaining certain reference geometries and zero-point energies were carried out using the Gaussian 98 package. Both the B3LYP (Becke 3-parameter-Lee-Yang-Parr) and B3PW91 (Becke 3-parameter-Perdew-Wang-1991) exchange-correlation functionals were considered.
Most geometry optimizations were carried out at either the CCSD(T)/VQZ+1 or the B3LYP/VTZ+1 (in some cases B3PW91/VTZ+1) levels of theory, where the notation V$`n`$Z+1 indicates the addition to all second-row atoms of a single high-exponent $`d`$-type ‘inner polarization function’ with an exponent equal to the highest $`d`$ exponent in the Dunning V5Z basis set. In the past this was found to recover the largest part of the effects of inner polarization on geometries and vibrational frequencies. (We note that for molecules consisting of first-row atoms only, the V$`n`$Z+1 basis sets are equivalent to regular V$`n`$Z basis sets.)
Past studies of the convergence behavior of the SCF energy have shown it to be very well described by a geometric extrapolation of the type first proposed by Feller, $`A+B/C^l`$. Clearly, for this purpose a succession of three SCF/AV$`n`$Z basis sets is required.
For the valence correlation CCSD and (T) energies, two extrapolation formulas were considered. The first, $`A+B/(l+1/2)^\alpha `$, was proposed by Martin — the philosophy being that using the extrapolation exponent as an adjustable parameter would enable inclusion of higher-order terms in the asymptotic expansion
$$A/(L+1)^3+B/(L+1)^4+C/(L+1)^5+\cdots $$
(1)
while the denominator shift of 1/2 was a compromise — for identification of the $`l`$ in cc-pV$`l`$Z with L — between hydrogen and nonhydrogen atoms. The second formula, simply $`A+B/l^3`$, was proposed by Helgaker and coworkers — where $`l`$ was identified with $`L-1`$ throughout. Halkier et al. already noted that in terms of the extrapolated energy using $`A+B/(l+C)^D`$, the parameters C and D were very strongly coupled, and that it only made sense to vary one of them.
The combination of treatments for SCF, CCSD valence correlation, (T), imperfections in the T treatment, and connected quadruple and higher excitations is compactly denoted here by W\[p5;p4;p3;p2;p1\], in which p1 denotes the basis sets involved in the SCF extrapolation, p2 the basis sets involved in the CCSD extrapolation, p3 those in the (T) extrapolation (which may or may not be different from p2), p4 (if present) the basis sets used in correcting for imperfections in the treatment of connected triple excitations, and p5 (if present) those involved in evaluating the effect of connected quadruple and higher excitations. If any of the p’s consists of a single index, a simple additivity approximation is implied; two indices denote a two-parameter extrapolation of the type $`A+B/l^3`$, while three indices indicate a three-parameter extrapolation of the type $`A+B/(l+1/2)^\alpha `$ in the case of correlation contributions, and $`A+B/C^l`$ in the case of SCF contributions. For example, the level of theory used in the previous work of Martin and Taylor would be W\[TQ5;TQ5;TQ5\] in the present notation, while W\[D;Q;TQ5;TQ5;TQ5\] indicates W\[TQ5;TQ5;TQ5\]+CCSDT/AVQZ-CCSD(T)/AVQZ+FCI/AVDZ-CCSDT/AVDZ.
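For concreteness, the extrapolations just defined are easy to code; the sketch below (ours; the three correlation energies are placeholders, not values from the tables) implements the two-point $`A+B/l^3`$ limit, the geometric $`A+B/C^l`$ formula for three consecutive $`l`$, and the three-point $`A+B/(l+1/2)^\alpha `$ formula, solving the latter for $`\alpha `$ by bisection:

```python
def extrap_2pt(l1, E1, l2, E2):
    """Two-point A + B/l^3 (Helgaker-style): returns the limit A."""
    return (E2 * l2**3 - E1 * l1**3) / (l2**3 - l1**3)

def extrap_geom(E1, E2, E3):
    """Three-point geometric A + B/C^l (Feller), consecutive l: returns A."""
    C = (E1 - E2) / (E2 - E3)
    return E3 - (E2 - E3) / (C - 1.0)

def extrap_3pt(ls, Es, lo=1.01, hi=12.0):
    """Three-point A + B/(l+1/2)^alpha (Martin); assumes the exponent is bracketed."""
    l1, l2, l3 = ls
    E1, E2, E3 = Es
    f = lambda x, a: (x + 0.5) ** (-a)
    resid = lambda a: (E1 - E2) / (E2 - E3) - (f(l1, a) - f(l2, a)) / (f(l2, a) - f(l3, a))
    for _ in range(100):                      # plain bisection on the exponent
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if resid(lo) * resid(mid) <= 0.0 else (mid, hi)
    a = 0.5 * (lo + hi)
    B = (E2 - E3) / (f(l2, a) - f(l3, a))
    return E3 - B * f(l3, a)

# placeholder valence correlation energies (hartree) for AV{T,Q,5}Z
E_T, E_Q, E_5 = -0.3050, -0.3150, -0.3185
print("A + B/l^3       :", extrap_2pt(4, E_Q, 5, E_5))
print("A + B/C^l       :", extrap_geom(E_T, E_Q, E_5))
print("A + B/(l+1/2)^a :", extrap_3pt((3, 4, 5), (E_T, E_Q, E_5)))
```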
## III Atomic electron affinities as a ‘litmus test’
The electron affinities of the first- and second-row atoms have often been used as benchmarks for high-level electronic structure methods (see e.g. the introductions to Refs. for reviews). Because electron affinities involve a change in the number of electrons correlated in the system, they are very taxing tests for any electron correlation method; in addition, they involve a pronounced change in the spatial extent of the wave function, making them very demanding in terms of the basis set as well.
Until recently, three of the first- and second-row atomic electron affinities were imprecisely known experimentally (B, Al, and Si): this situation was changed very recently by high-precision measurements for B, Al, and Si.
The approach we have chosen here for the SCF and valence correlation components is summarized in our notation as W\[$`n`$,Q,56,56,Q56\] for the first-row atoms, and W\[$`n`$,Q,Q5,Q5,TQ5\] for the second-row atoms. The effect of inner-shell correlations was assessed at the CCSD(T)/MTav5z level, while Darwin and mass-velocity corrections were evaluated at the ACPF/MTav5z level. Finally, spin-orbit splittings were calculated at the CASSCF-CI level with the $`spdf`$ part of a MTav5z basis set. (For technical reasons, the $`h`$ functions were omitted in both the scalar relativistic and spin-orbit calculations, as were the $`g`$ functions in the latter.)
Our best computed results are compared with experiment in Table I, where results from recent calibration studies are also summarized.
Agreement between computed and observed values can be described without reservation as excellent: the mean absolute error amounts to 0.0009 eV. The fact that this accuracy is obtained systematically and across the board strongly suggests that the ‘right result was obtained for the right reason’. Upon eliminating the corrections for imperfections in CCSD(T), i.e. restricting ourselves to W\[56,56,Q56\] for first-row atoms and W\[Q5,Q5,TQ5\] for second-row atoms, the mean absolute error increases by an order of magnitude to 0.009 eV, i.e. about 0.2 kcal/mol. As we shall see below, this is essentially the type of accuracy we can obtain for molecules without corrections for CCSD(T) imperfections.
The importance of the Darwin and mass-velocity corrections increases, as expected, with increasing Z, and their contribution becomes quite nontrivial for atoms like Cl. It is therefore to be expected that, e.g., in polar second-row molecules like ClCN or SO<sub>2</sub> they will contribute substantially to TAE as well.
The importance of inner-shell correlation effects is actually largest for Al, because of the small gap between valence and sub-valence orbitals in the early second-row elements.
Table II compares the convergence behavior of the extrapolated valence correlation contributions as a function of the largest basis set used, both using the Martin three-term and Helgaker two-term formulas. While both formulas appear to give the same answer if the underlying basis sets are large enough, the two-term formula is by far the more stable towards reduction of the basis sets used in the extrapolation. Since the use of W\[Q56,Q56,Q56\] is hardly an option for molecules, the two-term formula appears to be the formula of choice.
Following the suggestion of a referee, we have considered (Table II) the performance of some other extrapolation formulas for the valence correlation energy. As a point of reference, we have taken an “experimental valence correlation contribution to EA”, which we derived by subtracting all computed contributions other than the valence correlation energy from the best experimental EAs. While some residual uncertainties may remain in some of the individual contributions, these should be reliable to 0.001 eV on average.
As seen in Table II, performance of the geometric series extrapolation $`A+B/C^n`$ is outright poor: in fact, for extrapolation from AV{D,T,Q}Z results the error is twice as large as that caused by not extrapolating at all. If AV{T,Q,5}Z basis sets are used, mean absolute error drops to 0.015 eV, which is still an order of magnitude larger than for the $`A+B/l^3`$ extrapolation, and only slightly better than not extrapolating at all. Finally, for AV{Q,5,6}Z basis sets, the error is three times smaller than complete omission of extrapolation, but three times larger than that of using any of the following formulas: $`A+B/l^3`$ , $`A+B/l^C`$ , or $`A+B/(l+1/2)^4+C/(l+1/2)^6`$ . All three of the latter yield a mean absolute error of about 0.001 eV, on about the same order of accuracy as the reference values. For the smallest basis set series AV{D,T,Q}Z, the mixed exponential-Gaussian extrapolation $`A+B/\mathrm{exp}(l-1)+C/\mathrm{exp}((l-1)^2)`$ represents a very substantial improvement over $`A+B/C^l`$, and actually exhibits slightly better performance than $`A+B/l^3`$. For the AV{T,Q,5}Z series which is of greatest interest here, the Halkier et al. $`A+B/l^3`$ formula by far outperforms the other formulas considered.
In short, it appears to be established that the two-term formula of Helgaker and coworkers is the extrapolation method of choice overall, with the Martin three-term formulas the second-best choice provided basis sets of AV{T,Q,5}Z quality are used. The mixed exponential-Gaussian formula performs slightly better than $`A+B/l^3`$ if only AV{D,T,Q}Z basis sets are used (see however Section VI.C below).
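In passing, note that, unlike the power laws, the mixed exponential-Gaussian form is linear in its three parameters, so a three-point fit is a single linear solve (again a sketch of ours with placeholder numbers):

```python
import numpy as np

def extrap_exp_gauss(ls, Es):
    """Fit E(l) = A + B/exp(l-1) + C/exp((l-1)^2); return the limit A."""
    M = np.array([[1.0, np.exp(-(l - 1)), np.exp(-(l - 1) ** 2)] for l in ls])
    return np.linalg.solve(M, np.array(Es))[0]

print(extrap_exp_gauss([2, 3, 4], [-0.2750, -0.3050, -0.3150]))  # AV{D,T,Q}Z placeholders
```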
Computed spin-orbit contributions to the electron affinities are compared in Table III to values obtained from observed fine structures. While small deviations appear to persist, these may at least in part be due to higher-order spin-orbit effects which were neglected in the calculation rather than to deficiencies in the electronic structure treatment. At any rate, to the accuracy relevant for our purpose (establishing spin-orbit corrections to molecular binding energies) it appears to be immaterial whether the computed or the experimentally derived values are used.
Finally, the convergence of the SCF component is so rapid that it appears to be essentially irrelevant which extrapolation formula is used — the amount bridged by the extrapolation is on the order of 0.0001 eV.
## IV Results for molecules
Since application of electron correlation methods more elaborate than CCSD(T) would be well-nigh impossible for molecules of practical size, we have restricted ourselves to W\[Q5;Q5;TQ5\] and W\[TQ5;TQ5;TQ5\].
Inner-shell correlation contributions, as well as scalar relativistic corrections, were initially computed with the largest basis sets practicable – in most cases ACV5Z or MTavqz (see Table IV for details).
From a prerelease version of a re-evaluation of the experimental data in the G2/G3 set at the National Institute of Standards and Technology (NIST), we have selected 28 first- and second-row molecules which satisfy the following criteria: (a) the uncertainty in the experimental total atomization energy TAE is on the order of 0.25 kcal/mol or better; (b) the molecules are not known to exhibit severe multireference effects; (c) anharmonic vibrational zero-point energies are available from either experiment or high-level ab initio calculations (see footnotes to Table IV for details).
Geometries were optimized at the CCSD(T)/VQZ+1 level, and to all second-row atoms a complement of two tight $`d`$ and one tight $`f`$ functions was added in every basis set to ensure saturation in inner-shell polarization effects. In all cases, the exponents were derived as even-tempered series $`\alpha \beta ^n`$ with $`\beta =3.0`$ and $`\alpha `$ the highest exponent already present for that angular momentum.
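Generating such an even-tempered complement is trivial to automate; a one-function sketch (the starting exponent below is a placeholder):

```python
def tight_exponents(alpha_max, beta=3.0, n=2):
    """Even-tempered tight functions alpha_max*beta, alpha_max*beta^2, ..."""
    return [alpha_max * beta**k for k in range(1, n + 1)]

print(tight_exponents(0.550, n=2))   # e.g. two tight d exponents: [1.65, 4.95]
```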
Computed (W\[Q5;Q5;TQ5\]) and observed results are compared in Table IV. The excellent agreement between theory and experiment is immediately apparent: in many cases, the computed results fall within the already quite narrow experimental error bars. Over the entire sample of molecules, the mean absolute error is 0.24 kcal/mol, with the largest errors being about 0.6 kcal/mol (O<sub>2</sub> and F<sub>2</sub>). Restricting our sample to first-row molecules only, we find a mean absolute error of 0.24 kcal/mol, which however gets reduced to 0.17 kcal/mol (maximum error 0.39 kcal/mol for N<sub>2</sub>) upon elimination of F<sub>2</sub>, NO, and O<sub>2</sub> as having known appreciable nondynamical correlation effects. Over the subset of second-row molecules in our sample MAE is 0.23 kcal/mol (maximum error 0.44 kcal/mol for H<sub>2</sub>S); upon elimination of H<sub>2</sub>S and SO<sub>2</sub> this is lowered to 0.20 kcal/mol.
It should be noted that these MAEs are comparable to those found by Martin and Taylor for a sample of first-row molecules, yet unlike their study no correction for N-containing bonds is required here.
The possibility that the errors in F<sub>2</sub>, NO, and O<sub>2</sub> are actually due to residual basis set incompleteness and/or that the excellent agreement with experiment for the other molecules is actually due to an error compensation involving deficiencies in the predicted basis set limit, was examined by carrying out W\[56;56;Q56\] calculations for H<sub>2</sub>O, F<sub>2</sub>, NO, O<sub>2</sub>, N<sub>2</sub>, HF, and CO. As seen in Table V, the predicted basis set limits do not differ materially from their W\[Q5;Q5;TQ5\] counterparts, strongly suggesting that the latter expression in fact does reach the basis set limit and that the residual errors are largely due to imperfections in the CCSD(T) method.
While molecules liable to exhibit such errors are readily identifiable from inspection of the largest coupled cluster amplitudes or evaluation of the $`𝒯_1`$ diagnostic, an even simpler criterion is apparently offered by the ratio TAE\[SCF\]/TAE\[SCF+val.corr.\]. In “well-behaved” molecules such as CH<sub>4</sub> and H<sub>2</sub>O, the SCF component makes up upwards of two-thirds of the binding energy, while in NO and in O<sub>2</sub> it makes up no more than a third and a fifth, respectively, of the total, and F<sub>2</sub> is actually metastable towards dissociation at the SCF level. While for some molecules of this variety we actually obtain excellent results (e.g. ClF), this may be due to error compensation or to the binding energies being fairly small to begin with.
Further inspection of Table IV reveals that some of the ‘negligible’ contributions are in fact quite significant at the present precision level: for instance, Darwin and mass-velocity contributions in SO<sub>2</sub> amount to -0.71 kcal/mol (for SiF<sub>4</sub> a somewhat extravagant -1.88 kcal/mol was found), while atomic spin-orbit splitting in such compounds as Cl<sub>2</sub>, ClF, and SO<sub>2</sub> amounts to -1.68, -1.23, and -1.01 kcal/mol, respectively. Inner-shell correlation contributions of 2.36 (C<sub>2</sub>H<sub>4</sub>), 2.44 (C<sub>2</sub>H<sub>2</sub>), 1.68 (OCS), and 1.76 (ClCN) kcal/mol speak for themselves; interestingly (as noted previously), these effects on the whole do not seem to be more important in second-row than in first-row compounds.
Finally, we shall compare the performance of W\[TQ5;TQ5;TQ5\] and W\[Q5;Q5;TQ5\] (Table VI). In general, the results with the three-point valence correlation extrapolation are at best of the same quality as those with the two-point valence correlation extrapolation, and in many cases agree less well with experiment. We will therefore use the two-point extrapolation exclusively henceforth.
## V W2 theory and its performance
Having established that our ’base level of theory’ can obtain the right results for the right reason, we shall now proceed to consider simplifications.
### A Inner-shell correlation
The use of the smaller MT basis set for the scalar relativistic contributions is found to have an effect of about 0.01 kcal/mol or less, with 0.02 kcal/mol being the largest individual case. This approximation can therefore safely be made.
Using the same MT basis set for the core correlation contribution on average affects energetics by 0.03 kcal/mol, the largest individual effects being 0.07 kcal/mol for H<sub>2</sub>S and 0.08 kcal/mol for OCS.
Even so, in fact, the core correlation calculations are quite CPU-time consuming, particularly for second-row compounds, due to the large number of electrons being correlated. Any further reduction would obviously be welcome — it should be noted that the MT basis set was developed not with efficiency, but with saturation (in the core-valence correlation energy) in mind. Further experimentation revealed that the tightest p, d, and f functions could safely be eliminated, but that further basis set reductions adversely affect the quality of the core correlation contribution computed. The reduced basis set shall be denoted as MTsmall, and in fact consists of a completely decontracted cc-pVTZ basis set with two tight $`d`$ and one tight $`f`$ functions added. Since this basis set only has about half the basis functions of the ACVQZ basis set per heavy atom, it represents a very significant saving in CPU time (about a factor of 16) in a CCSD(T) calculation. The only molecule for which we see a substantial difference with the MT basis set is SO<sub>2</sub>, for which Bauschlicher and Ricca previously noted that the inner-shell correlation contribution is unusually sensitive to the basis set.
For the evaluation of the Darwin and mass-velocity corrections, differences with the larger MT basis set are less than 0.01 kcal/mol across the board.
A further reduction in CPU time for the core correlation contribution would be achieved if MP2 or even CCSD calculations could be substituted for their CCSD(T) counterparts. However, as seen from Table VII, CCSD underestimates the CCSD(T) core correlation contributions for several molecules by as much as 50%. The behavior of MP2 is quite similar and the MP2–CCSD differences are substantially smaller than the (T) contribution, suggesting that it is the treatment of connected higher excitations that is the issue. Predictably, the largest (T) effects on the core correlation contribution occur in molecules where connected triple excitations are important for the valence binding energy as well, e.g. SO<sub>2</sub>, F<sub>2</sub>, Cl<sub>2</sub>, N<sub>2</sub>. Conversely, in CH<sub>3</sub> or CH<sub>4</sub>, which have quite small (T) contributions to the binding energy, CCSD does perform excellently for the core correlation contribution. In PH<sub>3</sub> and H<sub>2</sub>S, on the other hand, substantial errors in the core correlation are seen even as the (T) contribution to the valence correlation binding energy is quite small — it should be noted, however, that both the absolute inner-shell correlation energy and the (T) contribution to it are much more important in these second-row systems than in their first-row counterparts.
One may rightly wonder whether the inner-shell contributions are in fact converged at the CCSD(T) level. Unfortunately, if a more elaborate treatment is already impractical for the valence correlation, this would a fortiori be true for the inner-shell correlation. We did, however, carry out a CCSDT/MTsmall calculation on the N<sub>2</sub> molecule, which we chose as a representative case of failure of the CCSD approach for core correlation. The resulting CCSDT level core contribution, 0.87 kcal/mol, is only 0.05 kcal/mol larger than the CCSD(T) value of 0.82 kcal/mol, to be compared with 0.42 kcal/mol at the MP2 and 0.52 kcal/mol at the CCSD level. It cannot be ruled out a priori that connected quadruple and higher excitations might contribute to the inner-shell correlation energy. However, since apparently their importance for the valence correlation binding energy is very small (otherwise a treatment that completely ignores them would not yield the type of agreement with experiment found in this work), it seems unlikely that they would greatly contribute to the inner-shell correlation energy.
The “G3large” basis set used to evaluate, among other things, inner-shell correlation effects in G3 theory is still smaller than the MTsmall basis set, and its performance therefore is certainly of interest. Alas, in Table VII it is seen that in many cases it seriously overestimates the inner-shell correlation energy, almost certainly because of basis set superposition error which is apparently more of an issue for inner-shell correlation energies than for their valence counterparts. In G3 theory, the inner-shell correlation is evaluated at the MP2 level: hence the two errors cancel to a substantial extent.
### B Zero-point energy
Not in all cases is a complete anharmonic force field calculation feasible. We find in Table VIII that B3LYP/cc-pVTZ+1 harmonic zero-point energies scaled by 0.985 reproduce the rigorous anharmonic zero-point energies quite nicely. (The scaling factor is about halfway between what would be required for fundamentals, about 0.97, and harmonics, about 1.00.)
### C Separate extrapolation of CCSD and (T)
The (T) contribution makes up a relatively small part of the valence correlation energy, while its evaluation, in large basis sets and for systems with very many electrons, will dominate the CPU time. For instance, in a very recent study on SiF<sub>4</sub>, a CCSD(T) calculation with an AVQZ basis set on F and a VQZ+2d1f basis set on Si took 50h07 using MOLPRO on an SGI Octane workstation (768 MB of memory being allocated to the job), of which 41h30 were spent in the (T) step alone.
In addition, it was previously noted by Helgaker and coworkers that the (T) contribution appears to converge faster with the basis set than the CCSD correlation energy, for which reason they actually propose its separate evaluation in a smaller basis set. In the present case, we have considered extrapolating it from AVTZ+2d1f and AVQZ+2d1f results rather than AVQZ+2d1f and AV5Z+2d1f, respectively. In our adopted notation, this becomes W\[TQ;Q5;TQ5\].
As seen in Table VI, the difference in quality between W\[TQ;Q5;TQ5\] and W\[Q5;Q5;TQ5\] appears to be essentially negligible. This is an important conclusion, since it means that the largest basis set calculation to be carried out is only at the CCSD level, at a fraction of the cost of the full CCSD(T) counterpart — moreover it can be done using direct algorithms.
### D Protocol for W2 theory
The protocol obtained by introducing the successful approximations given above will be denoted here as W2 (Weizmann-2) theory. Its steps consist of the following (a brief numerical sketch of the extrapolation arithmetic is given after the list):
* geometry optimization at the CCSD(T)/VQZ+1 level, i.e. CCSD(T)/VQZ if only first-row atoms are present;
* zero-point energy obtained from a CCSD(T)/VTZ+1 anharmonic force field or, failing that, B3LYP/VTZ+1 frequencies scaled by 0.985 (vide infra);
* Carry out CCSD(T)/AVTZ+2d1f and CCSD(T)/AVQZ+2d1f single-point calculations;
* Carry out a CCSD/AV5Z+2d1f single-point calculation;
* the SCF component of TAE is extrapolated by $`A+B/C^l`$ from SCF/AVTZ+2d1f, SCF/AVQZ+2d1f, and SCF/AV5Z+2d1f results ($`l`$=3, 4, and 5, respectively);
* the CCSD valence correlation component is obtained from applying $`A+B/l^3`$ to CCSD/AVQZ+2d1f and CCSD/AV5Z+2d1f valence correlation energies ($`l`$=4 and 5, respectively). It is immaterial whether this is done on total energies or on components to TAE;
* the (T) valence correlation component is obtained from applying $`A+B/l^3`$ to the CCSD(T)/AVTZ+2d1f and CCSD(T)/AVQZ+2d1f values for the (T) contribution ($`l`$=3 and 4, respectively). It is again immaterial whether this is done on total energies or on components to TAE;
* core correlation computed at the CCSD(T)/MTsmall level;
* scalar relativistic corrections (and, if necessary, spin-orbit splittings) computed at ACPF/MT level. To save CPU time, this can be combined into a single job with the previous step.
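The extrapolation steps above are closed-form arithmetic once the single-point energies are available. The following minimal Python sketch shows just that arithmetic, under our own naming; the input energies are hypothetical placeholders (in hartree), not values from this work.

```python
# Sketch of the W2 extrapolation arithmetic only; the SCF and coupled
# cluster energies themselves must come from an electronic structure code.

def scf_limit(e3, e4, e5):
    """Geometric extrapolation A + B/C**l of SCF energies at l = 3, 4, 5.
    Eliminating A and B from the three equations gives C = (e3-e4)/(e4-e5)."""
    c = (e3 - e4) / (e4 - e5)
    b = (e4 - e5) / (c**-4 - c**-5)
    return e5 - b * c**-5                 # A = the basis-set limit

def corr_limit(e_lo, e_hi, l_lo, l_hi, beta=3.0):
    """Two-point A + B/l**beta extrapolation of a correlation component."""
    b = (e_lo - e_hi) / (l_lo**-beta - l_hi**-beta)
    return e_hi - b * l_hi**-beta         # A

# Hypothetical illustration: CCSD valence correlation from l = 4, 5 and the
# (T) contribution from l = 3, 4, both with the default beta = 3.
ccsd_inf = corr_limit(-0.36012, -0.36345, 4, 5)
t_inf = corr_limit(-0.01501, -0.01542, 3, 4)
print(ccsd_inf, t_inf)
```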
On a typical workstation at the time of writing (e.g. an SGI Octane with 1 GB of RAM and 2$`\times `$18 GB external disks) its applicability range would be about three heavy atoms in $`C_{2v}`$ symmetry, although the main limiting factor would be disk space and larger systems could be treated if a direct CCSD code were available.
We will illustrate the CPU time savings made in the W2 approach, compared to our most rigorous calculations, using two examples: a first-row diatomic (CO) and a second-row molecule (OCS). Using MOLPRO on an SGI Octane workstation, the most accurate calculations reported in this work (Table IV) required 21h36 for CO and no less than 362h12 for OCS. W2 theory yields essentially identical results at a cost of 1h12 (CO) or 13h42 (OCS) — a reduction by a factor of 20–30 which is typical of the other molecules.
## VI W1 theory and its performance
In an effort to obtain a method that is applicable to larger systems, we introduce a few further approximations. Relevant results can be found in Table VI.
### A Use of density functional reference geometries
B3LYP/cc-pVTZ+1 geometries are close enough to their CCSD(T)/VQZ+1 counterparts that their use does not cause a major effect on the final computed result. There is one notable exception to this rule for the molecules considered here: Cl<sub>2</sub>, for which the B3LYP/VTZ+1 bond distance of 2.0130 Å is quite different from its CCSD(T)/VQZ+1 counterpart, 1.9972 Å. (The experimental value is 1.987<sub>9</sub> Å.) While B3LYP and B3PW91 on the whole tend to produce essentially identical geometries and harmonic frequencies, it has been argued previously that the B3PW91 functional may be somewhat more reliable for systems with high electron density; and in fact, the B3PW91/VTZ+1 bond distance of 1.9912 Å is much closer to the CCSD(T)/VQZ+1 and experimental value.
Even so, the use of a B3LYP/VTZ+1 reference geometry still does not affect the computed $`D_e`$ by more than 0.14 kcal/mol. We conclude that CCSD(T) geometry optimizations, which will become fairly costly for larger molecules, can safely be replaced by B3LYP calculations, which can also serve for obtaining zero-point energies.
### B Further reduction of basis set sizes
The obvious first suggestion would be to also carry out the CCSD extrapolation using smaller basis sets, i.e. W\[TQ;TQ;DTQ\].
The effect for the SCF component of TAE is very small on condition that the extrapolation is carried out not on the individual total energies but rather on the computed SCF components of TAE themselves. Clearly error compensation occurs between the molecule and the constituent atoms.
The effect on the valence correlation component, unfortunately, is rather more significant. Over the ‘training set’, MAE rises to 0.37 kcal/mol even after SO<sub>2</sub> (which clearly is a pathological case here) has been eliminated. Aside from the latter, eliminating systems with mild nondynamical correlation does not lead to any significant reduction in MAE. Also noteworthy is that, on average, the binding energies appear to be somewhat overestimated: this is easily explained from the fact that basis sets like AVTZ+2d1f are not quite saturated in the radial correlation energy either, and that therefore the TAE\[AVQZ+2d1f\]$`-`$TAE\[AVTZ+2d1f\] gap will be an overestimate of the TAE\[L=4\]$`-`$TAE\[L=3\] gap.
### C Use of empirical extrapolation exponents
Truhlar considered the use of $`L`$-extrapolation formulas with empirical exponents, carried out from cc-pVDZ and cc-pVTZ calculations, as an inexpensive alternative to very large basis set calculations.
We will investigate here a variant of this suggestion adapted to the present framework. The valence correlation component to TAE will indeed be extrapolated using the formula $`A+B/l^\beta `$, in which $`\beta `$ is now an empirical parameter — we will denote this W\[Q5;Q5;TQ5\]$`\beta `$ and the like.
We then add in all the further corrections (core correlation, scalar relativistics, spin-orbit) that occur in W2 theory, and try to determine $`\beta `$ by minimizing MAE with respect to the experimental TAE values for our ‘training set’. Not surprisingly, for W\[Q5;Q5;TQ5\]$`\beta `$ this yields an optimum exponent ($`\beta `$=2.98) which differs insignificantly from the ‘ideal’ value of 3.0. Alternatively, $`\beta `$ could be optimized for the best possible overlap with the W\[56;56;Q56\] results: in fact, the same conclusion is obtained, namely that making $`\beta `$ an empirical parameter does not improve the quality of the results.
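In practice the fit of $`\beta `$ is a one-dimensional minimisation of the MAE; a hedged Python sketch (all names and the data layout are invented for illustration) could look as follows, where `rest` stands for the sum of all non-extrapolated contributions to the TAE (SCF limit, core correlation, scalar relativistics, spin-orbit).

```python
# One-parameter fit of the extrapolation exponent beta by direct scan.
# `data` holds one tuple per training-set molecule: the two finite-basis
# valence correlation components of TAE, the remaining (non-extrapolated)
# contributions, and the reference TAE. All purely illustrative.

def extrapolate(e_lo, e_hi, l_lo, l_hi, beta):
    b = (e_lo - e_hi) / (l_lo**-beta - l_hi**-beta)
    return e_hi - b * l_hi**-beta

def mae(beta, data, l_lo, l_hi):
    errs = [abs(extrapolate(e_lo, e_hi, l_lo, l_hi, beta) + rest - ref)
            for (e_lo, e_hi, rest, ref) in data]
    return sum(errs) / len(errs)

def best_beta(data, l_lo, l_hi):
    grid = [2.5 + 0.01 * k for k in range(151)]   # scan beta = 2.50 ... 4.00
    return min(grid, key=lambda b: mae(b, data, l_lo, l_hi))
```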
For W\[TQ;TQ;DTQ\] however, the situation is rather different. The optimum exponent $`\beta `$ is found to be 3.18 if optimized against the experimental TAE values, and 3.16 (insignificantly different) if optimized against the W\[Q5;Q5;TQ5\] results. In both cases, MAE drops to 0.30 kcal/mol, and on average no more overestimation occurs.
W\[TQ;TQ;DTQ\]3.18 represents a significant savings over W2 theory. Its time-determining step in molecules with many electrons will be the evaluation of the parenthetical triples in the AVQZ+2d1f basis set — their elimination would be most desirable.
A natural suggestion would then be W\[DT;TQ;DTQ\]$`\beta `$. Optimization of $`\beta `$ against the experimental TAE values yields $`\beta `$=3.26; the not greatly different $`\beta `$=3.22 is obtained by minimization of the deviation from the W\[Q5;Q5;TQ5\] results for the training set. Since the latter does not explicitly depend on experimental results, and minor changes in the computational protocol therefore do not require recalculation for the entire ‘training set’, we will opt for the latter alternative.
In either case, we obtain MAE=0.30 kcal/mol — for a calculation that requires not more than an AVTZ+2d1f basis set for the largest CCSD(T) calculation, and an AVQZ+2d1f basis set for the largest CCSD calculation. Again, the latter is amenable to a direct algorithm.
### D Protocol for W1 theory
We thus propose the following protocol for a computational level which we will call W1 (Weizmann-1) theory:
* geometry optimization at the B3LYP/VTZ+1 level (B3LYP/VTZ if only first-row atoms are present). Alternatively, the B3PW91 exchange-correlation functional may be preferable for some systems like Cl<sub>2</sub> — under normal circumstances, B3LYP/VTZ+1 and B3PW91/VTZ+1 should yield virtually identical geometries;
* zero-point energy obtained from B3LYP/VTZ+1 (or B3PW91/VTZ+1) harmonic frequencies scaled by 0.985;
* Carry out CCSD(T)/AVDZ+2d and CCSD(T)/AVTZ+2d1f single-point calculations;
* Carry out a CCSD/AVQZ+2d1f single-point calculation;
* the SCF component of TAE is extrapolated by $`A+B/C^l`$ from SCF/AVDZ+2d, SCF/AVTZ+2d1f, and SCF/AVQZ+2d1f components of TAE ($`l`$=2, 3, and 4, respectively)
* set $`\beta `$=3.22
* the CCSD valence correlation component is obtained from applying $`A+B/l^\beta `$ to CCSD/AVTZ+2d1f and CCSD/AVQZ+2d1f valence correlation energies (l=3 and 4, respectively). In both this and the next step, it is immaterial whether the extrapolation is carried out on components to the total energy or to TAE;
* the (T) valence correlation component is obtained from applying $`A+B/l^\beta `$ to the CCSD(T)/AVDZ+2d and CCSD(T)/AVTZ+2d1f values for the (T) contribution ($`l`$=2 and 3, respectively).
* core correlation contributions are obtained at the CCSD(T)/MTsmall level;
* scalar relativistic and, where necessary, spin-orbit coupling effects are treated at the ACPF/MTsmall level. As in W2 theory, this latter step can be combined in a single job with the previous step.
W1 theory can be applied to fairly large systems (see below). CPU times are dominated by the inner-shell correlation contribution (particularly for second-row compounds), which is reflected in the relatively small time reduction compared to W2 theory — e.g., from 1h12 to 0h24 for CO and from 13h42 to 8h48 for OCS. In addition — contrary to W2 theory — W1 theory exhibits a pronounced difference in performance between first-row and second-row compounds: for the species in Table VI, MAE is 0.26 kcal/mol for first-row, but 0.40 kcal/mol for second-row compounds. Since the CPU time gap between W1 and W2 theory is fairly narrow for second-row species, we conclude that for accurate work on second-row species — unless precluded by disk space or memory limitations — it may well be worthwhile to ‘walk the extra mile’ and carry out a W2 rather than a W1 calculation. For first-row systems, on the contrary, W1 may well seem the more attractive of the two.
## VII Sample applications to larger systems
By way of illustration, we have carried out some W1 theory calculations on trans-1,3-butadiene and benzene. All relevant computed and observed results are summarized in Table IX. The example of benzene is representative and will be discussed here in detail — it should be mentioned that the calculation was carried out in its entirety on an SGI Octane workstation with 2$`\times `$18 GB external SCSI disks.
The reference geometry was obtained at the B3LYP/cc-pVTZ level. The zero-point energy at that level, after scaling by 0.985, is found to be 62.04 kcal/mol.
The SCF component of TAE is predicted to be 1044.95 kcal/mol at the one-particle basis set limit, of which only 0.39 kcal/mol is covered by the geometric extrapolation. Of the CCSD valence correlation component of 291.07 kcal/mol, however, some 10.11 kcal/mol is covered by the extrapolation, which also accounts for 2.13 kcal/mol out of the 26.55 kcal/mol connected triple excitations contribution.
The inner-shell correlation contribution is quite sizable at 7.09 kcal/mol, although this number is not qualitatively different from that for three acetylenes or three ethylenes. Finally, Darwin and mass-velocity terms contribute a small but significant -0.96 kcal/mol, and atomic spin-orbit splitting another -0.51 kcal/mol. Everything adds up to 1367.95 kcal/mol at the bottom of the well, or 1305.92 kcal/mol at 0 K, which is in excellent agreement with the experimental value of 1306.1$`\pm `$0.12 kcal/mol from the NIST WebBook.
The CCSD/VQZ calculation took 10h10’ with MOLPRO on the Octane, the CCSD(T)/VTZ calculation 1h48’ on a single CPU on the Origin 2000. By far the most time-consuming part of the calculation was the inner-shell correlation contribution, at 67h46’, to which another 4h52’ should be added for the Darwin and mass-velocity contribution. We see similar trends in the results for trans-butadiene, which agree with experiment to virtually within the stated experimental uncertainty; for allene, we obtain a value intermediate between the two experimental values proposed in the WebBook.
We find for both molecules that the sum of core-correlation and relativistic contributions can be quite well estimated by additivity approximations. For instance, the core correlation and scalar relativistic contributions with the same basis set for C<sub>2</sub>H<sub>4</sub> are +2.360 and -0.330 kcal/mol, respectively, adding up to 2.030 kcal/mol. Assuming 2 and 3 times this ‘C=C bond equivalent’ for butadiene and benzene, respectively, yields estimated contributions of 4.06 (trans-butadiene), and 6.09 (benzene) kcal/mol, which agree excellently with the directly computed values of 4.02 and 6.12 kcal/mol, respectively. Considering that inner-shell correlation effects should be fairly local in character, such schemes should work quite well for larger organic systems where the valence calculation would still be feasible but the explicit inner-shell calculation would not be.
## VIII Conclusions
We have developed and presented two quasi-‘black box’ schemes for high-accuracy calculation of molecular atomization energies or, equivalently, molecular heats of formation, of first- and second-row compounds.
The less expensive scheme, W1 (Weizmann-1) theory, yields a mean absolute error of 0.30 kcal/mol and includes only a single, molecule-independent, empirical parameter. It requires no larger-scale calculations than CCSD/AVQZ+2d1f and CCSD(T)/AVTZ+2d1f (or, for nonpolar first-row compounds, CCSD/VQZ and CCSD(T)/VTZ). On workstation computers and using conventional coupled cluster algorithms, systems as large as benzene can be treated, while larger systems are feasible using direct coupled cluster methods.
The more expensive scheme, W2 (Weizmann-2) theory, contains no empirical parameters at all and yields a mean absolute error of 0.23 kcal/mol, which is lowered to 0.18 kcal/mol for molecules dominated by dynamical correlation. On workstation computers, molecules with up to three heavy atoms can be treated using conventional coupled cluster algorithms, while larger systems can still be treated using a direct CCSD code.
The inclusion of scalar relativistic (Darwin and mass-velocity) corrections is essential for good results in second-row compounds, particularly highly polar ones. Inclusion of inner-shell correlation contributions is absolutely essential: the basis set denoted as MTsmall (for Martin-Taylor small) appears to represent the best compromise between quality and computational expense. We do not recommend the use of lower-level electron correlation methods than CCSD(T) for the evaluation of the inner-shell contribution.
Among the several infinite-basis set extrapolation formulas for the correlation energy examined, the three-parameter $`A+B/(l+1/2)^\alpha `$ expression proposed by Martin and the $`A+B/l^3`$ expression proposed by Helgaker and coworkers yield the best results for sufficiently large basis sets, with the latter formula to be preferred on grounds of stability of the extrapolated results with the basis sets used. Geometric and mixed geometric-Gaussian extrapolation formulas are unsatisfactory when applied to the correlation energy, although they appear to be appropriate for the SCF component.
The main limiting factor for the quality of our calculations at this stage appears to be imperfections in the CCSD(T) method. This assertion is supported by the fact that the mean absolute error in the computed electron affinities of the atoms H, B–F and Al–Cl drops from 0.009 eV to 0.0009 eV if CCSDT and full CI corrections are included.
Extrapolation of the (T) contribution to the correlation energy can, at no loss in accuracy, be carried out using smaller basis sets than the CCSD contribution.
###### Acknowledgements.
JM is a Yigal Allon Fellow, the incumbent of the Helen and Milton A. Kimmelman Career Development Chair (Weizmann Institute), and an Honorary Research Associate (“Onderzoeksleider in eremandaat”) of the National Science Foundation of Belgium (NFWO/FNRS). GdO acknowledges the Feinberg Graduate School (Weizmann Institute) for a Postdoctoral Fellowship. This research was supported by the Minerva Foundation, Munich, Germany. The authors acknowledge enlightening discussions with (in alphabetical order) Drs. C. W. Bauschlicher Jr. (NASA Ames Research Center), Dr. Thom H. Dunning (PNNL), Prof. Trygve Helgaker (U. of Oslo, Norway), Dr. Frank Jensen (Odense U., Denmark), Dr. Timothy J. Lee (NASA Ames Research Center), and Prof. Peter R. Taylor (UCSD and National Partnership for Advanced Computing Infrastructure), and thank Dr. Peter Stern for technical assistance with various computer systems.
# A SIMPLE PRACTICAL HIGHER DESCENT FOR LARGE HEIGHT RATIONAL POINTS ON CERTAIN ELLIPTIC CURVES
## 1 Introduction
Many Diophantine problems can be reduced to determining points on elliptic curves. A fascinating example is from Bremner et al , where finding possible representations of $`n`$ as
$$n=(x+y+z)\left(\frac{1}{x}+\frac{1}{y}+\frac{1}{z}\right)$$
is shown to be equivalent to finding points of infinite order on the elliptic curve
$$E_n:u^2=v^3+(n^2-6n-3)v^2+16nv$$
The rank of $`E_n`$ can be estimated from the L-series of the curve, assuming the Birch & Swinnerton-Dyer conjecture, but this just provides evidence of existence. For an actual representation, we need to explicitly find a rational point. If the curve has rank greater than 1, it is usually fairly simple to find a point by a trivial search. For rank 1 curves, however, this is often not feasible. The L-series computations can be extended to give an estimate for the height of a rational point of the curve. If this is large, then a simple search will take far too long.
Recently, Silverman produced a very nice and effective procedure to determine such points. Based on a preprint of this paper, the author coded the method in UBASIC, and used it on several families of elliptic curves from Diophantine problems. This usage has shown that the method starts to become time-consuming for heights greater than about 10 in the Silverman normalisation - 20 in the alternative (from now on all heights will be in the Silverman normalisation).
If the curve has a rational point of order 2, we can use a 4-descent procedure to calculate the point, which we have found to be effective up to heights of about 15. By effective we mean that a point can usually be found in a matter of minutes on a 200MHz PC with a program written in UBASIC. If one is willing to wait several hours, the height barrier can obviously be increased. It should be noted that Silverman’s method works just as well on curves with no rational points of order 2, and so is more general.
## 2 Four-descent
The following set of formulae describe the algebra necessary to perform a general four-descent procedure for the elliptic curve
$$y^2=x^3+ax^2+bx$$
(1)
so the curve is assumed to have at least one point of order 2. This discussion is purely concerned with computing a rational point, and not with determining the rank, and so is not as general as Cremona’s mwrank program, described in .
We can assume without loss of generality that $`x=du^2/v^2`$ and $`y=duw/v^3`$, with $`d`$ squarefree and $`(d,v)=1`$, giving
$$d^2w^2=d^3u^4+d^2au^2v^2+dbv^4$$
(2)
which implies that $`d|b`$, so that $`b=de`$. There are thus a finite set of possible values for d, which we can work out easily from the factors of $`b`$, unless $`b`$ is very large and difficult to factor. Remember that $`d`$ can be negative.
Consider first the simpler quadratic
$$h^2=df^2+afg+eg^2$$
(3)
and look for a solution $`(h_0,f_0,g_0)`$ either by searching or by more advanced methods, and assume $`g_0\ne 0`$.
If we find a solution, we can parameterise as follows. Define $`x=f/g`$ and $`y=h/g`$, so that the equation is
$$y^2=dx^2+ax+e$$
then the line through $`(x_0,y_0)=(f_0/g_0,h_0/g_0)`$, with gradient $`m`$, meets the curve again where
$$x=\frac{a+dx_0+m(mx_0-2y_0)}{m^2-d}$$
(4)
Assuming $`m=p/q`$ is rational, we simplify this to
$$\frac{ag_0q^2+df_0q^2+p(f_0p-2h_0q)}{g_0(p^2-dq^2)}$$
(5)
To solve our original problem we look for values of the parameters giving $`f/g=u^2/v^2`$, which leads to the two quadratics
$$ku^2=g_0p^2-g_0dq^2$$
(6)
and
$$kv^2=f_0p^2-2h_0pq+(ag_0+df_0)q^2$$
(7)
with $`k`$ squarefree.
The possible values of $`k`$ are those which divide the resultant of the two quadratics, which means that $`k`$ must divide the determinant
$`\begin{array}{cccc}g_0& 0& -dg_0& 0\\ 0& g_0& 0& -dg_0\\ f_0& -2h_0& ag_0+df_0& 0\\ 0& f_0& -2h_0& ag_0+df_0\end{array}`$
which reduces to $`g_0^4(a^2-4b)`$. Let $`k_0`$ be a squarefree divisor (again possibly negative).
We search the first of these quadratics to find a solution $`(u_0,p_0,q_0)`$. If we find one, another simple line-quadratic intersection analysis characterises the solutions to the first quadratic as
$$p=2u_0k_0rs-p_0(g_0s^2+k_0r^2)$$
(8)
and
$$q=q_0(g_0s^2-k_0r^2)$$
(9)
Substitute these into $`k_0(7)`$, giving the quartic
$$z^2=z_1r^4+z_2r^3s+z_3r^2s^2+z_4rs^3+z_5s^4$$
(10)
with
$$z_1=k_0^3(ag_0q_0^2+df_0q_0^2+f_0p_0^2-2h_0p_0q_0)$$
(11)
$$z_2=4u_0k_0^3(h_0q_0-f_0p_0)$$
(12)
$$z_3=2k_0^2(f_0(2u_0^2k_0-dg_0q_0^2+g_0p_0^2)-ag_0^2q_0^2)$$
(13)
$$z_4=-4u_0g_0k_0^2(f_0p_0+h_0q_0)$$
(14)
$$z_5=g_0^2k_0(ag_0q_0^2+df_0q_0^2+f_0p_0^2+2h_0p_0q_0)$$
(15)
This quartic can then be searched for possible solutions. Having found one, the various transformations eventually lead back to a point on the curve.
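To make the bookkeeping explicit, the coefficients (11)–(15) can be assembled in a few lines; the following Python fragment is our own transcription of the formulas above (with the sign conventions as printed), not part of the original UBASIC program.

```python
# Assemble the quartic coefficients (11)-(15) from a solution (d, h0, f0, g0)
# of (3), a squarefree divisor k0 of the resultant, and a solution
# (u0, p0, q0) of (6). Plain integer arithmetic throughout.

def quartic_z(a, d, h0, f0, g0, k0, u0, p0, q0):
    z1 = k0**3 * (a*g0*q0**2 + d*f0*q0**2 + f0*p0**2 - 2*h0*p0*q0)
    z2 = 4*u0*k0**3 * (h0*q0 - f0*p0)
    z3 = 2*k0**2 * (f0*(2*u0**2*k0 - d*g0*q0**2 + g0*p0**2) - a*g0**2*q0**2)
    z4 = -4*u0*g0*k0**2 * (f0*p0 + h0*q0)
    z5 = g0**2 * k0 * (a*g0*q0**2 + d*f0*q0**2 + f0*p0**2 + 2*h0*p0*q0)
    return z1, z2, z3, z4, z5
```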
## 3 COMPUTING
The above formulae form the basis of a simple code written by the author in UBASIC. This system is fast, simple, free, and runs on a multitude of PC’s, old and new. The major advantage, however, is that large numbers can be dealt with by one single statement at the start of the program - the rest of the code is standard Basic. Constructing a UBASIC code also leads to a structure which can easily be translated into Fortran, C, C++, etc using the many multiple-precision packages available in these languages.
The basic structure of the algorithm is
find divisors d of b
for s1 = s1a to s1b
test if $`h^2=df^2+afg+eg^2`$ soluble with $`|f|+|g|=s1`$
if $`(d,h_0,f_0,g_0)`$ a solution then
find divisors k of $`g_0(a^2-4b)`$
for s2 =2 to s2b
test if $`ku^2=g_0p^2-g_0dq^2`$ with $`|p|+|q|=s2`$
if $`(k_0,u_0,p_0,q_0)`$ a solution, then
form quartic equation from equations (10) to (15)
test if quartic is soluble, and if it is
for s3=2 to s3b
test if quartic is square for $`|r|+|s|=s3`$
if it is, determine point on curve, and stop
next s3
next s2
next s1
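As an illustration of the innermost pattern above, here is a deliberately naive Python transcription of one shell of the search for eq.(3); it is our own sketch, not the author's UBASIC code, and integer arithmetic keeps arbitrarily large coefficients exact.

```python
# Test whether h^2 = d f^2 + a f g + e g^2 has an integer solution with
# |f| + |g| = s1 and g != 0; return (h0, f0, g0) or None.

from math import isqrt

def search_shell(d, a, e, s1):
    for f in range(-s1, s1 + 1):
        for sign in (1, -1):
            g = sign * (s1 - abs(f))
            if g == 0:
                continue
            rhs = d*f*f + a*f*g + e*g*g
            if rhs >= 0 and isqrt(rhs)**2 == rhs:
                return isqrt(rhs), f, g
    return None
```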
If the curve has a rational point of moderate height, the above search-based procedure works very well. We have found the method to be effective for points with height of up to about 15, if we select s2b=99 and s3b=199. It is impossible to predict in advance the best choice of s1a and s1b.
For larger heights, the quartic search needs a larger value of s3b and can take a considerable time. In many cases, the quartic is insoluble so searching is futile. Cremona describes how to test this, based on ideas from Birch and Swinnerton-Dyer , and we have implemented this test as a precursor to searching. This method is probably the most advanced mathematically in the whole procedure. It is also a vital time-saving procedure since the majority of quartics are not soluble.
Cremona also describes various methods for searching the quartic which reduce the time taken. We have chosen NOT to implement these, partly to keep the code reasonably understandable by amateurs, and partly because UBASIC has restrictions on the number of variables. These methods are only effective time-savers if s3b is large.
For investigating families of curves where $`a`$ and $`b`$ are functions of $`n`$, we found the simple search for $`(h_0,f_0,g_0)`$ could fail to find any solutions in the specified range. In such a case we can adapt the search as follows. If
$$h^2=df^2+afg+bg^2/d$$
then
$$d(2h)^2=(2df+ag)^2-(a^2-4b)g^2$$
and if we factor $`a^2-4b=\alpha \beta ^2`$ with $`\alpha `$ squarefree,
$$dH^2=F^2-\alpha G^2$$
(16)
with $`F=2df+ag,G=\beta g,H=2h`$. From a $`(F_0,G_0,H_0)`$ solution, we can recover integer solutions to the original equation as $`f_0=\beta F_0-aG_0,g_0=2dG_0,h_0=d\beta H_0`$.
In equation (6), it is clear that $`(\pm u_0,\pm p_0,\pm q_0)`$ satisfy the equation. We do not, however, need to consider all 8 possible combinations of sign. From (11) to (15), the sign of $`u_0`$ is irrelevant if we allow negative values of r or s. Similarly, $`-p_0`$ and $`-q_0`$ lead to the same quartic as $`p_0`$ and $`q_0`$, and the other two possible combinations give the same quartic. Thus there are only two essentially different quartics, and it is easy to show the relationship between them implies that they are both soluble or both insoluble. It is worthwhile to search both (if soluble), as the coefficients are different so the fixed search range might give a solution for one but not the other.
At several points in our code, we need to factor numbers to find divisors. This is done by an extremely simple-minded search procedure, without recourse to any modern factorisation techniques. So far, this has not had a severe effect on the performance of the code, but one could easily devise an elliptic curve where the factorisation time was dominant. The current code has the advantage that it is short.
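For definiteness, such a simple-minded routine might look as follows in Python (the original is UBASIC); only the squarefree divisors, positive and negative, are needed for the candidate values of d, k and k<sub>1</sub>.

```python
# Trial-division factorisation and enumeration of squarefree divisors.
# Naive on purpose, matching the simple-minded approach described above.

from math import isqrt

def prime_factors(n):
    n, ps, p = abs(n), set(), 2
    while p <= isqrt(n):
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return sorted(ps)

def squarefree_divisors(n):
    divs = [1]
    for p in prime_factors(n):
        divs += [d * p for d in divs]
    return sorted(d * s for d in divs for s in (1, -1))
```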
Finally, it should be noted that the method could easily find a torsion point of the curve, and not a point of infinite order. Since the latter is usually wanted, the code checks that the point found is not a torsion point from a list of x-values provided by the user.
## 4 Further Descent
As stated, the above method begins to become time-consuming at certain heights. In such cases, several workers have resorted to impressive further descents to determine rational points. A classic example is the work of Bremner and Cassels on the curve $`y^2=x^3+px`$, see . These methods seem to be very problem-dependent and to involve manipulations in algebraic number fields.
The author was interested in a general method of higher descent which does not involve anything more than rational arithmetic - especially for use by the many amateurs interested in Diophantine equations. The following method is based on a remarkably simple idea. The author cannot believe this has not been thought of before, but can find no direct reference to the idea.
The problem in the 4-descent for large heights is the determination of $`(r,s)`$ values giving a square quartic. Suppose
$$z_1r^4+z_2r^3s+z_3r^2s^2+z_4rs^3+z_5s^4=(u_1r^2+u_2rs+u_3s^2)(v_1r^2+v_2rs+v_3s^2)$$
(17)
with $`u_1,u_2,u_3,v_1,v_2,v_3`$ integers, then we can consider the following two quadratics
$$u_1r^2+u_2rs+u_3s^2=k_1m^2$$
(18)
$$v_1r^2+v_2rs+v_3s^2=k_1t^2$$
(19)
with $`k_1`$ squarefree. As before $`k_1`$ divides the resultant, which is
$$u_1^2v_3^2-u_1u_2v_2v_3-2u_1u_3v_1v_3+u_1u_3v_2^2+u_2^2v_1v_3-u_2u_3v_1v_2+u_3^2v_1^2$$
(20)
Suppose we find a solution $`(k_1,t_1,r_1,s_1)`$ to the second quadratic, then we can parameterise as
$$r=i^2k_1r_1-2ijk_1t_1+j^2(r_1v_1+s_1v_2)$$
(21)
$$s=s_1(i^2k_1-j^2v_1)$$
(22)
which, when substituted into $`k_1(18)`$, requires the following quartic to be square
$$c_1i^4+c_2i^3j+c_3i^2j^2+c_4ij^3+c_5j^4$$
(23)
with
$$c_1=k_1^3(r_1^2u_1+r_1s_1u_2+s_1^2u_3)$$
(24)
$$c_2=-2k_1^3t_1(2r_1u_1+s_1u_2)$$
(25)
$$c_3=k_1^2(4k_1t_1^2u_1+2r_1^2u_1v_1+2r_1s_1u_1v_2+s_1^2(u_2v_2-2u_3v_1))$$
(26)
$$c_4=-2k_1^2t_1(2r_1u_1v_1+s_1(2u_1v_2-u_2v_1))$$
(27)
$$c_5=k_1(r_1^2u_1v_1^2+r_1s_1v_1(2u_1v_2-u_2v_1)+s_1^2(u_1v_2^2-v_1(u_2v_2-u_3v_1)))$$
(28)
This quartic can be tested for solubility and then searched.
## 5 Computing the 8-descent
The basic structure of the algorithm is obviously very similar to the 4-descent structure.
find divisors d of b
for s1 = s1a to s1b
test if $`h^2=df^2+afg+eg^2`$ soluble with $`|f|+|g|=s1`$
if $`(d,h_0,f_0,g_0)`$ a solution then
find divisors k of $`g_0(a^2-4b)`$
for s2 = 2 to s2b
test if $`ku^2=g_0p^2-g_0dq^2`$ with $`|p|+|q|=s2`$
if $`(k_0,u_0,p_0,q_0)`$ a solution, then
form quartic equation from equations (10) to (15)
test if quartic soluble, and if so
test if quartic can be factored into 2 integer quadratics
if so, find possible values of $`k_1`$
for s3=2 to s3b
test if $`k_1t^2=v_1r^2+v_2rs+v_3s^2`$ with $`|r|+|s|=s3`$
if $`(k_1,t_1,r_1,s_1)`$ a solution, form quartic (23)
test if quartic soluble, and if it is
for s4=2 to s4b
test if $`c_1i^4+c_2i^3j+c_3i^2j^2+c_4ij^3+c_5j^4`$ square
if it is, use various transformations to find point and stop
next s4
next s3
next s2
next s1
The factorisation of the quartic is accomplished by considering the equations:
$`\begin{array}{ccc}z_1& =& u_1v_1\\ z_5& =& u_3v_3\\ z_2& =& u_1v_2+v_1u_2\\ z_4& =& u_3v_2+v_3u_2\\ z_3& =& u_1v_3+u_2v_2+u_3v_1\end{array}`$
We thus have to factorise $`z_1`$ and $`z_5`$. For each possible splitting, we solve the third and fourth equations for $`u_2`$ and $`v_2`$. If the solutions are integers we test whether the 6 values satisfy the last equation. We found that quadratics which themselves split into 2 linear factors lead to torsion points, so we test whether the quadratics have rational roots, and reject those which do.
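A possible Python transcription of this splitting search (our own, and intentionally naive about enumerating the divisors of z<sub>1</sub> and z<sub>5</sub>) is the following.

```python
# Try to factor z1 r^4 + ... + z5 s^4 into two integer quadratics, as in (17):
# for each splitting z1 = u1*v1 and z5 = u3*v3, solve the two linear
# equations for (v2, u2) by Cramer's rule and check the middle coefficient.

def divisor_pairs(n):
    pairs = []
    for u in range(1, abs(n) + 1):
        if n % u == 0:
            pairs += [(u, n // u), (-u, -(n // u))]
    return pairs

def factor_quartic(z1, z2, z3, z4, z5):
    for u1, v1 in divisor_pairs(z1):
        for u3, v3 in divisor_pairs(z5):
            det = u1 * v3 - u3 * v1
            if det == 0:
                continue
            v2_num = z2 * v3 - z4 * v1       # z2 = u1*v2 + v1*u2
            u2_num = z4 * u1 - z2 * u3       # z4 = u3*v2 + v3*u2
            if v2_num % det or u2_num % det:
                continue
            v2, u2 = v2_num // det, u2_num // det
            if u1*v3 + u2*v2 + u3*v1 == z3:
                return (u1, u2, u3), (v1, v2, v3)
    return None
```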
## 6 Numerical Examples
In this section we give some examples of the use of the 8-descent method, with some specimen timings on either a 200MHz or 300MHz PC. It is clear that timings are dependent on the search parameters, so we specify the values of (s2b,s3b,s4b). s1a is set to 2 and we search until a solution is found.
(a) Congruent numbers N are integers which can be the area of a rational right-angled triangle. Finding the sides of such a triangle is equivalent to finding a point of infinite order on the elliptic curve $`y^2=x^3N^2x`$.
One of the most famous of these numbers is $`N=157`$, since it forms the basis of an impressive diagram in Chapter 1 of Koblitz’s book , where the sides involve numbers with 20-30 digits. An L-series calculation gives the height of a point as $`27.3`$. Running the program with s2b=s3b=99 and s4b=199 finds the point with x-coord
$$\frac{\mathrm{1661\hspace{0.17em}3623\hspace{0.17em}1668\hspace{0.17em}1852\hspace{0.17em}6754\hspace{0.17em}0804}}{\mathrm{28\hspace{0.17em}2563\hspace{0.17em}0694\hspace{0.17em}2511\hspace{0.17em}4585\hspace{0.17em}8025}}$$
in 18.8 secs (200MHz). With s4b=99 it takes only 7.0 secs, while for s4b=499 it takes 31.5 secs. But, with s4b=599 the time goes down to 28.3 secs, and if s4b=699 the time is only 9.0 secs. The variation is due to the number of quartics which need to be searched. Up to 499 it takes 22 quartics, but 599 needs only 4, while 699 finds a point on the first. The important point is that it takes less than 1 minute to find such a large point. Even an ancient 80387 machine finds this point in less than 5 minutes.
The author has used this technique in a search for actual solutions to the congruent number problem for $`1\le N\le 1999`$. The current method has led to the completion of a table of solutions for $`[1,499]`$. The largest height encountered was $`51.15`$ for $`N=367`$. Searching with s2b=s3b=99 and s4b=9999, the following solution was found on a 300MHz machine in 17595 secs.
$$x=\frac{367(\mathrm{4\hspace{0.17em}9695\hspace{0.17em}3629\hspace{0.17em}6085\hspace{0.17em}1360\hspace{0.17em}8777})^2}{(\mathrm{163\hspace{0.17em}8216\hspace{0.17em}8821\hspace{0.17em}6485\hspace{0.17em}0643\hspace{0.17em}1464})^2}$$
It is perfectly possible that the modular form Heegner-point method of Elkies could find this point much faster, but this method is much more difficult for the non-expert to understand. Elkies’ method also does not generalise to other families of curves.
Since $`x^3N^2x=x(xN)(x+N)`$, we can change the origin to produce the two equivalent elliptic curves $`y^2=x^3\pm 3Nx^2+2N^2x`$. They may be equivalent mathematically, but their performance computationally is not the same. The author has found several solutions to the congruent number problem from these curves having been unsuccessful with the original curve. This also happens in other families of elliptic curves with 3 rational points of order 2.
(b) The paper of Bremner et al mentioned in the introduction has a representation of $`N=564`$ in its abstract. The L-series computation predicts a point with height $`38.01`$. A 300MHz PC found a solution in 4497 secs with s2b=s3b=99 and s4b=9999. This solution leads to a representation with
x = 32736 87951 95203 44322 22320 98479 60433 77911 47254 01060
y = 53 58267 18225 66098 96868 10234 90522 46809 90105 26717
z = - 1158 25525 22781 02629 66659 36639 59067 36616 11576 01937
(c) The paper of Bremner and Cassels describes the finding of a point on $`y^2=x^3+877x`$. The L-series predicts a height of $`48.0`$. A 300MHz PC finds the following x-coordinate in 51874 seconds with s2b=s3b=99 and s4b=12999.
$$x=\frac{877(\mathrm{7884\hspace{0.17em}1535\hspace{0.17em}8606\hspace{0.17em}8390\hspace{0.17em}0210})^2}{(\mathrm{6\hspace{0.17em}1277\hspace{0.17em}6083\hspace{0.17em}1879\hspace{0.17em}4736\hspace{0.17em}8101})^2}$$
(d) The final example is included for historical reasons, since it was the first time that the author tried the idea of factorising the quartic. Since this was the start of the current research, the computer programs used in the previous examples were clearly not available. The computations were done by simple searches and algebraic manipulation with Derive.
The problem comes from the diophantine problem of finding an integer triangle with base/altitude = n. For $`n=79`$, we consider the equation $`y^2=x^3+6243x^2+x`$. The L-series computations suggested rank 1, but with a height of over 40 for a rational point. The 2-isogenous curve is $`y^2=x^3-12486x^2+38975045x`$, which was indicated to have a point with height about 20.
The author selected to try $`d=5`$, which meant looking first for solutions of $`h^2=5f^2-12486fg+7795009g^2`$. A very simple search program quickly finds $`f=93,g=1,h=2584`$, which means (6) and (7) are
$$ku^2=p^2-5q^2$$
$$kv^2=93p^2-5168pq-12021q^2$$
where $`k=\pm 1`$. We selected $`k=1`$.
It is possible to parameterise the first as $`p=r^2+5s^2`$,$`q=2rs`$, which gives
$$v^2=93r^4-10336r^3s-47154r^2s^2-51680rs^3+2325s^4$$
which Derive fairly easily factors as
$$v^2=(3r^2-340rs-775s^2)(31r^2+68rs-3s^2)$$
The two quadratic factors form the basis of (18) and (19), and we find that $`k_1|158`$. Picking $`k_1=1`$, we found several $`(r,s)`$ solutions to $`3r^2-340rs-775s^2=t^2`$, which lead to parameterisations for r and s, but to insoluble quartics. We then tried $`k_1=158`$, and found the solution $`r=9,s=-1,t=4`$, which gives the parameterisation
$$\frac{r}{s}=-\frac{1422i^2+1264ij+367j^2}{158i^2-3j^2}$$
and hence to the quartic
$$316^2(74892i^4+154840i^3j+123789i^2j^2+45916ij^3+6725j^4)$$
which has to be square.
This quartic was everywhere soluble, so a search was started which quickly found the solution $`i=151,j=-158`$, which leads back to a point on the 2-isogenous curve with
$$x=\frac{283684993467631951390020}{46898490944992340041}$$
and thus to a point on the original curve with
$$x=\frac{265479261289194419968505186711433025}{170541875947725676769862564358062336}$$
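As an integer sanity check on the signs reconstructed in this example, one can verify directly that the quartic displayed above is a perfect square at $`(i,j)=(151,-158)`$:

```python
# The inner quartic from the n = 79 example; at (151, -158) it equals
# 2009521715776 = 1417576**2, so the full expression, including the
# factor 316**2, is a perfect square as required.

from math import isqrt

def Q(i, j):
    return (74892*i**4 + 154840*i**3*j + 123789*i**2*j**2
            + 45916*i*j**3 + 6725*j**4)

q = Q(151, -158)
assert q > 0 and isqrt(q)**2 == q
print(isqrt(q))        # 1417576
```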
For interested readers, this point leads to the triangle with sides
$`\begin{array}{cc}A=& 14658699718477823183532197194400698788657474856586410826213286741631164960\\ B=& 8927676534887485887603362942709577507378277308118665999941086255389471249\\ C=& 5735953691823056195537866267793192926159738767971279754707312477117108209\end{array}`$
## 7 Further Work
The reasonable level of success of this method for several families of elliptic curves provided the impetus to write this report, but there is still much work to be done.
1. Translate the UBASIC code into a compiled high-level language so that the method can be run on non-PC machines, especially UNIX workstations.
2. Try further descents on the last quartic produced. The author has experimented with this idea, but initial results are disappointing in that the method seems to be finding multiples of a generator rather than the generator itself. Since the multiples have much larger height there is currently no benefit to try these extra descents.
3. Try to underpin the method with some theory. The method was developed by trying an idea, seeing it work, and improving it. The method does not always work, but does provide another tool in the investigator’s toolbox.
## 1 Introduction
According to hierarchical cosmological scenarios, galaxies form through merging of smaller entities. At least the dark haloes of dissipationless matter hierarchically merge, and it is expected that some of the visible galaxies also interact and exchange mass, while spiraling in a common halo. If major mergers lead to the formation of ellipticals, and leave vestiges such as shells, ripples and loops around present-day elliptical galaxies (Schweizer & Seitzer 1988), signatures of accretion or merger are less easy to see in spiral galaxies. Yet, mass and gas accretion is required in spiral galaxies for several reasons:
* Metallicity distribution in the disk (the G-dwarf problem, for instance, requires gas infall)
* Spiral formation and maintenance: episodes of spiral density waves heat the disk, and accreted fresh gas is required to trigger new instabilities
* Renewal of bars and nuclear bars, that drive mass towards the center, and self-destroy
* Reforming the thin disk after minor mergers: galaxies such as the Milky Way re-form a thin disk, while a thick one has been heated by an interacting event
In the following, the main evidence for mass accretion in spiral galaxies will be reviewed, including: the presence of thick disks, counter-rotating components, the ubiquity of warps, and the existence of polar rings.
## 2 Galaxy Interactions and Thickness of Stellar Disks
In hierarchical cosmologies, it is easy to estimate analytically the probability of formation of a dark halo of mass $`M`$ at time $`t`$ from the Press-Schechter theory (1974), revised by Bond et al (1991): a gaussian distribution of fluctuations is assumed, and structures are followed through random walks of the linear overdensity with respect to smoothing scale.
From such analytical formulations of merging histories (e.g. Lacey & Cole 1993, 1994), it is possible to relate the dark haloes merger rate to the parameters of the universe (average density, cosmological constant). The merging rates for visible galaxies should follow, although the link is presently not well known (Carlberg 1991, Toth & Ostriker 1992).
For the standard CDM model ($`\mathrm{\Omega }`$=1) for instance, 80% of haloes have accreted at least 10% of their mass in the last 5$`\times `$10<sup>9</sup> yrs. To reduce the merging rate today, the solution is to consider low $`\mathrm{\Omega }`$ models, for which freezing of halo formation occurs for z $`<`$ 1/$`\mathrm{\Omega }`$. After this epoch, only very few haloes form, and the merger rate of visible galaxies inside haloes is also expected to be very low. When approximations are taken for the merger conditions of the galaxies, such as a threshold in their relative velocity v $`<v_{mg}\sim v_{escape}`$ (Carlberg 1990, 91), the merger rate can be written as a power law with redshift, $`dn(\mathrm{mergers})/dt\propto (1+z)^m`$, with the power-law index $`m`$ increasing with $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$ (typically as $`m\propto \mathrm{\Omega }^{0.42}(1-\mathrm{\Lambda })^{-0.11}`$).
Observations support a large value of the exponent $`m`$. Statistics of close galaxy pairs from faint-galaxy redshift surveys have shown that the merging rate increases as $`(1+z)^m`$ with $`m=4\pm 1.5`$ (e.g. Yee & Ellingson 1995). Lavery et al (1996) claim that collisional ring galaxies (Cartwheel-type) are also rapidly evolving, with $`m=45`$, although statistics are still insufficient. Many other surveys, including IRAS faint sources, or quasars, have also revealed a high power-law (Carlberg 1991, Carlberg et al 1994).
The fragility of disks with respect to interactions can be used to constrain the merging rate. During an interaction, stellar disks can thicken or even be destroyed (e.g. Gunn 1987). Through the disk thickness of the Milky Way, Toth & Ostriker (1992) constrain the frequency of merging and the value of the cosmological parameters: from analytical and local estimations of the heating rate, they claim that the Milky Way disk has accreted less than 4% of its mass within the last 5 Gyrs. But these local calculations are only rough approximations. The first numerical simulations of the phenomenon of disk thickening through interactions (Quinn et al 1993, Walker et al 1996) appear to confirm the analytical results however: they show that the stellar disk thickening can be large and sudden.
Recently, Huang & Carlberg (1997) and Velazquez & White (1999) reconsider the problem, through numerical simulations, and find on the contrary that the heating of disks has been overestimated. In particular, prograde satellites heat the disks, while retrograde ones produce only a coherent tilt. If the halo is rigid, the thickening of the disk is increased by a factor 1.5 to 2: massive live bulges can therefore help to keep disks thin, by absorbing part of the heating. Also, there are many parameters to explore in simulations: the most important could be the compactness of the interacting companion. If the perturber has a compact core, the heating effect is important, while a more diffuse companion is destroyed by tidal shear before damaging the primary disk.
It should be however remarked that gas hydrodynamics and star formation processes can also alter significantly the processes, since the thin disk can be reformed continuously through gas infall. It is interesting to check on presently interacting galaxies whether the heating or thickening of disks is measurable. In normal galaxies, the ratio of radial scale-length $`h`$ to scale-height $`z_0`$ is about constant and equal to 5 (Bottema 1993); the ratio only goes up for dwarf galaxies. Now, in a sample of edge-on interacting galaxies this ratio was found to be 1.5 to 2 times lower than normal (Reshetnikov & Combes 1997, Schwarzkopf & Dettmar 1999), as shown in fig 1. This is surprising, when taking into account that the visible ”interacting phase” is only transient, and on a Gyr time-scale, interacting galaxies will return to the ”normal phase”, with again a high $`h/z_0`$ ratio, or thin disk.
A possible interpretation of this result is that the interacting galaxies are warped, since the latter is difficult to distinguish from a thickening at any viewing angle; yet the damping of the warp will thicken the disk in any case. If the thickening is in fact transient, this indicates that the present disk galaxies come from merging of smaller units, that they acquire mass continuously: through gas accretion and subsequent star formation, disks recover their small thickness after galaxy interactions, or in other words, the disk of present day spirals has been assembled at low redshift (Mo et al 1998).
## 3 Counter-Rotating Components
The phenomenon of gas disks counter-rotating with the stellar component is now well known in ellipticals, where the ionised gas disks (or dust lanes) are settled in principal planes (e.g. Bertola et al 1990). Ellipticals were also first discovered with kinematically decoupled stellar cores, which are expected in merger remnants, such as NGC 7252 (e.g. Barnes & Hernquist 1992).
Counter-rotation has also been observed in many spirals during this last decade, although it is more difficult than in ellipticals, since the secondary CR component is not dominating and the primary component is strongly rotating. All possibilities have been observed: either two stellar disks counter-rotating with respect to each other, or the gas counter to the stars, or even gas versus gas, but not at the same radii in the galaxies (see the reviews of Galletta 1996, Bertola & Corsini 1998). There are presently about 60 systems of counter-rotation recorded in the literature.
In general the counter-rotating component is not dominant, but there is a very special case, NGC 4550, where two almost identical CR stellar disks are observed (Rubin et al 1992). This case is a puzzle, since the second disk cannot have formed through subsequent accretion of gas: the two stellar disks have the same age. The only solution is through a merger of pre-existing spiral galaxies. While major mergers usually give an elliptical galaxy as a remnant, this is not the case when the directions of their angular momenta are aligned. In these rare orientation cases, it is possible to merge two spiral galaxies into one, and reproduce the case of NGC 4550, when the momenta are opposite (Pfenniger 1998, Puerari & Pfenniger 1999).
Can one consider other explanations than mergers for CR components? It is possible to artificially simulate counter-rotation in a certain region of a galaxy, through perpendicular streaming motions due to a bar potential, for instance; but when the 2D velocity field is obtained, confusion is not possible. There are also self-consistent models of barred galaxies including retrograde orbits (Wozniak & Pfenniger 1997), but the origin of the retrograde stars is still gas accretion. A slow bar destruction can be a rare case where stars in box-orbits in a barred galaxy are scattered equally in two CR families of tube orbits, resulting in two opposite streams when the bar has disappeared (Evans & Collett 1994). But the process of bar destruction is in any case related to galaxy interactions and gas accretion.
### 3.1 Stability
How long can such counter-rotating systems live? Does this phenomenon favor gas fueling to the nucleus?
There exists a two-stream instability in flat disks, similar to that in CR plasmas (Lovelace et al 97). If a mode exists in a given disk, the energies of the mode in the two streams are of opposite signs: the negative-E mode can grow by feeding energy into the positive-E mode, which produces the instability. There also exist many bending instabilities (Sellwood & Merritt 1994).
If there is only a small fraction of CR stars, these have, on the contrary, a stabilising influence with respect to bar formation ($`m=2`$); in a certain sense, they are equivalent to a system with more velocity dispersion (Kalnajs 1977).
But in the case of comparable quantities of CR stars, a one-arm instability is triggered. This is confirmed through N-body simulations: a quasi-stationary one-arm structure forms, lives for 1–5 periods (Comins et al 1997), first leading, then trailing, and disappears. Fig 2 shows such a simulation, where the common $`m=1`$ pattern is leading for the main direct component, and trailing for the secondary retrograde one.
### 3.2 Counter-rotating gas
Accretion of CR gas in a lenticular galaxy initially deprived of gas is a way to form two counter-rotating stellar disks, once star formation has occurred. Thakar & Ryden (1996) have shown that both episodic and continuous gas infall are able to form a stable CR disk, without de-stabilising the pre-existing disk significantly. The conditions are that the gas must be extended in phase space and not clumpy, which would heat the primary disk too much. For example, the merger of a gas-rich dense dwarf will have too large a heating effect, unless the mass ratio is quite small, and in such cases, only a small CR disk is produced. However, the final thickness of the disk depends drastically on the gas code used, thicker for sticky particles, and much thinner with SPH (Thakar & Ryden 1998), as well as the settling time-scales.
When gas is present in the initial disk, the presence of two CR streams of gas in the same plane will be very transient: strong shocks will produce heating and rapid dissipation will drive the gas quickly to the center (Kuznetsov et al 1999). This could be a very efficient way to fuel active nuclei. However, the gas could also infall in an inclined plane, or at different radii than those of the pre-existing gas, which can explain the observations of two counter-rotating gas systems.
Polar rings (objects similar to the prototype NGC 4650A) are such cases, where gas settles in a stable plane almost perpendicular to the primary galaxy. Polar-ring galaxies are quite rare in the nearby universe: Whitmore et al (1990) find that about 0.5% of all nearby lenticular galaxies are observed with a polar ring. But since there are projection effects and different selection biases that prevent to see them all, they estimate to about 5% the actual present frequency of PRGs. An estimation of their frequency as a function of redshift will be a precious tool to quantify the merging rate evolution.
## 4 Warps as clues of matter accretion
The majority of spirals are warped in their neutral hydrogen (HI) component (e.g. Sancisi 1976, Bosma 1981, Briggs 1990). This is a long-standing puzzle, since if the gas is considered as test-particles in the halo potential, it should differentially precess, and with a time-scale much shorter than the Hubble time the disk should end up with a corrugated shape and thicken.
Many theories have been proposed to solve the problem. Normal modes of the disk have been ruled out, since they are quickly damped (Hunter & Toomre 1969), but normal modes of the disk in the potential if a mis-aligned halo have been a possibility for a while (Toomre 1983, Sparke & Casertano 1988), until it was realized that they are quickly damped through dynamical friction (Nelson & Tremaine 1995).
The triggering of warps by tidal interaction with companions has been ruled out in the past, since the best examples of warped galaxies appeared isolated. However, this could be changing now that smallest companions can be found, or vestiges of a past merger. This is the case of the warp-prototype NGC 5907, where a conspicuous tidal loop has been observed by Shang et al (1998). It is obvious that this galaxy has accreted a small system in the recent past, and it has also a dwarf companion nearby (see Fig 3). Regular and symmetric warps are those that have already relaxed for a while, and this could explain the apparent lack of correlation with companions.
Finally, the proposition that gas infall could maintain warps around galaxies is easily justified in the framework of hierachical cosmologies (cf Ostriker & Binney 1989; Binney 1992). Gas infalls with slewed angular momentum with respect to the main disk. This accretion will re-align the whole system along a tilted axis. The transient state is the warped state. This hypothesis has been recently supported through numerical simulations by Jiang & Binney (1999). They show that the inner halo and disc tilts as one unit. The halo tilts first in the outer parts, and the tilt propagates then inwards; the disk is entrained and aligns with the halo, it plays the role of a tracer of its orientation. The time-scale of this phenomenon is about 1 Gyr to re-align by 7.
## 5 Conclusions
In summary, there are many indications that even spiral galaxies have experienced a large number of galaxy interactions in the past, and that their formation also proceeds through hierarchical merging and accretion: presence of thick and thin disks, growing number of observed counter-rotating disks, frequency of polar-ring galaxies, ubiquity of HI warps. Both present explanations of warps, either triggering by tidal interactions or maintenance through gas infall, are compatible with this paradigm.
## 1 Local $`U(N)`$: Yang-Mills integrals
In this part we discuss some aspects of results obtained in collaboration with W. Krauth and H. Nicolai and published in ,,. Consider $`D`$-dimensional pure $`SU\left(N\right)`$ Yang-Mills field theory and, inspired by the principle (1), reduce it by brute force to zero dimensions. The continuum path integral, involving traceless hermitian gauge connections $`X_\mu `$, becomes an ordinary matrix integral:
$$𝒵_{D,N}=\int \prod _{A=1}^{N^2-1}\prod _{\mu =1}^{D}\frac{dX_\mu ^A}{\sqrt{2\pi }}\mathrm{exp}\left[\frac{1}{2}\mathrm{Tr}[X_\mu ,X_\nu ][X_\mu ,X_\nu ]\right].$$
(2)
Note that gauge fixing is no longer required here, since the overcounting of gauge-equivalent configurations involves merely a factor of the compact, finite volume of the gauge group: space-time has become a point (or, more precisely, an infinitesimal torus, since the “point” still keeps a sense of the $`D`$ directions). Now, as was explained in , the integral eq.(2) still “knows” something about $`D`$-dimensional space-time. Indeed, shifting
$$X_\mu \to P_\mu +X_\mu $$
(3)
by diagonal matrices $`P_\mu =`$diag$`(p_\mu ^1,\mathrm{\dots },p_\mu ^N)`$ we formally recover Feynman rules which look like the ordinary ones except that the momentum integrations are replaced by sums over discretized momenta $`p_\mu ^i-p_\mu ^j`$. As $`N\to \mathrm{\infty }`$ one might hope that the sums turn back into loop integrals, motivating the correspondence (1). Now in a somewhat complicated quenching and gauge fixing procedure was introduced in order to ensure the recovery of the field theory. Indeed it would seem at first sight that the integral eq.(2) is meaningless without the procedure of , since there are unconstrained flat directions in integration space, due to mutually commuting matrices. However, the Monte Carlo results of suggest
Proposition 1a: The Yang-Mills integrals $`𝒵_{D,N}`$ exist iff $`N>\frac{D}{D-2}`$.
It would be quite important to find methods enabling one to rigorously prove this statement, or even calculate the partition sums $`𝒵_{D,N}`$. Some important analytic evidence comes from the perturbative calculations of . For $`SU\left(2\right)`$, a proof of the proposition, as well as an analytic expression for $`𝒵_{D,2}`$, is known.
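Pending such proofs, the convergence properties can at least be probed numerically. The following is a minimal, illustrative Metropolis sketch for the bosonic integral eq.(2) — not the importance-sampling setup of the cited Monte Carlo work — run for $`D=4`$, $`N=3`$, a case where $`𝒵`$ exists by Proposition 1a; the step size and sweep counts are ad hoc choices.

```python
import numpy as np

# Minimal Metropolis sampler for the bosonic Yang-Mills integral eq.(2),
# run here for D=4, N=3 (which exists by Proposition 1a: N > D/(D-2) = 2).
# All algorithmic parameters are ad hoc illustrative choices.

rng = np.random.default_rng(0)
D, N = 4, 3

def random_traceless_hermitian(scale=1.0):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2
    return scale * (H - (np.trace(H).real / N) * np.eye(N))

def action(X):
    # S = -(1/2) sum_{mu,nu} Tr [X_mu, X_nu]^2  (>= 0 for hermitian X),
    # so exp(-S) reproduces the integrand of eq.(2)
    S = 0.0
    for mu in range(D):
        for nu in range(D):
            C = X[mu] @ X[nu] - X[nu] @ X[mu]
            S -= 0.5 * np.trace(C @ C).real
    return S

X = [random_traceless_hermitian() for _ in range(D)]
S = action(X)
samples = []
for sweep in range(20000):
    mu = int(rng.integers(D))
    proposal = X.copy()
    proposal[mu] = X[mu] + random_traceless_hermitian(scale=0.1)
    S_new = action(proposal)
    if rng.random() < np.exp(min(0.0, S - S_new)):
        X, S = proposal, S_new
    if sweep >= 5000:                     # crude thermalization cut
        samples.append(np.trace(X[0] @ X[0]).real / N)

print("<(1/N) Tr X_1^2> ~", np.mean(samples))
```

For this $`(D,N)`$ Proposition 2a below predicts that the $`k=1`$ moment converges while higher moments do not, so the printed average should stabilize, whereas accumulating $`\mathrm{Tr}X_1^4`$ instead should exhibit the expected non-convergence.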
The matrix integrals eq.(2) have beautiful supersymmetric extensions in dimensions $`D=4,6,10`$. These read
$$𝒵_{D,N}^𝒩:=\int \prod _{A=1}^{N^2-1}\left(\prod _{\mu =1}^{D}\frac{dX_\mu ^A}{\sqrt{2\pi }}\right)\left(\prod _{\alpha =1}^{𝒩}d\mathrm{\Psi }_\alpha ^A\right)\mathrm{exp}\left[\frac{1}{2}\mathrm{Tr}[X_\mu ,X_\nu ][X_\mu ,X_\nu ]+\mathrm{Tr}\mathrm{\Psi }_\alpha [\mathrm{\Gamma }_{\alpha \beta }^\mu X_\mu ,\mathrm{\Psi }_\beta ]\right].$$
(4)
where we have supersymmetrically added $`𝒩=2(D-2)`$ hermitian fermionic matrices $`\mathrm{\Psi }_\alpha `$ to the models. The $`D=10`$ model corresponds to the dimensional reduction of the maximally supersymmetric conformal $`D=4,𝒩=4`$ Yang-Mills field theory to zero dimensions. It is also the crucial ingredient in the IKKT model for IIB superstrings , which however, instead of taking the large $`N`$ limit, sums $`𝒵_{10,N}^{16}`$ over all values of $`N`$. Following the $`SU\left(2\right)`$ calculations of , the perturbative arguments of , the arguments of , the calculations of , and our Monte Carlo work, we are led to
Proposition 1b: The susy Yang-Mills integrals $`𝒵_{4,N}^4,𝒵_{6,N}^8,𝒵_{10,N}^{16}`$ exist iff $`N\ge 2`$.
The analytic values of these integrals are believed to be known, and a rigorous mathematical proof would be welcome.
It is interesting to understand the similarities and differences of these little studied “new” matrix models eqs.(2),(4), whose existence had been overlooked until recently, in relation to the conventional “old” matrix models of Wigner type. A crucial quantity in the old matrix models is the distribution of eigenvalues of the random matrices. An interesting novel feature of the new matrix models is the fact that, at finite $`N`$, only a finite number of one-matrix moments exist. The numerical results agree with perturbative power-counting arguments, and one is led, for the bosonic models eq.(2), to
Proposition 2a: $`\langle \frac{1}{N}\mathrm{Tr}X_1^{2k}\rangle <\mathrm{\infty }`$ iff $`k<N(D-2)-\frac{3}{2}D+2`$,
while in the supersymmetric cases $`D=4,6,10`$ eq.(4) one has
Proposition 2b: $`\langle \frac{1}{N}\mathrm{Tr}X_1^{2k}\rangle <\mathrm{\infty }`$ iff $`k<D-3`$.
Once again, except for $`SU\left(2\right)`$, rigorous proofs of these conjectures are missing. These findings indicate that in the new matrix models the density of eigenvalues falls off much more slowly (power-like) than in the old ones (exponentially). As $`N\to \mathrm{\infty }`$ the bosonic densities once again behave rather conservatively (infinitely many moments exist), while for the susy densities the behavior indicated in Proposition 2b is independent of $`N`$.
A much more difficult question is whether these models might lead to a “self-quenching” effect where a background $`P_\mu `$ (in eq.(3)), bearing some resemblance to real Yang-Mills theory, is dynamically generated as $`N\mathrm{}`$.
The above Yang-Mills integrals have many applications even at finite $`N`$ (for a recent unexpected one see ); however, here we would like to stress that they constitute an ideal laboratory for developing new large $`N`$ techniques aimed at making progress with ‘t Hooft’s large $`N`$ QCD .
## 2 Global $`U(N)`$: Master partitions
The problem of finding the $`N=\mathrm{}`$ solution to matrix field theories has not even been solved in the presumably simpler case of models with a global $`U\left(N\right)`$ symmetry. The main obstacle has been that no systematic procedure was known to reduce the local number of degrees of freedom from $`𝒪\left(N^2\right)`$ to $`𝒪\left(N\right)`$. In we outlined a general approach for achieving such a reduction for any field theory with a global matrix symmetry. Let us sketch the idea in the specific example of an interacting $`D=2`$ hermitian scalar field theory. It is convenient to put the theory on a lattice:
$$𝒵=\int \prod _x𝒟M\left(x\right)e^{-𝒮},$$
$$𝒮=N\mathrm{Tr}\sum _x\left[\frac{1}{2}M\left(x\right)^2+\frac{g}{4}M\left(x\right)^4-\frac{\beta }{2}\sum _{\mu =1,2}\left[M\left(x\right)M\left(x+\widehat{\mu }\right)+M\left(x\right)M\left(x-\widehat{\mu }\right)\right]\right],$$
(5)
where the field variables are $`N\times N`$ hermitian matrices $`M\left(x\right)`$ defined on the square lattice sites $`x`$ and $`\widehat{\mu }`$ denotes the unit vector in the $`\mu `$-direction. The measure is the usual flat measure on hermitian matrices. The first step consists in applying the reduction principle (1). Naively reducing the system as in the previous section down to a single point results in an ordinary one-matrix model where the information on the 2$`D`$ lattice is lost. A more careful reduction has to hide the propagation on the lattice in group space; here we will use the beautiful procedure of “twisting”, see and references therein. Using the $`N\times N`$ Weyl-‘t Hooft matrices
$$P=\left(\begin{array}{cccccc}0& 1& & & & \\ & 0& 1& & & \\ & & \mathrm{}& \mathrm{}& & \\ & & & & 0& 1\\ 1& & & & & 0\end{array}\right),Q=\left(\begin{array}{cccccc}1& & & & & \\ & \omega & & & & \\ & & \mathrm{}& & & \\ & & & & \omega ^{N-2}& \\ & & & & & \omega ^{N-1}\end{array}\right),$$
(6)
where $`\omega =\mathrm{exp}\frac{2\pi i}{N}`$ and $`PQ=\omega QP`$, one can show by a Fourier transform in matrix index space that the one-matrix integral
$$Z=\int 𝒟M\mathrm{exp}N\mathrm{Tr}\left[-\frac{1}{2}M^2-\frac{g}{4}M^4+\beta \left(MPMP^{\dagger }+MQMQ^{\dagger }\right)\right],$$
(7)
has the same vacuum energy as the path integral eq.(5).
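A quick numerical sanity check of the twisting construction — assuming nothing beyond eq.(6) itself, and needing only numpy — is to build $`P`$ and $`Q`$ explicitly and verify the clock-and-shift algebra that encodes the two lattice directions:

```python
import numpy as np

# Explicit check of the Weyl-'t Hooft algebra eq.(6): PQ = omega * QP,
# P^N = Q^N = 1, which is what lets eq.(7) encode the two lattice directions.

N = 5
omega = np.exp(2j * np.pi / N)
P = np.roll(np.eye(N), -1, axis=0)       # shift matrix
Q = np.diag(omega ** np.arange(N))       # clock matrix

assert np.allclose(P @ Q, omega * Q @ P)
assert np.allclose(np.linalg.matrix_power(P, N), np.eye(N))
assert np.allclose(np.linalg.matrix_power(Q, N), np.eye(N))

# Twisted "hopping" term of eq.(7) for a random hermitian M:
A = np.random.randn(N, N) + 1j * np.random.randn(N, N)
M = (A + A.conj().T) / 2
hop = np.trace(M @ P @ M @ P.conj().T + M @ Q @ M @ Q.conj().T).real
print("Tr(M P M P^+ + M Q M Q^+) =", hop)
```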
As a second step we need to reduce the number of variables from $`N^2`$ to $`N`$. The brute force approach would be to diagonalize the matrix $`M`$ and perform the integration over the unitary diagonalizing matrix. One would then obtain an effective action for the $`N`$ eigenvalues of $`M`$. However, calculations at small $`N`$ show that this effective action is extremely complicated in the case at hand. On the other hand, if we change variables from the eigenvalues to partitions, corresponding to a Fourier transform in group space, something very interesting happens. The $`N`$ variables dual to the $`N`$ eigenvalues are the Young weights $`h_i=N-i+m_i`$, $`i=1,\mathrm{\dots },N`$, where the $`m_i`$ are the lengths of the $`i`$’th rows of the Young diagram corresponding to the partition. Denoting the partitions by $`h=(h_1,\mathrm{\dots },h_N)`$, the dual representation of the integral eq.(7) is found to be
$$Z=\sum _h𝒢_h𝒫_h\beta ^{\frac{\left|h\right|}{2}},$$
(8)
where instead of an integration over the $`N\times N`$ matrix $`M`$ we now have a sum over all partitions $`h`$ of the non-negative integers $`\left|h\right|=0,1,2,\mathrm{\dots }`$. Here $`𝒢_h`$ contains all the information on the interaction, and essentially requires the general correlation functions of the $`U\left(N\right)`$-invariant one-matrix integral
$$𝒢_h=N^{\left|h\right|}\prod _{i=1}^{N}\frac{\left(N-i\right)!}{h_i!}\int 𝒟M\mathrm{exp}N\mathrm{Tr}\left[-\frac{1}{2}M^2-\frac{g}{4}M^4\right]\chi _h\left(M\right),$$
(9)
which are known. Here $`\chi _h\left(M\right)`$ are the Schur functions of $`h`$, which are nothing but a complete set of class functions (non-abelian Fourier modes) on the group. The information on the lattice is contained in the lattice polynomials
$$𝒫_h=\mathrm{exp}\left[\frac{1}{N}\mathrm{Tr}\left(P\mathrm{\nabla }P^{\dagger }\mathrm{\nabla }+Q\mathrm{\nabla }Q^{\dagger }\mathrm{\nabla }\right)\right]\chi _h\left(J\right)|_{J=0}.$$
(10)
Here $`\mathrm{\nabla }`$ denotes the $`N\times N`$ matrix differential operator whose matrix elements are $`\mathrm{\nabla }_{ji}=\frac{\partial }{\partial J_{ij}}`$. The $`𝒫_h`$ are easily shown to be polynomials in the variable $`\frac{1}{N}`$ of maximal degree $`\frac{1}{2}\left|h\right|-1`$.
Now the result of this harmonic analysis is that the terms to be summed over partitions in eq.(8) factorize into a piece $`𝒢_h`$ containing the information on the local interaction and a piece $`𝒫_h`$ containing the information on the space-time structure. Since there are only $`N`$ variables $`h_i`$ we expect the sum eq.(8) to be dominated at $`N=\mathrm{\infty }`$ by a saddle point, i.e. an effective master partition. In a third and final step we will need to write down the full system of bootstrap equations for the saddle point. This will require a deeper analysis of the lattice polynomials. But it should be clear that the problem of solving the large $`N`$ lattice field theory has been reformulated in a rather non-trivial way: in fact, the interacting theory (i.e. $`g\ne 0`$ in eq.(5)) is no harder to solve in this dual space of Young weights than the free theory ($`g=0`$).
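For readers who want to experiment, the bookkeeping of the dual variables is easily set up; the sketch below merely enumerates the partitions $`h`$ (with at most $`N`$ rows) entering the sum eq.(8), together with their Young weights, using an arbitrary cutoff on $`\left|h\right|`$:

```python
# Bookkeeping sketch: enumerate the partitions entering the sum eq.(8)
# together with their Young weights h_i = N - i + m_i; the cutoff on |h|
# is arbitrary.

def partitions(n, max_part=None):
    """Yield partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

N = 3
for size in range(4):                      # |h| = 0, 1, 2, 3
    for m in partitions(size):
        if len(m) > N:
            continue                       # more than N rows: not a U(N) weight
        rows = list(m) + [0] * (N - len(m))
        weights = [N - i + rows[i - 1] for i in range(1, N + 1)]
        print(f"|h| = {size}, rows m = {rows}, weights h = {weights}")
```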
Acknowledgements
I thank W. Krauth and H. Nicolai for fruitful collaboration, and J. Hoppe, V. A. Kazakov, I. K. Kostov, and J. Plefka for useful discussions. This work was supported in part by the EU under Contract FMRX-CT96-0012.
# Magnetic fields and strong density waves in the interacting galaxy NGC 3627
## 1 Introduction
The role of gas flows in spiral arms in galactic magnetic field evolution is a lively debated issue. Theories of field amplification by small-scale turbulent motions (e.g. the axisymmetric dynamo, Wielebinski & Krause 1993), which reproduce well the polarization properties of galaxies with weak density waves (Urbanik et al. 1997), do not need any spiral arm flows. In other theories (e.g. Chiba 1993), the density wave perturbations are the main agent amplifying the galactic magnetic field. The importance of the latter process compared to the dynamo action may be a function of the density wave strength. Though successful attempts to model the magnetic field evolution driven by the dynamo and spiral arms or bars have been made (Mestel & Subramanian 1991, Subramanian & Mestel 1993, Moss 1997, Moss et al. 1998), this issue still rests on very poor observational grounds.
Existing observations of large, nearby spirals (Beck et al. 1996) do not allow one to state whether, in the case of strong density waves, the magnetic field evolution becomes dominated by processes in spiral arms. Strong density wave signatures are present in M51 and in the inner disk of M81. In M51 the polarization B-vectors follow local structural details of dust lanes (Neininger & Horellou 1996), as expected for a magnetic field dominated by density-wave compression. However, the spiral arms are too tightly wound to separate this field component from a possible dynamo-generated one, which is usually distributed more uniformly in the disk. No clear magnetic field component related to density waves has been identified in M81; however, its inner disk shows a subtle network of local compression regions filling the whole interarm space (Visser 1980).
NGC 6946, NGC 4254 and the outer parts of M81 do not show strong density wave signatures. Their magnetic fields form either broad “magnetic arms” in the middle of the interarm space (NGC 6946, Beck & Hoernes 1996) or smoothly fill the interarm space (M81, Krause et al. 1989). In NGC 4254 a coherent spiral pattern of polarization B-vectors exists even in regions of completely chaotic optical structures (Soida et al. 1996). The nearby, well-studied spirals also have enhanced star formation in spiral arms, destroying the regular fields. We note that the detection of smoothly distributed dynamo-type fields requires very good sensitivity to extended polarized emission, which is not always ensured by high-resolution studies.
In this paper we present observations with good sensitivity to smooth, extended structures to check whether a galaxy with very strong signs of density waves may have a global magnetic field dominated by the component caused by density wave action. We obtained 10.55 GHz total power and polarization maps of the spiral NGC 3627 interacting within the Leo Triplet (Haynes et al. 1979). The galaxy has a bar and two spiral arms with a broad interarm space discernible even with a modest resolution (see Fig. 1). The western arm contains a long dust lane tracing large-scale gas (and possibly frozen-in field) compression, accompanied by little star formation (cf. H$`\alpha `$ map by Smith et al. 1994). The middle part of the arm is unusually straight, bending sharply in the outer disk. The eastern arm has a heavy dust lane in its southern part, breaking into a subtle network of filaments in its northern half. The southern dust lane segment is accompanied by a chain of bright star-forming regions. Reuter et al. (1996) found perturbations of the galaxy’s CO velocity field possibly due to streaming motions related to spiral arms. NGC 3627 has also been observed in the far infrared by Sievers et al. (1994). A total power map at 1.49 GHz using the VLA D-array was made by Condon (1987).
## 2 Observations and data reduction
The total power and polarization observations at 10.55 GHz were performed in May 1993, as well as in April and May 1994 using the four-horn system in the secondary focus of the Effelsberg 100-m MPIfR telescope (Schmidt et al. 1993). With 300 MHz bandwidth and $`40`$ K system noise temperature, the r.m.s. noise for 1 sec integration and combination of all horns is $`2`$ mJy/beam area in total power and $`1`$ mJy/beam area in polarized intensity.
Each horn was equipped with two total power receivers and an IF polarimeter resulting in 4 data channels containing the Stokes parameters I, Q and U. The telescope pointing was corrected by making cross-scans of Virgo A at time intervals of about 2 hours. As flux calibrator the highly polarized source 3C286 was observed. A total power flux density of 4450 mJy at 10.55 GHz has been adopted using the formulae by Baars et al. (1977). The same calibration factors were used for total power and polarized intensity, yielding a mean degree of polarization of 12.2%, in reasonable agreement with other published values (Tabara & Inoue 1980).
In total 29 coverages of NGC 3627 in the azimuth-elevation frame were obtained. The data reduction process was performed using the NOD2 data reduction package (Haslam 1974). By combining the information from appropriate horns, using the “software beam-switching” technique (Morsi & Reich 1986) followed by a restoration of total intensities (Emerson et al. 1979), we obtained for each coverage the I, Q and U maps of the galaxy. All coverages were then combined using the spatial frequency weighting method (Emerson & Gräve 1988), yielding the final maps of total power, polarized intensity, polarization degree and polarization position angles. A digital filtering process, which removes spatial frequencies corresponding to noisy structures smaller than the telescope beam, was applied to the final maps. A special CLEAN procedure to remove instrumental polarization was applied to the polarization data. The original beam of our observations was 1.13 arcmin. With the distance modulus of $`30\stackrel{m}{.}37`$ given by Ryan & Visvanathan (1989), corresponding to a distance of 11.9 Mpc, our beamwidth is equivalent to 3.9 kpc in the sky plane. In the galaxy’s disk plane this corresponds to 3.9 and 8 kpc along the major and minor axes, respectively.
## 3 Results
### 3.1 Total power emission
The total power map at the original resolution with B-vectors of polarized intensity is shown in Fig. 1. The map has an r.m.s. noise of 0.6 mJy/b.a. Bright total power peaks are found at the bar ends, where both the CO(1-0) and CO(2-1) maps by Reuter et al. (1996) as well as the H$`\alpha `$ map by Smith et al. (1994) show large accumulations of molecular gas and young star formation products. There is no indication of a bright central source.
The outer disk shows a remarkable asymmetry. The total power emission is considerably more extended and decreases more smoothly towards the south than to the north. In this respect it resembles the optical, CO and H$`\alpha `$ morphology: the western arm running southwards extends to a considerably larger distance from the centre than does the eastern one.
A slight extension towards the east at RA(1950) of about $`11^h17^m45^s`$ and Dec(1950) of about 13°17′ is also seen in the map by Urbanik et al. (1985) and must be real. It has no optical counterpart but corresponds roughly to the region where Haynes et al. (1979) found a counter-rotating HI plume, probably caused by tidal interactions within the Leo Triplet.
The integration of the total power map in elliptical rings using an inclination of 67.5° and a position angle of 173° (both taken from the Lyon-Meudon Extragalactic Database) yields an integrated flux density at 10.55 GHz of 103$`\pm `$10 mJy within a radius of 20 kpc, very close to the total flux obtained by Niklas et al. (1995). This value has been combined with available data at lower frequencies collected in Table 1. All values have been converted to the flux density scale by Baars et al. (1977). A weighted power-law fit to the data yields a mean spectral index of 0.64$`\pm `$0.04 ($`S_\nu \propto \nu ^{-\alpha }`$). As the deviations from a single power law are comparable to the errors in the observed flux densities (Fig. 2), a spectral index of 0.64 has been adopted for the whole frequency range between 80 MHz and 10.7 GHz.
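For illustration, such a weighted log-log fit can be reproduced with a short script. The flux values below are placeholders standing in for Table 1 (which is not reproduced here) — only the 10.55 GHz point is from the text — chosen merely to be consistent with $`\alpha 0.64`$:

```python
import numpy as np

# Weighted power-law fit S_nu ~ nu^(-alpha) in log-log space.
# Data tuples are (nu [GHz], S [mJy], sigma_S [mJy]); all but the last
# entry are illustrative placeholders, NOT the Table 1 values.

data = [
    (0.408, 830.0, 80.0),
    (1.49,  360.0, 35.0),
    (4.85,  170.0, 17.0),
    (10.55, 103.0, 10.0),
]
nu, S, sig = (np.array(v) for v in zip(*data))
x = np.log10(nu)
y = np.log10(S)
w = (S / (sig * np.log(10.0))) ** 2        # weights 1/sigma_y^2

W = w.sum()
xm, ym = (w * x).sum() / W, (w * y).sum() / W
slope = (w * (x - xm) * (y - ym)).sum() / (w * (x - xm) ** 2).sum()
alpha = -slope
alpha_err = 1.0 / np.sqrt((w * (x - xm) ** 2).sum())
print(f"alpha = {alpha:.2f} +/- {alpha_err:.2f}")
```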
### 3.2 Polarized intensity
Our polarized intensity map has an r.m.s. noise of 0.18 mJy/b.a. It shows two asymmetric lobes with B-vectors locally parallel to the principal arms (Fig. 3). The strongest peak of the polarized brightness, with the polarization degree reaching locally 25%, is located west of the galaxy’s centre, at the position of the unusually straight dust lane segment (see also Fig. 5). No bright star-forming regions are present there. The second, weaker peak does not coincide with a prominent dust lane but is located in the interarm region between the northern segment of the eastern arm and the bar, where only small, barely visible dust filaments are present. No polarization was detected in the vicinity of a particularly heavy dust lane segment in the southern part of the eastern arm, at RA(1950) of about $`11^h17^m42^s`$ and Dec(1950) of 13°15′30″, accompanied by a chain of star-forming regions.
With a polarization degree of less than 3% the bar ends are generally weakly polarized. However, clear polarization patches in the vicinity of RA(1950) $`11^h17^m41\stackrel{s}{.}5`$, Dec(1950) 13°14′30″ and RA(1950) $`11^h17^m35\stackrel{s}{.}3`$, Dec(1950) 13°17′32″, surrounding the bar ends and being marginally significant in Fig. 3, exceed the $`3\sigma `$ noise level after convolving the data to a beamwidth of 1.3 arcmin (Fig. 4). The degree of polarization in these regions is about 5–6%.
The orientations of the polarization B-vectors corrected to face-on position are shown in Fig. 5. West of the centre they run parallel to a straight segment of the dust lane, following its bend in the southern disk. In the polarized peak in the NE disk the B-vectors in the interarm region follow the direction of the dust lane which itself is only weakly polarized.
In the above-mentioned weak polarization patches near the bar ends, best visible in Fig. 4, the B-vectors tend to turn smoothly around the terminal points of the bar. No ordered optical or H$`\alpha `$ structures are present there. Close to the northern bar end the vector orientations smoothly join those in the western arm (see also Fig. 5). Near the southern bar end the B-vectors deviate strongly to the east, with a large pitch angle. Across the unpolarized region in the southern part of the eastern arm their orientations jump by about $`90\mathrm{°}`$. The question of possible geometrical depolarization at this position is discussed in detail in Sect. 4.
The integration of the polarized intensity map shown in Fig. 3 in the same rings as described in Sect. 3.1 yields an integrated polarized flux density of $`6.0\pm 1.8`$ mJy. This implies a mean polarization degree of $`5.8\pm 1.8`$%. An application of the formula of Segalovitz et al. (1976) yields a mean ratio of regular to total field strengths $`B_u/B_t=0.22\pm 0.04`$.
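As a rough consistency check — not the actual Segalovitz et al. (1976) computation, whose full formula includes further geometry- and spectrum-dependent factors — one can invert the commonly used approximation $`pp_0B_u^2/B_t^2`$ with $`p_0=(3\alpha +3)/(3\alpha +5)`$:

```python
import numpy as np

# Naive inversion of p ~ p0 * B_u^2 / B_t^2; a crude cross-check only,
# expected to agree with the quoted 0.22 +/- 0.04 merely in order of
# magnitude, since the full Segalovitz et al. formula differs in detail.

alpha = 0.64          # mean spectral index from Sect. 3.1
p = 0.058             # mean polarization degree
p0 = (3 * alpha + 3) / (3 * alpha + 5)
ratio = np.sqrt(p / p0)
print(f"p0 = {p0:.2f}, naive B_u/B_t = {ratio:.2f}")   # ~0.29
```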
## 4 Discussion
### 4.1 Total magnetic field strength and distribution of total power brightness
The integrated radio spectrum of NGC 3627 does not show obvious deviations from a single power law with a slope $`\alpha =0.64`$ (see Fig. 2). Using the integrated flux density, $`\alpha =0.64`$ and assuming the minimum-energy or energy equipartition condition, we derive a mean total magnetic field strength of $`13\pm 4\mu `$G. The regular field component (assuming that it is entirely parallel to the disk) derived from the polarized intensity equals $`3.5\pm 1.3\mu `$G. This value refers to a magnetic field which is regular over scales larger than our beam of 4 kpc. In computing the above values we assumed a lower limit of the cosmic-ray spectrum of 300 MeV (cf. Beck 1991), a proton-to-electron density ratio of 100 (Pacholczyk 1970), and a nonthermal disk scaleheight of 1 kpc. The error in the total magnetic field strength includes an uncertainty by a factor of 2 in the proton-to-electron ratio, the disk thickness and the lower energy cutoff, as well as an unknown thermal fraction between 0% and 40%. The mean total magnetic field of NGC 3627 is stronger than average for spiral galaxies (Beck et al. 1996), in spite of the low neutral gas content (Young et al. 1983, Zhang et al. 1993, see also Urbanik 1987).
The bright total power sources at the ends of the bar lie at the positions of huge star-forming molecular complexes, also coincident with HI peaks (Zhang et al. 1993). Their spectral index between 1.49 GHz and 10.55 GHz, derived using Condon’s (1987) map, is 0.73–0.75, thus nonthermal emission dominates. Fig. 6 shows the cross-sections along the bar of the total power intensity at 10.55 GHz as well as of the H$`\alpha `$ and CO(1-0) line emission (Smith et al. 1994, Reuter et al. 1996), both convolved to our resolution. All profiles were normalized to their maximum values. The CO profile has a peak at the galaxy’s centre due to the emission from the central molecular complex, which in the original maps (Reuter et al. 1996) has the same peak brightness and extent as the CO complexes at the bar ends. However, the H$`\alpha `$ emission lacks the central peak, being very weak in the nuclear region. Intense star formation in the central molecular complex, with its manifestation in the H$`\alpha `$ line obscured by dust (abundant in the nuclear regions of spiral galaxies), is unlikely, as the 10.55 GHz profile (Fig. 6) has a central depression, too. The total power minimum is even deeper than in CO(1-0) after completely subtracting the nuclear region from the original CO map of Reuter et al. (1996). Thus the central molecular complex apparently forms stars at a much lower rate than the aggregates of cold gas at the bar ends. We also note that the HI map of Zhang et al. (1993) shows a central depression, too.
The H$`\alpha `$ emission from the southern bar end is considerably weaker than that from the northern one. No such asymmetry exists in the radio continuum. The CO(1-0) brightness (thus also the content of opaque cold gas) is, however, somewhat higher at the southern than at the northern end of the bar. The asymmetry of the H$`\alpha `$ emission may thus be caused by higher absorption at the southern bar end.
The determination of the radial scale length $`r_0`$ of the nonthermal disk by fitting a beam-smoothed exponential model encounters severe problems because of the high inclination and the emission excess at the bar ends. Nevertheless, reasonable values are in the range 1.2–1.8 arcmin, corresponding to 4.2–6.2 kpc at the distance of 11.9 Mpc.
### 4.2 The magnetic field structure
The polarized brightness is strongly peaked at the middle of a straight portion of the dust lane in the western arm. To check whether the emission is resolved we tried to subtract a beam-smoothed point source at the position of the observed brightness maximum. We found that the polarized peak can be decomposed into an unresolved source with a polarized flux density of about 1.8 mJy and an extension along the southern part of the dust lane with a maximum polarized brightness of about 0.8 mJy/b.a. The eastern lobe, however, is rather poorly resolved.
The eastern and western polarized lobes differ not only in their peak brightness but also in their positions and azimuthal extent relative to the optical arms. Moving in the galactic disk along the azimuth anticlockwise from the corresponding bar ends, we observe initially a very similar increase of polarized brightness (Fig. 7). However, while the polarized intensity in the western arm continues to rise, reaching a maximum at an azimuthal distance of about $`100\mathrm{°}`$ from the southern bar end, the polarized brightness in the eastern arm drops at an azimuthal distance of $`75\mathrm{°}`$ from the northern bar end, showing even a local minimum at $`100\mathrm{°}`$. As the inner parts of both arms have a similar shape, the unpolarized region in the eastern arm does not result from effects related to the spiral arm geometry. The statistical significance of the differences between the profiles was estimated by averaging them in non-overlapping azimuthal angle intervals corresponding to the beam size at the appropriate azimuthal distance from the bar end. This yielded for each profile 7 statistically independent points. Assuming that they represent independent random variables having an r.m.s. dispersion equal to the polarization map noise, we found that the probability that the differences between the profiles result purely from random fluctuations is smaller than $`2\times 10^{-6}`$. This result was checked to be independent of the starting point of the averaging intervals.
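The logic of that estimate can be sketched as follows, with placeholder numbers (the measured bin-averaged differences are not listed in the text): treat the 7 beam-sized bin averages of the profile difference as independent Gaussians with the map noise and compute a $`\chi ^2`$ tail probability.

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the significance estimate: binned differences d_i between the
# two azimuthal profiles, taken as independent Gaussians with the
# polarized-intensity map noise.  The d_i are placeholders, not measured values.

sigma = 0.18                                         # r.m.s. noise [mJy/b.a.]
d = np.array([0.1, 0.2, 0.3, 0.5, 0.6, 0.4, 0.2])    # illustrative [mJy/b.a.]
chi2_stat = float(np.sum((d / sigma) ** 2))
p_value = chi2.sf(chi2_stat, df=len(d))
print(f"chi2 = {chi2_stat:.1f} (7 dof), p = {p_value:.1e}")
```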
The dust lanes in the western arm and its segment in the inner part of the eastern one coincide with ridges of CO(1-0) emission (Reuter et al. 1996, see also Fig. 8a) tracing very dense, narrow and elongated molecular gas complexes forming in density-wave compression regions. They are also visible in the HI map of Zhang et al. (1993). While the western polarized lobe peaks on the CO ridge and extends along it, the eastern one falls on a hole in the CO emission. On the other hand, the CO (and HI) ridge in the southern part of the eastern arm, being even stronger than the western one, coincides with a completely unpolarized region. The narrow, elongated CO features are thus not always associated with highly polarized regions, as one would expect from pure compression of the magnetic field by density waves. We note, however, that the western CO ridge is accompanied by only isolated, small HII regions (Smith et al. 1994, Fig. 8b) while the unpolarized CO ridge in the eastern arm hosts a chain of large complexes of H$`\alpha `$-emitting gas. Faraday effects at 10.55 GHz are negligible, thus the depolarization is primarily of geometrical nature. Tangling of the magnetic field by star formation inside the H$`\alpha `$-bright knot is insufficient: most of the star formation occurs outside the dust lane, thus it cannot destroy a possible density wave-related field, and it occupies too small a volume to randomize a smoothly distributed dynamo field in a whole disk quadrant. However, at the inclination of 67.5° the polarization degree may be significantly lowered by vertical magnetic field fluctuations developing above the strongly star-forming chain in the eastern arm. They may be caused by vertical chimneys or superbubbles powered by multiple supernova explosions (see e.g. Mineshige et al. 1993, Tomisaka 1998). Some role of Parker instabilities (Parker 1966) cannot be excluded either. The magnetic field structures stretching perpendicularly to the disk, with a significant vertical field component, projected onto the sky plane and seen by a large beam together with the disk-parallel field (either concentrated in the dust lane or filling the whole disk), may provide an efficient depolarizing agent.
### 4.3 Magnetic field models
To judge whether our polarization map is dominated by the density-wave magnetic field component or by the axisymmetric, dynamo-type one, we need beam-smoothed models of the polarized emission from magnetic fields of an assumed structure. Four kinds of models were computed using techniques described by Urbanik et al. (1997):
* A model assuming a regular magnetic field concentrated in prominent dust lanes and running along them. In addition the magnetic field running along the bar could be switched on and off.
* The above model without polarized emission from the strongly star-forming segment of the eastern arm.
* A model assuming an axisymmetric, spiral, plane-parallel field with a constant intrinsic pitch angle of $`30\mathrm{°}`$ (mean value for NGC 3627).
* The above model without polarized emission in the eastern region corresponding to the discussed star-forming arm segment, as expected for strong vertical field fluctuations seen in projection together with a disk-parallel field.
In all models the adopted radial distribution of the total field strength and cosmic-ray electron density was set to yield an exponential total power disk with a radial scale length $`r_0`$ (Sect. 4.1). The intrinsic degree of polarization rose linearly from $`p_1`$ in the centre to $`p_2`$ in the disk outskirts. $`r_0`$, $`p_1`$ and $`p_2`$ were adjusted to yield the best qualitative agreement of the models with the observations.
The best results presented in Fig. 9a–d are as follows:
* The presence of two polarized lobes with B-vectors running parallel to optical spiral arms is reproduced by both axisymmetric and spiral arm models. Both place the western lobe at the middle of the spiral arm, in agreement with observations.
* Both axisymmetric and spiral arm models need an extra depolarizing agent in the star-forming segment of the eastern arm, otherwise both models give the maximum of polarized intensity where observations show a complete lack of polarization (Fig. 9a and c).
* Only the axisymmetric models (with and without an extra depolarization, Fig. 9c and d) correctly place the NE lobe in the interarm space. The spiral arm models (Fig. 9a and b) invariably place the eastern polarized lobe on the prominent dust lane, which disagrees with observations.
* Only the axisymmetric models reproduce the observed regions of a weak polarized signal encircling minima at the bar ends with B-vectors turning smoothly from one arm to the other.
* Even with a suppressed polarization in the SE disk region the axisymmetric model yields similar peak amplitudes of both lobes and thus does not reproduce their observed asymmetry. The spiral arm model does this considerably better.
* Another difficulty of the axisymmetric model is the too large extent of the modelled lobes into the outer disk compared to their rather peaked shape in NGC 3627 (especially of the western one). Varying $`r_0`$ and/or $`p_2`$ can make the western lobe more peaked but moves it to the interarm space, worsening the agreement with observations.
The last two difficulties of the axisymmetric model can be somewhat relaxed by adding an unresolved polarized source in the middle of the western arm, where its unusually straight part (see Fig. 5) and a steep HI gradient on this side of the disk are suggestive of an external gas compression (Haynes et al. 1989). However, the difficulties of the spiral arm model can only be remedied by adding a widespread, significant axisymmetric magnetic field.
Attempts to reproduce the variations of the face-on corrected magnetic pitch angles $`\psi `$ with the azimuthal angle in the disk are shown in Fig. 10. The observed changes of $`\psi `$, and especially a jump near the azimuthal angle of $`90\mathrm{°}`$, rule out a purely axisymmetric magnetic field. However, the addition of the discussed unpolarized region to our axisymmetric model yields the jump at the correct position, though its exact shape in our simple model is still far from reality. The spiral arm models shown in Fig. 9a and b also have some dip at about 90°; however, they yield abrupt jumps of $`\psi `$ at $`150\mathrm{°}`$ and $`330\mathrm{°}`$. These features depend neither on the model parameters nor on the inclusion or exclusion of low-brightness regions in the model maps. They result naturally from the spiral arm shape and cannot be removed without changing the basic model assumptions. In the azimuthal angle range of $`270\mathrm{°}`$ to $`330\mathrm{°}`$ the spiral arm model also deviates from the data much more than the one assuming the axisymmetric field.
Despite very strong density waves, NGC 3627 still shows clear signatures of an axisymmetric, dynamo-type magnetic field. At present it is hard to say whether it dominates the disk field, showing only locally the effects of external compression, or whether it coexists with the density-wave component as an important constituent of the global magnetic field. A detailed discrimination between these possibilities needs observations with considerably higher resolution, complemented by extensive computations of a whole grid of detailed quantitative models of NGC 3627, which is beyond the scope of this paper.
## 5 Summary and conclusions
The strongly interacting Leo Triplet galaxy NGC 3627 has been observed at 10.55 GHz with the 100-m MPIfR radio telescope. Total power and polarization maps with a resolution of 1.13 arcmin, very sensitive to extended, diffuse polarized emission, were obtained. Their analysis in the context of optical, CO and H$`\alpha `$ data yielded the following results:
* The total power map shows two bright maxima at the bar ends, coincident with strong CO and H$`\alpha `$ peaks. There is no evidence of significant radio emission from the central region; thus the large central molecular complex (Reuter et al. 1996) has a star formation rate much lower than that of the molecular gas accumulations at the bar ends, and its weak H$`\alpha `$ emission is not entirely due to strong dust obscuration. However, differences in absorption could explain the asymmetry of the H$`\alpha `$ emission between the bar ends.
* The polarized emission forms two asymmetric lobes: a strong one, peaking on the inward-bent dust lane in the western arm and extending along this arm, and a weaker one located in the interarm space in the NE disk. We also detected diffuse, extended polarized emission encircling the bar ends away from the spiral arms. The polarization B-vectors run parallel to the principal arms, twisting around the bar ends in the weakly polarized regions.
* The southern part of the eastern arm is completely depolarized. This region shows signs of strong density wave compression; however, it contains a lot of ionized gas indicating strong star formation. At the galaxy’s inclination, the development of vertical magnetic instabilities, seen in projection together with the disk-parallel magnetic field, could be a suitable depolarizing agent.
* Attempts to qualitatively explain the distribution of polarized intensity and the B-vector geometry in NGC 3627 in terms of simple magnetic field models suggest the presence of a significant (if not dominant) axisymmetric, dynamo-type field. However, to best explain our polarization maps all models need an extra geometrical depolarization e.g. by vertical fields above the discussed star-forming segment of the eastern arm. To reproduce the polarization asymmetry the dynamo-generated field also requires an extra polarized component (probably due to external compression?) at the position of the straight western arm segment.
The present work demonstrates that even in galaxies with strong density waves, observations sensitive to extended diffuse polarized emission cannot be fully explained by the density wave-related magnetic field component, but show clear signatures of large-scale axisymmetric dynamo-type fields. On the other hand, in the same object the density-wave component may show up much better, or become dominant, in high-resolution interferometric observations that underestimate the extended polarized emission. We believe that combined interferometric and single-dish data on such objects, supported by extensive modelling, might help to establish the mutual relationships and relative roles of turbulence and density-wave flows in galactic magnetic field evolution.
###### Acknowledgements.
The authors wish to express their thanks to Dr Beverly Smith from IPAC for providing us with her H$`\alpha `$ map in numerical format. We are grateful to numerous colleagues from the Max-Planck-Institut für Radioastronomie (MPIfR) in Bonn, in particular to Drs E.M. Berkhuijsen and P. Reich, for their valuable discussions during this work. M.S. and M.U. are indebted to the Directors of the MPIfR for the invitations to stay at this Institute, where substantial parts of this work were done, and to Dr H-P. Reuter for his assistance in using his CO maps. They are also grateful to colleagues from the Astronomical Observatory of the Jagiellonian University in Kraków, in particular to Drs K. Otmianowska-Mazur and M. Ostrowski, for their comments. We thank the anonymous referee for valuable remarks. This work was supported by grants from the Polish Research Committee (KBN), nos. 578/P03/95/09 and 962/P03/97/12. Large parts of the computations were made using the HP715 workstation at the Astronomical Observatory in Kraków, partly sponsored by the ESO C&EE grant A-01-116, and on the Convex-SPP machine at the Academic Computer Centre “Cyfronet” in Kraków (grants no. KBN/C3840/UJ/011/1996 and KBN/SPP/UJ/011/1996).
# Infinitesimal deformations of a Calabi-Yau hypersurface of the moduli space of stable vector bundles over a curve
## 1. Introduction
Let $`X`$ be a compact connected Riemann surface of genus $`g`$, with $`g\ge 2`$. Let $`ℳ_\xi :=ℳ(n,\xi )`$ denote the moduli space of stable vector bundles $`E`$ of rank $`n`$, with $`n\ge 2`$, over $`X`$, such that the line bundle $`\bigwedge ^nE`$ is isomorphic to a fixed holomorphic line bundle $`\xi `$ over $`X`$. The degree $`d=\text{deg}(\xi )`$ and $`n`$ are assumed to be coprime. We also assume that if $`g=2`$, then $`n\ne 2,3`$, and if $`g=3`$, then $`n\ne 2`$.
The moduli space $`ℳ_\xi `$ is a connected smooth projective variety over $`ℂ`$, and for fixed $`n`$, the moduli space $`ℳ_\xi `$ is isomorphic to $`ℳ_{\xi ^{\prime }}`$ if $`\xi ^{\prime }`$ is another holomorphic line bundle with $`\text{deg}(\xi )=\text{deg}(\xi ^{\prime })`$. We take $`\xi `$ to be of the form $`L^d`$, where $`L`$ is a holomorphic line bundle over $`X`$ such that $`L^{(2g-2)}`$ is isomorphic to the canonical line bundle $`K_X`$.
The Picard group $`\text{Pic}(ℳ_\xi )`$ is isomorphic to $`ℤ`$. The anticanonical line bundle $`K_{ℳ_\xi }^{-1}`$ is isomorphic to $`\mathrm{\Theta }^2`$, where $`\mathrm{\Theta }`$ is the ample generator of $`\text{Pic}(ℳ_\xi )`$, known as the generalized theta line bundle.
Let $`D`$ be a smooth divisor on $`ℳ_\xi `$ such that the holomorphic line bundle $`𝒪_{ℳ_\xi }(D)`$ over $`ℳ_\xi `$ is isomorphic to $`K_{ℳ_\xi }^{-1}`$. Such a divisor is a connected simply connected smooth projective variety with trivial canonical line bundle. In other words, $`D`$ is a Calabi-Yau variety.
If we move the triplet $`(X,L,D)`$, in the space of all triplets $`(X^{},L^{},D^{})`$, where $`D^{}`$ is a smooth Calabi-Yau hypersurface on a moduli space of stable vector bundles, of the above type, over $`X^{}`$, then we get deformations of the complex manifold $`D`$, simply by associating the complex manifold $`D^{}`$ to any triplet $`(X^{},L^{},D^{})`$. The Kodaira-Spencer infinitesimal deformation map for this family gives a homomorphism from the tangent space of the moduli space of triplets $`(X,L,D)`$, of the above type, into $`H^1(D,T_D)`$, the space parametrizing the infinitesimal deformations of the complex manifold $`D`$. The main result here, \[Theorem 3.3\], says
The above Kodaira-Spencer infinitesimal deformation map is an isomorphism.
Consequently, there is an exact sequence
$`(1.1)`$
$$0\to \text{Hom}(l,H^0(ℳ_\xi ,K_{ℳ_\xi }^{-1})/l)\to H^1(D,T_D)\to H^1(X,T_X)\to \mathrm{\hspace{0.17em}0},$$
where $`l\subset H^0(ℳ_\xi ,K_{ℳ_\xi }^{-1})`$ is the one dimensional subspace defined by $`D`$. The above inclusion map
$$\text{Hom}(l,H^0(ℳ_\xi ,K_{ℳ_\xi }^{-1})/l)\to H^1(D,T_D)$$
corresponds to the deformations of $`D`$ obtained by moving the hypersurface in the fixed variety $`ℳ_\xi `$, i.e., $`X`$ is kept fixed, and the projection $`H^1(D,T_D)\to H^1(X,T_X)`$ in (1.1) is the forgetful map from the space of infinitesimal deformations of the triplet $`(X,L,D)`$ to the space of infinitesimal deformations of $`X`$. From the above exact sequence (1.1) it follows immediately that
$$dimH^1(D,T_D)=\mathrm{\hspace{0.17em}3}g-4+dimH^0(ℳ_\xi ,K_{ℳ_\xi }^{-1}).$$
We note that the dimension of any $`H^0(ℳ_\xi ,\mathrm{\Theta }^k)`$, in particular that of $`H^0(ℳ_\xi ,K_{ℳ_\xi }^{-1})`$, is given by the Verlinde formula.
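The dimension count is immediate from (1.1): since $`diml=1`$, the Hom term contributes $`h^0(K_{ℳ_\xi }^{-1})-1`$, while $`H^1(X,T_X)`$ contributes the usual $`3g-3`$ moduli of the curve:

```latex
\dim H^1(D,T_D)
  = \dim \mathrm{Hom}\bigl(l,\,H^0(\mathcal{M}_\xi,K_{\mathcal{M}_\xi}^{-1})/l\bigr)
    + \dim H^1(X,T_X)
  = \bigl(h^0(K_{\mathcal{M}_\xi}^{-1})-1\bigr) + (3g-3)
  = 3g-4 + \dim H^0(\mathcal{M}_\xi,K_{\mathcal{M}_\xi}^{-1}).
```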
Let $`𝒰_D`$ denote the restriction to $`X\times D`$ of a Poincaré vector bundle over $`X\times ℳ_\xi `$. For any $`x\in X`$, the vector bundle over $`D`$, obtained by restricting $`𝒰_D`$ to $`x\times D`$, is denoted by $`(𝒰_D)_x`$. The following result is used in the proof of Theorem 3.3.
For any $`x\in X`$, the vector bundle $`(𝒰_D)_x`$ is stable with respect to any polarization on $`D`$. Moreover, the infinitesimal deformation map
$$T_xX\to H^1(D,\text{Ad}(𝒰_D)_x),$$
for the family $`𝒰_D`$ of vector bundles over $`D`$ parametrized by $`X`$, is an isomorphism.
This result was proved in \[3, Theorem 2.5\] under the assumption that $`n\ge 3`$. Here it is extended to the rank two case \[Theorem 2.1\].
## 2. Restriction of the universal vector bundle
We continue with the notation of the introduction.
The anticanonical line bundle $`K_{ℳ_\xi }^{-1}:=\bigwedge ^{\mathrm{top}}T_{ℳ_\xi }`$ is isomorphic to $`\mathrm{\Theta }^2`$ \[6, page 69, Theorem 1\], where the generalized theta line bundle $`\mathrm{\Theta }`$ is the ample generator of the Picard group $`\text{Pic}(ℳ_\xi )`$; the Picard group is isomorphic to $`ℤ`$.
Let $`D\subset ℳ_\xi `$ be a smooth divisor, satisfying the condition that the line bundle $`𝒪_{ℳ_\xi }(D)`$ is isomorphic to $`K_{ℳ_\xi }^{-1}`$. Let
$$\tau :D\to ℳ_\xi $$
denote the inclusion map. Using the Poincaré adjunction formula, we have $`K_D\cong \tau ^{*}K_{ℳ_\xi }\otimes \tau ^{*}𝒪_{ℳ_\xi }(D)`$. In view of the assumption $`𝒪_{ℳ_\xi }(D)\cong K_{ℳ_\xi }^{-1}`$, the canonical line bundle $`K_D`$ is trivial. Since the divisor $`D`$ is ample, it is connected. Since the moduli space $`ℳ_\xi `$ is simply connected, the divisor $`D`$ is also simply connected. Therefore, $`D`$ is a Calabi-Yau variety.
Fix a Poincaré vector bundle $`𝒰`$ over $`X\times ℳ_\xi `$. In other words, for any $`m\in ℳ_\xi `$, the vector bundle over $`X`$ obtained by restricting $`𝒰`$ to $`X\times m`$ is represented by the point $`m`$. Let $`\mathrm{Ad}(𝒰)`$ denote the rank $`n^2-1`$ vector bundle over $`X\times ℳ_\xi `$ defined by the trace zero endomorphisms of $`𝒰`$. The vector bundle $`(\text{Id}_X\times \tau )^{*}𝒰`$ (respectively, $`(\text{Id}_X\times \tau )^{*}\mathrm{Ad}(𝒰)`$) over $`X\times D`$ will be denoted by $`𝒰_D`$ (respectively, $`\mathrm{Ad}(𝒰_D)`$).
For any fixed $`x\in X`$, let $`𝒰_x`$ denote the vector bundle over $`ℳ_\xi `$ obtained by restricting $`𝒰`$ to $`x\times ℳ_\xi `$. The vector bundle over $`D`$ obtained by restricting $`𝒰_D`$ (respectively, $`\mathrm{Ad}(𝒰_D)`$) to $`x\times D`$ will be denoted by $`(𝒰_D)_x`$ (respectively, $`\mathrm{Ad}(𝒰_D)_x`$).
Since $`H^2(D,ℤ)=ℤ`$, the stability of a vector bundle over $`D`$ does not depend on the choice of polarization needed to define the degree of a coherent sheaf over $`D`$.
Theorem 2.1. For any point $`x\in X`$, the vector bundle $`(𝒰_D)_x`$ over $`D`$ is stable. Moreover, the infinitesimal deformation map
$$T_xX\to H^1(D,\mathrm{Ad}(𝒰_D)_x)$$
for the family $`𝒰_D`$ of vector bundles over $`D`$ parametrized by $`X`$, is an isomorphism.
Proof. If $`n\ge 3`$ and also $`g\ge 3`$, then the theorem has already been proved in \[3, Theorem 2.5\].
Take a point $`x\in X`$. We start, as in the proof of Theorem 2.5 of , by considering the exact sequence
$$0\to \text{Ad}(𝒰)_x\otimes 𝒪_{ℳ_\xi }(-D)\to \text{Ad}(𝒰)_x\stackrel{F}{\to }\tau _{*}\text{Ad}(𝒰_D)_x\to \mathrm{\hspace{0.17em}0},$$
over $`ℳ_\xi `$, where $`F`$ denotes the restriction map. This yields the long exact sequence
$$H^1(ℳ_\xi ,\text{Ad}(𝒰)_x\otimes 𝒪_{ℳ_\xi }(-D))\to H^1(ℳ_\xi ,\text{Ad}(𝒰)_x)$$
$$\to H^1(D,\text{Ad}(𝒰_D)_x)\to H^2(ℳ_\xi ,\text{Ad}(𝒰)_x\otimes 𝒪_{ℳ_\xi }(-D))$$
of cohomologies. If we consider $`𝒰`$ as a family of vector bundles over $`ℳ_\xi `$ parametrized by $`X`$, then the infinitesimal deformation map
$$T_xX\to H^1(ℳ_\xi ,\text{Ad}(𝒰)_x)$$
is an isomorphism \[5, page 392, Theorem 2\]. In view of the above long exact sequence, to prove that the infinitesimal deformation map of the theorem is an isomorphism it suffices to establish the following lemma.
Lemma 2.2. If $`i=1,2`$, then the following vanishing of cohomology
$$H^i(ℳ_\xi ,\mathrm{Ad}(𝒰_x)\otimes 𝒪_{ℳ_\xi }(-D))=0$$
is valid.
Proof of Lemma 2.2. This lemma was proved in \[3, Lemma 2.1\] under the assumption that $`n\ge 3`$. So in the proof we will assume that $`n=2`$ and $`g\ge 4`$.
Let $`p`$ denote, as in \[3, Section 3\], the natural projection of the projective bundle $`ℙ(𝒰_x)`$ over $`ℳ_\xi `$ onto $`ℳ_\xi `$. Let $`T_p^{\text{rel}}`$ denote the relative tangent bundle for the projection $`p`$ from $`ℙ(𝒰_x)`$ to $`ℳ_\xi `$. Since $`R^1p_{*}T_p^{\text{rel}}=0`$ and $`p_{*}T_p^{\text{rel}}\cong \mathrm{Ad}(𝒰_x)`$, for any $`i=0,1,2`$, the isomorphism
$$H^i(ℳ_\xi ,\mathrm{Ad}(𝒰_x)\otimes 𝒪_{ℳ_\xi }(-D))=H^i(U,p^{*}K_{ℳ_\xi }\otimes T_p^{\text{rel}})$$
is obtained from the Leray spectral sequence for the map $`p`$.
If $`E`$ is a stable vector bundle of rank two and degree one over $`X`$, then the vector bundle $`E^{\prime }`$ over $`X`$ obtained by performing an elementary transformation
$$0\to E^{\prime }\to E\to L_x\to \mathrm{\hspace{0.17em}0},$$
where $`L_x`$ is a one dimensional quotient of the fiber $`E_x`$, is semistable. Therefore, we have a morphism, which we will denote by $`q`$, from $`ℙ(𝒰_x)`$ to the moduli space $`ℳ_{\xi (x)}`$. Here $`\xi (x)`$ denotes the line bundle $`\xi \otimes 𝒪_X(-x)`$, and $`ℳ_{\xi (x)}`$ is the moduli space of semistable vector bundles over $`X`$ of rank two and determinant $`\xi (x)`$.
Define $`U\subset ℙ(𝒰_x)`$ to be the inverse image, under the map $`q`$, of the stable locus of $`ℳ_{\xi (x)}`$.
The line bundle $`T_p^{\text{rel}}`$ is isomorphic to the relative canonical bundle $`K_q^{\text{rel}}`$ \[7, page 85\]. Therefore, to prove the lemma it suffices to show that
$`(2.3)`$
$$H^i(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=\mathrm{\hspace{0.17em}0},$$
where $`i=0,1,2`$.
Using the isomorphism of $`T_p^{\text{rel}}`$ with $`K_q^{\text{rel}}`$, from
$`(2.4)`$
$$q^{*}K_{ℳ_{\xi (x)}}\otimes K_q^{\text{rel}}\cong K_U\cong p^{*}K_{ℳ_\xi }\otimes K_p^{\text{rel}}$$
we have
$$p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}}\cong q^{*}K_{ℳ_{\xi (x)}}\otimes \left(K_q^{\text{rel}}\right)^3.$$
Since the restriction of the line bundle $`p^{*}K_{ℳ_\xi }\otimes K_p^{\text{rel}}`$ to a fiber of the map $`q`$ has strictly negative degree, using the above isomorphism, and the projection formula, we have
$`(2.5)`$
$$H^i(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=H^{i-1}(ℳ_{\xi (x)},K_{ℳ_{\xi (x)}}\otimes R^1q_{*}\left(K_q^{\text{rel}}\right)^3),$$
where $`i=0,1,2`$.
The map $`q`$ is a smooth $`ℙ^1`$-fibration over an open subset $`U^{\prime }`$ of $`ℳ_{\xi (x)}`$. The assumption that the genus of $`X`$ is at least four ensures that the codimension of the complement of $`U^{\prime }`$ is at least four. Therefore, by using the Hartogs-type theorem for cohomology, the isomorphism (2.5) is established.
Setting $`i=0`$ in (2.5), we conclude that $`H^0(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=0`$.
The following proposition is needed for our next step.
Proposition 2.6. Let $`W`$ be a holomorphic vector bundle of rank two over a complex manifold $`Z`$, and let $`f:ℙ(W)\to Z`$ be the corresponding projective bundle. Then there are canonical isomorphisms
$$R^1f_{*}K_f^3\cong S^4(W)\otimes \left(\bigwedge ^2W^{*}\right)^2\cong R^0f_{*}T_f^2,$$
where $`K_f`$ (respectively, $`T_f`$) is the relative canonical (respectively, anticanonical) line bundle.
Proof of Proposition 2.6. To construct the isomorphisms, let $`V`$ be a complex vector space of dimension two. Choosing a basis of $`V`$, we identify the tangent bundle $`T_{ℙ(V)}`$ with $`𝒪_{ℙ(V)}(2)`$, and also obtain an identification of the line $`\bigwedge ^2V^{*}`$ with $`ℂ`$. Since
$$H^0(ℙ(V),𝒪_{ℙ(V)}(m))=S^m(V),$$
we have an isomorphism of $`H^0(ℙ(V),T_{ℙ(V)}^2)`$ with $`S^4(V)\otimes \left(\bigwedge ^2V^{*}\right)^2`$. Now it is a straightforward computation to check that this isomorphism is $`GL(V)`$ invariant, i.e., it does not depend on the choice of a basis of $`V`$. Therefore, this pointwise construction of a canonical isomorphism of vector spaces induces an isomorphism
$$R^0f_{*}T_f^2\cong S^4(W)\otimes \left(\bigwedge ^2W^{*}\right)^2$$
between vector bundles.
To obtain the other isomorphism in the statement of the proposition, first note that by the Serre duality we have $`H^0(ℙ(V),T_{ℙ(V)}^2)=H^1(ℙ(V),K_{ℙ(V)}^3)^{*}`$. Now the canonical identification of $`S^4(W)\otimes \left(\bigwedge ^2W^{*}\right)^2`$ with its dual, namely $`S^4(W^{*})\otimes \left(\bigwedge ^2W\right)^2`$, gives the other isomorphism. This completes the proof of Proposition 2.6. ∎
The isomorphisms in Proposition 2.6 are canonical isomorphisms, i.e., they are compatible with the pull back of $`W`$ under any map $`Z^{\prime }\to Z`$, and furthermore, the isomorphisms are compatible with substituting $`W`$ by $`W\otimes L`$, where $`L`$ is a holomorphic line bundle over $`Z`$.
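As a quick consistency check of the ranks in Proposition 2.6, restrict to a single fiber $`ℙ^1`$ and apply Riemann–Roch:

```latex
\deg K_{\mathbb{P}^1}^{3} = -6,\qquad
h^0(\mathbb{P}^1,K^{3}) = 0,\qquad
h^1(\mathbb{P}^1,K^{3}) = -\chi(K^{3}) = -(-6+1) = 5
  = h^0(\mathbb{P}^1,T^{2}) = \dim S^4(V),\qquad \dim V = 2.
```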
Combining Proposition 2.6 with (2.5), and using the projection formula, we get that if $`i=0,1,2`$, then
$`(2.7)`$
$$H^i(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=H^{i-1}(ℳ_{\xi (x)},K_{ℳ_{\xi (x)}}\otimes q_{*}\left(T_q^{\text{rel}}\right)^2)$$
$$=H^{i-1}(U,q^{*}K_{ℳ_{\xi (x)}}\otimes \left(T_q^{\text{rel}}\right)^2).$$
Indeed, the first isomorphism in (2.7) is a consequence of (2.5) and Proposition 2.6, and since $`R^1q_{*}\left(T_q^{\text{rel}}\right)^2=0`$, the second isomorphism in (2.7) is valid. Although there is no universal vector bundle over $`X\times ℳ_{\xi (x)}`$, the properties of the isomorphism $`R^1f_{*}K_f^3\cong R^0f_{*}T_f^2`$ in Proposition 2.6 that were explained earlier evidently ensure that the isomorphism in (2.7) is valid. More precisely, the pointwise construction of the isomorphism between $`R^1q_{*}\left(K_q^{\text{rel}}\right)^3`$ and $`q_{*}\left(T_q^{\text{rel}}\right)^2`$ gives an isomorphism of vector bundles.
Using (2.4), and the earlier mentioned fact that $`T_p^{\text{rel}}\cong K_q^{\text{rel}}`$, we obtain that
$$q^{*}K_{ℳ_{\xi (x)}}\otimes \left(T_q^{\text{rel}}\right)^2\cong p^{*}K_{ℳ_\xi }\otimes \left(K_p^{\text{rel}}\right)^3.$$
Since the restriction of $`\left(K_p^{\text{rel}}\right)^3`$ to a fiber of $`p`$ has strictly negative degree, we have $`p_{*}\left(K_p^{\text{rel}}\right)^3=0`$. Consequently, the above isomorphism simplifies the terms in (2.7) to give the following isomorphism
$`(2.8)`$
$$H^{i-1}(U,q^{*}K_{ℳ_{\xi (x)}}\otimes \left(T_q^{\text{rel}}\right)^2)=H^{i-2}(ℳ_\xi ,K_{ℳ_\xi }\otimes R^1p_{*}\left(K_p^{\text{rel}}\right)^3)$$
where $`i=0,1,2`$.
Note that we obtain $`H^1(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=0`$ by setting $`i=1`$ in (2.8).
In order to complete the proof of the lemma we need to show that
$`(2.9)`$
$$H^2(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=\mathrm{\hspace{0.17em}0}.$$
To prove the above statement first observe that using (2.8), and setting $`i=2`$, we have the following isomorphism
$`(2.10)`$
$$H^2(U,p^{*}K_{ℳ_\xi }\otimes K_q^{\text{rel}})=H^0(ℳ_\xi ,K_{ℳ_\xi }\otimes R^1p_{*}\left(K_p^{\text{rel}}\right)^3).$$
Now using Proposition 2.6 we have
$$H^0(ℳ_\xi ,K_{ℳ_\xi }\otimes R^1p_{*}\left(K_p^{\text{rel}}\right)^3)=H^0(ℳ_\xi ,K_{ℳ_\xi }\otimes S^4(𝒰_x)\otimes \left(\bigwedge ^2𝒰_x^{*}\right)^2),$$
where $`𝒰_x`$, as defined earlier, is the vector bundle over $`ℳ_\xi `$ obtained by restricting the Poincaré bundle $`𝒰`$ to the subvariety $`x\times ℳ_\xi \subset X\times ℳ_\xi `$.
The vector bundle $`𝒰_x`$ is known to be stable. Consequently, the vector bundle
$$S^4(𝒰_x)\otimes \left(\bigwedge ^2𝒰_x^{*}\right)^2$$
is semistable. Now, since the vector bundle $`S^4(𝒰_x)\otimes \left(\bigwedge ^2𝒰_x^{*}\right)^2`$ is the dual of itself, its degree is zero. On the other hand, the degree of $`K_{ℳ_\xi }`$ is strictly negative. From these it follows that the vector bundle
$$K_{ℳ_\xi }\otimes S^4(𝒰_x)\otimes \left(\bigwedge ^2𝒰_x^{*}\right)^2$$
does not admit any nonzero section, since it is semistable of strictly negative degree. In view of (2.10), this establishes the assertion (2.9). Therefore, the assertion (2.3) is valid. This completes the proof of the lemma. ∎
Since we have established, in Lemma 2.2, the rank two analog of Lemma 2.1 of , the proof of the stability of the vector bundle $`(𝒰_D)_x`$ for rank at least three, as given in \[3, Theorem 2.5\], is also valid for the rank two case if $`g\ge 4`$.
We note that \[3, Theorem 2.5\] was proved under the assumption that $`g\ge 3`$. However, the proof remains valid for $`g=2`$ if the condition that the rank is at least four is imposed. Under this condition, the codimension of the subvariety over which the map $`q`$ fails to be smooth and proper is sufficiently large to be able to apply the analog of Hartogs’ theorem, which has been repeatedly used, for the cohomologies in question.
This completes the proof of Theorem 2.1. ∎
In view of the above Lemma 2.2, all the results established in Section 2 of for rank $`n\ge 3`$ remain valid for rank two and $`g\ge 4`$.
## 3. Computation of the infinitesimal deformations
Let $`X`$ be a compact connected Riemann surface of genus $`g`$, with $`g2`$. Take a holomorphic line bundle $`\xi `$ over $`X`$ of degree $`d`$. Let $`_\xi :=(n,\xi )`$ denote the moduli space of stable vector bundles $`E`$ of rank $`n`$ over $`X`$, with $`^nE=\xi `$. For another line bundle $`\xi ^{}`$ of degree $`d`$, the variety $`(n,\xi ^{})`$ is isomorphic to $`(n,\xi )`$. Indeed, if $`\eta `$ is a line bundle over $`X`$ with $`\eta ^n=\xi ^{}\xi ^{}`$, then the map defined by $`EE\eta `$ is an isomorphism from $`(n,\xi )`$ to $`(n,\xi ^{})`$. Therefore, we can rigidify (infinitesimally) the choice of $`\xi `$ by the following procedure. Fix a line bundle $`L`$ of degree one over $`X`$ such that $`L^{(22g)}`$ is isomorphic to the tangent bundle $`T_X`$. We fix $`\xi `$ to be $`L^d`$.
We will assume that the integers $`n`$ and $`d`$ are coprime, and $`n2`$. We will further assume that if $`g=2`$ then $`n2,3`$, and if $`g=3`$, then $`n2`$.
The above numerical assumptions are made in order to ensure that the assertion in Theorem 2.1 is valid for $`_\xi `$.
Take a smooth divisor $`D`$ on $`_\xi `$ such that $`𝒪__\xi (D)=K__\xi ^1`$. Consider the exact sequence of sheaves
$$0𝒪__\xi 𝒪__\xi (D)\tau _{}N_D\mathrm{\hspace{0.17em}0}$$
over $`_\xi `$, where $`N_D`$ is the normal bundle of the divisor $`D`$, and $`\tau `$ is the inclusion map of $`D`$ into $`_\xi `$. Since
$$H^1(_\xi ,𝒪__\xi )=\mathrm{\hspace{0.17em}0},$$
using the exact sequence of cohomologies, the space of sections $`H^0(D,N_D)`$ gets identified with the quotient vector space $`H^0(_\xi ,𝒪__\xi (D))/`$.
Let $`𝒮`$ denote the space of all divisors $`D^{}`$ on $`_\xi `$ such that $`D^{}`$ is homologous to $`D`$, i.e., they are represented by the same element in $`H^2(_\xi ,)`$. Therefore, $`𝒮`$ is identified with $`H^0(_\xi ,K__\xi ^1)`$. The tangent space to $`𝒮`$, at the point $`[D^{}]𝒮`$ representing a divisor $`D^{}`$, has the following identification
$$T_{[D^{\prime }]}𝒮=H^0(D^{\prime },N_{D^{\prime }})=H^0(_\xi ,𝒪__\xi (D^{\prime }))/ℂ.$$
Let $`𝒩`$ denote the moduli space of triplets of the form $`(X,L,D)`$, where $`X`$, $`L`$ and $`D`$ are as above (the line bundle $`L`$ is a $`(2g-2)`$-th root of $`K_X`$). So $`𝒩`$ is an open subset of the moduli space of triplets of the form $`(X,L,\alpha )`$, where $`\alpha `$ is a linear subspace of $`H^0(_\xi ,K__\xi ^{-1})`$ of dimension one. The space $`𝒩`$ parametrizes a family of Calabi-Yau varieties, simply by associating the Calabi-Yau variety $`D`$ to any triplet $`(X,L,D)\in 𝒩`$.
Take a point $`\gamma :=(X,L,D)`$ in the moduli space $`𝒩`$. Associated to this family is the homomorphism
$`(3.1)`$
$$F:T_\gamma 𝒩\to H^1(D,T_D)$$
that maps the tangent space $`T_\gamma 𝒩`$ of $`𝒩`$ at $`\gamma `$ to the space of infinitesimal deformations of the complex manifold $`D`$. In other words, this homomorphism $`F`$ sends any tangent vector $`v\in T_\gamma 𝒩`$ to the corresponding Kodaira-Spencer infinitesimal deformation class of $`D`$ for the above family parametrized by $`𝒩`$.
The vector space $`T_\gamma 𝒩`$ fits naturally into the short exact sequence
$`(3.2)`$
$$0\to H^0(D,N_D)\to T_\gamma 𝒩\to H^1(X,T_X)\to \mathrm{\hspace{0.17em}0},$$
where the projection $`T_\gamma 𝒩\to H^1(X,T_X)`$ corresponds to the forgetful map, which sends any point $`(X^{\prime },L^{\prime },D^{\prime })\in 𝒩`$ to the point represented by $`X^{\prime }`$ in the moduli space of Riemann surfaces; the inclusion $`H^0(D,N_D)\subset T_\gamma 𝒩`$ in (3.2) corresponds to the obvious homomorphism $`T_{[D]}𝒮\to T_\gamma 𝒩`$, where $`𝒮`$, as before, is $`ℙH^0(_\xi ,K__\xi ^{-1})`$, the space of anticanonical divisors on $`_\xi `$.
Theorem 3.3. The Kodaira-Spencer infinitesimal deformation map $`F`$ constructed in (3.1) is an isomorphism of the tangent space $`T_\gamma 𝒩`$ with $`H^1(D,T_D)`$.
Proof. We start by considering the exact sequence
$$0\to T_D\to \tau ^{*}T__\xi \to N_D\to \mathrm{\hspace{0.17em}0}$$
of vector bundles over $`D`$, where $`N_D`$ is the normal bundle of $`D`$, and $`\tau `$, as before, is the inclusion map of $`D`$ into $`_\xi `$. This gives us the exact sequence
$`(3.4)`$
$$H^0(D,\tau ^{*}T__\xi )\to H^0(D,N_D)\to H^1(D,T_D)\to H^1(D,\tau ^{*}T__\xi )\to H^1(D,N_D)$$
of cohomologies.
Since the canonical line bundle $`K_D`$ is trivial, and $`N_D\cong \tau ^{*}K__\xi ^{-1}`$ is ample, the Kodaira vanishing theorem gives
$`(3.5)`$
$$H^1(D,N_D)=\mathrm{\hspace{0.17em}0}.$$
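Spelled out, the triviality of $`K_D`$ gives

$$H^1(D,N_D)=H^1(D,K_D\otimes N_D)=\mathrm{\hspace{0.17em}0},$$

the vanishing being the Kodaira theorem applied to the ample line bundle $`N_D\cong \tau ^{*}K__\xi ^{-1}`$.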
Therefore, the homomorphism $`H^1(D,T_D)\to H^1(D,\tau ^{*}T__\xi )`$ in (3.4) is surjective.
Our next aim is to show that
$`(3.6)`$
$$H^0(D,\tau ^{*}T__\xi )=0,$$
which would be the first step in turning (3.4) into the short exact sequence (1.1) that we are seeking.
For that purpose, consider the vector bundle $`𝒰_D`$ over $`X\times D`$ obtained by restricting a Poincaré bundle. Let $`\varphi `$ (respectively, $`\psi `$) denote the projection of $`X\times D`$ to $`X`$ (respectively, $`D`$). The vector bundle $`R^1\psi _{*}\mathrm{Ad}(𝒰_D)`$ over $`D`$ is naturally isomorphic to $`\tau ^{*}T__\xi `$. Also, $`\psi _{*}\mathrm{Ad}(𝒰_D)=0`$, since for every point $`t\in D`$ the restriction of $`𝒰_D`$ to $`X\times \{t\}`$ is the stable, hence simple, vector bundle over $`X`$ represented by $`t\in D\subset _\xi `$. The vector bundle $`\mathrm{Ad}(𝒰_D)`$, as in Section 2, is the subbundle of $`\mathrm{End}(𝒰_D)`$ consisting of trace zero endomorphisms. Now, using the Leray spectral sequence for the projection $`\psi `$, the isomorphism
$$H^0(D,\tau ^{*}T__\xi )=H^1(X\times D,\mathrm{Ad}(𝒰_D))$$
is obtained.
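In more detail, the five-term exact sequence of the Leray spectral sequence for $`\psi `$ reads

$$0\to H^1(D,\psi _{*}\mathrm{Ad}(𝒰_D))\to H^1(X\times D,\mathrm{Ad}(𝒰_D))\to H^0(D,R^1\psi _{*}\mathrm{Ad}(𝒰_D))\to H^2(D,\psi _{*}\mathrm{Ad}(𝒰_D)),$$

so the vanishing of $`\psi _{*}\mathrm{Ad}(𝒰_D)`$ identifies $`H^1(X\times D,\mathrm{Ad}(𝒰_D))`$ with $`H^0(D,R^1\psi _{*}\mathrm{Ad}(𝒰_D))=H^0(D,\tau ^{*}T__\xi )`$.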
The vector bundle $`(𝒰_D)_x`$ over $`D`$, defined in Section 2, has been proved to be stable in Theorem 2.1. So, we have $`H^0(D,\mathrm{Ad}((𝒰_D)_x))=0`$ for every $`x\in X`$. Consequently, the isomorphism
$$H^1(X\times D,\mathrm{Ad}(𝒰_D))=H^0(X,R^1\varphi _{*}\mathrm{Ad}(𝒰_D))$$
is obtained.
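The same five-term sequence, applied now to the projection $`\varphi `$ and combined with $`\varphi _{*}\mathrm{Ad}(𝒰_D)=0`$ (by cohomology and base change, its fiber at $`x\in X`$ is $`H^0(D,\mathrm{Ad}((𝒰_D)_x))=0`$), gives

$$H^1(X\times D,\mathrm{Ad}(𝒰_D))\cong H^0(X,R^1\varphi _{*}\mathrm{Ad}(𝒰_D)).$$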
Now, from the second part of Theorem 2.1 we have a natural isomorphism
$$R^1\varphi _{*}\mathrm{Ad}(𝒰_D)=T_X$$
obtained using the Poincaré bundle. Finally, since $`H^0(X,T_X)=0`$ (recall that $`g\geq 2`$), the assertion in (3.6) is an immediate consequence of the above isomorphisms.
Using (3.5) and (3.6), the exact sequence in (3.4) reduces to
$`(3.7)`$
$$0\to H^0(D,N_D)\to H^1(D,T_D)\to H^1(D,\tau ^{*}T__\xi )\to \mathrm{\hspace{0.17em}0}.$$
Comparing (3.7) with (3.2) shows that the next step is to compute $`H^1(D,\tau ^{*}T__\xi )`$.
Consider the short exact sequence
$$0\to T__\xi \otimes 𝒪__\xi (-D)\to T__\xi \to \tau _{*}\tau ^{*}T__\xi \to \mathrm{\hspace{0.17em}0}$$
of sheaves over $`_\xi `$. We know that $`H^2(_\xi ,T__\xi )=0`$ \[5, page 391, Theorem 1.a\]. Also, we have (3.6). Consequently, the exact sequence yields the long exact sequence
$`(3.8)`$
$$0\to H^1(_\xi ,T__\xi \otimes K__\xi )\to H^1(_\xi ,T__\xi )$$

$$\to H^1(D,\tau ^{*}T__\xi )\to H^2(_\xi ,T__\xi \otimes K__\xi )\to \mathrm{\hspace{0.17em}0}$$
of cohomologies; note that $`H^i(_\xi ,\tau _{*}\tau ^{*}T__\xi )=H^i(D,\tau ^{*}T__\xi )`$, because $`\tau `$ is a closed immersion, so $`\tau _{*}`$ is exact.
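Indeed, the relevant segment of the long exact sequence of cohomologies is

$$0\to H^0(_\xi ,T__\xi \otimes K__\xi )\to H^0(_\xi ,T__\xi )\to H^0(D,\tau ^{*}T__\xi )=0,$$

$$0\to H^1(_\xi ,T__\xi \otimes K__\xi )\to H^1(_\xi ,T__\xi )\to H^1(D,\tau ^{*}T__\xi )\to H^2(_\xi ,T__\xi \otimes K__\xi )\to H^2(_\xi ,T__\xi )=\mathrm{\hspace{0.17em}0},$$

where the vanishing of $`H^0(D,\tau ^{*}T__\xi )`$ is (3.6), which also makes the second row begin with $`0`$, and the vanishing of $`H^2(_\xi ,T__\xi )`$ was recalled above.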
It is known that the Kodaira-Spencer deformation map for $`_\xi `$, as the Riemann surface $`X`$ moves in the moduli space of Riemann surfaces, is an isomorphism of $`H^1(_\xi ,T__\xi )`$ with $`H^1(X,T_X)`$. Therefore, comparing (3.2) with (3.7), and using the exact sequence (3.8), we conclude that in order to complete the proof of the theorem, it suffices to establish the following statement: if $`i=1,2`$, then
$`(3.9)`$
$$H^i(_\xi ,T__\xi \otimes K__\xi )=\mathrm{\hspace{0.17em}0}.$$
Indeed, (3.9) implies that $`H^1(D,\tau ^{*}T__\xi )=H^1(_\xi ,T__\xi )=H^1(X,T_X)`$.
To prove (3.9), let $`\delta `$ denote the dimension of the variety $`_\xi `$. Serre duality gives the following identification
$`(3.10)`$
$$H^i(_\xi ,T__\xi \otimes K__\xi )=H^{\delta -i}(_\xi ,\mathrm{\Omega }__\xi ^1)^{*}=H^{1,\delta -i}(_\xi )^{*}.$$
(Here $`H^{j,k}(_\xi ):=H^k(_\xi ,\mathrm{\Omega }__\xi ^j)`$.)
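In detail, Serre duality applied to the vector bundle $`T__\xi \otimes K__\xi `$ reads

$$H^i(_\xi ,T__\xi \otimes K__\xi )^{*}\cong H^{\delta -i}(_\xi ,T__\xi ^{*}\otimes K__\xi ^{-1}\otimes K__\xi )=H^{\delta -i}(_\xi ,\mathrm{\Omega }__\xi ^1)=H^{1,\delta -i}(_\xi ),$$

which is the identification used in (3.10).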
To finish the proof of the statement (3.9) we need to use some properties of the Hodge structure of the cohomology algebra $`H^{*}(_\xi ,ℂ)`$, which will be recalled now.
Fix a Poincaré bundle $`𝒰`$ over $`X\times _\xi `$. Let $`c_k:=c_k(𝒰)\in H^{k,k}(X\times _\xi )`$ denote the $`k`$-th Chern class of $`𝒰`$. For any $`\alpha \in H^{i,j}(X)`$, we have
$`(3.11)`$
$$\lambda (k,\alpha ):=\int _Xc_k\wedge f^{*}\alpha \in H^{k+i-1,k+j-1}(_\xi ),$$
where $`f`$ denotes the obvious projection of $`X\times _\xi `$ onto $`X`$, and $`\int _X`$ is the Gysin map for the other projection, $`X\times _\xi \to _\xi `$, which is constructed by integrating differential forms on $`X\times _\xi `$ along $`X`$, the fibers of that projection. The collection of all these cohomology classes $`\{\lambda (k,\alpha )\}`$, constructed in (3.11), generates the cohomology algebra $`H^{*}(_\xi ,ℂ)`$ \[1, page 581, Theorem 9.11\]. On the other hand, we know that the following
$$H^{0,1}(_\xi )=0$$
is valid.
With these properties of $`H^{*}(_\xi ,ℂ)`$ at our disposal, we are in a position to prove that the algebra generated by the cohomology classes $`\{\lambda (k,\alpha )\}`$ cannot have a nonzero element in $`H^{1,\delta -1}(_\xi )`$ or $`H^{1,\delta -2}(_\xi )`$, where $`\delta =dim_{ℂ}_\xi `$.
To prove the above assertion, suppose that
$$\omega =\omega _1\wedge \omega _2\wedge \mathrm{\cdots }\wedge \omega _l$$
is a nonzero element in $`H^{1,\delta -1}(_\xi )\cup H^{1,\delta -2}(_\xi )`$, where $`\omega _j\in \{\lambda (k,\alpha )\}`$ for all $`j\in [1,l]`$. We will show that no such nonzero element can exist; the vanishing $`H^{0,1}(_\xi )=0`$ noted above plays the key role.
First observe that in (3.11), we have $`k+i-1\geq k-1`$ and $`k+j-1\leq k`$, as $`dim_{ℂ}X=1`$. In other words, we have
$`(3.12)`$
$$(k+j-1)-(k+i-1)\leq \mathrm{\hspace{0.17em}1}.$$
Let $`\omega _i\in H^{a_i,b_i}(_\xi )`$, where $`i\in [1,l]`$. Since $`\omega `$ is of Hodge type $`(1,\delta -1)`$ or $`(1,\delta -2)`$, the $`a_i`$ are nonnegative integers summing to one; hence $`a_i\leq 1`$, and consequently from (3.12) the inequality $`b_i\leq 2`$ is obtained. Furthermore, $`a_j\neq 0`$ for at most one $`j\in [1,l]`$. If $`a_i=0`$, then $`b_i\leq 1`$; but the possibility $`b_i=1`$ is ruled out as $`H^{0,1}(_\xi )=0`$. Therefore, all the $`\omega _i`$ except one are scalars. Now, if $`a_j=1`$, then from (3.12) we have $`b_j\leq 2`$. On the other hand, $`\delta -2>2\geq b_j`$, so $`\omega `$, being a scalar multiple of $`\omega _j`$, lies in $`H^{1,b_j}(_\xi )`$ with $`b_j<\delta -2`$. Consequently, we conclude that $`\omega =0`$.
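In terms of bidegrees, the bookkeeping above can be summarized as follows; recall that $`\delta =dim_{ℂ}_\xi =(n^2-1)(g-1)`$, which is greater than $`4`$ under the numerical assumptions made earlier. One has

$$\sum _{i=1}^{l}a_i=1,\mathrm{\hspace{1em}}\sum _{i=1}^{l}b_i\in \{\delta -1,\delta -2\},$$

while every factor with $`a_i=0`$ contributes $`b_i=0`$ and the single factor with $`a_j=1`$ contributes $`b_j\leq 2`$, so that $`\sum _ib_i\leq 2<\delta -2`$, a contradiction.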
Since the cohomology classes $`\lambda (k,\alpha )`$ are of pure type, i.e.,
$$\lambda (k,\alpha )\in H^{a,b}(_\xi )$$
for some integers $`a`$ and $`b`$, it is easy to see that for any $`i\geq 0`$, the cohomology group $`H^i(_\xi ,ℂ)`$ is generated, as a complex vector space, by completely decomposable elements, i.e., elements of the type $`\omega `$ considered above. Therefore, we have $`H^{1,\delta -1}(_\xi )=0=H^{1,\delta -2}(_\xi )`$.
In view of (3.10), this completes the proof of the statement (3.9). We already noted that the statement (3.9) completes the proof of the theorem.$`\mathrm{}`$
As a consequence of Theorem 3.3 we get that
$$dimH^1(D,T_D)=\mathrm{\hspace{0.17em}3}g-4+dimH^0(_\xi ,\mathrm{\Theta }^2).$$
The dimension of $`H^0(_\xi ,\mathrm{\Theta }^2)`$ is given by the Verlinde formula.
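This count follows from Theorem 3.3, the exact sequence (3.2), and the identification $`H^0(D,N_D)=H^0(_\xi ,𝒪__\xi (D))/ℂ`$ obtained at the beginning of this section: writing $`h^i`$ for $`dimH^i`$,

$$dimH^1(D,T_D)=dimT_\gamma 𝒩=h^1(X,T_X)+h^0(D,N_D)=(3g-3)+\left(h^0(_\xi ,K__\xi ^{-1})-1\right).$$

Comparing with the displayed formula, this amounts to the identification of $`dimH^0(_\xi ,\mathrm{\Theta }^2)`$ with $`h^0(_\xi ,K__\xi ^{-1})`$.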
Remark 3.13. From \[2, page 760, Proposition 1\], coupled with \[2, page 759, Théorème 1\], it follows that $`H^0(D,T_D)=0`$. We note that this is also an immediate consequence of (3.6).
We have $`H^2(D,T_D)=H^{m-1,2}(D)=H^{m,3}(_\xi )`$, where $`m=dimD`$; the first isomorphism uses the triviality of $`K_D`$, and the second is obtained from the Lefschetz hyperplane theorem \[4, page 156\]. The earlier proof that $`H^{1,\delta -i}(_\xi )=0`$ for $`i=1,2`$, easily extends to prove that $`H^{1,\delta -3}(_\xi )=0`$. Therefore, we have $`H^2(D,T_D)=0`$. However, by a theorem due to Bogomolov-Kawamata-Tian-Todorov it is already known that the deformations of a Calabi-Yau variety are unobstructed.
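Here the vanishing of $`H^{m,3}(_\xi )`$ is a consequence of Serre duality on $`_\xi `$ together with $`\delta =m+1`$:

$$H^{m,3}(_\xi )^{*}\cong H^{\delta -m,\delta -3}(_\xi )=H^{1,\delta -3}(_\xi )=\mathrm{\hspace{0.17em}0}.$$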